NeurIPS 2023 Unboxed : What’s In & What’s Out — Cutting-Edge Generative AI Research (148 curated from 3,584 papers)

Krishna Sankar
5 min read · Dec 29, 2023

--

As I have written earlier, we look forward to the NeurIPS conference every year in December.

In my blog “A Prelude to NeurIPS 2023 — Spotlight & Exploring Generative AI Frontiers” [Here], I mentioned a few interesting papers, tutorials and workshops.

Now it is time to look back at the riches, examining the prevailing trends and highlighting emerging themes. The list of papers is an ocean: 3,584 papers! Before the year was out, I managed to go through all the abstracts and curate 148 papers into a decent taxonomy! Of course, it reflects my POV, so I have missed many …

Update [12.31.23 11:00] : Whew, completed with an hour to spare!! I was at it for a few weeks, since the end of NeurIPS. I wanted to complete the review this year and start doing deep dives/coding next year!

BTW, today’s date is 123123, which happens only once in 100 years!

30 more minutes before welcoming the New Year … I am going to take off for the rest of this year … Happy New Year 2024!!!

On a different note, I have updated the “About Me — The Pitter-Patter of Small Feats” [here]

The only claim I can make is one of commission/precision (i.e., the selected papers are interesting) rather than omission/recall (i.e., many more interesting papers might have been omitted; not all the interesting ones are selected).

Analysis Methodology

First of all, my gratitude to Jacob Marks for his hard work creating the GitHub repo with paper titles and abstracts! Just elegant!

His repo [Here] and Jacob’s analysis of NeurIPS 2023 [Here]: a must-read.

  • I did a unigram, bigram and trigram analysis on the title + abstract. I lemmatized (to normalize plurals) and deduplicated words within each paper, so each keyword is counted at most once per paper
  • I had some intuition and took notes as I was going through the papers, and supported it (roughly) with the uni/bi/trigram analysis
  • Unigrams by themselves didn’t give many interesting results (words like algorithm, model, learning and performance show up everywhere) but helped to clarify things when viewed in the context of bigrams and trigrams
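The pipeline above can be sketched roughly as follows. This is a minimal illustration with toy data, not the actual analysis code: the naive plural stripper stands in for a real lemmatizer (e.g., NLTK’s WordNetLemmatizer), and the `title`/`abstract` field names are assumptions.

```python
from collections import Counter

def normalize(word):
    # Naive plural normalization; a stand-in for a real lemmatizer.
    w = word.lower().strip(".,!?;:()[]\"'")
    if len(w) > 3 and w.endswith("s") and not w.endswith("ss"):
        w = w[:-1]
    return w

def ngrams(tokens, n):
    # All contiguous n-word sequences, joined into single keyword strings.
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def keyword_counts(papers, n=2):
    """Count each n-gram at most once per paper (title + abstract)."""
    counts = Counter()
    for paper in papers:
        text = paper["title"] + " " + paper["abstract"]
        tokens = [normalize(t) for t in text.split()]
        counts.update(set(ngrams(tokens, n)))  # dedupe within a paper
    return counts

# Toy corpus: "diffusion models" appears 3x in the first paper but
# is counted once per paper, so its corpus-wide count is 2.
papers = [
    {"title": "Diffusion Models Rule",
     "abstract": "diffusion models and more diffusion models"},
    {"title": "Graph Learning",
     "abstract": "graphs and diffusion models"},
]
print(keyword_counts(papers, n=2).most_common(1))  # → [('diffusion model', 2)]
```

The per-paper `set(...)` is the key step: it turns raw frequency (dominated by boilerplate words) into “how many papers mention this phrase,” which is the signal used for spotting trends.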

Observations — What’s in & What’s out

Actually, the exploration goes beyond a mere observation of what’s in and what’s out; I categorize the findings into distinct sections: “Shift in main themes”, “Found more of - Expecting less of”, “New Frontiers”, “Found less of - Expecting more of” and finally “Out”.

Curated Paper List (148 Papers)

As I was alluding to earlier, I have curated a set of papers in my GitHub repository, duly organized by granular topics. There is an opportunity to organize all the NeurIPS papers — maybe next year!

  • The paper list (148) with links to Poster, Paper OpenReview & GitHub [Here]
  • The top level (double integral) portal for all things AI [Here]

A quick rundown

  • Very interesting Datasets [Here] — spanning annotated video images of Chimpanzee Behavior, Multi-locale Shopping Session Dataset, Chinese Medical Exam Dataset, Chinese Application Privacy Policy Summarization and Interpretation (!), Structured Text Dataset of Historical U.S. Newspapers, Corpus of Patent Applications, Criminal Cases, Room Acoustics (!) & Legal Corpus … and a lot more !

Dataset curation is underappreciated and thankless work, yet absolutely essential. So kudos to the authors of these papers. We should all contribute to dataset curation …

  • Interesting [Here]: lots of papers in this category, including New Computing Paradigms, a new object detection variation (Gold-YOLO), local LIME (GLIME), a couple of papers on Nash, and ChessGPT (one of my long-term interests is to see if GPT can play like the chess grandmasters of yesteryear, viz. Tal, Paul Keres, Paul Morphy, Capablanca, Botvinnik and of course Fischer!), …
  • AI-Generated Text Detection, Watermarking & Origin Attribution: a couple of papers
  • Benchmarks: a good set of papers

One paper I had high hopes for, viz. “DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models”, did win a best paper award!!

  • Please check out the GitHub repo [here] for all 148 papers. You might not agree with my taxonomy; let me know, or better yet, send me a pull request
  • Most of us probably won’t work on counterfactual fairness, debate the principles of neural representation in brains and machines, or model the longitudinal behavior and social relations of chimpanzees within a social group.
  • Still, it is good to take a quick look at the latest research in Generative AI.

Who knows, you might find that elusive dataset for your pet Generative AI project (pun intended !)
