The #AIDebate #YoshuaBengio vs. @GaryMarcus

Krishna Sankar
4 min read · Dec 25, 2019


Yesterday there was a very interesting debate hosted by Montreal.ai. My goal for this blog is to share a few points from the discussion that I found insightful, as well as to collect all the information in one place.

The debate was more broad strokes, with many asymptotes, than a focused talk on a specific topic … that is the beauty of it!

My focus was on two areas: Deep Learning 2.0, and the art of injecting knowledge into neural networks, i.e., common sense reasoning (an older blog), factoids, and knowledge of how the world works, probably via knowledge graphs. I was not disappointed at all; I came away with a few vectors to ponder and explore! I urge you all to listen to the debate; you will get a different set of ideas than I did!
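To make that knowledge-injection idea concrete, here is a minimal sketch of one way it could work (my own illustration, not anything presented at the debate): look up an entity's facts in a toy knowledge graph and concatenate them, as an extra feature vector, with the learned embedding a neural network would otherwise use alone. The triples, function names, and dimensions are all hypothetical.

```python
import numpy as np

# A toy knowledge graph: (subject, relation, object) triples.
# These facts are hypothetical, purely for illustration.
KG = [
    ("microphone", "is_a", "rigid_object"),
    ("microphone", "damaged_by", "water"),
    ("cup", "is_a", "container"),
    ("cup", "can_hold", "water"),
]

RELATIONS = sorted({r for _, r, _ in KG})

def kg_feature_vector(entity: str) -> np.ndarray:
    """Encode an entity's KG facts as a bag-of-relations vector."""
    vec = np.zeros(len(RELATIONS))
    for subj, rel, _ in KG:
        if subj == entity:
            vec[RELATIONS.index(rel)] = 1.0
    return vec

def inject_knowledge(entity: str, learned_embedding: np.ndarray) -> np.ndarray:
    """Concatenate symbolic KG features with a learned neural embedding,
    so downstream layers can condition on explicit world knowledge."""
    return np.concatenate([learned_embedding, kg_feature_vector(entity)])

# Usage: a random stand-in for an 8-d learned embedding gains 3 KG dimensions.
embedding = np.random.randn(8)
enriched = inject_knowledge("microphone", embedding)
print(enriched.shape)  # (11,)
```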

There was a good amount of discussion around symbolism vs. connectionism, reminding one of Prof. Domingos’ book The Master Algorithm (book review here). There was also an interesting discussion on priors (soft prior vs. meta prior vs. deep prior), but it flew a little above my head!

Links:

First things first, the links: the video (this Facebook link worked better for me), Gary’s slides, and Yoshua’s slides. The tweet stream #AIDebate is interesting as well. The goodies don’t end there: there is also a list of pre-reads!

[12/28/19] Updates:

  • Gary’s Postmortem of the debate [here]
  • Yoshua’s reply [here]
  • Gary’s reply to Yoshua’s reply [here]

[1/1/20] Update:

  • Gary’s Blog on Deep Learning Terminology [here]

There is lots of back and forth; I am not sure anything specific was achieved.

Pre-reads:

Yoshua Bengio’s Pre-Readings

  1. BabyAI: First Steps Towards Grounded Language Learning With a Human In the Loop, Chevalier-Boisvert et al., 2018 https://arxiv.org/abs/1810.08272v2.
  2. A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms, Bengio et al., 2019: https://arxiv.org/abs/1901.10912.
  3. Learning Neural Causal Models from Unknown Interventions, Ke et al., 2019: https://arxiv.org/abs/1910.01075.
  4. Recurrent Independent Mechanisms, Goyal et al., 2019: https://arxiv.org/abs/1909.10893.
  5. The Consciousness Prior, Bengio et al., 2017: https://arxiv.org/abs/1709.08568.

Gary Marcus’s Pre-Readings

  1. Rebooting AI (Chapters 4 and 7), Gary Marcus and Ernest Davis, 2019
  2. The Algebraic Mind (Chapters 3–5), Gary Marcus, 2001
  3. The Birth of the Mind (Chapters 6–8), Gary Marcus, 2004
  4. Deep Learning: A Critical Appraisal, Gary Marcus, 2018: https://arxiv.org/abs/1801.00631.
  5. Innateness, AlphaZero, and Artificial Intelligence, Gary Marcus, 2018: https://arxiv.org/abs/1801.05667.
  6. Rethinking Eliminative Connectionism, Gary Marcus, 1998: https://www.sciencedirect.com/science/article/pii/S0010028598906946

Gary: There is a line of thought that says lots of data will solve all the problems. Gary is not fully sold on this argument; he believes that we definitely need a certain class of innovations in the algorithm space rather than just relying on more data.

On the topic of Ethical AI, they both agree that we are building tools that are too powerful for our collective reasoning! Even if we have a goal of Do No Harm, how would the AI know what harm is? Gary mentions that harm is not about how the pixels fall on a screen!

What’s next for deep learning? Bengio’s discussion of System 1 and System 2 deep learning is a good direction. The latest work on this is Bengio’s talk at NeurIPS 2019.

A related tweet stream by LeCun on what deep learning is: a good chain that clarifies a few fundamental concepts.

An interesting discussion on mathematical exposition vs. building blocks ensued while Gary Marcus was answering the question, “What methods are required for innate knowledge and deep understanding for things like reasoning and consciousness?”

This is Gary’s slide #50, where he argues that we should do more in terms of a formal theory of our world to support common sense reasoning. The right side of the slide is the formalism: a paper on the prior knowledge needed to understand the logical form of a container, covering physical reasoning (time and space manipulation, rigid objects, and histories of objects) such as: if we have water in a container and tilt it, the water will spill out; if one drops a microphone into it, the microphone will get wet; and so forth. The left side is Bengio’s paper on causality (#2 on his reading list), a very mathematical paper on how distributions change over time. Gary’s opinion is that we should have people working on both ends of the spectrum: the mathematical formulations as well as the broad frameworks for space, time, causality, and so forth. Gary’s work on the knowledge formulation is very thorough and deep; it is 64 pages long and will take many reads to understand fully!
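To give a flavor of what such a formalism might look like in executable form, here is a minimal, hypothetical sketch of container reasoning. These are my toy rules, not the paper’s actual logical forms; the class, thresholds, and rule wording are all made up for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Container:
    """Toy model of a container: its contents plus a tilt angle."""
    contents: list = field(default_factory=list)
    tilt_degrees: float = 0.0

    def tilt(self, degrees: float) -> list:
        """Rule: tilting far enough spills any liquid out of the container.
        (The 90-degree threshold is an arbitrary simplification.)"""
        self.tilt_degrees += degrees
        if self.tilt_degrees > 90:
            spilled = [c for c in self.contents if c == "water"]
            self.contents = [c for c in self.contents if c != "water"]
            return spilled
        return []

    def drop_in(self, obj: str) -> str:
        """Rule: an object dropped into a container holding liquid gets wet."""
        self.contents.append(obj)
        return f"{obj} is wet" if "water" in self.contents else f"{obj} is dry"

# Usage: the two examples from Gary's slide.
cup = Container(contents=["water"])
print(cup.drop_in("microphone"))  # microphone is wet
print(cup.tilt(120))              # ['water'] spills out
```

Even this toy version shows why the formal theory runs to 64 pages: real container reasoning has to handle histories of objects, partial tilts, porous containers, and much more.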

Bengio mentioned that Boltzmann machines have the capability to reason with things like meta graphical models and Markov chains! An interesting line of thought. I plan to understand more …
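As a starting point for that exploration, the Markov chain part can be made concrete: inference in a (restricted) Boltzmann machine is done by Gibbs sampling, alternately sampling hidden and visible units, which is exactly running a Markov chain over the model’s states. A minimal numpy sketch, with untrained, made-up weights standing in for a real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny RBM with random (untrained, purely illustrative) parameters.
n_visible, n_hidden = 6, 4
W = rng.normal(scale=0.5, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible biases
b_h = np.zeros(n_hidden)    # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One step of the Markov chain: sample hidden units given visibles,
    then visible units given hiddens."""
    h = (rng.random(n_hidden) < sigmoid(v @ W + b_h)).astype(float)
    v = (rng.random(n_visible) < sigmoid(h @ W.T + b_v)).astype(float)
    return v

# Run the chain; its stationary distribution is the RBM's model distribution.
v = rng.integers(0, 2, n_visible).astype(float)
for _ in range(1000):
    v = gibbs_step(v)
print(v)  # one (approximate) sample from the model
```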

Interestingly, I am reading Gary’s book Rebooting AI and plan to write a review as part of my Ethical & Responsible AI series. Done [here].
