A Human’s Guide To Machine Intelligence — Book Review | The Excursions of xAI: Part 2(a)/7
As I had mentioned in my earlier blog, I am on a path to review three very interesting books on Ethical and Responsible AI.
Interestingly, there is a new book that needed to be added to our review list: “Human Compatible: Artificial Intelligence and the Problem of Control” by Stuart Russell! It is perfectly possible that we are destined to stay in this vicious circle (new books being published before we can complete the review series) and never get to Part 3 and onwards!! Maybe, in order to have more headroom, I should have named the blogs with hexadecimal digits rather than letters …
Back to the main feature: today we will have a virtual chat with Prof. Kartik Hosanagar on understanding AI, via his book “A Human’s Guide to Machine Intelligence”.
A very interesting read, full of insights and real-world examples (many from Kartik’s professional experience) illustrating the deeper principles of AI, ethics, and algorithms.
A must-read — run, not walk, to the nearest computer with the fastest internet bandwidth and buy a copy from Amazon via Prime next-day delivery!
I took ~11 pages of notes and this blog is a quick summary [of the long summary ;o)].
The book is organized into three parts:
- Part 1, The Rogue Code, introduces the unanticipated (and complex) consequences of AI
- Part 2, Algorithmic Thinking, dives deep into the algorithms themselves, to understand what causes them to behave in an unpredictable manner
- And Part 3, Taming The Code, explores the aspects of Responsible AI: when, where, and how to use AI, as well as how to shape the narrative of why it does what it does
The book starts by looking into the evolution of two chatbots, both from Microsoft: XiaoIce and the now-notorious Tay. Tay probably had more freedom to learn, and it did! Within 24 hours it had more than 100,000 interactions, and it soon learned the worst (“hostility & prejudice”) from those interactions, to say the least.
The difference, of course, is data! As I wrote earlier, classical machine learning was mostly either concrete steps to do something (e.g., rules-based systems) or very deterministic equations with a few parameters filled in from data (e.g., linear regression).
But now, the algorithm is just an architecture (usually layers upon layers of neurons) with millions or even billions of parameters (the largest GPT-2 model has about 1.5 billion parameters). The parameters determine its behavior — how do you figure out what the parameters should be? By feeding the network huge piles of data and churning them for weeks on a huge stack of machines, often hardware built for numerical computing like GPUs and TPUs. The result is a model, which in turn makes decisions when we give it actual data (a minimal code sketch follows the two definitions below)!
- Model := Algorithm + Training Data
- Decision := Model + Actual Data
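To make those two definitions concrete, here is a minimal sketch (my own illustration, not from the book) using scikit-learn: the same “algorithm” becomes a “model” only after it sees training data, and “decisions” come from applying that model to actual data.

```python
# A minimal sketch of the two definitions above (my illustration, not the
# book's): Model := Algorithm + Training Data, Decision := Model + Actual Data.
# Assumes scikit-learn and NumPy are installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

# The "algorithm": an architecture whose parameters are still unset.
algorithm = LogisticRegression()

# "Nurture": training data fills in the parameters, producing a model.
X_train = np.array([[0.1], [0.4], [0.6], [0.9]])
y_train = np.array([0, 0, 1, 1])
model = algorithm.fit(X_train, y_train)   # Model := Algorithm + Training Data

# Actual (new) data flows through the model to produce a decision.
decision = model.predict(np.array([[0.75]]))  # Decision := Model + Actual Data
print(decision)  # e.g. [1]
```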
And … the models are opaque … it is a lot harder for us to understand why a particular decision was made, because we can’t infer anything just by looking at the billions of parameters that make up a neural network!
Kartik has done an excellent job framing this progression in the context of the “nature” vs “nurture” question.
In the case of AI, “nature” is the type of the algorithm and “nurture” is the data, the frameworks, the fine-tuning, the guardrails and the explainability metrics.
Diving a little deeper into nature vs. nurture: while the behavior of earlier algorithms was fully programmed, modern AI models “acquire significant chunks of their logic from data”.
Remember, we could have tried to introspect what Tay had learned with a set of Voight-Kampff tests, like those depicted in Blade Runner! So there are methodologies (explainability and xAI) that can introspect what nurturing (via exposure to data) has been done to a model.
That then implies that we could fine-tune the algorithms so that they don’t fall off the rails. Possible, but not that easy!
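As one simple, concrete example of such introspection (my sketch, not a method from the book): permutation importance shuffles one input feature at a time and measures how much the model’s score drops, hinting at what the model actually picked up from its training data.

```python
# A minimal introspection sketch (my illustration, not from the book):
# permutation importance estimates how much each feature "matters" to a
# trained model by shuffling that feature and measuring the score drop.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance ~ {importance:.3f}")
```

Techniques like this only scratch the surface of xAI, but they show that a model’s “nurturing” leaves measurable fingerprints.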
In fact, there are many articles about how Google fine-tunes its search algorithms … this Wall Street Journal write-up is an interesting read! And, if even the juggernaut Google has problems with AI, what chance do we mere mortals have? Of course, Google’s problems are yuuge compared with ours!!
Leaving the why aside, let us switch gears and look at the influence of the algorithms …
Kartik makes a very good point — while we may think that life is an accidental result of the sum of (mostly) inconsequential past decisions, with the pervasiveness of personalized experiences and automated recommendation algorithms, “many of us clearly do not have quite the freedom of choice that we believe we do” and the “algorithms exert significant influence on precisely what and how much we consume”!
And, with gamification that amplifies and exploits human vulnerabilities in the name of user engagement, we are not even in control of how we spend our time, let alone how we spend our hard-earned money!
I think the strongest suit of the book is Part 2, where Kartik goes deep into the inner workings and evolution of the algorithms (and models). The scenarios are detailed, illuminating how the algorithms usher in a new world of digital neighborhoods, and the pitfalls — how they flounder due to the unintended consequences we mentioned earlier.
The language is succinct and precise, yet has enough descriptive depth to understand the nuances.
As an added bonus, Kartik traces the origin of AI from the Dartmouth Summer Research Project on AI in 1956, through the AI Winter, to the current era — of irrational exuberance! (last three words mine, not the author’s).
Couldn’t resist creating a collage on the quote in Chapter 5 about mind …
The narration on the importance of AlphaGo and move 37 is interesting (“Go from an alternate dimension”) and also brings out an important distinction … I had written a few blogs about AlphaGo [here, here and here] as well as IBM’s Deep Blue and Watson [here and here].
Kartik raises the very relevant Predictability-Resilience paradox! If one creates intelligent algorithms in a highly curated environment with rigid guardrails, they will be very brittle; if we expect resilience, we need to accept unpredictability and the capability for our algorithms to learn, which in turn creates the challenges we are talking about here! The reference to MIT economist David Autor’s take on Polanyi’s paradox, “We know more than we can tell”, is very relevant: how can we teach the machines to understand what we can’t tell, “the tacit knowledge and the innate sense of knowing”?
Another interesting quote, from Harry Potter, is very much akin to our topic — we do need to understand where our AI keeps its brain, and we would want to peer into that brain to review its decisions!
BTW, a place for interesting Harry Potter quotes [here].
I am sure, by now, you have a good bird’s-eye view of the book and the narration. There is more — for example, his description of why AI explainability should aim at carefully calibrated transparency, balancing the following (a small code sketch follows this list):
- The how (addressing the weakened competence belief)
- The why (addressing the weakened benevolence belief) and
- The trade-off (addressing the weakened integrity belief)
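As a toy illustration of what carefully calibrated transparency might look like in practice (my sketch, not a design from the book), an explanation payload could carry all three elements explicitly; the names Explanation, how, why, and tradeoff are purely hypothetical.

```python
# A toy sketch (my illustration, not the book's design): an explanation
# payload that surfaces all three trust-repairing elements explicitly.
from dataclasses import dataclass

@dataclass
class Explanation:  # hypothetical structure, for illustration only
    how: str        # mechanism: addresses the weakened competence belief
    why: str        # intent: addresses the weakened benevolence belief
    tradeoff: str   # candor: addresses the weakened integrity belief

loan_denial = Explanation(
    how="Debt-to-income ratio and payment history carried the most weight.",
    why="These factors best predict repayment, protecting borrower and lender.",
    tradeoff="A stricter threshold lowers defaults but declines some good applicants.",
)

for name, text in vars(loan_denial).items():
    print(f"{name}: {text}")
```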
This careful curation differentiates a successful, well-accepted AI system from one that will be rejected by its users … one can’t just plop a generic explainability layer on top and call it a day!
Interestingly, towards the end of the book, Kartik talks about Prof. Michael Kearns’ address to scientists at the Santa Fe Institute … coincidentally, my next blog is the review of the book “The Ethical Algorithm” by Michael Kearns!!
Stay tuned … While you start reading Kartik’s book, I will start transcribing the notes from Michael’s book (you see, I have already read The Ethical Algorithm and am off to the 3rd (Rebooting AI: Building Artificial Intelligence We Can Trust), and looking forward to reading the 4th (Human Compatible: Artificial Intelligence and the Problem of Control)!)
And, during the Christmas vacation ’19, I am building the Lego Millennium Falcon (7,541 pieces).
Before you ask, yep — we have already built the Lego Taj (5,900 pieces), which is part of our Golu!!
Cheers & Holiday Wishes …