What A Tangled (World-Wide-Artificially-Intelligent) Web We Weave, When We Practice To Infer & Predict !

Krishna Sankar
4 min readJul 27, 2019

--

The societal and generational Intellectual Debt accrued, as a result of our reliance on modern (invisibly pervasive and widely ubiquitous) Artificial Intelligence overlords - who offer us persuasive, indispensable, succinct and (seemingly) correct answers, is the topic of an insightful article in The New Yorker by Jonathan Zittrain.

It is not that the Artificial Intelligence Systems are not effective, but the fact that they are impenetrable and our focus to “produce answers that work, without offering any underlying theory” that are worrisome. When we accept those answers without independently trying to ascertain the theories that might animate them, we accrue Intellectual Debt.

We have AI systems that work without us fully understanding how they work and they are everywhere - making decisions for us, driving us around, investing our hard earned assets, enticing us to buy things and (at some point) making life & death verdicts !

I urge all to read Jonathan’s article in The New Yorker (or, a slightly longer version in the Medium) and ponder about our artificially intelligent digital systems. (Spoiler : The Longer versions is better !)

Let me quickly point out the salient arguments articulated by Jonathan and then get on a soapbox, pontifying how our AI systems should behave, which will be familiar to all the regular readers of my occasional blogs ;o)

1 The “answers first, explanations later” approach incurs Intellectual Debt when we can’t (or won’t) seek the explanations for

2 The systems we can’t explain fully are likely to have unknown gaps in accuracy/inference that amount to vulnerabilities for a smart and determined attacker

3 Seduced by the predictive power of such systems, we may stand down the human judges whom they promise to replace and eventually will have no easy process for validating the answers they continue to produce

4 While the risk of a smaller piece of AI blackbox can be mitigated, as AI frameworks become easy to embed making them invisibly pervasive and widely ubiquitous, their unintended aggregated consequences become unmanageable & highly risky

5 The most profound, issue with intellectual debt is the prospect that it represents a larger movement from basic science towards applied technology, one that threatens to either transform academia’s investigative rigors or bypass them entirely

6 We need to quantify and track the Intellectual Debt. It has a funny way of shifting the control from the borrower to the lender, from the consumer to the manufacturer — for example, morality might shift from a driver to the society (e.g.where to turn in case of the Trolley Problem will be legislated) and the culpability of an accident, from the driver to the vehicle manufacturer !

Intelligible Systems & Counterfactuals

I read somewhere that an intelligent dishwasher need not wash dishes the way we do; they need not even be intelligent, but the end result should be the same. Airplanes do not have floppy wings but they fly, cars do not have legs but they take us to places !

The counterfactual interpretation (as in, The Turing machine) is that, our digital systems need not exhibit intelligence (as we define it), but results should be indistinguishable (from what a human would do). But as the wizard says “Toto, I’ve a feeling we’re not in Kansas anymore".

  • We need more than indistinguishable behavior — we need explainability, interpretability, transparency and above all, be able to understand why they work they way they work !
  • We need to be able to trust the AI systems — Are they doing the right things and would they continue to do so ?
  • Transparency is necessary for trust, but not sufficient for understanding. Moreover, the systems can be interpretable without exposing their internal mechanisms. For example mechanisms like LIME,SHAP and Integrated Gradients enable us to understand the AI models.
  • The domain of Explainable AI is very nascent; in the mean time, we should build-in defenses and guardrails that are vigilant about the data distributions that our AI systems encounter at runtime (they shouldn’t extend beyond what they have knowledge about), keep an eye on their model drifts as well as blind spots (e.g. tendency to meander around, large dimensional, saddle points)

Interestingly, we might need the BladeRunner-like question banks to periodically introspect what the AI systems are learning. It might have prevented the Microsoft bot from turning into a not-so nice artifact !!

--

--

No responses yet