The Explainability 7 : Are we ready for Explainable AI ?

Krishna Sankar
May 28, 2021 · 6 min read

Next week, as part of the REWORK ML Fairness Summit, we are organizing a panel titled “Are We Ready to Deploy Fair and Explainable AI/ML? — Challenges & Responses” — multi-talented panelists and an interesting topic, what else can one ask for ? Here is a quick overview (BTW, I was able to get a nice link http://bit.ly/explainability !):

Update : Thanks to the distinguished panelists — Keegan/Sahab/Medeleine, the discussions were very illuminating, informative and thought-provoking !

The link to the YouTube video is [Here] and my notes are [Here].

The agenda is very simple — 1st : Introductions, 2nd : a quick stage-setting on the context for thinking about explainability (The Explainability 7) and then the main event — we hear from our esteemed panelists.

The Explainability 7 : A Context to think about Explainability

Explainability, essentially, is how Algorithms, Data & Predictions influence decisions, including counterfactuals.

Remember, nobody cares about predicting churn; they care more about understanding churn ! [1]

One interesting challenge is that there is no consistent nomenclature, but a general understanding … which is neither precise nor uniform.

Let us dive into The Explainability 7 [2]

1 : Model := Algorithm + Training Data, while Decision := Model + Actual Data

As I had written earlier [here], in bygone years we could precisely list the steps to do a thing and call it an algorithm, and the decisions it made were very clear to us. We could look at the coefficients of a linear regression model and understand the effects of the features on a decision.

Unfortunately, no more … now we can’t introspect the 3 trillion parameters of a neural network and even tell whether it is a cat-dog classifier, a traffic sign recognizer or an NLP model !

Now we define an algorithm as an architecture … i.e. how to string a bunch (sometimes a yuuge bunch) of neurons together, and then we let them loose over a large pile of data … they learn, and the result is a model … when we give it actual data, it churns out decisions :

  • Model := Algorithm + Training Data
  • Decision := Model + Actual Data

We don’t exactly know what they have learned, we don’t have a clue how they make decisions and, more problematically, we can’t reason about their decisions like we do with algorithms … and that is where explainability comes in !

BTW, if you are wondering why Harrison Ford a.k.a. Rick Deckard from Blade Runner is relevant here, it is the Voight-Kampff test that introspects a bot to analyze what it has learned ! I have been working on Conversational AI (my blog “Conversational AI roBots : Pragmas & Practices”). As modern AI models “acquire significant chunks of their logic from data”, we need introspection methods to understand what they have learned.

Remember, we could have tried to introspect what the controversial bot Tay had learned with a set of Voight-Kampff tests like those depicted in Blade Runner !

In short, the algorithm no longer defines a model. In fact, an algorithm is a model factory — a template to create a family of models that can do a variety of things, depending on the data and the training pipeline !
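
To make the model-factory idea concrete, here is a minimal sketch (the MLP architecture and the scikit-learn datasets are my illustrative assumptions): the same algorithm, trained on two different piles of data, becomes two entirely different models.

```python
# A sketch of "algorithm as model factory": the same architecture,
# trained on two different datasets, becomes two different models.
from sklearn.datasets import load_breast_cancer, load_digits
from sklearn.neural_network import MLPClassifier

# The "algorithm": one architecture definition, no data yet.
architecture = dict(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)

# Model := Algorithm + Training Data
digits = load_digits()
digit_model = MLPClassifier(**architecture).fit(digits.data, digits.target)

cancer = load_breast_cancer()
cancer_model = MLPClassifier(**architecture).fit(cancer.data, cancer.target)

# Decision := Model + Actual Data
print(digit_model.predict(digits.data[:3]))   # a digit recognizer
print(cancer_model.predict(cancer.data[:3]))  # a tumor classifier
```

Same code, two very different models : the training data, not the algorithm, decides what the model does.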

2 : Transparency is necessary but not sufficient

Just knowing the internals of an algorithm, or even having a pictorial depiction of it, is not explainability. Transparency is necessary to understand what a model is doing, but it is not sufficient.

3 : Interpretability is intelligible analysis — the technical and algorithmic framework

Many times interpretability and explainability are used interchangeably. But they are different — Interpretability is the technical foundation : the LIME, the SHAP, the Integrated Gradients, …
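
As a flavor of that technical foundation, here is a hedged SHAP sketch (the gradient-boosted model and the breast-cancer dataset are my illustrative assumptions): it attributes predictions to input features, the mechanics of a decision.

```python
# A minimal interpretability sketch using SHAP, one of the tools named above.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features:
# the "how" of a decision, feature by feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # (n_samples, n_features) attributions

# Rank features by mean absolute attribution across the dataset.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1])[:5]:
    print(f"{name:30s} {score:.4f}")
```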

But explainability is not just SHAP or LIME. It is …

4 : Interpretability + Business Sense™ = Explainability

Yep, we need to add the contextual reasoning to get Explainability … The recent EU legal framework on AI (draft) says “…we have a right to understand how the AI came to a decision, regardless of its complexity…” and “Explainable AI can answer the fundamental question: why?”

In that sense, Interpretability (which answers the “How”) is a property of an AI system, while Explainability (which answers the “Why”) is an outcome …
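
One concrete way to answer the “Why” is a counterfactual: the smallest change to an input that flips the decision. Below is a toy sketch under heavy assumptions (the loan features, the thresholds and the brute-force search are all hypothetical; libraries such as DiCE do this properly).

```python
# A toy counterfactual: "why was I denied, and what would change the outcome?"
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical "loan" features: [income in $10k, debt ratio].
X = rng.uniform([2.0, 0.0], [15.0, 1.0], size=(500, 2))
y = (0.5 * X[:, 0] - 4.0 * X[:, 1] + rng.normal(0, 0.5, 500) > 2).astype(int)
model = LogisticRegression().fit(X, y)

applicant = np.array([[4.0, 0.8]])  # low income, high debt: likely denied
print("decision:", model.predict(applicant)[0])

# Brute-force the smallest income increase that flips the decision.
for extra in np.arange(0.0, 12.0, 0.1):
    counterfactual = applicant + [[extra, 0.0]]
    if model.predict(counterfactual)[0] == 1:
        print(f"approve if income rises by ~${extra * 10:.0f}k")
        break
```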

5 : Justification is a higher form of explainability

But it doesn’t stop there. Even when we can explain an inference, we might not be able to justify it — that needs an additional layer of logic and insights …

And finally, we have to answer two quintessential questions, viz.

6 : Is the Model Doing the Right Things™ ? Will it continue to do so ?

Is the model doing what it says on the tin it came in ? Are there corner cases ?

Can we trust the model ? The recent EU legal framework on AI (draft) says “Trust is a must”.
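
One pragmatic way to check what is written on the tin is a small battery of behavioral tests. The sketch below is a hypothetical example, not a standard recipe: the dataset, the 0.9 accuracy bar and the noise-invariance probe are all my assumptions.

```python
# Hedged "on the tin" checks: assert simple, human-readable properties
# the model is supposed to satisfy, before trusting it.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)

# Check 1: does headline accuracy hold on held-out data?
assert model.score(X_te, y_te) > 0.9, "model underperforms its tin label"

# Check 2 (a corner-case probe): tiny measurement noise should rarely
# flip a decision; a large flip rate is a red flag.
noise = 1 + np.random.default_rng(1).normal(0, 0.001, X_te.shape)
flip_rate = (model.predict(X_te) != model.predict(X_te * noise)).mean()
assert flip_rate < 0.05, f"decisions unstable under noise: {flip_rate:.2%}"
print("behavioral checks passed")
```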

7 : Is it operationally sound ? Is it seeing things it hasn’t seen before ?

We have to be careful when a model extrapolates. Interpolation is looking inwards, but extrapolation is uncharted territory … tread carefully !
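
A crude guardrail, sketched below under the assumption that “seen before” means “inside the per-feature training range”: flag inputs that fall outside that range before trusting the model’s output. Real out-of-distribution detection is far more sophisticated.

```python
# A naive extrapolation check: flag rows whose features fall
# outside the range observed during training.
import numpy as np

def extrapolation_mask(X_train: np.ndarray, X_new: np.ndarray) -> np.ndarray:
    """True for rows of X_new with any feature outside the training range."""
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    return ((X_new < lo) | (X_new > hi)).any(axis=1)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))
X_new = np.array([[0.1, 0.2, -0.3, 0.4],   # inside the training range
                  [9.0, 0.0,  0.0, 0.0]])  # far outside: extrapolation
print(extrapolation_mask(X_train, X_new))  # [False  True]
```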

And Now …

We will hear from our panelists. We have 3 questions for our beloved panelists:

1 : How are Explainability and Fairness important in your world ?

2 : What barriers or challenges to explainability exist in developing, adopting, and managing AI ?

3 : What do you see in the future ? What will our new digital overlords look like ? Can we explain them ? Are we, linear machines, facing a double exponential world ? Can we handle it ? If so, how ? How would explainability help us make sense of that world ?

By “future” I mean ~2–3 years; it is not even credible to predict 5 years from now. We humans are linear machines : even though we are facing double exponential systems, we can’t feel them in our bones ! Double exponential is just unimaginable !!

Interestingly, we are not the only ones asking these questions — there is an RFI out from the regulators exploring the same topics ! The documents and comments can be viewed here.

Panel Updates:

I have a new blog [Here] with the quick notes from the panel.

References

[1] https://towardsdatascience.com/5-lessons-mckinsey-taught-me-that-will-make-you-a-better-data-scientist-66cd9cc16aba

[2] Probably many of you remember “The Magnificent 7” movie with Yul Brynner and gang, or better yet, Akira Kurosawa’s “Seven Samurai”.
