Explainable AI — Solving the Black Box Problem : Panel @ RE•WORK Deep Learning Summit / SFO
As an extension of my work on Responsible AI, I am hosting a panel at the RE•WORK DL Summit on Jan 30, 2020. Multi-talented panelists and an interesting topic: what more can one ask for? Let me jot down some thoughts as a prelude to this panel …
As I wrote earlier [here], in bygone years we could precisely list the steps for doing a thing, call it an algorithm, and the decisions it made were very clear to us.
Unfortunately, no more …
Now we define an algorithm as an architecture, i.e. how to string a bunch (sometimes a yuuge bunch) of neurons together. Then we let them loose on a large pile of data; they learn, and the result is a model. When we give that model actual data, it churns out decisions:
- Model := Algorithm + Training Data
- Decision := Model + Actual Data
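To make those two definitions concrete, here is a minimal sketch in plain Python. Everything here is hypothetical and chosen for brevity: the "algorithm" is a trivial nearest-centroid classifier, training data turns it into a model (the learned centroids), and the model plus actual data yields decisions.

```python
# Algorithm: nearest-centroid classification (a stand-in for any learner).
def train(training_data):
    """Model := Algorithm + Training Data. Here the 'model' is just per-class centroids."""
    sums, counts = {}, {}
    for features, label in training_data:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def decide(model, actual_data):
    """Decision := Model + Actual Data. Pick the class with the nearest centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(model, key=lambda lbl: dist2(point, model[lbl])) for point in actual_data]

training_data = [([1.0, 1.0], "low"), ([1.2, 0.8], "low"),
                 ([9.0, 9.0], "high"), ([8.8, 9.2], "high")]
model = train(training_data)              # the model is data-derived, not hand-coded
decisions = decide(model, [[1.1, 0.9], [9.1, 8.9]])
print(decisions)  # ['low', 'high']
```

The point of the sketch: nobody wrote an if/else rule mapping inputs to "low" or "high"; that logic was acquired from the training data, which is exactly why a real (and far larger) model needs introspection.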
We don’t exactly know what they have learned, we don’t have a clue how they make decisions and, more problematically, we can’t reason about their decisions the way we do with algorithms … and that is where explainability comes in!
BTW, if you are wondering why Harrison Ford a.k.a. Rick Deckard from Blade Runner is relevant here, it is the Voight-Kampff test that can introspect a bot to analyze what it has learned! I have been working on Conversational AI (see my blog “Conversational AI roBots : Pragmas & Practices”). As modern AI models “acquire significant chunks of their logic from data,” we need introspection methods to understand what they have learned.
Remember, we could have tried to introspect what Tay had learned with a set of Voight-Kampff tests like those depicted in Blade Runner!
So what exactly is explainability? It is extracting meaning out of the black box using various techniques. We will see some of them below … but first, some pragmas …
The basic question to ask is: can we trust our AI models to do the right thing, and will they continue to do so?
Unfortunately the answer is not simple. It has at least two dimensions — the type of explainability and the audience asking the question.
There is transparency (the internal workings of the algorithm, not necessarily of the model; see the definitions above), interpretability (the analysis of its decisions), explainability (of the model), metrics (things like model drift, data distribution, exceptions and so forth), counterfactuals and, finally, justification (of the decisions).
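To make one item on that list concrete, here is a minimal sketch of an interpretability technique, permutation importance: shuffle one input feature, re-score the black-box model, and the drop in accuracy tells you how much its decisions depend on that feature. The model, data and names below are all hypothetical, for illustration only.

```python
import random

def permutation_importance(predict, X, y, n_features, seed=0):
    """Accuracy drop when feature i is shuffled ~ how much decisions rely on it."""
    rng = random.Random(seed)
    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)
    baseline = accuracy(X)
    importances = []
    for i in range(n_features):
        col = [row[i] for row in X]
        rng.shuffle(col)  # break the link between feature i and the target
        shuffled = [row[:i] + [v] + row[i + 1:] for row, v in zip(X, col)]
        importances.append(baseline - accuracy(shuffled))
    return importances

# A toy black box whose decision secretly depends only on feature 0:
predict = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.7], [0.1, 0.3]] * 5
y = [1, 0, 1, 0] * 5
scores = permutation_importance(predict, X, y, n_features=2)
print(scores)  # feature 1 scores exactly 0: the model ignores it
```

Note what we did not need: any access to the model's internals. That is why perturbation-based methods like this (and relatives such as LIME and SHAP) are popular for opening up black boxes.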
The audience includes developers, model reviewers, the AI governance team, the Fair and Responsible AI group, and regulators. Developers and model reviewers would look at all of the above, while regulators are interested in justification, the Fair and Responsible AI group might look at the counterfactuals, and the AI governance team might be interested in the metrics. In short, an explainability framework has to satisfy a broad spectrum of constituents!
If all this does not convince you, may I suggest an excellent book, “A Human’s Guide to Machine Intelligence” by Prof. Kartik Hosanagar? My review of the book is here. I also have a few more references at the end of this post.
Moving on, explainability needs to be incorporated into all stages of an AI project, not just at the end. It is easier to explain with the diagram below.
And, there are many mechanics and mechanisms — it is still a nascent domain that is evolving constantly.
In the diagram below, the red box which says “introspection techniques e.g. question banks” could be the Voight-Kampff tests we had mentioned earlier.
And finally, some of the questions we will be discussing in the panel. If you have more, please let me know. I will try to add the notes from the panel after the event:
- What is Explainability?
- Does AI need to be explainable? If so, why?
- Whose responsibility is it? When does it need to be considered in the process of designing & deploying systems?
- What are the methods/algorithms to attain Explainability?
- Why is it difficult to open up the black box for many industries?
- Questions from the audience …
And a couple of good references: