Are we ready for Explainability? The RE•WORK Panel

Krishna Sankar
4 min read · Jun 4, 2021

As I had written earlier [https://bit.ly/explainability], we were all looking forward to this panel. Thanks to the distinguished panelists, Keegan Hines, Madeleine Elish, and Sahab Aslam, the discussion was illuminating, informative and thought-provoking … The credit goes to Katie Pollitt, Erum Af and the RE•WORK team …

Quick notes below …

You should really watch the full panel on YouTube to get the full benefit of the wise words from our panelists … [YouTube Video]

As I had mentioned in my earlier blog, the main questions were:

1. How are Explainability and Fairness important in your world?

  • [Keegan] Years ago, when all of this started, there was a sense of excitement that AI/ML would bring objectivity to processes and decision making that were otherwise run by humans … But fast forward to now: there are challenges like the recapitulation of biases and problems in the data. Hoping that we feed some data into this mathematical toy and it makes the world better is not a plan; we need techniques to make this better
  • [Madeleine] Things like explainability are not just technical — they are Social, or more precisely SocioTechnical, concepts. The key framing is that we can’t think of the technical dimension without the social context
  • Integration not deployment is the operative word
  • Deployment, in a sense, is the contextless dropping-in of models, hoping they will work as intended
  • Integration into/with the existing social context is the key for success
  • [Sahab] Even in industries that are not regulated, we don’t know the impacts. There are unintended consequences spanning generations; the kids nowadays are (almost) born with an iPad. We need to make sure that society, especially the kids, is protected

2. What barriers or challenges for explainability exist for developing, adopting, and managing AI?

  • [Sahab] Lots of Causality & Interpretation are lost, which we need to bring back. We need diversity in talent, diversity in thought and diversity in datasets
  • [Madeleine] No agreed-upon definitions and standards of what constitutes explainability, bias, etc.; or even of what are acceptable ways to collect and use data for the purposes of creating diverse and inclusive datasets
  • You can find some of the academic work at conferences like ACM FAccT (the ACM Conference on Fairness, Accountability, and Transparency): https://facctconference.org/
  • Everyone defines things their own way; so the safest thing is to really look at the context where you are going to integrate your technology
  • [Keegan] Problems of choice — many ways to think about explainability; many notions of fairness. But algorithms are getting better and there is a bright future
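
As a side note (my addition, not from the panel): the "many notions of fairness" point is very concrete, because common fairness metrics can disagree on the very same predictions. Below is a minimal sketch with made-up data, comparing demographic parity (equal positive-prediction rates across groups) with equal opportunity (equal true-positive rates across groups):

```python
# Two common fairness notions can disagree on the same predictions.
# All data below is made up for illustration.

def demographic_parity_gap(preds, groups):
    """Gap in positive-prediction rate between groups A and B."""
    rate = lambda g: (sum(p for p, gr in zip(preds, groups) if gr == g)
                      / groups.count(g))
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(preds, labels, groups):
    """Gap in true-positive rate between groups A and B."""
    def tpr(g):
        pos = [p for p, l, gr in zip(preds, labels, groups) if gr == g and l == 1]
        return sum(pos) / len(pos)
    return abs(tpr("A") - tpr("B"))

groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
labels = [1, 1, 0, 0, 1, 0, 0, 0]   # ground truth
preds  = labels[:]                   # a perfectly accurate classifier

print(demographic_parity_gap(preds, groups))         # 0.25: positive rates differ
print(equal_opportunity_gap(preds, labels, groups))  # 0.0: true-positive rates match
```

Here a perfectly accurate classifier satisfies equal opportunity yet violates demographic parity, so choosing a fairness metric is a real decision, not a technicality.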

3. What do you see in the future? What will our new digital overlords look like?

  • [Madeleine] There will be more focus — at the regulatory level, in public expectations, and also in journalistic media coverage. We will also see growth in technical mediation
  • While there is excitement, we will also see lots of failures, because there is a big gap between technology that works in the lab and technology integrated into a real-world system; failures not only for businesses but also for communities whose lives will be shaped by the failed experiments and systems that fundamentally didn’t work for them.
  • [Sahab] While there is lots of attention on explainability and fairness, as we get closer and closer to mimicking a human brain, we might not be able to explain it — AGI explainability will be hard
  • [Me] I saw a link where they say that AGI representation itself is not what we think it is!
  • [Keegan] Part of it is a math problem; part of it is an algorithmic problem; and there is much more outside the technology layers. We can envision a world where the process is looked at more closely: why are we using this data? How did we collect it? Or even, do we need AI/ML here at all?
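
To make the "part math, part algorithmic" point a bit more tangible (my addition, not the panel's), here is a minimal sketch of one common model-agnostic explainability technique, permutation importance: perturb one feature column and measure the accuracy drop. Real implementations shuffle the column randomly; this sketch reverses it so the result is deterministic. The toy model and data are made up for illustration.

```python
# Minimal, model-agnostic sketch of permutation importance:
# perturb one feature column and measure the accuracy drop.
# Toy model and data are made up for illustration.

def accuracy(model, X, y):
    return sum(model(row) == yi for row, yi in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx):
    base = accuracy(model, X, y)
    # Real implementations shuffle randomly; we reverse the column
    # so this sketch is deterministic.
    col = [row[feature_idx] for row in X][::-1]
    X_pert = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, col)]
    return base - accuracy(model, X_pert, y)

# Toy model: predicts 1 iff feature 0 is positive; feature 1 is ignored.
model = lambda row: 1 if row[0] > 0 else 0
X = [[1, 5], [2, -3], [-1, 4], [-2, -1]]
y = [1, 1, 0, 0]

print(permutation_importance(model, X, y, 0))  # 1.0: feature 0 carries all the signal
print(permutation_importance(model, X, y, 1))  # 0.0: feature 1 is never used
```

The model here ignores feature 1 entirely, so perturbing it costs nothing, while perturbing feature 0 destroys accuracy; that gap is the "importance". The math and the algorithm get you this far, but deciding whether the answer is a good explanation for the people affected is exactly the part outside the technology layers.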

4. What advice do you have for the audience?

  • Take time — there is no quick way
  • Approach with humility & a long-term view; build in for unknown unknowns; speak to more people
  • Educate yourself — not just computer science or technical papers, but other areas as well
  • Mathematical and quantitative papers are always there — expand to the humanities and social sciences and bring together diverse points of view

There was an excellent question from Erum on Cultural Norms — we can’t take an AI application and just drop it into another place with different cultural norms; in short, be cognizant of the cultural context where your AI app is going to reside and interact. This is under-explored in mainstream discussion …

As you can tell, I enjoyed the panel immensely … Hope you do too !
