AI Bias Detection — A New Normal from NIST
NIST has been at the forefront of many technology initiatives (years ago I participated in the PKI initiative and even co-chaired the 3rd annual research workshop), so it is no surprise that NIST has taken a leadership role in AI.
They have a set of initiatives, starting with the AI Risk Management Framework [Here] and extending to bias in AI. Their document, Identifying and Managing Bias in Artificial Intelligence, is the topic of this blog.
Let me just dive in … I will cover the implications, my impressions, and some thoughts: a very short summary of ~45 pages plus definitions and references.
Implications
While the publication covers a lot of ground, the implications are very interesting.
- We should question data-driven decisions … more precisely, the data might not be qualified to support those decisions; focus on what data should be used rather than what is available!
- Societal value implications will trump computational/accuracy metrics in evaluating an AI model; in the future, this will become regulatory policy
- It takes a village to develop AI models: multi-stakeholder engagement and impact analysis are essential for a robust AI practice. For example, technology or datasets that seem non-problematic to one group may be deemed disastrous by others
- The document acknowledges reality: AI is neither built nor deployed in a vacuum, sealed off from societal realities of discrimination or unfair practices
- While there are lots of principles, the policy and practice side of AI bias is still murky. This is the most challenging part: we need precise, concise, and actionable best practices. Maybe things like a Model Bias Score Card or similar mechanisms need to be defined and made transparently available (a minimal sketch of what such a score card could report appears after this list), which I hope NIST will work on. Maybe I can help.
- Bias mitigation is still untouched: whom do we call when we see indications of negative impacts? And what do we do then? What are the ways of increasing fairness? (One common pre-processing technique is sketched below.) Another challenge
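To make the Model Bias Score Card idea a bit more concrete, here is a minimal sketch of what such a score card could report. Everything in it is my own illustration, not from the NIST document: it assumes a binary classifier, a single protected attribute, and two widely used group-fairness metrics (demographic parity difference and disparate impact ratio).

```python
# A minimal sketch of what a "Model Bias Score Card" could report.
# Assumptions (mine, not NIST's): binary predictions, one protected
# attribute, and two standard group-fairness metrics.

from dataclasses import dataclass
from typing import Sequence


@dataclass
class BiasScoreCard:
    positive_rates: dict        # per-group rate of positive predictions
    parity_difference: float    # max - min positive rate across groups
    disparate_impact: float     # min / max positive rate across groups


def bias_score_card(y_pred: Sequence[int], groups: Sequence[str]) -> BiasScoreCard:
    """Summarize group-level outcome rates for a set of binary predictions."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)
    hi, lo = max(rates.values()), min(rates.values())
    return BiasScoreCard(
        positive_rates=rates,
        parity_difference=hi - lo,
        disparate_impact=(lo / hi) if hi > 0 else 1.0,
    )


# Example with made-up predictions and a hypothetical group label.
card = bias_score_card(
    y_pred=[1, 0, 1, 1, 0, 1, 0, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(card)  # a disparate_impact below 0.8 is the common "four-fifths rule" flag
```

A real score card would of course need many more metrics, uncertainty estimates, and the societal context discussed above; the point is only that something this simple could already be defined and published transparently alongside a model.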
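On the mitigation question, one well-known pre-processing technique is reweighing (Kamiran & Calders), where training samples are weighted so that the protected attribute and the label look statistically independent. The sketch below is illustrative only; the group labels and outcomes are made up.

```python
# A minimal sketch of reweighing as a bias-mitigation step: each sample
# gets the weight P(group) * P(label) / P(group, label). Data is invented
# purely for illustration.

from collections import Counter
from typing import Sequence


def reweighing_weights(groups: Sequence[str], labels: Sequence[int]) -> list:
    """Return one weight per sample so that group and label decouple."""
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]


weights = reweighing_weights(
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
    labels=[1, 0, 1, 1, 0, 1, 0, 0],
)
print([round(w, 2) for w in weights])
```

The resulting weights can be passed as sample weights to most training APIs. It is one of many possible answers to "what do we do then?", not a prescription from the document.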
Now, on to first impressions …
They have a few very informative and detailed diagrams.
As I mentioned earlier, the guidance part is a little sketchy. It is also somewhat verbose and could use a tad more organization. Of course, this is a draft, and the guidance will become concise only after the other parts are in good shape.
In short …
In short, good work on framing the issues of bias in AI with broad strokes, beyond the usual statistical and computational vectors. But more work is needed on how to actually bring the ideas into practice: best practices, mitigation strategies, and so on. That will probably require additional input from practitioners across different industries…