In another blog post, we defined explainable AI (XAI) as:
“…artificial intelligence (AI) systems that are transparent and understandable to humans. The goal of XAI is to develop AI models that can provide clear explanations of their decision-making processes so that humans can trust and verify their outputs.”
Explainable AI is often linked to, and sometimes conflated with, interpretable AI. While explainability and interpretability both aim to increase transparency and understanding, it is important to distinguish how the two approaches differ.
What is interpretable artificial intelligence?
Whereas XAI gives you insight into why an AI system produced a specific prediction, interpretability refers to the ability of humans to understand how an AI system works. Explainability serves the end user or decision-maker; interpretability more often serves the AI expert or analyst.
Interpretability enables a thorough understanding of the internal mechanisms of an AI/ML system: how they relate to one another and how they may respond to real-world variables.
This can look different depending on whether your system is deterministic or stochastic. In a deterministic system, where the same inputs always produce the same outputs, it may be relatively trivial to trace cause and effect between internal mechanisms and real-world variables when parameters change. In a stochastic system, it can be difficult to definitively identify all of the relevant mechanisms and parameters that contributed to a decision. Statistical analysis can help derive meaning from stochastic, random processes by estimating the probability that certain factors contributed to an output, but it falls short of achieving true interpretability.
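To make the contrast concrete, here is a minimal, hypothetical sketch in Python. The scoring functions and weights are illustrative assumptions, not anything from a real system: a deterministic rule whose behavior can be traced directly, and a stochastic variant whose outputs can only be characterized statistically.

```python
import random

# Deterministic system: the same inputs always produce the same output,
# so the effect of changing a parameter can be traced directly.
def deterministic_score(income, debt, weight_income=0.7, weight_debt=0.3):
    return weight_income * income - weight_debt * debt

# Stochastic system: a random component means repeated runs on identical
# inputs can differ, so attributing an output to specific factors requires
# statistical analysis rather than direct inspection.
def stochastic_score(income, debt, weight_income=0.7, weight_debt=0.3, noise_scale=5.0):
    noise = random.gauss(0, noise_scale)
    return weight_income * income - weight_debt * debt + noise

# The deterministic version responds identically every time...
print(deterministic_score(100, 20))   # always 64.0
print(deterministic_score(100, 20))   # always 64.0

# ...while the stochastic version must be characterized by its distribution.
samples = [stochastic_score(100, 20) for _ in range(1000)]
print(sum(samples) / len(samples))    # near 64.0 on average, but any single run varies
```

In the deterministic case you can change a weight and observe exactly how the output moves; in the stochastic case you can only reason about the likelihood that a given factor drove a given result.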
Why interpretability matters.
In much of the literature, interpretability seems ancillary to explainability, often used as an additional descriptor or secondary component. But the two are equally important for trust, transparency, and the development of responsible AI solutions.
As with any technology, when something goes wrong the first things people want to know are why and how it could have happened. In the case of emerging AI, complex problems are met with even more complex problem-solving methods, and while we tend to have a keen understanding of the problem, we have a harder time navigating the solution. For global model systems, such as neural networks, 'How did this happen?' is a particularly challenging question. A model’s data, memory structure, and algorithm are tightly coupled, making it difficult to parse, pinpoint, and extract useful information. Thousands of data records interact with potentially millions of nodes and parameters, with varying levels of opacity, to reach an output.
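As a rough illustration of that coupling, the sketch below (assuming scikit-learn and synthetic data generated for the example, not anything from the original post) contrasts a linear model, whose handful of coefficients can be read off directly, with a small neural network, whose thousands of interacting weights resist that kind of inspection.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Interpretable-by-design: one coefficient per feature, each with a direct,
# inspectable relationship to the output.
linear = LogisticRegression(max_iter=1000).fit(X, y)
print("linear coefficients:", linear.coef_.ravel())

# Even a small multilayer network spreads its "reasoning" across thousands of
# coupled weights, none of which maps cleanly to a single input factor.
mlp = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=500, random_state=0).fit(X, y)
n_weights = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print("MLP weight count:", n_weights)
```

Scale that toy network up to production size and the gap between "the model produced this output" and "here is why" widens dramatically.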
Interpretability is critical for gaining a deeper understanding of AI decision-making and invites a thorough analysis of a system’s components when unintended behavior occurs. This is a crucial element for any AI system used for high-stakes predictions and decision-making.
Moving forward with interpretable AI systems.
Moving forward with interpretable AI requires a combination of technical, ethical, and social considerations. Interpretable AI has the potential to improve decision-making, reduce bias, increase transparency and accountability, and facilitate collaboration between humans and machines. To get there, we must weigh interpretability as heavily as accuracy when designing, developing, and deploying AI systems, for the sake of auditability, transparency, and safety.