Some may misinterpret this principle to mean that “all human-centered AI should be as transparent as possible”. This is not the case. To understand why, consider the range of transparency humans use in everyday decision-making. Some human decisions are best made with complete transparency, such as political or economic decisions that affect many people. Some decisions don’t need much transparency at all, such as whether you would like cream and sugar in your coffee (we largely assume that humans have their own personal preferences). Some decisions might only require transparency in the case of a disagreement, such as deciding on a place to go for dinner. Humans bring transparency to our decisions in a variety of ways, by offering explanations or by pointing to similar examples. However, it would be impossible to demand that someone be 100% transparent about everything all the time. Instead, we use transparency contextually, based on situation and need. Likewise, AI should not be expected to guarantee 100% transparency for all decisions; this is neither feasible nor desirable. Instead, think of transparency as a principle: well-designed systems use it when needed to earn our trust and to integrate seamlessly into our lives.
Risk and Responsibility
Transparency is most often desirable in high-risk situations, where the AI has the responsibility to make a high-quality decision. For humans to be comfortable with an AI’s decision in these cases, the AI must provide some justification in a human-interpretable way. Producing this justification is often more challenging for an AI system than making the decision itself. Nevertheless, it is critical to treat justifiability as a core design question that shapes the usability of your system. For example, a non-transparent AI can’t easily be used in group settings where multiple stakeholders need to reach a mutually agreeable decision. When making decisions as a group, we use explanations and reasoning to reconcile our individual beliefs, something an AI is typically not designed to do.
Many practitioners confuse transparency with explainability. Making an AI transparent does not mean that the system must also be explainable. Often, a simple signal such as a green, yellow, or red light is enough to bring clarity to your system. Humans don’t always need detailed dashboards with descriptive explanations: how many times have you known to take your car to a mechanic simply because the engine light was blinking?
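The traffic-light idea above can be sketched as a simple status signal derived from a model’s confidence score. The function name and threshold values here are illustrative assumptions, not something prescribed by the text:

```python
# Illustrative sketch: map a model's confidence score to a coarse
# traffic-light signal instead of a detailed explanation.
# The 0.9 and 0.6 thresholds are assumptions chosen for the example.

def status_signal(confidence: float) -> str:
    """Return a simple transparency signal for a prediction."""
    if confidence >= 0.9:
        return "green"   # high confidence: proceed without review
    if confidence >= 0.6:
        return "yellow"  # moderate confidence: consider a closer look
    return "red"         # low confidence: flag for human attention

print(status_signal(0.95))  # green
print(status_signal(0.40))  # red
```

Like the blinking engine light, the signal tells users when to act without requiring them to understand the underlying model.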
AI systems can be highly transparent without justifying every decision they make. If users understand the general dataset used to build the AI, they may not care that the actual system is a ‘black box’. Sometimes, systems need not be very transparent at all, especially if users are not required to take the decisions very seriously. Often, an AI recommendation is just one source of evidence in a larger decision-making process (see Evidence).
With such a diversity of considerations, transparency is an aspect of your AI system that can only be designed with input from your users by taking into account their own terminology, concepts, and processes. Transparency means different things to different people—transparency to a doctor is entirely distinct from transparency to a patient, nurse, administrator, or malpractice lawyer (see Observing Human Behavior).
Humans often use storytelling and pattern recognition to develop their own explanations for technology. Sometimes this allows your product to be wonderfully simple and elegant, since users will fill in the blanks for themselves as they discover what your product is capable of. However, it can also produce false explanations and stereotypes that detract from your product. Voice assistants are particularly subject to these stereotypes: users form expectations, and even comparative opinions, about different voice assistants. In such instances, humans are simply developing personal interpretations of a technology in the absence of transparency. As a designer, you must decide whether these social narratives detract from your product.
Transparency Through Context
Providing some simple context is sometimes all you need to bring clarity to an AI’s decision.
Explaining the decision of a high-risk system should be a core part of the interaction, perhaps even more important than the decision itself.
Be careful of transparency that is layered onto your algorithmic system by another algorithmic system.
Transparency Through Quantity
When content is recommended in aggregate, the total group of recommendations can have its own explanatory power.
Certain systems benefit from opacity (lack of transparency), especially those whose functionality is to generate insight or creativity.
Consider a narrow form of transparency that applies only to outliers or surprising results.
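One way to read the outlier guideline in code: attach an explanation only when a result is surprising relative to the rest of a batch. The helper name, the z-score test, and the cutoff of two standard deviations below are all illustrative assumptions:

```python
# Illustrative sketch: narrow transparency that explains only
# surprising results. A score more than `cutoff` standard deviations
# from the batch mean is treated as an outlier worth explaining.
from statistics import mean, stdev

def flag_surprising(scores, cutoff=2.0):
    """Return indices of scores that deserve an explanation."""
    if len(scores) < 2:
        return []
    mu, sigma = mean(scores), stdev(scores)
    if sigma == 0:
        return []  # all scores identical: nothing is surprising
    return [i for i, s in enumerate(scores)
            if abs(s - mu) / sigma > cutoff]

scores = [0.50] * 9 + [0.95]   # one surprising result
print(flag_surprising(scores))  # [9]
```

Unremarkable results pass silently; only the flagged indices would trigger the heavier explanation machinery.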