Transparency
Principle
Definition
The effort to bring clarity to opaque systems whose behavior cannot be readily explained from immediate relationships or parameters. Transparency may require a system’s behavior to be contextualized within a greater framework of data collection and interpretation.

Overview

Some may misinterpret this principle as stating that “all human-centered AI should be as transparent as possible”. This is not the case. To understand why, consider the range of transparency humans use in everyday decision-making. Some decisions are best made with complete transparency, such as political or economic decisions that affect many people. Some decisions don’t need much transparency at all, such as whether you would like cream and sugar in your coffee (we largely assume that humans have their own personal preferences). Some decisions might only require transparency in the case of a disagreement, such as deciding on a place to go for dinner. Humans bring transparency to our decisions in a variety of ways, by giving explanations or by pointing to similar examples. However, it would be impossible to demand that someone be 100% transparent about everything all the time. Instead, we use transparency contextually, based on situation and need. Like humans, AI should not be expected to guarantee 100% transparency for all decisions; this is neither feasible nor desirable. Instead, treat transparency as a principle: well-designed systems use it when needed to earn our trust and to integrate seamlessly into our lives.

Risk and Responsibility

Transparency is most often desirable in high-risk situations, where the AI has the responsibility to make a high-quality decision. For humans to be comfortable with an AI’s decision in these cases, the AI must provide some justification in a human-interpretable way. Producing this justification is often more challenging for an AI system than making the decision itself. However, it is critical to treat justifiability as a core design question that impacts the usability of your system. For example, a non-transparent AI can’t easily be used in group situations where multiple stakeholders need to reach a mutually agreeable decision. When making decisions as a group, we use explanations and reasoning to aggregate our individual beliefs, something an AI is typically not designed to do.

Many practitioners confuse transparency with explainability. Making an AI transparent does not mean that the system must also be explainable. Often, a simple signal such as a green, yellow, or red light is enough to bring clarity to your system. Humans don’t always need detailed dashboards with descriptive explanations. How many times have you known to take your car to a mechanic simply because the engine light was blinking?
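
For instance, a coarse status signal can stand in for a full explanation. Below is a minimal sketch in Python; the thresholds and the risk_score input are illustrative assumptions, not prescriptions.

def status_light(risk_score: float) -> str:
    """Map a model's risk score in [0, 1] to a coarse, human-readable signal."""
    if risk_score < 0.33:
        return "green"   # no action needed
    elif risk_score < 0.66:
        return "yellow"  # worth a closer look
    return "red"         # escalate to a human or a detailed view

print(status_light(0.12))  # -> green
print(status_light(0.71))  # -> red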

Figure: AI transparency vs. explainability.

AI systems can be highly transparent without justifying every decision they make. If users understand the general dataset used to build the AI, they may not care that the actual system is a ‘black box’. Sometimes, systems do not need to be transparent at all, especially if users are not required to take their decisions very seriously. Often, an AI recommendation can be a single source of evidence in a larger decision-making process (see Evidence).

XAI

The rise of Explainable AI (XAI) as a field of research and work is important, yet we caution designers against relying on off-the-shelf tools and techniques, such as feature importance or factor analysis. These measurements can create misleading certainty for users about risks and causality, and carry strong caveats of their own.
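
To see why these caveats matter, consider permutation feature importance, one of the most common off-the-shelf measures. In the sketch below (synthetic data and illustrative model choices, not a recommendation), two strongly correlated features share one underlying signal, so the importance scores split the credit between them and can mislead a reader about what actually drives the prediction.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
signal = rng.normal(size=1000)
X = np.column_stack([
    signal + rng.normal(scale=0.1, size=1000),  # feature 0: noisy copy of signal
    signal + rng.normal(scale=0.1, size=1000),  # feature 1: another noisy copy
    rng.normal(size=1000),                      # feature 2: pure noise
])
y = (signal > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)  # features 0 and 1 split credit for one effect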

With such a diversity of considerations, transparency is an aspect of your AI system that can only be designed with input from your users, taking into account their own terminology, concepts, and processes. Transparency means different things to different people: transparency to a doctor is entirely distinct from transparency to a patient, nurse, administrator, or malpractice lawyer (see Observing Human Behavior).

Social Phenomena

Humans often use storytelling and pattern recognition to develop explanations for technology on their own. Sometimes, this allows your product to be wonderfully simple and elegant, since your users will fill in the blanks for themselves as they discover what your product is capable of. However, it may also produce false explanations and stereotypes that detract from your product. Voice assistants are often subject to these stereotypes, as users develop expectations and even comparative opinions between different voice assistants. In such instances, humans are simply developing personal interpretations of a technology in the absence of transparency. As a designer, you must decide whether these social narratives help or harm your product.

Design Questions

Consider the best alternative that exists to your system, and how transparent that alternative is to different kinds of stakeholders.
How do users of that system understand and explain its behavior?
What is the expectation of information-sharing in your given task?
Enumerate all surprising or unusual cases that your system may show.
Is there a way to justify the more extreme results of your system?
Can you identify any contextual assumptions made by your system that may need to be explained?
Investigate alternatives to your system that are helpful despite being less accurate.
Can you sacrifice some of the accuracy of your AI algorithm in order to create more transparent, helpful decisions? (See the sketch following this list.)
How might you make transparency part of your product’s value proposition?
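
One way to explore the accuracy question above is to compare a fully inspectable model with a black-box one on the same task. The sketch below uses synthetic data and illustrative model choices; the point is the comparison itself, not the specific numbers.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

transparent = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("shallow tree accuracy:", transparent.score(X_test, y_test))
print("ensemble accuracy:   ", black_box.score(X_test, y_test))
print(export_text(transparent))  # the entire decision logic, readable in full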

Considerations

Transparency Through Context

Providing some simple context is sometimes all you need to bring clarity to an AI’s decision.

Important content that is recommended should be provided with some contextual information, perhaps hidden in a sub-menu or tooltip text that a user can click into. This contextual information should reference how the recommendation was surfaced, through a statement such as “because you …”, “similar to …”, or “others who liked …”.
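
As a small sketch of what this can look like in code, the Python function below formats such a context string from hypothetical recommendation fields; the templates and field names are invented for illustration.

def explain_recommendation(item: str, reason_type: str, evidence: str) -> str:
    """Attach a short context string explaining why an item was surfaced."""
    templates = {
        "history": "Because you watched {evidence}.",
        "similar": "Similar to {evidence}.",
        "social": "Others who liked {evidence} also liked this.",
    }
    return f"{item}: {templates[reason_type].format(evidence=evidence)}"

print(explain_recommendation("The Long Game", "history", "Chess Stories"))
print(explain_recommendation("Night Palette", "similar", "Dusk Palette"))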

Explainability

Explaining the decision of a high-risk system should be a core part of the interaction, perhaps even more important than the decision itself.

A core reason why humans are such good decision makers is that we can explain our reasoning to a skeptic, and we can discuss that reasoning to reach a collaborative solution. AI systems, by contrast, are often expected to ‘brute force’ a solution that is correct regardless of opinion. Some explanations may be critically helpful, such as showing the activated regions in an image. Others may simply identify an assumption that helps people interpret the result.
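
One widely used way to show activated regions is occlusion sensitivity: mask each patch of the input and measure how much the model’s score drops. The sketch below substitutes a stand-in scoring function for a real model; everything here is an illustrative assumption.

import numpy as np

def score(image: np.ndarray) -> float:
    # Stand-in "model": responds to brightness in the image centre.
    return float(image[8:16, 8:16].mean())

image = np.zeros((24, 24))
image[10:14, 10:14] = 1.0  # a bright object in the centre

patch = 4
heatmap = np.zeros((24 // patch, 24 // patch))
base = score(image)
for i in range(heatmap.shape[0]):
    for j in range(heatmap.shape[1]):
        occluded = image.copy()
        occluded[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
        heatmap[i, j] = base - score(occluded)  # big drop = important region

print(np.round(heatmap, 2))  # the central patches dominate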

Layered Transparency

Be careful of transparency that is layered onto your algorithmic system by another algorithmic system.

Many modern algorithms are now offered with an ‘explainability engine’: a second algorithm that attempts to expose the behavior of the first. However, explainability cannot easily be baked into an algorithm after the fact, as it may require outside context that the algorithm doesn’t have. Instead, start your design process by observing how humans justify similar decisions. Then, make sure your AI surfaces the same terms and concepts those humans use.
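
Many such engines amount to a surrogate model: a simpler, second model trained to mimic the first. The sketch below (synthetic data, illustrative models) shows the pattern and its central caveat: the surrogate explains its own approximation, not the original system.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))  # mimic the black box, not the truth

fidelity = surrogate.score(X, black_box.predict(X))
print(f"surrogate agrees with black box on {fidelity:.0%} of inputs")
# Any 'explanation' read off the tree is only as good as this agreement.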

Transparency Through Quantity

When content is recommended in aggregate, the total group of recommendations can have its own explanatory power.

Aggregate content is itself informational and potentially self-descriptive. When multiple recommendations are provided, users can begin to see an overall reason or justification. This aggregate transparency, however, may be false and lead users to incorrect conclusions. Recognize that aggregate results will naturally lend themselves to human interpretation.
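
As a small illustration, the sketch below derives an aggregate ‘reason’ from a batch of recommendations by counting shared tags. The items and tags are hypothetical, and note that the derived theme is itself an interpretation, not ground truth.

from collections import Counter

recommendations = [
    {"title": "Trail Mix 101", "tags": ["hiking", "food"]},
    {"title": "Summit Stories", "tags": ["hiking", "memoir"]},
    {"title": "Knots to Know", "tags": ["hiking", "skills"]},
]

tag_counts = Counter(tag for r in recommendations for tag in r["tags"])
theme, count = tag_counts.most_common(1)[0]
print(f"Recommended because you seem interested in {theme} "
      f"({count} of {len(recommendations)} items)")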

Strategic Opacity

Certain systems benefit from opacity (lack of transparency), especially those whose functionality is to generate insight or creativity.

Too much transparency may inhibit the spontaneity of a system, if that system is designed to provide unique or creative results. These creativity-enhancing systems are often better if left mysteriously opaque, to let users continually explore, gain intuition, and draw their own conclusions. An AI that creates art or suggests creative color palettes may not need to explain itself.

Extremal Transparency

Consider a narrow form of transparency that applies only to outliers or surprising results.

Not all results need to be explained. If a result is expected or straightforward, users may not care for a deep explanation and may appreciate the simplicity of a system that gets the job done. However, when a system gives unexpected results, users may demand more information. Therefore, consider whether your system needs to explain all results or simply provide context for abnormal results.
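
A minimal sketch of this idea, with an invented set of readings and an assumed two-standard-deviation threshold: only the outlier triggers a detailed explanation.

import statistics

# Hypothetical sensor readings; the 141 is the outlier that warrants context.
readings = [102, 98, 101, 99, 100, 97, 141, 103]
mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

for value in readings:
    if abs(value - mean) > 2 * stdev:
        print(f"{value}: unusual reading; showing detailed context...")
    else:
        print(f"{value}: ok")  # expected result, no explanation offered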

Further Resources