Clarification
Element
Definition
Ability to amend a model's decision to reflect the user’s true intent
Applications
Personalization・Social Media・Conversational Interfaces・Behavior Change & Nudging

Usage

An AI system often has to infer user intent, as with a conversational agent. When that intent is misunderstood, the resulting behavior undermines trust. Allowing users to clarify their intent gives the system an empathetic, intuitive way to recover and continue the interaction. Depending on the interaction design of the system, this clarification phase can take a variety of forms, occurring either in real time or well after the incorrect behavior happened.
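
As a minimal sketch of the real-time case (plain Python; the classifier, threshold, and function names here are hypothetical stand-ins, not any particular framework), the agent below asks for clarification when its intent classifier is unsure, and stores the user's correction so the same phrase succeeds on the next turn:

    CONFIDENCE_THRESHOLD = 0.75

    def classify_intent(utterance: str) -> tuple[str, float]:
        # Stand-in for a trained intent classifier; a trivial keyword
        # heuristic so the sketch runs end to end.
        if "weather" in utterance.lower():
            return "get_weather", 0.9
        if "timer" in utterance.lower():
            return "set_timer", 0.6
        return "unknown", 0.3

    def respond(utterance: str, corrections: dict[str, str]) -> str:
        # Apply any clarification the user made earlier for this phrasing.
        if utterance in corrections:
            return f"(clarified) Acting on intent: {corrections[utterance]}"
        intent, confidence = classify_intent(utterance)
        if confidence < CONFIDENCE_THRESHOLD:
            # Low confidence: ask rather than act, opening the clarification phase.
            return f"Did you mean '{intent}'? Tell me if I got that wrong."
        return f"Acting on intent: {intent}"

    # The user corrects the agent; the correction is stored so the same
    # misunderstanding is not repeated on the next turn.
    corrections: dict[str, str] = {}
    print(respond("start a timer for tea", corrections))  # asks for clarification
    corrections["start a timer for tea"] = "set_timer"    # user clarifies
    print(respond("start a timer for tea", corrections))  # acts on true intent

Whether the clarification happens in-dialog, as here, or later in a settings screen, the key is that the correction is persisted and consulted before the model's next guess.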

Theory

Humans often worry less about absolute accuracy than about relative accuracy: whether an AI system shows evidence of improvement and increased understanding over time. In our everyday lives, we routinely interact with technologies that do not correctly comprehend our intents, preferences, meanings, etc. Our frustration usually stems not from the misunderstanding itself, but from the fact that the technology will consistently make that mistake. A clarification allows the user to convey their true intent to the system, in the hope that the system will understand better the next time. Such a mechanism must therefore be used thoughtfully, to ensure that it is actually possible to improve the AI's behavior rather than make an empty promise.

Implementation

A clarification can function as a re-labeling of input data, or as a hard-coded preference (such as removing a certain genre or topic from a newsfeed). Designers should consider a different feedback cycle for clarifications than for typical model training. If a recommendation system is retrained globally only once a month, individual users may not notice that their clarification had any effect, or may lose the sense that the system is personalized. Instead, the clarification may need to function as part of a layered AI that sits atop the general-purpose system.
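
The sketch below illustrates one way such a layer might work, assuming a global recommender whose output can be post-filtered per user; all names are hypothetical. Hard-coded preferences take effect immediately as a filter over the global model's output, while relabeled items are queued for the slower global training cycle:

    from dataclasses import dataclass, field

    @dataclass
    class ClarificationLayer:
        excluded_topics: set[str] = field(default_factory=set)   # hard-coded preferences
        relabeled: dict[str, str] = field(default_factory=dict)  # item_id -> corrected label

        def exclude_topic(self, topic: str) -> None:
            # A hard-coded preference, e.g. "remove this topic from my feed".
            self.excluded_topics.add(topic)

        def relabel(self, item_id: str, label: str) -> None:
            # A re-labeling of input data, queued for the next global training run.
            self.relabeled[item_id] = label

        def apply(self, recommendations: list[dict]) -> list[dict]:
            # Filter the global model's output before the user sees it.
            return [r for r in recommendations
                    if r["topic"] not in self.excluded_topics]

    # Stand-in for the general-purpose model's output; in practice the
    # underlying model might only retrain monthly.
    global_recommendations = [
        {"id": "a1", "topic": "sports"},
        {"id": "b2", "topic": "politics"},
        {"id": "c3", "topic": "cooking"},
    ]

    layer = ClarificationLayer()
    layer.exclude_topic("politics")        # clarification, visible immediately
    layer.relabel("a1", "not_interested")  # relabeled data for the next cycle
    print(layer.apply(global_recommendations))  # politics item is gone

Keeping the layer outside the global model means a clarification is visible to the user on their very next request, even though the shared model updates on its own slower schedule.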

Further Resources