Intuition
Principle
Definition
Requiring less attention over time to a system’s operation, allowing users to focus on the task at hand. Intuition is not only built and maintained over time; it can also be lost and undermined when altered behavior creates uneven expectations.

Overview

Intuition-building is a complex ritual of learning metaphors, altering expectations and other inscrutable processes of the human mind. It is one of the few mental activities that starts as fully conscious and ends somewhere deep in the unconscious. Intuition is perhaps best described as practical forgetting.

Building intuitive AI requires more than well-worded buttons and explanatory tutorials. It requires understanding users’ immediate grasp of your system as well as their evolving perception of your system over time. Because AIs typically have a wide range of behaviors, learning curves can be steep as your users must gain a nuanced intuition that may not come quite naturally.

Learning Curves

Some of the most rewarding tools take time to learn, and some tools just aren’t for everyone. Take a skateboard—infamously frustrating, yet capable of astonishing feats when wielded by experts. Today, many designers shy away from learning curves of any kind, assuming that their products can only succeed if users understand them at first glance. We have paradigms of UI design that leave nothing to interpretation. Designers conduct usability studies that ask users to manipulate a UI that they have never seen before, observing where users get lost or confused, and attempt to completely remove these moments.

User Testing

Do not take our guide as advocating against user testing, only as a caution against testing an AI product without giving users sufficient time and space to actually learn its complexities. By only measuring metrics like ‘time-on-task’, you may gain a flawed picture of successful UX design.

The same rigid logic of UI paradigms creates an impossible standard for AI: to be totally intuitive, yet capable of automating processes more intelligently than humans can. An intelligent AI is bound to make different decisions than a human would, which is a source of inevitable friction. However, this is also a crucial opportunity for design to provide a gradual learning curve so that users may eventually benefit from artificial intelligence (see Design Tradeoffs). Eliminating all moments of confusion and uncertainty will not work in an age of AI, where users must place some degree of trust in the system’s decisions.

Trust

At its core, an intuitive AI is one that has built trust. Trust comes in many different forms, often functioning differently depending on the domain, and it can be notoriously inscrutable. In some cases, an AI can build trust simply by providing the same output every time for a given input, so that a person doesn’t feel the system is acting ‘randomly’. In other cases, an AI may build trust by providing different outputs every time it runs with the same inputs, allowing a user to retry until they receive a result that suits their tastes. Unfortunately, trust-building may require extensive iteration and trial-and-error, especially in so-called ‘expert’ domains where your user must make frequent judgment calls of their own.
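
To make the contrast concrete, here is a minimal sketch of both strategies in TypeScript, assuming a hypothetical generate(prompt, seed) call into your model service: pin the seed when consistency builds trust, or re-roll it when variety does.

```typescript
// A minimal sketch, not a definitive implementation. The generate
// function is a stand-in for whatever call your model service exposes.
type GenerateFn = (prompt: string, seed: number) => Promise<string>;

// Strategy 1: consistency. Pin one seed so the same input always yields
// the same output, and the system never feels like it acts 'randomly'.
async function generateDeterministic(generate: GenerateFn, prompt: string): Promise<string> {
  const FIXED_SEED = 42; // any stable value, kept per user or per app
  return generate(prompt, FIXED_SEED);
}

// Strategy 2: variety. Draw a fresh seed on every call so the user can
// retry until a result suits their tastes.
async function generateFresh(generate: GenerateFn, prompt: string): Promise<string> {
  const seed = Math.floor(Math.random() * 2 ** 31);
  return generate(prompt, seed);
}
```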

When a user loses trust in an AI, they may become nervous using it (take, for example, how users become distrustful of their phone’s autocorrect tool). Sometimes, users may actually attempt to sabotage the AI, providing false or misleading signals in an attempt to subvert its decision-making. Because AIs are highly dynamic and responsive, these forms of sabotage can quite easily pay off for the antagonized user.

Interpretation and Confusion

Humans interpret things in wildly different ways. Test your AI on users, asking what they assume is happening under the hood, and their explanations might surprise you. Many engineers expect that a sufficiently intelligent AI will be ‘self-evident’, with decisions that are always intuitive and users who trust the system wholeheartedly. However, this rarely occurs in complex situations. Users bring context and understanding from experience, while AIs bring context from vast quantities of data. Data can often disagree with, or misinterpret, human experience, and the AI system may need to justify its decisions to doubtful users (see Transparency). Unfortunately, this interpretability cannot be achieved by simply conveying ‘how the system works’. Instead, consider interpretability as a design space, and seek to always provide users with meaningful assistance rather than inexplicable decisions.

Design Questions

As concisely as possible, explain the purpose of your AI system to a novice.
Can you answer any ‘ignorant’ questions without appealing to technical details?
Does the user need to have a nuanced understanding of probability or statistics to trust your system?
Consider how users frame your product for themselves and each other.
Is your product intuitive enough for a non-technical user to recommend it to another non-technical user?
Apart from a high-quality AI, what makes your product valuable?
Identify points of user confusion, for example when receiving counter-intuitive results from the AI.
What is the learning curve of your technology?
What steps can the user take to gain or regain understanding?
At what touch-points could an AI be more trustworthy?

Considerations

Attribution Intuition

Clarify where a dataset comes from, or brand the dataset according to its source, to help users attribute behavior.

Users are not typically aware of the quality of the data underlying a system’s decisions. However, humans are very skilled at forming long-term evaluations of a system over many uses. Take advantage of this intuitive skill by branding different interactions and the datasets that contribute to them. Alternatively, let users choose between different AI models.
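
As one way to put this into practice, the sketch below attaches a user-facing dataset ‘brand’ to every output and exposes a small list of selectable models. The model and dataset names are hypothetical, invented purely for illustration.

```typescript
// A minimal sketch of attribution metadata; all names are hypothetical.
// Every output carries a visible 'brand' so users can attribute
// behavior to its source over many uses.
interface AttributedResult {
  output: string;       // what the AI produced
  modelId: string;      // which model produced it
  datasetBrand: string; // user-facing label for the data behind it
}

// Surfaced in the UI so users can also choose between models.
const AVAILABLE_MODELS = [
  { id: "concise-v1", datasetBrand: "Tuned on editorial summaries" },
  { id: "creative-v1", datasetBrand: "Tuned on fiction corpora" },
];

function labelResult(output: string, modelId: string): AttributedResult {
  const model = AVAILABLE_MODELS.find((m) => m.id === modelId);
  return {
    output,
    modelId,
    datasetBrand: model ? model.datasetBrand : "Unknown source",
  };
}
```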

Intuition Building

User actions that directly affect a recommender system or other ML decision-making system should be called out in real time.

In modern applications, many kinds of user behavior may influence future recommendations by the system. For example, in a music app, the system may learn from the user skipping songs, rearranging songs in a playlist, searching for an artist, and more. Use a standard motif when possible to call out actions that ‘re-train’ the system (see Mark). That way, users will be able to take more intelligent actions and may even spend more time teaching the system.
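
As a rough sketch of this motif, assuming a hypothetical showToast UI call and a music app’s action set:

```typescript
// A minimal sketch, assuming a hypothetical showToast() UI call.
// Actions that feed the recommender are tagged, and every tagged action
// triggers the same standard motif so users learn which behaviors
// teach the system.
type UserAction = "skip_song" | "reorder_playlist" | "search_artist" | "adjust_volume";

// Which actions act as training signals for the recommender.
const TRAINS_THE_MODEL: Set<UserAction> = new Set([
  "skip_song",
  "reorder_playlist",
  "search_artist",
]);

function handleAction(action: UserAction, showToast: (message: string) => void): void {
  if (TRAINS_THE_MODEL.has(action)) {
    showToast("Improving your recommendations"); // one consistent mark (see Mark)
  }
  // ...then dispatch the action itself as usual...
}
```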

Internalization

Users should be able to easily internalize the capabilities of your system with an intuitive mental model.

Many tools are highly open-ended or creative, allowing more expert users to have greatly enhanced capabilities. However, if users cannot easily identify the capabilities of your system, they will never take any action. Improve the cues leading users to take advantage of the intelligent features of the system, such as a ‘browse’ page that shows examples of other user-generated content.

Malleable Interactions

Interactions that are highly malleable can negatively affect user expectations, as the user is constantly undermined by missed expectations.

Modern technologies are increasingly designed to be highly adaptive, modular, and malleable based on interaction. A product may move buttons or information around the interface in order to ‘optimize’ the user experience. However, this ever-changing dynamism can create missed expectations or difficulty finding a particular item, producing a learning curve that never levels off as the user is repeatedly forced to search for what they want.

Social Narrative

People tend to create narratives around how a system behaves, and tell those narratives to others.

People tell stories, and think in narrative ways. We will often assume that certain systems behave the way they do for specific reasons, perhaps based on our own understanding of the underlying technology or data. These stories are not always true, or they are true but for incorrect reasons. Decide whether story-telling is hindering your product’s capabilities, or leading users to misuse it.

Tracing History

Provide users with a rich history of their actions, allowing users to navigate highly dynamic interfaces.

Consider giving users access to a rich ‘history’ of their past actions, which allows them to place themselves in the overall workflow of your product, especially if information is constantly re-organized and re-arranged based on different factors. History is the most logically intuitive way to organize information, because it mimics how our brains linearly process information (see History).
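
As a rough sketch, an append-only log with hypothetical field names is often enough to back such a history view:

```typescript
// A minimal sketch of an append-only action history; field names are
// hypothetical. Entries are kept in the order they happened, so the UI
// can present a linear trail back through a re-arranging interface.
interface HistoryEntry {
  timestamp: number;   // when the action happened
  description: string; // human-readable, e.g. "Added a song to playlist"
  targetId: string;    // what the action touched, so the UI can link back
}

class ActionHistory {
  private entries: HistoryEntry[] = [];

  record(description: string, targetId: string): void {
    this.entries.push({ timestamp: Date.now(), description, targetId });
  }

  // Newest first, matching how users retrace their steps.
  recent(limit = 20): HistoryEntry[] {
    return this.entries.slice(-limit).reverse();
  }
}
```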

False Intuition

Users can build false and damaging intuition about your product, jeopardizing further use and data collection.

When a user creates a false intuition about your product, it becomes difficult to help the user unlearn those false intuitions and learn correct ones. Think about where in your product’s user experience users may misinterpret the system, and steer them back towards an accurate interpretation.

Further Resources