Dynamics
Principle
Definition
Describes the co-evolution and co-adaptation of a system with its environment. Certain dynamics may be emergent and unpredictable, impossible to describe precisely, and may require the discovery of patterns and heuristics to guide design.

Overview

Many of us expect AIs to be autonomous and highly independent. However, this couldn’t be further from the truth. Large-scale AI systems have a number of stakeholders who are passively or even actively involved. From users to data labelers to reviewers to managers and everyone in between, every AI system must be ‘raised by a village’. The dynamics of an AI refer to its interactions with the world around it, as the system gains new information and updates its behavior over time. Designing for the dynamics of AI requires understanding the relationship between stakeholders and the system, as humans adapt to AIs and AIs co-adapt in turn.

Machine Learning Requires Humans

Consider the example of Toutiao, a very popular news and gossip app in China. The app used machine learning to recommend articles to users based on their past clicks. Over time, however, this resulted in users seeing low-quality, ‘click-bait’ content in their feeds. To combat this, Toutiao has hired over 10,000 content moderators to continuously review the app’s content.

Stakeholders

Designing for the complex and evolving dynamics of an AI system can be a daunting challenge. Instead of trying to untangle this complexity head-on, it helps to keep in mind that a human-centered AI should make people’s lives better. Therefore, your design process should include mapping out the various stakeholders of your system, who may be either directly or tangentially involved (see Observing Human Behavior). For example, in a real-estate price estimation app, stakeholders might include your users (buyers), brokers, data vendors, customer support personnel, etc. Describe their needs and expectations, as well as how they exchange value with your AI system.

[Figure: Mapping the stakeholders of an AI system]

Unusual feedback cycles may emerge in your AI’s dynamics with stakeholders. For example, an automated system may flag certain data as ‘inaccurate’ simply because the human data labelers round their numbers to fewer decimal places. This may encourage the labelers to add arbitrary precision to their data points, which in turn may create odd downstream behavior in the model. It is important to keep tabs on seemingly minute details like these in an AI system, as there are fewer humans in the loop to exercise common-sense judgment at the various stages.
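
As a hypothetical illustration (the validation rule and names below are invented for this sketch, not taken from any particular system), a naive check like the following would flag honestly rounded labels as ‘inaccurate’ and quietly push labelers toward padding their values with meaningless digits:

```python
# Hypothetical validation rule that creates a perverse incentive: it treats
# any label with fewer than three decimal places as 'inaccurate', so labelers
# learn to pad values with spurious digits just to pass the check.

def flag_inaccurate(label_value: str, min_decimals: int = 3) -> bool:
    """Flag a labeled measurement that 'looks' too coarse."""
    _, _, decimals = label_value.partition(".")
    return len(decimals) < min_decimals

flag_inaccurate("12.5")    # True:  an honest, rounded label gets flagged
flag_inaccurate("12.500")  # False: padded zeros pass, despite adding no information
```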

Co-Adaptation

While AI systems leverage data on human behavior, those humans are also updating their behavior based on the AI. This duality, called ‘co-adaptation’, can have both beneficial and strange consequences. For example, a person may realize that their voice assistant recognizes certain names better when they are pronounced intentionally wrong (someone named ‘A-J’ might be pronounced as ‘ah-jā’). If users adapt to the AI’s behavior in this way, the system will end up learning a highly inaccurate model and may collect incorrect data for further training.

Designing Complex Systems

Attempting to control the dynamics of a large, complex AI may seem fruitless. Many of the system’s behaviors may be emergent, only addressable after the fact. More engineering-minded individuals might argue that such unintended outcomes can be solved by adding further complexity to the models, making them ever more nuanced in their understanding of context and situation. However, this is certain to add even more unintended possibilities to the system. Instead, limit the complexity of your system with a variety of guard rails: known rules against bad behavior that a user or operator can easily interpret (see Guard Rails). One guard rail for a chatbot could be that the chatbot never repeats back to the user any of what the user typed. This check can easily be performed at various points in model training, testing, and deployment. While such a guard rail may seem simplistic, AI chatbots have failed spectacularly for this exact reason[1].
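
As a minimal sketch of such a check (the function names, fallback message, and five-word threshold are assumptions for illustration, not a prescribed implementation), the guard rail could be a post-generation filter that blocks any reply containing a long verbatim span of the user’s message:

```python
# Minimal sketch of an output guard rail: block chatbot replies that echo a
# long span of the user's own words back verbatim. Thresholds are illustrative.

def violates_echo_guard_rail(user_message: str, bot_reply: str,
                             min_span_words: int = 5) -> bool:
    """Return True if the reply repeats a span of the user's message verbatim."""
    user_words = user_message.lower().split()
    reply_text = " ".join(bot_reply.lower().split())
    # Slide a window over the user's message and look for that span in the reply.
    for start in range(len(user_words) - min_span_words + 1):
        span = " ".join(user_words[start:start + min_span_words])
        if span in reply_text:
            return True
    return False

def guarded_reply(user_message: str, bot_reply: str) -> str:
    """Apply the guard rail before a reply is shown to the user."""
    if violates_echo_guard_rail(user_message, bot_reply):
        return "Sorry, I can't help with that."  # fall back to a safe canned reply
    return bot_reply
```

The same check can run as an offline test over logged conversations, so violations are caught during training and evaluation as well as in production.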

Design Questions

Detail your plan for monitoring your system in production.
How will you continue to manage the behavior of your system long after it has started collecting and using new data?
How might your system diverge significantly from what you intended?
Define different categories of stakeholders, such as users, operators, managers, labelers, analysts, and businesspeople.
Can you enumerate these stakeholders’ needs and describe all their tasks and roles?
Do any of these stakeholders have conflicting priorities, and how might you resolve them?
Gather ideas about possible risks and opportunities from your system’s widespread deployment.
What would a ‘war room’ look like for your AI tool, where team members can diagnose issues with your system?
Is there an escalation process in the case of various failures?

Considerations

Gate-Keeping

AI systems that serve a gating function will naturally become the target of adversarial actors: people who attempt to subvert the functionality of the system.

Automated systems are often designed to replace human systems. Yet many human systems serve a gate-keeping function, as in regulation or hiring. Automated systems that use AI are susceptible to gaming, where adversarial actors try to get around restrictions. Keep in mind that such systems cannot be over-simplified without sacrificing their gate-keeping ability. A gate-keeper is by definition a hindrance, and sometimes automation may undermine that role.

Operators

Human operators who monitor AI systems also need tools to help them make decisions faster and better.

When thinking of the user experience of your product, consider the dynamics of how that experience changes the role of the operator of your system, not just the user. An operator may be in place to make executive decisions in exceptional cases, or to confirm the overall function of the system. The operator has an extremely challenging job in keeping a large-scale automated system functioning smoothly, so improving their tools may have positive effects on the system’s own user experience.

Sending & Receiving

After receiving information from an AI, users often change their behavior patterns, requiring the AI to adapt to this new behavior.

AI systems have the complex task of both understanding and communicating with their users. Sometimes, these two functions can undermine each other. For example, an AI might attempt to give intelligent notifications encouraging a user to exercise. However, the user might rely too much on these notifications and change their behavior to only exercise after receiving a notification. This behavior change (called non-stationarity) can be difficult for the AI to re-adapt to. Consider simplifying the dynamics of your AI to minimize this feedback cycle.
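
One way to keep tabs on this cycle (as an illustration only; the event names, two-hour window, and metric below are assumptions) is to track how often the target behavior occurs shortly after a notification, and watch whether that fraction drifts upward over time:

```python
# Hypothetical monitoring sketch: estimate how strongly exercise behavior has
# become conditioned on receiving a notification first.
from datetime import datetime, timedelta
from typing import List

def fraction_prompted(exercise_times: List[datetime],
                      notification_times: List[datetime],
                      window: timedelta = timedelta(hours=2)) -> float:
    """Fraction of exercise sessions started within `window` of a notification."""
    if not exercise_times:
        return 0.0
    prompted = sum(
        1 for ex in exercise_times
        if any(timedelta(0) <= ex - note <= window for note in notification_times)
    )
    return prompted / len(exercise_times)

# If this fraction drifts toward 1.0 across successive weeks, users may be
# exercising only when prompted, a sign of the feedback cycle described above.
```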

Cold Start

Bootstrapping your AI system with bad data can hinder future data collection efforts.

A common chicken-and-egg problem for AI occurs when the product needs some data for users to interact with before it can start to collect more data. In cases as diverse as spell-checking and music recommendations, a product needs to start with a strong dataset before users adopt the tool and provide data of their own (see Warm-Up). Be careful of starting with a weak dataset, as the user interaction data collected on top of it will be weaker as well.
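
As one sketch of a warm-up strategy (the curated playlist, threshold, and function names here are hypothetical), a music recommender might fall back to an editorially vetted seed catalog until a user has enough interaction history of their own to personalize against:

```python
# Illustrative cold-start mitigation for a recommender: serve a curated seed
# catalog until the user's own interaction history is large enough to trust.
from typing import Callable, List

CURATED_STARTER_PLAYLIST = ["track_a", "track_b", "track_c"]  # vetted seed data
MIN_HISTORY_FOR_PERSONALIZATION = 20  # interactions needed before personalizing

def recommend(user_history: List[str],
              personalized_recommend: Callable[[List[str]], List[str]]) -> List[str]:
    """Use the curated seed until the user has contributed enough data."""
    if len(user_history) < MIN_HISTORY_FOR_PERSONALIZATION:
        return CURATED_STARTER_PLAYLIST
    return personalized_recommend(user_history)
```

The stronger the seed catalog, the better the early interaction data it attracts, which is exactly the bootstrapping effect Warm-Up relies on.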

Onboarding Dynamics

Onboarding should be considered a continuous learning process for both your system and your user.

In modern AI systems, users may continually discover new features and use cases for the system. With voice agents, for instance, users may not even know that certain capabilities are possible without prompting. Treat users as constantly ‘onboarding’, where any interaction may be a teaching and learning opportunity throughout your entire product experience.

Further Resources

Footnotes


  1. Microsoft Silences its New AI Bot Tay on TechCrunch ↩︎