Ability to take partial or complete human command of a system, or to handoff such command to an autonomous system
Autonomous Vehicles・Manufacturing・Operations

Work In Progress

Our Elements guide is still in progress, and therefore lacks full visual and technical assets. We hope to release them by summer of 2020. Thanks for reading Lingua Franca!


Overriding the decision of an automated system is not only an important safety measure for systems that can cause harm; it is also a way of engaging users in the continual improvement of the system’s function. The design of an override system should provide not only the ability to take control, but also the visibility to know when human command is necessary. As such, it can be paired with the Intent element to bring potentially dangerous situations under user control.
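The two requirements above—the ability to take control and the visibility to know when command is needed—can be sketched as a small state machine. This is a minimal illustration, not an implementation from the guide; the class and method names (`OverrideController`, `request_human_attention`, and so on) are hypothetical.

```python
from enum import Enum, auto


class ControlMode(Enum):
    AUTONOMOUS = auto()
    HUMAN = auto()


class OverrideController:
    """Sketch of a control-handoff state machine: tracks who holds
    command, and surfaces when the system believes human command
    may be necessary (the 'visibility' half of an override design)."""

    def __init__(self):
        self.mode = ControlMode.AUTONOMOUS
        self.attention_requested = False
        self.last_reason = None

    def request_human_attention(self, reason: str) -> None:
        # The autonomous system flags a situation it is unsure about,
        # giving the user visibility to decide whether to override.
        self.attention_requested = True
        self.last_reason = reason

    def take_control(self) -> None:
        # Human override is always permitted, regardless of system state.
        self.mode = ControlMode.HUMAN
        self.attention_requested = False

    def hand_back(self) -> None:
        # Return command to the autonomous system.
        self.mode = ControlMode.AUTONOMOUS
```

Note that `take_control` is unconditional: in this design the human's request always succeeds, while `request_human_attention` only advises, never forces.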


There are both theoretical and practical justifications for providing an override system. Theoretically, AI systems are not trained on the massive breadth of data that humans draw on when controlling a complex system (i.e., the ‘context’). To take even a seemingly constrained example such as autonomous driving, humans bring a great deal of context to their everyday driving—construction indicators, human signals, past experiences, interpretations of others’ driving habits, events, etc. Rather than continually attempting to integrate this context into the machine, users can take an executive role as long as they know when their context is likely to change the vehicle’s decisions.

Practically speaking, an override system allows a new technology to function reasonably well within an existing social structure that largely assumes non-autonomous behavior. Most of the world still uses verbal and gestural communication, involving socio-cultural signifiers that machines do not interpret. Therefore, even if an autonomous system functions correctly 100% of the time, a human can serve as the necessary interface between environment and machine.

Man-Machine Interaction

The override system might actually switch on a separate AI that enables humans to use their limbs and movements to control the machine. In other words, we do not wish to imply that overrides are necessary for every kind of AI, only when an AI system is expected to perform autonomously.

Unfortunately, in the autonomous vehicle industry, the override is seen as a failure of the system (and termed a ‘disengagement’). In California, these disengagements are even measured and tracked to determine a given technology’s readiness for the mass market[1]. We believe that this thinking will ironically hinder the development of successful autonomous vehicles, which must seamlessly share responsibility between human and AI.


Designers play a critical role in the construction of override systems, for several reasons. First, an override must necessarily simplify the complexity of a large-scale system so that a person may take control without significant loss of capability (see Augmentation). To this end, human factors and ergonomics[2] must play an outsized role, as real-time systems often take advantage of human instincts and muscle memory to create intuitive interfaces.

It is often assumed that a human override requires the autonomous system to ‘shut off’ entirely; however, this is not the case. Override systems do not have to transition complete control of all subsystems to the user—in the Apollo 11 moon landing, commander Neil Armstrong took manual control of the attitude subsystem during the final approach, while other crucial subsystems remained under automated control[3]. It is likely that override systems of the future will be integrated and interconnected with autonomy systems, some of which will not have an override of their own.
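The partial-override idea above—handing one subsystem to the human while the rest stay autonomous—can be sketched as a simple per-subsystem control map. This is an illustrative sketch only; the class name `VehicleControl` and the subsystem names are hypothetical, loosely echoing the Apollo example.

```python
class VehicleControl:
    """Sketch of per-subsystem override: each subsystem independently
    tracks whether it is under manual or autonomous control, so a user
    can take over one subsystem without shutting off the others."""

    def __init__(self, subsystems):
        # Every subsystem starts under autonomous control.
        self.manual = {name: False for name in subsystems}

    def override(self, name: str) -> None:
        # Transfer a single subsystem to human control.
        self.manual[name] = True

    def release(self, name: str) -> None:
        # Return a single subsystem to autonomous control.
        self.manual[name] = False

    def is_manual(self, name: str) -> bool:
        return self.manual[name]
```

In this scheme, overriding ‘attitude’ leaves ‘descent_rate’ untouched—mirroring a landing in which the pilot steers manually while the machine continues to manage the rate of descent.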


Further Resources


  1. Autonomous vehicles’ disengagements: Trends, triggers, and regulatory limitations by Favarò, et al. ↩︎

  2. Human factors and ergonomics on Wikipedia ↩︎

  3. Apollo 11: The Fifth Mission on NASA History Archives ↩︎