Preface

The key to a design language is a prevailing sense of cohesion between each piece and the whole. In our age, AI has come to mean so many things that, in some ways, it now fails to mean anything at all. Lingua Franca is our answer to that: a set of eight guiding principles, each documenting a core tension that human-centered AI must resolve. Unlike past design principles (e.g. Dieter Rams’ 10 Principles[1]), ours are not succinct notions of what constitutes good design. To define design so precisely might lead the AI community toward flawed and perverse optimizations, inevitably undermining the purpose of human-centered design. Instead, each principle is outfitted with a set of design questions for further reflection, whether as an individual or as a team. Lastly, each principle contains considerations that apply to real situations, showing how this thinking can shape one’s design decisions. The principles are listed below with their definitions.

Architecture

Concerns the digital infrastructure, operational modalities, and information flows that collectively underlie AI. Helping users navigate this complexity requires thoughtful interaction design that maps the journeys taken by both users and data streams through a product.

Dynamics

Describes the co-evolution and co-adaptation of a system with its environment. Certain dynamics may be emergent and unpredictable, impossible to describe precisely, requiring the discovery of patterns and heuristics to guide design.

Intuition

The ability to devote less attention over time to a system’s operation, allowing users to focus on the task at hand. Intuition may not only be built and maintained across time, but may also be lost or undermined when altered behavior creates uneven expectations.

Embodiment

Giving human-like qualities to an AI system, either to interface with its environment or to align its behavior with human expectations. When used indiscriminately, embodiment undermines a system’s perceived utility, because its capabilities remain incomplete.

Augmentation

Refers to a system that extends an individual’s goals beyond their independent capabilities. Such a system must align values between the user and the designer. Far from being simply an ethical stance, augmentation yields a broader design space of solutions to a given problem.

Errata

Inevitable within probabilistic systems, erroneous and unintended behaviors may range from catastrophically dangerous to unexpectedly beneficial. It is incumbent upon a system’s designers to produce robust behavior that accommodates these various failure modes.

Bias

The socio-cultural interpretation of an AI’s behavior over time. Intrinsic to decision-making systems, bias can never be entirely removed, only constrained; doing so requires that the system allow for external normative evaluation.

Transparency

The effort to bring clarity to opaque systems whose behavior cannot be readily explained from immediate relationships or parameters. Transparency may require a system’s behavior to be contextualized within a greater framework of data collection and interpretation.

Footnotes

  1. 10 Principles of Good Design by Dieter Rams & Vitsœ