Work In Progress
Our Elements guide is still in progress, and therefore lacks full visual and technical assets. We hope to release them by summer of 2020. Thanks for reading Lingua Franca!
In our day-to-day lives, we interact with systems that have certain implicit rules. We call each system's set of rules its design language. Take, for example, the experience of walking into an unfamiliar office. Almost every office has a 'main entrance' for visitors. Upon entering, visitors are often guided visually to a front desk. This front desk may not be labeled as such, but we know from social and cultural cues that we should probably walk up to it and introduce ourselves to the person sitting behind it. The entire experience can be quite vexing if the office has no clear entrance or front desk. In other words, these elements form a design language that makes the office experience intuitive and seamless.
Our design language for AI includes an extensive set of reusable components, which we call Elements. These elements should serve as starting points for your design, or as inspiration for new features and modalities. However, unlike visual design frameworks (e.g., the iOS Human Interface Guidelines), Lingua Franca does not define a specific 'look'. Instead, we focus on the 'feel' of AI in a way that may translate across interactions as diverse as voice, gesture, conversation, and data exploration. Each element is designed modularly and comes with guidelines, examples, and best practices for integrating it into your next project.
- An algorithmically generated group of items, often shown as follow-up recommendations or related actions
- A single recommendation, or group of recommendations, that takes focus in the application, allowing a human operator to make an immediate decision
- Ability to amend a model's decision to reflect a user's true intent
- A way to compare the results of alternate models in order to give humans executive oversight and prevent a single model's biases from taking prominence
- Auxiliary information that contextualizes a model's inference by showing correlated fields or properties
- Module that assists in explainability by displaying particularly salient examples from the dataset that relate to the current decision
- Tools to identify types or regimes for which the system fails to operate successfully
- Highly simple, straightforward rules that limit the behavior of an AI
- Interaction component that allows users to view past actions and return to them if desired
- In real-time systems, an indication of a model's decision that is given within a reasonable human response time
- In creative domains, the ability to explore by navigating hybrid representations of input data
- Visual indication of a model's training signals when it may help users better interact over time
- Auditable and legally precise description of a model with associated details and known caveats
- Allowing the user to give or receive information via multiple modes of interaction
- Ability to take partial or complete human command of a system, or to hand off such command to an autonomous system
- Non-intrusive piece of delightfully personalized information to continually engage users
- Point of interest or additional context that can be overlaid on a model's output to further indicate behavior
- Allowing users to input multiple items or examples into a model for inference
- Supporting tool that detects content generated by an algorithm or model, often bundled with the model itself
- Practice or pre-training period for a user to gain familiarity with the system and vice versa
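To make the elements above concrete, here is a minimal sketch of one of them: the interaction component that lets users view past actions and return to them if desired. This is purely illustrative, not part of Lingua Franca itself; the names `Action`, `ActionHistory`, `record`, `view`, and `revert_to` are our own assumptions for the sketch.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Action:
    """A user- or model-initiated action, paired with a way to undo it."""
    label: str
    undo: Callable[[], None]

@dataclass
class ActionHistory:
    """Keeps past actions visible and lets the user return to an earlier point."""
    _past: List[Action] = field(default_factory=list)

    def record(self, action: Action) -> None:
        self._past.append(action)

    def view(self) -> List[str]:
        # Most-recent-first listing, as a history panel might display it.
        return [a.label for a in reversed(self._past)]

    def revert_to(self, index: int) -> None:
        # Undo actions, newest first, until only `index` actions remain.
        while len(self._past) > index:
            self._past.pop().undo()

# Hypothetical usage: a document the user and an AI assistant edit together.
items: List[str] = []
history = ActionHistory()

def add(item: str) -> None:
    items.append(item)
    history.record(Action(f"Added {item}", undo=items.pop))

add("summary")
add("chart")
history.revert_to(1)  # undoes "Added chart", leaving only "summary"
```

The key design choice is that each recorded action carries its own `undo` callback, so the history panel can roll the system back without knowing anything about what the actions did.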