Many of us would echo author Arthur C. Clarke’s famous adage that ‘any sufficiently advanced technology is indistinguishable from magic’. However, any engineer or scientist working on Artificial Intelligence (AI) technology might remark quite the opposite—that AI is extremely unmagical. It is brittle and difficult to operate, and its failures cascade in unpredictable ways, creating enormous risks and concerns.
What is AI?
Over the past fifty years, AI researchers have inherited a set of techniques (largely from statistical analysis) that allow the academic community to describe and evaluate AI systems, providing standardized metrics for success that avoid messy complexity. Nevertheless, these techniques are taken from mathematics and largely ignore an AI system’s actual behavior in any real human environment.
In other words, the AI industry is effectively backwards. We declare success before a human ever interacts with our AI. When users inevitably encounter problems while using the system, we blame the users rather than ourselves. We declare the AI a ‘black box’, implying that the logic of the AI must lie deep inside it, somewhere inscrutable to us mortals. If the AI’s failure is too glaring to ignore, we dismiss the whole enterprise by telling ourselves that AI will never do what humans can do anyway. Paradoxically, we continue to assume that an AI’s only purpose is to ape humans.
The human-centered AI movement is a recognition that making AI work for people is not about more AI, different AI, or even better AI, at least as it is defined today. Instead, human-centered AI is about defining the goals of AI to meet human needs and to work within human environments. This can’t be done in a single sitting, unfortunately. It’s as if the entire AI industry needs to be turned upside-down and shaken, letting a whole host of misleading, problematic, and archaic assumptions fall away. Not only do we need a set of new tools and techniques to make AI work in practice, but we need to shift the process by which AI is designed in the first place. A variety of new voices, until now excluded from the design process, need seats at this table. Not just a single seat for a ‘Human-Centered AI Specialist’, but seats for all manner of experts in law, policy, anthropology, psychology, behavioral science, ethnography, and many more. The aperture of AI’s design scope must widen to encompass not just users but operators, moderators, labelers, editors, and broader communities.
The good news is that the changes needed to make AI usable are not massive, impossible interventions. It will take time for the industry as a whole to change, but each individual can contribute, both in their own work and in team meetings. Our wish with Lingua Franca is to create a design language for human-centered AI, giving both users and designers a common framework to begin this transformation.
The term ‘design’ can often trip people up, but we mean it in the most general sense. If you’re still unsure, it helps to ask—what would make an AI product seem ‘well-designed’? Certainly several factors converge here, including whether the product actually does something of value at all. However, a well-designed AI product should also fit into the fabric of human society, the way a well-designed sidewalk isn’t just a concrete slab to walk on, but something that creates a space to play, to converse, or to wander within our built environment. Finally, a well-designed AI ought not be the work of a single ‘designer’, but instead the result of many people’s collective effort spanning industries and professions. To put it simply, if an AI is ‘well-designed’, it was probably, well, designed.