Work In Progress
Our Elements guide is still in progress, and therefore lacks full visual and technical assets. We hope to release them by summer of 2020. Thanks for reading Lingua Franca!
Not every part of an AI system needs to be dynamic. Guard rails, hard rules or constraints on your system’s behavior, can de-risk against known faults, such as ensuring that the predicted price of a product lies within a set range. While these hard rules may not make your data scientists happy, they add transparency to the system’s behavior and prevent the AI from making entirely illogical decisions.
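The price example above can be sketched as a simple clamp. This is a minimal illustration, not an implementation from the guide; the function name and the specific bounds are hypothetical.

```python
def apply_guard_rail(predicted_price: float,
                     floor: float,
                     ceiling: float) -> float:
    """Clamp a model's predicted price to a human-vetted range."""
    return min(max(predicted_price, floor), ceiling)

# A wildly wrong prediction is bounded instead of catastrophic.
apply_guard_rail(1_000_000.0, floor=5.0, ceiling=150.0)   # 150.0
# A sensible prediction passes through unchanged.
apply_guard_rail(42.0, floor=5.0, ceiling=150.0)          # 42.0
```

The model still does the dynamic work; the guard rail only limits how far its output can stray.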
Many AI systems are trained ‘end-to-end’ so that all information needed to make a decision flows into a single automated decision-making system with no hard-coded ‘rules’ created by humans. This approach has, for example, shown success in image classification, where various forms of human-engineered assistance have fallen out of use in favor of systems that learn on their own. However, this kind of learning creates ‘unbounded’ failures: the system is free to fail catastrophically, making decisions that bear no resemblance to a sensible one. In high-risk situations, this may be unacceptable. In fact, in high-risk situations, a system that always makes a slightly wrong decision could be much safer than a system that makes the correct decision 99.9% of the time but one catastrophically wrong decision out of a thousand.
A guard rail may look like a simple mathematical rule, or it may limit the AI’s behavior to a small set of vetted decisions. In some cases, principles of job design may be used to create a specific human role for maintaining the guard rails: entering and updating these rules, through a dedicated interface, may become the task of a manager or overseer of the AI system.
- Trading Curb on Wikipedia