Model Card
Element
Definition
Auditable and legally precise description of a model with associated details and known caveats
Applications
Academia・Open Source・Community Content・Consulting Services

Work In Progress

Our Elements guide is still in progress, and therefore lacks full visual and technical assets. We hope to release them by summer of 2020. Thanks for reading Lingua Franca!

Usage

A model card is a standardized way to document an AI model, similar to a technology company's terms of service. At present, no standardized format exists for describing AI systems in a legally transparent way, one that gives users any indication of where and how the AI is being used. However, several converging factors make it increasingly likely that such a ‘model card’ format will emerge. One is the surge in regulation around ‘automated decision-making’, as in the EU’s GDPR[1]. Another is the proliferation of startups offering ‘pre-trained’ AI models for use or download.

At a minimum, a model card should list the demographic or identifying factors used to train the model (such as race, income, or facial capture data), along with a description of the intended use patterns and the model's false positive/false negative rates. While this information cannot convey the totality of a model's impact, it can at least surface concerns early on.
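
To make these minimum contents concrete, here is a small sketch of how such a card could be represented as a data structure. This is not an established standard; the class and field names (ModelCard, training_factors, intended_use, and so on) are assumptions chosen purely for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelCard:
    """Minimal sketch of a model card; field names are illustrative, not a standard."""
    model_name: str
    # Demographic or identifying factors used to train the model
    # (e.g. race, income, facial capture data).
    training_factors: List[str]
    # Plain-language description of where and how the model is meant to be used.
    intended_use: str
    # Error rates reported for the intended use case.
    false_positive_rate: float
    false_negative_rate: float
    # Known caveats or limitations worth disclosing up front.
    known_caveats: List[str] = field(default_factory=list)

# Hypothetical card for an imaginary loan-screening model.
card = ModelCard(
    model_name="loan-screening-v1",
    training_factors=["income", "zip code", "employment history"],
    intended_use="Pre-screening of consumer loan applications; not for final decisions.",
    false_positive_rate=0.08,
    false_negative_rate=0.12,
    known_caveats=["Not evaluated on applicants outside the US."],
)
```

Representing the card as structured data rather than free text also makes it easier to audit programmatically, for example to check that the required fields are present before a model is published.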

Theory

The concept of the model card was first introduced in a research paper from 2018[2], but only as a voluntary method of disclosure designed largely for academic communities. Our stance is that the concept should be adopted by policy-makers to create a general-purpose mechanism for disclosing the known properties and limitations of models. Today, organizations are disincentivized from responsibly disclosing information about their models, partly by the lack of standard practices and partly by a fear of revealing trade secrets. There is also a grey zone around ownership and responsibility: a company that sells or offers its models to other companies can plausibly deny responsibility for faulty or biased decisions. By enforcing disclosure practices, policy-makers can encourage a more robust economy to form around AI.

Examples
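
Until the guide's visual and technical assets are released, the following purely illustrative sketch shows what a filled-in card for a hypothetical face-recognition model might contain; every name and value below is an assumption invented for this example.

```python
# Purely illustrative model card for a hypothetical badge-photo matching model.
example_card = {
    "model_name": "face-match-demo",
    "training_factors": ["facial capture", "age", "skin tone"],
    "intended_use": "Matching consenting employees against an internal badge-photo database.",
    "out_of_scope_use": "Surveillance of the public or any law-enforcement application.",
    "false_positive_rate": 0.02,
    "false_negative_rate": 0.05,
    "known_caveats": [
        "Error rates were not measured separately for each demographic group.",
        "Accuracy degrades sharply in low-light conditions.",
    ],
}

for key, value in example_card.items():
    print(f"{key}: {value}")
```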

Footnotes


  1. Article 22 of the GDPR

  2. Mitchell et al., “Model Cards for Model Reporting,” 2018