Work In Progress
Our Elements guide is still in progress, and therefore lacks full visual and technical assets. We hope to release them by summer of 2020. Thanks for reading Lingua Franca!
A model card is a standardized way to document an AI model, much as a terms of service documents a technology company's practices. At present, no standardized format exists for describing AI systems in a transparent way that gives users any indication of where and how AI is being used. However, several converging factors have made the emergence of such a ‘model card’ format more likely. One is the surge in regulation of ‘automated decision-making’, as in the EU’s GDPR. Another is the explosion of startups offering ‘pre-trained’ AI models for use or download.
A model card should, at minimum, list the demographic or identifying factors used to train the model (such as race, income, or facial capture) and describe the intended use patterns along with the model’s false positive and false negative rates. While this information cannot convey the totality of the model’s impact, it can surface concerns early on.
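As a rough sketch, the minimum disclosures described above could be captured in a simple structured record. The field names and example values below are purely illustrative assumptions on our part, not an established model-card schema:

```python
from dataclasses import dataclass

# Hypothetical minimal model card. Field names are illustrative,
# not drawn from any existing standard.
@dataclass
class ModelCard:
    model_name: str
    intended_use: str                # use cases the model was built for
    out_of_scope_use: str            # uses the developers advise against
    training_demographics: list[str] # demographic factors in the training data
    false_positive_rate: float       # aggregate FPR on held-out evaluation data
    false_negative_rate: float       # aggregate FNR on held-out evaluation data

    def summary(self) -> str:
        """Render a one-line disclosure statement."""
        return (
            f"{self.model_name}: intended for {self.intended_use}; "
            f"trained with {', '.join(self.training_demographics)}; "
            f"FPR={self.false_positive_rate:.2%}, "
            f"FNR={self.false_negative_rate:.2%}"
        )

# Example: a hypothetical loan-screening model.
card = ModelCard(
    model_name="loan-risk-v1",
    intended_use="screening consumer loan applications",
    out_of_scope_use="employment or housing decisions",
    training_demographics=["age", "income", "postal code"],
    false_positive_rate=0.08,
    false_negative_rate=0.12,
)
print(card.summary())
```

Even a record this small makes the grey zones concrete: a buyer of `loan-risk-v1` can see at a glance which demographic factors shaped the model and what error rates were reported.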
The concept of the model card was first introduced in a 2018 research paper, but only as a voluntary method of disclosure designed largely for academic communities. Our stance is that policy-makers should adopt the concept as a general-purpose mechanism for disclosing the known properties and limitations of models. Currently, organizations have little incentive to disclose information about their models responsibly, owing in part to a lack of standard practices and a fear of revealing trade secrets. There is also a grey zone around ownership and responsibility: a company selling or licensing its models to other companies can plausibly deny responsibility for faulty or biased decisions. By enforcing disclosure practices, policy-makers can encourage a more robust economy to form around AI.