Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Model Cards

Model cards are short documents accompanying trained machine learning (ML) models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups and intersectional groups that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. This framework can be used to document any trained ML model. 
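
As an illustration, the paper organizes a model card into nine sections: Model Details, Intended Use, Factors, Metrics, Evaluation Data, Training Data, Quantitative Analyses, Ethical Considerations, and Caveats and Recommendations. The minimal Python sketch below renders such a card as Markdown; the class, the field values, and the rendering logic are illustrative assumptions rather than standard tooling.

```python
from dataclasses import dataclass

# A minimal sketch of a model card as structured data, following the nine
# sections proposed in "Model Cards for Model Reporting" (Mitchell et al.).
# This class and its rendering logic are illustrative, not standard tooling.
@dataclass
class ModelCard:
    model_details: str
    intended_use: str
    factors: str                    # e.g. demographic or phenotypic groups
    metrics: str
    evaluation_data: str
    training_data: str
    quantitative_analyses: str
    ethical_considerations: str
    caveats_and_recommendations: str

    def to_markdown(self) -> str:
        """Render the card as a Markdown document for publication."""
        sections = [
            ("Model Details", self.model_details),
            ("Intended Use", self.intended_use),
            ("Factors", self.factors),
            ("Metrics", self.metrics),
            ("Evaluation Data", self.evaluation_data),
            ("Training Data", self.training_data),
            ("Quantitative Analyses", self.quantitative_analyses),
            ("Ethical Considerations", self.ethical_considerations),
            ("Caveats and Recommendations", self.caveats_and_recommendations),
        ]
        return "\n\n".join(f"## {title}\n\n{body}" for title, body in sections)

# Field values below are invented placeholders for demonstration only.
card = ModelCard(
    model_details="Text toxicity classifier, v1.0.",
    intended_use="Scoring comments to support human moderation; not for fully automated removal.",
    factors="Performance evaluated per identity-term subgroup.",
    metrics="ROC-AUC and false positive rate, with confidence intervals.",
    evaluation_data="Held-out comment set with subgroup labels.",
    training_data="Public forum comments; see the accompanying dataset documentation.",
    quantitative_analyses="Results disaggregated across subgroups.",
    ethical_considerations="Risk of disparate false positives on identity terms.",
    caveats_and_recommendations="Re-evaluate before deployment in new domains.",
)
print(card.to_markdown())
```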

The “Model Cards” proposal specifically aims to standardize ethical practice and reporting, allowing stakeholders to compare candidate models for deployment not only on traditional evaluation metrics but also along the axes of ethical, inclusive, and fair considerations. This goes further than current solutions in aiding stakeholders across different contexts: for example, it helps policymakers and regulators know what questions to ask of a model and what benchmarks indicate a model's suitability in a given setting. A few use cases for different stakeholders:

  • Policymakers can understand how a machine learning system may fail or succeed in ways that impact people. 
  • Impacted individuals who may experience effects from a model can better understand how it works or use information in the card to pursue remedies.
  • ML and AI practitioners can better understand how well the model might work for the intended use cases and track its performance over time. 
  • Model developers can compare the model’s results to other models in the same space, and make decisions about training their own system. 
  • Software developers working on products that use the model’s predictions can inform their design and implementation decisions. 
  • Organizations can inform decisions about adopting technology that incorporates machine learning. 
  • ML-knowledgeable individuals can learn about options for fine-tuning, model combination, or additional rules and constraints that help curate models for intended use cases without requiring technical expertise. 

In addition to supporting decision-making processes for determining the suitability of a given machine learning model in a particular context, model reporting is an approach for responsible, transparent and accountable practices in machine learning. People and organizations releasing models may be additionally incentivized to provide model card details because it helps potential users of the models to be better informed on which models are best for their specific purposes. If model card reporting becomes standard, potential users can compare and contrast different models in a well-informed way. 

Disclaimer – The description above is based on the following paper: "Model Cards for Model Reporting" by Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Link: https://arxiv.org/pdf/1810.03993.pdf

 

Use Cases

Reporting Carbon Emissions on Open-Source Model Cards

Training AI models takes a substantial amount of energy, which means they emit a substantial amount of carbon dioxide. One study out of the University of Massachusetts, Amherst found that training a large and common AI model can create five times the...
Apr 19, 2023
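
The use case above does not prescribe how such emission figures are produced; one common approach is to instrument the training run with an energy and emissions tracker. Below is a minimal sketch using the open-source codecarbon library; the project name and the placeholder training routine are illustrative assumptions.

```python
import time

# A sketch of measuring training emissions with the open-source codecarbon
# package (pip install codecarbon). The project name and the placeholder
# training routine are assumptions for illustration.
from codecarbon import EmissionsTracker

def train_model() -> None:
    """Stand-in for an actual training loop."""
    time.sleep(5)

tracker = EmissionsTracker(project_name="demo-model-training")
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated emissions in kg CO2-equivalent

print(f"Estimated training emissions: {emissions_kg:.6f} kg CO2eq")
```

The resulting figure, together with hardware, runtime, and compute region, can then be reported on the model card so that prospective users can weigh environmental cost alongside performance.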


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.