Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Risk Management Profile for Artificial Intelligence and Human Rights



The Risk Management Profile for Artificial Intelligence and Human Rights serves as a practical guide for organisations—including governments, the private sector, and civil society—to design, develop, deploy, use, and govern AI in a manner consistent with respect for international human rights.

The Risk Management Profile for Artificial Intelligence and Human Rights, also known as “the Profile”, aims to bridge the gap between risk management approaches and human rights. It shows how assessing, addressing, and mitigating human rights risks fit within established risk management practices. The Profile examines how the AI risk management processes in the U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) can supply actions that organisations may use in their human rights due diligence processes. It is intended to be applied across applications, stakeholders, and sectors, and throughout the AI lifecycle, to increase the capacity to engage in AI risk management practices that promote the enjoyment of human rights.

The Profile provides non-exhaustive, non-binding guidance on how organisations can use the NIST AI RMF. It has two interrelated goals:

  1. Show AI designers, developers, deployers, and users how to apply NIST’s AI Risk Management Framework to contribute to human rights due diligence practices.
  2. Facilitate rights-respecting AI governance throughout AI design, development, deployment, and use by all stakeholders.

The Profile shows how some human rights-related actions can be taken as part of implementing the AI RMF’s four organisational functions: 

  • Govern: establishing institutional structures and processes
  • Map: understanding context and identifying risks
  • Measure: assessing and monitoring risks and impacts
  • Manage: prioritising risks, and preventing and responding to incidents

About the tool


Tags:

  • ai incidents
  • ai governance
  • ai risk management
  • human rights


Use Cases

There are no use cases for this tool yet.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.