These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Risk Management Profile for Artificial Intelligence and Human Rights
The Risk Management Profile for Artificial Intelligence and Human Rights serves as a practical guide for organisations—including governments, the private sector, and civil society—to design, develop, deploy, use, and govern AI in a manner consistent with respect for international human rights.
This Risk Management Profile for Artificial Intelligence and Human Rights, also known as “the Profile”, aims to bridge the gap between risk management approaches and human rights. It shows how actions to assess, address, and mitigate human rights risks form part of risk management practice. The Profile examines how the AI risk management processes of the U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) can supply actions that organisations may use in their human rights due diligence processes. The work is intended to apply across applications, stakeholders, and sectors, and throughout the AI lifecycle, increasing the capacity to engage in AI risk management practices that promote the enjoyment of human rights.
The Profile provides non-exhaustive, non-binding guidance on how organisations can use the NIST AI RMF. It has two interrelated goals:
- Show AI designers, developers, deployers, and users how to apply NIST’s AI Risk Management Framework to contribute to human rights due diligence practices.
- Facilitate rights-respecting AI governance throughout AI design, development, deployment, and use by all stakeholders.
The Profile shows how some human rights-related actions can be taken as part of implementing the AI RMF’s four organisational functions:
- Govern: setting up institutional structures and processes
- Map: understanding context and identifying risks
- Measure: assessing and monitoring risks and impacts
- Manage: prioritising, preventing, and responding to incidents
About the tool
Tags:
- ai incidents
- ai governance
- ai risk management
- human rights