Human Rights Artificial Intelligence Impact Assessment
The Law Commission of Ontario and the Ontario Human Rights Commission have partnered to create an artificial intelligence (AI) impact assessment tool that gives organisations a method for assessing AI systems' compliance with their obligations under Canadian human rights law. Existing and emerging AI regulations increasingly require organisations to conduct AI impact assessments and/or comply with human rights laws before deploying AI systems. Not all impact assessments are the same: privacy, for example, while an element of human rights, is best addressed through dedicated privacy impact assessment methodologies.
The purpose of this human rights AI impact assessment (HRIA) is to assist developers and administrators of AI systems in identifying, assessing, and minimising or avoiding discrimination, and in upholding human rights obligations throughout the lifecycle of an AI system.
The HRIA is a practical, step-by-step guide to help private and public organisations assess and mitigate the human rights impacts of AI systems across a broad range of applications. It is intended to:
- Strengthen knowledge and understanding of human rights impacts;
- Provide practical guidance on specific human rights impacts, particularly in relation to non-discrimination and equality of treatment;
- Identify practical mitigation strategies and remedies to address bias and discrimination arising from AI systems.
This HRIA is split into two parts:
- Part A is an assessment of the AI system for human rights implications. In this section, organisations are asked about the purpose of the AI system, its significance, and its treatment of individuals and communities. The questions help organisations determine whether the system is “high risk” for human rights issues; whether it exhibits differential treatment on protected grounds; whether any differential treatment is justified; and whether the system accommodates different needs. (A minimal sketch of one such differential-treatment check appears after this list.)
- Part B is about mitigation. Once the AI system has been categorised, Part B provides a series of questions to help organisations minimise the identified human rights issues in the given AI system.
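
The HRIA itself is a questionnaire rather than software, but the differential-treatment question in Part A is often operationalised quantitatively by comparing favourable-outcome rates across protected groups. The Python sketch below is illustrative only: the record structure, the column names (`group`, `approved`), and the four-fifths threshold are assumptions for the example, not part of the HRIA or of Canadian human rights law.

```python
from collections import defaultdict

def selection_rates(records, group_key, outcome_key):
    """Compute the favourable-outcome rate for each protected group."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for row in records:
        group = row[group_key]
        totals[group] += 1
        if row[outcome_key]:
            favourable[group] += 1
    return {g: favourable[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate.

    Values below ~0.8 (the 'four-fifths rule' used in some
    jurisdictions) are a common flag for further review; the
    threshold is illustrative, not a legal standard.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical loan-approval decisions; field names are assumptions.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

rates = selection_rates(decisions, "group", "approved")
print(rates)                          # {'A': 0.667, 'B': 0.333}
print(disparate_impact_ratio(rates))  # 0.5 -> flag for further review
```

A low ratio does not by itself establish discrimination; it signals that the justification and accommodation questions in Part A, and the mitigation questions in Part B, deserve closer attention.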
About the tool
Tags:
- ai assessment
- bias
- regulation compliance
- privacy
- human rights