AIAAIC Repository
An independent, open, public interest resource, the AIAAIC Repository details incidents and controversies driven by and relating to artificial intelligence, algorithms, and automation.
Started in June 2019 as a private professional project to better understand the reputational risks of artificial intelligence, the AIAAIC Repository has evolved into an open, public-interest initiative that collects, examines, and publishes a broad range of incidents and issues posed by AI, algorithmic, and automation systems.
The repository is independent of government and industry, and is used by researchers, academics, teachers, NGOs, journalists, policymakers, and industry experts across the world seeking to better understand the nature, risks, and impacts of AI, algorithms, and automation.
About the tool
Tags:
- incident database
- transparency
- ai risk management