Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

RAI Institute: Artificial Intelligence Impact Assessment (AIIA)



This tool, developed by the Responsible Artificial Intelligence Institute (RAI Institute), informs the broader assessment of AI risks and risk management. Based on the occurrence of specific events, it allows management and development teams to identify actual and potential impacts at the AI system level through a set of defined controls across the stages of the system lifecycle. The impacts identified are categorised in line with generally accepted principles for safe and trustworthy AI, in particular: accountability and transparency, fairness, safety, security and resilience, explainability and interpretability, validity and reliability, and privacy. For assurance purposes, the assessment tool is accompanied by complementary guidance on evidence documentation requirements.

The RAI Institute aims to advance the practice of responsible AI by developing tools and guidance that put responsible AI principles into practice. The AIIA was developed to further this aim: to ensure adequate oversight and risk management, an organisation must be able to assess the impacts of its AI systems accurately and easily, and to confirm its compliance status against relevant regulations and standards.

The RAI Institute AIIA equips organisations with a vital tool for ensuring their AI models and systems comply with pertinent policies and industry benchmarks for responsible AI. It increases visibility and accountability, allows for early risk identification, and fosters stakeholder trust by demonstrating a commitment to safe, secure and trustworthy AI. By streamlining the evaluation process and defining actionable controls for assessment, the AIIA not only helps organisations put responsible AI principles into practice but also strengthens governance, mitigates risks, and supports sustainable innovation.

The RAI Institute AIIA informs the broader risk assessment and risk management scheme for the development and deployment of an AI system. It can function as a stand-alone tool, but for risk assessment purposes its outcomes must be interpreted in terms of the likelihood that identified impacts occur and the severity of those impacts. Moreover, the results of the AIIA, and the consequent assurance level, depend on the evidence documentation an organisation provides to assert the extent to which the controls have been met.
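
To make the relationship between controls, evidence documentation, and assurance level concrete, the sketch below models it in Python. This is a purely illustrative assumption of how such a scheme could be encoded: the principle names follow the list above, but the control identifiers, lifecycle stages, evidence fields, and scoring thresholds are invented and do not reflect the RAI Institute's actual schema or scoring.

from dataclasses import dataclass, field
from enum import Enum

# Hypothetical illustration only: identifiers, stages, and thresholds
# are assumptions, not the RAI Institute's published schema.

class Principle(Enum):
    ACCOUNTABILITY_TRANSPARENCY = "accountability and transparency"
    FAIRNESS = "fairness"
    SAFETY = "safety"
    SECURITY_RESILIENCE = "security and resilience"
    EXPLAINABILITY = "explainability and interpretability"
    VALIDITY_RELIABILITY = "validity and reliability"
    PRIVACY = "privacy"

@dataclass
class Control:
    identifier: str
    principle: Principle
    lifecycle_stage: str                               # e.g. "design", "deployment"
    evidence: list[str] = field(default_factory=list)  # documentation references

    @property
    def met(self) -> bool:
        # A control counts as met only when supporting evidence is documented.
        return bool(self.evidence)

def assurance_level(controls: list[Control]) -> str:
    """Toy roll-up: the share of evidenced controls sets the assurance tier."""
    share_met = sum(c.met for c in controls) / len(controls)
    if share_met >= 0.9:
        return "high"
    if share_met >= 0.6:
        return "moderate"
    return "low"

controls = [
    Control("FAIR-01", Principle.FAIRNESS, "design", ["bias_audit_2024.pdf"]),
    Control("PRIV-03", Principle.PRIVACY, "deployment"),  # no evidence yet
]
print(assurance_level(controls))  # -> "low"

In this toy roll-up, an unevidenced control lowers the assurance tier, mirroring the point above that the assurance level depends on the documentation an organisation provides.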

Link to the full use case.

This case study was published in collaboration with the UK Department for Science, Innovation and Technology Portfolio of AI Assurance Techniques. You can read more about the Portfolio and how you can upload your own use case here.

About the tool




Tags:

  • nlp
  • transparency
  • accountability
  • risk
  • data


Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.