Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Advai: Implementing a Risk-Driven AI Regulatory Compliance Framework

Jun 5, 2024


As AI becomes central to organisational operations, it is crucial to align AI systems and models with emerging regulatory requirements around the world. This use case describes how a risk-driven approach, aligned with ISO 31000 principles, is integrated to assess and mitigate the risks associated with AI implementation. In the risk stages outlined below, stress testing is informed by stages 1 and 5 and is instrumental to stages 2, 3 and 4. Accurate assignment of risk rests on understanding the points at which a model fails; stress testing is the technical capability that gives an effective risk assessment of AI its integrity.

1. Context Understanding: Assess the AI model type, expected user behaviour, application, and the potential for wider impact within its operational environment; if things go wrong, what and who might be impacted?

2. Risk Identification: Use stress-testing methods to identify potential AI risks such as data privacy issues, biases and security vulnerabilities.

3. Risk Assessment: Evaluate the likelihood of AI model failure in a given context. Stress testing may reveal a vulnerability, but how likely is it to occur under real-world conditions?

4. Risk Treatment: Analysis of the causes of failure informs strategies to mitigate these risks, such as augmenting training data with examples designed to counter a bias.

5. Monitoring and Review: Continuously monitor AI systems to detect new risks and assess the effectiveness of risk mitigation strategies. This informs where to target future stress testing and which methods to use.
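To make the connection between stress testing (stage 2) and risk assessment and treatment (stages 3 and 4) concrete, the minimal Python sketch below shows how stress-test failure rates might be combined with real-world exposure to produce a risk rating. The function names, the perturbation, the thresholds and the toy model are illustrative assumptions, not part of Advai's tooling.

```python
# Hypothetical sketch: stress-test results feeding a risk assessment (stages 2-4).
# All names, thresholds and the toy model are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List
import random

@dataclass
class StressResult:
    perturbation: str       # e.g. "gaussian_noise", "paraphrase", "occlusion"
    failure_rate: float     # fraction of perturbed inputs the model got wrong

def stress_test(model: Callable[[float], int],
                inputs: List[float],
                labels: List[int],
                noise_scale: float) -> StressResult:
    """Stage 2: probe the model with perturbed inputs to locate failure points."""
    failures = 0
    for x, y in zip(inputs, labels):
        x_perturbed = x + random.gauss(0.0, noise_scale)
        if model(x_perturbed) != y:
            failures += 1
    return StressResult("gaussian_noise", failures / len(inputs))

def assess_risk(result: StressResult, exposure: float) -> str:
    """Stage 3: combine the technical failure rate with real-world exposure
    (how often the perturbation occurs in deployment) into a risk rating."""
    likelihood = result.failure_rate * exposure
    if likelihood > 0.2:
        return "high"    # Stage 4: prioritise mitigation, e.g. augment training data
    if likelihood > 0.05:
        return "medium"
    return "low"

if __name__ == "__main__":
    # Toy classifier: predicts 1 if the input exceeds a threshold.
    toy_model = lambda x: int(x > 0.5)
    xs = [0.1 * i for i in range(10)]
    ys = [int(x > 0.5) for x in xs]
    result = stress_test(toy_model, xs, ys, noise_scale=0.3)
    print(result, "->", assess_risk(result, exposure=0.5))
```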

 

Alongside ISO 31000, some emerging ISO/IEC standards we incorporate into our AI Alignment approach include:

  • ISO/IEC TR 24027:2021, Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making
  • ISO/IEC 25059:2023, Software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Quality model for AI systems
  • ISO/IEC FDIS 5338 (Under Development), Information technology — Artificial intelligence — AI system life cycle processes

The approach addresses the evolving AI threat landscape and introduces a structured process for organisations to make the risks of their AI systems digestible to those responsible for risk management.
It modernises traditional risk management frameworks to encompass the novel risk challenges specific to AI. Many modern efforts seem to discard decades of otherwise sensible and effective risk management practice; we have instead chosen to align the unique risks posed by AI with this corpus of proven risk management philosophies and methods. By leveraging existing enterprise risk management frameworks and tailoring them to the nuances of AI, we translate the modern challenges of AI risk management into terms and processes organisations are already equipped to handle. This ensures compliance, enhances trust, and safeguards against AI-specific threats.
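As an illustration of translating an AI-specific finding into terms an enterprise risk function already manages, the hypothetical sketch below records a stress-testing result as a conventional likelihood-by-impact risk-register entry. The field names and the 5x5 scoring scale are assumptions chosen for illustration, not a prescribed format.

```python
# Hypothetical sketch: an AI-specific finding expressed as a standard
# enterprise risk-register entry (likelihood x impact scoring).
from dataclasses import dataclass

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    treatment: str

    @property
    def rating(self) -> int:
        return self.likelihood * self.impact  # standard 5x5 heat-map score

entry = RiskRegisterEntry(
    risk_id="AI-003",
    description="Sentiment model misclassifies dialectal text (bias found in stress testing)",
    likelihood=4,
    impact=3,
    treatment="Augment training data with dialectal examples; re-run stress tests quarterly",
)
print(entry.risk_id, entry.rating)  # 12, i.e. a high band on a typical heat map
```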

Benefits of using the tool in this use case

  • Enhanced compliance with global AI regulations.
  • Improved risk identification and mitigation strategies for AI systems.
  • Strengthened trust from stakeholders through robust governance and accountability measures.
  • A clear mechanism for addressing and rectifying issues arising from AI system errors or biases. 

Shortcomings of using the tool in this use case

  • Rapidly changing regulatory environments may necessitate frequent updates to the risk framework.
  • The technique relies on the accuracy of the risk assigned in the context of a particular organisation; its effectiveness is only as strong as the relevance of that assigned risk.

Related links: 

This case study was published in collaboration with the UK Department for Science, Innovation and Technology Portfolio of AI Assurance Techniques. You can read more about the Portfolio and how you can upload your own use case here.


About the use case


Objective(s):