
Advai: Regulatory Aligned AI Deployment Framework

Jun 5, 2024


The adoption of AI systems in high-risk domains demands adherence to stringent regulatory frameworks to ensure safety, transparency, and accountability. This use case focuses on deploying AI in a manner that not only meets performance targets but also accounts for regulatory risk, robustness, and societal impact.
System interfaces are built around task APIs that can be used to generate benchmarks, run evaluations and create metrics. The metrics are selected and compiled to align deliberately with the regulatory principles relevant to the use case. In practice, the service is deployed as a microservice architecture, in the cloud or on-premises, depending on user requirements. For data scientists, a lab interface makes it easy to interact with technical assessments and fine-tune the required metrics; for business users, a dashboard presents the data at an appropriate level of granularity.
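A minimal sketch of how such a task API might be driven end to end is shown below. Advai's actual interfaces are not public, so every class, method, endpoint and URL here is a hypothetical illustration of the benchmark-generation, evaluation and metric-compilation flow described above.

```python
# Illustrative sketch only: Advai's task APIs are not public, so every class,
# method, endpoint and URL below is hypothetical.
from dataclasses import dataclass, field


@dataclass
class RegulatoryProfile:
    """Regulatory principles that the compiled metrics must map back to."""
    jurisdiction: str
    principles: list = field(default_factory=list)


class TaskAPIClient:
    """Thin client over the (hypothetical) benchmark/evaluation task APIs."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def create_benchmark(self, model_id: str, profile: RegulatoryProfile) -> str:
        # Would POST to the benchmark-generation task API; here we just
        # return a placeholder benchmark id.
        return f"bench-{model_id}-{profile.jurisdiction}"

    def run_evaluation(self, benchmark_id: str) -> dict:
        # Would execute the benchmark against the model and return raw
        # results; placeholder scores keyed by regulatory principle.
        return {"robustness": 0.91, "transparency": 0.84, "accountability": 0.77}

    def compile_metrics(self, results: dict, profile: RegulatoryProfile) -> dict:
        # Aggregate raw results into metrics keyed by the regulatory
        # principles in the profile, for the lab view and the dashboard.
        return {p: results[p] for p in profile.principles if p in results}


# Example: evaluate a model against principles aligned with EU requirements.
client = TaskAPIClient("https://example.internal/advai-api")  # hypothetical URL
profile = RegulatoryProfile("EU", ["robustness", "transparency"])
benchmark_id = client.create_benchmark("credit-model-v3", profile)
print(client.compile_metrics(client.run_evaluation(benchmark_id), profile))
```

Keeping the regulatory principles in an explicit profile object is one plausible way to make the metric selection auditable, in line with the stated goal of compiling metrics against regulatory principles.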
By integrating MLOps practices with a regulatory framework, we aim to address the pain points of various stakeholders such as data scientists, risk/compliance officers, C-suite executives, and the public, ensuring responsible innovation and deployment of AI systems.

This approach is taken to overcome the disconnect between technical AI development and the broader needs of stakeholders. By aligning MLOps with underlying regulatory principles and current regulatory requirements, we future-proof AI deployment against emerging legal and ethical standards, enabling innovation without compromising accountability, compliance or public trust.

Benefits of using the tool in this use case

  • Ensures greater compliance with international regulatory standards, reducing legal risk. It does this by filtering, interpreting and matching compliance requirements to the industry and use case (see the sketch after this list).
  • Fosters trust among end-users and the public by demonstrating a commitment to ethical AI deployment.
  • Enhances the resilience of AI systems against adversarial attacks and real-world uncertainties.
  • Connects the broader agenda of regulatory compliance with the more granular details along the machine learning ideation-development-deployment lifecycle.
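The matching step referenced in the first benefit can be pictured as filtering a catalogue of obligations down to those triggered by a given industry and risk profile. The sketch below is illustrative only: the `Requirement` records, tags and matching rule are invented for this example, not Advai's actual logic.

```python
# Illustrative sketch of filtering a requirements catalogue down to those
# relevant for a given industry and use case; records and tags are invented.
from dataclasses import dataclass


@dataclass(frozen=True)
class Requirement:
    ref: str                 # citation into the source regulation
    text: str                # the obligation itself
    industries: frozenset    # industries it applies to ("*" = all)
    risk_tags: frozenset     # risk characteristics that trigger it


CATALOGUE = [
    Requirement("EU AI Act, Art. 9", "Establish a risk management system.",
                frozenset({"*"}), frozenset({"high-risk"})),
    Requirement("EU AI Act, Art. 14", "Ensure effective human oversight.",
                frozenset({"finance", "health"}),
                frozenset({"high-risk", "automated-decision"})),
]


def match_requirements(industry: str, risk_tags: set) -> list:
    """Return requirements whose industry and risk tags match the use case."""
    return [
        r for r in CATALOGUE
        if ("*" in r.industries or industry in r.industries)
        and r.risk_tags & risk_tags  # at least one triggering risk tag
    ]


for r in match_requirements("finance", {"high-risk"}):
    print(r.ref, "-", r.text)
```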

Shortcomings of using the tool in this use case

  • Interpretation of Regulatory Requirements: The principles set forth by regulations are open to interpretation. Different stakeholders may interpret how to meet these standards differently, leading to misalignment and inconsistent implementation.
  • Dynamic Regulatory Environments: Regulatory requirements differ across regions and can change rapidly, so an approach that is effective in one region must be adapted to each new regulatory environment (a configuration sketch follows this list).
  • The efficacy of the feedback mechanisms designed to fine-tune the approach depends on users' willingness and ability to use them.
  • Extracting the relevant regulatory requirements for a given use case and industry is a substantial task.
  • This remains a fundamentally subjective approach: there is no metric or evidence that the organisation has selected the correct regulations, although we advise based on prior comparable experience.
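One way to cope with the region-to-region variation noted above is to keep regulatory profiles as versioned data rather than code, so they can be revised as rules change without touching the evaluation pipeline. A hypothetical sketch, with invented profile contents and version dates:

```python
# Hypothetical region profiles kept as data so they can be revised without
# code changes; frameworks, principles and version dates are illustrative.
REGION_PROFILES = {
    "EU": {"version": "2024-06", "frameworks": ["EU AI Act"],
           "principles": ["robustness", "transparency", "human-oversight"]},
    "UK": {"version": "2024-06", "frameworks": ["UK pro-innovation framework"],
           "principles": ["safety", "transparency", "accountability"]},
}


def profile_for(region: str) -> dict:
    """Fail loudly when a deployment region has no maintained profile."""
    try:
        return REGION_PROFILES[region]
    except KeyError:
        raise ValueError(f"No regulatory profile maintained for region {region!r}")
```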
     

Related links: 

This case study was published in collaboration with the UK Department for Science, Innovation and Technology (DSIT) as part of its Portfolio of AI Assurance Techniques. You can read more about the Portfolio, and how to submit your own use case, on the DSIT website.

