Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

The Alan Turing Institute: Assurance of third-party AI systems for UK national security



This case study explores how national security bodies can effectively evaluate AI systems designed and developed, at least in part, by industry suppliers before they are deployed in high-stakes national security environments. Our tailored AI assurance framework for UK national security facilitates transparent communication about AI systems between industry suppliers and national security customers, so that all stakeholders understand the risks that come with a particular AI system.

We provide a method for the robust assessment of whether AI systems meet the stringent requirements of national security bodies. The framework centres on a structured system card template for UK national security. This template sets out how AI system properties should be documented, covering legal, supply chain, performance, security, and ethical considerations. We recommend that government and industry work together to ‘fill out’ this system card template with relevant evidence that an AI system is safe to deploy.
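
The full template is defined in the report itself; purely as an illustration of the idea, the sketch below shows one way such a card could be represented as a structured record, with the evidence for each consideration compiled in one place and missing sections flagged. The class names, fields, and example values are assumptions made for this sketch, not the framework's actual template.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class EvidenceItem:
        """A single piece of evidence supplied for one section of the system card."""
        claim: str        # what is asserted about the system
        evidence: str     # supporting artefact, e.g. a test report or audit reference
        supplied_by: str  # which party provided the evidence (supplier or customer)

    @dataclass
    class SystemCard:
        """Illustrative structure for a third-party AI system card (assumed field names)."""
        system_name: str
        supplier: str
        intended_use: str
        legal: List[EvidenceItem] = field(default_factory=list)
        supply_chain: List[EvidenceItem] = field(default_factory=list)
        performance: List[EvidenceItem] = field(default_factory=list)
        security: List[EvidenceItem] = field(default_factory=list)
        ethics: List[EvidenceItem] = field(default_factory=list)

        def missing_sections(self) -> List[str]:
            """Return the sections that still have no supporting evidence."""
            sections = {
                "legal": self.legal,
                "supply_chain": self.supply_chain,
                "performance": self.performance,
                "security": self.security,
                "ethics": self.ethics,
            }
            return [name for name, items in sections.items() if not items]

    if __name__ == "__main__":
        # Hypothetical example of a partly completed card.
        card = SystemCard(
            system_name="Example triage model",
            supplier="Example Supplier Ltd",
            intended_use="Prioritising analyst review of incoming reports",
        )
        card.performance.append(
            EvidenceItem(
                claim="Meets the agreed accuracy threshold on representative test data",
                evidence="Supplier evaluation report, reviewed by the customer",
                supplied_by="supplier",
            )
        )
        print("Sections still lacking evidence:", card.missing_sections())

In practice the customer would review the compiled evidence section by section; the point of the sketch is simply that a single structured document makes gaps in that evidence easy to see.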

In addition to this, we offer guidance on what evidence should be used to fill out the system card template and on how contractual clauses should be used to mandate transparent information sharing from suppliers.

Finally, we offer guidance to national security bodies on how to assess the evidence compiled in the system card. We address the need to establish clear lines of accountability and to ensure ongoing post-deployment checks are in place to monitor any residual risks associated with an AI system.
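
As a hedged illustration of what an ongoing post-deployment check might involve (the metric name, threshold, and escalation step below are assumptions, not part of the published guidance), a monitored performance figure could be compared periodically against the level recorded as acceptable in the system card:

    def check_residual_risk(metric_name: str, observed: float, agreed_minimum: float) -> bool:
        """Return True if the observed value still meets the minimum agreed in the system card."""
        if observed >= agreed_minimum:
            return True
        print(f"ALERT: {metric_name} has fallen to {observed:.2f}, below the agreed "
              f"{agreed_minimum:.2f}; escalate to the accountable owner for review.")
        return False

    if __name__ == "__main__":
        # Hypothetical example: the system card records an agreed minimum precision of 0.90.
        check_residual_risk("precision on a labelled live sample", observed=0.86, agreed_minimum=0.90)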

Choice of approach

While other sectors have made much progress on AI assurance, no other dedicated approach exists to meet the needs of the national security community. AI uses in national security carry additional risks because of the prevalence of high-stakes deployment contexts where tolerance for error is especially low. We build on existing AI assurance research to explicitly address the risks that emerge when security services use industry-designed AI systems.

Our approach to AI assurance also aims to build on existing industry and government practices to make sure its implementation is feasible in the near term. Our assurance process is robust, with thorough safeguards included, but also practical, with streamlined stages that should be straightforward to implement.

Finally, this assurance framework explicitly tackles issues associated with third-party AI, introducing crucial considerations such as supply chain accountability and data provenance, which are missing from tools that approach assurance from the perspective of a single developing organisation.

Benefits include:

  • Aids transparent communication between suppliers and customers about the properties of an AI system.
  • Increases AI customers’ oversight across the whole AI lifecycle.
  • Allows AI customers to easily compile all evidence that an AI system is trustworthy within a single document – the AI system card.
  • Meets the specific needs of the national security sector.
  • Offers guidance on how AI suppliers might prove their systems are ethical, legally compliant, secure, and reliable.

Limitations include:

  • Requires multidisciplinary expertise and significant organisational capacity.
  • Relies on contributions from industry suppliers, who may be reluctant to share all relevant information about an AI system with the government customer.

 

Link to the full use case.

This case study was published in collaboration with the UK Department for Science, Innovation and Technology’s Portfolio of AI Assurance Techniques. You can read more about the Portfolio and how to upload your own use case here.

About the tool


Developing organisation(s): The Alan Turing Institute

Country of origin: United Kingdom

Tags:

  • responsible AI
  • data assurance
  • robustness
  • safety

Use Cases

There are no use cases for this tool yet.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.