Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Trustworthy and Ethical Assurance Platform

The Trustworthy and Ethical Assurance platform (the TEA Platform) is an open-source tool and framework designed and developed by researchers at the Alan Turing Institute, in collaboration with the University of York, to support the development and communication of trustworthy and ethical assurance cases for data-driven technologies, such as machine learning or AI, across a wide range of contexts and domains. It enables users to develop a structured argument that provides reviewable (and contestable) assurance for a set of claims about some goal of a data-driven technology (e.g. fairness, explainability), grounded in relevant forms of evidence (e.g. the results of bias auditing, or a qualitative assessment of user interpretability).
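The goal → claims → evidence structure described above can be sketched as a minimal data model. This is purely illustrative: the class names and the simple completeness check are our assumptions for the sake of the example, not the TEA Platform's actual schema or API.

```python
from dataclasses import dataclass


@dataclass
class Evidence:
    """A single piece of evidence supporting a claim."""
    description: str


@dataclass
class Claim:
    """A claim about the system, backed by zero or more evidence items."""
    statement: str
    evidence: list


@dataclass
class AssuranceCase:
    """A top-level goal supported by a set of claims."""
    goal: str
    claims: list

    def unsupported_claims(self):
        """Return claims that currently lack any supporting evidence."""
        return [c for c in self.claims if not c.evidence]


# A toy fairness case: one supported claim, one still awaiting evidence.
case = AssuranceCase(
    goal="The model's outputs are fair across demographic groups",
    claims=[
        Claim("Training data was audited for sampling bias",
              [Evidence("Bias audit report")]),
        Claim("Group error rates differ by less than 2%", []),
    ],
)

print([c.statement for c in case.unsupported_claims()])
```

A reviewer (or the platform) can then flag unsupported claims as gaps in the argument before the case is shared for comment.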

To support this process, the TEA platform also provides freely available resources and guidance to help scaffold a supportive community of practice. For instance, users can share and comment on publicly available assurance cases, access argument patterns that serve as templates for implementing ethical principles throughout a project's lifecycle, and, in general, help build best practices and consensus around assurance standards (e.g. determining what counts as evidence for specific claims).

Argument-based assurance has previously been used primarily as a methodology for auditing and assurance in safety-critical domains (e.g. energy, manufacturing, aviation), where it has focused largely on goals related to technical and physical safety. Our rationale for taking this approach was to build on a well-established and validated method, with existing standards, norms, and best practices, while extending the methodology to include ethical goals such as sustainability, accountability, fairness, explainability, and responsible data stewardship.

We also sought to ensure that our platform is easy to use and accessible, recognising the needs and challenges that many sectors and domains face (e.g. low levels of readiness for data-driven technologies). The methodology is therefore simplified, but remains flexible and adaptable with the additional guidance freely available on our documentation site.

The open-source nature of the tool also allows for extensibility and community support. It has also been containerised (i.e. Docker images are available) for users or organisations who wish to deploy it in a private environment or in the cloud.

The benefits of trustworthy and ethical assurance include:

  • aiding transparent communication among stakeholders and building trust;
  • integrating evidence sources and disparate methods (e.g. model cards, international standards);
  • making the implicit explicit through structured assurance cases;
  • aiding project management and governance;
  • supporting ethical reflection and deliberation;
  • contributing to the sustainability of an open community of practice.

For more information about other techniques visit the CDEI Portfolio of AI Assurance Tools.
For more information on relevant standards visit the AI Standards Hub.


Use Cases

There are no use cases for this tool yet.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.