Trustworthy and Ethical Assurance Platform
The Trustworthy and Ethical Assurance platform (or TEA Platform) is an open-source tool and framework designed and developed by researchers at the Alan Turing Institute, in collaboration with the University of York, to support the development and communication of trustworthy and ethical assurance cases for data-driven technologies, such as machine learning or AI, across a wide range of contexts and domains. It enables users to develop a structured argument that provides reviewable (and contestable) assurance for a set of claims about some goal of a data-driven technology (e.g. fairness, explainability), grounded in relevant forms of evidence (e.g. results of bias auditing, qualitative assessments of user interpretability).
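To make this goal–claims–evidence structure concrete, the following is a minimal illustrative sketch in Python. It is not the TEA Platform's actual schema; all class names, fields, and the file path are hypothetical, and serve only to show how a top-level goal can be decomposed into claims grounded in evidence.

```python
from dataclasses import dataclass, field


@dataclass
class Evidence:
    """An artefact that grounds a claim (e.g. a bias audit report)."""
    description: str
    reference: str  # e.g. a URL or document identifier


@dataclass
class Claim:
    """A reviewable (and contestable) claim about the system."""
    statement: str
    evidence: list[Evidence] = field(default_factory=list)


@dataclass
class AssuranceCase:
    """A structured argument linking a top-level goal to evidenced claims."""
    goal: str
    claims: list[Claim] = field(default_factory=list)


# Example: a fairness goal supported by one claim and one piece of evidence.
case = AssuranceCase(
    goal="The model's predictions are fair across demographic groups",
    claims=[
        Claim(
            statement="Error rates are balanced across protected groups",
            evidence=[Evidence(
                description="Results of bias auditing on the held-out test set",
                reference="reports/bias-audit.pdf",  # hypothetical path
            )],
        )
    ],
)
```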
To support this process, the TEA platform also provides freely available resources and guidance to help scaffold a supportive community of practice. For instance, users can share and comment on publicly available assurance cases, access argument patterns (templates that help implement ethical principles throughout a project's lifecycle), and, more generally, help build best practices and consensus around assurance standards (e.g. determining what counts as sufficient evidence for specific claims).
Argument-based assurance has previously been used primarily as a methodology for auditing and assurance in safety-critical domains (e.g. energy, manufacturing, aviation), where it has focused largely on goals related to technical and physical safety. Our rationale for taking this approach was to build on a well-established and validated method, with existing standards, norms, and best practices, while extending the methodology to include ethical goals, such as sustainability, accountability, fairness, explainability, and responsible data stewardship.
We also sought to ensure that the platform is easy to use and accessible, recognising the needs and challenges faced by many sectors and domains (e.g. low levels of readiness for data-driven technologies). The methodology is therefore deliberately simplified, but remains flexible and adaptable through additional guidance that is freely available on our documentation site.
The open-source nature of the tool also allows for extensibility and community support. In addition, the platform has been containerised (i.e. Docker images are available) for users or organisations who wish to deploy it in a private environment or on their own cloud infrastructure.
The benefits of trustworthy and ethical assurance include:
- aiding transparent communication among stakeholders and building trust;
- integrating evidence sources and disparate methods (e.g. model cards, international standards);
- making the implicit explicit through structured assurance cases;
- aiding project management and governance;
- supporting ethical reflection and deliberation;
- contributing to the sustainability of an open community of practice.
For more information about other techniques, visit the CDEI Portfolio of AI Assurance Tools.
For more information on relevant standards, visit the AI Standards Hub.