Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Filters applied — Approach: Procedural, Educational · Tool type: Documentation process · Objective: Transparency & explainability

Procedural · United Kingdom · Uploaded on Feb 20, 2024
Anekanta® AI’s AI Risk Intelligence System™ addresses the problem of biometrics and AI system risk classification by interrogating the entire AI system lifecycle and producing risk reports that inform compliance requirements in line with the EU AI Act and international regulatory frameworks.

Procedural · United States · Uploaded on Nov 29, 2023
AI/ML applications have unique security threats. Project GuardRail is a set of security and privacy requirements that AI/ML applications should meet during their design phase that serve as guardrails against these threats. These requirements help scope the threats such applications must be protected against.

Educational, Procedural · Finland · Uploaded on Nov 13, 2023
Saidot is a platform that helps you create and manage AI applications that are safe, ethical and transparent. With Saidot, you can: 1) Adopt AI policies from a library of templates, 2) Identify, mitigate and monitor AI risks, 3) Share AI reports with your stakeholders and 4) Learn how to use generative AI responsibly.

Procedural · United States · Uploaded on Oct 3, 2023
A comprehensive AI governance and compliance platform focused on empowering organizations with the risk management capabilities they need to adopt AI swiftly and responsibly.

Educational, Procedural · Uploaded on Apr 17, 2023
Fast AI with Assurance, Integrity, and Reliability for comprehensive AI governance and oversight.

Procedural · Uploaded on Apr 13, 2023
This paper addresses the complex and contentious issues surrounding AI governance, focusing on organizing overarching and systematic approaches for the transparency of AI algorithms, and proposes a practical toolkit for implementation. Designed to be highly user-friendly for both businesses and government agencies engaging in self- and co-regulation, the toolkit presents disclosure items in a list format, allowing for discretion and flexibility when considering AI algorithm providers, users, and risks.

Procedural · Uploaded on Apr 5, 2023
FRAIA assesses the risks that algorithms pose to human rights and promotes measures to address them. It fosters dialogue among the professionals involved in developing or deploying an algorithm. The client is accountable for implementing FRAIA to guard against unforeseen outcomes of algorithm use. FRAIA mitigates the risks of carelessness, ineffectiveness, and violations of citizens' rights.

Procedural · Slovenia · Uploaded on Apr 4, 2023
QLECTOR LEAP is a cloud-based application that guides production processes much as a GPS guides a road trip. It uses historical data and AI to forecast production and suggest optimal measures when unplanned events occur, combining machine-learning methods specific to manufacturing with its own AI technology to build and maintain a data-driven digital twin of a factory.

Related lifecycle stage(s)

Verify & validate

Procedural, Educational · Uploaded on Apr 4, 2023
An enterprise platform for governance, risk, and compliance in AI, supporting regulatory compliance with global laws that intersect with AI.

Procedural · Uploaded on Mar 30, 2023
GRACE is an AI governance platform that offers a central registry for all AI models, tools, and workflows for risk mitigation, compliance, and collaboration across AI teams and projects. It provides real-time compliance, transparency, and trust while promoting AI innovation. The platform works across all major cloud vendors and offers out-of-the-box frameworks for complying with EU AI Act, AI standards, data regulations, and AI frameworks.

Procedural · Uploaded on Mar 20, 2023
An accessible framework (a single A3 sheet) for governing the ethical development and use of AI, covering the what (ethics), the how (realisation principles), and the submission of evidence for approval. Trustworthiness and bias are also covered.

Procedural · Uploaded on Mar 2, 2023
Zupervise is a unified risk transparency platform for AI governance.

Procedural · Uploaded on Feb 14, 2023
An AI governance and risk management platform that helps organisations understand and manage the risks that come with AI while navigating the emerging regulatory landscape.

Educational, Procedural · Uploaded on Sep 15, 2022

Model cards are the tool for transparent AI documentation. Model cards are essential for discoverability, reproducibility, and sharing. You can find a model card as the README.md file in any model repo. Under the hood, model cards are simple Markdown files with additional metadata.
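As an illustration, a minimal model card might look like the following sketch. The model name, dataset, and field values are hypothetical; the YAML front-matter keys (`license`, `language`, `tags`, `datasets`, `metrics`) follow the Hugging Face Hub's model card metadata conventions:

```markdown
---
# Machine-readable metadata (hypothetical values)
license: apache-2.0
language:
  - en
tags:
  - text-classification
datasets:
  - imdb
metrics:
  - accuracy
---

# Model Card for my-sentiment-model

## Model Description
A sentiment classifier fine-tuned on English movie reviews.

## Intended Use and Limitations
Binary sentiment analysis of English text; not evaluated on other
languages or domains.
```

The YAML block makes the model discoverable through the Hub's filters, while the Markdown body carries the human-readable documentation.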

Related lifecycle stage(s)

Deploy

Procedural, Educational · Uploaded on Sep 15, 2022

A documentation-first, human-centered data collection project as part of the BigScience initiative. A geographically diverse set of target language groups were identified (Arabic, Basque, Chinese, Catalan, English, French, Indic languages, Indonesian, Niger-Congo languages, Portuguese, Spanish, and Vietnamese, as well as programming languages) for which to collect metadata on potential data sources. To structure this effort, an online catalogue was developed as a supporting tool for gathering metadata through organized public hackathons.

Related lifecycle stage(s)

Collect & process data


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.