Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Submit a tool

If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!

Filters applied: Tool type: Audit Process · Approach: Procedural

Procedural · United States, Japan · Uploaded on Apr 19, 2024
Diagnoses bias in large language models (LLMs) from various points of view, allowing users to choose the most appropriate LLM.

Related lifecycle stage(s): Plan & design

Procedural · Saudi Arabia · Uploaded on Mar 26, 2024
An LLM survey for responsible, transparent, and safe AI, covering international compliance regulations and data/model evaluations.

Procedural · Brazil · Uploaded on Mar 14, 2024
Ethical Problem Solving (EPS) is a framework to promote the development of safe and ethical artificial intelligence. EPS is divided into an evaluation stage (performed via Algorithmic Impact Assessment tools) and a recommendation stage (the WHY-SHOULD-HOW method).

Procedural · United Kingdom · Uploaded on Feb 20, 2024
Anekanta® AI’s AI Risk Intelligence System™ solves the problem of biometrics and AI system risk classification by examining the entire AI system lifecycle and providing risk reports that inform compliance requirements in line with the EU AI Act and international regulatory frameworks.

Procedural · Germany · Uploaded on Feb 19, 2024
Casebase is a platform for portfolio management of data analytics & AI use cases. It supports companies in systematically developing their ideas in the field of artificial intelligence, documenting the development process, and managing their data & AI roadmap. Particularly in the context of AI governance and the EU AI Act, Casebase helps manage the risks of AI systems over their entire lifecycle.

Procedural · Switzerland, Germany · Uploaded on Oct 31, 2023
How can we ensure trustworthy use of automated decision-making systems (ADMS) in public administration? AlgorithmWatch has developed a concrete and practicable impact assessment tool for ADMS in the public sector. This publication provides a ready-to-implement framework for the evaluation of specific ADMS by public authorities at different levels.

Procedural · Uploaded on Sep 29, 2023
Human rights impact assessments of AI systems require meaningful engagement of the individuals and groups most affected. Such engagement is particularly relevant for products and services using AI because there is still not enough practical guidance on how to involve stakeholders in the design, development and deployment of AI systems.

Procedural · France · Uploaded on Sep 26, 2023
Our software equips data scientists and AI engineers with powerful tools to help them create robust and explainable AI systems in conformity with ISO standards. Trust Saimple to help you build the foundation for reliable AI.

Procedural · United Kingdom, Japan, European Union · Uploaded on Aug 31, 2023 · <1 day
Fujitsu AI Ethics Impact Assessment assesses the potential risks and unwanted consequences of an AI system throughout its lifecycle and produces evidence which can be used to engage with auditors, approvers, and stakeholders. This process-driven technology allows users to: 1) map all interactions among the stakeholders and the components of the AI system; 2) assess the ethical risks emerging from such interactions; and 3) understand the mechanisms whereby incidental events could occur, based on previous AI ethics incidents.

Procedural · Uploaded on Jul 4, 2023
The Data Carbon Scorecard provides a rapid assessment of the environmental impact of a proposed data project, enabling you to quickly gauge its CO2 implications and determine the ‘best’ course of action for the environment. By answering just nine questions, you receive a traffic-light display of data CO2 hotspots and an overall Data CO2 Score for the whole project within minutes. The scorecard can be used by any individual, team, or organization.

Procedural · Germany · Uploaded on Jun 22, 2023 · <1 day
QuantPi’s platform unites AI testing with AI governance. It is the cockpit in which AI-first organizations collaborate to efficiently and responsibly understand, enhance, and steer their individual AI models as well as their complete AI landscape.

Procedural · Netherlands, United Kingdom · Uploaded on Jun 20, 2023 · >1 year
A software-based EU AI Act compliance solution for your clients and AI value chain actors.

Procedural · United Kingdom · Uploaded on Jun 16, 2023
Aival software allows non-technical users to evaluate commercial AI products in terms of performance, fairness, robustness, and explainability (healthcare/imaging).

Procedural · Germany · Uploaded on May 16, 2023
The KIDD process aims to empower companies and their employees to participate in shaping and introducing AI applications, ensuring that newly introduced systems comply with jointly negotiated ethical requirements. The KIDD process is based on the assumption that the development of software applications that have far-reaching implications for companies and employees should not be the sole responsibility of developers or decision-makers, but that users and employees should be fully involved in the design of the digital system.

Procedural · United Kingdom · Uploaded on May 11, 2023
To accelerate the journey to net zero, our team has designed the first publicly available tool for assessing and measuring the data CO2 footprint of new data projects along the dataflow: the “Data Carbon Ladder”. Using this tool, organizations can reduce their data CO2 footprint by ensuring optimal dataset selection, update frequency, storage location, and analytics, so that the project not only succeeds but does so in the most environmentally friendly way.

Procedural · Uploaded on Apr 13, 2023
This paper addresses the complex and contentious issues surrounding AI governance, focusing on overarching and systematic approaches to the transparency of AI algorithms, and proposes a practical toolkit for implementation. Designed to be highly user-friendly for both businesses and government agencies engaging in self- and co-regulation, the toolkit presents disclosure items in a list format, allowing for discretion and flexibility when considering AI algorithm providers, users, and risks.

Procedural · Uploaded on Apr 5, 2023 · <1 day
FRAIA (Fundamental Rights and Algorithms Impact Assessment) assesses the risks to human rights posed by algorithms and promotes measures to address them. It fosters dialogue among professionals involved in algorithm development or deployment. The client is accountable for implementing FRAIA to prevent uncertain outcomes of algorithm use. FRAIA mitigates risks of carelessness, ineffectiveness, or violations of citizens' rights.

Procedural · Uploaded on Apr 4, 2023
An enterprise platform for governance, risk, and compliance in AI, supporting regulatory compliance with global laws intersecting with AI.

Procedural · Uploaded on Apr 4, 2023 · <1 day
The Human-Ai Paradigm for Ethics, Conduct and Risk (HAiPECR)


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.