Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

SUBMIT A TOOL

If you have a tool that you think should be featured in the Catalogue of AI Tools & Metrics, we would love to hear from you!


Procedural · Switzerland, Germany · Uploaded on 31 October 2023
How can we ensure trustworthy use of automated decision-making systems (ADMS) in public administration? AlgorithmWatch has developed a concrete and practicable impact assessment tool for ADMS in the public sector. This publication provides a ready-to-implement framework for the evaluation of specific ADMS by public authorities at different levels.

Procedural · Uploaded on 29 September 2023
Human rights impact assessments of AI systems require meaningful engagement of the individuals and groups most affected. This is particularly relevant for products and services using AI, because there is still not enough practical guidance on how to involve stakeholders in the design, development and deployment of AI systems.

Procedural · France · Uploaded on 26 September 2023
Our software equips data scientists and AI engineers with powerful tools to help them create robust and explainable AI systems in conformity with ISO standards. Trust Saimple to help you build the foundation for reliable AI.

Procedural · United Kingdom, Japan, European Union · Uploaded on 31 August 2023 · <1 day
Fujitsu AI Ethics Impact Assessment assesses the potential risks and unwanted consequences of an AI system throughout its lifecycle and produces evidence that can be used to engage with auditors, approvers, and stakeholders. It is a process-driven technology that makes it possible to: 1) map all interactions among the stakeholders and the components of the AI system; 2) assess the ethical risks emerging from such interactions; and 3) understand the mechanisms whereby incidental events could occur, based on previous AI ethics incidents.
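
Fujitsu has not published a programmatic interface for this assessment. Purely as an illustration of steps 1 and 2 (mapping stakeholder/component interactions and attaching ethical risks to them), here is a minimal, hypothetical Python sketch; the stakeholders, interactions, and risk labels are invented examples, not Fujitsu's methodology.

```python
# Hypothetical sketch of interaction mapping for an AI ethics impact
# assessment: stakeholders and system components are nodes, their
# interactions are edges, and each interaction carries identified risks.
# Illustrative only; this is not Fujitsu's tool or data model.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interaction:
    source: str            # e.g. a stakeholder such as "data annotator"
    target: str            # e.g. a system component such as "training dataset"
    description: str
    risks: tuple = ()      # ethical risks identified for this interaction

interactions = [
    Interaction("data annotator", "training dataset",
                "labels loan applications",
                risks=("labelling bias",)),
    Interaction("model", "loan officer",
                "recommends approve/deny",
                risks=("automation bias", "opacity")),
    Interaction("loan officer", "applicant",
                "communicates decision",
                risks=("no appeal channel",)),
]

# Step 2: review the map and surface every interaction with risks attached.
for i in interactions:
    if i.risks:
        print(f"{i.source} -> {i.target}: {', '.join(i.risks)}")
```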

Procedural · Uploaded on 4 July 2023
The Data Carbon Scorecard provides a rapid assessment of the environmental impact of a proposed data project, enabling you to quickly gauge its CO2 implications and determine the ‘best’ course of action for the environment. By answering just nine questions, you receive a traffic-light display of data CO2 hotspots and an overall Data CO2 Score for the whole project within minutes. The scorecard can be used by any individual, team, or organization.
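
The scorecard is a questionnaire rather than software, and its nine questions are not reproduced here. As a minimal sketch of how a nine-question traffic-light score of this kind can be computed, consider the following Python example; the questions, the equal weighting, and the green/amber/red thresholds are hypothetical, not those of the actual Data Carbon Scorecard.

```python
# Hypothetical nine-question traffic-light scorecard. The questions and
# thresholds below are illustrative only, NOT the actual scorecard items.

QUESTIONS = [
    "Is new data collected rather than reusing existing datasets?",
    "Is the raw data volume larger than strictly needed?",
    "Will the data be duplicated across several storage locations?",
    "Is the storage region powered mainly by carbon-intensive energy?",
    "Will the data be retained indefinitely rather than archived or deleted?",
    "Are updates streamed continuously when batch updates would suffice?",
    "Does the analysis use compute-heavy methods (e.g. deep learning)?",
    "Are intermediate results recomputed instead of cached?",
    "Is the project run without any carbon monitoring in place?",
]

def traffic_light(answers: list[bool]) -> tuple[int, str]:
    """Count 'yes' answers (each flags a CO2 hotspot) and map to a light."""
    if len(answers) != len(QUESTIONS):
        raise ValueError("expected one answer per question")
    score = sum(answers)          # 0 (best) .. 9 (worst)
    if score <= 2:
        return score, "green"     # low data-CO2 risk
    if score <= 5:
        return score, "amber"     # moderate risk: review the hotspots
    return score, "red"           # high risk: rethink the project design

answers = [True, False, True, False, False, True, True, False, False]
score, light = traffic_light(answers)
print(f"Data CO2 score: {score}/9 -> {light}")
for question, flagged in zip(QUESTIONS, answers):
    if flagged:
        print("hotspot:", question)
```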

Procedural · Germany · Uploaded on 22 June 2023 · <1 day
QuantPi’s platform unites AI testing with AI governance. It serves as a cockpit in which AI-first organizations collaborate to efficiently and responsibly understand, enhance and steer their individual AI models as well as their complete AI landscape.

Procedural · Netherlands, United Kingdom · Uploaded on 20 June 2023 · >1 year
A software-based EU AI Act compliance solution for your clients and actors across the AI value chain.

Procedural · United Kingdom · Uploaded on 16 June 2023
Aival software allows non-technical users to evaluate commercial AI products in terms of performance, fairness, robustness and explainability, with a focus on healthcare imaging.

Procedural · Germany · Uploaded on 16 May 2023
The KIDD process aims to empower companies and their employees to participate in shaping and introducing AI applications, ensuring that newly introduced systems comply with jointly negotiated ethical requirements. The KIDD process is based on the assumption that the development of software applications that have far-reaching implications for companies and employees should not be the sole responsibility of developers or decision-makers, but that users and employees should be fully involved in the design of the digital system.

Procedural · United Kingdom · Uploaded on 11 May 2023
To accelerate the journey to net zero, our team has designed the “Data Carbon Ladder”, the first publicly available tool for assessing and measuring the data CO2 footprint of a new data project along its dataflow. Using this tool, organizations can reduce their data CO2 footprint by ensuring optimal dataset selection, update frequency, storage location, and analytics, so that the project not only succeeds but does so in the most environmentally friendly way.

Procedural · Uploaded on 13 April 2023
This paper addresses the complex and contentious issues surrounding AI governance, focusing on organizing overarching and systematic approaches to the transparency of AI algorithms, and proposes a practical toolkit for implementation. Designed to be highly user-friendly for both businesses and government agencies engaged in self- and co-regulation, the toolkit presents disclosure items in a list format, allowing for discretion and flexibility in considering AI algorithm providers, users, and risks.

Procedural · Uploaded on 5 April 2023 · <1 day
FRAIA, the Fundamental Rights and Algorithms Impact Assessment, assesses the risks to human rights posed by algorithms and promotes measures to address them. It fosters dialogue among professionals involved in algorithm development or deployment. The client is accountable for implementing FRAIA to prevent uncertain outcomes of algorithm use. FRAIA mitigates the risks of carelessness, ineffectiveness, and violation of citizens' rights.

Procedural · Uploaded on 4 April 2023
An enterprise platform for governance, risk and compliance in AI, supporting regulatory compliance with global laws that intersect with AI.

Procedural · Uploaded on 4 April 2023 · <1 day
The Human-Ai Paradigm for Ethics, Conduct and Risk (HAiPECR)

Procedural · Uploaded on 30 March 2023 · <1 day
GRACE is an AI governance platform that offers a central registry for all AI models, tools, and workflows, supporting risk mitigation, compliance, and collaboration across AI teams and projects. It provides real-time compliance, transparency, and trust while promoting AI innovation. The platform works across all major cloud vendors and offers out-of-the-box frameworks for complying with the EU AI Act, AI standards, data regulations, and AI frameworks.

Procedural · Finland · Uploaded on 27 March 2023
The framework helps to identify and manage risks of algorithmic bias and discrimination and to promote equality in the use of AI. It is especially intended for public sector use in close collaboration between public procurers and technical AI developers.

Procedural · Uploaded on 2 March 2023
Zupervise is a unified risk transparency platform for AI governance.

Procedural · Uploaded on 14 February 2023
An AI governance and risk management platform that helps organisations understand and manage the risks that come with AI while navigating the emerging regulatory landscape.

Procedural · Netherlands · Uploaded on 13 October 2022
Privacy Library Of Threats for AI/ML


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.
