Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Technical · United States · Uploaded on Nov 8, 2024
The Python Risk Identification Tool for generative AI (PyRIT) is an open-access automation framework that helps security professionals and machine learning engineers proactively find risks in their generative AI systems.
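
As a rough illustration of PyRIT's orchestrator pattern, the sketch below sends a candidate attack prompt to a target chat model. The class and argument names follow recent PyRIT documentation but vary between releases, so treat this as an assumed sketch rather than the exact API.

    import asyncio

    # Assumed imports: present in recent PyRIT releases, but names shift between versions.
    from pyrit.common import IN_MEMORY, initialize_pyrit
    from pyrit.orchestrator import PromptSendingOrchestrator
    from pyrit.prompt_target import OpenAIChatTarget

    async def main():
        # Recent PyRIT versions require initialising memory before use.
        initialize_pyrit(memory_db_type=IN_MEMORY)

        # Target model; endpoint and API key are read from environment variables.
        target = OpenAIChatTarget()

        # Orchestrator that batches candidate attack prompts against the target.
        orchestrator = PromptSendingOrchestrator(objective_target=target)
        await orchestrator.send_prompts_async(
            prompt_list=["Tell me how to disable a content filter."]
        )

    asyncio.run(main())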

Procedural · Uploaded on Nov 7, 2024
Trustworthy AI Procurement Card™ is a non-exhaustive list of information that can accompany acquisition decisions. The Card is similar to Datasheets or Model Cards in that its objective is to promote transparency and better due diligence during the AI procurement process.

Educational · United States · Uploaded on Nov 6, 2024
The deck of 50 Trustworthy AI Cards™ corresponds to the 50 most relevant concepts under five categories: Data, AI, Generative AI, Governance, and Society. The Cards are used to build awareness and literacy about the opportunities and risks of AI and about how to govern these technologies.

Procedural · France · Uploaded on Oct 25, 2024
An online tool for estimating the carbon emissions generated by AI model usage.
Related lifecycle stage(s): Plan & design
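
The catalogue entry does not describe the tool's methodology, but a common first-order estimate multiplies the energy drawn by the hardware by the carbon intensity of the electricity grid. The sketch below is illustrative only; the default PUE and grid-intensity values are assumptions, not the tool's figures.

    def estimate_emissions_kg(power_draw_w: float,
                              runtime_h: float,
                              pue: float = 1.5,
                              grid_intensity_kg_per_kwh: float = 0.4) -> float:
        """First-order CO2e estimate for running an AI workload."""
        # Scale device power up to facility energy via the power usage
        # effectiveness (PUE), then apply the grid's carbon intensity.
        energy_kwh = (power_draw_w / 1000.0) * runtime_h * pue
        return energy_kwh * grid_intensity_kg_per_kwh

    # Example: one 300 W GPU running for 10 hours in a typical data centre
    print(estimate_emissions_kg(300, 10))  # ~1.8 kg CO2e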

Procedural · Singapore · Uploaded on Oct 2, 2024
Resaro offers independent, third-party assurance of mission-critical AI systems. It promotes responsible, safe, and robust AI adoption for enterprises through technical advisory and evaluation of AI systems against emerging regulatory requirements.

Procedural · United Kingdom · Uploaded on Oct 2, 2024
Warden AI provides independent, tech-led AI bias auditing, designed for both HR Tech platforms and enterprises deploying AI solutions in HR. As the adoption of AI in recruitment and HR processes grows, concerns around fairness have intensified. With the advent of regulations such as NYC Local Law 144 and the EU AI Act, organisations are under increasing pressure to demonstrate compliance and fairness.

Procedural · Uploaded on Oct 2, 2024
FairNow is an AI governance software tool that simplifies and centralises AI risk management at scale. To build and maintain trust with customers, organisations must conduct thorough risk assessments on their AI models, ensuring compliance, fairness, and security. Risk assessments also ensure organisations know where to prioritise their AI governance efforts, beginning with high-risk models and use cases.

Technical · Uploaded on Nov 5, 2024
garak, the Generative AI Red-teaming & Assessment Kit, is an LLM vulnerability scanner: it checks whether an LLM can be made to fail, for example through prompt injection, jailbreaks, or toxic output.
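
For reference, garak is run as a Python module from the command line; the invocation below follows the pattern in garak's README (pick a model type, a model name, and a probe set), though exact flags may differ between releases.

    # List the available vulnerability probes
    python -m garak --list_probes

    # Scan an OpenAI chat model with the encoding-based injection probes
    python -m garak --model_type openai --model_name gpt-3.5-turbo --probes encoding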

Technical · International · Uploaded on Nov 5, 2024
A fast, scalable, and open-source framework for evaluating automated red teaming methods and LLM attacks/defenses. HarmBench has out-of-the-box support for transformers-compatible LLMs, numerous closed-source APIs, and several multimodal models.

Procedural · United States · Uploaded on Sep 10, 2024
The Risk Management Profile for Artificial Intelligence and Human Rights serves as a practical guide for organisations—including governments, the private sector, and civil society—to design, develop, deploy, use, and govern AI in a manner consistent with respect for international human rights.

Educational · United States · Uploaded on Nov 5, 2024
A community jury is a concept in which multiple stakeholders impacted by the same technology are given the opportunity to learn about a project, deliberate with one another, and provide feedback.

Educational · United States · Uploaded on Sep 9, 2024
Judgment Call is an award-winning responsible innovation game and team-based activity that puts Microsoft's AI principles of fairness, privacy and security, reliability and safety, transparency, inclusion, and accountability into action. The game provides an easy-to-use method for cultivating stakeholder empathy through scenario-imagining.

Technical · United States · Uploaded on Sep 9, 2024
Harms Modeling is a practice designed to help you anticipate the potential for harm, identify gaps in a product that could put people at risk, and ultimately create approaches that proactively address harm.

Procedural · Japan · Uploaded on Sep 9, 2024
The General Understanding on AI and Copyright, released by the Japan Copyright Office, aims to clarify how Japan's current Copyright Act should be applied in relation to AI technologies. It covers three main topics: the training stage of AI development, the generation and utilisation stage, and the copyrightability of AI-generated material.

Technical · United States · Uploaded on Sep 9, 2024
Dioptra is an open-source software test platform for assessing the trustworthy characteristics of artificial intelligence (AI). It helps developers determine which types of attacks may negatively impact their model's performance.

Educational · Spain, Ireland · Uploaded on Aug 2, 2024
The AI Governance module in the TrustWorks suite helps organisations achieve regulatory compliance with the EU AI Act and the coming wave of AI regulations. The tool gives any privacy or AI governance leader instant visibility and control over the AI systems used within the organisation, with continuous risk monitoring, risk classification, and fulfilment of transparency obligations.

Technical · France · Uploaded on Aug 2, 2024
Evaluates input-output safeguards for LLM systems, such as jailbreak and hallucination detectors, to understand how effective they are and on which types of inputs they fail.

Technical · United States · Uploaded on Aug 2, 2024
An AI security platform for generative AI and conversational AI applications. Probe enables security officers and developers to identify, mitigate, and monitor AI system security risks.

Technical · Uploaded on Aug 2, 2024
Responsible AI (RAI) Repairing Assistant

Procedural · New Zealand · Uploaded on Jul 11, 2024
The Algorithm Charter for Aotearoa New Zealand is a set of voluntary commitments developed by Stats NZ in 2020 to increase public confidence and visibility around the use of algorithms within Aotearoa New Zealand’s public sector. In 2023, Stats NZ commissioned Simply Privacy to develop the Algorithm Impact Assessment Toolkit (AIA Toolkit) to help government agencies meet the Charter commitments. The AIA Toolkit is designed to facilitate informed decision-making about the benefits and risks of government use of algorithms.

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.