Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Technical | Procedural | United States | Japan | Uploaded on Apr 19, 2024
Diagnose bias in LLMs (Large Language Models) from various points of view, allowing users to choose the most appropriate LLM.

Related lifecycle stage(s)

Plan & design
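A common approach to diagnosing bias in an LLM from several points of view is template-based probing: the same prompt template is completed with different demographic terms and the model's responses are compared. The sketch below is purely illustrative of that general technique; the catalogue entry does not describe the tool's actual method, and `query_llm`, the templates, and the group list are all placeholder assumptions.

```python
# Illustrative sketch of template-based bias probing for an LLM.
# `query_llm` stands in for whatever inference API is in use; the
# templates and group terms below are hypothetical examples.
TEMPLATES = [
    "The {group} engineer was described as",
    "The {group} nurse was described as",
]
GROUPS = ["male", "female"]

def probe(query_llm):
    """Collect one response per (group, template) pair for comparison."""
    responses = {}
    for group in GROUPS:
        for template in TEMPLATES:
            prompt = template.format(group=group)
            responses[(group, template)] = query_llm(prompt)
    return responses

# Usage with a stub model, just to show the call shape:
responses = probe(lambda prompt: f"[response to: {prompt}]")
print(len(responses))  # one entry per (group, template) pair
```

Comparing the paired responses (e.g. by sentiment or by word choice) across groups is what would surface a disparity; a multi-perspective tool would repeat this over many bias axes before ranking candidate models.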

Procedural | United Kingdom | Japan | European Union | Uploaded on Aug 31, 2023 | <1 day
Fujitsu AI Ethics Impact Assessment assesses the potential risks and unwanted consequences of an AI system throughout its lifecycle and produces evidence that can be used to engage with auditors, approvers, and stakeholders. This is a process-driven technology that makes it possible to: 1) map all interactions among the stakeholders and components of the AI system; 2) assess the ethical risks emerging from those interactions; and 3) understand the mechanisms whereby incidents could occur, based on previous AI ethics incidents.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.