Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.


SUBMIT A METRIC USE CASE

If you have a metric use case that you think should be featured in the Catalogue of Tools & Metrics for Trustworthy AI, we would love to hear from you!

Objective: Respect of human rights

Tags: Interaction support/chatbots; Personalisation/recommenders; Reasoning with knowledge structures/planning

Uploaded on Apr 10, 2024

The Human-Computer Trust scale (HCTS) is a simple, nine-item attitude Likert scale that gives a global view of subjective assessments of trust in technology.

The HCTS resu...
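To illustrate how a multi-item Likert instrument such as the HCTS might be scored, here is a minimal sketch. It assumes a common Likert convention (nine items rated on a 1–5 agreement scale, with the overall score taken as the item mean); the actual HCTS scoring rules and response range may differ, so consult the original instrument before using it.

```python
def likert_score(responses, scale_min=1, scale_max=5):
    """Combine nine Likert item responses into a single mean score.

    Hypothetical scoring convention (item mean); not the official
    HCTS procedure, which may use different anchors or weighting.
    """
    if len(responses) != 9:
        raise ValueError("expected nine item responses")
    for r in responses:
        if not scale_min <= r <= scale_max:
            raise ValueError(f"response {r} outside {scale_min}-{scale_max}")
    return sum(responses) / len(responses)

# Example: one participant's nine ratings on a 1-5 agreement scale
print(likert_score([4, 5, 3, 4, 4, 5, 4, 3, 4]))  # 4.0
```

Averaging (rather than summing) keeps the score on the same 1–5 scale as the individual items, which makes scores comparable even if some items are dropped in a shortened administration.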



Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.