Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

The Human-Computer Trust Scale (HCTS) is a short, nine-item Likert attitude scale that provides a global view of users' subjective trust in a technology.

HCTS results reveal how users' perceptions of the measured attributes (the HCTM indicators described below) shape their trust in a system, helping researchers identify the technology's strengths and weaknesses in terms of trust. The scale can also facilitate a human-centred understanding of end-user needs regarding AI-based tools.

The Human-Computer Trust Model (HCTM) is an empirically tested model that assesses users' perception of trust through three indicators: (1) Perceived Risk, the user's subjective assessment of the probability of negative consequences from using the system; (2) Competence, the system's ability to perform its expected tasks; and (3) Benevolence, the system's perceived intentions.
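
As an illustration only, the sketch below shows one way per-dimension Likert responses might be aggregated into trust scores in Python. The even three-item split per dimension, the 1-5 response range, and the reverse-coding of Perceived Risk items are assumptions made for this example; they are not specified by the HCTS/HCTM description above.

from statistics import mean

# Hypothetical item-to-dimension mapping. The HCTS has nine Likert items,
# but this even three-item split per dimension is assumed for illustration
# and is not the published item assignment.
DIMENSIONS = {
    "perceived_risk": [0, 1, 2],  # 0-based item indices
    "competence": [3, 4, 5],
    "benevolence": [6, 7, 8],
}

def score_hcts(responses, scale_max=5):
    """Aggregate nine Likert responses (assumed 1..scale_max) into
    per-dimension means and an overall trust score."""
    if len(responses) != 9:
        raise ValueError("expected exactly nine item responses")
    scores = {}
    for dimension, indices in DIMENSIONS.items():
        items = [responses[i] for i in indices]
        if dimension == "perceived_risk":
            # Assumed reverse-coding: higher perceived risk lowers trust.
            items = [scale_max + 1 - r for r in items]
        scores[dimension] = mean(items)
    scores["overall_trust"] = mean(scores[d] for d in DIMENSIONS)
    return scores

# Example: one respondent's nine answers on a 1-5 scale.
print(score_hcts([2, 3, 2, 4, 5, 4, 4, 4, 5]))

In practice, the actual item wording, scoring direction, and any dimension weighting should follow the published instrument rather than this sketch.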

This instrument is the culmination of a decade of research on users' trust in technology, which aimed to create an empirically and scientifically validated user-research instrument to support the design of trustworthy technologies across application contexts (e.g. eHealth, eGovernment, fictional scenarios, Human-Robot Interaction).

Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.