Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Human-Computer Trust Scale (HCTS)

The Human-Computer Trust Scale (HCTS) is a simple, nine-item attitude Likert scale that gives a global view of subjective assessments of trust in technology.

The HCTS results reveal how users' perceptions of a system shape their trust in it, helping researchers identify the technology's strengths and weaknesses in terms of trust. It can thus facilitate a human-centred understanding of end-user needs regarding AI-based tools.

This tool treats trust as the degree to which a user or other stakeholder has confidence that a product or system will behave as intended (ISO/IEC TR 24028:2020).

Because trust is context-dependent and multidimensional, this web-based survey tool guides practitioners in measuring users' trust at any point in their product lifecycle.

The Human-Computer Trust Model (HCTM) is an empirically tested model that assesses users' perception of trust based on three indicators: (1) Perceived Risk, users' subjective assessment of the probability of negative consequences from using the system; (2) Competence, the system's ability to perform its expected tasks; and (3) Benevolence, the system's perceived intentions.
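The three-indicator structure above can be sketched as a simple scoring routine. Note that the item-to-subscale mapping, the 1-5 response scale, and the reverse-scoring of risk items below are illustrative assumptions for a generic nine-item instrument; the published HCTS items and scoring key are not reproduced here.

```python
from statistics import mean

LIKERT_MAX = 5  # assumed 1-5 agreement scale

# Hypothetical mapping of the nine items to the three HCTM indicators.
SUBSCALES = {
    "perceived_risk": [0, 1, 2],  # higher agreement = more perceived risk
    "competence": [3, 4, 5],
    "benevolence": [6, 7, 8],
}

def score_hcts(responses):
    """Return per-indicator means plus an overall score.

    Risk items are reverse-scored so that higher values always
    indicate higher trust.
    """
    if len(responses) != 9:
        raise ValueError("expected nine item responses")
    scores = {}
    for name, items in SUBSCALES.items():
        values = [responses[i] for i in items]
        if name == "perceived_risk":
            values = [LIKERT_MAX + 1 - v for v in values]
        scores[name] = mean(values)
    scores["overall"] = mean(scores[k] for k in SUBSCALES)
    return scores
```

For example, with hypothetical responses `[2, 1, 2, 4, 5, 4, 3, 4, 4]`, the three risk items are first reversed to `[4, 5, 4]` before the subscale means are averaged into the overall score.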

This instrument is the culmination of ten years of research on the effects of users' trust in technology, which aimed to create an empirically and scientifically validated user research instrument to facilitate the design of trustworthy technologies in various application contexts (e.g. eHealth, eGovernment, fictional scenarios, and Human-Robot Interaction).


https://www.trustux.org/


Design, development and evaluation of a human-computer trust scale. Behaviour & Information Technology, 38(10), 1004-1015. DOI: 10.1080/0144929X.2019.1656779 


A Trust Scale for Human-Robot Interaction: Translation, Adaptation, and Validation of a Human-Computer Trust Scale. Human Behavior and Emerging Technologies, Wiley-Hindawi, 1-12. DOI: 10.1155/2022/6437441.


Measuring trust with psychophysiological signals: a systematic mapping study of approaches used. Multimodal Technologies and Interaction, 4(3), 63. DOI: 10.3390/mti4030063

Use Cases

Human-Robot Interaction Trust Scale (HRITS)


Trust has long been addressed in various disciplines (e.g., social psychology, economics, philosophy, and industrial organization). Each domain that explores trust treats it as either an attitude, an intention, or a behavior. A violation of trust usu...
May 8, 2023


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.