These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
Human-Computer Trust Scale (HCTS)
The Human-Computer Trust Scale (HCTS) is a simple nine-item Likert attitude scale that gives a global view of users' subjective trust in technology.
It can facilitate a human-centred understanding of end-user needs regarding AI-based tools.
The tool treats trust as the degree to which a user or other stakeholder has confidence that a product or system will behave as intended (ISO/IEC TR 24028:2020).
Because trust is context-dependent and multidimensional, this web-based survey tool guides practitioners in measuring users' trust at any point in the product lifecycle. The HCTS results reveal how users' perceptions of the attributes below shape their trust in the system and help researchers identify a technology's strengths and weaknesses in terms of trust.
The Human-Computer Trust Model (HCTM) is an empirically tested model that assesses users' perception of trust through three indicators: (1) Perceived Risk, users' subjective assessment of the probability of negative consequences arising from the system's use; (2) Competence, the system's ability to perform the expected tasks; and (3) Benevolence, the system's (perceived) intentions.
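For illustration, here is a minimal sketch of how responses to such a scale could be scored per indicator. The three-items-per-indicator grouping, five-point response format, reverse-coding of Perceived Risk, mean-based aggregation, and the `score_hcts` helper are all assumptions made for this example; the published HCTS materials define the actual items and scoring protocol.

```python
# Illustrative sketch only: the item-to-indicator mapping, 5-point response
# format, reverse-coding of Perceived Risk, and mean-based scoring are
# assumptions for demonstration, not the published HCTS scoring protocol.
from statistics import mean

# Hypothetical grouping: three items per indicator (nine items total).
INDICATORS = {
    "perceived_risk": [0, 1, 2],   # assumed reverse-coded below
    "competence":     [3, 4, 5],
    "benevolence":    [6, 7, 8],
}
SCALE_MAX = 5  # assuming a 5-point Likert response format


def score_hcts(responses: list[int]) -> dict[str, float]:
    """Compute per-indicator mean scores from nine Likert responses (1..5)."""
    if len(responses) != 9 or not all(1 <= r <= SCALE_MAX for r in responses):
        raise ValueError("expected nine responses on a 1..5 scale")
    scores = {}
    for indicator, items in INDICATORS.items():
        values = [responses[i] for i in items]
        if indicator == "perceived_risk":
            # Assumed: higher perceived risk should lower trust, so reverse-code.
            values = [SCALE_MAX + 1 - v for v in values]
        scores[indicator] = mean(values)
    scores["overall_trust"] = mean(scores[k] for k in INDICATORS)
    return scores


print(score_hcts([2, 1, 2, 4, 5, 4, 4, 4, 5]))
```

Scored this way, higher values on every indicator consistently mean more trust, which keeps subscale comparisons straightforward.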
The instrument is the culmination of a decade of research on users' trust in technology, aimed at producing an empirically and scientifically validated user-research instrument that supports the design of trustworthy technologies in various application contexts (e.g. eHealth, eGovernment, fictional scenarios, human-robot interaction).
Related publications:
- Design, development and evaluation of a human-computer trust scale. Behaviour & Information Technology, 38(10), 1004-1015, 2019. DOI: 10.1080/0144929X.2019.1656779
- A Trust Scale for Human-Robot Interaction: Translation, Adaptation, and Validation of a Human-Computer Trust Scale. Human Behavior and Emerging Technologies, Wiley-Hindawi, 1-12, 2022. DOI: 10.1155/2022/6437441
- Measuring trust with psychophysiological signals: a systematic mapping study of approaches used. Multimodal Technologies and Interaction, 4(3), 63, 2020. DOI: 10.3390/mti4030063
About the tool
Tags:
- human-ai
- trustworthiness
- ai behavioral research tool
Use Cases

Human-Robot Interaction Trust Scale (HRITS)