These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
The Human-Computer Trust Scale (HCTS) is a simple nine-item Likert attitude scale that provides a global view of users' subjective assessments of trust in technology.
HCTS results reveal how users' perceptions of the measured trust attributes shape their trust in the system and help researchers identify the technology's strengths and weaknesses in terms of trust. The scale can thus facilitate a human-centred understanding of end-user needs regarding AI-based tools.
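To make the scoring concrete, below is a minimal Python sketch of how a nine-item Likert instrument such as the HCTS might be aggregated into a single global trust score. The 1-5 response range, the presence of a reverse-keyed item, and the function name score_hcts are illustrative assumptions, not properties of the published scale.

```python
# Minimal sketch of scoring a nine-item Likert instrument such as the HCTS.
# The 1-5 response range and the reverse-keyed item are illustrative
# assumptions, not properties of the published scale.

def score_hcts(responses, reverse_keyed=(), scale_max=5):
    """Return the mean of nine Likert item responses as a global trust score."""
    if len(responses) != 9:
        raise ValueError("HCTS expects exactly nine item responses")
    adjusted = [
        (scale_max + 1 - r) if i in reverse_keyed else r  # flip reverse-keyed items
        for i, r in enumerate(responses)
    ]
    return sum(adjusted) / len(adjusted)

# One participant's responses; item index 3 treated as reverse-keyed (assumption).
print(score_hcts([4, 5, 3, 2, 4, 4, 5, 3, 4], reverse_keyed={3}))  # 4.0
```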
The Human-Computer Trust Model (HCTM) is an empirically tested model that assesses users' perception of trust through three indicators: (1) Perceived Risk, the user's subjective assessment of the probability of negative consequences arising from use of the system; (2) Competence, the system's perceived ability to perform its expected tasks; and (3) Benevolence, the system's perceived intentions towards the user.
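Because the HCTM groups items under these three indicators, per-indicator subscale scores can be sketched as below. The assignment of three items to each indicator is a hypothetical mapping chosen for illustration; the actual item keys come from the validated instrument.

```python
# Hedged sketch of per-indicator HCTM subscale scores. The mapping of nine
# items to the three indicators (three items each) is hypothetical; the
# actual item keys come from the validated instrument.
from statistics import mean

INDICATOR_ITEMS = {
    "perceived_risk": [0, 1, 2],  # hypothetical item indices
    "competence":     [3, 4, 5],
    "benevolence":    [6, 7, 8],
}

def hctm_subscales(responses):
    """Return the mean response per HCTM indicator."""
    return {name: mean(responses[i] for i in items)
            for name, items in INDICATOR_ITEMS.items()}

print(hctm_subscales([2, 3, 2, 4, 5, 4, 4, 4, 5]))
# {'perceived_risk': 2.33..., 'competence': 4.33..., 'benevolence': 4.33...}
```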
This instrument is the culmination of a decade of research on users' trust in technology, which aimed to create an empirically and scientifically validated user-research instrument to facilitate the design of trustworthy technologies in various application contexts (e.g. eHealth, eGovernment, fictional scenarios, Human-Robot Interaction).