Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

VDI-EE 4030 - Consideration of human reliability in the design of autonomous systems



The expert recommendation covers the design of autonomous systems from the perspective of human reliability. In many technical fields, autonomous systems are being developed to support or replace humans in their tasks, with the expectation that these tasks can be carried out faster, more precisely, or even more safely. Typical examples are autonomous driving, which is expected to increase road safety, and human-robot collaboration in assembly and manufacturing. Autonomous systems support humans in many ways, but they can also have negative effects. From the perspective of human reliability, the question is which conditions must be met for the resulting human-machine system to act reliably. The expert recommendation describes the known and expected interactions between humans and machines, as well as important prerequisites for humans and machines to collaborate reliably. A key prerequisite for avoiding automation-related accidents is that human reliability is considered correctly and comprehensively in the design of autonomous systems, and that the safety contributions of humans and technology are analysed and evaluated so that a safer overall system can be created. In practice, however, safety assessments often underestimate the positive contribution of humans and overestimate the positive contribution of the automated system. The expert recommendation is aimed at manufacturers of autonomous systems, employees of testing institutes, safety engineers, industrial designers, and experts in occupational science. © 2023 Beuth Verlag GmbH

The information about this standard has been compiled by the AI Standards Hub, an initiative dedicated to knowledge sharing, capacity building, research, and international collaboration in the field of AI standards. You can find more information and interactive community features related to this standard in the Hub's AI standards database. To access the standard directly, please visit the developing organisation's website.

About the tool


Developing organisation(s): VDI (Verein Deutscher Ingenieure)



Tags:

  • Human-computer interaction
  • Human-centred design
  • Safety


Use Cases

There are no use cases for this tool yet.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.