Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

An interdisciplinary framework to operationalise AI ethics

Artificial intelligence (AI) increasingly pervades all areas of life. To seize the opportunities this technology offers society, while limiting its risks and ensuring citizen protection, different stakeholders have presented guidelines for AI ethics. Nearly all of them consider similar values to be crucial and a minimum requirement for “ethically sound” AI applications – including privacy, reliability and transparency. However, how organisations that develop and deploy AI systems should implement these precepts remains unclear. This lack of specific and verifiable principles endangers the effectiveness and enforceability of ethics guidelines. To bridge this gap, this paper proposes a framework specifically designed to bring ethical principles into actionable practice when designing, implementing and evaluating AI systems. 

We have prepared this report as experts from fields ranging from computer science, philosophy, physics and engineering to technology impact assessment and the social sciences, and we work together as the AI Ethics Impact Group (AIEI Group). Our paper offers concrete guidance to decision-makers in organisations developing and using AI on how to incorporate values into algorithmic decision-making, and how to measure the fulfilment of those values using criteria, observables and indicators combined with a context-dependent risk assessment. It thus presents practical ways of monitoring ethically relevant system characteristics, giving policymakers, regulators, oversight bodies, watchdog organisations and standards development organisations a basis to build on. The framework thereby works towards better control, oversight and comparability of different AI systems, and also forms a basis for informed choices by citizens and consumers.

The report does so in four steps: 

In chapter one, we present the three main challenges for the practical implementation of AI ethics: (1) the context-dependency of realising ethical values, (2) the sociotechnical nature of AI usage and (3) the different requirements of different stakeholders concerning the ‘ease of use’ of ethics frameworks. We also explain how our approach addresses these three challenges and show how different stakeholders can make use of the framework. 

In chapter two, we present the VCIO model (values, criteria, indicators, and observables) for the operationalisation and measurement of otherwise abstract principles, and demonstrate how the model works for the values of transparency, justice and accountability. Here, we also propose context-independent labelling of AI systems, based on the VCIO model and inspired by the energy efficiency label. This labelling approach is unique in the field of AI ethics at the time of writing.

For the proposed AI Ethics Label, we carefully suggest six values, namely justice, environmental sustainability, accountability, transparency, privacy, and reliability, based on contemporary discourse and operability. 
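To make the VCIO hierarchy concrete, the sketch below models how observables could roll up through indicators and criteria into a per-value grade, in the spirit of the energy efficiency label. It is a minimal illustration: the A–G scale, the equal weighting and the aggregation rule are assumptions made here for readability, not the scheme defined in the report.

```python
from dataclasses import dataclass, field

# Grade scale inspired by the energy efficiency label; the report's actual
# scale, weights and thresholds may differ (assumption for illustration).
GRADES = ["A", "B", "C", "D", "E", "F", "G"]

@dataclass
class Indicator:
    """Bundles observables: concrete, checkable facts about the system."""
    name: str
    observables: dict[str, bool] = field(default_factory=dict)  # observable -> satisfied?

    def score(self) -> float:
        """Share of satisfied observables, from 0.0 to 1.0."""
        if not self.observables:
            return 0.0
        return sum(self.observables.values()) / len(self.observables)

@dataclass
class Criterion:
    """Refines a value into something indicators can measure."""
    name: str
    indicators: list[Indicator] = field(default_factory=list)

    def score(self) -> float:
        return sum(i.score() for i in self.indicators) / max(len(self.indicators), 1)

@dataclass
class Value:
    """One of the label's six values, e.g. transparency or privacy."""
    name: str
    criteria: list[Criterion] = field(default_factory=list)

    def grade(self) -> str:
        score = sum(c.score() for c in self.criteria) / max(len(self.criteria), 1)
        # Map the fulfilment score onto the grade scale: 1.0 -> "A", 0.0 -> "G".
        return GRADES[min(int((1 - score) * len(GRADES)), len(GRADES) - 1)]

# Illustrative use with made-up criteria and observables:
transparency = Value("transparency", [
    Criterion("disclosure of system use", [
        Indicator("users are informed", {
            "notice shown before interaction": True,
            "purpose of the system documented": False,
        }),
    ]),
])
print(transparency.name, "->", transparency.grade())  # -> transparency -> D
```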

Chapter three introduces the risk matrix, a two-dimensional approach for handling the ethical challenges of AI, which enables the classification of application contexts. Our method simplifies the classification process without abstracting too much from the complexity of an AI system’s operational context. The decisive factors in assessing whether an AI system could have societal effects are the intensity of the system’s potential harm and the dependence of the affected person(s) on the respective decision. This analysis yields five classes corresponding to increasing regulatory requirements, from class 0, which requires no AI-ethics considerations, to class 4, where no algorithmic decision-making system should be applied.
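The risk matrix itself can be sketched as a small function. In the illustration below, both axes (intensity of potential harm and the affected person's dependence on the decision) are rated on an assumed 0 to 4 scale and combined monotonically; the actual levels and cut-offs are defined in the report and are not reproduced here.

```python
# Hypothetical rating of the two axes on a 0..4 scale; the report defines its
# own qualitative levels and cut-offs, which this sketch does not reproduce.
def risk_class(harm_intensity: int, dependence: int) -> int:
    """Combine the two axes into a class from 0 to 4.

    Class 0: no AI-ethics considerations required.
    Class 4: no algorithmic decision-making system should be applied.
    """
    if not (0 <= harm_intensity <= 4 and 0 <= dependence <= 4):
        raise ValueError("both axes must be rated on the assumed 0..4 scale")
    # Monotone combination: raising either axis can only raise the class.
    return min(4, round((harm_intensity + dependence) / 2))

# Example: moderate potential harm, but the affected person cannot
# avoid or contest the decision, so the class is elevated.
print(risk_class(2, 4))  # -> 3
```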

Chapter four then reiterates how these different approaches come together. We also make concrete propositions to different stakeholders concerning the practical use of the framework, while highlighting open questions that require a response if we ultimately want to put ethical principles into practice. The report does not have all the answers but provides valuable concepts for advancing the discussion among system developers, users and regulators. 

Coming together as the AI Ethics Impact Group, led by the VDE Association for Electrical, Electronic & Information Technologies and the Bertelsmann Stiftung, and presenting our findings here, we hope to contribute to work on these open questions, to refine conceptual ideas in support of harmonisation efforts, and to initiate interdisciplinary networks and activities.

We look forward to continuing the conversation.

Use Cases

There are no use cases for this tool yet.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.