Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Framework for meaningful engagement of external stakeholders in AI development

The use of artificial intelligence (AI) is accelerating. So is the need to ensure that AI systems are not only effective, but also fair, non-discriminatory, transparent, rights-based, accountable, and sustainable – in short, responsible. An important step for preventing and mitigating harm from AI systems is to identify and assess their impacts on human rights, so that adequate measures can be taken to address negative impacts. AI impact assessments need to include diverse voices, disciplines and lived experiences from a variety of external stakeholders. A key question inevitably arises: how can AI developers and deployers meaningfully engage stakeholders, so that their crucial input informs and shapes the AI impact assessment?

Three essential questions arise:

  1. What makes engagement ‘meaningful’?
  2. What does a trustworthy engagement process look like?
  3. How can the meaningful be distinguished from the meaningless?

To ensure that the time and energy invested in engagement lead to concrete results, both convenors (such as public institutions and private-sector companies) and potential participants in such engagement processes increasingly expect clear answers to the questions above.

This Framework attempts to provide some answers. ECNL, in collaboration with numerous contributors from all sectors, has been developing a Framework for meaningful engagement of external stakeholders in the context of impact assessments of AI systems.

Use Cases

There are no use cases for this tool yet.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.