Catalogue of Tools & Metrics for Trustworthy AI

These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.

Framework for meaningful engagement in Human Rights impact assessments of AI

ECNL and SocietyInside have created this practical Framework to help anyone designing products or services that use artificial intelligence (AI), machine learning or algorithm-based data analytics to involve their stakeholders in that process. You may be a small or large business, a civil society organisation, a government department or a civic institution of any type that wants to understand how to engage with stakeholders at timely points in the development of AI software. This could be part of your broader human rights due diligence responsibility, an AI human rights impact assessment, an ethical assessment, a risk assessment or compliance with similar processes and frameworks.

The Framework is the result of a co-creation and consultation process involving over 150 individuals and groups from civil society, business and public service across the globe. It builds on the UN Guiding Principles on Business and Human Rights (UNGPs), which establish a global expectation of business conduct. It is designed to provide guidance for effective planning, delivery, action and feedback on stakeholder engagement. Those seeking to involve stakeholders will feel more confident about the engagement's purpose, process and outcomes, and will therefore be more motivated to involve stakeholders and take their contributions seriously. The Framework provides tools and templates that will be refined following the pilot phase; the lessons learned will contribute to a final Framework and a suite of online materials.

About the tool
Tags:

  • ai ethics
  • ai responsible
  • ai risks
  • build trust
  • collaborative governance
  • evaluation
  • responsible ai collaborative
  • trustworthy ai
  • ai assessment
  • ai governance
  • fairness
  • decision support tool
  • ai risk management
  • ai compliance
  • ai quality
  • accountability
  • participation
  • requirements management
  • social impact
  • auditing
  • sustainable ai
  • ethical risk

Use Cases

There are no use cases for this tool yet.

Would you like to submit a use case for this tool?

If you have used this tool, we would love to know more about your experience.


Disclaimer: The tools and metrics featured herein are solely those of the originating authors and are not vetted or endorsed by the OECD or its member countries. The Organisation cannot be held responsible for possible issues resulting from the posting of links to third parties' tools and metrics on this catalogue. More on the methodology can be found at https://oecd.ai/catalogue/faq.