Working Group on Tools & Accountability
This Working Group completed its mandate in early 2023.
A trustworthy AI system is inclusive and benefits people and the planet; respects human rights and is fair; is transparent and explainable; is robust, secure and safe; and is accountable.
The OECD.AI expert group on implementing Trustworthy AI highlighted how tools and approaches might vary across different operational contexts.
The expert group’s mission was to identify practical guidance and standard procedural approaches for policies that lead to trustworthy AI. These tools serve AI actors and decision-makers in implementing effective, efficient and fair AI-related policies.
The expert group developed a practical framework that provides concrete examples of tools to help implement each of the five values-based AI Principles. Based on the framework, the expert group developed a catalogue of tools to help actors ensure their AI systems are trustworthy.
One of this working group’s outputs is the OECD Framework for Implementing Trustworthy AI Systems, which serves as a reference for AI actors in their implementation efforts and includes:
- Process-related approaches such as codes of conduct (including sector-specific ones), guidelines, change management processes, governance frameworks, risk management frameworks, and documentation processes for data or algorithms;
- Technical tools, including software tools, technical research and technical standards[1], such as tools for bias detection, explainable AI and robustness (a minimal sketch of one such check follows this list); and,
- Educational tools such as those to build awareness and new capacities.
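To make the "tools for bias detection" category concrete, here is a minimal sketch of the kind of check such a tool might automate. The metric shown (demographic parity difference), the example data and the 0.2 threshold are illustrative assumptions, not part of the OECD framework or catalogue.

```python
# Illustrative sketch only: a simple demographic-parity check of the kind a
# bias-detection tool might automate. Names, data and the threshold below
# are hypothetical, not drawn from the OECD framework or catalogue.

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rates
    across groups (0.0 means all groups are treated identically)."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical usage: 1 = positive decision (e.g., loan approved).
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # threshold chosen for illustration only
    print("Potential disparate impact: review the model and data.")
```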
Former co-chairs and members
Nozha Boujemaa, Global Digital Ethics and Responsible AI Director, IKEA Retail (Ingka Group).
Andrea Renda, Senior Research Fellow and Head of Global Governance, Regulation, Innovation and the Digital Economy (GRID), Centre for European Policy Studies.
Barry O’Brien, Government and Regulatory Affairs Executive, IBM.
The group met virtually every four to five weeks.