OECD Network of Experts on AI (ONE AI)
The OECD Network of Experts on AI (ONE AI) provides policy, technical and business expert input to inform OECD analysis and recommendations. It is a multi-disciplinary and multi-stakeholder group.
Tools & accountability
A trustworthy AI system is inclusive and benefits people and the planet; respects human rights and is fair and unbiased; is transparent and explainable; is robust, secure and safe; and is accountable.
The OECD.AI expert group on implementing Trustworthy AI aims to highlight how tools and approaches may vary across different operational contexts.
The expert group’s mission is to identify practical guidance and standard procedural approaches for policies that lead to trustworthy AI. These tools will help AI actors and decision-makers implement effective, efficient and fair AI-related policies.
The expert group developed a practical framework that provides concrete examples of tools for implementing each of the five values-based AI Principles. Building on the framework, the group is developing a catalogue of tools that help actors ensure their AI systems are trustworthy.
The OECD Framework for Implementing Trustworthy AI Systems serves as a reference for AI actors in their implementation efforts and includes:
- process-related approaches, such as codes of conduct (including sector-specific codes), guidelines, change management processes, governance frameworks, risk management frameworks, and documentation processes for data or algorithms;
- technical tools, including software tools, technical research and technical standards[1], and tools for bias detection, explainable AI and robustness; and
- educational tools, such as those that build awareness and new capacities.
Co-chairs and members
Nozha Boujemaa, Global Digital Ethics and Responsible AI Director, IKEA Retail (Ingka Group).
Barry O’Brien, Government and Regulatory Affairs Executive, IBM.
The group meets virtually every four to five weeks.