OECD Network of Experts on AI (ONE AI)
The OECD Network of Experts on AI (ONE AI) provides policy, technical and business expert input to inform OECD analysis and recommendations. It is a multi-disciplinary and multi-stakeholder group.
Tools & accountability
Trustworthy AI systems are inclusive and benefit people and the planet; respect human rights and are unbiased and fair; are transparent and explainable; are robust, secure and safe; and are accountable.
The OECD.AI expert group on implementing Trustworthy AI (ONE TAI) aims to highlight how tools and approaches may vary across different operational contexts.
The expert group’s mission is to identify practical guidance and standard procedural approaches for policies that lead to trustworthy AI. These tools will serve AI actors and decision-makers in implementing effective, efficient and fair AI-related policies.
The expert group is developing a short and practical framework that provides concrete examples of tools to help implement each of the five values-based AI Principles.
The OECD Framework for Implementing Trustworthy AI Systems serves as a reference for AI actors in their implementation efforts and includes:
- process-related approaches such as codes of conduct (including sector-specific ones), guidelines, change management processes, governance frameworks, risk management frameworks, and documentation processes for data and algorithms;
- technical tools, including software tools, technical research and standards, and tools for bias detection, explainable AI, and robustness; and
- educational tools such as awareness building and capacity building.
ONE TAI is co-chaired by:
Carolyn Nguyen, Director of Technology Policy, Microsoft;
Barry O’Brien, Government and Regulatory Affairs Executive, IBM.
The group meets virtually every 3 to 4 weeks.
For more information, see What are the tools for implementing trustworthy AI? A comparative framework and database. And stay tuned for the report's publication and the development of the database of tools for trustworthy AI!