OECD Network of Experts on AI (ONE AI)

The OECD Network of Experts on AI (ONE AI) provides policy, technical and business expert input to inform OECD analysis and recommendations. It is a multi-disciplinary and multi-stakeholder group.

OECD.AI expert group on the Classification of AI

Different types of AI systems raise very different policy opportunities and challenges. The OECD Network of Experts on AI working group on the Classification of AI (ONE CAI) is developing a user-friendly framework to classify AI systems.

The upcoming OECD AI Systems Classification Framework provides a structure for assessing and classifying AI systems according to their impact on public policy in areas covered by the OECD AI Principles, including: economic and social benefits; human rights, privacy and fairness; safety, security and risk assessment; transparency; accountability; research; data, compute and technologies; labour and skills; and international co-operation.

The Framework builds on the conceptual view of a generic AI system established in previous OECD work. It identifies policy considerations associated with different AI systems’ attributes, including:

  1. The system’s socio-economic context, including its sector, application, and whether it constitutes a critical activity;
  2. The data/input of the AI system;
  3. The AI model/technologies in use; and
  4. The task and action of the AI system.
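As a purely illustrative sketch (the Framework is still under development, and the field names below are hypothetical, not the OECD's schema), the four classification dimensions above could be captured in a simple data structure:

```python
from dataclasses import dataclass

# Hypothetical sketch of the four classification dimensions described above.
# Field names and example values are illustrative assumptions only.
@dataclass
class AISystemClassification:
    context: str      # socio-economic context: sector, application, criticality
    data_input: str   # data/input characteristics of the system
    model: str        # AI model/technologies in use
    task_action: str  # task performed and action taken by the system

# Example: a hypothetical credit-scoring system
example = AISystemClassification(
    context="finance; credit scoring; critical activity",
    data_input="structured personal financial data",
    model="gradient-boosted decision trees",
    task_action="prediction; recommendation to a human decision-maker",
)
```

A structure like this suggests how a classified system could be compared across the four dimensions, though the actual Framework may define its attributes quite differently.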

ONE CAI is co-chaired by:

Marko Grobelnik (AI Researcher & Digital Champion, AI Lab, Jožef Stefan Institute, Slovenia);

Dewey Murdick (Director of Data Science, Center for Security and Emerging Technology (CSET), School of Foreign Service, Georgetown University); and

Jack Clark (Policy Director, OpenAI).

The group meets virtually every 3 to 4 weeks.

OECD.AI expert group on implementing Trustworthy AI

Trustworthy AI systems are inclusive and benefit people and the planet; respect human rights and are unbiased and fair; are transparent and explainable; are robust, secure and safe; and are accountable.

The OECD.AI expert group on implementing Trustworthy AI (ONE TAI) aims to highlight how tools and approaches may vary across different operational contexts.

The expert group’s mission is to identify practical guidance and standard procedural approaches for policies that lead to trustworthy AI. These tools will serve AI actors and decision-makers in implementing effective, efficient and fair AI-related policies.

The expert group is developing a short and practical framework that provides concrete examples of tools to help implement each of the five values-based AI Principles.

The OECD Framework for Implementing Trustworthy AI Systems serves as a reference for AI actors in their implementation efforts and includes:

  1. process-related approaches such as codes of conduct, guidelines or change management processes, governance frameworks, risk management frameworks, documentation processes for data or algorithms, and sector-specific codes of conduct;
  2. technical tools, including software tools, technical research and technical standards[1], and tools for bias detection, explainable AI and robustness; and
  3. educational tools such as awareness building and capacity building.
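The three categories of tools listed above could be represented, again purely illustratively, as a small mapping; the dictionary structure and helper function are assumptions for illustration, not part of the Framework itself:

```python
# Illustrative grouping of the tool categories listed above for the OECD
# Framework for Implementing Trustworthy AI Systems. The structure is an
# assumption, not the OECD's own representation.
TRUSTWORTHY_AI_TOOLS = {
    "process-related": [
        "codes of conduct",
        "governance frameworks",
        "risk management frameworks",
        "documentation processes for data or algorithms",
    ],
    "technical": [
        "software tools",
        "technical standards",
        "bias detection tools",
        "explainable AI tools",
        "robustness tools",
    ],
    "educational": [
        "awareness building",
        "capacity building",
    ],
}

def tools_in_category(category: str) -> list:
    """Return the example tools recorded under a given category."""
    return TRUSTWORTHY_AI_TOOLS.get(category, [])
```

Such a grouping makes clear that the Framework mixes organisational processes, technical tooling, and education rather than prescribing a single kind of intervention.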

ONE TAI is co-chaired by:

Adam Murray, ONE AI Chair and US delegate to the OECD Committee on Digital Economy Policy (CDEP);

Carolyn Nguyen, Director of Technology Policy, Microsoft; and

Barry O’Brien, Government and Regulatory Affairs Executive, IBM.

The group meets virtually every 3 to 4 weeks.

For more information, see “What are the tools for implementing trustworthy AI? A comparative framework and database”. And stay tuned for the publication of the report and the development of the database of tools for trustworthy AI!

OECD.AI expert group on Policies for AI

The ONE AI working group on national AI Policies (ONE PAI) is developing practical guidance for policy makers on a wide array of topics:

  • investing in AI R&D;
  • data, infrastructure, software & knowledge;
  • regulation, testbeds and documentation;
  • skills and labour markets; and
  • international co-operation.

It leverages lessons learned and analysis by other OECD bodies, as well as analysis of the database of national AI policies at OECD.AI. The working group is focusing on the practical implementation of the OECD AI Principles throughout the AI policy cycle for:

  1. Policy design – focusing on national AI governance policies and approaches;
  2. Policy implementation – focusing on lessons learned to date through national implementation examples;
  3. Policy intelligence – identifying different evaluation methods and monitoring exercises; and
  4. Approaches for international and multi-stakeholder co-operation on AI policy.

ONE PAI is co-chaired by:

András Hlács, Vice-Chair of the OECD Committee on Digital Economy Policy; and

Michael Sellitto, Deputy Director, Stanford Institute for Human-Centered AI (HAI).

The group’s virtual meetings every 3 to 4 weeks provide “deep dives” into national experiences implementing AI policies in practice, sharing lessons learned, good practices and challenges.

After one year of work, the working group published the OECD report “State of Implementation of the OECD AI Principles: Insights from National AI Policies” on 22 June 2021. The report looks at how countries are implementing the five recommendations to governments contained in the OECD AI Principles and examines emerging trends in AI policy.

OECD.AI task force on AI compute (ONE AI compute)

Why focus on AI compute?

Alongside data and algorithms, AI computing capacity (“AI compute”) has emerged in recent years as a key enabler of AI and of AI-enabled economic growth and competitiveness (Figure 1). While data and machine-learning algorithms have received significant attention in policy circles at the OECD and beyond, the computational infrastructure that makes AI possible has been comparatively overlooked. Since understanding domestic AI compute capacity is increasingly critical to formulating effective AI policies and informing national AI investments, the OECD is focusing efforts on this area in 2021.

Figure 1. AI Enablers

The creation of a ONE AI task force on AI compute in late 2020 or early 2021 will help the OECD build a framework for understanding, measuring and benchmarking domestic AI computing supply by country and region. The task force will co-ordinate broad engagement of key AI compute players and a data-gathering exercise that would ideally be sustainable over time. It will also need to be mindful that the AI compute landscape is unusually dynamic, with frequent technical shifts. To communicate the outcomes of the OECD’s engagement in this domain, an interactive visualisation on OECD.AI could feature the work of the task force. The targeted focus of the ONE AI task force on AI compute complements the activities of the three ONE AI working groups.

Mr. Keith Strier, Vice President of Worldwide AI Initiatives at NVIDIA, will co-chair the task force. Another co-chair with complementary expertise will be identified before the launch of the task force.

Task force participants include policy makers and entities in charge of public computing infrastructure as well as key industry players from: hardware providers; cloud service providers; original equipment manufacturers; academia engaged in AI compute; major data center operators; major consulting firms; and other experts on computing performance.
