OECD Working Party and Network of Experts on AI

The Working Party on Artificial Intelligence Governance oversees the OECD's work on AI policy. The OECD.AI Network of Experts provides policy, technical and business expert input to inform OECD analysis and recommendations.

Expert Group on AI Risk & Accountability

While AI provides tremendous benefits, it also presents real risks such as bias and discrimination, the polarisation of opinions, privacy infringement and, in some countries, widespread surveillance. Some of these risks are already materialising into harms to people and society.

About the expert group

To develop "trustworthy", "responsible" or "ethical" AI systems, there is a need to assess impacts and manage AI risks, including in the context of generative AI. Over the past few years, there has been global convergence towards using risk-based approaches and impact assessments, whether voluntary or mandatory, to help govern AI. Demand is growing in the public and private sectors for tools and processes to help document AI system decisions and facilitate accountability throughout the AI system lifecycle – from planning and design, to data collection and processing, to model building and validation, to deployment, operation and monitoring.

At the same time, interoperability between burgeoning frameworks and standards is desirable, ideally ahead of their implementation in mandatory and voluntary AI risk assessment and management standards. A proliferation of different frameworks and standards that are not interoperable could make the implementation of Trustworthy AI more complex and costly in practice, and therefore less effective and less enforceable. Facilitating such interoperability calls for cooperation and coordination between domestic and international state and non-state actors developing standards and frameworks on AI systems management; AI risk management; AI design (e.g., trustworthiness by design); and AI impact, conformity, and risk assessments. 

Major actors include the International Organization for Standardization (ISO), the Institute of Electrical and Electronics Engineers (IEEE), the US National Institute of Standards and Technology (NIST), the European Committee for Standardization and the European Committee for Electrotechnical Standardization (CEN-CENELEC), the European Commission (EC), the Council of Europe (CoE), UNESCO, the OECD, the EU-US Trade and Technology Council (TTC), and the Responsible AI Institute (RAII) with the World Economic Forum (WEF). Domestic AI standards initiatives, e.g. by Australia, Canada, Japan and the UK, may also be relevant.

Through the OECD.AI Network of Experts workstream on AI risk, the OECD is engaging with partner organisations, including those listed above, as well as policymakers and experts, to identify common guideposts for assessing AI risk and impact for Trustworthy AI. The goal is to help implement effective and accountable trustworthy AI systems by promoting global consistency. The work of this group consists of the following five steps:

  1. Map existing and developing core standards, frameworks and guidelines for AI design; AI impact, conformity and risk assessment; and AI risk management to the top-level interoperability framework developed in the report "Advancing Accountability in AI: Governing and Managing Risks throughout the Lifecycle for Trustworthy AI" (Figure 1). These include frameworks from ISO, IEEE, NIST, the EU and CEN-CENELEC, the OECD, the CoE and UNESCO.
  2. One level down, take stock of commonalities and differences in concepts and terminology across initiatives and conduct a gap analysis, proposing possible common terminology where appropriate.
  3. Translate the analysis into good practice on due diligence for responsible business conduct (RBC) in AI throughout the AI system lifecycle.
  4. Research and analyse the alignment of certification schemes with OECD RBC and AI standards, including in the context of generative AI. 
  5. Develop an interactive online tool to help organisations and stakeholders compare frameworks (see steps 1 and 2 above) and navigate existing methods, tools and good practices for identifying, assessing, treating and governing AI risks.

Co-chairs

Nozha Boujemaa, Global Vice-President – Digital Ethics and Responsible AI – IKEA

Sebastian Hallensleben, Head of Digitalisation and AI – VDE Association for Electrical, Electronic & Information Technologies

Andrea Renda, Senior Research Fellow and Head of Global Governance, Regulation, Innovation and the Digital Economy (GRID) – Centre for European Policy Studies

See Working Group participants

Figure 1: Functional view of the high-level AI risk management interoperability framework