OECD.AI Expert Group on Implementing Trustworthy AI (WG2)

Concept note

As agreed by the expert group members on 17 June 2020, this is how the OECD.AI expert group on implementing trustworthy AI will approach its work.

The expert group’s goal

As AI technologies permeate economies and societies, the technical, business and policy communities are actively exploring the best ways to encourage human-centred and trustworthy AI development and deployment, to maximise benefits and minimise risks.

This expert group will identify promising ideas, good practices and shared procedural approaches for implementing the five values-based OECD AI Principles for trustworthy AI systems:

  1. inclusive growth, sustainable development and well-being;
  2. human rights, democratic values and fairness;
  3. transparency and explainability;
  4. robustness, security and safety; and
  5. accountability.

These practices include:

  • Codes of conduct
  • Corporate governance frameworks
  • Guidelines
  • Risk management approaches
  • Standards
  • Technical research
  • Certifications
  • Software tools
  • Capacity and awareness building tools

Once in place, this practical guidance and framework will help AI actors and decision-makers implement effective, efficient and fair policies for trustworthy AI. This will include highlighting how tools and approaches may vary across different operational contexts.   

The expert group’s co-moderators:

  • Adam Murray, ONE AI Chair and US delegate to CDEP
  • Carolyn Nguyen, Director of Technology Policy, Microsoft
  • Barry O’Brien, Government and Regulatory Affairs Executive, IBM

Members of the Expert Group on Implementing Trustworthy AI (WG2)

The up-to-date list and bios of Expert Group 2’s members and observers are available on OECD.AI.

Deliverables

  • A report on how AI actors are implementing the five values-based OECD AI Principles. Aimed at a broad audience, this report will illustrate selected useful initiatives rather than provide a comprehensive catalogue.
  • A practical framework for implementing the OECD AI Principles. This framework will be iterative, provide concrete examples of tools to help implement each of the values-based AI Principles, and be integrated into the OECD AI Policy Observatory.

Framing questions

The following questions will guide the expert group’s discussions, and some may be relevant to the work of the other ONE AI expert groups:

  • Are the concerns raised by AI and machine learning different from those raised by previous or existing technological developments with regard to the five values-based principles – inclusive growth, sustainable development and well-being; human rights, democratic values and fairness; transparency and explainability; robustness, security and safety; and accountability?
  • What use cases illustrate practical implementation of one (or more) of the principles?
  • What tools – including existing regulations, technical standards, software and visualisation tools, certifications, codes of conduct, risk management approaches and governance frameworks – can help implement the values-based AI Principles for Trustworthy AI?
    • To what extent do existing legal or regulatory regimes – either horizontal or sector-specific regimes – already address AI-related risks and challenges? Do they need to be adapted or complemented?
  • What should go into a framework to help identify relevant tools depending on the context and characteristics of specific AI systems?
    • How can the framework account for different use contexts and an evolving state of the art for principles such as those pertaining to transparency and fairness?
    • What approaches are scalable? What approaches are best adapted to different types of stakeholders, including SMEs? In which cases are horizontal or vertical (e.g. sector-specific) approaches best suited? What are the inherent costs and trade-offs of different policy tools, and which approaches offer the best cost/benefit ratio?
    • How can competing priorities be balanced in cases where there may be trade-offs between the different principles?
    • In which areas are existing business incentives sufficiently aligned with societal goals to support self-regulation? Are there areas where policy/regulation could help to align incentives?
    • Can we identify criteria and performance indicators to gauge whether an AI system is aligned with the Principles (e.g. indicators of accuracy, data representativeness, level of confidence, robustness or risk)?
    • Are there gaps in existing practices or tools, or barriers that warrant further study and research?

Audience

This group’s work targets AI actors who make decisions regarding the practical development and application of AI systems, including government policy makers, business leaders and technologists.

How the Trustworthy AI Expert Group complements and relates to the other expert groups

The three ONE AI expert groups follow a unified strategy: contributing to a common assessment of the current state of AI systems’ development, deployment and use under the umbrella of the OECD AI Principles. As such, the work being conducted in parallel by ONE AI expert groups 1 and 3 is expected to complement expert group 2’s efforts. In particular:

  • WG1 on the classification of AI systems is expected to flag policy considerations associated with different AI systems’ attributes, including the context (e.g. transportation, healthcare, finance, other); the input/data (e.g. structured data, unstructured data, sensitive data, etc.); the AI model/technology (e.g. neural networks, support vector machines, expert systems, etc.); and the output/task (e.g. recognition, interaction, forecasting, etc.). The tools identified in WG2 could at the same time inform and be informed by these considerations.
  • WG3 on national AI policies is expected to identify good practices for implementing the five recommendations to policy makers contained in the AI Principles. It focuses in particular on policy design (e.g. governance and processes used to formulate national AI strategies and policies); policy implementation (e.g. data, infrastructure, policy and regulatory frameworks); policy intelligence (e.g. evaluating the implementation of AI policies); and international multi-stakeholder cooperation (e.g. good practices by other IGOs such as the EC, IDB, CoE, UNESCO and the World Bank). Public sector good practices and use cases identified by WG3 could inform WG2’s discussion of tools and frameworks to implement trustworthy AI, and vice versa.

Additionally, the work of WG2 builds on the AI system lifecycle work of the AI Group of experts at the OECD (AIGO).

Period                             Activities
May – June 2020                    Agree on scope, approach and survey elements
July – mid-August 2020             Survey dissemination and collection of responses
Mid-August – mid-September 2020    Collective intelligence exercise and expert analysis of responses
Mid-September – 10 October 2020    Secretariat drafts progress report
Mid-October 2020                   Progress report shared with OECD policy communities
Late October 2020                  Progress report shared with national delegations and advisory committees
24 November – mid-December 2020    Committee meeting and feedback from delegates
September – December 2020          WG2 calls every 3 to 4 weeks; parallel Secretariat work on report and framework
January – June 2021                Finalise report and integrate framework on OECD.AI

Table 1. Timeline for the activities of WG2