Providing an enabling policy environment for AI (Principle 2.3)

Governments should create a policy environment that will open the way to deployment of trustworthy AI systems.

  • Governments should promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems. To this effect, they should consider using experimentation to provide a controlled environment in which AI systems can be tested and scaled up, as appropriate.
  • Governments should review and adapt, as appropriate, their policy and regulatory frameworks and assessment mechanisms as they apply to AI systems to encourage innovation and competition for trustworthy AI.

Rationale for this principle

This recommendation focuses on the policy environment that enables AI innovation, i.e. the institutional, policy and legal frameworks. It thus complements recommendation 2.2 on the necessary physical and technological infrastructure. As in recommendation 2.2, governments are called on to pay particular attention to SMEs.

Given the fast pace of AI development, it is a significant challenge to set a policy environment that is flexible enough to keep up with that pace and to promote innovation, yet remains safe and provides legal certainty. This recommendation seeks to address the challenge by identifying means of improving the adaptability, reactivity, versatility and enforcement of policy instruments, in order to responsibly accelerate the transition from development to deployment and, where relevant, commercialisation. In fostering the use of AI, a human-centred approach should be taken.

The recommendation highlights the role of experimentation as a means to provide controlled and transparent environments in which AI systems can be tested and in which AI-based business models that could promote solutions to global challenges can flourish. Policy experiments can operate in “start-up mode”: they are deployed, evaluated and modified, then scaled up or down, or abandoned, depending on the test outcomes.

Finally, the recommendation acknowledges the importance of oversight and assessment mechanisms to complement policy frameworks and experimentation. In this respect, there may be room for encouraging AI actors to develop self-regulatory mechanisms, such as codes of conduct, voluntary standards and best practices. Together with the OECD Guidelines for Multinational Enterprises1, such initiatives can help guide AI actors through the AI lifecycle, including in monitoring, reporting, assessing and addressing harmful effects or misuse of AI systems. To the extent possible and relevant, these mechanisms should be transparent and public.

1. Available at: https://mneguidelines.oecd.org/responsible-business-conduct-matters.htm


Other principles

  • Inclusive growth, sustainable development and well-being
  • Human-centred values and fairness
  • Transparency and explainability
  • Robustness, security and safety
  • Accountability
  • Investing in AI R&D
  • Fostering a digital ecosystem for AI
  • Providing an enabling policy environment for AI
  • Building human capacity and preparing for labour market transition
  • International co-operation for trustworthy AI