- Governments should promote an agile policy environment that supports transitioning from the research and development stage to the deployment and operation stage for trustworthy AI systems. To this effect, they should consider using experimentation to provide a controlled environment in which AI systems can be tested, and scaled-up, as appropriate. They should also adopt outcome-based approaches that provide flexibility in achieving governance objectives and co-operate within and across jurisdictions to promote interoperable governance and policy environments, as appropriate.
- Governments should review and adapt, as appropriate, their policy and regulatory frameworks and assessment mechanisms as they apply to AI systems to encourage innovation and competition for trustworthy AI.
Rationale for this principle
This recommendation focuses on the policy environment that enables AI innovation, i.e. the institutional, policy and legal frameworks. It thus complements recommendation 2.2 on the necessary physical and technological infrastructure. As in recommendation 2.2, governments are called upon to pay particular attention to SMEs.
Given the fast pace of AI developments, setting a policy environment that is flexible enough to keep up with those developments and promote innovation, yet remains safe and provides legal certainty, is a significant challenge. This recommendation seeks to address this challenge by identifying means for improving the adaptability, reactivity, versatility and enforcement of policy instruments, in order to responsibly accelerate the transition from development to deployment and, where relevant, commercialisation. In fostering the use of AI, a human-centred approach should be taken.
The recommendation highlights the role of experimentation as a means to provide controlled and transparent environments in which AI systems can be tested and in which AI-based business models that could promote solutions to global challenges can flourish. Policy experiments can operate in “start-up mode”: they are deployed, evaluated and modified, and then scaled up or down, or abandoned, depending on the test outcomes.
Finally, the recommendation acknowledges the importance of oversight and assessment mechanisms to complement policy frameworks and experimentation. In this respect, there may be room for encouraging AI actors to develop self-regulatory mechanisms, such as codes of conduct, voluntary standards and best practices. Together with the OECD Guidelines for Multinational Enterprises (MNE)1, such initiatives can help guide AI actors through the AI lifecycle, including for monitoring, reporting, assessing and addressing harmful effects or misuse of AI systems. To the extent possible and relevant, these mechanisms should be transparent and public.
1 Available at: https://mneguidelines.oecd.org/responsible-business-conduct-matters.htm