- Governments, including those of developing countries, should actively co-operate with each other and with stakeholders to advance these principles and to progress on responsible stewardship of trustworthy AI.
- Governments should work together in the OECD and other global and regional fora to foster the sharing of AI knowledge, as appropriate. They should encourage international, cross-sectoral and open multi-stakeholder initiatives to garner long-term expertise on AI.
- Governments should promote the development of multi-stakeholder, consensus-driven global technical standards for interoperable and trustworthy AI.
- Governments should also encourage the development, and their own use, of internationally comparable indicators to measure AI research, development and deployment, and gather the evidence base to assess progress in the implementation of these principles.
Rationale for this principle
This recommendation calls for international co-operation among governments and with stakeholders to address the global opportunities and challenges of AI. Such co-operation includes advancing the implementation and dissemination of these principles and related policies across OECD and partner countries, including developing and least developed countries, and with other stakeholders.
International co-operation can leverage international and regional fora to share AI knowledge in order to build long-term expertise on AI; to develop technical standards for interoperable and trustworthy AI; and to develop, disseminate and use metrics to assess the performance of AI systems, such as accuracy, efficiency, advancement of societal goals, fairness and robustness. It can also contribute to transborder flows of data with trust that safeguard security, privacy, intellectual property, human rights and democratic values, and that are key to AI innovation.