Intergovernmental

Participate in the OECD’s pilot on monitoring the application of the G7 Code of Conduct for Organisations Developing Advanced AI Systems


The OECD launched a pilot reporting framework to monitor the application of the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems. This initiative marks a significant step forward in the G7’s commitment to promoting safe, secure, and trustworthy AI development and deployment.

Promoting safe, secure and trustworthy advanced AI systems globally

The pilot is part of a broader effort initiated in May 2023 under the G7 Hiroshima AI Process to advance cooperation for global access to safe, secure, and trustworthy generative AI. This process delivered a Comprehensive Policy Framework, including the OECD’s report “Towards a G7 Common Understanding of Generative AI,” International Guiding Principles for All AI Actors, and the International Code of Conduct for Organisations Developing Advanced AI Systems. Under Italy’s current G7 presidency, member countries have focused on advancing these outcomes to ensure responsible AI governance. In the Apulia Communiqué, G7 Leaders emphasised their commitment to the Hiroshima AI Process:

“We welcome our Industry, Tech, and Digital Ministers’ efforts to advance the Hiroshima AI Process outcomes released last year, including the development of a reporting framework for monitoring the International Code of Conduct for Organizations Developing Advanced AI Systems. We look forward to the pilot of the reporting framework, developed in cooperation with the OECD, in view of the Industry, Tech, and Digital Ministers’ Meeting in October.”

This reporting framework marks a significant milestone in ensuring global responsible AI development. It also paves the way for greater transparency and accountability in the AI industry by providing a standardised approach for organisations to report on their adherence to the G7 Code of Conduct. Through the framework, G7 countries send a promising signal of international alignment on responsible AI development and demonstrate commitments to safe, secure, and trustworthy AI systems.

As AI evolves rapidly, such internationally coordinated efforts are crucial to seizing AI’s benefits while proactively and collectively addressing its challenges through a risk-based approach.

The pilot’s key features

The primary objective of this pilot phase is to test a reporting framework that will gather information on how organisations developing advanced AI systems align with the Actions outlined in the Code of Conduct. This effort aims to establish a robust monitoring mechanism for the Code, a priority for the G7 under Italy’s presidency.

  • Timeline: The pilot phase will run from 19 July to 6 September 2024.
  • Participation: Organisations developing advanced AI systems are invited to participate by completing the pilot survey on the OECD.AI Policy Observatory. The survey includes questions based on the Code of Conduct’s 11 Actions.
  • Confidentiality: Responses during the pilot phase will not be made publicly available but will be used to refine the framework for its operational version, which is expected to be finalised later this year.
  • Feedback: Participants are encouraged to provide feedback on the content and structure of the survey to help improve the operational version of the reporting framework.

A reporting framework to compare and share policies and practices

The reporting framework will facilitate transparency and comparability of organisations’ measures to mitigate risks associated with advanced AI systems. By providing a standardised approach, the framework will:

  • Enhance transparency through accessible and comparable information on organisations’ AI policies and practices.
  • Promote, identify and disseminate good practices for AI development and deployment.
  • Mitigate risks of advanced AI systems by addressing safety, security, and trustworthiness issues.

OECD to launch operational framework in Q4 2024

Following the pilot phase, the OECD plans to launch an operational version of the reporting framework in the fourth quarter of 2024.

Since 2016, the OECD has been at the forefront of AI policymaking. The OECD Recommendation on AI, adopted in 2019 and updated in 2024, serves as a global reference for AI policy. This pilot project further solidifies the OECD’s leadership in advancing an interoperable approach to promote safe, secure, and trustworthy AI globally.

How to participate in the pilot

Organisations developing advanced AI systems are encouraged to participate in the pilot phase. Broad participation will contribute to the credibility and effectiveness of the final reporting framework, ensuring it meets the needs of diverse stakeholders and supports the responsible development and use of AI technologies.

By launching this pilot project, the OECD and G7 members are taking a crucial step towards fostering a safer and more trustworthy AI ecosystem, ensuring that advancements in AI technology benefit all of humanity while addressing potential risks.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.