How the G7’s new AI reporting framework could shape the future of AI governance

On 7 February 2025, the OECD launched the reporting framework for monitoring the application of the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems. This milestone marks a significant advancement in international AI governance, reinforcing the G7’s commitment to ensuring the safe, secure, and trustworthy development, deployment, and use of advanced AI systems.

From pilot to practice: a new era of AI transparency

This Reporting Framework is a direct outcome of the G7 Hiroshima AI Process, initiated under the Japanese G7 Presidency in 2023 and further advanced under the Italian G7 Presidency in 2024. It builds on the Hiroshima AI Process International Code of Conduct for Organizations Developing Advanced AI Systems, a landmark initiative to foster transparency and accountability in developing advanced AI systems. In line with the Trento Declaration, G7 countries asked the OECD to identify mechanisms to monitor voluntary adoption of the Code.

Following a successful pilot phase in mid-2024, the operational framework now provides a standardised approach for organisations to demonstrate their alignment with the Code of Conduct’s actions. For the first time, companies can offer transparent, comparable information about their AI risk assessment and management practices, incident reporting, and information-sharing mechanisms.

The framework has already garnered substantial industry support. Leading AI developers, including Amazon, Anthropic, Fujitsu, Google, KDDI Corporation, Microsoft, NEC Corporation, NTT, OpenAI, Preferred Networks Inc., Rakuten Group Inc., Salesforce, and SoftBank Corp., have pledged to complete the inaugural reports. Organisations developing advanced AI systems are invited to submit their initial reports by 15 April 2025, after which they can submit reports on a rolling basis in keeping with the annual reporting expectations.

Building global trust through standardised reporting

The framework builds upon the 11 actions outlined in the Hiroshima AI Process International Code of Conduct, providing organisations with clear guidance on reporting their alignment with these principles. Organisations can now submit their reports easily and efficiently using a dedicated online platform, making this information publicly available and readily accessible to all stakeholders.

International alignment and interoperability

A key strength of the reporting framework is its emphasis on interoperability with various international AI governance mechanisms. By aligning with multiple risk management systems while maintaining the Hiroshima Code of Conduct as its foundation, the framework promotes consistency across global standards and reduces redundancy in reporting requirements.

The framework was developed through multistakeholder international cooperation, incorporating input from the private sector, academia, civil society, and research institutions during the pilot phase. This collaborative approach ensures the framework’s practicality and effectiveness in addressing the needs of various stakeholders.

Over time, consistent participation from key players will amplify the framework’s impact

The framework is more than a reporting mechanism. It sets the foundation for sharing good practices and fostering continuous improvement in AI development practices. Organisations participating in the framework will contribute to a growing body of publicly available knowledge about effective risk management strategies and responsible AI development approaches.

As we progress, this initiative will be vital in promoting transparency and accountability within the AI industry while supporting the G7’s broader objectives for safe, secure, and trustworthy AI development. The framework’s success will rely on sustained engagement from the AI community and continuous refinement based on practical implementation experience.

Organisations interested in participating in the framework can access the reporting platform on the OECD.AI Policy Observatory. The platform will provide regular updates and guidance to support organisations in their reporting efforts and ensure the framework continues to serve its intended purpose effectively.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.