
G7 AI transparency reporting: Ten insights for AI governance and risk management


Transparency in artificial intelligence (AI) is increasingly recognised as essential to building trust, ensuring accountability, and promoting responsible innovation. In 2023, the Group of Seven (G7) launched the Hiroshima AI Process (HAIP), a global initiative aimed at addressing the governance and risk challenges posed by AI systems. A central element of this process is a voluntary transparency reporting framework, developed with the OECD, which invites AI organisations to disclose how they identify risks, implement safeguards, and align with internationally agreed-upon principles for trustworthy AI.

In April 2025, the OECD published the first round of transparency reports. Twenty organisations from around the globe participated, ranging from large multinational technology companies to smaller advisory, research, and educational institutions. Their submissions offer a unique insight into how AI developers approach governance in practice.

The reports are important for several reasons. First, they demonstrate a practical implementation of the G7’s International Code of Conduct for AI, translating broad principles into observable disclosures. Second, they establish a benchmark for responsible AI worldwide, offering policymakers, civil society, and the public unprecedented insight into the inner workings of the AI industry. Third, they give companies concrete examples of how peers are operationalising AI governance, enabling organisations to learn from diverse approaches to risk management, transparency, and safety measures. Together, they signal a cultural shift towards voluntary transparency in AI governance, supplementing regulatory measures with industry-led openness.


Ten of the most significant findings from the HAIP reporting framework

The full report presents preliminary insights from submissions by 20 organisations across diverse sectors and countries. It examines their approaches to risk identification and management, transparency, governance, content authentication, AI safety research, and the advancement of global interests. Here are ten of the most relevant findings:

  • 1. Approaches to risk identification and evaluation are as diverse as the areas of concern
    Overall, approaches align with frameworks such as the OECD AI Principles and the EU AI Act to guide risk classification. Organisations also combine quantitative metrics with qualitative expert judgment throughout the AI system lifecycle. Practices range from systemic, society-wide risk assessments to evaluations of application-specific concerns. Many organisations report using AI-assisted tools and adversarial testing methods such as red teaming, and organisations developing highly capable models have implemented capability thresholds that trigger additional safeguards. Larger technology firms tend to focus on systemic risks and demonstrate more advanced evaluation frameworks.
  • 2. Multi-layered risk management and information security
    Organisations implement multi-layered strategies with procedural and technical safeguards throughout system design, development, and deployment. Procedural measures include secure testing environments and graduated deployment, while technical measures cover data filtering, model fine-tuning, and output moderation. Widely used cybersecurity practices include zero-trust architecture, penetration testing, and real-time monitoring, along with privacy-enhancing measures such as encryption and data minimisation. A small number of organisations are exploring security concerns related to artificial general intelligence (AGI).
  • 3. Transparency practices vary
    Transparency differs by audience. Consumer-facing companies publish model or system cards and organisational reports, while B2B providers disclose information primarily through contracts. Engagement with academia and government is common, though engagement with civil society is less developed.
  • 4. AI governance and incident management efforts are expanding
    AI-specific governance is being embedded into wider corporate structures. Dedicated teams, specialised committees, and board-level oversight are complemented by incident response plans, industry collaborations, and a greater willingness to share lessons from failures.
  • 5. Content provenance: widespread disclosure but limited technical implementation
    Most organisations disclose AI involvement through user policies and interface design. Provenance tools, such as watermarking, cryptographic signatures, and content credentials, are being piloted by larger firms for applications such as content authentication, but adoption remains limited (a simplified illustration of signature-based provenance follows this list). Organisations are increasingly contributing to international standard-setting efforts for content provenance.
  • 6. Research and investment in AI safety are growing
    Organisations are dedicating resources to AI safety research across multiple domains, including cybersecurity, bias detection, and fairness evaluation, often through internal R&D, open-source contributions, and partnerships with governmental, academic, and civil society institutions.
  • 7. Reporting brought attention to collective interests
    AI is increasingly applied to pressing social challenges, including healthcare, education, accessibility, and climate action. Organisations connect these projects to environmental, social and governance (ESG) goals and the UN Sustainable Development Goals.
  • 8. Ecosystem-wide collaboration is prevalent
    Collaboration across the AI ecosystem is robust, encompassing joint research initiatives, standard-setting efforts, and participation in forums focused on transparency and safety. Many organisations see multi-stakeholder partnerships as essential to addressing AI risks.
  • 9. The reporting process enhances internal coordination and benchmarking
    Several participants noted that the exercise created unexpected internal value by clarifying roles and improving communication across technical, legal, and executive teams. Preparing a HAIP submission also supports benchmarking against peers and provides a structured way to communicate governance practices to stakeholders.
  • 10. Opportunities for improvement
    Organisations value the framework but call for refinements: clearer guidance, simplified reporting formats, role-specific modules, regular updates to reflect new risks, and stronger alignment with national and international initiatives.

READ THE REPORT: How are AI developers managing risks? Insights from responses to the reporting framework of the Hiroshima AI Process Code of Conduct

Striking a balance between flexibility and consistency

Motivations for participating varied. For some organisations, reporting served as a diplomatic signal of alignment with international norms; for others, it was a form of regulatory readiness or client assurance.

Another theme is the challenge of striking a balance between flexibility and consistency. The framework’s broad definition of advanced AI allowed diverse organisations to participate – from frontier model developers to smaller teams working on retrieval-augmented generation or regional language models. While inclusivity is a strength, comparability across reports is sometimes limited. Participants suggest that future iterations should strike a better balance by offering more structured guidance while preserving space for contextual differences.

Future reports: keep it current, honest and consistent

The first cycle of HAIP reporting marks a milestone in AI governance and reveals opportunities for improvement. There is broad agreement that transparency should not become a competitive ranking exercise. Participants strongly object to the idea of ranking reports, noting it could penalise honesty and deter participation. Instead, the reports should be recognised as good-faith contributions towards openness, fostering collaboration rather than competition.

Participants point to several ways to strengthen future reporting cycles:

  • Foster peer learning. Regular workshops and collaborations could enable participants to share good practices and exchange knowledge, thereby reinforcing a culture of transparency across the ecosystem.
  • Keep pace with technological change. Updating the questionnaire annually would ensure that emerging risks – from generative agents to new authentication tools – are captured.
  • Simplify and modularise the process. Structured response options and role-specific tracks for developers, deployers, or research bodies would lower barriers to entry and improve comparability.
  • Develop a shared glossary. Terminology must be consistent to reduce ambiguity. A glossary building on OECD definitions and international standards would help participants interpret questions uniformly and make reports easier to read.
  • Expand awareness and recognition. Greater visibility would broaden participation. Public recognition, such as an official HAIP participant logo, could help, and engagement with investors and industry associations could link transparency reporting to wider ESG priorities.

A cultural shift towards AI transparency with great potential

The Hiroshima AI Process transparency reporting framework represents a quiet but significant revolution in AI governance, one with real potential to become a reporting standard. By encouraging voluntary disclosure, it translates international principles into concrete practices and provides a foundation for mutual trust and accountability.

The first round of reports shows that organisations are adopting layered risk management, embedding AI governance into corporate structures, investing in safety research, and aligning AI with societal goals. It also highlights gaps in consistency, comparability, and technical maturity that future iterations of the framework can address.

Perhaps most importantly, the exercise signals a cultural shift: transparency is becoming a norm, not an exception. The HAIP reporting framework could become a cornerstone of international co-operation, complementing regulation with voluntary openness and collective learning.

The HAIP reporting framework demonstrates that collaboration and accountability can keep pace with innovation. By embedding transparency into the DNA of AI development, we move toward a future where trust in AI is built on practice, not just promises.

JOIN THE COMPANIES WHO HAVE SUBMITTED HAIP REPORTS



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD, the GPAI or their member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.