The HAIP Reporting Framework: Feedback on a quiet revolution in AI transparency

Voluntary governance, stakeholder trust, and the future of responsible AI

Transparency in AI is no longer optional

AI is transforming our world, but who gets to look under the hood? In a world where algorithms influence elections, shape job markets, and generate knowledge, transparency is no longer just a “nice-to-have”—it’s the foundation of trust.

This is one of the pressing challenges the Hiroshima AI Process (HAIP) addresses. HAIP is a G7 initiative launched in 2023 that aims to establish a global framework for safe and trustworthy AI. As part of this effort, it has developed, with the OECD, a voluntary reporting framework that invites AI developers to disclose how they align with international guidelines for responsible AI.

Let’s look at some early insights from interviews with 11 of the first 19 participating organisations and a multistakeholder meeting held in Tokyo in June 2025. The findings reveal a picture that is both promising and complex, with lessons for the future of global AI governance.

One framework, many motivations: Why companies are joining HAIP

Why would a company voluntarily publish sensitive information about how it builds AI? It turns out the answer depends on who they are speaking to. Our interviews revealed five key audiences that shape how companies approach their HAIP reports:

| Audience | Examples | Typical motivation |
| --- | --- | --- |
| International bodies | OECD, G7 partners | Visibility in AI governance; international alignment |
| Policy stakeholders | Governments, regulators | Gain trust; influence on regulatory frameworks |
| Business and technical partners | B2B clients, external developers, corporate partners | Contractual clarity; risk accountability |
| General public | Consumers, civil society, job-seeking students | Ethical branding; accessibility |
| Internal teams | Employees | Internal alignment and awareness of AI governance |

For some, HAIP is a diplomatic tool to show they are aligned with global norms. For others, it is a means of communicating readiness for future regulation. B2B companies use the reports to inform clients and partners. Some view the report primarily as a public-facing transparency tool, written in clear, relatable language.

Interestingly, many companies emphasise how the internal process of preparing the report—coordinating across departments, aligning terminology, clarifying roles—was just as valuable as the final publication.

The value and challenge of ambiguity

A recurring theme was uncertainty about how much to disclose or the level of detail to provide. Some companies asked: “Should we talk about specific AI models, or company-wide policy?” Others wondered: “Do we write from the perspective of a developer or a deployer?”

And yet, this ambiguity was also seen as a strength. The broad definition of “advanced AI systems” enabled a diverse group of participants to take part, including those working with small language models, retrieval-augmented generation (RAG), or open-weight AI.

This highlights a key trade-off: too much flexibility can weaken comparability, but too much standardisation might discourage participation. Future iterations of the framework will need to carefully balance these aspects.

Ranking or recognition? A cautionary note

Since HAIP employs a standard questionnaire, comparisons across organisations are possible. But should the reports be ranked?

At a stakeholder meeting in Tokyo, when researchers presented a draft scoring system, several participants strongly objected. Their concern was that simplistic rankings could distort incentives, discourage participation, and shift the focus from transparency to performance signalling.

Instead, HAIP should be seen as a recognition of effort—a credit for choosing openness. While maintaining the credibility of published content is essential, evaluations must remain context-sensitive and qualitative, not one-size-fits-all.

Three proposals for HAIP’s future

Based on the feedback we collected, we would suggest the following improvements:

1. Clarify the target audience

Each organisation should clearly specify its report’s target audience. Is it aimed at policymakers, customers, or the public? This assists readers in understanding the content and prevents mismatched expectations.

2. Promote shared vocabulary

Terms like “safety” and “robustness” are often used differently across organisations. To encourage consistency, we suggest establishing a shared glossary drawing on the OECD and other international sources.

3. Raise awareness and provide support

Many interviewees noted that HAIP remains poorly understood, both inside their organisations and in the public eye. To address this, we suggest:

  • Permitting the use of a HAIP logo to indicate participation.
  • Engaging institutional investors, who increasingly value transparency in ESG.
  • Hosting an annual “HAIP Summit” to showcase updates and good practices.

A new culture of voluntary transparency

Besides being a reporting tool, the HAIP Reporting Framework acts as a cultural intervention. It motivates companies to reflect, coordinate, and disclose in ways they might not have previously considered. Several participants observed that the very act of publishing a report, even a modest one, should be celebrated rather than penalised.

As AI continues to shape societies and economies, voluntary transparency mechanisms like HAIP present a promising model for bottom-up governance. They are not perfect, but they are a good starting point.

By fostering an environment where disclosure is rewarded, not feared, HAIP may well become a template for the future of responsible AI.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD, the GPAI or their member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.