
From “black box AI” to operational AI transparency: How the HAIP Reporting Framework can play an important role in global AI governance

An operational view of AI transparency: what “good” looks like, and how the HAIP Reporting Framework helps establish standardised transparency information that governments, developers, and deployers can all rely on.


As the “black box challenge” remains central to the AI revolution, transparency has shifted from an aspiration to a fundamental governance objective, maturing into an operational discipline for responsible and trustworthy innovation. Expectations are rising rapidly, and so too is the range of policy instruments designed to meet them. As the number of instruments grows, fragmentation and interoperability gaps could emerge across AI value chains, creating complexity for businesses, developers and other actors.

HAIP is the latest voluntary framework to shape global governance in technology

Recent history shows that voluntary governance approaches can play a critical role as part of a broader set of accountability and oversight measures. Unlike legally binding reporting, voluntary approaches can be agile and adaptive. They evolve more easily with technological change, respond to broader reporting requirements and establish clear and practical expectations. In turn, this can encourage iterative improvements and collective alignment.

The Internet is another general-purpose technology whose success required mechanisms for global coordination. Governing bodies, such as ICANN, the IETF, and ISOC, successfully launched voluntary coordination frameworks that gradually became living reference points for interoperability and shared accountability. For technologies whose capabilities evolve rapidly, these types of tools are key to maintaining trust, ensuring comparability, reinforcing shared norms and solidifying emerging best practices.

The G7 Hiroshima AI Process (HAIP) Reporting Framework, developed with Japan’s Friends Group and aligned with the United Nations General Assembly resolution adopted in March 2024, was launched publicly in February 2025. With 20 organisations in seven countries already reporting this year, the framework is encouraging convergence around transparency reporting and has the potential to set the standard for credible transparency. It is already filling the longstanding gap between governance norms and usable governance tools, positioning disclosure as an operational discipline that can mature and scale.

Voluntary governance mechanisms that foster convergence and interoperability are essential across the entire AI value chain in the G7 community and worldwide. This is the case for frontier AI model developers and anyone building large-scale applications on top of these models. It also applies to those who provide the compute infrastructure, hardware and software that enable their training and deployment.

The HAIP Reporting Framework is doing more than encouraging convergence. It is laying the foundation for interoperable transparency. Its current form provides a structured template for disclosures on core governance practices, such as risk identification, evaluation and testing, mitigation, monitoring, incident handling, accountability, and content provenance. By standardising reporting in these fields, it also creates a baseline of accountability, encouraging actors along the AI value chain to uphold and improve emerging responsible AI practices, inform norm and standard setting, and continually strengthen governance.

As AI evolves, so can HAIP

In its 2025 analysis of the first twenty reports, the OECD notes that voluntary industry disclosures can form a collective body of evidence, making information more comparable, repeatable, and usable for policymakers and evaluators. Built this way, transparency tools become reusable governance building blocks that strengthen alignment and best practices across the AI ecosystem.

Just as systemic financial institutions disclose material risks through standardised risk disclosures, AI companies are using the HAIP reporting framework to provide comparable, repeatable disclosures across AI risk management practices that governments can aggregate, compare, and build upon. From an industry perspective, the framework makes governance operational by translating high-level principles into concrete checkpoints across the AI lifecycle, facilitating the development and deployment of responsible advanced AI systems at scale. Looking ahead, the following options could be considered to further strengthen and expand the framework:

  • Expand the scope. To fulfil its promise, the framework’s scope could be expanded to develop shared language and expectations for the entire AI value chain. This would require broad representation across the supply chain and jurisdictions, spanning model developers, model deployers, system integrators, application builders, compute providers and chip manufacturers. Such an expansion would strengthen the two-way flow of information: improving the evidence available for deployment decisions downstream while channelling practical insights upstream into model design, development and governance. It would also provide a more comparable context, enabling stakeholders to assess progress consistently and align future improvements.
  • Adapt components to match the evolving AI landscape. The framework should also incorporate additional research-informed and technically grounded components that address evolving technology and emerging risks, including agentic AI and model-security threats, ensuring it remains a living reference point for evolving governance standards. This could include integrating findings from frontier evaluations and reflecting progress in threat modelling, risk management approaches, and safety thresholds.

As global discussions continue through forums such as the OECD, the GPAI and the AI Impact Summit in India, the HAIP Reporting Framework provides a concrete example of how voluntary governance can promote practical, scalable solutions for AI transparency.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD, the GPAI or their member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.