Jerry Sheehan’s speech at AI Standards Hub Global Summit 

Monday, 17th March, 2025

Jerry Sheehan, OECD Director for Science, Technology and Innovation, was invited to give a keynote speech at this year’s AI Standards Hub Global Summit on the relationship between AI standardisation and AI regulation and the role of international standards in enabling regulatory interoperability.

The keynote speech was followed by a panel discussion on the relationship between standards and regulation in the context of the global governance of AI. It featured speakers from the European Commission, the OECD (Karine Perset), the Alan Turing Institute, and the Korea Development Institute.

I’m delighted to join today’s discussion and provide a few reflections on the relationship between AI standards and regulation in the context of AI governance. 

The role of standardisation in AI governance can hardly be overstated.  It is essential to advance innovation in AI and accelerate its application across our economies and societies. 

And it’s essential to do this now, while AI deployment is still in its early stages. Despite all the attention AI has received, driven by recent technological developments, only 8% of firms in the OECD area had adopted it in 2023, and adoption was concentrated in large firms and in the ICT sector.

Further diffusion—with all the economic and societal benefits that AI promises to bring—will require us to reduce regulatory uncertainties and inconsistencies across countries and regions.

Technical AI standards lay down detailed requirements that offer businesses legal clarity in designing and deploying AI systems that will comply with emerging regulatory regimes.

As Mr. Viola noted this morning in reference to the EU AI Act, successful implementation relies on the adoption of technical standards, as they define how compliance should be achieved in practice. 

When it comes to driving compatibility among legal frameworks, AI standards are only effective insofar as they are rooted in shared principles and a common understanding of foundational concepts – such as “what is AI” or “what constitutes an AI incident.”  

They must also be informed by widely accepted frameworks – for example, to classify AI systems – and process standards, like those that guide responsible business conduct in AI.  

In the absence of this groundwork, any standardisation efforts risk reinforcing regulatory fragmentation rather than preventing it. 

At the OECD, we strive to foster international cooperation between governments on AI governance, build consensus on policies and practices, and develop common frameworks and principles to support a coherent governance landscape. 

That will enable organisations and users to develop, deploy and safely use AI across jurisdictions.  

The global context for AI regulations and standards

In the last few years, AI as a technology has developed at a dizzying pace, reshaping industries and people’s everyday lives.  

Consider, for instance, the astonishing leap in the capabilities of LLM-based chatbots. Just five years ago, such systems struggled to produce coherent text; today, the most advanced chatbots can pass bar exams with scores in the top 10% of test-takers.

Yet, while AI innovation accelerates, AI governance is still in its early days. The EU AI Act – currently under implementation – marks the first binding horizontal regulatory framework for AI.

Governing AI presents significant challenges due to a number of complexities: 

AI is a general-purpose technology.  It is embedded—or soon will be—into virtually every sector of our economies, from finance and healthcare to manufacturing and education. As a result, AI applications pose vastly different risks, making a uniform regulatory approach impractical. 

AI is not a monolithic technology. Different types of AI systems present different benefits and risks depending on various factors, from model design and training data to testing processes and end-user qualifications. This variability makes AI harder to regulate with broad, static rules.

Finally, AI systems transcend borders. They can be developed in one country, trained with data from another, and deployed worldwide, highlighting the need for international coordination.  

The OECD’s role in international AI standards

Developing international AI standards is, therefore, challenging, but the challenges are not insurmountable. Trying to overcome them is at the heart of our work at the OECD.  

We take the view that AI governance is not a one-size-fits-all issue. OECD Member countries naturally adopt different approaches, shaped by their unique histories, economic priorities, and levels of AI adoption—just as we see in other policy areas like taxation and education.  

Therefore, our objective is not to achieve full harmonisation of AI regulatory approaches across countries. Rather, we aim to foster compatibility between legal frameworks, ensuring that AI systems can effectively operate across jurisdictions.

This requires a coherent international AI standards landscape based on common frameworks, definitions and tools.  

The OECD AI Principles, adopted in 2019, were the first intergovernmental standards on AI. They set a global reference point for AI policymaking, which is today reflected in many national and regional frameworks, policies and regulations around the world.  

Last year, we updated these Principles to keep abreast of technological developments – notably generative AI.

The changes also concerned the definition of an “AI system”. This may sound trivial, but reaching consensus on what constitutes an AI system has been anything but simple – and it is fundamental to ensuring that governance frameworks apply to the same phenomenon.

Indeed, the OECD definition has been very influential, informing the US National AI Initiative Act, the EU AI Act, and the Council of Europe’s Convention on AI, Human Rights, Democracy, and the Rule of Law.

The OECD AI Classification Framework has also been instrumental in fostering a shared understanding of AI systems and their multifaceted impact.  

By classifying AI systems based on key dimensions – such as economic context, data and input, task performed and output, and model design – this framework has helped policymakers and stakeholders assess AI-related challenges more effectively.

It has laid a foundation for major international standards efforts, including NIST’s AI Risk Management Framework.

Our work builds on a strong evidence base and data infrastructure that the OECD has been developing for over a decade.  

The OECD.AI Policy Observatory – which some of you in this room may be familiar with – illustrates very well our efforts to systematically gather data, trends, and best practices for promoting trustworthy AI.  

It hosts a repository of over 1,000 AI policies from more than 70 jurisdictions, including emerging and developing economies. It provides policymakers with an up-to-date and comprehensive global overview of AI governance efforts.  

Building international consensus on AI governance is crucial to ensuring that legal frameworks remain compatible and aligned around the world.   

That’s why a key milestone was the integration last year of the work of the Global Partnership on AI (GPAI) with that of the OECD, with the OECD AI Principles as a foundation.

This has strengthened the inclusivity of our AI governance efforts, extending our reach across six continents and 44 countries, with further opportunities to expand engagement in the months and years ahead. 

Key OECD initiatives supporting regulatory compatibility

My colleague Karine Perset is also here and will share with you further details in the upcoming panel discussion, but let me provide you with a brief preview of three initiatives that exemplify the OECD’s approach to supporting global efforts towards regulatory compatibility:  

First, the OECD published a common reporting framework for AI incidents last month.  This framework has been approved by all GPAI countries.  

Based on the OECD’s definition of an “AI incident”, this framework promotes consistency in reporting, particularly as regulations like the EU AI Act introduce mandatory AI incident reporting.  

Second, in close partnership with the AI Standards Institute, we continue to expand our Catalogue of Tools and Metrics for Trustworthy AI.

It collects best practices and methodologies for evaluating and enhancing the trustworthiness of AI systems. With almost a thousand tools and over 100 technical metrics, the catalogue helps promote safe, secure and trustworthy AI systems.  

Finally, a little over a month ago, in the context of the AI Action Summit in Paris, we launched the Reporting Framework for the G7 Hiroshima AI Process Code of Conduct for Organizations Developing Advanced AI Systems.

This voluntary reporting framework promotes transparency and accountability in developing advanced AI systems.  

Developed by the G7 with the OECD’s support and informed by key AI stakeholders, it facilitates risk mitigation, comparability, and the identification of good practices – helping ensure that AI is developed and deployed safely and responsibly.  

As AI continues to evolve rapidly, so must our ability to govern it effectively. Around the world, we are seeing different regulatory frameworks emerge. This is an encouraging development, reflecting the urgency of addressing AI’s opportunities and risks.

Without international coordination, we risk a fragmented regulatory landscape – marked by significant compliance costs for businesses, barriers to cross-border deployment of AI systems and stifled innovation.  

Hence, it is critical to ensure that AI standards are rooted in shared principles, aligned with common frameworks and definitions, and supported by a robust set of evidence, allowing them to remain adaptable and responsive to the rapid pace of technological change. 

That’s why I look forward to continuing our collaboration with you to promote standards designed to enhance legal clarity, enable scalability, foster innovation, and promote AI safety.  

I am certain these two days of discussion will help drive meaningful progress towards this goal.