
How the global community can come together to understand AI’s risks


In 1980 and again in 1985, an international group of scientists came together in Villach, Austria, to discuss a concerning trend. The climate was warming, and human activity seemed to be part of the cause. The resulting flurry of activity led to the creation of the Intergovernmental Panel on Climate Change (IPCC) in 1988, and seven years later, an IPCC report would contain the historic words: “[t]he balance of evidence […] suggests a discernible human influence on global climate”. The IPCC was established to build scientific consensus and agreement amongst global decision-makers. To this day, it remains the preeminent international forum for doing so.

Although several recent events could qualify, it’s not yet clear what we will consider AI’s “Villach” moment when we look back in forty years. Maybe it was the release of ChatGPT, the open letters framing AI as a potential existential risk, or the 28 countries signing the Bletchley Declaration. Perhaps it was simply when a popular chatbot professed its love to a journalist and tried to convince him to leave his wife. Regardless, the moment has arrived: the international community is now undeniably grappling with AI’s potential.

But while uncertainty about the present and future of AI has made it difficult for policymakers to craft a response, there is no IPCC for AI to turn to for answers. Instead, we see a patchwork of international institutions, scientific reports, and declarations attempting to keep pace with AI’s rapid development. Without a deliberate and ambitious vision for the future of the space, these initiatives could fail to coordinate and to reach the speed and scale required to tackle AI.

Budding international efforts hold promise, but the bigger picture is still emerging

Across the globe, myriad initiatives are emerging to help governments build the capacity to understand and manage AI. The rise of AI safety institutes (AISIs), the launch of the first International Scientific Report on the Safety of Advanced AI, the integration of the Global Partnership on AI with the OECD’s networks of AI experts, and the UN’s newly announced scientific panel on AI are all moves in the right direction.

Nonetheless, the work is not done. While these initiatives could eventually serve as the backbone of international efforts to come to grips with AI, some of their mandates are still being crafted and fine-tuned. The bigger picture of what each will do and how they will fit together to respond to the urgency of AI is still emerging.

This bigger picture must also contend with the underlying science’s relative immaturity. AI’s scientific community has only recently turned significant attention to understanding the technology’s practical impacts and safety, and that effort faces its own challenges.

For one, getting a grip on the present state of AI can be challenging. The interim International Scientific Report on the Safety of Advanced AI concludes, “[o]verall, the scientific understanding of the inner workings, capabilities, and societal impacts of general-purpose AI is very limited.” And while AI incident trackers like the OECD’s and the AI Incident Database collect hundreds of reported problems, they undoubtedly just scratch the surface.

And predicting what comes next in AI is even more challenging. Unlike climate science, where large datasets and complex models help predict outcomes, AI research lacks widely accepted methods for long-term forecasting. Some of the most influential attempts rely on expert surveys, but those mainly reveal the extent to which experts disagree: estimates of when AI will hit specific performance thresholds vary by decades or even centuries.

And yet, AI systems are already having significant impacts worldwide. Products based on the latest wave of AI, large language models, have hundreds of millions of users and have been adopted by most of the world’s biggest companies. Developing and deploying AI systems is resource intensive, consuming staggering amounts of energy and often hidden human labour, while the industry takes in hundreds of billions of dollars. These environmental and human costs are growing too large to ignore.

To make progress, global efforts will need to pool data and expertise, produce relevant research, and build consensus among researchers and decision-makers worldwide. That consensus could be as simple as agreeing on what we don’t know. Without it, decision-makers are left in a tough position, with fear of doing too little or too much limiting their options.

Speed, inclusivity and prioritisation are key challenges

Setting up processes to accomplish these goals will pose unique challenges. For one, these activities will have to find ways to include industry actors productively. In climate science and other domains, the IPCC and comparable organisations rely mostly on public and academic research for their scientific assessments to ensure objectivity.

But AI is different. Industry leads the development of AI, producing 51 of 2023’s most notable models compared to academia’s 15. It also controls much of the data needed to understand AI’s impacts, yet a review of major AI developers found that they publish almost no information about those impacts.

Moreover, the expertise needed to evaluate AI is increasingly concentrated in industry. In 2011, 42% of AI PhDs remained in academia, but by 2022, that number had dropped to under 20%, with over 70% moving to industry. Given that leading AI companies are concentrated in North America and Europe, international efforts will need to balance industry inclusion carefully, securing companies’ participation without granting them undue influence over research findings.

Another challenge is the wide range of AI’s potential impacts. From human rights to economic and labour issues, international security, culture, and the environment, AI touches almost every aspect of modern life. Sensitive topics, such as security, and those deeply entwined with competing national interests, such as labour and economic impacts, can be hard to prioritise and can complicate productive global discussions. Other questions, such as the copyright status of data fed into AI systems, may be better addressed through regional legal systems.

But the biggest challenge is time. The IPCC took nearly a decade to launch and another two years to publish its first report. In contrast, some industry insiders predict that human-level AI could emerge in just three years. If those predictions hold, global responses cannot afford to wait that long for international initiatives to take off and protect against global harms.

Many players at different speeds along the same path forward

Meeting the moment will require a multifaceted approach involving stakeholders from many sectors and disciplines. Together, they will need to build a clear vision in which larger, slower-moving global policy projects work alongside more focused and agile scientific efforts that meet pressing needs.

While creating this vision may be a challenge, a recent report by the Carnegie Endowment for International Peace and the Oxford AI Governance Initiative offers the beginnings of a solution. It suggests that the UN’s effort should focus on its unique strength: bringing together governments worldwide to build consensus. This process may be slower, but it could address the broader policy questions AI poses by drawing on independent scientists and government experts.

At the same time, another body could focus on assessing the extreme risks associated with powerful AI systems, publishing reports on shorter timelines. Its assessments could rely on academic expertise and feed into the UN’s work. Similar efforts could tackle other pressing issues.

The report also evaluates three candidates for hosting this work: the OECD, a network of AISIs, and the International Science Council. It concludes that there is no obvious choice, but that each is well placed to contribute regardless of which ultimately hosts the work. AISIs, for example, could play an important role in mediating input from industry, building on existing projects.

Over the coming months and years, the pieces of this puzzle will begin to fall into place. In less than a month, a San Francisco summit of existing AISIs will begin to shed light on their future role. The first International Scientific Report on the Safety of Advanced AI will be presented at the AI Action Summit in February. Meanwhile, an intergovernmental process will determine the UN panel’s structure and functioning. As these initiatives progress, things will continue to shift, and it will be easy to get lost in the details.

The global community must keep returning to this big picture to ensure we stay on track to manage AI’s transformational impact responsibly.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.