Thresholds for Frontier AI
This official side event of the AI Action Summit is co-organised by the OECD and the UK AI Safety Institute.

Date: 7 February 2025 10:00-12:00 CET
Venue: Auditorium Faurre, Institut Polytechnique de Paris (IP Paris), Rte de Saclay, 91120 Palaiseau, France
The OECD and the UK AI Safety Institute (AISI) co-organised a session titled “Thresholds for Frontier AI” as part of the AI, Science and Society Conference, a satellite event of the AI Action Summit.
The frontier of AI is advancing at pace: systems are becoming ever more agentic and multimodal, capabilities more advanced, and applications more diverse. Emerging frontier AI systems can exacerbate known risks and pose novel ones.
An ongoing international conversation explores whether thresholds, understood as predefined points at which additional mitigations are required, could be a useful way to operationalise shared understandings of frontier AI risk. Recent international initiatives have included the OECD’s public consultation and expert survey on the topic of thresholds, the ambition to identify thresholds for frontier AI risks agreed by 27 countries and the EU in the Seoul Ministerial Statement, and the thresholds set by leading AI developers including OpenAI, Google DeepMind and Anthropic.
The panel discussions on 7 February will examine thresholds used in other industries as well as those already established by companies and governments for frontier AI. The session will promote shared understanding among actors across the AI ecosystem, advance an inclusive conversation on thresholds for frontier AI risks, and ultimately facilitate greater international consensus on approaches to risk assessment.
Agenda
10:00 – 10:20 | Introduction
• Introduction from Robert Trager (Director, Oxford Martin AI Governance Initiative)
• Introduction from Agnes Delaborde (Chair, Safety Working Group, Trust in AI Track, AI Action Summit)
• 🎥 Presentation by Karine Perset (Head of AI Division, OECD) on the OECD survey on thresholds for advanced AI systems
10:20 – 11:05 | 🎥 Why (or why not) thresholds?
This panel will explore the utility, limitations, and evolution of thresholds in the context of frontier AI, drawing insights from practices in other industries.
Speakers:
• Jan Brauner, Technology Specialist, EU AI Office
• Chris Meserole, Executive Director, Frontier Model Forum
• Malcolm Murray, Head of Research, SaferAI
• Veronique Rouyer, Head of Division of Nuclear Safety Technology and Regulation, OECD Nuclear Energy Agency
• Peng Wei, Associate Professor, Department of Mechanical and Aerospace Engineering, George Washington University
Moderator: Robert Trager, Director, Oxford Martin AI Governance Initiative
11:05 – 11:55 | 🎥 Setting thresholds in practice
This panel will identify practical methodologies for establishing thresholds for AI systems and the challenges involved in implementing them. Industry practitioners and experts will share their firsthand experience of engaging stakeholders, developing assessment frameworks, setting specific thresholds, and addressing uncertainties in the threshold-setting process.
Speakers:
• Lewis Ho, Researcher, Google DeepMind
• Jade Leung, Chief Technology Officer, UK AI Safety Institute
• Rumman Chowdhury, CEO, Humane Intelligence
• Maeve Ryan, AI Policy Manager, Meta
Moderator: Robert Trager, Director, Oxford Martin AI Governance Initiative
11:55 – 12:00 | Closing remarks