Safer together: How governments can enhance the AI Safety Institute Network’s role in global AI governance
As we integrate AI into every facet of society, from healthcare to national security, ensuring that the technology is secure, safe, and trustworthy is more than a technical challenge. It is a global imperative.
While each country must address its own AI risks, the technology’s ever-growing reach across borders demands coordinated efforts. One of the most promising initiatives to address this need is the International Network of AI Safety Institutes, launched in May 2024 at the AI Seoul Summit with the mission “to promote the safe, secure, and trustworthy development of AI.” While the effort is commendable, it is important to ask whether such an ambitious, collaborative body can effectively govern a technology as dynamic and integral to national security and competitiveness as AI.
Ahead of the inaugural meeting of the AISI Network in San Francisco, The Future Society published “Weaving a Safety Net: Key Considerations for How the AI Safety Institute Network Can Advance Multilateral Collaboration”. Based on a review of a range of commitments and existing collaboration methods, our report provides insights into the current state of AISI collaboration and where it can go.
Here are some key observations on the rationale behind the AISI Network, possible options for its structure, and the essential partnerships it needs to forge to lead in global AI governance.
Why AISIs should collaborate more
When powerful AI systems are deployed globally, the risks don’t respect national borders. An incident in one region—such as a cybersecurity breach in a globally integrated AI system—could cause cascading effects worldwide. Recognising this, the AISI Network aims to bring nations together to set safety standards, evaluate risks, and collectively mitigate the potential harms of advanced AI. Working together, these institutes will be critical to establishing consistent standards, shared knowledge, and best practices for AI safety. Benefits of stronger collaboration include, amongst others:
- Streamlined knowledge exchange: In any new field, sharing insights, methodologies, and lessons learned accelerates progress. This is particularly true for AI, where threats and challenges evolve almost as rapidly as the technology itself. Through joint testing exercises, personnel exchanges, and workshops, AISIs could contribute to a pool of shared knowledge that makes every institute more informed and effective.
- Faster incident response and crisis management: As authorities integrate AI into critical infrastructure, they must ensure rapid responses to potential AI incidents. By establishing protocols for data-sharing, joint evaluations, and emergency communication, a coordinated AISI Network could provide a rapid international response mechanism when AI failures or breaches occur.
- Specialisation: While it’s beneficial for each AISI to contribute broadly, specialisation could enable institutes to focus on distinct areas where they excel or that are of particular concern to them. For example, some AISIs might focus on incident response, while others emphasise technical safety evaluations. Thematic specialisation coupled with robust knowledge-sharing can allow each AISI to prioritise specific issues and benefit from the work of others, reducing competition for limited talent and resources. That said, overreliance among partner AISIs in critical safety areas could prove highly costly: political shifts or diplomatic tensions, for instance, could cause AISIs to stop collaborating.
- Mutual recognition of safety evaluations: The AISI Network could also allow for mutual recognition of evaluations across member institutes. A model tested by one country’s AISI could then gain credibility across the entire network, avoiding redundancy, saving resources, and mitigating prohibitive costs and bureaucracy for AI developers.
Collaboration, however, is not without challenges: differing national priorities and regulatory landscapes, uneven resource availability, and the complexities of managing sensitive data all pose significant obstacles to seamless cooperation.
How do AISIs collaborate now?
Beyond multilateral collaboration within the AISI Network, AISIs are also engaging in cross-sectoral and bilateral partnerships:
Cross-sectoral collaboration:
- AISIs work directly with major AI companies to enhance model testing and evaluation. For example, the United Kingdom and United States struck deals with several firms, while Singapore’s IMDA and AI Verify Foundation have partnered with Anthropic to conduct red-teaming exercises across languages and cultural contexts.
- Public-private partnerships also grant AISIs access to computational resources, such as the UK’s AI Research Resource, which is critical for comprehensive technical AI safety research.
- Collaboration with academia and industry also plays an important role in developing standards, facilitating compliance, and raising awareness of AI safety issues.
Bilateral collaboration:
- Joint testing exercises and collaborative research programs can enhance expertise and create insights into model evaluation and safety.
- Personnel exchanges are crucial to facilitating knowledge transfer and strengthening collaboration.
- Bilateral regulatory “crosswalks” can harmonise frameworks and improve interoperability, while regular dialogues between countries build a deeper understanding of each other’s needs and objectives.
Three ways to structure the AISI Network to enhance collaboration
The AISI Network would benefit from a central coordination body, or secretariat, to unify and streamline collaborative efforts. Coordination could focus on aligning research agendas, facilitating working groups, standardising evaluations, coordinating joint research programmes, bridging research and policy, sharing technical expertise, managing member admission, drafting terms of reference, organising meetings and events, securing network funding, and representing the Network internationally.
Three models for the AISI Network to consider are a rotating secretariat, a static secretariat in a designated country, and a static secretariat hosted by an intergovernmental organisation like the UN or OECD:
| Model | Benefits | Challenges |
| --- | --- | --- |
| Rotating secretariat: Member countries alternate as the Network’s coordination arm, distributing administrative duties across nations (G7 model). | Shared responsibility and geographic representation: Ensures broad geographic representation and equitable influence in agenda-setting, preventing one member from dominating discussions. Diverse agendas and specialisation: Enables each host to bring its own focus areas and perspectives, potentially enriching the Network’s expertise. | Shifting priorities: Rotating hosts can shift agendas, potentially leading to mission drift or creep. Continuity issues: Frequent transitions could lead to inefficiencies, institutional “memory loss,” and slower progress on long-term objectives. |
| Static secretariat in a designated country: A permanent administrative body located in a single country (e.g. World Bank model). | Institutional stability and consistency: Facilitates long-term planning and maintains strategic continuity through a steady leadership structure. Established infrastructure and local networks: Uses established relationships and infrastructure to enhance administrative efficiency. | Perception of bias: A single host country risks dominating priorities, potentially alienating other members and deterring some regions from full participation. |
| Static secretariat hosted by an Intergovernmental Organisation (IGO), such as the United Nations or the OECD: Embedded within the IGO for greater reach and legitimacy. | Global credibility and inclusiveness: Facilitates coordination with countries lacking AISIs and leverages the IGO’s reputation for broad stakeholder engagement. AI expertise and established networks: Provides access to AI expert networks and multi-stakeholder collaboration channels, enhancing collective knowledge and capabilities. Established diplomatic safeguards: Offers legal immunity for member institutes, ensuring secure and neutral platforms for international collaboration. Structured funding: Enables equitable funding mechanisms for sustainable support (e.g. contributions proportional to GDP). | Bureaucratic delays: Intricate IGO processes could slow responses to emerging AI risks. Scope limitations: Balancing diverse national interests could reduce attention to frontier risks and limit the specialised knowledge available for advanced AI governance. Inefficient industry collaboration: Slower industry engagement could make it more complex for AI labs to share resources or provide access to models, restricting cooperation. |
The upcoming inaugural convening of the AISI Network in San Francisco would be the ideal moment to formalise the Network’s coordination efforts by defining its scope, establishing clear criteria and terms for membership, and agreeing on concrete and actionable projects for collaboration.
How could the AISI Network collaborate with other multilateral efforts?
While AISIs hold substantial expertise, they represent just one part of a much larger governance ecosystem. Successful global governance will hinge on strategic partnerships with other international organisations, as each brings unique strengths to the table.
- The UN’s extensive reach and credibility make it an indispensable partner for the AISI Network. Through platforms like UNESCO’s AI Readiness Assessment and the UN’s High-Level Advisory Body on AI, the Network can contribute specialised technical insights on the risks of advanced AI that complement the UN’s inclusivity and policy focus. The UN’s forthcoming International Scientific Panel and Global Policy Dialogue on AI also offer unique opportunities for collaboration.
- However, we recommend that the AISI Network engage with UN processes selectively, prioritising collaborations that preserve its independence and technical specialisation. Similarly, to maintain inclusivity and fairness, the UN should balance inputs from the AISI Network with contributions from member states and other stakeholders, making sure not to marginalise nations without dedicated AISIs.
- The OECD’s robust frameworks for AI policy and broad multi-stakeholder networks would complement the technical focus of AISIs. The AISI Network could collaborate with the OECD on initiatives such as drafting the International Scientific Report on the Safety of Advanced AI, monitoring the G7 Hiroshima Process Code of Conduct, producing annual AI risk assessments, and monitoring AI incidents.
- The AISI Network could strengthen the upcoming OECD-UN partnership on AI governance by contributing advanced AI safety expertise, adding technical rigour to the UN’s inclusive mandate and the OECD’s policy development experience. By aligning the strengths of all three entities, the partnership could accelerate the development of a global AI governance regime complex capable of addressing both immediate and long-term challenges posed by advanced AI systems.
- The AISI Network should also engage other coalitions, especially those that lack the immediate capacity or intent to establish dedicated AISIs but still have a vested interest in the safe development of AI systems. For example, engaging with non-Western entities like the China-BRICS Artificial Intelligence Development and Cooperation Center and regional organisations such as the African Union and ASEAN would infuse the AISI Network with much-needed diverse perspectives.
- The Network’s inclusion of the European Union offers a model for engaging “regional AISIs,” groups of countries collaborating to ensure representation in AI safety discussions without establishing their own domestic institutes. Early engagement with these entities will ensure that emerging AI governance frameworks are inclusive and resilient.
Connecting the dots: Towards a unified approach to global AI governance
The formation of the AISI Network is an ambitious step toward building a cohesive, global response to the safety challenges of advanced AI. But sustaining the network’s effectiveness will require careful planning, adaptability, and a commitment to nurturing trust among its members and partners. As political landscapes shift and AI continues to evolve, the AISI Network should remain a flexible, responsive, and inclusive organisation that adapts to new challenges while staying true to its mission.
In this effort, countries worldwide should prioritise support for initiatives like the AISI Network and advocate for equitable contributions and transparent protocols that reinforce trust. The Network must also carefully calibrate its partnerships with AI companies to leverage their invaluable expertise and access to advanced models while safeguarding against potential industry capture. Just as importantly, civil society should find a place within the AISI Network, serving as a critical voice for accountability and representing the global public interest.
As the Network prepares to convene in San Francisco, it has a unique opportunity to lay the foundations of a global governance regime that adapts to the evolving AI landscape and shapes it responsibly, setting the course for a future where AI serves humanity’s highest aspirations.