
Responsible AI Working Group

The Global Partnership on AI (GPAI)

Stakeholder Type: Intergovernmental

The Global Partnership on AI (GPAI) has a mission to “support the development and use of AI based on human rights, inclusion, diversity, innovation, and economic growth, while seeking to address the United Nations Sustainable Development Goals”. Launched in June 2020, it is the first intergovernmental institution with a permanent focus on AI, with a founding membership whose countries cover 2.5 billion of the world’s population. It has ambitions to scale, particularly to include low- and middle-income countries, support the UN Sustainable Development Goals and help fully realise the OECD AI Recommendation.

Responsible AI Working Group's documents

Algorithmic transparency in the public sector: A state-of-the-art report of algorithmic transparency instruments

This report provides an overview of algorithmic transparency instruments in the public sector, focusing on repositories or registers of public algorithms. It is a preliminary report of the “Algorithmic Transparency in the Public Sector” project developed by experts from the Global Partnership on Artificial Intelligence (GPAI). In the project's subsequent phases, additional reports will be produced based on three in-depth case studies of public algorithmic repositories. The case studies will include interviews with diverse stakeholders to evaluate this type of transparency instrument. GPAI experts from the Responsible AI and Data Governance Working Groups contribute to this project. The project's objective is to study algorithmic transparency in the public sector with an emphasis on assessing transparency instruments, both reactive and proactive, that may allow governments to comply with algorithmic transparency principles, standards, and rules. The project will study the strengths and weaknesses of these instruments, the challenges of building them, their diverse usages and users, costs, how instruments complement each other, and their potential contributions to transparency and different goals (e.g., explainability, accountability). May 27, 2025

Policy Guide for Implementing Transformative AI Policy Recommendations

The purpose of this guide is to support policy makers and regulatory bodies in implementing key recommendations from the report Towards Substantive Equality in AI: Transformative AI Policy for Gender Equality and Diversity. The guide aims to assist national policy makers – in their duty to protect, promote and fulfil human rights – to integrate transformative AI policies into broader governmental frameworks and practices. May 27, 2025

Towards Substantive Equality in Artificial Intelligence: Transformative AI Policy for Gender Equality and Diversity (November 2024)

The rapid advancement of artificial intelligence (AI) is transforming societies and driving economic growth, holding great potential to improve lives and livelihoods globally. However, it risks exacerbating existing inequalities by mirroring and magnifying societal biases, particularly those affecting historically marginalised groups. Challenges such as discrimination, unfairness, bias and harmful stereotypes persist throughout the AI lifecycle, impacting many aspects of human life. Robust regulatory frameworks are urgently needed to mitigate these disparities, prevent harm and work towards substantive equality and diversity in AI ecosystems. Towards Substantive Equality in Artificial Intelligence: Transformative AI Policy for Gender Equality and Diversity aims to strengthen the capacity of States and other stakeholders to foster inclusive, equitable and just AI ecosystems. It examines promising practices, provides policy insights and offers actionable recommendations to enhance gender equality and diversity in AI and related policy making. The Policy Guide for Implementing Transformative AI Policy Recommendations provides additional guidance on implementation. May 27, 2025

AI for Net Zero: Assessing Readiness for AI (November 2024)

The objective of this booklet is to help companies understand the prerequisites for deploying AI in support of a low-cost transition to net zero. AI can accelerate the transition to net zero. In this booklet, we refer to AI as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI is becoming increasingly useful as it can help identify subtle patterns in very large amounts of data, allowing it to optimise and automate complex systems. However, it also has weaknesses: its outputs can be strongly influenced by poor or biased data; it is not always clear how it arrives at its conclusions; and any answers it offers are only as good as the questions it is asked. May 27, 2025

Transparency Mechanisms for Social Media Recommender Algorithms: From Proposals to Action

Social media platforms rely on several kinds of AI technology for their operation. Much of the appeal of social media platforms comes from their ability to deliver content that is tailored to individual users. This ability is provided in large part by AI systems called recommender systems: these systems are the focus of our project. Recommender systems curate the ‘content feeds’ of platform users, using machine learning techniques to tailor each user’s feed to the kinds of item they have engaged with in the past. They essentially function as a personalised newspaper editor for each user, choosing which items to present, and which to withhold. They rank amongst the most pervasive and influential AI systems in the world today. The starting point for our project is a concern that recommender systems may lead users in the direction of harmful content of various kinds. This concern is at origin a technical one, relating to the AI methods through which recommender systems learn. But it is also a social and political one, because the effects of recommender systems on platform users could potentially have a significant influence on currents of political opinion. At present, there is very little public information about the effects of recommender systems on platform users: we know very little about how information is disseminated to users on social media platforms. It is vital that governments, and the public, have more information about how recommender systems steer content to platform users, particularly in domains of harmful content. In the first phase of our project, we reviewed possible methods for studying the effects of recommender systems on user platform behaviour. We concluded that the best methods available for studying these effects are the methods that companies use themselves. These are methods that are only available internally to companies. We proposed transparency mechanisms, in which these company-internal methods are used to address questions in the public interest, about possible harmful effects of recommender systems. May 27, 2025
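To make the curation mechanism described above concrete, the following is a minimal illustrative sketch, not any platform's actual algorithm: a hypothetical predict_engagement score, standing in for a model learned from a user's past interactions, is used to rank candidate items and keep only the top of the list.

```python
# Illustrative sketch only: real platform recommenders are far more complex.
# predict_engagement() is a toy stand-in for a learned engagement model.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    topic: str

def predict_engagement(user_history: dict[str, int], item: Item) -> float:
    """Toy engagement score: how often the user engaged with this item's topic."""
    total = sum(user_history.values()) or 1
    return user_history.get(item.topic, 0) / total

def curate_feed(user_history: dict[str, int], candidates: list[Item], k: int = 10) -> list[Item]:
    """Rank candidates by predicted engagement and keep the top k,
    i.e. choose which items to present and which to withhold."""
    ranked = sorted(candidates, key=lambda it: predict_engagement(user_history, it), reverse=True)
    return ranked[:k]

# Example: a user who mostly engaged with 'politics' sees politics ranked first.
history = {"politics": 8, "sport": 2}
feed = curate_feed(history, [Item("a", "sport"), Item("b", "politics"), Item("c", "music")], k=2)
print([it.item_id for it in feed])  # ['b', 'a']
```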

State-of-the-art Foundation AI Models Should be Accompanied by Detection Mechanisms as a Condition of Public Release

Foundation models represent a dramatic advance for the state of the art in Artificial Intelligence (AI). In current discussions of AI, a foundation model is defined very generally as an AI model that is trained on large amounts of data, typically using self-supervision, that can be adapted, or ‘fine-tuned’, to a wide range of downstream tasks (see, e.g., Bommasani et al., 2022). In this paper, we will argue for a specific regulatory mechanism that governs the release of new state-of-the-art foundation models. For concreteness, our arguments will sometimes make reference to a central ingredient in many current foundation models, namely large language models (LLMs), which have the ability to generate natural language text as output. Many LLMs are foundation models in their own right: for instance, BERT and GPT-3 are LLMs and also canonical examples of foundation models. The discussions in this paper will sometimes refer to LLMs and the text they generate, to give concrete examples of the content that foundation models can produce and the issues that arise for these models. Our broad argument is about foundation models generally, not just about LLMs. But we will begin by introducing LLMs and then show how LLMs can provide the core of foundation models with wider functionality. May 27, 2025

Social Media Governance Policy Brief: How the DSA can enable a public science of digital platform social impacts (policy brief)

A key aim of the EU’s Digital Services Act (DSA, 2022) is to improve transparency about the operation of very large online platforms (VLOPs): to shed light on how the algorithms and processes deployed by these platforms influence the way information flows in our society, and influence individual platform users, in potentially harmful ways. The DSA provides two particular mechanisms for delivering this transparency. One involves access to company data and processes by external auditors: each VLOP must undergo regular independent audits, to check for compliance with its obligations under the DSA. Another involves access to company data and processes by vetted independent researchers, to ensure potential risks to fundamental rights can be identified. This allows DSA-relevant aspects of company operation to be further studied, using data and methods that are only available within companies. Each type of access is governed by a Delegated Regulation. The Delegated Regulation on Auditing has already been released (DSA, 2023). The Delegated Regulation for Data Access for External Researchers is currently under discussion. Our briefing note contributes to this latter discussion. May 27, 2025

Crowdsourcing annotations for harmful content classifiers: An update from GPAI’s pilot project on political hate speech in India

This report is a sequel to the report we gave at last year’s GPAI Summit in Delhi (GPAI, 2023), which introduced our harmful content classification project and presented some initial results. We begin in Section 2 by summarising the aims of the project, and the work described in our first report. In the remainder of the report, we present the new work we have done this year, and outline plans for future work. May 27, 2025

Social Media Governance project

Social media platforms are one of the main vectors for AI influence in the modern world. In 2024, over 5 billion people were social media users, a number projected to rise to 6 billion by 2028 (Statista, 2024a); these users spent over two hours per day on social media (Statista, 2024b). Social media platforms are largely powered by AI systems, so attention to the AI systems used to drive these platforms is a central strand of any AI governance endeavour. GPAI has been working on social media governance since its inception: the Social Media Governance project has been running since the first round of GPAI projects in 2020. In this report, we summarise the work of the Social Media Governance project in 2024. The report is structured around the three main influences of AI on social media platforms. Recommender systems are AI systems that learn how to push content at platform users, through curation of their content feeds. We will discuss our work on recommender systems in Section 3. Harmful content classifiers are AI systems that learn how to withhold content from users, by blocking it or downranking it. We will discuss our work on harmful content classifiers in Section 4. Social media platforms are also a key medium for the dissemination of AI-generated content. We begin in Section 2 by discussing our work on AI-generated content, and how it can be identified.May 27, 2025

AI-Powered Immediate Response to Pandemics

May 21, 2025

Responsible AI Working Group Report

We are delighted to report on our mandate and mission to “foster and contribute to the responsible development, use and governance of human-centred AI systems, in congruence with the UN Sustainable Development Goals, ensuring diversity and inclusivity to promote a resilient society, in particular, in the interest of vulnerable and marginalised groups.” Our Expert Working Group considers that ensuring responsible and ethical AI is more than designing systems whose results can be trusted: it is about the way we design them, why we design them, and who is involved in designing them. Responsible AI is not, as some may claim, a way to give AI systems some kind of ‘responsibility’ for their actions and decisions, and in the process discharge people, governments and organizations of their responsibility. Rather, it is those who shape the AI tools who should take responsibility and act in accordance with the rule of law and in consideration of an ethical framework, which includes respect for human rights, in such a way that these systems can be trusted by society. To develop and use AI responsibly, we need to work towards technical, societal, institutional and legal methods and tools that provide concrete support to AI practitioners and deployers, as well as awareness and training that enable the participation of all. The aim is to ensure the alignment of AI systems with our societies’ principles, values, needs, and priorities, with the human being at the heart of the decisions and purposes in the design and use of AI. May 18, 2025

Responsible AI Strategy for the Environment

The Global Partnership on AI (GPAI) has put climate action and biodiversity preservation at the top of its agenda. As a general-purpose technology, artificial intelligence (AI) can be harnessed responsibly to accelerate positive environmental action. Since 2020, the ‘Responsible AI Strategy for the Environment’ (RAISE) project, under the leadership of the Responsible AI Expert Working Group, has been conducting important foundational work to understand both the opportunities and challenges of AI in relation to climate action and biodiversity preservation. Project RAISE has also taken steps to act on the recommendations of these foundational reports, including publication of AI readiness guidance booklets for net-zero, collaboration with the OECD to assess environmental impacts of AI compute and applications, and exploring the concept of a net-zero data space in collaboration with the UK. With the explosive popularization of generative AI methods in 2023, Project RAISE has also undertaken work to address the G7’s Hiroshima AI process, to facilitate discussions on generative AI, with the aim of leading to the delivery of practical projects. To further these mandates, on August 4th, 2023, Project RAISE held a virtual workshop, bringing together experts at the intersection of AI and the environment to co-design future approaches to international collaboration and practical action. The workshop structure is detailed in Section 1. May 18, 2025

Social Media Governance Project

The founding observation for our project is that social media platforms are one of the main channels through which AI systems influence people’s lives, and therefore influence countries and cultures. In 2022, the number of social media users worldwide was 4.59 billion (Statista, 2023a); the number is projected to be around 4.89 billion at present, or 59% of the world’s population. The average user spent over two and a half hours per day on social media in 2022, a figure which has been rising since 2012 (Statista, 2023b) and is projected to rise further. Crucially, the experience of a social media user, on any given platform, is heavily influenced by AI systems that run on that platform. May 18, 2025

Crowdsourcing the curation of the training set for harmful content classifiers used in social media

The world’s dominant social media platforms all run active content moderation programmes. They take seriously their responsibility to moderate the content of their users' posts, to keep their community of users safe. Moderation involves checking for ‘harmful content’ of various kinds, and taking various actions when it is found. Some content is removed if it is found to be illegal or if it violates the company’s published standards. Other content is left in place, but is moderated in less draconian ways, perhaps by flagging it with messages advising caution, or by downranking it in the platform’s recommender algorithm. This latter type of content is often termed borderline content, and has been the subject of much discussion. The scale of social media platforms means that content moderation processes must use automated tools, as well as human effort. AI tools are central in moderation processes, so content moderation is an important topic for our group at GPAI, which focusses on Social Media Governance. May 18, 2025
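As an illustration of the tiered moderation logic described above, the sketch below assumes a hypothetical classifier that outputs a harm probability; the thresholds and action names are invented for the example and do not reflect any platform's actual policy.

```python
# Illustrative sketch of tiered moderation decisions driven by an automated
# harmful-content classifier. Scores and thresholds are placeholders.
def moderation_action(harm_score: float,
                      remove_threshold: float = 0.9,
                      borderline_threshold: float = 0.6) -> str:
    """Map a classifier's harm probability to a moderation action."""
    if harm_score >= remove_threshold:
        return "remove"             # illegal content or clear standards violation
    if harm_score >= borderline_threshold:
        return "flag_and_downrank"  # 'borderline' content: kept, but demoted and flagged
    return "allow"

for score in (0.95, 0.7, 0.1):
    print(score, "->", moderation_action(score))
```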

Scaling Responsible AI Solutions: Challenges and Opportunities

Artificial intelligence (AI) seems to present solutions to many challenges across different domains. However, there is now a widespread understanding of the range of potential risks and harms to people and the planet that AI can produce if conceived, designed, and governed in irresponsible ways. In response to this, many proposals, frameworks and laws have been advanced for the responsible development and use of AI systems. In tandem, more and more AI ‘solutions’ are emerging around the world, which attempt to contribute to the public good, whilst upholding best-practice standards of responsibility. It is important that AI systems that meet responsible AI (RAI) best practices and have positive socio-environmental impacts are supported to grow and reach potential users and communities who could benefit from them. However, nascent AI projects have encountered challenges when it comes to practically implementing RAI principles, as well as scaling. Key RAI challenges include mitigating bias and discrimination, ensuring representativeness and contextual appropriateness, transparency and explainability of processes and outcomes, upholding human rights, and ensuring that AI does not reproduce or exacerbate inequities. Frameworks for RAI have proliferated, but tend to remain at a high level, without technical guidelines for implementation in various uses and contexts. At the same time, the process of scaling itself can introduce obstacles and complications to realising or preserving RAI adherence. May 18, 2025

Pandemic Resilience: Developing an AI-calibrated ensemble of models to inform decision making

This report explores the use of ensemble modeling of infectious diseases to enable better data-driven decisions and policies related to public health threats in the face of uncertainty. It demonstrates how Artificial Intelligence (AI)-driven techniques can automatically calibrate ensemble models consistently across multiple locations and models. The ensembling, calibration, and evidence-generation reported here was conducted by an interdisciplinary team recruited by the Pandemic Resilience project team via the Global Partnership on Artificial Intelligence (GPAI) Pandemic Resilience living repository. This diverse team co-developed and tested a collaborative ensemble model that assesses the level of use of Non-Pharmaceutical Interventions (NPIs) and predicts the consequent effect on both epidemic spread and economic indicators within specified locations. The disease of interest was COVID-19 and its variants. The development of the ensemble model was undertaken in five main phases from June 2022 to October 2023: 1. Definition of a standardized set of inputs and outputs; 2. Adaptation of individual models to the standard; 3. Development of a calibration framework for the ensemble; 4. Deployment and testing of the ensemble across different locations; 5. Automated calibration of the ensemble using a Genetic Algorithm (GA) metaheuristic optimization approach. Having constructed and tested the ensemble, the study team has prepared this report to share key findings about the use of such models and communicate key recommendations for governments and policymakers about their development and support. May 18, 2025
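As a rough illustration of the final phase above (GA-based calibration), the sketch below evolves a population of shared parameter values so that two toy models, run with the same parameters, jointly fit invented targets. The models, targets and hyperparameters are placeholders and do not represent the project's actual ensemble.

```python
# Minimal sketch of genetic-algorithm calibration of shared model parameters.
# The "models" and observed targets below are toy stand-ins.
import random

def model_a(beta, gamma):   # toy model 1: e.g. predicted peak infections
    return 1000 * beta / (gamma + 0.1)

def model_b(beta, gamma):   # toy model 2: different structure, same shared parameters
    return 900 * beta - 200 * gamma

OBSERVED = {"model_a": 4000.0, "model_b": 2500.0}

def fitness(params):
    """Calibration error: how far each model, run with the SAME shared
    parameters, falls from its observed target (lower is better)."""
    beta, gamma = params
    return (abs(model_a(beta, gamma) - OBSERVED["model_a"])
            + abs(model_b(beta, gamma) - OBSERVED["model_b"]))

def genetic_calibration(pop_size=50, generations=100, mutation=0.1):
    pop = [(random.uniform(0, 5), random.uniform(0, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                                   # selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            child = ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)           # crossover
            child = tuple(g + random.gauss(0, mutation) for g in child)  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

beta, gamma = genetic_calibration()
print(f"calibrated shared parameters: beta={beta:.2f}, gamma={gamma:.2f}")
```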

Towards Real Diversity and Gender Equality in Artificial Intelligence

This is an Advancement Report for the Global Partnership on Artificial Intelligence (GPAI) project “Towards Real Diversity and Gender Equality in Artificial Intelligence: Evidence-Based Promising Practices and Recommendations.” It describes, at a high level, the strategy, approach, and progress of the project thus far in its efforts to provide governments and other stakeholders of the artificial intelligence (AI) ecosystem with recommendations, tools, and promising practices to integrate Diversity and Gender Equality (DGE) considerations into the AI life cycle and related policy-making. The report starts with an overview of the human rights perspective, which serves as the framework upon which this project is building. By acknowledging domains where AI systems can pose risks and harms to global populations, and further, where they pose disproportionate risks and harms to women and other marginalized populations due to a lack of consideration for these groups throughout the AI life cycle, the need to address such inequalities becomes clear.May 18, 2025

AI for Net Zero Electricity

Enabling more variable renewable energy generation will be critical to delivering a low-cost transition to net zero, and diversifying in this way will require every electricity asset to be fully optimised to be competitive in the market. AI is well suited to optimisation and ensuring system flexibility. It offers the promise of radically improved forecasting of generation and demand, the optimisation of asset management and electricity trading, and efficient grid balancing through to optimised dispatch markets. It can also support increasing demands on electricity systems - from heat pumps, EVs, battery storage, distributed generation - and more efficient operation and maintenance of demand response assets. AI is already enabling progress across a wide range of use-cases in the electricity system, and its importance will only grow as more applications emerge. However, a major roadblock is that many electricity companies are not yet ready to apply AI in their operations. This booklet seeks to inform companies operating in the electricity sector on how they can prepare their organisations to apply AI to accelerate a low cost net zero transition. To support companies in assessing their current level of AI readiness and to map out areas for further investment, we provide an AI for Electricity Readiness Self-Assessment tool. This highlights five key themes that electricity companies can advance to become AI ready: AI opportunity identification, human capacity, data for AI, digital infrastructure and responsible AI governance. May 18, 2025

Responsible AI Working Group Report

Building on the preliminary findings presented at the 2021 Summit, the working group observed a pattern in the life cycle of the project’s development. During the first year (2021), the issues are defined and framed. In the second year (2022), there is a deeper dive into the topics and extensions are identified. The third year (2023) enables us to work concretely on partnerships and implementations that make the project more practical. Following the presentation of the working group outputs at the Summit 2021, the working group has continued to build momentum and scale up to deliver further impact on its selected projects. May 18, 2025

Transparency Mechanisms for Social Media Recommender Algorithms: From Proposals to Action

Social media platforms rely on several kinds of AI technology for their operation. Much of the appeal of social media platforms comes from their ability to deliver content that is tailored to individual users. This ability is provided in large part by AI systems called recommender systems: these systems are the focus of our project. Recommender systems curate the ‘content feeds’ of platform users, using machine learning techniques to tailor each user’s feed to the kinds of item they have engaged with in the past. They essentially function as a personalised newspaper editor for each user, choosing which items to present, and which to withhold. They rank amongst the most pervasive and influential AI systems in the world today. The starting point for our project is a concern that recommender systems may lead users in the direction of harmful content of various kinds. This concern is at origin a technical one, relating to the AI methods through which recommender systems learn. But it is also a social and political one, because the effects of recommender systems on platform users could potentially have a significant influence on currents of political opinion. At present, there is very little public information about the effects of recommender systems on platform users: we know very little about how information is disseminated to users on social media platforms. It is vital that governments, and the public, have more information about how recommender systems steer content to platform users, particularly in domains of harmful content. In the first phase of our project, we reviewed possible methods for studying the effects of recommender systems on user platform behaviour. We concluded that the best methods available for studying these effects are the methods that companies use themselves. These are methods that are only available internally to companies. We proposed transparency mechanisms, in which these company-internal methods are used to address questions in the public interest, about possible harmful effects of recommender systems. We focussed on the domain of Terrorist and Violent Extremist Content (TVEC), because this type of content is already the focus of discussion in several ongoing initiatives involving companies, including the Global Internet Forum to Counter Terrorism (GIFCT) and the Christchurch Call to Eliminate TVEC Online. Our proposal was for a form of fact-finding study, which we argued would surface relevant information about recommender system effects in this area, without compromising the rights of platform users, or the intellectual property of companies. We presented and argued for this proposed fact-finding study at last year’s GPAI Summit. Over the past year, our project has pursued the practical goal of piloting our proposed fact-finding study in one or more social media companies. This has involved discussions with several companies, often mediated by governments; and participation in several international initiatives relating to TVEC, in particular the Christchurch Call and GIFCT. At the recent Christchurch Call Summit, a scheme for running a pilot project of the kind we advocate was announced: the initiative involves two governments (the US and New Zealand) and two tech companies (Twitter and Microsoft), and centres on the trialling of ‘privacy-enhancing technologies’ developed by a third organisation, OpenMined.
In this report, we will summarise the discussions that led to this initiative, in the context of other ongoing discussions around transparency mechanisms for recommender systems. May 18, 2025

Biodiversity and Artificial Intelligence: Opportunities & Recommendations for Action

Biodiversity loss is one of the most critical issues facing humanity, requiring urgent and coordinated action. Despite ongoing conservation efforts, biodiversity has declined dramatically in recent decades. Artificial intelligence (AI) is one tool that offers opportunities to accelerate action on biodiversity conservation. However, it must be deployed in a way that supports a paradigm shift to new, sustainable models of development, rather than entrenching business as usual. Applications such as automated classification of species from citizen scientists and communities, automated monitoring of land use change, monitoring of fishing vessels, monitoring of the impact of different biodiversity policies, and the optimisation of biodiversity-positive business models for key sectors all enable enhanced transparency, accountability and action that can support biodiversity conservation. However, AI is not a silver bullet and needs to be deployed as part of wider applications and efforts that support action. May 18, 2025

Measuring the environmental impacts of artificial intelligence compute and applications

The green and digital “twin transitions” offer the promise of leveraging digital technologies for a sustainable future. As a general-purpose technology, artificial intelligence (AI) has the potential not just to promote economic growth and social well-being, but also to help achieve global sustainability goals. AI-enabled products and services are creating significant efficiency gains, helping to manage energy systems and achieve the deep cuts in greenhouse gas (GHG) emissions needed to meet net-zero targets. However, training and deploying AI systems can require massive amounts of computational resources with their own environmental impacts. The computational needs of AI systems are growing, raising sustainability concerns. While AI can be perceived as an abstract, non-tangible technical system, it is enabled by physical infrastructure and hardware, together with software, collectively known as “AI compute”. In the last decade, the computing needs of AI systems have grown dramatically, entering what some call the “Large-Scale Era” of compute. At the same time, according to the International Energy Agency (IEA), data centre energy use has remained flat at around 1% of global electricity demand, despite large growth in workloads and data traffic, of which AI is estimated to represent a small fraction. While this may point to hardware efficiency gains, some researchers note that AI compute demands have grown faster than hardware performance, bringing into question whether such efficiency gains can continue. The environmental impacts of AI compute and applications should be further measured and understood. Policy makers need accurate and reliable measures of AI’s environmental impacts to inform sustainable policy decisions. The 2010 OECD Recommendation on ICTs and the Environment encourages the development of comparable measures of the environmental impacts of information and communication technologies (ICTs). Further, the 2019 OECD Recommendation on Artificial Intelligence underlines that AI should support beneficial outcomes for people and the planet. The 2021 OECD Recommendation on Broadband Connectivity also stresses the need to minimise the negative environmental impacts of communication networks. Yet further efforts are needed to develop measurement approaches specifically focused on AI and its environmental impacts. The report defines AI compute as including one or more “stacks” (i.e. layers) of hardware and software used to support specialised AI workloads and applications in an efficient manner. This definition was developed by the OECD.AI Expert Group on AI Compute and Climate (the “Expert Group”) to meet the needs of both technical and policy communities. Informed by the Expert Group and experts involved in the Global Partnership on AI (GPAI), this report synthesises findings from a literature review, a public survey and expert interviews to assess how the environmental impacts of AI are currently measured. A number of indicators and measurement tools can help quantify the direct environmental impacts from AI compute, as well as the indirect environmental impacts from AI applications. The report distinguishes between direct and indirect positive and negative environmental impacts. Direct impacts stem from the AI compute resources lifecycle (i.e. the production, transport, operations and end-of-life stages). Analysis indicates that direct impacts are most often negative and stem from resource consumption, such as the use of water, energy and its associated GHG emissions, and other raw materials.
Indirect impacts result from AI applications and can be either positive, such as smart grid technology or digital twin simulations, or negative, such as unsustainable changes in consumption patterns. May 18, 2025
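As a simple illustration of the kind of direct-impact indicator discussed above, the sketch below estimates operational GHG emissions from energy use, data-centre overhead (PUE) and grid carbon intensity. The function name and all figures are placeholders for illustration, not measurements or methods taken from the report.

```python
# Back-of-the-envelope illustration of one direct-impact indicator:
# operational emissions from the energy used by AI compute.
def operational_emissions_kgco2e(power_draw_kw: float,
                                 hours: float,
                                 pue: float,
                                 grid_intensity_kgco2e_per_kwh: float) -> float:
    """Energy (kWh) scaled by data-centre overhead (PUE) and the
    carbon intensity of the local electricity grid."""
    energy_kwh = power_draw_kw * hours * pue
    return energy_kwh * grid_intensity_kgco2e_per_kwh

# Example: a hypothetical 10 kW training job running for 100 hours,
# PUE of 1.5, on a grid emitting 0.4 kgCO2e per kWh.
print(operational_emissions_kgco2e(10, 100, 1.5, 0.4))  # 600.0 kgCO2e
```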

AI for Public Good Drug Discovery: Advocacy Efforts and a Further Call to Action

This document presents an update on the progress made by the AI for Public Good Drug Discovery Project Advisory Group following the publication of the 2021 GPAI report, Artificial Intelligence for Public Good Drug Discovery: Recommendations for Policy Development. Our previous report presents a series of policy recommendations, with the aim of effectively leveraging AI for the purpose of combating global public health challenges, including the rise of antimicrobial resistance (AMR) and the threat of future pandemics. These recommendations were centered on three broad themes: 1. Government leadership on incentivizing or pursuing tasks related to these significant public health threats that are not sufficiently addressed by private industry. 2. Emboldening the uptake of AI throughout the drug research and development process. 3. Accelerating progress through promotion of Open Science and Open Data practices wherever feasible. Following the publication of our report, the focus of the Project Advisory Group turned to advocating for and supporting the implementation of our recommendations. This update document will summarize our progress along two separate approaches. The group has pursued extensive engagement with relevant stakeholders in both the public and private sectors, to spread awareness of our recommendations, encourage their uptake, and receive further feedback on how to maximize the effectiveness of our efforts. A summary of the results of these engagements makes up the first component of this report. Second, we outline a call to action for the broader AI community, to make a more direct impact on critical public health challenges. We outline a possible avenue, via a novel international initiative, tasked with deploying AI-enabling expertise and resources to impactful points of intervention throughout ongoing antibiotic R&D projects. May 18, 2025

Responsible AI Working Group Report

At Summit 2020, the Working Group committed to focus on developing enabling environments for AI technologies to achieve the UN Sustainable Development Goals and other key objectives. It decided to create five internal committees: 1. The Committee on Drug Discovery & Open Science (linked to SDG 3: Good health and wellbeing) 2. The Committee on Climate Change and Biodiversity Preservation (SDG 13: Climate action) 3. The Committee on AI & Education (SDG 4: Quality education) 4. The Committee on Governance & Transparency of Social Media (SDG 16: Peace, Justice, and Strong Institutions) 5. A Transversal Committee on Issues and Means of Governance (that could work on the certification, assessment and audit mechanisms used to evaluate AI systems) Following a process of ideation and engagement with GPAI’s Steering Committee and Council, the Working Group prioritised two projects in 2021, with a third (Drug Discovery and Open Science) being taken forward in collaboration with the AI and Pandemic Response Subgroup: 1. A Responsible AI Strategy for the Environment: the selection of this project recognises that the combined fight against climate change and preservation of biodiversity represents one of the most pressing challenges humanity is facing. All GPAI Member countries have put this at the top of their agenda and have taken strong commitments, especially through the Paris Agreement signed in 2015. As a response, this project aims to develop a global responsible AI adoption strategy for climate action and biodiversity preservation. The project committee (fully listed under Annex 1) is co-led by Raja Chatila and Nico Miailhe, and has collaborated with the Centre for AI and Climate Change and Climate Change AI. 2. Responsible AI for Social Media Governance: The selection of this project reflects a growing consensus that governments should review the effectiveness of current regulations on the influence of social media platforms on the dynamics of public discourse, so these processes are undertaken democratically and systematically, rather than solely by private companies. It responds to growing concerns about the level of misuse which can be harmful and serve to propagate disinformation, extremism, violence and many forms of harassment and abuse. The aim of the project is therefore to identify a set of technical and democratic methods that governments could adopt to safely ask a set of agreed questions and measurements about the effects of social media recommender systems. The GPAI project committee (fully listed under Annex 1) is co-led by Alistair Knott, Dino Pedreschi, and Kate Hannah, in collaboration with the Universities of Otago and Auckland. It builds upon the Christchurch Call (a commitment by Governments and tech companies to eliminate terrorist and violent extremist content online), with New Zealand as the first case study for the project. May 18, 2025

Transparency Mechanisms for Social Media Recommender Algorithms: From Proposals to Action

Social media platforms rely on several kinds of AI technology for their operation. Much of the appeal of social media platforms comes from their ability to deliver content that is tailored to individual users. This ability is provided in large part by AI systems called recommender systems: these systems are the focus of our project. Recommender systems curate the ‘content feeds’ of platform users, using machine learning techniques to tailor each user’s feed to the kinds of item they have engaged with in the past. They essentially function as a personalised newspaper editor for each user, choosing which items to present, and which to withhold. They rank amongst the most pervasive and influential AI systems in the world today. The starting point for our project is a concern that recommender systems may lead users in the direction of harmful content of various kinds. This concern is at origin a technical one, relating to the AI methods through which recommender systems learn. But it is also a social and political one, because the effects of recommender systems on platform users could potentially have a significant influence on currents of political opinion. At present, there is very little public information about the effects of recommender systems on platform users: we know very little about how information is disseminated to users on social media platforms. It is vital that governments, and the public, have more information about how recommender systems steer content to platform users, particularly in domains of harmful content. In the first phase of our project, we reviewed possible methods for studying the effects of recommender systems on user platform behaviour. We concluded that the best methods available for studying these effects are the methods that companies use themselves. These are methods that are only available internally to companies. We proposed transparency mechanisms, in which these company-internal methods are used to address questions in the public interest, about possible harmful effects of recommender systems. We focussed on the domain of Terrorist and Violent Extremist Content (TVEC), because this type of content is already the focus of discussion in several ongoing initiatives involving companies, including the Global Internet Forum to Counter Terrorism (GIFCT) and the Christchurch Call to Eliminate TVEC Online. Our proposal was for a form of fact-finding study, which we argued would surface relevant information about recommender system effects in this area, without compromising the rights of platform users, or the intellectual property of companies. We presented and argued for this proposed fact-finding study at last year’s GPAI Summit. May 18, 2025

Climate Change and AI: Recommendations for Government Action

Climate change is one of the most pressing issues of our time, requiring rapid action spanning many communities, approaches, and tools. Artificial intelligence (AI) has been proposed as one such tool, with significant opportunities to accelerate climate action via applications such as forecasting solar power production, optimizing building heating and cooling systems, pinpointing deforestation from satellite imagery, and analyzing corporate financial disclosures for climate-relevant information. At the same time, AI is a general-purpose technology with many applications across society, which means it has also been applied in ways that impede climate action both through immediate effects and broader systemic effects. In this report, we provide actionable recommendations as to how governments can support the responsible use of AI in the context of climate change. These recommendations were obtained via consultation with a broad set of stakeholders, and span three primary categories: (a) supporting the responsible use of AI for climate change mitigation and adaptation, (b) reducing the negative impacts of AI where it may be used in ways that are incompatible with climate goals, and (c) building relevant implementation, evaluation, and governance capabilities for and among a wide range of entities. May 18, 2025

Responsible Development, Use and Governance of AI Working Group Report

RAI has 30 members. Its international experts come from various fields, something which favors robust discussions and the emergence of diverse viewpoints. More precisely, 14 members of RAI come from the technical world (e.g. machine learning, information technologies), whereas 16 come from the social and human sciences sector and fields like communications, anthropology, literature, management, history, psychology, philosophy, international affairs, international development, journalism, economics, and political science. 40% of RAI's members are women, a number which we’ll work to increase in the future. Most members (63%) come from the academic sector, but 17% work in the private sector, 13% for nonprofits and 7% in the Public Sector. A better balance should be achieved in coming months and years as we believe that the collaboration of all stakeholders will be necessary to ensure AI is produced and used in a responsible manner. RAI also represents an interesting diversity of countries, although more countries and international organizations should be represented in the short and medium term, especially countries and entities from the Global South. As of November 30, members were based in 17 countries, that is Argentina, Australia, Canada (2 people), France (3), Germany (2), India (2), Italy, Japan (2), Korea, Mexico (2), the Netherlands, New Zealand (2), Singapore, Slovenia (2), Sweden, the United Kingdom (3), and the USA (3). These members have been designated by the 15 founding members of GPAI or recommended by UNESCO. It’s worth mentioning that members are designated by GPAI’s member countries or recommended by international institutions, but act with full independence inside RAI. Finally, 7 additional specialists take part in RAI's activities as observers. One of them is a representative of the OECD, a strategic partner of GPAI, and another one a representative of a panel of experts that advises the OECD. May 18, 2025

Areas for future action in the responsible AI ecosystem

Over the past decade, Artificial Intelligence (AI) has prominently entered the public consciousness and debate. As a general-purpose technology, it has unprecedented potential to advance societal well-being and economic progress, and to address many of the most pressing challenges of our times. Yet it also comes with significant risks, and the decisions and mechanisms that are taken and designed today will set the course for decades to come. It is in this context that the Global Partnership on AI (GPAI) was formed, to help promote and foster the responsible development and use of AI globally, in line with societal values, preferences and needs. This report, produced by The Future Society in collaboration with the GPAI Responsible Development, Use and Governance of AI Working Group (“Responsible AI Working Group” or “RAIWG”), serves as a first step in supporting GPAI’s mission. In preparation for the first GPAI plenary in December 2020, it aims to provide a high-level review of the landscape, an analysis of opportunities and gaps by delving into a subset of diverse initiatives, and recommendations that may steer the future agenda of the GPAI Responsible Development, Use and Governance of AI Working Group. May 18, 2025


03 – AI for net zero: Assessing readiness for AI

This guide seeks to inform organisations on how they can use AI to transition to net zero at low cost. It provides a series of checklists to help organisations understand where they are on this journey. These checklists should be followed no matter which sector your organisation is in. While the guide is applicable to any industry, four chosen “case study” sectors illustrate how this can be done at the end of the guide, including a summary of AI suppliers per sector: electricity, agriculture, foundation industries and transport. To support companies in assessing their current level of AI readiness and to map out areas for further investment, we provide an AI Readiness Self-Assessment tool. This highlights five key themes that companies can advance to become AI ready: AI opportunity identification, human capacity, data for AI, digital infrastructure and responsible AI governance. These key aspects for AI readiness were identified by industry and AI experts. We summarise the key recommendations below; however, a full self-assessment is recommended to identify all AI readiness requirements. December 3, 2024


02 – Social media governance project: Summary of work in 2024

Social media platforms are one of the main vectors for AI influence in the modern world. In 2024, over 5 billion people were social media users, a number projected to rise to 6 billion by 2028 (Statista, 2024a); these users spent over two hours per day on social media (Statista, 2024b). Social media platforms are largely powered by AI systems, so attention to the AI systems used to drive these platforms is a central strand of any AI governance endeavour. GPAI has been working on social media governance since its inception: the Social Media Governance project has been running since the first round of GPAI projects in 2020. In this report, we summarise the work of the Social Media Governance project in 2024. The report is structured around the three main influences of AI on social media platforms. Recommender systems are AI systems that learn how to push content at platform users, through curation of their content feeds. We will discuss our work on recommender systems in Section 3. Harmful content classifiers are AI systems that learn how to withhold content from users, by blocking it or downranking it. We will discuss our work on harmful content classifiers in Section 4. Social media platforms are also a key medium for the dissemination of AI-generated content. We begin in Section 2 by discussing our work on AI-generated content, and how it can be identified.December 3, 2024


01 – Crowdsourcing annotations for harmful content classifiers: An update from GPAI’s pilot project on political hate speech in India

This report is a sequel to the report we gave at last year’s GPAI Summit in Delhi (GPAI, 2023), which introduced our harmful content classification project and presented some initial results. We begin in Section 2 by summarising the aims of the project, and the work described in our first report. In the remainder of the report, we present the new work we have done this year, and outline plans for future work. December 3, 2024


19 – Pandemic resilience: Case studies of an AI-calibrated ensemble of models to inform decision-making

This report from Global Partnership on Artificial Intelligence (GPAI)’s Pandemic Resilience project follows its 2023 report and is focused on practically implementing the concepts previously developed by the project team. Indeed, the 2023 report laid the foundation for this research while presenting recommendations on various approaches that aligned pandemic modelling with responsible Artificial Intelligence (AI). The 2023 report showcased a calibration framework approach and an ensemble modelling concept, focusing on the added value and pertinence of both consistent calibration and ensembling; that is, ensuring models are consistent in shared parameter values while using the strengths of different models and creating a digital “task force”. The combination of the calibration framework and ensemble model encourages and enables modellers from different locations and backgrounds to work together by using standardised versions of their work. Although there has been substantial modelling activity of Non-Pharmaceutical Interventions (NPIs) for COVID-19, this activity has been fragmented across different countries, with mixed access and sharing of data and models. This report documents a prototype calibration framework – based on a multi-objective genetic algorithm – that simultaneously calibrates multiple models across different locations and ensures consistent parameter values across models. The resulting, calibrated models are then combined using an ensemble modelling concept that provides more accurate model results than any of the models do individually. Hence, consistent models for multiple locations are created and can be shared easily with these locations. In addition, diverse perspectives from the models can provide more accurate results for each location through the ensemble model. December 3, 2024
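The ensembling step can be illustrated with a minimal sketch: a weighted combination of forecasts from individually calibrated models. The model names, outputs and weights below are invented for the example and are not taken from the report.

```python
# Minimal sketch of combining calibrated models into an ensemble forecast.
# Model names, outputs and weights are placeholders for illustration.
def ensemble_forecast(model_outputs: dict[str, float],
                      weights: dict[str, float]) -> float:
    """Weighted average of individual model forecasts; weights might be set
    from each model's historical accuracy at a given location."""
    total_weight = sum(weights[name] for name in model_outputs)
    return sum(model_outputs[name] * weights[name] for name in model_outputs) / total_weight

# Example: three calibrated models forecasting weekly cases for one location.
outputs = {"seir": 1200.0, "agent_based": 1500.0, "statistical": 1350.0}
weights = {"seir": 0.5, "agent_based": 0.2, "statistical": 0.3}
print(round(ensemble_forecast(outputs, weights)))  # 1305
```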


18 – Digital ecosystems that empower communities: Exploring case studies to develop theory and templates for technology stacks

This report is the first from Global Partnership on Artificial Intelligence (GPAI)’s Digital Ecosystem project. The concept of digital ecosystems aims to empower communities with digital technologies to enhance their capacity to solve problems and address challenging issues they face. This report presents and discusses the digital ecosystems concept, lays out a proposed methodology to explore the concept further using case studies, and then presents some case studies from various communities gathered by the project team. The report concludes with some suggested future research directions and observations from the project’s work over 2024. December 3, 2024


17 – Scaling responsible AI solutions – Building an international community of practice and knowledge-sharing

This report marks the conclusion of the second year of the Scaling Responsible Artificial Intelligence Solutions (SRAIS) project, an initiative of the Responsible AI (RAI) working group of the Global Partnership on Artificial Intelligence (GPAI). In 2024 the project has grown in scope and impact, and has taken strides towards consolidating a global network of collaboration and knowledge-sharing. This network is focused not only on responsibility in the development of AI-based systems, but more uniquely on the intersection between scalability and responsibility. The process of scaling an AI-based application presents distinct challenges in terms of adherence to RAI principles. These include the need for responsible approaches to data and cultural integration in new places of operation; the risk of bias amplification as an application gains a larger and more diverse user base; the additional resource demands of responsible technical and operational expansion; the need to navigate varying legal and regulatory frameworks; and the imperative of assessing and mitigating the potential complex societal, developmental and environmental impacts of a given AI-based system in all of its intended use contexts.December 3, 2024


16 – Algorithmic Transparency in the Public Sector: Recommendations for Governments to Enhance the Transparency of Public Algorithms

This report is a product of the "Algorithmic Transparency in the Public Sector" project developed by Global Partnership on Artificial Intelligence (GPAI) experts. The project is carried out by GPAI experts from the Responsible Artificial Intelligence and Data Governance Working Groups. The project’s overall objective is to study algorithmic transparency in the public sector, with an emphasis on evaluating reactive and proactive transparency instruments that can enable governments to comply with algorithmic transparency principles, standards, and rules. The project examines the strengths and weaknesses of these instruments, the challenges for their construction, their various uses and users, the costs, how the instruments complement one another, and their possible contributions to transparency and various objectives (e.g., explainability, accountability). This report analyses the findings of the previous studies (GPAI, 2024; GPAI, forthcoming) and, based on that, presents recommendations for governments regarding the use of instruments to comply with algorithmic transparency principles, standards, and rules. The recommendations will include practical tools such as decision trees and benchmarks to compare the strengths and weaknesses of different transparency instruments. December 3, 2024


The technological readiness level (TRL) of 66 initiatives grouped based on the clustering framework described in Responsible AI in Pandemic Response

The technological readiness level (TRL) of 66 initiatives. Initiatives are grouped based on the clustering framework described in Responsible AI in Pandemic Response (The Future Society, 2020). Visualization by Bruno Kunzler, TFS Affiliate. April 6, 2022

Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD, the GPAI or their member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.