
Responsible AI Working Group
The Global Partnership on AI (GPAI)
The Global Partnership on AI (GPAI) has a mission to “support the development and use of AI based on human rights, inclusion, diversity, innovation, and economic growth, while seeking to address the United Nations Sustainable Development Goals”. Launched in June 2020, it is the first intergovernmental institution with a permanent focus on AI, with founding members that are home to 2.5 billion people. It has ambitions to scale, particularly to include low- and middle-income countries, support the UN Sustainable Development Goals and help fully realise the OECD AI Recommendation.
Responsible AI Working Group's documents
Algorithmic Transparency in the Public Sector: Report on the State of Algorithmic Transparency Instruments
This report reviews algorithmic transparency instruments in the public sector, focusing on repositories or registers of public algorithms. It is an updated version of the draft report of the “Algorithmic Transparency in the Public Sector” project, prepared by experts from the Global Partnership on Artificial Intelligence (GPAI) and published in May 2024. GPAI experts from the Responsible AI and Data Governance Working Groups have contributed to this project.— June 26, 2025
Towards Substantive Equality in Artificial Intelligence (AI): Transformative AI Policies for Gender Equality and Diversity
The rapid advancement of artificial intelligence (AI) is transforming societies and driving economic growth, with great potential to improve lives and socio-economic development worldwide. However, it risks exacerbating existing inequalities by mirroring and magnifying societal biases, especially those affecting historically marginalised groups. Challenges such as discrimination, unfairness, bias and harmful stereotypes persist throughout the AI lifecycle and affect many aspects of human life. Robust regulatory frameworks are urgently needed to mitigate these disparities, prevent harm and work towards substantive equality and diversity in AI ecosystems.— June 26, 2025
Algorithmic transparency in the public sector: A state-of-the-art report of algorithmic transparency instruments
This report provides an overview of algorithmic transparency instruments in the public sector, focusing on repositories or registers of public algorithms. It is a preliminary report of the “Algorithmic Transparency in the Public Sector” project developed by experts from the Global Partnership on Artificial Intelligence (GPAI). In the project's subsequent phases, additional reports will be produced based on three in-depth case studies of public algorithmic repositories. The case studies will include interviews with diverse stakeholders to evaluate this type of transparency instrument. GPAI experts from the Responsible AI and Data Governance Working Groups are contributing to this project. The project's objective is to study algorithmic transparency in the public sector, with an emphasis on assessing transparency instruments, both reactive and proactive, that may allow governments to comply with algorithmic transparency principles, standards, and rules. The project will study the strengths and weaknesses of these instruments, the challenges of building them, their diverse usages and users, their costs, how the instruments complement each other, and their potential contributions to transparency and to different goals (e.g., explainability, accountability). — May 27, 2025
Policy Guide for Implementing Transformative AI Policy Recommendations
The purpose of this guide is to support policy makers and regulatory bodies in implementing key recommendations from the report Towards Substantive Equality in AI: Transformative AI Policy for Gender Equality and Diversity. The guide aims to assist national policy makers – in their duty to protect, promote and fulfil human rights – to integrate transformative AI policies into broader governmental frameworks and practices.— May 27, 2025
Towards Substantive Equality in Artificial Intelligence: Transformative AI Policy for Gender Equality and Diversity (November 2024)
The rapid advancement of artificial intelligence (AI) is transforming societies and driving economic growth, holding great potential to improve lives and livelihoods globally. However, it risks exacerbating existing inequalities by mirroring and magnifying societal biases, particularly those affecting historically marginalised groups. Challenges such as discrimination, unfairness, bias and harmful stereotypes persist throughout the AI lifecycle, impacting many aspects of human life. Robust regulatory frameworks are urgently needed to mitigate these disparities, prevent harm and work towards substantive equality and diversity in AI ecosystems. Towards Substantive Equality in Artificial Intelligence: Transformative AI Policy for Gender Equality and Diversity aims to strengthen the capacity of States and other stakeholders to foster inclusive, equitable and just AI ecosystems. It examines promising practices, provides policy insights and offers actionable recommendations to enhance gender equality and diversity in AI and related policy making. The Policy Guide for Implementing Transformative AI Policy Recommendations provides additional guidance in implementation.— May 27, 2025
AI for Net Zero: Assessing Readiness for AI (November 2024)
The objective of this booklet is to help companies understand the prerequisites for deploying AI in support of a low-cost transition to net zero. AI can accelerate the transition to net zero. In this booklet, we refer to AI as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI is becoming increasingly useful because it can help identify subtle patterns in very large amounts of data, allowing it to optimise and automate complex systems. However, it also has weaknesses: its outputs can be strongly influenced by poor or biased data; it is not always clear how it arrives at its conclusions; and any answers it offers are only as good as the questions it is asked.— May 27, 2025
Transparency Mechanisms for Social Media Recommender Algorithms: From Proposals to Action
Social media platforms rely on several kinds of AI technology for their operation. Much of the appeal of social media platforms comes from their ability to deliver content that is tailored to individual users. This ability is provided in large part by AI systems called recommender systems: these systems are the focus of our project. Recommender systems curate the ‘content feeds’ of platform users, using machine learning techniques to tailor each user’s feed to the kinds of items they have engaged with in the past. They essentially function as a personalised newspaper editor for each user, choosing which items to present and which to withhold. They rank amongst the most pervasive and influential AI systems in the world today. The starting point for our project is a concern that recommender systems may lead users in the direction of harmful content of various kinds. This concern is, at its origin, a technical one, relating to the AI methods through which recommender systems learn. But it is also a social and political one, because the effects of recommender systems on platform users could potentially have a significant influence on currents of political opinion. At present, there is very little public information about the effects of recommender systems on platform users: we know very little about how information is disseminated to users on social media platforms. It is vital that governments, and the public, have more information about how recommender systems steer content to platform users, particularly in domains of harmful content. In the first phase of our project, we reviewed possible methods for studying the effects of recommender systems on user platform behaviour. We concluded that the best methods available for studying these effects are the methods that companies use themselves. These methods are only available internally to companies. We proposed transparency mechanisms, in which these company-internal methods are used to address questions in the public interest about possible harmful effects of recommender systems.— May 27, 2025
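To make the engagement-driven feedback loop described above concrete, the minimal sketch below shows, in schematic Python, how a feed can be ranked by the topics a user has interacted with most in the past. The classes, function names and scoring rule are illustrative assumptions for this summary, not GPAI's analysis or any platform's actual recommender system.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    topic: str

def score_item(item: Item, engagement_by_topic: dict) -> float:
    # Score an item by how often this user has engaged with its topic before.
    return float(engagement_by_topic.get(item.topic, 0))

def rank_feed(candidates: list, engagement_by_topic: dict) -> list:
    # Order candidate items so topics the user engaged with most come first.
    return sorted(
        candidates,
        key=lambda it: score_item(it, engagement_by_topic),
        reverse=True,
    )

# A user who has mostly engaged with one topic keeps seeing it first,
# which is the amplification dynamic the project is concerned about.
history = {"politics": 12, "sports": 3}
feed = rank_feed(
    [Item("a", "sports"), Item("b", "politics"), Item("c", "cooking")],
    history,
)
print([it.item_id for it in feed])  # ['b', 'a', 'c']
```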
State-of-the-art Foundation AI Models Should be Accompanied by Detection Mechanisms as a Condition of Public Release
Foundation models represent a dramatic advance for the state of the art in Artificial Intelligence (AI). In current discussions of AI, a foundation model is defined very generally as an AI model that is trained on large amounts of data, typically using self-supervision, and that can be adapted, or ‘fine-tuned’, to a wide range of downstream tasks (see, e.g., Bommasani et al., 2022). In this paper, we will argue for a specific regulatory mechanism that governs the release of new state-of-the-art foundation models. For concreteness, our arguments will sometimes make reference to a central ingredient in many current foundation models, namely large language models (LLMs), which have the ability to generate natural language text as output. Many LLMs are foundation models in their own right: for instance, BERT and GPT-3 are LLMs and also canonical examples of foundation models. The discussions in this paper will sometimes refer to LLMs and the text they generate, to give concrete examples of the content that foundation models can produce and the issues that arise for these models. Our broad argument is about foundation models generally, not just about LLMs. But we will begin by introducing LLMs and then show how LLMs can provide the core of foundation models with wider functionality.— May 27, 2025
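As a rough illustration of what a detection mechanism of this kind might look like in practice, the sketch below flags a passage as likely model-generated when its average per-token log-probability under the releasing model is unusually high. The function model_token_log_probs, the threshold and the decision rule are assumptions made for illustration only; they are not the specific mechanism the paper proposes.

```python
from typing import Callable, Sequence

def likely_model_generated(
    tokens: Sequence[str],
    model_token_log_probs: Callable[[Sequence[str]], Sequence[float]],
    threshold: float = -2.5,
) -> bool:
    # Hypothetical detector: text that the model itself assigns unusually
    # high probability to is flagged as likely having been generated by it.
    # In a real deployment the model provider, who controls the weights,
    # would supply model_token_log_probs alongside the public release.
    log_probs = list(model_token_log_probs(tokens))
    if not log_probs:
        return False
    avg_log_prob = sum(log_probs) / len(log_probs)
    return avg_log_prob > threshold
```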
Social Media Governance Policy Brief: How the DSA can enable a public science of digital platform social impacts (policy brief)
A key aim of the EU’s Digital Services Act (DSA, 2022) is to improve transparency about the operation of very large online platforms (VLOPs): to shed light on how the algorithms and processes deployed by these platforms influence the way information flows in our society, and influence individual platform users, in potentially harmful ways. The DSA provides two particular mechanisms for delivering this transparency. One involves access to company data and processes by external auditors: each VLOP must undergo regular independent audits, to check for compliance with its obligations under the DSA. Another involves access to company data and processes by vetted independent researchers, to ensure potential risks to fundamental rights can be identified. This allows DSA-relevant aspects of company operation to be further studied, using data and methods that are only available within companies. Each type of access is governed by a Delegated Regulation. The Delegated Regulation on Auditing has already been released (DSA, 2023). The Delegated Regulation for Data Access for External Researchers is currently under discussion. Our briefing note contributes to this latter discussion.— May 27, 2025
Crowdsourcing annotations for harmful content classifiers: An update from GPAI’s pilot project on political hate speech in India
This report is a sequel to the report we presented at last year’s GPAI Summit in Delhi (GPAI, 2023), which introduced our harmful content classification project and presented some initial results. We begin in Section 2 by summarising the aims of the project and the work described in our first report. In the remainder of the report, we present the new work we have done this year and outline plans for future work.— May 27, 2025
Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD, the GPAI or their member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.