
Responsible AI Working Group
The Global Partnership on AI (GPAI)
The Global Partnership on AI (GPAI) has a mission to “support the development and use of AI based on human rights, inclusion, diversity, innovation, and economic growth, while seeking to address the United Nations Sustainable Development Goals”. Launched in June 2020, it is the first intergovernmental institution with a permanent focus on AI, with a founding membership covering 2.5 billion people. It has ambitions to scale, particularly to include low- and middle-income countries, support the UN Sustainable Development Goals, and help fully realise the OECD AI Recommendation.
Responsible AI Working Group's documents
Intergovernmental
03 – AI for net zero: Assessing readiness for AI
This guide seeks to inform organisations about how they can use AI to transition to net zero at low cost. It provides a series of checklists to help organisations understand where they are on this journey; these checklists apply regardless of sector. While the guide is applicable to any industry, four chosen “case study” sectors at the end of the guide illustrate how this can be done, including a summary of AI suppliers per sector:
● Electricity
● Agriculture
● Foundation industries
● Transport
To support companies in assessing their current level of AI readiness and to map out areas for further investment, we provide an AI Readiness Self-Assessment tool. This highlights five key themes that companies can advance to become AI ready: AI opportunity identification, human capacity, data for AI, digital infrastructure, and responsible AI governance. These key aspects of AI readiness were identified by industry and AI experts. We summarise the key recommendations below; however, a full self-assessment is recommended to identify all AI readiness requirements.— December 3, 2024
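To illustrate how a theme-based self-assessment of this kind can be scored, here is a minimal sketch. It is not GPAI's actual tool: the 1–5 scale, the gap threshold, and the example responses are all invented; only the five theme names come from the guide.

```python
# Illustrative sketch (not GPAI's actual tool): scoring a self-assessment
# across the five AI-readiness themes named in the guide. The 1-5 scale,
# the gap threshold, and the example responses are hypothetical.

THEMES = [
    "AI opportunity identification",
    "Human capacity",
    "Data for AI",
    "Digital infrastructure",
    "Responsible AI governance",
]

def readiness_summary(scores: dict) -> dict:
    """Average theme scores (1-5) and flag low-scoring themes as gaps."""
    for theme in THEMES:
        if theme not in scores:
            raise ValueError(f"missing score for theme: {theme}")
        if not 1 <= scores[theme] <= 5:
            raise ValueError(f"score for {theme} must be between 1 and 5")
    overall = sum(scores[t] for t in THEMES) / len(THEMES)
    gaps = [t for t in THEMES if scores[t] <= 2]  # priority investment areas
    return {"overall": round(overall, 2), "priority_gaps": gaps}

example = {
    "AI opportunity identification": 4,
    "Human capacity": 2,
    "Data for AI": 3,
    "Digital infrastructure": 5,
    "Responsible AI governance": 2,
}
print(readiness_summary(example))
```

A real assessment would of course weight themes and questions differently; the point is only that scoring per theme makes investment gaps directly comparable.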
Intergovernmental
02 – Social media governance project: Summary of work in 2024
Social media platforms are one of the main vectors for AI influence in the modern world. In 2024, over 5 billion people were social media users, a number projected to rise to 6 billion by 2028 (Statista, 2024a); these users spent over two hours per day on social media (Statista, 2024b). Social media platforms are largely powered by AI systems, so attention to the AI systems used to drive these platforms is a central strand of any AI governance endeavour. GPAI has been working on social media governance since its inception: the Social Media Governance project has been running since the first round of GPAI projects in 2020. In this report, we summarise the work of the Social Media Governance project in 2024. The report is structured around the three main influences of AI on social media platforms. Recommender systems are AI systems that learn how to push content at platform users, through curation of their content feeds. We will discuss our work on recommender systems in Section 3. Harmful content classifiers are AI systems that learn how to withhold content from users, by blocking it or downranking it. We will discuss our work on harmful content classifiers in Section 4. Social media platforms are also a key medium for the dissemination of AI-generated content. We begin in Section 2 by discussing our work on AI-generated content, and how it can be identified.— December 3, 2024
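The two AI roles described above — recommenders that curate feeds and classifiers that block or downrank harmful content — can be illustrated with a toy sketch. This is not any platform's actual system; the item data, scores, and thresholds are invented for the example.

```python
# Toy illustration (not any platform's actual system) of the two AI roles
# described above: a recommender that ranks items by predicted engagement,
# and a harmful-content classifier that blocks or downranks flagged items.
# All item data and score thresholds are invented for the example.

def rank_feed(items, harm_threshold=0.8, downrank_threshold=0.5):
    """Drop items the classifier blocks, penalise borderline ones,
    then sort the rest by (adjusted) engagement score."""
    feed = []
    for item in items:
        if item["harm_score"] >= harm_threshold:
            continue  # classifier blocks clearly harmful content
        score = item["engagement"]
        if item["harm_score"] >= downrank_threshold:
            score *= 0.5  # downrank borderline content
        feed.append((score, item["id"]))
    return [item_id for _, item_id in sorted(feed, reverse=True)]

items = [
    {"id": "a", "engagement": 0.9, "harm_score": 0.1},
    {"id": "b", "engagement": 0.8, "harm_score": 0.9},  # blocked
    {"id": "c", "engagement": 0.7, "harm_score": 0.6},  # downranked
    {"id": "d", "engagement": 0.3, "harm_score": 0.0},
]
print(rank_feed(items))
```

The sketch makes the governance point concrete: the recommender and the classifier interact, so decisions about thresholds and downranking factors directly shape what users see.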
Intergovernmental
01 – Crowdsourcing annotations for harmful content classifiers: An update from GPAI's pilot project on political hate speech in India
This report is a sequel to the report we gave at last year’s GPAI Summit in Delhi (GPAI, 2023), which introduced our harmful content classification project and presented some initial results. We begin in Section 2 by summarising the aims of the project and the work described in our first report. In the remainder of the report, we present the new work we have done this year and outline plans for future work.— December 3, 2024
Intergovernmental
19 – Pandemic resilience case studies of an AI-calibrated ensemble of models to inform decision-making
This report from Global Partnership on Artificial Intelligence (GPAI)’s Pandemic Resilience project follows its 2023 report and is focused on practically implementing the concepts previously developed by the project team. Indeed, the 2023 report laid the foundation for this research while presenting recommendations on various approaches that aligned pandemic modelling with responsible Artificial Intelligence (AI). The 2023 report showcased a calibration framework approach and an ensemble modelling concept, focusing on the added value and pertinence of both consistent calibration and ensembling; that is, ensuring models are consistent in shared parameter values while using the strengths of different models and creating a digital “task force”. The combination of the calibration framework and ensemble model encourages and enables modellers from different locations and backgrounds to work together by using standardised versions of their work. Although there has been substantial modelling activity of Non-Pharmaceutical Interventions (NPIs) for COVID-19, this activity has been fragmented across different countries, with mixed access and sharing of data and models. This report documents a prototype calibration framework – based on a multi-objective genetic algorithm – that simultaneously calibrates multiple models across different locations and ensures consistent parameter values across models. The resulting calibrated models are then combined using an ensemble modelling concept that provides more accurate model results than any of the models do individually. Hence, consistent models for multiple locations are created and can be shared easily with these locations. In addition, diverse perspectives from the models can provide more accurate results for each location through the ensemble model.— December 3, 2024
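The two ideas in this abstract — calibrating a shared parameter consistently across several models, then averaging the calibrated models into an ensemble — can be sketched minimally. The models and data below are invented, and a brute-force grid search stands in for the report's multi-objective genetic algorithm.

```python
# Minimal sketch of consistent calibration plus ensembling, with invented
# toy models and data. A grid search stands in for the report's
# multi-objective genetic algorithm; this is not the project's code.

def model_a(beta, t):  # toy growth model for location A
    return 100 * (1 + beta) ** t

def model_b(beta, t):  # toy growth model with a different baseline, location B
    return 80 * (1 + beta) ** t

observed = {  # invented "case counts" at t = 0, 1, 2 for each location
    model_a: [100, 110, 121],
    model_b: [80, 88, 96.8],
}

def total_error(beta):
    """Sum of squared errors across ALL models: the shared parameter
    beta must fit every location at once (consistent calibration)."""
    return sum(
        (m(beta, t) - obs[t]) ** 2
        for m, obs in observed.items()
        for t in range(len(obs))
    )

# Grid search over candidate shared growth rates (GA stand-in).
candidates = [i / 100 for i in range(0, 31)]
beta_star = min(candidates, key=total_error)

def ensemble(t):
    """Combine the calibrated models by averaging their predictions."""
    preds = [m(beta_star, t) for m in observed]
    return sum(preds) / len(preds)

print(beta_star)   # both toy series grow 10% per step
print(ensemble(1))
```

Because both locations' data share the same growth rate here, one shared parameter fits both models exactly; in practice the genetic algorithm trades off fit across locations, and the ensemble smooths over each model's individual weaknesses.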
Intergovernmental
18 – Digital ecosystems that empower communities: Exploring case studies to develop theory and templates for technology stacks
This report is the first from Global Partnership on Artificial Intelligence (GPAI)’s Digital Ecosystem project. The concept of digital ecosystems aims to empower communities with digital technologies to enhance their capacity to solve problems and address challenging issues they face. This report presents and discusses the digital ecosystems concept, lays out a proposed methodology to explore the concept further using case studies, and then presents some case studies from various communities gathered by the project team. The report concludes with some suggested future research directions and observations from the project’s work over 2024. — December 3, 2024
Intergovernmental
17 – Scaling responsible AI solutions – Building an international community of practice and knowledge-sharing
This report marks the conclusion of the second year of the Scaling Responsible Artificial Intelligence Solutions (SRAIS) project, an initiative of the Responsible AI (RAI) working group of the Global Partnership on Artificial Intelligence (GPAI). In 2024 the project has grown in scope and impact, and has taken strides towards consolidating a global network of collaboration and knowledge-sharing. This network is focused not only on responsibility in the development of AI-based systems, but more uniquely on the intersection between scalability and responsibility. The process of scaling an AI-based application presents distinct challenges in terms of adherence to RAI principles. These include the need for responsible approaches to data and cultural integration in new places of operation; the risk of bias amplification as an application gains a larger and more diverse user base; the additional resource demands of responsible technical and operational expansion; the need to navigate varying legal and regulatory frameworks; and the imperative of assessing and mitigating the potential complex societal, developmental and environmental impacts of a given AI-based system in all of its intended use contexts.— December 3, 2024
Intergovernmental
16 – Algorithmic Transparency in the Public Sector: Recommendations for Governments to Enhance the Transparency of Public Algorithms
This report is a product of the "Algorithmic Transparency in the Public Sector" project developed by Global Partnership on Artificial Intelligence (GPAI) experts. The project is carried out by GPAI experts from the Responsible Artificial Intelligence and Data Governance Working Groups. The project’s overall objective is to study algorithmic transparency in the public sector, with an emphasis on evaluating the reactive and proactive transparency instruments that can enable governments to comply with algorithmic transparency principles, standards, and rules. The project examines the strengths and weaknesses of these instruments, the challenges of constructing them, their various uses and users, their costs, how the instruments complement one another, and their possible contributions to transparency and related objectives (e.g., explainability, accountability). This report analyses the findings of the previous studies (GPAI, 2024; GPAI, forthcoming) and, on that basis, presents recommendations for governments regarding the use of instruments to comply with algorithmic transparency principles, standards, and rules. The recommendations include practical tools such as decision trees and benchmarks to compare the strengths and weaknesses of different transparency instruments.— December 3, 2024
Intergovernmental
The technological readiness level (TRL) of 66 initiatives grouped based on the clustering framework described in Responsible AI in Pandemic Response
The technological readiness level (TRL) of 66 initiatives. Initiatives are grouped based on the clustering framework described in Responsible AI in Pandemic Response (The Future Society, 2020). Visualization by Bruno Kunzler, TFS Affiliate.— April 6, 2022
GPAI AI & Pandemic Response Sub-Working Group Report
January 28, 2021
Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.