Working Group on Responsible AI
The work of the Working Group on Responsible AI (RAI) is grounded in a vision of AI that is human-centred, fair, equitable, inclusive and respectful of human rights and democracy, and that aims to contribute positively to the public good. RAI’s mandate aligns closely with that vision and GPAI’s overall mission, striving to foster and contribute to the responsible development, use and governance of human-centred AI systems, in line with the UN Sustainable Development Goals.
RAI, like all other GPAI Working Groups, does not operate in a silo; it seeks to collaborate with the other Working Groups. For instance, RAI works with the Data Governance Working Group when their respective projects share common dimensions.
Finally, the ad hoc AI and Pandemic Response Subgroup, created in July 2020 to support the responsible development and use of AI-enabled solutions to COVID-19 and future pandemics, was merged into RAI in February 2022. The projects this subgroup was working on were also transferred to the stewardship of RAI.
ARCHIVE: GPAI expert reports
Disclaimer: These reports were planned before the integration of the Global Partnership on Artificial Intelligence (GPAI) and the Organisation for Economic Co-operation and Development (OECD) in mid-2024. Consequently, they were not subject to approval by GPAI and OECD members and should not be considered to reflect their positions.
Current projects from the 2024 work plan
The Working Group on Responsible AI is pursuing the following projects, initiated under GPAI’s previous governance:
- Algorithmic transparency in the public sector (jointly with the Working Group on Data Governance)
- Social media governance
- Responsible AI strategy for the environment (RAISE)
- Towards real diversity and gender equality in AI
- Scaling responsible AI solutions
- Digital ecosystems that empower communities
2023
RAI Working Group Report (December 2023)
The term “responsible AI” is gaining traction, both in tech circles and beyond, but how can it be achieved? Aligning closely with GPAI’s mission, which promotes fair, equitable and inclusive AI in accordance with the UN Sustainable Development Goals (SDGs), the Responsible AI Expert Working Group (RAI EWG) works to ensure that AI is developed for the public good. This report outlines the group’s outputs from 2023 as well as its plans for 2024.
Responsible AI Strategy for the Environment (RAISE) Workshop Report (November 2023)
From forecasting the impacts of climate change to highlighting the risks facing various ecosystems, AI can help us better understand and prepare for climate action and biodiversity preservation. Since 2020, the Responsible AI Strategy for the Environment (RAISE) has produced recommendations on these areas and taken steps to ensure their implementation. This year, GPAI Experts organised a workshop to design approaches to increase practical and international action to this end.
Social Media Governance Project – Summary of Work in 2023 (October 2023)
Social media is one of the most influential channels through which AI can impact our daily lives. Throughout 2023, GPAI Experts examined the power of algorithms to shape our interaction with online content and, ultimately, the way we perceive this information. Their work on this topic was influential in the development of the EU AI Act and the G7 Hiroshima AI Process.
Crowdsourcing the curation of the training set for harmful content classifiers used in social media (December 2023)
How can we moderate harmful content on social media? Harmful content classifiers are key to identifying and flagging inappropriate or dangerous posts, but their use is neither consistent nor transparent to the public. This makes it increasingly difficult to develop effective policies that mitigate the dangers of online content while respecting the values of freedom and democracy. The report examines political hate speech during two elections in India and proposes new models based on classifiers trained on semi-public datasets, rather than on private datasets held within companies.
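For readers unfamiliar with the term, the sketch below shows what a harmful content classifier is in its most basic form: a supervised text classifier trained on labelled posts. It is a generic illustration using scikit-learn, not the report’s method; the two hard-coded posts are placeholders standing in for a crowd-curated, semi-public dataset.

```python
# Minimal sketch of a harmful-content classifier, assuming a
# crowd-curated, semi-public labelled dataset (placeholders below).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder examples: a real training set would be a large
# crowdsourced corpus, not two hard-coded posts.
posts = ["have a great day everyone", "those people should be driven out of town"]
labels = [0, 1]  # 0 = benign, 1 = harmful

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(posts, labels)

# Flag new posts whose predicted probability of harm crosses a threshold.
new_posts = ["welcome to the neighbourhood"]
prob_harmful = classifier.predict_proba(new_posts)[:, 1]
flagged = prob_harmful > 0.5
```

Because the training set in this construction is curated in the open rather than held privately, the classifier’s behaviour can be audited and debated publicly, which is the transparency gain the report argues for.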
Scaling Responsible AI Solutions – Challenges and Opportunities (December 2023)
Along with its opportunities, AI brings a number of social challenges. In response, AI “solutions” have been developed to ensure that AI systems uphold democratic values. By enabling developers to identify problematic stages within the AI lifecycle and adjust them accordingly, such solutions are key to helping AI systems meet responsible best practices. This report presents the Scaling Responsible AI Solutions (SRAIS) project, which aimed to identify challenges to the responsibility and scalability of such solutions and produced recommendations for overcoming them.
Pandemic Resilience – Developing an AI-calibrated ensemble of models to inform decision making (December 2023)
AI is being applied in the health sector to inform policy in times of medical uncertainty. Using an ensemble model (a group of predictive algorithms), this report explores AI’s potential to forecast the epidemic spread and socio-economic impact of COVID-19 across various locations. Based on its findings, it provides policy recommendations, including strengthening ties between modellers and decision makers, establishing a feedback mechanism so that policies can be adjusted according to model outcomes, and developing a public data pipeline.
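As a rough illustration of what an ensemble model means here, the sketch below combines the forecasts of several models, weighting each by its inverse error against past observations. The numbers and the weighting scheme are illustrative assumptions only, not the calibration method or data used in the report.

```python
import numpy as np

# Hypothetical forecasts of daily new cases from three epidemic models
# (all numbers are placeholders, not data from the report).
model_forecasts = np.array([
    [120.0, 140.0, 165.0],  # model A
    [100.0, 125.0, 150.0],  # model B
    [140.0, 160.0, 190.0],  # model C
])
observed = np.array([115.0, 138.0, 170.0])  # observed cases, same days

# Calibrate: weight each model by inverse mean squared error against
# the observations, so better-performing models count for more.
mse = ((model_forecasts - observed) ** 2).mean(axis=1)
weights = (1.0 / mse) / (1.0 / mse).sum()

# Ensemble forecast: weighted average of the individual models.
ensemble = weights @ model_forecasts
```

The general point is that a calibrated ensemble tends to be more robust than any single model, since the weighting step continually shifts influence toward the models that track reality best.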
Towards Real Diversity and Gender Equality in Artificial Intelligence – Advancement Report (November 2023)
How can we avoid bias in AI systems? AI is trained on datasets containing biases that reinforce misconceptions about certain groups and threaten the safety and dignity of their members. Acknowledging the harms this can cause to women and marginalised communities, this report calls for their increased consideration throughout the AI life cycle. By reviewing the literature, speaking with marginalised individuals, and analysing existing initiatives in this field, it builds a comprehensive understanding of their experience in order to integrate their perspectives into policy.
2022
AI for Net Zero Electricity (December 2022)
Responsible AI Working Group Report (November 2022)
Transparency mechanisms for social media recommender algorithms: From proposals to action (November 2022)
Biodiversity and Artificial Intelligence: Opportunities and recommendations for action (November 2022)
AI-powered immediate response to pandemics: Summaries of top initiatives (March 2022)
Measuring the environmental impacts of Artificial Intelligence compute and applications: The AI footprint (November 2022)
AI for public good drug discovery: Advocacy efforts and a further call to action (October 2022)
2021
Responsible AI Working Group Report (November 2021)
Responsible AI for social media guidance: A proposed collaborative method for studying the effects of social media recommender systems on users (November 2021)
Climate change and AI: Recommendations for government action (November 2021)
2020
Responsible AI Working Group Report (November 2020)
Areas for future action in the responsible AI ecosystem (supporting report prepared for GPAI by the Future Society, December 2020)