An introduction to the Global Partnership on AI’s Pandemic Response

Alice Oh and Paul Suetens, Co-Chairs, AI & Pandemic Response Subgroup, The Global Partnership on AI (GPAI)

The mission of the Global Partnership on AI (GPAI) is to “support the development and use of AI based on human rights, inclusion, diversity, innovation, and economic growth, while seeking to address the United Nations Sustainable Development Goals”. Launched in June 2020, it is a voluntary, multistakeholder initiative bringing together industry, governments, academia and civil society. GPAI has a permanent focus on AI, with a founding membership representing 2.5 billion of the world’s population. It has ambitions to scale, particularly to include low- and middle-income countries, support the UN Sustainable Development Goals and help fully realize the OECD AI Recommendation.

GPAI brings together experts from industry, government, civil society and academia to advance cutting-edge research and pilot projects on AI priorities. It is supported by four Working Groups looking at Data Governance, Responsible AI, the Future of Work, and Commercialisation and Innovation. As set out in Audrey Plonk’s blog post on the AI Wonk, the OECD is a principal strategic partner to GPAI, hosting GPAI’s Secretariat and working closely with GPAI’s two Centres of Expertise in Paris and Montreal.

This post is the third in a series from GPAI Chairs. It follows an initial post by Dr. Jeni Tennison of the Data Governance Working Group and a second by Dr. Yoshua Bengio and Raja Chatila of the Responsible AI Working Group.

Photo by Alexander Sinn on Unsplash

Introducing the Pandemic Response Subgroup 

In light of the current international context, the GPAI Task Force has invited the Responsible Development, Use and Governance of AI Working Group to form an ad hoc AI and Pandemic Response Subgroup. The subgroup brings together AI practitioners, healthcare experts, members and international organizations to promote the rapid, open and secure sharing of methods, algorithms, code and validated data, in a rights- and privacy-preserving way, to inform public health responses and help save lives. The subgroup was launched this summer with the collaboration of Jacques Rajotte, Interim Executive Director at the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence (ICEMAI), and his team. Here, we will introduce the group’s members, outline our current efforts, and, most importantly, share how the broader AI community can get involved.

Our experts

Twenty-two experts from 14 countries are working together in GPAI’s Pandemic Response Subgroup. The group benefits from a range of perspectives, drawing on individuals working on AI-enabled pandemic responses across disciplines and contexts. As the group refined its mission and deliverables over the last month, we witnessed impressive cross-disciplinary collaboration among these experts. The subgroup’s experts are listed at the end of this post.

Our mission and objectives

Our goal is to support the responsible development and use of AI-enabled solutions to the COVID-19 pandemic and to future pandemics. We promote cross-sectoral and cross-border collaboration, and support engagement by the public and healthcare professionals with the responsible use of AI in the global response to pandemics and other public health challenges. When relevant, we collaborate with other GPAI groups. For example, we may consult with the Data Governance Working Group on access to data to train, validate and test deep learning models. We also align closely with the Responsible AI Working Group, as we operate within its framework.

Our first project

We are launching a project to catalogue and analyse AI tools addressing the pandemic, issue recommendations, and suggest future projects. This project has three components:

  1. Catalogue existing AI tools developed and used in the context of the COVID-19 pandemic to accelerate research, detection, prevention, response and recovery. The catalogue will list initiatives from academia, governments, the private sector, civil society, and international organizations, among others. 
  2. Assess selected AI tools. AI tools of particular interest will be selected from the catalogue for further assessment. The assessment will analyse how these tools implement notions of responsible research and development, and why they are beneficial applications of AI systems. The analysis will identify best practices, lessons learned, and the main socio-economic, technical, and scientific challenges to implementing responsible AI principles. 
  3. Issue recommendations and suggest future projects. Based on the analysis, we will recommend best practices to overcome the challenges identified above, and suggest specific projects to fill gaps and address problems detected during the assessment.

The subgroup will report results to the Multistakeholder Experts Group Plenary in early December 2020. We are launching a competitive tender to identify a partner to assist the subgroup with this project. If you have experience and expertise in this field, please read the Terms of Reference and consider submitting a proposal by midnight AoE, September 29, 2020.

Please keep an eye on this blog series as we move forward. Should you have questions, comments, ideas or requests about the Pandemic Response subgroup, please get in touch via jacques.rajotte[AT]gmail[DOT]com.

Membership of GPAI’s subgroup on AI and Pandemic Response

Working Group members

Alice Oh (Co-Chair) – Korea Advanced Institute of Science and Technology (KAIST) (Korea)

Paul Suetens (Co-Chair) – KU Leuven (Belgium)

Anurag Agrawal – Council of Scientific and Industrial Research (India)

Amrutur Bharadwaj – Indian Institute of Science (India)

Nozha Boujemaa – Median Technologies (France)

Dirk Brockmann – Humboldt University of Berlin (Germany)

Howie Choset  – Carnegie Mellon University (US)

Enrico Coiera – Macquarie University (Australia)

Marzyeh Ghassemi  – University of Toronto (Canada)

Hiroaki Kitano – Sony Computer Science Laboratories Inc (Japan)

Seán Ó hÉigeartaigh – Centre for the Study of Existential Risk (UK)

Michael Justin O’Sullivan – University of Auckland (New Zealand)

Michael Plank – University of Canterbury (New Zealand)

Mario Poljak – University of Ljubljana (Slovenia)

Daniele Pucci – Istituto Italiano di Tecnologia Research Labs Genova (Italy)

Joanna Shields – BenevolentAI (UK)

Margarita Sordo-Sanchez – Harvard Medical School (US)

Leong Tze Yun – National University of Singapore (Singapore)

Gaël Varoquaux – INRIA (France)

Blaž Zupan – University of Ljubljana (Slovenia)


Cyrus Hodes – AI Initiative (US)

Alan Paic – OECD (France)


Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.
