
An introduction to the Global Partnership on AI’s work on data governance

The Global Partnership on AI (GPAI) has a mission to “support and guide the responsible adoption of AI that is grounded in human rights, inclusion, diversity, innovation, economic growth and societal benefit, while seeking to address the UN Sustainable Development Goals (UN SDGs)”. Launched in June 2020, it is a voluntary, multi-stakeholder initiative with a permanent focus on AI, whose founding members together represent 2.5 billion people. It has ambitions to scale, particularly to include low- and middle-income countries, support the UN SDGs and help fully realise the OECD AI Recommendation.

GPAI will bring together experts from industry, government, civil society and academia to advance cutting-edge research and pilot projects on AI priorities. It is supported by four Working Groups looking at Data Governance, Responsible AI (including a subgroup on Pandemic Response), the Future of Work, and Commercialisation and Innovation. As set out in Audrey Plonk’s blog post on the AI Wonk, the OECD is a principal strategic partner on GPAI, hosting GPAI’s Secretariat and working closely with GPAI’s two Centres of Expertise in Paris and Montreal.

This introductory blog, from Dr Jeni Tennison, Vice President and Chief Strategy Adviser for the Open Data Institute and one of the Co-Chairs of the Data Governance Working Group, is the first in a series from the Working Group Co-Chairs. 

Photo by Alexander Sinn on Unsplash

Introducing the Data Governance Working Group and its mission

How to do data governance well – making sure that data is collected, used and shared in responsible and trustworthy ways – is one of the fundamental challenges in the development of AI, and one that we work on extensively at the Open Data Institute. So I was delighted to be asked by the UK Government to be one of the Co-Chairs of GPAI’s Data Governance Working Group. Over the last couple of months, my Co-Chair Dr. Maja Bogataj Jančič, founder and head of the Intellectual Property Institute in Slovenia, and I have been working with Jacques Rajotte, Interim Executive Director, and his team at the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence (ICEMAI), along with the rest of the Working Group, to shape the group’s work.

Maja and I are very aware of the huge range of expertise on data governance both within and, importantly, outside the Working Group, and we are determined to be as open as possible about the work we are doing. In this blog, I want to introduce the group, outline the work we’ve been doing so far, and invite you – the wider AI and data community – to get involved, starting today as our first projects go live.

The Working Group itself consists of 30 experts from 17 countries with experience in technical, legal and institutional aspects of data governance. True to the overall ambition of the Global Partnership on AI, they combine cross-sectoral insights from the scientific community, industry, civil society and international organizations. We are fortunate to have a highly energetic, passionate and collaborative group bringing a wealth of perspectives beyond any single country. The full list is given at the end of this post.

The working group’s mission and objectives

Our mandate as a group aligns closely with GPAI’s overall mission: our Working Group aims to collate evidence, shape research, undertake applied AI projects and provide expertise on data governance, to promote the collection, use, sharing, archiving and deletion of data for AI in ways that are consistent with human rights, inclusion, diversity, innovation, economic growth and societal benefit, while seeking to address the UN Sustainable Development Goals.

Clearly, there are interactions between data governance and the remits of the other Working Groups – particularly Responsible AI and Commercialisation and Innovation – so we are aiming to work with them on areas of overlap. The Working Group also has the chance to help coordinate GPAI’s applied AI ambitions, shape the projects carried out or funded by ICEMAI, and influence the policy recommendations created by the OECD through its work. We also want our work to be useful more widely, to those researching, thinking about and implementing data governance practices in AI.

Our first projects

In support of the Working Group’s mandate, we are starting two projects that will lay the groundwork for GPAI’s future ambitions on Data Governance. These are to be delivered by GPAI’s first Plenary in December 2020.

The first is an agreed data governance framework for the group. We’re aware that many frameworks for thinking about data and data governance have already been proposed, and we’re hoping that we can simply adopt one or more of them. We want to ensure we have an agreed scope, structure and vocabulary for the Working Group’s work. To that end, this piece of work will:

  • define what the Working Group understands Data Governance to cover, particularly in the context of the work of GPAI’s other Working Groups
  • define an agreed set of terms and conceptual frameworks that the Working Group will use in further reports
  • describe the aspects of data governance that are most important for further research and evidence building by the Working Group

This will be produced by the Working Group itself, and I’m delighted to announce that Christiane Wendehorst, President of the European Law Institute and Professor of Law at the University of Vienna, will be leading this work with volunteers from the group. We hope to be able to share a draft of this at the beginning of October.

The second project, going out to tender today, is an investigation into The Role of Data in AI. The aim of this project is to situate the importance of data in AI development and to identify both areas where more data would be useful – such as specific open datasets that could be worthy of national support or international collaboration – and areas where harms arise from the collection of, or access to, data. In the process, it will:

  • describe how data is used in the development of AI
  • describe the benefits and harms that arise from creating and having access to different types of data
  • identify challenges to responsible AI development that arise from having or not having (access to) particular data

We are looking for a partner to assist the Working Group on this second deliverable and are today launching a competitive tender to identify that partner. If you have experience and expertise in this field, please read the Terms of Reference, published today, and consider submitting a proposal by the deadline of 6 September 2020.

Openness, transparency, diversity and collaboration will be at the heart of these two projects and of all our future work. We will publish drafts and further updates on these two projects to give the wider community an opportunity for consultation and input before they are finalised.

So please keep an eye out both here and on my Twitter account (@JeniT) as we move forward with this work, and get in touch with Maja and me via jacques.rajotte@gmail.com if you have questions, comments, ideas or requests about the work of GPAI’s Data Governance Working Group.

Membership of GPAI’s Data Governance Working Group

Working Group members

Jeni Tennison (Co-Chair) – Open Data Institute (UK)

Maja Bogataj Jančič (Co-Chair) – Intellectual Property Institute (Slovenia)

Alejandro Pisanty Baruch – National Autonomous University (Mexico)

Alison Gillwald – Research ICT Africa (South Africa / UNESCO)

Bertrand Monthubert – Occitanie Data (France)

Carlo Casonato – University of Trento (Italy)

Carole Piovesan – INQ Data Law (Canada)

Christiane Wendehorst – European Law Institute / University of Vienna (EU)

Dewey Murdick – Center for Security and Emerging Technology (USA)

Iris Plöger – Federation of German Industries (Germany)

Jeremy Achin – DataRobot (USA)

Josef Drexl – Max Planck Institute (Germany)

Kim McGrail – University of British Columbia (Canada)

Matija Damjan – University of Ljubljana (Slovenia)

Neil Lawrence – University of Cambridge (UK)

Nicolas Miailhe – The Future Society (France)

Oreste Pollicino – Bocconi University (Italy)

Paola Villarreal – National Council for Science and Technology (Mexico)

Paul Dalby – Australian Institute of Machine Learning (Australia)

P. J. Narayanan – International Institute of Information Technology, Hyderabad (India)

Shameek Kundu – Standard Chartered Bank (Singapore)

Takashi Kai – Hitachi (Japan)

Teki Akuetteh Falconer – Africa Digital Rights Hub (Ghana / UNESCO)

Te Taka Keegan – University of Waikato (New Zealand)

V. Kamakoti – Indian Institute of Technology Madras (India)

Yeong Zee Kin – Infocomm Media Development Authority (Singapore)

Observers

Elettra Ronchi – OECD



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.