Data Governance Working Group
A Framework Paper for GPAI's Work on Data Governance 2.0
The Global Partnership on AI ("GPAI") has been established with a mission to support and guide the responsible adoption of artificial intelligence (AI). It is supported in this mission by four Working Groups made up of leading international experts: (1) Responsible AI, (2) Data Governance, (3) The Future of Work, and (4) Innovation and Commercialization.

According to the GPAI Terms of Reference, the Data Governance Working Group has a mandate to 'collate evidence, shape research, undertake applied AI projects and provide expertise on data governance, to promote data for AI being collected, used, shared, archived and deleted in ways that are consistent with human rights, inclusion, diversity, innovation, economic growth, and societal benefit, while seeking to address the UN Sustainable Development Goals.' The GPAI Terms of Reference explicitly exclude aspects of AI and data governance related to defence and state security.

The mandates of the Data Governance Working Group and of the Responsible AI Working Group in particular are closely related and overlap to a certain degree. Generally speaking, the Responsible AI Working Group will look more closely at how AI development should be modelled and which datasets should be employed, so that AI is shaped and functions in a responsible manner (e.g. without any undue bias). The Data Governance Working Group, by contrast, will focus on how to collect and manage data responsibly in the first place, in particular considering the situation of parties that are in some way associated with the origin and context of the data or that may otherwise be affected by its use (e.g. data subjects and members of communities about which data is collected). The 'data perspective' and the 'algorithms perspective' are thus closely related and partly overlapping, yet to some extent distinct (METI 2018; DEK 2019).