Responsible Data Enrichment Sourcing: Resource Library for AI Practitioners

High-quality AI models require massive amounts of training data. This data, however, often needs to be cleaned, labelled, categorised, annotated, or otherwise enriched before it is legible to algorithmic systems. This critical work is done by data enrichment workers. Despite growing awareness of their often precarious working conditions, there has been limited transparency about how AI practitioners source enriched data from these workers and little guidance on how they should. To help fill this gap, Partnership on AI (PAI) has published the Responsible Data Enrichment Sourcing Library, a set of resources AI organisations can use to formalise their data enrichment practices and have a positive impact on the lives of data enrichment workers. These resources include:
- Data Enrichment Sourcing Guidelines: A shareable PDF listing the five key, worker-centric guidelines that AI practitioners should follow when designing a project involving enriched data;
- Good Instructions Checklist for Data Enrichment Projects: A PDF listing what should be included in a set of task instructions to make sure they are as clear as possible for data enrichment workers;
- Data Enrichment Vendor Comparison Template: A Google Sheets template for comparing various vendors of data enrichment services and surfacing relevant worker-centric considerations;
- Local Living Wages Spreadsheet Template: A Google Sheets template for creating a centralised resource for looking up living wage information for each geographic area from which an organisation sources data enrichment labour.
These resources are based on multistakeholder recommendations from the PAI community, summarised in PAI’s Responsible Sourcing of Data Enrichment Services white paper. The resource library was developed in partnership with DeepMind through an applied collaboration to put the white paper’s recommendations into practice in a real-world setting. Our primary objective in sharing these resources is to lower the barriers for companies interested in improving their data enrichment sourcing practices, and our intention is for the library to serve as a resource for champions within companies to advocate for their organisations to adopt responsible sourcing practices. The process by which DeepMind operationalised the recommendations, why it chose to do so, and the impact of adopting them are documented in a case study, Implementing Responsible Data Enrichment Practices at an AI Developer: The Example of DeepMind.
Use Cases

Implementing Responsible Data Enrichment Practices at an AI Developer: The Example of DeepMind