Participatory AI Framework
The framework was first featured in ‘Participatory AI for humanitarian innovation: a briefing paper’, which responded to the growing interest in using participatory approaches to design, develop and evaluate AI systems across industry, academia and the public sector. The framework splits the methods involved, and the depth of engagement, into four categories:
- Consultation: participation in which input occurs outside the core AI development process, with no guarantee that it will influence the design of the AI system. Common methods include polling and deliberation.
- Contribution: participation that is time-limited to one stage of the AI development pipeline. External stakeholders complete one of the tasks necessary for AI development, e.g. data collection, data labelling or validation of model outputs. Common methods include both targeted and open crowdsourcing.
- Collaboration: participatory practices with multiple touchpoints along the AI development pipeline, and/or in which external stakeholders can meaningfully interrogate the model and shape the features it uses to make predictions or classifications, even if they were not involved in problem setting.
- Co-design: the most comprehensive form of stakeholder involvement in AI design and development. It involves engagement at multiple stages throughout the pipeline and beyond it, with all of the stakeholder groups involved discussing their needs, values and priorities, both with respect to the problem space and the technology.
The publication presents in-depth case studies to illustrate the framework and suggests five key design questions to help others design participatory AI projects. The final section of the report outlines the relevance of Participatory AI to the humanitarian sector: drawing on the Core Humanitarian Standard, it maps the risks AI poses to humanitarian principles and the rights of crisis-affected communities, alongside examples of participatory approaches that could help address some of those risks.
The Participatory AI framework was first applied during a year-long project that set out to design and evaluate new proof-of-concept Collective Crisis Intelligence tools, which combine data from crisis-affected communities with the processing power of AI to improve humanitarian action. The project was a partnership between Nesta’s Centre for Collective Intelligence Design (CCID) and Data Analytics Practice (DAP), the Nepal Red Cross and Cameroon Red Cross, the IFRC Solferino Academy, and Open Lab at Newcastle University, and it was funded by the UK Humanitarian Innovation Hub.
The two collective crisis intelligence tool prototypes developed were:
1) NFRI-Predict: a tool that predicts which non-food relief items (NFRI) are most needed by different types of households in different regions of Nepal after a crisis.
2) Report and Respond: a French-language SMS-based tool that allows Red Cross volunteers in Cameroon to check the accuracy of COVID-19 rumours or misinformation they hear from the community while they’re in the field, and to receive real-time guidance on appropriate responses.
The full results, detailed methodology and technical documentation describing this application of the Participatory AI framework can be accessed at: https://www.nesta.org.uk/report/localising-ai/.