These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe.
AI and Data Protection Risk Toolkit
The risks and benefits to individuals that arise from personal data processing using artificial intelligence (AI) are heavily context-dependent, and vary significantly across the diverse range of sectors, technologies and organisation types covered by data protection legislation. This toolkit will help you understand some of the AI-specific risks to individual rights and freedoms, and it provides practical steps to mitigate, reduce or manage them.
Developing AI is generally an iterative process. We have divided the risks and controls by high-level lifecycle stages, as a guide to the risks and controls you should consider at each stage. However, you should always ensure your processing is compliant with data protection legislation as a whole. The toolkit can also be divided by risk area. So, for example, if you are struggling to think of how to mitigate risks associated with data minimisation, you can filter the risk area column to only include information related to data minimisation.
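If you export the toolkit to a CSV file, you could also apply that kind of filtering programmatically rather than by hand. The file name and column names below ('Lifecycle stage', 'Risk area', 'Risk', 'Practical steps') are illustrative assumptions, not the toolkit's actual layout; a minimal sketch in Python:

```python
import pandas as pd

# Load the toolkit exported as CSV (file name and column names are
# assumed for illustration; adjust them to match your copy of the toolkit).
toolkit = pd.read_csv("ai_data_protection_risk_toolkit.csv")

# Keep only the rows relevant to a given lifecycle stage and risk area,
# e.g. data minimisation risks during the training phase.
relevant = toolkit[
    (toolkit["Lifecycle stage"] == "Training")
    & (toolkit["Risk area"] == "Data minimisation")
]

# Review the filtered risks and the practical steps suggested for them.
print(relevant[["Risk", "Practical steps"]])
```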
You can use this tool as a way to assess the risks to the fundamental rights and freedoms of individuals. By undertaking the practical steps suggested, in line with what is expected under the legislation, you reduce these risks and make compliance with data protection law more likely. Documenting your assessment of the risks and the steps you take to mitigate them can help you demonstrate compliance with the legislation. We have provided additional cells to illustrate how you could carry out an evaluation.
When scoring risks, we have provided four options: 'high', 'medium', 'low' and 'non-applicable'. The assessment of risks will vary depending on the context, so you should undertake your own assessments of the risks identified.
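To keep your own assessments documented alongside the mitigation steps, you could record each risk as a structured entry restricted to the four scoring options above. The record below is a hypothetical sketch for illustration, not part of the toolkit itself:

```python
from dataclasses import dataclass, field

# Allowed risk scores, mirroring the four options offered by the toolkit.
SCORES = {"high", "medium", "low", "non-applicable"}

@dataclass
class RiskAssessment:
    """One documented risk entry; all field names are illustrative only."""
    risk: str
    lifecycle_stage: str
    risk_area: str
    score: str
    mitigations: list[str] = field(default_factory=list)

    def __post_init__(self):
        # Reject scores outside the toolkit's four options.
        if self.score not in SCORES:
            raise ValueError(f"score must be one of {sorted(SCORES)}")

# Example: documenting an assessed risk and the steps taken to mitigate it.
entry = RiskAssessment(
    risk="Training data contains more personal data than necessary",
    lifecycle_stage="Data acquisition and preparation",
    risk_area="Data minimisation",
    score="medium",
    mitigations=[
        "Remove fields not needed for the model's purpose",
        "Review retention periods for raw training data",
    ],
)
```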
Using the toolkit is entirely optional and you will not be penalised for not using it. Although this toolkit can complement the data protection impact assessments (DPIAs) that you are legally required to conduct where processing is likely to result in a high risk to individuals, it is not designed to replace them.
Please note that this tool is not designed to be 'one size fits all', and each risk should be assessed in the context in which you are developing and deploying AI. There may also be additional risks that apply to your context that are not included in this toolkit.
About the tool
Developing organisation(s):
Tool type(s):
Objective(s):
Impacted stakeholders:
Purpose(s):
Target sector(s):
Country of origin:
Lifecycle stage(s):
Type of approach:
Maturity:
Usage rights:
Target groups:
Target users:
Stakeholder group:
Validity:
Benefits:
Geographical scope:
People involved:
Required skills:
Technology platforms:
Tags:
- ai risks
- data documentation
- data governance
- demonstrating trustworthy ai
- documentation
- evaluate
- evaluation
- legal playbook
- transparent
- trustworthy ai
- ai assessment
- open source
- ai governance
- ai auditing
Use Cases
Would you like to submit a use case for this tool? If you have used this tool, we would love to know more about your experience.