AI Ethics Self-Assessment Questionnaire
AI has the potential to truly change the world. However, this represents not only an opportunity but also a risk. As the adoption of AI accelerates, organisations and governments around the world are considering how best to harness this technology for the benefit of people and planet. It is vital, therefore, that AI is designed, developed and deployed ethically.
This self-assessment questionnaire is designed to help you bridge the gap between ethical AI principles and ethical AI practice. It asks questions structured around the principles outlined in the AI Ethics Playbook developed by the GSMA. The questions are differentiated depending on the project's risk level, so the questionnaire is quicker to complete for lower-risk projects. We hope it helps you to operationalise ethical AI principles and do good business responsibly. The questionnaire is constantly evolving and is regularly updated.
The primary objectives of conducting this assessment are to:
- Evaluate the overall risk level for a specific use case, classifying it as high, medium or low. Please also bear in mind that use cases with unacceptable risk levels are prohibited under a current Proposal for a Regulation of the European Parliament and of the Council*.
- Answer the relevant ethical questions. The assessment works through each of the principles, with the specific questions differentiated depending on the risk level. The lower the risk, the fewer questions you will be asked to complete. Depending on your answer to each question, you may receive suggestions for further action.
- Record information to help you track status, report, plan future work and support potential audits.
You should carry out the assessment in the following order:
- Complete the AI use case's general information in the 'Pre-Assessment' tab.
- In the 'Pre-Assessment' tab, you will also find three questions under 'Risk Assessment'. Your answers to these questions will determine the risk level of your AI system and hence the questions you will later be asked.
- Work through the AI ethics principles listed in the 'Ethical Dimensions' menu. For each principle, a set of baseline questions is presented based on the risk level.
- Review the proposed further actions.
- At the end of each question list, you will find a text box to record status, evidence and any other notes, as well as to support prioritisation and planning.
- Review a summary of your answers across dimensions in the 'Results' section.
- Repeat the process as necessary. The first assessment should happen in the design phase, with reassessments conducted at key stages of the product lifecycle, for example development and deployment. Further reassessments might also be triggered periodically, with the interval depending on the risk level, or by significant changes to the deployment, such as an increase in scale. All versions of the assessment should be saved for future reference and auditing.
About the tool
Developing organisation(s): GSMA
Tool type(s): Self-assessment questionnaire
Tags:
- ai ethics
- ai responsible
- ai risks
- build trust
- building trust with ai
- collaborative governance
- demonstrating trustworthy ai
- trustworthy ai
- ai assessment
- ai governance
- ai auditing
- bias