The AI Ethics Playbook: Implementing Ethical Principles Into Everyday Business

This playbook is intended as a practical tool to help organisations consider how to ethically design, develop and deploy artificial intelligence (AI) systems.
It can be read cover to cover, like a report, or dipped into chapter by chapter according to your role and purpose.
It is designed to cater to organisations with varying levels of maturity with regard to AI adoption and familiarity with ethical principles. Some chapters may, therefore, be more relevant than others.
Chapter 1: An overview of common ethical principles
This chapter introduces some of the ethical issues presented by AI to interested readers throughout an organisation. It can be used to develop a basic level of familiarity with these topics.
Chapter 2: A proposed organisational structure for dealing with ethical issues
This chapter is designed to help organisations think through the governance of a system. It is useful for people who make decisions about the structure and key resources of an organisation.
Chapter 3: A Self‑Assessment Questionnaire designed to help establish ethical risks and a series of tools to help you answer and address issues raised in the questionnaire
This chapter contains a Self-Assessment Questionnaire that can help establish the risks presented by a system, and tools to identify and address those risks. It is most relevant to people working directly with AI systems who are aiming to implement ethical principles. These might include product managers or responsible AI champions.
Chapter 4: Key themes and recommendations from the report
This chapter will be useful for anyone hoping to get a sense of the playbook without reading it in detail. It might be used as an educational tool for people who are less directly involved in AI projects or for senior leadership.
Tags:
- ai ethics
- ai responsible
- ai risks
- build trust
- building trust with ai
- collaborative governance
- data governance
- demonstrating trustworthy ai
- digital ethics
- trustworthy ai
- ai assessment
- ai governance
- ai reliability