Accountability (Principle 1.5)

Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the OECD’s values-based principles for AI.

AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of the art.

To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outputs and responses to inquiry, appropriate to the context and consistent with the state of the art.

AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on an ongoing basis, and adopt responsible business conduct to address risks related to AI systems, including, as appropriate, via co-operation between different AI actors, suppliers of AI knowledge and AI resources, AI system users, and other stakeholders. Risks include those related to harmful bias, human rights including safety, security, and privacy, as well as labour and intellectual property rights.

Rationale for this principle

The terms accountability, responsibility and liability are closely related yet distinct, and they carry different meanings across cultures and languages. Generally speaking, “accountability” implies an ethical, moral, or other expectation (e.g., as set out in management practices or codes of conduct) that guides individuals’ or organisations’ actions or conduct and allows them to explain the reasons for which decisions and actions were taken. In the case of a negative outcome, it also implies taking action to ensure a better outcome in the future. “Liability” generally refers to adverse legal implications arising from a person’s (or an organisation’s) actions or inaction. “Responsibility” can likewise carry ethical or moral expectations and is used in both legal and non-legal contexts to refer to a causal link between an actor and an outcome.

Given these meanings, the term “accountability” best captures the essence of this principle. In this context, “accountability” refers to the expectation that organisations or individuals will ensure the proper functioning, throughout their lifecycle, of the AI systems that they design, develop, operate or deploy, in accordance with their roles and applicable regulatory frameworks, and will demonstrate this through their actions and decision-making processes (for example, by providing documentation on key decisions throughout the AI system lifecycle, or by conducting or allowing auditing where justified).


Other principles

- Inclusive growth, sustainable development and well-being
- Human rights and democratic values, including fairness and privacy
- Transparency and explainability
- Robustness, security and safety
- Accountability
- Investing in AI research and development
- Fostering an inclusive AI-enabling ecosystem
- Shaping an enabling interoperable governance and policy environment for AI
- Building human capacity and preparing for labour market transition
- International co-operation for trustworthy AI