AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of the art.
Rationale for this principle
The terms accountability, responsibility and liability are closely related yet distinct, and they also carry different meanings across cultures and languages. Generally speaking, “accountability” implies an ethical, moral, or other expectation (e.g., as set out in management practices or codes of conduct) that guides individuals’ or organisations’ actions or conduct and allows them to explain the reasons for which decisions were taken and actions carried out. In the case of a negative outcome, it also implies taking action to ensure a better outcome in the future. “Liability” generally refers to the adverse legal implications arising from a person’s (or an organisation’s) actions or inaction. “Responsibility” can likewise carry ethical or moral expectations and can be used in both legal and non-legal contexts to refer to a causal link between an actor and an outcome.
Given these meanings, the term “accountability” best captures the essence of this principle. In this context, “accountability” refers to the expectation that organisations or individuals will ensure the proper functioning, throughout their lifecycle, of the AI systems that they design, develop, operate or deploy, in accordance with their roles and applicable regulatory frameworks, and will demonstrate this through their actions and decision-making processes (for example, by providing documentation on key decisions throughout the AI system lifecycle or by conducting or allowing auditing where justified).