Human-centred values and fairness: respect for the rule of law, human rights and democratic values, including fairness and privacy (Principle 1.2)

AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards to ensure a fair and just society.

AI actors should respect the rule of law, human rights, democratic and human-centred values throughout the AI system lifecycle. These include non-discrimination and equality, freedom, dignity, autonomy of individuals, privacy and data protection, diversity, fairness, social justice, and internationally recognised labour rights. This also includes addressing misinformation and disinformation amplified by AI, while respecting freedom of expression and other rights and freedoms protected by applicable international law.

To this end, AI actors should implement mechanisms and safeguards, such as capacity for human agency and oversight, including mechanisms to address risks arising from uses outside of intended purpose, intentional misuse, or unintentional misuse, in a manner appropriate to the context and consistent with the state of the art.

Rationale for this principle

AI should be developed in a manner consistent with human-centred values, such as fundamental freedoms, equality, fairness, the rule of law, social justice, data protection and privacy, as well as consumer rights and commercial fairness.

Some applications or uses of AI systems have implications for human rights, including risks that human rights (as defined in the Universal Declaration of Human Rights)1 and human-centred values might be deliberately or accidentally infringed. It is therefore important to promote “values-alignment” in AI systems (i.e., their design with appropriate safeguards), including capacity for human intervention and oversight, as appropriate to the context. This alignment can help ensure that AI systems’ behaviours protect and promote human rights and align with human-centred values throughout their operation. Remaining true to shared democratic values will help strengthen public trust in AI and support the use of AI to protect human rights and reduce discrimination or other unfair and/or unequal outcomes.

This principle also acknowledges the role of measures such as human rights impact assessments (HRIAs) and human rights due diligence, human determination (i.e., a “human in the loop”), codes of ethical conduct, and quality labels and certifications intended to promote human-centred values and fairness.

1 Available at: https://www.ohchr.org/EN/UDHR/Documents/UDHR_Translations/eng.pdf

Other principles

Inclusive growth, sustainable development and well-being
Human rights and democratic values, including fairness and privacy
Transparency and explainability
Robustness, security and safety
Accountability
Investing in AI research and development
Fostering an inclusive AI-enabling ecosystem
Shaping an enabling interoperable governance and policy environment for AI
Building human capacity and preparing for labour market transition
International co-operation for trustworthy AI