OECD AI Principles overview

The OECD AI Principles promote use of AI that is innovative and trustworthy and that respects human rights and democratic values. Adopted in May 2019, they set standards for AI that are practical and flexible enough to stand the test of time.

Values-based principles

Inclusive growth, sustainable development and well-being

Human-centred values and fairness

Transparency and explainability

Robustness, security and safety

Accountability

Recommendations for policy makers

Investing in AI R&D

Fostering a digital ecosystem for AI

Providing an enabling policy environment for AI

Building human capacity and preparing for labour market transition

International co-operation for trustworthy AI

The OECD AI Principles focus on how governments and other actors can shape a human-centric approach to trustworthy AI. As an OECD legal instrument, the principles represent a common aspiration for its adhering countries.

Governments that have committed to the AI Principles

AI terms & concepts

An AI system is a machine-based system that is capable of influencing the environment by producing an output (predictions, recommendations or decisions) for a given set of objectives. It uses machine and/or human-based data and inputs to (i) perceive real and/or virtual environments; (ii) abstract these perceptions into models through analysis in an automated manner (e.g., with machine learning), or manually; and (iii) use model inference to formulate options for outcomes. AI systems are designed to operate with varying levels of autonomy.
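To make the three stages of this definition concrete, here is a minimal, hypothetical sketch (not an OECD artifact) that maps them onto a toy classifier: the function names, the threshold model, and the example environment are all illustrative assumptions, chosen only to show how perception, automated abstraction into a model, and model inference fit together.

```python
# Toy illustration of the three stages in the OECD AI-system definition.
# All names and data here are hypothetical, for illustration only.

from statistics import mean


def perceive(environment):
    """(i) Perceive: read raw observations from a (virtual) environment."""
    return environment["sensor_readings"]


def build_model(observations):
    """(ii) Abstract: derive a simple threshold model in an automated manner."""
    return {"threshold": mean(observations)}


def infer(model, new_input):
    """(iii) Infer: produce an output (a recommendation) for a new input."""
    return "act" if new_input > model["threshold"] else "wait"


env = {"sensor_readings": [2.0, 4.0, 6.0]}  # a toy virtual environment
model = build_model(perceive(env))          # learned threshold is 4.0
print(infer(model, 5.0))                    # → act
print(infer(model, 3.0))                    # → wait
```

Real AI systems replace the threshold with learned statistical models and operate with varying levels of autonomy, but the perceive / abstract / infer structure of the definition is the same.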

The OECD’s work on Artificial Intelligence and rationale for
developing the OECD Recommendation on Artificial Intelligence

AI is a general-purpose technology that has the potential to improve
the welfare and well-being of people, to contribute to positive
sustainable global economic activity, to increase innovation and
productivity, and to help respond to key global challenges. It is
deployed in many sectors ranging from production, finance and
transport to healthcare and security.

Alongside benefits, AI also raises challenges for our societies and
economies, notably regarding economic shifts and inequalities,
competition, transitions in the labour market, and implications for
democracy and human rights.

The OECD has undertaken empirical and policy activities on AI in
support of the policy debate over the past two years, starting with a
Technology Foresight Forum on AI in 2016 and an international
conference on AI: Intelligent Machines, Smart Policies in 2017. The
Organisation also conducted analytical and measurement work that
provides an overview of the AI technical landscape, maps economic and
social impacts of AI technologies and their applications, identifies
major policy considerations, and describes AI initiatives from
governments and other stakeholders at national and international
levels.

This work has demonstrated the need to shape a stable policy
environment at the international level to foster trust in and adoption
of AI in society. Against this background, the OECD Committee on
Digital Economy Policy (CDEP) agreed to develop a draft Council
Recommendation to promote a human-centric approach to trustworthy AI
that fosters research, preserves economic incentives to innovate, and
applies to all stakeholders.

An inclusive and participatory process for developing the Recommendation

The development of the Recommendation was participatory in nature,
incorporating input from a broad range of sources throughout the
process. In May 2018, the CDEP agreed to form an expert group to scope
principles to foster trust in and adoption of AI, with a view to
developing a draft Council Recommendation in the course of 2019. The
AI Group of experts at the OECD (AIGO) was subsequently established,
comprising over 50 experts from different disciplines and different
sectors (government, industry, civil society, trade unions, the
technical community and academia).

Between September 2018 and February 2019, the group held four meetings:
in Paris, France, in September and November 2018; in Cambridge, MA,
United States, at the Massachusetts Institute of Technology (MIT) in
January 2019, back to back with the MIT AI Policy Congress; and finally
in Dubai, United Arab Emirates, at the World Government Summit in
February 2019. The work benefited from the diligence, engagement
and substantive contributions of the experts participating in AIGO, as
well as from their multi-stakeholder and multidisciplinary
backgrounds. Drawing on the final output document of the AIGO, a draft
Recommendation was developed in the CDEP, in consultation with other
relevant OECD bodies. The CDEP approved a final draft
Recommendation and agreed to transmit it to the OECD Council for
adoption in a special meeting on 14-15 March 2019. The OECD Council
adopted the Recommendation at its meeting at Ministerial level on
22-23 May 2019.