OECD AI Principles overview

The OECD AI Principles promote innovative, trustworthy use of AI that respects human rights and democratic values. Adopted in May 2019, they set standards for AI that are practical and flexible enough to stand the test of time.

Principles for trustworthy AI

The OECD AI Principles were initially adopted in 2019 and updated in May 2024. The update takes account of new technological and policy developments, ensuring the Principles remain robust and fit for purpose.

The Principles guide AI actors in their efforts to develop trustworthy AI and provide policymakers with recommendations for effective AI policies.

Countries use the OECD AI Principles and related tools to shape policies and create AI risk frameworks, building a foundation for global interoperability between jurisdictions. Today, the European Union, the Council of Europe, the United States, the United Nations and other jurisdictions use the OECD's definition of an AI system and its lifecycle in their legislative and regulatory frameworks and guidance. The principles, definition and lifecycle are all part of the OECD Recommendation on Artificial Intelligence.

Values-based principles

Recommendations for policy makers