Evolving with innovation: The 2024 OECD AI Principles update

The 2024 OECD Ministerial Council Meeting marked a pivotal moment in the evolution of AI governance, adopting significant updates to the OECD Principles on Artificial Intelligence (AI). Established in 2019 as the first intergovernmental standard on AI, the Principles are foundational to innovative, trustworthy AI aligned with human rights and democratic values. With rapid advances in AI technologies, particularly in general-purpose and generative AI, the updated Principles now address emerging challenges with an enhanced focus on safety, privacy, intellectual property rights and information integrity. Now endorsed by 47 jurisdictions, including the European Union, the Principles provide a universal blueprint for navigating AI’s complexities in policy frameworks globally.

A more robust foundation for AI policymaking and global interoperability

The relevance of the OECD AI Principles has grown in this quickly evolving landscape. Technological advances and the integration of AI into virtually every sector over the past five years have made it difficult for governments to keep pace with new ethical challenges and risks. The Principles serve as a benchmark for responsible AI development and a critical checklist for addressing these rapid changes effectively, ensuring that AI continues to benefit society without compromising standards and safety.

Ethical standards and safety rely on international cooperation. A core aspect of the updates is the emphasis on interoperability, facilitated by the OECD’s definitions of an AI system and its lifecycle. As these definitions are adopted globally and integrated into AI legislation and regulation across jurisdictions, including the European Union, the Council of Europe, Japan, and the United States, they lay the groundwork for future interaction between jurisdictions. This harmonisation supports seamless global AI governance, allowing technologies to be managed effectively across borders under regulations that may differ but do not conflict.

In the current AI climate, where there often seems to be more speculation than certainty, the updated OECD AI Principles are grounded. They address the most pertinent issues: the creation and propagation of synthetically generated content amplified by AI, which can fuel the spread of misinformation and disinformation; the ethical use of AI in critical sectors such as the world of work; and the overarching need for robust safety protocols. These guidelines not only respond to existing challenges but also set a proactive framework to anticipate and mitigate future risks.

The OECD AI Principles have been instrumental in shaping AI governance globally

Their adoption across major economies and developing nations underscores their effectiveness in coordinating policy through shared values. This wide acceptance highlights the Principles’ flexibility and relevance, making them a cornerstone for international AI policy discussions.

The OECD’s reputation for data-driven analysis and its ability to bring together diverse stakeholders and build consensus make it an ideal platform for addressing the global challenges associated with AI technologies.

The OECD AI Principles’ comprehensive approach and broad consensus distinguish them from other frameworks. Unlike initiatives that may be more prescriptive or limited in scope, the OECD Principles are designed to be flexible and adaptable, essential for countries with diverse contexts to keep pace with the rapid development of AI technologies.

Non-binding principles work for everyone

Every country has a unique culture and socioeconomic makeup. The non-binding nature of the OECD AI Principles allows governments to adapt and tailor implementation across different national contexts at a manageable pace. This flexibility ensures that countries can integrate these guidelines into their national frameworks in a manner that respects local conditions and capabilities while still adhering to a global standard. The OECD’s monitoring helps track AI policy development and facilitates knowledge-sharing between countries.

While the Principles themselves are non-binding, their adoption represents a strong commitment by governments to implement them in their national frameworks. This implementation often involves adapting existing legal frameworks or creating new regulations that uphold the Principles.

Addressing real AI risks and ethical considerations

The OECD AI Principles address tangible AI risks in areas like cybersecurity, misuse, and privacy violations rather than hypothetical scenarios, which the OECD addresses in other ways, such as horizon scanning to understand likely medium-term scenarios. This grounded approach ensures the Principles are actionable and relevant to current challenges.

Following the same logic, the OECD opted for specific terms like “human-centred values, fairness, transparency, and explainability” instead of the broader concept of “ethics”. Ethics are linked to values, which change across cultures due to different beliefs, traditions, and norms, rendering the term subjective. Moreover, the terms chosen for the AI Principles reflect a commitment to actionable, clear standards rather than abstract ethical considerations.

From principles to practice: OECD.AI has tools for that

The Principles alone cannot shape trustworthy AI, even if crafted to match AI’s expansive breadth. The OECD.AI Policy Observatory offers a wealth of tools and expertise from the OECD, its members, partners, and stakeholders to sculpt policies for trustworthy AI. Guided by the insights of the OECD AI Network of Experts, which comprises AI specialists from diverse sectors, OECD.AI is at the forefront of shaping global AI governance. It is worth looking at some of the Observatory’s most relevant tools and content:

  • OECD.AI Country policy dashboards provide the largest repository of AI initiatives and policies from over seventy governments worldwide.
  • Live data on crucial topics from our partners help policymakers develop, implement, and refine AI policies. These data offer a wide range of AI-related visualisations for comparative analysis, monitoring, and best practice development.
  • AI Incidents Monitor (AIM) is a source for AI incidents and hazards as they appear in the global press. This information provides insights into risks and aids in establishing a collective understanding of common AI harms and trustworthy AI.
  • The generative AI resources portal covers generative AI’s benefits, risks, and evolving aspects, offering tools and resources for a comprehensive understanding from a policy perspective. 
  • The OECD Catalogue of Tools & Metrics for AI is a resource that centralises diverse tools, mechanisms and practices to help all actors ensure AI’s trustworthiness.

Look for the Principles in forthcoming regulations

The updated Principles strongly emphasise safety, risk management, misinformation and disinformation, information integrity, transparency, environmental sustainability and interoperable governance. These focus areas ensure that the Principles are comprehensive and address the multifaceted nature of AI technology and its presence in our lives.

As adherents put the Principles into practice, international dialogues in different configurations will continue to refine and enhance guidance based on emerging needs and technologies. But it is how adherent countries interpret and integrate the Principles into national AI strategies and regulations that will shape trustworthy AI in critical areas.

National regulation and international dialogue will be crucial to shaping the global discourse on trustworthy AI.



Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.