AI Risk & Accountability
AI has risks and all actors must be accountable.
AI, Data & Privacy
Data and privacy are primary policy issues for AI.
Generative AI
Managing the risks and benefits of generative AI.
Future of Work
How AI can and will affect workers and working environments
AI Index
The OECD AI Index will be a synthetic measurement framework on trustworthy Artificial Intelligence (AI)
AI Incidents
To manage risks, governments must track and understand AI incidents and hazards.
Data Governance
Expertise on data governance to promote its safe and fair use in AI
Responsible AI
The responsible development, use and governance of human-centred AI systems
Innovation & Commercialisation
How to drive cooperation on AI and transfer research results into products
AI Compute & Climate
AI computing capacities and their environmental impact.
AI & Health
AI can help health systems overcome their most urgent challenges.
AI Futures
AI’s potential futures.
WIPS
Programme on Work, Innovation, Productivity and Skills in AI.
Catalogue of Tools & Metrics
Explore tools & metrics to build and deploy AI systems that are trustworthy.
AI Incidents Monitor
Gain valuable insights on global AI incidents.
OECD AI Principles
The first IGO standard to promote innovative and trustworthy AI
Policy areas
Browse OECD work related to AI across policy areas.
Publications
Find OECD publications related to AI.
Videos
Watch videos about AI policy and the issues that matter most.
Context
AI is already a crucial part of most people’s daily routines.
What we do
Countries and stakeholder groups join forces to shape trustworthy AI.
Network of Experts
Experts from around the world advise the OECD and contribute to its work.
Partners
OECD.AI works closely with many partners.
In 1980 and again in 1985, an international group of scientists came together in Villach, Austria, to discuss a concerning trend. The climate was warming, and human activity seemed part of the cause. ...
November 5, 2024 — 6 min read
Grand pronouncements about artificial intelligence appear daily: AI will produce autocrats and demolish democracies. Algorithms are replacing workers and do not lead to prosperity. Data flows remain a...
October 30, 2024 — 7 min read
Artificial Intelligence (AI) has a profound effect on societies around the globe. Its application improves the lives of many, but it can also increase inequities. To safeguard against AI’s negative im...
October 4, 2024 — 6 min read
In May 2024, the OECD assembled a time-limited Expert Group on AI for Health.
September 13, 2024 — 4 min read
The survey will inform OECD work.
September 11, 2024 — 1 min read
The standards in the Catalogue will be continuously updated as new ones are published.
September 9, 2024 — 3 min read
This report outlines AI’s benefits, risks and policy issues for governments.
August 27, 2024 — 3 min read
Countries are establishing specialised bodies to evaluate AI systems’ capabilities and risks, and more.
July 29, 2024 — 7 min read
The principles now better address issues related to safety, privacy, intellectual property rights and information integrity.
May 20, 2024 — 5 min read
Complete and accurate definitions of AI incidents and hazards should capture all aspects of harm.
May 17, 2024 — 5 min read
Can we maintain the integrity of copyright laws and ensure fair compensation for human creators?
March 29, 2024 — 7 min read
OECD countries agreed to define the characteristics of machines considered to be AI.
March 6, 2024 — 8 min read
Governments worldwide can benefit from AI and privacy communities working together to achieve common goals.
February 21, 2024 — 7 min read
In 2023, OECD.AI helped members and other countries shape AI policy. Here’s a review of our most significant achievements.
January 9, 2024 — 3 min read
These acts will work with the EU AI Act and data regulation to better address digital platforms and associated algorithms.
December 7, 2023 — 9 min read
Transparency about AI-generated content is a vital objective for the health of society.
December 4, 2023 — 6 min read
We want to respond to the interest our community has shown in the definition with a short explanation.
November 29, 2023 — 4 min read
AIM is the first step to producing evidence and foresight on AI incidents for sound policymaking and international cooperation.
November 14, 2023 — 4 min read
Four years on, how are countries putting the OECD AI Principles into practice?
October 31, 2023 — 9 min read
How will AI developments impact political, judicial, and legislative action?
October 26, 2023 — 7 min read
Society must meet the AI data challenge.
October 19, 2023 — 7 min read
The threat of AI attacks is not only real. It is pervasive and present.
October 5, 2023 — 8 min read
We need better tools and clear regulations to reduce AI’s carbon footprint.
August 11, 2023 — 8 min read
When Brazil discusses AI and work, diversity has a seat at the table.
July 27, 2023 — 7 min read
With common governance systems, the roundtable was an opportunity to discuss a harmonized strategy.
July 25, 2023 — 5 min read
The EU could mitigate risks of systemic foundation models and exploit opportunities.
July 20, 2023 — 15 min read
Join us in building a global competitive challenge to promote trust.
July 19, 2023 — 3 min read
DeepMind explores governance models to manage frontier AI development.
July 13, 2023 — 2 min read
Join the open discussion on the potential benefits and risks of AI.
July 12, 2023 — 4 min read
As AI developments soar, who owns and has the rights to AI-generated work?
June 6, 2023 — 7 min read
Regulatory sandboxes are a dynamic, evidence-based approach to regulation.
May 31, 2023 — 5 min read
Synthetic data could help reduce criminal activity without the privacy trade-off.
May 25, 2023 — 7 min read
AI systems may discriminate due to facial differences, gestures, gesticulation, speech impairment, or different communication patterns.
May 17, 2023 — 7 min read
Listen to our interview with Changpeng Zhao, CZ, about the OECD AI Principles and regulation.
April 25, 2023 — 6 min read
A global AI learning campaign would foster greater human-centric AI development and empower decision-making.
April 19, 2023 — 8 min read
ChatGPT has become a household name thanks to its apparent benefits. At the same time, governments and other entities are taking action to contain potential risks. AI Language Models (LM) are at the h...
April 13, 2023 — 5 min read
Canada could look to international initiatives for guidance.
March 1, 2023 — 5 min read
As enforcement expands, the question becomes how best to build more capacity.
January 9, 2023 — 7 min read
It is crucial to develop a definition of GPAIS that adequately identifies these technologies.
November 7, 2022 — 4 min read
Governments are right to require audits for bias in recruiting algorithms. But what about transparency, safety and privacy?
September 27, 2022 — 7 min read
Thailand’s main challenges will be to develop the human capacity and skills for an AI ecosystem.
August 15, 2022 — 6 min read
As trade’s reliance on AI grows, so does AI’s reliance on appropriate trade policies. Here is why.
July 21, 2022 — 5 min read
Korea has emerged as an AI technology and innovation driver, and a leader in international cooperation around trustworthy AI.
March 10, 2022 — 4 min read
To produce high-quality outputs, cows, like AI systems, require care, feeding, and surveillance.
March 2, 2022 — 6 min read
The classification framework can help policy makers understand and map the effects of AI systems on societies and economies.
February 17, 2022 — 4 min read
To date, many governments do not know how much compute capacity they need to achieve their AI goals.
February 8, 2022 — 6 min read
What CSET and OECD learned by comparing how frameworks can guide the human classification of AI systems.
December 14, 2021 — 7 min read
Gender (dis)parities between countries are expected, but the countries with the widest and narrowest gaps may surprise you.
December 9, 2021 — 8 min read
How we structure policies can determine whether AI moves the world towards shared prosperity or income polarization.
October 6, 2021 — 7 min read
This is a first step towards defining the key components of algorithm auditing.
August 10, 2021 — 6 min read
This document features some of the ideas this GPAI Working Group developed in 2021.
July 30, 2021 — 3 min read
The AI Observatory has been crucial for Colombia, a leader in the development of trustworthy AI.
July 21, 2021 — 8 min read
For AI soft law to become an increasingly effective and credible tool, programs must integrate incentives and mechanisms.
July 13, 2021 — 8 min read
We are pleased to share an update on our work and to extend an invitation to get involved, including two new tenders.
June 9, 2021 — 4 min read
For Germany, AI technologies are not relevant for their own sake, but as a means for making lives better.
May 18, 2021 — 8 min read
Singapore takes a multi-stakeholder approach to preparing its citizens and businesses for AI. It even has an AI escape-room game.
May 12, 2021 — 7 min read
Canada’s AI strategy emphasizes responsible innovation and economic growth, grounded in human rights, inclusion and diversity. First in a five-part series on national AI policies.
May 4, 2021 — 16 min read
A video series, where policymakers from Egypt, Singapore, Canada, Germany and the United Kingdom showcase their AI strategies.
April 28, 2021 — 4 min read
Fair and unbiased decisions are good for the individuals involved, but also for business and society.
April 8, 2021 — 7 min read
Algorithm regulatory authorities could help ensure that the “gatekeepers” act for the common good.
March 11, 2021 — 7 min read
Despite the COVID pandemic, 2020 was a year of real progress towards understanding what is needed in the governance and regulation of AI.
January 6, 2021 — 7 min read
Find out about the OECD’s Framework for the Classification of AI Systems and follow the next steps.
November 24, 2020 — 5 min read
The screening is followed by a debate with some of the brightest minds in AI.
November 17, 2020 — 1 min read
The German AI Observatory says that AI’s impact on work and society needs policies based on solid evidence.
September 10, 2020 — 7 min read
Governments can help balance opportunities and risks linked to AI.
July 29, 2020 — 4 min read
The report, now in its third year, is an independent initiative within Stanford University’s Human-Centered Artificial Intelligence Institute.
July 8, 2020 — 4 min read
To sustain a trusted AI ecosystem, Singapore has to be proactive about providing guidance for AI and responding to industry realities.
June 24, 2020 — 5 min read
The American AI Initiative supports AI innovation.
June 11, 2020 — 5 min read