AI Risk & Accountability
AI has risks and all actors must be accountable.
AI, Data & Privacy
Data and privacy are primary policy issues for AI.
Generative AI
Managing the risks and benefits of generative AI.
Future of Work
How AI can and will affect workers and working environments
AI Index
The OECD AI Index will be a synthetic measurement framework for trustworthy Artificial Intelligence (AI).
AI Incidents
To manage risks, governments must track and understand AI incidents and hazards.
Data Governance
Expertise on data governance to promote its safe and fair use in AI.
Responsible AI
The responsible development, use and governance of human-centred AI systems
Innovation & Commercialisation
How to drive cooperation on AI and transfer research results into products
AI Compute & Climate
AI computing capacities and their environmental impact.
AI & Health
AI can help health systems overcome their most urgent challenges.
AI Futures
AI’s potential futures.
WIPS
Programme on Work, Innovation, Productivity and Skills in AI.
Catalogue of Tools & Metrics
Explore tools & metrics to build and deploy AI systems that are trustworthy.
AI Incidents Monitor
Gain valuable insights on global AI incidents.
OECD AI Principles
The first intergovernmental standard to promote innovative and trustworthy AI.
Policy areas
Browse OECD work related to AI across policy areas.
Publications
Find OECD publications related to AI.
Videos
Watch videos about AI policy and the issues that matter most.
Context
AI is already a crucial part of most people’s daily routines.
What we do
Countries and stakeholder groups join forces to shape trustworthy AI.
Network of Experts
Experts from around the world advise the OECD and contribute to its work.
Partners
OECD.AI works closely with many partners.
Exploring how to ensure AI systems are accountable to the public.
We invite our global community to join an interactive livestream of the full day of roundtable discussions.
November 20, 2024 — 4 min read
The Future Society puts forward three ways to structure the AISI Network to enhance collaboration.
November 18, 2024 — 8 min read
In 1980 and again in 1985, an international group of scientists came together in Villach, Austria, to discuss a concerning trend. The climate was warming, and human activity seemed to be part of the cause. ...
November 5, 2024 — 6 min read
The standards in the Catalogue will be continuously updated as new ones are published.
September 9, 2024 — 3 min read
Countries are establishing specialised bodies to evaluate AI systems’ capabilities and risks, and more.
July 29, 2024 — 7 min read
The OECD and partners are launching a public consultation on AI risk thresholds.
July 26, 2024 — 4 min read
Like financial audits, algorithm audits are becoming the norm in AI.
July 4, 2024 — 5 min read
The study covers tools for trustworthy AI in the UK and the U.S. and prospects for future alignment.
June 20, 2024 — 4 min read
The principles now better address issues related to safety, privacy, intellectual property rights and information integrity.
May 20, 2024 — 5 min read
Miscommunication and hype surrounding AI can lead to misunderstandings and hinder progress.
March 25, 2024 — 6 min read
Scraped data can advance social good and do harm. How do we get it right?
March 5, 2024 — 8 min read
Reliability engineering principles can be applied to AI systems to ensure safety and performance.
December 12, 2023 — 9 min read
Work is underway in the field of AI safety to avert existential risks.
January 26, 2023 — 5 min read