AI Risk & Accountability
AI has risks and all actors must be accountable.
AI, Data & Privacy
Data and privacy are primary policy issues for AI.
Generative AI
Managing the risks and benefits of generative AI.
Future of Work
How AI can and will affect workers and working environments
AI Index
The OECD AI Index will be a synthetic measurement framework for trustworthy Artificial Intelligence (AI)
AI Incidents
To manage risks, governments must track and understand AI incidents and hazards.
Data Governance
Expertise on data governance to promote its safe and fair use in AI
Responsible AI
The responsible development, use and governance of human-centred AI systems
Innovation & Commercialisation
How to drive cooperation on AI and transfer research results into products
AI Compute & Climate
AI computing capacities and their environmental impact.
AI & Health
AI can help health systems overcome their most urgent challenges.
AI Futures
AI’s potential futures.
WIPS
Programme on Work, Innovation, Productivity and Skills in AI.
Catalogue of Tools & Metrics
Explore tools & metrics to build and deploy AI systems that are trustworthy.
AI Incidents Monitor
Gain valuable insights on global AI incidents.
OECD AI Principles
The first intergovernmental standard to promote innovative and trustworthy AI
Policy areas
Browse OECD work related to AI across policy areas.
Publications
Find OECD publications related to AI.
Videos
Watch videos about AI policy and the issues that matter most.
Context
AI is already a crucial part of most people’s daily routines.
What we do
Countries and stakeholder groups join forces to shape trustworthy AI.
Network of Experts
Experts from around the world advise the OECD and contribute to its work.
Partners
OECD.AI works closely with many partners.
AI21 utilised the OECD’s AI Principles to develop an AI Code of Conduct to promote the safe and responsible use of LLMs.
December 18, 2024 — 7 min read
Countries are establishing specialised bodies to evaluate AI systems’ capabilities and risks, and more.
July 29, 2024 — 7 min read
The OECD and G7 members are taking a crucial step towards fostering a safer and more trustworthy AI ecosystem.
July 23, 2024 — 3 min read
The study covers tools for trustworthy AI in the UK and the U.S. and prospects for future alignment.
June 20, 2024 — 4 min read
The principles now better address issues related to safety, privacy, intellectual property rights and information integrity.
May 20, 2024 — 5 min read
Complete and accurate definitions of AI incidents and hazards should capture all aspects of harm.
May 17, 2024 — 5 min read
Case studies from the UK’s Portfolio will be added to the OECD’s Catalogue to create a more complete resource.
April 11, 2024 — 3 min read
OECD countries agreed to define the characteristics of machines considered to be AI.
March 6, 2024 — 8 min read
Handling data subject access requests (DSARs) can be complicated, but there are ways to mitigate the challenges.
February 22, 2024 — 7 min read
AIM is the first step to producing evidence and foresight on AI incidents for sound policymaking and international cooperation.
November 14, 2023 — 4 min read
Clear definitions and collaboration can uphold European values.
September 7, 2023 — 8 min read
The future of AI in health could be like that of autonomous vehicles: the benefits are always five years away.
August 1, 2023 — 7 min read
We thus need to employ unprecedented safety measures to develop frontier AI systems.
July 5, 2023 — 7 min read
ChatGPT has become a household name thanks to its apparent benefits. At the same time, governments and other entities are taking action to contain potential risks. AI Language Models (LM) are at the h...
April 13, 2023 — 5 min read
It is crucial to develop a definition of general-purpose AI systems (GPAIS) that adequately identifies these technologies.
November 7, 2022 — 4 min read
Proof that artificial intelligence can dramatically accelerate scientific discovery and in turn benefit humanity.
August 31, 2022 — 3 min read
Organizations seeking to responsibly deploy AI systems face a complex and quickly evolving legal landscape.
May 4, 2022 — 4 min read
Developing a living repository that classifies AI systems designed to fight COVID-19.
April 6, 2022 — 8 min read
To produce high quality outputs, cows, like AI systems, require care, feeding, and surveillance.
March 2, 2022 — 6 min read
The classification framework can help policy makers understand and map the effects of AI systems on societies and economies.
February 17, 2022 — 4 min read
This is a first step towards defining the key components of algorithm auditing.
August 10, 2021 — 6 min read
Despite the COVID pandemic, we can look back on 2020 as a year of real progress towards understanding what is needed for the governance and regulation of AI.
January 6, 2021 — 7 min read
Find out about the OECD’s Framework for the Classification of AI Systems and follow the next steps.
November 24, 2020 — 5 min read
The German AI Observatory says that AI’s impact on work and society needs policies based on solid evidence.
September 10, 2020 — 7 min read