AI Risk & Accountability
AI has risks and all actors must be accountable.
AI, Data & Privacy
Data and privacy are primary policy issues for AI.
Generative AI
Managing the risks and benefits of generative AI.
Future of Work
How AI can and will affect workers and working environments.
AI Index
The OECD AI Index will be a synthetic measurement framework for trustworthy artificial intelligence (AI).
AI Incidents
To manage risks, governments must track and understand AI incidents and hazards.
Data Governance
Expertise on data governance to promote its safe and fair use in AI.
Responsible AI
The responsible development, use and governance of human-centred AI systems.
Innovation & Commercialisation
How to drive cooperation on AI and transfer research results into products.
AI Compute & Climate
AI computing capacities and their environmental impact.
AI & Health
AI can help health systems overcome their most urgent challenges.
AI Futures
AI’s potential futures.
WIPS
Programme on Work, Innovation, Productivity and Skills in AI.
Catalogue of Tools & Metrics
Explore tools & metrics to build and deploy AI systems that are trustworthy.
AI Incidents Monitor
Gain valuable insights on global AI incidents.
OECD AI Principles
The first intergovernmental standard to promote innovative and trustworthy AI.
Policy areas
Browse OECD work related to AI across policy areas.
Publications
Find OECD publications related to AI.
Videos
Watch videos about AI policy and the issues that matter most.
Context
AI is already a crucial part of most people’s daily routines.
What we do
Countries and stakeholder groups join forces to shape trustworthy AI.
Network of Experts
Experts from around the world advise the OECD and contribute to its work.
Partners
OECD.AI works closely with many partners.
AI21 utilised the OECD’s AI Principles to develop an AI Code of Conduct to promote the safe and responsible use of LLMs.
December 18, 2024 — 7 min read
We invite our global community to join an interactive livestream of the full day of roundtable discussions.
November 20, 2024 — 4 min read
The Future Society puts forward three ways to structure the AISI Network to enhance collaboration.
November 18, 2024 — 8 min read
In 1980 and again in 1985, an international group of scientists came together in Villach, Austria, to discuss a concerning trend. The climate was heating, and human activity seemed part of the cause. ...
November 5, 2024 — 6 min read
In May 2024, the OECD assembled a time-limited Expert Group on AI for Health.
September 13, 2024 — 4 min read
The OECD and partners are launching a public consultation on AI risk thresholds.
July 26, 2024 — 4 min read
Like financial audits, algorithm audits are becoming the norm in AI.
July 4, 2024 — 5 min read
Six crucial policy considerations to harmonise AI’s advancements with privacy principles.
June 26, 2024 — 4 min read
The study covers tools for trustworthy AI in the UK and the US and prospects for future alignment.
June 20, 2024 — 4 min read
Responsible AI principles are vital to African economic policymaking but need to be tailored.
June 19, 2024 — 7 min read
A look at the risks and opportunities of using AI in tax administrations.
June 14, 2024 — 7 min read
This text initially appeared as a blog post on gpai.ai, coauthored by Yann Dietrich, Catherine Stihler, and Lea Daun. Abstract With the rise of generative AI and other innovations, focus has increased...
June 12, 2024 — 7 min read
Germany stands at a crucial juncture in its journey towards becoming a global leader in AI.
June 11, 2024 — 4 min read
The principles now better address issues related to safety, privacy, intellectual property rights and information integrity.
May 20, 2024 — 5 min read
Complete and accurate definitions of AI incidents and hazards should capture all aspects of harm.
May 17, 2024 — 5 min read
Canada’s journey to close its AI compute gap is not just an innovation challenge; it’s a societal imperative.
April 16, 2024 — 7 min read
Case studies from the UK’s Portfolio will be added to the OECD’s Catalogue to create a more complete resource.
April 11, 2024 — 3 min read
Scraped data can advance social good and do harm. How do we get it right?
March 5, 2024 — 8 min read
Handling DSARs can be complicated, but there are ways to mitigate the challenges.
February 22, 2024 — 7 min read
Governments worldwide can benefit from AI and privacy communities working together to achieve common goals.
February 21, 2024 — 7 min read
The Challenger disaster shows us what can happen if we do not have a strong safety culture.
February 5, 2024 — 6 min read
This text initially appeared as a blog post on gpai.ai, coauthored by Borys Stokalski, Bogumił Kamiński, Daniel Kaszyński Abstract This article discusses the need for policymakers, AI investors, and A...
February 5, 2024 — 9 min read
Marrying AI with DLT or blockchain opens avenues for enhanced security, data integrity, and reliability.
January 17, 2024 — 3 min read
In 2023, OECD.AI helped members and other countries shape AI policy. Here’s a review of our most significant achievements.
January 9, 2024 — 3 min read
The United States is addressing AI holistically by focusing on the potential of AI to boost prosperity and overcome major societal challenges.
December 20, 2023 — 9 min read
Comprehensive audits could be essential for compliance with some regulations.
December 19, 2023 — 7 min read
Reliability engineering principles can be applied to AI systems to ensure safety and performance.
December 12, 2023 — 9 min read
These acts will work with the EU AI Act and data regulation to better address digital platforms and their associated algorithms.
December 7, 2023 — 9 min read
Air pollution and carbon emissions are well-known environmental costs of AI. But water consumption is also an issue.
November 30, 2023 — 8 min read
AIM is the first step to producing evidence and foresight on AI incidents for sound policymaking and international cooperation.
November 14, 2023 — 4 min read
Society must meet the AI data challenge.
October 19, 2023 — 7 min read
The threat of AI attacks is not only real. It is pervasive and present.
October 5, 2023 — 8 min read
The convergence of AI and blockchain technologies in healthcare holds immense promise.
September 20, 2023 — 5 min read
Clear definitions and collaboration can uphold European values.
September 7, 2023 — 8 min read
Generative AI is helping Indians to obtain legal assistance, advice for agriculture and more.
August 25, 2023 — 6 min read
BigCode can empower the machine learning and open source communities through open governance.
August 8, 2023 — 7 min read
All actors of the AI ecosystem must work together to reduce data’s CO2 footprint.
August 7, 2023 — 7 min read
The future of AI in health could be like that of autonomous vehicles: the benefits are always five years away.
August 1, 2023 — 7 min read
When Brazil discusses AI and work, diversity has a seat at the table.
July 27, 2023 — 7 min read
The EU could mitigate the risks of systemic foundation models and exploit their opportunities.
July 20, 2023 — 15 min read
DeepMind explores governance models to manage frontier AI development.
July 13, 2023 — 2 min read
Join the open discussion on the potential benefits and risks of AI.
July 12, 2023 — 4 min read
Developing frontier AI systems requires unprecedented safety measures.
July 5, 2023 — 7 min read
Synthetic data could help reduce criminal activity without the privacy trade-off.
May 25, 2023 — 7 min read
AI systems may discriminate due to facial differences, gestures, gesticulation, speech impairment, or different communication patterns.
May 17, 2023 — 7 min read
Holistic AI argues that businesses using AI should adopt an AI risk management framework.
May 16, 2023 — 8 min read
ChatGPT has become a household name thanks to its apparent benefits. At the same time, governments and other entities are taking action to contain potential risks. AI Language Models (LM) are at the h...
April 13, 2023 — 5 min read
This vlog series covers the basic concept of web3 and its components – blockchain and AI technologies.
March 9, 2023 — 4 min read
Work is underway in the field of AI safety to avoid existential risks.
January 26, 2023 — 5 min read
Business at the OECD shares practical lessons for businesses and regulators from seven case studies.
April 19, 2022 — 6 min read
Here are some near-term opportunities for the OECD’s AI work to influence the financial services industry.
March 9, 2022 — 3 min read
To produce high-quality outputs, cows, like AI systems, require care, feeding, and surveillance.
March 2, 2022 — 6 min read
The classification framework can help policymakers understand and map the effects of AI systems on societies and economies.
February 17, 2022 — 4 min read
To date, many governments do not know how much compute capacity they need to achieve their AI goals.
February 8, 2022 — 6 min read
What CSET and OECD learned by comparing how frameworks can guide the human classification of AI systems.
December 14, 2021 — 7 min read
This is a first step towards defining the key components of algorithm auditing.
August 10, 2021 — 6 min read
We need a better understanding of the role data trusts can play in data stewardship and of the operational strategies that can put data trust methods into practice.
August 3, 2021 — 4 min read
We are pleased to share an update on our work and to extend an invitation to get involved, including two new tenders.
June 9, 2021 — 4 min read
The framework helps AI practitioners determine which tool fits their use case and how well it supports the OECD AI Principles for trustworthy AI.
May 25, 2021 — 6 min read
Singapore takes a multi-stakeholder approach to preparing its citizens and businesses for AI. It even has an AI escape-room game.
May 12, 2021 — 7 min read
Algorithm regulatory authorities could help ensure that the “gatekeepers” act for the common good.
March 11, 2021 — 7 min read
Despite the COVID pandemic, 2020 was a year of real progress towards understanding what is needed in the governance and regulation of AI.
January 6, 2021 — 7 min read
The screening is followed by a debate with some of the brightest minds in AI.
November 17, 2020 — 1 min read
The report, now in its third year, is an independent initiative within Stanford University’s Human-Centered Artificial Intelligence Institute.
July 8, 2020 — 4 min read