The OECD’s new responsible AI guidance: A compass for businesses in complex terrain

Companies hoping to take advantage of AI’s opportunities need to be trustworthy. Whether they invest in, develop, or use AI, the OECD’s new Due Diligence Guidance for Responsible AI gives businesses an internationally agreed, government-backed tool to demonstrate that markets and societies can trust their AI systems.
Recent international reporting underscores a growing consensus: AI is not just a technological shift. It is a major geopolitical, economic, and societal phenomenon that demands coordinated action amongst all actors, including companies.
AI has the potential to transform society through productivity gains, economic value, and solutions to complex challenges, but for these benefits to materialise, AI needs trust. So far, the technology seems to advance faster than its guardrails. The gap between AI systems and appropriate safeguards is now one of the defining challenges for policymakers and global businesses alike. Both are under pressure to balance AI innovation and diffusion with safety and risk management, and success depends on getting that balance right.
Risks throughout the AI value chain are continually evolving
Risks to people and the environment can manifest at any point along the AI value chain. The OECD actively tracks and categorises risks through its AI Incidents and Hazards Monitor.
Here are a few examples. At one end of the AI value chain are the people who label, clean, and moderate the vast datasets required to train AI models. They can face low wages and long hours, and can suffer psychological distress from exposure to harmful content. Companies need to ensure decent work for these data enrichment workers.
The environmental costs of running AI systems can also be significant. The data centres that power AI development and deployment consume large amounts of energy and water, and their growing demand may push up energy prices.
Data privacy is another critical concern. AI models are trained on massive datasets that may include personal or sensitive information. If these datasets are not properly anonymised and secured, data breaches can follow. And if AI models “memorise” and reproduce sensitive data in their outputs, they can expose confidential details, creating legal and ethical dilemmas.
At the other end of the AI value chain, the potential for AI misuse poses risks such as reputational harm and the spread of misinformation. AI-generated deepfakes, for instance, can be used to create realistic but fabricated content, damaging reputations or manipulating public opinion. Similarly, AI can be used to generate and disseminate mis- and disinformation at speed and scale, eroding trust in institutions and potentially influencing events.
Worldwide, governments, consumers, and markets are calling for responsible and trustworthy AI. This is one of the reasons for the surge in mandatory and voluntary AI risk management frameworks, responsible AI initiatives, global agreements, academic research, and statements from industry leaders and investors. However, this surge in frameworks also increases complexity for companies, as risk management is defined differently across jurisdictions and understanding of AI-related risks continues to evolve.
OECD Due Diligence Guidance for Responsible AI: A flexible, whole-of-value-chain approach to support businesses in navigating evolving risks and rules
This is why the OECD has developed the first internationally agreed, government-backed Due Diligence Guidance for Responsible AI. Endorsed by all OECD member countries, plus 17 partner governments and the EU, the Guidance helps enterprises navigate the complex terrain of AI risk management. It is designed to help businesses ensure that their AI systems are trustworthy, developed and used safely and responsibly, and aligned with broad societal values.
Concretely, this Guidance offers:
- A step-by-step framework for enterprises to set up internal management systems capable of proactively identifying and responding to risks related to human rights, labour standards, and environmental impacts.
- Comprehensive coverage of the risk areas identified in the leading international standards on which it builds, notably the OECD Guidelines for Multinational Enterprises on Responsible Business Conduct (MNE Guidelines) and the OECD Recommendation on Artificial Intelligence (AI Principles).
- Recommendations and implementation examples for every actor in the AI value chain, from data suppliers and infrastructure providers to financiers and end-users, including enterprises. The Guidance emphasises a “whole-of-value-chain” approach to build secure, resilient AI value chains that are more resistant to supply chain shocks and interference.
- A roadmap of related provisions in existing AI risk management frameworks, indicating how each step of the Guidance complements and relates to them. This helps enterprises understand how implementing the Guidance can satisfy expectations from multiple sources and navigate the current landscape of frameworks.
Responsibility and trust can give a competitive edge
Responsibility and innovation not only coexist but reinforce each other. Companies that demonstrate a commitment to responsible AI and actively address potential risks can earn the trust of investors, customers, regulators, and policymakers, and that trust translates into a competitive edge. Rather than hindering innovation, responsible AI practices can accelerate growth by removing obstacles and preventing costly reputational damage, legal disputes, and harm to society.
Responsible and trustworthy AI is becoming increasingly crucial for accessing global markets as international regulatory and voluntary risk management frameworks evolve. Companies in the AI value chain that meaningfully implement the Guidance’s recommendations can position themselves advantageously for cross-border expansion, potentially avoiding the substantial costs of retrofitting systems to meet various regional requirements.
As AI continues to develop rapidly, frameworks and best practices for responsible AI are likely to evolve as well. To help stakeholders keep pace, the OECD will launch an online navigation tool later this year with updates on new frameworks and use cases.