AI Risk & Accountability

Exploring how to ensure AI systems are accountable to the public.

Overview

AI continues to be a powerful engine of prosperity and progress. With this dynamic growth, however, come real risks of harm to people and the planet. Some risks are already materialising, including bias, discrimination, consumer manipulation, the polarisation of opinions, privacy infringement, and widespread surveillance. Businesses that develop and use AI systems are expected to respect human rights and international standards on responsible business conduct through due diligence. The Expert Group on Risk and Accountability explores interoperability and policy coherence among leading risk management frameworks and promotes accountability for harms linked to the use of AI.

High-level AI risk management interoperability framework: governing and managing risks throughout the lifecycle for trustworthy AI (from the report Common guideposts to promote interoperability in AI risk management)

Accountability framework

Artificial intelligence and responsible business conduct

As policymakers across OECD and partner countries integrate AI risk management into regulations and other market access requirements, proactively managing risks can help support the burgeoning AI industry and facilitate investment in responsible AI systems. The Expert Group on Risk and Accountability is developing practical guidance for companies in the AI value chain on conducting due diligence: identifying, preventing, mitigating and remedying harmful impacts related to AI systems. The guidance is based on the OECD Due Diligence Guidance for Responsible Business Conduct and rooted in the OECD AI Principles.