AI Risk & Accountability

Exploring how to ensure AI systems are accountable to the public.

Overview

AI continues to be a powerful engine driving prosperity and progress. This dynamic growth, however, brings real risks of harm to people and the planet. Risks are already materialising, including bias, discrimination, consumer manipulation, opinion polarisation, privacy infringement, and widespread surveillance. Businesses involved in the development and use of AI systems are expected to respect human rights and international standards on responsible business conduct through due diligence. The Expert Group on Risk and Accountability explores interoperability and policy coherence among leading risk management frameworks and promotes accountability for harms linked to the use of AI.

High-level AI risk management interoperability framework: Governing and managing risks throughout the lifecycle for trustworthy AI.
From the report: Common guideposts to promote interoperability in AI risk management

Accountability framework