The OECD.AI Policy Navigator

Our policy navigator is a living repository of AI policy initiatives from more than 80 jurisdictions and organisations. Use the filters to browse initiatives and find what you are looking for.

Mandatory Guardrails for Safe and Responsible AI


Added by:   National contact point
Added on:   09 Jul 2025
Updated by:   OECD analyst
Updated on:   25 Dec 2025

The paper outlines proposed options for mandatory guardrails as preventative measures that would require developers and deployers of high-risk AI to take specific steps across the AI lifecycle. The paper includes:

  • a proposed definition of high-risk AI
  • 10 proposed regulatory guardrails to reduce the likelihood of harms occurring from the development and deployment of AI systems
  • regulatory options to mandate guardrails, building on current work to strengthen and clarify existing laws

Name in original language

Mandatory Guardrails for Safe and Responsible AI

Initiative overview

The potential for Artificial Intelligence (AI) to improve social and economic well-being is immense. AI development and deployment is accelerating and is already permeating institutions, infrastructure, products and services, often undetected by those engaging with it.

The Australian Government's consultations on safe and responsible AI have shown that the current regulatory system is not fit for purpose to respond to the distinct risks that AI poses. Internationally, governments are reforming existing regulations and introducing new regulations to address the risks of AI, with a focus on creating preventative, risk-based guardrails that apply across the AI supply chain and throughout the AI lifecycle. To unlock innovative uses of AI, a modern and effective regulatory system is needed.

In its interim response to the Safe and Responsible AI in Australia discussion paper, the Australian Government committed to developing a regulatory environment that builds community trust and promotes AI adoption. The guardrails outlined complement those in the Voluntary AI Safety Standard and set clear expectations from the Australian Government on how to use AI safely and responsibly. They aim to address risks and harms from AI, build public trust and provide businesses with greater regulatory certainty. Implementing the Voluntary AI Safety Standard now will help businesses start to develop the practices required in a future regulatory environment.

Consultation on the proposed mandatory guardrails is now closed. The Government is now considering the feedback received and next steps.

Name of responsible organisation (in English)

Department of Industry, Science and Resources

About the policy initiative


Organisation:

  • Department of Industry, Science and Resources

Category:

  • AI policy initiatives, programmes and projects

Initiative type:

  • Other AI policy initiatives, programmes and projects

Status:

  • Proposed or under development

Start Year:

  • 2024

Binding:

  • Binding
