The OECD.AI Policy Navigator
Our policy navigator is a living repository of AI policy initiatives from more than 80 jurisdictions and organisations. Use the filters to browse initiatives and find what you are looking for.
AI and Cyber Risk Model Clauses
About the policy initiative
Category:
- Regulations, guidelines and standards
Initiative type:
- Guidance document (instructions on how to implement a law, regulation, policy or other rule)
Status:
- Active
Start Year:
- 2025
Criminal Code Amendment (Using Technology to Generate Child Abuse Material) Bill 2025
Proposed in July 2025, this bill amends the Criminal Code Act 1995 to criminalise the use of technology, including AI, for the creation of child abuse material. It introduces new offences for downloading, accessing, supplying, or enabling access to such technology.
Mandatory Guardrails for Safe and Responsible AI
The paper outlines proposed options for mandatory guardrails as preventative measures that would require developers and deployers of high-risk AI to take specific steps across the AI lifecycle. The paper includes:
- a proposed definition of high-risk AI
- 10 proposed regulatory guardrails to reduce the likelihood of harms occurring from the development and deployment of AI systems
- regulatory options to mandate guardrails, building on current work to strengthen and clarify existing laws
Criminal Code Amendment (Deepfake Sexual Material) Bill 2024
Proposed in June 2024, and adopted later that year in August, this bill introduces new offences into the Criminal Code Act 1995 targeting the non-consensual transmission of sexual material, including deepfake content. It criminalises the use of carriage services to share sexual material depicting individuals aged 18 or older without their consent, regardless of whether the material is real, altered, or generated using digital technologies such as AI.
Policy for the responsible use of AI in government
Enacted in September 2024 and updated in December 2025, this policy outlines a framework for the safe, ethical, and responsible use of artificial intelligence (AI) across government entities. The policy introduces principles under the "enable, engage, and evolve" framework, mandates transparency and accountability measures, and provides guidance on risk assessment and integration with existing governance structures.
Voluntary AI Safety Standard
The Voluntary AI Safety Standard gives practical guidance to all Australian organisations on how to safely and responsibly use and innovate with AI. The standard consists of 10 voluntary guardrails that apply to all organisations throughout the AI supply chain, including transparency and accountability requirements, and explains what developers and deployers of AI systems must do.
Tech Trends Position Statement: Generative AI
One of a series of position statements by Australia's independent online safety regulator on emerging tech trends and challenges. The statement outlines steps industry can take to prevent online safety risks of generative AI.
Australian Framework for Generative AI in Schools
The Australian Framework for Generative Artificial Intelligence (AI) in Schools (the Framework) is a set of six principles which are supported by 25 guiding statements. The Framework was developed in consultation with teachers, students, unions, industry, academics, and parent and school representative bodies from all sectors.
Responsible AI Principles for Audiovisual Collections
The NFSA is commencing development of responsible AI principles to guide the uptake and development of AI technologies in support of cultural institution operations, and to identify best practices to share with the wider cultural and collecting sectors that preserve and make audiovisual collections accessible.
Safe and Responsible AI in Australia: Interim Response
The Australian Government's interim response to public consultation on Safe and responsible AI in Australia.