Initiative overview
The goal of these guidelines is to guide Australian government agencies in using AI in ways that are effective, ethical, and aligned with public expectations - while also encouraging innovation and improving public services.
Our policy navigator is a living repository of AI policy initiatives from more than 80 jurisdictions and organisations. Use the filters to browse initiatives and find what you are looking for.
Proposed in July 2025, this bill amends the Criminal Code Act 1995 to criminalise the use of technology, including AI, for the creation of child abuse material. It introduces new offences for downloading, accessing, supplying, or enabling access to such technology.
The paper outlines proposed options for mandatory guardrails as preventative measures that would require developers and deployers of high-risk AI to take specific steps across the AI lifecycle. The paper includes:
- a proposed definition of high-risk AI
- 10 proposed regulatory guardrails to reduce the likelihood of harms occurring from the development and deployment of AI systems
- regulatory options to mandate guardrails, building on current work to strengthen and clarify existing laws
Proposed in June 2024, and adopted later that year in August, this bill introduces new offences into the Criminal Code Act 1995 targeting the non-consensual transmission of sexual material, including deepfake content. It criminalises the use of carriage services to share sexual material depicting individuals aged 18 or older without their consent, regardless of whether the material is real, altered, or generated using digital technologies such as AI.
The Voluntary AI Safety Standard gives practical guidance to all Australian organisations on how to safely and responsibly use and innovate with AI. The standard consists of 10 voluntary guardrails that apply to all organisations throughout the AI supply chain. They include transparency and accountability requirements across the supply chain, and explain what developers and deployers of AI systems must do.
One of a series of position statements by Australia's independent online safety regulator on emerging tech trends and challenges. The statement outlines steps industry can take to prevent online safety risks of generative AI.
The Australian Framework for Generative Artificial Intelligence (AI) in Schools (the Framework) is a set of six principles which are supported by 25 guiding statements. The Framework was developed in consultation with teachers, students, unions, industry, academics, and parent and school representative bodies from all sectors.
The NFSA is commencing development of responsible AI principles to guide the uptake and development of AI technologies in support of cultural institution operations, and is identifying best practices to share with the wider cultural and collecting sectors that preserve audiovisual collections and make them accessible.
The Australian Government's interim response to public consultation on Safe and responsible AI in Australia.
On 31 March 2023, the Australian Government began work to better understand how algorithms operate on digital platforms and the potential harms they may cause. The initiative aims to explore regulatory options to keep users safe and improve platform transparency. This effort will help inform policies and build expertise to manage the impact of AI-driven algorithms.