Australian Framework for Generative AI in Schools

Published on 20 March 2025, the AI and Cyber Risk Model Clauses initiative by BuyICT.gov.au makes available a collection of contractual clauses for inclusion in digital and ICT procurement contracts in Australia. The AI model clauses support responsible, ethical, and secure procurement of AI systems, including custom solutions, embedded AI products, and internal AI tools, promoting transparency and accountability.
Proposed in July 2025, this bill amends the Criminal Code Act 1995 to criminalise the use of technology, including AI, for the creation of child abuse material. It introduces new offences for downloading, accessing, supplying, or enabling access to such technology.
The paper outlines proposed options for mandatory guardrails as preventative measures that would require developers and deployers of high-risk AI to take specific steps across the AI lifecycle. The paper includes:
- a proposed definition of high-risk AI
- 10 proposed regulatory guardrails to reduce the likelihood of harms occurring from the development and deployment of AI systems
- regulatory options to mandate guardrails, building on current work to strengthen and clarify existing laws
Proposed in June 2024, and adopted later that year in August, this bill introduces new offences into the Criminal Code Act 1995 targeting the non-consensual transmission of sexual material, including deepfake content. It criminalises the use of carriage services to share sexual material depicting individuals aged 18 or older without their consent, regardless of whether the material is real, altered, or generated using digital technologies such as AI.
Enacted in September 2024 and updated in December 2025, this policy outlines a framework for the safe, ethical, and responsible use of artificial intelligence (AI) across government entities. The policy introduces principles under the "enable, engage, and evolve" framework, mandates transparency and accountability measures, and provides guidance on risk assessment and integration with existing governance structures.
The Voluntary AI Safety Standard gives practical guidance to all Australian organisations on how to safely and responsibly use and innovate with AI. The standard consists of 10 voluntary guardrails that apply to all organisations throughout the AI supply chain, including transparency and accountability requirements, and explains what developers and deployers of AI systems must do.
One of a series of position statements by Australia's independent online safety regulator on emerging tech trends and challenges. The statement outlines steps industry can take to prevent online safety risks of generative AI.
The NFSA is commencing development of responsible AI principles to guide the uptake and development of AI technologies in support of cultural institution operations, and is identifying best practices to share with the wider cultural and collecting sectors that preserve audiovisual collections and make them accessible.
The Australian Government's interim response to public consultation on Safe and responsible AI in Australia.