Mandatory Guardrails for Safe and Responsible AI
Our policy navigator is a living repository of initiatives from more than 80 jurisdictions and organisations. Use the filters to browse initiatives and find what you are looking for.

The Voluntary AI Safety Standard gives practical guidance to all Australian organisations on how to use and innovate with AI safely and responsibly. The standard consists of 10 voluntary guardrails that apply to all organisations throughout the AI supply chain. They include transparency and accountability expectations across the supply chain and explain what developers and deployers of AI systems should do.
On 31 March 2023, the Australian Government began work to better understand how algorithms operate on digital platforms and the potential harms they may cause. The initiative aims to explore regulatory options to keep users safe and improve platform transparency. This effort will help inform policies and build expertise to manage the impact of AI-driven algorithms.
The Australian Government's interim response to public consultation on Safe and responsible AI in Australia.
The NFSA is commencing development of responsible AI principles to guide the uptake and development of AI technologies in service of cultural institution operations, and is identifying best practices to share with the wider cultural and collecting sectors that preserve audiovisual collections and make them accessible.
The Australian Framework for Generative Artificial Intelligence (AI) in Schools (the Framework) is a set of six principles which are supported by 25 guiding statements. The Framework was developed in consultation with teachers, students, unions, industry, academics, and parent and school representative bodies from all sectors.
One of a series of position statements by Australia's independent online safety regulator on emerging tech trends and challenges. The statement outlines steps industry can take to prevent online safety risks of generative AI.
Policy paper, subsequently endorsed by the Infrastructure and Transport Ministers' Meeting in February 2022. The paper outlines the proposed end-to-end regulatory framework for the commercial deployment of automated vehicles in Australia. The framework's design represents the culmination of five years of work led by the NTC on national reforms for automated vehicles, and was developed in consultation with governments and industry.
The AFP Technology Strategy outlines the organisation's direction and the capability it needs to acquire to achieve its critical outcomes. The strategy provides a high-level roadmap that identifies the changes and actions required to achieve these outcomes. It consists of six strategic shifts in technology use, including AI-enabled data analysis.
The NSW AI Assurance Framework is a set of requirements established by the New South Wales Government to ensure AI systems used in public services are safe and fair. It helps government staff use AI responsibly and protects the public from risks. The initiative was created to build trust in AI and make sure it works well for everyone.