- Department of Industry, Science and Resources (DISR)
- Data61
- Australia has developed an AI ethics framework.
- To draft, through public consultations, an ethical framework and define principles to be considered throughout the design and use of an AI system.
- To ensure that AI applications adhere to the principles of: Human, social and environmental wellbeing; Human-centred values; Fairness; Privacy protection and security; Reliability and safety; Transparency and explainability; Contestability; and Accountability.
- Inclusive growth, sustainable development and well-being
- Human-centred values and fairness
- Transparency and explainability
- Robustness, security and safety
- Accountability
- Fostering a digital ecosystem for AI
- Competition
- Corporate governance
- Development
- Digital economy
- Environment
- Public governance
- Social and welfare issues
- Less than 1M
- In April 2019, the Australian government launched a consultation on an ethical framework to help mitigate the risks accompanying AI technologies. The consultation centred on a discussion paper, Artificial Intelligence: Australia’s Ethics Framework, by the Commonwealth Scientific and Industrial Research Organisation’s (CSIRO) digital innovation wing, Data61. Additional consultation and expert workshops were held after the public consultation period.
Components of Australia's AI ethics framework as of November 2020:
- A set of voluntary AI Ethics Principles to encourage organisations using AI systems to strive for the best outcomes for Australia and Australians.
- How and when to apply the AI Ethics Principles.
- How the framework and principles were developed with business, academia and the community.
- AI ethics in context.
- No
- No
- N.A
- No
- N.A
- N.A
- N.A
Regulatory oversight and ethical advice bodies
- Type : Regulatory oversight and ethical advice bodies
- Name in English :
- Description : Australia is still developing the next steps following the publication of its principles for ethical AI; this will include developing guidance material and piloting the principles.
- Country : Australia
- Type(s) of oversight or advice : Technical guidance (e.g. toolkits, documentation, technical standards) | Educational guidance (e.g. capacity awareness building, inclusive design guidance, educational materials and training programmes)
- Activities : Provide guidance, advice and support to stakeholders | Gather opinions from stakeholders on ethical principles, regulation improvements, etc. | Cross-government coordination in developing/adopting guidelines, regulations, etc. | Setting and adopting international standards
- Reports to : Ministry
- The regulatory oversight/ethical advice body is composed of : Mostly government representatives
- Reports are publicly available : No
Public consultations of stakeholders or experts
- Type : Public consultations of stakeholders or experts
- Name in English :
- Description : In April 2019, the Australian government launched a consultation on an ethical framework to help mitigate the risks accompanying AI technologies. The consultation centred on a discussion paper, Artificial Intelligence: Australia’s Ethics Framework, by the Commonwealth Scientific and Industrial Research Organisation’s (CSIRO) digital innovation wing, Data61. In it, CSIRO identifies key principles and measures that can be put in place during the development of AI systems to retain “the well-being of Australians as the top priority.” The eight key principles identified are: the generation of net benefits; doing no harm; regulatory and legal compliance; privacy protection; fairness; transparency and explainability; contestability; and accountability. All eight principles should be considered throughout the design and use of an AI system, the report says, and “should be seen as goals that define whether an AI system is operating ethically.” The Minister released the discussion paper on 5 April 2019 to encourage conversations about AI ethics in Australia; it included a set of draft AI ethics principles. During the consultation phase, the department received more than 130 written submissions, conducted stakeholder roundtables and targeted consultation in Sydney, Melbourne, Brisbane and Canberra, and collaborated with a group of AI experts to develop the revised set of AI ethics principles.
- Country : Australia
- Stakeholders contribute to : Policy design
- Method : Participatory workshops and seminars | Expert groups/committees
- Number of participants : 251 to 500
Emerging AI-related regulation
- Type : Emerging AI-related regulation
- Name in English : AI Ethics Principles
- Description : The AI Ethics Principles, part of Australia's wider AI ethics framework, were published in November 2019. They are as follows:
  - Human, social and environmental wellbeing: Throughout their lifecycle, AI systems should benefit individuals, society and the environment.
  - Human-centred values: Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals.
  - Fairness: Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
  - Privacy protection and security: Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
  - Reliability and safety: Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose.
  - Transparency and explainability: There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them.
  - Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system.
  - Accountability: Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.
  For more information: https://www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework/ai-ethics-principles
- Country : Australia
- Role of government : Protector of public values
- Challenge(s) addressed : Risks to human safety (e.g. prevention of physical and mental harm) | Data protection and right to privacy | Risks to fairness (e.g. non-discrimination, gender equality, fairness and diversity)
- Type(s) of regulation : Self-regulation (e.g. codes of conduct, guidelines, standards)
- Regulatory approach : Technology-based regulation (e.g. moratoria, bans, standards of use)
- Level of governance : National
- Approach to monitor compliance : Regulated parties are incentivised to adopt monitoring technology that is not managed by the regulator