Australian Electoral Commission Warns of AI-Generated Misinformation Threat


The Australian Electoral Commission (AEC) warns that it lacks the tools to detect or deter AI-generated misinformation, including deepfakes, at upcoming elections. Electoral Commissioner Tom Rogers highlighted the risks AI poses to democracy, noting similar issues in other countries. Current laws do not prohibit political deepfakes, limiting the AEC's ability to intervene.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems capable of generating misinformation (e.g., deepfakes, voice clones) that could disrupt democratic processes and harm communities by misleading voters. Although no actual harm has been reported yet in Australia, the article clearly states the plausible risk of AI-generated misinformation impacting the upcoming election, which fits the definition of an AI Hazard. The AEC's lack of tools to detect or deter this misinformation further underscores the potential for harm. Since the article discusses potential future harm rather than a realized incident, and focuses on the risk and governance challenges, the classification as an AI Hazard is appropriate.[AI generated]
AI principles
Accountability; Transparency & explainability; Robustness & digital security; Safety; Respect of human rights; Democracy & human autonomy

Industries
Government, security, and defence; Media, social platforms, and marketing; Digital security

Affected stakeholders
General public

Harm types
Public interest; Reputational; Psychological; Human or fundamental rights

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard

AEC warns it doesn't have power to deter AI-generated political misinformation at next election

2024-05-20
The Guardian
Why's our monitor labelling this an incident or hazard?
The event involves AI systems capable of generating misinformation (e.g., deepfakes, voice clones) that could disrupt democratic processes and harm communities by misleading voters. Although no actual harm has been reported yet in Australia, the article clearly states the plausible risk of AI-generated misinformation impacting the upcoming election, which fits the definition of an AI Hazard. The AEC's lack of tools to detect or deter this misinformation further underscores the potential for harm. Since the article discusses potential future harm rather than a realized incident, and focuses on the risk and governance challenges, the classification as an AI Hazard is appropriate.

AEC says it cannot stop AI deepfakes in election campaigns

2024-05-20
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI creating deepfakes) and their potential misuse in election campaigns, which could plausibly lead to harm such as misinformation, voter deception, and disruption of democratic processes (harm to communities). Since the article does not report any realized harm or incident in Australia but rather warns about the potential for such harm and discusses regulatory and educational measures, it fits the definition of an AI Hazard. The article does not describe a current AI Incident, nor is it primarily about responses to a past incident, so it is not Complementary Information. It is not unrelated because it clearly involves AI and its societal implications.

Deepfakes in an Australian election campaign would be legally fine, and OpenAI benches its flirty new chatbot voice

2024-05-21
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of generative AI deepfakes used in political campaigns. While it acknowledges the potential for harm to communities and democratic processes through misinformation, it does not report any actual harm occurring yet. The discussion centers on the plausible future risk of AI-generated misinformation influencing elections and the need for legal and governance responses. Therefore, this qualifies as an AI Hazard because the development and use of AI deepfakes could plausibly lead to significant harm, but no incident has yet materialized. Other content in the article, about AI voice models and tech announcements, does not describe incidents or hazards and is thus unrelated or complementary information rather than the main focus.

Voters should be warned about AI-generated election ads

2024-05-20
Perth Now
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated election disinformation and misinformation, including deepfakes and robocalls, which are AI systems producing harmful content that can disrupt democratic processes and harm communities. Although such harms have been detected in other countries, the article does not report a specific realized incident of harm in Australia but warns that such harms are expected in the next election. The Australian Electoral Commission lacks the tools and legislation to address this threat, indicating a credible risk of harm. The focus is on the plausible future harm and the need for legislative and technical responses, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Voters should be warned about AI-generated election ads

2024-05-20
The West Australian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating election misinformation (deepfakes, robocalls) that have caused harm in other countries and are expected to appear in Australia. The Electoral Commissioner's statements confirm the plausible risk of harm to the election process and voter trust, which falls under harm to communities. No actual harm in Australia is reported yet, so it is not an AI Incident. The focus is on the potential for harm and the need for legislative and technical measures to mitigate it, fitting the definition of an AI Hazard.

AEC checks Meta AI's handling of election questions

2024-05-20
iTnews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Meta AI) and its potential misuse to produce false election-related information, which could plausibly lead to harm to communities by misleading voters and undermining electoral integrity. Although the AEC has tested the AI tool and found it resistant to generating false information so far, the concern about future misuse, including deepfaked political content and AI-generated robocalls, indicates a credible risk. No actual harm or incident is reported, and the focus is on potential future threats and the AEC's preparatory measures. This aligns with the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving misinformation and election interference. The article is not primarily about a response to a past incident or a general AI news update, so it is not Complementary Information or Unrelated.

AEC says AI-generated misinformation likely to plague next federal election

2024-05-21
cyberdaily.au
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (e.g., ChatGPT, DALL-E, Meta AI) capable of generating misinformation. The harms discussed relate to misinformation affecting election integrity, which constitutes harm to communities and potentially violates democratic rights. However, the article focuses on warnings and anticipated risks rather than describing an actual realized harm or incident. Therefore, this qualifies as an AI Hazard, as the AI-generated misinformation could plausibly lead to harm in the next federal election, but no direct or indirect harm has yet been reported in this specific context.