Australian Teen Convicted in Landmark Deepfake Pornography Case


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

William Hamish Yeates, a 19-year-old from Adelaide, became the first person in Australia convicted under new federal laws criminalizing the creation and distribution of AI-generated deepfake sexual images without consent. Yeates pleaded guilty to multiple charges in a case that highlights the legal and social harms of AI-enabled image-based abuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The creation and distribution of deepfake images involve AI systems capable of generating realistic but fabricated content. The event describes a person admitting to using such AI-generated images to harass and violate the victim's rights, which is a direct harm caused by the AI system's misuse. This fits the definition of an AI Incident as it involves harm to a person through violation of rights and offensive use of AI-generated content.[AI generated]
AI principles
Respect of human rights
Privacy & data governance

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Human or fundamental rights
Reputational
Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Private school grad admits deepfake offence

2026-04-15
Countryman

Australian pleads guilty to creating deepfake porn in landmark case

2026-04-15
BBC
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology to create deepfake pornography, which is a form of manipulated sexual imagery created without consent. The harm caused includes violations of rights and gendered abuse, which are direct harms to the victim. The case is a legal precedent under a new law criminalizing such AI-enabled manipulation. Therefore, this is an AI Incident due to the realized harm caused by the AI system's use in creating and distributing non-consensual deepfake sexual images.

Private school graduate admits creating sexual deepfakes

2026-04-15
News.com.au
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the creation and distribution of deepfake images, which are generated or altered using AI systems. The individual pleaded guilty to offences related to creating or altering sexual material without consent, which is a violation of rights and causes harm to the victim. The harm is direct and realized, meeting the criteria for an AI Incident. The involvement of AI in generating deepfake content is clear and central to the incident, and the legal charges confirm the harm and breach of rights.

Teen admits creating deep‑fake porn images under landmark laws

2026-04-15
7NEWS.com.au
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI (deep-fake technology) to create sexualized images without consent, which directly harms the individual depicted, violating their rights and causing personal harm. The AI system's use in generating these images is central to the incident. Since the harm has materialized and legal action has been taken, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to individuals.

Teen admits creating deepfakes in Australian-first prosecution

2026-04-15
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI software to create deepfake content, which is explicitly mentioned. The harm caused is the violation of rights through non-consensual sexual material, a clear breach of applicable laws and human rights protections. Since the harm has occurred and legal action is underway, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of legal obligations.

First Prosecution Of its Kind As Teen Faces Charges Over AI Deepfake Abuse In Australia

2026-04-15
arise.tv
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI deepfake technology used to create and distribute manipulated sexual images without consent, which constitutes a violation of rights and causes harm to the victim. The prosecution and guilty plea confirm that harm has occurred due to the AI system's misuse. The involvement of AI in generating deepfake sexual images and the resulting legal and social harms meet the criteria for an AI Incident. The event is a realized harm with legal action, not merely a warning or potential risk, so it is not an AI Hazard or Complementary Information.