Australian Man Fined for Distributing AI-Generated Deepfake Pornography

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Anthony Rotondo was fined AU$343,500 by an Australian court for creating and distributing AI-generated deepfake pornographic videos of high-profile women without consent. The case, the first of its kind in Australia, led to the shutdown of the website involved and highlighted the psychological harm caused to victims.[AI generated]

Why's our monitor labelling this an incident or hazard?

The creation and posting of deepfake pornographic images involves an AI system capable of generating realistic fake content. The harm caused includes violation of privacy and reputational damage to the women targeted, which falls under violations of human rights and harm to communities. Since the AI system's use directly led to these harms and legal action was taken, this qualifies as an AI Incident.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Safety

Industries
Arts, entertainment, and recreation

Affected stakeholders
General public

Harm types
Psychological; Reputational; Human or fundamental rights

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

'Strong message': Deepfake porn creator cops huge fine

2025-09-26
The Sydney Morning Herald
Man fined $34,000 for deepfake pornography of prominent Australian women in first-of-its-kind case

2025-09-26
The Guardian
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, specifically deepfake technology, which is used to generate non-consensual explicit images. The use of this AI system directly led to harm in the form of psychological and emotional distress to the targeted women, constituting a violation of their rights and image-based abuse. The legal action and penalty demonstrate recognition of this harm. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm and legal violations.
Man fined $340,000 for creating deepfake porn of high-profile women

2025-09-26
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The creation and distribution of deepfake pornographic images involve AI systems capable of generating realistic synthetic content. The harm caused includes psychological and emotional distress to the victims, which falls under harm to persons and violation of rights. The event describes realized harm resulting from the use of AI systems, meeting the criteria for an AI Incident. The legal and regulatory response further confirms the recognition of harm caused by AI misuse.
'Strong message': deepfake porn creator cops huge fine

2025-09-26
Yahoo!7 News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems to create deepfake pornographic images without consent, causing psychological and emotional harm to the victims, which is a violation of human rights. The legal actions and penalties imposed are responses to this harm. The AI system's development and use directly led to the harm, qualifying this as an AI Incident under the framework definitions.
Man's huge fine for deepfake porn row

2025-09-26
News.com.au
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake images, which are AI-generated synthetic media. The misuse of this AI system to create intimate images without consent has caused harm to the individuals depicted, violating their rights and privacy. The court ruling and fines are a direct consequence of this harm. Therefore, this qualifies as an AI Incident due to the realized harm stemming from the AI system's use.
Man fined $350k in Federal Court for creating deepfake porn

2025-09-26
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI-generated deepfake technology to create non-consensual pornographic images, which is a violation of privacy and legal rights. The harm to the individuals targeted is direct and significant, as the images were posted publicly. The legal consequences and court orders confirm that the AI system's misuse led to a breach of rights and harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.
Australia court fines man over $280,000 for deepfake porn

2025-09-26
The Straits Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI-powered deepfake technology to create and distribute non-consensual pornographic content, which caused significant psychological and emotional harm to the victims. The court ruling and fine indicate that harm has materialized. The AI system's use directly led to violations of privacy and emotional harm, fitting the definition of an AI Incident. The involvement of AI in generating the deepfake content is clear, and the harm is direct and significant.
" Un message fort " : un homme condamné à 200 000 euros pour diffusion de deepfakes pornographiques en Australie

2025-09-26
Le Parisien
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate deepfake pornographic content, which directly caused harm to the victims' privacy, dignity, and emotional well-being. The legal conviction and fine confirm that harm has materialized. The AI system's role in creating the manipulated videos is pivotal to the incident. Hence, this is an AI Incident as per the definitions provided.
Man fined 200,000 euros for distributing pornographic deepfakes in Australia

2025-09-26
DH.be
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to create deepfake pornographic videos, which constitutes a violation of rights and harms individuals. The AI system's use directly led to harm through the dissemination of manipulated content. Therefore, this qualifies as an AI Incident under the framework, as it involves realized harm caused by the use of an AI system.
Australia court fines man over $200,000 for deepfake porn

2025-09-27
Jamaica Observer
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake pornography, which is a clear example of an AI system's use leading to harm. The harm includes violation of privacy, emotional distress, and psychological harm to the victims, fitting the definition of an AI Incident under violations of human rights and harm to communities. The court ruling and fine confirm that the harm has materialized and is recognized legally. Therefore, this event qualifies as an AI Incident.
Man fined 200,000 euros for distributing pornographic deepfakes in Australia

2025-09-26
La Libre.be
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI (deepfake technology) to create pornographic videos without consent, which constitutes a violation of human rights and personal dignity. The harm has materialized as the videos were distributed, leading to legal consequences. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.
He distributed pornographic deepfakes of famous women: man fined 200,000 euros

2025-09-26
LaProvence.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and distribute deepfake pornographic content, which constitutes a violation of human rights and causes harm to the individuals depicted. The harm has already occurred, as evidenced by the legal conviction and site closure. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse in generating and disseminating harmful content.
Court Fines $343K for Posting Aussie Women's Deepfakes

2025-09-26
Mirage News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake images without consent, causing psychological and emotional harm to the victims, which qualifies as harm to persons under the AI Incident definition. The court's penalty and enforcement actions confirm that harm has materialized. The AI system's use in generating explicit non-consensual images directly led to the harm. Hence, this is an AI Incident.
Australian man fined 200,000 euros for distributing pornographic deepfakes

2025-09-26
TF1 INFO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to create deepfake videos, which are manipulated content generated by AI systems. The distribution of these videos caused harm to the victims, including violations of privacy and psychological harm, which fall under violations of human rights and harm to individuals. The legal conviction and fine demonstrate that harm has occurred and the AI system's use was pivotal in causing this harm. Therefore, this qualifies as an AI Incident.
Court fines man $343K for creating deepfake porn of Australian women

2025-09-29
Cybernews
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create fake pornographic images without consent, which constitutes a violation of human rights and causes psychological harm to individuals. The harm has already occurred, and the AI system's use was central to the incident. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to individuals.
Anthony Rotondo fined $343,500 for deepfakes of six prominent Australian women

2025-09-30
Women's Agenda
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate deepfake images, which are non-consensual and intimate, causing psychological and emotional harm to the victims. The harm is direct and materialized, as evidenced by the court's civil penalty and the description of distress caused. The AI system's use in creating and distributing these images led to violations of rights and significant harm, meeting the criteria for an AI Incident. The legal and enforcement actions further confirm the recognition of harm caused by AI misuse.