Friend Betrayal: Deepfake Pornography Incident in Sydney


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Hannah Grundy, of Sydney, was victimized when a trusted friend, Andrew Hayler, used AI deepfake technology to produce and circulate non-consensual explicit imagery and violent content depicting her. Together with her partner, she uncovered the misuse of private social media images affecting dozens of women, constituting a serious human rights violation.[AI generated]

Why's our monitor labelling this an incident or hazard?

This is a direct misuse of an AI system (deepfake generation) to harass, threaten, and inflict emotional and reputational harm on Hannah and other women. The harm has materialized (non-consensual sexual content, threats, doxxing), constituting violations of personal and human rights. Therefore, it meets the criteria for an AI Incident.[AI generated]
AI principles
Respect of human rights, Privacy & data governance, Safety, Accountability, Transparency & explainability, Human wellbeing, Robustness & digital security

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights, Psychological, Reputational

Severity
AI incident

AI system task
Content generation, Recognition/object detection


Articles about this incident or hazard


'One of my best friends made deepfake porn of me'

2025-02-10
Diversão e Arte
Why's our monitor labelling this an incident or hazard?
This is a direct misuse of an AI system (deepfake generation) to harass, threaten, and inflict emotional and reputational harm on Hannah and other women. The harm has materialized (non-consensual sexual content, threats, doxxing), constituting violations of personal and human rights. Therefore, it meets the criteria for an AI Incident.

'One of my best friends made deepfake porn of me'

2025-02-10
Terra
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate fake sexual content (deepfake pornography) and distribute it as part of a campaign of threats and harassment. The incident resulted in severe emotional, reputational, and privacy harm to multiple women. This is a direct realized harm from an AI system, constituting an AI Incident.

'My best friend made deepfake porn of me'

2025-02-10
Folha de S.Paulo
Why's our monitor labelling this an incident or hazard?
The event involves the malicious use of an AI system (deepfake technology) by a trusted individual to create and share fake pornographic images and detailed sexual violence fantasies, along with personal data and threats. The misuse directly led to psychological and emotional harms, breaches of privacy, and intimidation, meeting the criteria for an AI Incident.

One of my best friends made deepfake porn of me

2025-02-10
O POVO
Why's our monitor labelling this an incident or hazard?
This case involves the malicious use of an AI system (deepfake-generation models) to produce and distribute non-consensual sexual content and threats, directly causing harm to the victims’ physical and psychological well-being and violating their rights. The harm was realized (not merely potential), making it an AI Incident.

Man creates pornographic deepfakes of a friend and dozens of women in shocking Australian case; victim reveals the outcome

2025-02-10
Hugo Gloss
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake generator) was used to create and distribute harmful content, directly violating the victims’ rights and inflicting psychological trauma. The harm has already occurred and led to criminal charges and convictions, fitting the definition of an AI Incident.

"He turned every moment posted on Instagram and Facebook into porn": He betrayed his friend using her image with AI

2025-02-10
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create fake sexual images and text depicting assault of Hannah, causing direct harm (psychological trauma, threats, privacy invasion) and violating her rights. This misuse of AI for non-consensual deepfakes constitutes an AI Incident.

"You post your happiest moments, and someone turns them into porn": Hannah discovered that a 'friend' was creating deepfakes from her photos

2025-02-10
genbeta.com
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake generation) was used to produce and distribute pornographic images without consent, directly causing harm (harassment, threats, violation of privacy and dignity). The misuse of generative AI in this case constitutes an AI Incident under violations of human rights and harm to communities.

How a man betrayed his friend using her image with artificial intelligence

2025-02-10
EL NACIONAL
Why's our monitor labelling this an incident or hazard?
The event describes the actual use of AI to generate deepfake images without consent, directly resulting in emotional and reputational harm to the victims and infringing on their rights. This constitutes a realized harm from AI misuse, fitting the definition of an AI Incident.

The story of Hannah Grundy, the Australian woman whose friend made her a deepfake victim: "He turned every one of my memories into porn"

2025-02-10
El Colombiano
Why's our monitor labelling this an incident or hazard?
The misuse of deepfake generation technology (an AI system) directly caused harm (psychological trauma, threats, sexual harassment) to Hannah and dozens of other women. The AI’s outputs were central to the wrongdoing, and the event led to a criminal investigation and precedent‐setting conviction. Thus, it is an AI Incident.

"He turned every moment into porn": how a friend betrayed me using my image with artificial intelligence

2025-02-10
BBC
Why's our monitor labelling this an incident or hazard?
The event describes the actual use of an AI system to create deepfake pornographic images and harassing content, resulting in direct harm (psychological trauma, violation of privacy and bodily autonomy) to the victims. This meets the criteria for an AI Incident involving violations of human rights and harm to individuals.

"He turned every moment into porn": how a friend betrayed me using my image with artificial intelligence

2025-02-10
Yahoo News
Why's our monitor labelling this an incident or hazard?
This is a clear instance of harm caused by the malicious use of an AI system (deepfake generation). The AI-enabled creation and dissemination of non-consensual intimate images constitutes a violation of personal and sexual rights, inflicts severe psychological harm, and involves direct wrongdoing by the perpetrator. Because the harm has materialized and was caused by the AI system's misuse, it qualifies as an AI Incident.