Twitch Streamer Exposes AI Deepfake Porn Scandal Involving Non-Consensual Content

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Twitch streamer Brandon "Atrioc" Ewing was caught viewing AI-generated deepfake pornographic videos of female streamers, created without their consent. The incident sparked public outrage, highlighting the harm and rights violations caused by AI systems used to produce non-consensual explicit content, and led to significant reputational damage and emotional distress for the victims.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes the use of AI systems to create non-consensual deepfake pornographic images, which is a clear violation of human rights and privacy. The harm is realized as the images were viewed, shared, and the identities of the women were exposed, leading to direct harm to the individuals involved. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content and its distribution.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Safety, Accountability, Human wellbeing, Fairness, Transparency & explainability, Robustness & digital security

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Reputational, Psychological, Human or fundamental rights

Severity
AI incident

AI system task:
Content generation

Articles about this incident or hazard

Deepfake Porn Creator Deletes Internet Presence After Tearful 'Atrioc' Apology

2023-01-31
VICE
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI systems to create non-consensual deepfake pornographic images, which is a clear violation of human rights and privacy. The harm is realized as the images were viewed, shared, and the identities of the women were exposed, leading to direct harm to the individuals involved. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content and its distribution.

Popular Female Streamers Targeted In Deepfake Pornography Scandal

2023-01-31
Kotaku
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake pornography, which is a direct misuse of AI systems to create harmful and non-consensual content. The harm includes violations of rights, emotional distress, and reputational damage to the targeted streamers. The deepfake creator's actions and the resulting impact on the victims meet the criteria for an AI Incident, as the AI system's use directly led to significant harm. The article also discusses legal and societal responses, but the primary focus is on the harm caused by the AI system's misuse.

Twitch Star Atrioc Caught Using Deepfake Porn OF OTHER STREAMERS While Live!

2023-01-31
Perez Hilton
Why's our monitor labelling this an incident or hazard?
The article describes an AI system used to create deepfake pornographic images of real streamers without their consent, which is a clear violation of their rights and causes harm to the individuals involved. The AI-generated content was actively used and paid for by the Twitch streamer, leading to direct harm including emotional distress and violation of privacy and intellectual property rights. This fits the definition of an AI Incident as the AI system's use directly led to harm to persons and communities.

Twitch Streamer Tearfully Apologizes for Looking at Deepfaked Porn

2023-01-31
Futurism
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of deepfaked pornographic content generated by AI systems without the consent of the individuals depicted. This has caused direct harm to the victims, including emotional trauma and violation of their rights. The AI system's role in generating this content is pivotal to the harm caused. Therefore, this qualifies as an AI Incident under the framework, as it involves violations of rights and harm to individuals resulting from the use of AI.

The Deepfake Porn Scandal Is Tearing the Streaming Community Apart

2023-02-01
Futurism
Why's our monitor labelling this an incident or hazard?
The article describes an incident where AI-generated deepfake pornographic content was created and distributed without consent, directly harming the individuals involved and causing significant community disruption. The AI system's role in generating the deepfake content is central to the harm, fulfilling the criteria for an AI Incident due to violations of rights and harm to communities. The event is not merely a potential risk but a realized harm, thus classifying it as an AI Incident rather than a hazard or complementary information.

Twitch's AI 'Porn' Controversy Is a Creepy Sign of Things to Come

2023-01-31
Jezebel
Why's our monitor labelling this an incident or hazard?
The article describes the creation and consumption of AI-generated deepfake pornographic images without the consent of the individuals depicted, which constitutes a violation of their rights and causes harm to their well-being and communities. The AI system's use in generating these images directly leads to harm (emotional, reputational, and potentially legal) to the affected individuals. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of human rights and harm to communities.

Deepfake Porn Is Not "Just Porn" Because Consent Is Not Present

2023-02-01
The Mary Sue
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the creation and dissemination of deepfake pornographic images using AI systems without the consent of the people depicted, which is a clear violation of their rights and causes harm. The AI system's role is pivotal as it enables the generation of realistic but fabricated explicit content. This meets the criteria for an AI Incident because the AI's use has directly led to violations of human rights and harm to individuals and communities. The article also discusses legal actions and social responses, but the primary focus is on the harm caused by the AI-generated deepfakes.

Twitch Streamer Atrioc's Apology Video And Deepfake Controversy, Explained

2023-02-01
Junkee
Why's our monitor labelling this an incident or hazard?
The article describes an incident where AI-generated deepfake pornographic videos of female Twitch streamers were created and viewed by a known streamer, leading to public controversy and harm to the individuals involved. The AI system (deepfake generation) was used to produce non-consensual explicit content, which constitutes a violation of rights and harm to communities. The harm is realized, not just potential, as the affected streamers have responded publicly, indicating impact. Hence, this is an AI Incident under the framework definitions.

This streamer was caught watching deepfake porn of other creators, and his career was ruined in a moment

2023-02-03
Bullfrag
Why's our monitor labelling this an incident or hazard?
The article describes an AI system used to create deepfake pornographic videos of real people without their consent, which is a clear violation of intellectual property and personal rights, thus constituting harm under the framework. The streamer viewing this content live led to public exposure and reputational damage. The AI system's development and use directly caused this harm. Therefore, this qualifies as an AI Incident due to realized harm involving violations of rights and harm to individuals and communities.

Rise of the post-truth sex tape: Deepfake pornography is making women's online lives even more frightening

2023-02-07
Yahoo Sports
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to generate deepfake pornography, which directly harms individuals by violating their rights and causing psychological and reputational damage. The harm is realized and ongoing, not merely potential. The involvement of AI in creating and distributing these images is central to the incident. The article also discusses societal and legal responses, but the primary focus is on the harm caused by the AI-generated content. Hence, this is an AI Incident rather than a hazard or complementary information.

Twitch star QTCinderella's deepfake porn nightmare: 'F--k the...

2023-02-06
New York Post
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-based deepfake technology to create non-consensual pornographic content, which has caused direct psychological harm to the victim. The AI system's involvement is clear as it was used to generate the fake videos. The harm is realized and significant, including emotional distress and violation of personal rights. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person and violation of rights. The event is not merely a potential risk or a general discussion but a concrete case of harm caused by AI misuse.

Deepfake pornography is making women's online lives even more frightening

2023-02-07
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for deepfake pornography) that have directly led to harm: non-consensual sexualized images of women causing harassment, reputational damage, and emotional distress. This fits the definition of an AI Incident as it involves violations of human rights and harm to communities. The article details actual harm and ongoing impact, not just potential or hypothetical risks. Hence, it is not a hazard or complementary information but an incident.

Why we should take the threat of deepfakes more seriously

2023-02-07
The Daily Star
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake videos, which directly leads to violations of human rights and harm to individuals (psychological and reputational harm). The AI system's use in generating non-consensual pornography is a clear AI Incident as it has directly caused harm to people. The article details realized harm and victim impact, not just potential risk, thus qualifying as an AI Incident rather than a hazard or complementary information.

Twitch Streamer Atrioc's Deepfake Porn Controversy Sparks Wide-Sweeping Debate on AI Efficacy

2023-02-06
Tech Times
Why's our monitor labelling this an incident or hazard?
The article describes an AI Incident because AI-generated deepfake pornography was shown publicly without consent, directly causing harm to the individuals depicted. The harm includes violations of personal rights and emotional trauma, which fits the definition of harm to persons and communities. The AI system's use in creating and disseminating these images is central to the incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Deepfake Porn: Netizens Are Disturbed by AI's Use in Creating Morphed XXX Videos That Draw Questions on Consent (View Tweets)

2023-02-06
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deepfake porn videos made without consent, which is a clear violation of human rights and causes harm to individuals' mental health and dignity. The AI system's role in creating these videos is central to the harm described. This fits the definition of an AI Incident because the AI's use has directly led to violations of rights and harm to communities. The mention of lack of consent and mental trauma confirms the harm is realized, not just potential.

Female Content Creators Stand Up Against Deepfake Porn

2023-02-06
OUT FRONT
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI deepfake technology to create non-consensual pornographic content, which is a clear violation of individuals' rights and causes significant harm. The harm is realized and ongoing, as victims are speaking out and seeking legal action. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities. The presence of AI is explicit (deepfake technology), and the harm is direct and materialized.

Professor lets victims talk to a deepfake to process their trauma

2023-02-08
Vox magazine
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used in therapy, which qualifies as an AI system under the definitions. However, the article does not report any injury, violation of rights, or other harms caused by the AI system. Instead, it discusses a pilot therapy application and plans for further research. Ethical concerns and potential risks are noted but remain speculative and unmaterialized. Thus, the event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides context on a novel AI application and ongoing research, including societal and ethical considerations.