Minnesota Considers Ban on AI 'Nudify' Apps Creating Non-Consensual Explicit Images

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Molly Kelly’s case, in which AI 'nudification' technology was used to create explicit images from her social media photos, highlights a growing trend that has affected an estimated 80 to 85 Minnesota women. In response, legislators have proposed a bipartisan bill targeting the companies that enable such deepfake pornography, which violates victims' privacy and human rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The bill targets the use of AI systems that generate deepfake pornographic content, which directly violates individuals' rights and harms victims. The use of AI systems in creating the deepfake content is explicit, and the harm to victims is realized, making this an AI Incident. The legislative response focuses on addressing the harm caused by this misuse of AI.[AI generated]
AI principles
Accountability; Fairness; Human wellbeing; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Consumer services; Digital security

Affected stakeholders
Women

Harm types
Human or fundamental rights; Psychological; Reputational

Severity
AI incident

Business function:
Other

AI system task:
Content generation; Recognition/object detection


Articles about this incident or hazard

Minnesota Senate bill targets websites where users create 'deepfake' porn videos

2025-03-07
startribune.com
Why's our monitor labelling this an incident or hazard?
The bill targets the use of AI systems that generate deepfake pornographic content, which directly violates individuals' rights and harms victims. The use of AI systems in creating the deepfake content is explicit, and the harm to victims is realized, making this an AI Incident. The legislative response focuses on addressing the harm caused by this misuse of AI.

Minnesota considers blocking 'nudify' apps that use AI to make explicit images without consent

2025-03-04
Market Beat
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create non-consensual sexually explicit deepfake images and videos, a use that violates individuals' rights and harms individuals and communities. The article details actual harm experienced by victims and legislative responses to prevent further harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals. The legislative efforts and legal challenges are complementary information, but the core event is the realized harm from AI-generated non-consensual explicit content.

Minnesota considers blocking 'nudify' apps that use AI to make explicit images without consent

2025-03-04
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as 'nudification' technology that generates realistic explicit images without consent, which has directly led to harm to individuals (violation of rights and emotional harm). The article details actual harm experienced by victims, making this an AI Incident. The legislative response aims to prevent further harm, but the harm is already realized. Therefore, this is not merely a hazard or complementary information but an AI Incident due to the direct link between AI use and harm to persons.

Minnesota considers blocking 'nudify' apps that use AI to make explicit images without consent

2025-03-04
La Crosse Tribune
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create realistic sexually explicit images and videos without consent, which has caused direct harm to individuals (privacy violations, emotional distress) and communities. The article details actual harm experienced by victims and legislative responses to prevent further harm. The AI system's use in generating nonconsensual deepfake pornography fits the definition of an AI Incident due to violations of rights and harm to persons. The legislative proposals and lawsuits are responses to this incident, not the primary event itself.

Minnesota Considers Blocking 'Nudify' Apps That Use AI to Make Explicit Images Without Consent

2025-03-04
US News & World Report
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create nonconsensual explicit images, which is a direct violation of individuals' rights and causes harm to the victims. The harm is realized, as victims have reported actual instances of such images being created and disseminated. The article discusses legislative responses to this harm, but the primary focus is on the harm caused by the AI systems themselves. Therefore, this qualifies as an AI Incident due to direct harm caused by the use of AI systems in generating nonconsensual explicit content.

Minnesota considers blocking 'nudify' apps that use AI to make explicit images without consent

2025-03-04
FOX 28 Spokane
Why's our monitor labelling this an incident or hazard?
The article discusses the potential harm caused by AI systems that generate explicit deepfake images without consent, which constitutes a violation of rights and harm to individuals and communities. Although no specific incident of harm is reported, the legislation aims to prevent such harms from occurring. Therefore, this event represents an AI Hazard, as the AI systems involved could plausibly lead to significant harm if unregulated.

Minnesota considers blocking 'nudify' apps that use AI to make...

2025-03-04
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate realistic sexually explicit deepfake images without consent, which has caused direct harm to victims by violating their rights and causing emotional and reputational damage. The article details actual harm experienced by victims and legislative responses aimed at preventing further harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm as defined in the framework (violation of rights and harm to communities).

Minnesota considers blocking 'nudify' apps that use AI to make explicit images without consent

2025-03-04
KOB.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate non-consensual sexually explicit images ('nudification' apps). The harm is realized and ongoing, as victims have suffered from the creation and potential dissemination of these images, which constitutes a violation of rights and harm to individuals and communities. The article discusses legislative responses to this harm, but the primary focus is on the harm caused by the AI systems' use. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm (violation of rights and harm to individuals).

Minnesota considers blocking 'nudify' apps that use AI to make explicit images without consent

2025-03-04
Times Union
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate sexually explicit deepfake images without consent, which has directly caused harm to individuals (privacy violations, emotional distress) and communities. The article details actual harm experienced by victims and legislative responses aimed at preventing further harm. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. Although the legislation and legal challenges are ongoing, the harm from the AI-generated content is realized and significant.

Minnesota considers blocking 'nudify' apps that use AI to make explicit images without consent

2025-03-04
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate sexually explicit deepfake images without consent, which directly harms individuals' privacy and dignity, constituting violations of rights. The harm is realized, as victims have already been targeted and harmed by these AI-generated images. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm. The legislative response and legal actions are complementary information, but the core event is the harm caused by the AI-generated explicit content.

Minnesota considers blocking 'nudify' apps that use AI to make explicit images without consent

2025-03-04
The Daily Gazette Family of Newspapers
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate sexually explicit deepfake images without consent, which has directly harmed individuals by violating their rights and causing personal and community harm. The legislation aims to prevent further harm by blocking access to these AI systems. Since the harm is realized and directly linked to the AI system's use, this is classified as an AI Incident rather than a hazard or complementary information.

Minnesota considers blocking 'nudify' apps that use AI to make explicit images without consent

2025-03-04
WTOP News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate sexually explicit deepfake images without consent, which has directly caused harm to individuals (violation of rights and harm to communities). The article details actual harm experienced by victims and legislative responses to prevent further harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm, not just potential harm or general discussion about AI.

Minnesota considers blocking 'nudify' apps that use AI to make explicit images without consent

2025-03-04
startribune.com
Why's our monitor labelling this an incident or hazard?
The article centers on the use of AI systems to generate nonconsensual explicit images, which constitutes a violation of rights and sexual abuse. The harms described are realized or ongoing, as evidenced by lawsuits and legislative responses. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to violations of rights and harm to individuals. The legislative and legal responses are complementary information, but the primary focus is on the harm caused by AI misuse.

Minnesota considers blocking 'nudify' apps that use AI to make explicit images without consent

2025-03-04
The Daily Reporter - Greenfield Indiana
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate nonconsensual explicit content, which has directly harmed individuals by violating their rights and causing emotional and reputational damage. The article details actual harm experienced by victims, including the creation and dissemination of realistic deepfake pornography without consent. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The legislative response and legal challenges are complementary information, but they do not change the primary classification of the event as an AI Incident.

Minnesota considers blocking 'nudify' apps that use AI to make explicit images without consent

2025-03-04
Seymour Tribune
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate nonconsensual sexually explicit images, which has directly led to harm to individuals (privacy violations, emotional distress) and communities (widespread targeting of women). The article details real harm experienced by victims and legislative responses to prevent further harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm. The legislative efforts and expert opinions provide context but do not change the classification of the core event as an incident.

Minnesota Tackles Deepfake Porn: New Legislation Aims to Curb AI 'Nudification'

2025-03-05
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being used to create harmful deepfake pornographic images without consent, which constitutes a violation of individual rights and causes harm to the affected person. The legislative efforts to curb such AI-driven content further confirm the recognition of harm caused by AI misuse. Since the AI system's use has directly led to realized harm (non-consensual explicit images), this qualifies as an AI Incident rather than a hazard or complementary information.

Minnesota considers blocking 'nudify' apps that use AI to make explicit images without consent

2025-03-05
The Star
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate nonconsensual explicit images, which has directly led to harm to individuals (violation of rights and harm to communities). The article details actual incidents of harm caused by these AI-generated deepfakes and legislative efforts to prevent further harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused significant harm to people, including violations of rights and personal harm. The legislative and legal responses described are complementary information, but the core event is the realized harm from AI misuse.