UK Regulator Bans AI App Ad for Promoting Non-Consensual Nudification

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The UK Advertising Standards Authority (ASA) banned a YouTube ad for PixVideo AI Video Maker that implied users could digitally remove women's clothing. The regulator deemed the ad offensive, irresponsible, and harmful for promoting the sexualisation and objectification of women through AI-powered image manipulation. Eight complaints prompted the regulatory action.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (PixVideo) is involved in the use of AI to alter images, specifically with the potential to remove clothing digitally. The ad's implication that users could do this without consent directly relates to violations of rights (privacy, dignity) and harm to communities (gender-based harm and stereotypes). The complaints and regulatory response indicate that harm has occurred or is ongoing, fulfilling the criteria for an AI Incident. The involvement of AI in the app's functionality and the resulting harm from its use and promotion justify classification as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Respect of human rights; Fairness

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Psychological; Human or fundamental rights

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard

Ad for AI video app which said it could 'remove anything' banned

2026-03-18
BBC
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (PixVideo) used for AI-powered video and image editing. The advertisement implied the app could be used to remove clothing digitally, which is harmful and offensive and led to regulatory action. However, the app's terms prohibit using its AI to create sexually explicit content, and no direct harm from the app's use is reported here. The event centers on the advertising, the regulatory response, societal concerns, and ongoing legal developments rather than a direct AI Incident or an immediate AI Hazard. It therefore fits the definition of Complementary Information, providing context and updates on governance and societal reactions to AI misuse risks.
Regulator bans AI ad over 'erase anything' claim

2026-03-18
The News International
Why's our monitor labelling this an incident or hazard?
The event centers on the use of an AI system (an AI video editing app) whose advertised capabilities involve altering images in ways that could lead to privacy violations and harm to individuals, particularly women, by promoting non-consensual digital exposure. The regulator's ban and the complaints indicate that harm related to privacy and ethical concerns has occurred or is occurring due to the app's use or its promotion. The AI system's role in enabling such misuse is direct, as the app's functionality facilitates the harmful alteration of images. Therefore, this qualifies as an AI Incident due to realized harm involving privacy violations and harmful societal impacts linked to the AI system's use and promotion.
AI video maker ad banned for exposing woman's body

2026-03-18
The Irish News
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it is an AI video maker capable of digitally altering images. The ad's messaging condoned the use of AI to expose women's bodies without consent, which is a violation of rights and harmful to communities. The harm is realized through the offensive and harmful gender stereotyping and objectification caused by the ad's promotion of the AI system's capabilities. Despite the company's claims about restrictions, the ad itself promoted misuse of the AI system leading to harm. Therefore, this qualifies as an AI Incident under the framework definitions.
Ad for AI editing app which said it could 'erase anything' banned for sexualising women

2026-03-18
Sky News
Why's our monitor labelling this an incident or hazard?
The AI system (the video editing app with AI capabilities) is explicitly involved, as it uses AI to edit videos. The ad's implication that the app could remove clothing without consent directly relates to misuse of the AI system leading to harm—sexualisation and objectification of women, which is a violation of rights and harmful to communities. Although the app's terms prohibit such use and it has detection mechanisms, the ad's messaging condones this harmful use, which the regulator found offensive and irresponsible. Therefore, the event meets the criteria for an AI Incident due to realized harm linked to the AI system's use and its societal impact.
'Offensive' AI advert banned over nudification claims - UKTN

2026-03-18
UKTN (UK Tech News)
Why's our monitor labelling this an incident or hazard?
The AI system (PixVideo) is explicitly mentioned as generating nudified images, which is a direct use of AI for harmful content creation. The advert promoting this capability caused serious offense and was deemed irresponsible and harmful by the UK Advertising Standards Authority. The harm includes violation of rights and harm to communities through offensive gender stereotyping and the promotion of illegal content. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm and regulatory action.
ASA steps into 'nudification' row with first AI tool ad ban - DecisionMarketing

2026-03-18
decisionmarketing.co.uk
Why's our monitor labelling this an incident or hazard?
The AI system (PixVideo AI Video Maker) is explicitly involved as it is an AI-powered tool capable of generating or manipulating images. The ad implied the tool could be used to remove clothing digitally, which directly relates to harm through sexual objectification and non-consensual exposure, violating rights and causing offense. The regulatory ban and complaints confirm that harm has occurred or is occurring. The company's acknowledgment of the harmful interpretation and the ASA's decision to ban the ad further support classification as an AI Incident. The event is not merely a potential risk (hazard) or a general update (complementary information), but a concrete case of harm linked to AI use.
Ad for AI editing app which said it could 'erase anything' banned for sexualising women - Beritaja

2026-03-18
Beritaja
Why's our monitor labelling this an incident or hazard?
The AI system (video editing app with AI capabilities) was used in a way that directly led to harm by promoting sexualization and objectification of women, which is a violation of human rights and causes harm to communities. The advertisement's content and implications demonstrate the AI system's role in enabling or condoning harmful behavior. Therefore, this event qualifies as an AI Incident due to realized harm linked to the AI system's use and messaging.