BBC presenter Naga Munchetty targeted by AI deepfake nude images scam


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Naga Munchetty, the 49-year-old BBC Breakfast presenter, discovered AI-generated deepfake nude images of herself circulated on Facebook as part of a scam that directed users to a pornographic website designed to harvest personal data. She said the experience was “scary” and “weird,” highlighting the emotional harm and privacy risks posed by AI misuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes the use of AI or AI-like technology to create fake explicit images and fake news articles to scam people out of money. The harm is realized as people are being tricked into financial loss, and the AI system's outputs (fake images and content) are pivotal in enabling this harm. The involvement of AI is reasonably inferred from the description of 'crudely mocked-up images' and the nature of the scam. Hence, it meets the criteria for an AI Incident due to direct harm caused by the AI system's outputs.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Safety; Accountability; Human wellbeing; Robustness & digital security; Transparency & explainability

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Women; General public

Harm types
Psychological; Reputational; Human or fundamental rights; Economic/Property

Severity
AI incident

AI system task
Content generation

Articles about this incident or hazard


Naga Munchetty's fury as scammers fake nude pictures of her

2025-02-05
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI or AI-like technology to create fake explicit images and fake news articles to scam people out of money. The harm is realized as people are being tricked into financial loss, and the AI system's outputs (fake images and content) are pivotal in enabling this harm. The involvement of AI is reasonably inferred from the description of 'crudely mocked-up images' and the nature of the scam. Hence, it meets the criteria for an AI Incident due to direct harm caused by the AI system's outputs.

Naga Munchetty issues warning after fake nudes of her are spread on social media in money making scam

2025-02-05
Yahoo
Why's our monitor labelling this an incident or hazard?
The event describes the malicious use of AI-generated or AI-assisted fake images and fake news articles to perpetrate a scam. The AI system's outputs have been used to deceive people into clicking on fraudulent advertisements and potentially losing money, which is a direct harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (financial scams) and violation of trust, impacting individuals and communities.

Naga Munchetty warns fake nudes of her spread online by scammers

2025-02-05
The Independent
Why's our monitor labelling this an incident or hazard?
The article describes the creation and dissemination of AI-generated deepfake images used in scams, which is a direct violation of the celebrity's rights and causes financial harm to victims. The AI system's use in generating manipulated content that is then used maliciously fits the definition of an AI Incident, as the harm is realized and directly linked to the AI system's outputs.

BBC's Naga Munchetty 'mortified' after getting caught in fake nude photo scandal

2025-02-05
Mirror
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions fake images and fake news articles used in scams, which implies the use of AI or AI-like generative techniques to create realistic but false content. The harm includes financial scams targeting people and reputational harm to the individuals impersonated. The AI system's outputs (fake images and articles) directly led to these harms. Hence, this is an AI Incident due to realized harm caused by AI-generated content used maliciously.

Naga Munchetty 'mortified' after finding fake naked photos of her online

2025-02-05
Metro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfake videos and fake images generated to impersonate public figures, which are AI-generated content. These are used in scams that have already caused harm by misleading people and attempting to steal money. The involvement of AI in generating the fake content and the resulting financial and reputational harm qualifies this as an AI Incident under the framework, as the AI system's use has directly led to harm to people and communities.

Naga Munchetty: Fake nude images of me have been shared online

2025-02-05
AOL.com
Why's our monitor labelling this an incident or hazard?
The event describes the creation and dissemination of AI-generated or AI-assisted fake images (deepfake-like content) and fake news articles used in scams. This has directly led to harm by attempting to defraud people financially and harming the reputation of the individuals involved. The AI system's involvement in generating the fake images and facilitating the scam meets the criteria for an AI Incident, as it has directly or indirectly led to harm to individuals and communities through deception and fraud.

BBC Breakfast's Naga Munchetty 'mortified' as scammers create nude photos

2025-02-05
Birmingham Mail
Why's our monitor labelling this an incident or hazard?
The event describes the creation and use of AI-generated fake images (deepfakes or similar) and counterfeit news articles to scam people financially. This constitutes a direct harm to individuals (reputational harm to Naga Munchetty) and financial harm to victims of the scam, as well as harm to the community through misinformation and fraudulent activities. The AI system's involvement in generating fake content and enabling the scam meets the criteria for an AI Incident, as the harm is realized and directly linked to the AI-generated content and its malicious use.

Naga Munchetty speaks to GLAMOUR about the scammers sharing fake nude images of her online

2025-02-07
Glamour UK
Why's our monitor labelling this an incident or hazard?
The creation and dissemination of fake nude images likely involved AI-based generative techniques (deepfakes or similar), which were used maliciously to scam people. This constitutes a violation of rights (image-based abuse) and causes harm to the individual and potentially to victims of the scam. The scam website was taken down, indicating harm had occurred. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's outputs in a scam context.

Naga Munchetty 'mortified and bemused' by alleged scammers using her image

2025-02-05
Jersey Evening Post
Why's our monitor labelling this an incident or hazard?
The article describes scammers using AI-generated or manipulated images and fake news websites to impersonate trusted public figures and the BBC brand to trick people into investing in fraudulent cryptocurrency schemes. The harm is realized as people are scammed out of money, which is a direct harm caused by the AI system's outputs (fake images and misleading content). The involvement of AI is reasonably inferred from the creation of explicit fake images and fake news content, which typically require AI techniques such as generative models. Therefore, this event qualifies as an AI Incident due to direct financial harm and deception caused by AI-enabled scams.

BBC Breakfast's Naga Munchetty 'left scared' after deepfake nudes

2025-02-10
Accrington
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to generate deepfake pornography, which is a form of AI-generated manipulated content. The harm has materialized as the individual experienced distress and reputational harm, and the content was used in a scam, indicating direct harm caused by the AI system's outputs. This fits the definition of an AI Incident because the AI system's use directly led to violations of personal rights and harm to the individual and community through deceptive and harmful content dissemination.

BBC Breakfast's Naga Munchetty 'left scared' after fake nudes of her emerge

2025-02-10
EXPRESS
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-based deepfake technology to create manipulated images and videos of a person without consent, which is a violation of personal rights and can cause psychological harm. The AI system's use in generating fake nude images and videos directly led to harm (fear, distress) to the individual. Hence, it meets the criteria for an AI Incident under violations of rights and harm to persons.

BBC Breakfast star Naga Munchetty 'scared' after discovering 'fake nudes' online

2025-02-10
Mirror
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of deepfake technology used to create fake nude images and videos. The harm is direct and realized: psychological distress to Naga Munchetty, reputational harm, and the broader societal harm of coercion and circulation of explicit images among young girls. The AI system's use in generating fake content that is maliciously distributed and linked to a scam platform constitutes a violation of privacy and causes harm to individuals and communities. Therefore, this qualifies as an AI Incident: the article centres on harm already caused by the AI-generated content and the scam, not on general AI developments or potential risks.

BBC's Naga Munchetty 'scared' and 'angry' as fake nude photos of her are shared

2025-02-10
Daily Record
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of deepfake pornography images of Naga Munchetty, which are AI-generated manipulated content. These images were used as part of a scam to deceive people and harvest personal information, causing emotional distress and reputational harm to the victim. The AI system's involvement in generating the fake images and enabling the scam directly led to harm, fulfilling the criteria for an AI Incident. The harm includes emotional distress (injury or harm to a person), reputational damage, and violation of privacy rights, which are covered under the definitions of AI Incident. The event is not merely a potential risk or a general update but a realized harm caused by AI misuse.

Naga Munchetty: I found fake nudes of myself online

2025-02-10
thetimes.com
Why's our monitor labelling this an incident or hazard?
The article describes the discovery of fake nude images of Naga Munchetty that were manipulated and circulated online. The creation of such images typically involves AI-based generative or manipulation tools. The harm caused is a violation of personal rights and privacy, which fits the definition of an AI Incident under violations of human rights or breaches of obligations intended to protect fundamental rights. Therefore, this event qualifies as an AI Incident.