MP Demonstrates AI-Generated Deepfake Nude Photo in Parliament


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

New Zealand MP Laura McClure showcased an AI-generated deepfake nude image in parliament to highlight the ease of creating such fake visuals, emphasizing the potential harm to privacy and dignity. Her demonstration calls for new legislation to regulate AI misuse and protect individuals from digital exploitation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article clearly involves AI systems (deepfake generation technology) used to create manipulated images that have caused harm, particularly non-consensual explicit content targeting individuals, which is a violation of rights and causes harm to communities and individuals. The event includes a direct example of harm (deepfake image creation and its potential for abuse), discussion of existing harms, and calls for legal reform to address these harms. Therefore, it qualifies as an AI Incident due to realized harm from AI misuse. The article also includes complementary information about legal responses, but the primary focus is on the harm caused and demonstrated by the deepfake, making AI Incident the appropriate classification.[AI generated]
AI principles
Privacy & data governance · Respect of human rights · Safety · Accountability · Transparency & explainability · Human wellbeing

Industries
Digital security · Media, social platforms, and marketing

Affected stakeholders
General public · Women

Harm types
Human or fundamental rights · Reputational · Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Who is this woman who held up her own nude photo in New Zealand's parliament?

2025-06-03
News18 India
Why's our monitor labelling this an incident or hazard?
The AI system (deepfake technology) is explicitly involved in generating manipulated images, which can cause harm such as violation of privacy and reputational damage. However, the event is primarily about demonstrating the potential risks and advocating for legal measures, not about an incident where harm has already occurred or a direct imminent threat. Therefore, it fits the definition of an AI Hazard, as it plausibly could lead to harms if unregulated, but no actual harm incident is reported here.

Deepfake: Woman MP shows her own 'nude photo' in New Zealand's parliament; debate erupts over deepfakes

2025-06-03
OneIndia
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (deepfake generation technology) used to create manipulated images that have caused harm, particularly non-consensual explicit content targeting individuals, which is a violation of rights and causes harm to communities and individuals. The event includes a direct example of harm (deepfake image creation and its potential for abuse), discussion of existing harms, and calls for legal reform to address these harms. Therefore, it qualifies as an AI Incident due to realized harm from AI misuse. The article also includes complementary information about legal responses, but the primary focus is on the harm caused and demonstrated by the deepfake, making AI Incident the appropriate classification.

Woman MP arrives in parliament with her own nude photo and displays it in the House; what was the reason?

2025-06-03
Hindustan
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a fake image, illustrating the potential for harm through misinformation or defamation. However, the article describes a demonstration and advocacy for legal measures rather than an actual incident of harm occurring. Therefore, this is a plausible risk highlighted by the use of AI-generated content, making it an AI Hazard rather than an AI Incident. It is not merely general news or complementary information because it centers on the potential harm and legislative response prompted by AI misuse.

Laura McClure thunders in parliament while displaying an AI deepfake nude photo: 'This is my picture, but it is not real'

2025-06-03
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deepfake images that have been used without consent, causing harm such as digital sexual exploitation and psychological distress. The harms described align with violations of human rights and harm to communities. The AI system's use (deepfake generation) has directly led to these harms, making this an AI Incident. The article also discusses legislative responses and advocacy, but the primary focus is on the realized harms caused by AI deepfake misuse, not just potential or complementary information.

When a woman MP displayed her own nude photo in parliament, people were shocked at the sight

2025-06-04
punjabkesarinari
Why's our monitor labelling this an incident or hazard?
The AI system involved is the deepfake generation technology used to create the nude image. The event involves the use of this AI system to demonstrate potential harms, but no actual harm or violation has occurred to the parliamentarian or others as a result of this demonstration. The article focuses on raising awareness and legislative responses to the risks posed by deepfakes. Hence, it does not describe an AI Incident (no realized harm) or an AI Hazard (no imminent or plausible future harm from this specific event), but rather a governance and societal response to AI risks, fitting the definition of Complementary Information.

Woman MP displayed her own nude photo in parliament, but why? | New Zealand Female MP showed Her Nude Photo in Parliament

2025-06-03
Prabhat Khabar - Hindi News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools to create a deepfake image, illustrating the potential for harm through digital exploitation. No actual harm or incident of misuse is reported; instead, the MP uses the example to advocate for stronger laws against non-consensual deepfake content. This fits the definition of Complementary Information, as it provides context and governance response to AI-related risks without describing a specific AI Incident or Hazard.

Woman MP displays her own AI nude photo; uproar in parliament as she makes a striking demand

2025-06-04
NDTV Gadgets 360 Hindi
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake AI technology) to generate fake nude images, which directly relates to violations of privacy and potential harm to individuals, especially women, through misuse of AI-generated content. The demonstration in parliament and the discussion of real cases of harm caused by deepfakes indicate that the AI system's use has directly or indirectly led to harm or violations of rights. Therefore, this qualifies as an AI Incident due to the realized harm and violation of rights stemming from AI misuse.

Woman MP displays her own nude photo in parliament; what she said next stunned the world; see the picture | News Track in Hindi

2025-06-04
Newstrack
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI deepfake technology to create non-consensual explicit images, which constitutes a violation of privacy and can be considered sexual harassment and identity abuse. The MP's demonstration and speech reveal that such AI misuse is already occurring and causing harm, fulfilling the criteria for an AI Incident. The event also discusses the lack of current legal frameworks and the need for new laws, but the primary focus is on the realized harm and the direct use of AI systems in creating harmful content, not just potential future harm or general commentary. Therefore, this is classified as an AI Incident.

Nude Photo in Parliament: Woman MP arrives in parliament with her nude image, openly displays it, and explains why

2025-06-03
IBC24 News
Why's our monitor labelling this an incident or hazard?
The event centers on the use of an AI system (deepfake generation) to create a fake nude photo, which was used as a demonstration in parliament to highlight the risks and harms of AI-generated fake content. No actual harm or incident of misuse is reported; rather, the event is about raising awareness and advocating for legal measures to prevent harm. Therefore, it fits the definition of Complementary Information as it provides societal and governance responses to AI-related risks without describing a specific AI Incident or AI Hazard.

MP Laura McClure Nude Photo: Woman MP displays her nude photo in parliament, then makes a major demand

2025-06-04
npg.news
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and use of an AI-generated deepfake image, which is a direct product of an AI system. The harm involved includes violation of privacy and potential exploitation, which are recognized harms under the framework (violations of rights and harm to individuals). The event is not merely a warning or potential risk but involves actual creation and demonstration of harmful AI-generated content. Therefore, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Find out why a woman MP had to show her own nude photo in a packed parliament; the deepfake debate rages on

2025-06-04
OpIndia
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create manipulated images that have caused or could cause significant harm to individuals, including violations of privacy, potential psychological harm, and social harm through harassment and blackmail. The article reports on actual harm and misuse of AI-generated deepfake pornography, which directly relates to violations of rights and harm to individuals and communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm, and the article discusses real cases and impacts, not just potential risks or general information.

A terrifying truth! New Zealand woman MP displays her AI-generated nude photo in parliament, exposing deepfake technology | LatestLY Hindi

2025-06-05
LatestLY Hindi
Why's our monitor labelling this an incident or hazard?
The article describes a politician using an AI-generated deepfake image of herself to demonstrate the dangers of deepfake technology. While the potential for harm (reputational damage, violation of privacy, psychological distress) from such AI-generated images is clear, the event itself is a demonstration and advocacy effort rather than a report of an actual AI Incident where harm has occurred. Therefore, it does not meet the criteria for an AI Incident. It also does not describe a specific AI Hazard event where harm could plausibly occur imminently but has not yet. Instead, it is primarily a societal/governance response raising awareness and calling for legal reform, which fits the definition of Complementary Information.

Woman MP displays her own AI-generated nude photo in parliament and makes a major demand; find out where this happened

2025-06-04
Navbharat Times
Why's our monitor labelling this an incident or hazard?
The article describes the creation and public demonstration of an AI-generated deepfake image by a New Zealand MP to highlight the dangers of such technology. The AI system was used to create manipulated content that can lead to violations of privacy and harassment, which are recognized harms under the framework. However, the article does not report an actual incident of harm occurring but rather focuses on raising awareness and advocating for legal reforms to prevent misuse. Hence, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm, rather than an AI Incident where harm has already occurred.

Sensation in New Zealand's parliament as MP Laura McClure displays her nude photo

2025-06-05
Webdunia
Why's our monitor labelling this an incident or hazard?
The AI system (deepfake generation) is used here to create a synthetic image, but no direct harm such as injury, rights violation, or disruption has occurred. The MP's action is intended to raise awareness and advocate for legal measures against potential misuse of AI deepfake technology. Therefore, this event is best classified as Complementary Information, as it provides context and societal/governance response to AI risks rather than reporting an AI Incident or AI Hazard.

New Zealand woman MP displays her own AI-generated nude photo in parliament

2025-06-05
India TV Hindi
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake generation) is explicitly involved as it was used to create fake nude images without consent, which constitutes a violation of privacy and can cause significant harm to individuals. The event describes realized harm (non-consensual creation and potential distribution of explicit images) and the MP's legislative efforts to prevent further harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm related to privacy violations and exploitation.

New Zealand woman MP displays her nude photo in parliament, reveals the truth behind it

2025-06-05
Newsnation
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create a fake nude image, which is an AI system. However, the MP used this image herself as a demonstration to raise awareness and push for legislation, not as a result of an actual incident of harm caused by AI misuse. The event highlights the potential for harm (digital violence, privacy violations) but does not describe a realized AI Incident or an immediate AI Hazard. Instead, it focuses on the societal response and legislative proposal to address these risks, fitting the definition of Complementary Information.

Video | New Zealand MP Laura McClure's shocking move: displays her deepfake nude photo in parliament. Why?

2025-06-05
ndtv.in
Why's our monitor labelling this an incident or hazard?
An AI system (deepfake technology) was used to generate a fake nude image of a public figure, which was publicly shown in Parliament. This constitutes a violation of personal rights and dignity, a form of harm to the individual caused by AI misuse. The event directly involves AI-generated content causing reputational and personal harm, fitting the definition of an AI Incident. The discussion of legal gaps and proposed bills is complementary information but the core event is the AI-generated harmful content being used in a public setting.

New Zealand MP displays her AI-generated nude photo in parliament, demands a ban on deepfakes

2025-06-03
Aaj Tak
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating deepfake images, which are used without consent, causing harm such as exploitation and violation of privacy rights. This fits the definition of AI-related harm. However, the article does not report a specific AI Incident where harm has already occurred to a particular victim; instead, it highlights the problem and the MP's legislative response. This aligns with Complementary Information, as it details governance and societal responses to AI harms rather than a new incident or hazard. Hence, the classification is Complementary Information.

New Zealand MP Laura McClure displays deepfake AI nude image of herself in Parliament to urge legal reform

2025-06-05
The New Indian Express
Why's our monitor labelling this an incident or hazard?
The AI system was used to create a deepfake image, which is an AI-generated content type. The MP's action is intended to illustrate the potential for harm and the ease of misuse, but no actual harm or abuse resulting from this specific image is reported. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. Instead, it serves as complementary information by providing context and raising awareness about AI misuse and the need for legal reform.

Government open to further changes to crack down on AI deepfake porn

2025-06-03
NZ Herald
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of AI-generated deepfake technology used to create non-consensual sexual images, which is recognized as harmful. However, the article centers on legislative and political responses to this issue rather than reporting a specific AI incident where harm has already occurred or a concrete AI hazard event. The discussion is about potential and ongoing misuse and the need for legal frameworks to address it. Therefore, this is best classified as Complementary Information, as it provides context on governance and societal responses to AI-related harms without describing a new incident or hazard itself.

INSANE! NZ MP Laura McClure Holds Up Her Deepfake Nude In Parliament For Stronger Digital Abuse Laws

2025-06-03
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The article describes an MP using an AI-generated deepfake image to raise awareness about the harms of deepfake technology and to push for stronger laws. The AI system (deepfake generation) is involved in creating harmful content that can cause violations of rights and harm to individuals. However, the article does not report a new specific AI Incident where harm has directly or indirectly occurred in this instance; rather, it focuses on the demonstration and legislative response to an ongoing problem. This fits the definition of Complementary Information, as it details societal and governance responses to AI harms and raises awareness without reporting a new incident or hazard.

New Zealand MP Stuns Parliament With AI-Generated Nude Image To Expose Deepfake Abuse

2025-06-05
Oneindia
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a non-consensual explicit deepfake image, which directly relates to harm to the individual’s mental health, reputation, and personal safety, fitting the definition of harm to persons and violation of rights. The event is not merely a warning or potential risk but demonstrates actual misuse and harm caused by AI-generated content. The MP's act and the discussion of legislative changes further confirm the recognition of this harm. Hence, this is classified as an AI Incident.

New Zealand MP shows AI-generated naked image of herself in Parliament, highlights dangers of deepfake

2025-06-05
News9live
Why's our monitor labelling this an incident or hazard?
The article describes an AI-generated deepfake image used to demonstrate the potential harm of such technology, but it does not report a specific incident where the AI deepfake caused direct or indirect harm beyond the MP's own experience of distress. The MP's action is a form of advocacy and warning about plausible harms from AI misuse, not a report of a new AI Incident or an immediate AI Hazard event. Therefore, this is best classified as Complementary Information, as it provides important context and societal response to AI misuse risks.

New Zealand MP Shocks Parliament With Her Own AI Nude. But For A Serious Cause

2025-06-05
english
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic deepfake nude image without consent, which constitutes a violation of personal rights and can cause significant emotional harm. The event directly involves the use of AI-generated content that has already caused harm by its creation and public exposure, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The MP's demonstration and call for legal amendments further confirm the recognition of this harm. Hence, this is not merely a potential risk or complementary information but a realized AI Incident.

Female MP's 'terrifying' discovery following a Google search

2025-06-03
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake images that have been created and circulated, causing real harm to individuals, including minors and public figures. The harm includes psychological distress, reputational damage, and violation of privacy and consent, which fall under violations of human rights and harm to communities. The AI system's use in generating these images is central to the harm described. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

New Zealand MP Exposes Deepfake Threat By Displaying AI-Generated Nude Image Of Herself In Parliament

2025-06-05
NDTV
Why's our monitor labelling this an incident or hazard?
The AI system (deepfake generation) is explicitly involved in creating a manipulated image that could cause harm if misused. The MP's act is a demonstration of the potential for harm (degrading and devastating effects on victims) and a call for legislative action to prevent such harms. Since no actual harm from misuse is reported here, but the event highlights a credible risk of harm from AI misuse, this qualifies as an AI Hazard. It is not Complementary Information because the main focus is on the risk demonstration and legislative advocacy, not on updates or responses to a past incident. It is not an AI Incident because no realized harm from the AI system's use is described.

New Zealand MP holds up AI-generated nude of herself in Parliament to fight deepfakes (WATCH)

2025-06-03
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The AI system is involved in generating deepfake images, which can cause harm such as violations of privacy and abuse, especially targeting vulnerable groups. The MP's demonstration is a preventive and advocacy action rather than an incident of harm itself. The article focuses on the potential misuse of AI deepfake technology and the need for legal frameworks to address it. Therefore, this event is best classified as Complementary Information, as it provides context and societal response to AI-related risks without reporting a new AI Incident or Hazard.

New Zealand MP Displays Her AI-Generated Nude Image, Warns Against Deepfake Threat

2025-06-05
News18
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a deepfake nude image of MP Laura McClure, which was then publicly displayed and shared. The creation and sharing of this manipulated image without consent is a misuse of AI technology that directly harms the individual's dignity and privacy, constituting a violation of rights and causing psychological harm. The article describes a realized harm rather than a potential one, and the AI system's role is pivotal in enabling this harm. Therefore, this event qualifies as an AI Incident under the framework.

New Zealand MP Shows Her Nude Deepfake In Parliament; Exposes How Easy It Is To Nudify Anyone With AI In Minutes

2025-06-05
Mashable India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake AI apps) used to generate non-consensual sexually explicit images, which is a direct violation of personal rights and autonomy. The MP's demonstration and the cited statistics about the prevalence of non-consensual deepfake pornography confirm that harm is occurring. The event highlights the misuse of AI technology leading to violations of human rights and harm to individuals, meeting the criteria for an AI Incident. The proposed legislation further underscores the recognition of this harm.

New Zealand MP shows AI-generated nude of herself in Parliament, says 'imagine how easy...'

2025-06-05
India TV News
Why's our monitor labelling this an incident or hazard?
The AI system (deepfake generation) is explicitly involved in creating a synthetic nude image, illustrating the ease with which such content can be produced and weaponized. The event centers on the plausible future harm of non-consensual explicit deepfake content, which can cause violations of privacy and dignity (harm to individuals and communities). Since no actual harm from malicious use is reported, but the risk is clearly articulated and the event is a call for legal action to prevent such harms, this qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the AI system's role in generating the image and the associated risk is central to the event. Therefore, the classification is AI Hazard.