Meta Sues Developer of AI 'Nudify' App for Generating Non-Consensual Deepfakes

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta has filed a lawsuit against Joy Timeline HK Limited, developer of the CrushAI app, which uses AI to generate non-consensual nude images, for repeatedly evading Meta's ad policies. The app ran tens of thousands of ads on Facebook and Instagram, causing widespread harm through the creation and distribution of explicit deepfakes.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (nudify apps) that generate fake nude content without consent, causing direct harm to individuals through sextortion and blackmail, which are violations of rights and harm to communities. Meta's legal action and improved AI detection are responses to this realized harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing

Industries
Media, social platforms, and marketing; Consumer services; Digital security

Affected stakeholders
Women

Harm types
Human or fundamental rights; Psychological; Reputational

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard

Meta Sues Over AI Nude Scandal -- Deepfake Sextortion Sparks Global Crackdown

2025-06-12
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (nudify apps) that generate fake nude content without consent, causing direct harm to individuals through sextortion and blackmail, which are violations of rights and harm to communities. Meta's legal action and improved AI detection are responses to this realized harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.

Meta sues AI app maker for running nudify ads on Facebook and Instagram

2025-06-13
Digit
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates non-consensual explicit images, causing violations of personal rights and privacy, which falls under harm to individuals and communities. The lawsuit and removal of ads indicate that harm has occurred. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm through misuse and violation of rights.

Meta Sues Developer Behind AI "Nudify" App For Running Harmful Ads On Its Platforms

2025-06-13
finanzen.ch
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the CrushAI app) that generates harmful, nonconsensual sexualized images, causing harm to individuals and communities through harassment and exploitation. Meta's lawsuit and enforcement actions are responses to this harm. Since the AI system's use has directly led to violations and harm, this qualifies as an AI Incident.

Meta sues maker of explicit deepfake app for dodging its rules to advertise AI 'nudifying' tech

2025-06-12
CNN International
Why's our monitor labelling this an incident or hazard?
The AI system in question is explicitly described as generating sexually explicit deepfake images without consent, which is a clear violation of human rights and privacy. The harm is realized and ongoing, as evidenced by the large number of ads promoting the app and the targeting of users across multiple countries. Meta's legal action and the description of the harm caused by the AI-generated content meet the criteria for an AI Incident, as the AI system's use has directly led to significant harm to individuals' rights and dignity.

Meta sues maker of explicit deepfake app for dodging its rules to advertise AI 'nudifying' tech | CNN Business

2025-06-12
CNN
Why's our monitor labelling this an incident or hazard?
The article describes an AI-powered 'nudifying' app that generates non-consensual sexually explicit deepfake images, which is a clear violation of rights and causes harm to individuals targeted by such content. The AI system's outputs have been used maliciously and advertised widely, leading to realized harm. Meta's legal action and efforts to detect and block such ads further confirm the harm caused. The AI system's development and use have directly led to violations of human rights and sexual exploitation, fitting the definition of an AI Incident.

Meta sues app-maker as part of crack down on 'nudifying'

2025-06-12
BBC
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate fake nude images without consent, directly leading to violations of human rights and privacy. The misuse of AI in this context has caused harm to individuals, and Meta's legal action is a response to this AI-driven harm. Therefore, this qualifies as an AI Incident due to realized harm stemming from AI misuse.

Meta sues company behind AI nudify app Crushai By Investing.com

2025-06-12
Investing.com India
Why's our monitor labelling this an incident or hazard?
The Crushai app uses AI to generate nude images without consent, directly violating individuals' rights and privacy, which is a clear harm. The lawsuit by Meta is a response to this harm caused by the AI system's use. Therefore, this event qualifies as an AI Incident due to the realized violation of rights stemming from the AI system's use.

Meta Sues Crush AI to Block 'Nudify' Ads

2025-06-12
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI systems (nudify apps) to generate nonconsensual explicit images, which constitutes a violation of human rights and intimate-image abuse. The ads promoting these AI-generated images have caused harm by facilitating the spread of such content. Meta's lawsuit and enhanced detection tools are responses to this ongoing harm. Since the AI system's use has directly led to realized harm (nonconsensual explicit images and their promotion), this qualifies as an AI Incident under the framework, specifically under violations of human rights and harm to communities.

Meta Sues Hong Kong Firm in Crackdown on Deepfake Nude Apps

2025-06-12
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated nude images created without consent, which is a clear violation of rights and causes harm to individuals. The AI system's use in generating these images is central to the harm described. Meta's legal action against the company promoting these apps confirms the direct link between the AI system's use and the harm. Hence, this qualifies as an AI Incident under the framework's criteria for violations of rights and harm to communities.

Meta Launches Lawsuit Against 'Nudify' AI Company Over Facebook, Instagram Ads

2025-06-12
Investopedia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'nudify' app using AI to generate explicit images) whose use has led to the promotion of explicit content in violation of platform policies. The AI system's misuse has directly led to harm in terms of violating content standards and potentially exposing users to harmful explicit material. The lawsuit addresses the circumvention of detection technology, indicating harm has occurred. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use and misuse.

Meta Sues Hong Kong Firm in Crackdown on Deepfake Nude Apps

2025-06-12
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (the 'nudify' apps) that generate explicit images without consent, directly leading to violations of personal rights and harm to individuals. This fits the definition of an AI Incident because the AI's use has directly led to harm through non-consensual sexual image creation. The lawsuit is a response to this harm, but the core event is the AI-enabled harm itself.

Meta cracks down on nudify apps after being exposed

2025-06-12
The Verge
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI deepfake nudify apps) whose use has led to harm by enabling non-consensual creation and distribution of nude deepfake images, violating individuals' rights and causing harm to communities. Meta's lawsuit and removals are responses to this ongoing AI Incident. The harm is realized (ads and apps are active), and the AI system's use is directly linked to violations of rights and harm to individuals. Therefore, this qualifies as an AI Incident.

Meta: Oops, Ads for Deepfake 'Nudify' Apps Shouldn't Be on Facebook, Instagram

2025-06-12
PCMag Australia
Why's our monitor labelling this an incident or hazard?
The article describes AI-powered 'nudify' apps that generate fake nude images of real people without consent, constituting a violation of human rights and privacy. The AI system's use in producing such content has directly led to harm, including potential psychological and reputational damage to victims. Meta's lawsuit and platform enforcement actions are responses to this realized harm. Therefore, this qualifies as an AI Incident due to the direct involvement of AI systems in causing violations of rights and harm to individuals.

Meta: Oops, Ads for Deepfake 'Nudify' Apps Shouldn't Be on Facebook, Instagram

2025-06-12
PC Magazine
Why's our monitor labelling this an incident or hazard?
The AI systems in question are generative AI apps that create fake nude images without consent, directly causing harm to individuals by violating their privacy and potentially causing psychological and reputational damage. The presence of these AI systems is explicit, and their use has led to realized harm (nonconsensual intimate imagery). Meta's legal action and technical measures are responses to this ongoing AI Incident. Therefore, the event qualifies as an AI Incident due to the direct harm caused by the AI systems' use and the violation of rights.

Meta sues developer of 'nudify' app CrushAI

2025-06-12
The Hill
Why's our monitor labelling this an incident or hazard?
The 'nudify' apps use AI to generate non-consensual explicit images, directly causing harm by violating individuals' rights and privacy. Meta's lawsuit and actions against the developer and ads indicate that the AI system's use has led to actual harm, not just potential harm. The event clearly involves AI systems and their misuse leading to violations of human rights, fitting the definition of an AI Incident.

Meta sues developers of 'nudify' apps for running ads on its platforms - UPI.com

2025-06-12
UPI
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate explicit images without consent, which constitutes a violation of rights and harm to individuals. The AI system's use has directly led to harm through non-consensual creation and promotion of explicit content. Meta's legal action and technological enforcement are responses to this AI Incident. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Meta sues makers of AI app that makes deepfake nudes from regular pics

2025-06-12
The Independent
Why's our monitor labelling this an incident or hazard?
The app CrushAI uses AI to generate deepfake nudes from ordinary photos, enabling non-consensual intimate imagery, which is a clear violation of human rights and privacy protections. The harm is realized as these apps have been widely advertised and used, causing direct harm to individuals. Meta's legal action and content moderation efforts are responses to this ongoing AI Incident. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use.

Meta is cracking down on AI 'nudify' apps

2025-06-12
engadget
Why's our monitor labelling this an incident or hazard?
The AI systems (nudify apps) are explicitly mentioned as generating nonconsensual nude images, which is a violation of human rights and causes harm to individuals and communities. The harm is realized as these apps have been advertised and presumably used, leading to direct harm. Meta's lawsuit and new detection technologies are responses to this ongoing AI Incident. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's use.

Meta files lawsuit against maker of "nudify" app technology

2025-06-12
CBS News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (nudify app) that generates simulated nude images of real people without consent, directly leading to harm through violations of privacy and human rights. The use of AI to create non-consensual intimate imagery is a clear harm under the framework. Meta's legal action and platform policy enforcement respond to this harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals.

Deepfake nude apps on radar: Meta sues Hong Kong firm over CrushAI ad. What's next? | Today News

2025-06-12
mint
Why's our monitor labelling this an incident or hazard?
The event describes AI systems (nudify apps) generating explicit images without consent, which is a direct violation of individuals' rights and privacy, a form of harm under the AI Incident definition (violation of human rights). The lawsuit and Meta's actions are responses to this realized harm. The AI system's use in creating and promoting non-consensual sexual content is central to the incident, not merely a potential risk or background context. Therefore, this qualifies as an AI Incident.

Meta sues 'nudify' app Crush AI

2025-06-12
Mashable
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Crush AI) that generates nonconsensual intimate images, which constitutes a violation of human rights and privacy, thus causing harm. The AI system's use and promotion have directly led to this harm. Therefore, this qualifies as an AI Incident. The article also mentions Meta's responses, but the primary focus is on the harm caused by the AI system and the lawsuit, not just the response, so it is not merely Complementary Information.

Meta sues 'nudify' app Crush AI

2025-06-12
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Crush AI) that generates nonconsensual intimate images, a clear violation of rights and a form of harm to individuals and communities. The AI system's use has directly led to harm through the creation and dissemination of intimate deepfake images without consent. Meta's lawsuit and detection efforts confirm the AI system's pivotal role in causing this harm. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use.

Meta sues AI 'nudify' app Crush AI for advertising on its platforms | TechCrunch

2025-06-12
TechCrunch
Why's our monitor labelling this an incident or hazard?
The Crush AI app uses generative AI to create fake nude images of real people without their consent, which is a clear violation of human rights and privacy protections. The widespread advertising of this service on Meta's platforms has caused harm by facilitating non-consensual explicit content distribution. The AI system's use in generating these images and the resulting harm to individuals and communities meet the criteria for an AI Incident. The article describes realized harm, not just potential harm, and the AI system's role is pivotal in causing this harm.

Meta sues Hong Kong firm over AI app making non-consensual explicit images

2025-06-13
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The app uses AI to generate explicit images without consent, directly violating human rights and privacy, which is a recognized harm under the AI Incident definition. The lawsuit highlights ongoing misuse and harm caused by the AI system's deployment and promotion. The involvement of AI in creating non-consensual explicit content and the resulting legal action confirm this as an AI Incident rather than a hazard or complementary information.

Meta sues maker of Crush AI nudify app over Facebook and Instagram ads

2025-06-12
TechSpot
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI used to create nonconsensual nude images) whose use has directly led to harm by violating individuals' rights and causing harassment. The repeated circumvention of ad policies to promote this harmful AI service further supports the classification as an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in causing the harm described.

Meta Sues Developer Behind AI "Nudify" App For Running Harmful Ads On Its Platforms

2025-06-12
NASDAQ Stock Market
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the CrushAI app) that generates nonconsensual sexualized images, causing harm to individuals and communities through harassment and exploitation. The AI system's use in generating such content and its promotion via ads on Meta's platforms has directly led to violations of rights and harm, qualifying this as an AI Incident. The article describes actual harm occurring, not just potential harm, and the legal action is a response to this harm, not merely complementary information.

Meta Takes Legal Action Against AI Apps That Generate Fake Nude Images

2025-06-12
Social Media Today
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the 'CrushAI' app) that generates harmful content (non-consensual nude images). The use of this AI system has directly led to violations of human rights, specifically privacy and consent, and harm to individuals and communities. Meta's legal action is a response to this realized harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm as defined in the framework.

Meta targets AI 'nudify' apps, but not for the reasons you're thinking

2025-06-12
Android Police
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for generating nude or sexually explicit images without consent, which is a recognized harm. However, Meta's lawsuit targets the app-maker's repeated circumvention of advertising policies rather than the AI-generated content causing direct harm. The article does not report a specific AI Incident where harm has occurred due to the AI system's use or malfunction, nor does it describe a plausible future harm scenario beyond existing concerns. Instead, it details Meta's governance and enforcement response to mitigate misuse of AI-generated content. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI misuse rather than reporting a new AI Incident or AI Hazard.

Combating Nudify Apps with Lawsuit & New Technology | Meta

2025-06-12
About Facebook
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (nudify apps) that generate non-consensual explicit images, which is a clear violation of human rights and causes harm to individuals. The use of AI to create such images and their promotion on platforms directly leads to harm. Meta's response, including lawsuits and technological enforcement, addresses this AI-driven harm. Therefore, this qualifies as an AI Incident due to realized harm caused by AI systems.

Meta files lawsuit to stop app that creates fake non-consensual nude images

2025-06-12
Washington Times
Why's our monitor labelling this an incident or hazard?
The CrushAI app uses AI to generate fake nude images without consent, which is a clear violation of human rights and privacy. The harm is realized as these images are being created and distributed, causing direct harm to individuals. Meta's actions to remove ads and sue the company behind the app further confirm the presence of harm linked to the AI system's use. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

Meta sues 'nudify' app-maker that ran 87k+ Facebook ads

2025-06-12
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The 'nudify' app uses AI to generate nude and sexually explicit images without consent, which constitutes a violation of individual rights and can cause significant harm to persons depicted. The widespread advertising and use of this AI system have led to realized harm, including privacy violations and potential psychological harm to individuals. Meta's enforcement actions and lawsuit confirm the AI system's role in causing these harms. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's use.

Meta sues Hong Kong firm in crackdown on deepfake nude apps

2025-06-12
Toronto Sun
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake nude apps) that generate non-consensual sexual images, directly leading to harms including violations of rights and abuse. Meta's lawsuit and information sharing indicate the AI system's use has caused actual harm, qualifying this as an AI Incident. The focus is on the harm caused by the AI system's use, not just potential harm or general information, so it is not a hazard or complementary information.

Meta files lawsuit against developer of CrushAI 'nudify' app

2025-06-12
NBC 5 Dallas-Fort Worth
Why's our monitor labelling this an incident or hazard?
The CrushAI app uses AI to generate nude images of people without their consent, which constitutes a violation of personal rights and causes harm to individuals and communities. Meta's lawsuit and enforcement actions are in response to this realized harm. The AI system's use in creating non-consensual sexualized images is central to the incident, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The article describes ongoing harm and Meta's legal and technical responses, confirming the incident status rather than a mere hazard or complementary information.

Meta has filed a lawsuit against AI firm behind fake non-consensual nude images - SiliconANGLE

2025-06-13
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The AI system in question is a generative AI app that enables the creation of fake non-consensual nude images, directly leading to violations of human rights and privacy. The harm is realized as these images are being created and distributed, including potential use involving minors, which is a serious concern. Meta's legal action is a response to this harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use in generating non-consensual intimate images.

Meta files lawsuit against AI firm behind fake nonconsensual nude images - SiliconANGLE

2025-06-13
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The AI system (generative AI apps) is explicitly involved in producing nonconsensual sexualized images, which is a clear violation of human rights and privacy, thus constituting harm. The event describes realized harm through the creation and spread of such images, including risks to children, and legal actions taken in response. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

Meta files lawsuit against developer of CrushAI 'nudify' app

2025-06-12
NBC10 Philadelphia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the CrushAI app) that uses AI to generate non-consensual nude images, which constitutes a violation of human rights and privacy, thus causing harm. The AI system's use in creating harmful content and its promotion through ads directly led to harm to individuals' rights and communities. Therefore, this qualifies as an AI Incident. The article also describes Meta's response, but the primary focus is on the harm caused by the AI system's use and the resulting lawsuit, not just the response, so it is not merely Complementary Information.

Meta sues CrushAI developer amid broader crackdown on 'nudify' apps

2025-06-12
Proactiveinvestors NA
Why's our monitor labelling this an incident or hazard?
The CrushAI app uses AI to generate non-consensual nude images, which is a clear violation of human rights and constitutes sexual exploitation, fitting the definition of an AI Incident due to realized harm. The developer's deliberate circumvention of platform policies and widespread promotion of the app further supports the classification as an AI Incident. Meta's legal and technological responses are complementary information but do not change the primary classification.

Meta takes legal action against app that can 'nudify' images

2025-06-12
WPTV
Why's our monitor labelling this an incident or hazard?
The app CrushAI uses AI to generate realistic fake nude images without consent, a direct violation of personal rights and privacy that fits the AI Incident definition of harm under criterion (c), violations of human rights. Meta's lawsuit and the discussion of the app's misuse confirm that the AI system's use has directly led to harm. The presence of AI is explicit in the creation of deepfake images, and the harm is realized, not just potential. Hence, this event is classified as an AI Incident.

Meta sues maker of explicit deepfake app for dodging its rules to advertise AI 'nudifying' tech | News Channel 3-12

2025-06-12
NewsChannel 3-12
Why's our monitor labelling this an incident or hazard?
The app CrushAI uses AI to create non-consensual explicit deepfake images, which directly violates individuals' rights and causes harm. The AI system's outputs are sexualized or nude images generated without consent, fulfilling the criteria for an AI Incident under violations of human rights and sexual exploitation. The lawsuit and Meta's efforts to block ads and develop detection technology confirm the harm is realized and ongoing. Therefore, this event is classified as an AI Incident.

Meta undresses AI "Nudify" apps in legal crackdown - Mediaweek

2025-06-13
Mediaweek
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (the 'nudify' apps) that generate non-consensual sexually explicit images, which constitutes a violation of individuals' rights and causes harm. Meta's lawsuit and detection tools respond to realized harms from the use of these AI systems. Since the harm is occurring and the article focuses on responses to this harm, this qualifies as Complementary Information rather than a new AI Incident or AI Hazard. The article does not report a new incident of harm but details Meta's enforcement and policy responses to an existing problem involving AI-generated non-consensual imagery.

Meta sues maker of explicit deepfake app for dodging its rules to advertise AI 'nudifying' tech

2025-06-12
WAAY TV 31
Why's our monitor labelling this an incident or hazard?
The app CrushAI uses AI to create explicit deepfake images without consent, which is a clear violation of rights and causes harm to individuals targeted by such images. The AI system's development and use have directly led to this harm. The lawsuit and Meta's efforts to block ads and detect such content confirm the realized harm and ongoing risk. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use in generating non-consensual explicit content.

Meta Is Cracking Down On AI 'Nudify' Apps

2025-06-12
BruneiDirect
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate nonconsensual explicit images, which is a direct violation of human rights and causes harm to individuals and communities. The AI system's use has directly led to harm through the creation and dissemination of harmful content. Meta's lawsuit and new detection measures are responses to this AI Incident. Therefore, this qualifies as an AI Incident due to realized harm caused by AI misuse.

Meta Sues AI Firm Behind 'Nudify' Apps Amid Surge in Deepfake Abuse - TV360 Nigeria

2025-06-12
TV360 Nigeria
Why's our monitor labelling this an incident or hazard?
The AI system CrushAI is explicitly mentioned as enabling the creation of non-consensual sexually explicit images, which is a violation of human rights and causes emotional and psychological harm to victims. The lawsuit and investigation confirm that the AI system's use has directly led to these harms. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use.

Meta Sues Nudify App That Keeps Advertising on Instagram

2025-06-12
404 Media
Why's our monitor labelling this an incident or hazard?
The CrushAI app uses AI to generate nude images without consent, directly violating individuals' rights and causing harm. The article describes ongoing harm through the app's advertising and use on Meta's platforms, indicating realized violations of rights and privacy. Meta's legal and enforcement responses confirm the seriousness and reality of the harm. The AI system's development and use have directly led to violations of human rights and legal obligations, fitting the definition of an AI Incident.

Meta sues developer of AI 'nudify' app for evading ad rules

2025-06-12
Straight Arrow News
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (CrushAI) that uses AI technology to create non-consensual nude images, which is a clear violation of rights and causes harm to individuals. The harm is realized and ongoing, as evidenced by the widespread use of such apps and associated cases of abuse. Meta's legal action and enhanced AI-based detection tools are responses to this harm but do not negate the fact that harm has occurred. Hence, this is an AI Incident involving the use of an AI system that has directly led to violations of human rights and harm to individuals.

Meta sues 'nudify' app firm amid calls for broader crackdown - Tech Digest

2025-06-13
Tech Digest
Why's our monitor labelling this an incident or hazard?
The 'nudify' apps use AI to generate non-consensual fake nude images, which constitutes a violation of human rights and causes emotional harm, particularly when used by predators to create illegal images of children. The article details ongoing harm and Meta's legal response to stop the advertising and spread of these AI-generated images. The AI system's use has directly led to harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Meta sues AI 'nudify' app Crush AI for running ads on its platforms

2025-06-13
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (CrushAI apps) that generate AI-based nude or sexually explicit images without consent, which constitutes a violation of human rights and privacy. Meta's lawsuit and enforcement actions are responses to this harm. However, the article focuses on the legal and enforcement response rather than describing a new AI Incident or Hazard itself. The harm from the AI system is established, but the main content is about Meta's actions and cooperation with other companies to mitigate the issue, which fits the definition of Complementary Information rather than a new Incident or Hazard.

Meta sues company for using Facebook ads to promote AI app that creates fake nude images

2025-06-13
India Today
Why's our monitor labelling this an incident or hazard?
The AI system (CrushAI app) generates non-consensual nude images, which is a direct violation of individuals' rights and causes harm. The use of AI to create such images and promote them via ads on Meta platforms has led to realized harm. Meta's lawsuit and enforcement actions are responses to this incident. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

Meta files lawsuit against nudity app developer

2025-06-13
Mobile World Live
Why's our monitor labelling this an incident or hazard?
The CrushAI app uses AI to generate nude images without consent, directly violating individuals' rights and causing harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights. Meta's legal and technical responses aim to mitigate ongoing harm but do not negate the fact that harm has occurred. Therefore, this event is classified as an AI Incident.

Meta Sues Developer of CrushAI 'Nudify' App | Silicon UK

2025-06-13
Silicon UK
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (CrushAI) that generates non-consensual nude images, directly causing harm through violations of rights and sexual exploitation. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and potential psychological harm). The article focuses on the harm caused by the AI system and the legal action taken, not just on the legal action itself as a response, so it is not merely Complementary Information.

Meta sues HK-based firm in crackdown on deepfake nude apps

2025-06-13
China Daily Asia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated nude and sexual images created without consent, which directly violates human rights and leads to harm such as abuse and blackmail. The AI system's use in generating these images is central to the harm described. Meta's legal action against the promoter of these apps is a response to this realized harm. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI systems in causing violations of rights and harm to individuals.

Meta Targets 'Nudifying' App in Legal Battle Against Deepfakes

2025-06-13
Digit
Why's our monitor labelling this an incident or hazard?
The 'nudifying' app uses AI to generate non-consensual nude images, which is a clear violation of human rights and causes harm to individuals (harm categories c and d). The AI system's use has directly led to harm through the creation and dissemination of intimate images without consent. Meta's legal action and platform enforcement are responses to this AI Incident. Therefore, the event meets the criteria for an AI Incident due to realized harm caused by the AI system's outputs and misuse.

Meta takes AI firm behind 'nudify' apps to court over ads on Facebook, Instagram

2025-06-15
The Indian Express
Why's our monitor labelling this an incident or hazard?
The 'nudify' apps use generative AI to create realistic fake nude images without consent, which constitutes a violation of human rights and privacy, thus causing harm. The AI system's use in generating such images directly leads to harm to individuals. Meta's lawsuit and AI detection system are responses to this ongoing AI-driven harm. Since the harm is occurring and the AI system's role is pivotal, this qualifies as an AI Incident.

Meta Takes Action on 'Nudify' Apps, Files Lawsuit Against Hong Kong-Based Joy Timeline HK Limited To Prevent Advertising CrushAI Apps on Meta Platforms | LatestLY

2025-06-15
LatestLY
Why's our monitor labelling this an incident or hazard?
The article describes AI systems ('nudify' apps) that generate non-consensual explicit images, which is a clear violation of human rights and privacy protections. The harm is realized as these apps produce and distribute harmful content. Meta's lawsuit and platform actions are responses to this ongoing AI-driven harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals.

Goodbye to AI nudes: Meta sues the most popular app for advertising on its platforms

2025-06-12
El Español
Why's our monitor labelling this an incident or hazard?
The AI system (Crush AI) is explicitly described as generating non-consensual nude deepfake images, which is a clear violation of human rights and privacy, thus causing harm. The event involves the use and misuse of AI technology leading to realized harm (non-consensual deepfake pornography). Meta's legal and technical responses are complementary to addressing this AI Incident but do not negate the fact that harm has occurred. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

Meta declares war on AI apps that create fake 'nudes' without consent

2025-06-12
Canaltech
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake-generating apps) that create harmful content (non-consensual nude images) causing psychological harm and violating rights. Meta's legal and platform actions respond to realized harms caused by these AI systems. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and psychological harm).

Meta is suing an AI nude-generation app; here's what to know

2025-06-12
Olhar Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Crush AI) that generates nude images, which is a clear AI involvement. The event stems from the use of this AI system in a way that violates platform policies and potentially harms users or communities by spreading inappropriate content. However, the article does not describe a specific incident where harm has already occurred due to the AI system's outputs; instead, it focuses on Meta's legal and policy actions to prevent further misuse. This fits the definition of Complementary Information, as it details governance responses and ongoing challenges in managing AI-generated harmful content rather than a direct AI Incident or an AI Hazard.

Meta has begun its fight against AI apps created to 'undress' celebrities

2025-06-12
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The event involves AI systems designed to create non-consensual nude images, which is a clear violation of privacy and potentially human rights. This fits the definition of an AI Hazard because the AI systems could plausibly lead to harm (violation of rights and harm to individuals' dignity). However, since the article does not report actual incidents of harm caused by these AI apps but rather Meta's proactive legal and technological measures to prevent such harm, it is best classified as Complementary Information. The main focus is on the societal and governance response to a known AI-related threat rather than on a realized AI Incident or a mere potential hazard without response.

Meta sues the company that promoted an app to 'undress' people

2025-06-12
Forbes México
Why's our monitor labelling this an incident or hazard?
The article describes AI-powered applications (CrushAI) that generate explicit images of people without their consent, which is a clear violation of human rights and privacy. The AI system's use has directly led to harm by enabling non-consensual creation and dissemination of sexualized images, causing potential psychological and reputational damage. Meta's legal actions and platform restrictions are responses to this ongoing harm. Since the harm is realized and directly linked to the AI system's use, this event meets the criteria for an AI Incident.

Meta declares war on AI apps that create "nudes" without consent | TugaTech

2025-06-12
TugaTech
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI applications that generate non-consensual nude images, causing harm to individuals' rights and privacy, which is a violation of human rights and legal protections. The AI systems' use has directly led to harm through the creation and dissemination of these manipulated images. Meta's legal action and technological countermeasures are responses to this ongoing harm. Therefore, this event qualifies as an AI Incident due to realized harm caused by AI misuse.

Apps that 'undress' celebrities taken to court

2025-06-13
Correio da Manha
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake applications) that generate non-consensual nude images of public figures, which constitutes a violation of rights and harm to individuals. The AI system's use has directly led to harm through the creation and dissemination of explicit deepfake content without consent. Therefore, this qualifies as an AI Incident. The article also mentions Meta's legal and technical responses, but the primary focus is on the harmful use of AI and its consequences, not just the response, so it is not merely Complementary Information.

Meta sues app that "undresses" people using AI

2025-06-12
Cooperativa
Why's our monitor labelling this an incident or hazard?
The applications CrushAI use AI to generate explicit images without consent, which is a direct violation of individuals' rights and privacy, fitting the definition of an AI Incident due to harm to persons and violation of rights. The event involves the use and misuse of AI systems to cause harm, and the harm is realized as these services are actively promoted and used. Meta's legal and technical responses are attempts to mitigate this harm but do not negate the incident itself.

Meta sues the firm behind an AI app that "undresses" people without consent - Technology - ABC Color

2025-06-12
ABC Digital
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (CrushAI) that uses AI to generate images of people without clothes without their consent, which is a clear violation of human rights and privacy. The harm is realized as these images are being promoted and disseminated via ads on Meta's platforms, leading to direct harm to individuals. Meta's legal action and content moderation efforts confirm the seriousness and occurrence of harm. The AI system's use is central to the incident, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities.

New digital war: social media ads for AI that 'undresses' people are on the rise, and Meta seeks to limit them

2025-06-12
publimetro
Why's our monitor labelling this an incident or hazard?
The event involves AI generative systems used to create harmful content (non-consensual nude images), which is a violation of individuals' rights and privacy. However, the article focuses on Meta's efforts to detect, remove, and limit the spread of such harmful AI-generated content rather than describing a specific incident where harm has already occurred or a direct malfunction of an AI system causing harm. The harm is recognized as a risk and ongoing issue, but the main narrative is about the response and mitigation measures taken by Meta and partners. Therefore, this is best classified as Complementary Information, as it provides updates on societal and governance responses to AI-related harms rather than reporting a new AI Incident or AI Hazard.

Portaltic.-Meta fights ads for applications that use AI...

2025-06-12
Notimérica
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI generative systems to create non-consensual explicit images, which constitutes a violation of rights and harm to individuals. Meta's actions to remove such content and block promotion address an ongoing harm caused by AI misuse. Since the AI system's use has directly led to violations of rights and harm to individuals through non-consensual explicit content, this qualifies as an AI Incident. The article describes realized harm and active mitigation efforts, not just potential harm or general information, so it is not a hazard or complementary information.

Meta sues the company that promoted an app to "undress" people

2025-06-12
La Voz de Michoacán
Why's our monitor labelling this an incident or hazard?
The applications CrushAI use AI to generate explicit images of individuals without their consent, causing harm to the individuals' rights and privacy. The article describes ongoing harm through the promotion and advertisement of these services, which Meta is actively trying to stop through legal means. Since the AI system's use has directly led to violations of rights and harm to individuals, this qualifies as an AI Incident under the framework.

Meta fights ads for apps that use AI to undress people

2025-06-12
Diario Siglo XXI
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI generative systems to create non-consensual nude images, which constitutes a violation of personal rights and privacy. Meta's actions target the use of AI systems that generate harmful content, but the article describes ongoing mitigation efforts rather than a realized harm incident or a plausible future harm scenario. The focus is on enforcement and detection improvements, which aligns with Complementary Information as it updates on responses to an existing AI-related harm issue rather than reporting a new AI Incident or AI Hazard.

In Brief | Companies | Valor Econômico

2025-06-13
Valor Econômico
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the CrushAI app) that generates explicit images without consent, which constitutes a violation of human rights and privacy. The harm has occurred as the AI-generated content is being distributed, and Meta's legal action indicates recognition of this harm. Therefore, this qualifies as an AI Incident due to violations of rights caused by the AI system's use.

Meta aims to end ads for apps that create fake nudes using AI

2025-06-13
TecMundo
Why's our monitor labelling this an incident or hazard?
The described apps use AI to generate non-consensual sexualized images, causing direct harm to individuals' emotional and personal lives, which constitutes harm to persons (a). The AI system's use is central to the harm, as it enables the creation of realistic fake nude images without consent. The legal action and content blocking are responses to an ongoing AI Incident involving violations of rights and harm to people. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Meta sues creator of app that generates fake nudes with AI | CNN Brasil

2025-06-13
CNN Brasil
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (CrushAI) that generates explicit deepfake images without consent, which directly causes harm to individuals by violating their rights and privacy. The AI system's use has led to widespread dissemination of non-consensual sexualized images, a clear violation of human rights and platform policies. The harm is realized, not just potential, as the images have been created and advertised extensively. Meta's legal action and efforts to detect and remove such content further confirm the incident's nature. Hence, this qualifies as an AI Incident under the framework.

AI 'undressing' apps in the crosshairs: Meta has filed suit against one of these platforms

2025-06-13
20 minutos
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as generative AI used to create fake nude images without consent, which constitutes a violation of privacy and human rights. The harm is realized and ongoing, including harassment and potential psychological injury, especially to vulnerable groups like minors. Meta's lawsuit and content moderation efforts confirm the AI system's role in causing these harms. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

Meta sues the company behind an AI app to "undress" people

2025-06-13
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (CrushAI) to generate explicit images without consent, which constitutes a violation of human rights and personal privacy. The misuse of AI to create such harmful content directly leads to harm to individuals and communities. Meta's lawsuit and ongoing efforts to block and restrict these ads indicate that harm has occurred and is ongoing. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use and the legal and societal responses to it.

Meta sues company for advertising AI apps that "undress" people on its platforms

2025-06-13
RPP noticias
Why's our monitor labelling this an incident or hazard?
The event describes the use of generative AI systems to produce fake nude images without consent, which is a clear violation of privacy rights and can cause reputational and psychological harm to individuals. The AI system's outputs are central to the harm caused. Meta's removal of ads and legal action further confirm the harm has occurred. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Meta sues the company that promoted an app to "undress" people

2025-06-13
Vanguardia
Why's our monitor labelling this an incident or hazard?
The applications CrushAI use AI to generate explicit images without consent, which is a clear violation of human rights and privacy protections. The harm is realized as these images are being created and advertised, causing direct harm to individuals. Meta's response and legal action confirm the seriousness and occurrence of harm. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing violations of rights and harm to people.

Meta sues a company using Facebook ads to promote an app that creates fake nude images - اليوم السابع

2025-06-14
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (CrushAI) that generates explicit fake images without consent, which is a direct violation of individuals' rights and causes harm. The use of AI to create non-consensual intimate images is a clear breach of fundamental rights and constitutes harm to communities and individuals. Meta's legal action is a response to this realized harm. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.

Meta takes 'nudify' app developers to court and launches a campaign to remove their ads

2025-06-14
الوفد
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to generate non-consensual nude images, which is a direct violation of privacy and likely breaches fundamental rights. The harm is realized as these images are produced and advertised, causing reputational and psychological harm to individuals. Meta's legal action and technical measures are responses to this AI-driven harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to communities).

Meta fights controversial Chinese apps that misuse artificial intelligence

2025-06-13
صدى البلد
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate non-consensual digital undressing images, which directly harm individuals' privacy and safety, constituting violations of rights and harm to communities. Meta's legal action and platform measures confirm the harm is occurring and linked to AI misuse. The presence of AI is clear from the description of AI-powered applications performing digital undressing. The harms include privacy violations, digital sexual harassment, and exploitation, fitting the definition of an AI Incident due to direct harm caused by AI system use and misuse.

مساحات نيوز: Meta sues a company using Facebook ads to promote an app that creates fake nude images - مساحات

2025-06-14
مساحات
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Crushai) that generates non-consensual explicit images, which constitutes a violation of human rights and causes harm to individuals (harm to rights and communities). The use of AI to create such harmful content and its promotion via Facebook ads directly leads to harm. Meta's legal action and enforcement measures are responses to this AI Incident. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Meta sues a company using Facebook ads to promote an app that creates fake nude images

2025-06-14
Arabstoday
Why's our monitor labelling this an incident or hazard?
The AI system in question is explicitly described as generating non-consensual nude or explicit images, which is a clear violation of personal rights and privacy, thus meeting the criteria for harm under (c) violations of human rights or breach of obligations protecting fundamental rights. The harm is realized, not just potential, as the app is actively used and promoted via ads. Meta's lawsuit is a response to this ongoing harm, confirming the incident status rather than a mere hazard or complementary information. Hence, this is classified as an AI Incident.

Meta sues an AI app that sparked controversy: details

2025-06-16
صدى البلد
Why's our monitor labelling this an incident or hazard?
The AI system (Crush AI) is explicitly described as generating fake nude images without consent, which constitutes a violation of personal rights and privacy, a form of harm under the framework. The dissemination of thousands of ads promoting this AI-generated harmful content on Meta's platforms directly leads to harm to individuals and communities. Meta's legal action and efforts to remove such content confirm the harm is occurring. Hence, this is an AI Incident involving the use and misuse of an AI system causing violations of rights and harm to individuals.

Fake "nude" photos: Meta sues a Hong Kong company

2025-06-12
Boursier.com
Why's our monitor labelling this an incident or hazard?
The applications promoted by the Hong Kong company use AI to generate fake nude images without consent, which is a direct violation of human rights and can lead to harm such as extortion, blackmail, and abuse. Meta's legal action is a response to the misuse of AI technology causing these harms. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals.

Meta goes to war against AI apps that undress people

2025-06-13
Génération-NT
Why's our monitor labelling this an incident or hazard?
The AI systems (deepfake image generators) are explicitly involved in creating harmful content (non-consensual explicit images), which constitutes a violation of human rights and causes harm to individuals and communities. The harm is realized, as these images fuel sextortion and exploitation. Therefore, this qualifies as an AI Incident. The article focuses on the harm caused by the AI-generated content and Meta's response to it, not merely on general AI developments or policy discussions, so it is not Complementary Information. The presence of AI systems and their direct role in harm is clear, and the harm is ongoing, so it is not merely a hazard or unrelated news.

Meta takes on AI apps dedicated to "nudification"

2025-06-14
24matins.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to generate or modify images sexually without consent, causing harm to individuals' privacy and dignity, which is a violation of human rights. The AI systems' use has directly led to harm through the dissemination of illicit content and scams via deepfakes. Meta's legal action and technical countermeasures confirm the presence of realized harm. Hence, this event meets the criteria for an AI Incident, as the AI system's use has directly led to violations of rights and harm to communities.

Meta sues the publisher of an AI-powered nude-creation app

2025-06-12
KultureGeek
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (the 'nudify' app) to generate non-consensual nude images, which constitutes a violation of individuals' rights and the dissemination of illegal content. This harm has already occurred, as evidenced by the proliferation of ads and content. Meta's legal action and technological responses are reactions to this harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals.

This AI shows the nude bodies of all women on Facebook: Meta files suit

2025-06-13
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI applications creating deepfake images) whose use has directly led to harm: non-consensual sexualized deepfake images causing emotional harm and privacy violations. This fits the definition of an AI Incident because the AI system's use has directly caused harm to individuals and communities. The article also describes ongoing harm and Meta's responses, but the primary focus is on the realized harm caused by the AI-generated deepfakes.

Meta files a lawsuit over nude images

2025-06-13
Blikk
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake generation) that has been used to create non-consensual nude images, directly causing harm to individuals' rights and privacy, which fits the definition of an AI Incident. The legal action and new detection technology are complementary information but the main event is the harm caused by the AI-generated content and the misuse of AI for deepfake nude images. Therefore, this is classified as an AI Incident due to realized harm from AI misuse.

Nude images were generated of the victims - Facebook is furious and launching a massive lawsuit

2025-06-16
Naphire.hu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (nudify technologies) that generate non-consensual nude images, causing direct harm to individuals' privacy and dignity, which falls under violations of human rights and ethical breaches. The harm is realized, not just potential, as the images are generated and distributed without consent, constituting sexual abuse and digital harassment. Meta's legal and technological responses confirm the seriousness and reality of the harm. Hence, this is an AI Incident.

The most intimate questions asked of AI have been made public

2025-06-17
ICT Global
Why's our monitor labelling this an incident or hazard?
The Meta AI chatbot is an AI system explicitly mentioned. The harm arises from the AI system's use, specifically the 'Discover' feed feature that enables sharing of user conversations, which has led to the public disclosure of sensitive personal information. This disclosure harms users' privacy and violates their rights, fulfilling the criteria for an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights. The harm is realized, not just potential, as private conversations have already been exposed publicly. Hence, the event is classified as an AI Incident.