Apple and Google Criticized for Hosting AI Nudify Apps Enabling Non-Consensual Deepfakes


The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

Apple and Google have been criticized for hosting dozens of AI-powered 'nudify' apps, which generate non-consensual sexualized deepfake images, on their app stores. Despite policies against such content, these apps have been downloaded hundreds of millions of times, causing significant privacy violations and harm to individuals.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems that generate deepfake nude images, which is a direct use of generative AI technology. The harms described include violations of human rights (privacy, dignity), mental health harm to targeted groups (women and children), and illegal content involving children, which are serious harms under the AI Incident definition. The widespread availability and use of these apps, along with the generation of illegal content, confirm that harm has occurred. The involvement of AI in the development and use of these apps is clear and central to the incident. Hence, this is classified as an AI Incident.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Human or fundamental rights
Psychological
Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Apple, Google app stores still host dozens of AI 'nudify' apps, report claims

2026-01-28
The Indian Express
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems that generate deepfake nude images, which is a direct use of generative AI technology. The harms described include violations of human rights (privacy, dignity), mental health harm to targeted groups (women and children), and illegal content involving children, which are serious harms under the AI Incident definition. The widespread availability and use of these apps, along with the generation of illegal content, confirm that harm has occurred. The involvement of AI in the development and use of these apps is clear and central to the incident. Hence, this is classified as an AI Incident.

Apple and Google removing dozens of "nudity" apps in Grok-related fallout

2026-01-27
Investing.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems generating nude images of people without consent, which is a clear violation of human rights and privacy protections. The AI systems' use in these apps directly led to harm by enabling non-consensual sexualization. The fact that Apple and Google removed or suspended these apps confirms the harm and the role of AI in causing it. Hence, this is an AI Incident involving the use of AI systems causing violations of rights and harm to individuals.

Apple, Google host dozens of AI 'nudify' apps like Grok, report finds

2026-01-27
CNBC
Why's our monitor labelling this an incident or hazard?
The apps explicitly use AI to generate nude images from photos, which is a direct use of AI systems causing harm by non-consensual sexualization, violating rights and causing harm to individuals. The report confirms the apps are active and have caused harm, meeting the criteria for an AI Incident. The involvement of AI is explicit, and the harm is realized, not just potential. The companies' removal of some apps is a response but does not change the classification of the event as an incident.

Dozens of nudify apps found on Google and Apple's app stores

2026-01-27
The Verge
Why's our monitor labelling this an incident or hazard?
The report explicitly identifies AI systems (nudify apps) that generate nonconsensual sexualized images, which is a direct violation of individuals' rights and causes significant harm to victims. The apps' widespread use and the resulting investigations and lawsuits confirm that harm has occurred. The AI systems' development and use have directly led to violations of human rights and harm to communities, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Apple and Google reportedly still offer dozens of AI 'nudify' apps

2026-01-27
engadget
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate sexualized images without consent, including of minors, which is a clear violation of rights and causes harm to individuals and communities. The widespread availability and use of these apps, along with the generation of illegal content by Grok, demonstrate direct and indirect harm caused by AI use. The companies' partial removal of apps does not negate the ongoing harm. Hence, this qualifies as an AI Incident due to realized harm linked to AI system use and misuse.

Report claims App Store hosts nonconsensual AI undressing apps

2026-01-27
9to5Mac
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate nonconsensual sexualized images, which is a clear violation of human rights and causes harm to individuals and communities. The apps' widespread availability and use, despite platform policies, indicate that harm is occurring. The AI systems' use in this context directly leads to violations of rights and harm, meeting the criteria for an AI Incident. The report's focus on the harm caused and the failure of platform enforcement supports this classification.

Apple, Google host dozens of AI 'nudify' apps like Grok, report finds

2026-01-27
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The apps explicitly use AI to generate nude images from photos of people, which is a direct use of AI systems. The harm involves violations of personal rights and privacy, which are protected under human rights frameworks. The presence and use of these apps have already caused harm by enabling non-consensual creation of explicit images, fulfilling the criteria for an AI Incident. The companies' removal and suspension of apps are responses to the incident but do not negate the fact that harm has occurred.

Nudify apps get past Google, Apple app moderation

2026-01-27
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The apps explicitly use AI to generate fake non-consensual nude images, which directly harms individuals' privacy and violates human rights. The harm is realized as these apps have been downloaded hundreds of millions of times and are marketed even to children, amplifying the impact. The involvement of AI in generating such content is clear and central to the harm. The failure of platform moderation to prevent these apps' proliferation further contributes to the incident. Hence, this is an AI Incident as the AI systems' use has directly led to violations of rights and harm to individuals and communities.

Deepfake porn apps downloaded 705 million times on Apple, Google stores

2026-01-27
UPI
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as generating nonconsensual sexualized deepfake images, which directly leads to violations of rights and harm to individuals and communities. The large scale of downloads and revenue indicates the harm is materialized and ongoing. Therefore, this qualifies as an AI Incident due to realized harm caused by the use of AI systems in a way that violates rights and causes significant social harm.

Apple and Google Under Fire as Nudify Apps Spread Across App Stores

2026-01-27
Android Headlines
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as AI-powered nudify apps that generate nude images without permission, which is a violation of privacy and consent, thus a breach of fundamental rights. The harm is realized as millions of users downloaded these apps and the apps generated sexualized images without consent. The companies' delayed removal of these apps further contributed to the harm. Hence, the AI systems' use directly led to violations of rights and harm to individuals, fitting the definition of an AI Incident.

Apple and Google Still Host Nudify Apps Despite Strict App Store Rules

2026-01-27
The Mac Observer
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate sexualized deepfake images without consent, which constitutes a violation of human rights and causes harm to individuals and communities. The harm is realized and ongoing, as evidenced by the reported cases and the widespread availability and use of these apps. The involvement of AI in generating or manipulating images is clear, and the failure of platform enforcement contributes to the continuation of harm. Thus, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Apple, Google host dozens of AI 'nudify' apps like Grok, report finds

2026-01-27
CNBC Africa
Why's our monitor labelling this an incident or hazard?
The apps use AI systems to generate non-consensual nude images, directly causing harm to individuals by violating their rights and privacy, which fits the definition of an AI Incident. The harm is realized as the apps have been downloaded hundreds of millions of times and have victimized many individuals. The involvement of AI in generating deepfake images is explicit, and the harm includes violations of human rights and harm to communities. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Apple and Google App Stores Offer Dozens of AI-Powered 'Nudify' Apps in Wake of Elon Musk's Grok Scandal

2026-01-28
Breitbart
Why's our monitor labelling this an incident or hazard?
The presence of AI-powered nudification apps that create non-consensual explicit images directly leads to violations of rights and harm to individuals and communities. The article documents actual harm occurring through these apps and the related Grok AI scandal, not just potential harm. The involvement of AI in generating sexual deepfakes is explicit, and the harms are clearly articulated, including privacy violations and abusive content dissemination. Enforcement actions and investigations confirm the seriousness and reality of the harm. Thus, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

AI apps that create sexual exploitation content approach 170 billion won in cumulative earnings... "Apple and Google stand idle"

2026-01-28
경향신문
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI apps) that create sexual exploitation content, which is a direct violation of human rights and causes harm to individuals and communities. The harm is realized and ongoing, as evidenced by the proliferation of these apps, the large number of downloads, and the investigations by authorities. The AI systems' use in generating non-consensual sexual images is central to the harm described. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm.

Google, Apple hosted dozens of nudify apps, report reveals

2026-01-28
Mashable
Why's our monitor labelling this an incident or hazard?
The apps use AI to digitally remove clothes from images, creating non-consensual sexualized deepfakes, which directly harms individuals by violating their rights and causing reputational and emotional damage. The report documents that these apps have been downloaded hundreds of millions of times, indicating widespread harm. The companies' failure to effectively remove these apps despite policy violations further contributes to ongoing harm. Therefore, this event qualifies as an AI Incident due to realized harm caused by AI systems.

Apple and Google app stores under fire for hosting AI 'nudify' apps that can undress photos

2026-01-29
Firstpost
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems capable of generating realistic fake nude images by removing clothing from photos, which is a direct use of AI technology. The harm caused includes violations of privacy, non-consensual creation of sexualized images, and emotional trauma, which fall under violations of human rights and harm to communities. The report confirms that these harms are occurring, with apps actively available and used globally. The involvement of AI in causing these harms is direct and central to the incident. Hence, the classification as an AI Incident is appropriate.

Google and Apple hosted dozens of AI "nudify" apps despite platform policies

2026-01-28
Android Authority
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate sexualized and non-consensual images of women, which is a violation of human rights and privacy, thus meeting the criteria for harm (c) under AI Incident definitions. The apps' presence on major platforms and their monetization indicate active use and harm, not just potential risk. The partial removal of apps is a response but does not negate the realized harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Apple and Google app stores exposed for hosting AI nudify apps: Report

2026-01-28
The News International
Why's our monitor labelling this an incident or hazard?
The apps use AI systems to generate non-consensual explicit content, which is a direct violation of individuals' rights and causes harm to the affected persons and communities. The presence of these apps on major platforms and their revenue generation indicates ongoing harm. The AI systems' use in creating deepfake sexual content is central to the harm described. Therefore, this event qualifies as an AI Incident due to the direct and ongoing harm caused by the AI systems' use in producing non-consensual sexual content and the violation of rights.

AppleInsider.com

2026-01-28
AppleInsider
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in apps that generate non-consensual sexualized images, including deepfake pornography, which harms individuals' rights and dignity. The harm is realized as these apps have been downloaded hundreds of millions of times and generate significant revenue, indicating widespread use and impact. Apple's inadequate enforcement of its own policies and slow removal of these apps contributes indirectly to the ongoing harm. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to violations of human rights and harm to communities.

Apple and Google Remove AI 'Undressing' Apps from Stores

2026-01-28
ForkLog
Why's our monitor labelling this an incident or hazard?
The apps use AI to generate explicit images without consent, which is a clear violation of human rights and constitutes sexual violence, fulfilling the harm criteria of the AI Incident definition: (c) violations of human rights and (e) other significant harms. The involvement of AI is explicit, and the harm is realized, not just potential. The legal actions and app removals further confirm the recognition of harm caused by these AI systems. Hence, this event qualifies as an AI Incident.

App Store Faces Scrutiny Over Surge in AI 'Nudify' Apps

2026-01-29
The Unofficial Apple Weblog (TUAW)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate non-consensual sexual images, which constitutes a violation of human rights and causes harm to individuals and communities. The apps' availability on a major platform with millions of downloads and accessibility to minors further amplifies the harm. Apple's role in allowing these apps to operate and profiting from them, despite guidelines prohibiting such content, indicates indirect causation of harm. The harm is realized, not just potential, as the apps have been used to create explicit content without consent. Thus, the event meets the criteria for an AI Incident.

Apple And Google App Stores Under Fire Over Deepfake Nude AI Apps

2026-01-28
The News Chronicle
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems capable of generating deepfake nude images, which are used non-consensually, causing direct harm to individuals' rights and privacy. The apps' widespread availability and use, with millions of downloads and significant revenue, indicate realized harm rather than just potential risk. The failure of Apple and Google to enforce their policies effectively contributed indirectly to the harm. The harms include violations of human rights and harm to communities through sexual exploitation and privacy breaches. Thus, the event meets the criteria for an AI Incident as the AI systems' use has directly led to significant harm.

AI "Nudify" apps are still slipping through Google and Apple's app stores, says report

2026-01-28
Techlusive
Why's our monitor labelling this an incident or hazard?
The report explicitly identifies AI systems used to generate non-consensual sexual content, which is a clear violation of human rights and consent, thus constituting harm. The apps' presence on major platforms and their widespread downloads and revenue indicate the harm is ongoing and materialized. The AI systems' development and use directly lead to this harm. Although the companies are responding by removing some apps, the harm is occurring and the AI systems are pivotal in causing it. Hence, this is an AI Incident rather than a hazard or complementary information.

New Report Claims Apple and Google Are Profiting From Nonconsensual Nudity Apps

2026-01-28
Techloy
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create non-consensual deepfake nude images, which directly violates individuals' rights and causes harm to their privacy and dignity. The apps' widespread availability, including to children, and the platforms' profit from these apps indicate systemic issues. The harm is realized and ongoing, not merely potential. The AI systems' development and use have directly led to violations of human rights and harm to communities, fitting the definition of an AI Incident.

Are Google And Apple Still Profiting From Apps That Can Undress Women, Report Suggests

2026-01-28
thedailyjagran.com
Why's our monitor labelling this an incident or hazard?
The apps use AI to generate nude images from clothed photos, which is a clear misuse of AI technology causing harm to individuals (harassment, humiliation, abuse). The platforms' inability to fully remove these apps exacerbates the harm. The event involves AI system use leading directly to violations of rights and harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

New Report Reveals Dozens of Nudify Apps in Major App Stores

2026-01-28
Gadgets 360
Why's our monitor labelling this an incident or hazard?
The apps use AI systems to generate undressed or pornographic images from photos, which constitutes a violation of privacy and potentially human rights. The harm is realized as these apps have been widely downloaded and used, causing direct harm to individuals and communities. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. The event is not merely a warning or potential risk (hazard), nor is it only a response or update (complementary information).

Apple and Google App stores still host dozens of AI nudify apps: Report

2026-01-28
storyboard18.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create deepfake nude images without consent, which is a clear violation of human rights and privacy protections. The harm is realized and ongoing, as evidenced by the large number of downloads and revenue generated. The AI systems' use directly leads to harm to individuals and communities. The partial removal of apps is a response but does not eliminate the incident. Hence, this is an AI Incident due to realized harm caused by AI use.

Amid Grok's Nudity Case, 'Nudify' Apps On Apple App Store, Google Play Making Millions: Report

2026-01-28
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The apps explicitly use AI to generate nude or sexualized images from clothed photos, which is a direct use of AI systems. The harms include violations of privacy, potential sexual abuse, and exploitation, which are clear breaches of human rights and cause harm to individuals and communities. The large scale of downloads and revenue shows the harm is materialized and widespread. The involvement of AI in generating these images and the resulting abusive content meets the criteria for an AI Incident as the AI system's use has directly led to harm.

Not Just Grok: Google And Apple App Stores Accused Of Hosting 'Nude Apps' For Users

2026-01-29
News18
Why's our monitor labelling this an incident or hazard?
The AI-powered apps generate explicit images from user photos, which constitutes a violation of personal rights and can cause harm to individuals and communities. The AI system's use has directly led to these harms, fulfilling the criteria for an AI Incident. The article describes realized harm through the widespread use and downloads of these apps and the concerns raised by the research body. The involvement of AI systems is explicit, and the harm is direct and ongoing. The removal actions by Google and Apple are responses to the incident, not the main focus of the article, so this is not Complementary Information.

Report flags dozens of digital 'undressing' apps on the Apple App Store and Google Play Store, despite ban

2026-01-29
India TV News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate sexualized or nude images by digitally undressing women in photos or videos. The harm is direct and realized, as these apps produce non-consensual sexual content, violating human rights and exposing individuals to harassment, abuse, and humiliation. The apps' widespread availability and popularity, along with the platforms' failure to fully enforce bans, exacerbate the harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The involvement of AI in generating these images is explicit, and the harm is clearly articulated and ongoing.

Google, Apple Under Fire As Dozens of AI 'Nudify' Apps Slip Through Store Policies

2026-01-29
The Hans India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI image manipulation apps) whose use has directly led to harm in the form of violations of user rights and potential exploitation through non-consensual or exploitative content. The widespread availability and use of these apps on major platforms constitute a breach of obligations intended to protect fundamental rights and user safety. The reactive removal of apps confirms that harm has occurred and that the AI systems' deployment caused or contributed to this harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Apple, Google app stores criticized for hosting AI 'undressing' apps

2026-01-29
NewsBytes
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit, as the apps use AI to digitally remove clothing and create realistic nude images. The harms are direct and realized, including privacy violations and emotional trauma from non-consensual deepfake pornography, which are violations of human rights and harm to communities. The event involves the use and misuse of AI systems leading to these harms. The investigation and partial removal of apps are responses but do not change the classification. Hence, this is an AI Incident.

Apps generating non-consensual nude images using AI found on Google Play, Apple App Stores

2026-01-29
MM News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems that generate harmful content by manipulating user photos without consent, leading to violations of privacy and user protection rights. The harm is realized and widespread, with millions of downloads and serious risks to vulnerable groups. The AI systems' use directly leads to violations of human rights and harm to individuals, meeting the criteria for an AI Incident rather than a hazard or complementary information. The presence of these apps on major platforms despite policies prohibiting such content further underscores the incident nature of the event.

Dozens of apps that generate nude images found on Google Play & Apple Store

2026-01-29
MM News
Why's our monitor labelling this an incident or hazard?
The apps use AI systems to generate nude images from photos, which directly leads to violations of individuals' rights (privacy, consent) and harm to communities through the creation and distribution of non-consensual explicit content. The harm is realized as the apps have been downloaded hundreds of millions of times and generate such content. The involvement of AI in generating synthetic nude images is explicit and central to the harm. The event describes actual harm occurring, not just potential harm, so it is classified as an AI Incident rather than an AI Hazard or Complementary Information.

How do AI 'undressing' apps reach the Google and Apple app stores? 'Nudify' apps raise concerns for women's safety

2026-01-29
News24
Why's our monitor labelling this an incident or hazard?
The apps use generative AI systems to create harmful content (non-consensual nude images), directly impacting individuals' rights and safety, especially women and children. The harm is realized and ongoing, as evidenced by the widespread downloads and revenue, and the need for app store interventions. The AI system's development and use have directly led to violations of rights and harm to communities, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Despite Ban, Apps That Digitally Undress Women Appear On Apple And Google Stores: Report

2026-01-29
ETV Bharat News
Why's our monitor labelling this an incident or hazard?
The apps use AI systems to manipulate images to create sexualized content without consent, which constitutes a violation of human rights and privacy. The widespread availability and use of these apps have directly led to harm through non-consensual sexualized imagery. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI systems' use.

Apple and Google openly host 'deepfake apps' in their app stores... and pocket the commissions

2026-01-28
Chosun.com
Why's our monitor labelling this an incident or hazard?
The apps explicitly use AI to generate deepfake images that manipulate individuals' photos into sexually explicit content, which is a direct harm to the individuals depicted and potentially to communities by spreading unethical content. The platforms' failure to filter or remove these apps allowed the harm to occur and continue, with significant downloads and financial gain involved. This meets the criteria for an AI Incident as the AI system's use has directly led to harm related to violations of rights and harm to communities.

'Nude image synthesis' AI apps run rampant on smartphones... "170 billion won in revenue"

2026-01-27
연합뉴스
Why's our monitor labelling this an incident or hazard?
The apps use AI to create synthetic sexualized images (deepfakes), directly leading to harm by violating rights to privacy and dignity, and causing potential psychological and reputational harm to individuals depicted. The widespread availability and use of these apps, despite policy prohibitions, indicate realized harm. The AI system's use in generating these images is central to the incident. Hence, this event meets the criteria for an AI Incident due to direct harm caused by AI-generated content violating human rights and causing community harm.

'Nude deepfake' AI apps run rampant on Apple and Google... 170 billion won earned worldwide

2026-01-28
아시아경제
Why's our monitor labelling this an incident or hazard?
The apps explicitly use AI to generate sexualized deepfake images, which is a direct use of AI systems. The harms include violations of privacy, consent, and potentially other human rights, as well as harm to individuals' dignity and communities. The widespread availability and use of these apps with insufficient moderation have caused realized harm. This fits the definition of an AI Incident because the AI system's use has directly led to significant harm (violation of rights and harm to communities).

"Earned 170 billion won from synthesizing nude images" sparks uproar... what is going on at Apple and Google?

2026-01-28
Why's our monitor labelling this an incident or hazard?
The AI systems involved are explicitly described as generating sexualized deepfake images, including of minors, which constitutes a violation of human rights and causes harm to communities. The widespread use and monetization of these apps, despite platform policies, indicate direct harm resulting from AI use. The controversy around the AI chatbot generating sexualized images further supports the presence of realized harm. Therefore, this event qualifies as an AI Incident due to direct harm caused by AI system use and failure to prevent such harm.

'Nude image synthesis' AI apps run rampant on smartphones

2026-01-27
jeonpa.co.kr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate sexualized deepfake images, which directly cause harm by violating individuals' rights and enabling harassment and objectification. The apps' widespread availability and use have resulted in realized harm, including privacy violations and community harm. The AI system's use in this context is central to the harm, fulfilling the criteria for an AI Incident under the OECD framework.

"Google and Apple: won't they block them, or can't they?"... 'butt-shaking' deepfake apps in plain sight

2026-01-28
매일방송
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (deepfake AI apps generating manipulated sexual images). The use of these AI systems has directly led to harm in the form of violations of personal rights, privacy, and potential emotional or reputational harm to individuals and communities. The apps' widespread availability and use, despite platform policies, indicate ongoing harm. The involvement of AI in generating harmful sexualized content and the platforms' profit from these apps further supports classification as an AI Incident. The event is not merely a potential risk (hazard) or a complementary update but a clear case of realized harm caused by AI systems.

Search 'nude synthesis' and AI apps pour out... Apple and Google profit while leaving 'deepfake apps' unchecked

2026-01-29
파이낸셜뉴스
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate deepfake nude images, which directly cause harm by violating human rights and producing sexual exploitation content. The apps' widespread availability and significant downloads indicate realized harm rather than just potential risk. The article details the use of AI in creating harmful content and the resulting violations, fitting the definition of an AI Incident. Although the platforms have taken some removal actions, the primary focus is on the harm caused by the AI apps' use, not just on responses or policy updates, so it is not merely Complementary Information.

Google and Apple app stores offer dozens of apps that undress people with AI, investigation warns

2026-01-28
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate harmful content (non-consensual nude images and deepfakes), which constitutes a violation of rights and harm to individuals and communities. The AI systems' use has directly led to these harms by enabling the creation and distribution of explicit images without consent, including of minors. Therefore, this qualifies as an AI Incident. The subsequent removal of apps is a response but does not negate the incident classification since harm has already occurred.

Google and Apple remove dozens of AI applications from their official stores for generating nudes

2026-01-28
infobae
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate harmful content (nudity and deepfakes) that violates platform policies and exposes minors to inappropriate material, constituting harm to communities and violations of rights. The AI systems' development and use directly led to these harms, as documented by the investigation and subsequent app removals. The presence of these apps on major platforms and their accessibility to minors further underscores the realized harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Apple and Google's stores offer dozens of apps that undress people with AI, according to a study

2026-01-28
La Nacion
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate explicit nude images and deepfakes, which are forms of AI-generated content. The use of these AI systems has directly led to harms including violations of privacy, consent, and child protection laws, as well as potential psychological and societal harms. The presence of these apps on major platforms despite policies prohibiting such content indicates a failure in governance and oversight, contributing to the harm. The removal of apps after the investigation is a response but does not negate the fact that harm has occurred. Thus, this event meets the criteria for an AI Incident due to realized harm caused by the use of AI systems.

Google and Apple offer some fifty AI applications for generating nudes

2026-01-28
La Voz de Galicia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate nude and sexually explicit images, including deepfakes, which have caused harm by violating content policies, enabling non-consensual and potentially illegal content creation, and exposing minors to inappropriate material. The direct use of AI to produce harmful content and the resulting platform responses confirm realized harm. Therefore, this is an AI Incident rather than a hazard or complementary information.

Apps that create nudes slip into Google Play and the App Store: what the investigation revealed

2026-01-28
Semana.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems that generate nude and explicit images, including deepfakes, which are known to cause harm such as violations of privacy, consent, and potentially child exploitation. The presence of these apps on major platforms despite policies prohibiting such content indicates a failure in governance and control, leading to realized harm. The generation of explicit images of minors and non-consensual deepfakes constitutes a violation of human rights and legal protections. Hence, this qualifies as an AI Incident due to the direct harm caused by the AI systems' use and the breach of platform policies and legal norms.

An investigation reveals that the app stores of Google and...

2026-01-28
europa press
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate explicit nude images and deepfakes, which are forms of AI-generated content. The use of these AI systems has directly led to harm by enabling the creation and distribution of non-consensual explicit images, including those involving minors, which constitutes violations of rights and harm to individuals and communities. The platforms' policies prohibit such content, indicating that the AI systems' outputs breach legal and ethical standards. The investigation and subsequent removal of these apps confirm the materialization of harm. Hence, this is an AI Incident due to realized harm caused by the AI systems' use.

AI nudity apps invade app stores despite bans

2026-01-28
El Observador
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate nude and sexually explicit images and videos, including deepfakes, which directly cause harm by violating individuals' rights and potentially causing psychological and reputational damage. The AI systems' use in creating non-consensual sexual content is a clear violation of human rights and legal protections. The harm is realized and ongoing, as millions of downloads and generated content have occurred. Therefore, this qualifies as an AI Incident. The article also mentions the companies' responses, but the main focus is on the harm caused by the AI applications and their proliferation, not just the response, so it is not merely Complementary Information.

An investigation reveals that Google and Apple's app stores offer dozens of apps that undress people with AI

2026-01-28
Diario Siglo XXI
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate explicit nude images and deepfakes, which are forms of AI-generated content. The harm is realized, as these apps have been downloaded hundreds of millions of times and have generated explicit images, including of minors, violating policies and likely legal protections related to consent and child protection. This constitutes violations of human rights and harm to communities. The involvement of AI in generating such content is direct and central to the harm. The subsequent removal of apps is a response but does not negate the incident itself. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Google and Apple remove AI apps that allowed creating digital nudes

2026-01-28
TN8
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to generate nude and sexualized images, including deepfake technology, which directly caused harm by enabling access to inappropriate content, especially for minors. The removal of these apps by Google and Apple is a response to this harm. The AI systems' use led to violations of platform policies and posed risks to vulnerable groups, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or complementary information but a realized harm scenario involving AI misuse.

Apple and Google accused of allowing downloads of apps that create nudes with artificial intelligence

2026-01-29
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit, as the applications generate deepfake nude images using AI. The harm is direct and significant, involving violations of human rights (privacy, consent), potential psychological harm, and the creation of abusive sexualized content, including involving minors. The platforms' hosting of these apps and delayed removal contributed to the harm. The event describes realized harm, not just potential, fulfilling criteria for an AI Incident. The involvement of AI in generating harmful content and the resulting violations of rights and abuse justify classification as an AI Incident.

AI apps to 'undress' photos proliferate on the Play Store and App Store

2026-01-29
PasionMovil
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to generate manipulated images that harm individuals' privacy and dignity, constituting violations of human rights. The harm is realized and widespread, with hundreds of millions of downloads and significant economic benefit to developers, indicating active use and impact. The involvement of AI in generating these images is clear and central to the harm. This meets the definition of an AI Incident because the AI system's use has directly led to harm to persons and communities through non-consensual explicit image generation.

It's not just Grok: dozens of apps that undress you found in Google and Apple's official stores

2026-01-30
20 minutos
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems that generate sexualized and deepfake images without consent, causing direct harm to individuals' rights and dignity, which fits the definition of an AI Incident. The harm is realized and widespread, with millions of downloads and generated revenue, and the AI's role is pivotal in producing the harmful content. The event is not merely a potential risk or a complementary update but a clear case of AI misuse causing harm. Therefore, it qualifies as an AI Incident.

More than 100 apps that create AI nudes discovered on the Play Store and App Store: the dark side of deepfakes

2026-01-30
La Opinión Digital
Why's our monitor labelling this an incident or hazard?
The AI systems in these apps are directly used to create non-consensual sexualized deepfake images, which is a clear violation of human rights and constitutes digital sexual violence, harming individuals and communities. The harm is realized and widespread, with millions of downloads and significant revenue, confirming direct AI involvement in causing harm. Therefore, this qualifies as an AI Incident. The article also references regulatory investigations and partial removals, but the primary focus is on the harm caused by the AI systems' use in these apps.

The Apple and Google app stores are infested with AI nudify apps

2026-01-30
Digital Trends Español
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI systems (generative AI for image manipulation) as central to the creation of non-consensual sexualized deepfake images, which constitute a clear violation of human rights and cause harm to individuals and communities. The harm is realized and ongoing, with millions of downloads and documented impacts such as harassment and sexual violence. The involvement of AI in the development and use of these apps is direct and pivotal to the harm. The failure of app store moderation exacerbates the issue but does not negate the AI system's role in causing harm. Thus, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Google and Apple's stores are filling up with apps that undress people using artificial intelligence

2026-01-30
Perfil.com
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI systems (nudify apps) that generate sexualized images of real people without consent, which is a clear violation of human rights and can lead to harm such as harassment and digital violence. The harm is realized, not just potential, as these apps have been widely downloaded and used, causing direct harm to individuals. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The failure of platform enforcement further exacerbates the issue but does not change the classification.

Not just Grok: Apple and Google's stores are riddled with apps that undress people using artificial intelligence

2026-01-30
Computer Hoy
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems that generate sexualized images of real people without consent, which constitutes a violation of human rights and privacy. The apps have been downloaded hundreds of millions of times, indicating widespread impact. The harm is realized, not just potential, as these images are being generated and distributed. The platforms' policies prohibit such content, but enforcement was lacking until the investigation prompted removals. This fits the definition of an AI Incident due to direct harm caused by the use of AI systems in generating non-consensual explicit content.

Apple and Google remove dozens of "nudity" apps in the wake of the Grok affair

2026-01-27
Investing.com France
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate non-consensual nude images, causing violations of human rights and privacy. The harm is realized, as these applications were available and used to sexualize individuals without consent. The involvement of AI in generating these images is direct and central to the harm. The actions by Apple and Google to remove these apps are responses to the incident but do not negate the occurrence of harm. Hence, this is classified as an AI Incident.

Sexual deepfakes: Apple and Google let dozens of illegal applications flourish

2026-01-28
Clubic.com
Why's our monitor labelling this an incident or hazard?
Deepfake applications use AI to create manipulated sexual content, which constitutes a violation of human rights and causes harm to individuals. The fact that these applications are illegal and remain available on major app stores indicates ongoing harm facilitated by AI systems. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the use of AI in generating non-consensual sexual deepfakes and the failure of platforms to remove these harmful AI systems.

Apple and Google distribute dozens of apps that undress women

2026-01-27
01net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered applications that digitally undress women without their consent, which is a clear violation of human rights and privacy. The AI systems are used to generate harmful content, causing direct harm to individuals and communities by producing abusive sexualized images. The involvement of AI in the development and use of these apps is central to the harm described. The harm is realized, not just potential, as these apps are actively distributed and used. Hence, this event meets the criteria for an AI Incident.

No better than Grok: Apple and Google host dozens of apps for undressing women

2026-01-28
CommentCaMarche
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create non-consensual sexualized deepfake images, which directly harm individuals' privacy and dignity, constituting violations of human rights. The harm is realized and ongoing, as evidenced by the large number of downloads and revenue generated, and the need for platform intervention. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The article does not merely warn of potential harm but documents actual harm and responses to it.

Grok isn't alone: dozens of apps were undressing bodies on the App Store and Google Play Store

2026-01-28
Presse-citron
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems capable of generating manipulated sexual images without consent, which constitutes a violation of human rights and potentially illegal content (including child sexual abuse material). The widespread availability and use of these apps have caused direct harm to individuals and communities. The involvement of AI in generating these images is central to the harm described. Therefore, this qualifies as an AI Incident due to realized harm caused by the use of AI systems.

Deepfakes: how Apple and Google profit (in spite of themselves) from the boom in apps that undress women

2026-01-28
PhonAndroid
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered applications generating deepfake images that undress women without consent, which is a clear violation of human rights and causes harm to the victims. The AI systems' use in creating and distributing such content directly leads to harm. The presence of these applications on major app stores and the indirect profit by Apple and Google further contextualize the harm but do not negate the direct AI involvement in causing harm. The ongoing availability of some of these apps despite removal efforts indicates the harm is current and not merely potential. Hence, this event meets the criteria for an AI Incident.

How are 'nudify' applications proliferating on Google and Apple's stores?

2026-01-27
Génération-NT
Why's our monitor labelling this an incident or hazard?
The applications explicitly use AI to generate non-consensual nude deepfake images, which constitutes a violation of human rights and privacy (a form of harm to individuals and communities). The harm is realized and ongoing, as evidenced by hundreds of millions of downloads and significant revenue generated. The AI systems' use in sexualizing individuals without consent is a direct cause of this harm. Therefore, this event qualifies as an AI Incident due to the direct and significant harm caused by the AI systems involved.

Deepfakes: Apple and Google's selective modesty

2026-01-28
iGeneration
Why's our monitor labelling this an incident or hazard?
The AI systems involved are generative AI models capable of creating realistic deepfake images that undress or sexualize people without consent, which is a clear violation of privacy and human rights. The harm is realized and ongoing, as millions of such images have been generated and distributed, causing harm to individuals and communities. The platforms' failure to effectively remove these apps and the monetization of such content further exacerbate the harm. Hence, this qualifies as an AI Incident due to direct harm caused by the use of AI systems.

Apple and Google host applications that use AI to undress women

2026-01-27
iPhoneAddict.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to generate sexualized images without consent, which is a violation of human rights and privacy, fitting harm category (c). The AI systems' use directly leads to harm by producing abusive content at scale. The platforms' failure to moderate these apps effectively allows the harm to continue and even profit from it, indicating ongoing and direct harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Dozens of dangerous apps circulating on the App Store and Play Store, beware!

2026-01-29
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI-based applications that generate non-consensual sexualized images, which constitutes a violation of fundamental rights and causes harm to individuals and communities. The presence of these apps on major platforms and their widespread use (700 million downloads) confirms realized harm. The involvement of AI in generating the harmful content is clear. The partial removal of some apps and ongoing investigations are responses but do not negate the fact that harm has occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

Apple and Google rake in profits from AI-based pornographic image generator apps

2026-01-29
Liputan 6
Why's our monitor labelling this an incident or hazard?
The applications use AI to generate explicit images without consent, directly violating human rights and privacy, which is a clear harm. The large scale of downloads and revenue indicates widespread impact. The involvement of AI in generating the harmful content is explicit, and the platforms' failure to adequately moderate these apps contributes to the harm. Hence, this is an AI Incident involving violations of human rights and harm to communities caused by AI system use.

Dozens of obscene AI deepfake apps found on the App Store and Play Store

2026-01-28
detikInet
Why's our monitor labelling this an incident or hazard?
The applications use AI systems to generate deepfake sexual images without consent, directly causing harm to individuals' privacy and dignity, violating rights, and potentially causing psychological and social harm. The widespread availability and use of these apps, including by minors, and the platforms' failure to promptly remove them, demonstrate realized harm. Therefore, this qualifies as an AI Incident due to direct harm caused by AI system use.

Apple accused of failing to uphold ethical standards as the App Store fills with AI 'nudify' apps

2026-01-30
VOI
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems that generate manipulated sexual content without consent, which directly harms individuals' privacy and dignity, constituting violations of rights and harm to communities. The AI systems' use in these applications is central to the harm described. The failure of Apple to prevent these applications from being widely available and downloaded exacerbates the harm. The harm is realized, not just potential, as millions of downloads and generated content have occurred. Thus, this meets the criteria for an AI Incident rather than a hazard or complementary information.