AI-Generated Deepfake Nude Apps Cause Harm and Abuse in Hungary

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Hungarian authorities and support organizations warn of the growing use of AI-powered deepfake and nudifying apps that generate fake nude images, including of children. These AI-generated images are used for sexual abuse, blackmail, and psychological harm, prompting calls for vigilance and international concern over the technology's misuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (generative AI and deepfake technology) used to create realistic non-consensual explicit images, which directly cause harm to individuals' rights and dignity, particularly women and children. The harms include violations of human rights and potential criminal exploitation, fulfilling the criteria for an AI Incident. The article reports that millions of such images have been generated and distributed, with documented cases of associated criminal behavior, confirming realized harm rather than just potential risk.[AI generated]
AI principles
Respect of human rights
Privacy & data governance

Industries
Digital security
Media, social platforms, and marketing

Affected stakeholders
Children
General public

Harm types
Psychological
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Elon Musk's undressing chatbot has kicked down the door in Hungary

2026-02-20
Index.hu
Be very careful! These apps can generate fake nude images of anyone

2026-02-20
Blikk
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deepfake applications) that generate fake nude images, including those involving children, which have caused real psychological and reputational harm to victims. The use of these AI systems has directly led to violations of human rights and harm to individuals and communities. The harms are realized and ongoing, not merely potential. Hence, this event fits the definition of an AI Incident as the AI system's use has directly led to significant harm.
An extremely dangerous new kind of online scam is spreading: many Hungarians are falling for it and losing a lot of money

2026-02-20
Pénzcentrum
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake applications that produce harmful fake nude images, which have led to real harms including sexual coercion and blackmail. The harms are direct and significant, affecting victims' rights and causing psychological damage. The AI system's use in generating these images is central to the incident. Hence, this is an AI Incident involving violations of rights and harm to communities through malicious AI-generated content.
Warning, nudifying apps are spreading! Virtual violence with real consequences

2026-02-20
Kreatív Online
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deepfake and nudifying apps) generating harmful fake images that are used for sexual abuse, coercion, and blackmail. This constitutes a violation of human rights and causes significant harm to victims, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential. The article also references data and reports confirming the prevalence and impact of this AI-generated abusive content, reinforcing the direct link between AI use and harm.
The Internet Hotline warns of the dangers of nudifying apps

2026-02-20
digitalhungary.hu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (deepfake and AI-generated nude image applications) that have been used to create harmful content. These AI systems have directly caused harm to individuals by generating fake sexual images, which constitute violations of rights and cause psychological and social harm. The harms are realized and ongoing, as evidenced by reports and the involvement of legal and support organizations. Hence, this qualifies as an AI Incident due to direct harm caused by the use of AI systems.
Warning! Pornographic images can even be created from photos of our children using a nudifying app

2026-02-20
Vasárnap.hu
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake and nudifier apps) used to create harmful, non-consensual sexual images, including those of children, which directly leads to violations of rights and harm to individuals and communities. The harms are realized and documented by organizations monitoring such content, fulfilling the criteria for an AI Incident. The article's focus is on the actual occurrence and impact of these AI-generated abusive materials, not just potential risks or responses, so it is not a hazard or complementary information.