AI-Generated Deepfake of Missing Woman Causes Public Outrage and Distress


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated videos animating and voicing missing person Iwona Wieczorek have circulated on TikTok, causing psychological distress to her family and the public. The realistic deepfake content, which impersonates the missing woman, has sparked outrage and ethical concerns over the misuse of AI in sensitive cases.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to animate a photo of a missing person, creating a video that falsely depicts her speaking. This use of AI has directly caused emotional harm to her family and the wider community, as well as the spread of potentially misleading content. Although no physical harm is reported, the emotional and social harm, together with the violation of ethical norms around using images of missing persons, constitutes significant harm. This therefore qualifies as an AI Incident due to the realized harm caused by the AI-generated misleading content.[AI generated]
AI principles
Accountability, Human wellbeing, Privacy & data governance, Respect of human rights, Transparency & explainability, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Psychological, Reputational, Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Iwona Wieczorek talks about her disappearance on TikTok. "Don't you recognize me?"

2023-04-25
o2.pl
Why's our monitor labelling this an incident or hazard?
An AI system was used to animate a photo of a missing person, creating a video that falsely depicts her speaking. This use of AI has directly caused emotional harm to her family and the wider community, as well as the spread of potentially misleading content. Although no physical harm is reported, the emotional and social harm, together with the violation of ethical norms around using images of missing persons, constitutes significant harm. This therefore qualifies as an AI Incident due to the realized harm caused by the AI-generated misleading content.

Iwona Wieczorek's mother commented on the scandalous recording. "Why is a family tragedy being exploited?"

2023-04-28
kobieta.gazeta.pl
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a synthetic video of a missing person, a direct use of AI-generated content. The resulting emotional distress to her family and community is a form of harm to communities and a violation of personal rights. The AI system's use directly led to this harm by producing and disseminating the manipulated video, so the event qualifies as an AI Incident under the framework.

Deepfake video of Iwona Wieczorek. "A lack of empathy, terrifying"

2023-04-26
gazeta.pl
Why's our monitor labelling this an incident or hazard?
The video was generated with deepfake technology, an AI system capable of synthesizing a realistic human likeness and speech. Its use has directly caused harm by emotionally traumatizing the family and community of the missing person, as highlighted by a psychologist's comments and public reactions. This harm to emotional well-being can be considered harm to communities, and the AI system's use is its direct cause, not merely a potential risk or background context. The event therefore qualifies as an AI Incident.

He "brought to life" a photo of missing Iwona Wieczorek. Her mother has spoken out on the matter

2023-04-27
o2.pl
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a video simulating the appearance and speech of a missing person, causing public distress and ethical concerns. Although no physical harm or direct violation of rights is explicitly reported, creating and disseminating such synthetic media can cause significant emotional harm to the family and community, potentially violating personal dignity and privacy rights. The article reports no injury or legal violation, but the AI-generated content has already caused social harm and distress, so the event constitutes an AI Incident due to the realized harm to the community and the individuals involved.

They "brought Iwona Wieczorek to life." On TikTok, the girl tells what happened to her. Her mother: "Why is this not punishable?"

2023-04-28
Super Express
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to animate a photograph of a missing person to create a video that simulates her speaking about her disappearance. This use of AI has directly caused harm by distressing the family and the public, violating personal and possibly legal rights. The event is not merely a potential risk but a realized harm, as the video has been widely viewed and caused significant emotional impact. Therefore, it meets the criteria for an AI Incident due to violation of rights and harm to the community.

Iwona Wieczorek on TikTok. The video already has 2.8 million views

2023-04-25
plotek.pl
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a synthetic voice and facial movements, a clear use of AI technology. The video impersonates a real person and could cause emotional harm to the community and violate ethical norms. However, the article reports no direct or indirect harm, such as injury, disruption, or legal violations, resulting from the video: the concern is the potential for distress caused by realistic AI-generated content, but no concrete harm or legal breach is documented. This event is therefore best classified as Complementary Information, providing context on AI's societal impact and public reaction to AI-generated deepfakes without reporting a specific AI Incident or AI Hazard.

Iwona Wieczorek "brought to life" by AI. The missing woman's mother is devastated

2023-04-27
plotek.pl
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a synthetic video of a missing person, which has caused emotional harm to the family. The AI's use in creating and distributing this content without consent constitutes a violation of personal rights and causes harm to the community (the family and public affected by the misuse). The harm is realized and directly linked to the AI system's use. Hence, this is an AI Incident.

Iwona Wieczorek on TikTok... telling the story of her own disappearance. How is this possible?!

2023-04-25
naTemat.pl
Why's our monitor labelling this an incident or hazard?
The AI system generated synthetic media (animation and speech) to tell the story of a real-world event. The article, however, reports no harm caused by this use of AI and suggests no plausible risk of harm from the generated content; the AI's role is to create content that informs or engages the public about a cold case. The event therefore does not meet the criteria for an AI Incident or AI Hazard and fits the definition of Complementary Information.

Iwona Wieczorek in an AI version? TikTok users are horrified

2023-04-24
vibez.pl
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved: a photo was animated into a realistic speaking avatar, a clear use of generative AI technology. This use of AI has directly led to harm, specifically psychological harm to the family and community of the missing person. The article highlights the negative emotional impact and ethical concerns, indicating realized rather than potential harm. This is therefore an AI Incident under the category of harm to communities and individuals' mental health.

Artificial intelligence speaks with Iwona Wieczorek's voice. People are helpless

2023-04-26
pomponik.pl
Why's our monitor labelling this an incident or hazard?
The event involves an AI system that generates synthetic audiovisual content mimicking a real missing person. This use of AI has directly led to harm in the form of psychological trauma to the victim's family and distress to the community, fulfilling the criteria for harm to people (a) and harm to communities (d). The AI's role is pivotal as it enabled the creation of the misleading and emotionally harmful content. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Iwona Wieczorek brought to life by artificial intelligence. "This is sick"

2023-04-26
rozrywka.radiozet.pl
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a video that 'revives' a missing person, which has led to public outrage and emotional harm to the family and community. The AI-generated content impersonates the missing person, causing psychological trauma and distress, which qualifies as harm to people and communities. Therefore, this event meets the criteria of an AI Incident due to the realized harm caused by the AI system's use.

Iwona Wieczorek's mother devastated by her daughter's "revival" on TikTok

2023-04-27
rozrywka.radiozet.pl
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a synthetic video that 'revives' a missing person to tell a story, which is a clear use of AI-generated content (image animation and speech synthesis). The event has caused emotional harm to the family, which qualifies as harm to persons (psychological harm). The use of the AI system in this way also implicates violations of personal rights and dignity, which fall under violations of human rights or legal protections. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by the AI system's use.

Iwona Wieczorek's mother comments on the shocking recording that uses her daughter's image

2023-04-27
pomponik.pl
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a video that 'revives' the image of a missing person to narrate her story. This use of AI-generated content without consent, especially involving a tragic case, causes harm to the family and community by exploiting a personal tragedy and potentially violating rights to image and dignity. The article reports the harm as realized, with the mother expressing outrage and calling for legal consequences. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.