AI App 2wai Faces Backlash for Simulating Conversations with Deceased Loved Ones

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The AI app 2wai, co-founded by Disney actor Calum Worthy, allows users to create interactive avatars of deceased relatives. The app has sparked public and expert backlash over potential psychological harm, ethical concerns, and the risk of disrupting the grieving process, though no actual harm has yet been reported.[AI generated]

Why's our monitor labelling this an incident or hazard?

The app clearly involves an AI system that generates avatars and conversations with deceased relatives. The public reaction highlights ethical concerns and potential psychological harm, which could plausibly lead to harm related to mental health or social well-being. However, the article does not report any actual harm or incidents resulting from the app's use. The event is about the potential for harm and societal unease, not about a realized incident or a response to one. Hence, it is best classified as an AI Hazard, reflecting the plausible future harm from the AI system's use in this sensitive context.[AI generated]
AI principles
Safety, Human wellbeing

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Psychological

Severity
AI hazard

AI system task
Content generation, Interaction support/chatbots


Articles about this incident or hazard

'Are we in a Black Mirror episode?': Former Disney Channel star criticized for 'vile' AI avatar app 2wai

2025-11-14
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (2wai app) that generates avatars using AI. However, it does not report any direct or indirect harm caused by the app's use, such as injury, rights violations, or disruption. The concerns expressed are anticipatory and ethical in nature, reflecting public unease rather than documented incidents. The app is in beta and has launched recently, with no evidence of harm yet. The main focus is on the app's description, social reactions, and the background of its creators, which fits the definition of Complementary Information as it provides context and societal response to an AI system without reporting a new incident or hazard.

"A former Disney Channel star creating the most evil thing I've ever seen in my life wasn't really what I was expecting," wrote one X user.

2025-11-14
Yahoo
Why's our monitor labelling this an incident or hazard?
The app is an AI system as it generates conversational outputs simulating deceased people. The event involves the use of this AI system. However, the article only reports public outrage and ethical concerns without evidence of realized harm or incidents resulting from the app's use. Therefore, it does not meet the threshold for an AI Incident. It also does not describe a plausible future harm scenario beyond general ethical concerns, so it is not clearly an AI Hazard. The main focus is on public reaction and ethical debate, which fits best as Complementary Information about societal responses to AI.

Disney Star Under Fire for 'Dystopian' Dead Relative App

2025-11-14
The Daily Beast
Why's our monitor labelling this an incident or hazard?
The app clearly involves an AI system that generates avatars and conversations with deceased relatives. The public reaction highlights ethical concerns and potential psychological harm, which could plausibly lead to harm related to mental health or social well-being. However, the article does not report any actual harm or incidents resulting from the app's use. The event is about the potential for harm and societal unease, not about a realized incident or a response to one. Hence, it is best classified as an AI Hazard, reflecting the plausible future harm from the AI system's use in this sensitive context.

Disney Star Launches Controversial AI App To Talk To Dead People

2025-11-14
ScreenRant
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-driven HoloAvatar tool) that generates digital recreations of people, which fits the definition of an AI system. However, the article does not report any realized harm or incident resulting from the app's use; rather, it highlights ethical concerns, public debate, and potential misuse. Since no direct or indirect harm has occurred yet, but there is a plausible risk of harm related to consent, privacy, and ethical issues, this event qualifies as an AI Hazard. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated since it clearly involves AI and its societal implications.

Video app that allows dead to live on compared to dystopian show

2025-11-14
Newsweek
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as the app uses AI to generate interactive avatars of deceased people. The event stems from the use and development of this AI system. Although no direct harm has yet occurred, the public backlash and ethical concerns highlight credible risks of harm, including emotional harm to users, violation of consent rights, and potential societal impacts on grieving processes. Since these harms are plausible but not yet realized, the event fits the definition of an AI Hazard rather than an AI Incident. The article does not focus on responses or governance measures, so it is not Complementary Information, nor is it unrelated.

'Demonic': New App Ripped For Creating Avatars Of Dead Relatives

2025-11-14
The Daily Wire
Why's our monitor labelling this an incident or hazard?
The app clearly involves an AI system that generates avatars and simulates conversations based on data from deceased individuals. The controversy and public backlash focus on ethical and emotional risks, which are plausible harms related to psychological well-being and social impact. However, the article does not report any actual injury, violation of rights, or other harms that have materialized. The concerns are anticipatory and speculative, indicating a credible risk of future harm rather than a realized incident. Thus, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

Controversial app lets people talk to AI avatars of their dead relatives

2025-11-14
Metro
Why's our monitor labelling this an incident or hazard?
The app clearly involves an AI system that generates interactive avatars based on user-provided data, fulfilling the AI system definition. The event concerns the use and development of this AI system. Although there is significant public criticism and ethical concern, the article does not describe any direct or indirect harm that has already occurred due to the app's use. The potential for harm, such as emotional distress or exploitation, is plausible given the nature of the technology and its application, but no incident is reported. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm but no harm has yet been documented.

Great Job, Internet!: Nobody's happy about AI puppets of Jesus or your dead relatives

2025-11-15
The A.V. Club
Why's our monitor labelling this an incident or hazard?
The AI system involved is an AI avatar generation tool that creates conversational agents based on videos of deceased individuals. While the article raises concerns about potential emotional and economic harm (e.g., exploiting grief, ongoing processing fees), it does not describe any realized harm or incidents resulting from the AI's use. The concerns are about plausible future harm due to the nature of the product and its use, making this an AI Hazard rather than an AI Incident. There is no indication of direct or indirect harm having occurred yet, only the potential for harm.

AI App Linked to a Disney Star Sparks Alarm Over Eerie Loved One Simulations

2025-11-14
Men's Journal
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as the app uses AI to create and simulate versions of real people from brief recordings. The concerns raised relate to potential emotional and psychological harm, privacy violations, and misleading representations, which could plausibly lead to harm in the future. However, the article does not report any actual harm or incidents caused by the app so far, only public concern and controversy. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm but no direct or indirect harm has been documented yet.

'Demonic': AI App That Lets Users 'Talk' to Dead Loved Ones Faces Backlash

2025-11-14
Decrypt
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates interactive digital replicas of deceased persons. The concerns raised—such as lack of consent from the deceased, exposure of personal data, potential psychological harm, and exploitation—indicate plausible future harms that could arise from the app's use. Since no actual harm or incident is reported as having occurred, but credible risks and ethical issues are highlighted, this event fits the definition of an AI Hazard rather than an AI Incident. It is more than just complementary information because the core of the article focuses on the potential harms and ethical risks posed by the AI system, not merely updates or responses to past incidents.

"Vile" and "disturbing": Former Disney Channel star eviscerated over AI app that revives the dead

2025-11-14
The Daily Dot
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as the app uses AI to generate interactive digital personas of deceased individuals. The event stems from the use and promotion of this AI system. Although no direct harm is reported, the article emphasizes the potential for significant psychological and social harm, such as emotional manipulation and detachment from reality, which could plausibly lead to harm to individuals and communities. Therefore, this situation fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm to mental health and social fabric, but no actual harm has yet been documented.

Black Mirror-Style AI That Mimics Dead Loved Ones Called 'Dehumanizing'

2025-11-14
Mandatory
Why's our monitor labelling this an incident or hazard?
The AI system is clearly involved as it generates interactive avatars of deceased individuals, which fits the definition of an AI system. However, the article does not report any direct or indirect harm resulting from the AI's use, nor does it describe a plausible future harm event occurring or imminent. The criticisms are ethical and societal concerns expressed by commenters, not documented harms or incidents. Hence, the event does not meet the criteria for AI Incident or AI Hazard. Instead, it provides complementary information about societal reactions and ethical debates surrounding the AI system's deployment.

Disney actor dragged for Black Mirror-like AI app where you talk to dead relatives

2025-11-14
The Tab
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as the app uses AI to generate interactive avatars of dead relatives. Although no actual harm is reported in the article, the app's use could plausibly lead to psychological or emotional harm to users, or ethical violations concerning rights of the deceased or their families. Since the harm is potential and not yet realized, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely general AI news or a complementary update, as the focus is on the app's use and its implications.

Calum Worthy Criticized For Black-Mirror-Like AI App: 'This Is Sick'

2025-11-14
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The app clearly involves an AI system generating avatars of deceased individuals to simulate interactions, which fits the definition of an AI system. The event stems from the use of this AI system. While no actual harm is documented as having occurred, expert warnings about potential devastating psychological effects and public backlash indicate plausible future harm. The potential harms include emotional distress, manipulation, and harm to mental health, which fall under harm to persons. Since the harm is potential and not yet realized, this is best classified as an AI Hazard rather than an AI Incident.

New AI App "2wai," Co-Founded by Disney Alum Calum Worthy, Faces Backlash for Letting Users Create Avatars of Deceased Loved Ones

2025-11-15
Hollywood Unlocked
Why's our monitor labelling this an incident or hazard?
The app clearly involves an AI system that generates interactive avatars based on user-provided data, which fits the definition of an AI system. The controversy and backlash reflect concerns about potential emotional and moral harms, but the article does not report any realized injury, violation of rights, or other harms caused by the app's use so far. Therefore, the event describes a plausible risk of harm from the AI system's use, but no actual harm has been documented yet. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harms related to emotional distress or exploitation, but these harms are not yet realized.

The app that lets you speak with your deceased loved ones: Creepy AI creates interactive avatars of the dead - but sceptics call it 'demonic, dishonest, and dehumanizing'

2025-11-14
News Flash
Why's our monitor labelling this an incident or hazard?
The app 2wai uses AI to create digital avatars of deceased individuals, which is an AI system by definition. The article does not report a specific realized harm incident but discusses credible and significant potential harms, including psychological distress, disruption of grief, and misuse for advertising, all of which fall under harm to communities and individuals. Expert warnings reinforce the plausibility of these harms. Since the harms are potential and not yet materialized, this event is best classified as an AI Hazard rather than an AI Incident. The article also includes societal and ethical concerns but does not focus primarily on governance responses or updates, so it is not Complementary Information. It is clearly related to AI and its impacts, so it is not Unrelated.

Former Disney star sparks controversy for his AI app that lets you talk to dead relatives

2025-11-15
Entertainment Weekly
Why's our monitor labelling this an incident or hazard?
The article focuses on the introduction and promotion of an AI system that creates digital avatars of deceased individuals and historical figures. The AI system is clearly involved as it generates interactive avatars from video data and historical information. However, the article does not describe any realized harm or incidents caused by the AI system. The concerns are more about potential ethical and social implications rather than actual harm. Therefore, this event does not meet the criteria for an AI Incident. It also does not primarily focus on warnings or credible risks of future harm, so it is not an AI Hazard. The article mainly provides information about the app's launch and its features, which fits the category of Complementary Information as it adds context to the AI ecosystem and societal reactions to such technology.

Conversational AI app 2Wai faces criticisms over ethics

2025-11-15
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (2Wai) that uses AI-generated avatars of deceased individuals. The criticisms and expert opinions highlight potential ethical, legal, and privacy risks that could plausibly lead to harm, such as psychological distress to users and misuse of personal data. Since no actual harm or incident is reported, but credible concerns about future harm exist, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential risks and ethical concerns, not on responses or updates to past incidents. It is not unrelated because the AI system and its implications are central to the discussion.

Former Disney star sparks controversy over AI app that lets you speak to dead relatives in avatar form

2025-11-16
Yahoo
Why's our monitor labelling this an incident or hazard?
The app clearly involves AI systems that generate interactive avatars based on data from deceased individuals. The controversy and criticism focus on the potential for emotional harm and unrealistic expectations, which are plausible future harms. However, no direct or indirect harm has been reported as having occurred yet. The event does not describe a realized AI Incident but highlights a credible risk associated with the AI system's use. Thus, it fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Disney star sparks controversy with AI app that lets you speak to dead relatives

2025-11-16
The Independent
Why's our monitor labelling this an incident or hazard?
The app clearly involves an AI system that generates digital avatars capable of conversation, fulfilling the AI system definition. The controversy and criticism highlight concerns about potential emotional harm and grief disruption, which could be considered harm to individuals or communities if realized. However, the article does not describe any actual injury, violation of rights, or other harms that have already occurred due to the app's use. The concerns are anticipatory and speculative, indicating a plausible future risk rather than a realized incident. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

App bringing dead loved ones back is criticised

2025-11-16
Euro Weekly News Spain
Why's our monitor labelling this an incident or hazard?
The app clearly involves an AI system that generates digital avatars of deceased people, fulfilling the AI System criterion. The concerns raised—emotional manipulation, grief disruption, privacy, and ownership of digital personas—indicate plausible future harms related to psychological and ethical issues. Since no actual harm or incident is reported, but the potential for harm is credible and significant, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its implications are central to the report.

Disney star sparks controversy over AI app that lets you speak to dead relatives

2025-11-16
The Independent
Why's our monitor labelling this an incident or hazard?
The AI system (2wai app) is explicitly mentioned as creating digital avatars that simulate deceased individuals, which involves AI-generated content and interaction. The controversy and criticism focus on the potential emotional and psychological harm to users, which is a form of harm to persons. However, the article does not report any actual incidents of harm occurring, only concerns and criticisms about possible negative effects. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm but no harm has yet been documented.

Former Disney Channel star's new app slammed as 'disgusting'

2025-11-17
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The app clearly involves an AI system that generates interactive avatars based on limited data input. The controversy and ethical concerns raised by users relate to potential psychological harm and exploitation of grief, which could plausibly lead to harm in the future. However, the article does not report any actual incidents of harm, injury, or rights violations caused by the app's use so far. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no direct or indirect harm has been documented yet.

Disney star's AI app letting users talk to deceased loved ones slammed as 'evil'

2025-11-17
VnExpress International
Why's our monitor labelling this an incident or hazard?
The app clearly involves an AI system generating interactive avatars, fulfilling the AI system criterion. However, the article does not report any actual harm or incident resulting from the app's use, only public criticism and ethical concerns. There is no evidence of direct or indirect harm occurring yet, nor a clear plausible future harm event described beyond general ethical debate. Hence, it does not meet the threshold for AI Incident or AI Hazard. Instead, it provides complementary information about societal responses and ethical discussions surrounding a new AI application.

Disney Star Calum Worthy Faces Backlash over His AI App That Lets You Talk to Dead Relatives

2025-11-17
Breitbart
Why's our monitor labelling this an incident or hazard?
The app clearly involves an AI system that generates interactive content based on deceased individuals. While there are significant ethical concerns and public backlash, no direct or indirect harm (such as injury, rights violations, or disruption) has been reported as having occurred. The concerns are about potential misuse or moral implications, which are speculative at this stage. Therefore, this event is best classified as Complementary Information, as it provides context and societal response to a new AI application rather than documenting an AI Incident or AI Hazard.

Ex-Disney star ripped for 'demonic' app that lets users talk to AI...

2025-11-17
New York Post
Why's our monitor labelling this an incident or hazard?
The app clearly involves an AI system that generates avatars and enables real-time conversations, fulfilling the AI system criteria. The concerns raised by users about potential mental health harms indicate plausible future harm, even though no specific incidents of harm are documented in the article. The event does not describe an actual AI Incident since harm has not yet materialized, nor is it merely complementary information or unrelated news. Hence, it fits the definition of an AI Hazard as the AI system's use could plausibly lead to significant psychological harm.

Disney Channel alum sparks backlash for AI app that lets you talk to dead relatives

2025-11-17
NJ.com
Why's our monitor labelling this an incident or hazard?
The app 2wai uses AI to recreate likenesses and simulate conversations with deceased individuals, which fits the definition of an AI system. The public backlash and ethical concerns indicate plausible risks of harm, such as emotional distress, interference with natural grieving processes, and social harm. However, the article does not report any actual injury, violation of rights, or other harms that have materialized from the app's use. The harms are potential and speculative at this stage, making this an AI Hazard rather than an AI Incident. The event is not merely complementary information because it focuses on the app's launch and the associated societal concerns about its use and impact, which relate directly to plausible future harm.

AI Companies are encouraging users to believe Chatbots are people, and it's insanely creepy

2025-11-17
MR Online
Why's our monitor labelling this an incident or hazard?
The article centers on the use of AI chatbots that simulate human likenesses and personalities, which qualifies as AI systems. The harms described are primarily psychological and societal, focusing on the potential for emotional manipulation, addiction, and erosion of human values. Since no specific realized harm or incident is reported, but the article clearly outlines plausible future harms from these AI systems' use and promotion, this fits the definition of an AI Hazard. It warns about credible risks of psychological disorders and societal disruption stemming from these AI applications, without documenting an actual incident of harm yet.

In the style of Black Mirror: AI company launches app that recreates deceased loved ones

2025-11-14
infobae
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates interactive digital avatars based on deceased individuals' data. The event stems from the use and deployment of this AI system. Although the article highlights widespread public concern and ethical debate, it does not report any direct or indirect harm having occurred yet. The potential harms—such as psychological harm or social disruption—are plausible but not realized. Hence, this fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm in the future, but no incident has yet materialized.

This is the controversial artificial intelligence app that lets you 'talk' with your deceased loved ones: "It's like 'Black Mirror'"

2025-11-14
20 minutos
Why's our monitor labelling this an incident or hazard?
The application clearly involves an AI system that generates interactive avatars simulating deceased individuals, which fits the definition of an AI system. However, the article does not report any direct or indirect harm caused by the AI system's development or use. The controversy and negative reactions reflect societal and ethical concerns rather than a realized AI Incident or a plausible AI Hazard with imminent risk. Thus, the event is best categorized as Complementary Information, providing insight into societal responses and ethical debates around AI applications without documenting an AI Incident or Hazard.

The Christmas gift that is all the rage: an AI recreates your deceased relatives

2025-11-14
La Razón
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system that recreates deceased individuals as interactive avatars, fulfilling the definition of an AI system. There is no mention of actual injury, rights violations, or other harms having occurred so far, but the public reaction and ethical concerns indicate plausible future harms related to emotional and psychological well-being, consent, and identity issues. Therefore, the event is best classified as an AI Hazard because the AI system's use could plausibly lead to harm, even though no harm has yet materialized.

This app will let you talk to your deceased relatives

2025-11-14
Tiempo
Why's our monitor labelling this an incident or hazard?
The app uses AI to create avatars that simulate deceased individuals, which qualifies as an AI system. There is no indication that harm has occurred yet, but the technology's use could plausibly lead to psychological or emotional harm, or other significant harms related to human rights or community well-being. Since no realized harm is reported, and the main focus is on the potential implications of the technology, the event fits the definition of an AI Hazard.

App that "revives" the dead unveiled: innovation or a Black Mirror-style nightmare?

2025-11-14
Merca2.0 Magazine
Why's our monitor labelling this an incident or hazard?
The app clearly involves AI systems generating conversational outputs simulating deceased individuals, fitting the definition of an AI system. The article focuses on the potential emotional and ethical harms that could plausibly arise from its use, such as emotional dependency or distorted grieving, but does not describe any realized harm or incident. Hence, it qualifies as an AI Hazard because the development and use of this AI system could plausibly lead to harms related to mental health and societal impacts, but no direct or indirect harm has been documented yet.

The app that lets you speak with your deceased loved ones: creepy AI creates interactive avatars of the dead, but sceptics call it "demonic, dishonest, and dehumanizing"

2025-11-14
Contacto Conce
Why's our monitor labelling this an incident or hazard?
The AI system (2wai) is explicitly described and is used to create digital avatars of deceased individuals, which directly leads to psychological harm and ethical concerns. The article reports actual use and public reaction indicating harm has occurred or is occurring, such as distress, disruption of grieving, and potential misuse for advertising. These constitute injury or harm to persons and harm to communities, fulfilling the criteria for an AI Incident. The presence of the AI system, its use, and the resulting harms are clearly established, so this is not merely a hazard or complementary information.

Former Hollywood actor criticized over AI app that lets users "talk" with deceased loved ones

2025-11-17
T13 (teletrece)
Why's our monitor labelling this an incident or hazard?
The AI system is clearly involved as it generates interactive avatars of deceased individuals, which is a sophisticated AI application. The event stems from the use and deployment of this AI system. Although users and commentators express concern about possible psychological harm and emotional dependency, the article does not report any realized harm or legal violations. The concerns are credible and relate to potential future harm, fitting the definition of an AI Hazard. There is no indication of an AI Incident (actual harm) or Complementary Information (updates or responses to prior incidents). Hence, the classification as AI Hazard is appropriate.

A company creates an app to recreate deceased loved ones using AI. Is what it aims to do ethical?

2025-11-17
Urban Tecno
Why's our monitor labelling this an incident or hazard?
The described AI system clearly involves AI technology capable of generating realistic avatars with voice and conversational abilities, meeting the definition of an AI system. The concerns raised relate to potential emotional harm and psychological effects on users interacting with AI recreations of deceased individuals. Since no actual harm or incident is reported, but plausible future harm is discussed, this fits the definition of an AI Hazard. The event is not merely general AI news or a response to a past incident, so it is not Complementary Information. Hence, the classification is AI Hazard.

Technology from Black Mirror becomes reality: AI imitates deceased relatives

2025-11-16
Avaz.ba
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned and is in active use, fulfilling the AI System criterion. The concerns raised relate to potential emotional harm and psychological effects on users, especially children, which could plausibly lead to harm in the future. However, no direct or indirect harm has been reported as having occurred so far. Therefore, this event fits the definition of an AI Hazard, as the technology's use could plausibly lead to harm, but no incident has yet materialized.

Technology from the creepiest Black Mirror episode becomes reality: AI imitates deceased relatives

2025-11-14
Telegraf.rs
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as creating interactive avatars of deceased individuals based on voice recordings and photos, which qualifies as an AI system. Its use has directly led to emotional and psychological harms, such as disturbing the natural grieving process and causing discomfort among users, which is a form of harm to health and communities. The article reports that the technology is already in use and available on platforms like the App Store, so the harm is occurring rather than merely potential. Therefore, this event qualifies as an AI Incident due to realized harm linked to the AI system's use.

Technology from Black Mirror becomes reality: AI imitates deceased relatives

2025-11-14
Oslobođenje d.o.o.
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the 2Wai app uses AI to generate interactive avatars of deceased individuals. The use of this AI system could plausibly lead to psychological harm, such as disruption of the natural grieving process and emotional distress, especially in vulnerable groups like children. Although no concrete harm has been reported as having occurred, the concerns raised by experts and public reactions indicate a credible risk of harm. Thus, the event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm to health or emotional well-being. There is no indication of actual injury or violation yet, so it is not an AI Incident. It is not merely complementary information or unrelated news, as the focus is on the potential harm from the AI system's use.

Bizarre and disturbing, it has drawn plenty of criticism: This app "revives" the dead (VIDEO) | 6yka

2025-11-16
BUKA
Why's our monitor labelling this an incident or hazard?
The application clearly involves an AI system that generates interactive avatars based on voice recordings and photos, enabling conversations with digital representations of deceased people. The article highlights expert warnings about potential psychological and emotional harms, such as disrupting natural grieving processes and emotional confusion. However, it does not report any realized harm or incidents resulting from the app's use so far. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harm in the future, but no direct or indirect harm has been documented yet.

Artificial intelligence app "brings dead relatives back to life" - Intense reactions after the company's presentation

2025-11-14
Lamia Report
Why's our monitor labelling this an incident or hazard?
The described AI system clearly fits the definition of an AI system, as it generates interactive outputs (avatars) based on input data (memories, likeness) to influence virtual environments (user interactions). The event stems from the use of this AI system. Although no direct or indirect harm has been reported as having occurred, the intense public backlash and ethical concerns indicate plausible future harms, such as psychological harm or social disruption. Since no realized harm is described, but plausible harm is credible, this event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Artificial intelligence app "brings dead relatives back to life" - Intense reactions after the company's presentation

2025-11-14
Newsbeast.gr
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as creating interactive avatars of deceased persons that share memories and interact with living users, which involves AI development and use. The event reports actual use of the system and public reactions indicating emotional and psychological harm, such as distress, ethical concerns, and potential alteration of grieving processes. These effects constitute harm to persons (mental health and emotional well-being), fulfilling the criteria for an AI Incident. Although the harm is non-physical, it is significant and clearly articulated, with the AI system's role pivotal in causing it. The event is not merely a potential risk or a complementary update but a realized impact from the AI system's deployment.

Straight out of Black Mirror: The AI app that "brings the dead to life"

2025-11-17
Techgear.gr
Why's our monitor labelling this an incident or hazard?
The AI system is clearly involved as it creates interactive avatars of deceased people using AI-generated speech and memory synthesis. The article does not report any realized harm but highlights widespread public and expert concern about potential psychological and ethical harms, such as disrupting natural grief or consent issues. These concerns constitute plausible future harms that the AI system could cause. Since no direct or indirect harm has yet occurred, the event does not meet the criteria for an AI Incident. It is not merely complementary information because the main focus is on the potential risks and societal debate triggered by the AI application, not on responses or ecosystem updates. Hence, the classification as AI Hazard is appropriate.

Black Mirror in reality? New AI platform "recreates" the dead | Techblog.gr

2025-11-17
Techblog.gr
Why's our monitor labelling this an incident or hazard?
The AI system is clearly involved as it generates interactive digital avatars based on deceased persons, which fits the definition of an AI system. The use of this system is ongoing and public, but the article does not report any actual harm or incident resulting from its deployment. The concerns raised are about plausible future harms and ethical issues, which aligns with the definition of Complementary Information, as it provides context and societal response to the AI technology rather than reporting an AI Incident or Hazard.

Can AI "resurrect" the dead? See the 2wai app - Digital Life

2025-11-17
Digital Life!
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates avatars of deceased persons for user interaction, which is a clear AI application. Although no direct harm has been reported yet, the public criticism highlights credible risks of emotional and ethical harm, such as affecting the grieving process and consent violations. Since these harms are plausible future outcomes of the AI system's use, the event fits the definition of an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the potential risks of the AI application.

Artificial intelligence app "brings dead relatives back to life" - Intense reactions after the company's presentation - Fibernews

2025-11-14
Fibernews - All digital news!
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates interactive avatars of deceased persons, which is a clear AI application. The event stems from the use and deployment of this AI system. While the article highlights significant ethical concerns and public backlash, it does not document any realized injury, violation of rights, or other harms caused by the AI system. The concerns about emotional harm and alteration of the grieving process are plausible future harms. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its societal impact are central to the article.

This app for talking to your dead relatives seems straight out of 'Black Mirror'

2025-11-17
Hipertextual
Why's our monitor labelling this an incident or hazard?
The app clearly involves an AI system that generates avatars and enables real-time interaction, fulfilling the AI system criterion. However, the article does not describe any realized injury, violation of rights, or other harms caused by the app's use so far. Instead, it raises concerns about potential emotional harm and dependency, which are plausible future risks. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm in the future, but no direct or indirect harm has yet occurred.

The AI app from a Disney Channel star that creates avatars of deceased relatives sparks controversy

2025-11-14
Forbes México
Why's our monitor labelling this an incident or hazard?
The application 2wai uses AI to create interactive avatars of deceased individuals, which is a clear AI system involvement. The public criticism highlights potential emotional and psychological harms, such as exploitation of grief and dehumanization, which are plausible harms that could arise from the use of this technology. However, the article does not report any actual injury, rights violations, or other harms that have materialized. Thus, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the controversy and potential risks, not on responses or updates to a prior incident. It is not Unrelated because the AI system and its societal implications are central to the event.

Disney's Calum Worthy co-founds the artificial intelligence app 2wai amid controversy - Es de Latino News

2025-11-14
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (the HoloAvatar AI generating digital recreations). The controversy and criticism relate to ethical concerns about consent and digital likenesses, which could plausibly lead to violations of rights or harm to communities if misused. However, the article does not report any actual harm or incidents caused by the AI system so far. Therefore, this event is best classified as an AI Hazard, reflecting the plausible future risk of harm due to the AI system's use and ethical issues raised.

Disney star launches controversial artificial intelligence app for talking to dead people

2025-11-14
lanetaneta.com
Why's our monitor labelling this an incident or hazard?
The application clearly involves an AI system that generates digital avatars with realistic human features and simulated memories, which fits the definition of an AI system. The controversy and ethical concerns indicate plausible risks of harm, such as violations of consent and potential emotional or psychological harm to users or affected individuals. However, since the application is currently in beta and no direct or indirect harm has been reported or confirmed, this event represents a plausible risk rather than an actual incident. Therefore, it qualifies as an AI Hazard due to the credible potential for harm arising from the AI system's use, but not an AI Incident at this stage.

The app for 'talking' to dead relatives sparks criticism: "It seems like an episode of Black Mirror"

2025-11-18
Noticias SIN
Why's our monitor labelling this an incident or hazard?
The application clearly involves an AI system that generates digital avatars capable of real-time conversation, fulfilling the definition of an AI system. The concerns raised by users and commentators about psychological harm and exploitation indicate a credible risk that the AI system's use could lead to injury or harm to persons (psychological harm). Since the article does not report actual harm occurring yet but highlights significant plausible risks, this event fits the definition of an AI Hazard rather than an AI Incident. The focus is on potential harm rather than realized harm, and the event is not primarily about responses or governance measures, so it is not Complementary Information.

Controversy over app that allows 'talking' to deceased relatives | Periódico Zócalo | News from Saltillo, Torreón, Piedras Negras, Monclova, Acuña

2025-11-18
Zócalo Saltillo
Why's our monitor labelling this an incident or hazard?
The application involves an AI system that generates interactive avatars of deceased persons, which could plausibly lead to psychological harm or ethical violations. Since no actual harm has been reported but credible concerns and warnings exist about potential emotional damage and ethical issues, this event fits the definition of an AI Hazard rather than an AI Incident. The AI system's use could plausibly lead to harm, but such harm is not yet realized according to the article.

This AI app lets you chat with the dead using a few minutes of video - and not everyone is okay with that

2025-11-21
TechRadar
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as generating digital avatars of deceased people, which can influence users' emotional and psychological states. The article highlights public discomfort and ethical concerns, indicating potential for harm to individuals' mental health and privacy rights. However, no direct or indirect harm has been reported as having occurred yet. The event thus fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harms such as emotional distress, privacy violations, or identity misuse in the future.

Former Disney star launches "demonic" app to chat with deceased relatives

2025-11-19
NEWS.am STYLE
Why's our monitor labelling this an incident or hazard?
The app is an AI system that generates realistic avatars of deceased people, enabling conversations that mimic the deceased. Critics warn of serious psychological risks, which are plausible harms linked to the AI system's use. Since no actual harm or incidents have been reported, but credible concerns about future harm exist, the event fits the definition of an AI Hazard rather than an AI Incident. The event is not merely general AI news or complementary information because it focuses on the potential harms of this specific AI system's use.

'The most evil thing I've ever seen': New AI avatar app sparks fury with one deeply unsettling feature | Attack of the Fanboy

2025-11-18
Attack of the Fanboy
Why's our monitor labelling this an incident or hazard?
The app clearly involves an AI system that generates digital avatars with memory and conversational abilities. The public backlash and ethical concerns indicate potential for harm, especially emotional or psychological harm related to grief and consent. However, the article does not report any actual injury, rights violation, or other harm that has occurred due to the app's use. The harms are potential and plausible, not realized. Thus, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no direct or indirect harm has yet been documented.

Disney Star Launches AI App That Recreates Deceased Loved Ones - Is It Too Much Like Black Mirror?

2025-11-18
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The app clearly involves an AI system that generates interactive avatars simulating deceased persons, fulfilling the AI system definition. The event stems from the use of this AI system. While no direct harm is reported, the widespread ethical concerns and emotional backlash indicate plausible future harm, such as emotional exploitation or psychological distress. The app's monetization through premium avatars further suggests potential exploitation risks. Since no actual harm has been documented yet, but credible risks exist, the event fits the AI Hazard category rather than an AI Incident or Complementary Information.

AI Companies Are Encouraging Users To Believe Chatbots Are People, And It's Insanely Creepy

2025-11-21
freedomsphoenix.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (chatbots and AI avatars) being used in ways that cause psychological harm by encouraging users to believe these AI entities are real people, fostering emotional addiction and delusions. This is a direct harm to individuals' mental health and, by challenging fundamental understandings of personhood, a broader societal harm. The AI systems' use is central to this harm, meeting the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in causing it.

Would you use an app that lets you talk to the dead?

2025-11-21
LAFM
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as the app uses conversational AI to create digital avatars of deceased people. The event stems from the use of this AI system. Although no direct harm has been reported, the article extensively discusses credible risks of psychological harm, exploitation, and manipulation that could plausibly arise from the app's use. These potential harms align with the definition of an AI Hazard, as the AI system's use could plausibly lead to injury or harm to persons (mental health), harm to communities (exploitation), and violations of rights (identity and consent). Since no actual harm has yet occurred, the event is not an AI Incident. The article is not primarily about responses or updates, so it is not Complementary Information. It is not unrelated because the AI system and its potential harms are central to the report.

Ex-Disney Channel star sparks outrage with bizarre AI app: 'Objectively one of the most evil ideas imaginable'

2025-11-22
The Cool Down
Why's our monitor labelling this an incident or hazard?
The app clearly involves an AI system that generates interactive avatars capable of real-time conversation, fulfilling the AI system definition. The criticisms focus on potential emotional harm to users and ethical issues around grief and identity, which could plausibly lead to psychological harm or violations of personal rights. However, the article does not report any actual injury, rights violation, or other harm having occurred yet. The environmental concerns relate to AI model training but are general and not tied to a specific incident. Hence, the event fits the definition of an AI Hazard, as it could plausibly lead to harm but no direct or indirect harm has been documented at this stage.