AI-Generated Death Hoax Targets Dolly Parton and Reba McEntire

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated images falsely depicting Dolly Parton on her deathbed with Reba McEntire circulated online, causing public concern and emotional distress. Both celebrities publicly refuted the hoax, highlighting the harm caused by AI-driven misinformation and the need for vigilance against such fabricated content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article mentions an 'AI mess' involving Dolly Parton, implying that an AI-generated image or content led to false rumors about her health. This misinformation caused harm to her reputation and potentially to her community of fans by spreading false health scare news. Since the AI-generated content directly led to misinformation and reputational harm, this qualifies as an AI Incident under the category of harm to communities and violation of rights related to misinformation.[AI generated]
AI principles
Transparency & explainability, Accountability, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Women, General public

Harm types
Psychological, Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Reba McEntire Weighs in on Dolly Parton's Health Scare News, Calls AI Photo a 'Mess'

2025-10-10
Country 102.5
Why's our monitor labelling this an incident or hazard?
The article mentions an 'AI mess' involving Dolly Parton, implying that an AI-generated image or content led to false rumors about her health. This misinformation caused harm to her reputation and potentially to her community of fans by spreading false health scare news. Since the AI-generated content directly led to misinformation and reputational harm, this qualifies as an AI Incident under the category of harm to communities and violation of rights related to misinformation.
Reba calls out AI 'nonsense' in sweet message to Dolly Parton

2025-10-10
99.9 Y Country
Why's our monitor labelling this an incident or hazard?
AI systems are involved as the images circulating are AI-generated, creating false narratives about Dolly Parton's health and Reba's pregnancy. While these images spread misinformation, the article does not report any actual harm occurring, only the potential for reputational or emotional harm. The main content is a public figure's response to misinformation, aiming to correct false AI-generated content. Therefore, this is best classified as Complementary Information, as it provides context and response to AI-generated misinformation without reporting a new AI Incident or AI Hazard.
Reba McEntire responds to AI photo of her at Dolly Parton's 'deathbed'

2025-10-10
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake photos (deepfake or AI-manipulated images) that falsely depict a sensitive and harmful scenario (deathbed of a public figure). This misinformation has caused public concern and emotional distress, which fits the definition of harm to communities and reputational harm. The AI system's use in generating these images is central to the incident. Although the article focuses on the celebrities' responses, the harm caused by the AI-generated content is real and materialized, meeting the criteria for an AI Incident rather than a hazard or complementary information.
Reba McEntire slams 'AI mess' in support of fellow country legend Dolly Parton: 'I love you'

2025-10-10
Aol
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images spreading false rumors about Dolly Parton and Reba McEntire, which is a form of misinformation. However, there is no indication that this misinformation has caused direct harm such as injury, rights violations, or significant disruption. The celebrities are responding to the misinformation, clarifying facts, and expressing concern about the 'AI mess.' Since the article focuses on the social and reputational effects and the public discourse around AI-generated misinformation without describing a concrete harmful event, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
Dolly Parton Slams AI Photo of Reba McEntire at Her Death Bed

2025-10-09
Mandatory
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated content (AI system involvement) that has indirectly led to misinformation and emotional distress among fans, which can be considered harm to communities. However, since the harm is limited to misinformation and no direct physical or legal harm is reported, and the main focus is on clarifying and correcting the misinformation, this fits best as Complementary Information. It provides context and updates on the impact of AI-generated misinformation and the response to it, rather than reporting a new AI Incident or Hazard.
Reba McEntire slams 'AI mess' after Dolly Parton death rumors spread...

2025-10-10
New York Post
Why's our monitor labelling this an incident or hazard?
AI-generated images falsely depicting Dolly Parton's death and Reba McEntire's pregnancy led to public alarm and required clarifications from the celebrities. The AI system's role in creating and spreading misleading content directly caused harm in the form of misinformation and emotional distress to the individuals and their fan communities. This fits the definition of an AI Incident as the AI system's use indirectly led to harm to communities and reputational harm. There is no indication that this is merely a potential risk or a complementary update; the harm has materialized through misinformation and public confusion.
Reba McEntire Responds to Dolly Parton AI Image: 'Too Young to Die, Too Old for Babies'

2025-10-10
PEOPLE.com
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake images (deepfakes) that falsely depict celebrities in harmful or distressing situations. This constitutes misuse of AI technology leading to misinformation and potential emotional harm to the individuals and their communities. Since the images are circulating and causing concern, this is a realized harm related to misinformation and reputational impact, which can be considered harm to communities. Therefore, it qualifies as an AI Incident due to the direct role of AI in generating harmful false content that affects individuals and public perception.
Dolly Parton Slams AI Photo of Reba McEntire at Her "Deathbed"

2025-10-08
E! Online
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake image, which is a misuse of AI technology to create misinformation. However, the article does not report any realized harm such as health injury, rights violations, or community harm. The event is primarily about addressing and correcting misinformation caused by AI-generated content, which fits the category of Complementary Information as it provides context and response to a prior AI-related issue rather than reporting a new incident or hazard.
Reba McEntire Calls Out Dolly Parton Death Hoax "Mess"

2025-10-09
E! Online
Why's our monitor labelling this an incident or hazard?
The article discusses AI-generated fake images (deepfakes) that have led to false rumors about Dolly Parton's death and Reba McEntire's pregnancy. While this involves AI-generated misinformation, there is no indication that harm has materialized beyond the spread of false information, nor is there a direct or indirect harm described such as injury, rights violations, or disruption. The event focuses on the social reaction and debunking of the misinformation, which is a complementary update on AI's societal impact rather than a new incident or hazard.
Reba McEntire Supports Dolly Parton After She Calls Out Death Hoax

2025-10-09
Us Weekly
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated photos that falsely show Reba McEntire at Dolly Parton's alleged deathbed, a scene that never took place. This misinformation caused public confusion and emotional distress, constituting harm to communities and individuals. Because the AI system's outputs directly led to harm through misinformation and reputational damage, the event meets the criteria for an AI Incident.
Reba McEntire slams 'AI mess' in support of fellow country legend Dolly Parton: 'I love you'

2025-10-09
Entertainment Weekly
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images that falsely portray Reba McEntire and Dolly Parton in distressing and untrue situations, such as McEntire appearing pregnant and Parton in a hospital bed. These AI-generated fabrications have led to rumors and concerns about their health, which have been publicly addressed and dismissed by the celebrities themselves. The AI system's role in generating and disseminating these misleading images has directly contributed to reputational and emotional harm, fitting the definition of an AI Incident due to harm to persons and communities through misinformation and emotional distress.
WATCH: Reba McEntire Responds to Dolly Parton's Health Post

2025-10-09
KEAN 105
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake photos that are misleading but does not report any actual harm resulting from these images. The AI system's involvement is in generating false content, but no injury, rights violation, or disruption is reported. Therefore, this is not an AI Incident. Since no plausible future harm or credible risk is described beyond the existence of the images, it does not meet the threshold for an AI Hazard. The main focus is on clarifying misinformation and the social reaction to AI-generated fakes, which fits the category of Complementary Information.
Dolly Parton Confronts AI-Generated Death Hoax and Health Rumors

2025-10-09
Bangla news
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a fabricated image falsely depicting Dolly Parton and Reba McEntire in a deathbed scenario, which directly led to misinformation and public distress. This constitutes harm to communities through the spread of false information and emotional harm. Since the AI-generated content caused actual misinformation and public concern, this qualifies as an AI Incident under the definition of harm to communities. The article does not describe any potential or future harm only, nor is it primarily about responses or updates, so it is not Complementary Information or an AI Hazard.
WATCH: Reba McEntire Responds to Dolly Parton's Health Post

2025-10-09
97.3 The Dawg
Why's our monitor labelling this an incident or hazard?
AI-generated images are explicitly mentioned as the source of misinformation. While the images are fake and have caused confusion, the article does not report any realized harm such as health injury, rights violations, or community harm. The event highlights the potential for AI-generated content to mislead, but since the harm is not realized and the main focus is on correcting misinformation, this fits the definition of an AI Hazard (plausible future harm) rather than an AI Incident. It is not Complementary Information because the main narrative is about the AI-generated misinformation event itself, not a response to a prior incident.
Reba McEntire addresses AI-generated image of her at Dolly Parton's 'final moments'

2025-10-10
internewscast.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI-generated image (AI system involvement) that falsely depicts a sensitive and harmful scenario (Dolly Parton's deathbed). This misinformation has caused public concern and emotional distress, which constitutes harm to communities and individuals. The AI system's use in creating and spreading this false image directly led to this harm. Although no physical injury or legal violation is reported, the reputational and emotional harm is significant and clearly articulated. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Reba slams AI after fake Dolly deathbed photo

2025-10-10
Winnipeg Sun
Why's our monitor labelling this an incident or hazard?
The article describes a fake photo generated by AI that falsely shows Dolly Parton on her deathbed, leading to misinformation and emotional harm to the community and individuals involved. The AI system's use in creating and spreading this false content directly led to harm in the form of misinformation and reputational impact, which qualifies as harm to communities. Therefore, this event is an AI Incident.
Reba McEntire calls out AI-generated fake pregnancy photos and Dolly Parton 'deathbed' images

2025-10-10
Fox News
Why's our monitor labelling this an incident or hazard?
The article discusses AI-generated fake images that misrepresent personal situations of public figures, which can be considered misinformation. However, the article does not report any direct or indirect harm such as physical injury, legal rights violations, or disruption of critical infrastructure caused by these images. The focus is on the celebrities' responses to the misinformation and clarifications to their fans. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on societal reactions and challenges related to AI-generated content.
Reba McEntire Slams Bizarre AI Rumors About Her And Dolly Parton

2025-10-10
Yahoo
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating fake images that falsely depicted Dolly Parton and Reba McEntire in a hospital, which caused misinformation and potential reputational harm. However, the article does not describe any realized physical harm, violation of rights, or other significant harms directly caused by the AI-generated images. The harm is reputational and misinformation-based but no direct or indirect harm as defined (such as injury, rights violation, or community harm) is reported as having occurred. Therefore, this event is best classified as Complementary Information, as it provides context and response to an AI-related misinformation issue rather than reporting a new AI Incident or Hazard.
Reba McEntire slams AI over Dolly Parton deathbed hoax

2025-10-10
Yahoo
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake images that have been widely circulated, causing false beliefs about Dolly Parton's health and leading to public concern and emotional distress. The AI system's role in generating these deceptive images is pivotal to the harm caused. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (misinformation and emotional harm) and individuals (reputational and emotional harm).
Reba McEntire calls out AI-generated fake pregnancy photos and Dolly Parton 'deathbed' images

2025-10-11
Aol
Why's our monitor labelling this an incident or hazard?
AI systems were used to generate fake images that falsely portray real individuals in sensitive and harmful contexts. This misuse of AI has caused reputational and emotional harm, which qualifies as harm to communities and individuals. Since the AI-generated content has already been disseminated and caused harm, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Reba McEntire Slams Bizarre AI Rumors About Her And Dolly Parton

2025-10-10
HuffPost
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate fake images that falsely depicted Dolly Parton and Reba McEntire in a hospital, leading to rumors about Parton's health. This misinformation caused reputational and emotional harm, which fits the definition of an AI Incident as the AI system's use directly led to harm to individuals and communities through misinformation. Therefore, this event qualifies as an AI Incident.
Dolly Parton's pal Reba McEntire responds to photo of her at 'deathbed' amid health fears

2025-10-10
Mirror
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating a fake image (an AI hoax) that led to misinformation and public concern about Dolly Parton's health. However, no actual harm occurred as a result; the event centers on the spread of AI-generated misinformation and the subsequent public response to clarify the truth. There is no direct or indirect harm materialized from the AI system's use, nor is there a plausible future harm indicated beyond the misinformation episode. Therefore, this event is best classified as Complementary Information, as it provides context and updates about the impact of AI-generated content and the societal response to it, rather than constituting a new AI Incident or AI Hazard.
Reba McEntire Weighs in on Dolly Parton's Health Scare News, Calls AI Photo a 'Mess'

2025-10-10
995qyk.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI-generated photo that contributed to misinformation about Dolly Parton's health, which can be considered a form of harm to community perception. However, the article centers on the response to this misinformation rather than the incident itself. There is no detailed description of harm caused by the AI system's use or malfunction, nor is there a clear indication of direct or indirect harm resulting from the AI system. Therefore, this is best classified as Complementary Information, as it provides context and response to an AI-related misinformation event rather than reporting a new AI Incident or Hazard.
Music legend slams sick AI photo showing her at Dolly Parton's deathbed

2025-10-10
Metro
Why's our monitor labelling this an incident or hazard?
An AI-generated fake image is involved, which is an AI system creating misleading content. This could cause harm to reputation and emotional distress, which falls under harm to communities or individuals. However, the article mainly reports the celebrities' responses to the misinformation, clarifying and mitigating concerns. There is no indication of significant realized harm beyond misinformation, no physical harm, no legal violations, or critical infrastructure disruption. The article's main focus is on the response and clarification, making it Complementary Information rather than a new AI Incident or Hazard.
Reba McEntire Reacts to Dolly Parton AI Deathbed Hoax: 'I Love You, Dolly'!

2025-10-12
Just Jared
Why's our monitor labelling this an incident or hazard?
The AI system's involvement is limited to generating a false image (hoax) that caused misinformation. However, the article does not report any actual harm such as health injury, rights violations, or community harm resulting from this AI-generated hoax. The main focus is on the celebrities' responses to clarify and dismiss the hoax. Therefore, this event does not meet the threshold for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context and societal response to an AI-generated misinformation event without reporting new harm or credible future harm.
Reba McEntire Slams Viral AI Images Of Her And Dolly Parton

2025-10-10
The Blast
Why's our monitor labelling this an incident or hazard?
The presence of AI is explicit in the creation of deepfake images, which are AI-generated synthetic media. The harm is realized as misinformation and reputational damage to the individuals depicted, which can be considered harm to communities and individuals' rights to accurate information. Since the AI-generated images have already circulated widely and caused public confusion and distress, this constitutes an AI Incident. The article does not merely discuss potential harm or responses but reports on an actual event where AI-generated content caused harm.
Reba McEntire slams AI over Dolly Parton deathbed hoax

2025-10-10
ArcaMax
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated fake images that have been used to spread false and harmful narratives about public figures, causing emotional distress and misinformation. The AI system's role in generating these hoaxes is explicit and directly linked to the harm caused. Therefore, this qualifies as an AI Incident due to harm to communities and individuals through misinformation and reputational damage.
Reba McEntire Slams Fake AI Photos of Her at Dolly Parton's Deathbed

2025-10-10
Celebrity
Why's our monitor labelling this an incident or hazard?
The article involves AI-generated fake images, which implies the use of AI systems to create misleading content. However, the harm (false rumors about Dolly Parton's death) is not confirmed as having caused significant harm beyond public concern, and the main focus is on the celebrities' responses to clarify the misinformation. There is no indication of injury, rights violations, or other significant harms caused by the AI system's use. The event is best classified as Complementary Information because it provides context and societal response to AI-generated misinformation rather than reporting a new AI Incident or AI Hazard.
Reba McEntire slams AI hoax in support of Dolly Parton

2025-10-12
The News International
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake image that misled the public about Dolly Parton's health, causing harm in the form of misinformation and emotional distress to the community and individuals involved. This constitutes harm to communities through the spread of false information. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI-generated hoax.