AI-Generated Fake Obituaries Cause Distress and Misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Scammers are using AI tools to rapidly generate and post fake obituaries of living individuals online, including journalist Deborah Vankin, to attract clicks and ad revenue. This AI-driven scheme spreads misinformation, causes emotional distress, and can expose victims to further cyber risks such as malware.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes the use of AI to generate fake obituaries, which are then posted online to deceive readers and generate ad revenue. The harm is realized: individuals are falsely declared dead, causing emotional distress and spreading misinformation among their social circles and the public. The AI system's use in creating and disseminating this false content directly leads to harm to persons and communities, fulfilling the criteria for an AI Incident.[AI generated]
AI principles
Accountability, Safety, Transparency & explainability, Human wellbeing, Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Psychological, Reputational, Public interest

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard

Scammers posted obituaries declaring them dead. They were very much alive | CNN

2024-03-19
CNN
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI to generate fake obituaries, which are then posted online to deceive readers and generate ad revenue. The harm is realized: individuals are falsely declared dead, causing emotional distress and spreading misinformation among their social circles and the public. The AI system's use in creating and disseminating this false content directly leads to harm to persons and communities, fulfilling the criteria for an AI Incident.
An LA reporter read her own obituary. She's just one victim of a broader death hoax scam

2024-03-22
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake obituaries used by scammers to create false death announcements, which mislead the public and generate ad revenue through clicks. This involves the use of AI systems to produce deceptive content that causes harm by spreading misinformation and emotional distress. The harm is direct and realized, as victims have their obituaries fabricated and misinformation is actively disseminated. The AI system's role is pivotal in generating the fake content at scale, making this an AI Incident rather than a hazard or complementary information.
Scammers posted obituaries declaring them dead. They were very much alive

2024-03-19
CTV News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that scammers use AI tools to generate fake obituaries, which are then posted online to generate ad revenue. This AI-generated misinformation has directly harmed individuals by falsely reporting their deaths, causing emotional distress and confusion among their social circles. The harm is realized and ongoing, violating the right to truthful information and harming communities through misinformation; this event therefore qualifies as an AI Incident.
Why AI Obituary Scams Are a Cyber-Risk for Businesses

2024-03-22
Dark Reading
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of chatbots (an AI system) to generate fake obituaries that are used in scams. These scams have caused harm by deceiving vulnerable people and, in some cases, exposing devices to malware. The AI system's role is pivotal in the rapid creation and dissemination of fake obituaries, which directly leads to harm. Therefore, this qualifies as an AI Incident under the framework, as there is direct harm caused by the AI system's use.
Scammers posted obituaries declaring them dead. They were very much alive

2024-03-20
KHBS/KHOG Channel 40/29
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools by scammers to generate fake obituaries, which are then posted online to deceive people and generate ad revenue. The AI system's use directly leads to misinformation and emotional harm to the victims and their communities, fulfilling the criteria for harm to individuals and communities. The harm is realized, not merely potential: victims experienced confusion, distress, and reputational damage. Hence, this is an AI Incident rather than a hazard or complementary information.