AI Deepfake Scam Impersonating Soap Star Causes Woman to Lose Life Savings and Home


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A scammer used AI-generated deepfake videos and voice cloning to impersonate 'General Hospital' actor Steve Burton, deceiving 66-year-old Abigail Ruvalcaba into sending over $150,000 and selling her home. The convincing AI technology enabled the scam, resulting in severe financial and emotional harm to the victim and her family.[AI generated]

Why's our monitor labelling this an incident or hazard?

The scammer used a deepfake version of actor Steve Burton, which is an AI-generated synthetic media technology, to deceive the victim into believing she was in a relationship with the actor. This AI system's use directly caused financial harm to the victim through fraud. Therefore, this qualifies as an AI Incident due to realized harm caused by the use of an AI system (deepfake) in a malicious context.[AI generated]
AI principles
Privacy & data governance; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Consumers

Harm types
Economic/Property; Psychological

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard


A Woman Sent $80K to a Scammer After They Used a Deepfake Version of Actor Steve Burton

2025-08-29
Distractify
Why's our monitor labelling this an incident or hazard?
The scammer used a deepfake version of actor Steve Burton, which is an AI-generated synthetic media technology, to deceive the victim into believing she was in a relationship with the actor. This AI system's use directly caused financial harm to the victim through fraud. Therefore, this qualifies as an AI Incident due to realized harm caused by the use of an AI system (deepfake) in a malicious context.

Woman loses HOME after falling for AI-generated General Hospital star

2025-08-28
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event describes a scam where deepfake AI technology was used to impersonate a public figure, leading to the victim losing $81,000 and her home. The AI system's role in generating realistic fake videos and voice messages was pivotal in deceiving the victim and causing harm. This fits the definition of an AI Incident as the AI system's use directly led to harm to a person (financial and property loss).

'General Hospital' Star Steve Burton Responds to A.I. Scam That Cost Fan $80K

2025-08-28
Aol
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated deepfake videos to impersonate an actor, which directly led to financial harm (loss of $81,304 and sale of property) to a person with mental illness. The AI system's use in generating convincing fake videos was pivotal in enabling the scam and the resulting harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in fraudulent activity.

How AI Videos of General Hospital's Steve Burton Scammed Woman Out of More Than $81,000

2025-08-28
E! Online
Why's our monitor labelling this an incident or hazard?
The AI system's use in generating realistic videos impersonating a known person was pivotal in the scam, directly causing financial harm to the victim. The harm is realized and significant, involving deception and exploitation of a vulnerable person. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (financial and emotional), fulfilling the criteria for injury or harm to a person or group of people.

Fake Video of General Hospital's Steve Burton Scams Woman Out of $80,000

2025-08-28
Us Weekly
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos that convincingly mimic a real person's voice and likeness. The AI-generated content was maliciously used to deceive and financially defraud the victim, resulting in direct harm (financial loss and emotional distress). This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (financial and emotional harm).

Love-struck fan duped by AI videos of popular soap star in shocking...

2025-08-28
Page Six
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated deepfake videos, which are AI systems that create realistic synthetic media. The scammer's use of these AI videos directly led to financial harm (loss of over $81,000 and impending bankruptcy) and emotional harm to the victim. This fits the definition of an AI Incident because the AI system's use directly caused harm to a person. The harm is realized, not just potential, and the AI system's role is central to the incident.

AI Videos Of General Hospital's Steve Burton Used To Scam Woman Out Of $81,000 - And She's Not The First! - Perez Hilton

2025-08-28
Perez Hilton
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake AI video generation) used in a malicious way to deceive and defraud a person, causing direct harm (financial loss, emotional distress). The AI system's use directly led to the harm, fulfilling the criteria for an AI Incident. The harm includes financial loss and emotional harm to the individual and family, which fits under harm to persons and communities. Therefore, this is classified as an AI Incident.

Woman conned out of life savings by scammers using AI to pose as General Hospital star

2025-08-26
ABC7
Why's our monitor labelling this an incident or hazard?
The scammer used AI technology to create deepfake videos and cloned voice messages impersonating a known actor, which directly led to the victim sending large sums of money and selling her property under false pretenses. This is a clear case where the AI system's use caused direct harm (financial loss, emotional harm, and property loss). Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-enabled deception.

SoCal woman loses home after being scammed by AI deepfake scammer pretending to be General Hospital actor

2025-08-28
KTLA 5
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated deepfake videos and voice to impersonate a known actor, which directly led to significant financial harm to the victim, including loss of property. This fits the definition of an AI Incident because the AI system's use directly caused harm to a person (financial and emotional harm) and harm to property (loss of home). The scammer's use of AI deepfake technology was pivotal in deceiving the victim, making this a clear case of AI Incident rather than a hazard or complementary information.

AI Version of General Hospital's Jason Actor Causes Fan to Lose Almost $100k

2025-08-28
ComingSoon.net
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to create deepfake videos and voice clones of an actor, which were used by a scammer to deceive a victim into sending large sums of money. The AI system's outputs were pivotal in enabling the scam, causing direct financial harm and emotional distress to the victim. This fits the definition of an AI Incident, as the AI system's use directly led to harm to a person. The event is not merely a potential risk or a general news item but a realized harm caused by AI misuse.

AI Deepfake Scam Depicting Soap Star Cost Woman Her Life Savings

2025-08-29
eWEEK
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology to create a deepfake video and voice impersonation that directly caused financial harm to an individual. The scammer exploited AI-generated content to manipulate the victim into transferring money and selling property, which is a clear case of harm to a person. The AI system's use was central to the incident, fulfilling the criteria for an AI Incident as the AI system's use directly led to significant harm.

LA Woman Loses $151,000 & Home After Falling For AI Posing as General Hospital Actor: 'She Never Once Thought It Strange?'

2025-08-28
The Nerd Stash
Why's our monitor labelling this an incident or hazard?
The scammer used AI-powered deepfake technology to impersonate a celebrity in video calls, which directly led to the victim losing $151,000 and her home. This constitutes harm to a person (financial and property harm) caused by the use of an AI system. Therefore, this qualifies as an AI Incident because the AI system's use was pivotal in causing the harm.

Deepfake of General Hospital star Steve Burton scams Southern California woman for $80K

2025-08-29
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create convincing fake videos and messages impersonating a real person. This AI-enabled impersonation directly led to financial harm and distress to the victim, fulfilling the criteria for an AI Incident due to harm to the individual (financial and property harm). The harm is realized and directly linked to the AI system's malicious use.

Daughter speaks out after love-struck mum scammed for $81,000 by AI deepfake of TV star

2025-08-29
LADbible
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI deepfake technology to impersonate a celebrity, which directly led to a financial scam causing significant harm to the victim. The AI system's outputs (deepfake videos) were pivotal in deceiving the victim and causing the harm. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (financial loss, emotional distress).

Woman Loses Life Savings After Scammers Use AI to Pose as General Hospital Star

2025-08-29
PEOPLE.com
Why's our monitor labelling this an incident or hazard?
The scammers used AI technology to create realistic videos impersonating a General Hospital star, which directly led to the victim losing her life savings and incurring severe financial harm. The AI system's use in generating deceptive content was pivotal in enabling the scam, fulfilling the criteria for an AI Incident due to harm to the individual.

'I Thought I was In Love' - AI-Generated Steve Burton Tricks Los Angeles Woman Into Losing $80,000

2025-08-29
The Inquisitr
Why's our monitor labelling this an incident or hazard?
The scammers used AI-generated deepfake videos and voice synthesis to impersonate Steve Burton, which directly caused the victim to lose $81,000 and emotionally manipulated her. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (financial and emotional harm). The involvement of AI is explicit and central to the harm caused.

Fake videos of 'General Hospital' star Steve Burton allegedly scam woman out of over $80k

2025-08-29
Entertainment Weekly
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos used by scammers to impersonate a celebrity and defraud a woman of over $80,000 and her condo. The AI system's outputs (deepfake videos and messages) directly facilitated the scam, leading to financial loss and potential eviction, which are harms to the individual and her property. The involvement of AI in generating realistic fake videos and messages is central to the incident, fulfilling the criteria for an AI Incident as the harm has materialized and is directly linked to the AI system's misuse.

"Heartbreaking": Perez Hilton slams "sickening" AI scam where woman with bipolar disorder lost $81k to Steve Burton impersonator

2025-08-29
Sportskeeda
Why's our monitor labelling this an incident or hazard?
The scam involved the use of AI-generated videos and voice cloning to impersonate Steve Burton, which directly led to the victim losing $81,000 and suffering emotional harm. The AI system's use in creating realistic fake content was pivotal in deceiving the victim, fulfilling the criteria for an AI Incident due to harm to a person (financial and emotional).

Steve Burton's Likeness Used to Scam Woman Out of $81,000

2025-08-29
TV Insider
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos, which are AI systems capable of creating realistic fake content. The scammer used these AI-generated videos to deceive the victim into sending money, resulting in substantial financial harm. The harm is direct and significant, fulfilling the criteria for an AI Incident under the framework. The AI system's use was malicious and led to realized harm, not just potential harm, so it is not an AI Hazard or Complementary Information. It is not unrelated because the AI system played a pivotal role in the scam and resulting harm.

'General Hospital' Star Steve Burton's Likeness Used to Scam Woman Out of $81,000

2025-08-29
KTBS
Why's our monitor labelling this an incident or hazard?
The scam involved the use of AI deepfake technology to create videos that impersonated a real person, which directly caused the victim to lose a significant amount of money ($81,000 plus proceeds from a condo sale). This constitutes harm to the individual (financial harm) caused by the use of an AI system. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm to a person.

'General Hospital' Star Steve Burton's Likeness Used to Scam Woman Out of $81,000

2025-08-29
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated deepfake videos, which are a product of AI systems capable of creating realistic fake content. The scam directly caused financial harm to the victim, fulfilling the criteria for an AI Incident as the AI system's use directly led to harm (financial loss) to a person.

'General Hospital' Star Steve Burton's Likeness Used to Scam Woman Out of $81,000

2025-08-29
FOX 11 41 Tri Cities Yakima
Why's our monitor labelling this an incident or hazard?
The event clearly describes the use of AI-generated deepfake technology to create fake videos of Steve Burton, which were used to deceive and defraud a woman, resulting in substantial financial loss. This constitutes direct harm to a person caused by the use of an AI system (deepfake generation). Therefore, this qualifies as an AI Incident under the definition of an event where AI use has directly led to harm to a person.

Woman conned out of life savings after scammers use AI to pose as General Hospital star

2025-08-29
UNILAD
Why's our monitor labelling this an incident or hazard?
The scammers used AI to clone the celebrity's likeness and voice to create convincing videos, which were instrumental in deceiving the victim. This AI-enabled impersonation directly caused financial harm (loss of life savings and forced property sale), fitting the definition of an AI Incident as the AI system's use directly led to harm to a person (financial harm).