AI-Driven Romance Scams Surge Globally Ahead of Valentine's Day


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Criminals are increasingly using AI tools—such as generative models, deepfakes, and voice cloning—to create highly convincing romance scams on social media and dating apps. These AI-powered scams have led to significant financial and emotional harm, with losses exceeding $1 billion globally, including in Australia and the US.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems (large language models, voice cloning, deepfake videos) being used to perpetrate romance scams that have caused real financial and emotional harm to victims. The harm is direct and materialized, not hypothetical or potential. The AI systems' development and use have enabled scammers to scale and improve the effectiveness of their fraudulent activities, leading to violations of personal and financial security. Hence, this event qualifies as an AI Incident under the framework.[AI generated]
AI principles
Accountability; Transparency & explainability

Industries
Media, social platforms, and marketing; Consumer services

Affected stakeholders
General public

Harm types
Economic/Property; Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Dating online this Valentine's Day? Here's how to spot an AI romance scam

2026-02-12
Digital Trends
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (large language models, voice cloning, deepfake videos) being used to perpetrate romance scams that have caused real financial and emotional harm to victims. The harm is direct and materialized, not hypothetical or potential. The AI systems' development and use have enabled scammers to scale and improve the effectiveness of their fraudulent activities, leading to violations of personal and financial security. Hence, this event qualifies as an AI Incident under the framework.

Valentine's warning as AI fuels 'insidious' love scams

2026-02-12
The West Australian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used to analyze personal social media information and automatically generate tailored romance scam conversations, which directly leads to financial and emotional harm to victims. The harm is realized and significant, fulfilling the criteria for an AI Incident. The AI system's use in automating and scaling scams is a direct cause of the harm. This is not merely a potential risk or a general discussion but a report of ongoing harm caused by AI-enabled scams.

AI fuelling 'insidious' love scams in Australia

2026-02-12
Otago Daily Times Online News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to perpetrate romance scams that have directly led to financial harm to victims, fulfilling the criteria for an AI Incident. The AI's role in analyzing personal data and automating scam conversations is central to the harm described. The article reports realized harm (over $220 million lost) and emotional manipulation, which are clear harms to individuals and communities. Therefore, this is classified as an AI Incident.

FBI warns AI playing growing role in Valentine's Day scams

2026-02-12
Washington Examiner
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI is used by scammers to create realistic fake content and communications that manipulate victims into handing over money, resulting in actual financial losses. This is a direct harm caused by the use of AI systems in the scam operations. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm to people (financial injury and exploitation).

Federal officials warn Western New Yorkers about romance scam surge ahead of Valentine's Day

2026-02-12
WKBW
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of artificial intelligence by criminals to enhance the sophistication and effectiveness of romance scams, which have caused real financial harm to victims. The AI system's involvement is in the use phase, where it is employed to generate convincing messages that facilitate the scam. The harm is direct and significant, involving loss of money and emotional damage to vulnerable individuals. Therefore, this event meets the criteria for an AI Incident.

Valentine's warning as AI fuels 'insidious' love scams - Michael West

2026-02-12
Michael West
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems analyzing personal data and automating scam conversations, which directly leads to financial and emotional harm to victims. The use of AI in this context is a clear example of AI-enabled malicious use causing harm to people, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and involves violation of trust and financial damage, which are significant harms. Hence, the classification as AI Incident is appropriate.

How to protect yourself against romance scams: Expert tips to stay safe online during Valentine's Day

2026-02-13
Upgrade Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that cybercriminals use generative AI tools to create convincing fake identities and content, which directly leads to romance scams causing financial harm to victims. The involvement of AI in the creation of fake images and videos is a pivotal factor in the scams' effectiveness. The harm (financial loss, emotional manipulation) has already occurred, as evidenced by arrests and reported losses exceeding a billion dollars. Thus, the event meets the criteria for an AI Incident, as the AI system's use has directly led to harm to people.

Valentine's warning as AI fuels 'insidious' love scams - News | InDaily, Inside Queensland

2026-02-13
indailyqld.com.au
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to analyze personal data and generate tailored romance scams, which have directly led to financial harm to victims (over $220 million lost in Australia). The AI's role in automating and scaling these scams is pivotal to the harm caused. Therefore, this meets the definition of an AI Incident as the AI system's use has directly led to harm to people (financial and emotional).

4 romance scams to watch out for this V-Day -- including AI grifts

2026-02-13
Axios
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI and deepfake technology by scammers to perpetrate romance scams that have resulted in actual financial losses to victims. This constitutes direct harm caused by the use of AI systems in fraud. Therefore, the event qualifies as an AI Incident because the development and malicious use of AI systems have directly led to harm to persons through financial fraud.

Your next match might be an AI bot: McAfee flags surge in romance scams - CNBC TV18

2026-02-13
cnbctv18.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake profiles and AI-assisted conversations being used to perpetrate romance scams, which have directly led to financial harm and emotional manipulation of victims. The involvement of AI systems in creating deceptive content and interactions that cause real harm fits the definition of an AI Incident. The harm is realized and ongoing, with detailed statistics on victimization and financial losses, confirming direct or indirect causation by AI systems.

Valentine's Day: Beware your perfect match could be an AI scam, FBI warns

2026-02-13
Firstpost
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-driven 'love traps' and generative AI tools used to create convincing fake profiles and messages that have directly led to victims being defrauded financially and emotionally harmed. The AI systems are central to the scam operations, enabling scalability and sophistication beyond traditional manual scams. The harms include financial fraud (harm to persons) and deception causing emotional harm, fitting the definition of an AI Incident. The involvement is in the use of AI systems maliciously to perpetrate scams, causing direct harm.

Romance scams target Arizona seniors with devastating financial consequences

2026-02-13
abc15 Arizona
Why's our monitor labelling this an incident or hazard?
The article explicitly states that artificial intelligence is making romance scams more sophisticated and harder to detect, which directly contributes to the harm experienced by seniors who lose money and suffer emotional devastation. The AI system is used maliciously by scammers to deceive victims, fulfilling the criteria of an AI system's use leading directly to harm (financial and emotional). Hence, this is an AI Incident rather than a hazard or complementary information.

AI being used in Romance Scams. Here's what to know

2026-02-14
NBC Southern California
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated content is used by fraudsters to deceive victims, build trust, and ultimately steal money through romance scams and fake cryptocurrency investment schemes. This involves the use of AI systems in the scam's execution, causing direct financial harm to victims. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to persons through fraud and financial loss.

Tainted Love: Romance Scam Victims Climb In 2025

2026-02-14
Mirage News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake video technology) in the execution of romance scams that have directly caused financial harm to individuals. The AI system's use in generating convincing fake videos is a contributing factor to the harm experienced by victims. Therefore, this qualifies as an AI Incident because the development and malicious use of AI systems have directly led to significant harm to people (financial losses and emotional harm).

SA daters warned as AI 'pig butchering' surges ahead of Valentines Day

2026-02-13
ITWeb
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-powered deepfake technology in romance scams that have directly led to financial harm to victims. The AI systems are used in the scam's operation, including real-time video manipulation and identity fabrication, which are central to the harm caused. The harm is materialized (financial loss and emotional harm), and the AI system's role is pivotal in enabling the scam at scale. Hence, this event meets the criteria for an AI Incident.

AI spews 'dark age' of love scams

2026-02-13
Inquirer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of advanced AI tools in the development and use of romance scams and investment frauds, which have directly caused significant financial harm to victims. The AI systems' outputs (linguistically perfect messages, AI-generated images, voice cloning) are pivotal in enabling these scams. Therefore, this event meets the criteria for an AI Incident due to direct harm to people (financial injury) caused by AI-enabled fraudulent activities.

Attorney General's Office warns of romance scams as Valentines Day approaches

2026-02-13
Herald/Review Media
Why's our monitor labelling this an incident or hazard?
The use of AI-generated content to perpetrate romance scams directly leads to harm to individuals by deceiving them and causing financial loss. The AI systems involved are used maliciously to create convincing fake identities, which is a misuse of AI technology resulting in realized harm. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm to persons through scams.

FBI warns of AI-powered romance scams surging ahead of Valentine's Day -- victims lose over $1B to fake love

2026-02-13
Newstarget.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-generated images, deepfake videos, chatbots) in the active perpetration of romance scams that have caused substantial financial losses and psychological harm to victims. The AI's role is pivotal in enabling the deception and manipulation that leads to these harms. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to significant harm to people (financial loss and emotional trauma).

Deepfakes, voice cloning, and AI-generated identities fuel surge in romance scams | TahawulTech.com

2026-02-13
TahawulTech.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated content (deepfakes, voice cloning, AI-generated identities) is being used by organized criminal networks to perpetrate romance scams that cause financial and psychological harm to victims. This meets the definition of an AI Incident because the AI system's use has directly led to harm to people (financial loss and emotional trauma). The involvement of AI is clear and central to the harm described, and the harm is realized, not just potential. Hence, the classification is AI Incident.

Romance fraud schemes become more sophisticated with artificial intelligence

2026-02-14
https://www.kkco11news.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI bots being used in coordinated romance scams that have caused financial harm to victims. The AI systems are used to remember personal information and manipulate victims, leading to direct harm (financial loss) to individuals. This fits the definition of an AI Incident, as the AI system's use has directly led to harm to people (financial injury) through sophisticated scam operations.

Love in the time of scammers: How to protect your heart -- and your wallet -- online

2026-02-13
ConsumerAffairs
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI bots by scammers to build trust and perpetrate romance scams, which have caused real financial losses and emotional harm to victims. The AI system's use in this context is a contributing factor to the harm experienced by individuals. Since the harm is realized and directly linked to the AI system's use, this qualifies as an AI Incident under the framework's criteria.

San Francisco dating is bad. Now it's full of AI-scammers.

2026-02-13
sfstandard.com
Why's our monitor labelling this an incident or hazard?
The event involves realized harm (financial losses) caused by scams that are plausibly facilitated by AI systems, as indicated by the reference to 'AI-scammers' in the title. The AI system's use in generating deceptive communications or interactions that lead to victims being defrauded fits the definition of an AI Incident, where the AI's use indirectly leads to harm to people (financial harm). Although the article does not detail the AI mechanisms, the implication of AI involvement in the scam operations and the direct financial harm to victims justifies classification as an AI Incident.

Romance Scam: FBI Releases List of Red Flags to Identify Scammers This Valentine's Day | LatestLY

2026-02-15
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems by scammers to generate personalized, emotionally convincing messages that facilitate romance scams causing financial and emotional harm to victims. The AI's role is pivotal in enabling the sophistication and effectiveness of these scams, which have already resulted in realized harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to people.

What is 'pig butchering'? This new romance scam is increasingly targeting Canadians

2026-02-14
Yorkregion.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI chatbots trained on victims' interests and romance novels, as well as deepfake technology used in video chats, indicating the involvement of AI systems. The misuse of these AI systems in the scam has directly led to financial losses and emotional harm to victims, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, as evidenced by the reported millions lost and the detailed description of the scam's operation.

What is 'pig butchering'? This new romance scam is increasingly targeting Canadians

2026-02-14
Simcoe.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as being used by scammers to perpetrate romance scams that cause direct financial harm to victims. The AI chatbots and deepfake technologies are integral to the scam's operation, enabling fraudsters to build trust and deceive victims effectively. The resulting harm includes financial loss and violation of personal rights, fitting the definition of an AI Incident. Therefore, the classification is AI Incident.

Tenable warns AI is fueling a new wave of romance scams | Back End News

2026-02-14
Back End News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI is being used to create convincing scam messages and deepfake content, which directly contributes to the success of romance scams causing financial losses to victims. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (financial injury and exploitation). The harm is materialized, not just potential, and the AI's role is pivotal in enabling the scams' increased effectiveness and scale.

WA Romance Scam Warning: Red Flags & $3.8M Lost in 2023 - News Directory 3

2026-02-14
News Directory 3
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deepfake video technology) being used in the commission of romance scams that have caused real financial and emotional harm to individuals. The AI system's use in generating realistic fake video calls is a direct factor in the deception and subsequent losses. The harm is realized and significant, meeting the criteria for an AI Incident. The article does not merely warn of potential future harm but documents actual cases and losses linked to AI-enabled scams.

Baden-Württemberg: Love scamming: More and more victims are losing large sums

2026-02-20
N-tv
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate fake images and texts that are used by scammers to deceive victims, resulting in substantial financial losses. This constitutes direct harm to individuals (financial harm), caused by the use of AI systems in the scam process. Therefore, this event qualifies as an AI Incident due to the realized harm caused by AI-enabled deception.

Love scamming: More and more victims are losing large sums - WELT

2026-02-20
DIE WELT
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake images and texts that are used in love-scamming, leading to victims losing significant amounts of money. The AI system's role is pivotal in enabling the deception and consequent harm. The harm is realized (financial loss and emotional harm), and the AI system's involvement is direct in producing the fake content that causes the harm. Therefore, this qualifies as an AI Incident under the framework.

Love scamming: More and more victims are losing large sums - State of Baden-Württemberg - Reutlinger General-Anzeiger - gea.de

2026-02-20
Reutlinger General-Anzeiger
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI is used to generate fake images and texts that are instrumental in the love-scamming schemes. These AI-generated fake identities and communications directly lead to victims losing large sums of money, which constitutes harm to individuals (financial harm). The involvement of AI in the development and use of these deceptive materials is central to the incident. Hence, this is an AI Incident as the AI system's use has directly led to significant harm.