AI Voice Cloning Used in Kidnapping Scam Against Arizona Mother

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Scammers used AI voice cloning technology to convincingly imitate an Arizona mother's 15-year-old daughter, faking a kidnapping and demanding ransom. The realistic AI-generated voice caused severe emotional distress and nearly led to financial harm, highlighting the risks of AI misuse in extortion schemes.[AI generated]

Why's our monitor labelling this an incident or hazard?

The use of AI to clone the teenager's voice is explicitly mentioned and is central to the scam. The AI system's use directly led to an incident of fraud and emotional harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the scam call caused distress and required police intervention. Therefore, this event qualifies as an AI Incident.[AI generated]

AI principles
Privacy & data governance; Safety; Human wellbeing; Respect of human rights; Robustness & digital security; Transparency & explainability; Accountability; Democracy & human autonomy

Industries
Digital security; Consumer services; Media, social platforms, and marketing

Affected stakeholders
Women; Children

Harm types
Psychological; Economic/Property; Human or fundamental rights

Severity
AI incident

AI system task:
Content generation

In other databases

Articles about this incident or hazard

Scammers use AI to clone a teenager's voice in a 'kidnapping scam'

2023-04-15
uol.com.br
Why's our monitor labelling this an incident or hazard?
The use of AI to clone the teenager's voice is explicitly mentioned and is central to the scam. The AI system's use directly led to an incident of fraud and emotional harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the scam call caused distress and required police intervention. Therefore, this event qualifies as an AI Incident.

Daughter calls her mother and says she has been kidnapped. But it was a scammer using artificial intelligence to replicate the teenager's voice - MAGG

2023-04-13
MAGG
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to replicate a person's voice to deceive and extort money, which directly led to psychological harm and an attempted crime. The AI system's malicious use was pivotal in the scam's effectiveness. The harm is realized (emotional distress, attempted extortion), meeting the criteria for an AI Incident. There is no indication that this is merely a potential risk or a complementary update; the harm has occurred due to AI misuse.

Artificial intelligence 'clones' a voice in a fake kidnapping scam

2023-04-14
Olhar Digital - O futuro passa primeiro aqui
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to clone a person's voice from a short audio sample, which is then used maliciously in a scam. The AI system's use directly leads to harm by deceiving victims, causing emotional distress and potential financial or other harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the malicious use of an AI system.

Artificial intelligence used to simulate a daughter's voice and scam her mother over the phone

2023-04-14
JC
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to simulate the daughter's voice with high fidelity to perpetrate a phone scam involving a fake kidnapping and ransom demand. This use of AI directly caused harm to the mother through emotional distress and an attempted financial scam. Therefore, this qualifies as an AI Incident under the definition of an event where AI use has directly or indirectly led to harm to a person.

Woman's voice 'cloned' by AI and mother nearly falls for the scam

2023-04-14
Istoe dinheiro
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning technology) to create a realistic fake voice of a family member. This AI use directly led to harm: emotional distress to the mother and an attempted financial scam. The AI system's use here is malicious and caused realized harm, fitting the definition of an AI Incident due to harm to a person (psychological/emotional harm) and attempted financial harm.

AI Voice Scam: Woman Convinced Her Daughter Was Kidnapped In Startling Tale

2023-04-13
IndiaTimes
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI voice synthesis) to generate a convincing imitation of a person's voice, which was used maliciously to cause emotional distress and attempt financial harm. The AI system's use directly led to an attempted scam causing psychological harm and potential financial loss, which qualifies as harm to a person. Therefore, this is an AI Incident due to the realized harm caused by the malicious use of AI voice technology in a scam.

Arizona mother describes AI phone scam faking daughter's kidnapping: 'It was completely her voice'

2023-04-14
Fox News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology for voice cloning to convincingly fake the daughter's voice, which directly caused harm to the mother and potentially others targeted by the scam. This fits the definition of an AI Incident because the AI system's use led to harm to individuals (psychological harm and potential financial harm).

Chilling AI scam cloned 'kidnapped' girl's voice in call to mum to demand ransom

2023-04-12
Daily Star
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system capable of cloning a person's voice to create a convincing fake kidnapping call. This AI-enabled scam directly caused harm to the victim (emotional distress and potential financial loss), fulfilling the criteria for an AI Incident. The AI system's use was malicious and directly linked to the harm experienced, including psychological harm and the risk of financial exploitation.

AI used in terrifying fake kidnapping scam

2023-04-14
ConsumerAffairs
Why's our monitor labelling this an incident or hazard?
The scam involved the use of AI to clone a person's voice realistically, which was then used maliciously to extort money and cause emotional harm. The AI system's use was central to the incident, enabling the scammer to convincingly impersonate the victim and threaten harm, fulfilling the criteria for an AI Incident due to direct harm caused by the AI system's outputs.

A mother reportedly got a scam call saying her daughter had been kidnapped and she'd have to pay a ransom. The 'kidnapper' cloned the daughter's voice using AI.

2023-04-13
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning software) to create a realistic imitation of a person's voice. The scammer's use of this AI-generated voice directly led to a harmful event: a ransom scam causing emotional distress and potential financial harm. The AI system's role is pivotal in enabling the scam, fulfilling the criteria for an AI Incident as the harm has occurred due to the AI system's use.

Terrifying new AI kidnapping scam used teen girl's voice to demand $1m

2023-04-11
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to clone a person's voice, which was then used in a kidnapping scam causing emotional harm to the victim's family. The AI system's use directly led to realized harm (psychological distress and potential risk of physical harm). Therefore, this qualifies as an AI Incident under the definition of harm to persons or groups of people caused directly or indirectly by the use of an AI system.

A mother reportedly got a scam call saying her daughter had been kidnapped and she'd have to pay a ransom. The 'kidnapper' cloned the daughter's voice using AI.

2023-04-13
MSN International Edition
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning software) to create a realistic fake audio of the daughter, which was then used in a scam call to extort money. This directly led to harm in the form of psychological distress to the mother and potential financial loss, fulfilling the criteria for an AI Incident. The AI system's use was malicious and directly contributed to the harm experienced.

How AI helped criminals in $1 million kidnapping scam

2023-04-14
MoneyControl
Why's our monitor labelling this an incident or hazard?
The scammer made malicious use of AI voice cloning technology to impersonate the victim's daughter, directly causing psychological harm and an extortion attempt. This fits the definition of an AI Incident because the AI system's use directly led to harm (emotional distress and criminal threat). The event involves the use of an AI system (voice cloning) and the harm is realized, not just potential. Therefore, it qualifies as an AI Incident rather than a hazard or complementary information.

Arizona Mother Warns About AI Voice Cloning After Kidnapping Scam

2023-04-13
www.theepochtimes.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning technology to create a deepfake voice of the victim's daughter, which was used in a kidnapping scam. This AI system's use directly caused emotional distress to the mother and posed a risk of financial harm through potential ransom demands. The involvement of AI in the scam is clear and central to the harm caused. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm to a person (psychological and potential financial harm) and a violation of personal security rights.

AI clones teen girl's voice in $1M kidnapping scam: 'I've got your...

2023-04-12
New York Post
Why's our monitor labelling this an incident or hazard?
The article explicitly states that scammers used AI voice cloning technology to simulate the victim's daughter's voice, which directly caused emotional harm and a ransom demand. The AI system's use was central to the scam's effectiveness and the resulting harm. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (emotional distress and financial threat) and harm to the community (scam victims).

Ariz. Mom Says Daughter's Voice Was Cloned with AI in $1 Million Kidnapping Hoax

2023-04-14
PEOPLE.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the scammer used AI to clone the daughter's voice, which was convincing enough to deceive the mother into believing her daughter was in danger. This use of AI directly caused emotional harm and distress, fulfilling the criteria for an AI Incident. The AI system's role was pivotal in enabling the scam, which involved psychological harm and potential risk to the family. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

AI clones child's voice in fake kidnapping scam

2023-04-13
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning technology) to generate a fake voice that convincingly mimicked the victim's daughter. This AI use directly led to harm by enabling a scam that caused emotional distress and an attempted extortion demand. The harm is realized and directly linked to the AI system's use. Therefore, this qualifies as an AI Incident under the definition of harm to persons and communities caused by AI misuse.

Warning over terrifying 'AI kidnap attack' that raids your bank

2023-04-13
The Sun
Why's our monitor labelling this an incident or hazard?
The scam involved the use of AI voice cloning technology to create a highly convincing fake voice of the victim's daughter, which directly led to psychological harm (trauma and fear) and posed a financial harm risk (ransom demand). The AI system's use was central to the incident, enabling the scammer to deceive the victim effectively. Therefore, this qualifies as an AI Incident due to realized harm to a person (psychological harm) and potential financial harm resulting from the AI-enabled scam.

Fake kidnappers simulate teen's voice through AI, use it to demand ransom from her mother

2023-04-13
TheBlaze
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system capable of voice cloning, which was exploited to perpetrate a fraudulent kidnapping ransom demand. The AI-generated voice directly led to psychological harm and distress, fulfilling the criteria for harm to a person or group. Therefore, this qualifies as an AI Incident due to the realized harm caused by the malicious use of AI-generated synthetic voice technology.

AI generated US scam call imitated daughter's voice

2023-04-12
Stuff
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning AI) to generate a fake voice that directly caused emotional harm to the victim by inducing fear and distress. The AI system's use in this scam call is a clear example of AI misuse leading to harm to a person, fulfilling the criteria for an AI Incident. The harm is realized and directly linked to the AI system's output (the cloned voice).

Sick scammers used AI to clone 15-year-old girl's voice in fake kidnapping hoax

2023-04-13
Daily Star
Why's our monitor labelling this an incident or hazard?
The use of AI to clone the victim's voice and create a fake kidnapping scenario directly caused harm by deceiving the mother, causing emotional distress and an attempted extortion. The AI system's malicious use led to realized harm, including psychological harm and attempted financial harm, which fits the definition of an AI Incident. The event involves the use of an AI system (voice cloning) and the harm is direct and materialized, not just potential.

Warning over terrifying 'AI kidnap attack' that raids your bank

2023-04-13
The US Sun
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning technology) in a malicious scam that directly caused harm to a person (psychological trauma) and posed a financial harm risk. The AI system's use was central to the scam's effectiveness, making it an AI Incident under the definition of harm to a person or group. The harm is realized (emotional trauma) and the potential financial harm is also present. Therefore, this qualifies as an AI Incident.

Mom Says Daughter's Voice Was Cloned By AI Technology To Create Seven-Figure Kidnapping Scam! OMG! - Perez Hilton

2023-04-14
Perez Hilton
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning technology) to create a synthetic voice that mimicked the victim's real voice. This AI-generated voice was used in a scam that caused direct harm to the victim's family by inducing fear, emotional distress, and an attempted ransom demand. The AI system's use here is central to the harm, fulfilling the criteria for an AI Incident as it directly led to harm to persons (psychological harm) and attempted financial harm.

Mom Says Creeps Used AI to Fake Daughter's Kidnapping

2023-04-15
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system for voice cloning, which was exploited by criminals to impersonate a victim and attempt a ransom scam. This directly led to harm in the form of emotional distress to the victim's family and an attempted financial crime. The AI system's use was central to the incident, fulfilling the criteria for an AI Incident due to realized harm caused by malicious use of AI-generated content.

Kidnapping scam uses artificial intelligence to clone teen girl's voice, mother issues warning

2023-04-13
ABC7
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of artificial intelligence to clone a teen girl's voice, which was then used in a kidnapping scam. This use of AI directly caused harm by deceiving the mother and causing emotional distress. The AI system's involvement is clear and central to the incident, fulfilling the criteria for an AI Incident due to harm to persons (psychological/emotional harm) and potential violation of rights.

AI Was Used To Clone A Teen Girl's Voice In A $1 Million Kidnapping Scam - Wonderful Engineering

2023-04-14
Wonderful Engineering
Why's our monitor labelling this an incident or hazard?
The use of AI to clone the victim's daughter's voice directly led to harm: emotional distress and financial loss. The AI system's use in the scam was pivotal in deceiving the victim, fulfilling the criteria for an AI Incident as it caused harm to a person. The event involves the use of an AI system (voice cloning) in a malicious context resulting in realized harm, not just potential harm or general information.

A mom thought her daughter had been kidnapped -- it was just AI mimicking her voice

2023-04-14
Popular Science
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice-cloning technology) to impersonate a person's voice in a scam that caused direct harm to the victim through emotional distress and threats of violence. The AI system's use was central to the scam's effectiveness and the resulting harm, meeting the definition of an AI Incident due to injury or harm to a person (psychological harm and threat of bodily harm).

Scammers Use AI to Clone Girl's Voice, Demand Mother Pay $1 Million Ransom

2023-04-14
Tech Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning technology) to impersonate a person's voice in a scam that directly caused psychological harm and a financial threat to the victim. The AI system's use led to a clear harm (psychological distress and attempted extortion), meeting the definition of an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in enabling the scam's effectiveness.

Mom Says Kidnapping Scammers Used AI To Recreate Her Daughter's Voice

2023-04-14
Scary Mommy
Why's our monitor labelling this an incident or hazard?
The article explicitly states that scammers used AI technology to recreate the daughter's voice to trick the mother into believing her daughter was kidnapped. This use of AI directly caused harm by inducing fear, emotional distress, and an attempted extortion, which fits the definition of an AI Incident involving harm to a person or group. The AI system's use was malicious and directly linked to the harm experienced, fulfilling the criteria for an AI Incident.

AI Cloning Used By Scammers In $1 Million Kidnapping Scheme

2023-04-13
Ubergizmo
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI voice cloning technology was used by scammers to imitate the victim's daughter's voice, which directly led to a kidnapping scam causing emotional harm and a financial threat. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (emotional distress and potential financial loss).

AI scam tricked mother into $1 million ransom for her 'kidnapped daughter'

2023-04-13
indy100.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI voice simulation technology to impersonate a person's voice convincingly, which directly caused harm to the victim through emotional distress and an attempted ransom scam. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (psychological harm and potential financial harm). The scam's success relied on the AI-generated voice, making the AI system pivotal in causing the harm. Therefore, this event qualifies as an AI Incident.

Terrifying new AI scam used teen girl's REAL voice to call her mother and demand $1million | Express Digest

2023-04-11
expressdigest.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to clone the teen girl's voice from a short sample and weaponize it in a scam call. The AI system's involvement is in the malicious use of voice cloning AI to impersonate the daughter, which directly caused emotional harm to the mother and posed a threat of physical harm. This fits the definition of an AI Incident as the AI system's use directly led to harm (psychological distress and potential threat to safety). The event is not merely a potential risk but an actual realized harm, so it is not an AI Hazard or Complementary Information. It is not unrelated because the AI system is central to the scam and harm.

How Scammers Use AI Voice-Cloning To Stage $1 Million Kidnapping Hoax - The Trent

2023-04-15
The Trent
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice-cloning AI) to generate synthetic audio that convincingly mimicked the victim's daughter's voice. This AI-generated voice was used maliciously to deceive and extort money, causing harm to the victim. The harm is realized and directly linked to the AI system's use in the scam. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's malicious use.

A mother reportedly got a scam call saying her daughter had been kidnapped and she'd have to pay a ransom. The 'kidnapper' cloned the daughter's voice using AI.

2023-04-13
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The use of AI voice cloning to impersonate a person and attempt to extort money constitutes direct involvement of an AI system in causing harm. The scammer's use of AI-generated audio led to an incident of attempted fraud and psychological distress, which falls under harm to persons. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm or attempted harm.

Arizona Mom Claims AI Kidnapping Scam Duplicated Her Daughter's Voice

2023-04-14
Inside Edition
Why's our monitor labelling this an incident or hazard?
The scammer used AI to duplicate the daughter's voice, which directly led to a threatening and harmful situation involving extortion and emotional distress. The AI system's use in voice duplication was pivotal to the scam, constituting an AI Incident due to the realized harm (psychological and potential financial harm) caused by the AI-enabled scam.

Arizona Mom Claims AI Cloned Daughter's Voice in Terrifying $1 Million Ransom Scam

2023-04-12
The Federalist Papers
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning technology to mimic the daughter's voice in a ransom scam. This AI system's use directly led to harm, including emotional distress to the mother and the potential for financial loss. The scam's nature involves violation of rights and harm to the individual and community through fraud and psychological impact. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's malicious use.

Wait, What!? Scammers Used AI To Clone Daughter's Voice So They Could Demand $1 Million Ransom From The Mother

2023-04-14
Hollywood Unlocked
Why's our monitor labelling this an incident or hazard?
The scammer used an AI system to clone the daughter's voice, which directly led to a harmful event involving emotional distress and an attempted financial scam. The AI system's use was central to the scam's effectiveness, fulfilling the criteria for an AI Incident due to harm to a person (emotional harm and attempted financial extortion).

'I've got your daughter': Arizona mom warns of close call with AI voice cloning scam - KION546

2023-04-11
KION546
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI voice cloning technology to create a realistic fake voice of a family member, which was then used in a scam call to extort money. This use of AI directly caused emotional harm and posed a risk of financial loss, fulfilling the criteria for an AI Incident. The AI system's role was pivotal in enabling the scam's effectiveness, and the harm is realized as the victim experienced terror and distress. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Mother Issues Warning After AI Uses Teen Daughter's Voice in Kidnapping Scam

2023-04-14
Baller Alert
Why's our monitor labelling this an incident or hazard?
The scammers used an AI system to clone the daughter's voice, which directly led to a harmful event involving an extortion attempt and emotional harm to the family. The AI system's use in the scam is central to the incident, fulfilling the criteria of an AI Incident due to harm to persons (emotional distress and extortion threat).

WATCH: Mother describes chilling AI phone scam faking daughter's kidnapping: 'It was completely her voice'

2023-04-14
Dennis Michael Lynch
Why's our monitor labelling this an incident or hazard?
The scam involves the use of AI voice cloning technology to generate a convincing fake voice of the daughter, which directly caused emotional distress to the mother and posed a risk of financial harm through ransom demands. The AI system's role is pivotal in enabling the fraudsters to convincingly impersonate the daughter, leading to realized harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use.

Arizona mum warns of close call with AI voice clone - Internewscast

2023-04-12
Internewscast
Why's our monitor labelling this an incident or hazard?
An AI system (voice cloning based on deep learning) was used maliciously to impersonate a person's daughter, attempting to scam money. This constitutes indirect harm to the person targeted (potential financial harm and emotional distress). The event involves the use of AI technology leading directly to a harmful incident (scam attempt). Although the scam was unsuccessful, the harm was imminent and the AI system's role was pivotal in enabling the deception. Therefore, this qualifies as an AI Incident.

Scottsdale mother warns of A.I. kidnapping hoax

2023-04-13
KGUN
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a synthetic voice to impersonate a person, which directly caused harm to the mother through emotional distress and led to police being contacted unnecessarily. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (psychological harm) and disruption of emergency response resources. Therefore, it is classified as an AI Incident.

Mom warns of hoax using AI to clone daughter's voice

2023-04-13
MyCentralOregon.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI voice cloning technology to impersonate a person's voice in a criminal scam, which directly led to psychological harm and a threat of kidnapping ransom. This fits the definition of an AI Incident because the AI system's use directly led to harm (psychological distress and potential for further harm). The presence of AI is explicit (voice cloning), and the harm is realized, not just potential. Therefore, this is classified as an AI Incident.

Her daughter 'kidnapped' using artificial intelligence... an extremely suspicious scam

2023-04-15
صحيفة عكاظ
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a realistic voice simulation, which was used maliciously to cause emotional distress and attempt financial fraud. The AI system's use directly led to harm in the form of psychological harm to the mother and an attempted financial scam. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in the scam.

The first operation of its kind... a kidnapping crime using artificial intelligence in one country - photo - الوكيل الاخباري

2023-04-15
الوكيل الاخباري
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice synthesis technology) to commit fraud, which directly caused harm by inducing fear and attempting extortion. The AI's role was pivotal in enabling the scam through voice imitation. Therefore, this qualifies as an AI Incident due to realized harm (psychological distress and attempted financial harm) caused by the AI system's malicious use.

Her daughter 'kidnapped' using artificial intelligence... the strangest scam, with a $1 million ransom demand in America - اليوم السابع

2023-04-15
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice simulation technology) to perpetrate a scam that caused direct harm by inducing fear and extorting money. The AI system's use directly led to the harm (psychological distress and attempted financial extortion), fitting the definition of an AI Incident under violations of rights and harm to persons. Therefore, this is classified as an AI Incident.

A suspicious scam... a gang 'kidnaps' a girl using artificial intelligence to demand a $1 million ransom - اليوم السابع

2023-04-16
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a realistic voice simulation of the daughter, which was used maliciously to cause fear and extort money. The AI system's use directly led to harm (psychological distress and attempted financial fraud). Therefore, it meets the criteria for an AI Incident as the AI system's use directly caused harm to a person (the mother) and attempted harm (financial extortion).

They demanded a huge ransom... a gang's ploy to 'kidnap' a girl using artificial intelligence

2023-04-16
مصراوي.كوم
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice synthesis technology) to commit fraud by simulating a human voice to deceive the victim's mother. This use of AI directly led to psychological harm and an attempted financial extortion, which qualifies as harm to persons and potentially to property (financial harm). Therefore, this is an AI Incident because the AI system's use directly caused harm through malicious impersonation and fraud.

A suspicious scam... a kidnapping using artificial intelligence

2023-04-15
البيان
Why's our monitor labelling this an incident or hazard?
The use of AI to simulate the victim's voice directly led to a harmful event involving psychological distress and attempted financial extortion. The AI system's malicious use caused harm to the victim's family, fitting the definition of an AI Incident due to violation of personal security and potential harm to individuals.

A shocking scam... a girl 'kidnapped' using artificial intelligence | Al Bawaba

2023-04-16
Al Bawaba
Why's our monitor labelling this an incident or hazard?
The use of AI to generate a realistic voice impersonation directly caused harm by enabling a scam that threatened the victim's safety and demanded a large ransom. This constitutes an AI Incident because the AI system's use led to realized harm (psychological distress and attempted financial extortion).

A gang uses artificial intelligence to demand a $1 million ransom

2023-04-16
akhbarona.com
Why's our monitor labelling this an incident or hazard?
The event describes a scam where AI-generated voice cloning was used to impersonate a person's daughter to extort money. The AI system's use directly caused harm by inducing fear and emotional distress and attempting to extort a large sum of money. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (psychological harm and attempted financial harm).

Scammers employ artificial intelligence techniques to convince a family that their daughter has been kidnapped in order to demand a ransom

2023-04-15
مغرس
Why's our monitor labelling this an incident or hazard?
The use of AI to simulate the victim's voice for a ransom scam directly caused harm by deceiving the family and causing emotional distress and potential financial loss. The AI system's malicious use is central to the incident, fulfilling the criteria for an AI Incident as it led to realized harm through fraudulent activity and psychological impact.

An extremely suspicious scam... her daughter 'kidnapped' using artificial intelligence!

2023-04-15
https://kataeb.org/
Why's our monitor labelling this an incident or hazard?
The use of AI to generate a fake voice that led to an attempted extortion constitutes direct involvement of an AI system in causing harm. The harm includes psychological distress and potential financial loss, which fits under harm to persons or groups. Therefore, this qualifies as an AI Incident due to the realized harm caused by the malicious use of AI-generated voice simulation.

Her daughter 'kidnapped' using artificial intelligence... an extremely suspicious scam - وكالة أوقات الشام الإخبارية

2023-04-16
وكالة أوقات الشام الإخبارية
Why's our monitor labelling this an incident or hazard?
The use of AI to simulate the daughter's voice constitutes the involvement of an AI system. The AI system's use directly led to a harmful event: a scam causing emotional distress and an attempted extortion, which qualifies as harm to a person or group. Therefore, this event meets the criteria of an AI Incident due to the realized harm caused by the malicious use of AI-generated voice simulation in a fraud scheme.

Scammers reportedly swindling people using their relatives' voices; here is how they operate

2023-04-16
PULZO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning technology to impersonate a victim's daughter, leading to an extortion attempt that caused emotional distress and potential financial harm. The AI system's use directly led to harm (emotional and potential financial), fulfilling the criteria for an AI Incident. The involvement of AI is clear, the harm is realized, and the event is not merely a potential risk or complementary information but a concrete incident of AI misuse causing harm.

Arizona mother warns about AI voice cloning after a kidnapping scam

2023-04-13
LA GRAN ÉPOCA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning technology to create a realistic fake voice of the victim's daughter, which was used in a scam call to extort money. This involves the use of an AI system (voice cloning) in a malicious way that directly caused emotional harm to the victim and posed a risk of financial harm. The harm is realized, not just potential, as the victim was deceived and emotionally distressed. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm to a person (emotional distress and potential financial loss).

Artificial intelligence cloned a teenager's voice for a kidnapping scam in Arizona

2023-04-13
Minuto30.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to clone a person's voice, which was then used maliciously in a scam to extort money. This caused direct harm to the victim's family through emotional distress and the threat of financial loss. The AI system's use in this fraudulent activity meets the criteria for an AI Incident, as it directly led to harm to persons (psychological harm) and communities (through the scam).

Kidnappers use AI to clone a young woman's voice and demand $1 million

2023-04-13
Mag.
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to clone the voice of the daughter, which was then used by a criminal to impersonate her and demand ransom. This use of AI directly caused harm to the mother and family through emotional distress and extortion attempt. The AI system's involvement is clear and central to the incident, fulfilling the criteria for an AI Incident as it directly led to harm to persons and violation of rights.

Artificial intelligence: "Kidnapper" nearly deceives mother by generating her daughter's voice

2023-04-13
RPP noticias
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system capable of voice synthesis to impersonate a human voice with high fidelity. The AI system's use directly led to an attempted extortion, which constitutes harm to a person (emotional harm and potential financial harm). Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm through malicious use of AI-generated voice cloning for fraud and extortion.

A fake kidnapping confirms one of the main fears about AI: "I didn't doubt for a second that it was her"

2023-04-14
3D Juegos
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to clone the daughter's voice, which was used to deceive the mother into believing a kidnapping was real. This is a direct use of an AI system (voice cloning) leading to emotional harm and attempted financial harm (extortion). The harm is realized, not just potential, as the victim was deceived and emotionally affected, and a ransom demand was made. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (emotional distress) and attempted harm to property (money).

Hearing a relative's voice on a call demanding a ransom for them... while they are safe and sound: AI takes scams to another level

2023-04-14
Genbeta
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning technology to impersonate a family member's voice in a ransom scam call. This use of AI directly led to psychological harm and an attempted financial scam, fulfilling the criteria for an AI Incident. The AI system's use was malicious and caused harm, even though the victim was ultimately safe. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's use in a criminal scam.

Artificial intelligence clones a teenager's voice in an Arizona kidnapping scam: "It was exactly like her voice"

2023-04-13
EL IMPARCIAL | Noticias de México y el mundo
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to clone the teenager's voice, which was then used in a scam to extort money by impersonating the victim. This caused direct psychological harm and distress to the mother and family, fulfilling harm to persons and communities. The AI system's use was central to the scam's success, making it an AI Incident. There is no indication that this is only a potential risk or a complementary update; the harm has already occurred.

Voice cloning with artificial intelligence? Warning issued about 'fake kidnappings'

2023-04-13
Vanguardia
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to clone the daughter's voice to simulate a kidnapping, which caused significant emotional harm to the mother and an attempted extortion. The AI system's use directly led to realized harm (emotional distress and fraud attempt), meeting the criteria for an AI Incident under harm to persons and communities. Therefore, this event is classified as an AI Incident.

Artificial intelligence is already being used for extortion: they are cloning voices!

2023-04-14
MVS Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI for voice cloning to impersonate a victim's daughter, which was used to extort money through threats. The AI system's use directly caused harm (psychological harm and extortion attempt), fulfilling the criteria for an AI Incident. The involvement of AI is clear and central to the harm described, and the harm is realized, not just potential.

Criminals used artificial intelligence to clone a girl's voice and extort her mother | NTN24.COM

2023-04-14
NTN24 | Últimas Noticias de América y el Mundo.
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system for voice cloning to perpetrate a criminal extortion attempt, which caused psychological harm to the victim. The AI system's use was central to the deception and the resulting harm. Therefore, this qualifies as an AI Incident under the definition of an event where AI use has directly or indirectly led to harm to a person.

Read more

2023-04-16
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (voice cloning technology) in a criminal extortion attempt, which directly harms the victim by causing psychological distress and threatening physical harm to a family member. This fits the definition of an AI Incident because the AI system's use has directly led to harm (psychological and potential physical harm through extortion).

The first of its kind... a kidnapping crime using artificial intelligence

2023-04-14
جريدة الوطن
Why's our monitor labelling this an incident or hazard?
The use of AI to clone the victim's voice is explicitly mentioned, indicating the involvement of an AI system in the scam. The AI system's use directly led to harm by enabling the fraudsters to convincingly impersonate the victim, causing emotional distress and potential further harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in a criminal context.

Crime enters a new era... the first scam carried out with artificial intelligence

2023-04-15
مصراوي.كوم
Why's our monitor labelling this an incident or hazard?
The use of AI voice cloning technology to impersonate a person and commit fraud constitutes direct involvement of an AI system in causing harm. The harm includes financial loss and psychological harm to individuals, fitting the definition of an AI Incident. The AI system's use was central to the crime's execution, making this an AI Incident rather than a hazard or complementary information.

The first of its kind... artificial intelligence as a 'partner' in a kidnapping crime

2023-04-14
البيان
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning technology to commit a crime involving impersonation and extortion. The AI system's use directly led to harm (psychological distress and attempted financial fraud). The event involves the use and misuse of an AI system, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in enabling the crime.

Using artificial intelligence: scammers clone a girl's voice to fake her kidnapping

2023-04-14
annahar.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (voice cloning technology) to produce a synthetic voice that was used in a fraudulent kidnapping call. This use of AI directly led to harm, including psychological distress to the victim's mother and attempted financial fraud, which qualifies as harm to persons and communities. Therefore, this constitutes an AI Incident as the AI system's use directly caused harm.

"Artificial intelligence"... a weapon in the hands of criminals... kidnapping!

2023-04-16
صحيفة عكاظ
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI for voice cloning, which is an AI system. The AI's use directly led to harm by enabling a scam involving impersonation and extortion, causing emotional distress and potential financial loss. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's malicious use.

Striking details... the first kidnapping crime using artificial intelligence - صحيفة تواصل الالكترونية

2023-04-14
صحيفة تواصل الاخبارية www.twasul.info
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning technology) to create synthetic audio that was used in a scam to impersonate a person and extort money. This use of AI directly caused harm to the victim's family through psychological distress and financial threat. The AI system's involvement is explicit and central to the incident. Therefore, this qualifies as an AI Incident under the definition of an event where AI use has directly led to harm.

The first of its kind... a kidnapping crime using artificial intelligence | صحيفة تواصل نيوز

2023-04-14
تواصل
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning technology) to commit fraud, which directly caused harm (financial loss and emotional distress). The AI system's use was central to the crime, enabling the impersonation and deception. This fits the definition of an AI Incident because the AI system's use directly led to harm to persons (emotional and financial harm).

Scammers clone girl's voice using AI in 'kidnapping scam,' demand $1 million as ransom

2023-04-17
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to clone a human voice, which was then used in a kidnapping scam causing direct harm to the victim's family through psychological distress and potential financial loss. The AI system's role is pivotal in enabling the scam, fulfilling the criteria for an AI Incident due to realized harm caused by the AI-enabled voice cloning technology.

Mother receives ransom AI phone call about her daughter's kidnapping

2023-04-19
TweakTown
Why's our monitor labelling this an incident or hazard?
The incident involves the use of an AI system (voice cloning technology) to generate a realistic fake voice of the daughter, which was used maliciously to cause emotional distress and attempt extortion. The harm is realized in the form of psychological trauma and potential financial loss, fitting the definition of an AI Incident due to direct harm caused by the AI system's use in a scam.

Woman Claims AI Cloned Her Daughter's Voice In $1 Million Kidnapping Scam

2023-04-17
NDTV
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to clone the daughter's voice, which was then used in a kidnapping scam demanding ransom. This constitutes direct harm to the victim's family through psychological distress and attempted financial extortion. The AI system's use was central to the scam's effectiveness, making this an AI Incident under the definition of harm to persons and communities through malicious use of AI-generated content.

Scammers use AI to clone teenager's voice in kidnapping scam, demand USD 1 million from mother

2023-04-16
India Today
Why's our monitor labelling this an incident or hazard?
The incident involves the use of AI voice cloning technology to impersonate a person and perpetrate a scam, which directly caused harm to the victim's family through emotional distress and an extortion attempt. The AI system's role is pivotal as it enabled the scammers to convincingly mimic the teenager's voice, leading to the scam's effectiveness. This fits the definition of an AI Incident because the AI system's use directly led to harm to persons (psychological harm and criminal extortion).

AI kidnapping scam copied teen girl's voice in $1M extortion attempt - National | Globalnews.ca

2023-04-18
Global News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI software was used to clone the teen girl's voice, which was then used in a scam to extort money by falsely claiming the daughter was kidnapped. This use of AI directly caused harm by deceiving the mother and causing emotional trauma, as well as posing a financial threat. The involvement of AI in generating the synthetic voice is central to the incident, and the harm is realized, not just potential. Therefore, this qualifies as an AI Incident under the framework, as it involves the use of an AI system leading directly to harm (psychological and financial extortion).

Mother Says AI Was Used To Clone 'Kidnapped' Daughter's Voice to Fool Her, Fake Abduction of Her Child

2023-04-18
Science Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI voice cloning technology to impersonate a victim's voice in a ransom scam, which directly caused psychological harm and an attempted criminal act. The AI system's use was malicious and led to realized harm, meeting the criteria for an AI Incident. The harm is to the health and well-being of the person (psychological distress) and the community (scam attempt).

I've got your daughter: AI voice cloning scam - USAHITMAN Conspiracy News

2023-04-18
USAHitman.com - Conspiracy News & Much More
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning technology to create a convincing fake voice of the victim's daughter, which was used maliciously in a scam. This caused direct harm to the victim's mental health and posed a risk of financial harm through extortion attempts. The AI system's role is pivotal as it enabled the scammer to convincingly impersonate the daughter, leading to the harm described. Therefore, this qualifies as an AI Incident due to realized harm caused by the malicious use of an AI system.

Kidnappers Used AI To Clone Girl's Voice, Demand $1M - DailyAlts -

2023-04-17
DailyAlts
Why's our monitor labelling this an incident or hazard?
The incident explicitly involves the use of AI to clone the victim's voice, which was then used in a kidnapping scam demanding ransom. This constitutes direct harm through deception and extortion, fulfilling the criteria for an AI Incident as the AI system's use directly led to harm to the individuals involved (psychological distress and criminal extortion).

Mom warns of AI scam after receiving call claiming child was kidnapped

2023-04-18
Denver 7 Colorado News
Why's our monitor labelling this an incident or hazard?
The use of AI voice synthesis technology to impersonate a person's voice in a kidnapping scam directly led to psychological harm and fear for the victim and her family. The AI system's role was pivotal in enabling the scam by generating a realistic voice that deceived the mother. This meets the criteria for an AI Incident as the AI system's use directly caused harm to persons through a malicious scam.

Mom warns of AI scam after receiving call claiming child was kidnapped

2023-04-18
Scripps News
Why's our monitor labelling this an incident or hazard?
The scam involved the use of AI to replicate the daughter's voice, which directly led to emotional and psychological harm to the mother and her family. The AI system's use in generating the voice was pivotal to the scam's effectiveness and the resulting trauma. Therefore, this qualifies as an AI Incident due to harm to persons (psychological harm) caused by the AI-enabled scam.

"Help!": Artificial intelligence imitates a teenager's voice to fake a kidnapping and demand a ransom

2023-04-20
Radio Duna
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to clone the teenager's voice to impersonate her and simulate a kidnapping, which is a misuse of AI technology causing direct harm to the victim's family through emotional distress and attempted extortion. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (psychological harm and attempted financial harm).

Warning issued about virtual kidnappings using voices cloned by artificial intelligence

2023-04-17
El Universal
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based voice cloning technology to impersonate a family member's voice in a phone scam involving a virtual kidnapping and extortion attempt. The AI system's involvement is in the use of voice cloning technology to deceive the victim, which directly led to harm in the form of psychological distress and attempted financial extortion. Therefore, this qualifies as an AI Incident under the definition of an event where AI use has directly or indirectly led to harm to persons (psychological harm and extortion).

A teenager's voice is cloned with AI to extort her family (+video) - Diario Primicia

2023-04-20
Diario Primicia
Why's our monitor labelling this an incident or hazard?
The article explicitly states that criminals used AI to clone the voice of a 15-year-old girl to extort her family by pretending she was kidnapped. This use of AI directly caused psychological harm and a threat to personal safety, which qualifies as harm to a person or group. The AI system's involvement is central to the incident, as the voice cloning enabled the extortion attempt. Therefore, this event is classified as an AI Incident.

Scammers used artificial intelligence to imitate a teenager's voice and fake her kidnapping

2023-04-20
BioBioChile
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning AI) to simulate a person's voice in a criminal scam, directly causing psychological harm and an attempted extortion. The AI's role is pivotal in enabling the fraudsters to convincingly impersonate the victim's daughter. This constitutes an AI Incident because the AI system's use directly led to harm (emotional distress and attempted financial harm) to the victim. Therefore, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Terrifying: AI clones a girl's voice and her family is extorted | Digital Trends Español

2023-04-19
Digital Trends Español
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to clone the voice of a young woman, which was then used by criminals to simulate a kidnapping and demand ransom. This use of AI directly led to harm in the form of emotional distress and extortion attempts against the family, fulfilling the criteria for an AI Incident. The AI system's use in voice cloning was pivotal to the harm caused, and the incident involves the malicious use of AI technology.

From Scams to Music, AI Voice Cloning Is on the Rise

2023-04-29
MSN International Edition
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice cloning technology being used to perpetrate scam calls that caused fear and distress to a family, which constitutes direct harm to people (a form of injury or harm to persons). Additionally, the unauthorized AI-generated music tracks using artists' voices without consent implicate violations of intellectual property rights and potential reputational harm, which fall under violations of human rights or intellectual property rights. Therefore, the event involves AI system use leading directly to harms (scams and rights violations), qualifying it as an AI Incident.

'Mom, these bad men have me': She believes scammers cloned her daughter's voice in a fake kidnapping

2023-04-29
MSN International Edition
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning technology to create a convincing fake voice of the victim's daughter, which was used to perpetrate a kidnapping scam. This led to direct harm in the form of emotional distress and a ransom demand, fulfilling the criteria for an AI Incident. The AI system's malicious use directly caused harm to the victim, meeting the definition of an AI Incident under the framework.

'Mom, these bad men have me': She believes scammers cloned her daughter's voice in a fake kidnapping | CNN

2023-04-29
CNN International
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning technology by scammers to create a fake kidnapping scenario. The AI system's involvement is in the use of voice cloning to impersonate the victim's daughter, which directly caused psychological harm and financial threat to the family. This fits the definition of an AI Incident because the AI system's use directly led to harm to a person (psychological distress) and potential financial harm. The event is not merely a potential risk but an actual realized harm, thus qualifying as an AI Incident rather than an AI Hazard or Complementary Information.

AI Voice Cloning Is on the Rise. Here's What to Know

2023-04-29
TIME
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice cloning technology being used to perpetrate scam calls that caused fear and distress to a family, which is a direct harm to individuals (psychological harm and potential financial harm). This meets the criteria for harm to persons. Furthermore, the unauthorized AI-generated music tracks infringe on artists' rights and reputations, constituting violations of intellectual property rights. Both aspects involve the use of AI systems leading to realized harms, qualifying the event as an AI Incident rather than a hazard or complementary information.

Scammers use AI to clone voice in US, attempt fake kidnapping call: Report

2023-04-29
mint
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of artificial intelligence to clone a voice in a scam involving a fake kidnapping call. The scam caused emotional harm and financial loss to victims, which fits the definition of harm to persons or groups. The AI system's use was central to the scam's effectiveness, directly leading to harm. Hence, this is an AI Incident involving the malicious use of an AI system causing realized harm.

'Mom, these bad men have me': She believes scammers cloned her daughter's voice in a fake kidnapping

2023-05-01
CTV News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice cloning technology by scammers to create a convincing fake voice of the victim's daughter, which was used to perpetrate a kidnapping scam. This caused direct harm to the victim through emotional distress and the threat of financial extortion. The AI system's involvement is central to the incident, as the cloned voice was pivotal in deceiving the victim and enabling the scam. Therefore, this qualifies as an AI Incident due to realized harm caused by the use of an AI system.

"Everything was so real": Virtual kidnapping scams made more realistic with AI

2023-04-30
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-powered voice filters to clone voices, which were used by scammers to conduct a virtual kidnapping scam. This use of AI directly led to harm in the form of psychological distress and attempted extortion, fulfilling the criteria for an AI Incident. The AI system's role was pivotal in making the scam realistic and convincing, thus directly contributing to the harm experienced by the victim.

Mom: Kidnap Scam Used AI Clone of Daughter's Voice

2023-04-30
Newser
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to clone a human voice to perpetrate a kidnapping scam. The AI's use directly led to psychological harm and fear, fulfilling the criteria for an AI Incident under harm to persons and communities. The scam's use of AI voice cloning is a direct cause of the harm experienced, even though the kidnapping was fake. Therefore, this qualifies as an AI Incident.

US mother gets terrifying call from sobbing daughter, saying she'd been kidnapped but it was a scam | Newshub

2023-04-29
Newshub
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice cloning to perpetrate a kidnapping scam, which directly caused emotional harm and financial risk to the victim's family. The AI system's role is pivotal in making the scam believable and effective, leading to realized harm (psychological distress and potential financial loss). The article explicitly links the scam's sophistication to AI voice cloning technology, fulfilling the criteria for an AI Incident as the AI system's use directly led to harm.

'Mom, these bad men have me': She believes scammers cloned her daughter's voice in a fake kidnapping | News Channel 3-12

2023-04-29
NewsChannel 3-12
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses scammers using AI voice cloning technology to create a convincing fake voice of the victim's daughter, which was used to perpetrate a kidnapping scam. This AI-enabled impersonation directly caused harm to the victim's family through emotional distress and extortion attempts. The involvement of AI in the scam's execution and the realized harm to the family meet the definition of an AI Incident, as the AI system's use directly led to harm to a person (psychological harm and financial threat).

They clone a young woman's voice and make her look like she's been kidnapped

2023-04-30
MoviesOnline
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI to clone a young woman's voice, which was then used in a scam to make her mother believe she was kidnapped. This use of AI directly led to psychological harm and distress to the victim's family, constituting harm to a person. The AI system's use here is malicious and directly caused harm, fitting the definition of an AI Incident.

Artificial intelligence: criminals clone a teenager's voice to fake her abduction - The Bobr Times

2023-04-29
bobrtimes.com
Why's our monitor labelling this an incident or hazard?
The criminals used AI-based voice cloning technology to impersonate the teenager's voice, leading to a scam attempt and emotional harm to the mother. The AI system's use was central to the incident, as the cloned voice was the key factor in the deception and panic caused. This fits the definition of an AI Incident because the AI system's use directly led to harm (psychological distress and attempted financial fraud).