AI-Generated Voices Used in Phone Scams Cause Financial Losses in Lithuania


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Scammers in Lithuania are using AI-generated synthetic voices to conduct phone scams, deceiving even tech-savvy individuals and causing financial losses. The advanced AI tools enable convincing, accent-free conversations, making it harder for victims to detect fraud. Insurance company BTA reports increasing sophistication and harm from these AI-enabled scams.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems that generate natural-sounding synthetic voices to conduct phone scams, which directly cause financial harm to people. The article explicitly states that AI-generated voices are used by scammers to deceive victims, leading to actual losses. This fits the definition of an AI Incident because the AI system's use has directly led to harm (financial loss) to individuals. The article does not merely warn about potential harm or discuss responses but reports on ongoing harm caused by AI-enabled scams.[AI generated]
AI principles
Robustness & digital security; Transparency & explainability

Industries
Digital security

Affected stakeholders
General public

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Phone scammers now speak Lithuanian: artificial intelligence deceives even the most vigilant

2026-03-30
lrytas.lt

A warning: AI hallucinations work in both directions - they can reinforce and deepen our own illusions

2026-03-30
lrytas.lt
Why's our monitor labelling this an incident or hazard?
Generative AI systems were involved in creating false information and in influencing harmful behavior, which constitutes direct or indirect harm to people and communities. The fabricated content, together with the planning of a violent act linked to interaction with an AI chatbot, meets the criteria for an AI Incident, as the AI's outputs have directly or indirectly led to harm or a significant risk of harm.

Does active use of artificial intelligence drain the brain?

2026-03-30
diena.lt
Why's our monitor labelling this an incident or hazard?
The article centers on users' experiences of mental fatigue linked to extensive AI tool use, a form of cognitive strain rather than a direct harm caused by AI malfunction or misuse. There is no indication of injury, rights violations, or other harms materializing from AI use; the discussion concerns potential risks and management strategies rather than an actual incident or a credible imminent hazard. It therefore fits the definition of Complementary Information: it enhances understanding of AI's impact and suggests organizational responses without reporting a new AI Incident or AI Hazard.

A new class war: why skills in using AI will determine the winners of the future

2026-03-29
Dienraštis Vakaru ekspresas
Why's our monitor labelling this an incident or hazard?
The article focuses on the broader societal and economic implications of AI skill disparities and the resulting class divide. It does not report a concrete AI Incident (harm realized) or an imminent AI Hazard (plausible near-term harm); instead it discusses trends, research findings, and expert opinions about potential future risks and the need for policy responses. It therefore fits best as Complementary Information, providing context on AI's impact on labor markets and inequality without describing a specific harmful event or hazard.