AI-Driven Online Financial Scams Surge in Bulgaria

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

European financial regulators warn of a sharp rise in online financial scams in Bulgaria, enabled by AI-generated fake messages, profiles, voices, and videos. Criminals use these technologies to impersonate trusted individuals, leading to financial loss, identity theft, and psychological harm among victims.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI-generated content (voices, videos, messages) by scammers to perpetrate financial frauds that have already caused harm, such as financial loss and psychological stress. The use of AI systems is central to the harm, as it enables more convincing and effective scams. This fits the definition of an AI Incident because the AI systems' use has directly led to harm to persons (financial and psychological). The article is not merely a warning about a potential risk but describes realized harms due to AI-enabled scams.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Financial and insurance services; Digital security

Affected stakeholders
Consumers

Harm types
Economic/Property; Psychological; Human or fundamental rights

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard

Online Scams Using Artificial Intelligence Are on the Rise

2026-03-02
Actualno.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content (voices, videos, messages) by scammers to perpetrate financial frauds that have already caused harm, such as financial loss and psychological stress. The use of AI systems is central to the harm, as it enables more convincing and effective scams. This fits the definition of an AI Incident because the AI systems' use has directly led to harm to persons (financial and psychological). The article is not merely a warning about a potential risk but describes realized harms due to AI-enabled scams.
Online Scams Using Artificial Intelligence Are on the Rise

2026-03-02
Fakti.bg
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake and generative AI technologies) in the perpetration of online financial scams, which have caused actual harm such as financial loss and psychological stress. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people and communities. The article does not merely warn about potential future harm but describes ongoing harm caused by AI-enabled scams. Hence, the classification is AI Incident.
Rise in Scams Using Artificial Intelligence -- OFFNews

2026-03-02
offnews.bg
Why's our monitor labelling this an incident or hazard?
The event describes actual harms caused by the use of AI systems in online financial scams, including financial loss and psychological stress, which fall under harm to persons and communities. The AI systems are used maliciously to generate convincing fake content that facilitates these scams. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm.
Online Financial Scams Using Artificial Intelligence Are on the Rise

2026-03-02
Investor.bg
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for deepfakes and content generation) in the commission of financial scams that have caused realized harm to individuals (financial loss, identity theft, psychological harm). The AI's role is pivotal in enabling more convincing and effective scams. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to financial and psychological harm to people, fitting the definition under harm categories (a) injury or harm to persons and (e) other significant harms in which the AI's role is pivotal.
Financial Supervision Commission: Online Scams Using Artificial Intelligence

2026-03-02
Българска Телеграфна Агенция
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for creating fake voices, videos, and messages) in the commission of online financial scams, which have directly led to harm to individuals (financial loss, identity theft, psychological stress). Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm to people.
Cases of Online Financial Scams Using Artificial Intelligence Are Increasing, European Supervisory Authorities Warn

2026-03-02
Българска Телеграфна Агенция
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for deepfake voices, videos, and messages) in the commission of online financial scams that have already caused harm to people, including financial loss and identity theft. This meets the definition of an AI Incident because the AI system's use has directly led to harm to persons and communities. The article is not merely a warning or potential risk but describes ongoing harms caused by AI-enabled scams. Hence, it is classified as an AI Incident.
Online Scams Using Artificial Intelligence Are on the Rise | www.pariteni.bg

2026-03-02
pariteni.bg
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used by criminals to generate fake messages, profiles, voices, and videos that facilitate online financial scams. These scams have already resulted in actual harm to people, including financial loss and identity theft, which fall under harm to persons and communities. The AI system's use is central to the incident as it enables the creation of convincing fraudulent content. Hence, this qualifies as an AI Incident due to the direct involvement of AI in causing harm.