AI-Generated Content Used in Social Engineering Scams During Lunar New Year in Taiwan


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Hackers in Taiwan are using AI-generated voice and video messages to impersonate acquaintances and distribute malicious links disguised as Lunar New Year greetings. These AI-enabled social engineering attacks have led to personal data breaches and financial losses. Authorities urge the public to verify suspicious messages and avoid clicking unknown links or attachments.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves malicious actors using AI to create realistic fake greetings and messages that trick users into compromising their personal information or devices. This use of AI has directly led to harms, namely personal data breaches and financial losses, which constitute violations of rights and harm to individuals. It therefore qualifies as an AI Incident: the AI system's use in social engineering attacks has directly resulted in realized harm.[AI generated]
AI principles
Privacy & data governance
Transparency & explainability

Industries
Digital security

Affected stakeholders
General public

Harm types
Human or fundamental rights
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Hidden tricks in Lunar New Year greetings? The Ministry of Digital Affairs teaches three steps to spot New Year social engineering traps

2026-02-17
NOWnews 今日新聞

Could AI New Year greeting videos carry malware? The Administration for Cyber Security reveals a new social engineering tactic

2026-02-16
自由時報電子報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content being used as a tool in social engineering attacks, which involves AI systems in the use phase. The harms described (privacy breaches, financial loss) are real harms that can result from such attacks. However, the article does not report a concrete incident; it warns about the potential for one. It therefore fits the definition of an AI Hazard: circumstances where AI use could plausibly lead to harm, without a documented harm event. Its advisory nature and focus on risk prevention confirm it is not Complementary Information or Unrelated news.

Hidden tricks in Lunar New Year greetings? The Administration for Cyber Security's three steps to spot New Year social engineering traps

2026-02-17
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The event involves malicious actors misusing AI systems (AI-generated content and deepfake-style voice or video messages) to conduct social engineering attacks that have directly led to personal data breaches and financial losses. This fits the definition of an AI Incident: the AI system's use, including malicious use, has directly harmed persons (through fraud and data theft) and communities (through social engineering scams). Because the article warns about ongoing AI-enabled scams that are already causing harm, rather than discussing only a future risk, it is classified as an AI Incident rather than an AI Hazard or Complementary Information.

Beware of scams hidden in New Year greetings: three "stop, look, listen" safeguards

2026-02-17
大紀元時報 - 台灣(The Epoch Times - Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content being used by hackers to disguise malicious programs as greetings, indicating AI system involvement in the use phase (malicious use). However, it does not describe a realized harm event but warns about potential harm such as data breaches or financial loss. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm, but no specific incident of harm is reported.

Lunar New Year scams abound! The Administration for Cyber Security teaches three steps to identify fake greeting messages

2026-02-17
壹蘋新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology by hackers to create realistic voice or video messages to impersonate acquaintances and deceive users, resulting in actual harms such as personal data leakage and financial loss. This constitutes direct involvement of AI systems in causing harm to people, fitting the definition of an AI Incident. The article's main focus is on the realized harms caused by AI-enabled scams, not just potential risks or general information, so it is classified as an AI Incident.

Hidden tricks in Lunar New Year greetings? The Ministry of Digital Affairs' three steps to recognize social engineering traps

2026-02-17
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being used by hackers to create convincing fake voice and video messages for social engineering attacks, which have directly led to harms such as personal data breaches and financial losses. This fits the definition of an AI Incident because the AI system's use in generating deceptive content is a contributing factor to the realized harm. The article also provides guidance for mitigating these harms, but its primary focus is the ongoing AI-enabled social engineering attacks causing harm.