HKCERT warns of AI-powered phishing surge and deepfake scams


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

HKCERT handled 7,752 security incidents in 2023, with phishing rising 27% to account for nearly half of all cases. Malicious actors increasingly use AI-based tools such as WormGPT to generate malware code and deepfake videos for phishing. HKCERT forecasts five security risks for 2024, urging organisations to keep backups up to date, manage vulnerabilities and detect deepfakes.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes a current situation in which AI is already being used by hackers to conduct phishing and cyberattacks that are causing harm to individuals and organizations (harm to property, communities and, potentially, finances). The use of AI tools such as WormGPT and deepfake technology for malicious purposes is explicitly cited as contributing to these attacks. This therefore qualifies as an AI Incident, because the use of AI systems has directly or indirectly led to realized harms through cybercrime. The article also discusses plausible future harms from AI weaponization in cyberattacks, but since harm is already occurring, the classification prioritizes AI Incident over AI Hazard.[AI generated]
AI principles
Safety, Privacy & data governance, Respect of human rights, Robustness & digital security, Transparency & explainability

Industries
Digital security

Affected stakeholders
Consumers, Business

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Hong Kong phishing hits a 5-year high; information security sector fears "AI weaponisation" - 大紀元

2024-02-02
The Epoch Times

Phishing at a 5-year high; Computer Emergency Response Coordination Centre urges guarding against AI scams - 20240202 - Hong Kong news

2024-02-01
明報新聞網 (Ming Pao) - instant news
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI systems (generative AI, deepfake technology) by malicious actors to conduct phishing and cyberattacks, which could plausibly lead to harms including financial loss and deception. The event is about the potential and ongoing risk of AI-enhanced cybercrime rather than a specific realized incident causing harm. Therefore, it fits the definition of an AI Hazard, as it highlights credible future risks stemming from AI use in cyberattacks without detailing a particular AI-caused harm event.

Hong Kong Computer Emergency Response Team Coordination Centre handled over 7,700 security incidents last year, down 8% - RTHK

2024-02-01
news.rthk.hk
Why's our monitor labelling this an incident or hazard?
The article discusses the current cybersecurity landscape and the potential for AI to be used maliciously by hackers, including generating malware, deepfakes, and phishing websites. However, it does not describe a specific AI incident where harm has already occurred. Instead, it outlines credible risks and warnings about how AI could plausibly lead to cybersecurity incidents and fraud in the future. Therefore, this qualifies as an AI Hazard, as it highlights plausible future harms stemming from AI use in cyberattacks and fraud, but no realized harm is reported in this article.

7,752 phishing attacks last year set a 5-year high; IT sector urges reducing the sharing of personal identity features

2024-02-01
on.cc東網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI, including deepfake technology, by hackers to conduct phishing attacks and impersonation scams, which have resulted in a record number of incidents and data breaches. The harms include theft of sensitive data and fraud, which fall under harm to persons and communities. The AI involvement is in the malicious use of AI systems by hackers, directly leading to these harms. Hence, this is an AI Incident rather than a hazard or complementary information.

Phishing cases hit a 5-year high; banking, finance and e-payment industries worst hit

2024-02-01
hkcd.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI, deepfake technology) being used by hackers to conduct phishing and ransomware attacks, which have directly led to harm including financial fraud and data breaches. The involvement of AI in creating malicious code and fake identities is a direct contributing factor to these harms. The harms affect individuals and organizations, including violations of property and community harm through fraud and data theft. Since the harms are occurring and AI is pivotal in enabling these attacks, this qualifies as an AI Incident rather than a hazard or complementary information.

F5: 2024 cybersecurity predictions - 網路資訊雜誌

2024-02-02
網路資訊雜誌
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, notably generative AI and LLMs, in the context of cybersecurity threats. It discusses how AI is used or could be used by attackers to enhance phishing, social engineering, misinformation, and other attacks, which could plausibly lead to harms such as violations of rights, harm to communities, and disruption of critical infrastructure. However, it does not report a specific AI Incident where harm has already occurred due to AI misuse or malfunction. Instead, it provides a forward-looking assessment of risks and emerging threats, consistent with the definition of an AI Hazard. It also does not primarily focus on responses or updates to past incidents, so it is not Complementary Information. It is not unrelated because AI involvement and plausible harm are central to the discussion.

AI-weaponised attacks the top cyber threat; five major risks this year; beware deepfake phishing scams

2024-02-02
EJ Tech
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used maliciously to conduct cyberattacks, including AI-generated phishing, malware, and deepfake videos that have already caused harm. The harms include financial fraud, deception, and security breaches affecting critical sectors like banking and e-commerce. The involvement of AI in these attacks is direct and causal to the harms described. Hence, this is an AI Incident as the AI system's use has directly led to realized harms.