AI Deepfake Trojan Targets iPhone and Android Users for Banking Fraud


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A new banking trojan, GoldPickaxe, has been discovered targeting iPhone and Android users. The malware collects biometric and ID data, then uses AI-generated deepfakes to impersonate victims and access their bank accounts, leading to financial theft and privacy violations. Initial attacks focus on Vietnam and Thailand, with expansion expected. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (the AI technology used by the malware to impersonate users) that is actively used in malicious operations to steal sensitive data and financial assets from users. This constitutes a violation of users' rights and causes direct harm to individuals' property and privacy. Therefore, it meets the criteria of an AI Incident due to the realized harm caused by the AI-enabled malware. [AI generated]
AI principles
Privacy & data governance, Robustness & digital security, Safety, Respect of human rights, Accountability, Transparency & explainability, Human wellbeing, Democracy & human autonomy

Industries
Financial and insurance services, Digital security, Consumer products

Affected stakeholders
Consumers

Harm types
Economic/Property, Human or fundamental rights, Psychological, Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


First Trojan Appears on iOS: GoldPickaxe Uses AI to Break Into Users' Financial Apps

2024-02-20
chinaz.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI technology used by the malware to impersonate users) that is actively used in malicious operations to steal sensitive data and financial assets from users. This constitutes a violation of users' rights and causes direct harm to individuals' property and privacy. Therefore, it meets the criteria of an AI Incident due to the realized harm caused by the AI-enabled malware.

First Blood: Apple's iOS Sees Its First Trojan

2024-02-17
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system component (AI-generated deepfake technology) used maliciously within a malware system to impersonate victims and facilitate financial theft, which constitutes harm to individuals' property and privacy. The malware's active use and resulting theft qualify this as an AI Incident because the AI system's use has directly led to harm. The involvement of AI in the deepfake creation is pivotal to the attack's success, and the harm is realized, not just potential.

Apple's iOS Sees Its First Trojan: It Can Steal Facial Recognition and Other Data

2024-02-18
中关村在线
Why's our monitor labelling this an incident or hazard?
The malware uses AI-related capabilities such as collecting biometric data and creating deepfake videos, which are AI system functions. The use of this AI system has directly led to violations of personal privacy and security, and enables unauthorized access to bank accounts, constituting harm to persons and their property. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use in malicious software.

The iPhone Faces This Security Challenge for the First Time! Even the iPhone Can't Rest Easy

2024-02-18
中关村在线
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-related malware that collects biometric data and uses AI to impersonate victims for financial theft. This constitutes direct harm to individuals' property and privacy, fulfilling the criteria for an AI Incident. The malware's active deployment and ongoing attacks confirm realized harm rather than just potential risk.

iPhone Hit by a Hacker Trojan Attack for the First Time! Apple Faces a Security Test

2024-02-17
中关村在线
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) as part of a malware attack that has already occurred, causing direct harm to individuals by stealing biometric data and financial information, and enabling fraudulent access to bank accounts. This meets the criteria for an AI Incident because the AI system's use directly leads to harm (financial theft and identity fraud). The malware's infection and data theft are realized harms, and the AI-generated deepfake impersonation is a key factor in the attack's success.

Apple Isn't Safe Anymore! First Trojan Appears on iOS: Stealing iPhone Users' Facial Data and ID Cards, Accessing Bank Accounts

2024-02-17
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves an AI system or AI-enabled malware that uses biometric data and deepfake technology to commit fraud and unauthorized access. The malware's use has directly caused harm by stealing sensitive personal data and enabling fraudulent banking access, which fits the definition of an AI Incident due to violations of rights and harm to property. The AI system's role is pivotal in generating deepfake videos and automating access to banking apps, leading to realized harm.

Apple iPhone's First Banking Trojan Exposed: Collecting Facial Data to Steal Your Assets

2024-02-17
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake generation) used maliciously as part of a banking trojan malware to steal financial assets and biometric data from users. The AI system's use directly leads to harm to individuals' property and privacy, fulfilling the criteria for an AI Incident. The malware's active deployment and ongoing attacks confirm realized harm rather than potential harm, so it is not merely a hazard. The involvement of AI in generating deepfakes for impersonation and fraud is central to the incident.

The iPhone Faces This Security Challenge for the First Time! Even the iPhone Can't Rest Easy

2024-02-18
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology (AI-generated impersonations) as part of a malware attack that has already infected iPhone users and led to financial theft and privacy violations. The AI system's use is integral to the harm caused, fulfilling the criteria for an AI Incident. The harm includes injury to individuals' financial security and violations of privacy rights. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

First Trojan Appears on iOS: Collecting the Information Needed to Break Into Users' Bank Accounts

2024-02-18
Techweb
Why's our monitor labelling this an incident or hazard?
The article explicitly describes a malware (GoldPickaxe) that uses AI techniques (video deepfake generation) to impersonate users and access their bank accounts, leading to direct harm (financial theft and privacy violations). The AI system's use is central to the incident, as it enables the malware to bypass biometric security. The harm is realized and ongoing, affecting users' rights and security. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Targeting iOS Users! New Trojan Strikes, Specializing in Stealing Bank Account and Other Data

2024-02-16
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the malware uses AI to create deepfake images) involved in malicious activity that has caused harm by stealing sensitive data and enabling identity fraud. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals' property and privacy. The malware's deployment and active attacks on users constitute realized harm, not just potential harm.

Apple iPhone's First Banking Trojan Exposed

2024-02-17
华商网
Why's our monitor labelling this an incident or hazard?
The article reports on a new banking Trojan malware, GoldPickaxe, targeting iPhone and Android users. The malware collects biometric data and uses AI-generated Deepfake technology to impersonate victims, facilitating theft from bank accounts. This constitutes direct harm to individuals' property and privacy, fulfilling the criteria for an AI Incident because the AI system (Deepfake generation) is directly involved in causing harm through malicious use.

Apple iPhone's First Banking Trojan Exposed: Collecting Facial Data to Steal Your Assets

2024-02-16
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake AI) used maliciously as part of a malware attack to impersonate victims and steal financial assets, which constitutes a violation of rights and harm to property. The malware's use of AI-generated deepfakes to facilitate theft directly leads to harm. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-enabled malicious use.

Apple's "Security" Myth Shattered?! iOS Sees Its First Trojan

2024-02-18
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system indirectly: the malware uses AI tools to generate fake videos for fraudulent purposes as part of its operation. The malware's development and use have directly led to harm in the form of privacy violations and potential financial theft, fulfilling the criteria for an AI Incident. The article describes harm that is occurring or highly likely to occur as a result of the malware's operation, not merely a potential risk. Hence, it is not merely a hazard or complementary information but an AI Incident.

Tech Morning Report: Trojan Appears on iOS | Foldable iPhone Development Paused | Masayoshi Son Plans $100 Billion to Take On NVIDIA

2024-02-18
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The GoldPickaxe.iOS malware involves AI-related capabilities (deepfake video creation and automated access) and has directly led to harm by stealing sensitive biometric and identity data and enabling unauthorized bank access, which is a violation of rights and causes harm to individuals. This fits the definition of an AI Incident. The other news items do not describe realized or plausible AI-related harms or hazards and are thus unrelated to AI harm tracking.