AI deepfakes and voice cloning fuel cyberfraud and espionage


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Kaspersky warns that criminals increasingly use AI-generated deepfake videos and voice clones to bypass security controls and commit fraud, identity theft, and corporate espionage. The underground market charges $300–$20,000 per minute for high-quality fakes. A Kaspersky study found that 75% of Peruvians are unfamiliar with deepfakes, heightening their vulnerability to these attacks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI systems (deepfake technology using generative neural networks) that have been used to create convincing fake content leading to harms such as identity theft, fraud, disinformation, and reputational damage. These harms have materialized or are actively occurring, fulfilling the criteria for an AI Incident. The discussion of risks and recommendations supports the presence of actual harm rather than just potential harm. Therefore, the event is best classified as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Robustness & digital security
Transparency & explainability
Accountability
Safety
Democracy & human autonomy

Industries
Digital security
Financial and insurance services
Media, social platforms, and marketing
Consumer services
IT infrastructure and hosting

Affected stakeholders
Business
General public

Harm types
Economic/Property
Reputational
Human or fundamental rights

Severity
AI incident

Business function:
ICT management and information security
Citizen/customer service

AI system task:
Content generation


Articles about this incident or hazard


The danger of deepfakes and artificial intelligence in Peru and the world

2024-08-26
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deepfake technology using generative neural networks) that have been used to create convincing fake content leading to harms such as identity theft, fraud, disinformation, and reputational damage. These harms have materialized or are actively occurring, fulfilling the criteria for an AI Incident. The discussion of risks and recommendations supports the presence of actual harm rather than just potential harm. Therefore, the event is best classified as an AI Incident rather than a hazard or complementary information.

The dark side of artificial intelligence: how deepfakes and voice clones are used to deceive users

2024-08-30
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems, namely generative AI used to create deepfake videos and voice clones. Cybercriminals' use of these AI systems has directly led to harms including fraud, identity theft, unauthorized access to accounts, and reputational damage, which fall under harms to individuals and communities as well as violations of rights. This therefore qualifies as an AI Incident because the AI system's use has directly caused realized harm. The article does not merely warn of potential harm but documents ongoing malicious use and its consequences.

The use of deepfakes and voice cloning in cyberattacks is increasing

2024-08-28
López-Dóriga Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (generative AI and machine learning models) used to create deepfake videos and voice clones that have been employed in actual cyberattacks causing financial fraud and identity theft. These harms fall under violations of rights and harm to individuals and communities. The involvement of AI is direct and central to the harm described. Hence, this qualifies as an AI Incident rather than a hazard or complementary information. The article also discusses the market for such AI-generated content and the risks posed, but the primary focus is on realized harms from AI misuse.

Beware of cyberattacks! 75% of Peruvians do not know what a deepfake is, according to a study

2024-08-26
Panamericana Televisión
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies deepfakes as AI-generated content that is being misused to cause real harms such as financial fraud, identity theft, and defamation. These harms fall under violations of rights and harm to individuals and communities. Since these harms are occurring and are directly linked to the use of AI systems generating deepfakes, this qualifies as an AI Incident under the OECD framework.

Deepfakes and voice cloning are the new threats in cybersecurity

2024-08-29
El Ciudadano
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI generative systems (deepfakes and voice cloning) used maliciously to commit fraud and identity theft, which are harms to individuals and communities. The involvement of AI in creating realistic fake content that enables these crimes is clear. Since the harms are occurring (fraud affecting the financial sector and identity theft), this qualifies as an AI Incident rather than a hazard or complementary information. The article also highlights the market for such AI-generated content and the risks posed, confirming the realized harm linked to AI misuse.

The dark side of AI: use of deepfakes and voice cloning in cyberattacks is growing

2024-08-29
DiarioDigitalRD
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (generative AI and machine learning models) used to create deepfakes and voice clones that have been employed in actual cyberattacks causing financial fraud and identity theft. These harms fall under injury to persons (psychological and reputational harm) and harm to property (financial loss). The AI systems' use in these attacks is a direct contributing factor to the harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.