Organized Crime in Asia Exploits AI for Cybercrime

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The UNODC reports that organized crime in Asia is leveraging AI, including generative AI and deepfake technology, to commit cyber fraud and create illicit content. These groups are integrating new technologies into their operations, establishing underground markets, and using cryptocurrency for money laundering.[AI generated]

Why's our monitor labelling this an incident or hazard?

The UNODC report describes ongoing crimes (fraud, money laundering, forced labor, and the creation of deepfake content) explicitly enabled by generative AI systems. These activities have already caused significant financial and human harms. Because AI played a direct and pivotal role in these realized harms, this constitutes an AI Incident.[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy, Human wellbeing

Industries
Digital security; Financial and insurance services; Media, social platforms, and marketing; Government, security, and defence; IT infrastructure and hosting

Affected stakeholders
Consumers, Business

Harm types
Economic/Property, Reputational, Public interest, Human or fundamental rights, Psychological

Severity
AI incident

AI system task
Content generation, Interaction support/chatbots


Articles about this incident or hazard

Asia becomes a testing ground for organized crime's cybercrime

2024-10-07
UN News
Why's our monitor labelling this an incident or hazard?
The UNODC report describes ongoing crimes (fraud, money laundering, forced labor, and the creation of deepfake content) explicitly enabled by generative AI systems. These activities have already caused significant financial and human harms. Because AI played a direct and pivotal role in these realized harms, this constitutes an AI Incident.
Asia becomes a testing ground for organized crime's cybercrime

2024-10-07
Periódico Noroeste
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI and deepfake content by criminal organizations to conduct cyber fraud and money laundering, both forms of harm to communities and individuals. The AI systems are part of the criminal modus operandi, directly contributing to the harm. This fits the definition of an AI Incident because the AI systems' use has directly led to significant harm through cybercrime activities.
2024-10-07
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that criminal organizations are integrating generative AI and deepfake content into their operations, leading to financial losses estimated between $18 billion and $37 billion in 2023 alone. The AI systems are used to create deepfake content that facilitates fraud and scams, directly harming individuals and communities. The involvement of AI systems in causing these harms therefore meets the criteria for an AI Incident, as the AI's use has directly led to violations of rights and significant harm to communities.
Asia becomes a testing ground for cybercrime

2024-10-08
Vértigo Político
Why's our monitor labelling this an incident or hazard?
The report explicitly states that generative AI models and deepfake content are being used by criminal organizations to commit cyber fraud and scams, causing realized financial harm to victims. This constitutes direct involvement of AI systems in causing harm (fraud, financial loss, and deception), which fits the definition of an AI Incident. The harms include violations of property rights (financial loss) and harm to communities (through widespread scams and deception). This event therefore qualifies as an AI Incident rather than a hazard or complementary information.
UNODC: Asia, a testing ground for organized crime's cybercrime

2024-10-08
elmercuriodigital.es
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems by criminal organizations to perpetrate cyber fraud and money laundering, which have directly led to significant financial losses and social harms in Asia. The report provides evidence of realized harms caused by AI-enabled criminal activities, including deepfake content and AI-driven scams. This meets the definition of an AI Incident because the AI systems' use has directly led to harm to communities and violations of law. The detailed description of ongoing criminal use and its impacts confirms this classification rather than a mere hazard or complementary information.
Telegram facilitates the activity of criminal networks in Southeast Asia

2024-10-07
Jurnal de Chisinau
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems is explicit through the mention of deepfake software and chatbots used by criminals to commit fraud and obtain sensitive data. The harms described include violations of human rights, privacy breaches, and exploitation, which have materialized as a result of the AI systems' use. Telegram's platform is implicated in facilitating these harms, making this an AI Incident due to the direct link between AI-enabled criminal activities and realized harm.
Report: Telegram facilitates the activity of criminal networks in Southeast Asia. Hacked data, including credit cards, passwords, and other personal data, are traded on the platform at scale. - Biziday

2024-10-07
Biziday
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions AI-related technologies such as deepfake software and chatbots being used maliciously on Telegram to facilitate large-scale criminal activities including data theft and fraud. The involvement of AI systems in these activities has directly caused harm to individuals and companies, such as the data breach of the Indian insurer and the widespread trading of stolen data. This meets the criteria for an AI Incident as the AI system's use has directly led to violations of rights and harm to individuals and communities.