Wild West AI Crime Exploits in Canada

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Canadian police warn that criminals are exploiting artificial intelligence, jailbreaking LLM safeguards to produce deepfake pornography, impersonate voices, and commit financial fraud. Cybercriminals are also building their own AI models and using dark web forums and Telegram channels, raising concerns about the potential harm of misused AI technology.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems (large language models and generative AI tools like ChatGPT and DALL-E) being used or misused to cause real harms such as financial fraud, sexual exploitation, and incitement to violence. The harms described fall under injury or harm to persons, violations of rights, and harm to communities. The criminal practice of jailbreaking AI to remove safeguards and enable illegal activities is a direct cause of these harms. The article also references legal actions and law enforcement responses, confirming the harms are realized and significant. Thus, this qualifies as an AI Incident under the OECD framework.[AI generated]
AI principles
Accountability; Robustness & digital security; Safety; Privacy & data governance; Respect of human rights; Human wellbeing; Transparency & explainability

Industries
Digital security; Media, social platforms, and marketing; Financial and insurance services; Government, security, and defence

Affected stakeholders
Consumers; General public

Harm types
Economic/Property; Reputational; Human or fundamental rights; Psychological; Public interest

Severity
AI incident

AI system task
Content generation; Interaction support/chatbots


Articles about this incident or hazard

'It's the Wild West': How AI is creating new frontiers for crime in Canada

2025-03-27
CityNews Halifax
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (large language models and generative AI tools like ChatGPT and DALL-E) being used or misused to cause real harms such as financial fraud, sexual exploitation, and incitement to violence. The harms described fall under injury or harm to persons, violations of rights, and harm to communities. The criminal practice of jailbreaking AI to remove safeguards and enable illegal activities is a direct cause of these harms. The article also references legal actions and law enforcement responses, confirming the harms are realized and significant. Thus, this qualifies as an AI Incident under the OECD framework.

'It's the Wild West': How AI is creating new frontiers for crime in Canada - Medicine Hat News

2025-03-27
Medicine Hat News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (LLMs) being exploited by criminals to commit fraud, impersonation, and other crimes, which are realized harms to individuals and communities. The mention of jailbreaking AI to remove safeguards and the lawsuit involving an AI chatbot causing psychological harm further supports that AI misuse has directly led to harm. The discussion of potential future harms and weaponization is secondary to the current criminal activities and harms described. Hence, this qualifies as an AI Incident due to the direct and indirect harms caused by AI misuse in criminal contexts.

'It's the Wild West': How AI is creating new frontiers for crime in Canada

2025-03-27
The Peterborough Examiner
Why's our monitor labelling this an incident or hazard?
The article explicitly details how AI systems have been used in criminal activities causing direct harm, such as fraud, impersonation, and the creation of illegal content, fulfilling the criteria for an AI Incident. The harms include violations of human rights, financial harm, and threats to physical safety. The involvement of AI is clear and central to the incidents described, including jailbreaking AI models to bypass safeguards for criminal purposes. Therefore, this event is classified as an AI Incident.

'It's the Wild West': How AI is creating new frontiers for crime in Canada

2025-03-27
Lethbridge News Now
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (LLMs like ChatGPT, AI image generation tools like DALL-E, AI chatbots) being used or manipulated to commit crimes, including fraud, impersonation, and production of illegal content. These uses have directly caused harm, such as the suicide of a minor influenced by an AI chatbot, the creation of illegal deepfake child pornography, and the use of AI to assist in a car bombing. The involvement of AI in these harms is direct and pivotal. Therefore, the event qualifies as an AI Incident. The article also discusses regulatory and societal responses, but the primary focus is on realized harms caused by AI misuse.

'It's the Wild West': How AI is creating new frontiers for crime in Canada - Business News

2025-03-27
Castanet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (LLMs, ChatGPT, deepfake generation tools) being used by criminals to commit various crimes, including financial fraud, sexual exploitation via deepfake pornography, and even aiding in bomb-making. These are direct harms caused by the use and misuse of AI systems. The harms include violations of rights (e.g., sexual exploitation, fraud), harm to communities (e.g., financial scams), and psychological harm (e.g., chatbot-induced suicide). The article also references law enforcement responses and the need for regulation, but the primary focus is on the realized harms caused by AI misuse. Hence, this qualifies as an AI Incident.

'It's the Wild West': How AI is creating new frontiers for crime in Canada

2025-03-27
CHEK
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems such as large language models (LLMs) and generative AI tools being used or misused to cause harm, including fraud, deepfake child pornography, and instructions for bomb-making. These are direct harms to individuals and communities, fulfilling the criteria for an AI Incident. The involvement of AI jailbreaking to remove safeguards and the use of AI-generated content for scams and impersonation further confirm the AI system's role in causing these harms. The article also references a specific tragic case linked to AI chatbot misuse, reinforcing the presence of realized harm. Hence, this is not merely a potential hazard or complementary information but a clear AI Incident.

'It's the Wild West': How AI Is Creating New Frontiers for Crime in Canada

2025-03-27
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (LLMs, generative AI) being used and misused by criminals to cause harm such as fraud, impersonation, and creation of illegal content. These harms have materialized, including financial losses, psychological harm, and legal violations. The jailbreaking of AI models to remove safeguards is a direct enabler of these harms. The involvement of AI in these criminal activities meets the definition of an AI Incident, as the AI system's use and misuse have directly led to violations of rights, harm to communities, and other significant harms. The article also discusses responses and challenges but the primary focus is on the realized harms caused by AI misuse.

'It's the Wild West': How AI is creating new frontiers for crime in Canada

2025-03-27
Sudbury.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems such as large language models (e.g., ChatGPT) being used or misused to facilitate crimes including fraud, deepfake child pornography, and aiding in a fatal bombing. These are direct harms to individuals and communities, fulfilling the criteria for an AI Incident. The involvement of AI is clear and central to the harms described. The article also discusses the challenges in regulation and enforcement, but the primary focus is on realized harms caused by AI misuse, not just potential future risks or general AI news. Hence, the classification as AI Incident is appropriate.