State-Sponsored Hackers Exploit AI Tools for Global Cybercrime and Disinformation


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

State-backed hacker groups from China, Russia, North Korea, and Iran have used AI tools like ChatGPT to conduct cyberattacks, spread disinformation, create fake profiles, and perpetrate fraud, undermining global cybersecurity and social trust. OpenAI recently dismantled ten major malicious networks abusing its AI systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI systems being used maliciously by state-sponsored hacker groups to perpetrate online fraud, spread false information, and conduct cyberattacks, all of which have caused real harm. The harms include financial losses to victims, societal polarization, and threats to cybersecurity, fitting the definitions of harm to communities and violations of rights. The involvement of AI in these malicious campaigns is direct and central. The legal issues around data use also point to rights violations. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability, Human wellbeing, Privacy & data governance, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Digital security; Media, social platforms, and marketing; Government, security, and defence; Financial and insurance services; IT infrastructure and hosting

Affected stakeholders
Consumers, Business, Government, General public

Harm types
Economic/Property, Reputational, Public interest

Severity
AI incident

AI system task
Content generation, Interaction support/chatbots


Articles about this incident or hazard


Next-generation scam: ChatGPT in the hands of hackers is becoming an unstoppable weapon!

2025-06-13
kurir.rs

Next-generation scam: ChatGPT in the hands of hackers is becoming an unstoppable weapon

2025-06-13
Nezavisne novine
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (OpenAI's tools, ChatGPT) being used maliciously by hackers to spread disinformation, create fake profiles, conduct cyber espionage, and perpetrate fraud, all of which have caused direct harm to individuals and communities. The harms include violations of rights, harm to communities through misinformation and social destabilization, and harm to property and data through cyberattacks. The involvement of AI in these harms is direct and central, meeting the criteria for AI Incident. The legal challenges mentioned relate to AI development and use but do not overshadow the primary focus on realized harms caused by AI misuse.

ChatGPT in the hands of hackers is becoming an unstoppable weapon

2025-06-13
Oslobođenje d.o.o.
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by hackers from several countries to spread disinformation, create fake profiles, conduct espionage, and perpetrate fraud, all of which have caused direct harm to individuals, communities, and global cybersecurity. The harms include manipulation of information causing societal division (harm to communities), unauthorized data access (violation of rights), and financial scams (harm to persons). The involvement of AI in these malicious uses is clear and central to the harms described. The legal dispute over AI training data also points to violations of intellectual property rights. Hence, this is an AI Incident as the AI systems' use has directly led to significant harms.

Next-generation scam: ChatGPT in the hands of hackers is becoming an unstoppable weapon

2025-06-13
BUKA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT and other AI tools) being used by hackers from countries like China, Russia, North Korea, and Iran to conduct harmful activities including spreading disinformation, creating fake profiles, developing malware, and executing fraud. These actions have directly caused harm to communities, individuals, and global cybersecurity, fitting the definition of an AI Incident. The harms are realized and ongoing, not merely potential. Therefore, the event qualifies as an AI Incident rather than a hazard or complementary information.

ChatGPT in the hands of hackers is becoming an unstoppable weapon

2025-06-13
vijesti.ba
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by malicious actors to spread false information, create fake profiles, conduct cyber espionage, and perpetrate scams, all of which have caused real harm to individuals, communities, and global cybersecurity. The involvement of AI in these harmful activities is direct and central to the incidents described. Furthermore, the legal disputes over AI training data and privacy highlight additional rights violations linked to AI development and use. Therefore, the event meets the criteria for an AI Incident due to direct and indirect harms caused by AI misuse.