AI Tools FraudGPT, XXXGPT, and WolfGPT Enable Cybercrime

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Check Point Software's cybersecurity experts discovered the AI tools FraudGPT, XXXGPT, and WolfGPT, which facilitate the creation of malware, phishing emails, and false identities. These tools, which mimic ChatGPT, allow even users with minimal technical skills to execute complex cyberattacks, posing significant risks to individuals and communities.[AI generated]

Why's our monitor labelling this an incident or hazard?

FraudGPT, XXXGPT, and WolfGPT are AI systems whose use by criminals has directly enabled real cyber harms (phishing, data theft, ransomware, attacks on ATMs and POS terminals), constituting an AI Incident. The article describes the actual malicious deployment of these AI tools, not merely hypothetical risks or post-incident analysis.[AI generated]
AI principles
Accountability; Robustness & digital security; Safety; Privacy & data governance; Respect of human rights; Transparency & explainability

Industries
Digital security; IT infrastructure and hosting

Affected stakeholders
General public

Harm types
Economic/Property; Psychological; Reputational; Human or fundamental rights

Severity
AI incident

AI system task
Content generation; Interaction support/chatbots


Articles about this incident or hazard

'AI' rhymes with trouble - the cybersecurity experts at 'Check Point Software' have discovered...

2024-06-21
DAGOSPIA
Why's our monitor labelling this an incident or hazard?
The article describes the discovery of AI-powered malware-generation platforms actively used or readily usable by criminals. Although specific incidents are not detailed, the tools’ capabilities pose a credible threat of harm (cyberattacks, data theft, financial loss). Thus, this situation represents an AI Hazard—an emerging risk that could plausibly lead to significant cyber incidents.

Three AI apps for hackers discovered: they are FraudGPT, XXXGPT and WolfGPT

2024-06-21
Tiscali Notizie
Why's our monitor labelling this an incident or hazard?
The article describes the discovery and capabilities of AI-driven hacking tools that are actively available to and used by malicious actors. Although it does not detail a particular successful attack or realized harm, it highlights the plausible future risk of significant cyber incidents arising from these AI systems’ misuse. This fits the definition of an AI Hazard.

Three AI apps for hackers discovered: they are FraudGPT, XXXGPT and WolfGPT

2024-06-21
ANSA.it
Why's our monitor labelling this an incident or hazard?
FraudGPT, XXXGPT, and WolfGPT are AI systems whose use by criminals has directly enabled real cyber harms (phishing, data theft, ransomware, attacks on ATMs and POS terminals), constituting an AI Incident. The article describes the actual malicious deployment of these AI tools, not merely hypothetical risks or post-incident analysis.

Artificial intelligence, Check Point Research: "It is at the service of hackers"

2024-06-22
Il Mattino
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically generative AI tools used by cybercriminals to enhance attacks. The harm discussed is potential misuse leading to more effective cyberattacks and data theft, which could indirectly injure persons or harm communities. However, the article does not report a specific AI Incident in which harm has already occurred; rather, it outlines a recognized risk and calls for preventive measures. Therefore, this event fits the definition of an AI Hazard, as it could plausibly lead to AI Incidents in the future if misuse continues or escalates.

Three artificial intelligence apps for hackers discovered

2024-06-21
L'opinione delle Libertà
Why's our monitor labelling this an incident or hazard?
The described AI systems are explicitly used to generate malicious content and malware that have already been employed by criminals to cause harm, including identity theft, data breaches, and financial fraud. The involvement of AI in enabling these cyberattacks and the resulting harms to individuals and organizations meet the criteria for an AI Incident, as the AI systems' use has directly led to violations of security and harm to property and communities.

Hackers: which are the 3 apps discovered by Check Point Software

2024-06-21
Startupitalia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as generative AI tools used maliciously by hackers to create harmful content and execute cyberattacks. The use of these AI systems has directly led to harms such as data breaches, unauthorized access, and financial crimes, which constitute violations of rights and harm to property and communities. Therefore, this qualifies as an AI Incident because the AI systems' use has directly caused significant harm.

Artificial Intelligence at the service of hackers, who use special AI services such as FraudGPT, XXXGPT and WolfGPT

2024-06-20
LMF La mia finanza
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as being used maliciously by hackers to perpetrate cyberattacks that lead to harm including theft of sensitive data, creation of malware, and disinformation. The AI systems' use directly leads to violations of rights and harm to individuals and communities. Therefore, this qualifies as an AI Incident because the AI systems' use has directly led to realized harms through cybercrime activities.

Check Point Research: Artificial Intelligence at the service of hackers

2024-06-20
ilcorrieredellasicurezza.it
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used by hackers to carry out harmful cyberattacks that have already occurred or are ongoing, such as phishing, malware distribution, and data theft. These activities constitute violations of rights and harm to individuals and organizations. Since the AI systems' use has directly led to realized harms, this qualifies as an AI Incident. The article also discusses responses and ethical considerations, but the primary focus is on the active misuse of AI causing harm, not just potential future risks or complementary information.