AI-Powered Bots Dominate Web Traffic, Increasing Cyber Threats

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Cybersecurity reports from Imperva and Thales highlight that AI-driven bots now exceed human traffic online, accounting for over 50% of global internet activity. The rise of AI-powered bots enables increasingly sophisticated malicious attacks, such as spamming and DDoS campaigns, posing significant risks to online infrastructure.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI and LLMs being leveraged to create malicious bots that have increased cyberattacks and fraudulent activities, which are harms to property and communities. The AI systems' use directly leads to these harms, fulfilling the criteria for an AI Incident. The harms are realized, not just potential, as the article reports ongoing attacks and their impacts on sectors like travel and retail. Hence, the event is classified as an AI Incident.[AI generated]
AI principles
Robustness & digital security; Accountability; Safety; Transparency & explainability

Industries
Digital security; IT infrastructure and hosting; Media, social platforms, and marketing

Affected stakeholders
Consumers; Business; Government; General public

Harm types
Economic/Property; Reputational; Public interest

Severity
AI incident

AI system task
Content generation; Goal-driven organisation; Interaction support/chatbots


Articles about this incident or hazard

AI makes bots easier to deploy and harder to detect

2025-04-15
BetaNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI and LLMs being leveraged to create malicious bots that have increased cyberattacks and fraudulent activities, which are harms to property and communities. The AI systems' use directly leads to these harms, fulfilling the criteria for an AI Incident. The harms are realized, not just potential, as the article reports ongoing attacks and their impacts on sectors like travel and retail. Hence, the event is classified as an AI Incident.

Bots now make up the majority of all internet traffic

2025-04-15
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered bots causing real cyber harms such as spamming campaigns and DDoS attacks that disrupt websites, which constitute harm to property and business operations. The involvement of AI in creating and refining these bots is clear, and the harms are ongoing and materialized, not just potential. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Artificial Intelligence Fuels Rise of Hard-to-Detect Bots That Now Make up More Than Half of Global Internet Traffic, According to the 2025 Imperva Bad Bot Report

2025-04-15
financialpost
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI and LLMs to develop and scale malicious bots that are actively attacking and evading security measures. This constitutes the use of AI systems leading directly to harm in the form of cyber attacks and security breaches, which fall under harm to property and disruption of critical infrastructure. Since the harm is occurring and increasing, this qualifies as an AI Incident rather than a hazard or complementary information.

AI is helping bad bots take over the internet

2025-04-15
channelpro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to create malicious bots that have surpassed human traffic and are responsible for a large share of automated attacks. These AI-driven bots are actively causing harm by enabling cyberattacks, which disrupt digital infrastructure and pose risks to businesses and users. The harm is realized and ongoing, not merely potential. Hence, the event meets the criteria for an AI Incident due to the direct involvement of AI in causing harm through malicious bot activity.

Bots Dominate Internet Traffic - News Directory 3

2025-04-17
News Directory 3
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI tools and large language models facilitating the creation of sophisticated malicious bots that now constitute a significant portion of internet traffic and are responsible for AI-related attacks. This ongoing harmful activity involving AI systems meets the criteria for an AI Incident, as it directly leads to harm in cybersecurity, affecting property, communities, and potentially individuals' data security. The harm is realized and ongoing, not merely potential.

Artificial Intelligence fuels rise of hard-to-detect bots that now make up more than half of global internet traffic, according to the 2025 Imperva Bad Bot Report - Express Computer

2025-04-15
Express Computer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI, LLMs, AI tools like ChatGPT, ByteSpider Bot, ClaudeBot) being used to develop and operate malicious bots that have materially increased cyberattacks and caused harm such as data breaches, fraud, and disruption of services. The harms described include violations of privacy and security, financial losses, and operational disruptions in critical sectors, which fit the definition of AI Incident. The AI systems' use in bot creation and attack execution is a direct cause of these harms, not merely a potential risk or background context. Therefore, this event qualifies as an AI Incident.

AI Fuels Rise of Bots to Surpass Human Activity

2025-04-15
DIGIT
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (generative AI, LLMs, AI-powered bots) being used maliciously to conduct cyberattacks that have materialized harms such as account takeovers, data theft, and service disruptions. The harms affect individuals (through theft of PII and financial fraud), organizations (through operational disruption and data breaches), and critical infrastructure (APIs underpinning essential services). Since the AI system's use has directly led to these harms, this qualifies as an AI Incident under the OECD framework. The detailed description of realized attacks and their impacts confirms this classification rather than a mere hazard or complementary information.

There are now more bots than humans on the web - and that's a danger for all of us

2025-04-15
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI and large language models as drivers of the rise in automated bots that conduct malicious activities online, including stealing money and account takeovers. These activities constitute harm to people and property, fulfilling the criteria for an AI Incident. The AI systems are used maliciously and have directly led to harm, not just posing a potential risk. Hence, the event is classified as an AI Incident.

2025 Imperva Bad Bot Report: How AI is Supercharging the Bot Threat

2025-04-15
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The report explicitly states that AI is fueling the rise of bad bots that are actively conducting malicious activities causing financial and security harms. The bots use AI and machine learning to evade detection and optimize attacks, leading to account takeovers and other fraudulent activities. These harms are realized and ongoing, not merely potential. The involvement of AI systems in the development and use of these bots directly contributes to these harms, meeting the criteria for an AI Incident under the OECD framework.

'It has serious implications for businesses worldwide' - bots found to make up majority of all internet traffic

2025-04-16
Irish Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered bots making up the majority of internet traffic and being used in harmful activities such as spamming and DDoS attacks, which disrupt services and harm businesses and users. The use of AI systems in these attacks directly leads to harm (disruption of services and potential economic damage), fulfilling the criteria for an AI Incident. The involvement of AI in the bots and the resulting harm is clear and direct, not merely potential or speculative.

AI-Powered Bad Bots Account for 51% of Traffic, Surpassing Human Traffic for the First Time - IT Security News

2025-04-16
IT Security News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered bad bots, which are AI systems generating automated traffic. These bots are involved in malicious activities (bad bot attacks) that can disrupt online services, potentially causing harm to digital infrastructure and communities. Since the bots are actively causing harm through their use, this qualifies as an AI Incident due to the realized harm from AI system use in cyberattacks.

With AI's Help, Bad Bots Are Taking Over the Web

2025-04-15
darkreading.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled bots conducting automated attacks that have grown in volume and sophistication, directly affecting web traffic and security. The use of AI to mimic human behaviour and bypass security measures indicates the involvement of AI systems in malicious activities. These activities can lead to harms such as disruption of services, fraud, and other harms to property or communities. Since the harm is occurring and linked to AI system use, this qualifies as an AI Incident.

There are now more bots than humans on the web - and that's a danger for all of us

2025-04-15
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI and large language models as drivers enabling the creation of automated bots that conduct malicious activities such as stealing money and account takeovers. These activities constitute direct harm to individuals and communities, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the bots are actively attacking and exploiting internet users. The AI systems' use in automating these attacks is central to the incident, and the harms include financial fraud and disruption of online environments, which align with the definitions of AI Incident.

Artificial Intelligence Fuels Rise of Hard-to-Detect Bots That Now Make up More Than Half of Global Internet Traffic, According to the 2025 Imperva Bad Bot Report

2025-04-15
AiThority
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems (generative AI, LLMs) are used to create and operate malicious bots that have caused significant harm through cyberattacks, including data breaches, fraud, and disruption of services. These harms fall under violations of rights and harm to communities and property. Since the harms are occurring and directly linked to AI system use, this qualifies as an AI Incident rather than a hazard or complementary information. The detailed description of realized harms and AI's pivotal role in enabling these attacks supports this classification.

Artificial Intelligence Fuels Rise of Hard-to-Detect Bots That Now Make up More Than Half of Global Internet Traffic, According to the 2025 Imperva Bad Bot Report

2025-04-15
itnewsonline.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems (generative AI, LLMs) are being used to create and operate malicious bots that have caused significant cyber harms such as data breaches, fraud, and service disruptions. These harms fall under violations of rights (privacy, data protection) and harm to communities (disruption, fraud). The AI systems' use is central to the scale and sophistication of these attacks, making this an AI Incident. The report documents realized harms, not just potential risks, and thus it is not merely a hazard or complementary information. The involvement of AI in the development and use of these bots directly leads to the harms described.

Widely available AI tools signal new era of malicious bot activity - IT Security News

2025-04-18
IT Security News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI tools and large language models) used maliciously to generate automated bot traffic, which has surpassed human traffic and is used for cyber attacks. This use of AI directly contributes to harm by enabling large-scale malicious bot activity, which can disrupt services, compromise security, and harm users. Therefore, this constitutes an AI Incident due to realized harm linked to AI system use.

AI arms the bots: Half of internet traffic now automated threat, warns report

2025-04-17
intelligentcio.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI-powered bots and LLMs) whose use has directly led to realized harms including cyberattacks causing data theft, fraud, and disruption of critical infrastructure (APIs). The report documents ongoing and increasing malicious activity, not just potential risk, thus constituting an AI Incident. The harms include violations of rights (data breaches), harm to property and communities (financial and service disruptions), and disruption of critical infrastructure (API exploitation).

Bots now account for over half of all internet traffic

2025-04-16
TechRadar
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI and large language models as key enablers of malicious bots that are responsible for a large share of internet traffic and cyberattacks. These bots cause harm by attacking sectors such as travel and retail, disrupting operations and posing security risks. The harm is realized and ongoing, not just potential. Therefore, this qualifies as an AI Incident due to the direct involvement of AI systems in causing harm through malicious bot activity.

Artificial Intelligence Fuels Rise of Hard-to-Detect Bots That Now Make up More Than Half of Global Internet Traffic, According to the 2025 Imperva Bad Bot Report

2025-04-15
StreetInsider.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI, large language models) being used to develop and operate malicious bots that have directly led to significant cyber harms such as account takeovers, data breaches, and fraud. These harms constitute violations of rights and harm to communities and property (financial and data assets). The involvement of AI in the development and use of these bots is central to the incident, and the harms are ongoing and materialized, not merely potential. Therefore, this qualifies as an AI Incident under the OECD framework.

Artificial Intelligence Fuels Rise of Hard-to-Detect Bots That Now Make up More Than Half of Global Internet Traffic, According to the 2025 Imperva Bad Bot Report

2025-04-15
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems, including generative AI and large language models, are being used to create and operate malicious bots that have caused significant harm through cyberattacks. The harms include data breaches, account takeovers, fraud, and disruption of critical digital infrastructure, which align with the definitions of AI Incident harms (a) injury or harm to persons, (c) violations of rights, and (d) harm to communities and property. The involvement of AI in both the development and use of these bots is clear and central to the harm described. Hence, the event is classified as an AI Incident.

Artificial Intelligence fuels rise of hard-to-detect bots that now make up more than half of global internet traffic, according to the 2025 Imperva Bad Bot Report

2025-04-15
Thales Group
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI, large language models, AI-driven bots) being used maliciously to conduct cyberattacks that have already caused harms such as account takeovers, fraud, and data breaches. The harms are direct and realized, affecting critical sectors and leading to violations of security and privacy. The use of AI systems in these attacks is central to the incident, fulfilling the definition of an AI Incident. The report is not merely a warning about potential risk but documents ongoing harm caused by AI-powered bots.