AI-Driven eCommerce Fraud Predicted to Surge by 2029


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Juniper Research study forecasts that eCommerce fraud will rise from $44 billion in 2024 to $107 billion by 2029, driven by advances in AI. Fraudsters are using AI to create deepfakes and synthetic identities that bypass verification systems and fuel 'friendly fraud', posing a significant threat to merchant profitability. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly links AI to the rise and execution of large-scale eCommerce fraud, which causes significant financial harm to merchants and customers. The use of AI-generated deepfakes and synthetic identities to bypass security measures and commit fraud is a direct misuse of AI leading to harm. The harm is realized and ongoing, not merely potential. Although the article also discusses AI-driven fraud detection as a response, its primary focus is the harm caused by AI-enabled fraud. It therefore qualifies as an AI Incident under the framework: AI misuse has directly led to significant harm (financial losses and exploitation). [AI generated]
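The rationale above applies the monitor's three-way triage: realized harm from AI use is labelled an AI Incident, plausible but unrealized harm an AI Hazard, and contextual coverage Complementary Information. A minimal sketch of that decision logic, using hypothetical field names (this is an illustration, not the monitor's actual implementation), might look like:

```python
def classify_event(ai_involved: bool, harm_realized: bool, harm_plausible: bool) -> str:
    """Hypothetical sketch of the AIM triage logic described in the rationales.

    - AI Incident: AI development or use has directly led to realized harm.
    - AI Hazard: AI use could plausibly lead to significant harm in the future.
    - Complementary Information: contextual coverage with no concrete event.
    """
    if not ai_involved:
        return "Unrelated"
    if harm_realized:
        return "AI Incident"
    if harm_plausible:
        return "AI Hazard"
    return "Complementary Information"


# A forecast of fraud losses already being incurred is classified as an Incident;
# a pure market projection with no concrete event would be a Hazard or
# Complementary Information.
print(classify_event(ai_involved=True, harm_realized=True, harm_plausible=True))
```

The ordering of the checks reflects the rationales below: once harm is realized, classifications such as AI Hazard or Complementary Information are excluded, even if the article also discusses future risks or countermeasures.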
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability

Industries
Consumer services; Logistics, wholesale, and retail; Financial and insurance services; Digital security

Affected stakeholders
Business

Harm types
Economic/Property; Reputational; Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


eCommerce Fraud to Exceed $107 Billion in 2029

2024-10-07
Markets Insider
Why's our monitor labelling this an incident or hazard?
This is a high-level market study and forecast highlighting trends in AI-enabled fraud and countermeasures. It does not report a concrete incident or a narrowly defined hazard event, nor does it primarily focus on a policy or governance response. Instead, it offers contextual commentary on the evolving AI fraud landscape, so it is classified as Complementary Information.

AI will push global eCommerce fraud to $107 billion by 2029 - Report

2024-10-10
Nairametrics
Why's our monitor labelling this an incident or hazard?
Juniper’s report warns that fraudsters’ growing use of AI—via deepfakes, synthetic identities, and automated schemes—could plausibly lead to a substantial surge in eCommerce fraud by 2029. No concrete incident is described; instead, it is a credible advisory about potential future harms from AI-driven attacks.

Report links AI to $107 billion global eCommerce fraud

2024-10-11
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly links AI to the rise and execution of large-scale eCommerce fraud, which causes significant financial harm to merchants and customers. The use of AI-generated deepfakes and synthetic identities to bypass security measures and commit fraud is a direct misuse of AI leading to harm. The harm is realized and ongoing, not merely potential. Although the article also discusses AI-driven fraud detection as a response, its primary focus is the harm caused by AI-enabled fraud. It therefore qualifies as an AI Incident under the framework: AI misuse has directly led to significant harm (financial losses and exploitation).

Ecommerce fraud to exceed $100bn by 2029

2024-10-07
Finextra Research
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI's role in enabling more sophisticated, large-scale ecommerce fraud, including the use of deepfakes to defeat verification systems. This directly leads to financial harm (losses of $44bn in 2024, projected to rise to $107bn), which fits the definition of an AI Incident: AI use has directly led to harm to property (financial loss). The harm is realized and ongoing, not merely a potential future risk, so it is not an AI Hazard. Nor is the article focused on responses or updates rather than the harm itself, so it is not Complementary Information. The event is therefore classified as an AI Incident.

AI-Driven eCommerce Fraud to Top $107 Billion by 2029

2024-10-09
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI-generated deepfakes, synthetic identities, and generative AI for phishing and fraud) in the commission of e-commerce fraud, which directly leads to significant financial harm to merchants and disruption of the e-commerce ecosystem. This constitutes harm to property and communities (economic harm to businesses and consumers). The AI's role is pivotal as it enables fraudsters to outwit traditional security measures and scale attacks. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm, not just a potential risk.

42.5% of Fraud Attempts Are Now AI-Driven: Financial Institutions Rushing to Strengthen Cyber Defences - HS Today

2024-10-09
HSToday
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-driven fraud constitutes 42.5% of all detected fraud attempts in the financial sector, with 29% of those attempts successful, indicating realized harm to individuals and institutions (harm to people and property). The AI systems are used maliciously to perpetrate fraud and are a direct cause of that harm. This qualifies as an AI Incident because the development and use of AI systems by fraudsters have directly led to harm. The article does not merely warn of potential future harm but documents ongoing harm and impact, which rules out classification as an AI Hazard or Complementary Information. It is not unrelated, because AI involvement and harm are central to the report.

42% of fraud attempts are now AI-driven

2024-10-08
accountancydaily.co
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems are being used by criminals to carry out complex fraud schemes, including deepfakes and synthetic identities, which have directly led to successful fraud attempts. This constitutes harm to property and communities (financial losses and trust erosion). Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to harm through fraud.

Global eCommerce Fraud Set to Reach $107 Billion in 2029, Driven by AI-Powered Attacks - Report - Tekedia

2024-10-10
Tekedia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used by fraudsters to conduct sophisticated and large-scale fraudulent activities that are already impacting eCommerce merchants and financial institutions, constituting harm to property and communities. This meets the criteria for an AI Incident because the AI system's use has directly led to harm through fraud. Although the article also discusses mitigation efforts, the primary focus is on the existing and growing harm caused by AI-driven fraud, not just potential future harm or complementary information.

Global eCommerce Fraud To Hit $107 Billion By 2029

2024-10-10
BizWatchNigeria.Ng
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used by fraudsters to automate and enhance fraud schemes, including deepfakes to bypass verification, which is a clear AI system involvement. The harms described (financial fraud, identity theft, chargeback fraud) are significant harms to property and economic interests. Since the article focuses on projected growth and potential threats rather than a specific realized incident, it fits the definition of an AI Hazard, where AI use could plausibly lead to significant harm in the future. The mention of current challenges and recommendations for AI-driven detection systems supports the context but does not elevate the event to an AI Incident. Therefore, the classification is AI Hazard.