AI-Driven Identity Attacks Surpass Stolen Credentials as Top Enterprise Threat

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

HYPR's 2026 report reveals that generative and agentic AI now pose the leading identity security threats, overtaking stolen credentials. Organizations report increased incidents of AI-enabled impersonation, including deepfakes and voice cloning, prompting a shift toward identity verification solutions to combat industrial-scale automated attacks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions generative AI and agentic AI as primary factors in identity-based attacks, including deepfakes and AI voice cloning, which have directly led to increased identity impersonation incidents and data theft. These harms fall under violations of rights and harm to communities through fraud and impersonation. The AI systems are used maliciously to automate and scale attacks, causing realized harm. Hence, this qualifies as an AI Incident due to the direct and ongoing harm caused by AI-enabled identity attacks.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
Digital security

Affected stakeholders
Business

Harm types
Economic/Property
Reputational
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Identity Verification Emerges as a New Enterprise Standard to Combat Rising Impersonation Threats, Despite Plateau in Passwordless Adoption

2026-03-10
The Star
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI and agentic AI as primary factors in identity-based attacks, including deepfakes and AI voice cloning, which have directly led to increased identity impersonation incidents and data theft. These harms fall under violations of rights and harm to communities through fraud and impersonation. The AI systems are used maliciously to automate and scale attacks, causing realized harm. Hence, this qualifies as an AI Incident due to the direct and ongoing harm caused by AI-enabled identity attacks.

AI has overtaken stolen passwords as the top identity threat, report says

2026-03-10
BetaNews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI, automated agents) being used in identity attacks that have already caused harm to organizations by compromising security and enabling impersonation and data theft. This constitutes harm to property and communities (enterprises and their users) through breaches and fraud. Since the AI systems' use has directly led to realized harms (identity threats and impersonation incidents), this qualifies as an AI Incident rather than a hazard or complementary information. The article focuses on the current impact of AI-driven identity attacks, not just potential future risks or responses.

Identity Verification Emerges as a New Enterprise Standard to Combat Rising Impersonation ...

2026-03-10
Eagle-Tribune
Why's our monitor labelling this an incident or hazard?
The article summarizes research findings about AI-driven identity threats and the industry's strategic shift toward identity verification. It does not report a concrete AI Incident or AI Hazard, nor does it describe a direct or indirect harm caused by AI systems. The content is best classified as Complementary Information because it provides context for understanding AI's impact on identity security without detailing a specific harmful event or imminent risk.

AI Automation to Combat Rising Impersonation Threats: Study

2026-03-12
Supply and Demand Chain Executive
Why's our monitor labelling this an incident or hazard?
The article focuses on the evolving landscape of identity security risks due to AI and on anticipated future challenges, but it does not report any realized harm or a specific event involving AI malfunction or misuse. The content concerns potential risks and organizational responses, making it a discussion of plausible future harm and strategic adaptation rather than an actual incident or hazard event. It therefore fits best as Complementary Information, providing context and insight into AI-related security concerns without describing a direct AI Incident or AI Hazard.