Anthropic's Claude AI Wrongly Suspends Fintech Firm's Accounts, Disrupting Operations

Anthropic's automated safeguards in its Claude AI system mistakenly suspended more than 60 accounts belonging to Argentina-based fintech firm Belo, disrupting operations and cutting employees' access to key workflows and data. The abrupt action, taken without clear explanation or warning, highlighted the risks of relying on a single AI provider.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (Anthropic's Claude) whose automated policy enforcement suspended multiple user accounts without clear justification, causing operational harm to the affected company. The harm is indirect but material: the suspension disrupted business processes and cut off access to AI-driven tools. The later restoration of access confirms the suspension was a malfunction (a false positive). This meets the criteria for an AI Incident because the system's malfunction led to material harm (operational disruption and loss of access). The event is not merely a product update or general news, nor a potential future risk; the harm has already occurred. Therefore, the classification is AI Incident.[AI generated]
AI principles
Transparency & explainability; Accountability

Industries
Financial and insurance services

Affected stakeholders
Business; Workers

Harm types
Economic/Property

Severity
AI incident

Business function
ICT management and information security

AI system task
Event/anomaly detection


Articles about this incident or hazard

Anthropic's Claude Shut Down Firm Without Explanation, Claims CTO And Issues Warning

2026-04-19
NDTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Anthropic's Claude) whose automated policy enforcement suspended multiple user accounts without clear justification, causing operational harm to the affected company. The harm is indirect but material: the suspension disrupted business processes and cut off access to AI-driven tools. The later restoration of access confirms the suspension was a malfunction (a false positive). This meets the criteria for an AI Incident because the system's malfunction led to material harm (operational disruption and loss of access). The event is not merely a product update or general news, nor a potential future risk; the harm has already occurred. Therefore, the classification is AI Incident.

'No Warning, No Reason': Fintech CTO Claims Anthropic Disabled 60+ Claude Accounts

2026-04-20
News18
Why's our monitor labelling this an incident or hazard?
The event describes an AI system (Claude chatbot) whose automated safeguards mistakenly revoked access to multiple accounts, disrupting the fintech firm's operations. This is a malfunction of the AI system leading to harm (operational disruption). The harm is direct and materialized, as the firm's operations were affected until access was restored. Therefore, this qualifies as an AI Incident due to the AI system's malfunction causing harm to the firm's operations.

Fintech CTO slams Anthropic over mass Claude suspensions, warns developers 'never put all your eggs in one basket'

2026-04-20
mint
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude AI) whose use and malfunction (false positive account suspensions) directly caused harm by disrupting the management and operation of a fintech startup's critical workflows. This disruption qualifies as harm under the definition of AI Incident (disruption of management and operation of critical infrastructure or business operations). The event is not merely a product update or general news but describes a concrete incident where the AI system's malfunction led to significant operational harm. Therefore, it qualifies as an AI Incident.

Claude 'takes down' fintech startup as Anthropic suspends over 60 accounts; CTO warns 'never put your eggs in one basket'

2026-04-19
The Financial Express
Why's our monitor labelling this an incident or hazard?
An AI system (Anthropic's Claude) was involved in automated detection and suspension of accounts, which directly led to operational disruption and harm to the company's ability to function. This constitutes harm to the company's property and operations (harm to property and communities). The harm was realized (not just potential), and the AI system's malfunction (false positive automated suspension) was the direct cause. Therefore, this qualifies as an AI Incident.

Upset tech startup CEO to Anthropic: You took down accounts of my entire company without any warning; shares email from Claude team saying ...

2026-04-18
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Claude) whose automated safeguards incorrectly identified policy violations, leading to suspension of over 60 user accounts of a legitimate company. This caused significant disruption to the company's operations, loss of access to integrated AI tools, and loss of conversation histories, which are critical for their work. The harm is realized and direct, stemming from the AI system's malfunction and the company's inability to use the AI service. Although the accounts were restored, the incident itself meets the criteria for an AI Incident because the AI system's malfunction directly led to operational harm. The event is not merely a product update or general news, nor is it a potential future risk; the harm occurred and was material. Hence, it is classified as an AI Incident.

60+ accounts removed: Anthropic face backlash over Claude performance dip

2026-04-20
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Anthropic's Claude models) whose use and backend changes directly led to operational disruptions and harm to users' businesses. The sudden account deactivations and performance dips caused tangible harm by halting workflows and freezing integrations, which fits the definition of an AI Incident due to indirect harm caused by the AI system's malfunction or use. The harm is realized, not just potential, and the AI system's role is pivotal in causing these issues.

Anthropic faces backlash over account deactivations, Claude performance issues

2026-04-20
NewsBytes
Why's our monitor labelling this an incident or hazard?
The AI system (Claude) is clearly involved, and the event concerns its use and malfunction (account deactivation). However, the article does not describe any harm resulting from this event, nor does it suggest plausible future harm. The concerns are about service reliability and communication practices, which do not meet the threshold for AI Incident or AI Hazard. The event is best classified as Complementary Information as it provides context on user reactions and service issues related to an AI system without describing harm.

'Very bad UX and customer service': CTO says Anthropic shut down firm's Claude access with no warning

2026-04-20
storyboard18.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Claude chatbot) whose automated safeguards incorrectly flagged a violation, causing the system to be shut down for the fintech firm without warning. This led to direct harm: disruption of operations, loss of access to critical tools and data, and impact on employees' work. The harm is material and realized, not just potential. The AI system's malfunction is the pivotal cause of the incident. Although access was restored later, the incident itself meets the criteria for an AI Incident due to the direct operational harm caused by the AI system's erroneous action.

Anthropic blocks entire company out of Claude without a reasonable explanation

2026-04-22
WION
Why's our monitor labelling this an incident or hazard?
The event describes an AI system (Claude) being used by a company and then access being blocked due to alleged policy violations without clear explanation. The blocking caused disruption to the company's operations, but there is no evidence of injury, rights violations, or other harms directly caused by the AI system's malfunction or outputs. The issue is about service access and policy enforcement rather than AI system failure or misuse causing harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides contextual information about AI system governance and user impact, fitting the definition of Complementary Information.