Kaiser Permanente Therapists Strike Over AI Screening System Delays and Patient Harm


The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

Therapists at Kaiser Permanente in Northern California went on strike, alleging that an AI-driven mental health screening system delays care and misclassifies high-risk patients, leading to harm. The AI system, used for triage and treatment recommendations, has reportedly replaced clinical judgment, sparking labor disputes and concerns over patient safety.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an algorithmic screening tool and AI-related technologies in Kaiser's mental health patient triage process. Licensed therapists report over 70 examples of negative care outcomes linked to this system, including delays in care for high-risk patients, which is a direct harm to patient health. The union's complaints and regulatory settlements further support that the AI system's deployment has caused realized harm. Although Kaiser denies that clerical staff or AI make clinical assessments, the evidence suggests the algorithm influences triage decisions, leading to harmful delays and misprioritization. Therefore, this event meets the criteria for an AI Incident due to the direct or indirect harm caused by the AI system's use in patient screening and triage.[AI generated]
AI principles
Safety, Accountability

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Psychological

Severity
AI incident

Business function
Other

AI system task
Forecasting/prediction, Organisation/recommenders


Articles about this incident or hazard


'Thank God they're still alive': Kaiser therapists claim its new screening system puts patients at higher risk by delaying their care

2026-03-21
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an algorithmic screening tool and AI-related technologies in Kaiser's mental health patient triage process. Licensed therapists report over 70 examples of negative care outcomes linked to this system, including delays in care for high-risk patients, which is a direct harm to patient health. The union's complaints and regulatory settlements further support that the AI system's deployment has caused realized harm. Although Kaiser denies that clerical staff or AI make clinical assessments, the evidence suggests the algorithm influences triage decisions, leading to harmful delays and misprioritization. Therefore, this event meets the criteria for an AI Incident due to the direct or indirect harm caused by the AI system's use in patient screening and triage.

Therapists Go on Strike, Saying They're Being Replaced by AI

2026-03-21
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to replace or reduce the role of licensed mental health clinicians, leading to labor disputes and strikes. The AI's use in triage and charting affects workers' rights and working conditions, which falls under violations of labor rights and harm to groups of people. The harm is realized as workers have staged a strike protesting these impacts. Hence, the event meets the criteria for an AI Incident due to the direct involvement of AI systems causing harm to labor rights and working conditions.

'Thank God they're still alive': Kaiser therapists claim its new screening system puts patients at higher risk by delaying their care

2026-03-21
AOL.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an algorithmic screening tool in Kaiser's mental health triage process, which is an AI system by definition as it processes patient input to generate risk scores and guide scheduling decisions. The reported delays and misclassifications have directly led to harm by putting high-risk patients at greater risk due to delayed care, fulfilling the criteria for harm to health. The union's complaints and therapists' testimonies provide evidence of realized harm, not just potential harm. Although Kaiser denies that clerical staff or AI make clinical determinations, the algorithm's role in guiding triage decisions and the resulting negative outcomes indicate AI involvement in causing harm. Hence, this event is best classified as an AI Incident.

The Therapist's Revolt: Mental Health Workers Draw a Hard Line Against AI in the Consulting Room

2026-03-21
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in mental health screening and treatment recommendation, which have directly contributed to harm by misclassifying patients and limiting access to necessary care. The strike is a response to these harms and to the perceived replacement of human clinical judgment by AI. The harms include injury or harm to health (a) and harm to communities (d) through inadequate mental health care. The AI system's use is central to the event and the resulting harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Kaiser screening delay increases patient risk?

2026-03-22
AllToc
Why's our monitor labelling this an incident or hazard?
While the screening system likely involves AI or automated decision-support components, the article only presents allegations and counterclaims without confirmed evidence of harm or delays causing injury. There is a plausible risk that the AI-supported screening could lead to harm if delays are significant, but no direct or indirect harm is documented. Therefore, this situation represents a potential risk or concern about AI use in healthcare screening that could plausibly lead to harm if issues are confirmed, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

2,400 Kaiser Mental Health Professionals Strike in Northern California Over AI Concerns

2026-03-18
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Kaiser does not currently use AI for therapy or care decisions, so no AI system has caused harm or disruption yet. The strike is driven by fears that AI could be used in the future to replace human therapists, which could plausibly lead to harm such as reduced quality of care or labor rights violations. This fits the definition of an AI Hazard, as the development or use of AI systems could plausibly lead to harm, but no harm has yet occurred. The event is not a Complementary Information piece because it is not primarily about responses or updates to an existing AI incident or hazard, but about a labor strike motivated by AI concerns. It is not unrelated because AI is central to the concerns motivating the strike.

Thousands of Kaiser therapists strike over AI, contract negotiations

2026-03-18
San Jose Mercury News
Why's our monitor labelling this an incident or hazard?
The article describes a labor strike triggered by concerns over AI use in healthcare settings, specifically regarding job security and contract negotiations. While AI systems are involved in recording and screening patients, no actual harm or incident caused by AI is reported. The strike is a reaction to AI adoption and its potential impact on workers, which fits the definition of Complementary Information as it provides context on societal responses to AI use. There is no direct or indirect harm caused by AI, nor a plausible future harm event described that would qualify as an AI Hazard. Therefore, the event is best classified as Complementary Information.

Mental Health Professionals Hold One-Day Strike Over AI

2026-03-18
Newser
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used for initial mental health screening via an online questionnaire. The union's concern is that this AI-powered screening could miss patients at high risk of self-harm or crisis, which implies a plausible risk of harm to patient health. Although no specific harm is reported as having occurred yet, the protest highlights credible concerns about patient safety risks directly linked to the AI system's use. Therefore, this event qualifies as an AI Hazard because the AI system's use could plausibly lead to harm, but no direct harm is confirmed in the article.

2,400 Kaiser mental health professionals strike in Northern California over AI concerns

2026-03-19
KCRA
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Kaiser does not currently use AI for therapy or care decisions, so no AI system has caused harm or malfunctioned. The strike is driven by concerns that AI might be used in the future to replace human therapists, which could plausibly lead to harm such as reduced quality of care or violations of labor rights. Since no actual harm has occurred yet, but there is a credible risk and concern about AI's future use, this qualifies as an AI Hazard. The event is not a Complementary Information piece because the main focus is on the strike and AI-related concerns, not on responses or updates to prior incidents. It is not unrelated because AI is central to the union's concerns and the strike's motivation.

Kaiser healthcare workers strike over AI use

2026-03-18
KRON4
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (AI chatbots and automated tools in mental health care) and concerns about its use. However, there is no evidence that the AI system has directly or indirectly caused harm to patients or workers. The strike is a protest against the potential negative impact of AI on care quality and safety, reflecting a plausible risk but not a realized incident. Thus, this qualifies as an AI Hazard because the AI use could plausibly lead to harm, but no harm has been reported or confirmed. It is not Complementary Information because the main focus is the strike and concerns about AI use, not a response or update to a prior incident. It is not Unrelated because AI is central to the event.

2,400 Kaiser mental health professionals strike in Northern California over AI concerns

2026-03-18
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The article describes a labor strike motivated by fears that AI might replace human therapists, which implies a plausible risk of harm related to employment and patient care. Kaiser denies current replacement or AI-driven care decisions, indicating no realized harm yet. The AI system's role is in its potential use, not in an incident causing harm. Hence, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Kaiser NUHW Mental Health Clinicians & CNA NNU Nurses Have UFLP Strike Over AI & Layoffs : Indybay

2026-03-19
Indybay
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI chatbots in healthcare decision-making, which is an AI system. The union alleges that this AI use threatens patient health and safety and violates legal protections, with regulatory fines already imposed. These claims indicate that the AI system's use has directly or indirectly led to harm or violations of rights related to patient care and worker safety. Therefore, this qualifies as an AI Incident due to realized harm and legal breaches linked to AI deployment in a critical healthcare setting.

Kaiser NUHW Mental Health Therapists To Strike : Indybay

2026-03-18
Indybay
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being deployed to replace therapist work and to triage patients, leading to reduced quality of mental health care and longer wait times, which harms patient health. The AI system's involvement in decision-making about patient care and replacement of human therapists is directly linked to the harm described. This meets the criteria for an AI Incident as the AI system's use has directly or indirectly led to harm to persons (patients) and a violation of care standards.

Kaiser Mental Health Care Workers Stage 1 Day Strike - CLAYCORD.com

2026-03-18
CLAYCORD.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used or planned for use in mental health care tasks such as communication, triage, and note-taking. The workers' strike is motivated by concerns that AI might replace human providers or degrade care quality, which could plausibly lead to harms such as reduced patient care standards or job losses. However, no actual harm or incident caused by AI is reported. The event is about potential risks and labor disputes over AI use, fitting the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because AI involvement is central to the dispute.

Kaiser Permanente, AI, and the Workers on Strike, Again

2026-03-18
ZNetwork
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in making patient care determinations that affect health outcomes, which falls under harm to health (a). The workers' strike and regulatory fines indicate that harm has already occurred due to the AI system's deployment and its impact on patient care quality and access. The AI system's role in replacing human therapists and making critical assessments without transparency or human involvement constitutes an AI Incident as it has directly led to harm to people and violations of rights. The article focuses on the realized harm and ongoing conflict caused by the AI system's use, not just potential future risks or complementary information.