
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Therapists at Kaiser Permanente in Northern California went on strike, alleging that an AI-driven mental health screening system delays care and misclassifies high-risk patients, leading to harm. The AI system, used for triage and treatment recommendations, has reportedly replaced clinical judgment, sparking labor disputes and concerns over patient safety.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an algorithmic screening tool and AI-related technologies in Kaiser's mental health patient triage process. Licensed therapists report over 70 examples of negative care outcomes linked to this system, including delays in care for high-risk patients, constituting direct harm to patient health. The union's complaints and regulatory settlements further indicate that the AI system's deployment has caused realized harm. Although Kaiser denies that clerical staff or AI make clinical assessments, the evidence suggests the algorithm influences triage decisions, leading to harmful delays and misprioritization. This event therefore meets the criteria for an AI incident, given the direct or indirect harm caused by the AI system's use in patient screening and triage.[AI generated]