
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A federal judge allowed parts of a lawsuit against UnitedHealthcare to proceed. The suit alleges that the insurer’s use of a faulty AI algorithm wrongfully denied post-acute care coverage for elderly Medicare patients, overriding doctors’ recommendations for medically necessary care and posing significant health risks.[AI generated]
Why is our monitor labelling this as an incident or hazard?
The article explicitly mentions an AI system (nH Predict) used by UnitedHealthcare to make coverage decisions that allegedly led to the denial of necessary medical care for elderly patients, harming their health and financial well-being. The plaintiffs claim the system had a high error rate and was misused to override doctors' recommendations, which constitutes direct harm. The legal claims centre on breach of contract and of the duty of good faith in connection with the AI's use, confirming the system's pivotal role in the alleged harm. This fits the definition of an AI Incident, as the AI system's use directly led to harm to persons and a violation of their rights.[AI generated]