
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Cigna used an AI-driven system (PXDX) to automatically flag and deny insurance claims without proper medical review, resulting in wrongful denials of medically necessary care. This practice caused financial and potential health harm to patients, as claims were rejected en masse with minimal human oversight.[AI generated]
Why is our monitor labelling this an incident or hazard?
The report indicates that Cigna used a computer system to reject claims instantly on medical grounds without reviewing patient files, implying the use of an AI or algorithmic decision-making system in healthcare claim approvals. This automated denial process can harm patients by obstructing access to necessary medical treatment, constituting injury or harm to health. It therefore qualifies as an AI Incident due to the direct harm caused by the AI system's use in claim denials.[AI generated]