Cigna's AI System Wrongfully Denies Medical Claims, Causing Patient Harm

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Cigna used an AI-driven system (PXDX) to automatically flag and deny insurance claims without proper medical review, resulting in wrongful denials of medically necessary care. Because claims were rejected en masse with minimal human oversight, the practice caused financial harm and potential health harm to patients.[AI generated]

Why's our monitor labelling this an incident or hazard?

The report indicates that Cigna employs a computer system to instantly reject claims on medical grounds without reviewing patient files, which implies the use of an AI or algorithmic decision-making system in healthcare claim approvals. This automated denial process can cause harm to patients by obstructing access to necessary medical treatments, constituting injury or harm to health. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in claim denials.[AI generated]
AI principles
Accountability; Safety; Transparency & explainability; Respect of human rights; Human wellbeing; Democracy & human autonomy

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Economic/Property; Physical (injury)

Severity
AI incident

Business function
Accounting

AI system task
Event/anomaly detection; Goal-driven organisation


Articles about this incident or hazard

Is a Major Health Insurer Rejecting Treatments Too Quickly?

2023-03-26
InsideHook
How Cigna Saves Millions by Having Its Doctors Reject Claims Without Reading Them

2023-03-25
Talking Points Memo
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI-based algorithm (PXDX) that automatically flags and denies insurance claims without proper medical review, leading to wrongful denials of medically necessary care. This has directly harmed patients financially and medically, fulfilling the criteria for an AI Incident. The AI system's use in claim denial is central to the harm described, including violation of patients' rights and health risks. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
How Cigna saves millions by having its doctors reject claims without reading them

2023-03-25
Bangor Daily News
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (PXDX) that algorithmically matches diagnoses to approved procedures and automatically flags claims for denial without medical directors reviewing patient records. The system's use has directly led to wrongful denials of medically necessary claims, causing financial harm to patients and potential health risks from delayed diagnosis or treatment. The harm is realized and ongoing, fulfilling the criteria for an AI Incident. The system's design and deployment reflect a failure to comply with legal and ethical standards for claim review, further supporting classification as an AI Incident rather than a hazard or complementary information.
How Cigna saves millions by having its doctors reject claims without reading them

2023-03-25
The CT Mirror
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI-enabled algorithmic system (PXDX) that automatically flags and denies insurance claims without medical directors reviewing patient records, with doctors merely rubber-stamping denials. This system has directly caused harm to patients by denying payment for medically necessary tests, leading to unexpected bills and financial burdens. The harm includes violation of patients' rights to fair and objective claim review and harm to individuals through financial injury. The AI system's role is pivotal in enabling mass denials with minimal human oversight, fulfilling the criteria for an AI Incident under the OECD framework.