Study Warns AI in Medical Imaging Risks Worsening Health Inequities

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

University of Maryland researchers found that the datasets used to train and evaluate AI algorithms for medical imaging often lack demographic data and are rarely checked for bias, risking unrepresentative and potentially unfair diagnoses. If left unaddressed, this systemic issue could amplify health inequities, especially for vulnerable and underrepresented groups.[AI generated]
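
To make the missing "bias evaluation" concrete, below is a minimal sketch of the kind of subgroup audit the study found lacking: comparing a diagnostic model's performance per demographic group against its overall performance. The column names ("score", "label", "sex") and the toy data are illustrative assumptions, not taken from the study.

```python
# Minimal sketch of a per-subgroup bias audit for a diagnostic model.
# Column names and toy data are hypothetical; a real audit needs the
# demographic metadata the study found is often missing from public
# medical-imaging datasets.
import pandas as pd
from sklearn.metrics import roc_auc_score

def audit_by_group(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Report each subgroup's AUC and its gap from the overall AUC."""
    overall = roc_auc_score(df["label"], df["score"])
    rows = []
    for group, sub in df.groupby(group_col):
        auc = roc_auc_score(sub["label"], sub["score"])
        rows.append({"group": group, "n": len(sub),
                     "auc": auc, "gap_vs_overall": auc - overall})
    return pd.DataFrame(rows)

if __name__ == "__main__":
    toy = pd.DataFrame({
        "score": [0.9, 0.2, 0.8, 0.4, 0.7, 0.3, 0.6, 0.1],  # model outputs
        "label": [1, 0, 1, 0, 1, 0, 1, 0],                   # ground truth
        "sex":   ["F", "F", "F", "F", "M", "M", "M", "M"],   # demographic tag
    })
    print(audit_by_group(toy, group_col="sex"))
```

A large AUC gap for any subgroup would flag exactly the unrepresentative, potentially unfair behaviour the researchers warn about; without demographic columns in the dataset, this check cannot be run at all.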

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems in medical imaging and their development using crowd-sourced datasets. It identifies a plausible risk that these AI systems could cause harm by amplifying biases and inequities in healthcare outcomes, which fits the definition of an AI Hazard. There is no report of actual harm occurring yet, only a credible potential for harm due to biased data and lack of bias evaluation in AI algorithms. Therefore, this event is best classified as an AI Hazard.[AI generated]
AI principles
Fairness, Transparency & explainability, Privacy & data governance, Accountability, Respect of human rights, Robustness & digital security, Safety, Human wellbeing

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Research and development, Monitoring and quality control

AI system task
Recognition/object detection


Articles about this incident or hazard

AI in medical imaging may magnify health inequities: Study

2023-05-03
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in medical imaging and their development using crowd-sourced datasets. It identifies a plausible risk that these AI systems could cause harm by amplifying biases and inequities in healthcare outcomes, which fits the definition of an AI Hazard. There is no report of actual harm occurring yet, only a credible potential for harm due to biased data and lack of bias evaluation in AI algorithms. Therefore, this event is best classified as an AI Hazard.
AI in medical imaging could magnify health inequities, study finds

2023-05-02
Medical Xpress - Medical and Health News
Why's our monitor labelling this an incident or hazard?
The article discusses the development and use of AI systems in medical imaging and the potential for these systems to cause harm by amplifying health inequities through biased data and algorithms. No actual harm or incident has been reported; rather, the study warns about plausible future harms if these biases are not addressed. Therefore, this qualifies as an AI Hazard because it plausibly could lead to harm but has not yet resulted in a realized incident.
Artificial Intelligence in Medical Imaging May Magnify Health Inequities, Says Study

2023-05-03
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used in medical imaging diagnosis. It identifies a significant risk that the development and use of these AI systems, due to biased or unrepresentative training data and lack of bias evaluation, could plausibly lead to harm by magnifying health inequities. However, the article does not report any actual incident of harm occurring yet, only the potential for such harm. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
AI in medical imaging could magnify health inequities, study finds

2023-05-03
EurekAlert!
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in medical imaging and discusses their development and use. It identifies a significant risk of bias and inequity that could lead to harm to communities and violations of rights, but it does not report a specific AI Incident where harm has already occurred. Instead, it raises awareness and calls for addressing these biases, which fits the definition of Complementary Information. The article enhances understanding of AI's societal implications and promotes governance and fairness without describing a direct or indirect harm event. Hence, it is not an AI Incident or AI Hazard but Complementary Information.
AI in medical imaging may magnify health inequities: Study

2023-05-03
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The article discusses the use of AI systems in medical imaging and identifies that the datasets used to train these AI algorithms often lack demographic diversity and are not evaluated for bias. This can lead to biased AI outputs that disproportionately affect vulnerable groups, thereby indirectly causing harm through inequitable healthcare. Although no specific incident of harm is reported, the study reveals a credible risk that these AI systems could lead to health inequities, which qualifies as an AI Hazard under the framework because the AI's development and use could plausibly lead to harm to groups of people.
AI in medical imaging could magnify health inequities, study finds

2023-05-02
Scienmag: Latest Science and Health News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in medical imaging and their development and use. It identifies a plausible risk that biased AI algorithms, due to unrepresentative training data and lack of bias evaluation, could lead to health inequities, a form of harm to communities and individuals. However, the article does not report any realized harm or incident but rather warns of potential future harm if these issues are not addressed. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
AI diagnosed as 'unfair'

2023-05-03
punemirror.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used for medical diagnosis and highlights that their development and use rely on biased datasets without bias evaluation, which can indirectly lead to harm by increasing healthcare inequities and unfair treatment of certain demographic groups. Although no specific harm event is reported, the study identifies a systemic issue in AI development that could plausibly lead to harm in healthcare outcomes. This qualifies as an AI Hazard rather than an Incident, since the harm is potential and systemic rather than a specific realized event.