AI Pathology Models Exhibit Demographic Bias in Cancer Diagnosis

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple studies led by Harvard Medical School reveal that AI models used for cancer diagnosis from pathology slides perform unequally across demographic groups, resulting in less accurate diagnoses for certain populations. Researchers identified causes of this bias and developed a tool to mitigate it, highlighting the need for systematic bias checks in medical AI.[AI generated]
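The disparity described above is typically quantified by comparing a model's accuracy within each demographic group. As a minimal illustrative sketch (not taken from the studies, which do not specify their exact metrics here), the hypothetical helper `per_group_accuracy` below computes per-group accuracy and the max-min gap on synthetic data:

```python
# Illustrative sketch only: measuring the kind of demographic performance
# gap described above, using synthetic labels and a hypothetical helper.
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Return accuracy per demographic group and the max-min gap."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    acc = {g: correct[g] / total[g] for g in total}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Synthetic example: the model is right 3/4 of the time for group A,
# but only 1/2 of the time for group B.
y_true = [1, 0, 1, 1, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
acc, gap = per_group_accuracy(y_true, y_pred, groups)
print(acc, gap)  # {'A': 0.75, 'B': 0.5} 0.25
```

A nonzero gap on held-out data is the kind of signal the systematic bias checks mentioned above are meant to surface; real evaluations would also account for sample size and confidence intervals per group.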

Why's our monitor labelling this an incident or hazard?

The pathology AI models are explicitly AI systems used for cancer diagnosis. The event details how these models have caused unequal diagnostic performance across demographic groups, which can lead to harm to patients' health (harm category a). This is a direct harm resulting from the AI system's use. The article also discusses mitigation efforts, but the primary focus is on the realized bias and its impact, not just potential or future harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Fairness, Robustness & digital security, Safety, Respect of human rights, Accountability

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Physical (injury)

Severity
AI incident

Business function
Research and development

AI system task
Recognition/object detection


Articles about this incident or hazard

Computational pathology in breast cancer: optimizing molecular prediction through task-oriented AI models - npj Breast Cancer

2025-12-16
Nature
Why's our monitor labelling this an incident or hazard?
The article centers on the potential and challenges of AI in breast cancer pathology, describing ongoing research and development efforts rather than any concrete incident or hazard. There is no mention of AI malfunction, misuse, or harm occurring or plausibly imminent. The discussion is about improving AI models and overcoming barriers to clinical adoption, which fits the definition of Complementary Information as it provides context and updates on AI ecosystem developments without reporting new incidents or hazards.

Pathology AI models show demographic bias in cancer diagnosis

2025-12-16
News-Medical.net
Why's our monitor labelling this an incident or hazard?
The pathology AI models are explicitly AI systems used for cancer diagnosis. The event details how these models have caused unequal diagnostic performance across demographic groups, which can lead to harm to patients' health (harm category a). This is a direct harm resulting from the AI system's use. The article also discusses mitigation efforts, but the primary focus is on the realized bias and its impact, not just potential or future harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

What AI Learned From Cancer Slides Shocked Researchers

2025-12-16
SciTechDaily
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used in cancer diagnosis, which is a clear AI system application. The study finds that these AI systems cause unequal diagnostic accuracy across demographic groups, which can lead to harm to health and violations of fairness in medical treatment. This constitutes harm to groups of people (patients) and implicates health outcomes, fitting the definition of an AI Incident. The researchers' development of a mitigation framework is a response but does not negate the existence of the incident. Hence, the event is classified as an AI Incident.

Researchers discover bias in AI models that analyze pathology samples

2025-12-16
EurekAlert!
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (deep-learning pathology models) used for cancer diagnosis that have demonstrated biased performance across demographic groups, leading to less accurate diagnoses for certain populations. This diagnostic inaccuracy constitutes harm to health and potentially violates equitable care principles, fitting the definition of an AI Incident. The development of a mitigation tool (FAIR-Path) is a response to this incident but does not negate the fact that harm has occurred. Hence, the event is classified as an AI Incident due to realized harm caused by AI system use in medical diagnosis.

AI Models Show Bias in Pathology Sample Analysis

2025-12-16
Mirage News
Why's our monitor labelling this an incident or hazard?
The pathology AI models are explicitly described as AI systems used for cancer diagnosis. The bias in these models has caused unequal diagnostic accuracy across demographic groups, which can directly harm patients' health outcomes and lead to inequitable care. This fits the definition of an AI Incident because the AI system's use has directly led to harm to health and communities. The article also discusses mitigation efforts, but the primary focus is on the realized bias and harm, not just potential harm or responses, so it is not Complementary Information. Therefore, this event is classified as an AI Incident.

AI detects cancer but it's also reading who you are

2025-12-18
ScienceDaily
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used in medical diagnosis whose use has directly led to harm in the form of biased and less accurate cancer diagnoses for certain demographic groups, which constitutes a violation of rights and harm to health. The article describes realized disparities in diagnostic accuracy affecting patient groups defined by race, gender, and age, which can negatively impact health outcomes. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm. The article also discusses mitigation efforts, but the primary focus is on the identified bias and its impact, not just on responses or updates, so it is not merely Complementary Information.

A tool reduces bias in AI-driven analysis of cancer samples

2025-12-17
LaSexta
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used for cancer diagnosis and identifies a significant bias issue that could lead to harm (unequal diagnostic accuracy across demographic groups). While this bias represents a plausible risk of harm (an AI Hazard), the article does not report any actual incidents of harm or injury caused by these AI models. Instead, it focuses on research findings and the development of FAIR-Path, a tool to reduce bias and improve fairness. This constitutes a governance and technical response to a known AI hazard, enhancing understanding and mitigation of AI risks. Therefore, the event is best classified as Complementary Information rather than an Incident or Hazard.

Researchers reveal bias in AI for cancer diagnosis and present a solution - Science - ABC Color

2025-12-16
ABC Digital
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used in medical diagnosis, explicitly described as machine learning models analyzing pathology slides. The researchers found that these AI models perform unevenly across demographic groups, leading to diagnostic inaccuracies that can harm patients by providing less effective or incorrect cancer diagnoses. This constitutes harm to health (a), fulfilling the criteria for an AI Incident. The article does not merely warn of potential harm but documents actual performance deficiencies causing harm. The development of FAIR-Path is a response to this incident but does not negate the existence of the harm. Therefore, the classification is AI Incident.

Significant bias discovered in AI models that analyze samples...

2025-12-17
Infosalus
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used for cancer diagnosis, which is a clear AI system application. The study found that these AI models produced biased diagnostic results, leading to less accurate diagnoses for certain demographic groups, which can harm patient health and outcomes. This harm is directly linked to the AI systems' use and their biased performance. Therefore, the event meets the criteria for an AI Incident due to harm to health and violation of equitable treatment. The article also discusses mitigation efforts, but the primary focus is on the discovered bias and its impact, not just a response or update, so it is not merely Complementary Information.

A tool reduces bias in AI-driven analysis of cancer samples

2025-12-17
Agencia Sinc
Why's our monitor labelling this an incident or hazard?
The AI systems involved are pathology diagnostic models that have been shown to have biased performance across demographic groups, which can indirectly lead to harm by causing misdiagnosis or less effective treatment for affected populations. This fits the definition of an AI Hazard because the biases could plausibly lead to harm in healthcare outcomes. The article does not report actual cases of harm or injury caused by these AI models, but rather the identification of bias and a proposed solution to reduce it. Hence, it is not an AI Incident. It is also not merely complementary information since the main focus is on the bias risk and the new tool to mitigate it, which relates to plausible harm. Therefore, the classification is AI Hazard.

A new tool reduces bias in AI-based pathology analysis

2025-12-17
La Voz de Michoacán
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems used in medical diagnosis, specifically AI models analyzing pathological images for cancer detection. It documents that these AI systems have caused harm indirectly by producing biased and less accurate diagnoses for certain demographic groups, which can lead to health disparities and potentially harm patient outcomes. This constitutes harm to health (a) and a violation of rights related to equitable healthcare (c). The development and deployment of FAIR-Path is a response to this harm but does not negate the fact that the biased AI models have already caused harm. Therefore, this event qualifies as an AI Incident due to realized harm from AI system bias in medical diagnosis.

A new tool reduces bias in AI-based pathology analysis, study finds

2025-12-16
www.xeu.mx
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used in medical diagnosis, which directly impact human health. The identified biases in AI models have led to less accurate diagnoses for certain demographic groups, constituting harm to health due to unequal treatment. The development of FAIR-Path aims to reduce this harm by mitigating bias. Since the harm (diagnostic inaccuracies affecting patient groups) has already occurred and the AI system's role is pivotal, this qualifies as an AI Incident. The article does not merely discuss potential harm or general AI developments but reports realized harm and a response to it.

AI detects cancer but it's also reading who you are - Conservative Angle

2025-12-18
Brigitte Gabriel
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in diagnosing cancer, which directly impacts patient health. The bias in the AI's diagnostic results constitutes harm to groups of people, fulfilling the criteria for an AI Incident. The research revealing the bias and mitigation methods is complementary information, but the core issue is the realized bias causing harm, not just a potential hazard or a general update.

AI Cancer Detection: Beyond Diagnosis, Reading Your Identity - News Directory 3

2025-12-19
News Directory 3
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used in medical diagnosis, which have caused harm through biased performance leading to less accurate diagnoses for certain demographic groups. This constitutes a violation of rights related to equitable healthcare and harm to patient outcomes, fitting the definition of an AI Incident. The article also discusses a solution framework, but the primary focus is on the realized harm from biased AI diagnosis and its implications.

NCKU and Harvard Medical School collaboration reveals AI diagnostic risks, featured on the cover of an international journal | 聯合新聞網

2025-12-22
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used in medical diagnosis (pathology image analysis). It highlights the risk of systematic diagnostic bias affecting different patient groups, which could lead to harm in terms of unfair or incorrect medical diagnoses (harm to health and violation of rights to equitable treatment). However, the article does not report specific incidents of harm occurring but rather reveals the potential for such harm and presents a mitigation approach. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, and the research aims to address this risk. It is not Complementary Information because the main focus is on revealing the risk and proposing a solution, not on updates or responses to a past incident. It is not an AI Incident because no actual harm is reported as having occurred.

From mutation to disease, AI explains it in one pass: V2P improves the speed and accuracy of genetic testing

2025-12-20
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (V2P) that uses machine learning to analyze genetic data and predict disease phenotypes. The system's use is clearly described, but there is no mention or implication of any injury, rights violation, disruption, or other harm resulting from its development or use. Instead, it is a positive advancement in healthcare technology. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is not a mere product launch or general AI news, as it provides detailed information about the system's capabilities and research findings, but since no harm or plausible harm is described, it fits best as Complementary Information.

NCKU Computer Science and Harvard Medical School collaboration reveals the risk of AI diagnostic bias

2025-12-22
蕃新聞
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used in medical diagnosis, explicitly described as pathology image AI models. The research reveals that these AI systems have caused diagnostic bias, which is a form of harm to health and potentially violates principles of fairness and equality in healthcare. Although the article focuses on research findings rather than reporting a specific incident of harm occurring to patients, the documented systemic bias in deployed AI models implies realized harm, or at least ongoing risk to patient health and diagnostic quality. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm in the form of diagnostic disparities affecting patient groups. The article also discusses mitigation strategies, but the primary focus is on the identification of bias and its harmful impact, not just on responses or updates, so it is not merely Complementary Information.