Google AI Overviews Spread Harmful Health Misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google's AI Overviews, which generate health information summaries atop search results, have provided inaccurate and misleading medical advice, including dangerous recommendations for cancer and liver disease patients. Experts warn these errors could worsen health outcomes or increase mortality, directly putting users at risk of harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system (Google's AI Overviews) that generates health information summaries. The inaccuracies in these AI-generated summaries have directly led to misinformation that experts warn could cause physical harm or worsen health outcomes, such as jeopardizing cancer treatment or misinforming about liver disease. This constitutes direct harm to people's health caused by the AI system's outputs. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information, as the harm is realized and ongoing.[AI generated]
AI principles
Accountability, Safety, Robustness & digital security, Transparency & explainability, Human wellbeing, Respect of human rights

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Physical (injury), Physical (death)

Severity
AI incident

Business function
Citizen/customer service

AI system task
Content generation


Articles about this incident or hazard

Google AI Overviews put people at risk of harm with misleading health advice

2026-01-02
the Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's AI Overviews) that generates health information summaries. The inaccuracies in these AI-generated summaries have directly led to misinformation that experts warn could cause physical harm or worsen health outcomes, such as jeopardizing cancer treatment or misinforming about liver disease. This constitutes direct harm to people's health caused by the AI system's outputs. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information, as the harm is realized and ongoing.

Google AI Overviews put people at risk of harm with misleading health advice

2026-01-02
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's generative AI Overviews) whose use has directly led to harm to people's health by disseminating false and misleading medical information. The harm is materialized and significant, as experts describe the advice as 'really dangerous' and 'alarming,' with potential to increase mortality risk and cause misdiagnosis or neglect of symptoms. This fits the definition of an AI Incident, as the AI system's use has directly led to injury or harm to health (harm category a).

Google AI Overviews put people at risk of harm with misleading health advice

2026-01-02
democraticunderground.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI system (Google's generative AI summaries) has provided inaccurate health information that puts people at risk of harm, including potentially life-threatening advice. This constitutes direct harm to health caused by the AI system's outputs, meeting the criteria for an AI Incident under harm to health (a).

Google AI Overviews put people at risk of harm with misleading health ...

2026-01-02
blog.quintarelli.it
Why's our monitor labelling this an incident or hazard?
The summaries are generated by an AI system (LLM) integrated into Google's search results. The inaccurate health advice has directly led to potential harm to individuals' health, such as advising pancreatic cancer patients to avoid high-fat foods contrary to medical guidance, and misleading information about liver function tests and women's cancer tests. This constitutes an AI Incident because the AI system's use has directly led to harm or risk of harm to people's health.

Google AI Overviews Put People At Risk Of Harm With Misleading Health Advice - Report

2026-01-03
https://radiojamaicanewsonline.com
Why's our monitor labelling this an incident or hazard?
The AI system (Google's generative AI summaries) is explicitly mentioned and is used to generate health information. The misleading and inaccurate health advice has directly led to a risk of harm to people's health, fulfilling the criteria for an AI Incident under harm to health (a). The investigation shows that the AI outputs have caused or could cause injury or harm to persons relying on this information. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Google AI Health Summaries Risk Patient Safety

2026-01-03
Colitco
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's AI Overviews) that generates health summaries. The AI system's use has directly led to harm by providing incorrect medical advice and misleading information to patients, which can cause injury or harm to health (harm category a). The article documents concrete examples of such harm, including life-threatening misinformation and misleading test result interpretations. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is realized and directly linked to the AI system's outputs.

Google's AI Overview Offers Inaccurate Health Advice

2026-01-03
The Chosun Daily
Why's our monitor labelling this an incident or hazard?
An AI system (Google's generative AI summarization feature) is explicitly involved in providing health information. The inaccurate outputs have directly led to potential harm by misleading patients with unsafe health advice, which could negatively impact their treatment and survival. The article documents actual cases of inaccurate AI-generated health information verified by experts, indicating realized harm rather than just potential risk. Therefore, this qualifies as an AI Incident due to direct harm to health caused by the AI system's outputs.

Google AI Overviews Warned as 'Dangerous' After Giving Cancer Patients Wrong Advice

2026-01-03
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's AI-powered search summaries) that generates health-related content. The AI's outputs have directly misled users with false medical advice, which can cause injury or harm to health (harm category a). Users with serious conditions have been affected, and the misinformation has caused alarm and potential health risks. The AI system's development and use are central to the harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as users report actual misleading advice and consequences such as delayed treatment or unnecessary worry. Therefore, this event is classified as an AI Incident.

Experts warn of dangerous health advice in Google AI Overviews

2026-01-04
The News International
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that Google AI Overviews generate health advice that is often inaccurate and misleading, leading to potential delays in diagnosis and harmful treatment decisions. The AI system is directly involved in producing these summaries, which are positioned prominently and trusted by users, thereby causing real harm to individuals' health. This fits the definition of an AI Incident as the AI system's use has directly led to harm to health (a).

Google AI Overviews Give Inaccurate Health Advice, Guardian Probe Reveals

2026-01-04
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies Google's AI Overviews as generative AI systems that produce health-related summaries. These AI outputs have been shown to contain inaccuracies that have already caused harm by misleading patients and complicating medical care. The harm is direct and realized, not merely potential, as patients have reportedly acted on incorrect advice. The involvement of the AI system in generating these harmful outputs is clear, and the consequences include injury or harm to health, which fits the definition of an AI Incident. Although regulatory and mitigation efforts are discussed, the primary focus is on the existing harm caused by the AI system's use.

Google AI health summaries under scrutiny over accuracy concerns

2026-01-04
The Express Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('AI Overview') generating health summaries that have provided misleading and potentially harmful medical information. The harm is related to health risks for patients relying on this information, which fits the definition of an AI Incident involving harm to health (a). The AI system's outputs have directly contributed to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The presence of the AI system, the nature of its use, and the resulting harm are clearly described, justifying classification as an AI Incident.

Google AI Overviews Criticized for Misleading Health Advice

2026-01-05
eWEEK
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's generative AI summaries) whose use has directly led to misleading health information that could harm patients' health and well-being. The harm is realized or ongoing as users may act on incorrect advice, potentially resulting in injury or worsening health conditions. The AI system's role is pivotal as it generates the misleading summaries prominently displayed to users, influencing their decisions. Therefore, this qualifies as an AI Incident under the framework, specifically harm to health and communities due to misinformation.

Google AI summaries deliver misleading health information, raising safety concerns - NaturalNews.com

2026-01-05
NaturalNews.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's AI Overviews) that generates health information summaries. The use of this AI system has directly led to the dissemination of inaccurate and misleading health advice, which experts warn could cause physical harm to patients (e.g., pancreatic cancer patients receiving dangerous dietary advice). This constitutes harm to health (a), fulfilling the criteria for an AI Incident. The harm is realized or ongoing, not merely potential, as the misleading information is actively presented to millions of users. Therefore, this event is classified as an AI Incident.

Google's AI Overviews Caught Giving Dangerous "Health" Advice

2026-01-05
Futurism
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's AI Overviews) whose use has directly led to harm in the form of dangerous health advice. The AI system's hallucinations and inaccuracies have caused misinformation that could seriously harm individuals' health, fulfilling the criteria for an AI Incident under harm category (a) injury or harm to health. The article documents realized harm and expert warnings about the risks, not just potential future harm, so this is an AI Incident rather than a hazard or complementary information.

Use Google AI Overview for health advice? It's 'really dangerous,' investigation finds

2026-01-06
ZDNET
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) whose use has directly led to the dissemination of false and misleading health information, constituting harm to the health of individuals. The article provides concrete examples where the AI's outputs were incorrect and potentially dangerous, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as users relying on this information could suffer health consequences. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Danger in the search engine: warnings of catastrophic medical advice from Google's AI - اليوم السابع

2026-01-03
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI-generated search summaries) whose use has directly led to harm to people's health by providing inaccurate and misleading medical advice. This fits the definition of an AI Incident because the AI system's outputs have caused or could cause injury or harm to individuals or groups. The article details specific cases and expert opinions confirming the risk and actual harm potential, thus qualifying as an AI Incident rather than a hazard or complementary information.

A report warns: AI-powered search serves misleading health information - اليوم السابع

2026-01-03
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Google's generative AI for health information summaries) whose outputs have directly misled patients with serious health conditions, potentially causing injury or harm to their health. The article provides concrete examples of harmful advice and expert warnings about the risks, indicating realized harm or at least harm that is occurring due to reliance on the AI-generated information. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is materialized and linked to the AI system's use.

The Guardian: Google's AI-powered summaries put people at risk with misleading health advice

2026-01-02
جريدة الدستور
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's AI-powered summaries) whose outputs have directly led to harm by providing misleading health information. The harm is realized and significant, involving potential injury or harm to users' health due to reliance on incorrect AI-generated advice. The AI system's malfunction or misuse in generating inaccurate content is the direct cause of the harm. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Beware! Google's "AI Overviews" tool puts users' lives at risk

2026-01-03
العربية
Why's our monitor labelling this an incident or hazard?
The AI system involved is Google's 'AI Overviews', a generative AI tool providing health information summaries. The use of this AI system has directly led to harm by disseminating false or misleading health information, which can cause injury or harm to users' health (harm category a). The article provides concrete examples of such misinformation causing potential or actual harm, such as dangerous dietary advice and misleading test interpretations. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Google faces a crisis: an AI tool puts users at risk, and the company responds

2026-01-03
مصراوي.كوم
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly mentioned and is used to generate health information summaries. The misinformation provided by the AI system has directly led to potential health harms to users, fulfilling the criteria for an AI Incident under harm category (a) injury or harm to health. The event involves the use of the AI system and its malfunction in providing inaccurate outputs. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is realized and ongoing.

Beware of AI Overviews: Google's AI summaries could kill their users | المصري اليوم

2026-01-03
المصري اليوم
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the AI system (Google's AI Overviews) provided false health information that could increase the risk of death for patients and cause users to misinterpret serious health symptoms. This is a direct harm to users' health caused by the AI system's outputs. Therefore, this qualifies as an AI Incident under the definition of injury or harm to health caused directly or indirectly by the use of an AI system.

Danger in search engines: warnings of catastrophic medical advice from Google's AI

2026-01-03
albawabhnews.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) that generates medical advice summaries. The AI system's outputs have directly caused harm by disseminating inaccurate and potentially dangerous medical information, which can lead to deterioration of health conditions and neglect of necessary medical follow-up. The harm is realized and significant, affecting users' health and safety. Hence, this qualifies as an AI Incident under the definition of harm to health caused directly or indirectly by the use of an AI system.

Google users confront misleading content from "AI Overviews"

2026-01-03
almashhad.news
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as generating health information summaries. The misinformation it produces has already caused or could cause harm to users' health, which is a direct harm to persons. The article details specific examples of misleading advice that contradicts medical guidelines, indicating realized harm rather than just potential risk. Therefore, this qualifies as an AI Incident due to the direct or indirect harm caused by the AI system's outputs affecting health outcomes.

Warning: AI-powered search spreads false health information

2026-01-03
مانكيش نت
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating health misinformation that misleads patients and could worsen health outcomes. Because the AI system's outputs have directly led to misinformation that could harm patients' health, the event fulfils the criteria for an AI Incident.

A disaster to beware of: an AI tool puts users' lives at risk. What's the story?

2026-01-04
صدى البلد
Why's our monitor labelling this an incident or hazard?
The AI system involved is Google's generative AI used to produce health information summaries. The use of this AI system has directly led to the dissemination of false and harmful health advice, which can cause injury or harm to users, fulfilling the criteria for an AI Incident under harm to health. The article details specific cases where the AI's outputs were dangerously incorrect, indicating realized harm or at least a direct causal link to potential harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

"AI Overviews" قد تعرّض حياة المستخدمين للخطر

2026-01-04
عنب بلدي
Why's our monitor labelling this an incident or hazard?
The AI system involved is Google's 'AI Overviews', a generative AI tool providing health information summaries. The use of this AI system has directly led to the dissemination of false health information, which can cause injury or harm to users' health, fulfilling the criteria for an AI Incident under harm category (a). The article details specific cases where the AI's outputs are dangerously incorrect, indicating realized harm rather than just potential risk. Therefore, this event qualifies as an AI Incident.

Health concerns over misleading information in Google's AI summaries

2026-01-05
annahar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of a generative AI system by Google to produce health summaries. The misinformation provided by this AI system has already caused or could cause serious health harms to patients, such as malnutrition, neglect of medical follow-up, and misdiagnosis. These harms fall under injury or harm to health of persons, fulfilling the criteria for an AI Incident. The AI system's development and use have directly led to these harms through inaccurate outputs. The event is not merely a potential risk or a complementary update but a realized harm scenario.

AI turns from a helping hand into a source of danger to health. What's the story?

2026-01-05
جريدة الدستور
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's AI Overviews) generating medical summaries that contain incorrect advice and misleading information. This misinformation has directly impacted users' health decisions, posing real health risks. The harm is realized or ongoing, as users rely on these summaries for critical medical decisions. Therefore, this qualifies as an AI Incident due to direct harm to health caused by the AI system's outputs.

Can we trust Google AI Overview summaries?

2026-01-06
Shorouk News
Why's our monitor labelling this an incident or hazard?
The event involves Google's AI Overview, a generative AI system used in search results to provide summaries. The investigative report documents actual cases where the AI system's outputs have provided false and misleading health information, which medical experts warn could pose real health risks to users. This constitutes direct harm to people's health due to the AI system's use. The article also discusses the system's development and use, and the resulting harm is realized, not hypothetical. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to health (a).

"구글 'AI 오버뷰', 잘못된 건강 조언으로 위험 초래 가능성" | 연합뉴스

2026-01-03
연합뉴스
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's generative AI 'AI Overview') whose use has directly led to the dissemination of incorrect health information. This misinformation poses a real risk of physical harm to users relying on it for medical advice, as highlighted by experts and health organizations. Since the harm is realized or ongoing (people receiving and potentially acting on wrong health advice), this qualifies as an AI Incident rather than a hazard or complementary information.

Google's AI summary feature may provide incorrect health information

2026-01-03
Chosun.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's generative AI-powered summary feature) providing inaccurate health information that could harm users. The harm is related to health risks from following incorrect dietary advice for pancreatic cancer patients, which is a direct harm to health. The AI system's use is the cause of this misinformation, fulfilling the criteria for an AI Incident. The harm is realized or ongoing as the misinformation is presented at the top of search results and could influence users' decisions, not merely a potential future risk.

Controversy over Google AI summaries surfacing incorrect health information

2026-01-03
기술로 세상을 바꾸는 사람들의 놀이터
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overview) is explicitly involved in generating health-related summaries that have directly led to the dissemination of incorrect medical information. This misinformation can cause harm to individuals' health by influencing their medical decisions adversely, fulfilling the criteria for harm to health (a). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm or risk of harm to people.

"믿었다간 큰일"...구글 'AI 오버뷰' 잘못된 건강 조언

2026-01-03
아시아경제
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI Overview) is explicitly involved, providing health-related advice. The AI's use has directly led to the dissemination of incorrect health information, which poses a risk of injury or harm to people's health (harm category a). The harm is realized or ongoing as people may rely on this information, making this an AI Incident rather than a hazard. The article details specific examples of harm and expert concerns, confirming the direct link between the AI system's outputs and potential health harm.

"구글 'AI 오버뷰', 잘못된 건강 조언으로 위험 초래 가능성"

2026-01-03
YTN
Why's our monitor labelling this an incident or hazard?
The AI Overview is an AI system involved in generating health information summaries. The article reports that the AI system's outputs have been inaccurate and misleading, which can directly threaten individuals' health, fulfilling the harm criterion (a) injury or harm to health. The harm is indirect because the AI system's incorrect advice could lead users to make harmful health decisions or delay necessary medical care. Hence, this qualifies as an AI Incident due to realized harm linked to the AI system's use.

"그대로 믿었다가는 큰일"...환자 위험 초래 '경고'

2026-01-03
Wow TV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's AI Overview) that generates health-related summaries. It reports multiple instances where the AI system provided incorrect or misleading medical information, which could cause users to make harmful health decisions. This constitutes direct or indirect harm to people's health, fitting the definition of an AI Incident. Although Google disputes some claims, the reported inaccuracies and expert warnings indicate realized harm or at least harm occurring through reliance on the AI outputs.

Google AI summaries inaccurate on health information... rising concern over risk to patients

2026-01-03
이투데이
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI Overview) is explicitly involved, providing health-related summaries. The inaccuracies in the AI's outputs have directly led to potential harm to individuals' health by disseminating medically incorrect advice and interpretations. Since the AI's use has caused or could cause injury or harm to health, this qualifies as an AI Incident under the definition of harm to health resulting from AI system use.

Google 'AI Overviews' poses risks with incorrect health advice - 전파신문

2026-01-03
jeonpa.co.kr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overview) whose use has directly led to the dissemination of incorrect health advice, which can harm individuals' health. The AI system's outputs are misleading and have been confirmed by health experts and charities to pose real risks to patients. Therefore, this constitutes an AI Incident due to direct harm to people's health caused by the AI system's use.

Guardian: The danger we face from Google's AI Overviews

2026-01-02
Η ΝΑΥΤΕΜΠΟΡΙΚΗ
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) that generates health-related content. The use of this AI system has directly led to harm by disseminating inaccurate medical information that can mislead patients and cause health risks, including increased risk of death or neglect of symptoms. This constitutes injury or harm to health (harm category a). Therefore, this is an AI Incident rather than a hazard or complementary information, as the harm is realized and ongoing.

Google's AI gives wrong health advice, putting people at risk - Examples - Dnews

2026-01-02
dnews.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) that generates health-related summaries. The AI system's use has directly led to harm (a) injury or harm to health of persons, by providing false and misleading health information that could cause patients to make dangerous health decisions. The article documents specific cases where the AI's outputs are factually incorrect and potentially life-threatening, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the misleading information is actively presented to users who may rely on it.

Guardian: Google's AI Overview can put people at risk | LiFO

2026-01-03
LiFO.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as Google's AI Overviews, which use generative AI to provide health information summaries. The AI system's use has directly led to harm by disseminating false and misleading health advice that could worsen patient outcomes or delay necessary medical care. The harm includes risk to health and potential injury or death, which fits the definition of an AI Incident. The article documents realized harm and expert concerns about the dangerous impact of these AI-generated summaries, not just potential or hypothetical risks.

Dangerous health answers from Google's AI - See the examples

2026-01-03
Aftodioikisi.gr
Why's our monitor labelling this an incident or hazard?
The AI Overviews system is explicitly described as an AI system generating health information summaries. The article documents multiple cases where the AI's outputs contained false or misleading health information that could cause physical harm or worsen health conditions, such as advising pancreatic cancer patients incorrectly or providing wrong cancer screening information. These harms fall under injury or harm to health (a). The AI system's use has directly led to these harms by providing dangerous misinformation. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

AI can seriously harm your health - What an investigation reveals about search engines' Overviews | in.gr

2026-01-03
in.gr
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Google's AI Overviews) that generates medical information summaries. The AI's outputs have directly led to harm by providing false or misleading health information, causing users to make potentially dangerous health decisions. This constitutes injury or harm to the health of persons, meeting the definition of an AI Incident. The article documents realized harm rather than potential harm, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the harm caused by the AI system's outputs, not on responses or governance measures. Therefore, the classification is AI Incident.

Guardian: Concern over Google AI Overviews - They endanger users' health with misleading advice

2026-01-03
HuffPost Greece
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Google's AI Overviews, which use generative AI, have produced false and misleading health information that can cause serious harm to users, including increased risk of death for pancreatic cancer patients and misinformed liver disease patients. The AI system's outputs have directly led to health risks, fulfilling the criteria for an AI Incident under harm to health (a). The presence and malfunction (inaccurate outputs) of the AI system are clear, and the harm is materialized, not just potential. Hence, the event is classified as an AI Incident.

Google's AI Overviews Put Users at Risk with Inaccurate Medical Advice | Pagenews.gr

2026-01-04
Pagenews.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's AI Overviews) generating medical advice that is inaccurate and potentially dangerous, thus posing health risks to users. The harm is realized or ongoing as users may rely on these summaries for health decisions, which can cause injury or harm to health. The AI system's malfunction or limitations in providing accurate information directly contribute to this harm. Therefore, this qualifies as an AI Incident under the definition of harm to health caused directly or indirectly by an AI system's use.

Google's AI summaries give wrong medical information and can put lives at risk

2026-01-02
Mediafax
Why's our monitor labelling this an incident or hazard?
The AI system involved is Google's AI Overviews, which generates automatic summaries using AI. The inaccuracies in medical information have directly led to potential harm to users' health, such as cancer patients receiving harmful dietary advice and individuals misunderstanding their medical test results. This constitutes an AI Incident because the AI system's outputs have directly caused or could cause injury or harm to persons, fulfilling the harm to health criterion (a).

The Guardian: Google's AI summaries give wrong medical information and can put lives at risk / They wrongly recommended that pancreatic cancer patients avoid high-fat foods

2026-01-02
G4Media.ro
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly mentioned as generating medical summaries that contain inaccurate and harmful information. The harm is direct because users relying on these summaries may make health decisions that negatively affect their treatment and survival chances, fulfilling the criterion of injury or harm to health. The prominence of these summaries increases the risk of users accepting incorrect advice without further verification. Hence, the event meets the definition of an AI Incident due to realized harm from the AI system's outputs.

Google AI may expose people to risks through misleading medical advice. The Guardian investigation

2026-01-03
Cotidianul RO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's generative AI for search result summaries) whose use has directly led to harm by providing false and misleading medical advice. The harms include potential injury or death to patients (harm to health), misinformation that could cause people to ignore symptoms or avoid necessary treatment, and thus the AI system's outputs have directly caused or could cause significant harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to people's health.

The AI in Google Search endangers people's lives, The Guardian shows

2026-01-03
Evenimentul Zilei
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies an AI system (Google's AI Overviews) that generates medical summaries. These summaries have produced incorrect and potentially dangerous medical advice, which can lead to injury or harm to health, fulfilling the criteria for an AI Incident. The harm is direct or indirect, as users rely on these AI-generated summaries for health decisions, and the misinformation can cause delays in proper treatment or harmful behaviors. The involvement is through the AI system's use and malfunction (inaccurate outputs). Therefore, this event is classified as an AI Incident.

False or misleading medical information in Google's AI summaries puts people at risk

2026-01-03
Libertatea
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's generative AI summaries) whose use has directly led to the dissemination of false medical information. This misinformation poses a direct risk of injury or harm to people's health, as confirmed by expert warnings and specific examples of dangerous advice. The harm is realized or ongoing, not merely potential, as users rely on these summaries for health decisions. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's outputs and health-related harm.

Google's AI summaries give wrong medical information and can put lives at risk - Stiripesurse.md

2026-01-04
Stiripesurse.md
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's AI Overviews) generating medical summaries that contain erroneous and dangerous advice, such as incorrect dietary recommendations for pancreatic cancer patients and misleading information about liver function tests. These inaccuracies can directly harm users' health by causing delays in proper treatment or avoidance of necessary medical care. The AI system's outputs have directly led to realized harm, fulfilling the criteria for an AI Incident under harm to health. The involvement is through the AI system's use and malfunction (providing incorrect outputs).

Google's AI summaries endanger people's health with erroneous medical information - Stiripesurse.md

2026-01-04
Stiripesurse.md
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's AI Overviews) that generates medical information summaries. The AI's outputs have directly led to the dissemination of false and misleading medical advice, which experts warn can cause harm to patients' health, including increased risk of death or failure to seek necessary medical care. This meets the definition of an AI Incident because the AI system's use has directly led to harm to people's health. The article provides multiple concrete examples of such harm, including dangerous dietary advice for pancreatic cancer patients and incorrect cancer screening information. The involvement of the AI system is clear, and the harm is realized, not just potential. Thus, the event is classified as an AI Incident.

Alarm bell: Google's AI summaries at the top of the search page endanger people's lives

2026-01-05
DoctorulZilei
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated summaries by Google that have given incorrect medical advice, which can lead to serious health consequences including reduced survival chances for patients. The AI system's outputs have directly led to harm by misleading users about critical health information. This fits the definition of an AI Incident as the AI system's use has directly caused harm to people's health.

Google's AI warned it 'could be deadly' over misleading medical advice

2026-01-04
TUOI TRE ONLINE
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly mentioned as providing medical advice that is sometimes incorrect or inconsistent, which can directly lead to harm to users' health, including increased risk of mortality. This fits the definition of an AI Incident because the AI's use has directly led to harm to persons through misinformation and poor health outcomes. The article reports realized harm risks, not just potential, and the AI's role is pivotal in causing these harms.

Google's AI Overviews: misleading health advice that puts people in danger! | LesNews

2026-01-02
LesNews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's AI-generated health summaries) whose use has directly led to harm by disseminating inaccurate medical advice that can endanger lives. The harm is realized, not just potential, as experts and health professionals warn about the dangerous consequences of these false recommendations. This fits the definition of an AI Incident because the AI system's use has directly caused harm to people's health, a primary harm category under the framework.

Artificial intelligence: Google's summaries give dangerous medical advice

2026-01-03
Tribune de Genève
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's AI Overviews) generating medical advice that is factually incorrect and potentially harmful to users' health. The harm is direct, as following these erroneous recommendations could worsen health outcomes or delay necessary treatment, fulfilling the criteria for injury or harm to health. Therefore, this constitutes an AI Incident due to the direct link between the AI system's outputs and the risk of harm to individuals' health.

2026-01-05
next.ink
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's AI-generated search result summaries) whose use has directly led to the dissemination of false and potentially harmful medical advice. The harm is materialized as misleading health information that could cause injury or death, fulfilling the criteria for an AI Incident. The involvement is through the AI system's use and malfunction in generating inaccurate medical content. The article documents actual harm risks and expert concerns, not just potential future harm or general AI news, so it is not an AI Hazard or Complementary Information. Hence, the classification is AI Incident.

AI Overviews: Google's "smart" search feature reportedly provides false information on important health topics, endangering the lives of patients and internet users

2026-01-05
BFM BUSINESS
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's AI Overviews) that generates health-related summaries. The AI's outputs have directly led to the dissemination of false and harmful health information, which can cause injury or harm to individuals relying on this advice. This fits the definition of an AI Incident, as the AI system's use has directly led to harm to the health of people. The harm is realized and ongoing, not merely potential, and the AI system's malfunction or limitations in providing accurate information are central to the issue.

Another AI Overviews slip-up can put our health at risk

2026-01-06
clubic.com
Why's our monitor labelling this an incident or hazard?
The AI system (Google AI Overviews) is explicitly involved in generating health information summaries. The use of this AI system has directly led to misinformation that could cause physical harm to patients (e.g., compromising chemotherapy tolerance, delaying medical follow-up) and mental health risks. Therefore, this qualifies as an AI Incident because the AI's outputs have directly or indirectly led to harm to people's health, fulfilling the criteria for harm (a) under AI Incident definitions.

AI Overviews: when Google's summaries go off the rails on health - Siècle Digital

2026-01-06
Siècle Digital
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) generating medical summaries that contain factual inaccuracies. These inaccuracies have already led to misinformation that could harm individuals' health by causing them to make misguided decisions or avoid necessary medical follow-up. This fits the definition of an AI Incident because the AI system's use has directly led to harm to the health of people (harm category a). The article documents realized harm through misleading medical advice and the potential for serious health consequences, not just a hypothetical risk. Therefore, this is classified as an AI Incident.

A deadly error from AI! Wrong health advice to cancer patients sparks outrage

2026-01-11
Milliyet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) that was used to provide health information. The AI system's use directly led to the dissemination of harmful medical advice, which could cause injury or harm to patients' health, fulfilling the criteria for an AI Incident. The harm is realized or highly likely given the misleading information about liver function tests and pancreatic cancer dietary advice. The company's response to disable the feature is a mitigation step but does not negate the incident classification.

Google removes health queries from its AI summaries

2026-01-12
euronews
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate health-related summaries that contained inaccurate information, which could mislead patients and cause health harm. The investigation found specific examples where the AI's outputs were misleading or inappropriate for medical advice, such as incorrect dietary recommendations for cancer patients. This constitutes direct harm to health caused by the AI system's outputs. Therefore, this event qualifies as an AI Incident due to realized harm linked to the AI system's use.

Google removes AI Overviews from some medical searches after experts warn of risks to users' health

2026-01-12
TechSpot
Why's our monitor labelling this an incident or hazard?
The AI Overviews are AI systems generating medical information summaries. Their use has directly led to the dissemination of inaccurate health advice, which can cause injury or harm to users' health. The event reports concrete examples of misleading information that could increase risk of death or cause misinterpretation of serious symptoms. Google’s removal of these AI Overviews is a response to realized harm, not just potential harm. Therefore, this qualifies as an AI Incident due to direct harm to health caused by the AI system's outputs.

Google removes AI Overviews for health queries after accuracy concerns

2026-01-12
euronews
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate health information summaries that were inaccurate and misleading, which can directly harm individuals relying on this information for medical decisions. The investigation found specific examples where the AI's outputs could increase health risks, fulfilling the criteria for harm to health (a). The removal of the AI Overviews is a response to this realized harm. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's outputs and potential health harm.

Google removes AI health summaries after warnings about false information | Exame

2026-01-12
Exame
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI-generated summaries) was used to provide health information but produced inaccurate and misleading content that could cause harm to individuals' health by leading to misinterpretation of medical test results. This constitutes indirect harm to health due to reliance on AI outputs. The event involves the use and malfunction (inaccurate outputs) of an AI system leading to realized or potential harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm or risk of harm to people's health.

Google removes some health results from 'AI Overviews' after...

2026-01-12
europapress.es
Why's our monitor labelling this an incident or hazard?
The AI system ('AI Overviews' powered by Google's Gemini AI) was used to generate health-related summaries that contained false and misleading information. This misinformation could directly harm users by causing them to misinterpret medical test results or follow harmful health advice, which constitutes injury or harm to health. Google's subsequent removal of some health-related AI summaries acknowledges the risk and harm caused. Therefore, this event qualifies as an AI Incident because the AI system's use directly led to harm to health through dissemination of incorrect medical information.

Google pulls AI Overviews on some health queries after claims they mislead users

2026-01-12
Tom's Guide
Why's our monitor labelling this an incident or hazard?
The AI Overviews are AI-generated content designed to provide health information. The article details that these AI outputs gave misleading and incorrect advice on health topics, such as liver blood tests and cancer symptoms, which could jeopardize patient health and treatment outcomes. This constitutes harm to health (a) caused directly by the AI system's outputs. Google's removal of some AI Overviews is a response to this harm. Therefore, this event qualifies as an AI Incident due to the realized harm from the AI system's use and malfunction in generating misleading health information.

Google hid AI overviews containing "alarming" medical recommendations.

2026-01-12
avalanchenoticias.com.br
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI-generated search summaries) was used to provide medical information. The AI's outputs were inaccurate and lacked critical contextual details (age, sex, ethnicity), which could directly harm users by misleading them about their health status. This constitutes an AI Incident because the AI system's use has directly led to potential harm to individuals' health through misinformation. The event involves the use and malfunction of the AI system in delivering medical advice, fulfilling the criteria for an AI Incident under harm to health (a).

Google's AI Overviews Said to Provide Wrong Medical Advice

2026-01-12
Gadgets 360
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI Overviews) is explicitly involved, generating medical advice that is factually incorrect and potentially harmful to users' health. The harm is realized as users relying on this information could make dangerous health decisions, fulfilling the criterion of injury or harm to health (a). The AI system's malfunction or misuse in providing inaccurate medical summaries directly led to this harm. Therefore, this event qualifies as an AI Incident.

Google halts some of its AI summaries after detecting dangerous health advice

2026-01-12
gacetadesalud.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews powered by Gemini) whose use has directly led to the dissemination of false health information that could cause injury or harm to users' health. The harm is materialized as the AI system's outputs misinform users about serious health conditions, fulfilling the criteria for an AI Incident under harm category (a) injury or harm to health. The article also notes that Google has responded by limiting AI-generated health summaries, but the primary event is the occurrence of harmful AI outputs, not the response, so this is not merely complementary information.

The dangers of AI health summaries: What Google's removal means for South Africans

2026-01-12
Diamond Fields Advertiser
Why's our monitor labelling this an incident or hazard?
The AI system (Google AI Overviews) was used to generate health summaries that were inaccurate and potentially dangerous, directly leading to a risk of harm to users' health. The misleading information about liver blood test normal ranges could cause users to wrongly believe they are healthy, delaying necessary medical care. This constitutes an AI Incident because the AI system's use has directly led to harm or risk of harm to people's health, fulfilling the criteria for injury or harm to persons due to AI system use.

Google removes some health results from 'AI Overviews' after identifying dangerous misinformation

2026-01-12
heraldo.es
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('AI Overviews' powered by Google's Gemini AI) that generated health-related summaries. The AI system's outputs contained false and misleading information that could cause physical harm to users by influencing their health decisions incorrectly. The harm is realized or ongoing, as users might rely on these summaries to make health decisions, which meets the definition of an AI Incident. Google's removal of some health-related AI summaries is a response to this harm but does not negate the incident itself. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

Google disabled AI health overviews over accuracy doubts

2026-01-12
euronews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as generating health-related summaries. The AI's outputs contained inaccurate medical information, such as misleading normal ranges for liver function tests and poor dietary advice for pancreatic cancer patients, which could directly harm individuals' health. Google's removal of these AI Overviews indicates recognition of the harm caused. Therefore, the AI system's use directly led to harm to people's health, fitting the definition of an AI Incident.

Google pulls AI Overviews from health searches over accuracy concerns

2026-01-12
euronews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (AI Overviews) that generates summaries for health-related queries. The AI-generated content was found to contain inaccurate and misleading health information, which poses a risk of harm to users' health. Google responded by removing these AI Overviews from health searches, indicating recognition of the harm caused. The AI system's malfunction or misuse (producing incorrect health information) directly led to potential harm to people, fitting the definition of an AI Incident involving harm to health (a).

"Опасно для здоровья". ИИ-сервис Google давал пользователям ложные советы на медицинские запросы

2026-01-12
Российская газета
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI-powered medical information service) whose use directly led to harm to health by providing false or misleading medical advice. This fits the definition of an AI Incident because the AI system's outputs have directly led to potential injury or harm to health of individuals relying on the information. The harm is realized as users may be misled about their health status, potentially delaying necessary medical treatment. The article also notes Google's response, but the primary event is the harm caused by the AI system's inaccurate outputs.

Google has to backtrack on its AI medical summaries: it was giving users erroneous information

2026-01-12
El Español
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's AI-based summarization for search queries) whose use led to the direct dissemination of erroneous and potentially harmful medical information to users. This misinformation poses a risk to users' health and well-being, fulfilling the criteria for harm to persons. The harm has already occurred as users received dangerous advice, and Google has responded by disabling the feature for some queries, indicating recognition of the incident. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs.

Google accused of providing misleading health information, removes some AI summaries

2026-01-13
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's generative AI used to create search result summaries) that has been used to provide health information. The AI summaries contained inaccurate and misleading content that could cause users to misinterpret their health condition, potentially leading to harm to their health. This is a direct link between the AI system's outputs and a risk of injury or harm to persons, fulfilling the criteria for an AI Incident. The removal of some AI summaries is a response but does not negate the fact that harm has occurred or is occurring due to the AI system's outputs.

Investigation: Google's 'AI Overview' Provides Wildly Inaccurate Medical Advice

2026-01-13
Breitbart
Why's our monitor labelling this an incident or hazard?
The AI Overview is a generative AI system providing medical information summaries. The investigation found that it produced false and misleading health information, which could lead to patients misunderstanding their medical test results and potentially skipping necessary healthcare. This constitutes direct harm to health (a), fulfilling the criteria for an AI Incident. The company's partial removal of some problematic content and ongoing issues with other inaccurate summaries do not negate the realized harm. Hence, the event is best classified as an AI Incident.
Thumbnail Image

Google removes some AI health summaries after investigation finds...

2026-01-12
Ars Technica
Why's our monitor labelling this an incident or hazard?
The AI system involved is explicitly identified as generating health summaries. A design flaw led to problematic outputs that could misinform users about health matters, potentially harming those who relied on the information. That content was removed after the investigation indicates the harm, or the risk of it, was significant enough to warrant action. Therefore, this qualifies as an AI Incident due to harm to health, or the potential for it, caused by the AI system's outputs.
Thumbnail Image

Google just removed AI Overviews for certain medical searches, and what they were telling people is deeply concerning | Attack of the Fanboy

2026-01-12
Attack of the Fanboy
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate medical information for users. The AI provided misleading and incorrect medical advice without necessary context, which could cause users to misinterpret their health status and potentially ignore serious health issues, constituting harm to health (a). The event involves the use of an AI system and the harm is directly linked to its outputs. The removal of the AI content is a response to the incident, not the incident itself. Hence, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Google No Longer Uses AI to Answer Some Medical Queries

2026-01-12
ExtremeTech
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate medical advice that was inaccurate and potentially harmful, directly leading to risks of harm to patients' health and well-being. This fits the definition of an AI Incident because the AI system's use has directly led to harm to health (harm category a). The event involves the use of an AI system, and the harm is realized or ongoing, not merely potential. Therefore, this is classified as an AI Incident.
Thumbnail Image

Google reins in its AI: removes health summaries over risks to users

2026-01-12
Excélsior
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) generated medical summaries that were inaccurate or lacked necessary context, which could mislead users about their health conditions. These flawed AI-generated summaries have already raised concern about harm to users' health, fulfilling the criteria for an AI Incident under harm to health (a). The event is not merely a potential risk but a realized issue prompting Google to remove the feature. Hence, it is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Google removes some AI health summaries after wrong advice sparks alarm

2026-01-12
Business Standard
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's generative AI feature 'AI Overviews') that generated incorrect medical advice, which is a direct use of AI leading to potential harm to people's health. The misleading summaries could cause users to delay diagnosis or follow harmful dietary advice, fulfilling the criteria for harm to health (a). The removal of some AI Overviews is a response to the incident but does not negate the fact that harm has occurred or is ongoing. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's outputs.
Thumbnail Image

They were causing harm: Google removed AI answers to users' questions about medical tests

2026-01-11
RTVI
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI-generated medical summaries) whose use has directly led to harm to health by providing misleading medical information. The harm is realized in the form of misinformation that could cause users to make harmful health decisions. This fits the definition of an AI Incident because the AI system's outputs have directly caused or could cause injury or harm to people's health. The article documents actual harm potential and Google's remedial action, confirming the incident status rather than a mere hazard or complementary information.
Thumbnail Image

Google Restricts AI Overviews In Search After Reports Of Harmful Health Information

2026-01-14
Ubergizmo
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate health-related summaries that omitted critical contextual information, resulting in false medical advice. These faulty outputs posed a direct risk of harm to users' health, fulfilling the criteria for an AI Incident under harm to health (a). Google's response to limit the feature indicates recognition of the harm caused. Therefore, this event qualifies as an AI Incident due to the realized harm from AI-generated misinformation in health contexts.
Thumbnail Image

Google pulls AI health summaries after investigation finds dangerous medical errors - NaturalNews.com

2026-01-13
NaturalNews.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's AI Overviews) that generated health summaries. The AI's outputs were inaccurate and misleading, directly causing potential harm to users' health decisions, including risks of missed diagnoses or unnecessary anxiety and medical procedures. This fits the definition of an AI Incident because the AI system's use directly led to harm to people's health and well-being. The removal of the AI Overviews is a response to this harm but does not negate the incident itself.
Thumbnail Image

Google Pulls Some Health-Related AI Overviews as Industry Goes All-In on Healthcare

2026-01-13
Gizmodo
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI-generated health summaries) was used to provide health information but produced misleading and inaccurate content that could directly harm users' health by causing misinterpretation and delayed care. This fits the definition of an AI Incident because the AI system's use directly led to harm to health (harm category a). The article also mentions industry developments but these serve as context rather than the main event. Therefore, the classification is AI Incident.
Thumbnail Image

Google called out over erroneous medical summaries

2026-01-13
24matins
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system generating medical summaries that contain errors and misleading information, which could endanger patients' health. This constitutes harm to health (a) and harm to communities (d) due to misinformation. The AI system's malfunction (providing incorrect outputs) directly led to these harms. Therefore, this qualifies as an AI Incident under the OECD framework.
Thumbnail Image

Google Scrubs Faulty AI-Generated Health Advice That Could Mislead Sick Patients

2026-01-13
Technology Org
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system generating health advice that is factually incorrect and potentially harmful to patients. The AI's outputs have directly led to misinformation that could cause injury or harm to health, meeting the harm criteria for an AI Incident. The partial removal of problematic content shows the AI system's malfunction and use have already caused harm, not just a plausible future risk. Therefore, this event is best classified as an AI Incident.
Thumbnail Image

Google pulls some AI Overviews following concerns over misleading health advice

2026-01-11
The New Indian Express
Why's our monitor labelling this an incident or hazard?
The event describes an AI system (Google's AI Overviews) generating health-related content that misled users and posed a risk of harm to their health. The AI system's outputs failed to consider important personal factors, leading to inaccurate health advice. This constitutes an AI Incident because the AI system's use directly led to potential harm to users' health, fulfilling the criteria for injury or harm to a person due to AI system malfunction or misuse.
Thumbnail Image

'Dangerous and alarming': Google removes some of its AI summaries after users' health put at risk

2026-01-11
democraticunderground.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Google's AI Overviews, which use generative AI to provide health information, served up false and misleading content that experts described as dangerous and alarming. The misinformation about liver function tests could cause users to wrongly believe they are healthy despite serious liver disease, directly risking injury or harm to health. This meets the definition of an AI Incident as the AI system's use has directly led to harm to people’s health.
Thumbnail Image

Google takes down some AI summaries over health risk concerns

2026-01-11
Cybernews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content (AI Overviews) that has been used to provide health information. The inaccuracies in these AI summaries have already led to the dissemination of misleading health advice, which constitutes harm to people's health (harm category a). The AI system's outputs are directly linked to this harm, as the misleading summaries are generated by the AI and served to users. Google's removal of some summaries and ongoing improvements are responses to this harm but do not negate the fact that harm has occurred. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's outputs.
Thumbnail Image

Google removes AI Overviews for certain medical queries - RocketNews

2026-01-11
RocketNews
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was generating medical information that could mislead users about their health status, which is a plausible risk of harm to health (harm category a). The event involves the use of an AI system and its outputs potentially leading to harm, but the article focuses on the removal of these AI-generated summaries to prevent such harm. Since no actual injury or harm is reported, and the focus is on preventing potential harm, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it centers on the risk and mitigation of AI-generated misleading health content, not just updates or responses to past incidents.
Thumbnail Image

Google Pulls Some AI Overviews After Report Flags Misleading Health Information - iAfrica.com

2026-01-11
iAfrica.com
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI Overviews) is explicitly involved, generating health-related summaries. The issue arises from the AI's use, where its outputs could mislead users about their health, posing a plausible risk of harm to individuals' health understanding. Although no direct harm is reported, the potential for harm is credible and significant, especially given the health context. The company's removal of some AI Overviews is a mitigation response to this hazard. Since no actual harm has been reported, this does not qualify as an AI Incident but rather as an AI Hazard.
Thumbnail Image

Google pulls AI overviews for some medical searches

2026-01-11
The Verge
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI overviews) was used to provide medical information but produced false and misleading advice that experts described as dangerous, potentially increasing the risk of death or serious health issues for patients. This constitutes direct harm to health caused by the AI system's outputs, meeting the definition of an AI Incident. The removal of the feature is a mitigation step and thus complementary information but does not change the classification of the original harm event.
Thumbnail Image

Google Discontinues AI Summaries for Select Medical Searches - Internewscast Journal

2026-01-11
Internewscast Journal
Why's our monitor labelling this an incident or hazard?
The event involves the use and subsequent removal of an AI system's outputs (AI summaries) in medical searches. While no direct harm is reported, the removal is due to concerns about inaccuracies and missing context, which could plausibly lead to harm if users relied on incorrect medical information. However, since the AI summaries have been disabled and no harm is reported as having occurred, this event represents a precautionary measure addressing potential risks rather than an incident of realized harm. Therefore, it qualifies as Complementary Information, detailing a governance and quality control response to AI outputs in a critical domain.
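Read together, the entries above apply one consistent triage rule: harm that has materialized from an AI system's outputs is labelled an AI Incident, a credible but unrealized risk is an AI Hazard, and coverage that mainly reports a governance or mitigation response is Complementary Information. A minimal sketch of that decision logic, written as hypothetical Python (the monitor's actual implementation is not public; all names and fields here are illustrative):

```python
from dataclasses import dataclass
from enum import Enum


class Label(Enum):
    AI_INCIDENT = "AI Incident"
    AI_HAZARD = "AI Hazard"
    COMPLEMENTARY_INFO = "Complementary Information"
    UNRELATED = "Unrelated"


@dataclass
class Event:
    involves_ai_system: bool   # an AI system's use or malfunction is central
    harm_realized: bool        # harm has occurred or is ongoing
    harm_plausible: bool       # credible risk of harm, not yet documented
    response_only: bool        # the story mainly reports mitigation/governance


def triage(event: Event) -> Label:
    """Hypothetical reconstruction of the monitor's triage rule."""
    if not event.involves_ai_system:
        return Label.UNRELATED
    if event.harm_realized:
        return Label.AI_INCIDENT        # realized harm linked to AI outputs
    if event.harm_plausible and not event.response_only:
        return Label.AI_HAZARD          # credible risk, no documented harm
    return Label.COMPLEMENTARY_INFO     # context, updates, or responses
```

The judgment-dependent step is `harm_realized`: several entries above describe the same underlying facts (misleading summaries removed after an investigation, no injury reported) yet land on different labels, because whether serving misinformation to users already counts as realized harm is exactly the question the rule leaves to the classifier.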
Thumbnail Image

Google removes AI overviews from some medical searches after false health information sparks backlash | Mint

2026-01-12
mint
Why's our monitor labelling this an incident or hazard?
The AI Overviews feature is an AI system generating health-related content. The false and misleading medical information it provided constitutes a direct harm to users' health, fulfilling the criteria for an AI Incident under harm category (a) injury or harm to the health of a person or groups of people. The event involves the use of the AI system and its malfunction in providing inaccurate outputs. The harm is realized or ongoing, not merely potential, as users could be misled by the false information. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

Google quietly pulls AI Overviews from some health searches, here's why

2026-01-12
Moneycontrol
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI Overviews) was used to generate medical information summaries that lacked critical contextual details, misleading users about their health. This constitutes indirect harm to health (a) because users might rely on inaccurate or incomplete AI-generated information for medical decisions. The event involves the use of an AI system whose outputs were actually served to users, and the resulting harm has been only partially mitigated, not fully resolved. Therefore, this qualifies as an AI Incident due to realized harm from AI-generated misleading health information.
Thumbnail Image

Google stops showing AI Overviews for certain health questions

2026-01-12
Digit
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI Overviews) was used to generate health-related summaries that in some cases provided misleading or incomplete information about liver blood test reference ranges. This misinformation could directly or indirectly lead to harm to users' health by causing incorrect assumptions about medical test results. The event involves the use and malfunction (inaccurate outputs) of an AI system leading to potential health harm. Since some AI Overviews remain active for similar queries, there is ongoing plausible risk. However, the event primarily reports on the removal of certain AI Overviews after harm was identified, indicating that harm has occurred or was imminent. Therefore, this qualifies as an AI Incident due to realized or imminent harm from AI-generated misinformation affecting health understanding.
Thumbnail Image

Google pulls AI Overviews from select medical searches after Guardian investigation

2026-01-12
storyboard18.com
Why's our monitor labelling this an incident or hazard?
The AI Overviews system is an AI system generating medical information summaries. The Guardian's investigation revealed that these AI-generated summaries could mislead users by presenting numerical ranges without necessary contextual variables, potentially causing harm to users' health understanding. This is a direct consequence of the AI system's outputs. Google's partial removal of these AI Overviews in response to the investigation confirms the recognition of harm. The event involves realized harm (misleading health information) and the AI system's role is pivotal. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Google removes AI overviews from health queries - Daily Times

2026-01-12
Daily Times
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI-assisted Overviews) was used to generate health-related summaries that could mislead users, posing a risk of harm through misinformation in a sensitive domain (health). However, the event describes a proactive mitigation step (removal of these AI summaries) to reduce this risk. There is no indication that actual harm has occurred, but the potential for harm was recognized and addressed. Therefore, this event fits the definition of Complementary Information as it reports on a governance and safety response to a previously identified AI-related risk rather than describing a realized AI Incident or a plausible future hazard.
Thumbnail Image

Google's AI turned out to be dangerous: an investigation revealed a bleak picture

2026-01-12
ФОКУС
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's AI Overviews) providing health-related information that is false and misleading, which can cause users to misinterpret their medical condition, posing a risk to their health. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people's health. The company's partial mitigation (removal of some AI overviews) does not eliminate the ongoing risk, as similar queries still yield potentially harmful AI-generated content. Hence, the event is not merely a hazard or complementary information but an AI Incident due to realized harm.
Thumbnail Image

Google removes AI Overviews for certain medical queries - TechCrunch

2026-01-12
blog.quintarelli.it
Why's our monitor labelling this an incident or hazard?
An AI system (Google AI Overviews) was used to generate health-related information. The AI system provided misleading outputs that failed to consider important factors like nationality, sex, ethnicity, or age, which could lead to incorrect health assessments by users. This constitutes indirect harm to health (a), as users might misinterpret their medical test results based on inaccurate AI-generated information. The event involves the use of an AI system and realized harm through misleading health information, qualifying it as an AI Incident. The removal of the AI Overviews is a mitigation measure but does not change the fact that harm occurred.
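Several entries turn on the same technical point: a liver blood test has no single "normal" range, because reference intervals are set per laboratory and shift with factors such as sex and age, so any context-free number an AI summary quotes can be wrong for a given reader. A minimal sketch of why, in Python with deliberately fabricated placeholder limits (illustrative only, not clinical values):

```python
# Illustrative only: the limits below are fabricated placeholders, NOT medical
# reference values. The structural point: the "normal" upper limit for a liver
# enzyme such as ALT is a function of patient context and the issuing lab,
# not a single constant that a search summary can safely quote.

HYPOTHETICAL_ALT_UPPER_LIMITS = {
    # (sex, lab) -> upper limit of normal, U/L (fabricated)
    ("female", "lab_a"): 30,
    ("male", "lab_a"): 40,
    ("female", "lab_b"): 35,
    ("male", "lab_b"): 45,
}


def is_abnormal(alt_value: float, sex: str, lab: str) -> bool:
    """The same numeric result can be normal in one context, abnormal in another."""
    return alt_value > HYPOTHETICAL_ALT_UPPER_LIMITS[(sex, lab)]


# A single reading of 38 U/L is flagged for one patient and not for another:
print(is_abnormal(38, "female", "lab_a"))  # True  -> above this context's limit
print(is_abnormal(38, "male", "lab_b"))    # False -> within this context's limit
```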
Thumbnail Image

Ask Google about everything? Google accused of providing misleading health information, removes some AI summaries | 壹蘋新聞網

2026-01-12
壹蘋新聞網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's generative AI for AI summaries) whose use has directly led to harm to health (users receiving inaccurate health information that could delay treatment for serious liver disease). The harm is realized, not just potential, as experts warn about the dangerous consequences of misleading AI-generated health summaries. Google's removal of some AI summaries is a response but does not negate the fact that harm has occurred. Therefore, this is classified as an AI Incident due to direct harm caused by the AI system's outputs.
Thumbnail Image

Google has removed some AI health summaries. But why? What is the company saying? - Firstpost

2026-01-12
Firstpost
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's Med-Gemini) generating health summaries that were found to be inaccurate and misleading, which could cause harm to patients by giving false impressions about their health status. The harm is realized because the misleading information was actively provided to users, and experts warned about the potential negative health consequences. The company's removal of some AI summaries is a response to this harm. Hence, this event meets the criteria for an AI Incident due to direct harm to people's health caused by the AI system's outputs.
Thumbnail Image

Google removes AI summaries from health searches after misleading answers

2026-01-12
TecMundo
Why's our monitor labelling this an incident or hazard?
The AI system (large language model generating search result summaries) produced incorrect and misleading health information, which can directly or indirectly harm users' health by causing misinformed decisions. The event involves the use of the AI system and its malfunction in providing inaccurate outputs. The harm is realized as the misleading information was publicly available and could influence users. The company's removal of the AI summaries is a mitigation step but does not negate the fact that harm occurred. Hence, this fits the definition of an AI Incident involving harm to health (a).
Thumbnail Image

Google scraps AI Overviews for certain medical queries: Find out why

2026-01-12
geo.tv
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate health-related summaries that omitted critical contextual information, which could plausibly lead to users misinterpreting their health status, posing a risk of harm to health. The event does not report actual harm occurring but highlights a credible risk that led to the removal of the AI feature. This aligns with the definition of an AI Hazard, where the AI system's use could plausibly lead to harm. The event also includes a governance response (removal of AI Overviews) but the primary focus is on the potential harm from the AI system's outputs, not on the response itself, so it is not Complementary Information. Hence, AI Hazard is the appropriate classification.
Thumbnail Image

Google AI slipping up? Foreign media expose the trap in Google's AI health summaries - 自由健康網

2026-01-12
自由時報電子報
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's generative AI used for health summaries) whose outputs have provided inaccurate health information. This misinformation can directly harm users by causing them to misinterpret their medical test results and potentially avoid needed medical care, constituting harm to health (a). The event involves the use of the AI system and its malfunction in providing misleading content. Since harm is occurring or highly likely due to the AI's outputs, this qualifies as an AI Incident rather than a hazard or complementary information. The article also notes Google's partial remediation but ongoing concerns, reinforcing the incident classification.
Thumbnail Image

Google Disables Certain AI Overviews After Offering 'alarming' Medical Advice - Stuff South Africa

2026-01-12
Stuff South Africa
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) that generated medical advice. The AI's outputs were factually incorrect and potentially harmful, directly leading to a risk of injury or harm to users' health, fulfilling the criteria for an AI Incident under harm category (a). The harm is realized or ongoing as users could have been misled by the AI's advice. Google's disabling of the feature for some queries is a response but does not negate the incident classification.
Thumbnail Image

Google accused of providing misleading health information, removes some AI summaries | Technology | 中央社 CNA

2026-01-12
cna.com.tw
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI-generated summaries) is explicitly mentioned and is used to provide health information. The misinformation has already occurred and poses a direct risk of harm to users' health, fulfilling the criteria for harm to health (a). The event describes realized harm potential through misleading content that could cause users to neglect medical care. Google's removal of some summaries is a response but does not negate the fact that harm has occurred or is ongoing. Hence, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Google removes some health results from 'AI Overviews' after identifying dangerously erroneous information

2026-01-12
Diario Siglo XXI
Why's our monitor labelling this an incident or hazard?
The AI system (Google's 'AI Overviews' powered by Gemini) was used to generate health-related summaries that contained false and misleading information. This misinformation could directly harm users by causing them to misinterpret medical test results or follow harmful advice, which fits the definition of harm to health (a). The event involves the use of an AI system and the harm is realized or ongoing, as users could be misled by the information. Google's removal of some results is a mitigation response but does not negate the fact that harm has occurred or is occurring. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Google Removes AI Health Summaries After Risky Medical Errors Exposed

2026-01-13
Digital Trends
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI-generated health summaries) was used to provide medical information but failed to account for critical contextual factors like age, sex, and medical history, leading to dangerously misleading outputs. This misuse of AI in a health context has directly caused harm by potentially causing users to misinterpret their health status and delay or avoid necessary medical care. The event involves the use and malfunction of an AI system resulting in harm to health, fitting the definition of an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Google removes AI answers that pose health risks

2026-01-13
Azernews.Az
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews feature) is explicitly involved as it generates medical information responses. The event stems from the AI system's use, where it provided incomplete and potentially misleading health information without proper context, which can directly lead to harm to users' health (harm to persons). Although Google has taken some mitigation steps, the issue remains unresolved for some queries, indicating ongoing risk. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to potential health harm through misinformation.
Thumbnail Image

Accuracy of Google AI Overviews' health information sparks controversy, risks delaying treatment | 新唐人电视台

2026-01-13
NTDChinese
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI overview feature) is explicitly involved in generating health information summaries. The inaccurate and misleading outputs have directly led to potential harm to patients' health by causing misjudgment and delayed treatment, which fits the definition of an AI Incident involving harm to health (a). The article reports realized harm risks and expert concerns about increased mortality risk, indicating direct or indirect harm caused by the AI system's use. Therefore, this qualifies as an AI Incident.
Thumbnail Image

AI summaries shake up the news ecosystem as global media face structural transformation pressure | ETtoday AI科技 | ETtoday新聞雲

2026-01-13
ETtoday AI科技
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (AI search summaries, chatbots) influencing media consumption patterns, but it does not report any realized harm or incident caused by AI. The focus is on the evolving media ecosystem and strategic responses to AI-driven disruption, without describing any direct or indirect harm or plausible immediate harm. Therefore, it is best classified as Complementary Information, providing context and insight into AI's impact on media without constituting an AI Incident or AI Hazard.
Thumbnail Image

AI Overviews medical advice "disastrous" warning shocks users

2026-01-13
Pune Mirror
Why's our monitor labelling this an incident or hazard?
The AI Overviews feature is an AI system generating medical advice summaries. The event details how the AI system's outputs have directly led to potentially harmful misinformation affecting users' health, fulfilling the criteria for an AI Incident due to harm to health (a). The harm is realized or ongoing as users could be misled by the AI-generated content. The removal of problematic content is a response but does not negate the incident classification. Therefore, this event is best classified as an AI Incident.
Thumbnail Image

Google removes some AI health summaries after investigation finds "dangerous" flaws

2026-01-12
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's generative AI Overviews) whose use has directly led to harm by providing inaccurate and misleading health information. The harm is to the health of individuals who might rely on these summaries, fulfilling the criteria for an AI Incident under harm category (a). The investigation found concrete examples of dangerous misinformation, and Google took partial remedial action, confirming the AI system's role in causing harm. The AI system's malfunction in generating authoritative but incorrect health advice is central to the incident.
Thumbnail Image

Google's Recent Misstep Is a Warning for AI-Generated Medical Info

2026-01-12
Inc
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate medical information for users. The investigation found that the AI provided inaccurate or misleading medical advice, such as incorrect test interpretations and wrong dietary recommendations for pancreatic cancer patients. These inaccuracies could directly harm patients' health if acted upon, fulfilling the criteria for injury or harm to health due to AI system use. Therefore, this qualifies as an AI Incident because the AI system's use directly led to potential harm to persons' health.
Thumbnail Image

After AI review: Google stops dangerous health advice

2026-01-12
Computerworld
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it provides health-related answers. The AI's outputs have directly led to potentially harmful misinformation about health, which can cause injury or harm to individuals relying on this advice. Although Google has taken steps to mitigate the harm, the ongoing risk of dangerous outputs means the harm is current and not merely potential. Therefore, this qualifies as an AI Incident due to direct harm to health caused by the AI system's outputs.
Thumbnail Image

Google Turns Off Some AI Overviews Following Reports of "Dangerous" Medical Information

2026-01-12
Thurrott.com
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly involved as it generates medical information summaries. The inaccurate outputs have directly led to potential harm to users' health by providing misleading medical advice, which could cause patients to neglect necessary healthcare. This constitutes harm to health (a) under the AI Incident definition. The event reports realized harm risk and actual misleading information dissemination, not just a potential hazard. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Google removes AI summaries from health searches after dangerous errors

2026-01-12
Olhar Digital
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI-generated health summaries) was used in a way that provided incorrect medical guidance, ignoring important contextual factors like age, sex, and ethnicity, which are critical for accurate diagnosis. This led to potentially dangerous outcomes, such as patients receiving false reassurance or harmful dietary advice that could compromise treatment. These are direct harms to health caused by the AI system's outputs. The article explicitly states these harms have occurred and prompted the removal of the AI feature. Hence, it meets the criteria for an AI Incident due to direct harm to health caused by the AI system's use.
Thumbnail Image

Google Pulls AI Overviews for Health Searches After Safety Warnings

2026-01-12
Android Headlines
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI Overviews) was used to generate medical information summaries for health-related searches. The AI's outputs contained inaccuracies and misleading information that could directly harm users' health decisions, such as incorrect interpretations of liver blood test ranges and cancer screening capabilities. This constitutes harm to health (a), fulfilling the criteria for an AI Incident. The event involves the use of an AI system whose outputs have directly led to potential harm, prompting corrective action by Google. Therefore, this is classified as an AI Incident.
Thumbnail Image

Google Scales Back AI Overviews on Health Searches After Questions Over Accuracy and Clinical Risk - Tekedia

2026-01-12
Tekedia
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system generating health-related summaries that have been found to be misleading and potentially harmful to users' health decisions. The harm is indirect but significant, as misleading medical information can delay diagnosis or treatment, causing injury or harm to health. Google's rollback of AI Overviews is a response to this realized harm. The event meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to harm to health, and the article describes actual consequences and responses rather than just potential risks or general AI news.
Thumbnail Image

Alerted to flaws, Google disables AI health summaries - Startups

2026-01-12
Startups
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used by Google to generate health summaries that contained false or misleading information. This misinformation could directly harm users by causing them to misinterpret their health conditions and avoid necessary medical care, which is a harm to health (a). The AI system's malfunction or erroneous output is the direct cause of this harm. The event is not merely a potential risk but has already led to recognized harm or significant risk, as evidenced by Google's decision to disable the feature. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Google Disables Medical AI Overviews After Safety Investigation Reveals Dangerous Advice - WinBuzzer

2026-01-12
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's AI Overviews) that was used to provide medical information. The AI system's outputs included dangerous inaccuracies and hallucinations that could directly harm patients if followed, such as advising against necessary high-fat diets and misrepresenting diagnostic tests. These harms relate to injury or harm to health, fulfilling the criteria for an AI Incident. The company's response to disable the feature for certain queries confirms the recognition of realized harm. The article also contrasts this incident with other companies expanding healthcare AI, but the core event is the harmful AI-generated medical advice and its consequences, not just potential or future harm, thus it is not merely a hazard or complementary information.
Thumbnail Image

Google limits its AI over dangerous answers to medical queries

2026-01-12
Androidphoria
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate medical information that was incomplete and potentially misleading, which could cause users to misinterpret their health status and delay seeking necessary medical care. This constitutes direct harm to health (a), fulfilling the criteria for an AI Incident. The event involves the use of an AI system, the harm is realized (not just potential), and Google took remedial action after the harm was identified. Hence, this is classified as an AI Incident.
Thumbnail Image

Google Under Fire After AI Health Search Results Endanger Users - Thailand Medical News

2026-01-12
Thailand Medical News
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly mentioned and is responsible for generating health summaries that contain fabricated and misleading medical information. This misinformation has directly led to potential harm to users' health by encouraging false reassurance and possible neglect of medical care, which is a clear harm to persons. The event involves the use and malfunction (inaccurate outputs) of the AI system. The harm is realized or ongoing, as users have been exposed to dangerous misinformation. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Google Removes AI Overviews for Specific Medical Questions Following Reports of Inaccuracies

2026-01-12
Tech Times
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly involved and malfunctioned by providing inaccurate medical information. Although no direct harm has been reported, misinformation in medical queries could plausibly harm users' health if relied upon, and Google's removal of the feature for certain queries is a mitigation step that acknowledges this risk. The event therefore fits the definition of an AI Hazard: it plausibly could lead to harm, but no harm has yet been documented. It is not an AI Incident (no injury is reported as having occurred), not Complementary Information (the focus is the risk that prompted the removal, not merely a governance update), and not Unrelated (an AI system and its impact are central to the event).
Thumbnail Image

AI Overviews Removed by Google for Some Medical Searches Following Reports of Dangerous and Misleading Information

2026-01-12
Techlusive
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's AI Overviews) generating medical information that was false and potentially dangerous, leading to real risks of harm to users' health. The AI system's outputs misled users about critical health information, which is a direct link to harm. Google's removal of the AI Overviews for some queries is a response to this harm. Therefore, this is an AI Incident as the AI system's use has directly led to harm to health through misinformation.
Thumbnail Image

AI Was Giving Faulty Health Information: Google Backtracks!

2026-01-11
tamindir.com
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI summaries) was used to provide health information but gave inaccurate or incomplete outputs that could mislead users about medical test results, potentially causing harm to individuals' health understanding and decisions. This constitutes indirect harm to health due to the AI system's outputs. The event reports realized harm (misleading health information) and a company response to mitigate it, fitting the definition of an AI Incident.
Thumbnail Image

Google removes AI Overviews for medical queries following investigations

2026-01-11
Cadena 3 Argentina
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (AI Overviews) used for medical information queries, which is explicitly mentioned. The AI system's use led to the dissemination of potentially misleading health information, which could directly or indirectly harm users' health by causing misinterpretation of medical test results. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to health (harm category a). The removal of the AI Overviews is a response to this harm. Although the article does not report actual injuries, the misleading outputs were actually served to users, and the potential for harm was significant enough to force the removal, supporting incident status rather than a mere hazard. The event is not merely complementary information or unrelated news, as it concerns the direct impact of AI system outputs on health-related information and user safety.
Thumbnail Image

Google removes some of its AI-written summaries because they could put users' health at risk

2026-01-11
Cadena SER
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's generative AI for search summaries) whose outputs contained inaccuracies about health information. This misinformation could directly harm users by causing them to misunderstand their health status, which qualifies as injury or harm to health. The harm is realized or at least highly plausible given the nature of the misinformation. Therefore, this qualifies as an AI Incident because the AI system's use directly led to potential or actual harm to users' health.
Thumbnail Image

Google pulls AI-generated health summaries after detecting risks to users

2026-01-11
infoLibre
Why's our monitor labelling this an incident or hazard?
The AI system (Google's generative AI for health summaries) was used to generate medical information that was inaccurate and lacked necessary context, leading to a risk of users misunderstanding their health conditions. This constitutes indirect harm to health (a) because users could be misled into believing they are healthy when they might have serious conditions, potentially delaying medical care. The event involves the use of an AI system and the harm has materialized or is ongoing, as evidenced by the removal of some summaries and expert concerns. Therefore, this qualifies as an AI Incident.
Thumbnail Image

A radical decision from Google! The feature has been removed for some searches: 'dangerous' and 'alarming'

2026-01-12
Mynet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's generative AI-powered health summaries) that was used to provide medical information. The AI system malfunctioned by delivering inaccurate health data, which could directly harm users by causing them to misinterpret their health status and potentially avoid necessary medical care. The harm is realized or at least highly plausible given the misleading information's nature and the expert characterization of the summaries as 'dangerous' and 'concerning.' Google's removal of the summaries is a response to the incident but does not negate the fact that harm occurred or was imminent. Therefore, this qualifies as an AI Incident due to direct harm to health caused by the AI system's outputs.
Thumbnail Image

Google switches off this AI after discovering it was misleading users: if you still have access, stop using it right now

2026-01-12
Hipertextual
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly mentioned and is used to generate medical information responses. The false and misleading outputs have directly led to potential harm to users' health, such as incorrect dietary advice for cancer patients and misleading test results that could cause users to ignore symptoms or avoid seeking help. The harm is realized or highly plausible given the nature of the misinformation. Google's partial mitigation does not negate the fact that harm has occurred or could occur. Therefore, this qualifies as an AI Incident due to direct harm to health caused by the AI system's malfunction and use.
Thumbnail Image

Google exposed for misleading users with health information; some AI summaries removed, "but they still appear if you rephrase the question" | udn科技玩家

2026-01-12
udn科技玩家
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI-generated health summaries) is explicitly involved and its use has directly led to misleading health information being presented to users. This misinformation can cause harm to users' health by leading them to incorrect conclusions about their medical test results, which fits the definition of an AI Incident involving harm to health. Although Google has taken some remedial actions, the problem persists, indicating ongoing harm rather than just a potential hazard or complementary information.
Thumbnail Image

Google exposed for misleading users with health information; some AI summaries removed, "but they still appear if you rephrase the question" | 聯合新聞網

2026-01-12
聯合新聞網
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI summarization in search results) is explicitly involved, generating health-related content. The misleading and inaccurate nature of this content has already caused or could cause harm to users' health by misinforming them about medical test results, which fits the definition of an AI Incident due to indirect harm to health. Google's partial removal of some summaries does not eliminate the ongoing risk, but the harm is already occurring or has occurred as users rely on these summaries. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

The dangers of AI health summaries: What Google's removal means for South Africans

2026-01-12
IOL
Why's our monitor labelling this an incident or hazard?
The AI system (Google AI Overviews) was used to generate health summaries that were inaccurate and potentially dangerous, directly creating a risk of harm to users' health through misleading medical information. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to people's health (harm category a), and that harm is realized or ongoing, as the summaries were actually shown to users. The removal of some summaries is a response and does not negate the incident classification.
Thumbnail Image

Reaction to criticism: Google hid misleading AI medical reports

2026-01-12
Mirror Weekly
Why's our monitor labelling this an incident or hazard?
The event involves a generative AI system used by Google to create medical summaries that were shown in search results. These summaries contained inaccurate and misleading information about liver function test ranges, which could cause patients to misinterpret their health condition and delay seeking medical help, thus posing a direct risk to health (harm category a). The AI system's outputs were directly responsible for this misinformation. Although Google has taken remedial action by removing some of these summaries, the harm has already occurred and the risk remains. Hence, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Google Pulls Some AI Overviews for Health Queries

2026-01-12
Tempo English
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate medical information summaries that failed to consider critical factors like nationality, gender, ethnicity, and age, which are essential for accurate health information. This omission could mislead users into believing their liver test results are normal when they are not, posing a risk of harm to health. The AI summaries were actively used and displayed to users, constituting an AI Incident due to the realized risk of harm. Google's removal of these summaries is a mitigation response but does not negate the fact that the AI system's outputs had already led to potential health harm. Hence, this qualifies as an AI Incident involving indirect harm to health through misleading AI-generated content.
Thumbnail Image

Google Removes AI Overviews for Queries About Blood Tests Due to Misleading Info

2026-01-12
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate medical summaries that were misleading and incomplete, leading to a plausible risk of harm to users' health by providing false reassurance about serious conditions. This constitutes indirect harm to health (a) as users might delay seeking care based on inaccurate AI outputs. The event involves the use and malfunction (inaccuracy) of an AI system leading to harm. Since harm has occurred or is ongoing due to misleading information, this qualifies as an AI Incident rather than a hazard or complementary information. The removal of some AI Overviews is a response but does not negate the incident classification because the harm or risk of harm has materialized.
Thumbnail Image

Google Search Gave Risky Medical Info -- Then It Was Removed

2026-01-12
ProPakistani
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate medical information that did not account for important individual factors, potentially misleading users about their health status. This constitutes indirect harm to health as users could make harmful decisions based on inaccurate or incomplete AI-generated information. The removal of the feature for certain queries is a response to this harm but does not negate that the AI system caused or contributed to the risk of harm. Hence, this qualifies as an AI Incident due to realized or ongoing harm from the AI system's outputs.
Thumbnail Image

Google removes its AI Overviews feature for health topics over misleading information

2026-01-12
Webrazzi
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI Overviews) was used to generate health-related summaries that misrepresented critical medical information, leading to a plausible risk of harm to users' health by misinforming them. The summaries caused or could cause users to misunderstand their health status, which is a direct or indirect harm to health (criterion a). Google responded by removing the feature for certain queries, acknowledging the AI system's role in the harm. This fits the definition of an AI Incident because the AI system's use directly led to potential or actual harm to people’s health through misinformation.
Thumbnail Image

Google Scales Back AI Search Summaries Following Health Risk Backlash | eWEEK

2026-01-12
eWEEK
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Google's AI system generated health summaries with dangerous misinformation that could cause patients to skip life-saving care, which is a direct harm to health. The AI system's use in providing medical advice without adequate safety measures or personalization led to this harm. The incident involves the AI system's use and malfunction (inaccurate outputs) causing or contributing to health risks, fulfilling the criteria for an AI Incident under the OECD framework.
Thumbnail Image

Google quietly pulls AI summaries for select medical search queries

2026-01-12
Hindustan Times
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating medical summaries for search queries. The summaries omitted contextual information critical for accurate interpretation, which could indirectly harm users' health by causing misinterpretation of medical results. Although no specific injury is reported, the misleading summaries were actually served to users, and Google recognized the danger, partially removing them. This fits the definition of an AI Incident because the AI system's use indirectly exposed users to health harm, with the removal serving as the mitigation response. The event is not merely a product update or general news but concerns a concrete issue of AI-generated content causing or potentially causing harm.
Thumbnail Image

Google removes AI Overviews for key health queries following accuracy concerns: Report - The Times of India

2026-01-12
The Times of India
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate health summaries that contained false and misleading information about normal liver test ranges. This misinformation can directly harm users by causing confusion or incorrect health decisions, fulfilling the criteria for harm to health (a). The removal of these summaries is a response to the realized harm caused by the AI system's inaccurate outputs. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's malfunction (inaccurate information generation) and potential harm to users' health.
Thumbnail Image

Google removes AI-generated health summaries after warnings of serious risks

2026-01-12
GD
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly involved as it generates automatic health summaries. The inaccuracies in these summaries have directly led to potential harm by misleading users about medical test results and treatment advice, which can affect health outcomes. The harm is realized or highly plausible given the nature of the misinformation and the scale of Google's search engine. The company's partial removal of some summaries and ongoing adjustments are responses to the incident but do not negate the fact that harm has occurred or is ongoing. Therefore, this event meets the criteria for an AI Incident due to direct harm to health caused by the AI system's outputs.
Thumbnail Image

Google, "Hayati Hatalar" Yaptığı Gerekçesiyle Bazı Sağlık Aramalarında Yapay Zekâ Özetlerini Kapattı

2026-01-12
Webtekno
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's AI Overview feature) that generated incorrect medical information, which is a direct risk to patient health and safety. The harm is realized as the AI provided dangerous advice that could increase patients' health risks. The disabling of the feature is a response to this harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to health (criterion a).
Thumbnail Image

Google takes action on erroneous AI summaries in search results for medical advice

2026-01-13
infobae
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI-generated summaries) was used to provide medical advice in search results. The AI outputs contained erroneous and potentially dangerous information, which could directly harm users' health if followed. The event involves the use of an AI system whose malfunction (incorrect outputs) led to harm to persons (harm to health). The removal of the summaries is a response but does not negate the fact that harm occurred or was likely occurring. Hence, this is an AI Incident, as the AI system's use directly led to harm or risk of harm to people.
Thumbnail Image

Google AI Overviews Spread Misinfo, Prompt Brand Strategy Shifts

2026-01-13
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) that synthesizes web content to generate summaries. The article documents multiple instances where these AI-generated summaries have misled users with inaccurate or harmful information, including dangerous health advice and false brand claims. These inaccuracies have led to direct harms such as potential health risks and economic damage to brands, fulfilling the criteria for an AI Incident. The article does not merely discuss potential risks or future harms but reports on harms that have already occurred and the resulting consequences, including regulatory attention and corporate responses. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Medical errors and incomplete data: Google rethinks the use of AI in search

2026-01-13
adn Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system generating health summaries that contained inaccurate and potentially harmful information. The AI's failure to consider important clinical variables and the prominent placement of these summaries in search results could mislead users, posing a direct risk to their health. Google's removal of these summaries after the issue was exposed confirms the AI system's malfunction and its role in causing harm. This fits the definition of an AI Incident as the AI system's use directly led to potential injury or harm to persons.
Thumbnail Image

Google removes AI summaries that pose health risks

2026-01-13
Star.com.tr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's AI Overviews) that generates health information summaries. The AI system's use has directly led to harm by providing misleading and inaccurate health information, which could cause patients to wrongly believe they are healthy and avoid needed medical treatment, thus posing a serious health risk. This fits the definition of an AI Incident as the AI system's use has directly led to harm to health (harm category a). The company's partial remediation (removal of some summaries) does not negate the occurrence of harm. Hence, the event is classified as an AI Incident.

Google pulls selected medical advice from AI Overview following safety concerns

2026-01-13
India TV News
Why's our monitor labelling this an incident or hazard?
The AI Overview feature is an AI system generating medical summaries. Its outputs were inaccurate and misleading, specifically regarding liver function tests, which is a health-related domain with high stakes. The misleading AI-generated advice could cause harm to users' health by leading to false reassurance and avoidance of medical treatment. Google responded by removing the problematic AI-generated content, indicating recognition of the harm caused. This fits the definition of an AI Incident because the AI system's use directly led to harm to health (a).

Google reins in its AI: AI Overviews pulled from health searches over too many bugs

2026-01-13
informaticien.be
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's Gemini LLM) generating medical information that is inaccurate and potentially harmful, including dangerous advice for cancer patients and misleading interpretations of medical tests. The AI's malfunction (hallucinations and failure to contextualize medical data) has directly led to risks of harm to users' health, which is a clear case of harm to persons. Google's response to disable the AI feature for health queries confirms the recognition of this harm. Hence, this is an AI Incident involving direct harm due to AI system malfunction and use.

Beware of this Google AI! It delivers results harmful to health

2026-01-13
Pplware
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's AI summary feature) that has been used for health-related queries and has provided false and misleading information that could harm users' health. The harm is realized or ongoing, as users have received advice that could increase mortality risk or cause them to ignore symptoms. The AI system's malfunction or misuse is central to the harm. Google's partial mitigation does not negate the fact that harm has occurred. Therefore, this event meets the criteria for an AI Incident due to direct harm to health caused by the AI system's outputs.

Using AI for medical queries is not a good idea, as Google has just learned the hard way

2026-01-13
iPadizate
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI-generated medical summaries) was used to provide health information but failed to consider critical individual factors, leading to potentially harmful misinterpretations. This constitutes indirect harm to health (a), as users relying on this information could make unsafe health decisions. The company's removal of these summaries is a mitigation response but does not negate the fact that the AI system's use led to a significant risk of harm. Therefore, this qualifies as an AI Incident due to the realized risk and indirect harm to health from the AI system's outputs.

Google removes medical queries from its AI summaries after accuracy concerns

2026-01-13
PasionMóvil
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI-generated summaries) whose outputs contained inaccurate medical information that could cause harm to users' health, a direct harm to persons. The AI system's malfunction or misuse (providing misleading health data) has directly led to potential health risks, which is a clear AI Incident under the framework. The removal of the summaries is a response to this incident, but the core issue is the realized harm risk from the AI system's outputs. Therefore, this event is classified as an AI Incident.

Fortune Tech: A painful acknowledgement

2026-01-13
Fortune
Why's our monitor labelling this an incident or hazard?
The article does not describe any specific AI Incident where AI systems have directly or indirectly caused harm, nor does it present a clear AI Hazard where AI systems could plausibly lead to harm. The mention of Google's removal of misleading AI Overviews is a response to previously identified issues and is framed as a mitigation effort, fitting the definition of Complementary Information. Other parts of the article discuss business moves, partnerships, and investments related to AI but do not describe harms or plausible harms. Therefore, the article is best classified as Complementary Information, providing context and updates on AI developments and responses without reporting a new incident or hazard.

Google Pulls AI Overviews For Some Health Queries After Accuracy Concerns

2026-01-13
ETV Bharat News
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly involved in generating health advice. The misinformation provided has directly led to potential or actual harm to people's health, such as misinforming liver disease patients or giving harmful dietary advice to cancer patients. This fits the definition of an AI Incident as the AI system's use has directly led to harm to health (a). The disabling of the feature for some queries is a response but does not negate the incident classification. Therefore, this event is best classified as an AI Incident.

Google Quietly Removes AI Overviews From Some Health Searches After Report Highlighted Issues

2026-01-13
thedailyjagran.com
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate medical advice summaries that ignored important individual context such as age, sex, ethnicity, and country, which could mislead users about their health status. This misuse or malfunction of the AI system's outputs has directly or indirectly led to potential harm to users' health by providing misleading information. The event describes a response to this harm by removing the AI Overviews for certain queries, but the harm or risk of harm has already occurred. Hence, it meets the criteria for an AI Incident due to harm to health caused by the AI system's use.

Google's AI Overviews pose a risk to users' health

2026-01-14
Génération NT
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly involved as it generates health-related summaries. The use of this AI system has directly led to the dissemination of inaccurate medical information, which constitutes harm to the health of users (harm category a). The article details specific examples where the AI's outputs could cause patients to neglect necessary medical care or follow dangerous advice, confirming realized harm rather than just potential risk. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Google removes some AI summaries after investigation uncovers false information given to users: 'Completely wrong [and] really dangerous'

2026-01-14
The Cool Down
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as generating health-related summaries that contained false information. The use of the AI system directly led to the dissemination of misleading medical advice, which constitutes harm to people's health (harm category a). The event involves the use and malfunction of the AI system's outputs causing potential health harm. Since the harm is realized in the form of false medical information being given to users, this qualifies as an AI Incident rather than a hazard or complementary information. The removal of some summaries is a response but does not negate the fact that harm occurred or was ongoing.

Google withdraws AI health summaries following alerts over misleading information

2026-01-14
Diario de Morelos
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI-generated health summaries) was used to provide medical information but produced misleading and potentially dangerous content, which could directly harm users' health if acted upon. The removal of these summaries followed credible reports of such harm, indicating that the AI system's malfunction and use led to realized or imminent harm. This fits the definition of an AI Incident because the AI system's outputs have directly or indirectly led to harm to the health of people, fulfilling criterion (a) under AI Incident. The event is not merely a potential risk (hazard) or a complementary update; it involves actual harm or risk realized enough to prompt removal of the AI outputs.

AI Needs To Stop Rewarding Guesses

2026-01-14
mediapost.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (large language models and AI Overviews) and discusses their use and malfunction in providing inaccurate information. The potential harm includes health risks from incorrect medical advice and reputational harm from false claims. However, the article does not document a concrete event where harm has already materialized but rather explains the ongoing risk and systemic issues leading to such harms. Therefore, this qualifies as an AI Hazard because it plausibly leads to AI Incidents (harm) due to the AI systems' design and use, but no specific incident of harm is reported as having occurred yet.

Hallucinations enter sensitive territory: Google was serving erroneous medical information in its AI search results

2026-01-14
Tecnología
Why's our monitor labelling this an incident or hazard?
The AI system (Google's 'AI Overviews' powered by Gemini) is explicitly involved in generating medical information. The incorrect outputs have directly led to potential harm to patients' health by providing misleading clinical data and advice, which could cause injury or harm to persons. The harm is not speculative but realized in the form of misinformation that could affect health decisions. Google's partial withdrawal of some results is a response but does not negate the occurrence of harm. Therefore, this event qualifies as an AI Incident due to direct harm to health caused by the AI system's outputs.

Why did Google remove AI summaries from health searches?

2026-01-14
CHIP Online
Why's our monitor labelling this an incident or hazard?
Google's AI system generated health summaries that contained misleading or incomplete information, which could cause users to misinterpret their health status and delay necessary medical care. This constitutes harm to health (a) as defined in the framework. The AI system's use directly contributed to this risk, and the removal of these summaries is a response to the incident. The article reports realized harm potential and the company's mitigation, so it is not merely a hazard or complementary information but an AI Incident.

"Dangerous and misleading": Google removes AI health summaries after investigations into threats to user safety

2026-01-12
euronews
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to produce health-related summaries that have directly led to misinformation posing a risk of harm to users' health. The misinformation about liver function test ranges could cause patients to wrongly believe they are healthy, which is a direct harm to health. Google's removal of these summaries is a response to this harm. Therefore, this qualifies as an AI Incident because the AI system's outputs have directly led to harm to health (harm category a).

Google backs away from showing AI Overviews for some medical queries

2026-01-12
العربية
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate medical summaries in search results, which directly led to the dissemination of potentially misleading health information. This misinformation could cause harm to users' health by giving them inaccurate interpretations of medical test results. The event involves the use and malfunction (inaccurate outputs) of an AI system leading to realized or at least credible harm. Therefore, this qualifies as an AI Incident due to harm to health caused by the AI system's outputs. The company's removal of the feature and review process is a response but does not negate the incident classification.

Google deletes AI summaries after discovery of health information...

2026-01-12
الوكيل الإخباري
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI health summarization) was used to provide health-related information but produced misleading outputs that could harm users' health understanding. Although no direct injury is reported, the misinformation could plausibly lead to harm to individuals' health, fulfilling the criteria for an AI Incident due to indirect harm. The removal of summaries and ongoing improvements show recognition of the harm caused. Therefore, this event qualifies as an AI Incident.

Google removes AI summaries for health questions after accuracy concerns

2026-01-12
euronews
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the source of the health summaries generated by AI. The use of this AI system directly led to the dissemination of inaccurate health information, which poses a risk of harm to users' health, fulfilling the criteria for harm (a) under AI Incident. The removal of the AI summaries is a response to this harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information, as the harm has already occurred through misinformation.

Google pulls AI summaries from search results after serious health errors are detected

2026-01-11
موقع عرب 48
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI Overviews) was used to generate health-related summaries that contained serious inaccuracies. These inaccuracies could directly lead to harm to users' health by providing misleading medical information, causing patients to potentially ignore necessary medical follow-up. The event involves the use and malfunction of an AI system leading to realized harm (or at least a significant risk of harm) to people's health, fitting the definition of an AI Incident. The removal of the summaries is a response but does not negate the incident itself.

Inaccurate information and danger to patients: Google moves to delete some AI health summaries

2026-01-11
الصباح العربي
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI-generated health summaries) is explicitly involved and has produced inaccurate medical information that could directly harm patients' health by misleading them about critical health parameters. This constitutes harm to health (a), fulfilling the criteria for an AI Incident. The event involves the use of the AI system and its malfunction (inaccurate outputs). The harm is realized or ongoing, as experts express concern about the risk to patients. Therefore, this is classified as an AI Incident.

The Guardian: Google removes some AI summaries for providing incorrect health information

2026-01-11
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's generative AI summaries) that was used to provide health information. The AI system's outputs were inaccurate and misleading, directly leading to potential harm to users' health by causing false reassurance about serious medical conditions. This fits the definition of an AI Incident because the AI system's use has directly led to harm to health (harm category a). The company's removal of some summaries and efforts to improve the system are responses to the incident but do not change the classification of the event itself. Hence, the event is best classified as an AI Incident.

Google removes some AI summaries over false health information

2026-01-11
aawsat.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's generative AI health summaries) whose use has directly led to the dissemination of false and misleading health information. This misinformation can cause harm to users' health by leading to incorrect self-assessments and potentially dangerous medical decisions. The harm is realized or ongoing as users have been exposed to this misinformation, fulfilling the criteria for an AI Incident under harm category (a) injury or harm to health. The removal of the summaries is a response to the incident but does not negate the fact that harm has occurred or is occurring.

"Dangerous" health information from AI, and Google takes action

2026-01-11
أخبارنا
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Google's generative AI for search summaries) that has produced inaccurate health information, which can directly harm users by influencing their health decisions incorrectly. This fits the definition of an AI Incident because the AI system's use has directly led to harm to the health of people (harm category a). The event describes realized harm through misinformation and the potential for serious health consequences, not just a plausible risk. Google's removal of some summaries is a response but does not negate the incident classification. Therefore, this is an AI Incident.

"Dangerous" health information from AI, and Google takes action

2026-01-11
العربية
Why's our monitor labelling this an incident or hazard?
The AI system (Google's generative AI for health summaries) is explicitly involved in generating content that has been shown to be inaccurate and potentially harmful to users' health decisions. The misinformation could cause users to underestimate serious health conditions, which constitutes direct harm to health. Google's removal of some summaries is a mitigation step but does not negate the fact that harm has occurred or is occurring. Therefore, this event qualifies as an AI Incident due to realized harm from the AI system's outputs.

Google removes some of its summaries over their risk to health

2026-01-11
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's AI-generated summaries) that has been used to provide health information. The AI system's outputs have directly led to harm by misleading users about critical health metrics, which can cause injury or harm to health (harm category a). The harm is realized, not just potential, as users may be misled into believing they are healthy when they are not, which is dangerous. Although Google has taken some remedial action, the problem persists, confirming the incident status rather than a mere hazard or complementary information. The event also touches on broader issues of media trust and legal investigations, but the core AI-related harm is the misleading health information causing direct health risks.

Health concerns push Google to backtrack as "artificial intelligence" causes a crisis

2026-01-11
شفق نيوز
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's generative AI for health summaries) whose use has directly led to the dissemination of inaccurate health information, posing a risk of harm to users' health. The harm is realized or at least ongoing, as misleading health summaries are shown to users, which can cause injury or harm to health (harm category a). The company's removal of some summaries is a mitigation step but does not negate the incident. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated misinformation in a critical domain (health).

Google removes AI-powered health summaries, citing their danger

2026-01-11
Agencia SANA
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's generative AI health summaries) whose use has directly led to harm or risk of harm to users' health by providing inaccurate medical information. The harm is realized or at least clearly occurring, as users could be misled about serious health conditions, potentially leading to injury or worsening health outcomes. Therefore, this meets the criteria for an AI Incident rather than a hazard or complementary information.

Calls to "rein in" Google's summaries after a report of "false information"

2026-01-11
aawsat.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI-generated summaries) whose use has led to concerns about misinformation, which is a form of harm to communities. However, the article does not document a concrete incident where harm has already occurred; rather, it reports on observed misleading examples and expert calls for better controls to prevent future harm. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm through misinformation if not properly managed.

Google removes some AI summaries after accusations of endangering users' health

2026-01-12
صدى البلد
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) that generates health information summaries. The AI's outputs have directly led to harm by providing misleading medical information, which can cause users to misinterpret their health status and potentially delay or avoid necessary medical care, thus harming their health. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people's health. The article also discusses Google's mitigation efforts, but the harm has already occurred and the risk persists, confirming the classification as an AI Incident rather than a hazard or complementary information.

Google removes AI summaries from medical search results after misinformation concerns

2026-01-14
elwatan.info
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI-generated medical summaries) whose use directly led to misleading health information, which can harm users' health (harm category a). The AI system's outputs failed to consider critical individual factors, leading to potential misinterpretation of medical test results. This fits the definition of an AI Incident because the AI system's use has directly led to harm or risk of harm to persons. Although Google is now removing the feature and improving the system, the incident of misleading information dissemination has already occurred.

Google removes AI summaries related to health queries

2026-01-14
Alwasat News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating health-related summaries that contained misleading and medically unsafe information. This misinformation poses a direct risk of harm to users relying on these AI-generated outputs for health decisions, fulfilling the criteria for an AI Incident involving harm to health. The removal of the summaries is a response to this realized harm, confirming the incident status rather than a mere hazard or complementary information.

'Dangerous and alarming': Google removes some of its AI summaries after users' health put at risk

2026-01-11
the Guardian
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's generative AI Overviews) that generated inaccurate health information. The misinformation directly risks harm to users' health by potentially causing them to misinterpret critical medical test results and avoid necessary follow-up care. This fits the definition of an AI Incident as the AI system's use has directly led to harm or risk of harm to people's health. The removal of some AI Overviews is a response but does not negate the fact that harm occurred or was plausible. Therefore, this event is classified as an AI Incident.

Google removes certain AI summaries related to health issues

2026-01-11
NewsBytes
Why's our monitor labelling this an incident or hazard?
The AI system (Google's generative AI Overviews) produced false and misleading health information, which is a direct harm to users' health by potentially causing misinformed medical decisions. The removal of some summaries indicates recognition of harm, but the persistence of other inaccurate AI-generated health content means the risk remains. This fits the definition of an AI Incident because the AI system's use has directly led to harm or risk of harm to people's health, fulfilling criterion (a) under AI Incident definitions.

'Dangerous And Alarming': Google Removes Some AI Summaries After It Puts Users' Health At Risk

2026-01-11
News18
Why's our monitor labelling this an incident or hazard?
The AI system (Google's generative AI providing health summaries) was used to generate health information that was inaccurate and misleading, posing a direct risk to users' health. This fits the definition of an AI Incident as it caused harm to health (a). The removal of the summaries is a response to the incident, but the core event is the realized harm from the AI system's outputs. Therefore, the event is classified as an AI Incident.

Google removes some AI summaries after users' health put at risk

2026-01-11
Irish Examiner
Why's our monitor labelling this an incident or hazard?
The AI system (Google's generative AI Overviews) is explicitly involved as it generates health-related summaries. The inaccurate information provided by the AI system has directly led to a risk of harm to users' health by potentially misleading seriously ill patients. This fits the definition of an AI Incident because the AI system's use has directly led to harm (or risk of harm) to people's health. The removal of some summaries is a response to this harm but does not negate the incident itself. Therefore, this event is classified as an AI Incident.

Google removes AI health summaries after safety concerns

2026-01-11
The News International
Why's our monitor labelling this an incident or hazard?
The AI system (Google AI Overviews) was used to generate health summaries that were misleading and inaccurate, leading to a plausible risk of harm to patients who might misinterpret their liver test results. The event involves the use of an AI system whose outputs directly led to health-related misinformation, which can cause injury or harm to persons. The removal of the feature after investigation confirms the recognition of harm. Hence, this is an AI Incident as the AI system's use has directly led to harm or risk of harm to health.

Google removes multiple AI health summaries after warning over 'misleading' results

2026-01-11
The Independent
Why's our monitor labelling this an incident or hazard?
The AI system involved is Google's generative AI used to produce health summaries in search results. The misleading outputs, lacking critical context, have directly led to a risk of harm to individuals' health by potentially causing false reassurance and missed medical care. The event reports actual misleading AI outputs that were live and accessible, constituting realized harm rather than just a potential risk. Google's removal of these summaries is a response to this harm. Hence, this qualifies as an AI Incident due to the direct link between AI-generated content and potential injury or harm to health.

Google removes certain health queries from AI Overviews over accuracy concerns (source: Euronews)

2026-01-12
Investing.com Ελληνικά
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate health information summaries that were found to be inaccurate and potentially harmful, as evidenced by expert criticism and the removal of these summaries by Google. This constitutes an AI Incident because the AI system's outputs directly led to misinformation that could harm people's health, fulfilling the criteria of harm to persons. The event involves the use and malfunction (inaccuracy) of the AI system leading to realized harm potential, not just a plausible future risk. Therefore, it is classified as an AI Incident.

"Dangerous and alarming": Google removes some health AI Overviews after a revealing Guardian investigation

2026-01-12
Pagenews.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's generative AI producing health-related summaries) whose use has directly led to harm or risk of harm to people's health by providing misleading medical information. The harm is realized in the form of potential misdiagnosis or delayed treatment due to false reassurance from AI-generated content. This fits the definition of an AI Incident because the AI system's outputs have directly or indirectly caused harm to individuals' health. The removal of some summaries is a response but does not negate the incident classification as harm has occurred or is ongoing.

"Dangerous and alarming": Google removes some of its AI summaries on health

2026-01-11
LiFO.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as generating health information summaries. The AI system's outputs were inaccurate and misleading, directly causing potential harm to users' health by providing false reassurance about critical medical test results. This meets the criteria for an AI Incident because the AI system's use directly led to harm (or significant risk of harm) to persons. The company's removal of the problematic summaries is a response to the incident but does not negate the fact that harm occurred or was likely to occur. Hence, the classification is AI Incident.

Google removes some of its AI summaries: here's why

2026-01-11
in.gr
Why's our monitor labelling this an incident or hazard?
The AI system (Google's generative AI for health summaries) was used to generate health information summaries that contained false and misleading data. This misinformation posed a direct risk to users' health, as it could cause seriously ill patients to believe they were healthy and neglect medical follow-up, constituting harm to health (a). The event involves the use of an AI system, and the harm is realized or ongoing, not just potential. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Google: AI Overview answers on health topics proved dangerous

2026-01-11
News 24/7
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) that generates health-related summaries. The AI's outputs contained inaccurate medical information, which directly led to potential harm to users' health by providing misleading data that could cause them to underestimate serious health conditions. This constitutes an AI Incident because the AI system's use has directly led to harm to people's health. The article details realized harm and the company's partial remediation, fitting the definition of an AI Incident rather than a hazard or complementary information.

Google withdrew some AI Overview answers after risk to users' health

2026-01-11
e-thessalia.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's generative AI producing health summaries) whose outputs have directly led to harm to users' health by providing inaccurate and misleading medical information. The harm is realized, not just potential, as users could be misled about critical health conditions. The AI system's malfunction or misuse in generating these summaries is central to the incident. Therefore, this qualifies as an AI Incident due to direct harm to health caused by the AI system's outputs.

Google scraps AI Overviews for certain medical queries

2026-01-12
Liberal.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI-generated medical overviews) that was used to provide health-related information. The AI system's outputs were found to be misleading and potentially harmful by causing users to misunderstand their medical test results, which is a direct harm to health (harm category a). The company's action to remove these AI overviews for certain queries is a response to this harm. Since the AI system's use directly led to misleading health information, this constitutes an AI Incident rather than a hazard or complementary information. The event is not merely about policy or research updates but about realized harm from AI-generated content.

Google withdraws dangerous AI Overviews on medical topics

2026-01-12
Techblog
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating medical summaries that contained inaccuracies and ambiguous guidance, which could mislead users and cause harm to their health. The harm is related to injury or harm to the health of persons, which is one of the defined harms for an AI Incident. The AI system's malfunction (providing misleading or incorrect information) directly led to the risk of harm, and the company's response confirms the recognition of this risk. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Google: AI Overviews withdrawn for certain medical searches

2026-01-12
SecNews.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's AI overviews) providing false and dangerous medical advice that could increase the risk of death or serious health deterioration. This constitutes direct harm to health (a), as users relying on these AI-generated answers could make harmful decisions. The withdrawal of the AI overviews is a response to this harm. Therefore, this event qualifies as an AI Incident due to the realized or highly probable harm caused by the AI system's outputs.

Google withdraws AI Overviews covering medical advice

2026-01-12
Techgear.gr
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to provide medical advice and information, which is a critical domain affecting human health. The AI's outputs were inaccurate and misleading, directly posing risks to patients' health and safety. The harm is realized and not just potential, as misleading medical advice can cause injury or harm to individuals. Google's removal of the feature is a response to this harm. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's use and harm to health.

Google removes health AI Overviews over accuracy problems

2026-01-12
euronews
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly mentioned as generating health-related summaries that contained inaccurate and potentially harmful information. The inaccuracies in medical advice, such as recommendations for cancer patients that could increase mortality risk, represent a direct or indirect harm to health. The removal of these AI Overviews is a response to this harm. Hence, the event meets the criteria for an AI Incident due to realized harm from the AI system's outputs affecting health information accuracy and safety.

Google withdraws AI Overviews covering medical advice

2026-01-12
news.makedonias.gr
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate medical advice summaries, which is a clear AI system involvement. The AI's use led to the dissemination of inaccurate and potentially harmful medical information, which can cause injury or harm to users' health, fulfilling the criteria for an AI Incident. The harm is realized or at least directly linked to the AI system's outputs, not merely a potential risk, as the removal was a response to identified dangerous advice.

Careful what you search for on Google: false diagnoses, and how artificial intelligence endangered patients' lives

2026-01-11
HotNews.ro
Why's our monitor labelling this an incident or hazard?
The AI system (Google's generative AI for search result summaries) was used to generate health information that was inaccurate and misleading. This use directly led to harm by putting patients at risk of misunderstanding their health status and possibly foregoing necessary medical care, which constitutes injury or harm to health (harm category a). The event involves the use of an AI system and realized harm has occurred, making this an AI Incident rather than a hazard or complementary information. The company's partial remediation does not negate the fact that harm has already occurred.

Google withdraws AI recommendations in health; users were exposed to dangerous information, The Guardian investigation finds

2026-01-11
spotmedia.ro
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) was used to generate medical information summaries that were inaccurate and potentially harmful, directly exposing users to health risks. This constitutes harm to the health of persons (definition a) caused by the AI system's use. The event involves the use of an AI system that led to realized harm (dangerous misinformation), making it an AI Incident rather than a hazard or complementary information. The company's partial remediation does not negate the fact that harm occurred or was ongoing at the time of reporting.

"Dangerous and alarming": Google removes some of its AI-generated summaries after users' health was put at risk

2026-01-11
G4Media.ro
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's generative AI for search result summaries) whose use directly led to harm by providing inaccurate health information that could cause users to misinterpret their medical condition, potentially resulting in health risks. This fits the definition of an AI Incident because the AI system's outputs have directly led to harm to persons (health risks). The company's removal of some summaries is a response but does not negate the incident classification. The event is not merely a potential risk (hazard) or a complementary update; it documents realized harm from AI-generated content.

Google takes a step back: some AI-generated summaries removed over risks to users' health

2026-01-11
A.M. Press
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) generated inaccurate medical summaries that misled users about critical health information, such as liver function test results, which could cause users to neglect necessary medical attention. This misinformation represents harm to the health of individuals, fulfilling the criteria for an AI Incident. The involvement of the AI system is explicit, and the harm is direct and realized, not merely potential. Although Google is taking corrective actions, the event centers on the harm caused by the AI system's outputs, not just on the response or broader ecosystem context.

Google withdraws AI Overviews for some medical searches after information deemed dangerous

2026-01-12
Playtech.ro
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's AI Overviews) that generated medical information. The AI's outputs were factually incorrect and posed a significant risk to patient health, which is a direct harm to persons. The harm is realized or at least highly probable given the nature of the misinformation and its potential consequences. The company's decision to disable the AI feature for these queries is a response to the incident but does not negate the fact that harm occurred or was imminent. Therefore, this qualifies as an AI Incident due to the direct link between AI-generated content and potential injury to health.

Google begins disabling AI-generated answers for some medical searches after doctors' warnings

2026-01-12
Ziare.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system generating medical advice that is factually incorrect and potentially dangerous, with medical experts confirming the severity of the misinformation. The AI system's outputs have already caused or could cause harm to patients' health, fulfilling the criteria for an AI Incident under harm to health (a). The company's response to disable the AI-generated answers further confirms recognition of the harm caused. Therefore, this is not merely a potential hazard or complementary information but a clear AI Incident.

Google removes medical questions from its AI, and the reason is surprising

2026-01-13
DoctorulZilei
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI Overviews) was used to generate medical summaries that contained inaccurate or misleading information about critical health tests. This misuse of AI led to potential harm to users' health by providing incorrect medical data without necessary context, which is a direct harm to health as defined in the framework. The removal of these summaries is a response to an AI Incident where the AI's outputs caused or risked causing harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Answers alleged to be misleading: Google takes down some health "AI Overviews"

2026-01-11
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating health-related summaries that were found to be misleading and potentially harmful to users' understanding of their health. The misleading information could lead to harm to individuals' health (harm category a). The event involves the use of the AI system, with its outputs leading to harm, even if indirectly, as users might misinterpret their health status. The removal of the AI summaries is a response to this harm. Hence, this is an AI Incident rather than a hazard or complementary information, as harm has already occurred due to the AI system's outputs.

Google removes the AI Overviews feature for some medical queries

2026-01-12
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI overview feature) was used to generate health-related summaries that were misleading because they did not account for important factors like nationality, sex, race, or age. This misinformation could lead users to incorrectly assess their health, which is a direct harm to health (a). Google's removal of the feature for some queries is a mitigation step but does not fully resolve the broader issue. Since the AI system's use directly led to misleading health information, this qualifies as an AI Incident under the definition of harm to health caused by AI system use.

Accused of containing false health information, Google removes some AI Overviews

2026-01-11
finance.eastmoney.com
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) generated inaccurate health-related content that could mislead users and cause harm, fulfilling the criteria for an AI Incident under harm to health. The removal of content indicates recognition of the harm caused. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's outputs and potential health harm to users.

Google AI summaries provide erroneous medical information

2026-01-11
大公文匯網
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI summary feature) is explicitly involved, providing medical information that is false or misleading. This misinformation has directly led to potential harm to health (harm category a) because patients relying on these summaries might misinterpret their medical conditions and avoid needed treatment. The event involves the use of the AI system and its malfunction in providing inaccurate outputs. Since harm is occurring or highly likely due to the misleading medical information, this qualifies as an AI Incident rather than a mere hazard or complementary information.

Accused of containing false health information, Google removes some AI Overviews

2026-01-11
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The AI system (Google's generative AI for AI Overviews) was used to generate health-related summaries that contained inaccurate information, which could mislead users and cause harm to their health. The removal of these AI Overviews indicates recognition of this harm. Since the AI system's outputs directly led to potential health harm, this qualifies as an AI Incident under the definition of harm to health caused by AI system use.

Google takes down some medical AI Overviews over misleading health answers

2026-01-12
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) that generated medical information used by users to assess their health. The AI provided generalized reference ranges without accounting for important factors like age, sex, or ethnicity, which could mislead users and cause harm to their health (harm category a). The harm is indirect but plausible and significant, as users might delay seeking medical care based on incorrect AI outputs. Google responded by removing some AI-generated content, indicating recognition of the harm. Therefore, this qualifies as an AI Incident due to realized or ongoing harm caused by the AI system's outputs in a sensitive domain (health).

Google removes the AI Overviews feature for some medical queries

2026-01-12
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI overview feature) was used to generate health-related summaries that did not account for important factors like nationality, sex, race, or age, leading to misleading information. This misinformation could cause users to misjudge their health status, constituting indirect harm to health. The event reports that the AI-generated content produced misleading health information, directly linking the system's outputs to harm (a). The removal of the feature for some queries is a response but does not eliminate the underlying risk. Hence, this qualifies as an AI Incident due to realized harm from the AI system's outputs.

Google responds to inaccurate AI-generated health information: some summaries removed, improvements promised

2026-01-12
环球网
Why's our monitor labelling this an incident or hazard?
The AI system's use has directly led to the dissemination of inaccurate health information, which poses a risk of harm to users' health by potentially causing them to misinterpret their medical test results and neglect necessary medical follow-up. This fits the definition of an AI Incident as the AI system's outputs have caused or could cause harm to people's health. The removal of some summaries and the commitment to improve are responses but do not negate the incident classification.

Google AI Overview health advice deemed dangerous

2026-01-12
IDN Times
Why's our monitor labelling this an incident or hazard?
The AI Overview feature is an AI system generating health-related summaries. The investigation reveals that these summaries can be inaccurate and lack context, which can mislead users and potentially harm their health. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to the health of people by providing misleading medical information. The harm is realized or at least occurring as users are exposed to potentially harmful misinformation.

Google pulls the AI Overviews feature on health topics after it is deemed misleading

2026-01-13
katadata.co.id
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google's AI Overviews) generating health-related summaries that were misleading and potentially harmful to users' understanding of their health test results. The harm is realized because users could be misled about their liver blood test results, which is a direct health risk. Google's removal of the feature for certain queries is a response to this harm but does not negate the fact that the AI system's outputs caused or could cause harm. Hence, this is an AI Incident due to realized harm from AI system use in health information dissemination.

Google deletes some AI Overviews summaries on health topics

2026-01-12
Antara News
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI Overviews) was used to generate health-related summaries that contained potentially misleading information, which could lead to harm to users' health understanding and decisions. This constitutes indirect harm to health due to the AI system's outputs. The event involves the use and malfunction (inaccurate or incomplete information) of the AI system leading to potential harm, and the company's subsequent mitigation efforts. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to potential harm to health, and the event describes realized issues and responses rather than just potential future harm or general information.

Google limits AI summaries for health searches

2026-01-12
Kabarin.com
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in generating health-related summaries for search queries. The misleading information could plausibly lead to harm to users' health understanding, which fits the definition of an AI Hazard. There is no indication that actual harm has occurred yet, but the risk is credible and recognized by both the media and Google. The company's response to limit the feature and review content supports the classification as a hazard rather than an incident or merely complementary information. The event is not unrelated because it directly concerns AI-generated content and its potential impact on health information.

Google limits AI Overviews summaries for health searches

2026-01-12
ANTARA News Kalteng
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly involved in generating health information summaries. The investigation found that these summaries could mislead users by omitting important contextual factors, posing a plausible risk of harm to users' health if they rely on this information. Although no direct harm is reported, the potential for misleading health information is a credible risk. Google's response to limit the feature for certain queries indicates recognition of this hazard. Hence, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Google removes AI Overview from a number of health queries to prevent misleading information

2026-01-12
VOI - Waktunya Merevolusi Pemberitaan
Why's our monitor labelling this an incident or hazard?
An AI system (Google's generative AI providing health overviews) was used to generate medical information that was misleading and oversimplified, directly leading to the risk of harm through misinformation in health queries. The event describes realized harm in the form of misleading health information being provided to users, which is a form of harm to health and communities. Google's removal of the AI overview for these queries is a response to this harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (misinformation) affecting users' health understanding.

Google deletes some AI information on health topics

2026-01-12
Tempo
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI summarization feature) was used to generate health-related summaries that were inaccurate and misleading, which can directly harm users by causing misunderstanding of their health status. The event involves the use and malfunction of the AI system leading to a health-related harm risk, and the company responded by removing the problematic AI summaries. This fits the definition of an AI Incident because the AI system's outputs have directly led to a harm scenario (misleading health information), even if the harm is indirect (users misinterpreting their health).

Google removes AI health information; experts say AI is not a digital doctor

2026-01-13
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate health-related summaries, which were found to be misleading and potentially harmful by not accounting for critical factors like age and sex. This misuse or malfunction of the AI system's outputs could lead to misunderstanding of health status, posing a risk to patient safety and health outcomes. Google's decision to remove these AI-generated summaries indicates recognition of the harm risk. The involvement of AI in producing misleading health information that could harm individuals' health understanding fits the definition of an AI Incident, as it indirectly led to potential harm to health. The event is not merely a complementary update or general news but concerns realized harm potential from AI use in a high-risk domain.

Google's AI feature serves up erroneous medical information, drawing strong protests from health experts

2026-01-13
Bisnis.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of a generative AI system by Google that produces medical summaries. The AI system's outputs have directly led to misinformation that can harm patients' health, such as dangerous dietary advice for cancer patients and misleading interpretations of liver test results. These harms fall under injury or harm to health (a) as defined in the framework. The AI system's malfunction or misuse in generating inaccurate medical content is the direct cause of these harms. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Tech giants' AI bets questioned as Google's AI summaries are deemed a danger to public health information

2026-01-13
Jawa Pos
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly mentioned and is responsible for generating misleading health information. The misinformation about critical medical test results can cause users to have a false sense of security, potentially leading to delayed or missed medical interventions, which is a direct harm to health (a). The event involves the use and malfunction of the AI system, and the harm is realized, not just potential. Google's partial removal of the feature and ongoing review are responses to this incident. Hence, this qualifies as an AI Incident due to direct harm caused by the AI system's outputs in a sensitive domain (health).

Investigation finds Google's artificial intelligence spreading "dangerous health disinformation"

2026-01-11
TIP.ba
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's generative AI health summaries) whose use has directly led to the dissemination of false health information, exposing users to potential harm (harm to health). The misinformation could cause patients to misinterpret their medical test results and avoid necessary medical care, which is a direct health risk. The harm is realized as users have been exposed to these misleading AI outputs. Although Google has taken some remedial action, the core issue of AI-generated health misinformation causing harm is present. This fits the definition of an AI Incident, as the AI system's use has directly led to harm to people's health.

"Gardijan": Gugl uklonio dio svojih AI sažetaka nakon što je zdravlje korisnika dovedeno u rizik

2026-01-11
vijesti.me
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to produce health-related summaries that contained false and misleading information. Experts described the misinformation as dangerous and disturbing, highlighting the potential for serious health harm if users rely on these AI-generated summaries. Google's removal of certain AI Overviews after the investigation confirms the AI system's role in causing or enabling harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm or risk of harm to individuals' health.

Experts Investigated: Google's Artificial Intelligence Is Spreading "Dangerous Health Disinformation"

2026-01-11
RTV SLON
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) that generated health-related summaries. These summaries contained incorrect information about liver function tests, which experts deemed potentially harmful and misleading. The AI's outputs directly led to a risk of harm to users' health by providing false reassurance or misinformation. Google removed some of these AI-generated summaries after the investigation, indicating the AI system's role in causing the harm. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use and the realized or imminent harm to people's health.

Google's AI Was Giving Wrong Medical Advice; Now the Summaries Are Being Removed

2026-01-12
bug.hr
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI summaries) was used to generate medical advice that was potentially inaccurate and misleading, which could directly cause harm to users' health if they acted on this information. This constitutes harm to health (a) under the AI Incident definition. The event involves the use of an AI system and its outputs leading to realized or imminent harm, not just a potential hazard. The company's removal of problematic AI summaries is a response to the incident but does not negate the fact that the AI system's outputs caused or could cause harm. Therefore, this event is best classified as an AI Incident.

Investigation Shows: Google's Artificial Intelligence Is Spreading "Dangerous Health Disinformation"

2026-01-12
oslobodjenje.ba
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's generative AI for health information summaries) whose use has directly led to the dissemination of false and misleading health information. This misinformation can cause real harm to individuals' health by leading them to misinterpret medical test results and potentially avoid necessary medical care. The harm is to the health of people, fitting the definition of an AI Incident. The company's removal of some summaries is a response but does not negate the fact that harm has occurred. Therefore, this event is classified as an AI Incident.

Google Disables Its Automatic Summaries for Certain Medical Queries

2026-01-12
Fredzone
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI-generated summaries) was used to provide medical information but failed to account for essential individual factors, leading to misleading and potentially harmful health advice. These flawed outputs put patients at direct risk of misunderstanding their medical test results, which can affect health outcomes. The company's decision to disable the AI summaries for some queries confirms recognition of the harm. Hence, the event meets the criteria for an AI Incident involving harm to health (a).

Google Withdraws Some of Its AI Summaries Deemed Dangerous: When Users' Health Is at Stake!

2026-01-11
LesNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI-generated health summaries) whose use has directly led to harm to users' health by providing false medical information. The harm is treated as realized because users were exposed to false statements about their health status, which fits the definition of an AI Incident involving injury or harm to health. The article details the removal of some problematic AI outputs but highlights ongoing risks, confirming that harm has occurred and that the AI system's role is pivotal.

Google Removes Its AI Summaries After Giving Dangerous Health Advice

2026-01-12
PhonAndroid
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system generating health information summaries that contained dangerous inaccuracies. These inaccuracies have directly or indirectly led to harm or risk of harm to patients' health, fulfilling the criteria for an AI Incident. The AI system's malfunction or erroneous outputs caused misinformation in a sensitive domain (health), which can lead to injury or harm to persons. The removal of some summaries is a response but does not negate the fact that harm occurred or was plausible. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

" Dangereux et alarmant " : Google supprime certains de ses résumés générés par IA après avoir mis en danger la santé des utilisateurs. Une étude alerte sur le manque de fiabilité de la fonction AI Overviews

2026-01-12
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's AI Overviews) that generates summaries from multiple web sources. The AI's outputs have directly caused harm by providing inaccurate health information that could mislead users about serious medical conditions, posing a risk to their health. The harm is realized rather than merely potential, as users have already been exposed to information that could steer them toward dangerous health decisions. Google's removal of some summaries is a mitigation step but does not negate the fact that harm has occurred. Therefore, this qualifies as an AI Incident due to direct harm to health caused by the AI system's outputs.

Google Hits the Brakes After a Wave of Criticism!

2026-01-12
LesNews
Why's our monitor labelling this an incident or hazard?
An AI system (the AI-generated medical summaries in Google's search engine) is explicitly involved. The event stems from the use of this AI system, which produces outputs that may mislead users by omitting critical clinical context. While no direct harm is reported, the potential for harm to users' health is credible and significant, given the importance of accurate medical information. Google's partial removal of these summaries and the ongoing debate indicate recognition of this risk. Since the harm is plausible but not confirmed as having occurred, the event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the risk posed by the AI system's outputs, not on responses or governance measures alone.

Health: AI Comes Close to Killing a User; Google Pulls It Urgently

2026-01-12
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI-generated summaries) was used to provide health information but produced incomplete answers that ignored important patient factors such as age, sex, and nationality. This could have caused users to misinterpret their medical test results, posing a risk of harm to their health. Although no direct injury is reported, the potential for harm was significant enough for Google to take corrective action. Therefore, this is labelled an AI Incident because the AI system's outputs created an indirect but credible risk of harm to users' health.

Google Hides "AI Overviews" for Certain Health-Related Search Queries Because They Were Providing Misleading Information

2026-01-12
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI-generated summaries) was used in the provision of health information and directly produced misleading and potentially harmful advice, which can lead to injury or harm to health (harm category a). The event involves the use of an AI system, and the harm has materialized or is ongoing, as evidenced by the misleading health advice served to users. Google's removal of some AI summaries is a mitigation measure but does not negate the fact that harm occurred or was likely occurring. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Google Recruiting Engineers to Check AI Answers in a Bid to Curb Incorrect Responses

2026-01-13
マイナビニュース
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google AI Overviews) that generates answers integrated into search results. The system's inaccurate or hallucinated answers have caused user confusion and could undermine trust, which is a form of harm to communities or users' right to reliable information. However, the article does not report a concrete incident of harm occurring but rather Google's initiative to hire engineers to improve answer quality and prevent further issues. This aligns with Complementary Information, as it provides an update on societal and technical responses to an AI-related problem, enhancing understanding of ongoing mitigation efforts rather than reporting a new incident or hazard.

Google Introduces New Personalized Ads in Gemini-Powered AI Mode, Enabling Exclusive Offers Delivered to Purchase-Minded Users at the Best Moment

2026-01-13
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Gemini, AI mode, AI agents) used for personalized advertising and commerce support. While the AI system influences user behavior by delivering personalized offers, there is no indication that this has caused any injury, rights violations, or other harms. The event focuses on the announcement and rollout of these AI-powered features, which is informative about AI ecosystem evolution and governance implications but does not describe an incident or hazard involving harm or plausible harm. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Google and Character.AI Agree to Settlement with Bereaved Family in Lawsuit Claiming a Chat AI Encouraged a Boy's Suicide

2026-01-14
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Character.AI's chatbot) whose use is alleged to have contributed to a teenager's suicide, a direct harm to a person's health. The involvement of Google through a contract with Character.AI and the resulting lawsuits and settlement confirm the AI system's role in the harm. This meets the definition of an AI Incident because the AI system's use directly or indirectly led to harm (injury or death). The settlement and legal actions are responses to this harm, but the primary event is the harm caused by the AI system's use.

[What?] Google Introduces Ads into Its Gemini-Powered Search Feature... Using AI Search Will Now Show Ads at the "Best Possible Moment"

2026-01-14
オレ的ゲーム速報@刃
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Gemini) in personalizing ads based on user search behavior, which qualifies as AI system involvement. However, there is no indication that this has caused or is causing any harm (such as privacy breaches, discrimination, or other rights violations). The feature is newly introduced and described in a promotional manner without reports of negative consequences or risks. Since no harm has occurred and no plausible future harm is clearly indicated, it does not qualify as an AI Incident or AI Hazard. Instead, it provides additional information about AI deployment and its societal impact, fitting the definition of Complementary Information.

Google Removes AI Overviews Content with False Health Claims

2026-01-12
vnexpress.net
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's AI Overviews powered by Gemini) that generated health-related content. The AI's use has directly led to harm by providing false medical information that users followed, resulting in potential or actual health damage. This fits the definition of an AI Incident as the AI system's use caused harm to health (a) and harm to communities (d). The event is not merely a potential risk but describes realized harm and user impact, so it is not an AI Hazard or Complementary Information. The focus is on the harm caused by the AI system's outputs, justifying classification as an AI Incident.

Google Removes Many AI-Based Health Summaries

2026-01-12
xaluannews.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Google's AI Overview) whose outputs directly led to misinformation about health conditions, which can cause harm to individuals' health by misleading them about the severity or treatment of their conditions. This constitutes harm to health (a), fulfilling the criteria for an AI Incident. The removal of the summaries is a response to the realized harm caused by the AI system's outputs.

Google Restricts Its AI Summary Feature for Health-Related Searches

2026-01-13
Thanh Niên
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that AI-generated summaries provided inaccurate medical advice, such as incorrect dietary recommendations for pancreatic cancer patients and misleading information about liver function tests and cancer screenings. These inaccuracies pose a direct risk of harm to users' health, which is a clear harm under the AI Incident definition. Google's response to remove these AI summaries from health-related queries further confirms the recognition of harm caused by the AI system's outputs. Hence, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use in disseminating misleading health information.

Google Gemini Just Changed This, and Users Should Take Note Immediately

2026-01-16
Báo Pháp Luật TP. Hồ Chí Minh
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google Gemini) that uses personal data to generate outputs influencing user experience. While no direct harm or violation has been reported, the system's capability to analyze sensitive personal data could plausibly lead to privacy harms or breaches if misused or if users are unaware of the implications. Since the article focuses on the new AI capabilities and the potential privacy trade-offs without reporting actual incidents of harm, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Attention, Frequent Web-History Deleters: Google Has Just Announced It Will "Dig Up" All User Information

2026-01-16
cafef
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Gemini chatbot) that uses personal data to generate outputs, fitting the definition of an AI system. The article discusses the AI's use and development, with potential privacy and security concerns, which could plausibly lead to harms such as violations of privacy rights or data misuse. However, since no actual harm or incident has been reported, and the article focuses on the planned deployment and features with user controls, this constitutes a plausible future risk rather than a realized incident. Therefore, the event is best classified as an AI Hazard.

Google AI Overviews Discontinued for Specific Medical Questions

2026-01-12
jang.com.pk
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI Overviews) was used to generate medical information summaries. These summaries were found to be potentially misleading and could harm users by providing inaccurate health information. The event involves the use, and subsequent removal, of an AI system's outputs because of these harms. Since the AI system's outputs have directly led to a risk of harm to users' health through misinformation, this qualifies as an AI Incident. The article describes a materialized risk and the company's mitigation response, not just a potential future risk or general AI news.

Google's AI Search Engine Revealed to Be Providing False Health Information

2026-01-12
Ummat News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's AI Overviews) that generates health-related summaries. The AI system's outputs have directly led to the dissemination of false and misleading medical information, which can cause real harm to users' health by influencing their medical decisions. The harm is materialized and significant, as medical experts warn about the potential life-threatening consequences of relying on these AI-generated summaries. Therefore, this qualifies as an AI Incident under the definition of harm to health caused directly or indirectly by the AI system's use.

Personal Intelligence Introduced in Google Gemini: AI Will Now Answer Using Users' Personal Data

2026-01-15
jasarat.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Google's Gemini) that accesses and processes personal data to generate personalized answers. While this raises significant privacy and data protection concerns, the article does not report any realized harm such as data breaches, misuse, or violations of rights occurring due to this feature. Instead, it announces the deployment and potential capabilities of the AI system. Therefore, it represents a plausible future risk scenario where misuse or harm could occur, but no direct or indirect harm has yet been reported. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Google Gemini Gets Its Biggest Update Yet

2026-01-15
Nawaiwaqt
Why's our monitor labelling this an incident or hazard?
While the article details a significant AI system update with enhanced capabilities to access and process personal data, it does not report any realized harm or incidents resulting from this update. There is no mention of injury, rights violations, disruption, or other harms caused by the AI's development or use. The update is presented as a new feature rollout with potential benefits, and no direct or indirect harm is indicated. Therefore, this event is best classified as Complementary Information, providing context and details about AI system development and deployment without reporting an incident or hazard.

UK Watchdog Wants Google To Give Publishers More AI Control

2026-01-28
Finimize
Why's our monitor labelling this an incident or hazard?
The article focuses on regulatory proposals and demands for transparency and opt-out mechanisms related to AI systems used by Google. There is no indication of realized harm or incidents caused by AI systems, nor is there a direct or indirect harm currently occurring. Instead, the event is about governance responses to potential or ongoing issues with AI use, making it complementary information that enhances understanding of AI ecosystem developments and responses.

Google AI Overviews cite YouTube more than any medical site for health queries, study suggests

2026-01-24
The Guardian
Why's our monitor labelling this an incident or hazard?
The AI system involved is Google's AI Overviews, which uses generative AI to answer health queries. The study shows that the system disproportionately cites YouTube, a platform with mixed-quality content, including non-medical sources, leading to misinformation. This misinformation has already caused harm, as evidenced by a case where incorrect liver test information was provided, potentially endangering patients. The AI system's design choices and reliance on popularity rather than medical reliability are structural issues contributing to this harm. Hence, this qualifies as an AI Incident due to indirect harm to health caused by the AI system's outputs.

How the 'confident authority' of Google AI Overviews is putting public health at risk

2026-01-24
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Google's AI Overviews) that generates health information summaries. The system's outputs have been shown to contain false and misleading medical advice, which experts warn could cause serious harm or death to patients relying on this information. The harm is realized or ongoing, not merely potential, as patients may act on incorrect advice. This constitutes injury or harm to health (criterion a) caused directly by the AI system's use. The company's partial removal of some AI Overviews and acknowledgment of errors does not negate the fact that harm has occurred. Hence, this event qualifies as an AI Incident rather than a hazard or complementary information.

Code Red: Google's AI Overviews Tap YouTube as Top Source for Health Advice, Alarming Medical and Tech Experts

2026-01-25
WebProNews
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly involved in generating health advice by synthesizing information from various sources, including YouTube videos. The system's malfunction or design choices have directly led to the spread of misleading and potentially harmful medical information, which constitutes harm to communities and individuals seeking health advice. The documented incidents of incorrect answers and the systemic preference for less reliable sources indicate realized harm rather than just potential risk. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm through misinformation in a critical domain (health).

Google's AI Overviews Cite YouTube Over Medical Sites for Health Queries

2026-01-25
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly involved in generating health information summaries. Its use has directly led to the dissemination of unreliable and potentially dangerous health advice, as documented by prior investigations and examples of harmful guidance. The system's structural bias towards citing popular but non-authoritative sources like YouTube videos increases the risk of harm to users' health, fulfilling the criteria for injury or harm to persons. Therefore, this event meets the definition of an AI Incident due to the realized harm stemming from the AI system's use and design.

Google AI Overviews Medical Citations: Why YouTube Shows Up So Often

2026-01-25
CTN News l Chiang Rai Times
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Google's AI Overviews) that generates medical summaries and cites sources, which can influence user behavior and health decisions. While it identifies plausible risks of harm from overreliance on AI summaries lacking context, it does not document any realized harm or a specific event where the AI system directly or indirectly caused injury, rights violations, or other harms. The discussion is primarily about potential risks, citation patterns, and safe usage guidelines, making it a case of plausible future harm rather than an actual incident. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information, as it focuses on the credible risk that AI-generated health summaries could lead to harm if misused or misunderstood.

Google AI Overviews Cite YouTube More Than Medical Websites

2026-01-26
CTN News l Chiang Rai Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google AI Overviews) generating health-related summaries that influence user information consumption. While no direct harm is reported, the article clearly outlines plausible future harms stemming from reliance on these AI summaries, such as incorrect self-diagnosis or medication errors. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm in the health domain. It is not an AI Incident because no actual harm has been documented in the article. It is not Complementary Information because the article is not primarily about responses or governance but about the risk analysis of AI citation behavior. It is not Unrelated because the AI system and its potential impact on health information quality are central to the discussion.

Google's AI Overviews Face Scrutiny Over Health Information Accuracy

2026-01-24
Head Topics
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly mentioned and is responsible for generating health information summaries. The inaccuracies in these summaries have directly led to harm or risk of harm to users' health, fulfilling the criteria for an AI Incident under harm to health (a). The article provides concrete examples of harmful misinformation and expert concerns about the system's reliability and impact on patient behavior. This is not merely a potential risk but an ongoing issue with realized harm, so it is not an AI Hazard or Complementary Information. It is not unrelated because the AI system is central to the event and the harms described.

Google AI Overviews Cite YouTube More Than Medical Sites for Health Queries

2026-01-24
News Directory 3
Why's our monitor labelling this an incident or hazard?
The AI system involved is Google's generative AI used in AI Overviews to answer health queries. The study and reports indicate that the AI system's outputs have directly led to misinformation about health conditions, which can cause harm to individuals' health (harm category a) and potentially violate rights to accurate information (category c). The presence of misleading or false health information generated or cited by the AI system, and the documented cases of harm or risk, meet the criteria for an AI Incident. The ongoing legal challenges and Google's responses are complementary information but do not negate the incident classification.

YouTube Leads Google AI Overviews Citations for Health Queries

2026-01-26
eWEEK
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's AI Overviews) that generates health information summaries. The AI's use has directly led to harm by promoting less reliable sources (YouTube videos) over authoritative medical content, potentially causing users to disregard professional medical advice. This endangers users' health and can be considered harm to people (a). Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's outputs affecting health outcomes.

Google Removes Some AI Health Summaries After Reports Of Dangerous Errors

2026-01-26
The News Chronicle
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly mentioned and is used to generate health-related summaries. The event reports that the AI system provided inaccurate medical information that could lead to harm to users' health, fulfilling the harm criterion (a). The harm is indirect but plausible and significant, as misleading medical information can cause delayed or inappropriate medical care. The event describes actual harm occurring or likely occurring, not just potential harm, and the AI system's malfunction or misuse is central to the issue. Hence, this is an AI Incident rather than a hazard or complementary information.

Google AI Overviews Prefer YouTube Over Medical Sites For Health Queries, Study Finds

2026-01-26
NDTV
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly involved as it generates health-related summaries and citations. The study shows that the AI system disproportionately relies on less reliable sources (YouTube) rather than authoritative medical sites, which can lead to misinformation and harm to users' health. The article references prior findings that misleading health information from these AI Overviews has affected people and posed risks to their lives, indicating realized harm. Therefore, this event meets the criteria for an AI Incident due to indirect harm to health caused by the AI system's outputs.

Google's AI Health Tool Said To Be Giving Answers From YouTube Videos: Is That A Worry?

2026-01-26
News18
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly involved in providing health information. The study shows that the AI's outputs are based on unreliable sources (YouTube videos), which can directly or indirectly cause harm to users relying on this information for health decisions. This constitutes harm to health (a) under the AI Incident definition. Since harm is occurring or highly likely due to the AI system's use, this qualifies as an AI Incident rather than a hazard or complementary information.

Where does Google's AI get its health advice? A study points to YouTube

2026-01-26
Fast Company
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly involved as it generates health-related summaries for users. The study shows that the AI system's outputs rely heavily on less reliable sources (YouTube videos) rather than trusted medical sites, which can mislead users and cause harm to their health. This constitutes indirect harm to people's health due to the AI system's use. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm (potentially dangerous health guidance) to a large population.

If you use Google AI for symptoms, know it cites YouTube a lot

2026-01-26
Digital Trends
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Google's AI for health symptom overviews) and its behavior in citing sources, which can influence users' health decisions. However, the article does not report any direct or indirect harm resulting from this AI system's outputs, nor does it describe a specific incident of injury, rights violation, or other harm. Instead, it provides an analysis and cautionary advice about the AI system's current limitations and the reliability of its cited sources. This fits the definition of Complementary Information, as it offers contextual details and guidance related to an AI system's use and its implications without reporting a concrete AI Incident or AI Hazard.

Google's AI health summaries cite YouTube more than any medical source, study finds

2026-01-26
TechSpot
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI Overviews) is explicitly involved, generating health-related summaries. The system's reliance on non-authoritative sources like YouTube, which includes non-expert content, has resulted in misleading and dangerous medical information being presented to users, constituting harm to people's health. This meets the criteria for an AI Incident because the AI system's use has directly led to injury or harm to the health of persons (category a). The event also covers Google's partial mitigation response, but the primary issue remains the AI system causing harm through its outputs.

Study Shows Google's Health-Focused AI Overviews Cite YouTube Over Any Medical Site

2026-01-27
ExtremeTech
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly involved as it generates health-related summaries. The study shows that these summaries often cite unreliable sources like YouTube rather than authoritative medical sites, resulting in misleading and potentially harmful advice. This constitutes indirect harm to health (a), as users may rely on inaccurate information for medical decisions. The harm is realized, not just potential, given the investigation and removal of AI Overviews from some health queries due to these risks. Therefore, this qualifies as an AI Incident.

CMA proposes package of measures to improve Google search services in UK

2026-01-28
GOV.UK
Why's our monitor labelling this an incident or hazard?
The article focuses on the CMA's consultation on proposed conduct requirements to regulate Google's search services and AI features. It does not describe any realized harm or incident caused by AI systems, nor does it report a specific event where AI use led to injury, rights violations, or other harms. Instead, it details a policy and regulatory initiative aimed at preventing or mitigating potential harms and ensuring fair competition and transparency. Therefore, this is Complementary Information providing context on governance and societal responses to AI-related issues in digital markets.

CMA Unveils Plan to Enhance Google Search in UK

2026-01-28
Mirage News
Why's our monitor labelling this an incident or hazard?
The article focuses on regulatory and governance responses to the use of AI in Google's search services, particularly regarding AI Overviews and AI Mode. It does not describe any realized harm or incident caused by AI systems, nor does it report a specific event where AI use led to injury, rights violations, or other harms. Instead, it outlines proposed conduct requirements aimed at preventing potential harms and promoting fairness and transparency. Therefore, this is Complementary Information about governance and societal responses to AI-related issues in digital markets.

AI Answers Demand New Rules: Why Google SEO Fails ChatGPT Citations

2026-01-28
WebProNews
Why's our monitor labelling this an incident or hazard?
The content centers on understanding AI system behavior (large language models) and their impact on SEO practices, highlighting technical pitfalls and strategic responses. However, it does not report any incident of harm, violation of rights, disruption, or plausible risk of such harm caused by AI systems. The article is primarily informative and analytical, offering complementary insights into AI's influence on search and content strategies without describing an AI Incident or AI Hazard. Therefore, it fits the category of Complementary Information as it enhances understanding of AI systems and their ecosystem without reporting new harm or risk.

The YouTube doctor: Why your AI health search is a prescription for misinformation

2026-01-29
Daily Nation
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overview) whose use in medical information retrieval has directly led to the spread of misinformation, a form of harm to health and a violation of the right to accurate healthcare information. The AI system prioritizes less authoritative sources, causing users to potentially rely on misleading content, which can result in injury or harm to health. The article provides expert opinions and analysis confirming these harms are occurring, not just potential. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

German Study: Google's "AI Overviews" Cites YouTube More Than Medical Websites

2026-01-25
akhbarona.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Google's AI Overviews) that generates health information summaries. The study shows the AI system relies heavily on YouTube content, which may not be medically reliable, raising concerns about misinformation. While no direct harm is reported, the potential for misleading health information to cause harm is credible and plausible. Hence, the event fits the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because it directly concerns AI system outputs and their implications.

"أوفرفيوز" يستقي معلومات صحية من "يوتيوب" أكثر من المصادر الطبية

2026-01-25
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The AI system 'AI Overviews' is explicitly mentioned as providing health information summaries. The study and previous investigations show that the AI system's outputs have included misleading or incorrect health information, which can harm users' health (harm category a). The AI system's use of unreliable sources like YouTube, which is not a medical publisher, contributes to this harm. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to people's health through misinformation.

The Guardian: Google's AI Relies on YouTube for Its Medical Information

2026-01-25
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI-powered health search summaries) whose use in providing medical information is linked to potential harm due to reliance on non-authoritative sources like YouTube. The study and investigation indicate that this AI system's outputs could misinform users about health conditions, posing indirect harm to health. Although no specific incident of injury is detailed, the documented risk and exposure to misinformation constitute an AI Incident under the framework, as harm to health is occurring or highly plausible and linked to the AI system's use.

The Guardian: Google's AI Relies on YouTube for Its Medical Information

2026-01-25
صوت الأمة
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI-generated medical summaries) whose use has directly led to harm in the form of misinformation about health, which can cause injury or harm to individuals relying on this information. The AI system's development and use in this context have resulted in an AI Incident because the harm (health misinformation) is occurring and linked to the AI's outputs. The article also includes Google's response but the primary focus is on the harm caused by the AI system's reliance on unreliable sources, fulfilling the criteria for an AI Incident.

The Guardian: Google's AI Bases Its Medical Information on YouTube

2026-01-25
الإمارات نيوز
Why's our monitor labelling this an incident or hazard?
An AI system (Google's AI-generated medical summaries) is explicitly involved in providing health information. The study shows that these AI summaries often cite YouTube, a non-medical platform, as a source, which may lead to misinformation. The article mentions concerns about potential harm to users from misleading health advice, indicating a risk of injury or harm to health (harm category a). Although no specific incident of harm is reported, the risk of harm due to reliance on potentially unreliable AI-generated summaries is clearly articulated. Therefore, this event qualifies as an AI Hazard because the AI system's use could plausibly lead to health harm through misinformation, but no direct harm is confirmed yet.

Google's AI Overview Refers Users to... YouTube for Health Topics

2026-01-25
NEWS 24/7
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI Overviews) that generates health-related summaries using generative AI. The AI system's outputs have directly led to harm by providing misleading or incorrect medical information, which can cause injury or harm to people's health. The article documents empirical evidence of this harm and the structural nature of the problem, not isolated incidents. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly led to harm to persons' health.

YouTube: Study Reveals It Dominates Google's "Medical" Sources; Experts Are Worried

2026-01-25
newsbomb.gr
Why's our monitor labelling this an incident or hazard?
The AI Overviews are AI systems that generate health-related summaries for billions of users. The study shows these summaries disproportionately reference YouTube, a platform with mixed-quality content, including non-expert videos. This reliance on potentially unreliable sources can misinform users, posing a risk to public health. Although no specific harm is reported as having occurred, the plausible risk of harm due to misinformation and the structural nature of the AI system's source selection qualifies this as an AI Hazard rather than an Incident. The event focuses on the potential for harm embedded in the AI system's operation rather than a realized harm event.

Google AI Overviews and Health: The Top Source Is YouTube, Not Doctors

2026-01-24
LiFO
Why's our monitor labelling this an incident or hazard?
The AI Overviews are an AI system that generates health information summaries. The study shows that the system disproportionately references YouTube, a less controlled source, over authoritative medical sources, which can lead to misinformation that harms users' health decisions. The article references prior incidents of harmful misinformation from the same AI system. This constitutes indirect harm to people's health due to the AI system's outputs. Hence, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to injury or harm to people's health (category a) through the dissemination of potentially misleading health information.

Google: AI Overviews Points to YouTube More Often Than to Reliable Medical Websites

2026-01-25
ΕΛΕΥΘΕΡΙΑ Online
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly mentioned as generating health-related summaries. The summaries have directly led to harm by providing misleading or false medical information, which can cause injury or harm to health (harm category a). The study and journalistic investigation confirm that the AI system's outputs are not reliably referencing authoritative medical sources, increasing the risk of harm. This meets the criteria for an AI Incident because the AI system's use has directly led to harm through misinformation affecting health outcomes.

Google: AI Overviews Points to YouTube More Often Than to Reliable Medical Websites

2026-01-25
ertnews.gr
Why's our monitor labelling this an incident or hazard?
The AI Overviews feature is an AI system generating health-related summaries and recommendations. The study and journalistic investigation show that the AI system's outputs have included misleading or incorrect medical information, which can cause harm to individuals relying on this information for health decisions. This constitutes indirect harm to health caused by the AI system's use and design. The presence of actual misleading information and the potential for health damage meets the criteria for an AI Incident rather than a mere hazard or complementary information. The event is not unrelated because it directly involves an AI system and its harmful outputs.

Google / AI Overviews Points to YouTube More Often Than to Reliable Medical Websites

2026-01-26
TVXS - TV Χωρίς Σύνορα
Why's our monitor labelling this an incident or hazard?
The AI Overviews system is an AI system generating health information summaries. The study and prior reports show that the AI system's outputs have led to misinformation and potentially harmful health outcomes, fulfilling the criteria for harm to health (a). The AI system's design and use have directly contributed to this harm by prioritizing popular but less reliable sources like YouTube over authoritative medical sites. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's outputs.

Google / AI Overviews Points to YouTube More Often Than to Reliable Medical Websites

2026-01-26
news.makedonias.gr
Why's our monitor labelling this an incident or hazard?
The AI system (Google's AI Overviews) is explicitly involved in generating health-related summaries and directing users to sources. The research points out a problematic pattern that could plausibly lead to harm (misinformation or poor health decisions) due to reliance on less reliable sources like YouTube. Since no actual harm or incident is described, but a credible risk is identified, the event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.