Health Experts Warn of Risks in AI-Driven Self-Diagnosis in India

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Indian health experts, including Dr. Jitender Nagpal, warn that increasing use of AI-generative tools for self-diagnosis and self-treatment poses significant safety and ethical risks. They stress that AI should support, not replace, clinical judgment, cautioning against overreliance and highlighting concerns about patient safety and data privacy.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (AI-driven self-diagnosis tools) and discusses the potential risks and harms that could plausibly arise from their misuse or overreliance, such as patient safety risks and privacy concerns. Since no actual harm or incident is reported, but credible concerns about future harm are raised, this fits the definition of an AI Hazard. The article serves as a cautionary advisory highlighting plausible future harms rather than describing a realized AI Incident or a complementary information update.[AI generated]
AI principles
Safety; Privacy & data governance

Industries
Healthcare, drugs, and biotechnology

Affected stakeholders
Consumers

Harm types
Physical (injury); Human or fundamental rights

Severity
AI hazard

Business function
Citizen/customer service

AI system task
Content generation; Interaction support/chatbots


Articles about this incident or hazard

Health Expert Cautions Against AI Self-Diagnosis

2026-02-27
Rediff.com India Ltd.
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (AI-driven self-diagnosis tools) and discusses the potential risks and harms that could plausibly arise from their misuse or overreliance, such as patient safety risks and privacy concerns. Since no actual harm or incident is reported, but credible concerns about future harm are raised, this fits the definition of an AI Hazard. The article serves as a cautionary advisory highlighting plausible future harms rather than describing a realized AI Incident or a complementary information update.
Using AI-generative tools for self treatment matter of concern, says health expert

2026-02-27
ThePrint
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in healthcare and discusses potential risks, especially the possibility of harm from self-treatment based on AI outputs. However, it does not describe any realized harm or a specific event in which AI caused injury, rights violations, or other harms; the concerns are about plausible future harm and the need for caution, which aligns with the definition of an AI Hazard. That said, because no specific event or circumstance is described as having led to harm or near harm, and the article mainly offers expert commentary and warnings, it fits best as Complementary Information: it enhances understanding of AI risks and responsible use in healthcare without reporting a concrete incident or hazard event.
Using AI tools for self treatment matter of concern

2026-02-28
The Shillong Times
Why's our monitor labelling this an incident or hazard?
The article centers on expert concerns about the potential dangers of AI tools being used for self-diagnosis and treatment, which could plausibly lead to harm if patients rely on AI outputs without proper clinical oversight. This fits the definition of an AI Hazard, as it describes a credible risk of harm stemming from the use of AI systems in healthcare, but no actual harm or incident is reported. Therefore, the event is best classified as an AI Hazard.
AI in Healthcare: A Double-Edged Sword | Health

2026-02-27
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (generative AI tools in healthcare) and discusses their use and potential misuse. However, it does not describe any realized harm or incident resulting from AI use, only plausible future risks such as unsafe self-diagnosis and data privacy concerns. Therefore, it fits the definition of an AI Hazard, as it highlights circumstances where AI use could plausibly lead to harm but no harm has yet occurred.
Using AI-generative tools for self treatment matter of concern, says health expert

2026-02-27
NewsDrum
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (AI-generative tools) and their use in healthcare, particularly for self-diagnosis and self-treatment. While it highlights significant concerns about potential misuse and risks that could lead to harm, it does not describe any realized harm or specific incident caused by AI. The concerns raised fit the definition of an AI Hazard, as they describe circumstances where AI use could plausibly lead to harm if not properly managed. There is no indication of a current AI Incident or complementary information about responses to past incidents. Hence, the classification as AI Hazard is appropriate.