NHRC Probes AI Education Project Over Children's Data Privacy Risks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

India's National Human Rights Commission has issued notices to government bodies after complaints about privacy risks in an AI-powered education initiative by US-based Anthropic and NGO Pratham. The AI system processes children's academic data, raising concerns about potential violations of privacy and data protection laws under India's DPDP Act.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to collect and process data about children, which raises privacy and data protection concerns. However, the article does not report any actual harm or data breach; it focuses on potential risks and the inquiries initiated to prevent misuse. Because the event centers on plausible risks and the governance and regulatory response to them rather than on realized harm, it is best classified as an AI Hazard.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Education and training

Affected stakeholders
Children

Harm types
Human or fundamental rights

Severity
AI hazard

AI system task
Organisation/recommenders
Forecasting/prediction


Articles about this incident or hazard

NHRC asks ministries, states to inquire into 'risks' to children's privacy in AI tie-up

2026-02-27
ThePrint
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to collect and process data about children, which raises privacy and data protection concerns. However, the article does not report any actual harm or data breach; it focuses on potential risks and the inquiries initiated to prevent misuse. Because the event centers on plausible risks and the governance and regulatory response to them rather than on realized harm, it is best classified as an AI Hazard.
NHRC issues notice over alleged risks to kids' data privacy in US AI company-NGO Pratham collaboration

2026-02-27
Social News XYZ
Why's our monitor labelling this an incident or hazard?
An AI system (the Anytime Testing Machine) is explicitly mentioned as processing children's academic data, which involves AI-based data processing. The complaint alleges potential violations of privacy rights and data protection laws, which are human rights concerns. Although no actual harm is reported yet, the NHRC's intervention indicates credible concerns that the AI system's use could lead to violations of rights and data breaches affecting children. Therefore, this event represents an AI Hazard, as it plausibly could lead to an AI Incident involving violations of human rights and data privacy harms if safeguards are inadequate or ignored. The event is not merely general AI news or a product launch, nor is it a report of realized harm, so it is not an AI Incident or Complementary Information.
NHRC issues notice over alleged risks to kids' data privacy in US AI company-NGO Pratham collaboration

2026-02-27
International Business Times, India Edition
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the Anytime Testing Machine) used in education that processes children's personal data. The NHRC's notice is based on concerns that the AI system's use may have led or could lead to violations of human rights related to privacy and data protection of minors, which falls under harm category (c) - violations of human rights or breach of legal obligations protecting fundamental rights. Although no actual harm is reported yet, the concerns and regulatory scrutiny indicate plausible risks of harm from the AI system's use. Therefore, this event qualifies as an AI Hazard because it describes a credible risk of harm due to the AI system's use, prompting official investigation and calls for safeguards. It is not an AI Incident since no realized harm is reported, nor is it merely complementary information or unrelated news.
Safeguarding Minors: AI Collaboration Sparks Privacy Concerns | Education

2026-02-27
Devdiscourse
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved (the 'Anytime Testing Machine', which uses AI to process children's academic data). The event concerns this system's use and its potential violation of children's data privacy rights, which falls under violations of human rights and legal obligations. Although harm has not been confirmed, the investigation and notices indicate that the system's use has prompted concerns about breaches of privacy rights. Because the event focuses on realized concerns and official actions in response to potential violations, it qualifies as an AI Incident rather than a mere hazard or complementary information.
NHRC Notice on Child Data Privacy in AI Education Initiative

2026-02-27
newKerala.com
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned and is involved in processing sensitive children's data, raising privacy and data protection concerns, which relate to human rights. However, the article describes a complaint and regulatory scrutiny rather than an actual harm or breach occurring. The NHRC's actions are governance responses to potential risks. Therefore, this event fits the definition of Complementary Information, as it provides updates on societal and governance responses to AI-related privacy concerns without reporting a realized AI Incident or an imminent AI Hazard.
NHRC asks ministries, states to inquire into 'risks' to children's privacy in AI tie-up

2026-02-27
NewsDrum
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to process children's handwritten responses and academic data, and raises concerns about privacy violations and data breaches affecting children, implicating rights and data protection laws. However, the article does not report that any harm or data breach has actually occurred; it focuses on the potential risks and the regulatory inquiry into them. This plausible but unrealized risk of harm from the AI system's use qualifies the event as an AI Hazard rather than an AI Incident: the NHRC's actions and the complaint highlight potential future harm rather than realized harm.