Studies Link ChatGPT Use to Reduced Brain Activity and Cognitive Skills

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple studies led by MIT's Nataliya Kosmyna found that students using AI tools like ChatGPT showed up to 55% less brain activity in areas linked to creativity and information processing, produced essays that closely resembled one another, and struggled to recall what they had written. These findings raise concerns about AI's negative impact on human cognition.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (ChatGPT and LLMs) and their impact on human brain activity and cognitive skills. The study shows a correlation between AI reliance and diminished critical thinking, which is a form of potential harm to individuals' cognitive health and educational development. However, the article does not report any realized injury, rights violation, or other direct harm caused by the AI system's malfunction or misuse. The harm is potential and plausible, related to future educational and cognitive risks. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their effects.[AI generated]
AI principles
Human wellbeing

Industries
Education and training

Affected stakeholders
Consumers

Harm types
Psychological

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard

Heavy AI Use Could Be Making You Stupider: MIT Research

2026-04-21
NDTV
When Machines Think For Us: AI Is Making Life Easier, But Is It Making Us Dumber?

2026-04-21
News18
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or incident caused by AI systems but rather discusses plausible future cognitive and societal harms due to overreliance on AI. There is no description of a concrete AI Incident or AI Hazard event, nor does it provide updates or responses to prior incidents. Therefore, it fits best as Complementary Information, providing context and insight into the broader implications of AI integration in daily life without reporting a specific incident or hazard.
Concern Grows That AI Is Damaging Users' Cognitive Abilities

2026-04-21
Futurism
Why's our monitor labelling this an incident or hazard?
While the article involves AI systems (ChatGPT) and discusses their use and potential negative effects on cognitive functions, it does not describe a specific event where harm has occurred or been directly caused by the AI system. The harms discussed are potential or emerging concerns based on research and anecdotal reports, not confirmed incidents. Therefore, this qualifies as an AI Hazard because it plausibly could lead to harm (cognitive decline) but no direct or indirect harm has been established yet.
Is AI making us dumber? New study finds shocking brain impact

2026-04-21
ECR
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT and similar) whose use is linked to cognitive decline effects in users, as shown by scientific research. While no direct injury or violation has been reported, the plausible future harm to human cognitive abilities due to overreliance on AI tools constitutes a credible risk. The article focuses on the potential negative consequences of AI use rather than an actual incident of harm, making it an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the discussion and their impact is the main subject.
ChatGPT's Hidden Cost: How AI Tools Are Quietly Eroding Human Smarts

2026-04-21
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI systems (ChatGPT and similar generative AI tools) and multiple studies showing that their use has caused measurable decreases in brain activity related to creativity and information processing, as well as declines in manual skills and memory. These are direct harms to health (cognitive health) and to communities (skill erosion affecting education and workplaces). The harms are realized and documented, not merely potential. The AI systems' use is the causal factor in these harms, fulfilling the criteria for an AI Incident. The article does not merely discuss potential risks or responses but documents actual negative outcomes linked to AI use.
AI use may reduce brain activity and memory, researchers warn

2026-04-21
storyboard18.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) and its impact on human cognition, which is a potential harm area. However, the harm is not realized or confirmed; it is a research observation and a warning about possible future cognitive risks. There is no direct or indirect evidence of actual injury, rights violation, or other harms caused by AI use at this stage. Therefore, this qualifies as an AI Hazard because it plausibly could lead to harm in the future if overreliance on AI persists, but no incident has occurred yet.