Study Warns of Potential Memory Weakening from ChatGPT Use in Education

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A study led by researchers at the Federal University of Rio de Janeiro found that university students who used ChatGPT for assignments retained less information long-term compared to those using traditional methods. The findings suggest overreliance on AI tools may weaken memory and deep learning, highlighting potential cognitive risks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article involves an AI system (ChatGPT) and discusses its use and potential misuse leading to cognitive harm (weakened memory retention). Although the harm is not yet realized as an incident, the study warns of plausible future harm from overreliance on AI for learning tasks. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm to individuals' cognitive health. There is no indication of actual injury or violation occurring yet, so it is not an AI Incident. The article is not merely complementary information since it focuses on the potential harm from AI use rather than updates or responses to past incidents.[AI generated]
AI principles
Human wellbeing
Democracy & human autonomy

Industries
Education and training

Affected stakeholders
Consumers

Harm types
Psychological

Severity
AI hazard

AI system task
Interaction support/chatbots
Content generation


Articles about this incident or hazard

The Digital Knowledge Crutch | Al Khaleej

2026-04-07
Al Khaleej
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) and its use by students, but the content focuses on research findings about cognitive effects rather than any realized or potential harm. There is no indication of injury, rights violations, or other harms caused or plausibly caused by the AI system. The study's results are observational and do not describe an AI Incident or AI Hazard. Therefore, this is Complementary Information providing context and understanding about AI's societal impact.
Study Warns: AI May Weaken Memory If Misused

2026-04-06
Bokra
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) and discusses its use and potential misuse leading to cognitive harm (weakened memory retention). Although the harm is not yet realized as an incident, the study warns of plausible future harm from overreliance on AI for learning tasks. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm to individuals' cognitive health. There is no indication of actual injury or violation occurring yet, so it is not an AI Incident. The article is not merely complementary information since it focuses on the potential harm from AI use rather than updates or responses to past incidents.
Is AI Making Us Less Intelligent? Study Warns of a "Cognitive Crutch" That Weakens Memory

2026-04-06
Independent Press Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its use by students. The study shows that while AI use speeds up task completion, it may plausibly lead to cognitive harm by weakening memory and deep understanding over time. No actual injury, rights violation, or other harm has occurred yet; the harm is potential and plausible based on the study's findings. This aligns with the definition of an AI Hazard, where AI use could plausibly lead to harm but no incident has yet occurred. The article does not describe a response, governance action, or update to a prior incident, so it is not Complementary Information. It is not unrelated as it clearly involves AI and its effects.
Study: ChatGPT May Weaken Memory

2026-04-06
elsiyasa.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use in an educational context. The study suggests a plausible risk that reliance on ChatGPT could lead to diminished memory and learning abilities, which is a form of cognitive harm. However, this harm is not reported as having already occurred on a harmful scale but is a potential negative effect identified by research. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm (reduced cognitive function) in the future if not managed properly. It is not an AI Incident since no direct or indirect harm has been documented as occurring yet, nor is it Complementary Information or Unrelated.