Study Finds AI Chatbot Errors Become Users' False Memories


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A research team at National Taiwan University found that 77% of the incorrect information provided by AI chatbots is retained in users' memories, even when warnings are given. This demonstrates that AI-generated misinformation can implant false knowledge in users, posing cognitive risks that warnings alone cannot prevent.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes a generative AI system (a conversational chatbot) providing erroneous information that becomes false memory in about 77% of cases. This is a direct consequence of the AI system's use, leading to harm in the form of misinformation internalized by users, which can affect their decisions and beliefs. The harm is realized and documented by research, not merely potential, and warnings do not mitigate it, confirming the AI system's role in causing this cognitive harm. Hence, this is an AI Incident involving indirect harm to users' cognitive health and informational integrity.[AI generated]
AI principles
Accountability
Robustness & digital security
Safety
Transparency & explainability
Human wellbeing

Industries
Media, social platforms, and marketing
Education and training
General or personal use

Affected stakeholders
Consumers

Harm types
Psychological

Severity
AI incident

AI system task
Interaction support/chatbots
Content generation


Articles about this incident or hazard


AI Gives Erroneous Information; Nearly 80% Becomes Users' Memory

2023-11-22
Yahoo News (Taiwan)

Study: 77% of Chatbots' Erroneous Terms Become Users' Memory

2023-11-23
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (a conversational AI chatbot) that produces incorrect information which users then remember as true, representing a form of harm to individuals' cognition and potentially to communities through misinformation. The harm is realized and documented through empirical research, showing direct impact on users' memory and understanding. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm in the form of misinformation internalized by users, which can affect their knowledge and decision-making.

NTU Study: 77% of Erroneous Information Provided by AI Stays in Memory

2023-11-22
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system providing erroneous information that can mislead human memory, which is a form of harm to individuals' cognitive integrity and decision-making. However, the article describes research findings and warnings about potential harm rather than a specific realized harm event. Therefore, it fits the definition of Complementary Information, as it provides important context and understanding about AI risks and informs future governance and product development, without reporting a direct or indirect AI Incident or an immediate AI Hazard.

NTU Study: Over 70% of Erroneous Information Provided by Chatbots Is Retained in Users' Memory

2023-11-22
UDN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots) providing incorrect information that users internalize, which can indirectly lead to harm in decision-making and misinformation effects on individuals and communities. However, the article focuses on research findings and the potential for harm rather than a concrete incident of harm occurring. Therefore, it fits the definition of Complementary Information as it provides supporting data and context about AI's societal impact and informs future risk assessment and governance, without reporting a specific AI Incident or AI Hazard.

Study: 77% of Chatbots' Erroneous Terms Become Users' Memory

2023-11-23
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (a conversational AI chatbot) that produces erroneous information which users then remember, potentially leading to misinformation harm. However, the article describes research findings about this phenomenon and discusses the implications and recommendations rather than reporting a specific incident where harm has occurred. There is no direct evidence of realized harm or incident, but the study points to a plausible risk of harm from AI-generated misinformation influencing users' beliefs. Therefore, this qualifies as Complementary Information, providing important context and understanding about AI risks and user impact, but not describing a concrete AI Incident or an immediate AI Hazard.

Study: Over 70% of AI's Erroneous Terms Become Human Memory

2023-11-23
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (a conversational AI chatbot) whose erroneous outputs have been shown to cause harm by embedding false information in human memory, a form of cognitive harm to individuals. This fits the definition of an AI Incident because the AI system's use has indirectly led to harm to people (harm to the health of groups of people, in the form of cognitive and psychological impact). The article describes realized harm (false memories formed) rather than just potential harm. Therefore, this qualifies as an AI Incident.

Study: 77% of Chatbots' Erroneous Information Becomes Users' Memory; People Should Be Appropriately Skeptical of AI Content

2023-11-23
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a conversational AI chatbot) whose use leads to the indirect harm of users internalizing false information, which can be considered harm to individuals' cognitive integrity and potentially to communities if the misinformation spreads. Since the harm (the retention of incorrect information in memory) has already occurred among the study participants, this qualifies as an AI Incident. The article does not describe a future risk alone, but actual realized harm in the study context. Therefore, it is classified as an AI Incident.

77% of Erroneous Information Provided in AI Q&A Becomes Users' Memory

2023-11-23
Rti (Radio Taiwan International)
Why's our monitor labelling this an incident or hazard?
The article describes a study involving an AI chatbot (an AI system) that outputs incorrect information, 77% of which users remember as true despite warnings. This misinformation can cause harm by misleading users, a form of harm to individuals' and communities' cognitive health. The AI system's use directly leads to this harm, fulfilling the criteria for an AI Incident. The article does not merely warn of potential harm but documents realized effects on users' memory; thus it is an incident rather than a hazard or complementary information.

Beware of AI Misleading You! Study: 77% of Erroneous Terms Retained in Users' Memory

2023-11-22
TTV News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (chatbots) that generate erroneous content, which users retain in memory and which can negatively influence their decisions. This constitutes harm to communities and individuals through misinformation, a recognized form of harm under the AI Incident definition. The harm is realized, as users have already been influenced by the erroneous AI outputs. Therefore, this qualifies as an AI Incident due to the direct link between AI-generated misinformation and harm to users' cognition and decision-making.

Study: Over 70% of AI's Erroneous Terms Become Human Memory

2023-11-23
The Epoch Times - Taiwan
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI chatbots (AI systems) that provide incorrect information which then becomes part of human memory, a form of harm to communities and to individuals' cognitive health. Although the harm is indirect and cognitive rather than physical, it fits the definition of an AI Incident because the AI system's use has directly led to this harm. The article does not describe a potential future harm but a realized effect documented by research, so it is not an AI Hazard. It is not merely complementary information because its main focus is the harm caused by the AI system's outputs, not responses or governance. Therefore, the classification is AI Incident.