Google Gemini AI Chatbot Faces Privacy and Misinformation Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google's Gemini AI chatbot, integrated into Google Messages, lacks end-to-end encryption, raising significant privacy risks as user data may be accessible to Google and third parties. Additionally, Gemini has been found to generate false information and inappropriate images, leading to user misinformation and public apologies from Google.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (Google's Gemini AI chatbot) integrated into a messaging platform. The article highlights a significant privacy risk: the lack of end-to-end encryption for AI chatbot conversations and the human review of user data, which could lead to violations of user privacy and potentially human rights related to data protection. However, the article does not report any actual harm or incident occurring yet; it is primarily a warning about potential privacy risks and advises users to be cautious. Therefore, this constitutes an AI Hazard because the development and use of the AI system could plausibly lead to harm (privacy violations), but no direct harm has been reported at this time.[AI generated]

AI principles
Privacy & data governance
Robustness & digital security
Safety
Transparency & explainability
Accountability
Respect of human rights
Human wellbeing

Industries
Media, social platforms, and marketing
Digital security
Consumer services
IT infrastructure and hosting

Affected stakeholders
Consumers
General public
Business

Harm types
Human or fundamental rights
Psychological
Reputational

Severity
AI hazard

AI system task
Content generation
Interaction support/chatbots


Articles about this incident or hazard


Google Messages Warning For All Users As Radical Update Launches

2024-03-31
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Google's Gemini AI chatbot) integrated into a messaging platform. The article highlights a significant privacy risk: the lack of end-to-end encryption for AI chatbot conversations and the human review of user data, which could lead to violations of user privacy and potentially human rights related to data protection. However, the article does not report any actual harm or incident occurring yet; it is primarily a warning about potential privacy risks and advises users to be cautious. Therefore, this constitutes an AI Hazard because the development and use of the AI system could plausibly lead to harm (privacy violations), but no direct harm has been reported at this time.

Google Gemini Chatbot Review: Hallucination Station

2024-04-02
CNET
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the AI system Gemini and its use, focusing on its hallucination problem: fabricating false information such as non-existent restaurants and research papers. This misinformation can harm users by misleading them, which fits the definition of harm to communities or individuals relying on the AI. The disabling of generative image capabilities due to inappropriate portrayals further supports the presence of harm caused by the AI's malfunction or flawed outputs. Since these harms have been realized and are directly linked to the AI system's outputs, the event is best classified as an AI Incident rather than a hazard or complementary information.

Google Messages' Gemini Update: What You Need To Know

2024-04-03
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI chatbots (an AI system) in the Gemini update and the lack of end-to-end encryption, which creates a plausible risk of privacy violations and exposure of sensitive user data. Although no direct harm is reported, the potential for such harm is credible and significant, fitting the definition of an AI Hazard. The event is not an AI Incident because harm has not yet materialized, nor is it merely complementary information or unrelated news, as the focus is on the risk posed by the AI system's design and use.