
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Google's Gemini AI chatbot, integrated into Google Messages, lacks end-to-end encryption, raising significant privacy risks: user data may be accessible to Google and to third parties. Gemini has also been found to generate false information and inappropriate images, misinforming users and prompting public apologies from Google.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Google's Gemini AI chatbot) integrated into a messaging platform. The article highlights a significant privacy risk: the lack of end-to-end encryption for AI chatbot conversations and the human review of user data, which could lead to violations of user privacy and potentially of human rights related to data protection. However, the article does not report any actual harm having occurred; it is primarily a warning about potential privacy risks and advises users to be cautious. This therefore constitutes an AI Hazard: the development and use of the AI system could plausibly lead to harm (privacy violations), but no direct harm has been reported at this time.[AI generated]