Google Gemini Prompt Injection Flaw Exposes Private Calendar Data

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A vulnerability in Google Gemini, discovered by Miggo Security, allowed attackers to use indirect prompt injection via Google Calendar invites to bypass privacy controls and access private meeting data. The exploit relied on embedding malicious natural language prompts, leading to unauthorized data exfiltration. Google has since patched the flaw.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (Google Gemini) whose misuse via prompt injection leads to unauthorized access to private user data, a violation of privacy and potentially human rights related to data protection. The harm has occurred as private meeting data could be stolen. Although the vulnerability has been mitigated, the incident itself is a realized harm caused by the AI system's behavior and its exploitation. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's misuse and malfunction.[AI generated]
AI principles
Privacy & data governanceRobustness & digital securityAccountabilitySafety

Industries
Digital securityConsumer servicesIT infrastructure and hosting

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard

Thumbnail Image

Security alert: Researchers find Google Gemini can be used to steal your private data - here's how | Mint

2026-01-21
mint
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google Gemini) whose misuse via prompt injection leads to unauthorized access to private user data, a violation of privacy and potentially human rights related to data protection. The harm has occurred as private meeting data could be stolen. Although the vulnerability has been mitigated, the incident itself is a realized harm caused by the AI system's behavior and its exploitation. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's misuse and malfunction.
Thumbnail Image

Google Gemini Calendar Exploit Via Prompt Injection

2026-01-20
The Cyber Express
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, Google Gemini, which uses natural language understanding to interact with calendar data. The vulnerability exploited the AI's interpretation of natural language prompts embedded in calendar invites to bypass privacy controls and leak private data. This misuse of the AI system directly led to a privacy breach, a violation of fundamental rights and obligations under applicable law protecting privacy and data rights. The harm is realized and significant, as private calendar information was exfiltrated covertly. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's malfunction or misuse.
Thumbnail Image

Indirect prompt injection in Google Gemini enabled unauthorized access to meeting data - SiliconANGLE

2026-01-19
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google Gemini) whose malfunction or misuse (indirect prompt injection) directly led to unauthorized access to sensitive meeting data, constituting a violation of privacy and potentially human rights related to data protection. The harm has already occurred, and the AI system's role is pivotal in enabling the exploit. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Google Gemini Privacy Controls Bypassed to Access Private Meeting Data Using Calendar Invite

2026-01-20
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google Gemini) whose malfunction in processing natural language prompts embedded in calendar invites led to unauthorized data access, a clear harm to privacy and user rights. The attack exploited the AI's intended functionality, causing direct harm through data exfiltration. Therefore, this qualifies as an AI Incident. The subsequent fix by Google is a response but does not change the classification of the original event as an incident.
Thumbnail Image

Google Gemini Prompt Injection Flaw Exposed Private Data

2026-01-20
TechNadu
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google Gemini, a large language model integrated with productivity tools) whose malfunction (prompt injection vulnerability) directly led to unauthorized data exfiltration, a violation of privacy and data protection rights. This fits the definition of an AI Incident because the AI system's use and malfunction caused a breach of obligations under applicable law protecting fundamental rights (privacy). The harm is realized, not just potential, and the event is not merely a general update or research announcement but a concrete security incident involving AI misuse.
Thumbnail Image

Google Gemini flaw allowed meeting data exposure

2026-01-20
SC Media
Why's our monitor labelling this an incident or hazard?
An AI system (Google Gemini) was involved, and its malfunction (a security flaw) directly led to the exposure of private meeting data, which constitutes harm to privacy and potentially a violation of rights. The exploit allowed unauthorized access to sensitive information, fulfilling the criteria for an AI Incident due to realized harm caused by the AI system's malfunction and misuse.
Thumbnail Image

Exploiting Google Gemini to Abuse Calendar Invites Illustrates AI Threats

2026-01-20
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's Gemini large language model) whose misuse via prompt injection directly led to unauthorized access and leakage of private data, constituting a violation of privacy rights. The harm is realized, not just potential, as the researchers demonstrated the exploit and Google confirmed and fixed it. This fits the definition of an AI Incident because the AI system's malfunction and exploitation caused a breach of obligations intended to protect fundamental rights (privacy). The article also discusses the broader implications for AI security but the core event is a realized harm due to AI misuse.
Thumbnail Image

A Google Gemini Bug Turned Calendar Invites Into a Silent Data Leak

2026-01-20
Techloy
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google Gemini) whose use led directly to harm: the leakage of private meeting summaries to attackers without user awareness. The AI's interpretation of hidden instructions in calendar descriptions caused the creation of new calendar events visible to attackers, thus directly causing a breach of privacy and confidentiality. This fits the definition of an AI Incident because the AI system's use led to a violation of rights and harm to individuals' private data. The harm is realized, not just potential, and the AI system's malfunction or misuse is central to the incident.
Thumbnail Image

Google Gemini Flaw Turns Calendar Invites Into Attack Vector

2026-01-20
Dark Reading
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini generative AI integrated with Google Calendar) that is exploited via a prompt injection vulnerability. This flaw allows attackers to bypass privacy controls and access sensitive private meeting data, which is a violation of privacy rights and security. The harm is realized and directly linked to the AI system's malfunction and use, fulfilling the criteria for an AI Incident. The article details the exploit and its consequences, not just a potential risk or a response, so it is not a hazard or complementary information.
Thumbnail Image

Google Gemini Flaw Let Attackers Access Private Calendar Data

2026-01-20
TechRepublic
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google Gemini) whose malfunction (a security flaw) directly led to unauthorized access to private calendar data, a clear harm to individuals' privacy and rights. The attack exploited the AI's language understanding capabilities to bypass privacy controls, resulting in real data leakage. This fits the definition of an AI Incident because the AI system's malfunction directly caused harm (violation of privacy rights). The mitigation by Google is a response but does not change the classification of the original event as an incident.
Thumbnail Image

How a Calendar Invite Can Trick Google Gemini Into Leaking Your Private Meetings - WinBuzzer

2026-01-20
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's Gemini AI assistant) whose malfunction and exploitation via prompt injection attacks have directly led to the leakage of private meeting data, a clear harm to privacy and potentially human rights. The attack uses the AI's legitimate functions to exfiltrate data without user consent, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the event details a concrete security breach scenario. Although mitigations have been deployed, the fundamental architectural vulnerability remains, but this does not negate the fact that an AI Incident has occurred.
Thumbnail Image

A Google Gemini security flaw let hackers use calendar invites to steal private data

2026-01-20
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini) whose malfunction (prompt injection vulnerability) was exploited to gain unauthorized access to private calendar data, violating privacy rights. This directly led to harm through data exfiltration and unauthorized access, fitting the definition of an AI Incident. The harm is realized, not just potential, and involves violation of rights and harm to individuals' privacy.
Thumbnail Image

This Gemini Calendar trick turns a simple invite into a privacy nightmare

2026-01-21
Android Authority
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini) that processes natural language calendar invites and queries. The misuse of Gemini's reasoning capabilities via prompt injection directly led to unauthorized access and disclosure of private calendar data, which is a clear harm to privacy and user rights. This harm has materialized, not just a potential risk, qualifying it as an AI Incident. The article also mentions Google's response, but the primary focus is on the realized exploit and its consequences, not just the response, so it is not merely Complementary Information.
Thumbnail Image

Google Gemini AI Flaw Leaks Private Data via Malicious Calendar Invites

2026-01-21
WebProNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's Gemini) integrated with calendar applications, which processes natural language inputs and was exploited via indirect prompt injection. The exploitation led to unauthorized leakage of private and corporate calendar data, a clear violation of privacy and data security, which is a form of harm to persons and communities. The harm is realized, not just potential, as researchers demonstrated the exploit and Google responded with patches. The AI system's malfunction due to adversarial inputs directly caused the harm. Hence, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Researchers got Gemini AI to leak Google Calendar data, they claim

2026-01-21
Mashable
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini AI assistant) whose use has directly led to a privacy breach, a violation of fundamental rights related to data protection. The researchers demonstrated how the AI's behavior can be manipulated to leak sensitive information, which is a clear harm caused by the AI system's malfunction or misuse. Therefore, this qualifies as an AI Incident under the framework because the AI system's use has directly led to harm (privacy violation).
Thumbnail Image

Researchers say they convinced Gemini to leak Google Calendar data

2026-01-21
Mashable SEA | Latest Entertainment & Trending
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini AI assistant) whose use was exploited to leak sensitive personal data without user consent. This constitutes a breach of privacy and a violation of rights, which fits the definition of an AI Incident. The researchers demonstrated the harm directly caused by the AI system's behavior under malicious prompting, confirming realized harm rather than just potential risk.
Thumbnail Image

Gemini AI Flaw Allowed Calendar Data Leaks Via Malicious Invites

2026-01-22
TechWorm
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Gemini AI assistant) whose use and behavior directly caused harm by leaking private calendar data to attackers, fulfilling the criteria for an AI Incident. The harm is realized (data leak), and the AI system's malfunction or exploitation is pivotal to the incident. The article details the exploit, the harm caused, and the response, confirming this is not merely a potential risk or complementary information but a concrete AI Incident.
Thumbnail Image

The Prompt Attack: How Gemini AI Was Exploited Using Calendar Invite To Leak Private Data

2026-01-22
News18
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Gemini AI assistant) whose use is exploited via prompt injection attacks to leak private data, constituting a violation of privacy and potentially human rights related to data protection. The harm (data leakage) has occurred as a direct consequence of the AI system's misuse. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to individuals' private data and privacy rights.
Thumbnail Image

Google Patches Zero-Click Gemini AI Flaw Leaking Workspace Data

2026-01-22
WebProNews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's Gemini) integrated into Google Workspace, which processes calendar invites and can be manipulated via indirect prompt injection to leak sensitive data. The exploit is zero-click and has been demonstrated by security researchers, indicating realized harm or at least active exploitation risk. The harm includes unauthorized disclosure of private and corporate information, which is a violation of privacy and potentially intellectual property rights. Google has patched the vulnerability, but the incident itself is a concrete example of harm caused by the AI system's malfunction and use. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Calendar Invite Trick Exposes Risks in Google's Gemini AI Security

2026-01-22
thehansindia.com
Why's our monitor labelling this an incident or hazard?
The article explicitly details how the AI system Gemini, when connected to user accounts, can be manipulated via prompt injection embedded in calendar invites to leak sensitive personal data. This is a direct consequence of the AI's use and processing of inputs, leading to realized harm in terms of privacy breaches. The involvement of the AI system is clear, and the harm is concrete and ongoing, not merely potential. Hence, the event fits the definition of an AI Incident rather than a hazard or complementary information.
Thumbnail Image

From Grubhub to Google, Hackers Ate Well This Week

2026-01-23
PC Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems such as Microsoft's Copilot and Google's Gemini, both of which have vulnerabilities exploited to exfiltrate sensitive data, constituting direct harm. The Google AdMob case involves illegal data collection on minors, violating legal protections and human rights related to privacy. The involvement of AI in these incidents is clear, and the harms have materialized, meeting the criteria for AI Incidents. Other cybersecurity issues mentioned, such as the Grubhub data breach and Tesla system vulnerabilities, while serious, do not explicitly involve AI systems or AI-related malfunctions in the article's description. Therefore, the overall classification is AI Incident due to the direct harms caused by AI system vulnerabilities and misuse described.
Thumbnail Image

如何判斷影片是否為AI生成?Gemini這項功能可偵測!如何使用一次看懂- SOGI 手機王

2026-01-20
SOGI手機王
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini AI and SynthID watermark technology) used to detect AI-generated videos. However, the article focuses on the introduction and explanation of this detection tool, which is a positive development aimed at mitigating potential harms from AI-generated misinformation. There is no indication of any realized harm or incident caused by AI, nor is there a plausible future harm described. The content is informational and supportive, enhancing understanding of AI capabilities and responses to AI-generated content. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
Thumbnail Image

Google Gemini間接提示注入漏洞可外洩會議、行事曆資訊

2026-01-20
iThome
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google Gemini) that processes and interprets user calendar data to respond to queries. The described prompt injection vulnerability is a malfunction or security flaw in the AI system's use, which directly enabled unauthorized access to private calendar information and unauthorized modification of calendar events. This results in a clear violation of privacy rights and unauthorized data disclosure, fitting the definition of an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights. The harm is realized (data exfiltration and privacy breach), not just potential, and the AI system's role is pivotal in enabling this harm.
Thumbnail Image

15億用戶的入口之爭!蘋果牽手Google:一場改寫AI權力結構的策略奇襲 | 鉅亨網 - 美股雷達

2026-01-21
news.cnyes.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as it discusses AI technology integration, but there is no indication of any realized harm or plausible future harm resulting from this cooperation. The article focuses on the strategic and market implications rather than any incident or hazard related to AI misuse, malfunction, or harm. Therefore, it is best classified as Complementary Information, providing context and updates on AI ecosystem developments without reporting an AI Incident or AI Hazard.
Thumbnail Image

Siri 被迫加速:蘋果借 Gemini 補強,Google 卡位最肥的商業入口

2026-01-20
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Google's Gemini models and Apple's Siri AI), but the content focuses on collaboration and product enhancement without any indication of realized harm or credible risk of harm. There is no mention of incidents, hazards, or governance responses related to harm. Therefore, this is general AI-related news about product strategy and ecosystem competition, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
Thumbnail Image

Google推出通用商务协议,Gemini智能体可代用户购物

2026-01-21
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The article focuses on the introduction of a new AI protocol and AI shopping agents that enable streamlined commerce. While AI systems are clearly involved and their use could plausibly lead to future harms (e.g., privacy issues, market dominance concerns, or consumer manipulation), no actual harm or incident is reported. The main content is about the development and deployment of AI technology and the ecosystem's evolution, without describing any direct or indirect harm or legal violations. Therefore, this qualifies as Complementary Information, providing context and updates on AI system deployment and ecosystem changes rather than reporting an AI Incident or Hazard.
Thumbnail Image

DeepMind執行長:Gemini暫不推廣告 「OpenAI這麼做應該是很需要錢」

2026-01-21
聯合新聞網
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (chatbots Gemini and ChatGPT) but focuses on their monetization strategies and financial outlook rather than any harm or risk caused by or related to the AI systems. There is no indication of injury, rights violations, disruption, or other harms directly or indirectly caused by the AI systems. The discussion about ads and revenue models is a governance and business strategy topic, which fits the definition of Complementary Information as it provides supporting context and updates without reporting an incident or hazard.
Thumbnail Image

Google搜索AI模式可访问Gmail和相册实现个性化服务

2026-01-23
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The article details a new AI-powered personalization feature that uses user data to enhance search results. While it involves AI system use and data analysis, there is no mention of any injury, rights violations, disruption, or other harms caused by the AI system. The company acknowledges potential errors and privacy concerns but frames them as design considerations and user controls. Therefore, this event does not describe an AI Incident or AI Hazard but rather provides information about an AI system's deployment and its privacy safeguards, fitting the definition of Complementary Information.
Thumbnail Image

Google搜索AI模式可通过邮件和图片进行个性化推荐

2026-01-22
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Google's Gemini 3 model) used for personalized recommendations, indicating AI system involvement. However, there is no indication of any harm or violation caused by the AI system's development or use. The mention of possible errors is a caution but does not describe any actual incident or plausible harm occurring. The focus is on the feature's introduction, privacy safeguards, and potential for errors, which aligns with Complementary Information as it updates on AI deployment and governance aspects without reporting an incident or hazard.
Thumbnail Image

غوغل تطلق ميزة ذكاء شخصي جديدة ضمن البحث لتعزيز تخصيص النتائج

2026-01-23
الوكالة العربية السورية للأنباء - سانا
Why's our monitor labelling this an incident or hazard?
The article primarily reports on a new AI feature launch by Google aimed at enhancing search personalization. It does not describe any realized harm or incident resulting from the AI system's development, use, or malfunction. Nor does it indicate any plausible future harm or risk associated with the feature. The information is about a new AI capability and its deployment context, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
Thumbnail Image

اكتشاف ثغرة خطيرة جديدة في جيميني تهدد بيانات المستخدمين

2026-01-23
العربية
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system ('Gemini' chatbot) whose malfunction (a security vulnerability) directly led to unauthorized disclosure of sensitive personal data, violating user privacy rights. This fits the definition of an AI Incident because the AI system's malfunction caused harm (violation of rights). The responsible disclosure and fix are complementary information but do not negate the fact that the incident occurred. Therefore, this is classified as an AI Incident.
Thumbnail Image

اكتشاف ثغرة خطيرة في "جيميني" تهدد خصوصية مستخدمي جوجل.. تفاصيل

2026-01-23
مصراوي.كوم
Why's our monitor labelling this an incident or hazard?
The chatbot 'Gemini' is an AI system that processes user calendar data and generates outputs based on user queries. The described attack exploited a command injection vulnerability in the AI's processing, leading to unauthorized disclosure of private information. This is a direct AI Incident because the AI system's malfunction enabled harm to user privacy, a violation of fundamental rights. The article details the realized harm and the AI system's role in causing it, meeting the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

غوغل تطلق "الذكاء الشخصي".. تجربة بحث جديدة تقرأ تفضيلاتك

2026-01-23
العين الإخبارية
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's AI assistant Gemini 3) used to personalize search results by analyzing user data. However, the article does not report any realized harm, malfunction, or legal violation caused by this AI system. It also does not suggest plausible future harm or risks. Instead, it mainly provides information about a new AI feature launch and its privacy considerations, which fits the definition of Complementary Information as it enhances understanding of AI developments and their ecosystem without describing an incident or hazard.
Thumbnail Image

ثغرة جديدة تهدد خصوصية مستخدمي جيميني

2026-01-23
جريدة البلاد
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini chatbot) whose malfunction (vulnerability exploitation) directly led to a violation of user privacy, a form of harm to individuals. The AI system's behavior in processing calendar data automatically was exploited to leak sensitive information, constituting a realized harm. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction directly caused harm to users' privacy rights.
Thumbnail Image

اطلاق الذكاء الشخصي.. يعرف كل شيء عن حياتك الرقمية

2026-01-25
قناه السومرية العراقية
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Personal Intelligence feature) that uses extensive personal data to generate personalized outputs. However, the article does not report any realized harm, violation of rights, or incidents caused by the AI system. It also does not describe any malfunction or misuse leading to harm. The concerns raised are about potential privacy risks, but these are not presented as actual incidents or imminent hazards. Therefore, the article is best classified as Complementary Information, providing context and discussion about a new AI development and its implications without reporting an AI Incident or AI Hazard.
Thumbnail Image

ذكاء "غوغل" الاصطناعي قد يفتش "جيميل" والصور

2026-01-25
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system accessing sensitive personal data, which could plausibly lead to privacy violations or other harms if misused or malfunctioning. However, since the article only discusses the feature's introduction, potential concerns, and ongoing development without any reported harm or incident, it fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the new AI capability and its potential risks, not on responses or ecosystem updates. It is not unrelated because AI involvement and plausible future harm are clearly present.
Thumbnail Image

مساعد غوغل الصوتي يكلف الشركة تعويضات بقيمة 68 مليون دولار

2026-01-26
قناة العربية
Why's our monitor labelling this an incident or hazard?
The Google Assistant is an AI system that listens for activation keywords and processes voice data. The reported harm is the unauthorized recording and sharing of private conversations, which constitutes a violation of privacy rights. The false activations ("activation errors") caused the AI system to record conversations without consent, directly leading to harm. The legal settlement confirms the recognition of harm caused by the AI system's malfunction or misuse. Hence, this event meets the criteria for an AI Incident due to direct harm to users' rights and privacy.
Thumbnail Image

دعوى: غوغل تتجسس على محادثات هاتفية للمستخدمين | صحيفة الخليج

2026-01-27
صحيفة الخليج
Why's our monitor labelling this an incident or hazard?
The Google Assistant is an AI system designed to interpret voice commands and respond accordingly. The lawsuit alleges that the AI system misinterpreted conversations and recorded them without proper consent, leading to privacy violations and unauthorized data use for targeted advertising. This constitutes a violation of users' rights and privacy, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law protecting fundamental rights. Since the harm has occurred and led to a legal settlement, this qualifies as an AI Incident.
Thumbnail Image

مقابل هذا المبلغ الضخم... غوغل توافق على تسوية دعوى قضائية بشأن الخصوصية

2026-01-26
LBCI Lebanon
Why's our monitor labelling this an incident or hazard?
The Google Assistant is an AI system that processes voice input to generate responses and perform tasks. The lawsuit alleges that the AI system recorded conversations without proper consent and used the data for targeted advertising, constituting a violation of privacy rights. This is a direct harm caused by the AI system's use, meeting the criteria for an AI Incident under violations of human rights or breach of applicable law protecting privacy rights. The settlement confirms the harm occurred and the AI system's role was pivotal.
Thumbnail Image

غوغل تسوي دعوى قضائية بشأن الخصوصية مقابل 68 مليون دولار

2026-01-26
صوت بيروت إنترناشونال
Why's our monitor labelling this an incident or hazard?
The Google Assistant is an AI system that processes voice input to generate responses and perform tasks. The incident involves the use of this AI system leading to violations of user privacy rights, a breach of fundamental rights protected by law. The lawsuit and settlement indicate that harm to users' privacy has occurred due to the AI system's operation. Therefore, this qualifies as an AI Incident because the AI system's use directly led to violations of human rights (privacy).
Thumbnail Image

قضيّة تجسّس تكلّف 'غوغل' 68 مليون دولار!

2026-01-27
annahar.com
Why's our monitor labelling this an incident or hazard?
The Google Assistant is an AI system that processes voice inputs to respond to activation phrases and perform tasks. The reported misuse—recording private conversations without consent and using them for targeted ads—constitutes a violation of users' privacy rights, a breach of fundamental rights protected by law. The harm has already occurred, as users' private data was improperly collected and used. Therefore, this qualifies as an AI Incident due to the direct involvement of an AI system causing harm to human rights (privacy).