Apple's Siri AI Accused of Mass Privacy Violations by Whistleblower

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Whistleblower Thomas Le Bonniec revealed that Apple's Siri AI system recorded and processed users' conversations without consent, violating privacy rights and data protection laws. Despite public outcry and regulatory attention in the EU, Apple allegedly continued these practices, prompting calls for investigation and enforcement against the company. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article details how Apple's AI system (Siri) was used to record and grade user audio data, including sensitive and private information, without user consent or awareness. This practice violates privacy rights and fundamental human rights, fulfilling the criteria for harm under the AI Incident definition (violations of human rights or breach of legal obligations). The involvement of the AI system is explicit, and the harm is realized, not just potential. Therefore, this event is classified as an AI Incident. [AI generated]
AI principles
Privacy & data governance, Respect of human rights, Transparency & explainability, Accountability

Industries
Consumer services, Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights, Reputational

Severity
AI incident

Business function
Citizen/customer service

AI system task
Recognition/object detection, Interaction support/chatbots


Articles about this incident or hazard


Apple whistleblower calls for European privacy probes into Big Tech voice assistants

2020-05-21
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article discusses the use of AI-powered voice assistants and the human review of recordings without user consent, which implicates AI system use and potential privacy harms (violations of rights). However, the article focuses on calls for investigation and regulatory scrutiny rather than reporting a confirmed AI Incident where harm has been realized or an AI Hazard where plausible future harm is demonstrated. The whistleblower's letter and the regulatory context provide complementary information about ongoing concerns and governance responses related to AI systems, fitting the definition of Complementary Information rather than an Incident or Hazard.

Apple Just Gave 1.5 Billion iPad, iPhone Users A Reason To Leave

2020-05-22
Forbes
Why's our monitor labelling this an incident or hazard?
The article details how Apple's AI system (Siri) was used to record and grade user audio data, including sensitive and private information, without user consent or awareness. This practice violates privacy rights and fundamental human rights, fulfilling the criteria for harm under the AI Incident definition (violations of human rights or breach of legal obligations). The involvement of the AI system is explicit, and the harm is realized, not just potential. Therefore, this event is classified as an AI Incident.

An Apple whistleblower has publicly slammed the company, claiming it violated 'fundamental rights' after Siri recorded users' intimate moments without consent

2020-05-20
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Siri) whose use led to unauthorized recording and processing of private conversations, including sensitive content, without user consent. This directly violates fundamental rights and data protection laws, causing harm to individuals' privacy. The whistleblower's revelations and the company's prior apology and suspension of the grading program confirm that harm occurred. Hence, the event meets the criteria for an AI Incident under violations of human rights and legal obligations.

Irish regulator questions Apple over recordings

2020-05-21
Reuters
Why's our monitor labelling this an incident or hazard?
The event centers on the use of an AI system (Apple's Siri voice assistant) that processes user audio recordings. The whistleblower alleges privacy violations related to data handling practices, which implicates potential breaches of data protection and privacy rights (a form of human rights violation). The regulator's involvement and the call for enforcement indicate that harm related to privacy rights may have occurred or is ongoing. Therefore, this qualifies as an AI Incident due to the direct or indirect violation of fundamental rights through the AI system's use.

Apple whistleblower goes public over 'lack of action'

2020-05-20
The Guardian
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Siri) that processes voice recordings using AI-based transcription and analysis. The whistleblower's testimony reveals that the system collected data without user consent, including sensitive personal information, violating privacy rights and data protection laws. This constitutes a violation of human rights and legal obligations, fulfilling the criteria for an AI Incident. The harm is direct and ongoing, as the data collection occurred on a massive scale and without proper user awareness or consent. Although Apple has made some changes, the whistleblower argues that no effective enforcement or investigation has taken place, indicating the harm is materialized and significant.

An Apple whistleblower has publicly decried the company for 'violating fundamental rights' after Siri recorded users' intimate moments without consent

2020-05-20
Business Insider
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Siri) whose use led to unauthorized recording and processing of private conversations, violating users' fundamental rights to privacy. The whistleblower's testimony confirms that the AI system's operation caused harm by breaching data protection laws and privacy rights. Apple acknowledged the issue and suspended the program, indicating the harm was realized. Therefore, this qualifies as an AI Incident due to violations of human rights and legal obligations related to data protection.

Whistleblower slams Apple for 'wiretapping entire populations'

2020-05-20
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Siri) that processes voice data. The whistleblower's testimony indicates that the system's use led to unauthorized collection and human review of private conversations without consent, violating privacy rights and data protection laws. This is a direct harm to individuals' fundamental rights, fulfilling the criteria for an AI Incident. The company's acknowledgment and partial suspension of the practice do not negate the realized harm. Hence, the event is classified as an AI Incident due to the direct violation of human rights caused by the AI system's use.

Apple Just Gave 1.5 Billion iPad, iPhone Users A Reason To Leave

2020-05-23
Forbes
Why's our monitor labelling this an incident or hazard?
The article details how Apple's AI-powered Siri system was used to collect and process private audio recordings without user consent, including sensitive information. This practice violates users' privacy rights and fundamental human rights, fulfilling the criteria for an AI Incident under violations of human rights or breach of legal obligations. The involvement of AI in processing and grading these recordings is explicit, and the harm (privacy violation) has already occurred. Therefore, this event qualifies as an AI Incident.

Apple whistleblower goes public over data privacy protection

2020-05-20
Fox News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Siri's voice recognition and transcription system) whose use led to unauthorized collection and processing of user data, violating privacy rights. The whistleblower's testimony indicates that recordings were made without user activation or consent, constituting a breach of legal protections for data privacy. The harm is a violation of fundamental rights under applicable law, meeting the criteria for an AI Incident. Although Apple has taken some remedial steps, the incident itself has already occurred and caused harm, so it is not merely complementary information or a hazard.

Apple Siri whistleblower pushes EU for more reforms on voice recording tech

2020-05-20
CNET
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Siri voice assistant) whose use led to the collection and transcription of private voice recordings without proper consent, including sensitive personal data. This unauthorized data collection and processing infringes on privacy rights and data protection laws, constituting a violation of human rights and legal obligations. The whistleblower's testimony and the described practices indicate direct harm caused by the AI system's use, meeting the criteria for an AI Incident.

Apple Questioned by Irish Regulator Over Siri Audio Recordings

2020-05-22
NDTV Gadgets 360
Why's our monitor labelling this an incident or hazard?
The article discusses the regulatory follow-up on Apple's handling of Siri audio recordings, which involves AI systems processing personal data. While there is concern about privacy and data protection law compliance, no direct or indirect harm has been reported or confirmed. The event centers on regulatory engagement and calls for enforcement, which fits the definition of Complementary Information rather than an Incident or Hazard. The AI system's involvement is clear, but the focus is on governance and oversight rather than a realized or plausible harm event.

Apple whistleblower calls for European privacy probes into Big Tech voice assistants

2020-05-21
POLITICO
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (voice assistants using AI for speech recognition and natural language understanding). The use of human reviewers to listen to recordings without user consent constitutes a violation of privacy rights and data protection laws, which are fundamental rights under applicable law. This harm has already occurred as users' privacy was breached without their knowledge or consent. The whistleblower's letter and calls for investigation indicate that the AI systems' use has directly led to these harms. Therefore, this qualifies as an AI Incident due to violations of human rights and privacy obligations caused by the AI systems' data processing practices.

Apple's Siri violated 'the privacy of millions,' says whistleblower

2020-05-20
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Siri) that processes voice inputs and generates transcriptions. The whistleblower reveals that the development and use of this AI system led to unauthorized collection and processing of personal data, violating privacy rights of millions of users. This constitutes a violation of human rights and applicable data protection laws, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, as the whistleblower claims no effective remediation has been done since 2019.

Siri Whistleblower Goes Public to Protest Lack of Consequences for Apple

2020-05-20
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Siri) whose development and use included human review of voice recordings to improve the system. This process led to unauthorized access to private conversations, including sensitive information, violating users' privacy rights and data protection laws. The harm is realized and ongoing, as private data was accessed without consent, fulfilling the criteria for an AI Incident under violations of human rights and legal obligations. The whistleblower's protest highlights the lack of regulatory enforcement but does not negate the occurrence of harm. Therefore, this event is classified as an AI Incident.

Apple's Siri 'listens in on users' intimate moments', whistleblower claims

2020-05-21
Mirror
Why's our monitor labelling this an incident or hazard?
Siri is an AI system that processes voice inputs to provide assistance. The whistleblower reveals that Siri recordings, including sensitive and private conversations, are collected and reviewed by contractors without sufficient vetting or consent, leading to violations of privacy rights protected under law. This constitutes a breach of obligations intended to protect fundamental rights, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the recordings have been accessed and graded, directly implicating the AI system's use in causing the harm.

Irish Regulators 'in Contact' With Apple Over Siri Quality Control Program

2020-05-21
MacRumors
Why's our monitor labelling this an incident or hazard?
Siri is an AI system that processes voice inputs to generate responses. The event describes how Siri audio recordings were used by contractors for quality control without explicit user consent, leading to privacy violations. The Irish Data Protection Commission's engagement and the class-action lawsuit highlight that these actions have caused harm in terms of breaches of privacy rights under EU law. This harm is directly linked to the AI system's use and data handling practices, fulfilling the criteria for an AI Incident under violations of human rights or applicable law protecting fundamental rights.

Former Apple contractor asks European Data Protection authorities to investigate Apple's Siri - Il Fatto Quotidiano

2020-05-20
Il Fatto Quotidiano
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Siri) whose development and use included human-assisted transcription of voice recordings. The alleged unauthorized collection and listening to private conversations without consent directly violates privacy rights, a fundamental human right protected under applicable law (e.g., GDPR). The harm is realized, not hypothetical, as the recordings include sensitive personal information and were made without user awareness. The involvement of the AI system in processing these recordings is central to the harm. Hence, this is an AI Incident involving violations of human rights and privacy.

Siri grading whistleblower says Apple should face consequences - 9to5Mac

2020-05-20
9to5Mac
Why's our monitor labelling this an incident or hazard?
The event describes a direct harm caused by the use of an AI system (Siri) in processing and grading voice data, where private and sensitive conversations were recorded and reviewed without proper consent, violating fundamental privacy rights. This constitutes a breach of obligations under applicable data protection laws, fitting the definition of an AI Incident under violations of human rights or legal obligations. The whistleblower's call for consequences and regulatory enforcement further underscores the seriousness of the harm caused. Therefore, this event qualifies as an AI Incident.

Apple's handling of Siri snippets back in the frame after letter of complaint to EU privacy regulato (Natasha Lomas/TechCrunch)

2020-05-22
Tech Investor News
Why's our monitor labelling this an incident or hazard?
Siri is an AI system that processes user voice inputs to generate responses. The complaint concerns the use and handling of data generated by this AI system, with human contractors overhearing sensitive information, which implicates violations of privacy and data protection rights. This constitutes a violation of fundamental rights and legal obligations related to data privacy, thus meeting the criteria for an AI Incident due to harm to rights and breach of legal protections. The event describes realized harm (privacy violations) linked to the AI system's use, not just potential harm or general information.

Former Apple contractor implores EU to investigate Apple over Siri

2020-05-20
iMore
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, Siri, which processes voice data using AI technologies. The whistleblower's claims indicate that the development and use of this AI system led to violations of fundamental rights, specifically privacy rights, through unauthorized data collection and listening. This harm has already occurred and is directly linked to the AI system's operation and data handling practices. Although Apple has since changed policies, the incident described concerns past practices that caused harm. Hence, it qualifies as an AI Incident under the framework's criteria for violations of human rights due to AI system use.

Irish regulators question Apple's practices after whistleblower goes public

2020-05-22
@businessline
Why's our monitor labelling this an incident or hazard?
The whistleblower exposed that Apple's AI system (Siri) was recording and processing user audio data without proper consent, including sensitive personal information. This practice violates data protection laws and fundamental rights, constituting harm under the framework's category (c) violations of human rights or breach of legal obligations. The AI system's development and use directly led to this harm. The regulators' involvement and Apple's partial remedial actions do not negate the realized harm. Hence, this is classified as an AI Incident.

Siri whistleblower says Apple should face investigations over grading controversy - General Discussion Discussions on AppleInsider Forums

2020-05-20
AppleInsider Forums
Why's our monitor labelling this an incident or hazard?
The event describes the use and development of an AI system (Siri) that processes voice recordings, including manual review by contractors, which led to violations of privacy rights and potential breaches of data protection laws. The whistleblower's revelations indicate that harm to individuals' rights has occurred through unauthorized data collection and exposure of sensitive information. Therefore, this qualifies as an AI Incident due to violations of human rights and privacy obligations directly linked to the AI system's use and data handling practices.

Cork-based Apple Siri whistleblower urges action against tech giant

2020-05-20
Irish Independent
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Siri voice recognition) and its development/use (grading by human contractors). The whistleblower's revelations indicate that the AI system's use led to violations of privacy rights and data protection laws, which are breaches of fundamental rights. The harm is realized, not just potential, as private conversations were recorded and listened to without consent. Therefore, this qualifies as an AI Incident under the category of violations of human rights or breach of legal obligations protecting fundamental rights.

Apple whistleblower calls for privacy probes into Big Tech voice assistants

2020-05-21
POLITICO
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (voice assistants using AI for speech recognition and natural language understanding). The whistleblower's revelations and the companies' practices have led to privacy harms (violations of fundamental rights) through unauthorized listening and data processing. However, the article does not report a new AI Incident but rather discusses calls for regulatory probes and the current state of enforcement, which fits the definition of Complementary Information. It provides important context on governance and societal responses to AI harms but does not itself describe a new incident or hazard.

Ireland's data protection boss is questioning Apple over Siri's privacy (Luke Dormehl/Cult of Mac)

2020-05-22
Tech Investor News
Why's our monitor labelling this an incident or hazard?
Siri is an AI system that processes voice inputs and generates responses. The privacy concerns raised by the ex-contractor suggest potential violations of data protection laws, which fall under human rights violations. Since the event is about the data protection authority questioning Apple and considering investigation, it is a governance or societal response to a potential issue rather than a confirmed incident of harm. Therefore, this is best classified as Complementary Information, as it provides context and updates on AI-related governance and privacy concerns without confirming an AI Incident or Hazard.

Former contractor wants Apple investigated for Siri's 'massive' data collection

2020-05-20
Cult of Mac
Why's our monitor labelling this an incident or hazard?
The event describes how Apple's AI-powered Siri system collected and processed user voice data, including recordings made without user consent or activation, which led to privacy violations and breaches of data protection laws. The AI system's use and the subsequent human review of recordings directly led to harm in the form of violations of fundamental rights and privacy. The article also mentions that Apple has taken some remedial actions, but the harm has already occurred. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and legal obligations.

Whistleblower Says Apple Built Secret Dossier on You, via Siri - Security Boulevard

2020-05-21
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Siri) used to process and transcribe user audio recordings. The whistleblower's claims indicate that the AI system's outputs (recordings and transcriptions) were linked to personal data and exploited without proper user consent, constituting a violation of privacy rights and data protection laws. This misuse has directly led to harm in terms of breaches of fundamental rights and legal obligations. The involvement of contractors listening to recordings without user knowledge further exacerbates the harm. Although Apple has made some changes, the whistleblower asserts ongoing issues, indicating the harm is current and significant. Hence, this is an AI Incident rather than a hazard or complementary information.

Big Tech must face consequences of audio snooping, whistleblower says

2020-05-21
Institution of Engineering and Technology
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (virtual assistants using natural language processing AI) whose use has directly led to violations of fundamental rights (privacy and data protection) through unauthorized recording and transcription of private conversations. The whistleblower's testimony confirms that these harms have occurred and continue, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but reports actual harm caused by AI system use.

Whistle-blower claims Apple 'ignoring and violating' users' rights

2020-05-20
The Week UK
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Siri) that processes voice recordings. The whistle-blower's disclosures indicate that the development and use of this AI system have led to violations of fundamental rights (privacy and data protection). The alleged unauthorized listening to intimate recordings is a breach of legal obligations and users' rights, which fits the definition of an AI Incident under violations of human rights or breach of applicable law. Therefore, this event qualifies as an AI Incident.

Apple still eavesdropping on private conversations at Irish HQ, whistleblower claims

2020-05-21
The Irish Sun
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Siri) used for voice recognition and command processing. The whistleblower's claims indicate that the AI system's use has directly led to violations of privacy rights by recording private conversations without consent and linking them to personal data. This is a breach of obligations under applicable data protection laws and fundamental rights, fulfilling the criteria for an AI Incident. The harm is ongoing and has been publicly alleged, with investigations underway, confirming realized harm rather than just potential risk.

An Apple whistleblower has publicly decried the company for violating fundamental rights after Sir (Isobel Asher Hamilton/Business Insider: Tech)

2020-05-20
Tech Investor News
Why's our monitor labelling this an incident or hazard?
The whistleblower's disclosure concerns Apple's Siri AI system, which processes voice inputs to improve its performance. The collection and transcription of user recordings without proper consent or transparency directly implicate violations of privacy rights, a fundamental human right. Since the AI system's use has directly led to these rights violations, this qualifies as an AI Incident under the framework.

Apple Siri: Sex Talk Of Apple Siri Users Goes Controversial - Research Snipers

2020-05-23
Research Snipers
Why's our monitor labelling this an incident or hazard?
The event describes a former employee's allegations that Apple used its AI-powered voice assistant Siri to record and store private conversations without user knowledge or consent, including sensitive and intimate content. This unauthorized use and processing of personal data breaches data protection laws and users' fundamental rights to privacy. The AI system's role in capturing and processing these conversations is central to the harm caused. Therefore, this qualifies as an AI Incident due to violations of human rights and legal obligations related to privacy and data protection.

Siri Grading Whistleblower Goes Public With Apple Dissatisfaction - The Mac Observer

2020-05-20
The Mac Observer
Why's our monitor labelling this an incident or hazard?
Apple's Siri grading program involves human review of voice data to improve Siri, an AI system. The whistleblower reveals that this was done without proper disclosure or consent, leading to violations of fundamental rights, specifically privacy rights protected under EU data protection laws. This constitutes a breach of obligations under applicable law intended to protect fundamental rights, fulfilling the criteria for an AI Incident. The harm is realized, as the data collection and processing occurred without informed consent, and the whistleblower's call for an urgent investigation indicates ongoing concern about the harm caused. Thus, the event is classified as an AI Incident.

Who Is Thomas Le Bonniec? Whistleblower Wants Apple to be Probed for Siri's 'Massive' Data Collection

2020-05-20
International Business Times, Singapore Edition
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (Apple's Siri) whose development and use included human contractors listening to private conversations without consent, leading to violations of privacy rights. The whistleblower's revelations indicate that the AI system's use directly led to harm in the form of breaches of fundamental rights and privacy. The event is not merely a potential risk but describes actual harm that has occurred, meeting the criteria for an AI Incident under violations of human rights and privacy.

Private Conversations Recorded by Siri Spark Apple Whistleblower to Go Public

2020-05-21
iDrop News
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (Siri) whose operation involves recording and processing user voice data. The whistleblower's revelations indicate that these recordings include private and sensitive information collected without proper user consent or awareness, constituting a violation of human rights and legal obligations related to privacy. The AI system's use has directly led to harm in the form of privacy violations. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to a breach of fundamental rights.

Ireland's Data Protection Commissioner questions Apple over Siri recordings (MacDailyNews/MacDailyNews)

2020-05-21
Tech Investor News
Why's our monitor labelling this an incident or hazard?
Siri is an AI system that processes voice inputs to generate responses. The whistleblower revealed that recordings were listened to, which could constitute a violation of privacy rights, a human rights concern. Since the regulator is questioning Apple and no confirmed harm or breach is reported yet, this situation represents a plausible risk of harm or violation rather than a confirmed incident. Therefore, it fits the definition of Complementary Information as it provides an update on regulatory scrutiny and potential issues but does not confirm an AI Incident or Hazard at this stage.

Irish Regulators in Contact With Apple Over Siri Quality Control Program (Juli Clover/MacRumors)

2020-05-21
Tech Investor News
Why's our monitor labelling this an incident or hazard?
The article details that the Irish Data Protection Commissioner is in contact with Apple regarding privacy concerns about Siri recordings being reviewed by employees. While this involves AI system use (Siri voice assistant) and potential privacy rights issues, no confirmed harm or violation has been established yet. The focus is on regulatory engagement and awaiting Apple's responses, which fits the definition of Complementary Information rather than an Incident or Hazard.

Siri whistleblower goes public over 'lack of action,' says Apple should face consequences (MacDailyNews/MacDailyNews)

2020-05-20
Tech Investor News
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (Siri) whose development and use involved human contractors listening to user recordings, raising privacy and fundamental rights violations. The whistleblower's disclosure indicates that these rights have been breached, fulfilling the criteria for an AI Incident under violations of human rights or breach of applicable law protecting fundamental rights. The harm is realized as it involves ongoing violations and data misuse, not just potential harm.

Former Apple contractor unhappy with lack of action on alleged Siri privacy violations (Dennis Sellers/Apple World Today)

2020-05-20
Tech Investor News
Why's our monitor labelling this an incident or hazard?
Siri is an AI system that processes voice inputs to generate responses. The whistleblower's claim that Apple contractors listened to users' Siri recordings without proper consent or safeguards implies a breach of privacy rights, which is a violation of fundamental rights under applicable law. Since this has already occurred and involves harm to users' privacy rights, it qualifies as an AI Incident. The event directly relates to the use of an AI system and the resulting harm to human rights (privacy).

Apple Whistleblower Goes Public Over Lack of Action (Slashdot)

2020-05-20
Tech Investor News
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (Siri) whose development and use involved human contractors listening to user recordings, which is part of the AI system's training and improvement process. The whistleblower's disclosures reveal violations of privacy rights, a fundamental human right, due to unauthorized or non-transparent data collection and processing. This constitutes harm under the framework's category (c) violations of human rights or breach of obligations under applicable law. Since the harm has occurred and is ongoing due to lack of sufficient remedial action, this qualifies as an AI Incident.

Privacy: Apple allegedly using Siri to listen in to conversation without user's consent

2020-05-21
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Siri) that is allegedly recording conversations without user consent, leading to a massive violation of privacy rights, which is a breach of fundamental human rights and data protection laws. The whistleblower's testimony and the description of data collection and potential exploitation confirm direct harm caused by the AI system's use. This meets the criteria for an AI Incident as the AI system's use has directly led to harm (violation of rights).

Siri Privacy Whistleblower Unmasks to Urge Stricter Voice Assistant Privacy Regulation

2020-05-20
Voicebot.ai
Why's our monitor labelling this an incident or hazard?
The AI system (Siri voice assistant) was used to collect and process audio recordings, which were then accessed by contractors without adequate user consent or safeguards, leading to violations of privacy rights. This is a direct harm to human rights and privacy, fulfilling the criteria for an AI Incident. The whistleblower's revelations and the ongoing concerns about insufficient changes by Apple indicate that harm has occurred and continues, rather than just a potential risk. Therefore, this event is classified as an AI Incident.

Apple whistleblower says Siri is always recording

2020-05-22
USSA News
Why's our monitor labelling this an incident or hazard?
Siri is an AI system that processes natural language voice input. The whistleblower's claim that Siri records users continuously without consent indicates misuse of the AI system's operation, leading to a breach of privacy rights and potentially GDPR violations. This is a direct harm to users' fundamental rights and personal data protection, fitting the definition of an AI Incident under violations of human rights or breach of applicable law. The event is not merely a potential risk but an ongoing or past violation as reported by insiders and whistleblowers, thus qualifying as an AI Incident rather than a hazard or complementary information.

A Bitter Ex-Apple Sub-Contractor in Europe is demanding that action be taken against Apple for basically 'wiretapping entire populations' via Siri

2020-05-20
Patently Apple
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Apple's Siri) that processes voice inputs and recordings. The whistleblower's testimony indicates that the system has been used in a way that violates users' privacy rights by recording conversations without consent, including those of non-users. This constitutes a violation of human rights and data protection laws, fulfilling the criteria for an AI Incident. The harm is realized (not just potential), as unauthorized recordings and data collection have taken place. Although Apple is taking steps to improve privacy protections, the core issue described is an ongoing or past violation, not merely a potential risk or complementary information.

Hey Siri, are you still recording people's conversations despite promising not to do so nine months ago?

2020-05-20
The Register
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Siri) that processes voice inputs using neural networks. The use and development of this AI system have directly led to violations of privacy and data protection rights, which are fundamental human rights. The alleged ongoing recording and transcription without user consent represent a breach of obligations under applicable law, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and involves direct misuse or failure to comply with legal frameworks by the AI system's operators.

Apple Siri is again recording all your intimate conversations

2020-05-22
Information Security Newspaper | Hacking News
Why's our monitor labelling this an incident or hazard?
The Siri voice assistant is an AI system that processes natural language voice inputs to generate responses. The event reveals that Apple collected millions of voice recordings, including sensitive personal conversations, and allowed contractors to listen to them without proper user authorization. This constitutes a violation of user privacy and data protection laws, which are fundamental rights. The involvement of the AI system in collecting and processing these recordings is direct and central to the harm. Therefore, this qualifies as an AI Incident due to violations of human rights and legal obligations related to privacy and data protection.

Ireland's Data Protection Commissioner questions Apple over Siri recordings

2020-05-23
MacDailyNews
Why's our monitor labelling this an incident or hazard?
The event centers on the use of an AI system (Siri) and the handling of user data, which implicates fundamental rights and data protection laws. The whistleblower's complaint and regulatory engagement suggest plausible risks of violations of privacy rights due to the AI system's data collection and review practices. Since no confirmed harm or legal violation has been established yet, and the focus is on regulatory inquiry and potential enforcement, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. The AI system's use could plausibly lead to violations of rights if not properly managed, justifying classification as an AI Hazard.

Siri whistleblower goes public over 'lack of action,' says Apple should face consequences

2020-05-20
MacDailyNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Siri) whose use led to privacy violations through unauthorized listening and grading of user recordings. This constitutes a breach of fundamental rights protected under applicable law (data protection laws in the EU). The whistleblower's revelations and the company's prior practices demonstrate that harm has occurred. Although Apple has made changes, the lack of enforcement and consequences means the harm persists. Hence, this is an AI Incident due to realized harm linked to the AI system's use.

"Lack of Action" on Siri Recordings

2020-05-21
Michael Tsai
Why's our monitor labelling this an incident or hazard?
Siri is an AI system that processes voice inputs to generate responses. The whistleblower's disclosure about the massive collection of data and violation of fundamental rights indicates harm related to privacy and data protection, which are human rights concerns. The lack of action by Apple after these disclosures suggests ongoing harm. Therefore, this event qualifies as an AI Incident due to violations of human rights linked to the AI system's use.

Apple Siri whistleblower pushes EU for more reforms on voice recording

2020-05-21
Anti Corruption Digest
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (voice assistants using AI for transcription and processing). The whistleblower's allegations indicate that the AI system's use led to violations of privacy rights, a breach of fundamental rights protected by law. The harm (privacy invasion) has already occurred, making this an AI Incident rather than a hazard or complementary information. The letter to regulators is a response to the incident but does not change the classification. Hence, this is an AI Incident due to realized harm from AI system use.

Apple Questioned by Irish Regulator Over Siri Audio Recordings

2020-05-22
The Us Posts
Why's our monitor labelling this an incident or hazard?
The event concerns the use of an AI system (Siri) and potential violations of data protection and privacy rights, which fall under violations of human rights or legal obligations. Since the article describes an ongoing investigation and no confirmed harm or breach has been established, this situation represents a plausible risk of harm rather than a realized incident. Therefore, it qualifies as Complementary Information, providing context and updates on regulatory responses to potential AI-related privacy issues, rather than an AI Incident or Hazard.

Siri analysis: Whistleblower demands investigation of Apple

2020-05-21
Bild
Why's our monitor labelling this an incident or hazard?
Apple's Siri is an AI system that processes voice inputs to generate responses. The whistleblower's revelations show that Siri's data collection and analysis practices led to unauthorized recording and processing of sensitive personal information, violating privacy rights and data protection laws. This constitutes a breach of obligations intended to protect fundamental rights, fitting the definition of an AI Incident. The harm is realized, not just potential, as private conversations were recorded and analyzed without consent. The whistleblower's demand for investigation further confirms the seriousness of the issue.

Whistleblower sharply criticizes Apple over Siri: "I listened in on drug deals and sex"

2020-05-23
Focus
Why's our monitor labelling this an incident or hazard?
Siri is an AI system that processes voice inputs to generate responses. The whistleblower reports that Siri often activates without user consent and records private conversations, which are then reviewed by humans, leading to violations of privacy rights. This constitutes a breach of obligations under applicable law protecting fundamental rights. The involvement of the AI system in recording and transmitting these conversations is direct and central to the harm. Therefore, this event qualifies as an AI Incident due to realized harm to users' privacy and rights caused by the AI system's use and data handling.

Whistleblower criticizes unauthorized Siri data collection

2020-05-20
newsORF.at
Why's our monitor labelling this an incident or hazard?
The AI system involved is Siri, a voice assistant that uses AI to process speech. The whistleblower reveals that Apple continued to analyze voice recordings without user consent, which is a breach of privacy and data protection rights. This unauthorized use of AI-processed data has directly led to violations of fundamental rights, qualifying the event as an AI Incident under the framework.

Siri voice recordings: Whistleblower accuses Apple of massive surveillance

2020-05-20
der Standard
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Siri voice assistant) whose use has directly led to violations of privacy and human rights through unauthorized or inadequately consented surveillance and data processing. The whistleblower's testimony indicates that intimate and sensitive information was collected and analyzed, including without proper safeguards, which constitutes harm under the framework's category (c) violations of human rights or breach of obligations under applicable law protecting fundamental rights. The involvement is through the use of the AI system and its data processing practices. Hence, this is an AI Incident rather than a hazard or complementary information.

Siri voice recordings: Whistleblower doubts Apple is keeping its promise

2020-05-22
Golem.de
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Siri) that processes voice data. The whistleblower's claims suggest that the AI system's use has led to violations of privacy rights and possibly other fundamental rights, which fits the definition of harm under AI Incident category (c). The direct or indirect involvement of the AI system in collecting and analyzing sensitive personal data without proper consent constitutes a breach of obligations intended to protect fundamental rights. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information, as the harm is realized and ongoing.

Siri-gate escalates: Apple listened in on sex talk and doctor conversations

2020-05-23
WinFuture.de
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Siri) used to capture user conversations. The misuse and unauthorized human review of these recordings have directly led to violations of privacy rights and data protection laws, which are breaches of fundamental rights. The harm is realized and ongoing, as indicated by the investigation and calls for penalties. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights).

Apple continues massive surveillance via Siri

2020-05-20
Die Presse
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—Apple's Siri voice assistant—that records and transcribes user conversations using AI-based speech recognition and natural language processing. The harm arises from the use of these AI systems to capture private conversations without proper consent, leading to violations of privacy and data protection rights, which are human rights under applicable law. The involvement of external contractors listening to and transcribing sensitive data further exacerbates the harm. This is a direct consequence of the AI system's use and data handling practices, fulfilling the criteria for an AI Incident under violations of human rights and breach of legal obligations protecting privacy.

An Apple whistleblower who had to listen to iPhone users having sex via Siri is now publicly denouncing the tech giant

2020-05-21
Business Insider
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Siri) whose use led to the unauthorized recording and transcription of private conversations, including sensitive and intimate content, without user consent. This is a clear violation of privacy rights and data protection laws, causing harm to individuals' fundamental rights. The whistleblower's testimony confirms that the AI system's operation directly caused these harms. Hence, it meets the criteria for an AI Incident due to violations of human rights and privacy obligations resulting from the AI system's use.

Siri - The friendly spy

2020-05-22
netzpolitik.org
Why's our monitor labelling this an incident or hazard?
The AI system (Siri) is explicitly involved as it records and processes voice data. The event describes direct harm through privacy violations and unauthorized surveillance, which are breaches of fundamental rights protected by law. The whistleblower's revelations and the ongoing data collection practices indicate that the AI system's use and malfunction have led to these harms. Hence, this qualifies as an AI Incident under the framework, specifically under violations of human rights and breach of legal obligations.

Siri - The friendly spy

2020-05-22
Apokalyps Nu!
Why's our monitor labelling this an incident or hazard?
Siri is an AI system using speech recognition and natural language processing. The event involves the use and malfunction of this AI system, which directly led to violations of privacy rights and potential breaches of data protection laws, harming individuals' fundamental rights. The whistleblower's revelations and the ongoing data collection indicate realized harm, not just potential harm. Therefore, this qualifies as an AI Incident due to violations of human rights and privacy breaches caused by the AI system's use and malfunction.

Apple Just Gave 1.5 Billion iPad, iPhone Users A Reason To Leave

2020-05-24
Forbes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Siri) that processes voice recordings using AI transcription and grading. The harm arises from the use of this AI system to collect and process sensitive personal data without user knowledge or consent, violating privacy rights. The whistleblower's testimony confirms ongoing practices despite prior public exposure and apology, indicating direct harm. This fits the definition of an AI Incident as it involves violations of fundamental rights caused by the AI system's use.

Is Siri continuing to listen in despite Apple's apology?

2020-05-25
TechRadar
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system, Apple's Siri, which uses voice recognition and natural language processing to interact with users. The whistleblower reveals that Siri has been recording conversations beyond user consent, including sensitive personal information, which constitutes a violation of privacy rights. This harm is realized and ongoing, not merely potential, as users' fundamental rights are being breached through the AI system's use. Apple's prior apology and updates have not fully remedied the issue, reinforcing the classification as an AI Incident rather than a hazard or complementary information. The involvement of the AI system in causing harm through unauthorized data collection and privacy violations fits the definition of an AI Incident under violations of human rights and breach of applicable law protecting fundamental rights.

Apple's Siri Continues to Listen to Conversations, Despite Last Year's Controversy

2020-05-25
Tech Times
Why's our monitor labelling this an incident or hazard?
Siri is an AI system that processes voice commands and recordings. The whistleblower's claims and evidence show that Siri was used to collect private conversations without adequate user consent, leading to violations of privacy rights, a fundamental human right. The continued risk and lack of sufficient remedial action imply ongoing harm. The involvement of the AI system in this privacy breach and the resulting harm to users' rights meet the criteria for an AI Incident, as the AI system's use directly led to violations of fundamental rights.

Is Siri continuing to listen in despite Apple's apology? (Raj Narayan/TechRadar)

2020-05-25
Tech Investor News
Why's our monitor labelling this an incident or hazard?
Siri is an AI system that processes voice inputs to provide assistance. The whistleblower's claim that Apple continues to collect and transcribe user conversations without proper consent indicates a violation of privacy rights, which falls under human rights violations as defined. Since the breach is ongoing and has directly led to harm in terms of privacy violations, this qualifies as an AI Incident.

Apple exposed: Siri eavesdrops on users; 900 million iPhones could be hacked

2020-05-25
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Siri, an AI system, being used to record users without consent, which constitutes a violation of privacy rights (harm category c). This is a direct harm caused by the AI system's use. Additionally, the zero-day attack affects iPhones that run AI-enabled operating systems and services, enabling hackers to bypass security, which can lead to harm to users' data and privacy. Both issues involve realized harms linked to AI system use and security vulnerabilities. Hence, the event is best classified as an AI Incident rather than a hazard or complementary information.

Forbes: "Apple just gave 1.5 billion iPhone and iPad users a reason to leave"

2020-05-25
VietNamNet News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Siri) that processes voice commands and records sensitive user data without consent, leading to violations of privacy rights and potential breaches of legal obligations. The whistleblower's testimony and the described practices indicate realized harm to users' rights and privacy, fulfilling the criteria for an AI Incident. The involvement of AI in processing and storing these voice commands is explicit, and the harm is direct and ongoing.

A secret that could make billions of users "boycott" Apple devices

2020-05-27
danviet.vn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes Siri, an AI system, recording and collecting private audio data without user consent, including sensitive information. This constitutes a violation of privacy and data protection rights, which are fundamental human rights. The AI system's use directly leads to harm through unauthorized surveillance and data collection. The involvement is through the use of the AI system (Siri) in a manner that breaches legal and ethical standards. Hence, this event meets the criteria for an AI Incident due to realized harm to users' rights and privacy.