NPR Host Sues Google Over Alleged AI Voice Likeness Theft

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

David Greene, a former NPR host, is suing Google, alleging that its AI tool NotebookLM uses a synthetic podcast voice closely resembling his own without his consent. Greene claims this infringes his rights and misappropriates his professional identity. Google denies the allegations, stating the voice is based on a paid actor.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (NotebookLM) that generates synthetic voices. The lawsuit alleges that the AI system replicated Greene's voice without consent, which constitutes a violation of his rights and intellectual property. This is a direct harm caused by the AI system's use, fulfilling the criteria for an AI Incident. The harm is realized (not just potential), as Greene experiences personal and reputational harm and is pursuing legal action. The case also raises broader legal and ethical questions about AI voice replication and rights, but the primary focus is on the realized harm to Greene from the AI system's use.[AI generated]
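The rationales throughout this page apply one recurring triage rubric: is an AI system explicitly involved, has harm already been realized, or is harm merely plausible? As an illustration only, that rubric can be sketched as a small decision function; the class, field names, and ordering below are assumptions reconstructed from the rationales, not the monitor's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Event:
    involves_ai_system: bool  # an AI system is explicitly part of the event
    harm_realized: bool       # harm (e.g. a rights violation) has already occurred
    harm_plausible: bool      # harm could plausibly result from the AI system's use

def classify(event: Event) -> str:
    # Triage order mirrors the rationales: no AI system -> out of scope;
    # realized harm -> Incident; plausible harm -> Hazard; otherwise the
    # item merely documents the societal or legal response.
    if not event.involves_ai_system:
        return "Unrelated"
    if event.harm_realized:
        return "AI Incident"
    if event.harm_plausible:
        return "AI Hazard"
    return "Complementary Information"

# The Greene lawsuit as most rationales frame it: AI system present,
# rights-violation harm treated as realized by the claimant.
print(classify(Event(involves_ai_system=True, harm_realized=True, harm_plausible=True)))
# -> AI Incident
```

Note how the same facts yield different labels depending on whether an outlet's coverage treats the harm as realized or merely alleged, which is exactly the disagreement visible between the entries below.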
AI principles
Privacy & data governance; Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Other

Harm types
Human or fundamental rights; Reputational

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

He spent decades perfecting his voice. Now he says Google stole it.

2026-02-15
Washington Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that generates synthetic voices. The lawsuit alleges that the AI system replicated Greene's voice without consent, which constitutes a violation of his rights and intellectual property. This is a direct harm caused by the AI system's use, fulfilling the criteria for an AI Incident. The harm is realized (not just potential), as Greene experiences personal and reputational harm and is pursuing legal action. The case also raises broader legal and ethical questions about AI voice replication and rights, but the primary focus is on the realized harm to Greene from the AI system's use.

NPR's David Greene is suing Google over its AI podcast voice.

2026-02-15
The Verge
Why's our monitor labelling this an incident or hazard?
The event describes the use of a system that replicates a human voice, which is an AI system by definition. The lawsuit claims the replication was unlawful, constituting a violation of intellectual property and personal rights. This is a direct harm tied to the AI system's use, so the event qualifies as an AI Incident.

Familiar Voice Sparks Legal Showdown Between NPR Host And Google

2026-02-15
The Daily Wire
Why's our monitor labelling this an incident or hazard?
The event centers on the use of an AI system (NotebookLM) that generates voices resembling real individuals, raising concerns about unauthorized use of likeness and potential copyright or personal rights violations. While the lawsuit indicates a dispute over harm, the article does not confirm that harm has been legally established or that the AI system has directly caused harm yet. The main focus is on the legal and societal response to AI voice synthesis and its implications, making this a case of Complementary Information rather than a confirmed AI Incident or AI Hazard.

Longtime NPR host David Greene sues Google over NotebookLM voice | TechCrunch

2026-02-15
TechCrunch
Why's our monitor labelling this an incident or hazard?
The article describes a lawsuit alleging that an AI-generated voice resembles a real person's voice, which implicates AI voice synthesis technology. The AI system is clearly involved, and the alleged harm relates to rights violations. However, the harm is not confirmed or demonstrated as having occurred; it is the subject of a legal claim. The article also mentions Google's denial and references a similar past dispute, indicating this is part of broader societal and legal responses to AI voice technology. Thus, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

He spent decades perfecting his voice, but now he says Google stole it

2026-02-15
The Philadelphia Inquirer
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (NotebookLM) that generates synthetic voices, allegedly replicating a real person's voice without consent. This use of AI has led to a lawsuit claiming violation of rights and potential harm to the individual's reputation and economic interests. The AI system's role is pivotal in the alleged harm, fulfilling the criteria for an AI Incident involving violations of intellectual property and personal rights. The presence of a legal case and the described personal and economic impacts confirm that harm has occurred or is ongoing, distinguishing this from a mere hazard or complementary information.

Public radio host David Greene sues Google, says AI tool stole his voice

2026-02-15
The Detroit News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that generates synthetic voices. The alleged harm is a violation of rights (voice likeness used without permission), which fits the definition of harm under (c) violations of human rights or breach of obligations protecting intellectual property and personal rights. The lawsuit and public concern indicate that the AI system's use has directly led to this harm. The case is not merely a potential risk but an ongoing dispute over realized harm, making it an AI Incident rather than a hazard or complementary information.

'Hey, that's my voice!' Veteran broadcaster claims Google stole his voice for AI tool

2026-02-16
Digital Trends
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's NotebookLM) that generates audio content using a voice similar to a real individual without consent, which is a direct use of AI technology. The alleged unauthorized use of Greene's voice constitutes a violation of his rights, fulfilling the criterion of harm under the framework. Additionally, the potential for the AI-generated voice to be used to spread conspiracy theories further supports the presence of harm. Since the harm is realized (the voice is already used in the AI tool) and legal action is underway, this is an AI Incident rather than a hazard or complementary information.

David Greene Sues Google Over AI Podcast Voice Resemblance - News Directory 3

2026-02-15
News Directory 3
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's NotebookLM AI tool) that generates podcast voices. The lawsuit claims that the AI-generated voice replicates David Greene's voice without consent, which constitutes a violation of his right of publicity and intellectual property rights. This is a direct harm caused by the AI system's use, impacting Greene's professional identity and rights. Therefore, the event meets the criteria for an AI Incident due to realized harm linked to the AI system's use.

Former NPR Host David Greene Accuses Google Of Stealing His Voice For AI Podcast Tool: 'It's Eerie'

2026-02-17
NDTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's AI podcast tool) that allegedly used David Greene's voice without consent, which is a violation of intellectual property and personal rights. The use of AI to synthesize a voice based on his speech patterns and verbal tics directly relates to the AI system's development and use. The harm is realized as Greene has filed a lawsuit claiming unauthorized use, indicating a breach of rights has occurred. Google's denial does not negate the presence of an AI system or the alleged harm. Hence, this is classified as an AI Incident due to the direct involvement of AI in causing a rights violation.

Former radio journalist sues Google, says AI tool cloned his voice without consent

2026-02-16
MoneyControl
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that uses synthesized voice technology, which is alleged to have cloned a person's voice without consent. This constitutes a violation of rights (voice rights and consent), which is a recognized harm under the framework. The lawsuit indicates that harm has already occurred or is ongoing, as the voice cloning has led to confusion and potential reputational damage. The involvement of the AI system in generating the voice is central to the incident. Hence, this is an AI Incident rather than a hazard or complementary information.

Longtime NPR host accuses Google of stealing his voice for AI podcast...

2026-02-16
New York Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's NotebookLM) that uses synthesized voices to generate podcasts. The plaintiff alleges that his voice was used without permission, constituting a violation of intellectual property rights, a recognized harm under the AI Incident framework. The involvement of the AI system's use (the podcast tool generating content with the disputed voice) directly leads to the alleged harm. Despite Google's denial, the lawsuit and forensic analysis cited provide sufficient grounds to classify this as an AI Incident rather than a mere hazard or complementary information. The event is not unrelated, as it centers on AI use and its legal and ethical implications.

"A surreal moment where I felt like I was listening to myself": famous radio journalist sues Google for misappropriating his voice

2026-02-16
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The article clearly describes an AI system (Google's NotebookLM) generating audio content that replicates a person's voice without consent, which is a direct use of AI technology. The journalist's voice was allegedly used to train the AI, and the AI-generated content has been distributed, causing harm to the journalist's personal rights and potentially his reputation. This meets the criteria for an AI Incident as the AI system's use has directly led to a violation of rights and harm to the individual. The legal action and the described harm confirm that this is not merely a potential risk but an actual incident.

"My voice is perhaps the most important part of who I am": between identity theft and fear of fake news, American radio journalist David Greene accuses Google of using his voice for NotebookLM

2026-02-16
BFMTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that uses AI-generated voices to produce podcasts. The journalist alleges unauthorized use of his voice, which constitutes a violation of his rights (intellectual property and personal rights) and potential harm through misinformation dissemination. The AI system's use directly leads to these harms, fulfilling the criteria for an AI Incident. The legal complaint and forensic analysis support the direct involvement of AI in causing harm. Hence, this is not merely a potential hazard or complementary information but a realized incident involving AI harm.

Journalist David Greene sues Google, alleges AI Tool cloned his voice without permission, company responds

2026-02-16
The Financial Express
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's NotebookLM with AI-generated synthetic voices). The lawsuit claims that the AI system's use of Greene's voice without consent has directly led to harm, specifically a violation of his rights and potential reputational and economic harm. This fits the definition of an AI Incident under violations of human rights or breach of intellectual property rights. The presence of forensic audio analysis supporting the claim strengthens the direct link between the AI system's use and the harm. Although Google denies the claim, the event centers on an alleged realized harm caused by the AI system's use, not just a potential or future risk. Therefore, it is classified as an AI Incident.

Popular radio show host David Greene claims Google stole his voice; Google responds - The Times of India

2026-02-16
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's NotebookLM tool) that generates speech mimicking a human voice. The lawsuit alleges harm in the form of violation of intellectual property and personal rights due to unauthorized use of the plaintiff's voice, which fits the definition of an AI Incident under violations of intellectual property rights and possibly personal rights. The harm is realized as the plaintiff claims loss of control and potential economic harm from the unauthorized use. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Former NPR Host Accuses Google Of Copying His Voice For AI Offering

2026-02-16
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's NotebookLM with AI-generated podcast voices) that allegedly used the plaintiff's voice data without consent, leading to a legal claim of misappropriation and harm to the individual's rights and livelihood. This is a direct harm caused by the AI system's use, meeting the criteria for an AI Incident. The presence of the AI system is explicit, the harm is realized (lawsuit filed for unauthorized use), and the harm relates to violation of intellectual property and personal rights.

Google responds to claim that it stole David Greene's voice

2026-02-16
Mashable
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that autogenerates podcasts using synthetic voices, which is an AI system by definition. The complaint alleges that the AI system's use of Greene's voice without authorization has caused harm in the form of violation of rights and unjust enrichment, which fits the definition of an AI Incident. The harm is realized as the complaint has been filed and the issue is active, not merely a potential future harm. Therefore, this qualifies as an AI Incident.

"I was completely freaked out": an American sues Google for cloning his voice in NotebookLM's podcasts -- Frandroid

2026-02-16
Frandroid
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (NotebookLM) that generates podcasts using a cloned voice of a person without consent. This constitutes a violation of intellectual property and personal rights, which falls under harm category (c) - violations of human rights or breach of obligations protecting intellectual property rights. The AI system's use directly led to this harm, making it an AI Incident rather than a hazard or complementary information.

NPR Host Sues Google Claiming AI Podcast Tool Stole His Voice: 'Very Weird Experience!'

2026-02-16
Mediaite
Why's our monitor labelling this an incident or hazard?
The event involves an AI system replicating a human voice without consent, which relates to rights violations. However, the article reports a lawsuit and claims rather than a confirmed incident of harm. The AI system's role is central but the harm is alleged and under legal consideration. The main focus is on the legal challenge and Google's response, making this a societal and governance response to AI use. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

NotebookLM under fire: Popular radio host says Google stole his voice

2026-02-16
Android Authority
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that uses a synthesized voice feature. The lawsuit alleges that the AI system's use of a voice similar to David Greene's without permission constitutes a violation of rights, which fits the definition of harm under AI Incident (c). Although Google denies the claim, the event centers on an alleged realized harm caused by the AI system's use, not just a potential risk. Therefore, this is classified as an AI Incident.

Radio host accuses Google of stealing his voice for an AI

2026-02-16
Courrier international
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's NotebookLM) that synthesizes voice content. The complaint alleges that the AI system's use of a voice similar to the plaintiff's without consent has caused harm to his reputation and economic interests, which constitutes a violation of intellectual property and personal rights under applicable law. This meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to a breach of rights and harm. The ongoing legal case further confirms the seriousness of the incident. The event is not merely a potential risk or a general update but a concrete claim of harm caused by AI use.

Ex-NPR Host Sues Google, Claims It Used His Voice for AI

2026-02-16
TheWrap
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's NotebookLM) that generates audio using a voice allegedly taken without consent. The harm is realized in the form of unauthorized use of personal voice data, which is a violation of rights and causes personal distress. The AI system's development and use directly led to this harm, meeting the criteria for an AI Incident. The presence of a lawsuit and the detailed claim of harm further support this classification.

Radio Host Sues Google Over AI Voice He Says Mimics Him

2026-02-16
Newser
Why's our monitor labelling this an incident or hazard?
The AI system (NotebookLM) is used to generate content with a voice that mimics the plaintiff's voice without permission, constituting a violation of intellectual property and personal rights. This is a direct harm related to the AI system's use, as it affects the individual's rights and control over their own voice and likeness. Therefore, this qualifies as an AI Incident due to the violation of rights caused by the AI system's use.

Why Google has been sued by NPR host David Greene

2026-02-16
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used for generating podcast audio that allegedly replicates a human voice without consent, which constitutes a violation of intellectual property or personal rights. This is a direct harm related to the use of an AI system. Since the lawsuit is active and the claim is about harm caused by the AI system's use, this qualifies as an AI Incident under violations of rights. The article also situates this case within a broader context of similar incidents, reinforcing the classification as an AI Incident rather than a hazard or complementary information.

NPR host sues Google after its AI podcast tool allegedly stole his voice

2026-02-16
Conservative News Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's NotebookLM) that uses AI-generated voice replication technology. The harm is the alleged unauthorized use of the NPR host's voice, which constitutes a violation of intellectual property and personal rights. This harm is directly linked to the AI system's use, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations protecting intellectual property rights. The ongoing lawsuit and the described impact on the individual confirm that harm has occurred, not just a potential risk, so it is not merely a hazard or complementary information.

NPR Host David Greene Sues Google Over Alleged Voice Resemblance in NotebookLM

2026-02-16
Windows Report | Error-free Tech Life
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (NotebookLM's AI-generated voice). The lawsuit claims that the AI voice unlawfully resembles a real person's voice, which implicates potential violations of personal rights and possibly intellectual property rights. Since the harm is alleged and under legal dispute without confirmation of actual harm or impact, this situation fits the definition of an AI Hazard — an event where AI use could plausibly lead to an AI Incident (violation of rights). It is not Complementary Information because the main focus is the legal claim of potential harm, nor is it an AI Incident because no harm has been established yet. It is not unrelated because the AI system is central to the issue.

Ex-NPR Host Claims Google Stole His Voice For NotebookLM AI Podcast Tool

2026-02-16
HotHardware
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM's AI podcast feature) that generates synthetic voices. The alleged harm is a violation of intellectual property and personal rights, as the AI voice closely replicates David Greene's voice without consent or compensation. This fits the definition of an AI Incident under violations of human rights or breach of obligations protecting intellectual property rights. The lawsuit and forensic analysis indicate that the AI system's development or use directly led to this harm. Hence, the event is classified as an AI Incident.

NPR host David Greene's voice being used for NotebookLM podcasts? Case filed against Google

2026-02-16
Mashable ME
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that generates audio content using a voice model allegedly cloned from David Greene's voice without consent. This use of AI has directly led to a legal claim alleging violation of intellectual property and personality rights, which falls under harm to fundamental and intellectual property rights. The involvement of AI in generating the voice and the resulting lawsuit for damages and injunction clearly indicate realized harm caused by the AI system's use. Therefore, this qualifies as an AI Incident.

Google responds to claim that it stole NPR host's voice

2026-02-16
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that generates synthetic audio content using a voice allegedly copied from a real person without consent. The alleged harm is a violation of the right to publicity and unfair competition law, which are legal rights protecting personal likeness and intellectual property. Since the lawsuit claims harm has occurred due to the AI system's use, this qualifies as an AI Incident under the framework, specifically a violation of rights (c). The denial by Google does not negate the classification, as the event centers on the claim and its implications.

"Really Troubling": Former NPR Host Files Suit Alleging AI Tool Uses His Voice

2026-02-16
La Voce di New York
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's NotebookLM) that generates audio content using a voice closely resembling a real person's voice without permission. This use of AI directly relates to a violation of intellectual property and personality rights, which falls under harm category (c) in the framework. The lawsuit and the described harm are concrete and ongoing, not merely potential. Hence, this is an AI Incident rather than a hazard or complementary information. The presence of the AI system, the use of the AI-generated voice, and the alleged harm to the individual's rights justify this classification.

NPR Host David Greene Suing Google After Claiming AI Tool Replicated His Voice

2026-02-16
Barrett Media
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that uses AI-generated audio, and the claim is that it replicated a person's voice without consent, which constitutes a violation of rights (intellectual property or personality rights). This fits the definition of an AI Incident because the AI system's use has directly led to a claimed harm (violation of rights). Although the harm is contested, the lawsuit and public claim indicate that the harm is considered realized by the claimant. Therefore, this event should be classified as an AI Incident.

NPR host David Greene sues Google over NotebookLM voice

2026-02-16
Neowin
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that generates synthetic voices, which is an AI application. The lawsuit alleges that the AI system's development or use involved unauthorized use of David Greene's voice, constituting a violation of his rights and intellectual property. This is a direct harm caused by the AI system's use, meeting the criteria for an AI Incident. The event is not merely a potential risk or a general update but a concrete legal claim of harm resulting from the AI system's operation.

David Greene Sues Google: AI Voice Theft Claim - News Directory 3

2026-02-16
News Directory 3
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NotebookLM) that uses AI-generated voice outputs. The lawsuit claims that the AI system unlawfully replicated Greene's voice, infringing on his intellectual property rights, which is a breach of legal protections. The harm is realized as Greene alleges unauthorized use and potential misuse of his voice, which is a direct consequence of the AI system's outputs. The presence of the AI system, the nature of its use, and the direct link to harm (intellectual property violation and potential reputational harm) meet the criteria for an AI Incident rather than a hazard or complementary information.

NPR Host Sues Google, Claiming Its AI Tool Stole His Voice! | LesNews

2026-02-16
LesNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's NotebookLM) that uses synthesized voice technology to generate audio content. The plaintiff alleges that this AI system's use of his voice without consent constitutes a violation of his rights, which is a harm under the framework's category (c) regarding violations of human rights or legal obligations protecting fundamental rights. The involvement of the AI system in the harm is direct, as the AI-generated voice is the basis of the claim. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Journalist sues Google over AI misappropriation of his voice - The Media Leader FR

2026-02-16
The Media Leader FR - N°1 sur les décideurs médias
Why's our monitor labelling this an incident or hazard?
The article describes a concrete case where an AI system (NotebookLM) is alleged to have reproduced a person's voice without authorization, leading to a lawsuit. The journalist claims harm to his professional identity and reputation, which is a violation of personal rights. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm with legal consequences.

A host sues Google over the use of his voice in NotebookLM: "I was completely scared"

2026-02-16
20 minutos
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology (voice synthesis in NotebookLM) that generated audio resembling the plaintiff's voice without consent, leading to a legal claim of rights infringement. This constitutes a violation of personal rights linked to the AI system's use, fulfilling the criteria for an AI Incident. The harm is realized (the plaintiff is distressed and has filed a lawsuit), and the AI system's use is central to the event. Google's denial does not negate the incident classification as the dispute itself arises from AI use causing alleged harm.

Google Sued by Former NPR Host Over NotebookLM AI Voice

2026-02-17
CNET
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that uses AI-generated voices. The lawsuit alleges unauthorized use of a person's voice to train the AI, which is a violation of intellectual property and personal rights, fitting the definition of harm under (c) violations of human rights or breach of obligations protecting intellectual property rights. The AI system's use directly led to the alleged harm, making this an AI Incident. Google's denial does not negate the classification since the event centers on the claim and its implications.

A radio host is suing Google for an AI-generated voice he claims sounds suspiciously like him: 'It's this eerie moment where you feel like you're listening to yourself'

2026-02-17
pcgamer
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's NotebookLM AI-generated voice feature) and alleges harm related to unauthorized use of a person's voice, which implicates intellectual property rights and economic harm. However, the harm is currently alleged and under legal review, with no confirmed proof that the AI system used the plaintiff's voice data. The article focuses on the legal dispute and its implications rather than reporting a confirmed AI Incident or a plausible future hazard. Therefore, it fits the definition of Complementary Information, as it updates on societal and governance responses to AI use and potential harms without confirming an incident or hazard.

Creators Are Fighting Back as AI Mimics Their Voices -- This Case Could Set the Rules

2026-02-17
Inc.
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as Google's AI produces synthetic audio mimicking a real person's voice. The complaint alleges unauthorized use without permission or compensation, which constitutes a violation of intellectual property rights and possibly personal rights. This is a direct harm caused by the AI system's use, fitting the definition of an AI Incident due to breach of obligations protecting intellectual property and personal rights.

Radio host sues Google over alleged AI cloning of his voice

2026-02-17
Merca2.0 Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (NotebookLM) that uses synthetic voice generation technology. The legal claim centers on the AI's use of vocal patterns that closely resemble a real person's voice without consent, which constitutes a violation of intellectual property and personal rights. This is a direct harm caused by the AI system's use, fulfilling the criteria for an AI Incident under violations of human rights or intellectual property rights. The event is not merely a potential risk or a general discussion but involves an actual claim of harm resulting from the AI system's outputs.

David Greene, former NPR host, sues Google over his voice in NotebookLM

2026-02-15
Cadena 3 Argentina
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) generating synthetic voices. The plaintiff claims that the AI-generated voice is based on his own voice, implying unauthorized use and potential violation of his rights. This is a direct harm linked to the AI system's use, fitting the definition of an AI Incident under violations of human rights or intellectual property rights. The presence of a legal dispute further supports the classification as an incident rather than a hazard or complementary information.

Google sued for allegedly using a presenter's voice to train an AI product

2026-02-18
www.diariolibre.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) whose training process allegedly used a person's voice without permission, constituting a violation of intellectual property rights. This is a direct harm caused by the AI system's development and use. The legal complaint and forensic analysis support the claim that the AI system's development led to this harm. Therefore, this is an AI Incident rather than a hazard or complementary information.

Google sued for allegedly using a presenter's voice to train an AI product

2026-02-18
Listin diario
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NotebookLM) that uses voice data to generate audio summaries. The plaintiff alleges unauthorized use of his voice to train this AI, which is a breach of intellectual property and personal rights. The AI system's development and use directly led to this alleged harm. Although the harm is legal and rights-based rather than physical, it fits the definition of harm under (c) violations of human rights or breach of obligations under applicable law protecting intellectual property rights. Hence, this is an AI Incident.

Google sued for allegedly using a presenter's voice to train an AI product

2026-02-18
Zócalo Saltillo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that uses voice data to generate audio summaries. The plaintiff alleges unauthorized use of his voice for training the AI, which constitutes a violation of intellectual property or personal rights. This is a direct harm linked to the AI system's development and use. The legal complaint and forensic analysis support the claim that the AI system's training involved the plaintiff's voice without consent, meeting the criteria for an AI Incident under violations of rights.

Google denies use of David Greene's voice in AI following lawsuit filed in Santa Clara

2026-02-18
UDG TV
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (NotebookLM) trained with voice data allegedly without consent, which constitutes a breach of intellectual property or personal rights. The harm is realized as the unauthorized use of Greene's voice for AI training and product development. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of rights (c). The legal complaint and forensic analysis support the claim of harm, and the event is not merely a potential risk or complementary information but an active dispute over realized harm.

Former NPR program host sues Google for millions over AI copying of his voice

2026-02-16
Gizmodo en Español
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Google's AI system was trained on hours of David Greene's voice recordings without his consent, leading to the creation of synthetic audio that imitates his voice. This unauthorized use of his voice constitutes a violation of intellectual property and personal rights, which are recognized harms under the AI Incident definition (violation of rights). The AI system's development and use directly led to this harm, as alleged in the lawsuit. Hence, this is an AI Incident rather than a hazard or complementary information.

Google accused of cloning a radio host's voice with AI

2026-02-17
Siècle Digital
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that generates synthetic voices, and the use of this AI system has directly led to a legal complaint alleging unauthorized cloning of a person's voice, which is a violation of personal rights. The harm is realized in the form of alleged violation of the individual's rights to their voice and image, which is a recognized form of harm under the framework. The presence of an AI system, the direct use of AI for voice cloning, and the resulting legal action for rights violation justify classification as an AI Incident.

NPR Host Sues Google Over NotebookLM Voice Replication

2026-02-17
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM powered by Gemini AI) that generates synthetic voices. The lawsuit claims that the AI-generated voice closely mimics Greene's unique vocal characteristics without authorization, constituting a violation of his rights. This is a direct harm related to intellectual property and personal rights, fitting the definition of an AI Incident under violations of human rights or intellectual property rights. The presence of the AI system, the use of AI-generated voice replication, and the alleged harm to Greene's rights and livelihood justify classification as an AI Incident.

NPR Host Sues Google, Claims AI Used His Voice Without Permission

2026-02-17
News Directory 3
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that uses AI-generated voice synthesis technology. The lawsuit alleges that the AI system was trained using the plaintiff's voice without consent, constituting a violation of intellectual property rights, which is a recognized form of harm under the AI Incident definition. The harm is realized as the plaintiff has filed a legal complaint claiming unauthorized use and potential damage. The AI system's development and use are central to the incident, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The event is not merely a general AI-related news item or a response to a past incident but a current legal dispute over alleged harm caused by AI use.

David Greene, NPR presenter, sues Google for cloning his voice

2026-02-17
Business AM - FR
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (voice cloning technology) that was used without consent to create a synthetic voice resembling a real person, leading to a legal claim of intellectual property rights violation. This is a direct harm related to the use of an AI system, fulfilling the criteria for an AI Incident under violations of intellectual property rights. The lawsuit and potential consequences confirm that harm has occurred or is ongoing, not just a potential risk. Hence, it is not merely a hazard or complementary information but an incident.

He heard his own voice in Google's AI and couldn't believe it: now he's taking them to court

2026-02-18
Libertad Digital
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (NotebookLM) that uses voice data to generate audio, which fits the definition of an AI system. The lawsuit alleges unauthorized use of the plaintiff's voice recordings in training the AI, constituting a violation of intellectual property rights and personal rights, which is a harm under category (c) of AI Incidents. The harm has already occurred as the AI system was developed and deployed using the contested data. Hence, this is an AI Incident rather than a hazard or complementary information.

He spent decades perfecting his voice. Now he says Google stole it.

2026-02-18
IOL
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that generates synthetic voices. The lawsuit alleges that the AI system's use of a voice resembling David Greene's without consent constitutes a violation of his rights, which is a harm under the framework's category (c) violations of human rights or breach of obligations under applicable law (intellectual property and personal rights). The harm is realized, not just potential, as Greene claims unauthorized use and personal and economic harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Ex-NPR Host Sues Google Over AI Voice Cloning of His Voice

2026-02-18
News Directory 3
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM) that uses voice cloning technology, which is an AI application. The lawsuit alleges that the AI system was trained using the plaintiff's voice without authorization, constituting a breach of intellectual property and personal likeness rights. This is a direct harm related to the AI system's development and use. The harm is realized in the form of legal claims for unauthorized use and potential financial and personal damage. Therefore, this qualifies as an AI Incident due to violation of rights caused by the AI system's development and use.

Famous broadcaster accuses Google of cloning his voice with AI

2026-02-16
قناة العربية
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (voice synthesis in NotebookLM) whose use allegedly infringes on the broadcaster's rights by replicating his voice without consent. This constitutes a violation of intellectual property and personal identity rights, which falls under harm category (c) "Violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labor, and intellectual property rights." Since the broadcaster has filed a lawsuit claiming harm caused by the AI system's use, this is a realized harm directly linked to the AI system's use. Therefore, this qualifies as an AI Incident.

Famous NPR broadcaster sues Google, accusing it of stealing his voice in NotebookLM

2026-02-16
صدى البلد
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (voice synthesis in NotebookLM) whose use allegedly infringes on the broadcaster's rights by replicating his voice without consent. This is a direct harm related to intellectual property and personal identity rights, fitting the definition of an AI Incident under violations of human rights or intellectual property rights. The presence of a lawsuit and the nature of the claim confirm that harm is realized or at least claimed, not merely potential. Therefore, the event is best classified as an AI Incident.

American broadcaster sues Google over "cloning of his voice" in an AI voice tool

2026-02-16
akhbarona.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (voice AI tool) that allegedly cloned a person's voice without consent, which is a direct use of AI technology leading to a claimed violation of intellectual property rights. This constitutes harm under the AI Incident category (violation of intellectual property rights). The presence of a lawsuit and the broadcaster's claim of harm confirm that this is not merely a potential risk but an actual incident involving AI misuse. Hence, the classification is AI Incident.

Broadcaster accuses Google of stealing his voice for AI

2026-02-16
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google's Notebook LM) that uses synthesized voice outputs based on a person's voice without consent, which is a direct use of AI technology. The harm is a violation of intellectual property rights, as the voice was allegedly copied and used without permission or compensation. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of rights. The ongoing lawsuit and the comparison to a similar past case reinforce the classification as an AI Incident rather than a hazard or complementary information.

"استنسخوا نبرتى".. مذيع معروف يهاجم جوجل بسبب الذكاء الاصطناعى - اليوم السابع

2026-02-16
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's NotebookLM) that generates audio content using AI voices. The broadcaster alleges harm related to violation of personal rights (voice identity), which falls under violations of human rights or intellectual property rights. Since the claim is about the AI system's use leading to a potential or ongoing violation of rights, and a lawsuit has been filed, this constitutes an AI Incident due to realized or alleged harm linked to the AI system's use.

After the Scarlett crisis, Google faces accusations of stealing a famous broadcaster's voice

2026-02-16
24.ae
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (voice synthesis AI) used in a product that allegedly replicates a person's voice without consent, which constitutes a violation of intellectual property rights if proven. The legal claim indicates a potential or ongoing harm related to rights violations. Since the article focuses on the accusation and legal proceedings rather than confirmed harm or resolution, it fits best as Complementary Information, providing context and updates on AI-related legal and ethical challenges. It is not an AI Incident because the harm is not confirmed or established yet, nor is it an AI Hazard since the event is about an ongoing dispute rather than a plausible future harm. It is not unrelated because AI voice synthesis is central to the issue.

American broadcaster sues Google, accusing it of cloning his voice with AI technology

2026-02-16
S A N A
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's NotebookLM) that uses AI-generated voices, which is explicitly mentioned. The plaintiff alleges that the AI system's use of a voice that closely resembles his own constitutes a violation of his rights, specifically intellectual property and personal identity rights. This is a direct harm caused by the AI system's use, meeting the criteria for an AI Incident. Although Google denies the claim, the lawsuit indicates that harm has occurred or is claimed to have occurred due to the AI system's use. Therefore, this event qualifies as an AI Incident due to the alleged violation of rights caused by the AI system's use.

Google accused of stealing a famous broadcaster's voice

2026-02-17
صحيفة صدى الالكترونية
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (voice synthesis in 'Notebook LM') that has been used to replicate a person's voice without permission, leading to a legal claim of rights violation. The AI system's use has directly led to a harm (violation of intellectual property and personal rights). This fits the definition of an AI Incident, as the AI system's use has caused a breach of obligations under applicable law protecting intellectual property and personal rights. The harm is realized, not just potential, and the AI system's role is pivotal in causing this harm.

Broadcaster David Greene sues Google for stealing his voice

2026-02-17
صحيفة الخليج
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice synthesis in an AI tool) that allegedly caused harm by infringing on the broadcaster's intellectual property and personal rights. This constitutes a violation of rights due to the AI system's use, meeting the criteria for an AI Incident. The dispute highlights direct harm resulting from the AI system's outputs, not just a potential or hypothetical risk, so it is not an AI Hazard or Complementary Information. Therefore, the classification is AI Incident.

Broadcaster accuses Google of stealing his voice for AI

2026-02-17
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (Google's Notebook LM) that generates audio summaries using a voice allegedly copied without consent from a former broadcaster. This constitutes a violation of intellectual property and personal rights, which is a recognized harm under the AI Incident definition. The involvement of AI in generating the voice output is explicit, and the harm (unauthorized use of voice) is realized and ongoing, as evidenced by the lawsuit. The case is similar to a prior incident involving Scarlett Johansson, reinforcing the classification as an AI Incident rather than a hazard or complementary information.

مذيع أميركي يقاضي "غوغل" بتهمة استنساخ صوته عبر أداة ذكاء اصطناعي

2026-02-18
موقع عرب 48
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI tool ('Notebook LM') that uses voice cloning technology, which is an AI system. The use of the AI system's output (cloned voice) without the individual's consent constitutes a violation of intellectual property and personal rights. This harm has already materialized as the individual has filed a lawsuit, indicating direct harm caused by the AI system's use. Therefore, this event meets the criteria for an AI Incident due to realized harm linked to the AI system's use.

Radio personality sues Google for theft of his voice

2026-02-15
Omni
Why's our monitor labelling this an incident or hazard?
The AI system (Notebook LM) is explicitly mentioned as generating a voice that sounds like David Greene's, and the claim is that this voice was used without permission, constituting a violation of rights. This fits the definition of an AI Incident under category (c) for violations of human rights or breach of obligations under applicable law protecting intellectual property rights. The event describes realized harm (unauthorized use of voice likeness) and ongoing legal action, confirming it as an AI Incident rather than a hazard or complementary information.

Radio personality sues Google, accusing the tech giant of voice theft

2026-02-16
Dagens Media
Why's our monitor labelling this an incident or hazard?
The event describes an alleged violation of intellectual property or personal rights due to the use of an AI-generated voice that closely mimics a real person's voice without permission. This fits the definition of an AI Incident because it involves an AI system's use leading to a violation of rights (intellectual property or personal rights). Although the harm is currently contested and under legal review, the claim itself indicates that harm has occurred or is ongoing. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

Radio host sues Google for swiping his voice for NotebookLM

2026-02-16
Computer Sweden
Why's our monitor labelling this an incident or hazard?
An AI system (NotebookLM) is explicitly involved as it uses an AI-generated voice. The dispute centers on the use of a voice style that allegedly infringes on the plaintiff's personal and intellectual property rights. This is a direct harm related to rights violations caused by the AI system's outputs. Therefore, this qualifies as an AI Incident under the definition of violations of human rights or intellectual property rights caused by the AI system's use.

American radio journalist sues Google for stealing his voice

2026-02-16
PCforAlla
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Notebook LM) that uses voice synthesis technology, which is an AI system by definition. The alleged unauthorized use of David Greene's voice constitutes a violation of intellectual property and personal rights, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law protecting intellectual property rights. The lawsuit and the described harm are direct consequences of the AI system's use. Hence, this is an AI Incident rather than a hazard or complementary information.

Radio personality sues Google: AI voice in NotebookLM too similar to his

2026-02-17
Teknikveckan
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (NotebookLM's AI voice synthesis). The claim is about unauthorized use of a person's voice likeness, which relates to violation of rights under applicable law. However, the article describes an ongoing legal case without confirmed harm or incident yet. The main focus is on the legal dispute and its potential implications, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.