David Attenborough Disturbed by AI Voice Cloning

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Sir David Attenborough is deeply disturbed by unauthorized AI clones of his voice being used on platforms like YouTube for partisan news reports. He considers this identity theft and a violation of intellectual property rights, as the AI-generated voice closely mimics his own, potentially misleading audiences.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (voice-cloning deepfake) has been used to generate false statements attributed to Attenborough. This represents an actual harm—identity theft, potential defamation, and risk of misleading the public—stemming directly from the AI system’s malicious use. Therefore it qualifies as an AI Incident.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Transparency & explainability; Democracy & human autonomy; Robustness & digital security; Safety

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Other; General public

Harm types
Reputational; Human or fundamental rights; Public interest; Psychological

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard

Sir David Attenborough 'Profoundly Disturbed' by AI Clones of His Voice

2024-11-18
IGN India
Why's our monitor labelling this an incident or hazard?
The piece does not document a concrete harm or misuse event but warns that AI voice clones could be used to spread false statements under the guise of a real person. This represents a potential risk (identity theft, misinformation) rather than a realized incident.

David Attenborough Reacts to AI Replica of His Voice: 'I Am Profoundly Disturbed' and 'Greatly Object' to It

2024-11-18
Variety
Why's our monitor labelling this an incident or hazard?
An AI system (voice-cloning deepfake) has been used to generate false statements attributed to Attenborough. This represents an actual harm—identity theft, potential defamation, and risk of misleading the public—stemming directly from the AI system’s malicious use. Therefore it qualifies as an AI Incident.

David Attenborough "Profoundly Disturbed" By AI Clone Of His Voice

2024-11-18
Deadline
Why's our monitor labelling this an incident or hazard?
The article describes how AI systems have been used to create near-perfect clones of Attenborough’s voice and deploy them on public platforms without authorization, directly infringing on his identity, likeness, and potentially copyright. This is a concrete harm—misappropriation of his voice—which falls under violations of intellectual property and personality rights.

David Attenborough, 98, is 'profoundly disturbed' by AI clone of voice

2024-11-18
Metro
Why's our monitor labelling this an incident or hazard?
An AI voice-cloning system was used without consent to replicate Attenborough’s distinctive voice, delivering content he did not authorize. This misuse has already occurred, directly violating his rights (identity theft) and posing reputational and misinformation harms. Therefore, it qualifies as an AI Incident.

David Attenborough says he is 'profoundly disturbed' his voice is being cloned with AI

2024-11-18
AOL.com
Why's our monitor labelling this an incident or hazard?
AI systems have directly been used to clone Attenborough’s voice and broadcast partisan content without his permission, constituting unauthorized use of his identity, potential misinformation, and violation of personal rights. This is a realized harm caused by AI misuse rather than a mere future risk or general update.

AI cloning of celebrity voices outpacing the law, experts warn

2024-11-19
AOL.com
Why's our monitor labelling this an incident or hazard?
Fraudsters have used AI-powered voice-cloning tools to impersonate public figures and ordinary individuals, resulting in scams (financial loss) and violation of privacy and identity. These harms are materialized and directly linked to the misuse of AI systems. Therefore, this constitutes an AI Incident.

David Attenborough 'disturbed' with his voice being cloned using AI

2024-11-18
WION
Why's our monitor labelling this an incident or hazard?
The event describes an AI system cloning Attenborough’s voice and then using those outputs on YouTube channels to spread content he did not authorize. This misuse of AI to impersonate a public figure and potentially misinform audiences is a realized harm (identity and IP rights violation, risk of misinformation), fitting the definition of an AI Incident.

David Attenborough 'profoundly disturbed' by unauthorized AI voice cloning

2024-11-18
NewsBytes
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate an unauthorized voice clone that has been deployed in YouTube news reports, directly infringing on Attenborough’s identity and rights. This constitutes realized harm—unauthorized cloning and impersonation—falling under violations of privacy and intellectual property. Therefore, it is classified as an AI Incident.

David Attenborough Issues Pointed Statement About the 'Profoundly Disturbing' AI Clone of His Voice

2024-11-18
Post and Courier
Why's our monitor labelling this an incident or hazard?
The article describes an actual misuse of an AI voice-cloning system to impersonate Attenborough and distribute false narrations. This unauthorized cloning constitutes a violation of his personal rights and identity, a realized harm caused by an AI system.

AI cloning of celebrity voices outpacing the law, experts warn

2024-11-19
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article describes real instances of AI voice-cloning being used by fraudsters to deceive victims and commit financial fraud, causing direct harm to individuals. This misuse of an AI system has already resulted in identity theft, privacy breaches, and monetary losses, meeting the criteria for an AI Incident.

If you can't trust the voice of David Attenborough, what can you trust?

2024-11-18
The Guardian
Why's our monitor labelling this an incident or hazard?
AI voice-cloning systems are explicitly being used to fabricate Attenborough’s voice and spread false narratives about elections, politics, and to enable scams (e.g., faked hostage tapes). These activities constitute realized harm—undermining public trust, enabling disinformation, and posing financial and reputational risks—caused directly by AI misuse. Therefore it is classified as an AI Incident.

AI being used to clone David Attenborough's voice in fake news reports

2024-11-17
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
This is a direct misuse of AI systems (voice-cloning models) to generate false news reports in Sir David’s voice, violating his personal rights and facilitating misinformation. The harm is realized (identity theft, reputational damage, potential misinformation of audiences), so it qualifies as an AI Incident.

David Attenborough is 'profoundly disturbed' by AI voice

2024-11-18
Yahoo
Why's our monitor labelling this an incident or hazard?
The article describes real-world misuse of a generative AI system to clone Attenborough’s voice without consent, constituting identity theft and potential reputational harm. This misuse of AI has already occurred and directly impacts his rights, fitting the definition of an AI Incident.

David Attenborough's fury as AI firms 'steal TV icon's voice' for reports

2024-11-18
Business Plus
Why's our monitor labelling this an incident or hazard?
The article describes AI firms cloning David Attenborough's voice without consent to produce partisan news reports. This unauthorized use violates his personality and intellectual property rights, and the harm is realized rather than merely potential. Therefore, this qualifies as an AI Incident caused directly by the AI system's use.

David Attenborough: Documentary maker, 98, 'profoundly disturbed' by AI cloning of his voice

2024-11-19
Yahoo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to clone a person's voice and produce deepfake content that misrepresents the individual. The resulting harms—violations of privacy and intellectual property rights, as well as reputational damage—follow directly from the misuse of AI-generated voice content to spread unauthorized messages. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Sir David Attenborough 'Profoundly Disturbed' by AI Clones of His Voice

2024-11-18
IGN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated voice clones that mimic Sir David Attenborough's voice indistinguishably and are used without his permission, which he describes as identity theft. These AI voices can be made to say things the real person never endorsed, causing realized harm in the form of rights violations and potential misinformation; similar cases involving other celebrities reinforce the pattern. Because the harm is realized and directly linked to the AI system's use, this meets the criteria for an AI Incident.

Attack of the David Attenborough AI Clones: 'My Identity Is Being Stolen'

2024-11-18
Rolling Stone
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated voice clones of David Attenborough being used to produce content that he has not endorsed, including politically charged statements. The AI system's use has directly led to harm in the form of identity theft and misinformation, which can mislead audiences and damage Attenborough's reputation. The involvement of AI voice cloning technology is explicit, and the harm is realized, not just potential. Hence, this qualifies as an AI Incident under violations of rights and harm to communities.

David Attenborough Is Even More Disturbed by His AI Clone Than We Are

2024-11-18
VICE
Why's our monitor labelling this an incident or hazard?
An AI system (voice cloning) is explicitly involved, used to generate a synthetic voice of David Attenborough without his consent. The event describes a direct harm: the unauthorized use of his identity and voice, which he finds disturbing and objects to, indicating a violation of his rights. The AI system's use has directly led to this harm. The response from the AI voice cloning service further highlights the disregard for consent and rights. Therefore, this is an AI Incident due to violation of rights caused by the AI system's use.

David Attenborough "profoundly disturbed" by AI clone of his voice

2024-11-18
NME
Why's our monitor labelling this an incident or hazard?
An AI system was used to clone a public figure's voice and generate unauthorized speech, which directly leads to harm by misrepresenting the individual and potentially misleading the public. The event describes realized harm through identity theft and misuse of the AI-generated voice, which is a violation of rights and harms the community's trust. The involvement of the AI system is explicit and central to the incident. Hence, this is classified as an AI Incident.

David Attenborough "profoundly disturbed" by use of AI to clone his iconic voice

2024-11-18
Radio Times
Why's our monitor labelling this an incident or hazard?
The AI system's use in cloning voices directly leads to the unauthorized use of personal identity, which can be considered a violation of rights. The article describes actual use of AI-generated voices without consent, which constitutes realized harm rather than merely potential harm. Although no physical harm or misinformation is explicitly reported, the harm to personal identity and rights is significant and fits the definition of an AI Incident. The Parkinson project mentioned in the article provides complementary context but does not negate the incident involving the misuse of Attenborough's voice.

David Attenborough Disgusted by AI Clone of His Voice

2024-11-18
Futurism
Why's our monitor labelling this an incident or hazard?
An AI system was used to clone David Attenborough's voice, producing content that can deceive people into believing it is genuine. This constitutes a violation of rights related to identity and possibly intellectual property. Moreover, the event highlights a significant harm to communities and society by eroding trust in truthful communication, which is a clearly articulated harm. Since the AI system's use has directly led to these harms, this qualifies as an AI Incident under the framework.

'I Greatly Object': Sir David Attenborough Speaks Out Against AI Recreations Of His Voice

2024-11-18
HuffPost UK
Why's our monitor labelling this an incident or hazard?
The article describes AI systems that generate digital replicas of voices, which is an AI system involvement. The use of these AI-generated voices without consent can plausibly lead to harms such as misinformation, identity theft, or reputational damage, but the article does not document any actual harm occurring yet. Therefore, this situation represents a plausible risk of harm rather than a realized incident. It fits the definition of an AI Hazard because the development and use of these AI voice generators could plausibly lead to an AI Incident in the future, but no direct or indirect harm has been reported as having occurred so far.

AI cloning of celebrity voices outpacing the law, experts warn

2024-11-19
Irish Examiner
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice cloning technology being used in scams that have harmed people financially, which constitutes harm to individuals. The cloning of celebrity voices without consent also implicates violations of privacy and intellectual property rights. These harms have already occurred, making this an AI Incident. The article also discusses the potential for further misuse and the need for regulation, but the realized harms from scams and identity theft are sufficient to classify this as an AI Incident rather than a hazard or complementary information.

Sir David Attenborough 'deeply disturbed' by AI clones of his voice

2024-11-19
indy100.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used to clone Sir David Attenborough's voice and others, producing audio that falsely attributes statements to them. This constitutes a violation of personal identity rights and can be considered a breach of obligations under applicable law protecting fundamental rights. The harm is direct and ongoing, as the AI-generated voices are publicly disseminated and cause reputational and emotional harm to the individuals. Therefore, this qualifies as an AI Incident due to realized harm stemming from the use of AI systems.

Sir David Attenborough Is Profoundly Disturbed About AI Mimicking His Iconic Voice: My Identity Is Being Stolen

2024-11-18
NewsX
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system cloning a trusted public figure's voice and using it to say things he never endorsed, which constitutes a violation of rights and potential misinformation harm. The harm is realized as the AI-generated voice is already being used to spread unauthorized statements, fulfilling the criteria for an AI Incident. The involvement of AI in the misuse and the resulting harm to reputation, identity, and potential misinformation dissemination justifies classification as an AI Incident.

David Attenborough 'profoundly disturbed' after his voice is 'stolen' by AI clones

2024-11-19
Belfast News Letter
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system cloning Sir David Attenborough's voice and using it without consent to produce partisan news content. This unauthorized use directly violates his rights and misleads the public, which is a clear harm under the framework's category of violations of human rights and harm to communities. The AI system's role is pivotal as it enables the voice cloning and dissemination of misleading content. Hence, this is an AI Incident rather than a hazard or complementary information.

David Attenborough, 98, 'deeply concerned' by AI replicas of his voice amid fans' concerns over TV return

2024-11-18
GB News
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the voice replicas are generated by AI technology. The use of AI to replicate a person's voice without consent can lead to violations of personal rights and potential misinformation. However, since the article only describes concerns and objections without evidence of actual harm or misuse causing injury, rights violations, or other harms, this situation is best classified as an AI Hazard, reflecting the plausible future risk of harm from such AI voice replication.