AI Vocal Cloning Sparks Concerns Among Voice Actors


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

ElevenLabs has created an AI-generated clone of the voice of the late voice actor Alain Dorval, known for dubbing Sylvester Stallone. This has sparked concerns among voice actors over intellectual property and labor rights, with Brigitte Lecordier advocating for human dubbing. The AI-generated voice was shared online, prompting widespread reactions.[AI generated]

Why's our monitor labelling this an incident or hazard?

This describes an actual use of AI (voice-cloning) that has directly led to a rights violation—unauthorized reproduction of a deceased actor’s voice—triggering professional and legal concerns. Since the AI’s deployment caused tangible harm (breach of consent and intellectual/property rights), it constitutes an AI Incident.[AI generated]
AI principles
Accountability; Fairness; Human wellbeing; Privacy & data governance; Respect of human rights; Transparency & explainability

Industries
Arts, entertainment, and recreation; Media, social platforms, and marketing

Affected stakeholders
Workers

Harm types
Economic/Property; Reputational; Human or fundamental rights

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard


Her father's voice, Stallone's dubber, recreated by AI? Aurore Bergé denies having approved the trailer | TF1 INFO

2025-01-11
TF1 INFO
Why's our monitor labelling this an incident or hazard?
This describes an actual use of AI (voice-cloning) that has directly led to a rights violation—unauthorized reproduction of a deceased actor’s voice—triggering professional and legal concerns. Since the AI’s deployment caused tangible harm (breach of consent and intellectual/property rights), it constitutes an AI Incident.

Voice actors threatened by AI: "We stand for human dubbing, by humans, for humans"

2025-01-14
RMC
Why's our monitor labelling this an incident or hazard?
This report describes the creation and use of an AI voice-cloning system to replicate a deceased actor's voice for new content. While no actual legal or economic harm has yet been realized, it poses a credible threat to human dubbing professionals' livelihoods and raises intellectual property and consent issues. Therefore, it represents a plausible risk of harm rather than a presently materialized incident.

Sylvester Stallone's French voice recreated by AI? Aurore Bergé denies having given her consent

2025-01-13
BFMTV
Why's our monitor labelling this an incident or hazard?
An AI system (voice-cloning technology) was used without the final approval of the rights-holders, directly leading to a violation of the voice actor's posthumous rights and his family's legal and moral interests. Such unauthorized cloning is a realized harm (violation of intellectual property and personal rights), so this is an AI Incident.

"It's a threat, and we're fighting it": Emmanuel Curtil, voice actor and voice-over artist, reacts to the use of AI to recreate Sylvester Stallone's French voice

2025-01-14
BFMTV
Why's our monitor labelling this an incident or hazard?
The article describes the direct use of AI to recreate a real person’s voice without permission, resulting in a rights violation and unauthorized use of the AI-generated content. This misuse has already occurred and constitutes harm under intellectual property and personality rights, fitting the definition of an AI Incident.

The voice of Stallone's (deceased) French dubber will be recreated by AI

2025-01-13
20minutes
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to generate a synthetic voice, which is an AI system application. However, there is no indication of any harm or violation resulting from this use. The article focuses on the respectful use of AI to honor the actor's legacy and does not mention any injury, rights violation, or other harm. Therefore, this is not an AI Incident or AI Hazard but rather a general AI-related development without harm, fitting the category of Complementary Information as it provides context on AI use in media production.

How Stallone's French voice will stay the same despite his dubber's death

2025-01-12
Le Huffington Post
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice synthesis) to recreate a deceased voice actor's voice, which is a clear AI application. However, the article does not report any harm or risk of harm resulting from this use. The family supports the use, and the AI is used to honor the actor's legacy. There is no indication of injury, rights violation, or other harms. The article focuses on the societal and ethical context and the technology's potential, making it Complementary Information rather than an Incident or Hazard.

Sylvester Stallone dubbed by AI for Amazon Prime? Fans left reeling

2025-01-11
Télérama
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI voice cloning technology to dub a film, which is an AI system use. The controversy centers on consent and ethical concerns, with the family and fans upset about the unauthorized use of the AI-generated voice. While this raises important ethical and labor-related issues, the article does not report any realized harm such as violation of rights, injury, or other direct damages. The event is about the potential for harm and ethical misuse rather than an actual incident causing harm. Therefore, it fits best as Complementary Information, providing context on societal and governance responses to AI voice cloning technology and its implications.

The voice of Aurore Bergé's father, Sylvester Stallone's dubber, generated by AI in a film: "I agreed to a test," the minister clarifies

2025-01-13
Femme Actuelle
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to generate a synthetic voice of a deceased actor without final consent, which has caused public backlash and professional concern. The AI system's use directly led to harm in terms of violation of rights and harm to the voice acting community. The involvement of AI in generating the voice and the resulting controversy and potential rights violations meet the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm scenario involving AI misuse.

Aurore Bergé opposes the use of AI to recreate her late father's voice to dub Sylvester Stallone's next film

2025-01-14
Libération
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (AI voice cloning technology) used in the development and trial phase for dubbing a film. However, the AI-generated voice was only used experimentally with family consent and will not be used commercially. There is no evidence of harm to individuals, communities, or rights, nor disruption or injury. The main issue is public debate and misinformation, which does not constitute realized harm or a credible risk of harm. Therefore, this is best classified as Complementary Information, as it provides context on societal and ethical discussions around AI voice cloning and clarifies misunderstandings, rather than reporting an AI Incident or Hazard.

Prime Video uses AI to dub Sylvester Stallone, and the result is not going over well

2025-01-13
Les Numériques
Why's our monitor labelling this an incident or hazard?
The AI system (voice synthesis) was used to recreate the voice of a deceased actor, which raises ethical and legal concerns about rights and consent. This constitutes a violation of rights related to intellectual property or personality rights. Since the AI's use has directly led to this violation, it qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of obligations under applicable law.

Major controversy: Sylvester Stallone dubbed into French by artificial intelligence, and the result is abominable

2025-01-13
DH.be
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of an AI system for voice dubbing, which has directly led to significant controversy and harm to the voice acting profession, including job losses and degradation of artistic quality. This constitutes harm to labor rights and communities, fitting the definition of an AI Incident. The controversy and backlash confirm that the harm is realized, not just potential. Therefore, the event is best classified as an AI Incident rather than a hazard or complementary information.

Sylvester Stallone's French voice modeled on that of the late Alain Dorval: Aurore Bergé speaks of a test

2025-01-11
Paris Match
Why's our monitor labelling this an incident or hazard?
An AI system (voice synthesis technology) was used to generate a voice clone of a deceased person without final consent, leading to emotional harm and potential violation of rights related to the deceased's image and legacy. The unauthorized public dissemination of the AI-generated voice constitutes a direct harm to the family and community respecting the deceased, fitting the definition of an AI Incident involving violation of rights and harm to communities. Therefore, this event qualifies as an AI Incident.

Controversy over Sylvester Stallone's French voice, generated by AI in his new film: are we going too far?

2025-01-14
RTL Info
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the voice was generated by AI. The event stems from the use of AI in content creation (voice dubbing). However, the article does not report any actual harm such as legal violations, health injury, or community harm. The controversy is about consent and ethical considerations, which are important but do not meet the threshold for an AI Incident or AI Hazard under the definitions provided. Therefore, this is best classified as Complementary Information, as it provides context and societal response to AI use in media.

For the film Armor, Alain Dorval, Sylvester Stallone's iconic French voice, has been resurrected by AI (and it is not going over well at all)

2025-01-13
Konbini - All Pop Everything : #1 Media Pop Culture chez les Jeunes
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate the voice of a deceased actor, which is a direct use of AI technology. The event involves the use of AI in content creation (voice synthesis) and has led to public controversy, which can be considered a violation of rights (potentially intellectual property or personality rights). Since the AI-generated voice was used without consent and caused harm in terms of ethical and possibly legal concerns, this qualifies as an AI Incident due to violation of rights and harm to the community's trust and respect for the deceased's legacy.

"My father would never have approved it": Sylvester Stallone's iconic French voice recreated by AI is not going over at all

2025-01-14
parismatch.be
Why's our monitor labelling this an incident or hazard?
An AI system (voice synthesis by ElevenLabs) was used to recreate Alain Dorval's voice. The use was presented as an homage but was done without clear consent from the family, leading to a public dispute and indignation. This constitutes a violation of rights (likely intellectual property or personality rights) related to the unauthorized use of a person's voice, which is a form of harm under the framework. Therefore, this qualifies as an AI Incident due to the realized violation of rights stemming from the AI system's use.

Sylvester Stallone's French voice dubbed by an AI: a flop in the making?

2025-01-13
KultureGeek
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to synthesize a voice for a film dubbing, and its unauthorized release caused reputational and ethical harm, including violation of consent rights by the family. This constitutes a violation of rights and harm to communities (public trust and respect for deceased artists' legacies). The harm is realized (public backlash and condemnation), not just potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm related to rights and community trust.

Stallone and AI: 3 minutes to understand the dubbing controversy

2025-01-14
L'Éclaireur Fnac
Why's our monitor labelling this an incident or hazard?
The article details the use of AI to synthetically recreate a voice, which involves an AI system. The event involves the use of AI (voice synthesis) in a way that raises ethical and professional concerns, but no actual harm such as rights violations or injury has been reported as having occurred. The main focus is on the controversy, public reaction, and protests against the use of AI in dubbing, which are societal responses. Therefore, this is best classified as Complementary Information, as it provides context and updates on the evolving societal and governance responses to AI use in media production, rather than describing a realized AI Incident or a plausible AI Hazard.