French Voice Actors Win Removal of AI-Cloned Voice Models


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Twenty-five French voice actors secured the removal of 47 AI-generated voice models from the U.S. platforms Fish Audio and VoiceDub, which had cloned their voices without consent or payment. Their legal action asserted violations of intellectual property rights, and the actors continue to seek damages and further legal protections.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (generative voice cloning models) whose use directly led to violations of intellectual property rights by cloning actors' voices without consent or payment. This constitutes a breach of obligations under applicable law protecting intellectual property rights, fitting the definition of an AI Incident. The legal actions and platform removals are responses to this harm. Although the harm is non-physical, it is significant and clearly articulated. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability
Privacy & data governance

Industries
Media, social platforms, and marketing

Affected stakeholders
Workers

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Artificial intelligence: 25 French voice actors have their cloned voices removed from a U.S. platform - ICI

2026-04-02
France Bleu
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative voice cloning models) whose use directly led to violations of intellectual property rights by cloning actors' voices without consent or payment. This constitutes a breach of obligations under applicable law protecting intellectual property rights, fitting the definition of an AI Incident. The legal actions and platform removals are responses to this harm. Although the harm is non-physical, it is significant and clearly articulated. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Cloned voices: 25 French voice actors obtain the removal of disputed AI-generated content

2026-04-02
Le Télégramme
Why's our monitor labelling this an incident or hazard?
The event involves AI voice cloning technology generating content that infringes on the rights of voice actors, constituting a violation of intellectual property and personal rights under applicable law. The fact that content was removed following legal action confirms that harm occurred. The ongoing legal efforts to prevent further violations and seek damages further support the classification as an AI Incident. The AI system's use directly led to harm, fulfilling the criteria for an AI Incident.

France. AI-cloned voices: 25 voice actors obtain the removal of content

2026-04-02
La Liberté
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI voice cloning technology used without consent, leading to legal complaints and removal of the infringing content. This constitutes a violation of intellectual property rights and labor rights, which are recognized harms under the AI Incident definition. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident.

AI-cloned voices: 25 voice actors obtain the removal of content

2026-04-02
RTN
Why's our monitor labelling this an incident or hazard?
The event involves AI generative models that cloned voices without consent, directly leading to a violation of intellectual property and labor rights of the voice actors. The removal of the AI-generated content is a response to this harm, confirming that the harm occurred. The ongoing legal actions and concerns about future violations further support the classification as an AI Incident rather than a hazard or complementary information. The AI system's use directly caused harm to the rights of individuals, fitting the definition of an AI Incident.

AI-cloned voices: 25 French voice actors obtain the removal of disputed content, according to their lawyer

2026-04-02
TV5MONDE
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI generative models that cloned voices without consent, leading to violations of intellectual property and personal rights. The legal actions and content removals confirm that harm has occurred due to the AI systems' use. The ongoing risk of new unauthorized content and legislative efforts to address these issues further support the classification as an AI Incident. The AI systems' development and use directly led to rights violations, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.