Dubbing Actors Strike Over AI Voice Cloning Threat

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Voice actors in Spain (including delegations in Catalonia and the Balearic Islands) and in France are protesting the unauthorized use of AI to clone their voices, striking and demanding contractual clauses that bar their recordings from being used to train generative models. Industry groups such as AADPC, PASAVE, and CADIB, and artists such as Bruno Méyère, warn that AI could erase their livelihoods.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions generative AI systems cloning voices of dubbing actors without their consent, which is a direct use of AI technology. This unauthorized use leads to harm by violating actors' rights and reducing their employment opportunities, fulfilling the criteria for an AI Incident under violations of human rights and labor rights. The harm is realized, not just potential, as actors report reduced work and ethical concerns. Hence, this is an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Privacy & data governance
Accountability
Transparency & explainability
Respect of human rights
Human wellbeing

Industries
Arts, entertainment, and recreation
Media, social platforms, and marketing

Affected stakeholders
Workers

Harm types
Economic/Property
Human or fundamental rights
Reputational

Severity
AI incident

Business function
Research and development
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard

France: AI threatens the actors who dub cartoon characters

2025-02-12
PULZO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI systems cloning voices of dubbing actors without their consent, which is a direct use of AI technology. This unauthorized use leads to harm by violating actors' rights and reducing their employment opportunities, fulfilling the criteria for an AI Incident under violations of human rights and labor rights. The harm is realized, not just potential, as actors report reduced work and ethical concerns. Hence, this is an AI Incident rather than a hazard or complementary information.
France: AI threatens the actors who dub cartoon characters

2025-02-12
www.diariolibre.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI systems cloning voices of dubbing actors without their consent, which is a direct violation of intellectual property and labor rights. The harm is realized, as actors report reduced work volume and unauthorized use of their voices, impacting their employment and artistic integrity. The AI system's use has directly led to these harms, meeting the criteria for an AI Incident under violations of rights and harm to communities.
France: AI threatens the actors who dub cartoon characters

2025-02-12
RFI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI systems cloning voices of dubbing actors without consent, which constitutes a violation of intellectual property and labor rights. The unauthorized use of voice data and the replacement or reduction of work for dubbing actors is a direct harm caused by the AI system's use. These harms fall under violations of rights and economic harm to a community, meeting the criteria for an AI Incident. The article also discusses ongoing negotiations and responses, but the primary focus is on the realized harms caused by AI voice cloning.
Dubbing actors' fear of artificial intelligence: it can clone any voice, and they believe it "will leave us without work"

2025-02-12
Telecinco
Why's our monitor labelling this an incident or hazard?
The event involves AI systems capable of cloning voices, which is explicitly mentioned. The voice actors' concern is about the use of AI in voice cloning that could lead to job loss, a form of economic and labor harm. However, the article does not describe any actual harm or incident occurring yet, only the plausible future harm and ongoing protests. Therefore, this qualifies as an AI Hazard, as the development and use of AI voice cloning could plausibly lead to harm to labor rights and employment in the dubbing sector.
France: AI threatens the actors who dub cartoon characters

2025-02-12
EL DEBER
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI systems cloning actors' voices without consent, which constitutes a violation of intellectual property rights and harms the actors' professional and artistic interests. The use of AI-generated voices in dubbing without authorization is a direct cause of harm to the actors and the dubbing industry, fulfilling the criteria for an AI Incident. The harm is not just potential but ongoing, as evidenced by reduced work volume and unauthorized use of voices. Therefore, this event qualifies as an AI Incident.
The battle of the Balearic Islands' dubbing actors against artificial intelligence

2025-02-10
Última Hora
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems used to generate synthetic voices, which are AI applications. The concerns raised relate to the use of AI-generated voice clones without permission, which could lead to violations of rights and harm to the actors' livelihoods. However, the article does not describe a realized harm or incident but rather the potential for harm and the ongoing struggle to regulate AI use in this domain. Therefore, it fits the definition of Complementary Information, as it provides context, societal response, and advocacy efforts related to AI's impact on voice actors, without reporting a concrete AI Incident or an immediate AI Hazard.
Generative artificial intelligence threatens the actors who dub cartoon characters in France

2025-02-12
El Financiero, Grupo Nación
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI systems cloning actors' voices without consent, which directly harms the actors economically and ethically. The unauthorized use of voice data and the reduction in work opportunities for actors are clear harms linked to the AI system's use. The involvement of AI in producing unauthorized voice content and the resulting negative impact on the actors' rights and livelihoods meet the criteria for an AI Incident under violations of rights and harm to communities.