Mexican Voice Actors Protest AI Voice Cloning After Unauthorized Use by INE

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Mexican voice actors and artists protested in Mexico City after the National Electoral Institute (INE) used AI to clone the voice of the late actor Pepe Lavat without his family's consent. The group demands legislation to protect their voices from unauthorized AI cloning, citing economic and intellectual property harms.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems that clone and generate human voices, which are used without consent in commercial and audiovisual products. This unauthorized use has directly harmed the artists by infringing on their intellectual property and labor rights, and by threatening their employment and income. These harms fall under violations of rights and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Transparency & explainability; Democracy & human autonomy

Industries
Arts, entertainment, and recreation; Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
Workers; Other

Harm types
Economic/Property; Human or fundamental rights

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Content generation


Articles about this incident or hazard

Mexican artists demand protection for their voices from AI

2025-07-14
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems that clone and generate human voices, which are used without consent in commercial and audiovisual products. This unauthorized use has directly harmed the artists by infringing on their intellectual property and labor rights, and by threatening their employment and income. These harms fall under violations of rights and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harm.

Mexican announcers and voice actors declare themselves in "rebellion against the machines"

2025-07-14
France 24
Why's our monitor labelling this an incident or hazard?
The article describes a situation where AI systems are used to clone voices of voice actors without consent, which constitutes a violation of intellectual property and personal rights. The use of the deceased actor's voice by a government institution without permission is a direct harm caused by AI use. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of rights and harm to the professional community. The protest and calls for regulation further confirm the recognition of harm caused by AI misuse in this context.

Mexican artists demand protection of human voices against AI cloning

2025-07-14
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice cloning, which has directly led to harm in the form of unauthorized use of a deceased actor's voice, violating rights related to voice ownership and consent. The protest and legislative push respond to realized harms and risks to labor rights and intellectual property. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to the artistic community.

Mexican announcers and voice actors declare themselves in 'rebellion against the machines'

2025-07-14
TVN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice cloning technology being used without consent to replicate a deceased actor's voice, which is a direct violation of rights and harms the voice actors' community. The protest and demand for regulation highlight the harm already caused by AI misuse. The AI system's use here directly led to a breach of intellectual property and personal rights, fulfilling the criteria for an AI Incident. The involvement of AI in cloning voices and the resulting harm to actors' rights and livelihoods is clear and direct.

Announcers and voice actors demand regulation to protect their voices against AI cloning

2025-07-14
El Economista
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used for voice cloning, which is an AI technology capable of replicating human voices. The unauthorized use of a cloned voice without consent constitutes a violation of intellectual property and labor rights, fitting the definition of an AI Incident under violations of human rights or breach of obligations protecting intellectual property rights. The harm is realized as the voice was used without permission, and the actors are protesting to prevent further such incidents. Therefore, this event qualifies as an AI Incident.

Mexican artists demand protection of human voices against AI cloning tools

2025-07-14
SWI swissinfo.ch
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for voice cloning, which have directly led to harms including unauthorized use of voices (a violation of intellectual property and labor rights), economic harm to artists, and ethical concerns. These harms have already occurred, as evidenced by the use of a deceased actor's voice without permission and commercial products using cloned voices without consent. Therefore, this qualifies as an AI Incident due to realized violations of rights and harm to the affected community of voice artists.

"They clone us, they don't pay us, and they don't credit us": actors denounce the platforms

2025-07-11
La Silla Rota
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to clone voices of actors without authorization, causing harm to their labor and intellectual property rights. This constitutes a violation of human and labor rights due to unauthorized use and economic harm. The harm is realized and ongoing, not just potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.

Artists protest in Mexico City; demand regulation to protect voices against AI cloning

2025-07-14
López-Dóriga Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools to clone voices without authorization, leading to unauthorized commercial use and labor market harm (precarization). The AI system's use directly leads to violations of rights and economic harm to the artists, fulfilling the criteria for an AI Incident. The protest and calls for regulation further confirm the harm is ongoing and recognized by affected parties. Hence, this is not merely a potential hazard or complementary information but a realized AI Incident.

Dubbing artists declared themselves in

2025-07-14
24 Horas
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (voice cloning AI) and their use or misuse, which could plausibly lead to violations of rights and economic harm to voice actors. No specific harm is documented as realized in this article, but the threat and potential for harm are clear and credible, so this qualifies as an AI Hazard. It is not Complementary Information because the main focus is on the protest and the risk posed by AI voice cloning, not on updates or responses to a past incident. It is not an AI Incident because no direct or indirect harm has been reported as having occurred yet.

Artists, announcers, and voice actors demand protection of human voices against AI cloning in Mexico

2025-07-14
www.xeu.mx
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice cloning technology being used without authorization, directly impacting the rights of voice actors and artists. The unauthorized use of AI-generated voices in commercial products and media without consent or royalties is a clear violation of intellectual property and labor rights. The economic harm described (reduced pay, threats of replacement by AI) also constitutes harm to the community and individuals. These harms have already occurred or are ongoing, making this an AI Incident rather than a hazard or complementary information. The presence of AI systems (voice cloning AI) is explicit, and the harms are direct and significant.

Artists ask Sheinbaum for "4C" in the face of AI

2025-07-14
El Universal
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to replicate human voices and identities without authorization, causing harm to artists and related workers by infringing on their rights and threatening their livelihoods. The harms are realized and ongoing, including unauthorized use of digital likenesses and voices, which constitute violations of intellectual property and labor rights. The protest and demands for regulation underscore the direct impact of AI misuse. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Sheinbaum will support voice actors against AI: what should you know?

2025-07-14
El Siglo de Torreón
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems to clone and generate voices of actors without their consent, including deceased actors, which directly leads to violations of intellectual property and labor rights. The harm is realized, as unauthorized AI-generated content has been distributed publicly, causing indignation and legal concerns among affected parties. The AI system's use in replicating voices without permission is central to the harm described. Hence, this is an AI Incident involving violations of rights and harm to the actors and the industry.

"Real voices for a human future": marching for AI regulation

2025-07-14
Excélsior
Why's our monitor labelling this an incident or hazard?
The article focuses on a collective demand for AI regulation to prevent future harms related to AI use in the entertainment industry, such as unauthorized voice replication and job displacement. While these concerns are valid and relate to plausible future harms, the article does not describe any specific AI system causing actual harm or malfunction at this time. The event is a societal response and advocacy effort, aiming to influence policy and legal frameworks. This fits the definition of Complementary Information, which includes governance responses and public reactions to AI developments, rather than an AI Incident or AI Hazard.

INE video sparks voice actors' demonstration against AI

2025-07-14
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used to clone actors' voices without permission, which constitutes a violation of intellectual property and labor rights. The harm is realized as actors face economic harm and unauthorized use of their voices. The protest and demands for regulation further confirm the significance of the harm. Therefore, this qualifies as an AI Incident due to violations of rights and harm to the actors' professional community.

Will AI be the end of announcers and voice actors? That worries Mexicans, and that is why they protested

2025-07-14
www.elcolombiano.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for voice cloning and dubbing, which are directly impacting the livelihoods and rights of voice actors. The unauthorized use of a deceased actor's voice by an institution without consent constitutes a violation of rights. The protest and demand for regulation indicate that harm is occurring or has occurred due to AI use. Hence, the event meets the criteria for an AI Incident due to realized harm involving AI systems in voice cloning and dubbing.

Mexican artists demand protection for their voices from AI | Teletica

2025-07-14
Teletica (Canal 7)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI tools used to clone and replicate human voices without consent, which has led to unauthorized commercial use and economic harm to voice artists. This constitutes a violation of intellectual property and labor rights, fitting the definition of an AI Incident. The harm is realized, not just potential, as unauthorized uses and economic impacts are described. Therefore, this event qualifies as an AI Incident.

Mexican announcers and voice actors declare themselves in "rebellion against the machines"

2025-07-14
SWI swissinfo.ch
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI voice cloning technology being used without consent, which directly harms voice actors by threatening their jobs and violating their rights. The unauthorized use of a deceased actor's voice is a clear breach of intellectual property and personal rights. The protest and demand for regulation highlight the harm already occurring due to AI misuse. Hence, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to a group of people (voice actors).

Voice artists demand a law against unauthorized use of AI-cloned voices in Mexico City

2025-07-14
Periodico Correo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology to clone voices without authorization, which has directly led to harm by violating the rights of voice artists and causing potential job loss and exploitation. The cloning of a deceased actor's voice without family consent is a concrete example of harm. The protest and legislative initiative further confirm the recognition of this harm. Hence, the event meets the criteria for an AI Incident involving violation of intellectual property rights and harm to communities.

Mexican artists demand protection of human voices against AI cloning tools

2025-07-14
UDG TV
Why's our monitor labelling this an incident or hazard?
The article involves AI systems capable of voice cloning, which are being used or could be used without authorization, potentially causing harm such as violation of intellectual property and labor rights. However, the event is primarily a societal response and advocacy for regulation rather than a report of a concrete AI Incident or a specific AI Hazard event. It is best classified as Complementary Information because it provides context on societal and governance responses to AI-related risks, without describing a particular AI Incident or AI Hazard itself.

They march in Mexico City to demand AI regulation

2025-07-14
El Heraldo de Aguascalientes
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of generative AI cloning voices and faces, which is causing harm to the rights and economic interests of creative professionals. This constitutes violations of intellectual property and labor rights, which are recognized as harms under the AI Incident definition. Although the article does not describe a single discrete AI Incident event, the ongoing unauthorized use of AI to clone voices and faces without consent and compensation is an active harm to these groups. Therefore, this qualifies as an AI Incident due to realized violations of rights and harm to livelihoods caused by AI use. The protest is a response to these harms, but the harms themselves are occurring through AI misuse.

Artists ask Sheinbaum for help on AI; launch "4C" plan

2025-07-14
tiempodigital.mx
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems replicating voices without consent, which has caused harm to artists' rights and employment, fulfilling the definition of AI Incident in the background. However, the main event is a protest and call for regulation, not a new incident or hazard itself. The article does not report a new AI Incident or a new AI Hazard event but rather documents a societal and governance response to ongoing AI-related harms. This fits the definition of Complementary Information, as it provides supporting context and advocacy related to AI harms already occurring in the industry.

Announcers and voice actors protest against AI

2025-07-14
Tribuna Noticias
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of voice cloning and AI-assisted dubbing technologies. However, the event focuses on the protest and the demand for regulation to prevent unauthorized use of voice data. There is no direct or indirect harm reported as having occurred yet, only a plausible risk of harm to labor rights and intellectual property if AI voice cloning is used without consent. Therefore, this qualifies as an AI Hazard, since the development and use of AI voice cloning technology could plausibly lead to violations of rights and harm to the actors' livelihoods in the future.

Voice actors protested to demand legislation on the use of their voices in the face of AI

2025-07-14
Perspectivas
Why's our monitor labelling this an incident or hazard?
The event involves AI systems capable of voice cloning, which is a form of AI system. However, the article does not describe an actual harm event caused by AI misuse or malfunction but rather a reaction to a potential or ongoing misuse scenario. The protest and legislative push indicate concern about plausible future harms from unauthorized voice cloning by AI, but no direct harm incident is detailed. Therefore, this qualifies as Complementary Information, as it provides context and societal response to AI-related risks rather than reporting a specific AI Incident or Hazard.

Mexican artists demand better AI regulation in the creative sector

2025-07-15
www.diariolibre.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to replicate a deceased actor's voice without authorization, which has caused harm to the artists' rights and professional interests. The harm includes violation of intellectual property and personal rights, as well as economic harm due to job displacement fears. The AI system's use in this context directly led to these harms, fulfilling the criteria for an AI Incident under violations of human rights and intellectual property rights.

Noticias de América - Mexican artists demand better AI regulation in the creative sector

2025-07-15
RFI
Why's our monitor labelling this an incident or hazard?
The AI system was used to recreate a deceased actor's voice without consent, which is a direct violation of intellectual property and labor rights, fulfilling the criteria for an AI Incident. The protest and political response highlight the harm caused by AI misuse in the creative sector. Although the article does not describe physical harm or infrastructure disruption, the violation of rights and harm to the creative community are significant and directly linked to AI use. Therefore, this event is best classified as an AI Incident.

AI vs. dubbing: Mexico's voice actors' guild demands protection

2025-07-15
FayerWayer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to clone voices without consent, which is a violation of intellectual property and labor rights. This constitutes harm under the AI Incident definition. However, the article mainly reports on protests and demands for legal protection rather than detailing a specific AI Incident where harm has already occurred. Since the AI cloning was used without permission (as in the case of the José Lavat voice replication), this is a realized violation of rights, thus qualifying as an AI Incident. The protest and government response are complementary information but the core issue is the unauthorized use of AI voice cloning causing harm to actors' rights and work.

Mexican voice actors demand regulation on AI voice cloning - The Economic Times

2025-07-14
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used for voice cloning, which have been employed without consent, leading to violations of rights and economic harm to voice actors. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and harm to livelihoods). The protest and calls for regulation further underscore the realized harm and the need for governance. Therefore, this event is best classified as an AI Incident.

Mexican voice actors demand regulation on AI voice cloning

2025-07-14
RTL Today
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (AI voice cloning) and their use without consent, which can lead to violations of intellectual property and personal rights, a form of harm under the framework. However, the article mainly reports on protests and demands for regulation in response to the threat posed by AI voice cloning, with some examples of past unauthorized use. Since the harm is ongoing and the article focuses on the threat and calls for regulation rather than a new specific incident causing direct harm, this is best classified as Complementary Information. It provides context and societal response to AI-related issues but does not describe a new AI Incident or AI Hazard.

Mexican Voice Actors Demand AI Voice Cloning Regulation - News Directory 3

2025-07-14
News Directory 3
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically AI voice cloning technology, which can generate synthetic voices mimicking real individuals. The voice actors' concerns relate to the potential misuse of this AI technology leading to economic and artistic harm. Since no actual harm or incident has occurred yet, but there is a credible risk of future harm, this situation fits the definition of an AI Hazard. The article is primarily about the plausible future harms and the advocacy for regulation, not about a realized incident or a complementary update on a past event.

Mexican voice actors protest AI cloning, seek legal safeguards

2025-07-14
thesun.my
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice cloning, which is explicitly mentioned. The unauthorized replication of voices constitutes a violation of intellectual property rights and labor rights, which are harms under the AI Incident definition. The protest and calls for legal safeguards indicate that such harms have already occurred or are ongoing. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to violations of rights and harm to the affected individuals' employment and artistic integrity.

Mexican Voice Actors Demand AI Voice Cloning Regulation - News Directory 3

2025-07-14
News Directory 3
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered dubbing technologies and the unauthorized use of a deceased actor's voice, which involves AI voice cloning systems. The voice actors' call for biometric recognition of voices aims to prevent future unauthorized use and protect their rights and jobs. While there is an instance of unauthorized voice use, the article does not report direct legal or physical harm caused by AI systems but rather the potential for such harm and displacement. Hence, this is a credible risk of harm (to labor rights and livelihoods) that could plausibly lead to an AI Incident if unregulated, fitting the definition of an AI Hazard.

Voice actors in Mexico raise their voices against the AI that clones their work

2025-07-15
Expansión
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that clone human voices, which are being used without consent, constituting a violation of intellectual property rights and causing economic harm to voice actors. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and economic harm). The article details realized harm, not just potential harm, and the AI system's role is pivotal in causing this harm. Therefore, the classification is AI Incident.

Voice actors in Mexico to receive support after demonstrating for regulation of AI use of their voices - El Heraldo de México

2025-07-14
El Heraldo de México
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the issue centers on the use of AI for voice cloning, which affects the rights of voice actors. However, the article does not report any realized harm or incident but rather the potential for harm and the government's proactive steps to prevent misuse. Therefore, this is a governance and societal response to a potential AI-related harm, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

Why did voice actors protest in Mexico? Artificial intelligence had a lot to do with it

2025-07-14
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The protest centers on the unauthorized use of AI to replicate actors' voices and images, which constitutes a violation of their rights and harms their employment and artistic legacy. Since the AI system's use has directly caused harm to the actors' rights and economic interests, this qualifies as an AI Incident under the framework, specifically a violation of intellectual property and labor rights.

Claudia Sheinbaum announces support for voice actors after the INE's theft from José Lavat

2025-07-14
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to clone the voice of a deceased actor without authorization, which is a misuse of AI technology causing harm to the rights of voice actors. The harm includes violation of intellectual property and labor rights, as the actors' voices are their primary work tool. The political response and protests confirm the recognition of harm. Hence, this is an AI Incident due to realized harm caused by AI use.

Salinas Pliego criticizes the INE for cloning Pepe Lavat's voice: "it's a new level of cynicism," he says

2025-07-14
El Universal
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice cloning, which directly impacts labor rights and identity protection of voice actors. The unauthorized cloning of a voice constitutes a violation of intellectual property and labor rights, which are recognized harms under the AI Incident definition. Since the article describes actual use of AI cloning leading to these harms and protests, it qualifies as an AI Incident rather than a hazard or complementary information.

Claudia Sheinbaum backs voice actors and announcers; will pursue a protection scheme against AI

2025-07-14
El Universal
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (voice cloning AI) that has been used without consent, leading to harm in terms of violation of intellectual property and labor rights of voice actors. However, the article does not report a specific incident of harm occurring beyond the unauthorized cloning itself, nor does it describe a malfunction or direct injury. Instead, it focuses on the societal and governance response to the issue, including planned regulatory measures and protections. Therefore, this is best classified as Complementary Information, as it provides context and updates on responses to AI-related concerns rather than reporting a new AI Incident or AI Hazard.

"They're right": Sheinbaum backs voice actors on regulating AI and protecting their voices and work

2025-07-14
El Universal
Why's our monitor labelling this an incident or hazard?
The article involves AI systems used to replicate human voices without consent, potentially violating intellectual property and labor rights. However, it does not report a specific incident of realized harm; rather, it centers on the demand for urgent regulation and protection. The risk of harm (violation of rights) is plausible and may be ongoing, but the event is primarily framed as a call for regulation. This fits the definition of Complementary Information, as it provides context on societal and governance responses to AI-related concerns rather than reporting a concrete AI Incident or an imminent AI Hazard.

Voice actors are standing up to AI. In Mexico, the protest goes beyond image rights

2025-07-14
Xataka
Why's our monitor labelling this an incident or hazard?
The article centers on protests and legislative advocacy against unauthorized AI voice cloning, which is a developing issue with potential for harm to rights and labor protections. While the AI system (voice cloning) is involved and the harm is plausible, the article does not document a specific AI Incident where harm has already occurred or a near-miss AI Hazard. Instead, it focuses on societal response, legal discussions, and calls for protection, fitting the definition of Complementary Information.

Sheinbaum shows support for dubbing artists in Mexico who are demanding protection from AI

2025-07-14
Chicago Tribune
Why's our monitor labelling this an incident or hazard?
The use of AI to clone voices without consent constitutes a violation of intellectual property and labor rights, which are recognized harms under the AI Incident definition. The event involves the use of AI systems (voice cloning technology) that have directly led to harm in terms of rights violations and economic impact on artists. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is already occurring and recognized.

Sheinbaum offers to protect voice actors' rights in the face of AI

2025-07-14
Excélsior
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of voice replication technology used without authorization, which is a recognized AI-related harm affecting intellectual property and labor rights. However, the event itself is about the government's intention to meet and discuss protective measures, not about a realized harm or incident. Therefore, it is a societal and governance response to a potential or ongoing issue rather than a direct AI Incident or Hazard. This fits the definition of Complementary Information, as it provides context and updates on responses to AI-related concerns without describing a new incident or hazard.

La Jornada: They are cloning our voices with AI without any regulation, voice actors denounce

2025-07-11
La Jornada
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to clone voices, which directly led to harm by reducing employment opportunities for voice actors and violating their rights through deceptive practices and unauthorized use of their voice data. The harm is realized and ongoing, including economic and cultural harm. The involvement of AI in the development and use stages is clear, and the harm fits the definition of violations of labor and intellectual property rights, as well as harm to communities (cultural harm). Thus, this is an AI Incident.

"They are cloning our voices": Dubbing actors protest against AI and demand royalties

2025-07-14
El Financiero
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to emulate voices of actors without consent or compensation, which is a violation of labor and intellectual property rights. The harms are realized as actors protest and strike against these practices. The AI system's use in voice cloning is central to the harm described. Therefore, this qualifies as an AI Incident due to violations of rights and harm to the affected community's livelihoods.

Sheinbaum backs dubbing actors to protect their work from AI

2025-07-14
El Economista
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for voice and image cloning, which are AI technologies. The harm described is a violation of labor and human rights due to unauthorized use of AI-generated voice clones, which has already occurred according to the protesters. However, the article focuses on the political and legal response to these harms rather than the incident itself. Since the main content is about the government's support and planned regulatory actions rather than the incident of harm itself, this is best classified as Complementary Information. The AI-related harm is background context, and the article's primary focus is on societal and governance responses.

Claudia Sheinbaum backs dubbing actors after protests over AI use; "it is a very valuable profession"

2025-07-15
EL IMPARCIAL | Noticias de México y el mundo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to clone voices and images without authorization, leading to protests by affected professionals who claim violations of their rights. The unauthorized commercial use of AI-cloned voices constitutes a breach of intellectual property and labor rights, fulfilling the criteria for an AI Incident under the framework. The government's response is a complementary development but does not negate the incident classification. Hence, this event is best classified as an AI Incident due to realized harm involving rights violations caused by AI misuse.

Let streaming tremble! Sheinbaum promises to pursue regulation of AI use in dubbing

2025-07-14
Vanguardia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI for dubbing that replicates actors' voices without consent, including deceased actors, which constitutes a violation of intellectual property and labor rights. This has led to protests and demands for regulation, indicating realized harm to the actors' rights and livelihoods. The president's promise to regulate and protect the actors is a response to an ongoing AI Incident involving harm to human rights and labor rights. Therefore, this event qualifies as an AI Incident due to the direct harm caused by AI use in dubbing without consent.

Sheinbaum promises to protect the "work and voice" of dubbing actors amid the rise of AI

2025-07-14
Forbes México
Why's our monitor labelling this an incident or hazard?
The article mentions an AI system (voice cloning technology) that was used without consent, which constitutes a violation of intellectual property and personal rights. However, the article does not report a new AI incident causing direct or indirect harm beyond the referenced past case, nor does it describe a new hazard or imminent risk. Instead, it focuses on the government's planned protective measures and consultations with stakeholders, which is a governance and societal response to an existing issue. Therefore, this is best classified as Complementary Information, as it provides context and updates on responses to AI-related concerns rather than reporting a new incident or hazard.

Sheinbaum offers to protect the rights of dubbing actors affected by artificial intelligence

2025-07-14
Zócalo Saltillo
Why's our monitor labelling this an incident or hazard?
The article involves the use of AI in voice synthesis for dubbing, which has led to concerns about unauthorized use of actors' voices, including those of deceased persons. This constitutes a violation of intellectual property and labor rights, as the actors' primary work tool (their voice) is being used without proper authorization. However, the article focuses on the government's intention to address these issues through meetings and potential protective measures, rather than reporting a specific realized harm or incident. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI-related rights concerns, without describing a concrete AI Incident or AI Hazard event.

"They stole my voice": Dubbing actors protest AI use without consent

2025-07-14
La Silla Rota
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to clone and modify actors' voices without consent, which constitutes a violation of intellectual property and labor rights. The unauthorized use of voice data for AI training and deployment without contracts or payment directly harms the actors economically and infringes on their rights. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a group of people (voice actors) through rights violations and economic harm. Therefore, this event is classified as an AI Incident.

The five reforms dubbing actors are demanding from President Claudia Sheinbaum

2025-07-14
La Silla Rota
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to replicate human voices for dubbing, including voices of deceased persons, which the voice actors oppose due to its impact on their profession and rights. The harm is realized as it has affected their work and raises legal and ethical issues. The president's response to address these concerns through legal frameworks further confirms the recognition of harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to the affected group (voice actors).

Claudia Sheinbaum says the "work and voice" of dubbing actors will be protected against AI

2025-07-14
El Diario de Yucatán
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of voice cloning technologies used without authorization, which can lead to violations of intellectual property and labor rights. However, it does not report a concrete AI Incident where harm has already occurred with direct consequences; instead, it reports on governmental plans and societal responses to prevent such harms. Therefore, this is best classified as Complementary Information, as it provides context and updates on responses to AI-related risks rather than describing a new AI Incident or AI Hazard.

We must protect dubbing actors' voices from AI: Clau

2025-07-14
Tiempo
Why's our monitor labelling this an incident or hazard?
The article discusses the use of AI to replicate actors' voices without authorization, which constitutes a violation of intellectual property and labor rights. Although no specific incident of harm is described, the unauthorized replication and potential misuse of voices by AI systems constitute a clear violation of rights, so the event can be considered an AI Incident due to the direct harm to actors' rights and livelihoods. The call for regulation is a response to this ongoing harm.

Sheinbaum offers to protect the rights of dubbing actors affected by artificial intelligence

2025-07-14
López-Dóriga Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to replicate voices of dubbing actors, including deceased ones, without authorization, which is a violation of rights and harms the affected individuals. This is a direct harm caused by the use of AI systems for voice cloning. The governmental response to protect these rights is complementary but the core event is an AI Incident due to realized harm from AI misuse.

Sheinbaum seeks meeting with dubbing actors after protest against AI

2025-07-14
24 Horas
Why's our monitor labelling this an incident or hazard?
The article involves AI systems used to replicate human voices, which is an AI system. The protest and concerns relate to the use of AI-generated deepfake voices without consent, implicating potential violations of intellectual property and personal rights. However, the article does not report a specific AI Incident where harm has already occurred; rather, it focuses on the protest and the political response seeking to prevent misuse. Therefore, this is best classified as Complementary Information, as it provides context and governance response to AI-related concerns without describing a concrete AI Incident or AI Hazard event.

Sheinbaum backs dubbing actors against improper AI use

2025-07-14
MiMorelia.com
Why's our monitor labelling this an incident or hazard?
The article does not report a specific AI Incident where harm has already occurred, nor does it describe a concrete AI Hazard event with imminent risk. Instead, it focuses on governmental and institutional responses to concerns about AI misuse affecting voice actors. This fits the definition of Complementary Information, as it provides context on governance and legal responses to AI-related issues without detailing a realized or imminent harm event.

They promise to protect the "work and voice" of dubbing actors amid the rise of AI

2025-07-14
UDG TV
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (voice cloning AI) used to replicate human voices, which is central to the concerns raised. The unauthorized use of a deceased actor's voice by an official institution indicates an actual violation of rights has occurred, constituting harm. The broader issue of unconsented voice replication by AI tools poses ongoing risks to the labor and intellectual property rights of voice actors. Therefore, this qualifies as an AI Incident due to realized violations of rights and harm to the affected individuals' professional and personal interests. The government's planned protective measures are complementary information but do not negate the incident classification.

Mexico promises to protect the "work and voice" of dubbing actors amid the rise of AI

2025-07-14
La Voz de Michoacán
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice cloning, which has led to unauthorized use of voice actors' voices, including those of deceased individuals, constituting a violation of intellectual property and labor rights. The harm is realized as unauthorized use has already occurred, prompting protests and government intervention. Therefore, this qualifies as an AI Incident due to violations of rights caused by AI use. The government's promise to create protective schemes is a response to this incident, not the incident itself.

Sheinbaum promises to protect dubbing actors against AI use without consent

2025-07-14
El Heraldo de San Luis Potosi
Why's our monitor labelling this an incident or hazard?
The article discusses the plausible future harm from AI voice cloning technology being used without consent, which could violate intellectual property and personal rights of voice actors. The government's response to consider legal protections indicates recognition of this risk. Since no specific AI incident of harm has been reported but the risk is credible and recognized, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

CSP offers to protect dubbing actors' rights

2025-07-14
El Heraldo de Aguascalientes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to replicate actors' voices without consent, which constitutes a violation of intellectual property and labor rights, fitting the definition of an AI Incident. However, the main focus is on the government's offer to protect these rights and the planned meetings to establish protective schemes. Since the harm is ongoing and the AI system's use has already led to rights violations, this qualifies as an AI Incident. The article does not describe a new hazard or merely complementary information but reports on an existing harm and the response to it.

Claudia Sheinbaum backs dubbing actors and announcers; will pursue a protection scheme against AI

2025-07-14
tiempodigital.mx
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for voice cloning without consent, which can infringe on the rights and livelihoods of voice actors and announcers. Although no concrete incident of harm is detailed, the expressed concerns and the president's response indicate a credible risk of harm if the AI use remains unregulated. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to violations of rights and harm to the community. The event is not an AI Incident because no actual harm has yet been reported, nor is it merely complementary information since the main focus is on the potential risk and calls for regulation.

Sheinbaum vs. AI: promises to protect dubbing actors' voices

2025-07-14
IMER Noticias
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to replicate human voices for dubbing without consent, which constitutes a violation of intellectual property and labor rights. The harm is realized as voice actors' work and rights are being undermined by AI-generated voice cloning. The president's response and proposed legal reforms are complementary information to the incident. Since the AI use has already caused harm to the actors' rights and livelihoods, this qualifies as an AI Incident under the framework, specifically under violations of human rights and intellectual property rights.

Sheinbaum backs dubbing actors to protect their work from AI

2025-07-15
El Economista
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of voice cloning and image replication technologies, which are being used or could be used without consent, potentially violating labor and intellectual property rights. However, the article does not report any realized harm or incident; rather, it discusses the potential for harm and the government's planned response. Therefore, this qualifies as Complementary Information, as it provides context on societal and governance responses to AI-related concerns without describing a specific AI Incident or Hazard.

Mexico moves toward regulation to protect dubbing from AI

2025-07-15
Merca2.0 Magazine
Why's our monitor labelling this an incident or hazard?
The article centers on the potential risks and ongoing debates about AI voice cloning technology's impact on the dubbing profession. It describes plausible future harms such as unauthorized voice replication and labor displacement but does not document any realized harm or incident caused by AI systems. The presence of AI systems (voice cloning/generative AI) is clear, and the concerns relate to their use and misuse. Since no direct or indirect harm has yet materialized, and the main focus is on regulatory and societal responses to these risks, this qualifies as Complementary Information rather than an AI Incident or AI Hazard.

"AI does not feed itself": Announcers and dubbing actors demand AI regulation in the entertainment industry

2025-07-13
El Universal
Why's our monitor labelling this an incident or hazard?
The article involves AI systems used for cloning voices and images, which are AI applications. The harm described is a violation of rights (intellectual property and personal rights) due to unauthorized use of AI-generated clones. However, the article focuses on the protest and demand for regulation rather than reporting a concrete AI incident where harm has already occurred. Therefore, it is best classified as Complementary Information, as it provides context and societal response to potential or ongoing AI-related harms without detailing a specific incident.

"They are cloning our voices without permission": Dubbing actors demand AI regulation

2025-07-17
MVS Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to clone voices, which directly harms the actors by violating their labor and intellectual property rights, as well as their personal image and identity. The unauthorized cloning and monetization of their voices by AI-generated content is a realized harm, fitting the definition of an AI Incident due to violations of rights and harm to individuals. The call for regulation is a response to this ongoing harm, not the primary event itself.

Dubbing actors in Mexico call for laws against AI voice cloning: Sheinbaum will seek to protect the actors

2025-07-14
xataka.com.mx
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of voice cloning technology used in the entertainment industry. The harm described is potential: unauthorized use of voice clones could violate labor and intellectual property rights and harm the livelihoods of voice actors. The protest and government response indicate concern about plausible future harm rather than harm that has already occurred. There is no report of an actual incident of harm caused by AI voice cloning, only the risk and demand for regulation. Hence, this is best classified as an AI Hazard, not an AI Incident or Complementary Information.

Artists march in CDMX against unregulated use of AI

2025-07-14
sipse.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems insofar as AI is used to clone voices and images, which implicates intellectual property and labor rights. However, the event is a protest and advocacy effort addressing these issues rather than a direct AI Incident or Hazard. No specific harm caused by AI is reported as occurring in this event; rather, it is a call for regulation and ethical standards. Therefore, it fits best as Complementary Information, providing context on societal and governance responses to AI-related challenges.

Announcers and dubbing actors demonstrate in CDMX; they demand AI regulation

2025-07-14
Periódico AM
Why's our monitor labelling this an incident or hazard?
The article involves AI systems insofar as it discusses AI cloning of voices and images without consent, which is an AI system's use leading to violations of rights (intellectual property and labor rights). However, the article focuses on a protest demanding regulation rather than describing a specific incident of harm or malfunction. The unauthorized replication of voices and images by AI has already occurred (e.g., the case of Pepe Lavat's voice), indicating realized harm, but the article's main focus is on the collective response and call for regulation. This makes the event primarily Complementary Information, as it provides context and societal/governance response to AI-related harms rather than reporting a new AI Incident or Hazard itself.

INE to take down video with AI-generated voice of Pepe Lavat

2025-07-19
Tiempo
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate the voice of Pepe Lavat, a deceased actor, without legal authorization, which is a breach of intellectual property and related rights. The harm (violation of rights) has already occurred as the video was published and caused controversy. The INE's removal of the video is a response to this incident. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a rights violation.

INE agrees to take down TikTok video after accusations of cloning Pepe Lavat's voice; acknowledges lack of regulation on the matter

2025-07-19
El Universal
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to clone a voice, which qualifies as AI system involvement. However, the main focus is on the INE's decision to remove the video and their efforts to develop guidelines and protocols to manage AI use responsibly. There is no indication that the AI-generated voice caused direct or indirect harm such as legal violations or public harm at this stage. The concerns raised are about potential risks and the need for regulation, which aligns with governance and societal response. Hence, the event does not meet the criteria for an AI Incident or AI Hazard but fits the definition of Complementary Information.

Dubbing community celebrates removal of INE video with an AI voice similar to Pepe Lavat's; stresses the importance of respect for artists

2025-07-20
El Universal
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate a voice that closely mimicked a deceased actor's voice without authorization, constituting a violation of intellectual property and personal rights, which are harms under the AI Incident definition (c). The harm has already occurred as the video was publicly released and caused ethical and legal concerns. The subsequent withdrawal of the video and ongoing legislative efforts are responses to this incident. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

INE removes video that used AI to recreate the voice of a deceased dubbing actor

2025-07-19
Aristegui Noticias
Why's our monitor labelling this an incident or hazard?
An AI system was used to recreate the voice of a deceased actor without authorization, which constitutes a violation of rights and misuse of AI technology. The harm has already occurred as the video was published and caused public indignation and complaints from the actor's family. This fits the definition of an AI Incident because the AI system's use directly led to a breach of rights and potential harm to the actor's legacy and family. The event also highlights the absence of regulation, but the primary focus is on the realized harm from the AI-generated voice use.

INE removes TikTok video after accusations over the voice of dubbing actor Pepe Lavat

2025-07-19
www.xeu.mx
Why's our monitor labelling this an incident or hazard?
The article does not report any actual harm caused by the AI-generated voice or the video itself, but rather the potential risks and the institution's proactive measures to address them. There is no indication that the AI system's use has directly or indirectly led to injury, rights violations, or other harms. Instead, the focus is on the absence of regulation and the need for protocols to prevent possible future harms. Therefore, this event is best classified as Complementary Information, as it provides context and governance response related to AI use and its risks without describing a specific AI Incident or Hazard.

INE removes video after accusations of cloning Pepe Lavat's voice; acknowledges lack of AI regulation

2025-07-19
Zócalo Saltillo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (voice cloning technology) in a way that led to public controversy and potential violation of rights (use of a deceased actor's voice without consent). The INE's removal of the video and acknowledgment of regulatory gaps indicate recognition of the risks. Since the voice cloning was used and caused public indignation and a complaint by the actor's widow, this constitutes an AI Incident due to indirect violation of rights and identity impersonation harm. The article also discusses institutional responses and the need for regulation, but the primary event is the AI-generated voice cloning causing harm.

INE activates protocols for the responsible use of Artificial Intelligence

2025-07-19
Central Electoral
Why's our monitor labelling this an incident or hazard?
The article focuses on the proactive measures taken by the INE to manage AI use responsibly and prevent potential harms. It discusses the absence of clear technical guidelines and the possible risks AI could pose, but no direct or indirect harm has been reported as having occurred. Therefore, this event represents a plausible risk scenario rather than an actual incident. The main content is about governance and risk mitigation, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

INE removes institutional TikTok video in which it used Artificial Intelligence and cloned the voice of dubbing actor Pepe Lavat

2025-07-19
Animal Político
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (voice cloning AI) in producing content that led to controversy and the removal of the video. While the cloned voice use without consent suggests a violation of intellectual property or personal rights, the article does not report any formal legal ruling or confirmed harm beyond public concern and protests. The INE's response to create guidelines and protocols indicates recognition of potential risks but no direct incident of harm has been confirmed. Therefore, this is best classified as Complementary Information, as it provides context on AI use, societal reaction, and governance responses rather than reporting a confirmed AI Incident or a plausible future hazard.

INE removes dubbing actor video amid lack of AI regulation

2025-07-19
López-Dóriga Digital
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate a voice clone without consent, which constitutes a violation of rights (intellectual property and personal rights), thus an AI Incident has occurred. However, the article's main focus is on the INE's response to the incident, including video removal and plans for regulation and protocols. Since the article does not primarily report the incident itself but rather the institutional response and regulatory discussion, it fits the definition of Complementary Information. It provides important context and updates on governance and mitigation following an AI Incident but does not itself report a new or ongoing AI Incident or AI Hazard.

INE removes institutional TikTok video in which it used Artificial Intelligence and cloned a dubbing actor's voice

2025-07-19
Periódico Noroeste
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (voice cloning AI) used in the production of institutional content. However, the event does not report any realized harm such as injury, rights violations, or other significant harms caused by the AI use. Instead, it highlights the removal of the video and the institution's efforts to establish ethical guidelines and training for AI use. This fits the definition of Complementary Information, as it provides updates on governance and responsible AI use following a potentially problematic AI application, rather than describing an AI Incident or AI Hazard itself.

INE removes video with AI voice of Pepe Lavat and promises guidelines

2025-07-20
MiMorelia.com
Why's our monitor labelling this an incident or hazard?
An AI system (voice generated by AI) was involved in the video content, but the article does not report any realized harm such as injury, rights violations, or disruption. The removal of the video and the development of guidelines indicate a governance or policy response to potential issues rather than an incident or hazard. Therefore, this is Complementary Information about institutional responses to AI use.

INE removes video with AI-generated voice of Pepe Lavat

2025-07-19
El Heraldo de San Luis Potosi
Why's our monitor labelling this an incident or hazard?
The use of AI to generate the voice of Pepe Lavat without consent directly implicates a violation of intellectual property and personal rights, fulfilling the criterion of harm under (c) violations of human rights or breach of obligations under applicable law. The INE's acknowledgment of the issue and removal of the video confirms the harm has materialized. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

INE takes down from social media the video in which it used Pepe Lavat's voice

2025-07-20
MVS Noticias
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating the voice used in the video, which led to public criticism and the removal of the content. However, there is no indication that actual harm such as misinformation dissemination or identity fraud has occurred yet; rather, the event focuses on potential risks and institutional responses to prevent harm. Therefore, this event represents a response to potential AI-related risks and governance measures rather than a realized AI Incident or a direct AI Hazard. It fits best as Complementary Information because it provides context on governance and risk mitigation following AI use concerns.

Guadalupe Taddei drives pioneering INE regulation on the ethical use of Artificial Intelligence

2025-07-21
El Heraldo de México
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of institutional use and governance, but it does not describe any realized harm or incident caused by AI. Instead, it details a regulatory initiative aimed at preventing misuse and ensuring ethical AI use. This fits the definition of Complementary Information, as it provides societal and governance responses to AI-related issues without reporting an AI Incident or AI Hazard. The article's main focus is on the regulatory development and its potential positive influence, not on an AI-related harm or plausible future harm event.

INE prepares definition of criteria for the use of Artificial Intelligence

2025-07-21
El Economista
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of institutional use and governance but does not describe any realized harm or incident caused by AI. Instead, it details planned actions to establish guidelines, protocols, and training to ensure responsible AI use, which fits the definition of Complementary Information as it provides updates on governance and responses to AI-related challenges without reporting an AI Incident or Hazard.