UN AI Advisor Warns of Risks: Human Impersonation and Neural Data Commercialization

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Carme Artigas, co-chair of the UN AI Advisory Council, highlighted two major AI risks at a conference in Oleiros, Spain: technologies that simulate humans and the commercialization of neural data. She emphasized the need for robust regulation to address these potential hazards.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems that simulate humans (impersonation) and the potential commercialization of neural data as significant risks. These risks are framed as plausible future harms rather than realized incidents. There is no report of actual harm, injury, or violation caused by AI systems, only warnings and expert opinions about what could happen. The involvement of AI is clear, and the potential for harm is credible, meeting the criteria for an AI Hazard. The article does not focus on responses or updates to past incidents, so it is not Complementary Information, nor is it unrelated to AI harms.[AI generated]
AI principles
Privacy & data governance
Transparency & explainability

Industries
Media, social platforms, and marketing
Digital security

Affected stakeholders
General public

Harm types
Reputational
Psychological
Human or fundamental rights

Severity
AI hazard

AI system task
Content generation
Interaction support/chatbots


Articles about this incident or hazard

Carme Artigas asserts that "an AI Hiroshima is needed"

2026-05-09
La Voz de Galicia
Why's our monitor labelling this an incident or hazard?
The article focuses on expert warnings and societal/governance responses to AI risks, emphasizing potential systemic harms and the need for regulatory frameworks. There is no description of an AI system causing direct or indirect harm, nor an event where harm has occurred or a near miss. The content is about raising awareness and advocating for controls, which fits the definition of Complementary Information as it provides context and governance-related insights without reporting a specific AI Incident or Hazard.
UN sees risks in AI: impersonating people and commercializing neural data

2026-05-10
www.eluniversal.com.co
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems that simulate humans (impersonation) and the potential commercialization of neural data as significant risks. These risks are framed as plausible future harms rather than realized incidents. There is no report of actual harm, injury, or violation caused by AI systems, only warnings and expert opinions about what could happen. The involvement of AI is clear, and the potential for harm is credible, meeting the criteria for an AI Hazard. The article does not focus on responses or updates to past incidents, so it is not Complementary Information, nor is it unrelated to AI harms.
Warnings raised about AI risks

2026-05-09
El Diario de Yucatán
Why's our monitor labelling this an incident or hazard?
The article centers on warnings and concerns about possible future harms from AI, such as impersonation and misuse of neural data, which could plausibly lead to incidents if unaddressed. However, no actual AI-related harm or incident is reported. The content fits the definition of an AI Hazard, as it outlines credible risks that could plausibly lead to AI Incidents but does not describe any direct or indirect harm that has already occurred.
Warning issued on AI risks; alert raised over commercialization of neural data

2026-05-08
www.xeu.mx
Why's our monitor labelling this an incident or hazard?
The article centers on expert opinions and warnings about potential AI risks, such as impersonation and neural data commercialization, which could plausibly lead to harm in the future. There is no mention of realized harm, malfunction, or misuse of AI systems causing injury, rights violations, or other harms. Therefore, the event qualifies as an AI Hazard because it outlines credible potential risks from AI development and use but does not report an actual AI Incident. It is not Complementary Information since it is not updating or responding to a prior incident, nor is it unrelated as it clearly involves AI systems and their societal implications.
Carme Artigas, on locating Aesia in A Coruña: "It is a great opportunity to create an ecosystem"

2026-05-08
El Ideal gallego
Why's our monitor labelling this an incident or hazard?
The article centers on AI governance, regulatory strategy, and the educational use of AI, without describing any event in which an AI system has caused or could cause harm. It mainly provides complementary information about AI ecosystem development, trust in AI, and responsible AI innovation in Europe. It therefore does not meet the criteria for an AI Incident or AI Hazard, but fits as Complementary Information that enhances understanding of AI governance and the surrounding ecosystem.