AI-Generated Fake News Falsely Links Journalist to Rapper, Causing Harm

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI systems and chatbots generated and spread fake news articles falsely claiming Swiss journalist Celina Euchner was in a relationship with German rapper Kontra K. The misinformation, based on misinterpreted interview data, led to reputational damage and emotional distress for Euchner, highlighting the real-world harm of AI-generated content. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI-generated fake news causing reputational and emotional harm to a person, which fits the definition of an AI Incident. The AI system's outputs (fake articles) directly led to harm (violation of personal rights, emotional distress, reputational damage). The involvement of AI in generating false content and the resulting harm to the individual and community trust clearly meets the criteria for an AI Incident rather than a hazard or complementary information. [AI generated]
AI principles
Accountability, Human wellbeing, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Workers

Harm types
Reputational, Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Fake news: Celina Euchner falsely portrayed as Kontra K's girlfriend

2025-11-09
20 Minuten
AI fake: Journalist portrayed as rapper's girlfriend

2025-11-09
Blick.ch
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake news articles spreading false information about the journalist, causing reputational harm and personal distress. This misuse of the AI system has directly harmed the individual's rights and reputation, fulfilling the criteria for an AI Incident under violations of human rights or personality rights. The harm is realized and ongoing, not merely potential, and the AI system's role in generating and disseminating the false content is pivotal.
AI claims that our journalist is Kontra K's girlfriend - how could that happen?

2025-11-10
Tages Anzeiger
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating and spreading false information (fake news) about a real person, which harms the individual's reputation and misleads the public. The AI systems' outputs have directly caused this harm. The article explicitly states that multiple AI chatbots provided the false claim, and this misinformation is widespread. This fits the definition of an AI Incident as it involves violations of rights and harm to communities due to AI-generated misinformation.
Fake news websites made me the partner of the rapper Kontra K. And there is nothing I can do about it

2025-11-08
Basler Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake news content that falsely portrays the individual in a damaging way, which constitutes a violation of personal rights and causes harm to the individual's reputation and emotional well-being. The AI system's use in creating and spreading these fabricated stories directly leads to harm as defined under violations of human rights and harm to communities. The article details realized harm rather than just potential risk, making this an AI Incident rather than a hazard or complementary information.
"A frightening development": How AI articles are flooding the internet with fake news

2025-11-08
Basler Zeitung
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI chatbots and AI-generated news websites) that have produced and spread false information about real people, causing reputational and psychological harm. The AI systems' outputs are not merely erroneous but have led to real-world consequences, such as distress for the individuals falsely portrayed and misinformation spreading widely online. This meets the criteria for an AI Incident because the AI's use and malfunction have directly led to violations of rights and harm to communities. The article documents realized harm, not just potential harm, so it is neither an AI Hazard nor Complementary Information; and it is not unrelated, because the AI systems are central to the harm described.
Fake news websites made me the partner of the rapper Kontra K. And there is nothing I can do about it

2025-11-08
Berner Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating and spreading false information that directly harms the individual by damaging her reputation and causing emotional distress. The AI-generated fake news is not hypothetical but actively affecting the person, as evidenced by the false stories, public confusion, and personal impact described. The harm is realized and ongoing, meeting the criteria for an AI Incident rather than a hazard or complementary information. The involvement of AI in creating these fake news websites and content is explicit or reasonably inferred, and the harm to personal rights and community reputation is clear.