SVP MP Andreas Glarner ordered to pay after deepfake video of Green MP Sibel Arslan


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Andreas Glarner, a Swiss SVP MP, published an AI-generated deepfake of Green MP Sibel Arslan on social media days before the 2023 elections. The Basel civil court ruled he violated her personality rights, ordering him to delete the video and pay nearly CHF 4,000 in damages and legal fees.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves a deepfake video, which is a product of AI technology used to create realistic but fake video content. The publication of this AI-generated content harmed Sibel Arslan's reputation and personal rights, leading to legal consequences and financial penalties for the publisher. This fits the definition of an AI Incident because the AI system's use directly led to harm (a violation of personality rights and reputational harm).[AI generated]
AI principles
Accountability
Privacy & data governance
Respect of human rights
Transparency & explainability
Safety
Robustness & digital security
Democracy & human autonomy

Industries
Media, social platforms, and marketing
Government, security, and defence

Affected stakeholders
Government

Harm types
Reputational
Human or fundamental rights
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Deepfake video: Glarner must pay Arslan's lawyer

2024-01-05
Luzerner Zeitung
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is a product of AI technology used to create realistic but fake video content. The publication of this AI-generated content harmed Sibel Arslan's reputation and personal rights, leading to legal consequences and financial penalties for the publisher. This fits the definition of an AI Incident because the AI system's use directly led to harm (a violation of personality rights and reputational harm).

Glarner must pay Arslan's legal fees

2024-01-05
www.Bluewin.ch
Why's our monitor labelling this an incident or hazard?
The event describes the use of a deepfake video created with AI technology that falsely attributes statements to a politician, constituting a violation of her personality rights. The court ruling and legal consequences directly relate to the harm caused by the AI-generated content. Since the AI system's use directly led to a violation of rights and legal harm, this qualifies as an AI Incident under the framework.

Lawsuit by Sibel Arslan: National Councillor Andreas Glarner must pay over fake video

2024-01-05
SRF News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a fake video (deepfake) that misrepresents a public figure, which constitutes a violation of personal rights and can cause reputational harm. The AI system's use directly led to harm (violation of rights and reputational damage), and the court ruling confirms the harm has materialized. Therefore, this qualifies as an AI Incident under the category of violations of human rights or breach of obligations intended to protect fundamental rights.

After deepfake video: Andreas Glarner must pay Sibel Arslan's lawyer

2024-01-05
watson.ch
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate a deepfake video, which is an AI system's output. The deepfake caused reputational harm and public confusion, which can be considered harm to communities and violation of rights (e.g., personal rights, possibly defamation). The legal ruling confirms that harm has occurred and that the AI system's use was a contributing factor. Hence, this is an AI Incident rather than a hazard or complementary information.

Court ruling after fake video: Andreas Glarner must pay Green politician Sibel Arslan's lawyer

2024-01-05
Basler Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create a deepfake video, which is an AI-generated manipulated video content. The use of this AI system directly led to harm in the form of violation of personality rights, a breach of legal protections for the individual involved. The court ruling confirms that harm occurred due to the AI system's use. Therefore, this qualifies as an AI Incident because the AI system's use directly caused a violation of rights and harm to the individual.

National Councillor Andreas Glarner must pay over fake video

2024-01-05
Jungfrau Zeitung
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a fake video (deepfake) that falsely portrayed a politician making statements contrary to her actual views. This misuse of AI directly led to harm in the form of violation of personality rights, a legal breach, and reputational damage. The court ruling and penalties confirm that harm has materialized due to the AI-generated content. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of applicable law protecting fundamental rights.

Andreas Glarner must pay Sibel Arslan's lawyer

2024-01-05
20 Minuten
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video, which is a form of AI-generated content manipulation. The video was published and spread, causing harm by misrepresenting a political figure and misleading the public. This meets the criteria for an AI Incident as the AI system's use directly led to harm in terms of violation of rights and harm to communities. Therefore, the classification is AI Incident.