Experts Warn AI-Driven Misinformation Could Threaten 2024-25 Elections


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Risk experts, including those at the WEF, warn that rapidly advancing AI could enable large-scale misinformation in the 2024-25 elections, which span more than 50 countries and some 4 billion voters, posing a serious threat to democratic processes. Business leaders and policymakers are urged to prepare safeguards against AI-generated disinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems as the source of misinformation and disinformation risks around elections, which could plausibly lead to harm to communities and democratic rights (harm categories c and d). However, it only presents expert warnings and risk assessments without describing any actual AI-driven misinformation incidents causing harm. Thus, it fits the definition of an AI Hazard, as the AI involvement could plausibly lead to an AI Incident but no incident has yet occurred.[AI generated]
AI principles
Democracy & human autonomy; Respect of human rights; Transparency & explainability; Accountability; Robustness & digital security; Safety

Industries
Government, security, and defence; Media, social platforms, and marketing; Digital security

Affected stakeholders
General public

Harm types
Public interest; Human or fundamental rights

Severity
AI hazard

Business function
Marketing and advertisement; Monitoring and quality control

AI system task
Content generation; Organisation/recommenders


Articles about this incident or hazard


The biggest election year in world history: AI warning - Sözcü Gazetesi

2024-01-11
Sözcü Gazetesi
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems as the source of misinformation and disinformation risks around elections, which could plausibly lead to harm to communities and democratic rights (harm categories c and d). However, it only presents expert warnings and risk assessments without describing any actual AI-driven misinformation incidents causing harm. Thus, it fits the definition of an AI Hazard, as the AI involvement could plausibly lead to an AI Incident but no incident has yet occurred.

AI-Driven Misinformation Is a Major Risk in Elections

2024-01-11
Haberler
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems as it discusses AI-generated misinformation and disinformation. The nature of involvement is the potential use of AI to manipulate elections and spread false information. No actual harm has been reported yet; the article focuses on expert risk assessments and warnings about plausible future harms. Therefore, this event fits the definition of an AI Hazard, as it describes a credible risk that AI-driven misinformation could plausibly lead to significant harm in democratic processes if not addressed.

WEF Global Risks Report lists the 10 biggest threats facing humanity: 'Disinformation' ranks first

2024-01-11
Cumhuriyet
Why's our monitor labelling this an incident or hazard?
The report identifies AI-related misinformation and disinformation as significant risks but does not report any actual harm or incident caused by AI systems. The focus is on potential future risks and systemic challenges rather than a concrete event involving AI malfunction or misuse. Therefore, this is best classified as Complementary Information, providing context and risk assessment about AI's role in global challenges without describing a specific AI Incident or Hazard.

Another area where artificial intelligence poses a threat: Election results

2024-01-11
Cumhuriyet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems as the source of misinformation and disinformation risks in elections, which could plausibly lead to harm to communities and democratic processes (harm category d). However, it does not describe any actual AI-driven misinformation incidents that have caused harm yet. The focus is on expert risk assessments, warnings, and governance responses to this potential threat. This fits the definition of an AI Hazard, as it concerns plausible future harm from AI systems' use in election misinformation and disinformation, without evidence of realized harm at this time.

AI warning from experts: election results at risk

2024-01-11
Türkiye
Why's our monitor labelling this an incident or hazard?
The article centers on expert assessments and warnings about the potential for AI systems to be used in spreading misinformation and disinformation that could affect election outcomes. No actual harm or incident is described as having occurred yet; the risks are prospective and concern possible future misuse of AI. Therefore, this qualifies as an AI Hazard, as the development and use of AI systems could plausibly lead to harm (manipulation of democratic processes) but no direct or indirect harm has been reported at this time.