Global Deepfake Threats Undermine Elections and Public Trust

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos and voice clones have been used worldwide to spread scams and election misinformation, targeting figures ranging from Singapore's PM Lee and Indonesia's leaders to rival campaigns in India. Experts warn that these convincing fakes, enabled by rapid advances in generative AI, are influencing voter behavior and exposing insufficient platform and government safeguards.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI systems to create deepfake videos and voice clones that are being actively used in election campaigns across India, Indonesia, Bangladesh, and Pakistan. These AI-generated deepfakes are misleading voters, spreading disinformation, and influencing election outcomes, which constitutes harm to communities and democratic processes. The involvement of AI in generating and disseminating this misinformation is direct and pivotal. Since the harm is realized and ongoing, this is classified as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability; Transparency & explainability; Robustness & digital security; Safety; Respect of human rights; Privacy & data governance; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Government, security, and defence; Digital security; Financial and insurance services

Affected stakeholders
General public; Government

Harm types
Public interest; Economic/Property; Reputational; Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Deepfakes deceive voters from India to Indonesia before elections - ET Telecom

2024-01-03
ETTelecom.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems to create deepfake videos and voice clones that are being actively used in election campaigns across India, Indonesia, Bangladesh, and Pakistan. These AI-generated deepfakes are misleading voters, spreading disinformation, and influencing election outcomes, which constitutes harm to communities and democratic processes. The involvement of AI in generating and disseminating this misinformation is direct and pivotal. Since the harm is realized and ongoing, this is classified as an AI Incident rather than a hazard or complementary information.
Deepfakes deceive voters in India, Pakistan before elections

2024-01-04
Dawn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems to create deepfake videos and voice clones that are being disseminated during election campaigns in India, Pakistan, Bangladesh, and Indonesia. These AI-generated synthetic media pieces are misleading voters, influencing election outcomes, and spreading disinformation, which constitutes harm to communities and a violation of democratic rights. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident under the framework because the AI system's use has directly led to significant harm to communities and democratic processes.
Americans face flood of AI & deepfake propaganda, says ex-White House intel boss

2024-01-02
The Sun
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI systems (generative AI tools like ChatGPT, MidJourney, and deepfake technologies) to create manipulated audio, video, and images that are actively influencing public opinion and elections. It documents actual incidents of AI-generated misinformation and deepfakes being disseminated, causing harm to political figures and the democratic process. The harms include misinformation, social division, and potential election interference, which fall under violations of rights and harm to communities. The involvement of AI in these harms is direct and ongoing, not merely potential. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Deepfakes deceive voters from India to Indonesia before elections

2024-01-03
The Hindu
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (generative AI creating deepfakes) and their use in political campaigns to spread misinformation that influences voters. This constitutes a violation of rights related to fair democratic processes and causes harm to communities by undermining informed voting. The harm is realized and ongoing, not merely potential, as deepfakes have been widely circulated and have affected voter behavior in multiple countries. Therefore, this qualifies as an AI Incident under the OECD framework because the AI system's use has directly led to significant harm to communities and democratic processes.
Deepfakes deceive voters from India to Indonesia before elections

2024-01-03
bdnews24.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation and dissemination of deepfake videos and AI-generated synthetic media used in political campaigns and election interference. The harms include misinformation that misleads voters, potentially influencing election outcomes and undermining democratic processes, which qualifies as harm to communities and a violation of rights. Since the article reports that these deepfakes are actively circulating and misleading people, the harm is realized rather than potential. Therefore, this qualifies as an AI Incident due to the direct involvement of AI-generated content causing harm to communities and democratic integrity.
Deepfakes deceive voters from India to Indonesia before elections

2024-01-03
The Star
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI creating deepfakes) whose use has directly led to harm in the form of misinformation and manipulation of voters, which is a harm to communities and democratic rights. The article documents actual instances of AI-generated content influencing elections in India, Indonesia, Bangladesh, and Pakistan, with authorities and experts expressing concern about the impact. This meets the criteria for an AI Incident because the AI system's use has directly caused harm through disinformation affecting elections and voter behavior.
Experts fear AI deepfakes can deceive voters in Pakistan, India and other Asian nations

2024-01-03
GEO TV
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems generating deepfake videos and voice clones that are being used in real election campaigns to mislead voters and spread misinformation. This misinformation is already occurring and influencing voter perceptions and behavior, which constitutes harm to communities and democratic processes. The AI system's use is central to the harm described, fulfilling the criteria for an AI Incident. The article does not merely warn about potential future harm but documents ongoing harm caused by AI-generated content in elections.
Deepfakes deceive voters from India to Indonesia before elections

2024-01-03
Deccan Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI tools like Midjourney, Stable Diffusion, OpenAI's Dall-E) creating deepfake content that is actively disseminated and misleading voters in India, Indonesia, Bangladesh, and Pakistan. The harm is direct and ongoing, as these deepfakes distort political information, manipulate voter perceptions, and threaten democratic integrity, which is a clear harm to communities. The involvement of AI in generating and spreading this disinformation is central to the incident. Hence, this is an AI Incident rather than a hazard or complementary information.
Deepfakes deceive voters from India to Indonesia before elections

2024-01-04
DhakaTribune
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI creating deepfakes) whose use has directly led to harm in the form of misinformation affecting elections and voter behavior, which is a harm to communities and democratic rights. The article provides multiple examples of AI-generated content being used in ongoing or recent elections, with authorities and experts expressing concern about the impact. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm to communities through election interference and misinformation.
Democracy in the era of deepfakes

2024-01-02
New Statesman
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos that have been used to mislead voters and manipulate political discourse, which are direct examples of AI systems causing harm to communities and democratic processes. These harms include misinformation, political manipulation, and potential election interference, fitting the definition of an AI Incident. Although some future risks are discussed, the presence of actual incidents of harm places this event in the AI Incident category rather than AI Hazard or Complementary Information.
The Rise of Deepfakes and What They Mean for Security

2024-01-04
InformationWeek
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deepfake generation) being used maliciously to impersonate individuals and spread misinformation, causing realized harms such as unauthorized access to financial systems and manipulation of public opinion. These harms fall under violations of rights and harm to communities, meeting the criteria for an AI Incident. The discussion of actual cases and ongoing attacks confirms that harm is occurring, not just potential. Hence, this is classified as an AI Incident rather than a hazard or complementary information.
Deepfakes deceive voters from India to Indonesia before elections

2024-01-03
Gulf Daily News Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI creating deepfakes) whose use has directly led to harms including misinformation, manipulation of voter behavior, and potential undermining of democratic processes, which constitute harm to communities and violations of rights. The article documents actual occurrences of these harms across several countries, not just potential risks. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant, clearly articulated harms.
Public urged to be on guard as deepfake content will grow more sophisticated: Experts

2024-01-02
singaporelawwatch.sg
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deepfake videos that have been used to spread false information and scams involving public figures, which constitutes harm to communities and violations of rights. The AI systems' use has directly led to misinformation campaigns and public harm. Therefore, this qualifies as an AI Incident. The article also covers responses and mitigation efforts, but the primary focus is on the realized harm caused by AI-generated misinformation.
Deepfakes set to deceive voters in India ahead of national elections - ET Government

2024-01-03
ETGovernment.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems to create deepfake videos and voice clones that are being disseminated to influence elections, which is a direct harm to communities and democratic rights. The harm is realized as these deepfakes are already circulating and misleading voters, with examples given from several countries. The AI systems' use in generating and spreading disinformation that affects election outcomes fits the definition of an AI Incident, as the AI's role is pivotal in causing harm. The article also discusses the lack of adequate platform and governmental responses, reinforcing the ongoing nature of the harm.
FEATURE: Deepfakes deceive voters from India to Indonesia before elections

2024-01-03
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI creating deepfakes) whose use has directly led to harm by spreading misinformation and influencing voter behavior in multiple countries' elections. This meets the definition of an AI Incident because the AI-generated content is actively deceiving voters, causing harm to communities and democratic processes. The article provides concrete examples of deepfakes being used and circulated, not just warnings or potential risks, so it is not merely an AI Hazard or Complementary Information. Therefore, the classification is AI Incident.
Deepfake content deceives prospective voters in Asia ahead of general elections, including in Indonesia

2024-01-04
Liputan 6
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to create deepfake videos and audio that mislead voters, which is a direct use of AI technology. The harm is realized as misinformation is actively spreading and influencing public opinion ahead of elections, which harms communities and the democratic process. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and election interference. The involvement is in the use of AI systems to generate deceptive content, and the harm is occurring, not just potential.
Indonesia enters election season: don't ignore deepfakes

2024-01-06
VIVA.co.id
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake technology) used to generate misleading content. Although the article does not report a specific incident of harm, it clearly outlines the plausible future harm that AI-generated misinformation could cause in the context of elections, such as influencing voter behavior and undermining democratic integrity. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to significant harm to communities and the democratic process.
Deepfakes fool millions of prospective voters in Asia ahead of elections

2024-01-03
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used to create deepfake videos and audio that are actively disseminated to millions of voters, influencing their perceptions and potentially their voting behavior. This constitutes harm to communities and democratic rights, fulfilling the criteria for an AI Incident. The AI system's use in generating and spreading misinformation is directly linked to the harm described. The article also notes the lack of effective regulation and the challenges faced by social media platforms in mitigating this harm, reinforcing the direct role of AI in causing the incident. Hence, this is classified as an AI Incident rather than a hazard or complementary information.
Deepfakes fool millions of prospective voters in Asia ahead of elections

2024-01-03
Tempo Media
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI creating deepfake videos and audio) whose use has directly led to harm in the form of misinformation and disinformation affecting millions of voters in multiple countries. This misinformation can disrupt democratic processes and harm communities by misleading voters. The article explicitly states that these deepfakes are already viral and influencing perceptions, thus constituting an AI Incident. The harms are materialized and significant, meeting the criteria for an AI Incident rather than a hazard or complementary information.
Technology

2024-01-02
Antara News Mataram
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems like deepfake technology and autonomous political bots used to create and spread false information and propaganda, which have directly caused harm by misleading the public and disrupting democratic elections. This fits the definition of an AI Incident because the AI systems' use has directly led to harm to communities and violations of rights. The article also discusses systemic misuse of AI for political manipulation, confirming the realized harm rather than just potential risk.
"Deepfakes", the use of AI, and elections - ANTARA News Aceh

2024-01-03
Antara News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake technology) used to create manipulated videos that have been disseminated and caused misinformation and disinformation around elections. This use of AI has directly led to harm to communities by spreading false narratives and misleading the public, which fits the definition of an AI Incident. The article reports on actual occurrences of such deepfakes being circulated, not just potential or hypothetical risks, thus qualifying as an AI Incident rather than a hazard or complementary information.
This pink-haired 'girl' earns ₹83,000 from every social media post

2024-01-01
News18 India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI creating virtual influencers) and their use in social media marketing. However, it does not describe any injury, rights violation, disruption, or other harm caused by these AI systems. There is no indication of malfunction, misuse, or potential for harm that is credible or imminent. The content focuses on the economic and social impact of AI virtual influencers, which is informative and contextual. Therefore, it fits the definition of Complementary Information, as it provides supporting data and context about AI's evolving role in social media and marketing without reporting an incident or hazard.
Technology is poised to devour jobs! This is the only way to escape

2024-01-03
News18 India
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI systems (chatbots, automation) leading to job displacement, which is a form of economic and social harm (loss of employment). However, it does not describe a specific event where AI use directly or indirectly caused harm to individuals or groups, nor does it report a particular incident of violation or injury. Instead, it discusses the broader trend and potential consequences of AI adoption in the workforce, including both risks and opportunities. Therefore, this is best classified as Complementary Information, providing context and analysis about AI's impact on employment rather than reporting a discrete AI Incident or Hazard.
Deepfake videos are going viral ahead of elections

2024-01-03
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI creating deepfake videos and audio) that have directly led to harm by spreading disinformation and political propaganda during elections, which can influence voter decisions and undermine democratic integrity. This constitutes harm to communities and a violation of rights related to fair political processes. The harm is realized and ongoing, as evidenced by viral deepfake videos affecting elections in multiple countries. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
New Year 2024: These workers could lose their jobs in the new year; here's why the threat looms!

2024-01-03
hindi
Why's our monitor labelling this an incident or hazard?
The article discusses AI's potential impact on employment and job displacement as a future possibility, which fits the definition of an AI Hazard. There is no mention of actual harm or incidents caused by AI yet, only a warning about plausible future harm to jobs. Therefore, it is classified as an AI Hazard.
Essential information | A collaborative regulatory framework is needed for artificial intelligence: RBI Deputy Governor | LatestLY Hindi

2024-01-01
LatestLY हिन्दी
Why's our monitor labelling this an incident or hazard?
The article primarily provides a policy and governance perspective on AI's potential impacts and the need for regulatory frameworks. It does not describe any realized harm or direct incident involving AI systems. The discussion of potential adverse effects and the call for vigilance indicate awareness of plausible future risks but do not report an actual AI hazard event. Therefore, this is best classified as Complementary Information, as it contributes to understanding AI's societal and governance context without reporting a specific AI Incident or AI Hazard.
Opinion: Will 2024 be the year fake news destroys democracy?

2024-01-02
The Star
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfakes and AI-based disinformation that have already begun to proliferate and influence elections, which are ongoing events causing harm to democratic processes and communities. The AI systems involved in generating and spreading fake news and deepfakes have directly contributed to this harm. Therefore, this qualifies as an AI Incident under the framework, as the harm is realized and the AI system's role is pivotal in causing it.
Will 2024 be the year fake news destroys democracy?

2024-01-01
The Korea Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated disinformation and deepfakes that are already circulating and influencing elections, which constitutes direct harm to communities and democratic processes. This fits the definition of an AI Incident, as the development and use of AI systems have directly led to significant harm to communities and democratic integrity. Although the article also discusses potential future risks and responses, the presence of ongoing AI-driven disinformation campaigns causing harm makes this an AI Incident rather than a hazard or complementary information.
Will 2024 be the year fake news destroys democracy? | Opinion

2024-01-02
SunSentinel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfakes and AI-based disinformation that are already proliferating and influencing elections, which directly harms democratic communities and electoral integrity. The harm is realized and ongoing, not merely potential. The AI systems involved in generating and spreading fake news are central to the harm described. Hence, this qualifies as an AI Incident due to the direct and active role of AI in causing harm to communities and democratic rights.
Will 2024 be the year fake news destroys democracy?

2024-01-02
Stars and Stripes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfakes and AI-based disinformation campaigns that have already begun to proliferate and influence elections, such as the AI-generated TikTok video in Indonesia. This constitutes realized harm to communities and democratic processes, fitting the definition of an AI Incident. The harm is direct and ongoing, not merely potential, as the disinformation is actively affecting electoral integrity and public trust. Hence, this is classified as an AI Incident rather than a hazard or complementary information.
Will 2024 Be The Year Fake News Destroys Democracy?

2024-01-01
NDTV Profit
Why's our monitor labelling this an incident or hazard?
While the article highlights the risk of disinformation and digital manipulation that could plausibly be driven by AI technologies (e.g., AI-generated fake news or deepfakes), it does not describe any actual incident of harm caused by AI systems. The focus is on a credible future threat to democratic processes, which aligns with the definition of an AI Hazard rather than an AI Incident or Complementary Information.