AI Deepfakes Fuel Election Misinformation and Non-Consensual Pornography Crisis

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Generative AI tools are increasingly used to create deepfake images, audio, and videos, leading to widespread misinformation in elections and a surge in non-consensual deepfake pornography. These AI-generated fakes harm individuals’ rights, distort democratic processes, and present significant challenges for detection and prevention.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems generating deepfake content (images, audio, video) that is being disseminated on social media and flagged as false, indicating realized harm. The use of AI-generated deepfakes in political contexts can mislead voters and distort democratic processes, constituting harm to communities. The AI systems' development and use have directly contributed to this harm. Although some content is labeled as parody, the broader phenomenon of AI deepfakes spreading misinformation is a clear AI Incident under the OECD framework, as it involves violations of rights to truthful information and harms communities. The article does not merely warn about potential future harm but documents ongoing issues and examples, confirming it as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability
Privacy & data governance
Respect of human rights
Robustness & digital security
Safety
Transparency & explainability
Democracy & human autonomy
Human wellbeing

Industries
Media, social platforms, and marketing
Government, security, and defence
Digital security

Affected stakeholders
General public
Women

Harm types
Human or fundamental rights
Public interest
Psychological
Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Anti-Trump groups are quietly planning for a deepfake election crisis | Blaze Media

2024-04-12
TheBlaze
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating deepfake content (photos, videos, audio) that could mislead voters and disrupt elections, satisfying the AI-system criterion. The event described is a simulated exercise preparing for a potential deepfake election crisis, indicating plausible future harm rather than realized harm. No actual harm has occurred yet, so it does not qualify as an AI Incident. The article's main focus is the potential for AI-generated deepfakes to cause election disruption and the preparations to mitigate this risk, fitting the definition of an AI Hazard. The article also discusses concerns about bias and narrative control but does not report any actual misuse or harm caused by AI systems at this time.
Spying a copy: Deepfake's effect on a culture

2024-04-16
Liberty University
Why's our monitor labelling this an incident or hazard?
The article describes harms caused by deepfake AI technology, including personal and psychological harm to individuals through non-consensual deepfake pornography — harms that would fit the definition of an AI Incident, given the violation of rights and harm to individuals. However, the article does not report a specific new event or incident; rather, it discusses known issues and examples, including past harms and potential future risks. It therefore serves as complementary information providing context and raising awareness about AI harms, rather than documenting a discrete AI Incident or AI Hazard.
Spot the deepfake: The AI tools undermining our own eyes and ears

2024-04-16
POLITICO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating deepfake content (images, audio, video) that is being disseminated on social media and flagged as false, indicating realized harm. The use of AI-generated deepfakes in political contexts can mislead voters and distort democratic processes, constituting harm to communities. The AI systems' development and use have directly contributed to this harm. Although some content is labeled as parody, the broader phenomenon of AI deepfakes spreading misinformation is a clear AI Incident under the OECD framework, as it involves violations of rights to truthful information and harms communities. The article does not merely warn about potential future harm but documents ongoing issues and examples, confirming it as an AI Incident rather than a hazard or complementary information.
Anti-Trump groups are quietly planning for a deepfake election crisis - Conservative Angle

2024-04-12
Brigitte Gabriel
Why's our monitor labelling this an incident or hazard?
The article centers on a simulated 'Deepfake Dilemma' exercise that imagines AI-generated deepfake content being used to interfere with the 2024 election. The AI system involvement is explicit (deepfake generation, AI-generated voices). The event is about planning and preparing for a potential crisis, not an actual incident of harm. The harm described (election disruption, misinformation, voter confusion) is plausible and credible but has not yet materialized. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to significant harm to communities and democratic processes in the future. There is no indication of an actual AI Incident or realized harm, nor is the article primarily about responses or updates to past incidents, so it is not Complementary Information. It is clearly related to AI systems and their potential misuse, so it is not Unrelated.
Deepfake pornography explosion - Panda Security

2024-04-16
pandasecurity.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create deepfake pornography, which directly causes harm to individuals by violating their rights and dignity. The harm is realized and ongoing, as evidenced by the large number of deepfake videos created and shared. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities. The article's focus is on the harm caused and the challenges in combating it, not merely on legal or technical responses, so it is not Complementary Information. Therefore, the classification is AI Incident.