AI-Generated Political Deepfakes Prompt Midjourney to Consider Ban Ahead of US Election

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI image generators like Midjourney have been used to create fake and misleading images of public figures, including politicians and celebrities, fueling misinformation and disinformation online. In response, Midjourney is considering banning images of Joe Biden and Donald Trump to curb election-related AI-driven disinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (Midjourney's generative AI) and addresses the plausible future harm of AI-generated false political images influencing elections, which could harm communities by spreading misinformation. Since no actual harm has yet occurred but the risk is credible and recognized, this qualifies as an AI Hazard rather than an Incident. The announcement of planned restrictions is a response to this hazard, but the main event is the recognition of plausible future harm from AI-generated political images.[AI generated]
AI principles
Accountability; Safety; Transparency & explainability; Democracy & human autonomy; Human wellbeing

Industries
Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
General public

Harm types
Reputational; Public interest

Severity
AI hazard

Business function
Other

AI system task
Content generation


Articles about this incident or hazard

Midjourney considers concrete steps to prevent the spread of fake images of US presidential election candidates

2024-02-09
Clubic.com
Midjourney considers banning the creation of fake images of Donald Trump or Joe Biden

2024-02-12
BFMTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Midjourney) and its use in generating realistic deepfake images of political figures, which can cause harm to communities by spreading misinformation during elections. Although no specific harm has yet occurred from this decision, the context clearly indicates a credible risk of harm from the AI system's outputs if unrestricted. The announcement is about a planned restriction to mitigate this risk, reflecting a governance response to a plausible future harm scenario. Therefore, this is best classified as Complementary Information, as it provides context and a response to potential AI-related harm rather than reporting an actual incident or hazard.
Midjourney wants to stop you from generating fake images of Trump and Biden

2024-02-12
Toms Guide : actualités high-tech et logiciels
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Midjourney, a generative AI image tool) whose outputs (fake images of political figures) are being used to spread misinformation and interfere with democratic elections, which constitutes harm to communities and a violation of democratic rights. The harm is occurring as the disinformation campaigns are active and influencing voters. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm. The company's consideration of restrictions is a response but does not change the classification of the incident itself.
Banning all images of Joe Biden and Donald Trump on Midjourney is becoming inevitable

2024-02-12
Numerama.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Midjourney, an AI image generation tool) whose use can lead to harm through the creation and dissemination of manipulated political images. Although the article does not report a specific incident of harm, it describes ongoing misuse and the potential for significant harm to political processes and communities, such as misinformation and reputational damage. The decision to ban certain prompts is a preventive measure acknowledging this plausible risk. Hence, the event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving harm to communities and political integrity.
Midjourney could ban the creation of fake images of Donald Trump or Joe Biden

2024-02-12
Atlantico
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Midjourney) used to generate images, including deepfakes. The article discusses the potential for misuse of this AI system to create misleading political images that could influence public opinion or elections, which constitutes a plausible risk of harm to communities and democratic processes. Since no actual harm is reported yet, but the risk is credible and recognized by the system's operators, this qualifies as an AI Hazard rather than an AI Incident.
AI firm considers banning creation of political images for 2024 elections

2024-02-10
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article centers on the potential for AI-generated political images to misinform or distract voters in the upcoming election, which is a plausible future harm. The company is proactively considering restrictions to mitigate this risk. Since no actual harm or incident has been reported, and the main focus is on the potential for harm and governance responses, this fits the definition of an AI Hazard. It is not Complementary Information because the article is not primarily about responses to a past incident but about a potential risk and preventive action. Therefore, the event is best classified as an AI Hazard.
AI firm considers banning creation of political images for 2024 elections

2024-02-10
The Guardian
Why's our monitor labelling this an incident or hazard?
The article centers on the potential for AI-generated political images to be used for misinformation in the 2024 US election, which is a plausible future harm related to AI use. Midjourney's consideration of banning such images is a response to this risk. Since no actual harm or incident has been reported as having occurred due to Midjourney's AI system, and the event is about preventing possible misuse, this qualifies as an AI Hazard. The article also provides broader context about AI and election misinformation but does not describe a realized AI Incident or complementary information about responses to a past incident.
Midjourney to ban Biden, Trump images ahead of 2024 US elections fearing AI-generated misinformation

2024-02-09
Firstpost
Why's our monitor labelling this an incident or hazard?
The article describes a preventive measure being considered by an AI image generation platform to avoid potential misuse that could lead to misinformation during elections. There is no indication that any AI-generated misinformation has already caused harm or disruption. Therefore, this situation represents a plausible future risk of harm related to AI use, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Midjourney Could Ban Trump, Biden Images for the Next Year

2024-02-09
PetaPixel
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Midjourney, an AI image generator) and addresses concerns about misuse of AI-generated images that could influence elections, which is a recognized harm to communities and democratic processes. However, the article primarily reports on planned or potential policy measures to prevent such misuse rather than describing an actual incident of harm occurring or a direct AI-driven harm event. Therefore, it is best classified as Complementary Information, as it provides context on governance and mitigation efforts related to AI misuse risks without reporting a new AI Incident or AI Hazard.
Midjourney CEO Considers Ban on Biden, Trump Images to Curb AI-Driven Election Disinformation

2024-02-09
Tech Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Midjourney's generative AI) used to create manipulated political images, which can lead to political disinformation—a harm to communities. However, the article describes a proposal and ongoing concerns about potential misuse rather than a realized incident of harm. Therefore, it fits the definition of an AI Hazard, as the misuse of AI-generated content could plausibly lead to an AI Incident involving political disinformation and harm to democratic processes. The article also mentions industry responses and preventive measures but does not focus primarily on these, so it is not Complementary Information.
Midjourney mulls Biden and Trump picture ban ahead of election

2024-02-09
ReadWrite
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Midjourney) capable of generating political images, which could be used to create misleading or false political content (deepfakes). The article centers on the potential for such AI-generated content to cause harm to democratic processes by misleading voters, which is a plausible future harm. Since no actual harm or incident has been reported yet, and the focus is on the risk and preventive measures, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
After viral AI images of Taylor Swift, Midjourney to ban Trump images

2024-02-11
Al Bawaba
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI systems (generative AI tools such as Midjourney and DALL-E) to create fake images that have already circulated and caused misinformation, which is a harm to communities by spreading false information and potentially influencing political processes. The harm is realized, as fake images have been widely shared, and the companies' responses are attempts to mitigate ongoing harm. Therefore, this qualifies as an AI Incident due to the direct role of AI-generated content in causing misinformation and potential political harm.
Midjourney Considers Ban on Biden and Trump Images Amid Election Concerns

2024-02-09
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Midjourney's AI image generation tool) whose use could plausibly lead to harm in the form of misinformation influencing elections, which is a harm to communities and democratic processes. However, the article focuses on the potential risk and the company's proactive measures rather than describing any realized harm. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving political misinformation and harm to communities.
5 things about AI you may have missed today: Midjourney may ban some images, bad news for coders, and more

2024-02-11
Bollyinside - Breaking & latest News worldwide
Why's our monitor labelling this an incident or hazard?
Midjourney's potential ban on political images is a preventive measure addressing possible future misuse but does not describe an actual incident or hazard. The Assam AI Pilot project is a positive application with no reported harm or malfunction. The coder replacement prediction is speculative and not linked to a specific event causing or plausibly leading to harm. Therefore, the article mainly provides complementary information about AI developments, governance considerations, and positive applications, without describing an AI Incident or AI Hazard.
[Finance Weekly: Tech Trends] The EU leads the way: AI Act regulates ChatGPT and biometric recognition

2024-02-11
Liberty Times Net (自由時報電子報)
Why's our monitor labelling this an incident or hazard?
The article primarily focuses on the regulatory and governance responses to AI technologies, including the EU's AI Act and international discussions on AI risks and management. It does not describe any specific AI Incident or AI Hazard event where harm has occurred or is imminent. Instead, it provides complementary information about AI ecosystem developments, policy measures, and societal implications, which fits the definition of Complementary Information.
To keep AI from influencing the US presidential election, Midjourney plans to bar users from generating "politics-related images"

2024-02-11
NOWnews (今日新聞)
Why's our monitor labelling this an incident or hazard?
Midjourney's AI system is explicitly involved as it generates realistic fake images of political figures, which have already caused misinformation and social harm, such as false images of political candidates and AI-synthesized audio urging voters not to vote. These constitute violations of rights and harm to communities. The company's decision to restrict political image generation is a response to these harms. Therefore, the event describes an AI Incident due to realized harm from AI-generated misinformation impacting the US presidential election.