AI-Generated Fake Historical Photos Spread Misinformation Online

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI image generators like Midjourney are producing fake historical photos that are widely shared and mistaken for real, leading to misinformation and distorting public understanding of history. Historians warn this undermines trust in visual evidence and harms communities by eroding historical truth.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (image generators) to create fake historical photos that are being widely shared and accepted as real, which constitutes misinformation and harms the community's trust and understanding of history. This is a clear example of harm to communities (harm category d) caused directly by the use of AI systems. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm through misinformation and distortion of historical facts.[AI generated]
AI principles
Transparency & explainability
Accountability
Safety
Democracy & human autonomy
Robustness & digital security

Industries
Media, social platforms, and marketing
Education and training
Arts, entertainment, and recreation

Affected stakeholders
General public

Harm types
Public interest
Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Fake AI history photos cloud the past

2024-10-16
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (image generators) to create fake historical photos that are being widely shared and accepted as real, which constitutes misinformation and harms the community's trust and understanding of history. This is a clear example of harm to communities (harm category d) caused directly by the use of AI systems. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm through misinformation and distortion of historical facts.

Fake AI history photos cloud the past

2024-10-16
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake historical photos that are widely shared and mistaken for real, causing misinformation and harm to communities by distorting historical understanding. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (harm category d) through the spread of false information. Although the harm is non-physical, it is significant and clearly articulated as undermining public trust and historical accuracy. Therefore, this is an AI Incident.

Fake AI history photos cloud the past

2024-10-16
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake historical images that are widely shared and mistaken for real, leading to misinformation and harm to public understanding and trust in history. The harm is realized and ongoing, affecting communities' perception of history and potentially enabling disinformation. The AI system's use is central to this harm, fulfilling the criteria for an AI Incident under the framework, specifically harm to communities through misinformation and distortion of historical records.

Fake Artificial Intelligence Might Be Clouding Your View Of The Past

2024-10-16
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (image generation models) to create fake historical photos that are being widely shared and accepted as real by some audiences. This has directly led to harm by spreading misinformation and potentially distorting public knowledge and trust in historical records, which qualifies as harm to communities. The harm is realized and ongoing, not merely potential. Therefore, this is an AI Incident rather than a hazard or complementary information. The article does not focus on responses or governance but on the harm caused by the AI-generated content itself.

Fake AI history photos cloud the past

2024-10-16
eNCAnews
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (image generation models) to create fake historical photos that are widely shared and accepted as real by some audiences. This use of AI has directly led to harm to communities by spreading misinformation and distorting historical understanding, which fits the definition of an AI Incident under harm category (d) - harm to communities. The article describes realized harm (misinformation and erosion of trust) rather than just potential harm, so it is not merely a hazard. It is not complementary information because the main focus is on the harm caused by the AI-generated images, not on responses or updates. Therefore, the event is classified as an AI Incident.

Fake AI history photos cloud the past

2024-10-16
WFXG FOX54
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated images are being widely shared and accepted as real historical photos, which risks distorting public understanding and weakening trust in visual evidence. This constitutes harm to communities (harm category d) as misinformation can have significant societal impacts. The AI system's use in generating these fake images is central to the harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and potential erosion of historical truth.

Fake AI history photos cloud the past

2024-10-16
SpaceDaily
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (image generation models) to create fake historical photos that are actively shared and accepted as real by some audiences. This leads to harm to communities by spreading misinformation and undermining trust in historical evidence, which fits the definition of an AI Incident under harm category (d) - harm to communities. The harm is realized as these images are already widely shared and believed, not just a potential risk. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Fake AI history photos cloud the past

2024-10-16
KULR-8 Local News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (image generators like Midjourney) to create fake historical photos. Although no direct harm such as injury or legal violations is reported, the AI-generated images could plausibly lead to harm by spreading misinformation and distorting public understanding of history, which constitutes harm to communities. Therefore, this situation fits the definition of an AI Hazard, as the development and use of AI systems could plausibly lead to an AI Incident involving harm to communities through misinformation and disinformation.

Fake AI history photos cloud the past

2024-10-16
Brattleboro Reformer
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake historical images that are widely shared and accepted as real by some, leading to misinformation and distortion of historical knowledge. This harms communities by undermining trust in visual evidence and potentially spreading false narratives. The harm is realized and directly linked to the use of AI image generation technology. Hence, it meets the criteria for an AI Incident involving harm to communities and informational integrity.

The hunt for fake historical photos created by artificial intelligence is on

2024-10-16
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false images that are actively disseminated on social media, leading to harm to communities through misinformation and manipulation. Since the harm is occurring (spread of false narratives and conspiracy theories), this qualifies as an AI Incident under the harm to communities category. The AI system's use in generating and spreading these fake photos is central to the harm described.

"A tsunami of fake history": how false historical photos distort events of the past

2024-10-15
BFMTV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (image generation models like Midjourney) to create fake historical photos that are widely shared and believed, leading to misinformation and distortion of historical facts. This constitutes harm to communities by spreading false narratives and undermining trust in information sources, which fits the definition of an AI Incident. The harm is realized as these images are actively shared and believed, not merely a potential risk. Therefore, this event qualifies as an AI Incident due to the direct role of AI-generated content in causing societal harm through misinformation.

On social media, fake AI-created photos distort history

2024-10-15
La Presse.ca
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images that are being shared and believed as real historical photos, leading to misinformation and distortion of history. This misinformation harms communities by misleading public perception and trust, which is a form of harm to communities. The AI system's use directly leads to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The harm is realized and ongoing, not just potential.

AI behind a "tsunami" of fake historical photos on social media

2024-10-15
SudOuest.fr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake historical images that are widely shared on social media, leading to misinformation and potential harm to communities by distorting historical facts and public trust. This fits the definition of an AI Incident because the AI's use has directly led to harm to communities through misinformation. The article documents realized harm rather than just potential harm, so it is not an AI Hazard. It is not merely complementary information or unrelated news, as the core issue is the harm caused by AI-generated false content.

On social media, a "tsunami" of fake historical photos...

2024-10-15
Le Devoir
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (Midjourney) to generate fake historical photos that are widely shared and believed, leading to misinformation and distortion of historical knowledge. This misinformation harms communities by altering public understanding and trust, which fits the definition of harm to communities under AI Incident. The harm is realized, not just potential, as these images are actively shared and believed. Hence, the event is an AI Incident rather than a hazard or complementary information.