Study Reveals High Risk of Misinformation from Realistic AI-Generated Faces

A University of Waterloo study found that 39% of participants could not distinguish faces generated by Stable Diffusion and DALL-E from photographs of real people. The finding highlights the significant risk that realistic AI-generated images pose for misinformation and disinformation, as people struggle to detect fakes.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article involves AI systems (Stable Diffusion and DALL-E) generating realistic images of people. Although the study itself does not report an actual incident of harm, it clearly outlines the plausible risk that such AI-generated images could be used maliciously to spread disinformation, which would constitute harm to communities. Therefore, this event fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident involving harm through disinformation and manipulation.[AI generated]

AI principles
Transparency & explainability, Accountability, Safety, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard

Think you can spot an AI-generated person? There's a solid chance you're wrong

2024-03-06
Fast Company
Why's our monitor labelling this an incident or hazard?
The article discusses a study on human perception of AI-generated images but does not describe any harm caused or potential harm that could plausibly arise from the AI systems' development, use, or malfunction. The focus is on research findings about detection difficulty, which is informative but does not constitute an incident or hazard. Therefore, it is complementary information about AI capabilities and challenges in detection, without direct or indirect harm.

Can you tell AI-generated people from real ones?

2024-03-06
EurekAlert!
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Stable Diffusion and DALL-E) generating realistic images of people. Although the study itself does not report an actual incident of harm, it clearly outlines the plausible risk that such AI-generated images could be used maliciously to spread disinformation, which would constitute harm to communities. Therefore, this event fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident involving harm through disinformation and manipulation.

Nearly 40% fooled by these AI-generated faces

2024-03-07
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (Stable Diffusion and DALL-E generating deepfake images) and discusses the harm caused by AI-generated deepfakes, including misinformation and non-consensual explicit content, which are recognized harms to communities and individuals. However, the article does not describe a specific incident where AI use directly or indirectly caused harm in a particular event, nor does it describe a new or imminent hazard event. Instead, it reports on a study assessing human ability to detect AI-generated images and discusses the broader societal implications and past incidents as context. This aligns with the definition of Complementary Information, as it provides supporting data and context about AI harms and the evolving AI ecosystem without reporting a new primary AI Incident or AI Hazard.

Worried About AI-Generated Images? 5 Ways To Find Out

2024-03-09
News18
Why's our monitor labelling this an incident or hazard?
The content centers on informing readers about AI-generated images and how to detect them, which is a form of complementary information aimed at increasing awareness and resilience against potential AI-related misinformation. There is no direct or indirect harm reported, nor a plausible immediate hazard described. The article serves as educational guidance rather than reporting an AI Incident or AI Hazard. Therefore, it fits the definition of Complementary Information.

AI-generated images and video are here: how could they shape research?

2024-03-07
Nature
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (text-to-image and text-to-video generative models) and discusses their use and misuse in scientific research. It references a concrete AI Incident (the publication and retraction of a paper with AI-generated misleading images), but this incident is described as background context rather than the main focus of the article. The main narrative centers on the implications, concerns, and policy responses surrounding AI-generated scientific imagery. No new specific AI Incident or AI Hazard event is reported; rather, the article provides context, expert opinions, and policy developments. Thus, it fits the definition of Complementary Information, enhancing understanding of AI's impact on research without reporting a new primary harm or hazard.

How Difficult Is it To Tell Apart AI-Generated People From Real Ones?

2024-03-07
Technology Networks
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Stable Diffusion and DALL-E) generating realistic images of people. The study reveals a challenge in detecting AI-generated content, which could plausibly lead to harms such as disinformation and manipulation of public opinion, affecting communities and political processes. However, the article does not describe any actual harm having occurred yet, only the potential for such harm. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Real Person or Deepfake? Can You Tell?

2024-03-06
Neuroscience News
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (Stable Diffusion, DALL-E) generating images. The study reveals a challenge in human ability to detect AI-generated images, which could plausibly lead to harms such as disinformation campaigns and societal disruption. However, the article does not describe a specific incident where harm has already occurred, only the potential for such harm. Therefore, this qualifies as an AI Hazard, as the development and use of AI-generated images could plausibly lead to harm in the future, particularly in political and cultural contexts.

Can you tell who's real? Nearly 40% fooled by AI-generated faces

2024-03-06
Study Finds
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Stable Diffusion and DALL-E) generating human faces and discusses the difficulty people have in distinguishing these from real images. Although the study itself does not describe an actual incident of harm, it clearly outlines the credible risk that such AI-generated images could be used for disinformation, which would constitute harm to communities and potentially violate rights. Therefore, this event describes a plausible future harm scenario stemming from AI use, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

AI-Generated vs Real People: Can You Tell Difference?

2024-03-06
Mirage News
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Stable Diffusion and DALL-E) generating realistic images of people. Although the study itself does not report a direct incident of harm, it discusses the credible risk that such images could be used maliciously to spread disinformation and manipulate public opinion. This plausible future harm fits the definition of an AI Hazard, even though no specific incident has yet occurred.

Which face is real? AI generated images and human perception

2024-03-09
Earth.com
Why's our monitor labelling this an incident or hazard?
The article centers on a research study and the broader implications of AI-generated images for misinformation and societal trust. While it clearly involves AI systems and discusses potential harms, it does not describe a concrete incident where harm has occurred due to AI-generated images. The discussion of risks and the call for detection and policy measures indicate a focus on plausible future harms and ecosystem responses rather than a realized incident. Therefore, the article fits best as Complementary Information, providing context and highlighting challenges and responses related to AI-generated content and disinformation.

Can you tell AI-generated people from real ones?

2024-03-06
Scienmag: Latest Science and Health News
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Stable Diffusion, DALL-E) generating images and discusses the potential for these images to be used maliciously, which could plausibly lead to harms such as disinformation and reputational damage. However, it reports no actual harm from these images, only the difficulty of detection and the attendant risks. Because it describes a plausible future risk rather than a realized harm, it fits the definition of an AI Hazard.

Beware of fake faces: AI images fool people almost half the time

2024-03-07
Knowridge Science Report
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (generative models such as Stable Diffusion and DALL-E) that create realistic fake images. While it highlights the significant risk of misinformation and disinformation arising from such images, it documents no actual incident in which they caused injury, rights violations, or community harm. The focus is on the potential for harm and the difficulty of detecting AI fakes, which constitutes a credible risk of future harm. This event therefore fits the definition of an AI Hazard.

Can You Tell Who's Real? Nearly 40% Fooled by AI-generated Faces

2024-03-07
freedomsphoenix.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Stable Diffusion and DALL-E) generating human faces, which participants struggle to distinguish from real images. Although no actual harm has occurred in the study itself, the findings emphasize the plausible future harm from misuse of AI-generated images in disinformation, a recognized societal harm. Therefore, this event fits the definition of an AI Hazard, as it highlights a credible risk that AI-generated content could lead to harm in the future.