Study Finds AI-Generated Faces More Trustworthy Than Real Faces, Raising Fraud Risks

Multiple studies led by Hany Farid at UC Berkeley show that AI-generated faces, created using GANs like NVIDIA's StyleGAN2, are now indistinguishable from real faces and are often rated as more trustworthy. This raises significant risks for fraud, deception, and erosion of public trust, with documented cases of AI-generated voices already enabling financial scams.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating synthetic faces (AI System involvement). The article discusses the potential misuse of these AI-generated faces for fraud and disinformation, which could plausibly lead to harm to communities (harm category d). Although no specific incident of harm has been reported yet, the risk is credible and highlighted by researchers, so this qualifies as an AI Hazard. The article also includes recommendations for mitigation, but its primary focus is the plausible future harm from these AI-generated faces.[AI generated]
AI principles
Accountability, Robustness & digital security, Safety, Transparency & explainability, Privacy & data governance, Respect of human rights

Industries
Media, social platforms, and marketing; Digital security; Financial and insurance services

Affected stakeholders
Consumers, General public

Harm types
Economic/Property, Reputational, Public interest, Psychological

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard

AI-created faces can no longer be detected with the naked eye (and that's bad news)

2022-02-21
01net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic faces (AI System involvement). The article discusses the potential misuse of these AI-generated faces for fraud and disinformation, which could plausibly lead to harm to communities (harm category d). Although no specific incident of harm has been reported yet, the risk is credible and highlighted by researchers, so this qualifies as an AI Hazard. The article also includes recommendations for mitigation, but its primary focus is the plausible future harm from these AI-generated faces.

Faces created by an AI appear truer than life to...

2022-02-21
Futura
Why's our monitor labelling this an incident or hazard?
The article discusses the capabilities of AI systems to generate highly realistic synthetic faces and the potential for misuse in spreading false information (deepfakes). While it raises concerns about trust and misinformation, it does not describe any actual harm or incident resulting from these AI systems. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides complementary information about the state of AI-generated synthetic faces, their perception by humans, and the implications for trust and misinformation, which fits the definition of Complementary Information.

Deepfakes: a face that inspires trust - Sciences et Avenir

2022-02-22
Sciences et Avenir
Why's our monitor labelling this an incident or hazard?
The article centers on a research study about deepfake faces generated by AI and how people perceive them, particularly their trustworthiness. While it involves AI systems (deepfake generation algorithms), it does not describe any realized harm or direct/indirect incident caused by these AI systems. Nor does it describe a plausible future harm event. Instead, it provides empirical data that enhances understanding of AI's societal effects, fitting the definition of Complementary Information rather than an Incident or Hazard.

AI: fake faces increasingly difficult to identify

2022-02-22
Siècle Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating synthetic faces, which is an AI system by definition. However, it does not report any realized harm (such as misinformation campaigns, fraud, or rights violations) directly caused by these AI-generated faces. The mention of a past fake profile incident is historical context, not a new incident. The study results and concerns about accessibility of such technology provide important context and raise awareness but do not constitute an AI Incident or AI Hazard. Hence, the article fits the definition of Complementary Information, providing supporting data and societal context about AI-generated synthetic faces and their detection challenges.

Deepfakes can seem "more trustworthy" than real faces

2022-02-24
Next INpact.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate synthetic faces (deepfakes), which are shown to be convincingly realistic and even perceived as more trustworthy than real faces. Although no direct harm has yet occurred, the study explicitly raises concerns about the potential misuse of these AI-generated faces for harmful purposes. Therefore, this constitutes an AI Hazard, as the development and use of deepfake AI systems could plausibly lead to incidents involving harm to communities through deception and misinformation.

We can't tell apart deepfakes from real people but we 'trust' them more

2022-02-23
TRT World
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative adversarial networks) to create highly realistic synthetic faces (deepfakes). The article presents the threat these deepfakes pose to media consumers through fraud and disinformation campaigns, which can harm communities and societies, as harm that is already being realized. Although it does not describe a specific incident of harm occurring, it clearly identifies ongoing risks associated with the use of AI-generated deepfakes. Therefore, this qualifies as an AI Incident due to the direct link between AI-generated content and harm to communities through misinformation and trust manipulation.

People trust AI-generated deepfake faces more than real ones

2022-02-25
The Express Tribune
Why's our monitor labelling this an incident or hazard?
While the article involves AI systems generating deepfake faces, it does not describe any realized harm or incident resulting from these AI-generated images. The research findings highlight a potential risk in human trust towards AI-generated content, but no direct or indirect harm has occurred or is reported. Therefore, this is not an AI Incident or AI Hazard. The article provides complementary information about AI capabilities and societal perception, which helps understand the broader AI ecosystem and potential implications but does not itself describe a harm or credible risk event.

Unsettling study claims that humans can no longer differentiate between AI-generated and real faces with certainty

2022-02-23
Notebookcheck
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic facial images, which humans struggle to reliably distinguish from real images. While no direct harm has been reported, the potential for misuse (e.g., deception, misinformation) is clearly articulated and plausible. Therefore, this qualifies as an AI Hazard, as the development and use of these AI-generated images could plausibly lead to harms such as violations of rights or harm to communities if abused. The article does not describe an actual incident of harm, nor does it focus on responses or governance measures, so it is not an AI Incident or Complementary Information.

People find AI-generated faces to be more trustworthy than real faces -- and it could be a problem

2022-02-24
ZME Science
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems that generate synthetic human faces. The study demonstrates that people cannot reliably distinguish these AI-generated faces from real ones and tend to trust them more, which creates a plausible pathway for misuse such as fraud, disinformation, and propaganda. This constitutes a credible risk of harm to communities and individuals through deception and manipulation. While the article does not report a specific realized incident of harm, it clearly identifies a plausible future harm scenario stemming from the use of AI-generated faces. Therefore, this event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

People aren't very good at identifying deepfakes, study finds

2022-02-23
Input
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (GANs generating deepfake faces) and discusses the implications of this technology's misuse, which could plausibly lead to harms such as misinformation, erosion of trust, and manipulation of individuals or communities. However, the article does not describe any actual harm or incident occurring from the use of deepfakes; rather, it presents research findings and a cautionary note about potential future risks. Therefore, this qualifies as an AI Hazard, as the technology's misuse could plausibly lead to an AI Incident in the future, but no direct or indirect harm has yet been reported.

Would you trust a deepfake face more than a real one?

2022-02-23
Silicon Republic
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake generation using AI) and discusses their use and societal impact. However, the article does not describe any actual harm or incident caused by the AI system, nor does it report a specific event where harm occurred or was narrowly avoided. Instead, it presents research findings and ethical considerations, which provide complementary context to understanding AI's societal implications. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Study finds deepfake-generated faces win trust more easily than real people - cnBeta.COM mobile edition

2022-02-23
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (GAN-based deepfake generation) whose outputs (synthetic faces) have been shown to deceive people, leading to potential harms such as fraud and erosion of trust in media, which are harms to communities and individuals. Since the harm is ongoing and demonstrated by the study, this qualifies as an AI Incident. The AI system's use has directly led to a harm scenario (deception and trust manipulation).

Study: AI-generated faces gain trust more easily than real faces - AI - cnBeta.COM

2022-02-21
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (GAN) to generate synthetic faces that are more trusted than real ones, which directly leads to a harm scenario where people can be deceived. This deception can cause harm to individuals and communities by enabling malicious uses such as fraud, misinformation, or manipulation, fitting the definition of harm to communities and violation of trust. The study documents realized harm (people being misled) and highlights the risks of further misuse. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

More lifelike than real people! This time NVIDIA really can fool the whole world

2022-02-22
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (NVIDIA StyleGAN2) used to generate synthetic human faces. It does not report a realized harm incident but discusses credible risks of misuse, such as fraud or social confusion, which could lead to harm to communities or individuals. The research and expert opinions cited support the plausibility of future harm. The article also mentions ongoing efforts to develop authenticity verification technologies as a response, but these do not negate the potential hazard. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Humans find AI-generated faces more trustworthy than real faces

2022-02-21
Lighthouse @ Newquay
Why's our monitor labelling this an incident or hazard?
The article describes a study on AI-generated deepfake faces, which are AI systems producing synthetic human faces. The study reveals a plausible risk that these AI-generated faces could be exploited for harmful purposes, such as deception or fraud, but no actual harm or incident has been reported. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm but has not yet caused any direct or indirect harm.

If there were a "face-sculpting contest", humans could only take second place

2022-02-23
爱范儿
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (GAN-based image synthesis) and discusses their use and potential misuse. However, it does not describe a specific incident where harm has already occurred due to AI-generated faces. Instead, it focuses on research results, potential future risks, and technological and governance responses (such as authenticity certification initiatives). Therefore, the event is best classified as Complementary Information, as it provides important context and understanding about AI-generated faces and their societal implications without reporting a concrete AI Incident or an immediate AI Hazard.

Deepfakes keep evolving: infinitely close to reality, yet still not real - 36氪

2022-02-25
36氪:关注互联网创业
Why's our monitor labelling this an incident or hazard?
The article discusses AI systems involved in deepfake generation and AI voice synthesis, which have been used maliciously in at least one documented fraud case causing financial harm. This constitutes an AI Incident because the AI system's use directly led to harm (fraud and financial loss). The article also elaborates on broader potential harms and societal challenges but does not report new incidents beyond the referenced fraud case. Therefore, the main classification is AI Incident due to the documented fraud case involving AI voice synthesis. The rest of the article serves as complementary information providing context and discussion on the technology's evolution and implications.

If there were a face-sculpting contest, humans could only take second place

2022-02-23
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (GANs, StyleGAN2) used to generate synthetic human faces. The use of these AI systems has directly led to harms such as the potential for fraud and social disruption, as the AI-generated faces are highly realistic and trusted more than real faces in some experiments. This meets the criteria for harm to communities and social harm under the AI Incident definition. The article also discusses the societal risks and the need for authentication measures, but the primary focus is on the realized harm and risks from AI-generated fake faces, not just potential future harm or complementary information. Hence, the classification is AI Incident.

Deepfakes keep evolving: infinitely close to reality, yet still not real

2022-02-25
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deepfake AI for face and voice synthesis) and their use leading to real harms, such as a documented fraud case where AI voice synthesis was used to trick an employee into transferring funds, which constitutes harm to persons and property. It also discusses the broader societal harm of misinformation and erosion of trust caused by AI-generated content. These are direct or indirect harms caused by AI system use, fitting the definition of an AI Incident. Additionally, the article provides context on regulatory and ethical responses, but the primary focus is on the harms already occurring and their implications, not just potential future risks or general AI news. Therefore, the classification is AI Incident.