AI-Generated Fake Faces Fuel Social Media Manipulation and Scams


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta reports a sharp rise in the use of AI-generated fake faces, created with generative adversarial networks (GANs), for fake social media profiles. These lifelike images enable malicious actors to conduct influence operations, spread propaganda, and perpetrate scams, causing widespread harm by manipulating online discourse and deceiving users.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI-generated faces are being used by threat actors to run influence operations on social media, spreading propaganda and harassment, which harms communities. The AI system (GANs generating fake faces) is directly involved in enabling these harms. The harm is realized, not just potential, as Meta has taken down over 200 such networks. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and manipulation.[AI generated]
AI principles
Accountability
Transparency & explainability
Robustness & digital security
Safety
Respect of human rights
Democracy & human autonomy

Industries
Media, social platforms, and marketing
Digital security
Government, security, and defence

Affected stakeholders
Consumers
General public

Harm types
Economic/Property
Reputational
Psychological
Public interest
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


AI-generated fake faces have become a hallmark of online influence operations

2022-12-15
NPR

Meta Sees Sharp Rise in AI-Generated Fake Profile Photos

2022-12-16
PetaPixel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative adversarial networks) generating fake profile photos used by malicious actors to conduct coordinated inauthentic behaviour, manipulating public debate and harming communities. The AI-generated images are a key tool enabling these activities, and the harm is ongoing and widespread, as evidenced by the disruption of multiple networks across many countries. The direct involvement of AI in causing realized harm qualifies this event as an AI Incident.

Fake Facebook Profile Pictures Generated by AI Now on the Rise! How to Spot Them?

2022-12-18
Tech Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (GANs) to generate fake profile pictures used by scammers and malicious actors on Facebook. This use of AI has directly led to realized harm by enabling deceptive profiles for scams and other malicious purposes, harming individuals and communities. It therefore qualifies as an AI Incident.

AI-generated fake faces have become a hallmark of online influence operations

2022-12-15
KGOU 106.3
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (GANs) to generate fake faces for fake social media accounts used in influence operations spreading propaganda and harassment. This use has directly led to harm to communities by manipulating social media discourse and harassing activists. The involvement of AI is clear and central to the harm described, and the harm is realized, not just potential, so the classification as an AI Incident is appropriate.