AI-Generated Celebrity Selfie Sparks Deepfake Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Kevin Xu, a US entrepreneur, used Google's AI image generator 'Nano Banana' to create a convincing fake selfie with K-pop star Lisa. The viral image, initially presented as real, raised global alarm about the potential for AI-generated images to fuel scams and misinformation, prompting calls for stricter AI regulation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system to generate synthetic images (deepfakes) that are realistic enough to deceive people. Although no actual harm has occurred yet, the article highlights the credible risk that such AI-generated images could be used maliciously for scams or deception, which could lead to violations of rights or harm to communities. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident in the future, but no direct harm has been reported yet.[AI generated]
AI principles
Accountability; Transparency & explainability; Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
General public; Other

Harm types
Reputational; Public interest

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard

"We had a great time"... the man who posted a selfie with BLACKPINK's Lisa turns out to be... | JoongAng Ilbo

2025-09-06
JoongAng Ilbo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate synthetic images (deepfakes) that are realistic enough to deceive people. Although no actual harm has occurred yet, the article highlights the credible risk that such AI-generated images could be used maliciously for scams or deception, which could lead to violations of rights or harm to communities. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident in the future, but no direct harm has been reported yet.
"A great time with Lisa, and a selfie to show for it"... the secret behind a US businessman's photo [Global IT Issue]

2025-09-07
Financial News (파이낸셜뉴스)
Why's our monitor labelling this an incident or hazard?
An AI system (Google's generative AI model Nano Banana) was used to create synthetic images that are indistinguishable from real photos. Although no actual harm has occurred yet, the article explicitly warns about the plausible future harm from misuse of such AI-generated images for scams or deception. This fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to harms such as fraud or misinformation. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information because the main focus is on the warning about potential misuse and the demonstration of the AI's capabilities, not on responses or ecosystem updates. Therefore, the event is best classified as an AI Hazard.
A selfie with BLACKPINK's Lisa... world shocked by "AI composite" confession

2025-09-05
Asia Today
Why's our monitor labelling this an incident or hazard?
An AI system (Google's Gemini 2.5 Flash Image model, codenamed Nano Banana) was used to create a synthetic image that fooled people into believing it was real. The article explicitly discusses the risk of such AI-generated images being used for scams and misinformation, which are harms to communities and individuals. However, the article does not report any actual harm or incident resulting from this specific image, only the potential for harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident. The article also includes societal reactions and calls for regulation, but the main focus is on the plausible future harm from AI misuse, not on a realized incident or a governance response alone.
The mystery man in a selfie with BLACKPINK's Lisa turns out to be... "a photo composited with AI"

2025-09-06
MK Sports
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a synthetic image that falsely represents a real-world event (a selfie with a celebrity). While no direct harm has yet occurred, the article warns about the plausible future harm of scams and misinformation resulting from such AI-generated fake images. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to significant harm (fraud, deception, harm to communities). There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information because it focuses on the risk and demonstration of AI-generated fake content and its implications.
A friendly selfie with BLACKPINK's Lisa... turned out to be an 'AI composite'

2025-09-06
MyDaily
Why's our monitor labelling this an incident or hazard?
An AI system (Google's image generation model 'Nano Banana') was used to create a synthetic image that could plausibly be used to deceive people, potentially causing harm such as fraud or social disruption. Although no actual harm has yet occurred, the article explicitly warns about the credible risk of future misuse leading to harm. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving harm to communities or individuals through deception and fraud.
Why the controversy over the businessman who posted a 'selfie with BLACKPINK's Lisa'?

2025-09-07
Asia Economy (아시아경제)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system to generate a synthetic image that is indistinguishable from reality, raising concerns about deepfake crimes and potential scams. Although no direct harm has occurred, the warning about future misuse and the potential for large-scale fraud indicates a credible risk of harm. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident in the future. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information because the main focus is on the risk posed by AI-generated deepfakes, not on responses or ecosystem updates.
"Had a great time with Lisa"... the truth behind a US businessman's posted selfie

2025-09-07
Seoul Economic Daily
Why's our monitor labelling this an incident or hazard?
An AI system (Google's image generation model 'Nano Banana') was used to create a synthetic image that could plausibly lead to harm such as fraud or misinformation. Although no direct harm has yet occurred, the article emphasizes the credible risk of future harm from misuse of such AI-generated images. Therefore, this event constitutes an AI Hazard due to the plausible future harm from AI-generated deepfakes and their potential misuse in scams and deception.
"Turning my dreams into reality"... what is this 'Banana' everyone calls the hottest thing right now? [The Influencer]

2025-09-08
MK Sports
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake image of a person posing with a celebrity, which was then shared publicly. Although no direct harm is reported yet, the creator warns about the potential for scammers to misuse such AI-generated images, implying a credible risk of future harm such as deception or fraud. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to incidents involving harm to communities or individuals through misinformation or scams.
The businessman who said he had a great time with Lisa... why 'absurd' reactions poured in [News Now]

2025-09-09
YTN
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the image was generated by an advanced AI model ('Gemini 2.5 Flash Image'). The use of this AI system to create a fake image that could be used to deceive people constitutes a plausible risk of harm, specifically harm to communities through misinformation and potential fraud. Although no direct harm has yet occurred, the warning about scammers filling feeds with such fake images indicates a credible risk of future harm. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving deception and fraud.