Starship Entertainment Apologizes for Sharing Deepfake Content of IVE's An Yujin

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Starship Entertainment faced backlash after reposting AI-generated deepfake images of IVE's An Yujin with defamatory captions on Weibo. The agency apologized, explaining it was a mistake during content reporting, and deleted the post. They reassigned the responsible staff and promised improved account management to prevent future incidents.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves actual harm caused by AI-generated content (deepfakes) that was posted and then publicly apologized for. This is not a potential risk but a realized incident of defamation and harassment via an AI system, meeting the criteria for an AI Incident.[AI generated]
AI principles
Transparency & explainability; Accountability; Respect of human rights; Privacy & data governance; Safety; Human wellbeing

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation

Affected stakeholders
Other

Harm types
Reputational; Psychological; Human or fundamental rights

Severity
AI incident

Business function
Marketing and advertisement; Monitoring and quality control

AI system task
Content generation


Articles about this incident or hazard

Starship Entertainment Releases New Statement On Reposting Update Calling IVE's Yujin A Sl*t - Koreaboo

2025-01-15
Koreaboo
Why's our monitor labelling this an incident or hazard?
The event involves actual harm caused by AI-generated content (deepfakes) that was posted and then publicly apologized for. This is not a potential risk but a realized incident of defamation and harassment via an AI system, meeting the criteria for an AI Incident.
An Yujin deepfake controversy: IVE's agency apologises for sharing offensive AI content of K-pop star

2025-01-15
The Economic Times
Why's our monitor labelling this an incident or hazard?
Starship Entertainment’s staff used AI to produce and post a harmful deepfake image and caption targeting An Yujin, directly causing distress and reputational damage. This misuse of generative AI for harassment and defamation constitutes an AI Incident.
Starship apologizes for sharing deepfake of IVE's An Yu-jin, disciplines staff

2025-01-16
The Korea Times
Why's our monitor labelling this an incident or hazard?
The incident involves the use of an AI system to generate a deepfake image, which was then shared publicly, causing harm to the person depicted and distress to her fans. The harm is realized and directly linked to the AI system's output. The company's acknowledgment of negligence and disciplinary action further confirms the AI system's role in causing harm. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use.
K-pop: IVE's agency apologizes after sharing deepfakes of An Yu-Jin

2025-01-15
NewsBytes
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system, and its use here is explicit. Sharing deepfakes with malicious intent could cause harm such as reputational damage or emotional distress to individuals or communities. However, the article does not describe any realized physical harm, legal violations, or systemic harm; it focuses primarily on the agency's response and apology after backlash rather than on ongoing or resulting harm. This article is therefore best classified as Complementary Information, since it provides context on and responses to an AI-related issue without describing a new AI Incident or Hazard.
Starship Entertainment Apologizes for Deepfake Photo of IVE's Ahn Yoo-jin, Vows Stronger Safeguards - News Directory 3

2025-01-18
News Directory 3
Why's our monitor labelling this an incident or hazard?
The incident involves the use of AI-generated deepfake technology, which is an AI system capable of creating realistic but fake images. The sharing of the deepfake image caused harm to the artist's reputation and emotional well-being, which falls under violations of rights and harm to individuals. Since the harm has already occurred due to the sharing of the deepfake, this qualifies as an AI Incident. The agency's response and legal actions are complementary but do not change the classification of the event as an incident.