Deepfake Image Lawsuit Sparks Political Concerns in Taiwan


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Cao Xingcheng, founder of United Microelectronics Corporation (UMC), has filed a lawsuit against media personality Xie Hanbing, alleging that AI-generated deepfake photos falsely suggesting an extramarital affair have damaged his reputation. He is seeking NT$100 million in damages, which he plans to donate to a political recall effort, amid broader concerns about AI misuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the misuse of an AI system (deepfake generation) to create non-consensual, defamatory content. The circulation of these AI-generated images has directly harmed the individual’s reputation and privacy, constituting a violation of rights. Therefore, it qualifies as an AI incident.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Transparency & explainability; Safety; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Government, security, and defence; Digital security

Affected stakeholders
Other

Harm types
Reputational; Human or fundamental rights; Psychological; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Suspected AI-generated indecent photos: Cao Xingcheng files suit seeking NT$100 million | Deepfake | Wu Sih-yao | The Epoch Times

2025-02-18
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves the misuse of an AI system (deepfake generation) to create non-consensual, defamatory content. The circulation of these AI-generated images has directly harmed the individual’s reputation and privacy, constituting a violation of rights. Therefore, it qualifies as an AI incident.

US Department of Homeland Security report reveals for the first time that China used AI to interfere in Taiwan's election, threatening national security - 看中国新闻网 - 仇佩芬

2025-02-19
看中國
Why's our monitor labelling this an incident or hazard?
This event involves the deliberate use of generative AI systems to produce and spread disinformation that directly threatens electoral integrity, violates citizens’ rights to truthful political information, and poses a national security risk. The harm has materialized in a coordinated campaign of AI-enabled fake content, so it meets the criteria for an AI Incident.

"曹贼"辩称私密照是AI合成,有网友真拿去辨识,结果出炉

2025-02-18
凤凰网 (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article describes a controversy where AI tools are used to verify if photos are AI-generated or manipulated. While AI is involved, the event does not describe any harm caused by the AI system itself, nor does it indicate plausible future harm from the AI's use. The AI's role is limited to analysis and verification, and the main issue is a political and personal scandal. Therefore, this is Complementary Information as it provides context on AI's role in verifying content in a public dispute, without constituting an AI Incident or Hazard.