AI Face-Swapping App Avatarify Removed Amid Privacy and Rights Violations


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The AI face-swapping app Avatarify went viral in China, with related videos amassing over 2.9 billion views, before being removed from the Chinese App Store over concerns about privacy breaches, copyright infringement, and potential misuse for bypassing facial recognition. The app's popularity led to actual privacy and rights violations, prompting regulatory action.[AI generated]

Why's our monitor labelling this an incident or hazard?

The app uses AI to manipulate photos into animated videos, which directly involves an AI system. The widespread use and monetization of these AI-generated videos have led to privacy concerns and copyright infringement, constituting violations of rights and legal obligations. These harms have materialized, as evidenced by the app's removal from the market. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Accountability

Industries
Consumer services
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Human or fundamental rights
Economic/Property

Severity
AI incident

AI system task
Recognition/object detection
Content generation


Articles about this incident or hazard


「螞蟻呀呼」 app reaches 2.5 billion plays; goes viral in mainland China, removed after 7 days

2021-03-04
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The app uses AI to manipulate photos into animated videos, which directly involves an AI system. The widespread use and monetization of these AI-generated videos have led to privacy concerns and copyright infringement, constituting violations of rights and legal obligations. These harms have materialized, as evidenced by the app's removal from the market. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use.

2.9 billion plays on Douyin! 「螞蟻呀嘿」 AI face-swapping app goes viral, removed after 6 days

2021-03-06
UDN
Why's our monitor labelling this an incident or hazard?
The article describes Avatarify, an AI system used for face-swapping and video generation. However, the event focuses on the app's viral popularity and its removal from the Chinese App Store, with speculation about privacy concerns but no confirmed or direct harm caused by the AI system. There is no report of injury, rights violations, or other harms directly or indirectly caused by the AI system. The potential privacy risks are noted but remain speculative and not confirmed as causing harm. Thus, the event does not meet the criteria for an AI Incident or AI Hazard, but fits the definition of Complementary Information, as it provides updates and context about AI system use and regulatory actions.

螞蟻呀嘿 | Video-creation software removed in China over suspected privacy and security risks; could potentially be used to defeat facial recognition

2021-03-05
香港01
Why's our monitor labelling this an incident or hazard?
The app uses AI to generate facial expression videos, which is an AI system. The removal from the app store is due to concerns that the AI system could be misused to compromise facial recognition security, which is a plausible future harm. No actual harm is reported yet, but the potential for misuse that could lead to privacy violations or security breaches qualifies this as an AI Hazard rather than an Incident. The event is not merely general AI news but highlights a credible risk associated with the AI system's use.

「螞蟻呀嘿」 AI face-swapping app removed in mainland China on 6 March; authorities give no explanation

2021-03-06
Apple Daily 蘋果日報
Why's our monitor labelling this an incident or hazard?
The app is an AI system using face-swapping technology to generate videos. Its sudden removal from the Chinese App Store, with speculation about privacy and security issues, indicates a plausible risk of harm related to personal data misuse or privacy violations. However, no direct harm or incident is reported in the article, only a precautionary or regulatory action. Therefore, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no confirmed incident has occurred.

Behind the viral rise and removal of the 「蚂蚁呀嘿」 app: AI face-swapping reignites the data-security controversy

2021-03-07
163.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (AI face-swapping software) whose use has directly caused harms including privacy breaches, potential misuse for scams, and violations of portrait and intellectual property rights. The article documents actual incidents of data security risks and legal infringements linked to the AI system's deployment and use, fulfilling the criteria for an AI Incident. The discussion of regulatory responses and legal frameworks serves as complementary information but does not negate the realized harms. Therefore, this event is best classified as an AI Incident.