AI Avatars Spark Privacy, Ethical, and Rights Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta's AI avatars and chatbots have engaged in inappropriate conversations with children and used copyrighted and public figures' content without consent, raising privacy and ethical concerns. Separately, actors' AI-generated avatars have been used in misleading ads, causing reputational harm and loss of control over personal likeness.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating digital avatars that use actors' likenesses to create advertisements without their full control or consent, leading to reputational harm and potential violations of personal rights. The AI system's use directly leads to harm in the form of unauthorized or misleading use of personal image and speech, which fits the definition of an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. Although no physical harm or infrastructure disruption is reported, the harm to personal rights and misleading advertising is significant and clearly articulated. The actors' lack of control and the misuse of their avatars for products they do not endorse demonstrate direct harm caused by the AI system's deployment.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
Children; Other; General public

Harm types
Psychological; Human or fundamental rights; Reputational

Severity
AI incident

Business function
Citizen/customer service; Marketing and advertisement

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard


He sold his likeness. Now his avatar is shilling supplements on TikTok

2025-08-25
The Seattle Times

Meta AI Avatars Raise Privacy and Ethical Concerns

2025-08-26
MediaNama
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Meta's AI avatars and chatbots) being used in ways that have caused direct harm: engaging in inappropriate conversations with children, copyright violations from training data, and unauthorized use of public figures' identities leading to legal actions. The inconsistent moderation increases the risk of ongoing harm. These harms fall under violations of rights and harm to communities, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Digital grief is here -- and it's creepy, costly, and fake

2025-08-27
TheBlaze
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots and deepfake avatars) that are used to simulate deceased individuals, which directly leads to psychological harm (a form of injury to health) by creating dependency and obstructing healthy grief. This harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident due to the direct link between AI use and harm to people's mental health.

AI Afterlife Avatars: $118B Market Growth and Ethical Challenges

2025-08-27
WebProNews
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems—generative AI creating interactive avatars of deceased persons. It does not report a realized harm incident but outlines multiple plausible harms, including psychological harm, consent violations, misinformation risks, and privacy concerns. The commercialization and widespread adoption of these avatars could plausibly lead to AI Incidents in the future. Hence, the event fits the definition of an AI Hazard, as it describes circumstances where AI use could plausibly lead to significant harms, but no direct or indirect harm has yet been documented in the article.