Snapchat Removes AI Filters After Māori Cultural Outcry


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Snapchat and Instagram faced backlash for AI-powered filters that applied sacred Māori tattoos to users' faces, prompting accusations of cultural appropriation and violation of Māori intellectual property rights. Following the public outcry, Snapchat removed the offending filters from its platform.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems: social media filters that use AI-based facial recognition and modification technology. These filters directly caused harm by appropriating and misrepresenting sacred Māori cultural symbols, violating Māori intellectual property rights and causing cultural harm to the Māori community. Because the filters were available and widely used on Snapchat and Instagram, the harm is realized, not merely potential, and Snapchat's removal of the filters after complaints further confirms that the harm was recognized. The event therefore qualifies as an AI Incident: AI systems were directly involved in violating intellectual property rights and harming a community.[AI generated]
AI principles
Accountability; Fairness; Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Other

Harm types
Human or fundamental rights; Psychological

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Recognition/object detection; Content generation


Articles about this incident or hazard


Growing anger over use of moko, mataora in image filters: 'That's a mockery'

2022-09-04
Stuff

Snapchat removes Māori face tattoo filters after outcry in New Zealand

2022-09-07
The Guardian
Why's our monitor labelling this an incident or hazard?
The filters are AI systems that generate facial tattoo imagery, and their deployment directly caused cultural harm and disrespect toward the Māori people, violating their intellectual property and cultural rights. The harm is realized: the filters were used widely, causing offence and raising legal and ethical concerns. The removal of the filters was a remedial action, but it does not negate the fact that harm occurred. Hence, this is an AI Incident involving violations of rights and harm to communities.

Increasing use of moko, mataora in image filters raises concern for protecting Māori identity | Newshub

2022-09-05
Newshub
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: facial modification filters on social media platforms that use AI technology to project cultural tattoos onto users' faces. The harm is realized and ongoing, as the use of these filters has led to cultural appropriation and violation of Māori intellectual property and identity rights, which constitute harm to communities and breaches of rights. The creation and use of the filters by non-Māori actors without consent caused direct harm, and Snapchat's removal of the filters further confirms that the incident was recognized. The event thus meets the criteria for an AI Incident rather than a hazard or complementary information.

Snapchat removes moko, mataora filters after outcry

2022-09-08
RNZ
Why's our monitor labelling this an incident or hazard?
The filters are AI systems that modify facial features in real time, and their use caused cultural harm by disrespecting sacred Māori tattoos, resulting in significant harm to the community. The harm is realized and directly linked to the AI system's use, and the removal of the filters was a response to that harm. Hence, this is an AI Incident involving violation of cultural rights and harm to communities caused by the use of an AI system.

Growing anger over use of moko, mataora in image filters: 'That's a mockery'

2022-09-04
RNZ
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems: social media filters built on AI-based facial modification technology. Their use directly caused harm, specifically violations of intellectual property rights and damage to the Māori community's cultural identity, which falls under harm to communities and violation of rights. The article describes realized harm, not merely potential harm, and the AI system's role was pivotal in enabling the appropriation and misuse of Māori cultural symbols. The removal of the filters was a remedial action but does not change the classification of the event. The event thus meets the criteria for an AI Incident rather than an AI Hazard, Complementary Information, or an Unrelated event.