Meta's AI Sticker Tool Generates Offensive and Inappropriate Content Across Platforms

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta's new AI-powered sticker generation tool for Messenger, WhatsApp, and Instagram has enabled users to create and share offensive, violent, and obscene stickers, including depictions of public figures and copyrighted characters. Inadequate content filtering has led to the rapid spread of inappropriate images, sparking controversy and concerns over moderation and community harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (Meta's AI sticker generator powered by Llama2) is explicitly involved. The misuse of this AI system to generate inappropriate and harmful images has already occurred and is being shared widely online, causing reputational harm and potential violations of rights (e.g., depiction of public figures in lewd or violent contexts). This constitutes harm to communities and individuals, fulfilling the criteria for an AI Incident. The article documents realized harm through the generation and dissemination of inappropriate AI content, not just potential harm or general commentary on AI safeguards.[AI generated]
AI principles
Accountability; Safety; Robustness & digital security; Transparency & explainability; Human wellbeing; Respect of human rights

Industries
Media, social platforms, and marketing; Consumer services

Affected stakeholders
Consumers; Business

Harm types
Reputational; Psychological; Economic/Property; Public interest

Severity
AI incident

Business function
Other

AI system task
Content generation

In other databases

Articles about this incident or hazard

People are figuring out ways to generate inappropriate images with Meta's new AI stickers -- and sharing the results online

2023-10-05
Business Insider
Why's our monitor labelling this an incident or hazard?
An AI system (Meta's AI sticker generator powered by Llama2) is explicitly involved. The misuse of this AI system to generate inappropriate and harmful images has already occurred and is being shared widely online, causing reputational harm and potential violations of rights (e.g., depiction of public figures in lewd or violent contexts). This constitutes harm to communities and individuals, fulfilling the criteria for an AI Incident. The article documents realized harm through the generation and dissemination of inappropriate AI content, not just potential harm or general commentary on AI safeguards.

Facebook's new AI sticker tool generates 'completely unhinged' images

2023-10-05
The Independent
Why's our monitor labelling this an incident or hazard?
The AI system (Meta's AI sticker generator) is explicitly involved and has produced content that could be harmful or offensive, indicating a potential for harm. However, the article does not provide evidence that these AI-generated images have caused direct or indirect harm such as violations of rights, health, or community disruption. The company acknowledges the risk and is implementing safety measures, indicating awareness of potential hazards. Since no actual harm has been reported, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Meta's AI Stickers Are Already Causing Trouble

2023-10-04
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Emu) used for generating stickers based on user prompts. The AI's malfunction or inadequate filtering has resulted in the creation and dissemination of harmful content, including violent and sensitive imagery, which can cause harm to communities and violate platform guidelines. This constitutes an AI Incident because the AI system's use has directly led to harm through the generation of inappropriate and potentially harmful content on widely used social media platforms.

Humans can't resist breaking AI with boobs and 9/11 memes | TechCrunch

2023-10-06
TechCrunch
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (generative AI image models) and their use. The misuse of these AI systems to generate offensive or harmful content is evident, and the article discusses the failure of content filters to prevent such misuse. However, there is no explicit mention of actual harm materializing from these AI-generated images, such as injury, societal disruption, or legal violations. The harms described are potential or indirect, related to the risk of spreading offensive or harmful content. Therefore, this event fits best as an AI Hazard, highlighting plausible future harm due to misuse and insufficient safeguards, rather than an AI Incident or Complementary Information.

Facebook's new AI stickers can generate Mickey Mouse holding a machine gun

2023-10-04
Ars Technica
Why's our monitor labelling this an incident or hazard?
Meta's AI system is explicitly involved in generating images that include violent and offensive content, such as Mickey Mouse holding a machine gun and other disturbing depictions. These images have been shared widely, causing social harm and offense, which fits the definition of harm to communities. The AI system's malfunction or insufficient filtering has directly led to this harm. The article also discusses the challenges of content moderation and the potential for further harm if not addressed. Hence, this is an AI Incident rather than a hazard or complementary information.

People are figuring out ways to generate inappropriate images with Meta's new AI stickers -- and sharing the results online

2023-10-05
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The AI system (Meta's Llama2-powered sticker generator) is explicitly involved in generating inappropriate and harmful images, which are being shared widely. This constitutes harm to communities through the dissemination of inappropriate, violent, and potentially defamatory content. The failure of the AI safeguards to prevent such outputs indicates a malfunction or inadequacy in the AI system's use. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by the AI system's outputs.

Facebook's New AI Stickers Let You Generate Pics of Elon Musk With Boobs

2023-10-05
Futurism
Why's our monitor labelling this an incident or hazard?
The AI system (Emu) is explicitly mentioned as generating images from user prompts. The misuse of the system to create offensive or inappropriate images demonstrates that the AI's outputs have directly led to harm in the form of social and reputational damage and potential violations of community standards, which fall under harm to communities. Although no physical injury or legal violation is reported, the generation and dissemination of harmful content is a recognized form of harm. The article also notes the failure of safeguards, indicating a malfunction or insufficient control in the AI system's use. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

People are trolling Meta's new AI-generated stickers -- giving Elmo a knife and Waluigi a gun

2023-10-04
The Daily Dot
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (Meta's AI-generated sticker tool) being used to create controversial and inappropriate content. This use of AI raises concerns about potential harm to communities and reputational harm to individuals depicted. However, the article does not report any actual injury, legal violation, or significant harm that has already occurred. The harms described are potential or emerging issues related to misuse of the AI system. Therefore, the event is best classified as Complementary Information, as it provides context on challenges and societal responses related to AI misuse, without documenting a concrete AI Incident or an imminent AI Hazard.

Facebook's AI Sticker tool creates weird ones, like Waluigi with a rifle

2023-10-04
Neowin
Why's our monitor labelling this an incident or hazard?
The AI system involved is Meta's Emu image synthesis model used to generate stickers from text prompts. The use of this AI system has directly led to the creation and dissemination of harmful content, including offensive and violent images, which can harm communities and violate rights (e.g., copyright infringement, offensive depictions). Although the harm is primarily reputational and social, it fits within harm to communities and violations of rights. The article describes realized harm through the spread of such content and public backlash, not just potential harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Meta mocked over 'lewd and rude' Facebook Sticker AI

2023-10-04
The Irish Sun
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as generating content based on user prompts. The misuse of this AI system has directly led to the creation and dissemination of inappropriate and offensive images, which constitutes harm to communities and violates content standards. The filters' failure to fully prevent such outputs indicates a malfunction or limitation in the AI system's use. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's outputs.

Controversy over AI-generated Meta stickers: many are offensive, aggressive or obscene

2023-10-06
USANews Press Release Network
Why's our monitor labelling this an incident or hazard?
The AI system involved is the generative AI used to create stickers on Facebook and Instagram. The system's outputs have directly led to the spread of offensive and copyrighted content, which constitutes harm to communities and violations of intellectual property rights. The harm is realized, not just potential, as the offensive stickers are being shared. Therefore, this qualifies as an AI Incident. The company's response to improve filters is a future mitigation but does not negate the current harm.

Facebook Messenger AI Stickers' unexpected dark turn brings a big controversy - Softonic

2023-10-05
Softonic
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used to generate images that have caused harm to communities by spreading offensive and disturbing content. The harm is realized as users have actively created and shared inappropriate images, which can be considered harm to communities and a violation of rights (e.g., copyright infringement). The controversy and public backlash indicate that the AI system's use has directly led to these harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Users exploit Meta's AI to create offensive stickers on Messenger

2023-10-05
infobae
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as generating images from user text prompts, which is a clear AI system involvement. The misuse of this system to create offensive and violent stickers that can harm community sensibilities and potentially minors is a direct harm caused by the AI's outputs. The absence of effective content moderation or filtering mechanisms in the AI system facilitates this harm. Hence, this event meets the criteria for an AI Incident due to realized harm to communities and potential violation of content standards.

Controversy over Meta's AI-generated stickers: many are offensive, aggressive or obscene

2023-10-06
LaVanguardia
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as the stickers are generated by Meta's AI generative models. The use of this AI system is leading to the creation and sharing of offensive and inappropriate content, which can be considered harm to communities and possibly violations of rights (e.g., intellectual property rights with copyrighted characters). However, the article does not document concrete incidents of harm or legal breaches occurring yet, only the presence of inappropriate content and the potential for harm. The company's response is still in the monitoring and improvement phase. Therefore, this situation fits best as Complementary Information, providing context and updates on an ongoing issue related to AI-generated content moderation challenges, rather than a confirmed AI Incident or an AI Hazard.

Meta's AI-generated stickers spark a scandal with nudity and gruesome images

2023-10-04
El Español
Why's our monitor labelling this an incident or hazard?
The AI system involved is explicitly described as Meta's AI-powered sticker generator using Llama 2. Users have manipulated the system to bypass content filters and generate harmful and offensive images, including nudity and child soldiers, which constitute harm to communities and potentially violate rights. The harm is realized as these images have been created and shared publicly, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a clear case of AI misuse causing harm.

Meta's AI sticker generator is creating drawings of children with guns and naked characters | RPP Noticias

2023-10-04
RPP noticias
Why's our monitor labelling this an incident or hazard?
An AI system (Emu) is explicitly involved as the sticker generator using AI to create images based on user input. The AI system's insufficient safeguards have allowed users to generate harmful content, including depictions of children with weapons and sexualized images, which can be considered harm to communities and potentially violations of content standards and rights. The harm is occurring through the AI system's outputs being used inappropriately and spreading on platforms, thus constituting an AI Incident due to realized harm linked to the AI system's use and malfunction (lack of effective content filtering).

Facebook's AI was a good idea until it created Luigi with a rifle. Now Meta must find a way to stop these offensive stickers

2023-10-05
3D Juegos
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-based sticker generator) whose use has directly led to the creation and dissemination of offensive content, including depictions of characters with rifles, which can be considered harmful to communities due to the nature of the imagery. The AI system's malfunction or insufficient filtering allowed users to bypass safeguards, resulting in realized harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm in the form of offensive and potentially harmful content spreading within the community.

The Pope with a rifle, Mickey with a bloody knife, Trump as a crying baby: Facebook's AI-created stickers, unfiltered

2023-10-04
ComputerHoy.com
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it generates stickers from text prompts. The system's use has directly led to harms including violations of intellectual property rights (copyright infringement) and harm to communities by producing offensive and unethical content involving real individuals and fictional characters. The lack of content filtering constitutes a malfunction or failure in the AI system's deployment. Therefore, this event qualifies as an AI Incident due to realized harms linked to the AI system's use and malfunction.

The controversial stickers that Facebook and Instagram let users create with AI

2023-10-05
La Razón
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (Meta's Emu model) used to generate images. The issue arises from the AI's use, where its outputs can bypass content filters, enabling the creation of controversial or offensive stickers. Although no concrete harm (such as widespread dissemination causing harm) is documented, the article highlights the plausible risk of such harm occurring due to insufficient safeguards. Meta's acknowledgment of the problem and plans to improve filters further support this being a hazard. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm but no direct harm has yet been established.

Meta's new AI-created stickers spark controversy: naked politicians and armed cartoon characters circulate on WhatsApp

2023-10-04
Genbeta
Why's our monitor labelling this an incident or hazard?
The AI system (Meta's generative AI tools) is explicitly involved in generating content that has led to harm to communities by spreading inappropriate and offensive images, including those involving public figures and copyrighted characters. This constitutes harm to communities and potentially violations of intellectual property rights. The harm is realized as the content is already circulating and causing social controversy. Therefore, this qualifies as an AI Incident. The article also mentions Meta's response, but the primary focus is on the harm caused by the AI-generated content.

Facebook users discovered they can generate offensive stickers with AI - La Opinión

2023-10-06
La Opinión Digital
Why's our monitor labelling this an incident or hazard?
An AI system (the sticker generation tool using AI) is explicitly involved. The use of this AI system has led to the creation and circulation of offensive content, which can be considered a form of harm to communities or violation of norms. However, the article does not document actual realized harm such as injury, legal violations, or significant disruption. The harm is potential and ongoing moderation challenges are highlighted. Hence, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm, but no concrete incident of harm is confirmed in the article.

Users exploit Meta's AI to create offensive stickers on Messenger

2023-10-05
Noticias de Bariloche
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates images from user text prompts. The misuse of this AI system to create offensive and violent stickers has already occurred, causing harm to community standards and potentially to minors exposed to such content. The absence of effective content moderation or filtering in the AI system's deployment has allowed this harm to materialize. Hence, this event qualifies as an AI Incident due to realized harm caused by the AI system's outputs.

Stickers created with Facebook's AI can be inappropriate and obscene

2023-10-04
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The AI system is clearly involved in generating content that can be inappropriate or obscene, which can be considered a form of harm to communities or violation of content guidelines. However, the article does not report actual incidents of harm caused by the stickers, such as widespread dissemination leading to harm or legal consequences. The company is actively limiting deployment and working to fix issues, indicating a proactive approach to prevent harm. Therefore, this event is best classified as Complementary Information, providing context on challenges and responses related to an AI system's outputs rather than documenting a realized AI Incident or a plausible future hazard alone.

Facebook's new AI stickers can generate Elmo with a knife

2023-10-04
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates images from text prompts. The misuse of the AI-generated stickers to create offensive and copyrighted images has led to harm in the form of offensive content dissemination and potential intellectual property rights violations. The harm is occurring as users share these images publicly, impacting communities and possibly violating rights. Although Meta is working on mitigation, the incident is ongoing. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Read more

2023-10-04
esdelatino.com
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it generates stickers from text prompts. The system's use has directly led to harms including violations of intellectual property rights (use of copyrighted characters without permission) and harm to communities (offensive and unethical content involving real people and sensitive subjects). The lack of filtering and control over generated content constitutes a malfunction or failure in the AI system's deployment. Therefore, this qualifies as an AI Incident due to realized harm stemming from the AI system's use.