Canva AI Tool Replaces 'Palestine' with 'Ukraine' in User Designs, Prompting Apology

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Canva's AI-powered Magic Layers tool was found to automatically replace the word 'Palestine' with 'Ukraine' in user-generated designs, sparking accusations of censorship and bias. The issue, which did not affect related terms like 'Gaza,' caused distress among users. Canva has apologized and implemented fixes to prevent recurrence.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Magic Layers feature is an AI tool designed to decompose images into editable layers, and it malfunctioned by altering specific text content without user consent. This malfunction directly led to harm in the form of distress and potential violation of users' rights to free expression and accurate representation, which can be considered harm to communities or a violation of rights. Since the AI system's malfunction caused realized harm and the company has responded with remediation, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Fairness; Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers; Business

Harm types
Psychological; Reputational; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Canva apologizes after its AI tool replaces 'Palestine' in designs

2026-04-27
The Verge
Why's our monitor labelling this an incident or hazard?
The Magic Layers feature is an AI tool designed to decompose images into editable layers, and it malfunctioned by altering specific text content without user consent. This malfunction directly led to harm in the form of distress and potential violation of users' rights to free expression and accurate representation, which can be considered harm to communities or a violation of rights. Since the AI system's malfunction caused realized harm and the company has responded with remediation, this qualifies as an AI Incident rather than a hazard or complementary information.
Canva Admits Its AI Tool Removed 'Palestine' From Designs, Apologizes for Any Distress It Caused

2026-04-27
Gizmodo
Why's our monitor labelling this an incident or hazard?
The AI system (Magic Layers) is explicitly mentioned and is responsible for changing text in user designs without instruction, which is a malfunction. This alteration of the word "Palestine" to "Ukraine" can be seen as a form of censorship or bias, impacting users' rights and causing distress, thus harm to communities. The company confirmed the issue and took remedial action, confirming the AI system's role in causing harm. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by the AI system's malfunction and biased behavior.
Canva's New AI-Powered Tool Caught Swapping 'Palestine' for 'Ukraine'

2026-04-27
PetaPixel
Why's our monitor labelling this an incident or hazard?
The Magic Layers feature is explicitly described as an AI system that manipulates visual designs by interpreting and separating image elements. The malfunction—swapping "Palestine" with "Ukraine"—is a direct error caused by the AI system's processing. This leads to harm in the form of misinformation and potential distress to affected communities, fulfilling the criteria for harm to communities under the AI Incident definition. Canva's apology and efforts to fix the issue confirm the AI system's role in causing the harm.
Canva Issues Apology for AI Tool Mistake Replacing 'Palestine' in User Designs - Internewscast Journal

2026-04-27
Internewscast Journal
Why's our monitor labelling this an incident or hazard?
The AI system's malfunction directly led to the alteration of user content in a way that misrepresented political terms, which can be considered a violation of rights and harm to communities. The incident involves the use and malfunction of an AI system affecting user-generated content, fulfilling the criteria for an AI Incident. The company's response to fix the issue does not negate the fact that harm occurred due to the AI's malfunction.
How Ukraine and Palestine Are Using Free Design Tools for Advocacy - News Directory 3

2026-04-27
News Directory 3
Why's our monitor labelling this an incident or hazard?
The AI system's use led to direct harm by censoring legitimate speech related to political advocacy, which is a violation of human rights (freedom of expression). The harm is realized, not just potential, as users experienced the removal or alteration of the word "Palestine" in their designs. The incident is directly linked to the AI system's content moderation malfunction. Therefore, this qualifies as an AI Incident under the framework, specifically under violations of human rights.