AI-Generated Indigenous Avatar Sparks Outcry Over Cultural Appropriation


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A South African content creator in New Zealand used AI to generate 'Jarren,' an Aboriginal-appearing avatar, for the 'Bush Legend' social media accounts. The AI persona, presented as an Indigenous wildlife expert, amassed hundreds of thousands of followers, drawing criticism for cultural appropriation, digital blackface, and misrepresentation of Aboriginal identity without community consent.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (the AI-generated avatar 'Jarren') is central to the event, as it creates a fictional Indigenous persona that misrepresents Aboriginal identity and culture. This has led to cultural harm, misappropriation, and economic exploitation concerns raised by Indigenous leaders and experts, constituting violations of rights and harm to communities. The harm is ongoing and realized, not merely potential, as the AI avatar is actively used to generate content and profit. Hence, the event meets the criteria for an AI Incident under violations of human rights and harm to communities caused directly by the AI system's use.[AI generated]
AI principles
Accountability, Fairness, Respect of human rights, Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Other

Harm types
Reputational, Psychological, Human or fundamental rights

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard


No Mob, No Country: the social media account profiting with an AI Indigenous avatar

2026-01-13
SBS Language
Why's our monitor labelling this an incident or hazard?
The AI system (the AI-generated avatar 'Jarren') is central to the event, as it creates a fictional Indigenous persona that misrepresents Aboriginal identity and culture. This has led to cultural harm, misappropriation, and economic exploitation concerns raised by Indigenous leaders and experts, constituting violations of rights and harm to communities. The harm is ongoing and realized, not merely potential, as the AI avatar is actively used to generate content and profit. Hence, the event meets the criteria for an AI Incident under violations of human rights and harm to communities caused directly by the AI system's use.

This TikTok star sharing Australian animal stories doesn't exist - it's AI Blakface

2026-01-14
The Conversation
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating content that impersonates Indigenous peoples without consent or cultural grounding, constituting a violation of Indigenous cultural and intellectual property rights. The AI's use leads to harm by perpetuating cultural appropriation, misinformation, and undermining Indigenous self-determination, which aligns with harm to communities and violations of rights under the AI Incident definition. The article documents realized harm rather than potential harm, and the AI system's role is pivotal in creating and disseminating this content. Hence, the classification as an AI Incident is appropriate.

'It's AI blackface': social media account hailed as the Aboriginal Steve Irwin is an AI character created in New Zealand

2026-01-15
the Guardian
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in generating the digital avatar and content that misrepresents Indigenous identity, which has directly led to cultural harm and potential violation of intellectual property rights. The harm is realized as the AI-generated persona is followed by many, spreading misleading information and cultural appropriation, which experts describe as 'AI blackface' and cultural theft. The event involves the use of AI systems and the resulting harm to communities and cultural rights, fitting the definition of an AI Incident rather than a hazard or complementary information.

AI-Generated 'Bush Legend' Sparks Debate on Cultural Appropriation and Digital Blackface

2026-01-15
Head Topics
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates the Indigenous-appearing avatar and content. The harm is realized and ongoing, as Indigenous experts and communities express concerns about cultural appropriation, misrepresentation, and perpetuation of stereotypes, which are forms of harm to communities and violations of rights. The controversy and criticism indicate that the AI system's use has directly led to these harms. Hence, this event qualifies as an AI Incident rather than a hazard or complementary information.

Indigenous 'expert' goes viral online but there's a huge problem

2026-01-16
Mail Online
Why's our monitor labelling this an incident or hazard?
The AI system's use in generating a fake Indigenous persona bearing cultural markings, without consent, directly causes harm by misrepresenting and exploiting Indigenous culture, which is a violation of human rights and harms communities. The harm is realized: the account has gained a significant following, and public backlash highlights the cultural harm and disrespect. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The event is not merely a potential hazard or complementary information, but a clear case of harm caused by AI misuse.

Viral 'Wildlife Expert' Exposed: The Shocking Truth Behind Misleading Australian Animal Videos

2026-01-16
Internewscast Journal
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates the avatar and videos. The harm is realized in the form of cultural appropriation and disrespect towards Indigenous peoples, which constitutes harm to communities and a violation of cultural rights. The event describes ongoing dissemination of AI-generated content causing social harm and controversy, meeting the criteria for an AI Incident. The harm is indirect but clearly linked to the AI system's use. Therefore, the classification is AI Incident.

AI 'Bush Legend' wildlife star fronted by fake Aboriginal man based in New Zealand

2026-01-17
NZ Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the creation and use of an AI-generated fake Aboriginal persona ('Bush Legend') that misleads viewers and appropriates Indigenous culture, causing cultural harm and violating intellectual property rights. The AI system's development and use directly lead to harm to communities and violations of rights, fulfilling the criteria for an AI Incident. The harm is realized, not merely potential, as the videos are actively viewed and commented on, and experts highlight the cultural and intellectual property damage caused. Hence, this is classified as an AI Incident.

Indigenous TikTok star 'Bush Legend' is actually AI-generated, leading to accusations of 'digital blackface'

2026-01-19
Live Science
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a digital Indigenous persona ('Bush Legend') that misappropriates Indigenous culture and knowledge without consent or accountability. This use of AI has caused harm by violating Indigenous Cultural and Intellectual Property rights, contributing to cultural appropriation, and impacting Indigenous peoples' self-determination and social standing. The harm is realized and ongoing, including social and cultural harms, which fall under violations of human rights and harm to communities as defined. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Indigenous TikTok star 'Bush Legend' is AI-Generated: Digital Blackface Accusations

2026-01-19
News Directory 3
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved, as it generated the fake Indigenous persona on TikTok. The use of AI to create and disseminate this content has directly caused harm by appropriating Indigenous culture without permission, misrepresenting Indigenous identity, and potentially causing social and cultural harm to Indigenous communities. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The article also discusses ongoing concerns and responses, but its primary focus is on the realized harm from the AI-generated content, not merely potential or complementary information.