Crunchyroll Faces Backlash for Using ChatGPT in Anime Subtitles

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Crunchyroll was criticized after viewers discovered that ChatGPT was used to generate subtitles for the anime 'Necronomico and the Cosmic Horror Show,' resulting in visible AI-generated text and poor translation quality. Fans expressed frustration over the lack of quality control and reliance on AI for essential content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI (ChatGPT) was used to generate subtitles that were error-ridden and unedited, leading to a poor viewing experience. The AI system's use directly led to harm in the form of degraded content quality and potential job risks for human translators. Although no physical injury or legal rights violation is reported, the harm to the community's experience and labor rights is a significant, clearly articulated harm caused by the AI system's use. Thus, this fits the definition of an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Transparency & explainability; Accountability; Robustness & digital security

Industries
Media, social platforms, and marketing; Consumer services

Affected stakeholders
Consumers; Business

Harm types
Reputational; Psychological

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard

Crunchyroll ran embarrassingly bad ChatGPT subtitles on its new anime series

2025-07-02
The Verge

ChatGPT faceplants while translating Crunchyroll anime, and some viewers are demanding human localization

2025-07-03
TechRadar
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) used for translation/localization, which is a creative task involving complex language understanding. The AI-generated subtitles contained errors and awkward phrasing that harmed the viewing experience and upset the audience. This harm is directly linked to the AI system's use without adequate human oversight. The harm is cultural and reputational, affecting the community of anime viewers and the platform's credibility. Therefore, it meets the criteria for an AI Incident due to direct harm caused by AI system malfunction in its use phase.

Crunchyroll's lazy AI subtitles have anime fans furious

2025-07-02
engadget
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system (ChatGPT) to produce subtitles for anime streaming. The AI's outputs directly caused harm in the form of poor-quality subtitles that frustrate and mislead consumers, causing dissatisfaction and loss of trust. This constitutes harm to communities (anime fans and paying subscribers) and consumer harm. Although the harm does not involve physical injury or legal rights violations, it is a significant, clearly articulated harm in which the AI system's role is pivotal. Therefore, this qualifies as an AI Incident.

New Crunchyroll Anime Reignites AI Controversy After Giving ChatGPT a Shocking Credit

2025-07-01
ScreenRant
Why's our monitor labelling this an incident or hazard?
The article describes the use of ChatGPT, an AI system, to generate subtitles, resulting in poor translation quality and missing content. This harms the community's experience and trust but does not rise to the level of an AI Incident involving injury, rights violations, or other significant harm as defined. The event concerns the consequences of AI use in content translation, namely dissatisfaction and distrust: a notable impact, but not one of the harms outlined for AI Incidents. Therefore, this is best classified as Complementary Information, providing context on AI's impact on the anime industry and consumer trust without a clear AI Incident or Hazard.

Anime streaming site Crunchyroll accidentally leaves ChatGPT listed in subtitles, months after streaming boss said the site is "not considering" using AI

2025-07-02
gamesradar
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, for subtitle translation. The errors in translation and the accidental listing of ChatGPT in the subtitles indicate a malfunction or misuse of the AI system. However, the harms described are limited to user disappointment and concerns about creative job displacement, which do not constitute injury, rights violations, or other significant harms as defined. There is no evidence of direct or indirect harm caused by the AI system's outputs beyond quality issues, and the event does not describe a credible risk of future harm beyond reputational damage. Thus, it does not meet the criteria for an AI Incident or AI Hazard but fits as Complementary Information about AI's role and the public response in media translation.

Crunchyroll Accidentally Reveals They've Been Using ChatGPT for Sub Translations

2025-07-01
CBR
Why's our monitor labelling this an incident or hazard?
The article explicitly states that Crunchyroll used ChatGPT for subtitle translations, which caused multiple errors and bizarre subtitles that were publicly noticed and criticized by the community. This constitutes direct harm to the community of viewers who rely on accurate subtitles for understanding content, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the poor subtitles have already been distributed and caused dissatisfaction and confusion. Therefore, this event qualifies as an AI Incident due to the direct negative impact caused by the AI system's use in subtitle translation.

Crunchyroll Under Fire Over Alleged Use of AI in Subtitles

2025-07-02
Game Rant
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI in subtitle translation, which is an AI system involvement in content generation. The event stems from the use of AI in the production process. However, the harms described are limited to concerns about quality and professional value, which do not constitute injury, rights violations, or other significant harms as defined. There is no direct or indirect harm caused by the AI system that meets the criteria for an AI Incident. The article also discusses the ongoing debate and regulatory considerations, which are societal and governance responses to AI use. Therefore, the event is best classified as Complementary Information rather than an Incident or Hazard.

Crunchyroll accidentally confirmed it uses ChatGPT for subtitles

2025-07-02
Pocket-lint
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, in generating subtitles, which has led to poor translation quality and user complaints. This constitutes harm to the community of users relying on accurate subtitles, fulfilling the criteria for an AI Incident under harm to communities. The harm is realized, not just potential, as users have expressed frustration and dissatisfaction. Although the harm is not physical or legal, it is significant and clearly articulated, stemming directly from the AI system's use. Therefore, this event qualifies as an AI Incident.

Crunchyroll Accidentally Left AI Slop in Anime Subtitles

2025-07-02
Futurism
Why's our monitor labelling this an incident or hazard?
The article describes an AI-related error in subtitles caused by the use of AI tools like ChatGPT in the subtitling process. While this reflects on the use and potential misuse of AI, it does not describe any realized harm such as health injury, rights violations, or significant community harm. The harm is reputational and quality-related, which is not covered under the AI Incident definitions. There is also no credible indication that this could plausibly lead to a significant AI Incident in the future. The article mainly provides background on AI integration in Crunchyroll's workflows and fan reactions, which aligns with Complementary Information.

Crunchyroll blames third-party vendor for AI subtitle mess

2025-07-03
engadget
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI, specifically ChatGPT) for subtitle translation, which is explicitly mentioned. The AI system's use was unauthorized and led to poor subtitle quality, causing harm to the community of viewers through misinformation and degraded service quality. This harm is direct and realized, as viewers complained and some turned to piracy. Therefore, this qualifies as an AI Incident due to harm to communities and violation of service quality expectations linked to AI use.

Crunchyroll Reportedly Addresses AI Controversy

2025-07-03
Comicbook
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-generated subtitles without authorization, which is a misuse of AI in content production. While this raises ethical and contractual issues, the article does not report any direct or indirect harm to people, infrastructure, rights, or property. The controversy is about the presence and unauthorized use of AI, not about harm caused by the AI outputs. Therefore, it does not meet the threshold for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on AI's role in the anime industry, ongoing investigations, and industry responses, without reporting a specific harm or credible future harm.

Crunchyroll Anime Subtitles Quality Leads To AI Generation Claims

2025-07-05
Lowyat.NET
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT or similar) used in subtitle generation. However, the harm described is limited to poor-quality subtitles and customer dissatisfaction, which does not rise to the level of injury, rights violations, or significant harm as defined for AI Incidents. There is no plausible future harm indicated beyond the current quality issues, and the platform's response and investigation indicate ongoing management of the issue. Hence, this is best classified as Complementary Information, providing context on AI use and its implications in media localization, rather than an incident or hazard.

ChatGPT Subtitle Disaster Hits Crunchyroll, AI-Powered Anime Translation Goes Hilariously Wrong

2025-07-04
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated subtitles were used by a third-party vendor in violation of agreements, resulting in incomprehensible and poor-quality subtitles. This caused harm to the users' viewing experience and led to public backlash, which is a clear harm to communities and consumer rights. The AI system's involvement in producing faulty outputs that degraded service quality constitutes an AI Incident under the framework, as the harm is realized and directly linked to the AI system's use and malfunction.