AI Chatbot Sparks Outrage for Simulating Conversations with Hitler and Other Controversial Figures

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The AI-powered app 'Historical Figures' allows users to chat with over 20,000 historical personalities, including Adolf Hitler and other Nazi leaders. Jewish organizations and the public criticized the app for enabling the spread of hate and misinformation, raising concerns about the social harm caused by AI-generated content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly identifies the AI system as enabling conversations with historical figures, including Hitler, and the resulting controversy centres on its potential to promote extremist ideology. The chatbot's harmful statements about the Holocaust show that its outputs can harm communities by spreading extremist and false narratives. This meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to harm to communities (harm category d).[AI generated]
AI principles
Respect of human rights; Safety; Fairness; Human wellbeing; Accountability; Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Consumer services

Affected stakeholders
Civil society; General public

Harm types
Psychological; Public interest; Human or fundamental rights

Severity
AI incident

Business function:
Other

AI system task:
Interaction support/chatbots; Content generation


Articles about this incident or hazard

With this artificial intelligence you can talk to Jesus and more than 20,000 historical figures

2023-01-25
MVS Noticias
Controversy over an app that lets users "talk" to historical figures, including Hitler

2023-01-25
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The application uses an AI chatbot to simulate conversations with historical figures, including controversial and hateful ones such as Hitler. The concerns raised by Jewish organizations highlight a plausible risk that the system could be used to spread hate and antisemitism, which would violate human rights and harm communities. Since no actual harm or incident is reported, but the potential for misuse is credible and recognized, the event fits the definition of an AI Hazard. It is not Complementary Information, because the main focus is the potential harm, nor is it Unrelated, since AI involvement and plausible harm are central to the report.
A new app sparked controversy because it lets users talk to Hitler

2023-01-27
Perfil
Why's our monitor labelling this an incident or hazard?
The application uses AI to simulate conversations with historical figures, which makes it an AI system by definition. AI-generated responses from Nazi figures that deny the Holocaust or justify atrocities can indirectly lead to harm by spreading antisemitic ideas and hate speech, harming communities and violating human rights. While the harm is not explicitly reported as realized, the warnings from organizations such as the Anti-Defamation League and the Simon Wiesenthal Center indicate a credible risk of such harm occurring. This event therefore qualifies as an AI Hazard, because the AI system's use could plausibly lead to significant harm even though no direct incident has yet materialized.
Controversial app: would you like to have a 'conversation' with Adolf Hitler? - MDZ Online

2023-01-25
mdz
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the chatbot) whose use has raised concerns about possible misuse leading to harm (e.g., spreading hate or antisemitism). Since no direct harm has been reported or confirmed, but there is a plausible risk of harm in the future, this qualifies as an AI Hazard. The article focuses on the potential risks and societal concerns rather than an incident of realized harm.
AI criticized for letting users chat with Hitler and Jesus

2023-01-24
Merca2.0 Magazine
Why's our monitor labelling this an incident or hazard?
The AI system is directly involved, as it generates conversational outputs simulating historical figures. The harm arises from the system's use: its generated content includes offensive or hateful statements that have caused public backlash and drawn concern from organizations such as the Anti-Defamation League. This constitutes harm to communities and a potential violation of ethical and social norms, fitting the definition of an AI Incident. Although no physical harm is reported, the social harm and rights violations caused by the offensive content are significant and clearly articulated, so this event qualifies as an AI Incident rather than a hazard or complementary information.
Meet the new artificial intelligence app that lets you "chat" with Jesus Christ and Hitler

2023-01-24
Entorno Inteligente
Why's our monitor labelling this an incident or hazard?
The application involves an AI system designed to simulate conversations with historical figures, including those associated with hate and atrocities. The AI's outputs have already caused public harm by spreading disturbing and potentially hateful narratives, which can be seen as harm to communities and a violation of social norms and rights. Since the AI's use has directly led to these harms through its generated content, this qualifies as an AI Incident under the framework, specifically under harm to communities and violations of rights. The event is not merely a product launch or general news, but involves realized harm due to the AI's outputs and societal impact.
A chatbot that lets users 'talk' to Hitler sparks controversy in the Jewish community

2023-01-25
Sputnik Mundo
Why's our monitor labelling this an incident or hazard?
The application Historical Figures uses AI to simulate conversations with historical figures, including notorious antisemites. The AI-generated content includes false and misleading statements that distort historical truth, which can contribute to the spread of hate and misinformation. The involvement of the AI system in generating these harmful outputs directly leads to harm to communities and violates ethical and social norms. The controversy and criticism from Jewish organizations and the ADL highlight the realized harm caused by the AI system's outputs. Hence, this event meets the criteria for an AI Incident due to the direct harm caused by the AI system's use.