Swedish Deputy PM Uses AI-Fabricated Quote in Speech, Issues Public Apology

The information displayed in the AIM (the OECD AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

Sweden’s Deputy Prime Minister Ebba Busch used a fabricated quote generated by ChatGPT in a major speech, wrongly attributing it to journalist Elina Pahnke. The incident led to public misinformation and reputational harm, prompting Busch to issue a public apology and sparking debate about politicians’ understanding of AI risks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system that generated a false quote attributed to a person, which was then publicly used by a political figure. This led to misinformation and reputational harm, which qualifies as harm to communities and individuals. The AI system's use directly led to this harm, making it an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability
Transparency & explainability
Robustness & digital security
Safety
Democracy & human autonomy

Industries
Media, social platforms, and marketing
Government, security, and defence

Affected stakeholders
Workers
General public
Government

Harm types
Reputational
Public interest

Severity
AI incident

Business function:
Other

AI system task:
Content generation
Interaction support/chatbots


Articles about this incident or hazard

Researcher: She needs to learn how AI actually works ("Forskare: Behöver lära sig hur AI faktiskt fungerar")

2025-08-14
Omni
Why's our monitor labelling this an incident or hazard?
The article does not describe a specific AI Incident or AI Hazard. It mentions AI-generated text and the false-quote episode, but it does not establish that the AI system directly or indirectly caused harm beyond general misinformation. Its main focus is the need for better public understanding of AI and the apology for the false quote, which is societal response and awareness rather than a new incident or hazard. It therefore fits best as Complementary Information.
Ebba Busch used a false AI quote, apologises ("Ebba Busch använde falskt AI-citat - ber om ursäkt")

2025-08-14
HD
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system that generated a false quote attributed to a person, which was then publicly used by a political figure. This led to misinformation and reputational harm, which qualifies as harm to communities and individuals. The AI system's use directly led to this harm, making it an AI Incident rather than a hazard or complementary information.
Expert's criticism of Busch: She does not understand how AI works ("Expertens kritik mot Busch: Förstår inte hur AI fungerar")

2025-08-14
Sveriges Radio
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an AI tool generating a quote), but the article does not describe any harm resulting from this use, nor does it suggest a credible risk of harm. The focus is on the politician's lack of understanding and expert critique, which is a societal response and commentary rather than an incident or hazard. Therefore, this is best classified as Complementary Information.
Malmö journalist misquoted by Busch: "Trying to smear" ("Malmöjournalisten felciterades av Busch: 'Försöker smutskasta'") - P4 Malmöhus

2025-08-16
Sveriges Radio
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a false quote attributed to a real person, which led to reputational harm and a public apology. This constitutes harm to an individual (a form of harm to persons) caused directly by the AI-generated content. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's misuse or malfunction (generation of false information).
Busch misquote came from a search via an AI tool ("Busch felcitering kom från sökning via AI-verktyg")

2025-08-14
Omni
Why's our monitor labelling this an incident or hazard?
The AI system was involved in generating or retrieving a false quote that a public figure then used by mistake, illustrating AI's potential to produce inaccurate or fabricated content. However, the article describes no direct or indirect harm from this misinformation, such as harm to individuals or communities or rights violations. Its main focus is the apology and clarification issued after the error was discovered, a response to a prior issue rather than a new harm-causing event. It therefore fits the definition of Complementary Information, providing supporting context about AI's impact on information accuracy and public discourse.
Aftonbladet profile demands that KD reveal its AI prompt ("Aftonbladet-profil kräver att KD avslöjar sin AI-prompt")

2025-08-15
Omni
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fabricated quote that was presented as real in a political speech, which directly led to misinformation and harm to public discourse and trust. This constitutes harm to communities through the spread of false information and political manipulation. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI-generated false content.
Aftonbladet editor demands answers from KD ("Aftonbladetchef kräver svar från KD")

2025-08-15
journalisten.se
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly used to produce a fabricated quote that was presented as factual, leading to misinformation and reputational harm to an individual. This constitutes a violation of rights (reputational harm and misinformation) and harm to communities (trust in media and public discourse). Since the harm has already occurred due to the AI-generated false quote, this qualifies as an AI Incident under the framework.
Ebba Busch issues full apology after AI hallucination ("Ebba Busch pudlar efter ai-hallunication" [sic]) - Dagens opinion

2025-08-14
Dagens opinion
Why's our monitor labelling this an incident or hazard?
The AI system (a quote-generating AI tool) produced fabricated content that a public figure used by mistake, a clear example of AI hallucination. However, the event involves no realized harm such as injury, rights violations, or disruption; the harm is limited to misinformation that was corrected with an apology. The article focuses on the apology and explanation rather than on harm caused by the AI system. It therefore fits the definition of Complementary Information, providing insight into AI hallucination risks and societal responses without constituting a new AI Incident or Hazard.