AI-Generated Deepfake Video of Trump Spreads MedBed Conspiracy Theory

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated deepfake video featuring Donald Trump and Lara Trump promoted the false 'medbed' conspiracy theory, which claims the existence of miraculous healing technology. The video, shared on Truth Social, misled viewers by fabricating endorsements from public figures, contributing to misinformation and potentially undermining trust in legitimate healthcare in the United States.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to create a video featuring a fabricated announcement by a public figure, promoting a conspiracy theory about alien 'medbeds' as a miracle cure. This misinformation can cause harm by misleading people about health treatments, potentially leading to health risks or undermining trust in legitimate healthcare. The harm is indirect but significant, affecting community well-being and public health. Therefore, this qualifies as an AI Incident due to the realized harm from AI-generated misinformation.[AI generated]
AI principles
Accountability
Transparency & explainability
Democracy & human autonomy
Respect of human rights
Safety

Industries
Media, social platforms, and marketing
Healthcare, drugs, and biotechnology

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

'Truly insane': Trump posts bizarre QAnon claim he will heal people with alien 'medbeds'

2025-09-28
Raw Story
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a video featuring a fabricated announcement by a public figure, promoting a conspiracy theory about alien 'medbeds' as a miracle cure. This misinformation can cause harm by misleading people about health treatments, potentially leading to health risks or undermining trust in legitimate healthcare. The harm is indirect but significant, affecting community well-being and public health. Therefore, this qualifies as an AI Incident due to the realized harm from AI-generated misinformation.

Trump shares AI video of 'medbed' cure, later deletes it. What is the conspiracy theory - The Times of India

2025-09-29
The Times of India
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic but fake video featuring public figures endorsing a false health technology, which is a clear case of AI-generated misinformation. This misinformation can harm communities by spreading false health claims and conspiracy theories, fulfilling the criteria for harm to communities under the AI Incident definition. The AI system's use directly led to the dissemination of misleading content, even if later deleted, thus qualifying as an AI Incident rather than a hazard or complementary information.

Trump Deletes Post Referencing Bizarre 'Medbed' Conspiracy Theory

2025-09-28
Forbes
Why's our monitor labelling this an incident or hazard?
The AI system is involved in generating a video that contributes to misinformation, which can be harmful to communities if widely believed. However, the article does not document any actual harm occurring from this AI-generated content, nor does it indicate a credible imminent risk of harm. The event is primarily about the existence and removal of the AI-generated post referencing a conspiracy theory, which fits the category of Complementary Information as it provides context and updates on AI-generated misinformation without reporting a specific incident or hazard.

Trump shares apparent AI video promoting 'medbed' conspiracy theory

2025-09-28
AOL
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic fake video (deepfake) of a public figure promoting a false medical technology linked to conspiracy theories. The video was shared publicly, potentially misleading viewers and causing harm through misinformation about health. This constitutes an AI Incident because the AI-generated content directly led to harm by spreading false health claims and conspiracy theories, which can negatively impact public health and trust. Although the video was deleted, the harm from its dissemination has already occurred or is ongoing.

Trump shares apparent AI video promoting 'medbed' conspiracy theory

2025-09-28
Channel 3000
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic but fake video of a public figure making false medical claims. This use of AI directly contributes to misinformation that can harm communities by misleading people about health treatments, fitting the definition of harm to communities. The event describes realized harm through the dissemination of false information with potential public health consequences, thus qualifying as an AI Incident.

'Truly Insane': Trump Posts Bizarre QAnon Claim He Will Heal People With Alien 'Medbeds'

2025-09-28
Crooks and Liars
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic video featuring a public figure making false claims about health technology. The use of AI-generated content to spread conspiracy theories and misinformation can lead to harm to communities by undermining trust in legitimate healthcare and spreading false hope or dangerous beliefs. Therefore, this constitutes an AI Incident due to harm to communities through misinformation caused by AI-generated content.

Did Trump Actually Endorse the 'MedBed' Conspiracy Theory? Here's the Information We Have - Internewscast Journal

2025-09-29
internewscast.com
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI to generate a video that falsely portrays public figures endorsing a conspiracy theory. This involves an AI system (deepfake technology) used in the development and dissemination of misleading content. While the video promotes false claims, the article does not indicate that this has directly or indirectly caused harm such as injury, rights violations, or disruption. The potential for future harm exists due to misinformation, but no realized harm is reported. Therefore, this event is best classified as Complementary Information, as it provides context on AI-generated misinformation without documenting an AI Incident or an immediate AI Hazard.

Trump posted about a medical conspiracy theory called 'medbeds.' Here's what's going on.

2025-09-29
Yahoo
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic but fake video of Donald Trump promoting a false medical cure, the 'medbed' conspiracy theory. This misinformation can cause harm to public health by misleading individuals, undermining trust in medical professionals, and potentially leading to harmful health decisions. Although no direct physical harm is reported, the spread of medical misinformation is recognized as causing actual harm to communities and public health, fitting the definition of an AI Incident due to the AI system's role in generating and spreading the false content.

Donald Trump Posts and Deletes AI-Generated Video Promising 'Every American' a 'Medbed Card' amid Conspiracy Theory

2025-09-29
Yahoo
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a fake video that disseminated false information about medical technology. This misinformation could harm communities by spreading conspiracy theories and misleading the public about healthcare, which constitutes harm to communities. Although no direct physical harm is reported, the spread of false medical claims can have significant societal impacts. Therefore, this event qualifies as an AI Incident due to the realized harm of misinformation caused by the AI-generated content.

The president posted a fantasy video falsely claiming he's releasing a miracle cure that QAnon supporters have eagerly awaited.

2025-09-29
Yahoo
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake video that falsely claims the existence of a miracle cure, which is a conspiracy theory promoted by fringe groups. The AI-generated content directly contributes to misinformation, which can harm communities by spreading false health information and undermining public trust. Therefore, this constitutes an AI Incident due to harm to communities through misinformation dissemination.

Trump posted about a medical conspiracy theory called 'medbeds.' Here's what's going on.

2025-09-29
USA Today
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the video is described as AI-generated, indicating the use of generative AI technology. The event involves the use of AI to create misleading content promoting a medical conspiracy theory, which is misinformation that can harm public health and trust. Although the article discusses the broader harms of medical misinformation and its consequences, it does not report a direct, specific harm caused by this particular AI-generated video. The harm is potential and plausible, given the known risks of medical misinformation amplified by AI-generated content. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Truth behind Donald Trump's alien 'medbeds' miracle cure conspiracy video

2025-09-29
The Sun
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic but fake video segment featuring AI-generated voices and visuals of public figures, which was then shared publicly. Although no direct physical harm or legal violation is reported, the dissemination of AI-generated misinformation about medical cures can harm public understanding and trust, potentially leading to indirect harm such as people foregoing real medical treatment. Since the harm is indirect and related to misinformation with potential societal impact, this qualifies as an AI Incident due to the AI system's role in generating and spreading false information that can harm communities by misleading them about health-related matters.

Trump, 79, Posts AI Version of Himself Shilling Magic Beds

2025-09-28
The Daily Beast
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a realistic video of a public figure promoting a false medical product tied to conspiracy theories. The dissemination of this AI-generated misinformation can lead to harm to communities by misleading people about medical treatments, potentially causing health risks. Therefore, this qualifies as an AI Incident due to the direct role of AI in producing and spreading harmful misinformation with real-world consequences.

Donald Trump slammed after sharing 'Black Mirror'-style AI Fox News report

2025-09-29
indy100.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake news video that was shared by Donald Trump, leading to misinformation about a medical technology that does not exist. This misinformation can cause harm to communities by misleading vulnerable individuals, especially those desperate for medical cures, thus fulfilling the criteria for harm to communities. The AI system's use in creating and spreading this false content directly led to this harm, qualifying the event as an AI Incident.

Baffled Jake Tapper Blasts Trump Over 'Bogus' Magic Beds

2025-09-29
The Daily Beast
Why's our monitor labelling this an incident or hazard?
The AI-generated video is central to the dissemination of false medical claims that have real-world consequences, such as individuals refusing legitimate medical treatment. This meets the criteria for an AI Incident because the AI system's use in creating and spreading deceptive content has directly contributed to harm to health and communities. The event is not merely a potential risk but involves realized harm, distinguishing it from an AI Hazard or Complementary Information.

Trump Shares Bizarre AI Video Promising Magic 'Medbeds' for Everyone

2025-09-29
Gizmodo
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic video of Trump making false claims about a medical technology that does not exist. The video was shared publicly, spreading misinformation that could plausibly lead to harm, such as people being misled about medical treatments or falling victim to scams. Although no direct harm is reported in the article, the potential for harm through misinformation and fraudulent exploitation is credible. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as the harm is plausible but not yet realized.

Trump Just Promoted One of the Nation's Cruelest Conspiracy Theories

2025-09-29
Slate Magazine
Why's our monitor labelling this an incident or hazard?
The AI system generated a realistic video that was used to promote a false and harmful conspiracy theory. The harm is indirect but significant: it can mislead people, especially those with serious health conditions, to forego proper medical care and potentially spend money on fraudulent treatments. The event describes actual dissemination and impact of the AI-generated content, not just a potential risk. Hence, it meets the criteria for an AI Incident involving indirect harm to health and communities caused by the AI system's use.

Donald Trump deletes AI video promoting alien-based conspiracy theory

2025-09-29
Metro
Why's our monitor labelling this an incident or hazard?
The AI system generated a deepfake video impersonating a public figure to promote false and conspiratorial claims, which constitutes misinformation causing harm to communities by spreading false narratives. This fits the definition of an AI Incident because the AI system's use directly led to the dissemination of harmful misinformation. Although the post was deleted quickly and no large-scale harm is described, the event still involves realized harm from the AI system's use. It is not merely a potential hazard or complementary information, as the AI-generated content was publicly posted and linked to conspiracy theories with known harmful societal impacts.

Why it was so deeply weird to see Trump amplify 'medbed' pseudoscience

2025-09-29
MSNBC.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake video segment (deepfake) that falsely depicted Trump endorsing a non-existent medical technology. The president's amplification of this AI-generated misinformation on social media constitutes the use of an AI system leading to harm by spreading false health claims. This misinformation can cause harm to public health and communities by misleading people about medical treatments, fitting the definition of an AI Incident due to realized harm from the AI system's outputs and their dissemination.

Trump Shared an AI Video Promoting the Medbed Conspiracy Theory, but What Is That?

2025-09-29
Distractify
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to generate a deepfake video, which was disseminated and promoted by a high-profile figure. This constitutes the use of AI-generated content to spread false information, which can harm communities by misleading the public and potentially undermining trust in institutions. However, the article does not report any realized physical harm, legal violations, or direct consequences from this video. Therefore, it does not meet the threshold for an AI Incident but represents a plausible risk of harm through misinformation dissemination. Given the current lack of direct harm but the clear potential for societal impact, this event is best classified as Complementary Information, as it provides context on AI-generated misinformation and its societal implications without documenting a concrete AI Incident or Hazard.

What's the 'medbed' conspiracy theory that Trump shared, then deleted?

2025-09-29
Firstpost
Why's our monitor labelling this an incident or hazard?
The event centers on an AI-generated deepfake video promoting a conspiracy theory, which is misinformation. The AI system's involvement is in generating synthetic media. While misinformation can cause harm, the article does not document any direct or indirect harm resulting from this specific video or its sharing. The video was deleted quickly, and no harm such as health injury, rights violations, or societal disruption is reported as having occurred. The event is primarily about the existence and spread of AI-generated misinformation and the societal context around it, which fits the definition of Complementary Information. It does not describe a realized AI Incident or a plausible AI Hazard from this specific event.

Trump Deletes Bizarre AI-Generated Video He Shared After His Grip on Reality Is Questioned

2025-09-29
Futurism
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a deepfake video falsely depicting a political figure and promoting a conspiracy theory about a non-existent healthcare technology. The sharing of this video by the president, even if later deleted, directly contributes to misinformation that can harm public understanding and trust, which is a form of harm to communities. The AI system's role in generating the deceptive content is pivotal to the incident. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

President Trump Deletes AI-Generated News Report About 'Medbeds'

2025-09-29
PetaPixel
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system generating a fabricated news report, which was disseminated publicly. The content is misleading and could plausibly lead to harm by spreading false medical information, which fits the definition of an AI Hazard. There is no evidence in the article that actual harm (such as injury, rights violations, or disruption) has occurred yet from this specific video, so it does not meet the threshold for an AI Incident. The deletion of the post and lack of confirmed harm indicate this is a potential risk rather than a realized incident.

Oh Nothing, Just the President Posting AI Videos About QAnon Conspiracy Theories

2025-09-29
Jezebel
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated video content (deepfake) used to spread false conspiracy theories, which constitutes harm to communities through misinformation. The AI system's use in creating and disseminating this content has directly led to the spread of harmful falsehoods, fulfilling the criteria for an AI Incident under harm to communities. Although the video was deleted, the harm from its dissemination has already occurred. Therefore, this is classified as an AI Incident.

"Unfathomable level of boomerism": Trump shares AI clip promoting "Med Bed" conspiracy before quietly deleting it

2025-09-29
The Daily Dot
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic fake video (deepfake) that promotes a harmful conspiracy theory. The AI-generated content was disseminated by a high-profile user, which can amplify misinformation and cause harm to communities by misleading people about medical treatments and fostering conspiracy beliefs. Although the harm is primarily informational and social, it is significant and clearly articulated, fitting the definition of harm to communities. The event describes actual sharing and public reaction, indicating realized harm rather than just potential harm. Therefore, this qualifies as an AI Incident due to the direct role of AI-generated content in spreading harmful misinformation.

What Trump's AI medbed video means for Houston | Editorial

2025-09-29
Houston Chronicle
Why's our monitor labelling this an incident or hazard?
The AI system's involvement is the generation of a deepfake video that spreads misinformation about a non-existent medical technology. While this misinformation can plausibly lead to harm by misleading people and enabling scams, the article does not document any actual harm or incidents resulting directly from the AI video. Therefore, this event represents a potential risk rather than a realized harm. The main focus is on the societal impact of AI-generated misinformation and its implications for public health and policy discourse, which aligns with the definition of an AI Hazard. There is no indication of complementary information or unrelated news, as the AI-generated video is central to the discussion and its potential for harm is highlighted.

Trump Shares Bizarre Video Promoting 'Medbeds'

2025-09-30
Newser
Why's our monitor labelling this an incident or hazard?
The AI system generated realistic but fabricated video content (deepfake) that was disseminated publicly, promoting false medical claims. This misinformation can directly harm individuals by encouraging reliance on non-existent cures and financial exploitation through scam products. The AI-generated content's role is pivotal in creating and spreading this harmful misinformation. Therefore, this qualifies as an AI Incident due to realized harm to health and communities through misinformation and scams.

Trump shares and deletes AI video promoting "medbed" theory

2025-09-29
Pakistan Today
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic but fabricated video featuring a public figure promoting a baseless medical conspiracy theory. This misinformation can lead to harm to communities by fostering false beliefs about health treatments, potentially causing people to avoid legitimate medical care or fall victim to scams. The harm is realized through the spread of misinformation, which is a form of harm to communities. Therefore, this qualifies as an AI Incident due to the direct role of the AI-generated content in causing harm.

Senile Trump Posts, Then Deletes, AI Deepfake Of Himself Promoting Wacky MedBed Conspiracy Theory

2025-09-29
Techdirt
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos (an AI system) that were posted and believed to be real by a high-profile individual, leading to the spread of false claims about medical technology. This misinformation can harm communities by misleading the public about health-related matters, fulfilling the harm to communities criterion. The AI system's outputs directly caused this harm, meeting the definition of an AI Incident. The event is not merely a potential hazard or complementary information, as the misinformation was actively disseminated and believed, causing real harm.

Trump Quietly Deletes Unhinged AI "MedBed" Conspiracy Video

2025-09-29
The New Republic
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic but false video, which is a form of AI-generated misinformation. While this could plausibly lead to harm by spreading conspiracy theories and misleading the public, the article does not report any realized harm or direct consequences resulting from the video. Therefore, this event represents a potential risk of harm (AI Hazard) rather than an actual incident. The deletion of the video shortly after posting suggests mitigation of potential harm, but the initial sharing still constitutes a plausible hazard due to the AI-generated misinformation.

Trump shares apparent AI video promoting 'medbed' conspiracy theory

2025-09-29
Roanoke Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake video, which is a clear AI application. The video promotes a conspiracy theory with false medical claims, which can harm public health understanding and trust, thus harming communities. The harm is realized as the video was shared publicly by a prominent figure, increasing its potential impact. This meets the criteria for an AI Incident because the AI system's use directly led to harm through misinformation dissemination.

Donald Trump deletes AI video promoting conspiracy theory involving aliens

2025-09-29
JOE.co.uk
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a video that promotes a false and harmful conspiracy theory. The content directly relates to misinformation that has previously led to societal harm. Although the video was deleted, the AI-generated content's role in spreading conspiracy theories constitutes an AI Incident due to the harm to communities through misinformation and potential violation of rights to accurate information. The AI system's use in creating and disseminating this misleading content directly contributes to harm as defined in the framework.

Donald Trump deletes bizarre AI-generated conspiracy theory video

2025-09-29
Extra.ie
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a synthetic video with misleading content tied to conspiracy theories, which could plausibly cause social harm. However, the article does not report any actual harm occurring, such as violence, health injury, or rights violations resulting from the video. The video was deleted and no official comment was made. The main focus is on the existence and removal of the AI-generated video and its social media impact, which fits the definition of Complementary Information rather than an Incident or Hazard. There is no clear indication of direct or indirect harm caused by the AI system's use in this case.

What Is the Medbed Conspiracy Theory? Trump's AI Video Fuels Misinformation

2025-09-29
International Business Times UK
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a synthetic video that falsely presented medical claims, which experts warn can lead to harm by exploiting vulnerable patients and spreading health misinformation. The misinformation can cause individuals to delay or avoid legitimate medical treatment, constituting harm to health and communities. The AI-generated video was central to the event and directly linked to the spread of harmful false information, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Trump shares apparent AI video promoting 'medbed' conspiracy theory

2025-09-29
SCNow
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a deepfake video, which is a clear AI involvement. The video promotes a conspiracy theory about a medical cure-all, which is misinformation that could plausibly lead to harm to individuals' health or public trust in healthcare. Since no actual harm has been reported yet, but the potential for harm is credible, this event qualifies as an AI Hazard rather than an AI Incident. The event does not describe a response, update, or governance action, so it is not Complementary Information. It is not unrelated because AI-generated content is central to the event.

Trump shares apparent AI video promoting 'medbed' conspiracy theory

2025-09-29
Culpeper Star-Exponent
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic but false video of a public figure promoting a conspiracy theory with unproven medical claims. This dissemination of misinformation can lead to harm to communities by encouraging false beliefs and potentially harmful health behaviors. The harm is realized as the video was shared publicly and influenced public discourse. Therefore, this qualifies as an AI Incident due to harm to communities caused by AI-generated misinformation.

What are 'Medbeds?' Trump posts and deletes video of cure-all healing conspiracy

2025-09-29
The Daily Jeffersonian
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fabricated video falsely claiming miraculous healing technology, which is a form of misinformation. Although no direct physical harm is reported, the spread of such misinformation can indirectly harm public health by misleading people about medical treatments. This constitutes harm to communities through misinformation and undermines trust in health systems. Therefore, this event qualifies as an AI Incident due to the AI-generated content directly leading to harm via misinformation dissemination.

Trump AI Deepfake: MedBed Conspiracy Theory Explained - News Directory 3

2025-09-29
News Directory 3
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake video, which was then shared by a prominent individual who believed it to be real. The AI system's use directly led to the spread of misinformation, a form of harm to communities and public trust. The incident is not merely a potential risk but a realized harm since the video was posted publicly and influenced perceptions. Although the harm is non-physical, it fits within the framework's definition of harm to communities and violation of rights (informational rights). The event is not just a hazard or complementary information because the AI-generated content caused actual misinformation dissemination. Therefore, the classification as an AI Incident is appropriate.

Trump And The 'Medbed' Conspiracy: What's Really Behind The Deleted Post?

2025-09-30
News18
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generated a realistic fake video that was used to promote false medical claims. The use of AI-generated content to spread misinformation about health treatments can directly harm individuals by misleading them, potentially causing them to avoid legitimate medical care or spend money on ineffective products. The event describes actual dissemination of harmful misinformation, not just a potential risk, thus meeting the criteria for an AI Incident involving harm to health and communities. The deletion of the post does not negate the harm already caused by its circulation.

Trump's deleted 'Medbed' video Explained: Miracle cure conspiracy

2025-10-01
India Today
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic fake video promoting a harmful conspiracy theory about medical cures. The misinformation disseminated by the AI-generated content poses a risk of harm to public health by misleading people about medical treatments. Although no direct harm is reported as having occurred yet, the plausible risk of harm from such misinformation is significant. Therefore, this event qualifies as an AI Hazard because the AI-generated video could plausibly lead to harm through misinformation and false health claims. It is not an AI Incident because no direct or indirect harm has been confirmed as having occurred. It is not Complementary Information or Unrelated because the AI-generated content and its potential impact are central to the event.

Trump's AI-generated video fuels Medbed conspiracy theory - but what are Medbeds?

2025-10-01
IOL
Why's our monitor labelling this an incident or hazard?
The AI system's use in generating a fake video that promotes false health claims constitutes an AI Incident because it directly leads to harm through misinformation about health technologies. The misinformation can cause injury or harm to people's health by misleading them about medical treatments, fulfilling the criteria for harm to health (a). The AI-generated content is central to the incident, as it is the medium through which the false claims are spread. Therefore, this is an AI Incident involving AI-generated misinformation with potential health harms.

Seth Meyers Suspects Trump Absolutely Believed AI Video of Himself Was Real

2025-09-30
TheWrap
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a deepfake video of President Trump making false claims. The sharing of this video by a public figure can lead to misinformation and harm to communities by spreading false narratives. Although the article does not explicitly state direct harm occurred, the dissemination of AI-generated false content with potential to mislead the public constitutes harm to communities. Therefore, this qualifies as an AI Incident due to the realized harm of misinformation spread via AI-generated content.

Trump Posts Then Deletes AI-Generated Video Of Himself Promoting QAnon-Linked 'Medbed' Conspiracy Theory

2025-09-30
uInterview
Why's our monitor labelling this an incident or hazard?
The event describes an AI-generated deepfake video that was posted publicly and promoted a conspiracy theory about medical technology. The AI system was used to create realistic but false content that misleads the public, which is a form of harm to communities and public health. The harm is realized as the video was posted and viewed before deletion. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's use in spreading misinformation.

Trump posts and deletes AI video that fuels very wild conspiracy theory

2025-09-30
UNILAD
Why's our monitor labelling this an incident or hazard?
The AI system's involvement is clear as it generated a realistic but false video of Trump promoting a conspiracy theory. The sharing of this video on a social media platform can be seen as misuse of AI-generated content that spreads misinformation. While misinformation can harm communities and public trust, the article does not provide evidence that this specific video caused realized harm. Therefore, this event is best classified as an AI Hazard because the AI-generated video could plausibly lead to harm through misinformation, but no direct harm is confirmed in the article.

Donald Trump's Medbed Video is under control

2025-09-30
ExBulletin
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a deepfake video that spreads a false and harmful conspiracy theory about medical technology. The misinformation has been widely disseminated, causing public concern and prompting expert rebuttals. This constitutes harm to communities by spreading false health information and undermining public trust, fulfilling the criteria for an AI Incident. The AI system's use directly led to this harm through the creation and spread of the deceptive video content.

What is the 'Medbeds' conspiracy theory? Trump sparks debate with now-deleted viral video

2025-10-01
Hindustan Times
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic but fake video (deepfake) that was shared publicly, promoting false medical claims. This constitutes the use of AI-generated content leading to misinformation, which can harm public health by misleading people about medical treatments. The harm here is indirect but significant, as it affects community trust and health decisions. Therefore, this qualifies as an AI Incident due to the realized harm from AI-generated misinformation impacting communities and health.

Leavitt Defends Trump's AI Magic Beds Video as 'Refreshing'

2025-10-01
The Daily Beast
Why's our monitor labelling this an incident or hazard?
The article describes the posting of an AI-generated video spreading a conspiracy theory, which is misinformation and could potentially harm public understanding or trust. However, the article does not report any direct or indirect harm resulting from this event, such as health injury, rights violations, or disruption. The AI system's involvement is in content generation and dissemination, but the harm is potential rather than realized. Therefore, this event does not meet the threshold for an AI Incident or AI Hazard. It is primarily a report on the use of AI-generated content and the political and social reactions to it, which fits best as Complementary Information about AI's societal impact and misinformation risks.

Trump post on alien-based 'medbed' conspiracy theory defended by White House

2025-10-01
Metro
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a deepfake video of Trump promoting a false conspiracy theory about 'medbed' technology. The video was shared publicly, spreading misinformation that could plausibly lead to harm by misleading the public about health technologies. However, the article does not document any actual harm or incident resulting from this sharing. Therefore, this qualifies as an AI Hazard because the AI-generated content could plausibly lead to harm (misinformation causing societal harm), but no direct harm has been reported yet. It is not Complementary Information because the main focus is the AI-generated misinformation event itself, not a response or update to a prior incident.

'Refreshing': Karoline Leavitt sings Trump's praises for pushing AI-generated QAnon video

2025-10-01
Raw Story
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a fake video that promotes a conspiracy theory with known harmful societal impacts. The video was posted on social media by a prominent figure, amplifying its reach and potential harm. The event describes realized harm in the form of misinformation dissemination linked to a harmful conspiracy theory, which fits the definition of harm to communities. Hence, this qualifies as an AI Incident due to the direct role of the AI-generated content in spreading harmful misinformation.

How TruthSocial's bogus medical claims fool Trump fans

2025-10-01
Mother Jones
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as generating a fake video promoting a non-existent medical technology. The use of AI-generated content has directly led to misinformation spreading among a vulnerable user base, causing harm by fostering false hope, encouraging the use of ineffective or harmful treatments, and enabling scams that financially exploit people. These harms fall under harm to communities and potential harm to individuals' health and property. The AI system's development and use in creating and disseminating false medical claims is a direct contributing factor to these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Trump's AI-generated video fuels Medbed conspiracy theory - but what are Medbeds?

2025-10-01
DFA
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic but fake video of a public figure promoting false health claims, which directly contributes to misinformation that can harm individuals' health and communities. The video was posted and viewed, thus the harm is occurring through the spread of false information. Therefore, this qualifies as an AI Incident due to harm to communities and health through misinformation caused by AI-generated content.

White House defends Donald Trump's 'medbed' conspiracy theory post

2025-10-02
Extra.ie
Why's our monitor labelling this an incident or hazard?
The event centers on an AI-generated video (deepfake) shared by Donald Trump, which promotes a false conspiracy theory about advanced healing technology. The AI system's use here is the generation of realistic but false video content. While the misinformation could plausibly lead to harm (e.g., public confusion, erosion of trust, or health-related misinformation), the article does not document any direct or indirect harm resulting from this specific incident. Therefore, it does not meet the threshold for an AI Incident. It also does not primarily focus on responses, governance, or broader ecosystem context, so it is not Complementary Information. Given the plausible risk of harm from AI-generated misinformation, this event qualifies as an AI Hazard.

'We All Know': Stephen Colbert Taunts Trump With An Absolutely Bananas Reminder

2025-09-30
Yahoo
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system generating a deepfake video, which is an AI system's use. The sharing of this video by a prominent figure could plausibly lead to misinformation harm, which is a recognized societal harm. However, the article does not report any actual harm or consequences resulting from the video being shared, only the fact that it was shared and then deleted. Therefore, this situation represents a plausible risk of harm rather than a realized harm. It fits the definition of an AI Hazard because the AI-generated content could plausibly lead to misinformation-related harm, but no direct or indirect harm has been reported as having occurred yet.

'We All Know': Stephen Colbert Taunts Trump With An Absolutely Bananas Reminder

2025-09-30
HuffPost
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic but fake video featuring a public figure and a fabricated news report. The figure's own sharing of this AI-generated misinformation spread false claims about miraculous medical devices that do not exist. This constitutes harm to communities through misinformation and false health claims, fulfilling the criteria for an AI Incident, as the AI system's use directly led to harm.

Stephen Colbert Takes a Swipe at Trump's Bruised Hand and 'Bananas' AI Blunder

2025-09-30
The Inquisitr
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a fake news video that was shared publicly, leading to misinformation. The article explicitly notes the AI-generated nature of the video, and the resulting misinformation can harm communities by spreading false narratives and undermining trust in information sources. Because the AI system's use directly led to this harm, the event fulfils the criteria for an AI Incident.

Stephen Colbert reacts to Trump posting AI 'medbed' conspiracy video

2025-09-30
Mashable
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic but fake video featuring a public figure making false claims about a medical device that does not exist. The posting and dissemination of this AI-generated misinformation can cause harm to communities by spreading false information, which is a recognized form of harm under the framework. Since the AI-generated content has been posted and is circulating, the harm is realized rather than just potential. Therefore, this qualifies as an AI Incident due to the direct role of AI in creating and spreading misinformation that harms public understanding and trust.

Americans remain no closer to learning why the president shared an AI-generated video of himself promoting a QAnon conspiracy.

2025-10-01
Yahoo
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a video promoting a QAnon conspiracy, which is a false and misleading narrative. The sharing of this video by the president directly contributes to the spread of misinformation, which harms communities by undermining public trust and potentially influencing beliefs and behaviors based on falsehoods. This meets the criteria for an AI Incident as the AI system's use has directly led to harm to communities through misinformation.

Maddow Blog | White House tries to defend Trump amplifying bizarre 'medbed' pseudoscience

2025-10-02
Yahoo
Why's our monitor labelling this an incident or hazard?
The AI system is involved as it generated a fake video (deepfake) that was amplified by a public figure. While deepfakes can cause harm by spreading misinformation, this article does not report any realized harm such as public health injury, disruption, or rights violations directly caused by the video. Nor does it present a credible imminent risk of harm from this specific event. The article mainly discusses political and social reactions, making it complementary information about AI-generated misinformation and its societal context rather than an incident or hazard. Therefore, the event is best classified as Complementary Information.

Trump Leverages AI-Driven Media on Truth Social to Target Rivals and Enhance Reputation

2025-10-05
internewscast.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI and deepfake technology to create synthetic media that is shared widely and has a significant audience. The AI-generated content includes fabricated videos and images that misrepresent reality and political figures, which can mislead the public and distort political narratives. This has already led to harm in the form of misinformation and social disruption, fulfilling the criteria for harm to communities and violations of rights. The AI system's use in generating and disseminating this content is central to the event, making it an AI Incident rather than a hazard or complementary information.

'The president is unhinged': Trump's online behavior grows increasingly odd

2025-10-05
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos being posted by the president, which involves AI systems. The videos are fake and promote misinformation, which can harm communities by spreading false narratives. However, the article does not document a direct or immediate harm such as injury, legal violation, or critical infrastructure disruption caused by these AI videos. The harms are indirect and societal, and the article focuses on describing the behavior and public reaction rather than a specific AI Incident. There is no indication that the AI system malfunctioned or was misused in a way that caused a concrete incident, nor does the article describe a plausible future harm scenario beyond the ongoing misinformation context. The article is therefore best categorized as Complementary Information, providing supporting context on AI-generated misinformation and its societal implications.

'The president is unhinged': Trump's online behavior grows increasingly odd

2025-10-05
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation and sharing of AI-generated deepfake videos. The harm is realized as these videos spread misinformation and racist content, which harms communities and violates social norms, potentially inciting discrimination and unrest. The AI system's outputs have directly contributed to these harms. The article details actual harm occurring due to the AI-generated content, not just potential harm, so it is classified as an AI Incident rather than a hazard or complementary information.