AI-Generated Videos Spread Misinformation During Hurricane Melissa in Jamaica

The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

As Hurricane Melissa approached Jamaica, AI-generated fake videos—mostly created with OpenAI's Sora—flooded social media, spreading misinformation and distracting from critical safety updates. These fakes depicted fabricated disaster scenes and minimized the storm's threat, undermining official warnings and potentially endangering public safety.[AI generated]

Why's our monitor labelling this an incident or hazard?

The presence of an AI system is explicit (OpenAI's text-to-video model, Sora). Using this system to generate fake videos that misinform the public during a dangerous hurricane indirectly harms communities by undermining the dissemination of accurate safety information and can lead people to underestimate the storm's severity. The event therefore qualifies as an AI Incident: the harm caused by AI-generated misinformation during a critical event was realized, not merely potential.[AI generated]
AI principles
Accountability; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Physical (injury); Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

AI-generated fake videos proliferate as Hurricane Melissa nears Jamaica

2025-10-27
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is explicit (OpenAI's text-to-video model, Sora). Using this system to generate fake videos that misinform the public during a dangerous hurricane indirectly harms communities by undermining the dissemination of accurate safety information and can lead people to underestimate the storm's severity. The event therefore qualifies as an AI Incident: the harm caused by AI-generated misinformation during a critical event was realized, not merely potential.

AI-generated fakes proliferate as Hurricane Melissa nears Jamaica

2025-10-27
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (OpenAI's text-to-video model) to generate fake videos that are actively misleading the public during a dangerous hurricane. This misinformation disrupts the communication of critical safety information, which is a harm to communities and could plausibly lead to injury, loss of life, or property damage. The harm is realized as the fake content is already proliferating and causing confusion, not merely a potential risk. Hence, this qualifies as an AI Incident due to the direct link between AI-generated misinformation and harm to community safety during a natural disaster.

#EyeOnMelissa: AI-generated fakes proliferate as hurricane nears Jamaica

2025-10-27
Jamaica Observer
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (OpenAI's Sora) to generate fake videos that misinform the public about a severe hurricane, which is a direct factor in causing harm by distracting from official safety information and potentially leading to physical harm and property damage. This fits the definition of an AI Incident because the AI-generated content is actively causing harm to communities by undermining critical safety communication during a natural disaster.

AI-generated fakes proliferate as Hurricane Melissa nears Jamaica

2025-10-27
KTBS
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (OpenAI's text-to-video model) to generate fake videos that are actively misleading the public during a dangerous hurricane. This misinformation disrupts the communication of critical safety information, which is a harm to communities and could plausibly lead to injury, loss of life, or property damage. The harm is realized as the fake content is already proliferating and undermining official warnings, meeting the criteria for an AI Incident due to indirect harm caused by the AI-generated misinformation.

AI-generated fakes proliferate as Hurricane Melissa nears Jamaica

2025-10-27
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos (using OpenAI's text-to-video model Sora) spreading misinformation during a severe hurricane. This misinformation directly interferes with the communication of critical safety information, posing a risk to people's health and safety. The AI system's outputs are directly linked to harm to communities, fulfilling the criteria for an AI Incident.

These AI-generated videos about Hurricane Melissa are spreading fake news in Jamaica

2025-10-28
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos (deepfakes) created by an AI system (Sora by OpenAI) that are spreading false and misleading information about a dangerous hurricane. This misinformation undermines official warnings and could lead to people ignoring safety advice, thus directly or indirectly causing harm to people and communities. The harm is realized and ongoing, not just potential, as the videos are already circulating and influencing public perception. Hence, this is an AI Incident involving the use of AI systems leading to harm to communities and potentially human health and safety.

Beware of these fake videos circulating about Hurricane Melissa

2025-10-28
Le Huffington Post
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (video generation models like OpenAI's Sora) to create and disseminate false videos about a natural disaster. This misinformation directly undermines public safety efforts and could plausibly lead to injury or harm to people (harm to health and communities). The harm is indirect but real, as the false content may cause people to ignore warnings and fail to take necessary precautions. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI-generated misinformation during a critical emergency situation.

Hurricane Melissa: beware of fake AI-generated videos circulating on social media

2025-10-28
SudOuest.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos spreading false information about a dangerous hurricane, which undermines official warnings and could lead to loss of life and property damage. The AI system's use in generating these videos is central to the harm caused. The misinformation is actively circulating and has already caused confusion, which meets the criteria for an AI Incident involving harm to communities and potential injury or death. The involvement of AI in generating the misleading content and the resulting harm is direct and significant.

Hurricane "Melissa": when fake AI-generated videos muddy the news

2025-10-28
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos spreading false and misleading information about a dangerous hurricane, which is actively undermining official safety messages. This misinformation can and likely does lead to harm by causing people to underestimate the threat, risking injury, death, and property damage. The AI system's use in generating these videos is central to the harm. The harm is to communities and potentially to human health and safety, fitting the definition of an AI Incident. The event is not merely a potential hazard or complementary information but a realized incident involving AI-generated misinformation causing harm.

Hurricane Melissa: when fake AI-generated videos muddy the news

2025-10-28
Mediapart
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Sora, a video generation model by OpenAI) generating false videos that spread misinformation about the hurricane. This misinformation undermines official warnings and safety messages, leading to a plausible risk of injury, loss of life, and property damage. The harm is realized as the false content is actively circulating and causing confusion among the public, which meets the criteria for an AI Incident involving harm to communities. The involvement of AI in generating the misleading content is direct and pivotal to the harm caused.

Hurricane Melissa: no, residents are not jet-skiing during the storm

2025-10-28
RTL.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies AI-generated videos spreading false and misleading information about a dangerous hurricane, which undermines official warnings and could lead to loss of life and property. The AI system's outputs are directly linked to harm to communities and disruption of emergency response, fulfilling the criteria for an AI Incident. The harm is occurring (not just potential), and the AI system's role is pivotal in generating convincing false content that confuses the public and endangers lives.

Phony AI-generated videos of Hurricane Melissa flood social media sites

2025-10-29
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as video generators creating false videos that are spreading misinformation about a natural disaster. This misinformation can harm communities by causing confusion and potentially impeding effective disaster response, which qualifies as harm to communities. Since the harm is occurring through the active spread of AI-generated false content, this meets the criteria for an AI Incident. The AI system's use directly leads to harm through misinformation dissemination during a critical event.

Fake hurricane videos shared online including AI-generated sharks

2025-10-28
BBC
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as text-to-video generators creating fake hurricane videos. The use of AI-generated misleading content during an ongoing natural disaster can directly harm communities by spreading false information, which fits the definition of harm to communities under AI Incidents. The harm is realized as these videos have been widely viewed and circulated, influencing public perception and potentially affecting safety responses. The event is not merely a potential risk but an actual occurrence of harm facilitated by AI misuse, thus qualifying as an AI Incident rather than a hazard or complementary information.

Phony AI-generated videos of Hurricane Melissa flood social media...

2025-10-29
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos that falsely depict disaster scenes, which have been widely viewed and shared, causing misinformation and confusion among the public. The AI systems (video generation tools) are directly responsible for creating the misleading content. The harm is to communities through misinformation during a crisis, which fits the definition of harm to communities under AI Incident. The event is not merely a potential risk but an ongoing harm, so it is not an AI Hazard or Complementary Information. It is not unrelated because AI systems are central to the event.

Fake Hurricane Melissa Videos Risk Safety Of Public

2025-10-29
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating fake videos that misinform the public during a hurricane, which is a direct use of AI leading to harm. The misinformation can cause people to make unsafe decisions, thus harming communities indirectly. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities by undermining accurate safety information during a critical event. Therefore, the event is classified as an AI Incident.

Phony AI-Generated Videos of Hurricane Melissa Flood Social Media Sites

2025-10-29
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos (deepfakes) that are spreading false information about Hurricane Melissa, leading to confusion and misinformation among social media users. The AI systems (video generators like Sora) are directly involved in creating these misleading videos. The harm is realized as misinformation during a natural disaster, which can negatively impact communities by causing confusion and potentially undermining trust in official information sources. Therefore, this event meets the criteria for an AI Incident due to the direct role of AI in causing harm to communities through misinformation.

Fake Hurricane Videos of AI-Generated Sharks Are Flooding Social Media: How to Spot Them

2025-10-29
PEOPLE.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fabricated videos that are misleading the public during a severe hurricane, directly impacting the community by spreading misinformation that can cause harm to people and disrupt crisis management efforts. The AI-generated content is used maliciously or irresponsibly, leading to violations of the right to accurate information and potentially endangering lives. Therefore, this qualifies as an AI Incident due to realized harm to communities through misinformation during a critical event.

Phony AI videos of Hurricane Melissa flood social media

2025-10-29
PBS.org
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate fake videos that are spreading misinformation about a natural disaster. This misinformation is causing harm to communities by sowing confusion and potentially impacting public safety and trust. Since the AI-generated content is actively misleading people and has been widely viewed, this constitutes an AI Incident due to harm to communities through misinformation. The article does not merely warn about potential future harm but documents ongoing harm caused by AI-generated content.

Fake Hurricane Melissa videos including AI-generated sharks spreading online

2025-10-28
Daily Star
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create and disseminate fabricated videos that misrepresent the real impact of Hurricane Melissa. This misinformation can harm communities by misleading the public about the severity and nature of the disaster, which is a form of harm to communities and potentially public safety. Since the AI-generated content is actively spreading and causing misinformation harm, this qualifies as an AI Incident due to indirect harm caused by the AI system's outputs.

Phony AI-generated videos of Hurricane Melissa flood social media sites

2025-10-29
The Boston Globe
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as video generators creating deepfake content that is spreading misinformation about a natural disaster. This misinformation is causing confusion among social media users, which constitutes harm to communities by undermining accurate information dissemination during an emergency. The AI's role is pivotal as the videos are AI-generated and are the source of the false content. The harm is realized, not just potential, as the videos are actively circulating and misleading people. Hence, this fits the definition of an AI Incident rather than a hazard or complementary information.

Phony AI-generated videos of Hurricane Melissa flood social media sites

2025-10-29
Market Beat
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly, namely AI video generators producing deepfake videos. The use of these AI systems to create and spread misinformation during a natural disaster could plausibly lead to harm to communities by sowing confusion and undermining trust in official information sources. However, the article does not document any actual harm or incident caused by these AI-generated videos, only the potential for such harm. The focus is on the emerging risk and challenges posed by AI-generated deepfakes in crisis contexts, making this an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the event described.

Phony AI-generated videos of Hurricane Melissa flood social media sites

2025-10-29
WTOP
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI video generators) creating and spreading false videos about a natural disaster, which is misinformation that can confuse and mislead communities. This fits the definition of an AI Hazard because the AI-generated content could plausibly lead to harm (e.g., public confusion, misinformed decisions during a disaster). However, the article does not document any actual harm or incident resulting from these videos, only the potential for such harm. Hence, it is classified as an AI Hazard rather than an AI Incident or Complementary Information.

Phony AI-generated videos of Hurricane Melissa flood social media sites

2025-10-29
Spectrum News Bay News 9
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos that falsely depict disaster scenes, which have gone viral and caused confusion among social media users. This misinformation can disrupt public understanding and response during a natural disaster, harming communities. The AI systems (video generators) are directly responsible for creating and spreading this false content. Hence, this qualifies as an AI Incident due to realized harm to communities through misinformation.

Phony AI-generated videos of Hurricane Melissa flood social media sites

2025-10-29
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The videos are explicitly described as AI-generated misinformation that falsely represent disaster impacts, which can harm communities by spreading false narratives and causing confusion or panic. The AI system's use in generating these videos directly leads to harm to communities through misinformation dissemination. Therefore, this qualifies as an AI Incident due to harm to communities caused by AI-generated false content actively spreading on social media.

Phony AI-generated videos of Hurricane Melissa flood social media sites

2025-10-29
2 News Nevada
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos (deepfakes) that are spreading false information about Hurricane Melissa, causing confusion among social media users. The AI systems (video generation tools like Sora) are directly involved in creating the misleading content. The harm is realized in the form of misinformation that affects communities during a natural disaster, which fits the definition of harm to communities under AI Incident criteria. The event is not merely a potential risk but an ongoing issue with millions of views and public officials warning about it, confirming actual harm. Hence, it is classified as an AI Incident.

Phony AI-generated videos of Hurricane Melissa flood social media sites

2025-10-30
The Boston Globe
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI video generators like Sora) used to create deepfake videos that are spreading misinformation about Hurricane Melissa. The misinformation is actively causing harm by confusing social media users and potentially undermining public trust and response during a natural disaster, which constitutes harm to communities. The harm is realized and ongoing, not merely potential, as the videos are already circulating and causing confusion. Hence, this fits the definition of an AI Incident where the AI system's use has directly or indirectly led to harm to communities. The article also discusses the societal implications and challenges posed by these AI-generated videos, reinforcing the classification as an incident rather than a hazard or complementary information.

Those are not sharks in a pool after Hurricane Melissa passed through Jamaica; it is AI

2025-10-29
Noticias SIN
Why's our monitor labelling this an incident or hazard?
The event involves an AI system because the video of sharks in the pool is AI-generated content. However, the event describes no realized or plausible harm: the article focuses on debunking the misinformation and clarifying that the video is AI-generated. It is therefore Complementary Information providing context and correction about AI-generated content, rather than an incident or hazard involving harm.

Real or fake? The truth about whether Hurricane Melissa swept sharks into a hotel pool in Jamaica

2025-10-30
MVS Noticias
Why's our monitor labelling this an incident or hazard?
The article focuses on debunking a false video created with AI, clarifying that no real incident involving sharks in a pool occurred. While AI was used to generate the misleading video, the event does not describe any realized harm (such as injury, rights violations, or disruption) caused by the AI system. Nor does it indicate a plausible future harm scenario stemming from the AI system's use. Instead, it provides contextual information about AI-generated misinformation and its detection, which fits the definition of Complementary Information rather than an Incident or Hazard.

Sharks during Hurricane Melissa: the truth behind the clip

2025-10-30
@MeganoticiasTVC
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating the fake video content, but the event does not describe any realized harm such as injury, disruption, rights violations, or property damage caused by the AI-generated video. Instead, it focuses on exposing the falsehood and the potential for such AI-generated content to mislead people. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information that provides context on AI-generated misinformation and its societal implications.

Video | Sharks swim in a Jamaican hotel pool after Hurricane Melissa passes: real or artificial intelligence?

2025-10-30
El Mercurio de Santiago
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to generate false video content (deepfake or AI-generated imagery). The harm described is misinformation, a recognized form of harm to communities: the fabricated video misleads the public in the aftermath of a natural disaster. Because the false content was actually disseminated, as the article documenting its spread shows, the harm is realized, and the event qualifies as an AI Incident due to the direct use of AI to create misleading content.

After Hurricane "Melissa" passed through Jamaica, disinformation videos created with AI are circulating

2025-10-31
Animal Político
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the creation of false videos (AI-generated content) that spread misinformation about a natural disaster. While this misinformation could potentially harm communities by misleading them, the article does not document actual harm occurring from these videos. Instead, it serves to inform and correct the falsehoods, which aligns with providing complementary information about AI misuse and its societal implications. Therefore, this event is best classified as Complementary Information rather than an AI Incident or AI Hazard.