AI-Generated Fake War Videos Spread via Hacked Accounts on X


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Pakistani user hacked 31 X (formerly Twitter) accounts to spread AI-generated fake videos about the Iran-US-Israel conflict, promoting pro-Iran content and misleading the public. X's team, led by product head Nikita Bier, has taken action against these accounts to curb misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system is involved because the videos are AI-generated. The misuse of AI to create and spread false war-related content directly harmed communities by spreading misinformation during a conflict, which falls under harm to communities in the AI Incident definition. The event describes realized harm (the active spreading of false narratives) rather than merely potential harm, so it qualifies as an AI Incident.[AI generated]
AI principles
Accountability; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard


X begins crackdown on AI-generated war videos, warns of account bans | Elon Musk's X cracks down on undisclosed AI war videos, warns creators of 90-day revenue ban

2026-03-04
दैनिक भास्कर हिंदी
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate misleading war videos, which can cause harm by spreading misinformation and escalating tensions, thus harming communities. However, the article focuses on the platform's policy changes and enforcement actions to mitigate this harm rather than describing a specific incident of harm occurring. Therefore, this is best classified as Complementary Information, as it provides updates on governance and societal responses to AI-related misinformation risks rather than reporting a direct AI Incident or a plausible future hazard alone.

War lies were being spread using AI! X catches a Pakistani man who was running the operation from 31 accounts

2026-03-06
hindi
Why's our monitor labelling this an incident or hazard?
An AI system is involved because the videos are AI-generated. The misuse of AI to create and spread false war-related content directly harmed communities by spreading misinformation during a conflict, which falls under harm to communities in the AI Incident definition. The event describes realized harm (the active spreading of false narratives) rather than merely potential harm, so it qualifies as an AI Incident.

A Pakistani user's misdeed: 31 accounts hacked to post AI videos of the Iran-US-Israel war

2026-03-06
Navbharat Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate fake war videos, which were then disseminated through hacked accounts to spread disinformation. This disinformation harms communities by misleading the public and potentially exacerbating geopolitical tensions. The AI system's use here directly led to harm through misinformation and manipulation, fulfilling the criteria for an AI Incident. The platform's response to suspend creators and remove monetization is a mitigation effort but does not change the classification of the event as an incident.

X takes tough action on AI-generated war videos; rule-breakers will lose their earnings

2026-03-04
दैनिक भास्कर हिंदी
Why's our monitor labelling this an incident or hazard?
The article focuses on the platform's policy and enforcement measures to mitigate the spread of misleading AI-generated war videos. It does not report an actual incident of harm caused by AI-generated content but rather a preventive measure to reduce potential harm. Therefore, this is Complementary Information about governance and societal response to AI-related risks, not an AI Incident or AI Hazard.

X takes tough action on AI-generated war videos; rule-breakers will lose their earnings

2026-03-04
Newsnation
Why's our monitor labelling this an incident or hazard?
The article focuses on the platform's new policy and enforcement measures to mitigate harm from AI-generated misleading war videos. It does not report an actual AI Incident causing harm, nor does it describe a specific AI Hazard event where harm is plausible but not realized. Instead, it details a governance response to an existing risk, aiming to prevent harm. Therefore, this is Complementary Information as it provides important context on societal and governance responses to AI-related misinformation risks, without describing a new incident or hazard itself.

X takes tough action on AI-generated war videos; rule-breakers will lose their earnings | business.khaskhabar.com

2026-03-04
business.khaskhabar.com
Why's our monitor labelling this an incident or hazard?
The article focuses on the platform's policy and enforcement measures to prevent the spread of misleading AI-generated war videos. While the AI-generated content can cause harm by misleading people about real-world conflicts (harm to communities), the article does not report a specific incident where such harm has already occurred. Instead, it describes a preventive measure and the use of detection systems. Therefore, this is best classified as Complementary Information, as it provides a governance response to a known AI-related risk rather than reporting a realized AI Incident or a plausible future hazard alone.

X begins crackdown on AI-generated war videos, warns of account bans

2026-03-04
Newsnation
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate misleading war-related videos, which can cause harm by spreading false information and increasing tensions among communities. However, the article focuses primarily on the platform's policy changes and mitigation measures rather than an actual incident of harm caused by AI-generated content. Therefore, this is best classified as Complementary Information, as it provides updates on governance and societal responses to AI-related misinformation risks rather than reporting a specific AI Incident or AI Hazard.

X begins crackdown on AI-generated war videos, warns of account bans | business.khaskhabar.com

2026-03-05
business.khaskhabar.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate misleading war-related videos that can harm communities and public trust by spreading misinformation during a sensitive geopolitical situation. However, the article focuses mainly on the platform's policy changes and preventive measures rather than describing a specific realized harm or a direct AI-driven harm event. Therefore, this is best classified as Complementary Information, as it covers governance and societal responses to AI-related misinformation risks rather than reporting a concrete AI Incident or an AI Hazard.

X Exposes Pakistan Man Using 31 Accounts To Post AI Videos Amid US-Israel-Iran Conflict

2026-03-05
NDTV
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated videos being used to spread misleading war propaganda, which constitutes harm to communities by disseminating false information during a conflict. The use of hacked accounts to amplify this content further exacerbates the issue. Since the AI system's use has directly led to the spread of harmful misinformation, this qualifies as an AI Incident under the framework's definition of harm to communities. The platform's response to suspend users and remove incentives is complementary information but does not change the classification of the primary event.

X removes network spreading AI-generated war videos

2026-03-05
MoneyControl
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos being spread as real war footage, causing misinformation during a sensitive geopolitical conflict. This misinformation can harm communities by spreading confusion and panic. The AI system's outputs (the fake videos) directly contributed to this harm. The coordinated network used hacked accounts to amplify this misleading content, showing misuse of AI-generated content. Since the harm is occurring and the AI system's role is pivotal, this qualifies as an AI Incident.

X Officials Discover Pakistan-Based Operator Behind 31 'Iran War Monitor' Accounts Spreading Fake War Videos

2026-03-04
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Sora 2) to generate fake war videos that were spread via hacked accounts, causing misinformation during a sensitive geopolitical conflict. This misinformation harms communities by spreading false narratives and undermining trust in information sources, which fits the definition of harm to communities under AI Incident criteria. The AI system's use in generating the fake content is central to the incident, and the harm is realized, not just potential. The platform's actions to detect and remove the accounts are complementary but do not negate the incident classification.

Pakistani found running 31 handles on X, posting AI-generated war videos amid US-Iran conflict, gets deplatformed: Details

2026-03-05
OpIndia
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated war videos being posted by a coordinated network of accounts, which were then dismantled by the platform. The AI system's use in creating fabricated content directly contributes to harm by spreading disinformation amid a sensitive geopolitical conflict, affecting communities and potentially influencing public opinion and stability. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities through misinformation. The platform's enforcement actions are complementary but do not negate the incident classification.

X bans accounts spreading AI-generated war videos amid ongoing crisis

2026-03-05
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated videos used to spread misleading content about a war, which constitutes harm to communities through misinformation. The AI system's outputs (the videos) were central to the incident, and the misuse of hacked accounts to distribute this content led to realized harm. Therefore, this qualifies as an AI Incident due to the direct role of AI-generated content in causing harm.