AI-Generated Fake Pentagon Explosion Image Triggers Stock Market Crash

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated fake image depicting an explosion near the Pentagon was spread by a fake verified Bloomberg Twitter account, leading major news outlets to report it as real. The misinformation caused panic, resulting in a significant drop in the S&P 500 and millions of dollars in financial losses before the hoax was debunked. [AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to create and distribute realistic fake images (deepfakes) that falsely depicted a critical event (Pentagon explosion). This misinformation directly led to harm by causing market disruption and financial losses, which qualifies as harm to communities and property. Therefore, this event meets the criteria of an AI Incident due to the realized harm caused by the AI-generated content. [AI generated]
AI principles
Accountability; Robustness & digital security; Transparency & explainability; Safety; Democracy & human autonomy; Human wellbeing

Industries
Media, social platforms, and marketing; Financial and insurance services; Government, security, and defence; Digital security

Affected stakeholders
Business; General public

Harm types
Economic/Property; Reputational; Public interest; Psychological

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

Fake image claiming an explosion at the Pentagon caused chaos! The stock market simply crashed - Son Dakika

2023-05-23
Son Dakika
Why's our monitor labelling this an incident or hazard?
An AI system was used to create and distribute realistic fake images (deepfakes) that falsely depicted a critical event (Pentagon explosion). This misinformation directly led to harm by causing market disruption and financial losses, which qualifies as harm to communities and property. Therefore, this event meets the criteria of an AI Incident due to the realized harm caused by the AI-generated content.

The fake Pentagon explosion caused the stock market to fall

2023-05-23
Cumhuriyet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating fake images that were widely disseminated, leading to real-world economic harm through stock market disruption and financial loss. This fits the definition of an AI Incident because the AI system's use directly caused harm to communities and property (financial assets).

A fake photo shook the US stock market! It crashed within 5 minutes

2023-05-23
Milliyet
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a fake image that was disseminated widely, leading to a significant and immediate financial loss in the stock market and social disruption. The harm is realized and directly linked to the AI-generated content. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

Fake image claiming an explosion at the Pentagon caused chaos! The stock market simply crashed

2023-05-23
Haberler
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the fake image was produced by AI and that its dissemination caused real-world harm, including market disruption and financial losses. The AI system's use in generating and spreading false information directly led to these harms, fitting the definition of an AI Incident involving harm to communities and economic harm.

Why is the stock market falling, and when will it rise? What happened to the market on 23 May 2023, and what is the latest situation?

2023-05-23
Haberler
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create and spread false visual content (deepfake images) that directly led to harm in the form of financial losses and market disruption. The AI-generated fake images caused misinformation that influenced investor behavior, resulting in a significant drop in the stock market and millions of dollars lost. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities (financial harm to investors and market participants).

Why is the dollar falling? Has the dollar dropped? What is the current dollar rate in 2023?

2023-05-23
Haberler
Why's our monitor labelling this an incident or hazard?
An AI system was used to create realistic but fake images of an explosion, which were then widely shared and believed, causing panic and a significant drop in the stock market. This directly led to financial harm and disruption of economic operations, fulfilling the criteria for an AI Incident. The harm is realized and directly linked to the AI-generated content's misuse, not merely a potential risk or general news. Therefore, this event is classified as an AI Incident.

Artificial intelligence's Pentagon trick rattled the stock market! Millions of dollars went up in smoke

2023-05-23
Haber 7
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create and disseminate false images that misled the public and media, causing real-world financial harm. The AI system's outputs directly led to significant economic disruption and harm to the community by spreading misinformation. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

The photo that caused chaos: the American stock market simply crashed

2023-05-23
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The incident involves the use of AI or AI-enabled tools to create or manipulate images (deepfakes or AI-generated fake content) that were then spread on social media, causing misinformation and social disruption. This fits the definition of an AI Incident because the AI system's use (in generating or facilitating the fake image) directly led to harm to communities by spreading false information and causing public confusion. The presence of AI is reasonably inferred from the creation of a fake image and fake verification marks, which are commonly produced using AI techniques.

Artificial intelligence beat journalism

2023-05-22
Haber Sitesi ODATV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to generate a fake image accompanying false news, which was disseminated via social media. The AI-generated content directly contributed to significant harm to communities and economic stability by causing market disruption and panic. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's outputs.

Fake Pentagon Explosion Mistaken for Real: The Stock Market Crashed

2023-05-22
tamindir.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used maliciously to generate fake content that directly caused harm to communities and economic systems (harm to property and communities). The spread of AI-generated misinformation led to a significant stock market drop and financial losses, fulfilling the criteria for an AI Incident. The AI's role is pivotal as the false visuals were created by AI and were central to the misinformation and resulting harm.

Uproar Over a Photo of an Explosion at the Pentagon - It Turns Out to Be Fake!

2023-05-23
detik News
Why's our monitor labelling this an incident or hazard?
The photo is described as AI-generated fake content that caused real-world social disruption and market impact. The AI system's use in generating the false image directly led to misinformation spreading and public confusion, which is a harm to communities. Although no physical injury or property damage occurred, the incident meets the criteria for an AI Incident due to the realized harm from misinformation and its societal impact. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.

Fake Photo of a Pentagon Explosion Surfaces, Briefly Sending the Stock Market Plunging

2023-05-24
detik News
Why's our monitor labelling this an incident or hazard?
The event describes the creation and viral spread of a fake image generated by AI, which led to a real-world impact: a temporary stock market drop. The AI system's role in generating the false image and enabling its viral spread is central to the harm caused. The harm is to communities and economic stability, fitting the definition of an AI Incident. The incident is not merely a potential risk but a realized harm caused directly or indirectly by the AI system's outputs.

Commotion Over a Photo of an 'Explosion' at the Pentagon - It Turns Out to Be an AI Image

2023-05-24
detikINET
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a fake image that was widely disseminated, causing misinformation and social disruption (harm to communities). This fits the definition of an AI Incident because the AI-generated content directly led to significant harm in the form of public confusion and market disruption. Although no physical injury or property damage occurred, harm to communities through misinformation is recognized as a valid harm under the framework. Therefore, this event qualifies as an AI Incident.

Pentagon Explosion Hoax Spreads on Twitter, Experts' Prediction Comes True

2023-05-24
CNN Indonesia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a fake image that was used to spread false information about an explosion at the Pentagon. This misinformation was widely disseminated, including on a major TV network, causing public confusion and a temporary stock market drop. The harm to communities through misinformation and economic disruption qualifies as harm under the AI Incident definition. The AI system's use in creating the fake image and its role in the incident is direct and pivotal. Therefore, this event is classified as an AI Incident.

Fake Images Circulate Widely, AI's Role Becomes a Matter of Debate

2023-05-23
Media Indonesia - News & Views
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that the fake images are likely generated by AI and that their spread caused real-world harm by shaking the stock market temporarily. This constitutes an AI Incident because the AI system's use directly led to significant harm (economic disruption and misinformation). The involvement of AI in generating the fake images and the resulting harm meets the criteria for an AI Incident rather than a hazard or complementary information.

Suspected to Be AI-Made, a Fake Image of a Pentagon Explosion Goes Viral and Disrupts the US Stock Market

2023-05-23
SINDOnews.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the fake image of the Pentagon explosion was likely produced by AI and that its dissemination caused a ten-minute chaos in the US stock market. The AI system's use in generating and spreading false visual content directly led to significant harm (market disruption and misinformation). This fits the definition of an AI Incident, as the AI system's use directly led to harm to communities and economic disruption. The event is not merely a potential risk or a complementary update but a realized harm caused by AI-generated misinformation.

Fake Photo of an Explosion Near the Pentagon Goes Viral, Suspected to Be AI-Made

2023-05-23
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The photo was identified as AI-generated fake content, which led to misinformation and a temporary market disruption. Although the AI system was used to create the false image, no actual physical harm, rights violations, or infrastructure disruption took place. The event does not describe realized harm caused by the AI system but shows the potential for misinformation-related harm. Since the harm is indirect and limited to misinformation effects without physical or legal harm, and the event is about the circulation of AI-generated fake content, it fits best as Complementary Information, providing context on AI misuse and its societal impact rather than constituting a direct AI Incident or Hazard.

Viral Photo of an Explosion at the Pentagon Turns Out to Be AI-Made

2023-05-23
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic fake images that falsely depict an explosion at the Pentagon, which is misinformation. While this could plausibly lead to harm such as public panic or disruption, the article states that the explosion did not occur and the misinformation was debunked. Since no actual harm has occurred yet, but the AI-generated content poses a credible risk of harm, this qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the AI-generated false images and their potential impact, not on responses or broader ecosystem context.

Image of a fake explosion at the Pentagon causes panic on US stock markets

2023-05-24
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a false image that was disseminated and caused real-world harm by disrupting financial markets and causing panic. The AI system's use directly led to harm to communities (market participants and the public) through misinformation. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated content.

A fake image of an explosion at the Pentagon shakes Wall Street

2023-05-23
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the image was generated by AI and that its spread caused a tangible impact on financial markets, demonstrating direct harm. The AI system's use in creating and distributing false content that influenced market behavior fits the definition of an AI Incident, as it led to harm to communities and economic disruption. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

A fake AI-generated image sends the US stock market tumbling

2023-05-23
PhonAndroid
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a false image (AI generative technology) that was used to spread misinformation. This misinformation directly caused a momentary disruption in the financial market (a 0.26% drop in the S&P 500), which is a form of harm to communities and economic systems. The AI system's use is central to the incident, as the image was AI-generated and led to real-world consequences. Hence, it meets the criteria for an AI Incident due to indirect harm caused by the AI-generated content.

United States: an AI-generated photo rattles Wall Street

2023-05-23
La Croix
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic but false image that was disseminated on social media, leading to a brief disruption in financial markets (harm to economic community) and public alarm. The AI-generated misinformation directly caused this harm, fulfilling the criteria for an AI Incident. Although the harm was short-lived and mitigated, the event involved realized harm linked to AI use, not just potential harm or complementary information.

A photo of the Pentagon on fire sends financial markets plunging: what happened?

2023-05-23
Tom's Guide: high-tech and software news
Why's our monitor labelling this an incident or hazard?
The event involves a likely AI-generated image (AI system involvement) that was used to spread false information causing market disruption (harm to communities and economic harm). However, the harm is indirect and no physical injury, rights violation, or critical infrastructure disruption occurred. Since the false image was already spread and caused real market impact, this qualifies as an AI Incident due to indirect harm caused by AI-generated misinformation. The uncertainty about AI involvement is outweighed by the plausible signs of generative AI use and the realized harm from misinformation.

The Pentagon on fire? An AI-generated image goes around the world - CNET France

2023-05-24
CNET France
Why's our monitor labelling this an incident or hazard?
The AI system (generative AI) was used to produce a false image that indirectly led to a temporary financial market disruption, which can be considered harm to communities or economic harm. However, the harm was not physical or severe and was quickly mitigated. Since the harm occurred and was directly linked to the AI-generated misinformation, this qualifies as an AI Incident. The event is not merely a potential risk (hazard) nor a complementary information update, but an actual incident where AI-generated content caused real-world impact.

Was there an explosion near the Pentagon?

2023-05-23
News 24
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake image depicting an explosion near the Pentagon, which was widely shared and caused misinformation and a brief market disturbance. The AI-generated content directly led to harm in the form of misinformation and economic disruption, fulfilling the criteria for an AI Incident. Although no physical harm or explosion occurred, the indirect harm to communities (through misinformation) and economic systems (market impact) is clear and materialized. Therefore, this event is classified as an AI Incident.

This image of an explosion at the Pentagon is not real

2023-05-24
L'Éclaireur Fnac
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate a false image that was widely shared, leading to misinformation about a serious event. This misinformation caused automated trading algorithms to react, resulting in a measurable but temporary drop in the stock market index. The harm is indirect but real, as the AI-generated content triggered economic disruption. Hence, the event meets the criteria for an AI Incident due to indirect harm caused by the AI system's outputs.

AI-generated image of a fake Pentagon explosion goes viral

2023-05-22
TecMundo
Why's our monitor labelling this an incident or hazard?
The event describes a false image likely generated by an AI system that led to the spread of misinformation about an explosion at the Pentagon. This misinformation caused social disruption and affected the stock market briefly, which constitutes harm to communities and economic harm. The AI system's involvement is in the creation of the misleading image, which directly contributed to the incident. Therefore, this meets the criteria for an AI Incident due to realized harm caused by AI-generated content.

Fake photo of a Pentagon explosion affects US stock exchanges - Estadão E-Investidor

2023-05-23
E-Investidor
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a false image that directly led to harm in the form of market disruption (harm to economic communities). The harm was realized, as the stock market indices dropped temporarily due to the misinformation. Therefore, this qualifies as an AI Incident because the AI-generated content directly caused harm to communities (market participants) through misinformation-induced market reaction.

Fake photo of an explosion near the Pentagon goes viral and briefly affects stocks

2023-05-22
O Globo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a false image that was widely disseminated, leading to misinformation and a brief but real disruption in financial markets (a harm to communities and economic systems). The AI system's use (image generation) directly led to this harm. Although the harm was temporary and corrected, it was realized and significant enough to classify as an AI Incident. The event is not merely a potential risk or a complementary update but a concrete case of AI-generated misinformation causing harm.

Image of an explosion near the Pentagon in the US is #FAKE

2023-05-23
Jornal Floripa - News from Florianópolis - Santa Catarina, Brazil
Why's our monitor labelling this an incident or hazard?
The image was likely generated by an AI system, but since no actual explosion or harm occurred, and the misinformation was promptly refuted by authorities, this does not constitute an AI Incident. It also does not represent a plausible future harm scenario as the misinformation was contained. The article primarily provides complementary information about the false AI-generated image and its social impact (brief market reaction), thus it fits best as Complementary Information rather than an Incident or Hazard.

AI: Fake image of a Pentagon explosion tweeted from a 'verified' account goes viral and sends the stock market down

2023-05-23
MediaTalks
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (generative AI) producing a fake image that was used to spread false information. This misinformation caused harm to communities by disrupting the financial market, which is a significant and clearly articulated harm. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm caused by AI-generated content.

Fake image of a Pentagon explosion goes viral

2023-05-22
ISTOÉ Independente
Why's our monitor labelling this an incident or hazard?
The event describes a false image likely generated by AI that caused a real-world impact: a temporary market drop due to misinformation. The AI system's use (generative AI creating a fake image) directly led to harm in the form of economic disruption and misinformation spreading, which harms communities and market stability. This fits the definition of an AI Incident because the AI system's use directly led to significant harm (market disruption and misinformation). There is no indication that this is merely a potential risk or a complementary update; the harm has already occurred. Therefore, the classification is AI Incident.

An AI-generated image made the Pentagon 'explode' and sent Wall Street into a panic

2023-05-23
01net
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a realistic but false image of an explosion at the Pentagon. This image was disseminated widely on social media, leading to misinformation, public panic, and a temporary negative impact on financial markets (Wall Street). The harm here is to communities through misinformation and economic disruption, which fits the definition of harm to communities under AI Incident criteria. The event is not merely a potential risk but a realized incident of harm caused indirectly by the AI-generated content. Hence, it is classified as an AI Incident.

Why is artificial intelligence worrying screenwriters?

2023-05-23
Journal du Geek
Why's our monitor labelling this an incident or hazard?
The article centers on the societal and legal concerns about AI's impact on screenwriting and creative professions, including fears of replacement and intellectual property challenges. It mentions the use of AI tools and the industry's response, such as strikes and calls for regulation, but does not describe any realized harm or incident caused by AI. Therefore, it fits the definition of Complementary Information, providing context and updates on AI's influence and governance discussions in the cultural sector, rather than reporting an AI Incident or AI Hazard.

Neuralink will indeed test its implants on humans

2023-05-26
Journal du Geek
Why's our monitor labelling this an incident or hazard?
Neuralink's implants qualify as AI systems because they involve advanced neural interface technology that interprets brain signals and likely uses AI algorithms for signal processing and control. The event concerns the imminent use of these AI systems in human trials, which could plausibly lead to serious physical harm if the implants malfunction or cause adverse effects. Since the article reports FDA approval for human testing but no actual harm has yet occurred, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely general AI news or a complementary update but a significant development indicating credible future risk.

ChatGPT's creator is (already) threatening to leave the EU - why?

2023-05-25
Presse-citron
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by AI systems, nor does it report a plausible future harm event directly linked to AI system use or malfunction. Instead, it focuses on regulatory developments, company positions, and ongoing discussions about AI governance. Therefore, it fits the definition of Complementary Information, as it provides supporting context and updates on AI regulation and industry responses without reporting a new AI Incident or AI Hazard.

Binance: security chief warns of a new scam

2023-05-25
Presse-citron
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (deepfake generators) to create fake videos that deceive identity verification controls and are used in scams targeting cryptocurrency users. This has directly led to financial harm to victims, fulfilling the criteria for an AI Incident due to violations of rights and harm to individuals. The involvement of AI in generating deepfakes that facilitate fraud is clear and the harm is realized, not just potential.

Voice actors raise their voices against AI

2023-05-26
Presse-citron
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to generate synthetic voices, which is an AI system by definition. The event stems from the use and development of these AI systems to replace or replicate human voice actors. Although no concrete incident of harm such as legal violations or economic damage is reported as having already occurred, the voice actors' manifesto and concerns indicate a credible risk of such harms materializing. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to violations of rights and harm to communities (the voice acting profession and artistic heritage). The article does not describe a realized harm or incident but focuses on the potential and ongoing threat, making it an AI Hazard rather than an AI Incident or Complementary Information.

Why do ChatGPT detectors work so poorly?

2023-05-27
Presse-citron
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI text classifiers) used to detect AI-generated content. However, the article does not describe any realized harm or incident caused by these systems, nor does it report a specific event where harm occurred due to their malfunction or misuse. Instead, it provides an analysis of their current limitations and suggests improvements. This fits the definition of Complementary Information, as it provides supporting data and context about AI detection tools and their challenges without reporting a new AI Incident or AI Hazard.

A fake AI photo caused the stock market to fall - this was the picture that created a stir in the market

2023-05-23
hindi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a fake image that was used to spread misinformation, which directly caused harm by disrupting the stock market and causing public panic. The AI-generated fake image is the pivotal factor leading to the harm. Therefore, this qualifies as an AI Incident due to the realized harm to communities (financial market disruption and public panic) caused by the AI-generated misinformation.

US News: A big explosion near the Pentagon? Turmoil in the US stock market - here is the truth behind the viral photos

2023-05-22
News18 India
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated fake images falsely showing a Pentagon explosion, which caused panic and a stock market drop. However, authorities confirmed no explosion occurred, and the photos were fake. The AI system's role is in generating misleading content that could plausibly lead to harm (market disruption, public panic) but did not directly cause physical or legal harm. Therefore, it is an AI Hazard due to the plausible risk of harm from AI-generated misinformation, not an AI Incident since no actual harm materialized.

Pentagon Attack: Fake photo of an explosion at the Pentagon goes viral - the image was AI-generated

2023-05-22
Dainik Jagran (दैनिक जागरण)
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a fake image that was widely shared, leading to misinformation and social disruption (harm to communities). This qualifies as an AI Incident because the AI-generated content directly led to harm through the spread of false information causing market impact and public alarm. The event is not merely a potential hazard or complementary information but a realized harm caused by AI misuse.

How a fake AI image of a Pentagon explosion shook the world

2023-05-23
Aaj Tak (आज तक)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images and AI-assisted voice manipulation leading to public panic, misinformation, and a criminal act (kidnapping). The AI systems' outputs caused harm to communities (panic, disruption) and violation of rights (kidnapping). The harms are realized, not just potential. Hence, the events qualify as AI Incidents under the framework.

(PHOTO) Panic broke out over an 'explosion' near the Pentagon, and then... Artificial intelligence demonstrated its frightening power; even the stock market was shaken!

2023-05-23
Informer
Why's our monitor labelling this an incident or hazard?
The article describes an AI-generated fake image that was shared on social media, leading to public panic and a temporary drop in the stock market index S&P 500. The AI system's role in generating the false image and enabling its spread through paid verified accounts directly caused harm to communities and economic disruption. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to significant harm.

Artificial intelligence, Twitter and a fake explosion in front of the Pentagon - why the stock market trembled - BIGportal.ba

2023-05-23
BIGportal.ba
Why's our monitor labelling this an incident or hazard?
The article describes an AI-generated false image that was spread via paid verified Twitter accounts, leading to panic and a temporary market disruption. The AI system's involvement in generating the misleading content and its use in spreading disinformation caused real harm to communities (panic) and economic harm (market drop). Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm.

Artificial intelligence, Twitter and a fake explosion in front of the Pentagon - why the stock market trembled - Izazov

2023-05-23
Izazov
Why's our monitor labelling this an incident or hazard?
An AI system is reasonably inferred to be involved because the image is described as AI-generated or manipulated, which is a form of AI content generation. The use of AI to create a false image that was widely shared and caused panic and a stock market drop constitutes indirect harm to communities and economic disruption. The harm is realized, not just potential, as the market reacted negatively. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Artificial intelligence, Twitter and a fake explosion outside the Pentagon - why the stock market trembled

2023-05-23
Nedeljnik
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions generative AI being used to create a false image that was widely shared, causing real-world harm including panic among investors and a temporary stock market decline. This constitutes harm to communities and economic disruption, fitting the definition of an AI Incident where AI use directly leads to harm. The involvement of AI in generating the false content and its role in spreading misinformation that caused tangible harm justifies classification as an AI Incident.

Twitter Fights AI-Generated Photos with the "Notes on Media" Fact-Check Feature

2023-06-04
Liputan 6
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in generating fake images and videos that have circulated on Twitter, causing misinformation and confusion among users, which constitutes harm to communities. Twitter's introduction of the Notes on Media feature is a response to this harm, aiming to mitigate the impact of AI-generated misinformation. Since the article describes ongoing harm caused by AI-generated fake media and the platform's response to it, this qualifies as an AI Incident due to the realized harm to communities through misinformation spread by AI-generated content.

Twitter Releases Fact-Check Feature to Combat Hoax Images

2023-06-01
detikInet
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated images (AI systems) that have caused misinformation and confusion among users, which is a form of harm to communities. However, the article focuses on Twitter's deployment of a fact-checking feature to mitigate this harm rather than describing a new incident of harm caused by AI. Therefore, this is a governance and societal response to an existing AI-related issue, enhancing understanding and mitigation efforts rather than reporting a new AI Incident or Hazard.

Viral Photo of an Explosion at the Pentagon Turns Out to Be Fake; Here Is Twitter's Reaction

2023-06-03
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a fake image that was widely shared and believed, causing misinformation and social disruption (harm to communities). The misinformation led to a brief stock market drop, indicating real-world impact. Therefore, the AI system's use directly led to harm as defined by disruption and harm to communities. The article also discusses Twitter's mitigation efforts, but the primary focus is on the incident of misinformation caused by AI-generated content. Hence, this qualifies as an AI Incident.

Twitter Tests New Feature That Can Check Photo Authenticity | Republika Online

2023-06-01
Republika Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI generative content and AI-based image matching) in the development and use of a feature to detect manipulated or AI-generated images. However, there is no indication that any harm has occurred or that the system malfunctioned leading to harm. The article focuses on the feature's testing and potential to reduce misinformation, which is a positive governance and societal response to AI-related challenges. Therefore, this is Complementary Information as it provides context and updates on AI ecosystem responses rather than reporting an AI Incident or Hazard.

Can the stock market withstand artificial intelligence?

2023-06-18
Reporter.gr
Why's our monitor labelling this an incident or hazard?
The AI system (generative AI creating a realistic fake image) was directly involved in producing false information that triggered automated trading algorithms to react, causing a rapid market downturn and large financial transactions worth hundreds of billions of dollars. This disruption constitutes harm to the financial market's stability, a form of harm to property and economic systems. The article explicitly links the AI-generated fake image to the market reaction, fulfilling the criteria for an AI Incident. The discussion of regulatory concerns and systemic risks further supports the significance of the event as an incident rather than a mere hazard or complementary information.

Can the stock market withstand artificial intelligence?

2023-06-19
sofokleous10.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a realistic fake image that was mistaken for real news, triggering a rapid market reaction and financial harm through sudden trading shifts. This constitutes an AI Incident because the AI system's use directly led to harm to communities (financial market participants) through disruption and potential financial loss. The article also highlights the role of AI-driven algorithmic trading reacting to such misinformation, further linking AI use to the incident. Although the market recovered quickly, the realized harm and disruption meet the criteria for an AI Incident rather than a mere hazard or complementary information.

Can the stock market withstand artificial intelligence? - ELLINIKI GNOMI * Die Zeitung der Griechen in Europa.

2023-06-18
ELLINIKI GNOMI • Die Zeitung der Griechen in Europa.
Why's our monitor labelling this an incident or hazard?
The AI system (generative AI creating a fake image) directly caused misinformation that triggered market panic and a rapid $500 billion trading volume shift, indicating harm to financial market stability (harm to communities and economic harm). The involvement of AI in algorithmic trading further amplified the impact. This fits the definition of an AI Incident because the AI system's use directly led to harm (market disruption). The article also references regulatory concerns about AI as a systemic risk, but the primary event is the realized market disruption caused by AI-generated misinformation.

Can the stock market withstand artificial intelligence?

2023-06-18
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly generating a fake image that was mistaken for a real one, leading to a direct and measurable financial market disruption (a 0.3% drop in the S&P 500, equating to $500 billion in trades). The AI-generated misinformation caused harm to the financial community by triggering panic and volatility, fulfilling the criteria for harm to communities and economic systems. The involvement of algorithmic trading systems reacting to the AI-generated content further supports the AI system's pivotal role in the incident. Although the harm was short-lived, it was real and directly linked to the AI system's output. The article also discusses systemic risks and regulatory responses, but its primary focus is the incident itself, not just complementary information. Hence, the classification is AI Incident.

The European Parliament adopted the AI regulations

2023-06-15
Vesti.bg
Why's our monitor labelling this an incident or hazard?
The event concerns the legislative progress and regulatory framework development for AI in the EU, which is a governance and societal response to AI risks. It does not describe any specific AI system causing harm or any incident or hazard involving AI systems directly leading or plausibly leading to harm. Therefore, it is Complementary Information as it provides important context and updates on AI governance but does not report an AI Incident or AI Hazard.

Google warned users to be careful with chatbots

2023-06-16
Vesti.bg
Why's our monitor labelling this an incident or hazard?
The article describes Google's cautionary advice regarding the use of AI chatbots and the potential risks of data leakage or misuse. However, it does not report any realized harm or incident caused by the AI system. The focus is on potential risks and preventive measures, which aligns with the definition of an AI Hazard rather than an AI Incident or Complementary Information. There is no indication of a past or ongoing harm, only plausible future risks.

There is a risk of market manipulation using artificial intelligence

2023-06-13
Investor.bg
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a fake video that directly led to harm by causing volatility in financial markets, which qualifies as harm to communities and property (economic harm). The AI-generated content was used maliciously to manipulate markets, fulfilling the criteria for an AI Incident. The article also covers policy responses and future risks, but the primary focus is on the realized harm from the AI-generated fake video incident.