AI-Driven Trading Linked to Market Crashes and Volatility on Wall Street

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-powered trading algorithms, including early computer programs and modern generative AI like ChatGPT, have transformed financial markets but also contributed to major incidents such as the 1987 Black Monday and the 2010 flash crash. These events highlight the risks of market volatility and systemic harm from AI-driven trading systems. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI systems (HFT algorithms, natural language processing, generative AI like ChatGPT) being used in financial trading and their direct and indirect roles in causing harms such as market crashes, extreme volatility, and systemic market risks. These harms correspond to disruption of critical infrastructure (financial markets) and harm to communities (economic impacts). Since these harms have already occurred (e.g., flash crash) and AI systems are implicated in causing or exacerbating them, this qualifies as an AI Incident. The article also discusses potential future harms, but the presence of realized harms takes precedence. [AI generated]
AI principles
Accountability, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Financial and insurance services

Affected stakeholders
Consumers, Business, General public

Harm types
Economic/Property, Public interest, Reputational

Severity
AI incident

Business function:
Other

AI system task:
Forecasting/prediction, Goal-driven organisation


Articles about this incident or hazard

ChatGPT-powered Wall Street: The benefits and perils of using artificial intelligence to trade stocks and other financial instruments

2023-05-18
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT, AI algorithms in high-frequency trading) and discusses their use and potential malfunction in financial markets. It references historical harms caused by algorithmic trading and warns about plausible future harms from generative AI-powered trading algorithms causing market volatility, crashes, and herd behavior. However, it does not report a new, specific AI-related harm occurring now; instead, it provides a reasoned analysis of potential risks and benefits, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. The discussion of past crashes is historical context, not a new incident. The article is not merely general AI news or a product launch, so it is not Unrelated.

ChatGPT-powered Wall Street: The benefits and perils of using artificial intelligence to trade stocks and other financial instruments

2023-05-20
MoneyControl
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (HFT algorithms, natural language processing, generative AI like ChatGPT) being used in financial trading and their direct and indirect roles in causing harms such as market crashes, extreme volatility, and systemic market risks. These harms correspond to disruption of critical infrastructure (financial markets) and harm to communities (economic impacts). Since these harms have already occurred (e.g., flash crash) and AI systems are implicated in causing or exacerbating them, this qualifies as an AI Incident. The article also discusses potential future harms but the presence of realized harms takes precedence.

ChatGPT-powered Wall Street: The benefits and perils of using artificial intelligence to trade stocks and other financial instruments

2023-05-18
The Conversation
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (algorithmic trading, high-frequency trading, and generative AI tools like ChatGPT) being used in financial markets. It details historical harms caused by these AI systems, such as the 1987 Black Monday crash and the 2010 flash crash, both linked to AI-driven trading algorithms causing market volatility and economic harm. It also discusses ongoing and potential harms from AI use in trading, including increased volatility and systemic risks from algorithmic herding. Since these harms have occurred and are directly linked to AI system use, the event qualifies as an AI Incident rather than a hazard or complementary information. The article is not merely about AI research or policy responses but focuses on the realized and ongoing harms caused by AI in financial trading.

ChatGPT on Wall Street Could Be Disastrous, Financial History Shows

2023-05-19
Scientific American
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT and AI-powered trading algorithms) and their use in financial markets. It discusses historical harms caused by algorithmic trading (which involved AI or automated systems) and warns about plausible future harms from integrating generative AI into trading. No new harm event is reported; rather, the article provides a reasoned analysis of potential risks and systemic vulnerabilities. This fits the definition of an AI Hazard, as it plausibly leads to AI Incidents (market crashes, volatility, unfair advantages) but does not describe a new incident itself. It is not Complementary Information because it does not update or respond to a specific past incident, nor is it Unrelated, as it clearly involves AI and its risks.

ChatGPT-powered Wall Street: The Benefits And Perils Of Using Artificial Intelligence To Trade Stocks And Other Financial Instruments - Stuff South Africa

2023-05-19
Stuff
Why's our monitor labelling this an incident or hazard?
The article provides a detailed analysis of AI systems' role in financial trading, including historical AI-related market crashes (which were AI incidents) and the potential for future harms from generative AI in trading. Since it mainly discusses past AI incidents and plausible future risks without reporting a new specific incident or hazard event, the content is best classified as Complementary Information. It enhances understanding of AI's impact on financial markets and informs about risks and benefits but does not describe a new AI Incident or AI Hazard event.

Will ChatGPT-powered Wall Street end in disaster?

2023-05-18
Fortune
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (algorithmic trading, high-frequency trading, and generative AI-powered trading algorithms) that have directly or indirectly caused significant harms, including market crashes and increased volatility, which affect the global economy and communities. These harms fall under the definition of AI Incidents as they involve disruption to critical financial infrastructure and harm to communities. The discussion of potential future risks from ChatGPT-powered trading algorithms also indicates plausible future harm, but the presence of actual past harms takes precedence. The article is not merely a general discussion or a complementary update but details concrete harms linked to AI system use in financial markets.

ChatGPT-powered Wall Street offers benefits and perils - UPI.com

2023-05-19
UPI
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (algorithmic trading, HFT, and generative AI) and their direct or indirect role in causing significant harms such as market crashes and volatility, which affect the economy and communities. The 1987 Black Monday and the 2010 flash crash are concrete examples of AI Incidents in which AI systems led to harm. The discussion of ChatGPT-powered trading algorithms points to plausible future harms but does not overshadow the realized harms already documented. Hence, the event qualifies as an AI Incident rather than merely a hazard or complementary information. It is not Unrelated because AI systems are central to the narrative and the harm described.

The benefits and perils of using artificial intelligence to trade stocks and other financial instruments

2023-05-19
Tech Xplore
Why's our monitor labelling this an incident or hazard?
The article provides a detailed overview of AI systems in financial trading, including historical AI-related market crashes and current concerns about generative AI's impact on trading behavior. The harms described (market crashes, volatility, unfair advantages) occurred in the past due to AI systems, making those past events AI Incidents. However, the article itself is a reflective analysis and warning about potential future harms from AI in trading, and does not report a new incident or hazard event. Rather than focusing on a new AI Incident or AI Hazard, it provides complementary information about the ecosystem, risks, and benefits of AI in finance. Therefore, the classification is Complementary Information.

The Conversation: ChatGPT-powered Wall Street: The benefits and perils of using artificial intelligence to trade stocks and other financial instruments

2023-05-18
Portland Press Herald
Why's our monitor labelling this an incident or hazard?
The article primarily provides an overview and analysis of AI's impact on financial markets, referencing historical AI-related incidents and discussing potential future risks. It does not describe a new or ongoing AI Incident or a specific AI Hazard event. The focus is on understanding the ecosystem, benefits, and risks, which aligns with Complementary Information as it enhances understanding without reporting a new harm or imminent risk.

How AI could revolutionize stock trading - ExBulletin

2023-05-21
ExBulletin
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically AI-powered trading algorithms and generative AI tools like ChatGPT, used in financial markets. It discusses the historical and ongoing use of AI in trading and the associated risks, such as market volatility and herd behavior, which could plausibly lead to significant financial harm or market disruption. However, it does not describe a specific event in which AI directly or indirectly caused harm; instead, it focuses on potential future harms and systemic risks. The event therefore qualifies as an AI Hazard: it could plausibly lead to an AI Incident, but no actual harm is reported in the article.