Florida Investigates OpenAI Over ChatGPT's Alleged Role in FSU Shooting and Other Harms


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Florida Attorney General James Uthmeier has launched an investigation into OpenAI, citing allegations that ChatGPT was used to assist in a mass shooting at Florida State University and that the chatbot has been linked to criminal behavior and self-harm. Subpoenas will be issued as part of the probe.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves an AI system (OpenAI's ChatGPT) and discusses alleged harms to minors, including self-harm, suicide, and criminal acts linked to the AI's use. The Attorney General's investigation is a direct response to these alleged harms, indicating that the AI system's use has led or is suspected to have led to harm, fulfilling the criteria for an AI Incident. The investigation and legislative context also provide governance responses, but these are secondary to the primary event of the investigation into alleged harms. Therefore, the event is best classified as an AI Incident.[AI generated]
AI principles
Accountability; Safety

Industries
Government, security, and defence; Digital security

Affected stakeholders
General public

Harm types
Physical (death); Physical (injury); Psychological

Severity
AI incident

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard


Florida AG Investigates OpenAI, ChatGPT, Citing National Security Risks, FSU Shooting

2026-04-09
The Wall Street Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions ChatGPT, an AI system, and concerns about its misuse potentially facilitating a mass shooting and other criminal activities. While harm has occurred (the shooting), the direct causal link to ChatGPT is not confirmed but alleged and under investigation. The AI system's involvement is in its use and potential misuse, which could plausibly lead to harm. Since the investigation is ongoing and the harms are not definitively attributed to the AI system yet, this event fits the definition of an AI Hazard rather than an AI Incident. The article also includes calls for regulatory and protective measures, which align with responses to a hazard.

Florida launches investigation into ChatGPT's maker, OpenAI, over alleged risks to minors

2026-04-09
CBS News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenAI's ChatGPT) and discusses alleged harms to minors, including self-harm, suicide, and criminal acts linked to the AI's use. The Attorney General's investigation is a direct response to these alleged harms, indicating that the AI system's use has led or is suspected to have led to harm, fulfilling the criteria for an AI Incident. The investigation and legislative context also provide governance responses, but these are secondary to the primary event of the investigation into alleged harms. Therefore, the event is best classified as an AI Incident.

Florida AG opens probe into OpenAI ahead of potential IPO

2026-04-09
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (ChatGPT) and concerns about its involvement in criminal behavior and potential national security threats. The investigation is a response to these harms or plausible harms linked to the AI system's use. Since the article reports on realized harms (criminal behavior linked to ChatGPT) and the official probe into these harms, this qualifies as an AI Incident. The investigation and subpoenas indicate that the AI system's use has directly or indirectly led to violations or harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Florida AG opens probe into OpenAI ahead of potential IPO

2026-04-10
The Hindu
Why's our monitor labelling this an incident or hazard?
The article mentions an investigation into OpenAI's AI system (ChatGPT) due to concerns about data and technology security, which is a governance response to potential risks. There is no indication that any harm has occurred yet, nor that the AI system has malfunctioned or been misused to cause harm. The focus is on potential risks and regulatory scrutiny ahead of an IPO, making this a complementary information event rather than an incident or hazard.

'Subpoenas are forthcoming': Florida AG opens probe into OpenAI, ChatGPT

2026-04-09
POLITICO
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT by OpenAI) and discusses its alleged role in facilitating criminal behavior, including a mass shooting and child exploitation, which are serious harms to persons and communities. The investigation and potential lawsuits indicate that harm has occurred or is ongoing, meeting the criteria for an AI Incident. The AI system's outputs or interactions are implicated in these harms, either directly or indirectly. The event is not merely a policy discussion or a future risk warning but centers on actual or alleged harms linked to the AI system's use, thus qualifying as an AI Incident rather than a hazard or complementary information.

Florida AG launches investigation into OpenAI, ChatGPT

2026-04-09
The Hill
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT by OpenAI) and concerns about its misuse or harmful impacts. However, the harms mentioned are alleged or potential, and the investigation is just beginning. There is no confirmed direct or indirect harm caused by the AI system as per the article. Therefore, this event represents a plausible risk scenario prompting regulatory scrutiny, fitting the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is the investigation into potential harms, not a response to a past incident. Hence, the classification is AI Hazard.

Florida AG launches investigation into OpenAI

2026-04-09
Axios
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's ChatGPT) and concerns alleged harms linked to its use, including serious criminal behavior and a mass shooting. These allegations imply that the AI system's use has directly or indirectly led to harms that fall under the AI Incident definition (harm to persons, violation of rights). However, the article focuses on the launch of an investigation and potential legal scrutiny rather than describing a new incident of harm occurring at this time. The investigation and potential lawsuits are societal and governance responses to previously reported or alleged harms. Therefore, this event is best classified as Complementary Information, as it provides an update on governance and legal responses to AI-related harms rather than reporting a new AI Incident or AI Hazard.

Florida launches probe into OpenAI, ChatGPT over safety concerns

2026-04-09
cnbctv18.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT by OpenAI) and concerns about its potential misuse or harmful impact. However, the investigation is at an early stage, and no concrete incident of harm has been confirmed or described as having occurred. The focus is on examining whether the AI system's use could have contributed to or enabled harmful activities, which aligns with the definition of an AI Hazard—an event where AI use or malfunction could plausibly lead to harm. Since no realized harm is reported, this is not an AI Incident. The event is more than complementary information because it concerns a formal probe into potential risks rather than a response or update to a known incident. Therefore, the classification is AI Hazard.

Florida launches investigation into OpenAI

2026-04-09
The Verge
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (OpenAI's ChatGPT) and discusses concerns about its misuse or potential harm, including links to criminal behavior and a shooting. However, the harms are alleged or under investigation, and no confirmed direct or indirect harm caused by the AI system is established in the article. The investigation and lawsuit indicate potential or ongoing concerns but do not confirm an AI Incident has occurred. Therefore, this event is best classified as Complementary Information, as it provides updates on societal and regulatory responses to AI-related risks without confirming realized harm.

Florida AG announces investigation into OpenAI over shooting that allegedly involved ChatGPT

2026-04-09
TechCrunch
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) whose use is alleged to have indirectly led to harm (a deadly shooting and other violent incidents). This fits the definition of an AI Incident because the AI system's use is linked to violations of human rights and harm to persons. Although the investigation is ongoing and no final conclusions are presented, the reported harm has already occurred, and the AI system's role is pivotal in the claims. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Florida's Bold Investigation into OpenAI: What It Means for AI's Future

2026-04-09
Internewscast Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions OpenAI's ChatGPT, an AI system, being implicated in criminal activities such as distribution of child sexual abuse material and aiding a mass shooting suspect, which are serious harms to individuals and public safety. The involvement of the AI system in these harms, as well as the legal action against OpenAI, indicates that the AI system's use has directly or indirectly caused significant harm. This fits the definition of an AI Incident, as the harms are realized and the AI system's role is pivotal in the chain of events leading to these harms.

Florida AG to probe OpenAI, alleging possible connection to FSU shooting

2026-04-09
TechCrunch
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) whose use by a suspect is linked to a mass shooting incident causing loss of life, which constitutes harm to persons. This satisfies the criteria for an AI Incident because the AI system's use has indirectly led to harm (the shooting). The investigation into potential encouragement of suicide and national security threats further supports the classification as an AI Incident due to violations of rights and potential harm. Although some aspects are investigatory and precautionary, the direct link to a fatal incident and ongoing lawsuits about harm to minors confirm realized harm. Therefore, the event is best classified as an AI Incident.

Florida Launches Investigation Into OpenAI Over Child Safety Concerns, Criminal Activity, & FSU Mass Shooting Links

2026-04-10
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenAI's ChatGPT) and discusses concerns about its use in harmful activities such as child exploitation, encouragement of self-harm, and assistance in a mass shooting, as well as national security risks. However, the article does not confirm that these harms have been definitively caused by the AI system, only that there are allegations and an ongoing investigation. Since the harms are potential and the investigation is proactive, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it centers on the investigation and the risks posed by the AI system, not on responses or broader ecosystem context. It is not unrelated because it directly concerns an AI system and its potential harms.

Florida AG to probe OpenAI, alleging possible connection to FSU shooting

2026-04-09
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions ChatGPT, an AI system, being used by the suspect in a mass shooting that caused fatalities, which is a direct harm to persons. The investigation by the Attorney General is based on this connection, indicating the AI system's involvement in causing harm. The harms described fall under injury or harm to persons and potential violations of law. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information, as the harm has already occurred and the AI system's role is pivotal in the incident.

Sam Altman's really weird week just got even worse

2026-04-09
Mother Jones
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that ChatGPT was used by the alleged shooter to plan a mass shooting that killed two people, indicating direct involvement of the AI system in harm to persons. The investigation by the Florida Attorney General into the AI system's role in facilitating criminal activity further supports the classification as an AI Incident. The harm is realized, not just potential, and the AI system's outputs were part of the chain of events leading to the incident. Thus, it meets the criteria for an AI Incident rather than a hazard or complementary information.

'AI Should Advance Mankind, Not Destroy It': Why Florida Is Taking Aim at OpenAI

2026-04-09
Decrypt
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions OpenAI's ChatGPT, an AI system, and details an official investigation into its role in serious harms including criminal misuse, child safety risks, and a deadly shooting. These harms fall under the definitions of injury to persons and violations of legal protections. The investigation and subpoenas indicate that the AI system's use or misuse is being examined as a contributing factor to these harms. The presence of alleged direct links to criminal behavior and public safety threats confirms realized or ongoing harm rather than mere potential risk. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Florida attorney general probes OpenAI over alleged risks to minors

2026-04-10
FOX 35 Orlando
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenAI's ChatGPT) and concerns about its role in a serious criminal event (a mass shooting) and potential harm to minors. However, the investigation is ongoing, and it is not confirmed that the AI system directly caused or contributed to the harm. The allegations and concerns indicate plausible future or indirect harm, fitting the definition of an AI Hazard. There is no clear evidence yet of a realized AI Incident, and the article is not merely complementary information or unrelated news. Therefore, the classification as AI Hazard is appropriate.

Florida attorney general launches investigation into OpenAI

2026-04-09
NBC 6 South Florida
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (OpenAI's ChatGPT) and discusses suspected harms linked to their use, including serious issues like self-harm and criminal acts. However, it does not report confirmed incidents where the AI system's development, use, or malfunction has directly or indirectly led to harm. Instead, it reports the initiation of an official investigation and legislative context, which are governance and societal responses to potential AI risks. Therefore, the event fits the definition of Complementary Information, as it enhances understanding of AI's societal impact and regulatory responses without confirming an AI Incident or solely presenting a plausible future harm (AI Hazard).

Florida AG opens OpenAI investigation after ChatGPT records surface in FSU shooting

2026-04-09
WPTV
Why's our monitor labelling this an incident or hazard?
The presence of an AI system (ChatGPT) is explicit, and its use by the suspect is documented. The harm (fatal shooting and injuries) has occurred, and the AI system's involvement is part of the chain of events leading to that harm, even if indirectly. The investigation and legal scrutiny focus on the AI system's role in enabling or influencing the suspect's actions. This fits the definition of an AI Incident, as the AI system's use has directly or indirectly led to harm to people. The event is not merely a potential risk or a complementary update but concerns an actual incident involving AI-related harm.

Florida launches OpenAI probe following claims ChatGPT aided FSU gunman

2026-04-09
WSBT
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that ChatGPT, an AI system, was allegedly used by the FSU gunman to plan a deadly mass shooting, which caused harm to people (two deaths). This meets the criteria for an AI Incident as the AI system's use has directly or indirectly led to injury or harm to persons. Additionally, the investigation includes concerns about the AI system's role in distributing harmful content and national security threats, further supporting the classification as an AI Incident. The ongoing legal and regulatory responses underscore the seriousness of the harms involved.

'AI Should Advance Mankind, Not Destroy It': Why Florida Is Taking Aim at OpenAI

2026-04-09
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenAI's ChatGPT) and its alleged involvement in harmful activities such as criminal misuse, child exploitation, and a mass shooting. These constitute violations of human rights and harm to persons, fitting the definition of an AI Incident. The investigation is a response to realized or ongoing harms linked to the AI system's use, not merely potential future risks. Hence, the event is classified as an AI Incident rather than an AI Hazard or Complementary Information.

'Subpoenas are forthcoming': Florida AG opens probe into OpenAI, ChatGPT

2026-04-09
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenAI's ChatGPT) and details an official probe into its role in serious harms, including a fatal shooting and other criminal activities. The investigation is triggered by allegations that the AI system was used in ways that contributed to real harm (death and public safety risks). Therefore, this event meets the criteria for an AI Incident, as the AI system's use has directly or indirectly led to significant harm and legal action is underway.

Florida Launches Investigation Into OpenAI and ChatGPT

2026-04-09
Coingape
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (ChatGPT) and alleges that its use or misuse has led to significant harms, including facilitating criminal behavior and endangering public safety. The Attorney General's investigation is a response to these harms, indicating that the AI system's involvement is considered a contributing factor to the incident. Although the investigation is ongoing and some claims may be under scrutiny, the event centers on addressing realized or alleged harms linked to the AI system, fitting the definition of an AI Incident rather than a hazard or complementary information.

Alleged ChatGPT Use in Mass Shooting Spurs Florida AG Probe (1)

2026-04-09
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) and discusses its alleged use in a mass shooting, which is a serious harm event. However, the AI system's direct or indirect causal role in the harm is not established or confirmed; the allegations are under investigation. The event is primarily about the state's response and investigation into potential misuse and data privacy concerns, rather than a confirmed AI Incident. Therefore, this is best classified as Complementary Information, as it provides context and updates on societal and governance responses to AI-related concerns without confirming a new AI Incident or AI Hazard.

Florida AG to probe OpenAI, alleging possible connection to FSU shooting

2026-04-10
RocketNews
Why's our monitor labelling this an incident or hazard?
The article centers on a governmental probe into potential harms associated with an AI system (ChatGPT), including a possible indirect connection to a past violent incident and broader safety concerns. Since the harms are alleged and under investigation, and the article does not report a confirmed AI-caused harm event, this constitutes a plausible risk scenario rather than a realized incident. The focus is on the potential for harm and regulatory response, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. The AI system's involvement is inferred from the suspect's use of ChatGPT and the Attorney General's concerns, but no direct causation of harm by the AI is established in the article.

Florida AG Announces Investigation Into OpenAI Over Shooting That Allegedly Involved ChatGPT

2026-04-09
Breaking News, Latest News, US and Canada News, World News, Videos
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) that was allegedly used to plan a deadly shooting, which caused injury and death, fulfilling the harm criteria for an AI Incident. The Attorney General's investigation is a response to this harm. The AI system's use is directly linked to the harm, even if indirectly through planning. This meets the definition of an AI Incident as the AI system's use has directly or indirectly led to harm to persons. The article also references other similar harms linked to ChatGPT, reinforcing the classification. The event is not merely a potential risk or a complementary update but concerns an actual incident with realized harm.

Florida AG James Uthmeier launches probe into OpenAI

2026-04-09
WUSF
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenAI's ChatGPT) and discusses alleged harms linked to its use, including serious outcomes like self-harm and suicide among minors and possible facilitation of criminal activity. However, the event is about the launch of an investigation rather than a confirmed incident or hazard. The investigation aims to assess and address these concerns, representing a governance and legal response. The article also discusses legislative efforts and company frameworks to mitigate AI harms, further emphasizing the governance context. Since no confirmed direct or indirect harm caused by the AI system is established in the article, and the focus is on the probe and regulatory responses, the event fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Florida AG To Probe OpenAI, Alleging Possible Connection To FSU Shooting

2026-04-09
Breaking News, Latest News, US and Canada News, World News, Videos
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT by OpenAI) and discusses its alleged role in a serious harm event (a school shooting) and other potential harms to minors and national security. However, the article primarily reports on the initiation of an investigation and concerns rather than confirmed direct causation of harm by the AI system. The potential for harm is credible and significant, but the harm is not yet established as directly caused by the AI system. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to or be connected to AI-related harms, but the incident status is not confirmed or detailed as an AI Incident.

Florida launches investigation into ChatGPT over alleged role in university shooting

2026-04-09
Peoples Gazette Nigeria
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use is alleged to have directly or indirectly contributed to a mass shooting causing deaths and injuries, which constitutes harm to persons. The investigation and legal actions underscore the AI system's involvement in the harm. Therefore, this qualifies as an AI Incident because the AI system's use is linked to realized harm (fatalities and injuries).

Florida AG opens OpenAI investigation after ChatGPT records surface in FSU shooting

2026-04-10
Tampa Bay 28 (WFTS)
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, as the suspect used it to communicate and plan aspects related to the shooting. The AI system's use is linked indirectly to the harm caused by the shooting (injury and death of people). The investigation and legal proceedings revolve around the AI system's role in the incident. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to persons (fatalities and injuries).

Florida Attorney General James Uthmeier probes OpenAI over ChatGPT safety risks

2026-04-10
News9live
Why's our monitor labelling this an incident or hazard?
The article centers on a governmental investigation into the potential risks and misuse of an AI system (ChatGPT) but does not describe any specific harm or incident that has already occurred due to the AI's development, use, or malfunction. The concerns raised are about possible future harms and the adequacy of safeguards, which aligns with the definition of an AI Hazard or Complementary Information. However, since the article mainly reports on the investigation and regulatory response rather than a direct or indirect harm event, it fits best as Complementary Information. It provides context on societal and governance responses to AI risks without documenting a concrete AI Incident or an imminent AI Hazard event.

AG Uthmeier opens investigation into ChatGPT, OpenAI

2026-04-09
Tampa Bay 28 (WFTS)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (ChatGPT by OpenAI) and concerns about its use linked to criminal behavior and a mass shooting, which is a serious harm. However, the harm is not confirmed or detailed as having directly or indirectly resulted from the AI system's malfunction or use; rather, an investigation has been opened to explore these claims. This makes the event a governance or societal response to potential AI-related harm, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard. There is no direct evidence of harm caused by the AI system yet, only a credible concern and official inquiry.