OpenAI Detected Violent Intent in ChatGPT User Before Canadian Mass Shooting, Did Not Alert Authorities


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OpenAI's internal systems flagged an 18-year-old ChatGPT user in British Columbia, Canada, for violent tendencies months before she killed eight people and herself. Despite detecting concerning behavior, OpenAI closed her account but did not alert police, citing lack of evidence of imminent threat.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the involvement of ChatGPT, an AI system, in the suspect's violent planning. The AI system was used to discuss violent scenarios, which is directly connected to the subsequent mass shooting and multiple fatalities. OpenAI's internal detection and decision-making process regarding reporting the user further confirms the AI system's role in the chain of events. The harm to multiple people (injury and death) has materialized, fulfilling the criteria for an AI Incident under the framework.[AI generated]
AI principles: Safety, Accountability

Industries: IT infrastructure and hosting

Affected stakeholders: General public

Harm types: Physical (death)

Severity: AI incident

Business function: Monitoring and quality control

AI system task: Event/anomaly detection
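The classification fields above can be thought of as a structured record. As a minimal sketch (the `IncidentRecord` class and its field names are assumptions for illustration, not the AIM's actual schema), the entry might be represented like this:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class IncidentRecord:
    """Hypothetical record mirroring the AIM-style classification fields."""
    title: str
    severity: str  # "AI incident" (harm materialized) or "AI hazard" (potential harm)
    ai_principles: List[str] = field(default_factory=list)
    industries: List[str] = field(default_factory=list)
    affected_stakeholders: List[str] = field(default_factory=list)
    harm_types: List[str] = field(default_factory=list)
    business_function: str = ""
    ai_system_task: str = ""

    def is_incident(self) -> bool:
        # Under the framework described above, "AI incident" means the
        # harm has already materialized, as opposed to a hazard.
        return self.severity == "AI incident"

record = IncidentRecord(
    title="OpenAI detected violent intent in ChatGPT user before Canadian mass shooting",
    severity="AI incident",
    ai_principles=["Safety", "Accountability"],
    industries=["IT infrastructure and hosting"],
    affected_stakeholders=["General public"],
    harm_types=["Physical (death)"],
    business_function="Monitoring and quality control",
    ai_system_task="Event/anomaly detection",
)

print(record.is_incident())  # prints True
```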


Articles about this incident or hazard


The 18-year-old who killed her family and several children at a school in Canada had discussed violent scenarios with ChatGPT

2026-02-21
Digi24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of ChatGPT, an AI system, in the suspect's violent planning. The AI system was used to discuss violent scenarios, which is directly connected to the subsequent mass shooting and multiple fatalities. OpenAI's internal detection and decision-making process regarding reporting the user further confirms the AI system's role in the chain of events. The harm to multiple people (injury and death) has materialized, fulfilling the criteria for an AI Incident under the framework.

OpenAI says the teenager who killed nine people in Canada had online activity indicating a violent profile, but the company did not notify the authorities

2026-02-21
Ziare.com
Why's our monitor labelling this an incident or hazard?
The AI system (OpenAI's monitoring tools) was used to analyze user activity and flagged violent content, an application of AI to monitoring and content moderation. The failure to alert authorities despite identifying concerning behavior indirectly contributed to the harm, as the violent act occurred later. This fits the definition of an AI Incident because the AI system's use, and the decisions based on its outputs, are directly linked to a serious harm (multiple deaths). The event is not merely a potential hazard or complementary information but a realized harm connected to AI system use and decisions.

OpenAI blocked the Tumbler Ridge suspect's ChatGPT account half a year before the massacre in Canada

2026-02-21
ziarulnational.md
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use by the suspect was identified and blocked prior to a mass shooting that caused multiple deaths and injuries. The AI system's moderation and detection mechanisms were involved in identifying the suspect's account, and the company's decisions about reporting to authorities relate to the timeline of the harm. Although the AI system did not directly cause the attack, its use and the company's handling of the account are part of the chain of events leading to the harm. This fits the definition of an AI Incident because the AI system's use and moderation indirectly relate to a serious harm (loss of life and injury).

OpenAI says it blocked the ChatGPT account of the suspect in a Canadian shooting attack, but did not alert the authorities

2026-02-21
rador.ro
Why's our monitor labelling this an incident or hazard?
ChatGPT, an AI system, was used by the suspect to promote violence, leading to the account being blocked. The failure to alert authorities despite detection of violent content indirectly contributed to the harm caused by the suspect's subsequent mass shooting. The AI system's use and the company's handling of the situation are directly linked to the harm (multiple deaths), fulfilling the criteria for an AI Incident due to indirect causation of harm through the AI system's misuse and response.

The 18-year-old girl responsible for the massacre in Canada had discussed violence with ChatGPT before the tragedy

2026-02-21
Informaţia Zilei
Why's our monitor labelling this an incident or hazard?
The perpetrator's interactions with the AI system (ChatGPT) showed indications of violent intent. OpenAI's internal systems identified this and suspended the account but did not alert authorities, having assessed that there was no immediate risk. The subsequent mass killing and suicide constitute severe harm to persons. The AI system's involvement in monitoring and in the decision about reporting is directly linked to the harm, as it influenced the lack of earlier intervention. This fits the definition of an AI Incident because the AI system's use indirectly led to injury and harm to people, fulfilling harm criterion (a).

OpenAI acknowledges that the 18-year-old teenager, who killed her family and several children at a school in British Columbia (Canada), had talked with ChatGPT over the past year and had shown a predisposition to commit violent acts. Although the company considered notifying the police, it ultimately did not, on the grounds that the young woman showed no signs of planning an imminent attack, and instead merely closed her account. - Biziday

2026-02-21
Biziday
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) explicitly mentioned as being used by the perpetrator. The AI system was involved in the use phase, with OpenAI monitoring and detecting violent content but deciding not to report to police due to lack of imminent threat evidence. The tragic mass killing that followed caused severe harm to multiple people, fulfilling the criteria of injury or harm to persons. The AI system's role is indirect but pivotal, as the company had information about violent tendencies expressed via the AI but did not escalate, which is relevant to the incident. Hence, this is an AI Incident rather than a hazard or complementary information.

OpenAI "considered" alerting the Canadian police about a suspect who was allegedly planning an armed attack on a school - Aktual24

2026-02-21
Aktual24
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved in monitoring user activity and identifying potential violent behavior. Although OpenAI did not report the user before the attack due to lack of imminent credible threat, the AI system's use by the suspect and the company's monitoring efforts are directly linked to the incident. The event describes actual harm (multiple deaths) resulting from the suspect's actions, with the AI system playing an indirect role through its monitoring and detection capabilities. Therefore, this qualifies as an AI Incident due to the direct or indirect link between the AI system's use and the harm caused.

OpenAI under scrutiny after the attack in Canada

2026-02-21
B1TV.ro
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (OpenAI's chatbot and internal monitoring tools) that detected potentially dangerous behavior but did not lead to timely intervention to prevent a mass shooting. The harm (multiple deaths and injuries) has occurred, and the AI system's outputs influenced the company's decision not to alert authorities, which is an indirect link to the harm. This fits the definition of an AI Incident because the AI system's use indirectly led to harm through its role in risk assessment and failure to trigger preventive action. The event is not merely a potential risk (hazard) or a complementary update; it involves realized harm connected to AI system use.

ChatGPT identified signs of violence in the girl who killed 8 people in Canada, but did not alert the authorities

2026-02-21
spotmedia.ro
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was actively used to monitor user conversations and identified concerning behavior indicative of potential violence. Although the system did not malfunction, its policy-based decision not to alert authorities despite detecting warning signs indirectly contributed to the tragic outcome. The harm (multiple deaths) is a direct consequence of the failure to act on AI-generated warnings. This fits the definition of an AI Incident, as the AI system's use and its outputs played a direct role in the chain of events leading to significant harm to persons.

Suspect's violent tendencies detected in advance of Canada shooting but not reported -- OpenAI: 時事ドットコム

2026-02-21
時事ドットコム
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was used and detected violent content indicating potential harm. The AI system's involvement in detecting but not reporting the threat is directly connected to the subsequent harm (eight deaths in a shooting). This constitutes an AI Incident because the AI system's use and the human decision not to report the threat directly or indirectly contributed to a violation of safety and harm to persons. The event involves realized harm linked to the AI system's use and its handling of the detected threat.

Canada mass shooter had conversed with AI beforehand | 埼玉新聞 | Saitama's latest news, sports and local topics

2026-02-21
埼玉新聞
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was used by the perpetrator to discuss shooting scenarios before the actual shooting incident that caused fatalities. The AI's involvement is indirect but pivotal in the chain of events leading to harm. Therefore, this qualifies as an AI Incident because the AI system's use is linked to a real harm (loss of life) through the perpetrator's planning.

Canada mass shooter had discussed shooting scenarios with ChatGPT beforehand; OpenAI considered reporting

2026-02-21
産経ニュース
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used by the perpetrator to discuss shooting scenarios before the mass shooting that caused multiple deaths. OpenAI's monitoring system detected this interaction but did not report it before the incident, which raises concerns about the AI system's role in enabling or failing to prevent harm. The harm (multiple deaths) has occurred, and the AI system's involvement is indirect but pivotal in the chain of events. Hence, this is classified as an AI Incident.

Canada mass shooter had conversed with AI beforehand; ChatGPT report to police was considered | Kyodo News | 沖縄タイムス+プラス

2026-02-21
沖縄タイムス+プラス
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was used by the perpetrator to discuss violent scenarios prior to the mass shooting, which resulted in multiple deaths. Although the AI did not directly cause the harm, its use in planning and the subsequent failure to report the conversation (despite consideration) links it indirectly to the harm. This meets the criteria for an AI Incident because the AI system's use indirectly led to harm to persons (multiple deaths).

Canada mass shooter had conversed with AI beforehand; ChatGPT report to police was considered: International: 福島民友新聞社

2026-02-21
福島民友新聞社
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly involved as the suspect conversed with it about a violent scenario before committing a mass shooting causing multiple deaths. The AI developer's detection and decision not to report is part of the context but does not negate the AI's involvement. The AI's role is indirect but pivotal in the chain of events leading to harm to persons. Hence, this event meets the criteria for an AI Incident.

OpenAI had been wary of the Canada mass shooting suspect for some time

2026-02-21
The Wall Street Journal - Japan
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was involved in the suspect's creation of violent scenarios, which preceded and is indirectly linked to a real-world violent incident causing harm to people. The AI's use in generating violent content that relates to the suspect's later actions constitutes an indirect contribution to harm. Therefore, this qualifies as an AI Incident due to indirect harm to people resulting from the AI system's use.

Canada mass shooter described shooting scenarios to ChatGPT for days before the crime | 연합뉴스

2026-02-21
연합뉴스
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used by the shooter to generate violent scenarios before the attack, which is a direct use of AI outputs in the context of planning violence. The AI's role is indirect but pivotal in the chain of events leading to harm (9 deaths). The event involves the use of AI in a way that contributed to a serious harm (loss of life), meeting the criteria for an AI Incident. Although OpenAI did not report to law enforcement, the AI system's outputs were part of the preparatory process for the crime. Therefore, this is not merely a potential hazard or complementary information but an AI Incident.

ChatGPT detected the 'Canada mass shooting' in advance but did not report it

2026-02-21
아시아경제
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used in the development and discussion of a violent crime plan, and the company's internal review process failed to escalate the threat to authorities, which could be seen as a malfunction or failure in the use of the AI system. This failure indirectly led to a tragic incident involving loss of life and injury, meeting the criteria for an AI Incident under harm to persons. The involvement of the AI system is explicit and central to the event, and the harm has materialized, not just potential. Therefore, this event qualifies as an AI Incident.

ChatGPT 'did not report despite forewarning of a gun massacre'

2026-02-21
First-Class 경제신문 파이낸셜뉴스
Why's our monitor labelling this an incident or hazard?
The event describes a mass shooting where the perpetrator used an AI chatbot (ChatGPT) to write about gun violence scenarios before the attack. The AI system was involved in the use phase, and its outputs or the user's inputs were flagged internally but not reported to law enforcement. The harm (loss of life) has occurred, and the AI system's involvement is indirectly linked to this harm. This fits the definition of an AI Incident because the AI system's use indirectly led to harm to persons, and the decision not to report the threat is part of the incident context. The presence of the AI system is explicit, and the harm is materialized, not just potential.

ChatGPT knew of the Canada mass shooter's plan but did not report it

2026-02-21
매일방송
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was used by the perpetrator to discuss plans for a mass shooting, which directly led to harm (deaths and injuries). The AI system's involvement is indirect but pivotal, as it was the medium through which the plans were expressed and detected by OpenAI staff. The failure to report the credible threat to authorities contributed to the harm. This fits the definition of an AI Incident because the AI system's use indirectly led to injury and harm to people. The event is not merely a potential hazard or complementary information but a realized harm linked to AI system use and oversight.

"Canada mass shooter told ChatGPT her plan"... OpenAI did not report it

2026-02-21
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly involved as the perpetrator used it to generate violent scenarios before committing a mass shooting that caused multiple deaths. The AI system's outputs were reviewed internally, and the decision not to report to authorities despite recognizing potential risks implicates the AI system's use in the chain of events leading to harm. This meets the criteria for an AI Incident due to direct or indirect contribution to injury or harm to people.

OpenAI contacted RCMP about Tumbler Ridge shooter's ChatGPT account after mass shooting

2026-02-21
The Globe and Mail
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used by the shooter to generate or discuss violent scenarios, and its internal flagging system identified concerning content months before the shooting. The failure to inform authorities before the attack means the AI system's role is indirectly linked to the harm caused. The mass shooting resulted in deaths and injuries, fulfilling the harm criteria. The AI system's involvement in the development and use phases, and the subsequent failure to act on flagged content, directly contributed to the incident. Hence, this is classified as an AI Incident.

OpenAI contacted RCMP about Tumbler Ridge shooter's ChatGPT account after attack

2026-02-21
OrilliaMatters.com
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used by the shooter to generate or discuss violent scenarios, which were flagged internally by OpenAI's automatic review system. Although the company did not alert law enforcement until after the shootings, the AI system's involvement in the shooter's planning or mindset is a contributing factor to the harm. The mass shooting caused injury and death, fulfilling the harm criteria. The event is not merely a potential risk but a realized incident involving AI use and its consequences, thus classifying it as an AI Incident.

OpenAI contacted RCMP about Tumbler Ridge shooter's ChatGPT account after attack

2026-02-21
thepeterboroughexaminer.com
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was involved in the shooter's preparation or expression of violent scenarios, which were flagged internally by OpenAI's system. The harm (mass shooting with multiple fatalities and injuries) has occurred, and the AI system's involvement is indirect but significant. The failure to notify authorities before the attack is part of the incident's context. The event meets the criteria for an AI Incident because the AI system's use directly or indirectly led to harm to people and communities. The article does not merely discuss potential harm or general AI-related news but reports on a concrete event with realized harm linked to AI use.

OpenAI contacted RCMP about Tumbler Ridge shooter's ChatGPT account after attack - Medicine Hat News

2026-02-21
Medicine Hat News
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used by the shooter to generate or discuss scenarios of gun violence, which were flagged internally by OpenAI's review system. Although the AI system did not cause the shooting directly, its flagged content was a relevant factor in the timeline of events. The failure to alert authorities before the attack means the AI system's role is linked indirectly to the harm. The event involves realized harm (multiple deaths and injuries), and the AI system's involvement is material to the incident, meeting the criteria for an AI Incident rather than a hazard or complementary information.