Kota Police and Meta Use AI Monitoring to Prevent Student Suicides

Kota police partnered with Meta to use AI-powered monitoring of Facebook and Instagram to detect suicidal content among coaching students. Within a week, officers intervened to avert a suicide attempt by a student in Jhunjhunu. The initiative aims to scale detection and response statewide to prevent further student suicides.[AI generated]

Why's our monitor labelling this an incident or hazard?

Meta's social media platforms likely use AI systems to analyze user behavior and content to detect signs of suicidal tendencies. The collaboration with the police to intervene and prevent suicides indicates that the AI system's use has directly led to the prevention of harm, specifically injury or harm to health. This therefore qualifies as an AI Incident, since harm prevention of this kind is a positive form of harm management under the framework.[AI generated]
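
None of the articles describe how Meta's detection actually works, so the snippet below is only a minimal, hypothetical sketch of the flag-and-escalate pattern described in the rationale above: a simple scorer over post text plus a threshold that queues high-risk posts for human review. Every phrase weight, threshold, and function name is an assumption for illustration, not Meta's real model or policy.

```python
# Illustrative sketch only: a toy risk scorer that flags posts possibly
# indicating self-harm so they can be escalated to human reviewers.
# Keywords, weights, and the threshold are hypothetical placeholders.
from dataclasses import dataclass

# Hypothetical phrase weights; a real system would use a trained classifier.
RISK_PHRASES = {
    "want to end it": 0.9,
    "can't go on": 0.7,
    "no way out": 0.6,
    "goodbye forever": 0.8,
}
ALERT_THRESHOLD = 0.75  # assumed cut-off for escalation to human review


@dataclass
class Alert:
    post_id: str
    score: float


def score_post(text: str) -> float:
    """Naive risk score: the highest weight among matched phrases."""
    lowered = text.lower()
    return max(
        (weight for phrase, weight in RISK_PHRASES.items() if phrase in lowered),
        default=0.0,
    )


def flag_for_review(posts: dict[str, str]) -> list[Alert]:
    """Return posts whose score crosses the threshold, for human follow-up."""
    return [
        Alert(post_id, score)
        for post_id, text in posts.items()
        if (score := score_post(text)) >= ALERT_THRESHOLD
    ]


if __name__ == "__main__":
    sample = {
        "p1": "Exam results tomorrow, wish me luck!",
        "p2": "I just want to end it, goodbye forever.",
    }
    for alert in flag_for_review(sample):
        print(f"escalate {alert.post_id} (score={alert.score:.2f})")
```

A production system would rely on trained classifiers, multilingual coverage, and trained human reviewers rather than a keyword list; the sketch only conveys the detect-then-escalate flow that enables the police follow-up reported here.
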
AI principles
Privacy & data governance; Transparency & explainability; Respect of human rights

Industries
Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
General public

Severity
AI incident

Business function:
Compliance and justice

AI system task:
Event/anomaly detection


Articles about this incident or hazard

Kota Police Partners with Meta to Prevent Student Suicides

2024-05-29
TimesNow
Why's our monitor labelling this an incident or hazard?
Meta's social media platforms likely use AI systems to analyze user behavior and content to detect signs of suicidal tendencies. The collaboration with the police to intervene and prevent suicides indicates that the AI system's use has directly led to the prevention of harm, specifically injury or harm to health. This therefore qualifies as an AI Incident, since harm prevention of this kind is a positive form of harm management under the framework.

Kota Cops Tie Up With Facebook-Parent Meta To Prevent Student Suicides

2024-05-28
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves Meta's use of AI systems to analyze social media content for signs of suicidal tendencies, a clear instance of AI system involvement. The use of this AI system has directly led to the prevention of a suicide, averting harm to health through AI-enabled intervention. Therefore, this qualifies as an AI Incident because the AI system's use has directly contributed to preventing injury or harm to a person. The event is not merely a potential risk or a general update but describes realized impact through AI-enabled monitoring and police intervention.

Kota Police Rope in Meta to Prevent Suicide Among Coaching Students - News18

2024-05-30
News18
Why's our monitor labelling this an incident or hazard?
The system involves AI-based detection of suicidal content on Facebook and Instagram, which is then used by police to intervene and prevent harm. This constitutes the use of an AI system whose outputs have directly led to harm prevention (injury or harm to persons avoided). Since harm is being prevented and the AI system's role is pivotal in identifying at-risk individuals, this qualifies as an AI Incident involving harm to persons (a).

Kota News: Police Joins Hands With Meta To Prevent Student Suicides, Watch Video | ABP News

2024-05-30
ABP News
Why's our monitor labelling this an incident or hazard?
Meta's system likely uses AI to analyze social media content for suicidal tendencies; it qualifies as an AI system because it infers from inputs (social media posts) to generate outputs (alerts). The police's intervention based on these alerts has directly prevented harm (suicide) to students, fulfilling the criteria for an AI Incident. The event involves the use of an AI system whose outputs have directly led to the prevention of injury or harm to health. Hence, it is not merely a potential hazard or complementary information but an incident involving realized harm prevention.

Rajasthan News: Kota Police Collaborates With META To Prevent Suicide, Know How! | ABP LIVE

2024-05-31
ABP LIVE
Why's our monitor labelling this an incident or hazard?
An AI system is reasonably inferred here because META's platforms (Facebook and Instagram) use AI algorithms to analyze user-generated content for mental health signals such as depression or suicidal ideation. The system's use leads directly to harm prevention (avoiding suicide), which is a health-related harm. Since the AI system's use is actively involved in preventing harm, this qualifies as an AI Incident involving the use of AI to address a health harm. The event describes realized use and impact, not just potential risk or general information, so it is not a hazard or complementary information.

Kota Suicide: Police Join Hands With Facebook-Parent Meta To Prevent Student Suicides

2024-05-28
Jagran English
Why's our monitor labelling this an incident or hazard?
An AI system is involved as Meta's platforms use AI to detect suicidal tendencies from user content. The AI system's outputs are used by police to intervene and prevent harm (suicide), which is a direct health-related harm to individuals. Since the AI system's use has directly led to preventing injury or harm to persons, this qualifies as an AI Incident under the definition of harm to health (a).

India News | Kota Police Ropes in Meta to Prevent Suicides Among Coaching Students | LatestLY

2024-05-28
LatestLY
Why's our monitor labelling this an incident or hazard?
An AI system is involved as Meta uses algorithms to identify suicidal tendencies from user-generated content on social media. The role of AI here is the detection and flagging of potentially harmful content. The event involves the use of AI systems to prevent harm (suicide) by enabling timely intervention. Since the AI system's outputs have already directly contributed to preventing at least one suicide, this constitutes an AI Incident due to the direct role of AI in harm prevention and intervention. The harm in question is injury or harm to health (a prevented suicide).

Kota Police Partners with Meta to Prevent Student Suicides

2024-05-28
NewsX
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as Meta uses AI to monitor social media content and flag suicidal tendencies. The use of this AI system has directly led to the prevention of at least one suicide attempt, which constitutes injury or harm to health being averted. Therefore, this event qualifies as an AI Incident because the AI system's use has directly contributed to preventing harm to a person, fulfilling the criteria for an AI Incident under the framework.

Kota police ropes in Meta to prevent suicide among coaching students

2024-05-28
NewsDrum
Why's our monitor labelling this an incident or hazard?
Meta's social media platforms use AI systems to detect content indicative of suicidal tendencies (red-flag tags). The police rely on this AI-generated information to intervene and prevent harm (suicide) to individuals. Since the AI system's outputs directly contribute to preventing injury or harm to persons, this qualifies as an AI Incident involving the use of AI systems leading to harm prevention. The event reports a realized harm prevention (a suicide was averted), which is a direct positive impact of the AI system's use.