Castilla y León Launches AI Pilot to Predict Gender-Based Violence Risk

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The regional government of Castilla y León is launching a pilot project using AI and Big Data to predict the risk of gender-based violence based on behavioral patterns from social history data. While aimed at prevention, the initiative raises concerns about potential future harms from inaccurate predictions or misuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes the development and use of an AI system (predictive model using Big Data) aimed at forecasting the risk of gender-based violence. However, the article does not report any realized harm or incident resulting from the AI system's use. Instead, it outlines a proactive project intended to improve prevention and support for victims. Therefore, the event represents a plausible future risk scenario where the AI system could lead to harm if misused or if predictions are inaccurate, but no harm has yet occurred. This fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Accountability, Fairness, Privacy & data governance, Respect of human rights, Robustness & digital security, Transparency & explainability, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
Women, General public

Harm types
Human or fundamental rights, Psychological, Reputational, Public interest

Severity
AI hazard

AI system task
Forecasting/prediction


Articles about this incident or hazard

CyL tests a predictive model for cases of gender-based violence

2020-11-25
El Día de Valladolid
Castilla y León tests a model to determine the probability of suffering a gender-based attack

2020-11-25
abc
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it is used to create a predictive model for violence risk assessment. However, the article does not report any actual harm or incident resulting from the AI system's use yet. The project is in a pilot phase aiming at early detection and prevention, so no realized harm is described. The event thus represents a plausible future risk scenario where AI use could lead to harm if misused or if predictions are inaccurate, but currently it is a development stage without direct or indirect harm reported. Therefore, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Castilla y León will test a predictive model for cases of gender-based violence

2020-11-25
Agencia EFE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create a predictive model for violence against women. However, it describes the initiation of a pilot project rather than an incident where harm has occurred or a hazard where harm is plausible but not realized. Since the project is in the early stages and no harm or direct risk is reported, this is a development related to AI with potential future implications but no current incident or hazard. Therefore, it is best classified as Complementary Information, as it provides context on AI use in social issues without reporting realized or imminent harm.
Mañueco tests a system to predict gender-based attacks

2020-11-26
Diario de León
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as a predictive model using AI and Big Data is being developed and deployed. However, the article does not report any realized harm or incidents caused by the AI system; rather, it discusses the intended use of AI for early detection and prevention of violence. Therefore, the event represents a plausible future risk scenario where AI could lead to harm if misused or if predictions are inaccurate, but no harm has yet occurred. This fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Castilla y León will roll out a project to predict gender-based violence from behavioral patterns

2020-11-25
La Opinión - El Correo de Zamora
Why's our monitor labelling this an incident or hazard?
An AI system is reasonably inferred here because the project uses Big Data techniques to predict violence based on behavioral patterns, which implies advanced data processing and predictive modeling typical of AI systems. The use of this AI system is intended to anticipate and prevent harm (violence against women), but the article does not report any actual harm caused by the AI system itself. Instead, it describes a planned or ongoing deployment aimed at harm prevention. Therefore, this event represents a plausible future risk scenario where the AI system's use could lead to harm if predictions are inaccurate or misused, but no harm has yet occurred. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
CyL will test a model to predict gender-based violence

2020-11-26
Diario Palentino
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system to predict gender-based violence risk, which is a sensitive and high-stakes application. While no harm has been reported, the use of AI in this context could plausibly lead to harms such as privacy breaches, stigmatization, or incorrect predictions affecting individuals' rights and wellbeing. Since the article does not describe any actual harm or incident but rather a planned AI system with potential risks, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI is central to the project.