Queensland Police Trial AI to Predict Domestic Violence, Raising Bias and Rights Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Queensland Police are trialing an AI system to predict and prevent domestic violence by identifying high-risk individuals using police data. The system prompts proactive police visits before any crime occurs, raising concerns about potential bias, misidentification, and disproportionate targeting of vulnerable groups, especially Indigenous communities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI system (an actuarial predictive tool) designed to assess domestic violence offender risk using police data. The system is in development and about to be trialed, so no realized harm has occurred yet. However, the article highlights credible concerns about potential bias, disproportionate targeting of minority groups, and systemic failures that could lead to violations of rights or harm to communities if the AI system is misapplied or malfunctions. These concerns align with plausible future harms as defined for AI Hazards. Since no actual harm has yet occurred, and the focus is on the potential risks and the trial phase, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Accountability · Fairness · Privacy & data governance · Respect of human rights · Safety · Transparency & explainability · Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Psychological · Reputational · Human or fundamental rights · Public interest

Severity
AI hazard

Business function
Compliance and justice

AI system task
Forecasting/prediction


Articles about this incident or hazard


Queensland police to trial AI tool designed to predict and prevent domestic violence incidents

2021-09-13
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (an actuarial predictive tool) designed to assess domestic violence offender risk using police data. The system is in development and about to be trialed, so no realized harm has occurred yet. However, the article highlights credible concerns about potential bias, disproportionate targeting of minority groups, and systemic failures that could lead to violations of rights or harm to communities if the AI system is misapplied or malfunctions. These concerns align with plausible future harms as defined for AI Hazards. Since no actual harm has yet occurred, and the focus is on the potential risks and the trial phase, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

Queensland police to trial AI tool designed to predict and prevent domestic violence incidents

2021-09-14
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (a predictive risk assessment tool) in a policing context aimed at preventing domestic violence incidents. Although no harm has yet occurred, the article highlights credible concerns about potential harms such as bias, wrongful targeting, and systemic discrimination, which could plausibly lead to violations of rights and harm to communities if the system malfunctions or is misused. Since the AI system is in the trial phase and the harms are potential rather than realized, this qualifies as an AI Hazard. The article does not report any actual harm or incident resulting from the AI system's use, so it is not an AI Incident. It is more than just complementary information because it focuses on the AI system's deployment and associated risks rather than responses or governance developments.

QLD police will use AI to 'predict' domestic violence before it happens. Beware the unintended consequences

2021-09-16
The Conversation
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the police plan to use an AI algorithm to predict future domestic violence risk based on historical data. The AI's use is in the deployment phase, influencing police decisions to conduct preemptive visits. While no direct harm has yet been reported, the article outlines multiple plausible harms: increased criminalization, disproportionate targeting of Indigenous populations, violation of victims' rights, and social harm from intrusive policing. These risks are credible and consistent with known issues in predictive policing AI systems. Since harm is plausible but not yet realized, this event fits the definition of an AI Hazard rather than an AI Incident. The article also discusses broader societal and ethical concerns but does not report actual harm or legal actions, so it is not Complementary Information. It is not unrelated because the AI system and its potential impacts are central to the report.

QLD police will use AI to 'predict' domestic violence before it happens: Beware the unintended consequences

2021-09-17
Phys.org
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as being developed and used by the Queensland Police Service to predict domestic violence risk and guide proactive interventions. The AI system's use is central to the event. The article discusses the potential for the AI system to cause harm through reinforcing biases, increasing criminalization, and negatively impacting victims, which are harms to communities and violations of rights. Although the harms are prospective rather than realized, the risks are credible and significant, meeting the definition of an AI Hazard. The article does not report actual harm occurring yet, so it is not an AI Incident. The focus is on the plausible future harms and unintended consequences of the AI system's use, not on responses or updates, so it is not Complementary Information. Therefore, the correct classification is AI Hazard.

Queensland Police trial artificial intelligence to mitigate domestic violence

2021-09-17
Happy Mag
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as a predictive analytical tool using police data to forecast domestic violence escalation. The event concerns the use of this AI system (use phase) to intervene proactively. Although no direct harm has yet occurred, the article outlines plausible risks of harm to individuals and communities, such as misidentification of victims as perpetrators, escalation due to uninvited police visits, and biased targeting of Indigenous populations. These risks constitute credible potential harms that could arise from the AI system's deployment. Therefore, this event qualifies as an AI Hazard rather than an AI Incident, as harm is plausible but not yet realized.

QLD police will use AI to 'predict' domestic violence before it happens. Beware the unintended consequences

2021-09-16
The Mandarin
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (a risk assessment algorithm) developed by police to predict future domestic violence risk. The AI's outputs will directly influence police interventions, which may lead to harms such as increased criminalization, surveillance, and social harm, especially to vulnerable groups. The article describes these harms as likely, citing similar contexts where they have already manifested, but no harm from this trial has yet been realized. Because the AI system's deployment is credibly linked to plausible violations of rights and harm to communities, and the concerns about bias and disproportionate targeting remain prospective, the event is classified as an AI Hazard rather than an AI Incident. The article details the concrete risks and consequences of the AI system's deployment rather than merely discussing governance responses.