Baden-Württemberg Approves Police Use of Palantir AI Software Amid Privacy Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Baden-Württemberg parliament has approved police use of Palantir's AI-powered Gotham software for advanced data analysis starting in 2026. Supporters cite improved crime-fighting capabilities, while critics warn of potential privacy violations, data misuse, and dependency on a US company. The decision follows significant political debate and public concern.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article clearly involves an AI system (Palantir's Gotham) used for data analysis by police, which fits the definition of an AI system. The event concerns the planned use (development and deployment) of this AI system, with public and political debate about potential harms such as privacy violations and surveillance. Since no actual harm has occurred yet and the software is not yet in use, the event represents a plausible future risk of harm (privacy violations, surveillance, potential misuse of data). This aligns with the definition of an AI Hazard rather than an AI Incident. The article is not merely complementary information because it focuses on the potential implications and risks of the software's deployment, not just updates or responses to past incidents. Therefore, the classification is AI Hazard.[AI generated]
AI principles
Privacy & data governance
Transparency & explainability
Accountability
Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Compliance and justice

AI system task
Forecasting/prediction
Event/anomaly detection


Articles about this incident or hazard

Palantir in the Southwest: What the Police Software Means

2025-11-12
WEB.DE
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (Palantir's Gotham) used for data analysis by police, which fits the definition of an AI system. The event concerns the planned use (development and deployment) of this AI system, with public and political debate about potential harms such as privacy violations and surveillance. Since no actual harm has occurred yet and the software is not yet in use, the event represents a plausible future risk of harm (privacy violations, surveillance, potential misuse of data). This aligns with the definition of an AI Hazard rather than an AI Incident. The article is not merely complementary information because it focuses on the potential implications and risks of the software's deployment, not just updates or responses to past incidents. Therefore, the classification is AI Hazard.
Data Analysis: Palantir in the Southwest: What the Police Software Means

2025-11-12
ZEIT ONLINE
Why's our monitor labelling this an incident or hazard?
An AI system (Palantir's Gotham) is explicitly involved as a data analysis tool for police investigations. The event concerns the planned use (deployment) of this AI system, which could plausibly lead to harms such as violations of privacy, surveillance overreach, and potential misuse of sensitive data, as feared by critics. However, since the software is not yet in operational use and no direct or indirect harm has occurred, it does not qualify as an AI Incident. The article is not primarily about responses or updates to past incidents, so it is not Complementary Information. Hence, the classification as an AI Hazard is appropriate.
Fighting Crime: Police May Also Use Palantir Software in the Southwest

2025-11-12
ZEIT ONLINE
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Palantir's Gotham) intended for use in law enforcement, which can plausibly lead to harms such as violations of privacy and human rights through mass data linkage and potential profiling (a form of harm to communities and rights). Since the software is not yet in use and no harm has been reported, this constitutes a credible potential risk rather than a realized incident. Therefore, this event qualifies as an AI Hazard due to the plausible future harms from the AI system's deployment in policing.
Palantir in the Southwest: What the Police Software Means - WELT

2025-11-12
DIE WELT
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Palantir's Gotham) intended for use in law enforcement data analysis. While there are significant concerns about potential harms such as privacy violations, mass surveillance, and data security risks, these harms have not yet materialized in Baden-Württemberg as the software is not yet deployed. The article describes the potential risks and political controversy but does not report any realized harm or incident. Therefore, this situation constitutes an AI Hazard, as the use of the AI system could plausibly lead to harms such as violations of rights and privacy if deployed without adequate safeguards.
Police May Also Use Palantir Software in the Southwest - WELT

2025-11-12
DIE WELT
Why's our monitor labelling this an incident or hazard?
Palantir's 'Gotham' software is an AI system that processes and links large datasets to support police investigations. The article does not report any realized harm but discusses the upcoming authorized use and the associated concerns about privacy and surveillance. Therefore, this event represents an AI Hazard because the use of this AI system could plausibly lead to violations of rights or harm to communities in the future, but no incident has yet occurred.
Baden-Württemberg Decides on the Use of Palantir

2025-11-12
heise online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Palantir's Gotham software) designed to analyze and link data for police investigations, which fits the definition of an AI system. The article does not report any realized harm yet but discusses the planned deployment and significant concerns about constitutional and privacy rights violations, which are plausible future harms. The legal framework is still being established, and independent oversight is lacking, increasing the risk of misuse or harm. Since no direct or indirect harm has yet occurred, but the potential for harm is credible and significant, the event qualifies as an AI Hazard rather than an AI Incident. The article also includes societal and governance responses (legal challenges, public debate), but the main focus is on the potential risks of the AI system's deployment.
Palantir in the Southwest: What the Police Software Means

2025-11-12
Süddeutsche Zeitung
Why's our monitor labelling this an incident or hazard?
Palantir's software qualifies as an AI system because it performs complex data analysis and pattern recognition to support police investigations. However, the article does not report any direct or indirect harm resulting from the software's use so far. Instead, it discusses potential risks such as privacy violations and surveillance, which are concerns about plausible future harms. Since the software is not yet in active use and no incident has occurred, the event represents a credible potential risk (AI Hazard) rather than an AI Incident. The article primarily covers the policy decision and societal debate, which aligns with the definition of an AI Hazard due to the plausible future risk of harm from the AI system's deployment.
Police May Also Use Palantir Software in the Southwest

2025-11-12
stern.de
Why's our monitor labelling this an incident or hazard?
The software 'Gotham' is an AI system used for linking and analyzing large datasets to detect patterns relevant to criminal investigations. Although the software is not yet in use, the legislative approval and planned deployment imply imminent use. The concerns about mass surveillance and dependency on a US company highlight credible risks of human rights violations and privacy breaches. Since no actual harm has been reported yet, but plausible future harm is credible, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Police May Also Use Palantir Software in the Southwest

2025-11-12
Zeitungsverlag Waiblingen
Why's our monitor labelling this an incident or hazard?
The event involves the planned use of an AI system (Palantir's Gotham software) by law enforcement, which can be reasonably inferred to involve AI capabilities such as data analysis and pattern recognition. However, since the software is not yet in use and no harm or incident has occurred, the event represents a plausible future risk scenario where the AI system's use could lead to harms such as privacy violations or rights infringements. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Palantir in the Southwest: What the Police Software Means

2025-11-12
Badische Zeitung
Why's our monitor labelling this an incident or hazard?
Palantir's software is an AI system used for data analysis and pattern recognition by police. The article describes the legislative and political process to authorize its use, with concerns about privacy violations and misuse of sensitive data. However, the software is not yet operational in Baden-Württemberg, and no direct or indirect harm has been reported. The event thus fits the definition of an AI Hazard, as the use of this AI system could plausibly lead to harms such as violations of privacy, human rights, or misuse of data once deployed. It is not an AI Incident because no harm has occurred yet, nor is it Complementary Information since the article focuses on the upcoming deployment and associated risks rather than updates on past incidents or governance responses. It is not Unrelated because the AI system and its potential impacts are central to the article.
Police May Also Use Palantir Software in the Southwest

2025-11-12
Badische Zeitung
Why's our monitor labelling this an incident or hazard?
Palantir's 'Gotham' software is an AI system that processes and links large datasets to support police investigations. The article states the software will be used starting in 2026, so no harm has yet occurred. The concerns raised about privacy, potential overreach (e.g., 'Rasterfahndung', dragnet-style searches), and dependency on a foreign AI provider indicate plausible risks of harm to rights and communities in the future. Since the event centers on legislative approval and potential future use with associated risks, it fits the definition of an AI Hazard. It is not Complementary Information because it is not an update on an existing incident, nor is it unrelated as it clearly involves an AI system and potential harm.
Palantir in the Southwest: 25 Million Euros for the Police's Controversial Software

2025-11-12
TAG24
Why's our monitor labelling this an incident or hazard?
Palantir's Gotham software qualifies as an AI system due to its advanced data analysis capabilities aiding pattern recognition. However, the article does not report any direct or indirect harm resulting from its use, nor does it describe any malfunction or misuse leading to harm. The concerns raised are about potential risks, but these remain speculative at this stage. Therefore, the event is best classified as Complementary Information, providing context on the adoption and debate around an AI system without reporting an AI Incident or Hazard.
Despite Criticism: When the BW Police May Start Using Palantir Software

2025-11-12
swr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the planned use of an AI system (Palantir's Gotham) by the police, which is intended for data analysis and crime prevention. Although the software is not yet deployed, the legislative approval and planned future use create a credible risk of harms such as violations of rights or other significant harms related to surveillance and law enforcement. Since no harm has yet occurred, but plausible future harm is credible, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the approval and potential future use, which could lead to harm.
Questions & Answers: Palantir in Baden-Württemberg - What the Police Software Means

2025-11-12
Stuttgarter-Zeitung.de
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (Palantir's Gotham) designed for complex data analysis and pattern recognition to aid police investigations. The system is not yet in use, so no direct or indirect harm has occurred. However, the concerns about surveillance, data misuse, and dependency on a foreign company indicate plausible future harms such as violations of privacy and human rights. Therefore, this event fits the definition of an AI Hazard, as the development and planned use of the AI system could plausibly lead to an AI Incident in the future. It is not Complementary Information because the main focus is not on responses or updates to an existing incident, nor is it unrelated since the AI system and its potential impacts are central to the discussion.
Baden-Württemberg: Police May Also Use Palantir Software in the Southwest

2025-11-12
Stuttgarter-Zeitung.de
Why's our monitor labelling this an incident or hazard?
The software 'Gotham' is an AI system used by police to analyze large datasets for investigative purposes. The article highlights concerns about potential privacy violations and overreliance on a US company, indicating plausible risks of harm to rights and communities. However, no actual harm or incident is described; the event is about the approval and planned use of the system. Thus, it is an AI Hazard, reflecting credible potential for future harm but no realized incident yet.
Baden-Württemberg: Police May Use Palantir Software

2025-11-12
Rhein-Neckar-Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of Palantir's AI-powered software by the police, which qualifies as an AI system. However, it does not describe any direct or indirect harm caused by the system so far. Instead, it focuses on the legal and political decision to allow its use and the concerns about potential future risks such as privacy violations or excessive dependence on a US company. Since no harm has materialized yet but plausible future harm is discussed, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Baden-Württemberg: Police May Also Use Palantir Software in the Southwest

2025-11-12
Eßlinger Zeitung
Why's our monitor labelling this an incident or hazard?
The Palantir software is an AI system used for complex data analysis and pattern recognition by law enforcement. The article discusses the legal authorization and political debate around its future use, highlighting concerns about potential privacy and rights harms. Since no actual harm or incident has been reported yet, but the use of this AI system could plausibly lead to violations of rights or other harms, the event fits the definition of an AI Hazard. It is not Complementary Information because the main focus is not on updates or responses to a past incident, nor is it unrelated as it clearly involves an AI system with potential for harm.
Questions & Answers: Palantir in Baden-Württemberg - What the Police Software Means

2025-11-12
Eßlinger Zeitung
Why's our monitor labelling this an incident or hazard?
Palantir's "Gotham" software qualifies as an AI system because it performs advanced data analysis and pattern recognition across large datasets to support police investigations. The article does not report any current incidents of harm caused by the software's use; rather, it focuses on the upcoming deployment and the legal and political debates surrounding it. The concerns raised about privacy, surveillance, and data misuse represent plausible future harms that could arise from the software's use. Since no actual harm has yet occurred, but there is a credible risk of AI-related harm, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential risks and the legal enabling of the AI system's use, not on responses or updates to past incidents.
Baden-Württemberg State Parliament Votes on Palantir - Criticism of Police Software

2025-11-12
Heilbronner Stimme
Why's our monitor labelling this an incident or hazard?
Palantir's "Gotham" software is an AI system used for data analysis and pattern recognition in law enforcement. The article focuses on the legislative approval and planned use of this AI system, with concerns about potential misuse and privacy violations. Since the software is not yet operational in Baden-Württemberg and no harm has been reported, the event constitutes an AI Hazard due to the plausible future risk of harm from its deployment. It is not an AI Incident because no direct or indirect harm has occurred yet. It is not Complementary Information because the article is not about responses or updates to a past incident, nor is it unrelated as it clearly involves an AI system and its potential impacts.
Baden-Württemberg: Greens Release Police Data for Palantir

2025-11-13
netzpolitik.org
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the deployment and legal authorization of Palantir's AI system for automated police data analysis, which involves processing large-scale personal data and mass surveillance. This use directly affects fundamental rights, including privacy and informational self-determination, constituting a violation of human rights under the framework. The involvement of AI in automated data analysis and the legal changes enabling commercial access to police data for AI training further exacerbate the harm. The harms are realized and ongoing, not merely potential, as the law has been passed and the system is planned for deployment. Hence, this is an AI Incident rather than a hazard or complementary information.