Bavarian Police Halt Real Data Tests of AI Analysis Software After Privacy Concerns



The Bavarian State Criminal Police Office stopped testing its new AI-based analysis software with real personal data following criticism from the state data protection commissioner over legal and privacy risks. Testing now continues only with pseudonymized data, as no legal basis for real data use currently exists.[AI generated]
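
For context, pseudonymization replaces direct identifiers with tokens that can be re-linked to a person only by whoever holds a separate key; this is why pseudonymized data still counts as personal data under the GDPR, even though it lowers the privacy risk of testing. The sketch below is a minimal, hypothetical illustration of the idea using keyed hashing in Python. It is not the LKA's actual procedure, and every name and value in it is invented.

import hmac
import hashlib

# Hypothetical key; in practice it would be generated, stored, and rotated
# separately from the test data set.
SECRET_KEY = b"example-key-not-for-production"

def pseudonymize(value: str) -> str:
    """Map a direct identifier to a stable keyed token (HMAC-SHA256).

    The same input always yields the same token, so records can still be
    linked during functional testing, but re-identification requires the key.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Invented example record, converted before being handed to the test system.
record = {"name": "Max Mustermann", "birth_date": "1980-01-01"}
test_record = {field: pseudonymize(value) for field, value in record.items()}
print(test_record)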

Why's our monitor labelling this an incident or hazard?

The software is an AI system designed to analyze and link large police data sets. The initial use of real personal data without a legal basis posed a plausible risk of violating data protection rights, a form of harm to fundamental rights. Although no harm occurred and no data were actually analyzed during the initial tests, the event highlights a credible risk of legal and privacy harm had such testing continued without proper authorization; the switch to pseudonymized data reduces this risk. The event is therefore best classified as an AI Hazard, reflecting the plausible future harm from the development and use of the AI system under questionable legal conditions. There is no evidence of realized harm, so it is not an AI Incident. It is not merely complementary information, because the main focus is the risk and legality of the AI system's testing rather than responses or ecosystem updates. And since it clearly involves an AI system, it is not unrelated.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Accountability

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Compliance and justice

AI system task
Forecasting/prediction, Event/anomaly detection


Articles about this incident or hazard


LKA: No more tests with real data for new police software

2024-03-27
GMX

Information technology: LKA: No more tests with real data for new police software

2024-03-27
ZEIT ONLINE
Why's our monitor labelling this an incident or hazard?
The software qualifies as an AI system because it performs analysis and linking of large police data sets to support investigations, which involves sophisticated data processing and inference. The event involves the development and testing phase of this AI system. However, the tests with real personal data were limited to functionality checks without actual data analysis, and have been stopped following data protection concerns. No harm or violation of rights has been reported as a result of these tests, and the LKA asserts legal compliance. The main issue is the potential for future harm or legal non-compliance if the system is used without proper legal basis. Therefore, this event represents an AI Hazard, as the use of the AI system could plausibly lead to violations of privacy or legal rights if not properly regulated, but no incident has yet occurred.

LKA: No more tests with real data for new police software - WELT

2024-03-27
DIE WELT
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the software performs data analysis and linking across multiple police databases, which implies AI or advanced algorithmic processing. The event concerns the use and development of this AI system. Although real personal data was used initially for testing, the software did not analyze the data or cause harm. The main issue is the legal and privacy risk from using real personal data without a clear legal basis, which could plausibly lead to violations of data protection rights if continued. Since no actual harm or rights violation has occurred, and the software is still in testing with pseudonymized data, this situation constitutes an AI Hazard due to the plausible risk of harm from the development and use of the AI system under insufficient legal safeguards.

LKA: No more tests with real data for new police software

2024-03-27
Süddeutsche Zeitung
Why's our monitor labelling this an incident or hazard?
The software is an AI system designed to analyze and link police data. The initial use of real personal data for testing raised legal and privacy concerns, leading to a halt and switch to pseudonymized data. No actual harm or rights violations have been reported as the analysis functions were not active during tests with real data. The event highlights potential future risks related to privacy, data protection, and legal compliance if the system is deployed improperly. Since no harm has yet occurred but plausible future harm exists, this is best classified as an AI Hazard.

LKA: No more tests with real data for new police software

2024-03-27
rtl.de
Why's our monitor labelling this an incident or hazard?
The software VeRA is an AI system used for data analysis in police investigations. The initial use of real personal data for testing without legal basis raised concerns about privacy and legal compliance, which relate to human rights and data protection laws. However, the article does not report any actual harm or violation occurring, only that testing with real data was stopped following criticism. The current testing uses pseudonymized data, indicating mitigation measures. The main focus is on the governance response and changes in practice rather than a new incident or hazard. Thus, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

LKA: No more tests with real data for new police software

2024-03-27
tz
Why's our monitor labelling this an incident or hazard?
The software qualifies as an AI system because it performs complex data analysis and linkage across multiple databases to support investigations, which goes beyond simple software. The event involves the use and development of this AI system. However, the article does not report any realized harm or incident resulting from the AI system's use; rather, it reports on the cessation of tests with real data due to legal concerns and the switch to pseudonymized data. There is no indication that the AI system caused injury, rights violations, or other harms. The concerns raised are about potential legal non-compliance and privacy risks, which have been addressed by stopping the use of real data in tests. Therefore, this event does not meet the threshold for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides important context about governance, legal compliance, and the development process of an AI system, enhancing understanding of the broader AI ecosystem and responses to potential risks.

Controversial Palantir software in Bavaria - Police end VeRA test with data from real people

2024-03-27
tz
Why's our monitor labelling this an incident or hazard?
The event involves VeRA, software used by police for data analysis and linking, which qualifies as an AI system. The use of real personal data for testing without clear legal authorization raises concerns about violations of fundamental rights, specifically data protection and privacy rights. Although the software did not analyze the real data during the initial test, the mere use of real personal data in testing without a proper legal basis breaches legal obligations that protect fundamental rights. No actual harm such as data misuse or a breach is reported, and the test with real data has been stopped. The event thus describes a plausible risk of harm (a violation of rights) arising from the AI system's use, fitting the definition of an AI Hazard. It is not an AI Incident because no realized harm has been confirmed. It is not Complementary Information because the main focus is the controversy and risk related to the AI system's use, not responses or updates to a past incident. It is not Unrelated because the event clearly involves an AI system and potential harm.