Japan Police to Test AI Surveillance for Lone-Wolf Attack Prevention


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Japanese police plan to test AI-equipped security cameras with behavior detection and facial recognition to identify suspicious actions and prevent lone-wolf attacks. The initiative aims to enhance public safety but raises privacy concerns, as authorities evaluate the system's accuracy and potential deployment.[AI generated]
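The article names the techniques involved (behavior detection, facial recognition) but not how the system works internally. As a purely illustrative sketch, with all names and thresholds hypothetical, one common form of rule-based behavior detection reduces to flagging tracked individuals whose dwell time in a monitored zone exceeds a threshold:

```python
from dataclasses import dataclass

# Illustrative threshold: flag anyone lingering in the zone over 2 minutes.
DWELL_THRESHOLD_S = 120.0

@dataclass
class Track:
    """A person tracked across camera frames (hypothetical structure)."""
    track_id: int
    first_seen: float  # timestamp of first detection, in seconds
    last_seen: float   # timestamp of most recent detection, in seconds

    def dwell_time(self) -> float:
        return self.last_seen - self.first_seen

def flag_loiterers(tracks, threshold=DWELL_THRESHOLD_S):
    """Return the IDs of tracks whose dwell time exceeds the threshold."""
    return [t.track_id for t in tracks if t.dwell_time() > threshold]

tracks = [
    Track(track_id=1, first_seen=0.0, last_seen=30.0),   # brief passer-by
    Track(track_id=2, first_seen=0.0, last_seen=300.0),  # lingers 5 minutes
]
print(flag_loiterers(tracks))  # → [2]
```

Real deployments would combine many such signals (pose, repeated passes, unattended objects) with learned models, which is precisely where the accuracy and bias concerns discussed below arise.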

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the development and planned use of AI systems for behavior detection and facial recognition by the police to address security threats. While the AI system is not yet deployed operationally and no harm has been reported, the use of AI in surveillance and behavior detection inherently carries plausible risks of harm, such as privacy violations or misuse. Since the event concerns the potential and planned use of AI systems that could plausibly lead to harm or benefits, it fits the definition of an AI Hazard rather than an Incident or Complementary Information. There is no indication of realized harm or legal/governance responses, so it is not Complementary Information. It is not unrelated as AI systems are central to the event.[AI generated]
AI principles
Privacy & data governance, Transparency & explainability, Respect of human rights, Fairness, Robustness & digital security, Accountability, Democracy & human autonomy, Safety

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights, Psychological, Reputational, Public interest

Severity
AI hazard

Business function
Compliance and justice, Monitoring and quality control

AI system task
Recognition/object detection, Event/anomaly detection


Articles about this incident or hazard


Japanese police to use AI to analyze suspicious individuals' movements and expressions in response to "lone-wolf" crime

2023-07-10
ifeng.com (Phoenix New Media)

One year after Abe's assassination, Japanese police prepare to use AI to prevent a similar tragedy - Warning! - cnBeta.COM

2023-07-07
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI-equipped security cameras with behavior detection and facial recognition) in the context of public security. However, the AI system is not reported to have caused any harm or incident yet; rather, it is being tested to prevent future incidents such as lone-wolf attacks. This is therefore an effort to mitigate plausible future risk rather than an incident or harm caused by AI. The article focuses on the planned use of AI in security and its potential benefits and concerns (privacy), making it an AI Hazard: the system could plausibly prevent, or fail to prevent, harm in the future, but no harm has yet occurred.

Japan: cameras that detect the criminal before he commits his crime!

2023-07-25
Okaz newspaper
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for behavior detection and suspicious object identification to prevent crimes before they happen. While no harm has been reported yet, the use of AI for predictive policing carries credible risks of harm such as privacy violations, wrongful accusations, or biased policing practices. The event is about planned testing and evaluation, indicating potential future harm rather than realized harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

On the anniversary of Shinzo Abe's assassination, smart cameras in Japan detect criminals before a crime is committed

2023-07-24
Vetogate
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (smart cameras with machine learning for behavior detection) by law enforcement to prevent crimes, which is a clear AI system involvement. The system is not yet fully operational but is being tested, so no direct harm has occurred due to the AI system itself. However, the article highlights the plausible future impact of these AI systems in preventing serious crimes, as well as concerns about potential biases and privacy implications. Therefore, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to either harm prevention or potential harms related to bias or misuse. There is no indication of realized harm or incident yet, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the testing and implications of the AI system.

Japan tests "behavior detection" technology to catch such individuals

2023-07-24
MTV Lebanon - Live Online TV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using machine learning to detect suspicious behavior and objects. The police are currently testing this system, so it is in the use phase but not yet causing harm. The article mentions concerns about potential algorithmic bias and privacy issues, indicating plausible future harms such as violations of rights or biased law enforcement. Since no harm has yet occurred, but the system's deployment could plausibly lead to an AI Incident, this qualifies as an AI Hazard.

"ذكية"... كاميرا تكتشف المجرم قبل ارتكاب جريمته!

2023-07-24
Al Mayadeen
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI-powered cameras with behavior detection and object recognition) in a law enforcement context aimed at preventing crimes before they happen. However, the system is currently only in the testing phase, and no actual harm or incident has been reported. The article discusses potential benefits and concerns but does not describe any realized harm or direct incident caused by the AI system. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to incidents involving rights violations or other harms in the future, but no harm has yet occurred.

In Japan, security cameras detect the criminal before he commits his crime!

2023-07-24
AlJadeed.tv
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI-enhanced security cameras with behavior detection capabilities) in a law enforcement context. The system's use is intended to prevent crimes, which is a potential benefit, but the article does not report any actual harm or incident resulting from the AI system's deployment. Since the AI system's use is currently in testing and no direct or indirect harm has occurred, but there is a plausible risk of future harm (e.g., bias, privacy violations, wrongful suspicion), this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential and planned use of AI for crime prevention, not on a realized incident or harm.

كاميرات "خارقة" في اليابان: تكتشف المجرم قبل ارتكاب جريمته!

2023-07-24
elsiyasa.com
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it uses machine learning to detect suspicious behavior, weapons, and intrusions. However, the article does not report any actual harm or incidents caused by these AI systems yet. Instead, it discusses the potential benefits and concerns, including possible algorithmic bias. Since no harm has occurred but there is a plausible risk of future harm (e.g., bias leading to wrongful suspicion or privacy violations), this qualifies as an AI Hazard rather than an Incident or Complementary Information.