Japanese Sushi Chains Deploy AI Cameras to Prevent Food Tampering Incidents

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In response to recent food tampering incidents at conveyor belt sushi restaurants, Japanese chain Kura Sushi plans to implement AI-enabled cameras to detect suspicious handling of sushi plates. The AI system aims to enhance food safety by alerting staff to potential hygiene violations, though no AI-related harm has occurred.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system is explicitly mentioned as being used to detect suspicious behavior related to food tampering in a restaurant setting. The use of AI here is directly linked to preventing harm to health by identifying and mitigating unsanitary actions that could lead to foodborne illness or contamination. Since the AI system's use is directly aimed at preventing harm and is actively involved in monitoring and alerting staff, this qualifies as an AI Incident under the definition of harm to health of persons or groups. The event describes realized harm potential and the AI's role in addressing it, not just a future risk or general information, so it is not a hazard or complementary information.[AI generated]
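The rationales throughout this page repeatedly apply the same triage rule: if no AI system is involved, the event is complementary information; if harm linked to an AI system has been realized, it is an AI incident; if harm is plausible but unrealized, it is an AI hazard; otherwise it is complementary information. The following Python sketch is an illustrative reconstruction of that decision logic only; the function name and boolean flags are hypothetical, not the monitor's actual implementation.

```python
from enum import Enum

class Label(Enum):
    INCIDENT = "AI incident"
    HAZARD = "AI hazard"
    COMPLEMENTARY = "Complementary information"

def classify(ai_involved: bool, harm_realized: bool, harm_plausible: bool) -> Label:
    """Hypothetical triage mirroring the reasoning in the rationales:
    realized harm tied to an AI system -> incident; plausible but
    unrealized harm -> hazard; otherwise context only."""
    if not ai_involved:
        return Label.COMPLEMENTARY   # AI is not central to the event
    if harm_realized:
        return Label.INCIDENT        # harm has occurred in connection with the AI system
    if harm_plausible:
        return Label.HAZARD          # harm could plausibly occur but has not
    return Label.COMPLEMENTARY       # AI involved, but no harm realized or plausible
```

The divergent labels on this page (some outlets' entries marked incident, others hazard or complementary information) come down to how each rationale sets these flags for the same underlying event.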
Industries
Food and beverages

Severity
AI incident

Business function
Monitoring and quality control

AI system task
Recognition/object detection


Articles about this incident or hazard

Kura Sushi plans to use AI cameras to prevent the "sushi terrorism" attacks that have recently been occurring frequently in Japan

2023-02-12
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article discusses the planned use of an AI system (AI cameras) to monitor and prevent harmful customer behavior (food tampering) in sushi restaurants. The AI system is not reported to have caused any harm or malfunctioned; rather, it is being introduced to reduce existing harm caused by customers. Since the AI system's involvement is preventive and no harm or malfunction related to AI is reported, this does not qualify as an AI Incident or AI Hazard. The article mainly provides complementary information about AI adoption in a specific context to address a problem, fitting the definition of Complementary Information.
Guarding against another Sushiro incident? Japanese chain plans to use AI to detect suspicious conveyor belt sushi plate lids

2023-02-13
ezone.hk 即時科技生活
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to monitor customer behavior and prevent food safety incidents. However, no actual harm has occurred yet; the AI system is being deployed to head off potential future harm from food contamination or hygiene violations. Because the deployment addresses a plausible future risk, and no harm has been reported as occurring due to the AI system itself or its malfunction, this constitutes an AI Hazard.
Conveyor belt sushi chain uses AI cameras to crack down on opening lids without taking the plate

2023-02-13
中关村在线
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used to detect suspicious behavior that could lead to food contamination or hygiene issues, which would harm customers' health if it occurred. The article reports no actual harm caused by the AI system or its malfunction; instead, the AI is deployed to prevent such harm. This fits the definition of an AI Hazard: the context presents a plausible risk of health-related harm, but none has been realized. It is not an AI Incident, since no harm or violation was caused by the AI system itself, and it is not merely complementary or unrelated information, as the AI system's deployment is central to the event and linked to harm prevention.
Conveyor belt sushi chain uses AI cameras to crack down on opening lids without taking the plate

2023-02-13
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used to detect suspicious behavior related to food tampering in a restaurant setting. The use of AI here is directly linked to preventing harm to health by identifying and mitigating unsanitary actions that could lead to foodborne illness or contamination. Since the AI system's use is directly aimed at preventing harm and is actively involved in monitoring and alerting staff, this qualifies as an AI Incident under the definition of harm to health of persons or groups. The event describes realized harm potential and the AI's role in addressing it, not just a future risk or general information, so it is not a hazard or complementary information.
Japan's Kura Sushi shows off "AI camera system" to stop pranksters and protect food safety

2023-03-02
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used to monitor and detect abnormal behavior that could compromise food safety, which directly relates to harm to the health of customers (harm category a). The AI system's use is actively preventing or mitigating such harm by alerting staff to intervene. Since the AI system's deployment is directly linked to preventing or addressing a health-related harm, this qualifies as an AI Incident under the framework.
To prevent another "Sushiro prankster," Japan's Kura Sushi installs AI video surveillance of suspicious behavior

2023-03-03
UDN
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it monitors customer behavior and detects anomalies in real time. The AI's use is to prevent potential harm to the restaurant environment and other customers' dining experience by identifying and mitigating inappropriate behavior. However, there is no indication that any actual harm has occurred due to the AI system's malfunction or misuse, nor that the AI system itself caused harm. The article focuses on the implementation of AI as a preventive measure rather than reporting an incident or a hazard with realized or imminent harm. Therefore, this event is best classified as Complementary Information, as it provides context on societal and technical responses to prior incidents involving customer misconduct but does not describe a new AI Incident or AI Hazard.
Japan's Kura Sushi acts to protect food safety: AI monitors suspicious movements on the conveyor belt

2023-03-02
UDN
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved: AI-powered cameras detect suspicious actions related to food safety. The system's use aims directly at preventing harm to customers' health by identifying and mitigating potential contamination risks. Because the AI system's use is directly linked to preventing injury or harm to people (harm category a), and the article describes its operational role in managing health safety rather than a future risk or general background, the event is classified as an AI Incident rather than a hazard or complementary information.
Japan's Kura Sushi acts to protect food safety: AI monitors suspicious movements on the conveyor belt

2023-03-02
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, used for monitoring and detecting suspicious behavior in real time. The system's use is intended to prevent food safety risks caused by customer misconduct, which could harm health if contaminated food is consumed. Although no specific harm event is reported, the AI system's deployment directly addresses a plausible risk of harm to health (food safety). Since the article reports the system's active use to prevent harm rather than an incident of harm caused by AI malfunction or misuse, and no harm has yet occurred, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses to a past incident or broader governance, so it is not Complementary Information. It is not unrelated as AI is central to the event.
Fearing a repeat of "disgusting attacks," Japanese conveyor belt sushi chain deploys AI against unruly customers: it watches you while you eat

2023-03-03
中時新聞網
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, monitoring real-time video feeds to detect abnormal behavior. The system's use is to prevent or mitigate harm caused by disruptive or malicious customer actions, which previously led to significant business and reputational harm (indirect harm to the business and potentially to customers if disruptions escalate). Although no direct physical harm is reported yet, the AI system's deployment is a response to prior incidents causing harm and aims to prevent recurrence. This fits the definition of an AI Incident because the AI system's use is directly linked to preventing or responding to harm caused by customer misconduct, which had materialized previously. The AI system's role is pivotal in managing and mitigating these harms. Therefore, this event is best classified as an AI Incident.
Japanese sushi operator installs AI monitoring; alarms sound when customers maliciously lift lids

2023-03-04
HiNet
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used to monitor customer behavior and detect unhygienic or malicious actions. The system's use directly aims to prevent harm to public health by identifying and responding to behaviors that could compromise food safety and hygiene. The AI system's deployment and its role in detecting and alerting about harmful behaviors that affect health and safety constitute an AI Incident under the definition of harm to health of groups of people (harm category a).
Japan's Kura Sushi acts to protect food safety: AI monitors suspicious movements on the conveyor belt

2023-03-02
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as monitoring suspicious behavior to protect food safety. While the AI system's use aims to prevent harm (food contamination or safety issues), there is no indication that harm has occurred or that the AI system malfunctioned or caused harm. Therefore, this is a case of a plausible future risk being mitigated by AI, but no realized harm is reported. Hence, it qualifies as Complementary Information, as it provides context on AI deployment for safety and monitoring but does not describe an AI Incident or AI Hazard.
After repeated "disgusting attacks" at Japanese conveyor belt sushi restaurants, Kura Sushi takes on the pranksters with AI that watches you eat sushi the whole time

2023-03-03
鏡週刊 Mirror Media
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly used to monitor customer behavior and detect malicious acts that contaminate food, directly endangering other customers' health and safety. The AI's detection and notification function is central to addressing this harm. Although the article also raises privacy concerns, its primary focus is the AI's role in preventing further harm in a context where malicious contamination has already occurred, so the event meets the criteria for an AI Incident.
Japanese sushi operator installs AI monitoring; alarms sound when customers maliciously lift lids

2023-03-03
公共電視
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used to monitor customer behavior and detect specific actions that could harm public health by compromising food hygiene. The AI system's use directly contributes to preventing injury or harm to the health of people (harm category a) by identifying and responding to unhygienic behaviors. Since the AI system's deployment is actively involved in mitigating health risks and the event describes its operational use leading to alerts and potential law enforcement action, this qualifies as an AI Incident.
Saliva and hand-touching "disgusting attacks"! Kura Sushi takes on the pranksters with AI that watches you eat sushi the whole time

2023-03-03
三立新聞
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used to monitor customer behavior and detect malicious acts that have already occurred, such as contaminating sushi with saliva or dirty chopsticks. These acts pose a direct health risk to other customers, constituting harm to health (a). The AI system's use is a response to these incidents and plays a pivotal role in identifying and mitigating such harmful behavior. Although the AI system itself is not causing harm, its deployment is directly linked to addressing and preventing harm caused by malicious customers. Hence, this is an AI Incident rather than a hazard or complementary information.
Preventing pranks! Japan's Kura Sushi uses AI monitoring to catch pranksters, flagging suspicious "pick up and put back" movements

2023-03-02
台視新聞網
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly used to monitor and detect suspicious behavior that could lead to harm (e.g., contamination or food safety issues) in sushi restaurants, and its detections trigger staff intervention to prevent that harm. Because the AI system's outputs are directly linked to preventing harm to customers and the community from prank behaviors, the event is classified as an AI Incident rather than a hazard or complementary information.
Japan Sushi Chain Installs AI camera system to Thwart Customer Meddling

2023-03-03
japannews.yomiuri.co.jp
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, used for monitoring and detecting suspicious behavior. The system's use is intended to prevent harm to customers' health by stopping food tampering incidents. Since the AI system's use is directly linked to preventing harm and responding to prior incidents of food tampering, this qualifies as an AI Incident due to the realized harm (tampering) and the AI's role in addressing it.
Monitoring of 'sushi terrorism' with AI cameras at major chain draws mixed reaction

2023-03-05
毎日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI cameras used to monitor conveyor belt sushi behavior to prevent 'sushi terrorism.' The AI system is involved in use, but no harm or violation has been reported as having occurred. The concerns about surveillance are societal reactions rather than documented harms. The AI system's role is preventive and supportive, with no direct or indirect harm caused or plausible harm clearly imminent. Thus, the event does not meet the criteria for AI Incident or AI Hazard. It fits the definition of Complementary Information, providing context on AI deployment and societal response.
Sushi chain installs AI camera system after licking scandal

2023-03-03
The Japan Times
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being installed to identify suspicious behavior in restaurants. Its use is a response to prior incidents of unhygienic acts by customers, which caused reputational and possibly economic harm to the sector. However, the article describes no direct or indirect harm caused by the AI system itself, nor any malfunction or misuse; the AI system is a preventive measure against further harm. This event therefore describes neither an AI Incident nor an AI Hazard, but rather complementary information about AI deployment in response to prior incidents.
Kura Sushi using AI camera network to prevent gross pranks at its revolving sushi restaurants

2023-03-04
SoraNews24 -Japan News-
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI-enhanced camera network) deployed to monitor and detect unsanitary prank behavior in a restaurant setting. The AI system's use is directly linked to preventing harm to customers' health by identifying and stopping potential contamination incidents before they cause injury or illness. Since the AI system's deployment aims to prevent injury or harm to people, and the AI is actively used in this harm prevention, this qualifies as an AI Incident under the definition of harm to health caused by AI system use.
Sushi chain installs AI camera system after licking scandal

2023-03-02
Kyodo News+
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being installed and used to detect suspicious behavior, which involves AI system use. However, the event does not report any harm caused by the AI system or any malfunction. The AI system is a tool to prevent harm caused by human misconduct, not a source of harm itself. The event focuses on the deployment of AI as a response to a prior scandal, making it a governance or societal response. Thus, it fits the definition of Complementary Information rather than an Incident or Hazard.