Russian Scientist Wrongfully Detained After AI Facial Recognition Error


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Russian hydrologist Alexander Tsvetkov was detained for 10 months after an AI facial recognition system matched his face with 50-55% similarity to a decades-old suspect sketch, despite strong alibi evidence. The AI's error led to his wrongful arrest and prolonged detention, highlighting the risks of relying on AI in criminal investigations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system used for facial recognition that malfunctioned or produced an erroneous output, leading to the wrongful arrest and imprisonment of an innocent person. This constitutes direct harm to the individual's rights and personal liberty, fitting the definition of an AI Incident due to harm to a person caused by the AI system's malfunction or misuse.[AI generated]
AI principles
Accountability; Fairness; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights; Psychological; Reputational; Economic/Property; Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard


Scientist spends 10 months in prison because of an artificial intelligence error

2023-12-14
جراءة نيوز
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used for facial recognition that malfunctioned or produced an erroneous output, leading to the wrongful arrest and imprisonment of an innocent person. This constitutes direct harm to the individual's rights and personal liberty, fitting the definition of an AI Incident due to harm to a person caused by the AI system's malfunction or misuse.

AI classified him as a "killer": Russian scientist wrongly spends 10 months in prison

2023-12-16
مانكيش نت
Why's our monitor labelling this an incident or hazard?
The AI system was used in law enforcement to identify a suspect, but it produced a false positive match with insufficient accuracy, leading to the wrongful imprisonment of an innocent person. This is a clear case where the AI system's malfunction directly caused harm to a person's life and liberty. Therefore, this qualifies as an AI Incident under the definition of harm to a person caused by AI system malfunction.

AI classified him as a "killer": Russian scientist wrongly spends 10 months in prison - اليوم السابع

2023-12-16
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The AI system was used in a law enforcement context to identify a suspect. Its incorrect match led to the wrongful arrest and detention of an innocent person, which constitutes harm to the individual's rights and liberty. This is a direct harm caused by the AI system's malfunction or erroneous output. Therefore, this qualifies as an AI Incident under the definition of an event where AI use has directly led to harm to a person.

Scientist spends 10 months in prison because of an artificial intelligence error

2023-12-13
24.ae
Why's our monitor labelling this an incident or hazard?
An AI system was used to identify the suspect with a 55% match to a witness sketch, which was accepted by authorities despite strong alibi evidence. This led to the scientist's wrongful imprisonment for 10 months, a clear harm to his personal liberty and a violation of legal rights. The AI system's malfunction or misuse was a direct contributing factor to this harm, fulfilling the criteria for an AI Incident.

Because of an AI error, a hydrologist spends 10 months in prison

2023-12-13
akhbarona.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used for facial recognition that incorrectly matched the scientist to a suspect, leading to his wrongful arrest and imprisonment. This is a direct harm to the individual's rights and liberty, fulfilling the criteria for an AI Incident. The AI system's erroneous output was pivotal in causing the harm, despite contradictory evidence. The harm is realized, not just potential, and relates to violation of rights and wrongful detention.

AI judged his face "55% similar to a murderer": Russian scientist wrongfully jailed for 10 months before release | International

2023-12-15
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
An AI system (facial recognition AI) was explicitly involved in the identification process that led to the wrongful arrest and imprisonment of a person, which constitutes harm to the individual's rights and liberty (a violation of human rights and legal rights). The AI's malfunction or misuse (false positive identification with insufficient evidence) directly contributed to the harm. Therefore, this qualifies as an AI Incident.

Russian scientist wrongfully imprisoned for 10 months... only because AI judged "he looks like a murderer"! - International - 自由時報電子報

2023-12-15
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The AI facial recognition system was used by police to identify a suspect, but it incorrectly matched the scientist to a murder suspect with 55% similarity, leading to his wrongful arrest and imprisonment. Despite alibi evidence, authorities relied on the AI's output, causing direct harm to the scientist's liberty and legal rights. This fits the definition of an AI Incident because the AI system's use directly led to harm (a violation of rights and wrongful imprisonment).

A gross injustice: AI judged that he looks like a murderer, and a Russian scientist spent nearly a year wrongfully jailed | 聯合新聞網

2023-12-14
UDN
Why's our monitor labelling this an incident or hazard?
The AI system was used in the law enforcement process to identify a suspect based on facial similarity, which directly resulted in the wrongful arrest and prolonged detention of an innocent person. This is a clear case where the AI system's use caused harm to the individual's fundamental rights, specifically the right to liberty and fair legal treatment. The harm is realized and significant, meeting the criteria for an AI Incident under violations of human rights and breach of legal protections. The event is not merely a potential hazard or complementary information but a concrete incident of harm caused by AI use.

Terrible luck: AI judged his appearance "55% like a murderer" and a Russian scientist spent 10 months in jail | International Kaleidoscope | Global | NOWnews今日新聞

2023-12-15
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
An AI system was used in law enforcement to identify a suspect based on facial similarity, which directly led to the wrongful detention of an innocent person. This constitutes harm to the individual's rights and liberty, a violation of fundamental rights under applicable law. The AI system's malfunction or misuse in this context caused direct harm, qualifying this event as an AI Incident.

AI judged his face "55% similar to a murderer": Russian scientist wrongfully jailed for 10 months before release

2023-12-15
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
The AI system was used in a law enforcement context to assess facial similarity, which directly influenced the arrest and detention of the scientist without sufficient corroborating evidence. This misuse of AI led to harm to the person's liberty and a violation of legal rights, fitting the definition of an AI Incident. The harm is realized, not just potential, and the AI's role was pivotal in the wrongful imprisonment.