Israel's AI-Driven Targeting in Gaza Leads to Mass Civilian Casualties

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Reports reveal Israel's military used AI systems, notably 'Lavender', to generate kill lists and target suspected militants in Gaza with minimal human oversight. This AI-driven process led to airstrikes causing high civilian casualties, including entire families, raising global concern over human rights violations and the ethical use of AI in warfare.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI system (Lavender) used for targeting in military strikes that have resulted in civilian casualties, including families and children, which is a direct harm to people and communities. The system relies on WhatsApp data, implicating the messaging platform in potential human rights violations. The harms described (civilian deaths, possible violations of international law) are materialized and directly linked to the AI system's use. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability; Safety; Respect of human rights; Robustness & digital security; Transparency & explainability; Democracy & human autonomy; Fairness; Human wellbeing

Industries
Government, security, and defence

Affected stakeholders
General public; Children

Harm types
Physical (death); Physical (injury); Human or fundamental rights; Psychological; Public interest

Severity
AI incident

AI system task
Organisation/recommenders; Goal-driven organisation; Forecasting/prediction; Recognition/object detection; Reasoning with knowledge structures/planning

In other databases

Articles about this incident or hazard

Israel is using WhatsApp to kill Palestinians in Gaza

2024-04-23
En Son Haber
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used for targeting in military strikes that have resulted in civilian casualties, including families and children, which is a direct harm to people and communities. The system relies on WhatsApp data, implicating the messaging platform in potential human rights violations. The harms described (civilian deaths, possible violations of international law) are materialized and directly linked to the AI system's use. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Scandal of the year! Israel is using WhatsApp to bomb Palestinians!

2024-04-23
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lavender) that uses data from WhatsApp to identify targets for military strikes. The AI system's use has directly led to lethal harm to individuals and families, constituting injury and violation of human rights. The article explicitly states that the AI system is used operationally and has caused deaths and civilian harm. This fits the definition of an AI Incident, as the AI system's use has directly led to harm to people and breaches of fundamental rights.

A grave accusation against WhatsApp! Whoever heard the 'Gaza' claim was stunned

2024-04-24
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an AI targeting system used by the Israeli military) that allegedly uses data from WhatsApp to target Palestinians, leading to harm (killings). This fits the definition of an AI Incident because the AI system's use is directly linked to injury or harm to people. The article reports the harm as ongoing and the AI system's role as pivotal, even if the claim is disputed. Therefore, the event is best classified as an AI Incident based on the reported direct or indirect role of AI in causing harm.

A grave accusation against WhatsApp! A shocking Gaza claim

2024-04-23
Haber7.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an AI targeting system) that allegedly uses data from WhatsApp to identify targets for airstrikes, which have resulted in significant loss of life and injuries. This meets the definition of an AI Incident because the AI system's use has directly led to harm to groups of people (Palestinians in Gaza). The article describes realized harm, not just potential harm, and the AI system's role is pivotal in the targeting process. Therefore, this is classified as an AI Incident.

Claim that tech giant Meta helped Israel

2024-04-25
NTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI program (Lavender) used by the Israeli military to identify live targets, which allegedly connects to WhatsApp data from Meta. This implies AI system involvement in military targeting that could cause harm to people (Palestinians), constituting violations of human rights and harm to communities. Despite Meta's denial and lack of direct proof in the article, the allegations describe realized harm linked to AI use. Therefore, this event meets the criteria for an AI Incident due to the direct or indirect role of AI systems in causing harm.

Claim that tech giant Meta helped Israel

2024-04-25
Dünya
Why's our monitor labelling this an incident or hazard?
The involvement of an AI system (Lavender) used to identify live targets based on WhatsApp data indicates AI system use in a context leading to potential or actual harm to persons (human rights violations and harm to communities). The censorship of pro-Palestinian content by Meta's platforms also constitutes a violation of rights. Despite Meta's denial and lack of direct proof in the article, the allegations describe realized harms linked to AI system use and content moderation algorithms. Hence, this qualifies as an AI Incident due to direct or indirect harm caused by AI system use in violation of human rights and harm to communities.

Israel: using AI to exterminate more civilians

2024-04-05
Blondet & Friends
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Lavender) used in military targeting decisions that directly led to the deaths of thousands of Palestinians, including civilians. The AI system's outputs were treated as authoritative kill lists with minimal human verification, causing widespread harm and loss of life. This is a clear case where the AI system's use directly led to injury and harm to people and communities, fulfilling the definition of an AI Incident. The detailed description of the AI's role, the scale of harm, and the lack of adequate human oversight confirm this classification.

Raid on NGO, Biden's fury at Israel: 'Outraged, it was not an accident'

2024-04-03
Affari Italiani
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system for military targeting that led to thousands of deaths, mostly civilians, due to reliance on AI-generated suspect lists with minimal human oversight. This constitutes direct harm to people and communities and potential human rights violations, fulfilling the criteria for an AI Incident. The AI system's role was pivotal in the harm caused, as it identified targets that were then bombed, often in their homes, resulting in significant civilian casualties.

20 seconds to kill: the machine decides

2024-04-05
il manifesto
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Lavender) used by the Israeli military to generate kill lists and target individuals for airstrikes. The AI system's outputs were directly used to authorize lethal attacks with minimal human review, leading to the deaths of thousands of Palestinians, including civilians, women, and children. The AI system's error rate and the lack of adequate human oversight caused wrongful targeting and mass civilian casualties, fulfilling the criteria for harm to persons and communities. The AI system's role was pivotal in the decision-making process and the resulting harm. This meets the definition of an AI Incident as the AI system's use directly led to injury, death, and violations of human rights.

What role for AI systems in the war in Gaza?

2024-04-07
Analisi Difesa
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used in military targeting decisions that has directly influenced lethal actions against individuals, leading to harm and potential violations of human rights. The system's outputs were treated as human decisions, with insufficient verification, increasing the risk of wrongful harm. This meets the definition of an AI Incident because the AI system's use has directly led to harm to persons and communities, and breaches of fundamental rights. Although the IDF denies some claims, the investigative report and the described consequences indicate realized harm linked to AI use in warfare, not just a potential hazard or complementary information.

Guterres: I am deeply troubled by the Israeli military's use of AI in the bombing of Gaza

2024-04-05
Informazione.it
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used by the Israeli military to identify targets for bombing, which directly led to the deaths and injuries of tens of thousands of Palestinians, including civilians. The AI system's outputs were used with minimal human review, causing significant harm to people and communities, fulfilling the criteria for an AI Incident. The harm is direct and material, involving injury and death, and violations of human rights and international law. Therefore, this event is classified as an AI Incident.

Israel used AI to identify 37,000 targets in Gaza

2024-04-03
Askanews
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in a military context to identify targets for airstrikes. The AI system's outputs were used with minimal human oversight, leading to attacks that caused thousands of deaths, including civilians. This constitutes direct harm to people and violations of human rights, fulfilling the criteria for an AI Incident. The AI system's role was pivotal in the harm caused, as it generated the target lists that led to lethal actions.

'Lavender': the artificial intelligence machine directing Israel's bombing of Gaza

2024-04-05
nogeoingegneria.com
Why's our monitor labelling this an incident or hazard?
Lavender is explicitly described as an AI system used to generate target lists for military strikes. Its use directly led to the deaths of thousands of civilians and destruction of property, fulfilling the criteria for an AI Incident under the OECD framework. The AI system's outputs were relied upon with minimal human oversight, causing harm to people and communities. The article details realized harm, not just potential harm, and the AI system's involvement is central to the incident. Therefore, the event is classified as an AI Incident.

'Where's Daddy': the artificial intelligence software with which Israel wiped out entire families

2024-04-07
Africa Express: notizie dal continente dimenticato
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in military targeting that has directly caused harm to civilians and destruction of property, fulfilling the criteria for an AI Incident. The harms include injury and death to people, harm to communities, and violations of human rights and international law. The AI system's role is pivotal in the decision-making process leading to these harms. Although there is a military denial of fully autonomous targeting, the article's detailed description of the AI's use and consequences supports classification as an AI Incident rather than a hazard or complementary information.

For the IDF's omniscient algorithm, innocent victims are a calculated risk

2024-04-05
editorialedomani.it
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Lavender) employing facial recognition and data analysis to identify targets for military strikes, which directly leads to harm (death or injury) of innocent civilians. The AI system's role is pivotal in generating target lists with a known error rate, and human oversight is minimal, increasing the risk of wrongful harm. This fits the definition of an AI Incident, as the AI system's use has directly led to harm to persons and violations of human rights. The harm is realized, not just potential, and the AI system's involvement is central to the event described.

Guardian: Israel's 'Lavender' artificial intelligence system selected 37,000 targets for bombing

2024-04-05
Newsbeast.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Lavender') used in military targeting that directly led to harm: deaths of civilians, destruction of homes, and large-scale bombing based on AI-generated target lists. The AI system's role was pivotal in identifying targets, and the resulting harm includes injury and death to persons and harm to communities and property. The article confirms the AI system's involvement and the realized harm, meeting the criteria for an AI Incident under the OECD framework.

Israel/AI against Hamas: how the algorithm selected thousands of targets and tolerated collateral damage

2024-04-03
ΠΟΛΙΤΗΣ
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used for target selection in military operations, which has directly contributed to the killing of thousands of people, including civilians, thus causing injury and harm. The system's decision-making process and tolerance for collateral damage have led to real, materialized harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people and violations of human rights. The involvement is in the use of the AI system in a lethal military context, with documented consequences. Therefore, the event is classified as an AI Incident.

Shock revelation: 'Gaza was flattened with artificial intelligence'

2024-04-05
Protagon.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Lavender') in military targeting that directly led to harm: the deaths of civilians and destruction of homes in Gaza. The AI system's outputs were pivotal in selecting targets, and the military's reliance on it contributed to significant harm. This fits the definition of an AI Incident, as the AI system's use directly led to injury and harm to groups of people and harm to communities and property. The article also discusses the ethical and legal implications, reinforcing the severity of the harm caused.

Israel and artificial intelligence against Hamas: how the algorithm selected thousands of targets

2024-04-03
ΡΕΠΟΡΤΕΡ
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems were used to select targets in a military conflict, resulting in thousands of deaths, including civilians. The AI systems' outputs were used to make lethal decisions, with documented collateral damage and ethical issues. This meets the definition of an AI Incident because the AI system's use directly led to harm to persons and communities (harm categories a and d). The involvement is not hypothetical or potential but realized, and the harm is significant and clearly articulated. Therefore, the event is classified as an AI Incident.

The Guardian: the 'Lavender' AI system picked 37,000 possible targets for the bombings in Gaza

2024-04-05
ertnews.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used in military targeting that led to bombings causing death and destruction, which constitutes direct harm to people and communities. The AI system's involvement in selecting targets is central to the incident, fulfilling the criteria for an AI Incident. The harm is materialized, not just potential, and involves violations of human rights and harm to communities. Therefore, this event is classified as an AI Incident.

What is the 'Lavender army' fighting in the ranks of the occupation forces to annihilate Gaza?

2024-04-04
صدى البلد
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used in military targeting decisions that have directly caused harm, including loss of life and violations of rights. The AI system's role is pivotal in selecting targets for lethal strikes, leading to an AI Incident as defined by the framework. The harm is realized and ongoing, not merely potential, and involves violations of human rights and harm to communities.

Israeli occupation forces use artificial intelligence programs to kill innocents in Gaza

2024-04-04
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Lavender, The Gospel, Where's Daddy?) used by the Israeli military to identify and target individuals in Gaza, leading to lethal attacks that have killed civilians and families. The AI's role is pivotal in selecting targets and enabling strikes with reduced human intervention, which has directly led to harm to people and communities, fulfilling the criteria for an AI Incident. The harm is realized and significant, including loss of life and potential human rights violations. The AI systems' operation without sufficient human control and the resulting civilian casualties confirm the classification as an AI Incident rather than a hazard or complementary information.

The Guardian: Israel enlisted the 'Lavender army' in bombing Gaza, killing thousands of civilians

2024-04-04
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system ('Lavender') used by the Israeli military to process intelligence data and identify targets for airstrikes. The use of this AI system directly led to the killing of thousands of civilians and destruction of property, fulfilling the criteria for harm to persons and communities. The AI system's role was pivotal in the targeting process, and the harm is realized and significant. The involvement is in the use of the AI system in military operations causing injury and death, as well as violations of human rights. Hence, this is clearly an AI Incident rather than a hazard or complementary information.

'Lavender' and 'dumb bombs': Israel used artificial intelligence to identify 37,000 Hamas targets

2024-04-04
Dostor
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Lavender') in military targeting that led to the identification of 37,000 potential targets and subsequent airstrikes causing civilian deaths and destruction. The AI system's role was central in processing intelligence and facilitating rapid targeting decisions, which directly resulted in harm to civilians and property. This meets the criteria for an AI Incident as the AI system's use directly led to violations of human rights and harm to communities. The article also discusses ethical and legal concerns, reinforcing the significance of the harm caused. Thus, the event is classified as an AI Incident.

Washington reviews a report on Israel's use of artificial intelligence in the bombing of Gaza

2024-04-05
Aljazeera
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in military operations to identify targets, which has directly led to harm through airstrikes causing civilian deaths. The use of AI to classify individuals as suspects without human oversight, and the resulting targeting, align with harms to health, human rights violations, and harm to communities. Although the Israeli military denies using AI for targeting, the report and intelligence testimonies indicate AI's pivotal role in causing harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use in conflict.

It led to the deaths of 37,000 Palestinians: what is the 'Lavender' system the occupation used in the Gaza war?

2024-04-04
AL Masry Al Youm
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly named "Lavender" used in military targeting during an armed conflict. The AI system's development and use directly contributed to lethal strikes causing thousands of deaths, including civilians, which is a clear harm to persons and communities. The AI system's role was pivotal in identifying targets and facilitating attacks, meeting the criteria for an AI Incident. The harm is realized and significant, including loss of life and potential violations of human rights. The Israeli military's denial does not negate the reported evidence and testimonies indicating AI involvement in causing harm.

America reviews a report on Israel's use of artificial intelligence in the bombing of Gaza

2024-04-05
سكاي نيوز عربية
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Lavender') in a military context to identify targets for airstrikes. The AI system's outputs directly influenced lethal actions that caused injury and death to civilians, constituting harm to persons and communities. The article details the AI's central role in generating target lists without sufficient human review, leading to significant civilian casualties. This meets the definition of an AI Incident, as the AI system's use has directly led to harm to people and communities and raises concerns about violations of human rights and of obligations under international humanitarian law.

With the push of a single button: Israel enlisted the 'Lavender army' in bombing Gaza

2024-04-04
سكاي نيوز عربية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Lavender') in military targeting decisions that directly led to the killing of thousands of civilians and destruction of homes in Gaza. The AI system processed data to identify tens of thousands of potential targets, and its outputs were used to conduct strikes with unguided bombs, causing widespread harm. This meets the criteria for an AI Incident because the AI system's use directly led to injury and harm to people (harm category a), harm to communities (d), and likely violations of human rights and international law (c). The involvement is through the use of the AI system in military operations, and the harm is realized and significant.

Saraya Agency: Washington examines 'Israel's' use of artificial intelligence in the bombing of Gaza

2024-04-05
وكالة أنباء سرايا (حرية سقفها السماء)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Lavender') in military targeting decisions that have directly led to large-scale civilian casualties and destruction, fulfilling the criteria for an AI Incident. The AI system's role in processing data to identify targets and the subsequent airstrikes causing harm to civilians constitute direct harm to persons and communities. The article provides evidence of realized harm, not just potential risk, and thus this is classified as an AI Incident rather than a hazard or complementary information.

Israel used artificial intelligence to target 37,000 Palestinians in Gaza

2024-04-04
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems ('Lavender' and 'Gospel') used for targeting individuals and buildings in Gaza, with a reported 90% accuracy in identifying targets. The AI's outputs were used to approve and conduct strikes that resulted in thousands of civilian deaths and widespread destruction, fulfilling the criteria for harm to people and communities. The AI system's role was pivotal in accelerating and scaling the targeting process, effectively enabling a 'low-cost killing' strategy. This direct link between AI use and realized harm classifies the event as an AI Incident rather than a hazard or complementary information. The event involves the use of AI in a military context causing significant human rights violations and civilian harm, meeting the definition of an AI Incident.

Report: Israel turned to the 'Lavender' artificial intelligence application to identify the Palestinians it wanted killed

2024-04-04
صحيفة الشرق الأوسط
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Lavender') in military targeting decisions that have directly led to the deaths of thousands of Palestinians, including civilians. The AI system's outputs were used to authorize lethal strikes, including indiscriminate bombing causing civilian casualties. This constitutes direct harm to people and communities and breaches of human rights, fulfilling the criteria for an AI Incident. The involvement is not hypothetical or potential but realized harm caused by the AI system's use in conflict operations.

Lavender and dumb bombs: Israel identified 37,000 targets in Gaza using artificial intelligence

2024-04-04
العين الإخبارية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used to identify 37,000 targets in Gaza, which led to military strikes causing thousands of civilian deaths and destruction of property. The AI system's outputs were used to make lethal targeting decisions, including the use of unguided bombs that destroyed entire homes. This clearly meets the definition of an AI Incident, as the AI system's use directly led to harm to persons and communities, and violations of human rights. The involvement is not hypothetical or potential but realized and documented. The military's reliance on AI for target selection and the resulting civilian casualties confirm the direct link between AI use and harm.

The use of artificial intelligence to facilitate the killing of Palestinians in Gaza

2024-04-05
قناة العالم الاخبارية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used for military targeting that has directly led to the killing of civilians and destruction of property. This fits the definition of an AI Incident because the AI system's use has directly caused harm to people (a) and communities (d), and likely involves violations of human rights (c). The harm is realized and ongoing, not merely potential. Therefore, the event is classified as an AI Incident.

'Lavender': the Israeli artificial intelligence behind the 'destruction' of Gaza

2024-04-04
@Elaph
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used in military operations to identify targets, which directly led to strikes causing large-scale civilian deaths and destruction in Gaza. The AI system processed data to generate target lists, which were then acted upon with unguided bombs causing harm to people and communities. This is a clear case of AI use leading directly to harm (civilian casualties and destruction), fulfilling the definition of an AI Incident. The involvement is not speculative or potential but actual and documented, with significant ethical and legal implications.

Technology-enabled crimes: Israeli occupation forces use artificial intelligence programs to kill innocents in Gaza

2024-04-04
صوت الأمة
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Lavender) to identify and target individuals for attacks, which directly leads to harm (killings) and human rights violations. The AI system's role is pivotal in accelerating target identification and bypassing human intervention, which directly contributes to the harm. This fits the definition of an AI Incident as the AI system's use has directly led to injury or harm to persons and violations of human rights.

Report: the Israeli military used artificial intelligence to identify 37,000 human targets in Gaza

2024-04-04
Al-Ayyam newspaper
Why's our monitor labelling this an incident or hazard?
The report explicitly states that an AI system was used to identify human targets, which directly led to the deaths of civilians and violations of human rights. The AI system's role was pivotal in decision-making with reduced human intervention, causing injury and loss of life (harm to persons) and breaches of fundamental rights. Therefore, this event meets the criteria for an AI Incident as the AI system's use directly caused significant harm.

How civilian lives in Gaza came to be at the mercy of the occupation's artificial intelligence programs

2024-04-04
المركز الفلسطيني للإعلام
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for targeting in military operations that have directly caused civilian deaths and destruction of property. The AI's role is pivotal in automating targeting decisions that led to harm, including the killing of non-combatants and destruction of civilian infrastructure. This constitutes injury and harm to people, harm to communities, and violations of human rights, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's malfunction or design (e.g., 10% error margin, misidentification) is a contributing factor.

Israel deploys artificial intelligence in the annihilation of Gazans

2024-04-04
MEO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in military targeting that directly led to the deaths of thousands, including civilians, which constitutes injury and harm to groups of people and violations of human rights. The AI system's use in lethal operations with admitted civilian casualties and lack of proportionality clearly meets the criteria for an AI Incident, as the AI's role was pivotal in causing direct harm and rights violations.

America examines Israel's use of artificial intelligence in the bombing of Gaza

2024-04-05
tayyar.org
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system for target identification in military operations, which has directly led to harm including civilian deaths and destruction of property. The AI system's role is pivotal in the decision-making process for strikes, and the resulting harm fits the definition of an AI Incident under categories (a) injury or harm to people and (d) harm to communities and property. Therefore, this event qualifies as an AI Incident.

The occupation used artificial intelligence to identify 37,000 human targets in Gaza

2024-04-04
Al-Ahed News
Why's our monitor labelling this an incident or hazard?
The article explicitly states the use of an AI system for targeting individuals in a military conflict, leading to the deaths of thousands of civilians and destruction of property. The AI system's role was pivotal in identifying targets and approving strikes that caused harm to people and communities, fulfilling the criteria for an AI Incident. The harm is realized, severe, and directly linked to the AI system's outputs and use in military operations.

Israel enlisted the army of the…

2024-04-05
Arabstoday
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in a military context where its outputs directly contributed to lethal strikes causing civilian deaths and destruction of homes. The AI system's role in target identification and the subsequent attacks that led to harm to civilians and communities clearly fits the definition of an AI Incident. The harms include injury and death to persons, harm to communities, and violations of human rights. The article provides evidence of realized harm linked to the AI system's use, not just potential harm, thus classifying it as an AI Incident rather than a hazard or complementary information.

'Israel' used artificial intelligence to identify 37,000 human targets in Gaza

2024-04-04
PNN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in military targeting that directly led to harm: the deaths of civilians and destruction of homes. This constitutes injury and harm to groups of people (a), and harm to property and communities (d). The AI system's malfunction or design (non-precise targeting, reliance on unreliable data) contributed to these harms. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's use directly caused significant harm.

Israel: civilian lives in Gaza at the mercy of artificial intelligence

2024-04-04
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems used in military targeting that have directly caused civilian deaths and destruction in Gaza. The AI's automated classification and targeting decisions led to thousands of civilian casualties, with admitted error margins and no human verification. This clearly meets the definition of an AI Incident, as the AI's use directly led to harm to people and property, and violations of human rights. The harm is realized, not just potential, and the AI's role is pivotal in the incident. Hence, the classification is AI Incident.

The Israeli military uses artificial intelligence in its war on Gaza

2024-04-04
القدس
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used by the Israeli military to identify targets, which directly contributed to military actions causing civilian deaths. This constitutes direct harm to people, fulfilling the criteria for an AI Incident. The AI system's role was pivotal in processing data and generating target lists that led to lethal strikes. The harm is not hypothetical or potential but has occurred, and the AI system's use is central to the incident. Hence, the classification as AI Incident is justified.

On day 181 of the war of annihilation on Gaza: 33,037 dead, 75,668 wounded, and 'artificial intelligence' used to wipe out families

2024-04-04
almashhadnews.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Lavender, The Gospel, Where's Daddy?) used by the Israeli military to identify and target individuals in Gaza, leading to lethal attacks that have caused deaths and injuries to civilians, including entire families. This is a direct use of AI systems causing harm to people and communities, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential. The article also discusses human rights abuses and medical mistreatment in detention, but these are not directly linked to AI systems. Therefore, the classification is AI Incident due to the direct lethal harm caused by AI-enabled targeting systems.

Washington reviews a report on Israel's use of artificial intelligence in the bombing of Gaza

2024-04-05
بوابة فيتو
Why's our monitor labelling this an incident or hazard?
The article centers on the possible use of an AI system by the Israeli military for target identification in Gaza, which if true, could lead to harm to civilians (harm to persons and communities). Since the use is not confirmed and harm is not yet established, but the potential for harm is credible and significant, this situation fits the definition of an AI Hazard rather than an AI Incident. The US is reviewing the report, and the military denies the use, so no confirmed incident has occurred. Therefore, the event is best classified as an AI Hazard due to the plausible risk of harm from AI-enabled military targeting without human oversight.

America reviews a report on Israel's use of artificial intelligence in the bombing of Gaza

2024-04-05
سكاي نيوز عربية
Why's our monitor labelling this an incident or hazard?
The article mentions the use of AI by the Israeli military to assist in targeting for bombings in Gaza. This involves an AI system used in a military context where harm to people and communities is a direct consequence. Although the article states that the U.S. is reviewing the report, the use of AI in targeting for airstrikes is already linked to harm. Therefore, this qualifies as an AI Incident due to the AI system's involvement in actions causing harm to people and communities.

Saraya Agency: Urgent: Washington reviews a report that Israel is using artificial intelligence to select its targets in Gaza

2024-04-04
وكالة أنباء سرايا (حرية سقفها السماء)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli military for target identification, which is an AI system involved in a high-stakes context with potential for significant harm (injury, death, harm to communities). Since the report is under review and no confirmed harm or incident is described, the event is best classified as an AI Hazard, reflecting the plausible future harm that could result from such AI use in military targeting.

America reviews a report on Israel's use of artificial intelligence in the bombing of Gaza

2024-04-05
صدى البلد
Why's our monitor labelling this an incident or hazard?
The article describes a situation where AI is reportedly used in military targeting and suspect classification, which could plausibly lead to significant harms including injury, human rights violations, and harm to communities. Since the US is still reviewing and has not verified the report, no confirmed harm is established. Thus, this event fits the definition of an AI Hazard, as the development or use of AI systems could plausibly lead to an AI Incident, but no direct or indirect harm has been confirmed yet.

America reviews a report saying Israel is using artificial intelligence to select its bombing targets in Gaza

2024-04-04
القدس العربي
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli military to assist in targeting for airstrikes, which involves an AI system in a use case that can cause harm to people and communities (harm to health and communities). The harm is plausible and likely given the context of military strikes. However, the U.S. is only reviewing the report, and no confirmed harm or confirmed AI use is established yet. Given the potential for direct harm if the AI is used as described, this situation fits best as an AI Hazard rather than an AI Incident at this stage.

The United States reviews a report saying the Israeli military is using artificial intelligence to select targets to bomb in Gaza

2024-04-05
LBCI Lebanon
Why's our monitor labelling this an incident or hazard?
The article mentions the possible use of AI by the Israeli military for target identification, which could plausibly lead to harm given the context of military strikes. However, the event is currently a report under review with no confirmed harm or incident described. Therefore, this constitutes an AI Hazard, as the use of AI in military targeting could plausibly lead to harm, but no direct or indirect harm has been confirmed or reported yet.

Kirby: We are reviewing a report on Israel's use of artificial intelligence to select its targets in Gaza

2024-04-05
صوت بيروت إنترناشونال
Why's our monitor labelling this an incident or hazard?
The article centers on a report about the potential use of AI by the Israeli military for target identification, which if true, could lead to serious human rights violations and harm to civilians. However, the use is denied by the military, and no direct harm from AI use is confirmed in the article. Thus, the event is best classified as an AI Hazard, reflecting a credible risk of harm from AI use in military targeting without confirmed realized harm.

Washington: We are reviewing reports of Israel's use of artificial intelligence in striking its targets in Gaza

2024-04-05
Lebanese Forces Official Website
Why's our monitor labelling this an incident or hazard?
The article centers on a report about the use of AI in military targeting, which could plausibly lead to serious harm such as civilian casualties and human rights violations. Although the harm is not confirmed as having occurred due to AI, the potential for such harm is credible and significant. The Israeli military's denial and the U.S. review indicate uncertainty about the AI system's actual use and impact. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if the AI system is indeed used as reported.

Washington reviews a report on Israel's use of artificial intelligence in the bombing of Gaza

2024-04-05
جريدة المدى
Why's our monitor labelling this an incident or hazard?
The article centers on a report alleging the use of an AI system by the Israeli military to classify suspects in Gaza without sufficient human oversight, which could lead to violations of human rights and harm to communities. Although the military denies this use and the U.S. has not verified the report, the potential for AI-driven harm in this context is credible and significant. There is no confirmed direct harm reported yet, but the plausible risk of harm from such AI use in military targeting justifies classification as an AI Hazard rather than an AI Incident. The article does not focus on responses or broader ecosystem context, so it is not Complementary Information. The event is clearly related to AI systems and their potential misuse, so it is not Unrelated.

America reviews reports of 'Israel's' use of artificial intelligence in striking its targets in Gaza

2024-04-05
Addiyar
Why's our monitor labelling this an incident or hazard?
Although the article mentions the alleged use of an AI system ('Lavender') for target identification, the Israeli military denies such use, and the U.S. has not verified the claims. There is no confirmed direct or indirect harm caused by AI systems as per the article; rather, it is a report under review with conflicting statements. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context and updates on potential AI use in a conflict setting, contributing to understanding the broader AI ecosystem and its implications without confirming harm or plausible future harm.

Washington reviews reports that Israel is using artificial intelligence in the Gaza war

2024-04-05
اندبندنت عربية
Why's our monitor labelling this an incident or hazard?
The article involves an AI system allegedly used in military targeting, which could lead to significant harm if true (e.g., wrongful targeting, violations of human rights). However, the Israeli military denies such AI use, and no confirmed incident of harm caused by AI is reported. The U.S. is still reviewing the claims. Therefore, this situation represents a plausible risk of harm from AI use in a conflict setting but no confirmed harm has occurred or been verified. Hence, it fits the definition of an AI Hazard rather than an AI Incident. The rest of the article focuses on humanitarian aid and conflict events unrelated to AI systems.

Is Israel using AI to identify targets in Gaza war?

2024-04-04
Channel 4
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Lavender) for target identification in a conflict zone. The reported error rate causing civilian deaths constitutes harm to people, fulfilling the criteria for an AI Incident. Although Israel denies the use, the investigation and claims indicate the AI system's involvement in causing harm, directly or indirectly, through its outputs leading to lethal actions.

Israel's reported use of AI in its Gaza war may explain thousands of civilian deaths

2024-04-04
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lavender) used in military targeting decisions, which directly caused harm to civilians, including thousands of deaths. The system's development and use led to violations of human rights and harm to communities, fulfilling the criteria for an AI Incident. The article details realized harm resulting from the AI system's use, not just potential or hypothetical risks, and thus it is not an AI Hazard or Complementary Information. The involvement of AI in causing direct harm to people in a conflict setting clearly classifies this as an AI Incident.

Israel is using artificial intelligence to help pick bombing targets in Gaza, report says

2024-04-04
CNN
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in military targeting decisions that have directly led to civilian deaths and widespread harm in Gaza. The AI system's outputs are used with minimal human oversight, contributing to lethal airstrikes that have killed thousands, including non-combatants. This meets the definition of an AI Incident because the AI system's use has directly led to injury and harm to groups of people, as well as violations of human rights. The article provides detailed evidence of realized harm caused by the AI system's involvement in military operations.

A journalist's report alleged that Israel is using AI tools in its war in Gaza: "It's completely dehumanizing"

2024-04-04
CNN International
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in warfare to identify targets, with reported consequences including civilian deaths and insufficient human oversight. This directly links the AI system's use to harm to people and possible violations of rights, fitting the definition of an AI Incident. Although the Israel Defense Forces deny AI use in targeting, the report's detailed allegations and the described harms justify classification as an AI Incident due to the direct or indirect role of the AI system in causing harm.

Gaza War: Palestinian magazine claims Israel using AI Systems to assassinate Hamas militants, civilians

2024-04-05
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Lavender and Gospel) used in a military context to identify targets, which directly leads to harm including civilian casualties. This constitutes an AI Incident because the AI's use in targeting has directly led to injury and harm to people, fulfilling the criteria of harm to persons. The involvement of AI in lethal targeting and the resulting civilian harm is a clear case of an AI Incident rather than a hazard or complementary information.

'The machine did it coldly': Israel used AI to identify 37,000 Hamas targets

2024-04-03
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Lavender) in military targeting that directly contributed to the deaths of thousands of civilians, which is a clear harm to health and communities. The AI system was central to identifying targets and was used in a way that accepted high collateral damage, including civilian deaths. This meets the criteria for an AI Incident because the AI system's use directly led to significant harm, including violations of human rights and loss of life. The event is not merely a potential risk or complementary information but a documented case of AI-enabled harm in an armed conflict.

Top Israeli spy chief exposes his true identity in online security lapse

2024-04-05
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems developed and deployed by Unit 8200 under Sariel's leadership, including AI-powered target recommendation systems used in military operations in Gaza. These systems influence lethal decisions that have resulted in significant harm, including deaths and kidnappings, constituting harm to communities and potential violations of human rights. The AI systems' role is pivotal in these harms, as they are integral to the targeting process. Hence, this qualifies as an AI Incident due to the direct or indirect link between AI system use and realized harm.

Israel's 'Where's Daddy?' AI system helps target suspected Hamas militants when they're at home with their families, report says

2024-04-07
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems ('Lavender' and 'Where's Daddy') used for identifying and tracking targets, which are then bombed, often resulting in civilian casualties. The AI system's errors and minimal human oversight have directly contributed to harm to people and potential violations of human rights and international law. The harm is realized and ongoing, not merely potential. Thus, this event meets the criteria for an AI Incident as the AI system's use has directly led to injury and violations of rights.

Early on in the war, IDF gave clearance to allow 20 civilian deaths for every low-ranking Hamas suspect, intelligence sources said: report

2024-04-04
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI system was used to identify targets for military strikes, with a policy that allowed a high ratio of civilian deaths per militant killed. The AI system's outputs were used with minimal human oversight, effectively making it a key factor in decisions that led to large-scale civilian casualties. This meets the definition of an AI Incident because the AI system's use directly led to injury and harm to groups of people (civilian deaths), fulfilling harm criteria (a) and (d). The scale and nature of harm, combined with the AI's pivotal role in targeting, justify classification as an AI Incident rather than a hazard or complementary information.
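
To put the reported ratio in concrete terms, here is an illustrative calculation rather than a figure from the article: if the 20-to-1 allowance were applied across a hypothetical list of 1,000 low-ranking suspects, the policy as described would permit up to 1,000 × 20 = 20,000 civilian deaths.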

Israel is using AI to identify bombing targets in Gaza, report says

2024-04-05
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used to identify targets for bombing, which directly led to harm including deaths and injuries. The system made errors in about 10% of cases, marking individuals with loose or no connection to militant groups, indicating wrongful harm. The reliance on AI outputs as if they were human decisions, with minimal human verification, shows the AI system's role in causing harm. This fits the definition of an AI Incident as the AI system's use directly led to harm to persons and communities.

Israel may be using AI to identify militants, ignoring 10% error rate

2024-04-04
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used by the Israeli military to classify individuals as militants and target them for air strikes. The AI's outputs have directly led to the deaths of thousands, including civilians, which constitutes injury and harm to groups of people (harm category a) and harm to communities (d). The AI's 10% error rate and reliance on flawed data (e.g., phone tracking) have caused wrongful targeting. The AI system's use in lethal military operations with documented casualties and collateral damage clearly meets the definition of an AI Incident, as the AI's development and use have directly led to significant harm.

White House investigating reports Israel used AI for targets in Gaza

2024-04-05
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used by the Israeli military to classify tens of thousands of Palestinians as militants and target them for air strikes. The AI's outputs were used to plan lethal operations that have resulted in over 30,000 deaths, including civilians, which constitutes direct harm to people and communities. The AI system's misidentification rate and the delegation of targeting decisions to it indicate malfunction and misuse. The harms include injury and death, harm to communities, and likely violations of human rights and international law. Therefore, this event meets the criteria for an AI Incident due to the direct and significant harm caused by the AI system's use.

UN chief 'deeply troubled' by reports Israel using AI to identify...

2024-04-05
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (Lavender and Gospel) in military targeting decisions that have directly resulted in civilian deaths and harm in Gaza. The AI system's role in identifying targets with little human oversight and permissive policies for civilian casualties indicates the AI's involvement in causing harm. The harms include injury and death to civilians and potential violations of human rights and war crimes. Although the Israeli military denies using AI for targeting, the report is based on multiple sources and is taken seriously by the UN Secretary-General and human rights experts. Given the direct link between AI use and realized harm, this event is classified as an AI Incident.

Consortium News: Caitlin Johnstone: Israel's 'Human Shields' Lie

2024-04-06
Apokalyps Nu!
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used to identify and target individuals for lethal strikes, with documented civilian casualties including children. The AI system's outputs are used to make lethal decisions with minimal human oversight, directly causing harm to people and communities and violating human rights. This fits the definition of an AI Incident as the AI system's use has directly led to harm (death and human rights violations).

Israel Facing Scrutiny Over Reportedly Using AI For Gaza Targets -- Top UN Official Says He's 'Deeply Troubled'

2024-04-05
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used to identify targets, which is an AI system by definition. The use of this system allegedly led to civilian deaths and potential war crimes, constituting harm to people and violations of human rights. The harm is realized, not just potential, and the AI system's role is central to the incident. Despite denials, the report and international concern indicate the AI system's involvement in causing harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Israeli 'AI Targeting System' Has Caused Huge Civilian Casualty Count In Gaza: Report

2024-04-05
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as being used for military targeting decisions with minimal human review, leading to thousands of civilian deaths and collateral damage. The AI system's role is pivotal in selecting targets and automating kill lists, which directly caused harm to civilians, fulfilling the criteria for an AI Incident. The harm includes injury and death to people, violations of human rights, and harm to communities. The article provides detailed evidence of realized harm, not just potential harm, and thus it is not merely a hazard or complementary information.

Israel reportedly used 'Lavender' AI system to ID thousands of dubious targets in Gaza war

2024-04-04
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used in military targeting decisions that resulted in bombings causing civilian deaths and destruction. The AI system's role in generating thousands of targets with scant human oversight directly contributed to harm to people and communities, fulfilling the criteria for an AI Incident. The harm is realized and significant, including loss of life and violation of rights. Although the IDF denies direct AI targeting, the report is based on multiple insider testimonies indicating the AI system's pivotal role in the harm caused.

Israeli Military Using AI to Select Targets in Gaza With 'Rubber Stamp' From Human Operator: Report

2024-04-03
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used to select human targets in a conflict zone, with the AI system's outputs directly leading to lethal military actions that caused deaths and destruction of civilian homes. The human operators' minimal review of AI-generated targets and the system's known misidentifications further implicate the AI system in causing harm. The harms include injury and death to individuals, harm to communities, and violations of human rights. The direct causal link between the AI system's use and these harms meets the criteria for an AI Incident rather than a hazard or complementary information.

'The machine did it coldly': Israel used AI to identify 37,000 Hamas targets

2024-04-03
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Lavender) used in military targeting that directly led to harm, including the deaths of thousands of civilians. The AI system was integral to the identification and selection of targets, and its outputs were used to authorize strikes with significant collateral damage. This constitutes direct involvement of an AI system in materialized, significant harm to people and communities, including violations of human rights. Therefore, the event is classified as an AI Incident.

Israel used secretive AI program called 'Lavender' to identify...

2024-04-05
New York Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used for military targeting that had a 10% error rate and was responsible for identifying thousands of targets, including many low-level operatives. The system's use led to the killing of many civilians as collateral damage, with reports of up to 20 civilians killed per junior operative targeted. The AI system's outputs were used with scant human review, indicating reliance on AI decisions that directly caused harm to people. This fits the definition of an AI Incident as the AI system's use directly led to injury and harm to groups of people and potential violations of human rights and international law.

UN chief 'deeply troubled' by reports Israel using AI to target civilians in Gaza

2024-04-06
Al-Ahram
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for targeting in military operations with minimal human oversight, leading to thousands of civilian deaths and injuries. This is a clear case where the AI system's use has directly led to significant harm to people and communities, including violations of human rights and potential war crimes. The involvement of AI in lethal decision-making with insufficient human control and the resulting civilian casualties meet the criteria for an AI Incident as defined. The harm is realized and severe, not merely potential, and the AI system's role is pivotal in the chain of events causing this harm.

Report: Israel used AI to identify bombing targets in Gaza

2024-04-04
The Verge
Why's our monitor labelling this an incident or hazard?
The Lavender system is an AI system used in the identification and authorization of lethal military targets, which directly leads to harm to persons (potentially injury or death) and raises serious concerns about violations of human rights and international law. Despite official denials about AI's autonomous role, the system's use in targeting decisions implicates AI in causing direct harm. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI in lethal targeting and the resulting harm to individuals.

'AI-assisted genocide': Israel reportedly used database for Gaza kill lists

2024-04-04
Al Jazeera Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-assisted system used in military targeting that has directly led to thousands of civilian deaths, which is a clear harm to human life and a violation of human rights. The system's error rate and the minimal human oversight indicate malfunction or misuse. The involvement of AI in decisions about life and death with such consequences fits the definition of an AI Incident, as the AI system's use has directly led to significant harm and potential war crimes. The event is not merely a potential risk or complementary information but a realized harm caused by AI.

Israeli army is using artificial intelligence to generate kill lists in Gaza: Report

2024-04-04
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The AI system Lavender is explicitly described as generating target lists that have been used to conduct military strikes causing mass casualties and destruction. The AI's role is pivotal in the decision-making process for lethal strikes, with reported errors and minimal human oversight contributing to civilian deaths and property destruction. This meets the criteria for an AI Incident as the AI system's use has directly led to harm to people (a), harm to communities and property (d), and violations of human rights (c).

Israel reportedly used 'Lavender' AI system to ID thousands of targets in Gaza war

2024-04-04
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used in military targeting decisions that led to bombings causing civilian deaths and harm. The AI system's use in generating targets with minimal human review directly contributed to harm (civilian casualties), fulfilling the criteria for an AI Incident. The harm includes injury and death to persons and harm to communities, as well as potential violations of international law and human rights. Although the IDF denies using AI to identify confirmed targets, the report is based on multiple intelligence officers' testimonies indicating AI's pivotal role in targeting decisions leading to harm. Hence, this is an AI Incident due to direct harm caused by AI system use in conflict.

Israeli 'AI weapon dubbed Lavender coldly identified 37k Hamas targets'

2024-04-03
The Sun
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used to identify targets for military strikes, which directly led to harm (deaths in Gaza). The system's outputs were reportedly used with minimal human oversight, and the error rate suggests a risk of wrongful targeting. This fits the definition of an AI Incident as the AI system's use has directly led to harm to people and communities, including potential violations of human rights and loss of life. Although the IDF denies using AI for target identification, the investigation's claims and the described use of AI for kill lists with minimal human checks indicate direct involvement of AI in causing harm.

Israeli army uses AI to identify tens of thousands of targets in Gaza

2024-04-05
Le Monde.fr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Lavender and Where's Daddy?) used by the Israeli army to identify and target individuals for assassination via aerial bombardments. The AI's role in designating targets directly led to harm, including civilian deaths and collateral damage, which are injuries to persons and harm to communities. The acceptance of a margin of error and collateral deaths further confirms the AI's involvement in causing harm. This meets the criteria for an AI Incident as the AI system's use has directly led to significant harm to people and communities, including violations of human rights and loss of life.

Israeli strikes deliberately targeted Gaza homes at night, with families present, reveals report

2024-04-04
mint
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI system was used to identify bombing targets, leading to airstrikes that killed thousands of civilians, including non-combatants such as women and children. The AI's error rate and the minimal human review indicate malfunction or misuse contributing to harm. The harms include injury and death (a), harm to communities (d), and probable violations of human rights and international law (c). Therefore, this event meets the criteria for an AI Incident due to the direct and significant harm caused by the AI system's deployment in military targeting.

Killing with AI? Israel using Lavender and Where's Daddy AI to identify bombing targets in Gaza

2024-04-04
India Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Lavender, The Gospel, and Where's Daddy?) used in military targeting and attacks, which have directly caused harm to people, including civilians, through lethal airstrikes. This constitutes injury and harm to groups of people and violations of human rights. The AI systems' role is pivotal in identifying targets and enabling attacks, fulfilling the criteria for an AI Incident. Although the Israeli military denies some claims, the report provides detailed accounts of AI-driven targeting and resulting harm, justifying classification as an AI Incident.

War: How Israeli soldiers use AI system, 'Lavender' to destroy Hamas

2024-04-04
Daily Post Nigeria
Why's our monitor labelling this an incident or hazard?
The AI system Lavender was used operationally to identify targets, and its outputs influenced lethal military actions that caused civilian deaths. The involvement of AI in targeting decisions that led to loss of life and harm to communities constitutes direct harm under the AI Incident definition. The article explicitly links the AI system's use to the harm caused, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Israeli Military Using AI to Select Targets in Gaza With 'Rubber Stamp' From Human Operator: Report

2024-04-03
Gizmodo
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Lavender) used in military targeting decisions that has directly led to harm, including deaths and destruction of civilian homes. The AI system's outputs were used to select targets with minimal human oversight, resulting in significant harm to individuals and communities, including violations of human rights and loss of life. This fits the definition of an AI Incident, as the AI system's use has directly led to injury and harm to groups of people and breaches of fundamental rights. The involvement of AI in the development and use of the kill lists and the resulting harm is clear and central to the event.

Israel Used Lavender AI To Zero In On Hamas Targets, Say Intel Officers, IDF Denies Claims: Report

2024-04-03
english
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lavender) used in military operations to identify targets, which directly led to civilian casualties and harm to communities, fulfilling the criteria for an AI Incident. The AI system's development and use in targeting, with alleged pre-set allowances for civilian deaths, shows the AI's role in causing harm. Despite official denial, the detailed report and intelligence sources provide sufficient basis to classify this as an AI Incident involving violations of human rights and harm to communities.

UN Chief Raises Alarm over Israel's Use of AI in Gaza Strikes

2024-04-06
Tasnim News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used for target identification with very limited human oversight, resulting in high civilian casualties in Gaza. The AI system's role was pivotal in decisions leading to loss of life and potential war crimes, fulfilling the criteria for an AI Incident as it directly led to harm to people and violations of human rights. The involvement is in the use of the AI system in military targeting, causing realized harm, not just potential harm.

Report: Israel Used AI System 'Lavender' to Identify Palestinian Potential Targets

2024-04-05
Asharq Al-Awsat English
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used in military targeting decisions that directly led to the deaths of civilians, which is a clear harm to groups of people. The AI system's outputs were used to authorize strikes that killed non-combatants, indicating the AI's role in causing injury and harm. This fits the definition of an AI Incident, as the AI system's use directly led to harm to people and raises legal and moral concerns.

AI in conflict: Israel's actions in Gaza and India's UN diplomacy

2024-04-06
The Financial Express
Why's our monitor labelling this an incident or hazard?
The Israeli military's use of the AI tool 'Lavender' to target suspected militants has directly resulted in large-scale civilian harm and deaths, fulfilling the criteria for an AI Incident due to harm to people and communities. The AI system's role in selecting targets that lead to collateral damage is pivotal. The UN's condemnation and India's abstention in the UN Human Rights Council are complementary information providing governance and diplomatic context but do not themselves constitute new incidents or hazards.

Concern over AI use in Gaza war

2024-04-07
The Star
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for military targeting that have directly led to civilian deaths and violations of human rights, fulfilling the criteria for an AI Incident. The harm is realized and significant, involving injury and death to people and breaches of fundamental rights. Although the Israeli military denies using AI in this way, the report and UN concerns indicate the AI system's role in causing harm. Therefore, this event is classified as an AI Incident due to the direct link between AI use and harm.

Israel military using AI to bomb targets in Gaza: Report

2024-04-05
ThePrint
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems in the military targeting process is explicit, with AI used to cross-reference intelligence and generate target lists. The human verification process appears minimal and insufficient, leading to direct harm to civilians and aid workers, which constitutes injury and harm to groups of people. This meets the criteria for an AI Incident because the AI system's use has directly contributed to harm (death and humanitarian crisis) through its role in target identification and bombing authorization.

UN chief 'deeply troubled' by reports Israel using AI to identify Gaza targets

2024-04-06
The Times of Israel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in military targeting that has directly led to civilian casualties, which is a clear harm to people and communities. The AI system's outputs were reportedly treated as human decisions, indicating reliance on AI in lethal operations. This meets the criteria for an AI Incident because the AI system's use has directly or indirectly caused harm to human life and violated human rights. The denial by the Israeli military does not negate the reported harm and AI involvement. Hence, the event is classified as an AI Incident.

Israel's reported use of AI in its Gaza war may explain thousands of civilian deaths

2024-04-04
Fortune
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Lavender) used in military targeting decisions that directly led to thousands of civilian deaths, including women and children, in Gaza. The AI system's use in assigning target scores and the lack of sufficient human oversight contributed to indiscriminate killings and excessive collateral damage. This fits the definition of an AI Incident, as the AI system's use directly led to harm to people and communities, and violations of human rights. Although the IDF denies using AI for targeting, the report is based on multiple intelligence sources and the described harms are consistent with the AI system's role. Therefore, this event is classified as an AI Incident.

Gaza Conflict: Israel used AI to strike thousands of Hamas targets

2024-04-03
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lavender) explicitly described as being used to identify military targets, with the military relying on its outputs to conduct strikes that killed civilians. This is a direct link between AI use and harm to human life, fulfilling the criteria for an AI Incident under the definition of harm to people. The article also highlights the ethical and legal concerns, reinforcing the significance of the harm caused. Therefore, the event is classified as an AI Incident.

What is 'Lavender', the AI program that Israel 'used' to create kill lists in Gaza?

2024-04-04
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lavender) that was used in a military context to identify targets for airstrikes. The AI system's outputs directly influenced lethal decisions, leading to the deaths of thousands of civilians, which constitutes harm to people and communities. The article provides detailed evidence of the AI's role in generating kill lists and the resulting harm, fulfilling the criteria for an AI Incident. The harm is realized and significant, and the AI system's involvement is direct and pivotal. Although the IDF denies the AI's role in targeting, the investigation and insider testimonies strongly support the classification as an AI Incident.

Israel denies using AI to identify Gaza airstrike targets

2024-04-04
The Age
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lavender) allegedly used to generate targets for airstrikes, leading to civilian deaths, which is harm to groups of people. This fits the definition of an AI Incident as the AI system's use has directly or indirectly led to harm. Despite the denial by the IDF, the report's detailed claims and the scale of harm described justify classification as an AI Incident. The harm is materialized (civilian casualties), and the AI system's role is pivotal in the targeting process as reported.

Israel disputes it has powerful AI program for targeted killing that tolerates civilian casualties

2024-04-04
Washington Times
Why's our monitor labelling this an incident or hazard?
The event involves the alleged use of AI systems in military targeting decisions that have resulted in civilian deaths, which constitutes harm to people and potential violations of human rights and international law. The AI system's role is pivotal in generating kill lists and influencing strike decisions, even if human analysts are involved. The harm described is realized, not merely potential, as civilian casualties have occurred. Despite official denials, the detailed reports from multiple sources and descriptions of actual strikes linked to AI-generated lists meet the criteria for an AI Incident under the OECD framework.

Israel disputes AI targeting reports

2024-04-04
Washington Times
Why's our monitor labelling this an incident or hazard?
The article centers on the use and alleged misuse of AI systems in military targeting that has resulted in civilian casualties, which is a direct harm to people. Despite official denials, credible reports and prior information indicate AI systems are involved in targeting decisions that have caused harm. This fits the definition of an AI Incident, as the AI system's use has directly or indirectly led to injury or harm to groups of people. The article does not merely speculate about potential harm but references actual events with lethal outcomes linked to AI-enabled targeting systems.

Israeli army is using artificial intelligence to generate kill lists in Gaza: Report

2024-04-04
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Lavender) used by the Israeli army to generate target lists for military strikes. The AI system's outputs have directly led to harm, including the deaths of over 33,000 Palestinians, many of whom are civilians, and widespread destruction and displacement. This meets the criteria for an AI Incident because the AI system's use has directly caused injury and harm to people and communities, as well as violations of human rights. The scale and severity of harm described confirm this classification over AI Hazard or Complementary Information.

Israel is using artificial intelligence to help pick bombing targets in Gaza, report says

2024-04-04
Saudi Gazette
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used to select bombing targets, with a reported 10% error rate and cursory human oversight, resulting in thousands of civilian deaths and widespread humanitarian harm. The AI system's use in military targeting directly led to injury and harm to people, harm to communities, and violations of human rights, fulfilling the criteria for an AI Incident. Although the Israeli military denies AI is used to identify terrorists, the investigation and testimonies indicate AI's pivotal role in target selection and consequent harm.

Have we entered the age of AI warfare?

2024-04-05
The Week
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lavender) used in military operations to identify targets, which directly influences lethal decisions. The system's errors and the minimal human review imply that the AI's outputs have directly or indirectly led to harm, including potential wrongful killings, which is a violation of human rights and harm to persons. The article's detailed description of the AI's role in these decisions and the resulting harms fits the definition of an AI Incident. Although the Israeli military denies the claims, the article presents credible allegations and testimonies indicating realized harm linked to the AI system's use.

Israel uses 'unparalleled' AI to target suspected Hamas militants as machine did it 'coldly': Report

2024-04-03
WION
Why's our monitor labelling this an incident or hazard?
The AI system 'Lavender' was used to identify thousands of targets for bombing, resulting in the deaths of over 30,000 people according to the report, including civilians and aid workers. The AI's involvement in targeting and estimating collateral damage directly contributed to these harms. The article explicitly links the AI system's use to the military actions causing these casualties, which qualifies as an AI Incident under the framework's definition of harm to health and communities caused directly or indirectly by AI use.

IDF Allowed 100 Civilian Deaths for Every Hamas Official Targeted by Error-Prone AI System

2024-04-03
Common Dreams
Why's our monitor labelling this an incident or hazard?
The article explicitly details an AI system (Lavender) used in military targeting decisions that led to the bombing of homes and the deaths of thousands of civilians, including non-combatants. The AI system's role was pivotal in generating targets with minimal human oversight, and its errors directly caused harm to people, fulfilling the criteria for an AI Incident. The harms include injury and death to persons (a), and violations of human rights and international law (c). The direct link between AI use and realized harm excludes classification as a hazard or complementary information.

Israel's Genocidal New Video Game Way of War

2024-04-04
Common Dreams
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting that have a 10% error rate, leading to the killing of thousands of civilians, including women and children. This is a direct harm to health and life (a), harm to communities (d), and a violation of human rights (c). The AI systems' role is pivotal in these harms, as they guided strikes with loose rules of engagement and minimal human oversight. Therefore, this event qualifies as an AI Incident due to the realized and significant harms caused by the AI systems' use.

Israel Uses AI-Assisted Identification of 37,000 Hamas Targets

2024-04-04
Tech Times
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems are used to identify targets for military attacks, with pre-authorized allowances for civilian casualties, and that unguided bombs are used on these targets, resulting in harm to civilians. The AI system Lavender identified tens of thousands of potential targets, and facial recognition technology is also employed. The AI's role is pivotal in the targeting process that leads to harm, fulfilling the criteria for an AI Incident involving harm to people and communities. The involvement is direct, as the AI outputs are used to authorize attacks causing injury and death.

Israel Military Using AI to Bomb Targets in Gaza: Report

2024-04-05
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used in military targeting that has contributed to lethal air strikes causing civilian deaths and a humanitarian crisis, which constitutes harm to people and communities. The AI system's role in identifying targets with a notable error rate and limited human oversight indicates direct involvement in causing harm. This fits the definition of an AI Incident, as the AI system's use has directly or indirectly led to injury and harm to groups of people and violations of human rights.

Israel military using AI to bomb targets in Gaza: Report

2024-04-05
Asian News International (ANI)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as being used to identify bombing targets, which directly influences lethal military actions. The AI's error rate and the minimal human oversight imply that the AI system's outputs have contributed to wrongful targeting and civilian casualties. The resulting harm includes injury and death to people, harm to communities, and a humanitarian crisis, fulfilling the criteria for an AI Incident. The AI system's role is pivotal in the chain of events leading to these harms, even if human analysts are involved, as their limited verification suggests overreliance on AI outputs. Hence, this is not merely a potential hazard or complementary information but a realized incident involving AI-related harm.

Explained: Israeli military's use of AI 'Lavender' to generate kill lists

2024-04-04
WION
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Lavender') explicitly mentioned as being used to generate kill lists, which directly leads to harm including deaths and destruction of property. The AI's outputs are used as authoritative decisions for lethal military actions, with a known error rate causing civilian casualties. This constitutes a violation of human rights and harm to communities, fitting the definition of an AI Incident. The direct causal link between the AI system's use and realized harm confirms this classification.

Israel used AI to identify 37,000 targets associated with Hamas

2024-04-04
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used by the military to identify human targets, which directly influenced lethal operations causing harm to civilians. This constitutes harm to persons and communities, and potential violations of human rights. The AI system's involvement in target selection and the resulting civilian deaths meet the criteria for an AI Incident, as the harm is realized and the AI system's role is central to the event.

Explained: Israel's use of AI tool to generate kill list

2024-04-04
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used to identify potential militant targets, with human personnel acting mainly as rubber stamps, implying reliance on AI outputs for lethal decisions. The reported use of AI in this context has directly led to harm, including civilian deaths, which fits the definition of an AI Incident involving harm to people and communities. Although the IDF denies using AI for target selection, the investigative report provides detailed claims of AI involvement and resulting harm, justifying classification as an AI Incident.

Israel's secret 'Lavender' AI used for Gaza kill lists, report claims

2024-04-03
inews.co.uk
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system ('Lavender') used in military targeting that directly leads to harm, including deaths and civilian casualties, fulfilling the criteria for an AI Incident. The AI system is involved in the use phase, making decisions or recommendations that result in lethal strikes. The harms include injury and death to persons, harm to communities, and violations of human rights. The minimal oversight and acceptance of a high error rate further emphasize the AI system's pivotal role in causing these harms. Thus, the event meets the definition of an AI Incident rather than a hazard or complementary information.

Israel's AI system 'Lavender' decides who lives and dies in Gaza

2024-04-04
TRT World
Why's our monitor labelling this an incident or hazard?
Lavender is an AI system explicitly described as making targeting decisions that lead to lethal bombings causing death and injury. The system's use with minimal human oversight and its role in selecting tens of thousands of targets directly links it to harm to people, fulfilling the criteria for an AI Incident. The harm is realized and significant, involving loss of life and injury, and the AI system's role is pivotal in these outcomes.

Israel military using AI system to target militants, bomb civilians

2024-04-04
Middle East Monitor
Why's our monitor labelling this an incident or hazard?
The AI system 'Lavender' is explicitly mentioned as being used to identify bombing targets, including civilians, leading to actual harm (civilian deaths and destruction of homes). The system's outputs are treated as if they were human decisions, directly influencing military actions that cause injury and death. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to people and communities, as well as violations of human rights. The article describes realized harm, not just potential harm, so it is not an AI Hazard or Complementary Information. The involvement of AI in causing these harms is clear and central to the event described.

IDF denies it uses AI software to target individuals in Gaza bombing campaigns

2024-04-04
SiliconANGLE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Lavender') used in military targeting decisions that have directly led to significant loss of life and harm to civilians, which qualifies as an AI Incident under the framework. The AI system's outputs are used to make lethal decisions, with minimal human review, causing harm to people and violating human rights. The harm is realized and ongoing, not merely potential. Therefore, this is classified as an AI Incident.

UN chief 'deeply troubled' by reports Israel using AI to identify Gaza targets

2024-04-05
Al-Monitor
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system for military targeting that has directly caused harm to civilians, including deaths and potential war crimes. The AI system's role in identifying targets with little human oversight and the resulting high civilian casualties meet the criteria for an AI Incident, as the AI's use has directly led to violations of human rights and harm to communities. Although the Israeli military denies using AI in this way, the report and UN concerns indicate the AI system's involvement in causing harm, fulfilling the definition of an AI Incident.

Lavender & Where's Daddy: How Israel Used AI to Form Kill Lists & Bomb Palestinians in Their Homes

2024-04-05
Democracy Now!
Why's our monitor labelling this an incident or hazard?
The AI systems Lavender and Where's Daddy are explicitly described as being used to identify and target individuals for lethal military action with minimal human oversight, leading to actual deaths and destruction of civilian infrastructure. This constitutes direct harm to persons and communities and breaches of fundamental rights, fitting the definition of an AI Incident.

'Lavender': The AI machine directing Israel's bombing spree in Gaza

2024-04-04
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system used in military operations to identify targets for bombing, with minimal human oversight and a high tolerance for civilian casualties. The AI system's decisions have directly led to harm to people, including civilians, and breaches of fundamental rights. The involvement of the AI system in causing these harms is direct and central, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and includes injury and violations of human rights, making this classification appropriate.

Journalist Who Broke Story on Israel's AI Warfare Discusses the Technology

2024-04-05
Truthout
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Lavender and Where's Daddy?) used in military targeting and bombing operations. The AI's use directly led to harm to people (killing of civilians, including families), harm to communities (destruction of homes and infrastructure), and violations of human rights and international law. The AI systems' outputs were treated as decisive in targeting decisions with minimal human oversight, causing significant and documented harm. Therefore, this qualifies as an AI Incident under the OECD framework because the AI system's use has directly led to serious harms including loss of life and violations of rights.

Report: Israeli Army Uses AI to Mass-Produce Palestinian Targets for Assassination

2024-04-03
Truthout
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as generating targets for lethal military strikes, with direct consequences of deaths and harm to civilians and families. The AI's role is pivotal in causing these harms, including violations of human rights and harm to communities. The use of AI in this context is not hypothetical but actively causing injury and death, fulfilling the criteria for an AI Incident. The report details direct harm resulting from the AI system's outputs and operational use, not just potential or future harm, thus excluding classification as an AI Hazard or Complementary Information.

The Hamas 'human shields' lie has been conclusively, irrefutably debunked

2024-04-05
Signs Of The Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in lethal targeting decisions that have directly led to civilian deaths, which is injury and harm to groups of people. The AI's role is pivotal in compiling kill lists and tracking targets, leading to harm. This meets the criteria for an AI Incident because the AI system's use has directly caused harm to human life and communities. The harm is realized, not just potential, and the AI system's involvement is central to the event described.

White House investigating reports Israel used AI to identify bombing targets in Gaza and create a 'kill list' of 37,000 Palestinians suspected of being militants

2024-04-05
expressdigest.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used in military targeting decisions that directly led to air strikes killing thousands of Palestinians, including civilians. This meets the definition of an AI Incident because the AI system's use has directly led to harm to people and communities (harms (a) and (d)). The AI system's misidentification rate and the delegation of lethal targeting to AI further confirm the AI's pivotal role in causing harm. Although the Israeli military denies the AI's role, multiple intelligence sources and investigative reports confirm its use and impact. Therefore, this event is classified as an AI Incident.

Israel's Use of AI 'Lavender' Sparks Debate Over Targeting Alleged Hamas Members

2024-04-05
Science Times
Why's our monitor labelling this an incident or hazard?
The AI system 'Lavender' is explicitly mentioned and is used in the operational decision-making process to select targets for lethal airstrikes. The system's outputs have directly led to harm, including deaths of thousands of Palestinians, which is a clear injury and harm to groups of people. The use of AI in this lethal targeting process, with a known error margin and rapid human approval, demonstrates direct causation of harm. Therefore, this event qualifies as an AI Incident under the OECD framework.

How Does AI Tech Influence Military Decision-Making? The Harsh Realities of the Israel-Palestine War

2024-04-04
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting that have directly led to harm, specifically the deaths of over 33,000 civilians. The AI system's malfunction or design (10% false positive rate) and its use in lethal operations constitute direct involvement in causing harm to people and communities. This fits the definition of an AI Incident, as the AI system's use has directly led to injury and harm to groups of people, and raises serious human rights and ethical concerns.

Does Israel's Adoption of AI Military Systems Predict a Sinister Turn in Warfare?

2024-04-06
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Lavender) in military targeting decisions that have directly resulted in harm to civilians, fulfilling the criteria for an AI Incident. The AI system's outputs were used to approve airstrikes that caused injury and death, which is a clear harm to persons and communities. The article explicitly links the AI system's use to realized harm, not just potential harm, and discusses the ethical and moral consequences of such AI deployment in warfare. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Israel is using AI to identify bombing targets in Gaza, report says

2024-04-05
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used to identify targets for bombing, which is an AI system by definition. The AI's outputs were used with minimal human review, leading to errors in about 10% of cases, implicating the AI in wrongful targeting. The harms include injury and death to people, violations of human rights, and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm and rights violations.

Israel's Lavender Murderbot is Programmed to Kill up to a Third of all Palestinian Civilians in Gaza

2024-04-04
Informed Comment
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems ('Lavender' and 'Where's Daddy') used in military targeting that have caused direct harm to civilians through strikes based on AI identification with a 10% error rate. The harm includes injury and death to civilians, violations of human rights, and breaches of international humanitarian law. The AI system's use and malfunction (high error rate and lack of human supervision) have directly led to these harms, qualifying this as an AI Incident under the OECD framework.

Early on in the war, IDF gave clearance to allow 20 civilian deaths for every low-ranking Hamas suspect, intelligence sources said: report

2024-04-04
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used by the IDF to identify targets, with human oversight being minimal and largely a formality. The AI's outputs were pivotal in targeting decisions that resulted in thousands of civilian deaths, which is a clear harm to people and communities. The policy allowing a high number of civilian deaths per militant killed indicates a permissive approach to collateral damage, exacerbating the harm. The AI system's involvement in lethal targeting and the resulting civilian casualties meet the criteria for an AI Incident, as the AI's use directly led to significant harm to human life and rights.

Israel's 'Where's Daddy?' AI system helps target suspected Hamas militants when they're at home with their families, report says

2024-04-07
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as being used for targeting militants, with documented errors and minimal human verification leading to civilian deaths and harm to communities. This meets the definition of an AI Incident because the AI system's use has directly and indirectly led to injury and harm to groups of people, as well as potential violations of human rights and international law. The harm is realized and ongoing, not merely potential, so the event is not an AI Hazard; and because the article focuses on the harm caused by AI-enabled targeting rather than on responses or general AI news, it is not Complementary Information or Unrelated.

Israel is using artificial intelligence to help pick bombing targets in Gaza, report says

2024-04-04
NewsChannel 3-12
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used in military targeting decisions, which has directly led to significant harm to civilians, including deaths and destruction of property. The AI's error rate and minimal human oversight have contributed to these harms, fulfilling the definition of an AI Incident as the AI system's use has directly led to injury and harm to groups of people and harm to communities. The article provides detailed evidence of realized harm caused by the AI system's deployment in a conflict setting, thus it is not merely a hazard or complementary information but a clear AI Incident.

Caitlin Johnstone: Israel's 'Human Shields' Lie

2024-04-06
Consortiumnews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems by the IDF to identify and target individuals, leading to the killing of civilians, including children. This constitutes direct harm to people and communities, as well as violations of human rights. The AI system's involvement in compiling kill lists and tracking targets is central to the harm described. Therefore, this event meets the criteria for an AI Incident due to the direct and deliberate harm caused by the AI system's use.

Israeli AI Technology Used to Identify 37,000 Hamas Targets

2024-04-04
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system used in military targeting that has directly led to harm, including the targeting of civilians and misidentification of individuals as terrorists. The AI system's outputs were used to make lethal targeting decisions with minimal human review, leading to injury and harm to persons and communities, as well as potential violations of human rights. The involvement of AI in these harmful outcomes meets the criteria for an AI Incident, as the harm is realized and the AI system's role is pivotal in causing it.

'Lavender': The AI machine directing Israel's bombing spree in Gaza

2024-04-05
MR Online
Why's our monitor labelling this an incident or hazard?
The article explicitly details an AI system (Lavender) used in military targeting decisions that directly caused harm to thousands of civilians through airstrikes on homes. The AI system generated target lists with known error rates, and human personnel often rubber-stamped these decisions with minimal verification, leading to wrongful killings. The harms include injury and death to persons, harm to communities, and violations of human rights. This meets the definition of an AI Incident, as the AI system's use directly led to significant harm and rights violations. The article provides detailed evidence and testimony confirming the AI system's central role in these harms.

Yes, AI Is Component Of Israel's Genocide In Gaza, But It's Not Whole Story - OpEd

2024-04-07
Eurasia Review
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system used in military targeting that has directly led to mass civilian casualties and violations of international law. The AI system's malfunction or design (e.g., broad targeting criteria, lack of human verification) has caused indiscriminate killings and disproportionate harm, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in the chain of events causing this harm.

Israel's deadly use of AI systems marks start of a ghastly new era of warfare

2024-04-06
The National
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used in military targeting that have directly caused harm to civilians and communities through lethal bombings. The AI system's outputs are used to justify attacks that have resulted in deaths and destruction, indicating direct causation of harm. The harms include injury and death to persons, harm to communities, and violations of human rights and international law. Therefore, this event qualifies as an AI Incident under the OECD framework.

Israeli Military Used AI to Identify 37,000 Targets in Gaza: Report

2024-04-04
The Defense Post
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used to identify targets for military strikes, with the AI's role pivotal in selecting individuals for lethal action. The reported killing of AI-identified targets and civilian casualties demonstrates direct harm to persons and communities. The AI system's use in this context implicates violations of human rights and breaches of international law protections. Despite the IDF's denial, the detailed insider accounts confirm AI's direct role in causing harm, meeting the criteria for an AI Incident.

'AI-assisted genocide': Israel reportedly used database for Gaza kill lists

2024-04-04
RocketNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used for lethal targeting decisions in a military conflict, which has directly led to harm to civilians and potential violations of human rights. The AI system's involvement in identifying targets for bombing, with a significant error rate, constitutes direct harm and breaches of fundamental rights. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to injury and violations of human rights.

'Lavender': The AI machine directing Israel's bombing spree in Gaza

2024-04-03
+972 Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly details an AI system ('Lavender') used in military targeting that has directly caused harm to thousands of civilians through airstrikes. The AI system generated lists of suspected militants, which were used with minimal human verification, leading to widespread killings and destruction of civilian homes. This constitutes injury and harm to groups of people (a), harm to communities (d), and likely violations of human rights and international law (c). The AI system's involvement is direct and central to the harm described, fulfilling the criteria for an AI Incident.

'Lavender': Report Exposes Israel's AI-Driven Massacres in Gaza

2024-04-04
Palestine Chronicle
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-driven database (Lavender) used for target identification in military airstrikes, which directly led to mass civilian casualties and deaths. The AI system's outputs were used with minimal human intervention, causing indiscriminate bombings and violations of human rights. The harms include injury and death to civilians, harm to communities, and breaches of fundamental rights, all directly linked to the AI system's use. This meets the criteria for an AI Incident as the AI system's development and use directly caused significant harm.

AI in the hands of humans without humanity

2024-04-07
USANews Press Release Network
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used for target selection in military operations, which directly led to harm (civilian casualties) and violations of human rights. The AI's 10% error rate implies thousands of civilians were wrongly targeted, and the military's reliance on AI outputs with minimal human oversight exacerbated the harm. This fits the definition of an AI Incident because the AI system's use directly caused injury and harm to people and breaches fundamental rights. The event is not merely a potential risk or complementary information but a realized harm caused by AI use in warfare.

Israel's AI targeting system 'Lavender' linked to 33,000 Palestinian deaths in Gaza

2024-04-05
Free Press Kashmir
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as being used for military targeting, which has directly caused significant harm (over 33,000 deaths) to a civilian population. The AI system's involvement in lethal decision-making with minimal human intervention and a known error rate leading to civilian casualties fits the definition of an AI Incident, as it has directly led to injury and harm to groups of people and potential violations of human rights.

Israel Used Artificial Intelligence to Identify Hamas Targets

2024-04-04
RayHaber | RaillyNews
Why's our monitor labelling this an incident or hazard?
The AI system was actively used in a military context to identify targets, and its outputs led to the killing of civilians, which is a direct harm to human life. The involvement of AI in lethal targeting decisions and the resulting civilian casualties clearly meet the criteria for an AI Incident as defined, since the AI system's use directly led to injury or harm to groups of people. The article explicitly states that the AI system was used operationally and that civilian deaths occurred as a consequence, confirming the direct link between AI use and harm.

UN chief 'deeply troubled' by reports Israel using AI to identify Gaza targets

2024-04-05
SpaceDaily
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for military targeting with minimal human oversight, leading to high civilian casualties in densely populated areas. This constitutes direct harm to people and communities, as well as violations of human rights. The AI system's role is pivotal in these harms, fulfilling the criteria for an AI Incident. Although the Israeli military denies using AI in this way, the report and multiple sources indicate the AI system's involvement in causing harm. Therefore, this event is classified as an AI Incident due to the realized and significant harms caused by the AI system's use in military targeting.

The Fate of Hundreds of Thousands of Civilians in Gaza depends on Artificial Intelligence

2024-04-05
Sarajevo Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for targeting in military operations, which have directly led to civilian deaths and destruction of property. The harm includes injury and death to civilians (a), harm to communities and property (d), and violations of human rights (c). The AI systems' malfunction or misuse (lack of human control, high margin of error, targeting based on flawed data) directly contributed to these harms. Hence, this is a clear AI Incident rather than a hazard or complementary information.

The UN Secretary-General is "deeply disturbed" by Reports that Israel is using AI to identify Targets in Gaza

2024-04-07
Sarajevo Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting that have directly caused harm to civilians, including deaths and destruction of property. The AI's role is pivotal in automating target selection without adequate human oversight, leading to violations of human rights and civilian casualties. This meets the criteria for an AI Incident as the AI system's use has directly led to harm to people and communities, including breaches of fundamental rights and loss of life.

Israel Used AI Machine 'Lavender' To Target Hamas

2024-04-04
Outside the Beltway
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Lavender) used in military targeting that directly led to the deaths of thousands of people, including civilians, which is a clear harm to health and communities. The AI system's outputs were used to make lethal decisions with minimal human oversight, and the system had a known error rate that resulted in wrongful targeting. This meets the definition of an AI Incident because the AI system's use directly led to injury and harm to groups of people and violations of human rights. The causal link between AI-driven targeting and the resulting casualties is direct, not merely a potential or future risk, so the event is neither an AI Hazard nor Complementary Information; nor is it unrelated, since an AI system is central to the harm described.

The 'human shields' lie has been conclusively, irrefutably debunked

2024-04-07
China Daily Asia
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system used by a military force to identify and target individuals for lethal strikes with minimal human oversight, leading to the deaths of many civilians, including children. The AI system's involvement in these lethal operations directly causes harm to people and communities and breaches fundamental human rights. The harm is realized and ongoing, not hypothetical, and the AI system's use is central to the incident. Hence, this is an AI Incident.

'Lavender': The AI machine directing Israel's bombing spree in Gaza

2024-04-04
From the Trenches World Report
Why's our monitor labelling this an incident or hazard?
The article explicitly details AI systems used in military targeting that have directly led to the deaths of thousands of civilians, including women and children, through airstrikes on homes. The AI system's role is pivotal in generating kill lists and timing attacks, with documented inaccuracies and minimal human verification causing wrongful deaths. This meets the definition of an AI Incident as it involves the use of AI systems whose outputs have directly led to injury and harm to groups of people, violations of human rights, and harm to communities. The harm is realized and ongoing, not merely potential, and the AI's involvement is central to the event.

UN chief 'deeply troubled' by reports Israel using AI to identify Gaza targets

2024-04-06
Ya Libnan
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used in military targeting that has directly led to significant civilian deaths and harm, fulfilling the definition of an AI Incident. The AI system's role in identifying targets with little human oversight and the resulting high civilian casualties constitute direct harm to people and possible violations of human rights. The article provides detailed claims and official concern, indicating realized harm rather than potential risk. Therefore, this is classified as an AI Incident.

The 'Human Shields' Lie Has Been Conclusively, Irrefutably Debunked

2024-04-05
Apokalyps Nu!
Why's our monitor labelling this an incident or hazard?
The AI system Lavender and the companion tracking system 'Where's Daddy?' are used to identify and target individuals for lethal strikes with minimal human oversight, directly leading to civilian deaths and violations of human rights. This constitutes an AI Incident because the AI's use has directly led to significant harm to people and communities, fulfilling the criteria for harm to health, violations of rights, and harm to communities. The article reports realized harm, not just potential harm, so it is not a hazard or complementary information.

Lavender: the First AI Genocide Device

2024-04-05
Bella Caledonia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Lavender') used operationally to identify and target individuals for lethal drone strikes. The AI system's outputs were used with minimal human oversight, leading to indiscriminate bombings of civilian homes and mass civilian casualties. This constitutes direct harm to people and communities, including violations of human rights and war crimes. The AI system's role is central and pivotal in causing these harms, meeting the criteria for an AI Incident. The article documents realized harm, not just potential harm, and thus cannot be classified as a hazard or complementary information.

Israel military using AI to bomb targets in Gaza: Report

2024-04-05
Live India
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used in military targeting that has directly led to harm, including civilian deaths and a humanitarian crisis, fulfilling the criteria for an AI Incident. The AI system's outputs influence bombing decisions, and despite human review, the process appears insufficient to prevent harm. The harm includes injury and death to persons and harm to communities, which are explicitly covered under the AI Incident definition. The AI system's role is central to the harm, not merely incidental or potential, so it is not a hazard or complementary information.

UN sounds alarm over Israel's AI bombing of Gaza

2024-04-06
Northern Ireland News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in military targeting that has directly led to civilian casualties, which is harm to people and communities. The AI system's use in lethal decision-making and targeting suspected militants in densely populated areas implicates it in causing injury and death, fulfilling the criteria for an AI Incident. Although the Israeli military denies using AI for this purpose, the report and UN concerns indicate the AI system's role in harm. Therefore, this event is classified as an AI Incident due to the direct link between AI use and realized harm.

Does Israel's Adoption of AI Military Systems Predict a Sinister Turn in Warfare?

2024-04-06
CryptoRank
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in military targeting that has directly contributed to civilian deaths and harm in Gaza. The AI system's role in generating target lists that led to airstrikes causing civilian casualties constitutes direct harm caused by the AI system's use. This fits the definition of an AI Incident because the AI system's use has directly led to injury and harm to groups of people. The ethical concerns and debates about human oversight further support the significance of the AI system's involvement in causing harm.

Israel, Gaza and AI software - is this the automation of war crimes?

2024-04-05
The National
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Lavender) used in military targeting decisions that has directly led to harm to civilians, including deaths and violations of international law. The AI system's role is pivotal in generating kill lists with minimal human checks, leading to indiscriminate attacks and civilian casualties. This constitutes an AI Incident because the AI system's use has directly caused harm to people and breaches of fundamental rights and legal obligations. The detailed allegations of misuse and resulting harm meet the criteria for an AI Incident rather than a hazard or complementary information.

How Does AI Tech Influence Military Decision-Making? The Harsh Realities of the Israel-Palestine War

2024-04-04
CryptoRank
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used by the Israeli Defense Forces to identify and target individuals in Gaza, with a high false positive rate and resulting in thousands of civilian deaths. The AI's role in selecting targets and enabling strikes that cause harm to civilians is direct and material. The harm includes injury and death to civilians, which fits the definition of harm to people under AI Incident criteria. The use of AI in lethal military targeting with insufficient oversight and accountability, leading to widespread civilian casualties, clearly constitutes an AI Incident rather than a hazard or complementary information. The event involves the use of AI systems, their deployment in conflict, and the resulting realized harm, fulfilling the criteria for an AI Incident.

The 'Human Shields' Lie Has Been Conclusively, Irrefutably Debunked

2024-04-06
ZNetwork
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems in military targeting decisions that have directly led to the killing of civilians, including children, which is a clear harm to health and violation of human rights. The AI system's role is pivotal in compiling kill lists and timing attacks to maximize civilian casualties. This meets the criteria for an AI Incident because the AI's use has directly caused harm to people and communities, and breaches fundamental rights. The harm is realized and ongoing, not merely potential, so it is not an AI Hazard. The article is not merely complementary information or unrelated news, but a report of an AI Incident involving serious harm.

Lavender & Where's Daddy: How Israel Used AI to Form Kill Lists & Bomb Palestinians in Their Homes

2024-04-06
ZNetwork
Why's our monitor labelling this an incident or hazard?
The article explicitly details the development and use of AI systems (Lavender and Where's Daddy?) by the Israeli military to identify and target individuals for assassination, leading directly to harm including deaths of civilians and destruction of property. The AI systems' outputs were used with minimal human oversight to authorize bombings that killed many Palestinians, including non-combatants and families in their homes. This constitutes direct involvement of AI in causing harm to persons and communities, as well as violations of human rights and international law. Therefore, this event qualifies as an AI Incident under the OECD framework.

Israel using 'Lavender' AI machine for Gaza killing spree

2024-04-04
The New Arab
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used in military targeting that has directly led to the killing of civilians and militants in Gaza, with thousands of deaths reported. The AI system influenced lethal decisions, including targeting family homes and using indiscriminate munitions, causing significant harm to people and communities. This meets the criteria for an AI Incident as the AI system's use has directly led to injury and harm to groups of people, violations of human rights, and harm to communities. The involvement is in the use of the AI system in military operations causing realized harm, not just potential harm.

Israeli military AI system led to mass killing of civilians, alleges report

2024-04-04
The National
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used by the Israeli military to generate target lists for air strikes. The system's use directly led to mass civilian deaths, constituting harm to people and communities, and violations of human rights and international law. The AI system's questionable accuracy and minimal human review contributed to disproportionate civilian harm, fulfilling the criteria for an AI Incident. The harm is realized and significant, not merely potential, and the AI system's role is pivotal in the chain of events leading to the harm. Thus, the event is classified as an AI Incident.

How is Israel reportedly using AI powered database to identify, kill targets in Gaza

2024-04-04
News9live
Why's our monitor labelling this an incident or hazard?
The AI system Lavender is explicitly mentioned as being used to identify bombing targets, directly influencing lethal military actions. The reported 10% error rate implies frequent misidentification, and the strikes informed by these outputs have caused injury and harm to groups of people. The use of AI in this context raises serious ethical and legal concerns, including potential war crimes and violations of human rights. The AI system's development and use have directly led to harm and breaches of fundamental rights, fitting the definition of an AI Incident.

'Lavender': The AI Machine Directing Israel's Bombing Spree in Gaza

2024-04-04
ZNetwork
Why's our monitor labelling this an incident or hazard?
The article explicitly details an AI system (Lavender) used in military targeting decisions that directly led to widespread bombing and killing of civilians and destruction of property in Gaza. The AI system generated kill lists with known error rates, and human oversight was minimal, effectively treating AI outputs as authoritative. The harms include injury and death to thousands of people, harm to communities, and violations of human rights. The AI system's development, use, and malfunction (errors) directly caused these harms, meeting the criteria for an AI Incident.

'Lavender,' the AI giving bombing orders in Israel's war on Gaza

2024-04-04
L'Orient Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used in military targeting decisions that led to thousands of civilian deaths and destruction of property. The AI system's errors (10% error rate) and the military's overreliance on its outputs with minimal human oversight directly caused harm to people and communities, including violations of human rights. This fits the definition of an AI Incident, as the AI system's use directly led to injury and harm to groups of people and harm to communities. The involvement is in the use of the AI system for lethal targeting, and the harm is realized and severe.

Israeli AI used to identify 37,000 targets in Gaza

2024-04-03
Arab News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Lavender) to identify targets, which directly led to airstrikes causing large-scale civilian deaths and destruction in Gaza. The AI system's role was central in enabling rapid and large-scale targeting decisions with a permissive approach to collateral damage, thus directly causing harm to people and communities. This fits the definition of an AI Incident, as the AI system's use has directly led to injury and harm to groups of people and violations of human rights. The harm is realized and ongoing, not merely potential, so it is not an AI Hazard. The article is not primarily about responses or updates, so it is not Complementary Information. Therefore, the correct classification is AI Incident.

Israel uses AI 'Lavender' to identify bombing targets in Gaza

2024-04-05
The Siasat Daily
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly named 'Lavender' used for target identification in military air strikes. The AI's role in generating target lists that led to over 33,000 deaths and tens of thousands wounded is a direct causal factor in the harm described. The use of AI to expedite targeting with minimal human oversight, combined with the resulting civilian casualties, clearly meets the definition of an AI Incident due to injury and harm to people and communities. The denial by the Israeli army does not negate the reported use and consequences. Therefore, this is classified as an AI Incident.

Symposium on Military AI and the Law of Armed Conflict: The 'Need' for Speed - The Cost of Unregulated AI-Decision Support Systems to Civilians

2024-04-04
Opinio Juris
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as AI-enabled decision-support systems used in military targeting. The use of these AI systems has directly led to harm to civilians (injury and death), a violation of international humanitarian law (human rights and legal obligations), and harm to communities. The article provides detailed evidence of these harms occurring due to the AI systems' speed, scale, and error rates, including misidentification and attacks on civilian homes. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused significant harm. The discussion of regulatory gaps and calls for governance responses are complementary but do not negate the fact that harm has already occurred. Hence, the classification is AI Incident.

Meet "Lavender" and "Gospel" - Israel's cold killing machines and another reason why I hate AI: Murderous thieving rich use AI to get richer, more murderous, more thieving. Prof Attaran: "Kill-happy, genocidal Israelis used an AI to decide who to target." Still think humanity isn't too evil and stupid to contain AI? | Ernst v. EnCana Corporation

2024-04-03
ernstversusencana.ca
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting that directly led to civilian deaths and destruction of property, fulfilling the criteria for an AI Incident. The AI systems were integral to the decision-making process that caused harm to people and communities, including violations of human rights. The harm is realized and ongoing, not merely potential. The involvement of AI in the development, use, and malfunction (or misuse) leading to these harms is clear and central to the event described. Hence, the classification as AI Incident is appropriate.

UN Secretary-General gravely concerned by reports the Israeli military is using AI to identify targets in Gaza

2024-04-06
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system for target identification in a military context, which has directly led to civilian deaths and injuries, fulfilling the criteria for an AI Incident. The harm is materialized (civilian casualties), and the AI system's role is pivotal in the decision-making process for lethal strikes. The involvement is in the use of the AI system, and the harm includes injury and violation of human rights. Despite the military's denial, the report and UN concerns provide credible evidence of AI involvement causing harm.

Guardian: Israeli military pairs AI with 'dumb bombs' to kill civilians indiscriminately, using Lavender to compile kill lists

2024-04-05
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Lavender) developed and used by the military to identify targets, which directly leads to harm—mass civilian deaths from military attacks. The AI system's scoring and targeting process is central to the incident, with military actions based on AI outputs causing injury and death to civilians, fulfilling the criteria for an AI Incident. The harm is realized and significant, involving violations of human rights and harm to communities. Although the military denies AI use for targeting, the report and multiple sources confirm AI's pivotal role in the harm caused.

UN Secretary-General gravely concerned by reports the Israeli military is using AI to identify targets in Gaza

2024-04-06
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system for target identification in a military context, which directly led to civilian deaths and injuries, fulfilling the criteria for an AI Incident. The harm is realized and significant, involving injury and loss of life to civilians, and potential violations of human rights. The AI system's role is pivotal as it influenced lethal military actions with minimal human oversight. Despite the military's denial, the report and UN concerns provide credible evidence of AI involvement causing harm. Therefore, this event is classified as an AI Incident.

Israeli military admits mistaken killing of aid workers; Pelosi calls for halt to arms supplies

2024-04-06
中時新聞網
Why's our monitor labelling this an incident or hazard?
The article describes a military airstrike where an AI system was reportedly used to identify targets, leading to the misidentification of a humanitarian aid vehicle and the death of seven aid workers. This constitutes direct harm to people caused by the AI system's use. The involvement of AI in target identification and the resulting civilian casualties meet the criteria for an AI Incident, as the AI system's use directly led to injury and death. The denial by the military does not negate the credible reports and UN concerns about AI use and its consequences. The event also includes a political response calling for cessation of arms supply, but the core issue is the AI-related harm.

UN Secretary-General gravely concerned by reports the Israeli military is using AI to identify targets in Gaza

2024-04-06
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in military targeting that has directly led to harm to civilians, including deaths, which is a clear AI Incident under the OECD framework. The AI system's outputs were used to make lethal decisions with minimal human oversight, causing injury and loss of life, thus fulfilling the criteria for an AI Incident due to harm to people and communities. The denial by the military does not negate the reported harm and AI involvement. Therefore, this event is classified as an AI Incident.

Foreign media: Israeli military uses AI to compile kill lists, pairing 'dumb bombs' with indiscriminate killing of civilians

2024-04-05
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system developed by the Israeli military intelligence unit to identify targets, which directly led to attacks causing mass civilian deaths. The AI system's role was pivotal in generating target lists and enabling attacks with high collateral damage. The harms include injury and death to large numbers of civilians (harm to health and communities) and violations of human rights due to indiscriminate killings. The AI system's use in this context meets the criteria for an AI Incident as the harm is realized and directly linked to the AI system's deployment and outputs.

UN Secretary-General gravely concerned by reports Israel is using AI to identify attack targets in Gaza

2024-04-05
Rti 中央廣播電臺
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used for target identification in a military conflict, with minimal human supervision and a policy allowing high civilian casualties based on AI outputs. This directly links the AI system's use to harm to people (civilian deaths) and potential violations of human rights. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm to groups of people.

Israel reportedly deployed AI to identify 37,000 Hamas targets; military denies secretive 'Lavender' system

2024-04-05
def.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in a military context to identify and target individuals for lethal action, which directly relates to the use of AI systems. The reported use of AI to select targets that have been attacked, potentially causing civilian deaths and destruction of property, constitutes direct harm to persons and communities, fulfilling the criteria for an AI Incident. Despite the military's denial, the article presents multiple intelligence sources confirming the AI system's operational use and its role in harm. Therefore, this qualifies as an AI Incident due to the direct or indirect harm caused by the AI system's use in warfare.

Israel-Palestine conflict: AI target-identification technology reportedly used in bombing operations; senior Israeli officers strongly deny

2024-04-08
ezone.hk 即時科技生活
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the alleged use of AI technology to identify targets for bombing in a conflict zone, leading to large numbers of civilian casualties, which constitutes harm to groups of people. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to injury or harm to people. The denial by the military does not negate the classification since the report and international condemnation highlight the AI system's pivotal role in the harm. Hence, the event is best classified as an AI Incident.

UN chief says troubled by Israel's use of AI technology resulting in civilian casualties

2024-04-06
WAFA Agency
Why's our monitor labelling this an incident or hazard?
The report explicitly states that an AI tool is being used in military targeting, resulting in civilian casualties. This is a direct harm caused by the use of an AI system, fulfilling the criteria for an AI Incident under harm to health and people. The involvement of AI in causing real harm is clear and direct, not merely potential or speculative.

Israel has brought 'relentless death & destruction' to Gaza: UN Chief

2024-04-06
Radio Pakistan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli military to identify targets, which is an AI system involved in a military context. The use of AI in targeting has directly contributed to widespread harm including death and destruction, and severe humanitarian consequences. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people and communities.

UN chief voices concern over reports of Israel using AI to identify targets in Gaza

2024-04-06
Mangalorean.com
Why's our monitor labelling this an incident or hazard?
The report explicitly states that AI was used to identify targets in a military campaign resulting in high civilian casualties, including deaths and injuries to women and children. The AI system's use in targeting decisions has directly contributed to harm to human life and communities, fulfilling the criteria for an AI Incident. The UN chief's concern highlights the ethical and humanitarian implications of delegating life-and-death decisions to AI algorithms, reinforcing the direct link between AI use and realized harm.

UN chief 'deeply troubled' by reports Israel using AI to identify Gaza targets

2024-04-05
Arab News
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions the use of AI in military targeting decisions that have caused high civilian casualties, which is a direct harm to human life. The involvement of AI in lethal decision-making processes that impact entire families and communities fits the definition of an AI Incident due to injury or harm to groups of people. The UN Secretary-General's concern underscores the seriousness of the harm caused by the AI system's use.

UN Chief 'Troubled' by Reports Israel Used AI to Find Targets

2024-04-05
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions the use of AI in military targeting, an application that causes harm to civilians (harm to persons) and raises issues of accountability. This constitutes an AI Incident because the AI system's use has directly or indirectly led to harm to people in a conflict zone. The UN Secretary-General's statement highlights the serious implications of such use, confirming the involvement of AI in a harmful context.

UN chief voices concern over reports of Israel using AI to identify targets in Gaza

2024-04-05
english.news.cn
Why's our monitor labelling this an incident or hazard?
The report explicitly states that AI was used to identify targets in densely populated residential areas, resulting in many civilian deaths and injuries. This constitutes direct harm to people and communities caused by the use of an AI system in a military context. Therefore, this qualifies as an AI Incident due to the direct link between AI use and significant harm to human life and communities.

UN chief voices concern over Israel's use of AI in Gaza strikes

2024-04-06
National Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in military targeting that has resulted in significant civilian casualties, including deaths and suffering. This is a direct harm caused by the use of an AI system in a conflict setting, fulfilling the criteria for an AI Incident as it involves injury and harm to people and communities. The UN Secretary-General's concern highlights the ethical and humanitarian implications of delegating life-and-death decisions to AI algorithms, reinforcing the direct link between AI use and harm.

UN Chief Decries Israeli Military's AI Use In Gaza Bombing Campaign

2024-04-05
Haberler.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems by the Israeli military for target identification in a conflict zone, which has directly caused harm to civilians, including deaths. This fits the definition of an AI Incident because the AI system's use in military operations has directly led to injury and harm to groups of people. The UN chief's statement highlights the ethical and humanitarian concerns, reinforcing the direct link between AI use and harm. Therefore, this event qualifies as an AI Incident.

Live updates: Israel-Hamas war, 7 aid workers killed in Gaza strike

2024-04-06
CNN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli military to identify bombing targets, which has resulted in the death of aid workers delivering food in Gaza. This is a direct link between the AI system's use and harm to people and communities, fulfilling the criteria for an AI Incident. The UN Secretary-General's concern further underscores the seriousness of the harm. Although the Israeli military spokesperson denies AI use for identifying terrorists, the report and investigation indicate AI involvement in targeting, which has caused harm.

UN Chief 'Deeply Troubled' By Reports Israel Using Artificial Intelligence to Identify Gaza Targets

2024-04-05
News18
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI for target identification in a conflict zone, with reports of minimal human oversight and a permissive policy for casualties, leading to civilian deaths and acknowledged wrongful killings of aid workers. The UN Secretary-General's concern highlights the serious harm caused by delegating life-and-death decisions to AI algorithms. This meets the criteria for an AI Incident as the AI system's use has directly and indirectly led to harm to persons and violations of human rights.

Israel is reportedly using AI to pick Gaza targets to assassinate at their family home

2024-04-06
WAtoday
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used to identify bombing targets, which has directly contributed to lethal military actions causing harm to people, including civilians. This constitutes harm to persons and potential violations of human rights, fulfilling the criteria for an AI Incident. The involvement of AI in life-and-death decisions and the resulting harm is central to the report, not merely a potential risk or background information.

UN chief decries Israeli military's AI use in Gaza bombing campaign

2024-04-06
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for target identification in a military context, leading to civilian casualties and deaths. This constitutes direct harm to people caused by the use of an AI system. The UN chief's statement highlights the ethical and procedural issues related to delegating life-and-death decisions to AI algorithms, reinforcing the connection between AI use and harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's use in military operations.

UN Chief 'Deeply Troubled' By Reports Israel Using AI To Identify Gaza Targets

2024-04-05
Channels Television
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to identify targets in a military bombing campaign, leading to civilian casualties and wrongful killings. The AI system's role in targeting decisions with limited human oversight directly contributed to harm to people and violations of rights, fitting the definition of an AI Incident. The harm is realized, not just potential, and involves injury and violations of human rights, thus qualifying as an AI Incident rather than a hazard or complementary information.

UN chief 'deeply troubled' by reports Israel using AI to identify Gaza targets

2024-04-05
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI for target identification in a military context, which has directly led to civilian casualties and acknowledged errors in targeting. This constitutes harm to people and possible violations of human rights, fulfilling the criteria for an AI Incident. The AI system's role is pivotal in the harm caused, as it was used to identify targets with limited human oversight, leading to wrongful deaths.

UN chief 'deeply troubled' by reports Israel using AI to bomb Gaza targets

2024-04-05
Gulf-Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used by the Israeli military to identify targets with very limited human oversight, resulting in thousands of civilian deaths, including women and children. This is a clear case where the AI system's use has directly led to injury and harm to groups of people (harm category a) and violations of human rights (category c). The scale and nature of the harm, including allegations of war crimes and crimes against humanity, confirm the severity of the incident. The AI system's outputs were treated as if they were human decisions, indicating reliance on AI in lethal targeting decisions. This meets the criteria for an AI Incident rather than a hazard or complementary information.

UN Chief 'Troubled' by Reports Israel Used AI to Find Targets

2024-04-05
BNN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system in military targeting that has led to civilian deaths and humanitarian harm, fulfilling the criteria for an AI Incident. The AI system's involvement in life-and-death decisions and the resulting casualties demonstrate direct harm to people and communities. The UN Secretary-General's statement highlights the risks and accountability issues associated with this AI use, reinforcing the significance of the harm. Although Israel disputes the report, the credible allegations and the context of ongoing conflict with documented casualties linked to AI-assisted targeting justify classification as an AI Incident rather than a hazard or complementary information.

Guterres denounces Israel's use of Artificial Intelligence to bomb Gaza

2024-04-07
USANews Press Release Network
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system in military targeting that has led to bombings in Gaza, causing civilian deaths and humanitarian harm. The involvement of AI in lethal military decisions that impact human lives and the resulting casualties meet the criteria for an AI Incident, as the AI system's use has directly led to harm to people and communities. The Secretary General's condemnation and the firing of military officers further confirm the materialized harm linked to AI use in this context.

Guterres denounces the use of AI by Israel for the bombings in Gaza

2024-04-07
USANews Press Release Network
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system in military operations that have resulted in civilian casualties and deaths of humanitarian workers, which constitutes harm to people (harm category a). The AI system's role is pivotal in targeting decisions that have led to these harms. The condemnation by the UN Secretary General and the dismissal of military officers further indicate the seriousness and direct impact of the AI system's use. Hence, this is an AI Incident rather than a hazard or complementary information.

Lavender AI: UN chief says algorithms have no place in Gaza war

2024-04-08
Tortoise
Why's our monitor labelling this an incident or hazard?
The article describes the use of an AI system in a military conflict context to identify targets for bombing, which reportedly resulted in marking tens of thousands of Palestinians as suspected militants and an allegedly high civilian kill ratio. This indicates direct involvement of AI in decisions causing harm to people, including possible violations of human rights and harm to communities. Therefore, this qualifies as an AI Incident due to the realized harm linked to the AI system's use.

UN Chief 'Deeply Troubled' By Reports Israel Using AI To Identify Gaza Targets

2024-04-05
How Africa News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in military targeting that has resulted in civilian casualties and wrongful killings, which constitutes harm to people and violations of human rights. The AI system's role in identifying targets with limited human oversight directly contributed to these harms. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm and rights violations.

Israel-Palestinians-conflict-UN-technology-AI

2024-04-05
nampa.org
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems by Israel to identify targets in Gaza, leading to many civilian deaths. This is a direct harm to people caused by the use of an AI system, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal in the targeting process that caused the harm. Therefore, this event is classified as an AI Incident.

Israel-Palestinians-conflict-UN-technology-AI

2024-04-05
nampa.org
Why's our monitor labelling this an incident or hazard?
The report explicitly states that AI is being used as a tool in military targeting, which directly relates to the use of an AI system. The military bombing campaign inherently involves harm to people and communities. Since the AI system's use is directly linked to identifying targets for bombing, which causes harm, this qualifies as an AI Incident under the framework's definition of harm to persons or groups resulting from AI use.

UN chief 'deeply troubled' by reports of Israeli military using AI to identify targets in Gaza

2024-04-06
News9live
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI technology by the Israeli military to identify targets, sometimes with minimal human oversight, which has resulted in civilian casualties and violations of protocols. This constitutes direct harm to people and breaches of human rights, fulfilling the criteria for an AI Incident. The involvement of AI in lethal targeting decisions and the resulting harm to civilians and aid workers clearly meets the definition of an AI Incident as the AI system's use has directly led to harm and rights violations.

UN Secretary-General rejects the Israeli occupation's use of artificial intelligence in the Gaza war

2024-04-05
صحيفة سبق الالكترونية
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI systems by a military actor for targeting in an active conflict, which directly relates to potential harm to human life and violations of human rights. Although the article does not confirm actual harm, the use of AI in lethal targeting plausibly leads to significant harm, qualifying this as an AI Hazard. The UN Secretary-General's statement highlights the risk and ethical concerns, reinforcing the plausible future harm from AI use in warfare.

Guterres denounces Israel's tying of kill orders in Gaza to artificial intelligence

2024-04-05
Aljazeera
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used by the Israeli military to select targets, which has resulted in a high number of civilian deaths in Gaza. This is a direct harm to people caused by the use of AI in a military context. The involvement of AI in lethal targeting decisions and the resulting civilian casualties meet the criteria for an AI Incident under the OECD framework, specifically harm to people and violations of human rights. The UN Secretary-General's condemnation further supports the assessment of realized harm linked to AI use.

Report: How Israel used artificial intelligence to identify the Palestinians it wanted killed in Gaza

2024-04-05
euronews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Lavender') explicitly mentioned as being used by the Israeli military to identify targets in Gaza. The use of this AI system directly led to harm (killing and injuring civilians), which is a clear AI Incident under the framework. The harms include injury and death to persons, violations of human rights, and harm to communities. The report also includes authoritative concern from the UN Secretary-General, reinforcing the seriousness of the incident. Therefore, this event qualifies as an AI Incident due to the direct causal link between the AI system's use and realized harm.

Guterres concerned over Israel's use of artificial intelligence in the Gaza war

2024-04-05
سكاي نيوز عربية
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system ('Lavender') used by the Israeli military to process large amounts of data and identify targets, including alleged militants, resulting in strikes that killed many civilians. The AI system's role in selecting targets and the subsequent use of unguided bombs causing civilian deaths clearly meets the definition of an AI Incident, as it directly led to injury and harm to groups of people and harm to communities. The ethical and legal concerns further support the classification as an incident rather than a hazard or complementary information.

Guterres: Reports indicate Israeli bombardment relies on artificial intelligence

2024-04-05
وكاله عمون الاخباريه
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli military to identify targets, including in densely populated residential areas, resulting in elevated civilian harm. This constitutes direct harm to people caused by the use of an AI system, fitting the definition of an AI Incident. The involvement of AI in lethal targeting decisions and the resulting civilian casualties meet the criteria for harm to persons (a).

The United Nations comments on the occupation army's use of artificial intelligence in the bombing of Gaza

2024-04-05
بوابة فيتو
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli military to select targets, including in densely populated civilian areas, resulting in significant civilian harm. This constitutes direct harm to people caused by the use of an AI system, fulfilling the criteria for an AI Incident. The involvement of AI in lethal targeting decisions and the resulting civilian casualties clearly meet the definition of an AI Incident under harm category (a) injury or harm to the health of a person or groups of people.

الأمم المتحدة "قلقة" بعد معلومات عن استخدام إسرائيل للذكاء الاصطناعي في حرب غزة

2024-04-05
الحرة
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli military to assist in target identification during airstrikes in Gaza, which has resulted in civilian casualties and destruction. This is a direct link between the use of an AI system and harm to human life, fulfilling the criteria for an AI Incident. The involvement of AI in military targeting decisions that cause injury or death to civilians is a clear example of harm (a) under the AI Incident definition. The article also references international concern about potential war crimes linked to this AI use, reinforcing the severity of the harm caused.

Guterres attacks the Israeli army's use of artificial intelligence in the Gaza war

2024-04-05
البيان
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used by the Israeli army to select targets, which has resulted in elevated civilian harm. This constitutes direct harm to people caused by the AI system's use in a military context. The involvement of AI in lethal targeting decisions and the resulting civilian casualties meet the criteria for an AI Incident under the OECD framework, as it involves harm to people and potential violations of human rights. Therefore, this event is classified as an AI Incident.

Guterres expresses 'concern' over the Israeli army's use of artificial intelligence in its war on Gaza

2024-04-05
Alwasat News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli military in targeting during the conflict in Gaza, which has resulted in increased civilian harm. This constitutes an AI Incident because the AI system's use in military operations has directly led to injury and harm to groups of people, fulfilling the criteria for harm (a). The concern about linking life-or-death decisions to algorithmic calculations further supports the direct involvement of AI in causing harm.

Guterres: Concern over Israel's use of artificial intelligence in the Gaza war

2024-04-05
أهل مصر
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli army to select targets in a conflict zone, resulting in high civilian casualties. This constitutes direct harm to people caused by the use of an AI system, fulfilling the criteria for an AI Incident. The involvement of AI in lethal targeting decisions and the resulting loss of civilian life is a clear example of harm (a) under the AI Incident definition.

Guterres expresses grave concern over the Israeli army's use of artificial intelligence to kill civilians in Gaza

2024-04-05
سما الإخبارية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli army to select targets in a conflict zone, resulting in high civilian casualties and destruction of property. This is a direct harm to human life and communities caused by the AI system's use. The UN Secretary-General's concern highlights the ethical and humanitarian risks of delegating life-or-death decisions to AI algorithms. Therefore, this qualifies as an AI Incident due to direct harm caused by AI use in military targeting.

Guterres expresses concern over the occupation's use of artificial intelligence in its assault on Gaza

2024-04-05
WAFA Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used by the Israeli military to select bombing targets in Gaza, resulting in civilian deaths and destruction of homes. This is a direct use of AI in a military context causing injury and harm to people and communities, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's involvement is central to the event. The ethical and legal concerns raised further support the classification as an incident rather than a hazard or complementary information.

UN Secretary-General expresses concern over reports that the Zionist entity is using artificial intelligence to identify targets in Gaza

2024-04-05
وكالة تونس افريقيا للانباء
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions the use of AI for target identification in a conflict zone, resulting in high civilian casualties and suffering. The AI system's use in targeting decisions has directly led to harm to human life and communities, fulfilling the criteria for an AI Incident. The involvement of AI in lethal military operations causing injury and death is a clear case of harm as defined. Therefore, this event is classified as an AI Incident.

Guterres expresses 'grave concern' over Israel's use of artificial intelligence in the assault on Gaza

2024-04-05
الرأي
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli military to identify targets in a conflict zone, leading to significant civilian harm. This is a direct involvement of an AI system in causing injury or harm to groups of people, fulfilling the criteria for an AI Incident. The concern about AI making life-or-death decisions further underscores the severity of the harm and the AI system's pivotal role.

After confirmation of Israel's use of 'artificial intelligence' in the Gaza war, the 'United Nations' voices its concern

2024-04-05
كتابات
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli military to assist in targeting decisions during the Gaza conflict, which has resulted in civilian casualties and destruction. The Secretary-General and the UN Human Rights Council express concern that this AI use may contribute to international crimes. The harm to civilians and humanitarian workers is direct and significant, fulfilling the criteria for an AI Incident. The AI system's use in military targeting is the direct cause of harm, meeting the definition of an AI Incident under the OECD framework.

غوتيريش "قلق" حيال استخدام الذكاء الاصطناعي في حرب غزة

2024-04-05
Azzaman
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used for military targeting that has directly or indirectly led to harm to civilians, which constitutes injury or harm to groups of people. The use of AI in lethal targeting decisions with reported civilian casualties fits the definition of an AI Incident, as the AI system's use has directly or indirectly led to harm to people. The expressed concerns and reported civilian losses confirm realized harm rather than just potential harm.

Guterres comments on a British report about Israel's use of artificial intelligence to identify targets in Gaza

2024-04-05
Beirut Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli military for target identification, which has resulted in increased civilian harm. This fits the definition of an AI Incident because the AI system's use in military operations has directly led to injury and harm to groups of people. The concern about algorithmic decision-making in lethal contexts further supports the classification as an AI Incident rather than a hazard or complementary information.

UN warns against Israel's use of artificial intelligence as a weapon in its war on Gaza

2024-04-05
وكالة نيو ترك بوست الاخبارية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in military attacks, which involves the use of an AI system in a conflict setting. The Secretary-General warns that such use could lead to increased civilian harm and ethical violations, indicating a plausible risk of significant harm. Although the article does not report a specific incident of AI malfunction or misuse causing harm, the context implies a credible risk of harm due to AI's role in lethal decision-making. Therefore, this qualifies as an AI Hazard, as the use of AI as a weapon in war could plausibly lead to serious harm to civilians and violations of human rights.

Guterres concerned: The situation in the Palestinian enclave is utterly dire

2024-04-05
elmarada
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used by the Israeli military to select targets, which has resulted in civilian harm, fulfilling the criteria for an AI Incident. The harm is direct (civilian casualties) and involves violations of human rights. The UN Secretary-General's statement underscores the gravity of delegating lethal decisions to AI, confirming the AI system's pivotal role in causing harm. Therefore, this event is classified as an AI Incident.

Guterres concerned over reports of Israel's use of artificial intelligence in the Gaza war

2024-04-05
قناة المملكة
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli army for target identification in a conflict zone, which has directly led to increased civilian casualties, a clear harm to people. This fits the definition of an AI Incident, as the AI system's use in military operations has directly caused harm to human life. The concern about linking life-or-death decisions to algorithmic calculations further supports the classification as an incident involving AI misuse or malfunction leading to harm.

Israel-Hamas war live: Fighting in Gaza, deaths and more

2024-04-04
CNN Español
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Lavender') used in military targeting decisions, which directly or indirectly has led to harm to people (civilian deaths) and potential violations of human rights and international law. The AI system's use with minimal supervision and the resulting civilian casualties constitute an AI Incident as per the definitions, since harm has occurred and the AI system's role is pivotal in the harm caused.

The artificial intelligence Israel uses to kill Hamas members: "It does it coldly"

2024-04-04
as
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used by the Israeli military to detect targets and authorize attacks that caused civilian deaths and destruction. The AI system's role in identifying targets and enabling attacks with 'dumb bombs' that killed civilians directly links it to harm to persons and communities. This meets the criteria for an AI Incident because the AI system's use directly led to injury and death (harm to health) and harm to communities. The involvement is in the use of the AI system in military operations causing realized harm, not just potential harm. Therefore, this event is classified as an AI Incident.

Israeli military uses AI to select targets in Gaza with a human operator's 'rubber stamp': report

2024-04-03
Gizmodo en Español
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Lavender) used in military targeting decisions that has directly caused harm to people, including civilian casualties and destruction of property. The AI's role in generating kill lists and influencing bombing decisions constitutes direct involvement in causing injury and harm to groups of people, as well as harm to communities and potential violations of human rights. Therefore, this qualifies as an AI Incident under the OECD framework.

Israel attacks Hamas with an artificial intelligence that makes no attempt to prevent civilian deaths

2024-04-05
El Español
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system used in military targeting decisions that directly led to the deaths of thousands of civilians, which is a severe harm to persons and communities. The AI system's role was pivotal in generating target lists with a high error margin and minimal human review, leading to disproportionate civilian casualties. This meets the definition of an AI Incident because the AI system's use directly caused injury and harm to groups of people. The harm is realized, not just potential, and the AI system's involvement is central to the event.

Israel used an artificial intelligence to mark 37,000 Palestinians as assassination targets

2024-04-04
El Periódico
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system developed and used by the Israeli military to identify human targets for lethal strikes, which directly caused injury and death to thousands of people, including civilians. This constitutes direct harm to persons and communities, as well as violations of human rights. The AI system's outputs were relied upon heavily, effectively replacing human judgment, which led to significant harm. Therefore, this event meets the criteria for an AI Incident due to the direct causal link between the AI system's use and the resulting harms.

Israel uses AI in the Gaza massacre and the fight against Hamas: "We have killed people with collateral damage in the double or triple digits"

2024-04-04
eldiario.es
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in a military context to select targets, which directly led to harm to civilians, fulfilling the criteria for an AI Incident. The AI system's outputs influenced lethal decisions causing injury and death, which is a direct harm to people. Therefore, this is classified as an AI Incident due to the direct link between AI use and realized harm.

Israel may be using AI to find bombing targets in Gaza, report says

2024-04-04
El Diario Nueva York
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used by the Israeli military to identify bombing targets, which is an AI system by definition. The AI's involvement in target identification has directly or indirectly contributed to harm, including civilian deaths and a humanitarian crisis in Gaza, fulfilling the criteria for harm to people and communities. The AI system's 10% error rate further underscores the risk and actual harm caused. Despite the military's claim that the AI is not used to identify terrorists, the AI's role in generating targeting information that leads to bombings causing civilian casualties is sufficient to classify this as an AI Incident. The harm is materialized and significant, not merely potential, and the AI system's role is pivotal in the chain of events leading to harm.

Israeli army revealed to be using artificial intelligence to select targets to kill in Gaza

2024-04-04
Democracy Now!
Why's our monitor labelling this an incident or hazard?
The described AI systems are explicitly used to select and track targets for lethal military strikes, which directly causes harm to individuals and likely involves violations of human rights. The AI's role in facilitating targeted killings with minimal human supervision constitutes an AI Incident under the framework, as it directly leads to injury or harm to persons and breaches fundamental rights.

How the artificial intelligence programs Israel used to compile kill lists of Palestinians, and to bomb them while they were in their homes, work

2024-04-05
Democracy Now!
Why's our monitor labelling this an incident or hazard?
The described AI systems ('Lavender', 'Where's Daddy?', and 'The Gospel') are explicitly used to identify and target individuals for lethal attacks and to destroy civilian infrastructure, directly causing injury and death, as well as harm to communities and violations of human rights. The article reports actual harm caused by these AI-enabled military operations, not just potential harm. Therefore, this event qualifies as an AI Incident due to the direct and significant harm caused by the AI systems' use in lethal targeting and destruction.

Israel used artificial intelligence to identify 37,000 Palestinians as assassination targets

2024-04-04
El Correo Gallego
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI system ('Lavender') was used to identify targets for assassination, with military forces relying on its outputs as if they were human decisions. The AI system's use directly led to the deaths of thousands of Palestinians, including civilians, which is a clear harm to human life and a violation of human rights. The AI system's involvement in lethal decision-making and the resulting civilian casualties meet the criteria for an AI Incident, as the harm is realized and the AI's role is pivotal in causing it.


2024-04-03
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Lavender) used in military targeting that directly contributed to harm: thousands of civilian deaths and destruction of homes in Gaza. The AI system was used to identify targets and set parameters for acceptable civilian casualties, leading to disproportionate and widespread harm. This meets the definition of an AI Incident because the AI system's use directly led to injury and harm to groups of people, harm to communities, and violations of human rights. The involvement is in the use of the AI system for targeting decisions, and the harm is realized and significant. Thus, the event is classified as an AI Incident.

Israel's army reportedly defined human targets in Gaza with the help of AI

2024-04-04
Sputnik Mundo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used in military targeting that directly led to civilian deaths and destruction, fulfilling the criteria for an AI Incident. The AI system's role was pivotal in identifying targets and authorizing attacks with known civilian harm. The harm includes injury and death to persons, harm to communities, and violations of human rights. The involvement of AI in these lethal decisions and the resulting casualties clearly meet the definition of an AI Incident rather than a hazard or complementary information.


2024-04-04
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used in military targeting decisions, with reports alleging that these systems have led to civilian deaths, a clear harm to persons and communities. The AI systems are described as influencing or directing lethal actions, with human analysts acting as a 'rubber stamp,' indicating the AI's pivotal role. Although the Israeli military denies these claims, the article presents detailed allegations from multiple sources. Given the direct link between AI use and reported harm, this qualifies as an AI Incident under the OECD framework.


2024-04-04
esdelatino.com
Why's our monitor labelling this an incident or hazard?
Lavender is an AI system used in military targeting that has directly led to harm to groups of people, specifically Palestinians marked for lethal action. The use of AI in this context implicates serious human rights concerns and potential violations of international humanitarian law. The system's role in authorizing killings constitutes direct harm to persons, fitting the definition of an AI Incident. Although the military denies the use of AI for identifying terrorists, the report states the system was used for this purpose, and the AI's involvement in lethal targeting is central to the event.

US examines report that Israel used AI to identify bombing targets in Gaza

2024-04-05
MarketScreener
Why's our monitor labelling this an incident or hazard?
The event involves an AI system reportedly used by the Israeli military to identify bombing targets, which directly relates to the use of AI. The alleged use of AI to mark tens of thousands of civilians as suspects with limited human oversight implies a direct or indirect role of AI in causing harm to people and communities, including potential violations of human rights and loss of life. Despite official denials, the report's content and the serious consequences described meet the criteria for an AI Incident. The involvement of AI in lethal military targeting with insufficient oversight is a clear case of AI-related harm. Therefore, this event is classified as an AI Incident.

Israel employs AI in the Gaza massacre and the fight with Hamas: "We kill people with collateral damage in the double or triple digits"

2024-04-05
elDiarioAR.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Lavender') in the identification of targets that led to military strikes causing thousands of civilian deaths and destruction in Gaza. This constitutes direct harm to people and communities, as well as potential violations of human rights and international humanitarian law. The AI system's role is pivotal in the chain of events leading to these harms, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, so it cannot be classified as a hazard or complementary information. Therefore, the event is classified as an AI Incident.


2024-04-06
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as being used to identify targets for lethal military action. The AI's outputs have directly led to harm to persons and communities through targeted killings and civilian casualties. The system's use with minimal human oversight and the acceptance of civilian casualties as collateral damage indicate a direct link between the AI system's use and significant harm. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's use has directly led to injury and harm to groups of people and harm to communities.

This is how Israel uses artificial intelligence to bomb Gaza

2024-04-07
Ara en Castellano
Why's our monitor labelling this an incident or hazard?
The article explicitly details AI systems (Lavender, Habsora, and others) used in military targeting that have directly led to large-scale civilian deaths and destruction in Gaza. The AI systems generate target lists and trigger bombings, with human operators acting mainly as executors of AI decisions. The harms include injury and death to civilians, destruction of property, and violations of fundamental human rights. The AI's involvement is direct and central to the harm caused, fulfilling the criteria for an AI Incident under the OECD framework.

Israel Allegedly Used AI Technology to Identify Bombing Targets in Gaza - Pikiran Rakyat Depok

2024-04-04
Pikiran Rakyat Depok
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (Lavender, The Gospel, and Where Is Daddy) by the Israeli military to identify bombing targets, which directly leads to harm (injury or death) in Gaza. The AI systems are used operationally to select targets for bombing, which is a direct causal factor in harm to people. Therefore, this event qualifies as an AI Incident under the definition of AI systems causing or contributing to injury or harm to groups of people.

Israel Uses Lavender Artificial Intelligence to Kill Gazans, Including 37,000 Hamas Targets - Serambinews.com

2024-04-05
Serambi Indonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI system (Lavender) was used to identify 37,000 targets, including civilians, for lethal military strikes in Gaza. The use of AI in this context directly led to injury and death of persons, including civilians, which is a clear harm to health and a violation of human rights. The AI system's involvement is central to the harm, as it automates and accelerates target selection, including pre-authorization of civilian casualties. This meets the criteria for an AI Incident as defined, involving direct harm and rights violations caused by the AI system's use.

Israel Uses Lavender AI Program to Target Bombings That Killed Thousands of Civilians in Gaza : Okezone Techno

2024-04-04
https://techno.okezone.com/
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used for targeting in military operations, which has directly led to large-scale civilian casualties and harm. The AI system's role in generating bombing targets with minimal human oversight constitutes direct involvement in causing harm to people and communities, fulfilling the criteria for an AI Incident. The harm is realized and significant, including deaths of civilians and children, and potential violations of human rights. Therefore, this event is classified as an AI Incident.

How Lavender Works, the Israeli AI System That Targets Palestinians in Gaza for Killing : Okezone Techno

2024-04-04
https://techno.okezone.com/
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Lavender) used in military targeting decisions that has directly led to the deaths of civilians, constituting injury and harm to persons and violations of human rights. The AI system's role is pivotal in these harms, as it automates target identification with minimal human review, leading to wrongful killings. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's use has directly caused significant harm to people and breaches of fundamental rights.

Israel Uses AI to Determine Bombing Targets in Gaza

2024-04-06
detikinet
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Lavender) used by the Israeli military to identify targets for bombing in Gaza. The system's development and use have directly led to harm: wrongful targeting and killing of civilians, destruction of homes, and collateral damage. The AI system's 10% error rate means innocent people are being killed or harmed due to misclassification. This is a clear case of an AI Incident as the AI system's use has directly caused injury and harm to people and communities. The military's use of AI for lethal targeting with known inaccuracies and resulting civilian casualties fits the definition of an AI Incident involving harm to persons and communities.

US Investigates Report of Israel Using AI to Bombard Gaza; UN Secretary-General Appalled

2024-04-06
CNN Indonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in military targeting that has directly contributed to harm to civilians, fulfilling the criteria for an AI Incident. The AI system's use in identifying targets for bombing, resulting in civilian casualties, constitutes injury or harm to groups of people (harm category a) and potential violations of human rights (category c). Although the military denies full AI autonomy, the system's pivotal role in target selection and the minimal human oversight described indicate the AI system's involvement in causing harm. Therefore, this event is classified as an AI Incident.

How Israel Abused the 'Lavender' AI to Attack Gaza

2024-04-07
CNN Indonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used by the Israeli military to identify targets for bombing in Gaza, which has led to thousands of civilian deaths. This is a clear case where the AI system's use has directly led to harm to people and communities, fulfilling the criteria for an AI Incident. The involvement of AI in lethal military operations causing civilian casualties and potential violations of international law and human rights confirms the classification as an AI Incident rather than a hazard or complementary information.

Six Months of Genocide in Gaza: Israel Reportedly Used AI to Massacre Civilians

2024-04-04
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Lavender) in military targeting that has directly led to the deaths of tens of thousands of civilians, including children, which is a clear harm to human life and communities. The AI system's involvement in generating target lists with a known error rate and the minimal human oversight in approving strikes indicate that the AI's use is a direct contributing factor to the harm. This fits the definition of an AI Incident, as the AI system's use has directly led to injury and harm to groups of people and violations of human rights. The scale and nature of the harm are significant and clearly articulated, and the AI system's role is pivotal in the incident.

US Investigates Report That Israel Used AI to Identify Bombing Targets in Gaza

2024-04-05
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Lavender') in military operations to identify bombing targets, which directly leads to harm to people and communities in Gaza. The AI system's role in marking individuals and homes as targets for airstrikes is a direct cause of physical harm and potential violations of human rights. Despite official denial, the investigative report and sources indicate the AI system's use and its impact. Therefore, this qualifies as an AI Incident due to the direct link between AI use and realized harm.

UN Secretary-General Concerned Over Israel's Use of AI in the Gaza War

2024-04-06
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in military targeting that has resulted in thousands of civilian deaths and injuries, as well as destruction in Gaza. The AI system's role in identifying targets that include civilians and their homes directly links it to harm to people and communities, fulfilling the criteria for an AI Incident. The involvement is in the use of the AI system, and the harm is realized and significant, including loss of life and violation of rights. Therefore, this event is classified as an AI Incident.

Outrageous! Israeli Military Allegedly Uses AI to Identify Targets in Gaza

2024-04-04
Bisnis Indonesia Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Lavender) for military targeting decisions that have directly led to significant civilian casualties and injuries, fulfilling the criteria for harm to persons and violations of human rights under international law. The AI system's role in accelerating target identification and the reported error rate indicate malfunction or misuse contributing to the harm. This is a clear AI Incident as the AI system's use has directly caused injury and potential breaches of legal and ethical standards.

UN Secretary-General Concerned Over Israel's Alleged Use of AI to Identify Targets in Gaza

2024-04-06
https://www.metrotvnews.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Lavender') for target identification in a military context, which has directly led to significant civilian casualties, a clear harm to people and communities. The AI system's outputs were reportedly treated as human decisions, leading to permissive policies on civilian deaths. This meets the criteria for an AI Incident as the AI system's use has directly or indirectly caused harm to human life and violated human rights. The denial by the military does not negate the reported harm and AI involvement. Hence, the event is classified as an AI Incident.

Lavender, the AI Machine Guiding Israel's Military in Massacring 33,000 Gaza Residents

2024-04-05
SINDOnews.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned ('Lavender') used in military targeting decisions. The AI's outputs have directly led to the killing of thousands of civilians and destruction of homes, which is a direct harm to people and communities. The system's error rate and lack of human oversight further exacerbate the harm. This meets the criteria for an AI Incident because the AI system's use has directly caused significant harm (death and destruction) to a large group of people, fulfilling the harm categories (a) injury or harm to health of persons and (d) harm to communities and property.

Israel Uses AI Called Lavender to Identify 37,000 Hamas Targets

2024-04-04
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lavender) explicitly described as being used to identify military targets, which directly led to harm to civilians and communities in a conflict zone. The AI's role is central and pivotal in causing this harm, fulfilling the criteria for an AI Incident. The harm includes injury and death to persons and potential violations of human rights, which are among the defined harms. The use of AI in lethal targeting with admitted civilian casualties clearly meets the threshold for an AI Incident rather than a hazard or complementary information.

How Is AI Used by Israel in the War Against Hamas?

2024-04-05
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lavender) used in military operations to identify targets, which directly influenced lethal actions resulting in civilian deaths and destruction of property. The AI's involvement in identifying targets and enabling strikes that caused harm to people and communities meets the criteria for an AI Incident. The harms are realized and significant, including loss of life and destruction of homes, which are clearly articulated harms. Despite military denials of direct AI targeting, the system's pivotal role in the chain of events leading to harm is established by multiple sources. Hence, the classification as AI Incident is appropriate.

Heinous! Israel Uses AI Database to Select Targets for Massacring Gaza's Population - Media Pakuan

2024-04-05
Media Pakuan
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Lavender) in military targeting decisions that directly lead to harm to civilians and potential war crimes, which are violations of human rights. The AI system's role is pivotal in identifying targets for bombing, and the harm is realized or ongoing. Therefore, this qualifies as an AI Incident under the framework, as it involves direct harm to people and violations of rights caused by the AI system's use.

"Le sort de centaines de milliers de civils à Gaza est entre les mains de l'intelligence artificielle"

2024-04-04
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used by the Israeli military to select targets in Gaza, leading to the deaths of thousands of civilians, including women and children. The AI systems' outputs were used without human verification, causing wrongful bombings and mass civilian casualties. This constitutes direct harm to people and communities, as well as violations of human rights. The AI's role is pivotal in the harm described, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI systems' malfunction or misuse is central to the event.

Gaza: tens of thousands of targets identified by AI for the Israeli army

2024-04-05
Le Monde.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for lethal targeting decisions in a military conflict, leading to actual harm including civilian deaths and collateral damage. The AI's role is central and pivotal in identifying targets and timing attacks, with a significant error margin that risks wrongful killings. This constitutes direct harm to persons and communities, as well as potential violations of human rights. Therefore, this event qualifies as an AI Incident under the OECD framework.

War in the Gaza Strip: Israeli media claim Tsahal uses an AI to choose the targets of its strikes

2024-04-05
Franceinfo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used by the military to identify and select targets for bombing, which directly relates to harm to persons and communities (harm categories a and d). The system's outputs are used to make lethal decisions, with reported errors affecting target selection, indicating the AI's role in causing or contributing to harm. The event involves the use of AI in a military context with direct consequences on human life, fitting the definition of an AI Incident. The concerns expressed by the UN Secretary-General further underscore the gravity of the harm involved.

Interview. "In Gaza, the Lavender AI designated 37,000 human targets," says the magazine "+972"

2024-04-04
Courrier international
Why's our monitor labelling this an incident or hazard?
The AI system 'Lavender' is explicitly mentioned as being used to identify human targets for lethal military action. The system's designation of targets directly leads to harm to people, including potential wrongful deaths, which constitutes injury or harm to persons and violations of human rights. The minimal human verification and known error rate increase the risk of harm. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to significant harm and rights violations.

War in Gaza: Israel reportedly resorting to AI

2024-04-05
Radio Canada
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used by the Israeli military to identify targets for bombing in Gaza. The system's outputs have directly led to the deaths of thousands of Palestinians, including civilians, which is a clear harm to health and communities. The use of AI with minimal human oversight and acceptance of collateral damage indicates the AI system's role is pivotal in causing these harms. Therefore, this event qualifies as an AI Incident under the OECD framework, as it involves the use of an AI system whose outputs have directly led to significant harm to people and communities, as well as potential violations of human rights.

Israel: the army reportedly relies on an artificial intelligence to define its targets in Gaza

2024-04-05
Le Point.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used by the military to select targets, which has directly led to harm including deaths of civilians and destruction of buildings. The AI's role is pivotal in the targeting process, and the harms described include injury and death to persons, harm to communities, and property damage. The lack of thorough human oversight and the AI's error rate contribute to these harms. Therefore, this qualifies as an AI Incident under the OECD framework.

The Israeli army identified tens of thousands of targets in Gaza with the help of AI - Le Temps

2024-04-05
Le Temps
Why's our monitor labelling this an incident or hazard?
The AI system named Lavender was used to generate assassination targets, leading to airstrikes that harmed many individuals and their families. This is a clear case where the AI system's use directly led to harm to persons and communities, fulfilling the criteria for an AI Incident under the OECD framework.

Israel uses AI to choose targets in Gaza - report -- RT World News

2024-04-04
News 24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used to select targets for lethal strikes, which has directly led to the deaths of thousands of civilians, including women and children, thus causing injury and harm to groups of people (harm category a) and violations of human rights (category c). The AI system's outputs are used to make lethal decisions with insufficient human review, making the AI's role pivotal in the harm caused. Therefore, this event qualifies as an AI Incident.

'Lavender', the Israeli artificial intelligence used to target civilians in Gaza

2024-04-04
L'Orient-Le Jour
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems ('Lavender' and 'Where's Daddy?') used by the military for targeting individuals, which directly led to airstrikes causing thousands of deaths, including civilians. This is a clear case where AI use in military operations caused injury and harm to people (harm category a) and violations of human rights (category c). The AI systems' outputs were heavily relied upon with minimal human oversight, leading to wrongful targeting and collateral damage. The harm is realized and significant, meeting the criteria for an AI Incident rather than a hazard or complementary information.

" Lavender ", l'intelligence artificielle qui dirige les bombardements israéliens à Gaza - L'Humanité

2024-04-04
L'Humanité
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies an AI system (Lavender) used in military targeting decisions that directly led to harm: thousands of Palestinians, including civilians, were killed in airstrikes based on AI-generated target lists. The AI system's outputs were treated as authoritative with minimal human oversight, resulting in wrongful targeting and civilian casualties. This constitutes direct harm to people and communities, as well as violations of human rights. Therefore, this event qualifies as an AI Incident under the OECD framework, as the AI system's use directly led to significant harm.

" Saint Graal " génocidaire : comment Israël se sert de l'IA pour cibler des milliers de victimes

2024-04-04
Révolution Permanente
Why's our monitor labelling this an incident or hazard?
The article explicitly details the use of AI systems in military targeting that has directly led to harm, including mass civilian deaths and destruction of property. The AI system's outputs are used to make lethal decisions with a high margin of error, causing violations of human rights and harm to communities. This fits the definition of an AI Incident, as the AI's use has directly led to significant harm and breaches of fundamental rights.

" Des cibles à l'infini " : les Israéliens utilisent le système Lavender dans le génocide à Gaza

2024-04-07
Chronique de Palestine
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Lavender) in military targeting that has directly led to the deaths of a large number of civilians in Gaza. This constitutes injury and harm to groups of people (harm to health and life), violations of human rights, and harm to communities. The AI system's development and use are central to the incident, as it facilitated rapid identification and targeting of individuals, including civilians, with lethal force. Therefore, this event meets the criteria for an AI Incident due to the direct and significant harm caused by the AI system's deployment in a conflict setting.

Lavender, Gospel, Depth of Wisdom, Firefactory: how AI is used by Israel in its war in Gaza

2024-04-06
Libération
Why's our monitor labelling this an incident or hazard?
The article explicitly details AI systems used in military targeting and strike operations that have directly caused harm to individuals and communities through lethal airstrikes. The AI systems generate target lists and automate strike timing, leading to thousands of deaths and destruction, which constitutes injury and harm to groups of people and harm to communities. The AI's role is pivotal in these harms, fulfilling the criteria for an AI Incident. The presence of AI is explicit, the use is operational in warfare, and the harms are realized and ongoing. Therefore, the classification is AI Incident.

UN chief 'deeply concerned' over reports that Israel is using AI in Gaza - La Razón

2024-04-05
La Razón (Bolivia)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Lavender' and other AI tools) by the Israeli military to identify targets in Gaza. This AI use has directly led to significant civilian deaths and injuries, constituting harm to persons and potential violations of human rights and international law. The article details realized harm caused by AI-enabled targeting decisions, meeting the criteria for an AI Incident. The involvement is not speculative or potential but ongoing and causing actual harm, thus it is not a hazard or complementary information.

Lavender, the artificial intelligence directing Israel's bombings

2024-04-05
Newsweek México
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Lavender') used in military targeting decisions, which directly led to harm (civilian casualties) and violations of human rights. The AI's error margin and the systematic use of its recommendations for lethal strikes in residential areas constitute an AI Incident as per the definitions. The involvement of AI in causing injury and violations of rights is explicit and central to the report.

UN chief concerned by reports of Israeli use of AI to identify targets in Gaza

2024-04-06
Xinhuanet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli military to identify targets, which has led to a high level of civilian casualties in Gaza. This constitutes harm to people and communities, fulfilling the criteria for an AI Incident. The AI system's involvement is in its use during military targeting decisions, which directly contributed to the harm. The severity and scale of the harm (thousands of deaths and injuries) further support classification as an AI Incident rather than a hazard or complementary information.

Antonio Guterres "deeply concerned" by reports that Israel is using AI in Gaza

2024-04-05
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems by the Israeli military to identify targets, which has directly led to a high number of civilian casualties, constituting harm to people and communities. This meets the criteria for an AI Incident because the AI system's use in targeting decisions has directly caused injury and harm to groups of people. The involvement is in the use of the AI system, and the harm is realized, not just potential.

Guterres denounces Israel's use of Artificial Intelligence to bomb Gaza

2024-04-06
La Vanguardia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used by the Israeli military to select targets for bombing, which has resulted in civilian casualties and deaths of humanitarian workers. This is a direct link between AI use and harm to people and communities. The Secretary-General's condemnation and call for accountability further confirm the significance of the harm caused. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI in causing harm.

The Israeli army reportedly uses AI to define targets to bomb

2024-04-06
Perfil
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used for target identification in military operations, with reports indicating that this use has caused many civilian deaths. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to people and communities, including violations of human rights. The military's own statements confirm the use of AI-enhanced targeting, and the reported scale of civilian casualties confirms realized harm. Therefore, this event is classified as an AI Incident.

UN concerned by reports accusing Israel of using Artificial Intelligence to identify targets in Gaza

2024-04-05
El Economista
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Lavender') by the Israeli military to identify targets, which has caused a significant number of civilian casualties, constituting harm to people and communities. This is a direct link between the AI system's use and realized harm, fitting the definition of an AI Incident. The concern about life-or-death decisions delegated to AI further supports the severity of the harm caused.

UN concerned over use of AI in the Gaza War | El Imparcial

2024-04-06
El Imparcial
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used to identify targets in a conflict zone, with the AI outputs influencing lethal military decisions. This use has reportedly resulted in civilian casualties, constituting harm to persons and potential violations of human rights and international law. The AI system's role is pivotal in these harms, meeting the criteria for an AI Incident. Although the Israeli military denies direct AI use for targeting, the credible reports and UN concern support classification as an AI Incident due to direct or indirect harm caused by AI use in warfare.

Concern at the UN over Israel's use of AI in Gaza

2024-04-06
Diario El Día
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in a military context to identify targets, which has directly resulted in significant civilian deaths and injuries, a clear harm to people and communities. The AI system's role is pivotal as it influences life-or-death decisions and the scale of collateral damage. The article explicitly links the AI system's use to realized harm, not just potential harm, thus qualifying as an AI Incident rather than a hazard or complementary information.

Concern at the UN over Israel's use of AI in Gaza

2024-04-06
Diario Democracia
Why's our monitor labelling this an incident or hazard?
The presence of an AI system ('Lavender') used for target identification in a military context is explicitly mentioned. The use of AI in making lethal decisions that result in a large number of civilian deaths constitutes direct harm to people and violations of human rights. Despite the denial by the Israeli military, the credible allegations and reported consequences meet the criteria for an AI Incident, as the AI system's use has directly or indirectly led to significant harm. The harm is materialized, not just potential, and involves serious human rights violations and loss of life, fitting the definition of an AI Incident.

Lavender: the artificial intelligence machine directing Israel's bombing in Gaza

2024-04-09
Kaos en la red
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Lavender) used operationally to generate target lists for military bombings, with human operators largely rubber-stamping its decisions despite known error rates. This AI involvement directly led to significant harm, including civilian deaths and violations of fundamental rights. The harm is realized and substantial, meeting the criteria for an AI Incident. The AI system's development, use, and malfunction (errors) are all implicated in causing these harms, fulfilling the definition of an AI Incident rather than a hazard or complementary information.


2024-04-06
esdelatino.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in military targeting that has directly led to civilian deaths and harm in Gaza, which is a clear case of injury and harm to groups of people (criterion a) and harm to communities (criterion d). The use of AI in lethal targeting decisions without human accountability raises serious ethical and legal concerns. The harm is realized, not just potential, making this an AI Incident rather than a hazard or complementary information. The denial by the military does not negate the reported harm and AI involvement as described in the article.

US looking at media report that said Israel used AI to identify bombing targets in Gaza

2024-04-05
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The media report alleges the use of AI in lethal targeting decisions, which would directly lead to harm to people and possible violations of human rights, fitting the definition of an AI Incident. However, the denial by the IDF and lack of verification by the U.S. indicate that the AI system's involvement is not confirmed. Since the event centers on an unverified media report and official denial, it is best classified as Complementary Information providing context and updates on a potential AI-related issue rather than confirming an AI Incident or Hazard.
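Read together, these rationales apply a consistent precedence among the monitor's three labels: realized harm linked to an AI system yields an AI incident; credible but unconfirmed potential for harm yields an AI hazard; context or updates without direct harm yield complementary information. The sketch below shows that decision order in Python; the Event type, its field names, and the logic are illustrative assumptions, not the monitor's actual implementation.

from dataclasses import dataclass

@dataclass
class Event:
    ai_system_present: bool  # an AI system is explicitly identified in the report
    harm_realized: bool      # harm to people, rights, or property has materialized
    harm_plausible: bool     # the AI's use could plausibly lead to such harm

def classify(event: Event) -> str:
    # Hypothetical precedence: realized harm, then plausible harm, then context only.
    if not event.ai_system_present:
        return "Not AI-related"
    if event.harm_realized:
        return "AI incident"
    if event.harm_plausible:
        return "AI hazard"
    return "Complementary information"

# A confirmed AI-assisted strike with civilian casualties:
print(classify(Event(True, True, True)))   # -> AI incident
# An unverified, officially denied report of AI targeting:
print(classify(Event(True, False, True)))  # -> AI hazard

On this reading, the divergent labels across these entries (incident for most, hazard or complementary information for the disputed reports) come down to how each rationale answered the harm-realized question for the same underlying claims.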

US Looking at Report That Israel Used AI to Identify Bombing Targets in Gaza

2024-04-04
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The report describes an AI system used to identify bombing targets, which directly relates to harm to people and communities, including potential violations of human rights. The AI system's role in marking tens of thousands of Gazans as suspects for assassination with little human oversight indicates a direct link to harm. Despite denials and lack of verification, the described use and its consequences meet the criteria for an AI Incident because the AI system's use has directly or indirectly led to significant harm. The event is not merely a potential risk (hazard) or complementary information, but an incident involving realized or ongoing harm linked to AI use.

US looking at report that Israel used AI to identify Gaza bombing targets

2024-04-05
India Today
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions an AI system used to identify bombing targets, which involves AI system use. The alleged use of AI to mark tens of thousands of Gazans as suspects for assassination with little human oversight indicates a direct or indirect link to harm (injury, death, human rights violations). Even though the IDF denies AI use, the report's claims and the ongoing harm in the conflict context justify classification as an AI Incident. The AI system's role is pivotal in the targeting process, which has led to significant harm. The US investigation and denials do not negate the reported harm and AI involvement. Hence, this is an AI Incident rather than a hazard or complementary information.

U.S. looking at report that Israel used AI to identify bombing targets in Gaza

2024-04-05
The Hindu
Why's our monitor labelling this an incident or hazard?
The event involves the reported use of an AI system in military targeting that has directly or indirectly led to harm to people (civilian casualties) and possible violations of human rights, fulfilling the criteria for an AI Incident. The AI system's role in marking suspects for assassination with little human oversight implies direct involvement in harm. Despite denials and lack of official verification, the report's detailed claims and the context of ongoing conflict and casualties support classification as an AI Incident due to realized harm linked to AI use.

US looking at report that Israel used AI to identify bombing targets in Gaza

2024-04-05
ThePrint
Why's our monitor labelling this an incident or hazard?
The report explicitly states that an AI system was used to identify suspected extremists and targets, marking tens of thousands of Gazans as suspects with little human oversight. This AI involvement in lethal targeting has directly or indirectly led to significant harm, including deaths, displacement, and humanitarian crises. Despite official denial, the media report's claim is sufficient to classify this as an AI Incident because the AI system's use is linked to realized harm. The event meets the criteria for harm to groups of people and communities caused by AI use in military operations.

Israel to take 'immediate steps' to increase Gaza aid via Ashdod port, Erez crossing

2024-04-05
haaretz.com
Why's our monitor labelling this an incident or hazard?
The report explicitly states that an AI system was used to identify bombing targets and mark tens of thousands of Gazans as suspects with little human oversight, which directly implicates AI in decisions that could cause injury or death and violate human rights. Even though the military denies the claim, the event centers on the alleged use of AI leading to harm. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to potential harm and rights violations. The U.S. investigation underscores the event's significance and potential impact.


2024-04-05
AsiaOne
Why's our monitor labelling this an incident or hazard?
The article describes a media report alleging that an AI system was used by the Israeli military to identify bombing targets, marking tens of thousands of Gazans as suspects with little human oversight. This use of AI in a lethal military context could directly or indirectly lead to harm to people and violations of human rights, fitting the definition of an AI Incident. Although the Israeli Defence Forces deny the claim and the US is investigating, the report's serious allegations and the potential for significant harm justify classifying this as an AI Incident. The AI system's involvement in targeting decisions that may have caused or contributed to harm is central to the event. Therefore, the event is not merely a hazard or complementary information but an AI Incident based on the reported use and associated harms.

US looking at report that Israel used AI to identify bombing targets in Gaza

2024-04-05
Times LIVE
Why's our monitor labelling this an incident or hazard?
The report involves an AI system allegedly used for lethal targeting decisions, which if true, directly implicates AI in potential harm to persons and human rights violations. Since the US is investigating and the claim is unverified and denied, the event currently represents a plausible risk of harm rather than confirmed harm. Therefore, it fits the definition of an AI Hazard rather than an AI Incident at this stage.

US looking at report that Israel used AI to identify bombing targets in Gaza

2024-04-05
TODAY
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions an AI system used to identify bombing targets, which is an AI system involvement in a military context. The alleged use of AI to mark tens of thousands of people as suspects for assassination with little human oversight directly relates to harm to persons and communities and potential human rights violations. Despite denials and lack of verification, the event describes realized harm linked to AI use, meeting the criteria for an AI Incident. The potential for direct or indirect harm through AI-driven targeting in conflict zones is significant and aligns with the definition of an AI Incident.

U.S. Looking Into Report Israeli Military Used Artificial Intelligence To Identify Targets For Bombing In Gaza

2024-04-05
Sahara Reporters
Why's our monitor labelling this an incident or hazard?
The report alleges direct use of AI in lethal targeting, which if true, would directly lead to harm to persons and potential violations of human rights, qualifying as an AI Incident. However, the denial by the Israeli Defense Forces and the ongoing investigation by the U.S. indicate that the AI involvement is not confirmed. Therefore, the event currently represents a plausible risk of AI causing harm in a military context, fitting the definition of an AI Hazard rather than a confirmed Incident.

US looking at media report that Israel used AI to identify bombing targets in Gaza | Law-Order

2024-04-04
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by the Israeli military to identify bombing targets, which is an AI system used in a context that could directly lead to injury or harm to people and communities. While the article does not confirm that harm has already occurred due to AI, the nature of the AI application in military targeting makes it plausible that such harm could occur. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm to persons or communities.

US looking at report that Israel used AI to identify bombing targets in Gaza | Law-Order

2024-04-05
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event involves an AI system reportedly used to identify bombing targets, which directly relates to harm to persons and communities, including possible violations of human rights. The AI system's use in lethal military operations with little human oversight fits the definition of an AI Incident due to direct or indirect harm. Although the Israeli Defense Forces deny the use, the report and ongoing investigation indicate that harm has occurred or is ongoing. Therefore, this is not merely a hazard or complementary information but an AI Incident due to the serious and direct harms involved.

US looking at report that Israel used AI to identify bombing targets in Gaza

2024-04-05
Middle East Monitor
Why's our monitor labelling this an incident or hazard?
The article describes a credible media report alleging the use of AI to identify bombing targets, which if true, would directly lead to harm to persons (AI Incident). However, the denial by the Israeli Defence Forces and the lack of verification by the US indicate that the use of AI is not confirmed. Therefore, the event is best classified as an AI Hazard, reflecting the plausible risk of AI-enabled lethal targeting with insufficient human oversight, which could lead to serious harm.

US looking at report that Israel used AI to identify bombing targets in Gaza

2024-04-05
Colorado Springs Gazette
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions an AI system used to identify bombing targets, which is an AI system involved in a use case that has directly led to significant harm (deaths, displacement, starvation crisis) in Gaza. The AI system's role in marking suspects for assassination with little human oversight indicates a direct link to harm to persons and communities, fulfilling the criteria for an AI Incident. The denial by the Israeli Defense Forces and lack of verification by the U.S. does not negate the classification, as the report's content and the described harms align with the AI Incident definition.

US looking at report that Israel used AI to identify bombing targets in Gaza

2024-04-06
newsR
Why's our monitor labelling this an incident or hazard?
The article discusses allegations that an AI system was used to select bombing targets, which if true, would mean AI was used in a way that could directly cause harm to people and property, fitting the definition of an AI Incident. However, the use of AI targeting is denied and under investigation, so the harm is not confirmed. The event thus represents a credible potential for harm from AI use in military targeting, but without confirmation of actual harm, it is best classified as an AI Hazard at this stage.

"Lavanda", arma secretă a Israelulului pentru a identifica țintele Hamas: "Investesc 20 de secunde pentru fiecare ţintă" (The Guardian)

2024-04-03
Digi24
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Lavender) used in military targeting decisions, which directly led to harm to people and communities through airstrikes causing thousands of civilian deaths. The AI system's role was pivotal in identifying targets and accelerating the pace of attacks, with documented civilian casualties and destruction. This fits the definition of an AI Incident because the AI system's use directly led to injury and harm to groups of people (harm category a and d). The article provides detailed evidence of realized harm, not just potential harm, and thus cannot be classified as a hazard or complementary information. The involvement of AI in lethal targeting and the resulting civilian deaths clearly constitute an AI Incident.
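For scale, the "20 seconds per target" figure quoted in these titles implies, across the roughly 37,000 people reportedly marked: 37,000 × 20 s ≈ 740,000 s, or about 8.6 days of cumulative human review for the entire list. This is an illustrative calculation assuming the quoted figures, not a number reported by the articles.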

32,000 people killed in Gaza at the mercy of an AI system, "Lavender". It invests "20 seconds per target"

2024-04-03
Stirile ProTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used for target identification in a military conflict, which directly contributed to the killing of approximately 32,000 people in Gaza, mostly civilians. This is a clear case where the AI system's use has directly led to injury and harm to groups of people (harm category a) and harm to communities (d). The system's role was pivotal in accelerating and scaling the targeting process, with human oversight reduced to mere approval stamps. Therefore, this event qualifies as an AI Incident under the OECD framework.

Israel used Artificial Intelligence to identify and attack 37,000 Hamas targets, including civilians, The Guardian reveals. "We were told: 'Bomb whatever you can'"

2024-04-03
Libertatea
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lavender) explicitly described as being used to identify and select targets for military strikes. The use of this AI system directly led to harm, including the deaths of thousands of civilians, which qualifies as injury or harm to groups of people and harm to communities. The system's deployment in a lethal military context with documented civilian casualties and moral/legal concerns meets the criteria for an AI Incident. The article provides detailed evidence of the AI system's involvement in causing harm, not just potential harm, and thus it is not a hazard or complementary information but a clear incident.

Israel used artificial intelligence to identify and attack potential Hamas targets. The Guardian's revelations about the innovative "Lavender" program

2024-04-03
HotNews.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used for target identification in military operations, which directly led to attacks causing civilian deaths and destruction. The AI system's outputs were pivotal in selecting targets, with minimal human oversight, leading to realized harm (civilian casualties and property damage). This fits the definition of an AI Incident, as the AI system's use directly caused harm to people and communities and raises legal and ethical concerns. The involvement is not hypothetical or potential but actual and documented, thus not an AI Hazard or Complementary Information. It is not unrelated as the AI system is central to the event.

A first in the Gaza war: the Israeli army used the "Lavender" system, and military officials "permitted the killing of a large number of civilians"

2024-04-03
Stiri pe surse
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Lavender) used in military targeting decisions during an armed conflict. The AI system's outputs directly contributed to lethal strikes that resulted in significant civilian deaths, fulfilling the criteria of harm to persons and communities. The use of AI to identify targets with minimal human intervention and the authorization of strikes with known civilian casualties indicate direct causation of harm. This meets the definition of an AI Incident, as the AI system's use has directly led to injury and loss of life, as well as violations of human rights.

Israeli intelligence officers: Israel used Artificial Intelligence for the bombings in the Gaza Strip. The Israeli army's reaction | AUDIO

2024-04-04
Europa FM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used for military targeting that has directly led to large-scale civilian deaths, which is a clear harm to persons and communities and a potential violation of human rights and international law. The AI system's involvement is direct in the use phase, selecting targets without significant human oversight, leading to lethal outcomes. This meets the criteria for an AI Incident as the AI system's use has directly caused significant harm.

Shocking testimony from IDF agents: Israel used Artificial Intelligence to identify 37,000 possible Hamas targets. Permission was granted to kill civilians too: "Whatever you can, you bomb" - B1TV.ro

2024-04-03
B1TV.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Lavender) developed and deployed by the IDF to identify potential militant targets, including a database of 37,000 individuals. The AI system's outputs were used to authorize lethal strikes, including on civilians, with admitted pre-authorized civilian casualties. The AI system's role was pivotal in the decision-making process that led to widespread civilian deaths and destruction, fulfilling the criteria for an AI Incident involving harm to people and violations of human rights. The harm is realized and ongoing, not merely potential, and the AI system's malfunction or use directly contributed to these harms.

The Guardian: Israel used "Lavender" to identify 37,000 Hamas targets. First revelations about an artificial intelligence system taking war into still-unexplored territory

2024-04-03
News.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used for identifying military targets, which directly influenced lethal military actions resulting in thousands of civilian deaths and destruction. The AI system's role was pivotal in the harm caused, fulfilling the criteria for an AI Incident. The harm includes injury and death to persons, harm to communities, and potential violations of human rights and international law. The AI system's development and use in this context led directly to these harms, not merely posing a future risk. Thus, the classification as an AI Incident is justified.

Revelations about "Lavender", the artificial intelligence system Israel used against Hamas: "It is unprecedented"

2024-04-04
Stiripesurse.md
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system Lavender, developed and used by the Israeli military intelligence unit, to identify targets during military operations. The system's outputs directly influenced military strikes that resulted in civilian deaths and destruction of property, fulfilling the criteria for harm to persons and communities. The AI system's role was pivotal in the harm caused, as it generated large target lists and was trusted over human judgment. This meets the definition of an AI Incident, as the AI system's use directly led to significant harm.

Israel used "Lavender" in the war, a secret weapon based on Artificial Intelligence. How the system works

2024-04-05
Aleph News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used in military targeting during an active conflict, directly influencing decisions that lead to harm to people and communities. The AI's role is pivotal in identifying targets, replacing human judgment with statistical mechanisms, which implies direct or indirect harm. This fits the definition of an AI Incident because the AI's use has directly led to harm (injury or death) and potential violations of human rights. The article does not merely warn of potential harm but describes actual use in warfare with significant consequences.

The "Lavender" system. Israeli intelligence services used artificial intelligence to identify 37,000 Hamas targets

2024-04-04
Ziua Veche
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Lavender) for identifying military targets, which directly influenced lethal military actions resulting in civilian deaths and destruction. The AI system's role was pivotal in the harm caused, fulfilling the criteria for an AI Incident. The harm includes injury and death to persons, harm to communities, and potential violations of human rights and international law. The involvement is through the use of the AI system in operational decision-making leading to realized harm, not just potential harm. Hence, the classification as AI Incident is appropriate.

USA: Report on Israel's use of A.I. to designate targets in Gaza under review | Η ΚΑΘΗΜΕΡΙΝΗ

2024-04-05
H Kαθημερινή
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used by the Israeli military to identify bombing targets, which directly relates to the use of AI systems. The alleged use of AI to designate targets with minimal human oversight, combined with reported errors and resulting civilian casualties, fits the definition of an AI Incident involving harm to persons and communities. Although the military denies the use of AI in this way, the report and ongoing investigation indicate that the AI system's development or use has directly or indirectly led to harm. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

USA: Reports of Israeli strikes in Gaza via artificial intelligence under review

2024-04-05
Η Ναυτεμπορική
Why's our monitor labelling this an incident or hazard?
The article discusses unverified reports that an AI system was used to select bombing targets, which would directly lead to harm to people and communities if true, meeting the criteria for an AI Incident. However, since the use is not confirmed and is under investigation, and the military denies such use, the event currently represents a credible potential for harm rather than confirmed harm. Therefore, it is best classified as an AI Hazard, reflecting the plausible future harm from AI use in lethal military targeting.

USA: Examining a report claiming Israel used AI to designate tens of thousands of targets in Gaza

2024-04-05
insider.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system allegedly used to select bombing targets, leading to civilian deaths and harm to communities, which fits the definition of an AI Incident due to direct harm caused by the AI system's outputs. Although the military denies AI use, the report is based on multiple sources and details the AI's role and error rate, indicating the AI system's involvement in causing harm. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

USA: Report on Israel's use of artificial intelligence to designate bombing targets in Gaza under review

2024-04-05
SofokleousIn.GR
Why's our monitor labelling this an incident or hazard?
The event involves an AI system reportedly used to designate bombing targets, which directly relates to the use of AI in a military context. The AI system's outputs influenced lethal decisions, with documented errors leading to probable civilian casualties, thus causing harm to people and communities. Despite official denial, the report's detailed claims and the described consequences meet the criteria for an AI Incident, as the AI system's use has directly or indirectly led to significant harm.

Israel: Were targets in the Gaza Strip selected with artificial intelligence? The IDF denies it | OnAlert

2024-04-05
OnAlert
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Lavender') allegedly used in military targeting decisions that led to bombing civilians, causing harm to people and communities. This is a direct link between AI use and realized harm, including potential violations of human rights. Despite official denial, the credible report and described consequences meet the criteria for an AI Incident rather than a hazard or complementary information. The harm is materialized, not just potential, and the AI system's involvement is central to the incident.

USA: Report on Israel's use of A.I. to designate targets in Gaza under review

2024-04-05
www.kathimerini.com.cy
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Lavender') for military targeting, which is explicitly mentioned. The AI system's outputs were used to designate bombing targets with minimal human oversight, and errors in the system reportedly led to civilian casualties. This is a direct link between AI use and harm to people and communities, meeting the definition of an AI Incident. The denial by the military does not negate the reported harm and AI involvement. Therefore, this event is classified as an AI Incident due to the realized harm caused by the AI system's use in targeting.

Israeli media: Tens of thousands of bombing targets in Gaza were designated with artificial intelligence

2024-04-05
ΑΘΗΝΑ 9,84
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used in military operations to identify bombing targets, which directly led to harm to civilians and likely violations of human rights. The AI system's role in designating targets with minimal human oversight and its known error rate causing civilian deaths fits the definition of an AI Incident, as the AI's use has directly led to injury and harm to groups of people and breaches of fundamental rights. The denial by the military does not negate the reported facts and the credible sources cited. Therefore, this is classified as an AI Incident.

USA: Report on Israel's use of A.I. to designate targets in Gaza under review

2024-04-05
makthes.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Lavender') used in military targeting decisions, which is explicitly mentioned. The AI system's use in target identification with minimal human oversight has directly led to harm—civilian deaths and harm to communities in Gaza. Despite denials, the report's details and the described consequences meet the definition of an AI Incident, as the AI system's role is pivotal in causing harm. The harm is realized, not just potential, so it is not an AI Hazard. The event is not merely complementary information or unrelated news, as it concerns alleged direct harm caused by AI use.

UN chief 'deeply concerned' over allegations that Israel is using AI in Gaza

2024-04-05
ISTOÉ Independente
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in military targeting that leads to civilian casualties, which constitutes direct harm to people and potential violations of human rights. AI's involvement in life-or-death decisions and the resulting harm align with the definition of an AI Incident, as the AI system's use has directly led to harm.

UN chief 'deeply concerned' over allegations that Israel is using AI in Gaza

2024-04-05
ISTOÉ Independente
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Lavender') by the Israeli military to identify targets, which has resulted in a high number of civilian casualties, constituting harm to persons and communities. The AI system's involvement in lethal targeting decisions directly links it to harm, fulfilling the criteria for an AI Incident. The concern about delegating life-or-death decisions to AI further supports the classification as an incident rather than a hazard or complementary information.

Allegation of Israel's use of AI in Gaza worries UN chief

2024-04-05
UOL notícias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used by the Israeli military to identify targets, which has directly caused harm to civilians in Gaza. This meets the criteria for an AI Incident because the AI system's use has directly led to injury and harm to groups of people. The involvement is in the use of the AI system for military targeting, and the harm is realized and significant. Therefore, this event is classified as an AI Incident.

New revelations about the use of Artificial Intelligence against Palestinians in Gaza shock the UN

2024-04-06
UOL notícias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used by the Israeli military to identify targets, which has caused a significant number of civilian casualties. This is a direct harm to people and communities, fulfilling the criteria for an AI Incident. The involvement of AI in lethal targeting with resulting civilian harm is a clear case of AI-related harm as defined in the framework.

New revelations about the use of AI against Palestinians in Gaza shock the UN

2024-04-06
RFI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used for target identification in a military context, which has directly contributed to civilian harm. This constitutes an AI Incident because the AI's use in targeting has led to injury and harm to people, and likely breaches of human rights. The involvement of AI in lethal military decisions causing civilian casualties fits the definition of an AI Incident due to direct harm and rights violations.

Allegation of Israel's use of AI in Gaza worries UN chief

2024-04-05
ISTOÉ Independente
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used by the Israeli military to identify targets, which has directly resulted in civilian deaths and alleged war crimes. The AI system's role in lethal targeting decisions and the resulting harm to civilians meet the criteria for an AI Incident, as the AI's use has directly led to injury and violations of human rights. The concerns expressed by the UN Secretary-General and legal experts further support the classification as an AI Incident rather than a hazard or complementary information.

Allegation that Israel uses AI to identify and attack targets in Gaza worries the UN

2024-04-05
CartaCapital
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems by the Israeli military to identify targets in a conflict zone, which has directly resulted in civilian casualties and alleged war crimes. The AI system's outputs were treated as human decisions, leading to disproportionate harm to civilians. This meets the criteria for an AI Incident as the AI system's use has directly caused injury and harm to groups of people and breaches of fundamental rights under international law. The article provides detailed allegations and concerns from credible sources, including the UN Secretary-General, confirming the realized harm linked to AI use.

'AI-assisted genocide': Israel allegedly used a database for kill lists in Gaza - O Cafezinho

2024-04-08
O Cafezinho
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used by the military to identify targets, which directly led to harm (mass civilian deaths and injuries) and violations of human rights and international law. The AI system's malfunction or misuse (high error rate, insufficient human oversight) contributed to these harms. This fits the definition of an AI Incident because the AI system's use has directly led to significant harm to people and communities, including potential war crimes. Therefore, the event is classified as an AI Incident.

The Israeli army is using artificial intelligence in its Gaza attacks

2024-04-04
En Son Haber
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems used in military targeting that have directly led to the deaths of thousands of civilians, including women and children, which is a clear harm to health and communities. The AI system's outputs were used without sufficient human oversight, leading to indiscriminate targeting and disproportionate civilian casualties. This constitutes a violation of human rights and a breach of legal protections. The AI's involvement is direct and pivotal in causing these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Israel is killing civilians with artificial intelligence - Sözcü Gazetesi

2024-04-04
Sözcü Gazetesi
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems used in military targeting that have directly led to the deaths of thousands of civilians, including women and children, a clear harm to human life and communities. The AI systems' outputs are used to make lethal decisions with insufficient human oversight, causing violations of human rights and breaches of legal and ethical norms. AI is implicated across development and use, and its malfunctions (e.g., classification errors leading to wrongful deaths) are central to the incident. This meets the criteria for an AI Incident as defined, since the AI systems' use has directly led to significant harm to people and communities.

Israeli media outlet: The fate of hundreds of thousands of civilians in Gaza is in the hands of artificial intelligence

2024-04-04
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems used in military targeting that have directly caused civilian deaths and destruction, fulfilling the criteria for harm to persons and communities. The AI systems' outputs were used without sufficient human oversight, leading to indiscriminate or disproportionate attacks. The involvement of AI in causing these harms is direct and central to the event. Hence, the event is classified as an AI Incident.

Israeli media outlet: The fate of hundreds of thousands of civilians in Gaza is in the hands of artificial intelligence

2024-04-04
TRT haber
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for lethal targeting that have directly led to civilian deaths and destruction of civilian infrastructure, constituting harm to persons and communities and violations of human rights. The AI system's involvement in the development and use phases is clear, with direct causation of harm. Therefore, this qualifies as an AI Incident under the OECD framework.

Israeli press reports: The fate of civilians is in the hands of artificial intelligence

2024-04-04
Cumhuriyet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Lavender, Where's Daddy?, Habsora) used in military targeting that have directly caused harm to civilians through lethal strikes. The AI systems' outputs were relied upon without human oversight, leading to indiscriminate killings and breaches of proportionality, which are violations of human rights and cause injury and death. The involvement of AI in the development, use, and malfunction (misclassification, errors) directly led to significant harm, fulfilling the criteria for an AI Incident.

Israeli media outlet: The fate of civilians in Gaza is in the hands of artificial intelligence

2024-04-04
birgun.net
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (Lavender and Habsora) used in military targeting decisions that have directly caused civilian deaths and destruction in Gaza. The AI's role is pivotal as targeting decisions are automated and human oversight is removed, leading to indiscriminate bombings and violations of proportionality. The harms include injury and death to civilians, harm to communities, and violations of human rights. This meets the criteria for an AI Incident because the AI system's use has directly led to realized harm, not just potential harm or complementary information.

Genocide with artificial intelligence in Gaza! 37,000 people in the sights of the 'Lavender' software

2024-04-04
Akşam
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Lavender') for targeting individuals in military operations, which has directly led to mass civilian deaths and destruction. The AI system's outputs were used to make lethal targeting decisions without sufficient human oversight, causing harm to health and life (harm category a) and violations of human rights (category c). AI is implicated across development and use, and its malfunctions (e.g., misclassification leading to wrongful deaths) are clear and central to the harm described. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Israeli press reports: In Gaza, civilians' fate is in the hands of artificial intelligence

2024-04-04
Türkiye
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems used in military targeting that have directly caused civilian deaths and destruction, constituting injury and harm to people, violations of human rights, and harm to communities. The AI systems' outputs were used to conduct lethal strikes with insufficient human verification, leading to indiscriminate or disproportionate attacks. This meets the criteria for an AI Incident because the AI's use directly led to significant harm and rights violations. The detailed descriptions of harm and the AI's pivotal role in targeting confirm this classification.

Israeli media: The fate of hundreds of thousands of civilians in Gaza is in the hands of artificial intelligence

2024-04-04
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in military targeting that has directly led to the death of thousands of civilians, which is a clear injury and harm to groups of people (harm category a). The AI system's role is pivotal as it identifies targets without verification, leading to wrongful killings. This meets the definition of an AI Incident because the AI's use has directly caused significant harm to human life and rights. Therefore, the event is classified as an AI Incident.

Guterres's warning about Israel's destructive use of artificial intelligence in Gaza - Tasnim

2024-04-06
خبرگزاری تسنیم
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the Israeli military uses AI to identify non-military targets in densely populated civilian areas, increasing civilian casualties and causing harm to communities. This use of AI directly leads to injury and harm to groups of people and breaches fundamental human rights. The AI system's role is pivotal in enabling these attacks, making this a clear AI Incident under the OECD framework. The harm is realized, not just potential, and involves violations of rights and harm to communities, fulfilling the criteria for an AI Incident.

Tel Aviv's use of Lavender for genocide in Gaza

2024-04-04
فردانیوز
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the AI system 'Lavender' was used to identify 37,000 Palestinians as targets, with a high degree of confidence, leading to military strikes using 'dumb bombs' that destroy entire homes and kill all occupants. The AI system's role is pivotal in selecting targets and enabling mass lethal actions. The harm is realized and severe, including loss of life and harm to communities, meeting the criteria for an AI Incident. The involvement is in the use of the AI system for lethal targeting decisions, directly causing harm.

The Zionist regime has turned artificial intelligence into a genocide machine

2024-04-06
ana.ir
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Lavender) for military targeting that has directly led to the killing of thousands of civilians in Gaza, constituting injury and harm to people and violations of human rights. The AI system's role is pivotal in identifying targets and enabling attacks that caused these harms. The description includes the system's error rate and the lack of adequate human review, which contributed to wrongful killings. This meets the criteria for an AI Incident as the AI system's use directly caused significant harm and rights violations.

Israel is using artificial intelligence to kill the Palestinians of Gaza | Radio Zamaneh

2024-04-04
radiozamaneh.info
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used for targeting individuals for bombing, which has directly led to the killing of thousands of civilians and destruction of homes. This constitutes harm to persons and communities, as well as violations of human rights. The AI system's outputs were treated as authoritative decisions with minimal human oversight, causing direct harm. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's use has directly led to significant harm.

A software called Lavender: "The Israeli army's use of artificial intelligence in the Gaza war to identify 37,000 targets"

2024-04-05
euronews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned (Lavender) used by the military for target identification and attack planning. The AI system's outputs have directly influenced lethal military actions resulting in civilian deaths, as reported by Gaza's health ministry. This is a clear case where the AI system's use has directly led to harm to people, fulfilling the criteria for an AI Incident. The article also notes reliance on AI outputs without full human review, increasing the risk of wrongful targeting. Therefore, the event is classified as an AI Incident due to direct harm caused by AI-assisted military operations.

Guterres's warning about Israel's destructive use of artificial intelligence in the Gaza Strip

2024-04-06
اسپوتنیک ایران
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems are used by the Israeli military to identify non-military targets in densely populated civilian areas, resulting in increased civilian deaths and destruction of civilian property and infrastructure. This constitutes direct harm to people and communities, as well as violations of human rights. The AI system's use in targeting decisions is a direct contributing factor to these harms. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

"Israele usa l'intelligenza artificiale nei raid a Gaza. Una bomba ogni 20 secondi. Così si moltiplicano le vittime": l'inchiesta di +972 Magazine - Il Fatto Quotidiano

2024-04-05
Il Fatto Quotidiano
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as being used to select military targets, with a documented error rate causing civilian casualties. The AI system's development and use have directly led to injury and death of people, fulfilling the criteria for an AI Incident. The article provides detailed evidence of realized harm (over 33,000 civilian deaths) linked to the AI system's operation and errors. Therefore, this is a clear case of an AI Incident involving harm to people due to the AI system's malfunction and use in lethal operations.

Israel: artificial intelligence to flush out Hamas targets?

2024-04-04
Adnkronos
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system ('Lavender') used in military target identification, which directly influenced lethal actions resulting in civilian deaths and destruction. The AI's role in facilitating rapid target approval with minimal human oversight indicates its involvement in causing harm to people and communities. The harms include injury and death to civilians and potential violations of human rights under international law. Despite partial denial by the military, the report's detailed testimonies and descriptions establish a credible link between AI use and realized harm. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

What is Lavender, the artificial intelligence used by Israel to strike Palestinians

2024-04-04
Wired
Why's our monitor labelling this an incident or hazard?
Lavender is explicitly described as an AI system used to identify military targets, with its outputs directly influencing lethal military actions that have caused substantial civilian casualties and deaths. The system's use has led to violations of human rights and international humanitarian law, including disproportionate collateral damage. The involvement of the AI system in causing these harms is direct and central, meeting the criteria for an AI Incident under the OECD framework.

Artificial Intelligence against Hamas. Identifying targets and...

2024-04-04
Affari Italiani
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned ('Lavender') used in military operations to identify targets and authorize attacks, which directly leads to harm (civilian casualties) and violations of human rights. The AI system's role in replacing human decision-making in lethal targeting decisions constitutes an AI Incident as per the framework, given the direct link to harm and rights violations. The partial denial does not negate the reported use and impact. Therefore, this is classified as an AI Incident.

Israel: artificial intelligence to flush out Hamas targets?

2024-04-04
Torino Oggi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in military targeting decisions that led to the killing of civilians, which is a direct harm to persons and communities. The AI system's role in identifying targets and the minimal human oversight described indicate that the AI's outputs were pivotal in causing these harms. This fits the definition of an AI Incident as the AI system's use directly led to injury and violations of human rights. The partial denial by the military does not remove the reported harms linked to AI use. Therefore, the event is classified as an AI Incident.

Israel used Artificial Intelligence to create "death lists" and marked 37,000 Palestinians as possible combatants - SAPO Tek

2024-04-04
SAPO Tek
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI system was used to identify targets, which led to military strikes causing civilian deaths and destruction. The AI system's error margin and the military's operational decisions based on AI outputs directly caused harm to individuals and communities, constituting violations of human rights. Therefore, this event meets the criteria for an AI Incident, as the AI system's use directly led to significant harm.

Israel's AI 'marked' 37,000 Palestinians as possible combatants

2024-04-04
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The AI system Lavender was explicitly used to generate 'kill lists' that the military acted upon, resulting in the deaths of thousands of civilians and destruction of critical infrastructure. The system's error rate and the military's operational choices (e.g., attacking homes at night, using non-guided bombs) indicate that the AI's outputs directly contributed to significant harm. This fits the definition of an AI Incident, as the AI system's use directly led to injury and harm to people, harm to communities, and likely violations of human rights.

Artificial intelligence used by Israel marked 37,000 Palestinians as possible combatants

2024-04-04
Publico
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used by the military to identify targets, whose recommendations were followed with minimal human review, leading to the deaths of civilians and destruction of civilian infrastructure. The AI system's error rate and the military's use of unguided munitions increased the risk and occurrence of harm. This constitutes direct harm caused by the AI system's use, fulfilling the criteria for an AI Incident under the OECD framework, as it involves injury to persons, harm to communities, and likely human rights violations.

Israel: AI system 'marked' 37,000 Palestinians as possible combatants

2024-04-04
Revista SÁBADO
Why's our monitor labelling this an incident or hazard?
The use of the AI system to mark individuals as potential combatants in a conflict zone directly implicates human rights and poses risks to life and safety. The system's outputs likely influence military or security actions that can cause injury or death, fulfilling the criteria for an AI Incident. The article describes realized harm in the conflict context, and the AI system's role in marking individuals is pivotal to those harms.

Exame Informática | Israel used Artificial Intelligence to identify 37,000 Hamas targets

2024-04-04
Visão
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in a military context to identify targets, which has directly led to harm to civilians and loss of life, fulfilling the criteria for an AI Incident. The AI system's role is pivotal in the targeting decisions, and the resulting harm includes injury and death to groups of people, as well as harm to communities. The article explicitly states the AI system's use and its consequences, including admitted collateral damage and errors in target identification, confirming direct harm caused by the AI system's outputs.

Israel accused of using AI to target thousands in Gaza, as killer algorithms outpace international law - Times of India

2024-04-12
The Times of India
Why's our monitor labelling this an incident or hazard?
The AI system Lavender is explicitly mentioned as being used for targeting airstrikes, which directly led to harm (civilian casualties). This constitutes an AI Incident because the AI system's use in military targeting has caused injury and harm to groups of people. The ethical concerns and international law implications further support the classification as an incident rather than a hazard or complementary information.

Israel's use of AI to find targets in Gaza offers a terrifying glimpse at where warfare could be headed

2024-04-13
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Lavender) used in military targeting decisions that has directly contributed to thousands of civilian casualties and destruction in Gaza. The AI system's outputs were treated as authoritative with minimal human oversight, leading to imprecise and potentially wrongful killings. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people and communities. The article also highlights the ethical and legal concerns around autonomous AI in warfare, reinforcing the significance of the harm caused. Hence, the event is classified as an AI Incident.

Gaza war: Israel using AI to identify human targets raising fears that innocents are being caught in the net

2024-04-12
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The event involves AI systems ('Lavender' and 'Where's Daddy?') used in military targeting and killing operations. The AI's use has directly led to harm to human life and potential violations of human rights, fulfilling the criteria for an AI Incident. The article reports realized harm (killings) linked to AI use, not just potential harm, and discusses the AI's role in accelerating lethal actions with limited human control, confirming direct causation of harm.

Israel's use of AI to find targets in Gaza offers a terrifying glimpse at where warfare could be headed

2024-04-13
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in military targeting that has directly and indirectly led to significant harm to civilians, including deaths and destruction, which constitutes injury and harm to groups of people (harm category a). The AI system generated target lists that were acted upon with minimal human oversight, pointing to misuse or malfunction in the use phase that contributed to the harm. The article provides detailed allegations of realized harm caused by the AI system's outputs, not just potential harm. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's development and use have directly led to significant harm to people and communities in a conflict setting.

Israel accused of using AI to target thousands in Gaza, as killer algorithms outpace international law

2024-04-12
ThePrint
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting that have directly led to harm, including civilian casualties and expanded targeting beyond senior militants. The AI systems' role in automating and accelerating strikes with limited human review indicates a direct link between AI use and realized harm. This fits the definition of an AI Incident, as the AI systems' use has directly led to injury and harm to groups of people and harm to communities. Although there is some dispute from the Israeli Defence Force about the AI nature of these systems, the reports and sources strongly indicate AI involvement in targeting decisions causing harm.

Mark O'Connell: 'The machine does it coldly': Artificial Intelligence can already kill people

2024-04-13
The Irish Times
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system used by the Israeli military to identify and target individuals for assassination, which directly leads to harm to people, including civilians. The AI system's role in facilitating mass killings and the scale of potential civilian casualties clearly meets the criteria for an AI Incident, as it involves the use of AI leading to violations of human rights and harm to communities. The harm is realized and ongoing, not merely potential, and the AI system's involvement is central to the event. Although the military denies the AI nature of the system, the detailed report and description of automated targeting justify classification as an AI Incident.

Israel accused of using AI to target thousands in Gaza, as killer algorithms outpace international law

2024-04-12
The Telegraph
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting that have directly led to harm, including civilian casualties and violations of human rights. The AI systems are involved in generating target lists and automating strike decisions, with reported errors and reduced human oversight increasing the risk and occurrence of harm. This meets the definition of an AI Incident as the AI's use has directly led to injury and harm to groups of people and violations of fundamental rights. Although the Israeli Defence Force denies some claims, the report is based on multiple sources and details specific harms caused by AI-enabled actions, confirming the incident classification rather than a mere hazard or complementary information.

Israel's Use of AI in Gaza Raises Concerns as International Law Struggles to Keep Up with Technological Advancements | Technology

2024-04-12
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting that have directly led to civilian casualties, which is harm to persons and communities. The AI systems' role in generating target lists and automating strike decisions, combined with reported errors and minimal human review, directly implicates AI in causing harm. This meets the definition of an AI Incident, as the AI system's use has directly led to violations of human rights and harm to communities. The discussion of international law lagging behind does not negate the realized harm but contextualizes the governance gap. Hence, the event is classified as an AI Incident.

World News | Israel Accused of Using AI to Target Thousands in Gaza, as Killer Algorithms Outpace International Law | LatestLY

2024-04-12
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting that have directly led to civilian casualties and potential violations of international law. The AI systems' role in generating target lists and automating decisions with minimal human review has caused injury and harm to people, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI systems' involvement is central to the event. Although the Israeli Defence Force denies some claims, the report is based on multiple sources and details the AI systems' operational use and consequences, meeting the definition of an AI Incident.

Israel's use of AI to find targets in Gaza offers a terrifying glimpse at where warfare could be headed

2024-04-13
Business Insider Nederland
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system used in military targeting that has directly contributed to thousands of civilian casualties and destruction, fulfilling the criteria for harm to people and communities. The AI system's role in generating targets with insufficient human review led to errors and tragic outcomes, which is a direct AI Incident. The involvement is in the use of the AI system, and the harms are realized, not just potential. The article also discusses broader governance and ethical concerns but the primary focus is on the actual harms caused by the AI system's deployment in conflict.

Gaza Conflict: Israel using AI to identify Human Targets raises Fears Innocents are Targeted

2024-04-13
Informed Comment
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems ('Lavender' and 'Where's Daddy?') used for target identification and tracking, which have directly led to lethal outcomes, including the killing of potentially innocent people. This is a clear case of AI use causing injury and harm to groups of people, fulfilling the criteria for an AI Incident. The involvement of AI in accelerating the kill chain and marginalizing human control further supports the classification as an incident rather than a hazard or complementary information. The harm is realized and ongoing, not merely potential.

-Full Story-

2024-04-13
LankaWeb.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used by the IDF to identify targets for lethal airstrikes. The system's outputs were relied upon with minimal human vetting, leading to the killing of thousands of Palestinians, many of whom were civilians not involved in fighting. This constitutes direct harm to people and communities, fulfilling the criteria for an AI Incident. The AI system's role is pivotal in expanding target sets and enabling mass killings, which is a clear violation of human rights and international humanitarian law. Therefore, this event is classified as an AI Incident.

Habsora: Israel Military AI System "a Mass Assassination Factory"

2024-04-12
DesPardes + PKonweb
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Lavender, Habsora, Where's Daddy?) used by the Israeli military to generate target lists and monitor targets for airstrikes. The use of these AI systems has directly led to harm, including civilian casualties, which qualifies as injury or harm to groups of people. Despite denials from the Israeli Defense Force, multiple sources and reports indicate AI's role in these lethal operations. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by AI system use in military targeting and strikes.

Explained: How use of Lavender AI in Israel-Hamas conflict could change dynamics of war

2024-04-12
News9live
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lavender) explicitly used in military operations to identify targets for air strikes. The AI's role in selecting targets has directly contributed to harm, including civilian casualties and destruction, fulfilling the criteria for an AI Incident under the framework. The article discusses realized harm caused by the AI's use, not just potential harm, and highlights issues of legality and moral safeguards, reinforcing the classification as an AI Incident rather than a hazard or complementary information.

How Israel is using 'Lavender' and 'Daddy' to identify 37,000 Hamas operatives

2024-04-09
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting that have directly led to harm, including civilian casualties, which constitutes injury or harm to groups of people. The AI's misidentification and the resulting airstrikes causing civilian deaths fulfill the criteria for an AI Incident, as the AI system's use has directly led to harm. The ethical and legal concerns further emphasize the significance of the harm caused. Therefore, this event is classified as an AI Incident.

From 'Lavender' to 'Where's Daddy?': How Israel is using AI tools to hit Hamas militants - Times of India

2024-04-08
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned ('Lavender', 'Where's Daddy?') used in military targeting operations. The AI's use in identifying targets and facilitating strikes has directly led to harm, including civilian casualties, which constitutes harm to persons and communities. The AI's error rate and minimal human oversight exacerbate the risk and realization of harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use in conflict operations.

Algorithmic warfare raises new moral dangers

2024-04-11
Financial Times News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used for automated target identification in a military context, which directly contributed to the deaths of civilians and families, indicating injury and harm to groups of people. The system's error rate and minimal human oversight led to wrongful targeting and excessive collateral damage, fulfilling the criteria for an AI Incident. The harm is realized and significant, involving violations of human rights and loss of life. Although there is some dispute from the IDF, the reported facts and testimonies indicate the AI system's pivotal role in causing harm. Hence, this is not merely a potential hazard or complementary information but a clear AI Incident.

IDF colonel discusses 'data science magic powder' for locating terrorists

2024-04-11
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system developed and used by a military intelligence unit to identify terrorist operatives and generate target lists used in military operations. The AI system's outputs have directly influenced targeting decisions that can cause injury or death, constituting harm to persons. Although humans retain final decision authority, the AI system's role is central and pivotal in the process. This meets the definition of an AI Incident because the AI system's use has directly led to harm or potential harm to individuals and groups, including violations of human rights. The article also discusses the ethical implications and the scale of AI involvement, confirming the significance of the harm and the AI's role.

Israel's use of AI in Gaza is coming under closer scrutiny

2024-04-11
The Economist
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems ('The Gospel' and 'Lavender') used by the Israeli military to process intelligence data and mark targets for air strikes. The use of these AI tools has directly or indirectly led to harm, including civilian deaths, as the systems accelerate and influence targeting decisions with limited human oversight. This fits the definition of an AI Incident because the AI system's use has contributed to injury and harm to groups of people and raises concerns about violations of human rights and laws of war. Although Israel denies fully autonomous targeting, the reported operational reality indicates that AI plays a pivotal role in causing harm. Hence, the event is best classified as an AI Incident.

Israel accused of using AI to target thousands in Gaza, as killer algorithms outpace international law

2024-04-11
The Conversation
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems used in military targeting that have directly caused harm to civilians through airstrikes, including wrongful targeting and civilian casualties. The AI systems' role in generating target lists and automating decisions with minimal human review directly links them to injury and harm to groups of people, fulfilling the criteria for an AI Incident. The harms include violations of human rights and breaches of international law, as well as physical harm to persons. Although the Israeli Defence Force denies some claims, the article presents credible sources and detailed descriptions of AI involvement in causing harm, justifying classification as an AI Incident rather than a hazard or complementary information.

Israel and AI

2024-04-08
Arab News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Gospel and Lavender) used for targeting in military operations, which have directly led to civilian deaths and violations of international law, including disproportionate collateral damage and targeting of civilians, including children. The AI systems are integral to the decision-making process for lethal strikes, effectively making life-and-death decisions. This constitutes direct harm to people and communities, fulfilling the criteria for an AI Incident. The article also highlights the lack of transparency and accountability in these AI-driven decisions, further underscoring the severity of the harm caused.

'Where's Daddy?': How Israel's AI in Gaza reveals a threat to our human values

2024-04-07
The Philadelphia Inquirer
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system ('Lavender') used in military targeting that has directly caused harm to civilians, including mass killings of women and children. The AI's error rate and the military's operational decisions based on its outputs have led to violations of human rights and loss of life, fulfilling the criteria for an AI Incident. The involvement of AI in lethal decision-making with insufficient human oversight and the resulting fatalities clearly meet the definition of an AI Incident as the AI system's use has directly led to injury and harm to groups of people and violations of fundamental rights.

Israel's AI Targeting System Reflects the Inhumanity It Was Programmed With

2024-04-09
Common Dreams
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system used in military targeting that has directly contributed to lethal outcomes, including civilian deaths and violations of human rights. The AI system's errors and the operational use of its outputs in airstrikes that kill non-combatants meet the criteria for an AI Incident, as the AI's use has directly led to injury and harm to groups of people and breaches of fundamental rights. The involvement is through the AI's use in targeting decisions, with insufficient human review, leading to real and significant harm.

Lavender, Where's Daddy? -- Israel AI Tools Are Helping It Find And Hit Hamas Targets In Gaza

2024-04-10
Swarajyamag
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting that have directly led to harm, including the bombing of innocent civilians and aid workers. The AI's role is pivotal in generating target lists and timing strikes, and its inaccuracies have caused wrongful deaths. This fits the definition of an AI Incident because the AI's use and malfunction have directly led to injury and harm to people and communities. The event is not merely a potential risk or a complementary update but a realized harm caused by AI systems in active use.

The Brief - The First AI War

2024-04-08
EurActiv.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in military targeting that has directly led to harm, including the killing of civilians and potential war crimes. The AI system's use in identifying targets and authorizing strikes with high civilian collateral damage meets the criteria for an AI Incident, as it involves the use of AI leading directly to injury and harm to groups of people and violations of human rights. Although the Israeli military denies using AI for target identification, credible investigative reports and UN concerns support the classification. Therefore, this event is an AI Incident due to the realized harm caused by the AI system's deployment in lethal operations.

Artificial intelligence also in genocides

2024-04-10
in-cyprus.philenews.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as being used for target identification in a military conflict. The AI system's outputs have directly led to significant harm to human life, communities, and property, fulfilling the criteria for an AI Incident. The harms are materialized and severe, including thousands of deaths and destruction of infrastructure. The AI system's role is pivotal in the chain of events causing these harms, even if the military denies full reliance on it. Therefore, this qualifies as an AI Incident under the OECD framework.

Report: Israel Using AI to Write Kill Lists in Gaza - Truthdig

2024-04-08
Truthdig
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Lavender) used in military targeting decisions that directly led to killings, including of civilians, which is a clear harm to persons and communities. The AI system's role is pivotal as it automates target selection with minimal human review, causing wrongful deaths and violations of human rights. The harm is realized and ongoing, not merely potential. Hence, this qualifies as an AI Incident under the framework.

What Happens When Killer Algorithms Outpace International Law | Cryptopolitan

2024-04-11
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting that have directly caused harm to civilians, including deaths and destruction of infrastructure, which fits the definition of an AI Incident due to injury and harm to people and violations of human rights. The AI systems' deployment and their erroneous or negligent targeting decisions have led to these harms. The lack of regulation and ongoing use further emphasize the incident nature rather than a mere hazard or complementary information. Hence, this is classified as an AI Incident.

Gordon Campbell On Israel's Murderous Use Of AI In Gaza

2024-04-08
Scoop
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used by the IDF to create kill lists for targeted assassinations, with documented use leading to the deaths of thousands of civilians, including children. The AI system's decisions are adopted with minimal human review, and the system operates within parameters that accept mass civilian casualties as collateral damage. This constitutes direct harm to people and communities, including violations of human rights and potentially war crimes. Therefore, this event qualifies as an AI Incident due to the AI system's direct and pivotal role in causing significant harm.

Israel's Lavender: What could go wrong when AI is used in military operations?

2024-04-10
GZERO Media
Why's our monitor labelling this an incident or hazard?
Lavender is an AI system used in military operations to identify targets, directly influencing decisions that can cause physical harm or death. The article reports the system as roughly 90% accurate, implying that about one in ten of those it marks is misidentified, so strikes based on its outputs injure or kill civilians. This is direct harm to people caused by the AI system's outputs and use. The removal of human oversight further exacerbates both the risk and the actual harm. Therefore, this event meets the definition of an AI Incident due to direct harm to people caused by the AI system's use in warfare.

Israel Has Allowed AI To Become Judge, Jury And Executioner - OpEd

2024-04-08
Eurasia Review
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting that have directly led to harm, including thousands of civilian deaths and children killed. The AI systems are integral to the decision-making process for lethal strikes, with documented collateral damage and ethical concerns. The harm is realized and ongoing, not hypothetical. The AI's role is pivotal in causing these harms, fulfilling the criteria for an AI Incident under the OECD framework.

Against war and empire.

2024-04-08
Antiwar.com Original
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions several AI systems used by the Israeli military to select bombing targets, which have directly led to the deaths of thousands of people, including civilians, and the destruction of homes in Gaza. The AI systems' role is central and pivotal, with human personnel acting mainly as rubber stamps, indicating the AI's outputs are a primary cause of harm. The harms include injury and death (a), harm to property and communities (d), and violations of human rights (c). Given the direct causal link between the AI systems' use and these harms, this event meets the criteria for an AI Incident.

Israel Allegedly Uses AI for Mass Targeting in Gaza

2024-04-11
Mirage News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Lavender, Habsora, Where's Daddy?) used in military targeting that have directly led to harm, including civilian casualties and potential violations of human rights. The AI systems' use in generating target lists and automating targeting decisions with limited human review has caused injury and harm to groups of people, fulfilling the criteria for an AI Incident. The harms are realized, not just potential, and the AI systems' role is pivotal in these harms. Therefore, this event is classified as an AI Incident.

Death by Algorithm: Israel's AI War in Gaza | Dissident Voice

2024-04-10
Dissident Voice
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Lavender) in a military context to identify and target individuals for lethal strikes. The AI system's outputs directly led to harm, including civilian deaths and violations of human rights, fulfilling the criteria for an AI Incident. The article provides detailed evidence of realized harm caused by the AI system's use, including mass civilian casualties and the bypassing of human vetting processes. Therefore, this is classified as an AI Incident due to the direct and significant harm caused by the AI system's deployment in warfare.

Israel's AI Targeting System Reflects the Inhumanity It Was Programmed With

2024-04-10
The Smirking Chimp
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used in military targeting that has caused real, significant harm to civilians, including deaths and destruction of property. The AI system's error rate and the military's operational decisions based on its outputs have led to violations of human rights and loss of life, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's involvement is direct and pivotal in the chain of events causing harm.

MIL-Evening Report: Israel accused of using AI to target thousands in Gaza, as killer...

2024-04-11
foreignaffairs.co.nz
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting that have directly led to harm, including civilian casualties and potential violations of human rights and international law. The AI systems' role in generating targets and automating decisions with limited human review is central to the harm described. This fits the definition of an AI Incident, as the AI systems' use has directly led to injury and harm to groups of people, as well as violations of rights. Although the Israel Defense Forces deny some of the claims, multiple reports and sources indicate the AI systems' involvement in causing harm. Therefore, this event is classified as an AI Incident.

Israel accused of using AI to target thousands in Gaza, as killer algorithms outpace international law

2024-04-11
Tolerance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used to identify and target individuals for airstrikes, which directly leads to harm to people and potential violations of human rights. The use of AI in lethal targeting and assassination is a clear AI Incident as it involves direct harm to persons and breaches of fundamental rights. The involvement of AI in the development and use phases is evident, and the harm is realized, not just potential.

Israel accused of using AI to target thousands in Gaza, as killer algorithms outpace international law

2024-04-11
Northern Ireland News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in military targeting that have directly led to civilian casualties and potential violations of international law and human rights. The AI systems' role in generating target lists and automating strike decisions with limited human review has caused injury and harm to groups of people, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and the AI systems' involvement is central to the event. Although the Israel Defense Forces deny some claims, the report is based on multiple intelligence sources and corroborated by other media, making the AI involvement and resulting harm credible. This is not merely a hazard or complementary information but a documented incident of AI causing harm.

Israel is carrying out an AI-assisted genocide in Gaza

2024-04-10
The New Arab
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used by the Israeli military to generate thousands of targets rapidly, which has directly resulted in the deaths of thousands of Palestinian civilians. The harms include injury and death to people, violations of human rights, and harm to communities. The AI system's role is pivotal in the targeting process, and the article provides detailed evidence of the system's use and its lethal consequences. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use in military operations.

What Happens When Killer Algorithms Outpace International Law | AI Explained | CryptoRank.io

2024-04-11
CryptoRank
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in military operations that have directly caused harm to human life, including the killing of innocent civilians, which constitutes injury and harm to groups of people and violations of human rights. The AI systems' deployment and malfunction (or negligent use) have led to these harms. Therefore, this qualifies as an AI Incident. The article also discusses the broader context of regulatory gaps and geopolitical tensions but the primary focus is on realized harm caused by AI systems in warfare.

Patrick Lawrence: 'Automated Murder': Israel's 'AI' in Gaza

2024-04-10
ScheerPost
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems used by the IDF to identify and target Palestinians for assassination, with documented civilian deaths and human rights violations. The AI's outputs were treated as orders without thorough human verification, leading to direct harm and loss of life. This constitutes an AI Incident as the AI system's use has directly led to injury and harm to groups of people and violations of fundamental rights. The article also discusses the broader implications and ethical concerns but the core event meets the criteria for an AI Incident due to realized harm caused by AI deployment in military operations.

Death by Algorithm: Israel's AI War on Gaza - Global Research

2024-04-10
Global Research
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Lavender) in military targeting that has directly resulted in civilian deaths and mass casualties. The AI system's malfunction or misuse (inaccurate targeting and acceptance of high civilian casualties) has led to injury and harm to groups of people, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in the chain of events causing this harm. Hence, the classification as AI Incident is justified.

New forecasts of the jobs at risk from the artificial intelligence revolution, most notably accounting - اليوم السابع

2024-04-10
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The article focuses on forecasts and warnings about the potential for AI to disrupt and replace jobs in certain sectors, which constitutes a plausible future risk rather than a realized harm. There is no mention of an AI system currently causing harm or malfunctioning, nor any direct or indirect harm having occurred. Therefore, this is best classified as an AI Hazard, reflecting credible concerns about future job displacement due to AI.

You are being deceived... a warning of a new danger of artificial intelligence

2024-04-09
الوطن
Why's our monitor labelling this an incident or hazard?
The article describes a credible risk stemming from the use and potential misuse of AI systems that can learn to lie and deceive, which could plausibly lead to harms such as fraud and misinformation. Since the harm is not reported as having occurred yet but is a plausible future risk, this qualifies as an AI Hazard. The involvement of AI systems (advanced chatbots) is explicit, and the potential for harm (fraud, deception) is clearly articulated.

The red team approach... the potential for biological attacks using artificial intelligence

2024-04-10
Hespress
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) and their potential misuse in planning biological attacks, which could lead to significant harm to health and communities. Although no incident of harm has materialized, the report highlights plausible future risks and the need for preparedness, fitting the definition of an AI Hazard. The article does not describe an actual AI Incident but rather a credible risk scenario and strategic responses, so it is not Complementary Information either, as the main focus is on potential harm rather than updates on past incidents or governance responses.

The use of artificial intelligence in military conflicts raises ethical questions

2024-04-11
Hespress
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used in military applications, including autonomous or semi-autonomous targeting and strategic decision-making. It discusses the development and deployment of these AI systems and their potential to cause harm, such as civilian casualties and escalation of conflicts. Although it mentions past uses of AI that have led to harm, the main focus is on the broader ethical and strategic risks posed by AI in warfare, including future plausible harms. There is no report of a new specific AI Incident causing direct harm in this article, but the credible risk of harm is emphasized. Hence, the classification as an AI Hazard is appropriate.

"يحدد الميول الانتخابية".. تحذيرات من مخاطر جديدة لـ "الذكاء الاصطناعى"

2024-04-11
Dostor
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI-generated deepfakes and misinformation influencing elections and political beliefs, which constitutes harm to communities and potentially violates rights related to fair political participation. The involvement of AI systems in generating and spreading this misinformation is clear, and the harm is ongoing and realized, not merely potential. The article also discusses the insufficient mitigation efforts by technology companies, which indirectly contributes to the harm. Hence, this qualifies as an AI Incident under the OECD framework because the AI systems' use has directly or indirectly led to significant harm to communities through misinformation and political manipulation.

Will artificial intelligence surpass humans by 2025?... A technology expert answers

2024-04-10
Dostor
Why's our monitor labelling this an incident or hazard?
The content is a commentary on AI development and its implications, without reporting any realized or imminent harm caused by AI systems. It focuses on expert views and general advice rather than a concrete event involving AI harm or hazard. Therefore, it fits the category of Complementary Information as it provides context and perspectives on AI without describing an AI Incident or AI Hazard.

How can you detect images generated by artificial intelligence?

2024-04-10
Aljazeera
Why's our monitor labelling this an incident or hazard?
The content focuses on educating readers about identifying AI-generated images and does not describe any realized harm or a specific event involving AI systems causing or potentially causing harm. There is no mention of an AI system malfunction, misuse, or development leading to direct or indirect harm. The article is informational and contextual, enhancing understanding of AI-generated content and its detection, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Do artificial intelligence companies incorporate copyright-protected materials?

2024-04-10
الوفد
Why's our monitor labelling this an incident or hazard?
The article centers on the potential and ongoing issues related to AI systems using copyrighted materials without consent, which constitutes a violation of intellectual property rights. However, it primarily focuses on legislative proposals, industry responses, and legal actions addressing these concerns rather than reporting a specific incident of harm caused by AI. Therefore, it fits best as Complementary Information, providing context and updates on governance and societal responses to AI-related copyright issues, rather than describing a direct AI Incident or an immediate AI Hazard.

Artificial intelligence superpowers... this is what the UAE and Saudi Arabia are planning

2024-04-12
سكاي نيوز عربية
Why's our monitor labelling this an incident or hazard?
The article primarily reports on the development and expansion of AI infrastructure and capabilities in the UAE and Saudi Arabia, including data centers and AI models. It does not mention any actual harm, injury, rights violations, or disruptions caused by AI systems. Nor does it describe any imminent or plausible AI-related hazards leading to harm. Instead, it provides complementary information about AI ecosystem growth, investments, and strategic positioning in the region, which fits the definition of Complementary Information rather than an Incident or Hazard.

The IMF expects 60% of jobs to be affected by artificial intelligence, and these countries will be hit hardest

2024-04-12
بوابة فيتو
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of their potential economic impact on employment and labor markets. The IMF report forecasts plausible future harm from AI's use, such as job displacement and increased inequality, which fits the definition of an AI Hazard because it could plausibly lead to significant harm (economic and social). There is no description of an actual AI Incident (realized harm) or a response or update to a past incident (Complementary Information). Therefore, the event is best classified as an AI Hazard.
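
The three-way decision applied here (realized harm, credible future harm, or neither) recurs across every entry in this list. The sketch below is a minimal reconstruction of that triage rule, using simplified boolean fields of our own invention; since the monitor's labels are AI-generated, its actual pipeline is presumably a model-based classifier rather than a rule engine like this.

from dataclasses import dataclass

@dataclass
class Event:
    ai_involved: bool    # an AI system plays a material role in the event
    harm_realized: bool  # harm has actually occurred
    harm_plausible: bool # there is a credible risk of future harm

def classify(event: Event) -> str:
    """Triage mirroring the stated rule: realized harm -> AI Incident;
    credible future harm -> AI Hazard; otherwise context or governance
    coverage -> Complementary Information."""
    if not event.ai_involved:
        return "Unrelated"
    if event.harm_realized:
        return "AI incident"
    if event.harm_plausible:
        return "AI hazard"
    return "Complementary information"

# The IMF jobs entry above: AI involved, no realized harm, plausible
# future harm, hence "AI hazard".
print(classify(Event(ai_involved=True, harm_realized=False, harm_plausible=True)))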

"الغارديان" عن عقيد في وحدة 8200 في الجيش الإسرائيلي: استخدمنا الذكاء الاصطناعي في غزة عام 2021

2024-04-11
القدس العربي
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (machine learning-based) used in a military context to identify individuals as terrorist targets, which directly led to military attacks in Gaza. This involvement of AI in lethal targeting and the resulting harm to people meets the definition of an AI Incident. The AI system's outputs were used to select thousands of targets, and although humans made final decisions, the AI's role was pivotal in causing harm. Therefore, this is not merely a potential hazard or complementary information but a realized incident involving AI-related harm.

"أخطاء غوغل" تثير تساؤلات بشأن تحكم الذكاء الاصطناعي في المعلومات

2024-04-10
صحيفة الشرق الأوسط
Why's our monitor labelling this an incident or hazard?
The AI system 'Gemini' generated inaccurate and misleading images that distort historical facts, such as depicting racially diverse Nazi forces and anachronistic political figures. This misinformation can harm communities by shaping false collective memory and undermining trust in information, which fits the definition of harm to communities. The article explicitly states these harms have occurred and discusses the AI system's malfunction as the cause. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

How does artificial intelligence "threaten" elections in 60 countries?

2024-04-10
الحرة
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI systems to create deepfake videos and audio that spread false political information, which is actively causing harm by misleading voters and destabilizing elections in multiple countries. This meets the definition of an AI Incident because the AI system's use has directly led to harm to communities and violations of democratic rights. The harm is ongoing and documented, not merely potential. The article also mentions calls for stronger policies and detection methods, but the primary focus is on the realized harm from AI-generated misinformation, not just future risks or responses, so it is not Complementary Information or an AI Hazard.

"إذا ترك دون رادع".. تحذيرات من "مخاطر" الذكاء الاصطناعي على النظم الاجتماعية

2024-04-09
الحرة
Why's our monitor labelling this an incident or hazard?
The article centers on the potential dangers and societal risks posed by AI if left unchecked, including threats to democracy and social stability. It references current AI systems and their capabilities but does not describe any actual incident of harm caused by AI. The warnings and calls for legislation indicate a plausible risk of future harm, making this an AI Hazard rather than an AI Incident. There is no detailed report of a specific AI-related harm event, nor is the article primarily about responses or updates to past incidents, so it is not Complementary Information. It is clearly related to AI and its societal impact, so it is not Unrelated.

A $2.4 billion federal investment in artificial intelligence | RCI

2024-04-08
Radio Canada
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of their development and deployment, specifically addressing AI safety and security. However, there is no mention of any realized harm, malfunction, or misuse of AI systems leading to injury, rights violations, or other harms. The article centers on preventive and governance efforts, investment announcements, and expert opinions on AI risks and safety research. Therefore, it does not qualify as an AI Incident or AI Hazard but fits the category of Complementary Information, as it provides context and updates on societal and governance responses to AI challenges.

The Canadian government announces a plan to establish an institute for artificial intelligence safety - تيل كيل عربي

2024-04-08
تيل كيل عربي
Why's our monitor labelling this an incident or hazard?
The article does not describe any specific AI system causing harm or posing a direct or indirect risk of harm. Instead, it reports on a government initiative to enhance AI safety and infrastructure, which is a governance and development update. There is no mention of incidents, hazards, or realized or potential harms related to AI systems. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI.

Warnings about the influence of "artificial intelligence"... in identifying electoral leanings

2024-04-11
almodon
Why's our monitor labelling this an incident or hazard?
The article explicitly links AI systems to the generation and spread of misleading political content, including deepfakes, which are causing real confusion and harm in elections worldwide. This constitutes harm to communities and violations of rights, fulfilling the criteria for an AI Incident. The involvement of AI is clear, as the misinformation is AI-generated, and the harm is ongoing and materialized, not merely potential. The article also discusses responses and challenges but the primary focus is on the harm caused by AI-generated disinformation, making it an AI Incident rather than a hazard or complementary information.

Artificial intelligence threatens the workforce and human professions... and experts identify the jobs most at risk!

2024-04-09
LBCI Lebanon
Why's our monitor labelling this an incident or hazard?
The article focuses on the potential future threat posed by AI to human jobs, which is a credible risk of harm to labor rights and employment. Since no actual harm or incident has occurred yet, and the article is primarily about expert warnings and predictions, this fits the definition of an AI Hazard. There is no indication of a realized AI Incident or complementary information about responses or mitigation measures. Therefore, the event is best classified as an AI Hazard.

Artificial intelligence and war... a new reality without controls - قناة العالم الاخبارية

2024-04-10
قناة العالم الاخبارية
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in military targeting and weaponry, which has directly led to harm to civilians and destruction of property, fulfilling the criteria for an AI Incident. The article explicitly states that AI-enabled targeting platforms are central to ongoing military actions causing loss of life and damage, and raises concerns about ethical and legal violations. Therefore, this is a clear case of an AI Incident due to realized harm caused by AI use in warfare.

Because of artificial intelligence... new forecasts of the jobs at risk

2024-04-11
صوت بيروت إنترناشونال
Why's our monitor labelling this an incident or hazard?
The article centers on forecasts and warnings about AI's potential to disrupt employment, particularly in repetitive and knowledge-based roles. It does not report any actual harm or incident caused by AI systems, but rather plausible future risks of job displacement and workforce transformation. Therefore, it fits the definition of an AI Hazard, as it plausibly leads to harm (job losses and economic disruption) due to AI development and use, but no direct or indirect harm has yet occurred as described in the article.

Whoever owns smart technology will win future wars | صحيفة العرب

2024-04-11
صحيفة العرب
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in military operations that have directly led to civilian casualties (harm to persons) and ethical violations in warfare, which qualifies as an AI Incident. It also discusses the broader strategic risks and potential for escalation due to AI use in war, but since actual harm has already occurred, the primary classification is AI Incident. The involvement of AI in targeting and autonomous systems is clear, and the harms include injury and violations of human rights. Therefore, this is an AI Incident rather than a hazard or complementary information.

Meta continues its war against "rumors"

2024-04-09
Lebanese Forces Official Website
Why's our monitor labelling this an incident or hazard?
The article focuses on Meta's strategy to label AI-generated content and its moderation policies to address misinformation and harmful content. These are governance and societal responses to AI-related risks rather than descriptions of a specific incident or hazard involving AI systems causing or plausibly causing harm. There is no direct or indirect harm described as having occurred due to AI system malfunction or misuse, nor is there a specific event indicating plausible future harm from AI systems. Therefore, this is best classified as Complementary Information, providing context on AI governance and mitigation efforts.

Ethics and challenges: an analysis of the social and ethical impacts of artificial intelligence's advance

2024-04-11
جريدة الفجر المصرية
Why's our monitor labelling this an incident or hazard?
The article discusses broad societal and ethical challenges related to AI progress without detailing any concrete AI system causing harm or posing an immediate risk. It does not report on a realized AI Incident or a specific AI Hazard but rather provides contextual and reflective information about AI's potential impacts and the need for governance. Therefore, it fits the definition of Complementary Information, as it enhances understanding of AI's societal implications without describing a particular harmful event or credible imminent risk.

The godfather of artificial intelligence warns of a "robot war"

2024-04-12
اخبار اليمن الان
Why's our monitor labelling this an incident or hazard?
The mention of AI-powered weapons and the warning about potential major disasters indicate a plausible future harm stemming from AI systems used in autonomous or semi-autonomous weaponry. Although no actual harm has occurred yet, the risk of such harm is credible and significant, fitting the definition of an AI Hazard. The article does not describe a realized harm or incident, nor does it focus on responses or updates, so it is neither an AI Incident nor Complementary Information. Therefore, the event is best classified as an AI Hazard.

Will it turn into

2024-04-11
الفرات نيوز
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in warfare to identify targets and conduct operations, with reported civilian casualties linked to these uses, fulfilling the criteria for harm to persons and communities. The involvement of AI in decision-making processes that lead to lethal outcomes, and the ethical concerns about control and accountability, confirm that AI's development and use have directly or indirectly led to harm. The discussion of ongoing conflicts and AI's role therein supports classification as an AI Incident rather than a hazard or complementary information.

"إذا ترك دون رادع".. تحذيرات من "مخاطر" الذكاء الاصطناعي على النظم الاجتماعية

2024-04-09
اخبار اليمن الان
Why's our monitor labelling this an incident or hazard?
The article discusses potential risks and calls for regulation to prevent harm to social and democratic systems, indicating a credible risk of future harm from AI. However, it does not report any actual harm or incident caused by AI at this time. Therefore, it fits the definition of an AI Hazard, as it concerns plausible future harm rather than a realized incident.

Warnings of the "risks" of artificial intelligence to social systems

2024-04-09
elsiyasa.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as it discusses AI's societal impact and potential misuse. The harms described (collapse of democracy, social systems, wars) are plausible future harms that could result from AI misuse or malfunction. Since no actual harm has occurred yet, and the article centers on warnings and calls for regulation, this fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Does a teacher's use of artificial intelligence technologies to assess students pose a risk? - اخبار السودان

2024-04-09
اخبار السودان
Why's our monitor labelling this an incident or hazard?
The article primarily provides an overview and discussion of AI use in education for assessment purposes, including ethical considerations and potential risks, but does not describe any realized harm or incident directly caused by AI systems. There is no indication of an AI incident or hazard occurring or imminent. The content fits the definition of Complementary Information as it offers context, expert views, and societal considerations related to AI in education without reporting a specific AI-related harm or credible future harm event.

Artificial intelligence threatens elections in 60 countries, and activists mobilize - شفق نيوز

2024-04-10
Shafaq News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake content and misinformation that is actively spreading and influencing elections, causing harm to communities and democratic processes. This fits the definition of an AI Incident because the AI's use has directly led to harm (political misinformation, confusion, potential violence). The article details realized harms, not just potential risks, and discusses the societal impact and responses. Therefore, it is classified as an AI Incident.

AI in war: a dazzling advance and doubtful human control

2024-04-10
www.diariolibre.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to identify attack targets in Gaza, which implies the AI system's outputs directly influenced lethal military actions causing harm to people. This meets the definition of an AI Incident as the AI system's use has directly led to harm to persons and raises human rights concerns. The discussion of limited human oversight and the rapid AI-driven decision-making further supports the classification. Although some details are investigative and contested, the reported use and resulting harm are sufficient to classify this as an AI Incident rather than a hazard or complementary information.

Artificial intelligence in war: the enormous dangers and strategic advantages of the technology in conflicts

2024-04-10
Semana.com Últimas Noticias de Colombia y el Mundo
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used in military targeting and defense operations, which have directly led to harm in conflict zones (e.g., attacks in Gaza, Yemen). It also discusses the risks of escalation and loss of human control, which are plausible future harms. Since actual harm from AI-enabled military actions is reported or reasonably inferred, this qualifies as an AI Incident. The article also covers broader strategic and ethical concerns, but the presence of realized harm takes precedence over potential hazards or complementary information.

AI in war: a dazzling advance and doubtful human control

2024-04-10
El Observador
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used in military contexts that have directly influenced decisions about targeting and warfare, which implicates potential violations of human rights and risks of harm to people and communities. The mention of AI-assisted target identification in Gaza, which has drawn international concern, indicates actual use of AI leading to harm or at least serious risk thereof. The ethical concerns about limited human control and the risk of escalation further support the classification as an AI Incident, since harm or violation of rights is occurring or highly likely. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Unchecked, AI is gaining ground on the battlefield

2024-04-14
Última Hora
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems used in military contexts, where their deployment has directly influenced decisions about targeting and engagement in armed conflicts. This use has led to real and significant harms, including potential loss of life and escalation of warfare, which are harms to persons and communities. The concerns about limited human oversight and the risk of escalation further underscore the direct and indirect harms caused by AI in these contexts. Therefore, this event qualifies as an AI Incident because the AI systems' use has directly or indirectly led to harm in warfare settings.

AI advances by leaps and bounds in war, but with doubtful human control

2024-04-10
El Financiero, Grupo Nación
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used to identify attack targets and make tactical decisions in active conflict zones, which directly leads to harm (injury or death) to people. The mention of AI-enabled missile defense and drone swarms further supports the presence of AI systems influencing military operations with potential lethal outcomes. The concerns about limited human oversight and the rapid decision-making by AI underline the risk of harm. Since harm is occurring or has occurred due to AI use in warfare, this qualifies as an AI Incident rather than a hazard or complementary information. The article does not merely warn about potential future harm but documents ongoing use and consequences.

AI in war: a dazzling advance and doubtful human control

2024-04-13
UDG TV
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used in military applications, including autonomous or semi-autonomous target identification and decision support systems. It discusses the development and use of these AI systems and the risks they pose, including escalation of conflict and potential nuclear weapon use. While it mentions allegations of AI use in attacks, it does not confirm direct causation of harm by AI systems in a specific incident. Instead, it focuses on the broader risk and uncertainty about human control and the potential for serious harm. This fits the definition of an AI Hazard, as the AI systems' use could plausibly lead to significant harm, but no confirmed AI Incident is described.

Artificial intelligence in war: algorithms to kill | Semanario Universidad

2024-04-10
Semanario Universidad
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems used to identify military targets and assist in attack decisions, which directly relate to harm to persons and violations of human rights in conflict zones. The mention of the Lavender program allegedly used to select targets in Gaza, and the Iron Dome system's AI-assisted interception, show AI's direct role in lethal military operations. The concerns about limited human oversight and the potential for escalation further underline the risk of harm. Since the AI systems' use has already led to harm or is actively contributing to harm in warfare, this qualifies as an AI Incident under the framework, as the AI's development and use have directly or indirectly led to injury, harm, or violations of rights.

A Brief History of Kill Lists, From Langley to Lavender | Common Dreams

2024-04-16
Common Dreams
Why's our monitor labelling this an incident or hazard?
The Lavender AI system is explicitly described as an AI system used to generate kill lists that directly lead to lethal airstrikes causing death and injury to targeted individuals and their families, including innocent civilians. This constitutes direct harm to persons and violations of human rights. The system's automated scoring and rapid approval process contribute to wrongful targeting and collateral damage. The article clearly links the AI system's use to realized harm, fulfilling the criteria for an AI Incident. The historical context helps explain how such kill lists evolved but does not weaken the direct causal link between the AI system and the harm in the current case.

A Brief History of Kill Lists, From Langley to Israel's AI System Called "Lavender" - Global Research

2024-04-16
Global Research
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used to generate kill lists that directly lead to lethal airstrikes causing death and harm to civilians, including women and children. This constitutes direct harm to persons and communities, as well as violations of human rights. The AI system's development and use are central to the harm described. Therefore, this event qualifies as an AI Incident under the OECD framework, as the AI system's use has directly led to significant harm and violations of rights.

A Brief History of Kill Lists, From Langley to Lavender | naked capitalism

2024-04-16
naked capitalism
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Lavender) in the development and use of kill lists that have directly led to mass killings and collateral civilian deaths, constituting injury and harm to groups of people and violations of human rights. The AI system's role is pivotal in automating and accelerating targeting decisions with insufficient human oversight, resulting in significant harm. Therefore, this event qualifies as an AI Incident under the OECD framework.

A Brief History of Kill Lists, From Langley to Lavender | Dissident Voice

2024-04-16
Dissident Voice
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Lavender) explicitly described as generating kill lists that directly lead to lethal airstrikes causing death and harm to civilians, which constitutes injury and harm to groups of people and violations of human rights. The AI system's use and malfunction (high false positive rate) are central to the harm caused. Therefore, this qualifies as an AI Incident under the OECD framework because the AI system's use has directly led to significant harm and human rights violations.

A Brief History of Kill Lists, From Langley to Lavender

2024-04-17
The Smirking Chimp
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system Lavender, which assigns scores to individuals and generates kill lists that are used to conduct airstrikes resulting in deaths of targeted individuals and collateral civilian casualties. This constitutes direct harm to persons and communities and violations of human rights. The AI system's role is pivotal in automating and accelerating the targeting process, reducing human oversight and increasing the risk of wrongful killings. The harm is realized and ongoing, not merely potential. Hence, this is an AI Incident under the OECD framework.

A Brief History of Kill Lists, From Langley to Lavender

2024-04-17
ZNetwork
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Lavender) that assigns scores to individuals to generate kill lists, which are then used to conduct airstrikes resulting in deaths of targeted individuals and collateral civilian casualties. This constitutes direct harm to persons and communities, as well as violations of human rights. The AI system's development and use are central to the harm described. Therefore, this event qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm including loss of life and human rights violations.

Google among them... well-known technology companies accused of complicity with Israel in the killing of Palestinians

2024-04-20
جريدة الشرق
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., Lavender) used for surveillance and targeting that have directly led to civilian deaths and human rights violations. The AI systems' probabilistic pattern recognition and targeting recommendations have caused harm to people, fulfilling the criteria for an AI Incident. The involvement of Google and other tech companies in providing AI technology that facilitates these harms further supports this classification. Therefore, this event is an AI Incident due to the direct and significant harm caused by AI-enabled systems.

The occupation entity's cooperation with Google and Meta against the Palestinians... how? - قناة العالم الاخبارية

2024-04-20
قناة العالم الاخبارية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., facial recognition, machine learning algorithms like Lavender) used by the Israeli military in collaboration with Google and Meta to surveil and target Palestinians. The AI systems' outputs are used to create assassination lists and conduct military strikes, causing direct harm to individuals and communities, and violating human rights and international law. The AI systems' probabilistic nature and lack of verification contribute to wrongful targeting, fulfilling the criteria for an AI Incident due to direct harm and rights violations caused by AI use.

"الأورومتوسطي" يدعو إلى التحقيق مع شركات تكنولوجية لمشاركتها في جرائم بغزة

2024-04-20
فلسطين اليوم - عاجل أخبار فلسطين ورام الله اخبار العرب
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for surveillance and targeting that have directly led to civilian casualties, constituting harm to persons. The AI systems' probabilistic targeting and lack of verification have caused wrongful targeting and deaths, fulfilling the criteria for an AI Incident. The involvement of major tech companies in enabling these systems further supports the classification as an AI Incident due to direct or indirect contribution to harm. Therefore, this event is best classified as an AI Incident.

Euro-Med Monitor: technology and social media companies are causing the killing of civilians in Gaza

2024-04-20
الرأي
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-supported systems (e.g., Lavender, Gospel) used for surveillance and targeting that have caused civilian deaths, which is a direct harm to people. The AI systems' malfunction or misuse (reliance on probabilistic data without verification) has led to wrongful targeting and killing of civilians, fulfilling the criteria for an AI Incident. The involvement of technology companies in enabling these systems further supports the classification. Therefore, this event is an AI Incident due to direct harm caused by AI system use in a military context resulting in loss of civilian life and human rights violations.

Over their involvement in the killing of Palestinians... calls to investigate technology companies

2024-04-20
Al-Ahed News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for surveillance and targeting that have caused real harm—civilian deaths—thus meeting the criteria for an AI Incident. The AI systems' probabilistic targeting without verification and the resulting civilian casualties represent direct harm to people (harm category a) and violations of human rights (category c). The involvement of companies like Google and Meta in enabling these systems further supports the classification. Therefore, this event is an AI Incident due to the direct and serious harms caused by AI system use.

Euro-Med Monitor: technology and social media companies are causing the killing of civilians in Gaza

2024-04-20
وكالة الصحافة الفلسطينية
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (machine learning algorithms used for target identification and surveillance) whose outputs have directly led to harm, including injury and death of civilians, constituting violations of human rights. The use of AI in these military operations is central to the harm described, with documented cases of civilians targeted based on AI-generated intelligence. The involvement of major technology companies in providing data or technology that facilitates these harms further supports the classification as an AI Incident. The harms are realized, not hypothetical, and the AI systems' role is pivotal in causing these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Israel's AI allegedly used WhatsApp data to bomb the homes of Gaza residents

2024-04-25
CNNindonesia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system ('Lavender') used by the Israeli military to process data from WhatsApp to identify targets for bombing in Gaza. This use of AI has directly led to harm, including civilian casualties and destruction of homes, which constitutes injury to persons and harm to communities. The system's method of targeting based on social media connections and the resulting high civilian death toll indicate violations of human rights and humanitarian law. Therefore, this event meets the criteria for an AI Incident due to the direct link between AI use and realized harm.

WhatsApp accused of helping Israel track Palestinians; here are the facts

2024-04-25
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Lavender and Where's Daddy) used to infer locations and target individuals based on WhatsApp group data, which is an AI system involvement. The use of these AI tools has directly led to harm, including targeted killings and risk to civilians, which is harm to communities and a violation of human rights. Although WhatsApp denies involvement, the AI systems' use by Israel for these purposes is central to the harm described. Therefore, this qualifies as an AI Incident.

Israel allegedly used WhatsApp to target Palestinians

2024-04-24
Bisnis Indonesia Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used by the Israeli military to identify targets, which directly leads to harm including deaths and civilian casualties. The AI system's reliance on WhatsApp group membership data for targeting decisions implicates privacy and human rights violations. The harm is realized and significant, including potential violations of international law and human rights, meeting the criteria for an AI Incident. The article also discusses the role of Meta/WhatsApp, but the primary focus is on the AI system's use causing harm, not just complementary information about company responses or policies.

WhatsApp allegedly helped Israel massacre Palestinians in Gaza

2024-04-23
beritasatu.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system, Lavender, used by Israel to identify targets in Gaza for military attacks. The AI system is said to be trained on, or informed by, data obtained from WhatsApp groups; WhatsApp is a messaging platform owned by Meta, not itself an AI system. The use of this AI system has directly led to harm to civilians, including potential killings, which is a violation of human rights and humanitarian law. Therefore, this is an AI Incident involving the use of AI in a harmful military context, with WhatsApp data allegedly playing a role in enabling the AI's effectiveness. The harm is realized and serious, meeting the criteria for an AI Incident rather than a hazard or complementary information.

WhatsApp leak: Israel reportedly used data to attack the homes of Palestinians - Harianjogja.com

2024-04-24
Harianjogja.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Lavender) used by the military to identify targets based on data drawn from WhatsApp; the system processes complex social data to support targeting decisions. The use of this AI system has directly led to harm, including civilian deaths, which constitutes injury and violations of human rights. The article also discusses potential legal and ethical violations linked to this use of AI. Therefore, this is a clear AI Incident, as the AI system's use has directly caused significant harm.

WhatsApp accused of leaking Palestinians' information to Israel; here are the facts

2024-04-24
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
While the article involves an AI system ('Lavender') used by the Israeli military and discusses potential misuse of user data from WhatsApp, there is no confirmed incident of data sharing or harm caused by AI. The claims are allegations and WhatsApp denies them. Therefore, no realized harm or incident is established. The article mainly provides context and the company's response to the allegations, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Israel allegedly used WhatsApp to target attacks on Palestinians

2024-04-25
KOMPAS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Lavender) used by the Israeli military to identify targets for attacks. The system uses data from WhatsApp groups to identify suspected militants, and this AI-driven targeting has directly led to lethal strikes killing civilians, including entire families. This constitutes direct harm to persons and communities, as well as potential violations of human rights. Therefore, this event meets the criteria for an AI Incident due to the AI system's use causing direct harm and rights violations.