US Navy Deploys AI to Accelerate Mine Detection in Strait of Hormuz

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The US Navy has contracted Domino Data Lab for nearly $100 million to develop AI systems that rapidly detect underwater mines in the Strait of Hormuz. The AI integrates multi-sensor data, enabling faster and more accurate mine identification, aiming to enhance maritime security and protect global trade routes.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI system developed by Domino Data Lab to detect underwater mines, which are a direct threat to maritime safety and global trade. The AI system's role is to speed up and improve mine detection, which is critical to preventing harm to personnel, infrastructure, and economic activity. No actual harm or incident caused by the AI system is reported; rather, the AI is used to mitigate a known hazard. The event thus describes a plausible future harm scenario where the AI system's use is central to managing a significant risk. This fits the definition of an AI Hazard: the AI system's development and use are central to managing a significant risk of harm from underwater mines, but no harm attributable to the AI system itself has yet occurred.[AI generated]
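The rationale above applies a simple decision rule: AI involvement plus realized harm yields an Incident, AI involvement plus plausible future harm yields a Hazard, and AI involvement with neither yields Complementary Information. A minimal sketch of that rule follows; the function and label names are illustrative only, not the monitor's actual implementation, which classifies free-text articles.

```python
def classify_event(ai_involved: bool, harm_occurred: bool,
                   plausible_future_harm: bool) -> str:
    """Illustrative decision rule mirroring the rationale text above.

    Hypothetical sketch -- NOT the monitor's real classifier.
    """
    if not ai_involved:
        return "Unrelated"
    if harm_occurred:
        # Harm directly or indirectly linked to the AI system has materialized.
        return "AI incident"
    if plausible_future_harm:
        # No harm yet, but the AI system's use could plausibly lead to harm.
        return "AI hazard"
    # AI is involved but neither harm nor a credible risk is described.
    return "Complementary information"

# The mine-detection story: AI is involved, no harm has occurred,
# but harm in the Strait of Hormuz remains plausible.
print(classify_event(True, False, True))  # → AI hazard
```

Note how the divergent labels across the articles below mostly track the second and third branches: sources that stress operational risk land on "AI hazard", while those that stress risk reduction land on "Complementary information".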
Industries
Government, security, and defence

Severity
AI hazard

AI system task:
Recognition/object detection; Event/anomaly detection


Articles about this incident or hazard

Algorithms versus mines: Will America use AI to decide the "Battle of Hormuz"?

2026-05-01
Aljazeera
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into autonomous maritime drones used for mine detection, which is a direct application of AI in a military operation. The AI system's development and use are central to addressing a real and ongoing threat posed by naval mines, which can cause injury, death, and disruption of critical infrastructure (global energy shipping routes). The AI system's role in accelerating detection and analysis is directly linked to mitigating these harms. Although the article does not report a specific accident or injury, the operational context involves real and immediate risks, and the AI system's deployment is a response to these harms. Hence, it is an AI Incident rather than a mere hazard or complementary information.
US Navy enlists an AI company to counter Iran's mines

2026-05-01
Al Arabiya
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system developed by Domino Data Lab to detect underwater mines, which are a direct threat to maritime safety and global trade. The AI system's role is to speed up and improve mine detection, which is critical to preventing harm to personnel, infrastructure, and economic activity. No actual harm or incident caused by the AI system is reported; rather, the AI is used to mitigate a known hazard. The event thus describes a plausible future harm scenario where the AI system's use is central to managing a significant risk. This fits the definition of an AI Hazard: the AI system's development and use are central to managing a significant risk of harm from underwater mines, but no harm attributable to the AI system itself has yet occurred.
With a $100 million deal, "AI" is the US Navy's new weapon to dismantle Iran's mines in the Strait of Hormuz (report) | Al-Masry Al-Youm

2026-05-01
Al-Masry Al-Youm
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system for military mine detection and removal, which is explicitly described. However, there is no indication that the AI system has caused any injury, disruption, violation of rights, or harm to property or communities at this stage. The article focuses on the deployment and capabilities of the AI system, emphasizing its potential to accelerate operations and reduce human risk. Since no harm has occurred yet but the AI system's use in a conflict-prone area could plausibly lead to incidents in the future, this qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the article centers on the strategic deployment and capabilities of the AI system, not on responses or updates to past incidents.
"Via underwater drones": America turns to AI to detect the mines of the Strait of Hormuz | Al-Masry Al-Youm

2026-05-01
Al-Masry Al-Youm
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (underwater drones with AI-based mine detection software) actively deployed to detect mines, which directly relates to safety and security in a critical infrastructure area (maritime navigation). However, the article does not report any actual harm or incident caused by the AI system, nor does it describe any malfunction or misuse leading to harm. Instead, it highlights the potential for improved safety and faster response times. Therefore, this event represents a plausible future risk mitigation scenario where AI could prevent harm, but no harm has yet occurred. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to preventing or causing harm in the future, but no incident has materialized yet.
US Navy employs AI to detect Iran's mines in days instead of months

2026-05-01
Sky News Arabia
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (machine learning for mine detection) in a critical infrastructure context (maritime navigation through the Strait of Hormuz). The AI system's use directly relates to managing and mitigating threats from naval mines, which if undetected, could cause harm to people, property, and global economic operations. However, the article does not report any actual harm or incident caused by the AI system or its malfunction; rather, it describes the deployment and capabilities of the AI system to prevent or respond to threats. Therefore, this is not an AI Incident but rather an AI Hazard, as the AI system's use could plausibly lead to preventing or managing harm in a critical infrastructure context, and the technology's deployment itself implies potential future impacts on safety and security.
In secret operations, the Pentagon enlists AI companies

2026-05-01
Sky News Arabia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (machine learning programs) to detect naval mines, which is a critical military application. While no direct harm is reported, the deployment of AI in military operations, especially for mine detection and potentially other secret operations, carries plausible risks of harm if the AI malfunctions or is misused. However, since the article does not report any actual harm or incident resulting from the AI use, but rather the initiation or enhancement of AI capabilities, this constitutes a plausible future risk rather than a realized harm.
US judge blocks Trump administration from revoking protection for 3,000 Yemenis

2026-05-01
Asharq Al-Awsat
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used by the U.S. Navy for mine detection and removal, which is a critical infrastructure protection task. While the AI system is actively used, there is no indication that its use has directly or indirectly caused harm. Instead, it is intended to reduce harm by improving mine detection speed and accuracy. The geopolitical tensions and potential military escalations represent a broader risk environment but do not constitute an AI incident themselves. Since no harm has occurred and the AI system's role is preventive, this qualifies as Complementary Information providing context on AI deployment in a high-risk domain rather than an AI Incident or Hazard.
America enlists an AI company to counter the "Hormuz" mines

2026-05-01
Asharq Al-Awsat
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used by the U.S. Navy for mine detection and removal, confirming AI system involvement. The AI system is in active use (use phase) to enhance safety and operational efficiency. No harm or malfunction is reported, nor is there a direct or indirect link to realized harm. While the geopolitical context involves potential future conflict and infrastructure disruption, the AI system itself is described as a tool to reduce such risks, not as a source of hazard or incident. Thus, the event does not meet criteria for AI Incident or AI Hazard but fits the definition of Complementary Information, providing valuable insight into AI's role in maritime security and conflict management.
With a $100 million contract, Washington turns to "AI" to clear the Hormuz mines

2026-05-01
Okaz
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (machine learning models integrated with sensor data) for underwater mine detection and removal, which is a high-risk military application. The AI system is being developed and deployed to address a dangerous situation (mines threatening maritime safety and global economy). There is no report of actual harm caused by the AI system or its malfunction. The AI system's role is pivotal in potentially preventing harm by improving mine detection speed and accuracy. Given the military context and the potential for harm if the AI system fails or is misused, this qualifies as an AI Hazard. It is not an AI Incident because no harm has occurred due to the AI system. It is not Complementary Information because the article focuses on the contract and AI system deployment rather than updates or responses to past incidents. It is not Unrelated because the AI system is central to the event.
US military employs AI to detect Iran's mines

2026-05-01
Emarat Al Youm
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used for mine detection, integrating sensor data and employing machine learning to identify new types of mines rapidly. The use of this AI system directly relates to preventing harm (injury, disruption to critical infrastructure, and economic harm) by improving mine detection capabilities. Since the AI system is actively deployed to mitigate real threats and prevent harm, this qualifies as an AI Incident due to the direct involvement of AI in addressing a harm-related scenario.
US Navy enlists AI to counter the mines in "Hormuz"

2026-05-01
Al Bayan
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as it is used to train autonomous underwater vehicles to detect mines. The use of this AI system is intended to reduce harm to sailors and protect maritime trade routes, which are critical infrastructure. Although no harm has yet occurred, the AI system's deployment is directly related to preventing injury and disruption, thus it is an AI system in active use with a safety purpose. There is no indication of malfunction or harm caused by the AI system itself, nor is there a plausible risk of harm from the AI system's use described. Therefore, this is complementary information about the deployment and use of an AI system to mitigate risks, not an incident or hazard.
US Navy employs AI to detect the mines of the Strait of Hormuz

2026-05-01
Al Bayan
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used by the U.S. Navy for mine detection and removal, which is a critical infrastructure and safety application. Although no actual harm or malfunction is reported, the context of military mine clearance in a tense geopolitical area implies a credible risk of harm if the AI systems fail or are misapplied. The event concerns the development and deployment of AI systems with potential to cause harm, thus fitting the definition of an AI Hazard. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information since the article focuses on the AI system's development and deployment rather than a response or update to a prior incident. It is not Unrelated because AI involvement is clear and central.
Enhancing US Navy capabilities with artificial intelligence

2026-05-01
annahar.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system for underwater mine detection and removal, which is a clear example of AI system use in a critical infrastructure and military context. However, the article does not describe any incident of harm, malfunction, or violation resulting from the AI system's deployment. Instead, it focuses on the enhancement of capabilities and operational improvements. There is no indication of realized harm or direct/indirect negative consequences caused by the AI system. Therefore, this event does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides important context and updates on the deployment and operational use of AI in a critical domain, which is relevant for understanding the evolving AI ecosystem and its implications.
US Navy contracts an AI company to accelerate the removal of Iranian mines in the Strait of Hormuz

2026-05-01
Alrai-media
Why's our monitor labelling this an incident or hazard?
The article details the deployment and enhancement of AI systems for military mine detection, which is an AI system use case. However, it does not report any injury, damage, rights violation, or other harm caused by the AI system, nor does it describe any malfunction or misuse leading to harm. The event is about the development and use of AI to improve military operations, with no direct or indirect harm reported or plausible immediate harm described. Therefore, it does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on AI adoption in defense and technological advancement without reporting harm or risk of harm.
US Navy contracts an AI company to accelerate the removal of Iranian mines in the Strait of Hormuz

2026-05-01
https://www.alanba.com.kw/newspaper/
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in the use phase, enhancing underwater drones' capabilities to detect mines faster and more accurately. While no harm is reported as having occurred, the AI system's deployment in mine detection directly relates to preventing injury or damage by improving mine clearance. There is no indication of malfunction or misuse causing harm. The event focuses on the development and use of AI to improve safety and operational efficiency, with no current incident or realized harm. Therefore, this is an AI Hazard: the AI system operates in a critical infrastructure context where its performance could plausibly determine whether harm occurs, but no harm has yet been reported.
Artificial intelligence... to counter Iran's mines!

2026-05-01
MTV Lebanon - Live Online TV
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system explicitly described as enhancing mine detection and removal operations, which is a critical infrastructure and safety-related application. The AI system is being developed and deployed to operate in a hazardous environment with potential for injury or disruption if mines are not detected. However, the article does not describe any incident of harm or malfunction caused by the AI system, nor does it report misuse or failure. The focus is on the contract award and the intended capabilities of the AI system, indicating a plausible future risk context but no realized harm. Thus, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm or mitigate harm in a conflict zone, but no incident has yet occurred.
Report: US Navy enlists an AI company to counter Iran's mines

2026-05-01
Arab 48
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as being used by the U.S. Navy to detect and remove naval mines, which are physical hazards threatening human life and critical infrastructure. The AI system's use directly impacts the management of critical infrastructure (maritime navigation) and reduces injury risk to personnel, fulfilling the criteria for an AI Incident. Although the article focuses on the contract and capabilities, the AI system is actively deployed in a context where harm is present and being addressed, not merely a potential future risk or complementary information.
US Navy enlists AI companies to accelerate mine detection in the Strait of Hormuz - Youm7

2026-05-01
Youm7
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (underwater drones controlled by AI) for mine detection and removal. While no harm is reported as having occurred, the military application in a conflict-prone region and the nature of the technology imply a credible risk of harm in the future, such as accidental detonations, escalation of military tensions, or misuse of autonomous systems. Since the event concerns the development and deployment of AI systems that could plausibly lead to harm but no harm has yet materialized, it fits the definition of an AI Hazard rather than an AI Incident.
The Pentagon enlists AI companies in secret operations - Lebanese Forces Official Website

2026-05-01
Lebanese Forces Official Website
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in military operations, including mission planning and weapons targeting, which are sensitive and potentially high-risk applications. While the article does not report any realized harm or incidents resulting from these AI deployments, the use of AI in secret military operations and autonomous or semi-autonomous weapons targeting plausibly could lead to significant harms such as injury, violations of rights, or escalation of conflict. Therefore, this situation represents an AI Hazard due to the credible risk of future harm stemming from the deployment of AI in these contexts.
US Navy employs AI to detect Iranian mines in the Strait of Hormuz

2026-05-01
SANA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system developed and used by the U.S. Navy for autonomous mine detection, confirming AI system involvement. However, there is no indication that the AI system caused any injury, disruption, rights violation, or harm. Instead, the AI system is intended to reduce harm by improving detection speed and accuracy, thus potentially preventing incidents. The event is about the deployment and enhancement of AI capabilities, not about harm or plausible harm caused by AI. Hence, it fits the definition of Complementary Information, providing supporting context on AI's role in military operations and safety improvements.
US military bolsters Hormuz mine detection with AI amid tensions with Iran

2026-05-01
Bokra
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (machine learning for mine detection) in a critical infrastructure context (maritime navigation through the Strait of Hormuz). While the AI system is actively being developed and deployed, the article does not describe any harm or incident resulting from its use. Instead, it focuses on the enhancement of capabilities to detect mines and reduce human intervention, which is a positive application aimed at preventing harm. Therefore, this is not an AI Incident. It also does not describe a hazard where AI could plausibly lead to harm, but rather the use of AI to reduce risk. The article is best classified as Complementary Information, providing context on AI deployment in military maritime security.
Akhbarak Net | US Navy employs AI to detect Iran's mines in days instead of months

2026-05-01
Akhbarak (Egyptian news site)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (machine learning for mine detection) in a military context, which is clearly described. However, there is no indication that the AI system has caused any injury, disruption, rights violation, or other harm. The article focuses on the deployment and capabilities of the AI system rather than any negative consequences or incidents. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on AI adoption and technological advancement in defense, without reporting harm or plausible harm.
Akhbarak Net | In secret operations, the Pentagon enlists AI companies

2026-05-01
Akhbarak (Egyptian news site)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems in military operations, including mission planning and weapons targeting, which are high-risk applications. However, there is no indication that any harm has yet occurred due to these AI systems. The involvement of AI in these contexts could plausibly lead to harms such as injury, violation of rights, or disruption, but these are potential future risks rather than realized incidents. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
US Navy adopts AI to accelerate mine removal in the Strait of Hormuz

2026-05-01
Sawt Beirut International
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system used in autonomous underwater vehicles for mine detection and removal. The AI system's use aims to reduce risks to human sailors and improve response times, which directly relates to safety and protection of critical maritime infrastructure. However, the article does not report any actual harm or incident caused by the AI system; rather, it describes the deployment and enhancement of AI capabilities to prevent harm and improve security. Therefore, this event represents a plausible future risk mitigation and operational enhancement scenario involving AI, but no harm or malfunction has occurred. This fits the definition of Complementary Information, as it provides context on AI deployment and its intended benefits without describing an incident or hazard.
A $100 million US agreement with an AI company to clear the "Hormuz mines" - Al-Weeam

2026-05-01
Al-Weeam (online newspaper)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system explicitly described as enabling autonomous underwater vehicles to detect naval mines more quickly and accurately. The AI system's use is intended to mitigate harm by improving mine detection and removal, which directly relates to preventing injury to people (naval personnel and maritime traffic) and disruption of critical infrastructure (a key global shipping route). The article does not report an actual harm event, nor does it describe a plausible future harm stemming from the AI system itself; the system's deployment is directly linked to reducing existing hazards. It is therefore best classified as Complementary Information about an ongoing AI deployment that enhances safety, rather than an AI Incident or AI Hazard.
US Navy uses AI companies to detect mines in the Strait of Hormuz

2026-05-01
Mankish Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (underwater autonomous drones using AI for mine detection) in a critical infrastructure context (Strait of Hormuz). No actual harm or incident is reported, but the use of AI in military mine detection carries plausible risks of harm, such as accidental damage, escalation of conflict, or operational failures. Since no harm has yet occurred, but plausible future harm is credible, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the AI system's deployment and its potential implications, not on responses or updates to past incidents.
US Navy enhances its mine-detection capabilities in the Strait of Hormuz through AI

2026-05-01
Hakaekonline (Hakaek Online)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and use of AI systems for mine detection, which is a critical military application. While no harm has yet occurred, the deployment of such AI systems in sensitive and potentially conflict-prone areas could plausibly lead to incidents involving injury, disruption, or harm if the systems malfunction or are misused. Therefore, this event represents a plausible future risk related to AI use in military operations, qualifying it as an AI Hazard rather than an Incident or Complementary Information.
US Navy enlists an AI company to counter a dangerous weapon - Step News Agency

2026-05-01
Step News Agency
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, used operationally to detect underwater mines, which are a serious threat to critical maritime infrastructure and global economic stability. The AI's use directly supports the prevention of harm to critical infrastructure and economic disruption, which aligns with harm category (b). Although no harm has yet occurred, the AI system's deployment is intended to mitigate existing threats and prevent incidents. Therefore, this event represents a positive use of AI to reduce plausible future harm rather than an incident or hazard itself. The article does not report any malfunction, misuse, or harm caused by the AI system, nor does it indicate a risk of harm stemming from the AI system's development or use. Hence, this is best classified as Complementary Information about AI deployment and its role in addressing a security challenge.
AI storms into the "Hormuz saga" to detect mines

2026-05-01
Independent Arabia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used for underwater mine detection by autonomous underwater vehicles, indicating AI system involvement. The AI system is used operationally to improve safety and speed in mine detection, which is a critical infrastructure and security task. There is no indication of any malfunction, misuse, or harm caused by the AI system. The article focuses on the deployment and enhancement of AI capabilities to prevent harm rather than describing an incident or a plausible future harm scenario. Therefore, it does not meet the criteria for AI Incident or AI Hazard. Instead, it provides important complementary information about AI's application in military and security contexts, which is valuable for understanding the broader AI ecosystem and its governance implications.
US Navy enhances its capabilities with AI to detect Hormuz mines

2026-05-01
elsiyasa.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems for mine detection, which is a clear AI system involvement. The article focuses on enhancing capabilities to detect and remove mines, which if successful, would prevent harm to people, property, and global commerce. There is no indication that the AI system has malfunctioned or caused harm, nor that harm has occurred due to the AI system. Instead, the AI is used to reduce risk and improve safety. Thus, this is not an AI Incident. It is also not a hazard since no plausible future harm from the AI system itself is described; rather, the AI is a tool to mitigate existing hazards. The article is best classified as Complementary Information, providing context on AI deployment in a critical security domain and its potential benefits.
Algorithms for $100 million: Will AI crack the code of the Strait of Hormuz?

2026-05-01
Al Jazeera Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems integrated into autonomous underwater vehicles used by the US Navy to detect mines, which is a clear AI system involvement. The AI is used operationally to analyze sensor data and update detection models rapidly, which is a use case of AI in a critical infrastructure and military context. However, there is no indication that the AI system caused any harm, malfunctioned, or was misused leading to harm. Instead, the AI system is helping to mitigate a real threat (naval mines) and improve operational readiness. The article also discusses broader military AI applications and strategic implications, which provide important context but do not describe an AI Incident or AI Hazard. Hence, the article fits the definition of Complementary Information, enhancing understanding of AI's role in military operations without reporting a new harm or plausible future harm event.
US Navy enlists AI to counter the "Hormuz mines"

2026-05-01
Asharq News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used for mine detection and removal, which is a clear AI system involvement. However, there is no indication that the AI system caused any harm or malfunctioned. Instead, the AI system is used to improve safety and operational efficiency. The article focuses on the deployment and capabilities of the AI system rather than any incident or hazard arising from it. There is no mention of plausible future harm caused by the AI system itself; rather, the AI system is intended to reduce harm. Therefore, the event does not meet the criteria for AI Incident or AI Hazard but fits the definition of Complementary Information as it provides important context about AI use in a critical military application.
US Navy turns to AI firm Domino for options to counter Iranian mines

2026-05-01
Reuters
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system in a military context to detect underwater mines, which are physical hazards that can cause injury or harm to people and disrupt critical infrastructure (shipping lanes). The AI system's deployment is intended to mitigate these harms by improving mine detection speed and accuracy. Although the article does not report a specific incident of harm caused by the AI system, the AI is actively used in an operational context where harm is a direct concern. Since the AI system is being used to prevent or manage a real and ongoing threat (mines in a critical waterway), and the article does not describe any malfunction or failure leading to harm, this event is best classified as an AI Hazard. It represents a credible and plausible risk environment where AI is integral to managing a hazardous situation, but no harm caused by the AI system itself is reported.
US Navy Turns to AI Firm Domino for Options to Counter Iranian Mines

2026-05-01
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as being used by the U.S. Navy for underwater mine detection, fulfilling the AI System criterion. However, there is no indication that the AI system has caused any injury, disruption, rights violation, or other harm. The article focuses on the contract award and the capabilities of the AI system to improve mine detection speed and accuracy, which is a positive development and does not describe any realized or potential harm. Thus, it does not meet the criteria for AI Incident or AI Hazard. Instead, it provides complementary information about AI deployment in a critical infrastructure context, enhancing understanding of AI's role in military maritime operations.

US Navy turns to AI firm Domino for options to counter Iranian mines

2026-05-01
Economic Times
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in the use phase, enhancing underwater drones' ability to detect mines faster and more accurately. The mines pose a direct threat to human safety and global trade infrastructure, and the AI system's role in mitigating this threat is central. Since the AI system is actively deployed to reduce harm and protect critical infrastructure, this qualifies as an AI Incident due to the direct link between AI use and harm prevention in a high-risk context.

US Navy turns to AI firm Domino for options to counter Iranian mines

2026-05-01
The Jerusalem Post
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used by the US Navy for underwater mine detection, confirming AI system involvement. The AI system is being used operationally to improve detection speed and accuracy, which is a use case of AI. However, there is no mention of any injury, damage, rights violation, or disruption caused by the AI system. The event focuses on the deployment and potential benefits of the AI system rather than any harm or malfunction. Since no harm has occurred but the AI system is being used in a context with potential risks (military mine detection), it fits the definition of Complementary Information rather than an Incident or Hazard. It provides context on AI adoption and operational use without reporting harm or plausible harm from the AI system itself.

US Navy turns to AI firm Domino for options to counter Iranian mines

2026-05-01
The Straits Times
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in the use phase, aiding the Navy in detecting underwater mines. The presence of AI is clear from the description of machine learning models integrated with sensor data to identify mines. The harm involved relates to the risk of mines disrupting critical shipping lanes, which is a threat to global economic stability and safety of sailors. Since the AI system is actively used to mitigate this harm by improving mine detection, and the article discusses ongoing use rather than a malfunction or a potential future risk, this qualifies as an AI Incident where the AI system's use is directly linked to addressing a significant harm (disruption of critical infrastructure and risk to human safety).

US Navy turns to AI firm Domino for options to counter Iranian mines

2026-05-01
ThePrint
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used for underwater mine detection, which is a critical military application involving autonomous or semi-autonomous decision-making. Although no harm has yet occurred, the AI system's role in detecting mines in contested waters implies a credible risk of harm to sailors and disruption of maritime operations if the system malfunctions or is insufficiently reliable. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to harm related to critical infrastructure and human safety, but no incident has been reported yet.

US navy turns to AI firm for smarter options to counter Iranian mines

2026-05-01
Times LIVE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system developed by Domino Data Lab to train underwater drones for mine detection, which qualifies as an AI system. The AI is being used operationally by the US Navy, indicating use rather than just development. However, the article does not describe any actual harm or incidents resulting from the AI system's use, only the potential for faster and more effective mine detection. Given the military application and the critical nature of mine detection in contested waters, there is a plausible risk that the AI system's malfunction or misuse could lead to harm (e.g., failure to detect mines, accidental triggering, or escalation of conflict). Since no harm has yet occurred, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the article is not updating or responding to a prior incident, nor is it unrelated as it clearly involves an AI system with potential safety implications.

US Navy turns to AI firm Domino for options to counter Iranian mines

2026-05-01
Yahoo
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used to train underwater drones for mine detection, which is a task involving real-time decision-making and complex sensor data integration, clearly involving AI. The use of this AI system directly addresses a harm scenario: underwater mines threaten the health and safety of sailors and the operation of critical shipping lanes. The AI system's deployment reduces the time to update detection models from months to days, which is a significant operational improvement in managing this hazard. Since the AI system's use is directly linked to preventing or mitigating harm to people and critical infrastructure, this qualifies as an AI Incident rather than a hazard or complementary information. The article does not describe a future risk or a response to a past incident but the active use of AI in a context where harm is plausible and being managed.

US turns to AI firm to clear Iranian mines

2026-05-01
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI software to detect naval mines, which are physical hazards that can cause injury or death to sailors and disrupt critical maritime infrastructure. The AI system's role in speeding up mine detection and enabling faster response to evolving threats directly relates to harm prevention and safety. Since the AI system is actively used in operations that impact human safety and critical infrastructure, this qualifies as an AI Incident under the definition of harm to persons and disruption of critical infrastructure caused by AI system use.

US Navy uses AI to counter mines

2026-05-01
Daily Times
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used for mine detection and classification, which is a critical military operation with potential direct impact on safety and security. The use of AI in this context is part of ongoing operations in a high-risk area, implying direct involvement in managing threats that could cause harm to personnel, vessels, and global shipping routes. Although no specific harm is reported as having occurred, the AI system's deployment in active military operations addressing real threats means it is currently in use to prevent or mitigate harm. This fits the definition of an AI Incident because the AI system's use is directly linked to managing a situation with potential for injury or harm to persons and disruption of critical infrastructure (maritime shipping routes).

US Navy Signs $100M Contract With AI Company In Push To Clear Mines From Strait Of Hormuz

2026-05-01
WOWO 1190 AM | 107.5 FM
Why's our monitor labelling this an incident or hazard?
The article details the development and deployment of an AI system for mine detection, which is a clear AI system use case. However, there is no indication that any harm, injury, or violation has occurred yet. The AI system's use is intended to prevent harm by improving mine detection and reducing risk to sailors and ships. Therefore, this event represents a plausible future risk scenario related to AI in military operations but does not describe an incident or realized harm. It fits the definition of an AI Hazard because the AI system's use could plausibly lead to harm in the future, especially given the military context and potential for conflict in the Strait of Hormuz.

US Navy Turns To AI Firm Domino For Options To Counter Iranian Mines

2026-05-01
KAYHAN LIFE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system for underwater mine detection, clearly establishing AI system involvement. The AI is used operationally (use) to detect mines that threaten critical infrastructure (shipping lanes) and human safety (sailors). However, no actual harm or malfunction is reported; the AI system is intended to reduce risk and improve detection speed. Given the military context and the potential for harm if the AI system fails or is misapplied, the event plausibly could lead to an AI Incident in the future. Thus, it fits the definition of an AI Hazard. It is not Complementary Information because the article focuses on the contract and AI deployment rather than updates or responses to past incidents. It is not Unrelated because the AI system and its potential impact are central to the event.

U.S. Navy Taps AI Firm to Counter Mines in Strait of Hormuz

2026-05-01
Jordan News | Latest News from Jordan, MENA
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used for autonomous underwater vehicles to detect naval mines, which are a direct threat to critical infrastructure and safety. The use of AI accelerates mine detection and removal, which is crucial to preventing harm. Since the AI system's use is directly linked to managing and protecting critical infrastructure (the Strait of Hormuz), this qualifies as an AI Incident under category (b) - disruption of the management and operation of critical infrastructure. The event describes the deployment and use of the AI system, not just its development or potential future harm, and the harm prevented or mitigated is significant and clearly articulated.

Bombshell from Hormuz! A Shock for the Whole World: Do You Know What America Is Sending into the Strait Right Now? Nothing Like This Has Been Seen Before

2026-05-01
Informer
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the software uses AI to train and operate underwater drones for mine detection. The use of this AI system directly addresses a significant safety hazard—underwater mines—that threaten maritime navigation and global energy supply. While the article does not report any harm occurring due to the AI system, it describes the AI's use to prevent harm by accelerating mine clearance and improving safety. Therefore, this event does not describe an incident where harm has occurred, but rather the deployment of AI to reduce a known hazard. Given the strategic importance and potential risks involved, the AI system's use here plausibly reduces the risk of harm but does not itself cause harm. Hence, this is best classified as Complementary Information about AI deployment in a critical security context, rather than an AI Incident or AI Hazard.

AI Will Search for Mines in the Strait of Hormuz for the US Navy

2026-05-01
Tanjug News Agency
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the software uses AI to train underwater drones for mine detection. The use of AI here is intended to improve safety by identifying and clearing mines that pose a threat to navigation and global trade. While the article does not report any harm occurring yet, the AI system's use is directly related to preventing harm to people, property, and critical infrastructure (shipping lanes). Therefore, this is a case where the AI system's use plausibly leads to harm prevention, but no harm or incident is reported. Hence, it qualifies as Complementary Information about AI deployment in a safety-critical context rather than an AI Incident or Hazard.

AI on a Serious Mission: It Will Search for Mines in the Strait of Hormuz

2026-05-01
Glas javnosti
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as the software uses AI to train underwater drones for mine detection. The use of this AI system is intended to reduce risks and improve the speed and accuracy of mine detection, which directly relates to safety and security in a critical infrastructure area. Although the article does not report any harm occurring yet, the deployment of AI in this context addresses a significant hazard related to naval safety and global trade. Since the AI system's use is ongoing and intended to prevent harm, and no incident of harm is reported, this event qualifies as an AI Hazard due to the plausible risk of harm if the AI system fails or is misused in this high-stakes environment.

GOD HELP US! Do You Know What America Is Sending into the STRAIT Right Now? IMPOSSIBLE!

2026-05-01
Republika.rs | Srpski telegraf
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (software for training and managing underwater drones with AI capabilities) in an operational context to detect and clear mines, which are a direct threat to critical infrastructure and human safety. Although the article does not report an incident of harm caused by the AI system, it describes the AI's active deployment to address an existing hazard (mines) that pose significant harm. Since the AI system is being used to prevent harm rather than causing harm, and no malfunction or misuse is reported, this event is best classified as Complementary Information about the deployment and use of AI in a critical security context, rather than an AI Incident or AI Hazard.