AI-Assisted Drone Search Locates Missing Mountaineer in Italian Alps

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

After months of unsuccessful human searches, AI software analyzing drone imagery identified the body of missing Italian mountaineer Nicola Ivaldo in the Piedmont Alps. The system detected a red helmet in the snow, enabling rescue teams to recover the climber's body and demonstrating AI's growing role in search and rescue operations.[AI generated]
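The coverage consistently describes the decisive step as automated screening of thousands of drone frames for a small colour anomaly (the red helmet) against snow. As a rough illustration only — the reports do not name the actual software used in the operation, and a real system would be far more sophisticated — such screening can be sketched as a simple colour-threshold filter that flags frames for human review:

```python
import numpy as np

def find_red_anomaly(image, min_pixels=4):
    """Return True if the image contains a cluster of strongly red pixels.

    `image` is an (H, W, 3) uint8 RGB array. A pixel counts as "red" when
    its red channel clearly dominates green and blue -- a crude stand-in
    for the colour filtering a search-and-rescue pipeline might apply.
    """
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    red_mask = (r > 150) & (r - g > 60) & (r - b > 60)
    return int(red_mask.sum()) >= min_pixels

# Synthetic "drone frames": mostly white snow with sensor noise.
rng = np.random.default_rng(0)
frames = [np.clip(rng.normal(235, 10, (64, 64, 3)), 0, 255).astype(np.uint8)
          for _ in range(3)]

# Plant a tiny red patch (a "helmet") in frame 1 only.
frames[1][30:33, 40:43] = [200, 30, 30]

flagged = [i for i, f in enumerate(frames) if find_red_anomaly(f)]
print(flagged)  # -> [1]
```

The value of this kind of filter is triage, not identification: reducing thousands of frames to a handful worth a human's attention is what makes reviewing a months-long backlog of drone imagery tractable.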

Why's our monitor labelling this an incident or hazard?

The AI system was explicitly used to analyze drone imagery and identify the missing mountaineer's location, which directly led to the recovery of the body. This involvement of AI in a real-world operation that impacts human health and safety fits the definition of an AI Incident. Although the AI did not save the person's life, it materially contributed to the search and rescue outcome, which is a form of harm mitigation and critical health-related intervention. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.[AI generated]
Industries:
Government, security, and defence

Severity:
AI incident

Business function:
Other

AI system task:
Recognition/object detection


Articles about this incident or hazard

Artificial intelligence located the body of a missing mountaineer

2026-01-11
Why's our monitor labelling this an incident or hazard?
The AI system was explicitly used to analyze drone imagery and identify the missing mountaineer's location, which directly led to the recovery of the body. This involvement of AI in a real-world operation that impacts human health and safety fits the definition of an AI Incident. Although the AI did not save the person's life, it materially contributed to the search and rescue outcome, which is a form of harm mitigation and critical health-related intervention. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.
How AI solved the mystery of a missing mountaineer | Cyprus Times

2026-01-11
Cyprus Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI software analyzing thousands of drone images to identify unusual features (a red helmet) in a difficult mountainous terrain, which led to finding the missing person. The AI system's use was integral to the search and rescue operation, directly impacting the outcome. The harm (death of the mountaineer) had already occurred, and the AI system's role was crucial in locating the body, thus addressing harm to a person. This fits the definition of an AI Incident because the AI system's use directly led to a significant outcome related to harm to a person.
A red mark in the snow: How AI solved the mystery of a missing mountaineer - e-thessalia.gr

2026-01-11
e-thessalia.gr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI in the form of image-analysis algorithms processing drone-captured images to find a missing person in a challenging environment. This AI system was actively used in the search and rescue operation, leading to the discovery of the climber's helmet and body. This constitutes the use of an AI system that directly led to a significant outcome related to human safety, fitting the definition of an AI Incident involving harm to a person (here, the recovery of a missing climber who had died). Although the harm is the person's death, the AI system's role was pivotal in resolving the incident. Therefore, this event qualifies as an AI Incident.
A red dot in the snow: How artificial intelligence located a missing mountaineer - iefimerida.gr

2026-01-10
iefimerida.gr
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in analyzing drone images to detect the missing person, which directly led to the discovery of the individual. This involvement of AI in a real-world rescue operation that affected human health and safety constitutes an AI Incident under the framework, as the AI system's use directly led to a significant outcome related to injury or harm (in this case, locating a missing person, potentially saving lives or providing closure). The article also mentions the system's limitations and ethical concerns, but the primary focus is on the AI's role in the incident, not just complementary information or future hazards.
A red pixel in the snow: How artificial intelligence solved the mystery of a missing mountaineer

2026-01-11
LiFO.gr
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to analyze images and identify anomalies that human searchers might have missed or taken much longer to find. This use of AI directly contributed to locating the missing person, which relates to harm to a person (the missing mountaineer) by enabling recovery and closure. While the AI did not prevent death, it played a pivotal role in the search and rescue operation, which is a positive impact but still falls under the framework of AI Incident because the AI system's use was directly linked to an event involving harm to a person. The article does not describe a malfunction or misuse causing harm, but the AI system's use in a real event involving human harm qualifies it as an AI Incident rather than a hazard or complementary information.
How artificial intelligence led to the discovery of a missing mountaineer in the Alps

2026-01-11
thepressroom.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing drone images to detect the missing person, which directly led to the discovery of the individual. The harm (death of the mountaineer) is a realized injury to a person, and the AI system's use was instrumental in the search and rescue operation. The AI system's development and use directly contributed to managing the harm and the emergency response. Hence, this is an AI Incident rather than a hazard or complementary information, as the AI system's role was central and the harm was realized.
How AI solved the mystery of a missing mountaineer

2026-01-11
Ant1 Live
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI software analyzing drone images to find the missing mountaineer. The AI system's role was in the use phase, analyzing images to detect anomalies (the red helmet). The harm (the mountaineer's death) was pre-existing and not caused by the AI system. The AI system's involvement did not cause or contribute to harm but rather helped locate the body, aiding the rescue teams. This fits the definition of Complementary Information, as it provides supporting information about AI's positive application in search and rescue, rather than describing an AI Incident or AI Hazard. There is no indication of AI malfunction, misuse, or potential future harm. Therefore, the event is not an AI Incident or AI Hazard but Complementary Information.
How a red mark in the snow solved the mystery of a missing mountaineer - larissanet.gr

2026-01-11
larissanet.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing drone images to detect anomalies indicative of a missing person. The AI system's use directly led to locating the deceased mountaineer, which is a harm to a person (death). The AI system's involvement was in the use phase, assisting rescue teams by processing large amounts of data quickly and accurately. The article clearly states that without the AI highlighting the red pixel (helmet), the body might not have been found. This direct link between AI use and harm resolution fits the definition of an AI Incident. There is no indication that the AI caused the harm, but its use is pivotal in the incident's outcome, which is consistent with the framework's criteria for AI Incident classification.
How a red mark in the snow solved the mystery of a missing mountaineer

2026-01-11
KontraNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to analyze drone imagery for search and rescue purposes. The AI system's output directly led rescuers to the location of the missing person, who was deceased, thus addressing harm to a person (death). The AI system's role was pivotal in solving the case and improving the efficiency of the search. This fits the definition of an AI Incident because the AI system's use directly led to the discovery of harm (the deceased individual). The article does not describe a potential or future risk but an actual event where AI contributed to addressing harm. Hence, it is not an AI Hazard or Complementary Information but an AI Incident.
Humans failed for months... artificial intelligence finds a missing person in Italy within hours

2026-01-12
24.ae
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to analyze high-resolution images from drones to identify potential locations of the missing climber. The AI's analysis directly led to the discovery of the climber's helmet and ultimately the body, which is a direct link to harm to a person (the climber was missing and later found deceased). The AI system's role was pivotal in overcoming the limitations of human search efforts and harsh environmental conditions. Therefore, this qualifies as an AI Incident under the definition of an event where AI use has directly or indirectly led to harm to a person.
Outperforming 50 rescuers... artificial intelligence finds a climber's body | صحيفة الخليج

2026-01-12
صحيفة الخليج
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to analyze thousands of high-resolution images to identify a small visual clue (a red spot) that led to locating the missing climber's body. This use of AI directly influenced the outcome of the search and rescue operation. However, the event does not describe any harm caused by the AI system; rather, it facilitated a beneficial result. Therefore, it does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides supporting information about the beneficial application of AI in search and rescue operations, enhancing understanding of AI's positive impact.
Artificial intelligence finds a missing climber after months of searching in Italy's mountains

2026-01-12
أخبارنا المغربية
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to analyze images and identify the location of the missing climber, which directly relates to the harm of the climber's death. The AI system did not cause the harm but was pivotal in the search and rescue operation outcome. The event involves the use of AI leading to a significant outcome related to harm to a person, fitting the definition of an AI Incident rather than a hazard or complementary information.
Artificial intelligence finds the body of a mountain climber who went missing a year earlier

2026-01-12
البيان
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to analyze drone images to find the missing climber's body, which is a direct link to harm (death of a person). The AI's use led to the discovery, thus directly contributing to addressing an incident involving harm to health. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Artificial intelligence finds the body of a mountain climber who went missing a year earlier in northern Italy

2026-01-13
SANA
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to analyze high-resolution images from drones to identify unusual elements in the terrain, leading to the discovery of the missing climber's body. This use of AI directly resulted in a positive outcome related to human safety and recovery efforts. Although no injury or harm was caused by the AI, the AI system's use was pivotal in resolving a critical search and rescue operation, which relates to harm prevention and recovery. Therefore, this qualifies as an AI Incident due to the AI system's direct role in addressing a human safety issue.
A red pixel in the snow: how AI helped find a missing mountaineer - BBC News Mundo

2026-01-16
BBC
Why's our monitor labelling this an incident or hazard?
An AI system was used in the search operation to analyze imagery or data to identify a key visual clue (the red helmet). The AI's involvement is in the use phase, assisting human teams in locating missing persons. The harm (death of the mountaineer) is linked to the event, but the AI did not cause or contribute to the harm; rather, it was used as a tool to mitigate harm. Since the AI's use did not directly or indirectly lead to harm but rather supports rescue efforts, this event does not qualify as an AI Incident. It also does not describe a plausible future harm scenario (AI Hazard). Instead, it provides complementary information about AI's application in search and rescue, highlighting its potential benefits and limitations.
A red pixel in the snow: how AI helped find a missing mountaineer

2026-01-16
Acento
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI software analyzing drone images to identify points of interest that led to finding the missing mountaineer. The AI system was used in the operation and contributed to locating the body, but it did not cause the harm (the death) nor did it malfunction. The event is a positive example of AI application in a critical context. Since the AI's role is supportive and no new harm or plausible future harm is described, the event fits the definition of Complementary Information, providing context and insight into AI's beneficial use in search and rescue missions.
A red pixel in the snow: how AI helped find a missing mountaineer

2026-01-16
LA NACION
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing drone images to detect anomalies indicative of a missing person. The AI's involvement was in the use phase, assisting search and rescue teams by processing large amounts of data rapidly and identifying points of interest. The AI system's output directly led to the discovery of the missing mountaineer's body, which is a harm to a person (death). Although the AI did not cause the death, its role was pivotal in the incident's outcome, fitting the definition of an AI Incident. The article does not describe potential future harm but a realized event where AI contributed to addressing a harm. Therefore, the classification is AI Incident.
A red pixel in the snow: how AI helped find a missing mountaineer

2026-01-16
MDZ Online
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to analyze drone images and identify points of interest that led to the discovery of the missing mountaineer's body. The AI's involvement was in the use phase, assisting in image analysis and anomaly detection. The harm (the mountaineer's death) had already occurred, but the AI system's use directly led to locating the body, an outcome involving harm to a person. This fits the definition of an AI Incident because the system's use was directly linked to that harm-related outcome (the discovery of a deceased individual). The article also discusses the AI system's limitations and future potential, but the main event is the AI-assisted discovery, not merely complementary information or a hazard. Hence, the classification is AI Incident.
A red pixel in the snow: how AI helped find a missing mountaineer

2026-01-16
El Observador
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to analyze drone images to identify potential locations of the missing person. The AI's output directly influenced the search and rescue operation, leading to the discovery of the deceased mountaineer. The harm (death) had already occurred, but the AI system's involvement was crucial in locating the body, which is a form of harm to a person (harm to the individual and their family/community). The AI system's use and its outputs were directly linked to the event's outcome, meeting the criteria for an AI Incident rather than a hazard or complementary information.
A red pixel in the snow: how AI helped find a missing mountaineer | Teletica

2026-01-16
Teletica
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing drone images to detect anomalies indicative of a missing person. The AI's output directly led rescuers to the body of the missing mountaineer, thus the AI system's use is directly linked to the harm (death) of a person. The article also discusses prior successful uses of similar AI systems in rescue operations, reinforcing the AI system's role in causing or addressing harm. Hence, this is an AI Incident, as the AI system's use directly led to the identification of harm (the deceased individual).
A red pixel in the snow: how AI helped find a missing mountaineer

2026-01-16
eldia.com.do
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI software analyzing drone images to detect anomalies that led to finding the missing mountaineer's body. The AI system was used in the operation (use phase) and directly contributed to locating the individual, which is linked to harm (death). The AI system's role was pivotal in the search and rescue context. Hence, this is an AI Incident rather than a hazard or complementary information. The event is not unrelated because AI involvement is clear and consequential. It is not merely complementary information because the AI's use directly led to a significant outcome related to harm.
Like a red pixel in the snow: AI found a missing person in the Alps

2026-01-16
ellitoral.com
Why's our monitor labelling this an incident or hazard?
The AI system was explicitly used to analyze drone images and detect anomalies, which led to the discovery of the missing person's body. Although the AI did not cause the death, its use was pivotal in confirming the harm (death) and assisting the rescue operation. The event involves the use of an AI system and a resulting harm to a person (death), meeting the criteria for an AI Incident. There is no indication that the AI malfunctioned or caused harm, but its role in the chain of events leading to the discovery of harm is direct and significant. Therefore, this is classified as an AI Incident.
How did artificial intelligence find the mountaineer lost in the Alps?

2026-01-16
Halk TV
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system analyzing drone imagery to locate a missing person in a mountainous area. The AI system's output directly led to finding the body of the missing mountaineer, which is a direct link to harm (death). The AI system was used in the search and rescue operation, and its role was pivotal in locating the individual. Although the person was found deceased, the AI system's involvement is central to the outcome. This fits the definition of an AI Incident as the AI system's use directly led to harm (death).
The red pixel in the snow: How did artificial intelligence find the missing mountaineer? - BBC News Türkçe

2026-01-17
BBC
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing drone images to detect anomalies indicative of a missing person. The AI's output directly led to the discovery of the lost mountaineer's body, which is a harm to a person (death). The AI system's use was integral to the search and rescue operation, demonstrating direct involvement in an event with realized harm. This fits the definition of an AI Incident because the AI system's use directly led to addressing a harm-related event (finding the deceased).
The red pixel in the snow: How did artificial intelligence find the missing mountaineer?

2026-01-17
Haberler
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing drone images to detect anomalies indicative of a missing person. The AI's use directly led to locating the deceased individual, which is a harm to a person (death). The AI system's role was pivotal in the search and rescue operation, enabling the discovery of the body in a challenging environment. This fits the definition of an AI Incident because the AI system's use directly led to an outcome related to harm (the death of the mountaineer) and the AI's involvement was essential in the event's resolution. The article also references previous similar uses of AI in search and rescue, reinforcing the AI system's operational role in incidents involving harm to persons.
Missing mountaineer found years later with artificial intelligence - Son Dakika

2026-01-17
Son Dakika
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to analyze drone images to detect anomalies indicative of the missing person's location. The AI's output directly led to the discovery of the deceased mountaineer, fulfilling the criteria of an AI Incident where AI use has directly or indirectly led to harm (death). Although the death occurred before the AI's involvement, the AI system's role in the search and rescue operation is central to the event described. The article does not describe potential or future harm but a realized harm with AI involvement, so it is not an AI Hazard or Complementary Information. It is not unrelated because AI is central to the event.
"A red pixel" detected by AI: the key to finding the body of a mountaineer missing for 10 months

2026-01-17
20minutos.es - Últimas Noticias
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to analyze aerial images and identify a critical clue (the red pixel) that led to the discovery of the missing person. The AI system's use directly led to the recovery of the body, addressing an already realized harm (the climber's death). Although the AI did not cause the harm, its use was pivotal in resolving a serious incident involving injury or death. This qualifies as an AI Incident because the AI system's use directly influenced an outcome related to human health and safety in a critical context.
The red pixel in the snow: How did artificial intelligence find the missing mountaineer?

2026-01-17
T24
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as analyzing drone images to detect anomalies indicative of a lost person. The AI's output directly led to finding the body of the lost mountaineer, which is a direct link to harm (death). The AI system's use was part of the rescue operation, and its malfunction or misuse is not indicated; rather, it functioned as intended and was crucial in locating the individual. This fits the definition of an AI Incident because the AI system's use directly led to addressing harm to a person. The article also discusses the broader context of AI in search and rescue, but the primary event is the AI-enabled discovery of the lost person, making it an AI Incident rather than a hazard or complementary information.
How AI helps find missing people through photos

2026-01-17
infobae
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in analyzing thousands of drone images to identify clues (e.g., a red helmet) that led to finding missing persons, which is a direct link to harm (death) of individuals. The AI's role was pivotal in accelerating and enabling the search and rescue operation, which otherwise would have been slower and less effective. The article reports actual realized harm (missing persons found deceased) and the AI's direct involvement in addressing this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
AI and drones pull off a feat in the Alps: how a mountaineer was found after 10 months of searching

2026-01-17
pulzo.com
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to analyze drone images, significantly reducing the time needed to review data and enabling the identification of the mountaineer's location. The AI's role was pivotal in the search and rescue operation, which involved harm to a person (the mountaineer was missing and later found deceased). Therefore, this event qualifies as an AI Incident because the AI system's use directly led to addressing harm related to a missing person case.
He disappeared in the Italian Alps and was found almost a year later thanks to an operation using artificial intelligence

2026-01-18
infobae
Why's our monitor labelling this an incident or hazard?
The AI system was involved in the use phase, assisting in image analysis for search and rescue. There was no harm caused by the AI system; instead, it helped locate a missing person, potentially saving lives or at least enabling recovery. The event does not describe any malfunction, misuse, or risk of harm from the AI system. It is a factual report on the beneficial application of AI in a real-world scenario, with no indication of AI-related harm or plausible future harm. The article also provides context on AI capabilities and limitations in rescue operations, which fits the definition of Complementary Information.
How did artificial intelligence find the missing mountaineer?

2026-01-18
Cumhuriyet
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to analyze drone images to find a missing person, which is a direct use of AI in a life-critical context. The AI's detection of anomalies in images led to the discovery of the lost mountaineer's body, which is a harm related to injury or death. The AI system's involvement was in the use phase, aiding search and rescue teams. The event involves realized harm (the person was found deceased), and the AI system's role was pivotal in locating the body. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
It saw what the human eye could not: Artificial intelligence found the missing mountaineer teams had been searching for for months!

2026-01-18
Mynet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI software analyzing thousands of drone images to identify anomalies, which led to pinpointing the location of the missing mountaineer. The AI system's involvement was in the use phase, assisting search and rescue teams. The harm (death of the mountaineer) had already occurred, and the AI system's role was pivotal in locating the body. This fits the definition of an AI Incident, as the AI system's use directly led to the identification of harm to a person. The event is not merely a potential risk or complementary information but a concrete case where AI was instrumental in a real-world outcome involving harm.