Amazon Delivery Drone Crash Causes Oregon Field Fire

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An Amazon Prime Air autonomous delivery drone (MK27 model) crashed during a test flight in Oregon after a motor failure, resulting in a lithium battery fire that burned several acres of wheat farmland. The incident, which caused significant property and environmental damage, highlights the risks of AI-operated drones.[AI generated]

Why's our monitor labelling this an incident or hazard?

The drone is part of Amazon's experimental delivery program, which likely involves AI systems for autonomous navigation and operation. The crash and subsequent fire represent direct harm to property caused by the malfunction of an AI system. Although no injuries to people were reported, the fire damage to agricultural land constitutes harm to property and communities. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's malfunction.[AI generated]
AI principles
Safety
Robustness & digital security
Accountability
Transparency & explainability

Industries
Mobility and autonomous vehicles
Logistics, wholesale, and retail
Agriculture
Environmental services
Robots, sensors, and IT hardware

Affected stakeholders
Business

Harm types
Environmental
Economic/Property
Reputational

Severity
AI incident

Business function:
Logistics

AI system task:
Recognition/object detection
Goal-driven organisation
Reasoning with knowledge structures/planning


Articles about this incident or hazard

An Amazon Delivery Drone Crashed and Caused a Large Fire

2022-03-25
Futurism
Why's our monitor labelling this an incident or hazard?
The drone is part of Amazon's experimental delivery program, which likely involves AI systems for autonomous navigation and operation. The crash and subsequent fire represent direct harm to property caused by the malfunction of an AI system. Although no injuries to people were reported, the fire damage to agricultural land constitutes harm to property and communities. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's malfunction.

An Amazon drone crash in Oregon set a field ablaze

2022-03-24
Business Insider
Why's our monitor labelling this an incident or hazard?
The event describes a malfunction of an AI system (Amazon's autonomous delivery drone) during its use, which directly led to harm to property and the environment by causing a fire that burned several acres of a wheat field. The drone's autonomous operation and failure are central to the incident. The harm is realized and materialized, not just potential. Hence, this meets the criteria for an AI Incident rather than a hazard or complementary information.

Amazon delivery drones are falling out of the sky, with at least 8 crashes in the last year, documents show

2022-03-26
Business Insider
Why's our monitor labelling this an incident or hazard?
Amazon's delivery drones are autonomous AI systems involved in multiple crashes over the past year, including one that caused a significant brush fire. The crashes stem from malfunctions such as software glitches and hardware issues, indicating AI system malfunction. The harm includes property damage and potential risk to human safety. Since the AI system's malfunction has directly led to realized harm, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Amazon delivery drone sparked fire when it crashed into Oregon field, says FAA

2022-03-24
The Independent
Why's our monitor labelling this an incident or hazard?
The Amazon Prime Air drone is an AI system designed for autonomous delivery. The crash was caused by motor failure leading to uncontrolled free fall and subsequent fire damage to agricultural property. This constitutes harm to property, fulfilling the criteria for an AI Incident. The report confirms the drone's AI nature (MK27 drone) and the harm caused. Although no human injury occurred, the property damage is significant and directly linked to the AI system's malfunction.

Amazon delivery drone crash sparked acres-wide fire in Oregon: FAA

2022-03-25
DroneDJ
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous delivery drone) whose malfunction (motor failure) directly caused a crash and subsequent fire that damaged several acres of farmland. This constitutes harm to property and environment, fulfilling the criteria for an AI Incident. Although no injuries occurred, the property/environmental harm is significant and directly linked to the AI system's malfunction during use.

Amazon's Drone Delivery Program Is Hit by Crashes and Safety Concerns

2022-04-10
BNN
Why's our monitor labelling this an incident or hazard?
Amazon's delivery drones are AI systems capable of autonomous flight and decision-making. The article reports multiple crashes, including one that caused a brush fire damaging 25 acres, which is harm to property and environment. The failures of safety features and management decisions to prioritize speed over safety directly contributed to these incidents. Although no injuries to people are reported, the harm to property and the risk to personnel are clear. Hence, the event meets the criteria for an AI Incident as the AI system's malfunction and use have directly led to harm.

Amazon's Drone Delivery Program Is Hit by Crashes and Safety Concerns

2022-04-10
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
Amazon's delivery drones are AI systems designed for autonomous flight and delivery. The article reports actual crashes, including one causing a brush fire, and multiple safety system failures, which constitute harm to property and risk to people. The incidents stem from the use and malfunction of these AI systems. The harm is realized, not just potential, and the AI system's role is pivotal. Hence, this is an AI Incident rather than a hazard or complementary information.

Amazon drones to replace cargo vehicles? Not so soon as safety concerns mount

2022-04-11
https://auto.hindustantimes.com
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems as they perform autonomous navigation and delivery tasks. The crashes represent malfunctions that have raised safety concerns, implying a credible risk of injury or property damage. Since harm has not yet been reported but the risk is credible and ongoing, this situation qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses or governance actions but on the potential safety risks and development challenges.

World first: drones achieve automatic follow-the-leader swarm flight and autonomous collision avoidance via direct drone-to-drone communication

2022-04-11
PR TIMES
Why's our monitor labelling this an incident or hazard?
The described system involves AI-like autonomous decision-making capabilities in drones for coordinated flight and collision avoidance, which implies the presence of AI systems. However, the event reports a successful development and demonstration without any mention of harm or malfunction. There is no indication that any injury, property damage, rights violation, or other harm has occurred or is imminent. Therefore, this is a technological development with potential future implications but no realized or immediate harm.

NICT achieves world-first demonstration of four drones autonomously performing swarm flight and proximity avoidance

2022-04-11
CNET
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as the drones use autonomous flight control algorithms for swarm behavior and collision avoidance, which are indicative of AI-based decision-making. However, the article reports a successful demonstration without any harm or malfunction occurring. There is no indication of injury, disruption, rights violations, or other harms resulting from this technology at this stage. While the technology could plausibly lead to future hazards if misused or malfunctioning, the article focuses on the successful proof of concept and future development plans rather than any realized or imminent harm. Therefore, this event is best classified as Complementary Information, providing context and updates on AI system development and potential applications without reporting an incident or hazard.

Drones are changing the rules of war: their capabilities revealed in Ukraine

2022-04-14
CNET
Why's our monitor labelling this an incident or hazard?
The drones mentioned are AI systems as they perform autonomous or semi-autonomous tasks such as reconnaissance, navigation, and targeted attacks. Their use in military operations has directly led to harm in the form of destruction of military assets and potential injury or death in the conflict. The article details actual use and impact, not just potential risks, thus qualifying as an AI Incident under the framework.

The drone threat laid bare by Putin's war, and a defenseless Japan

2022-04-14
Nikkei Business (online edition)
Why's our monitor labelling this an incident or hazard?
The Bayraktar TB2 drones are AI-enabled attack drones capable of autonomous or semi-autonomous targeting and attack functions. Their use in the Ukraine conflict has directly contributed to military harm, including destruction of enemy logistics and air defense systems, which constitutes harm to persons and property. The article explicitly links the drones' use to military outcomes and harm, fulfilling the criteria for an AI Incident. Although the article focuses on the strategic role of drones rather than a specific malfunction or misuse, the deployment of AI systems in active combat causing harm qualifies as an AI Incident under the framework.

Four drones fly safely in formation: NICT's world-first control technology

2022-04-14
Nikkei xTECH
Why's our monitor labelling this an incident or hazard?
The described system involves AI-related autonomous flight control algorithms that enable drones to coordinate and avoid collisions autonomously. This qualifies as an AI system because it infers from input data (position information) to generate outputs (flight control commands) influencing the physical environment. However, the article reports a successful demonstration without any harm or malfunction occurring. There is no indication of injury, disruption, rights violations, or property/environmental harm. The event shows a technological advancement with potential safety benefits, but no realized harm or incident. Therefore, it is best classified as Complementary Information, as it provides important context and development in AI-enabled drone control technology without describing an AI Incident or AI Hazard.

Drones communicate directly with one another to fly in formation and avoid collisions autonomously: a world first by NICT

2022-04-12
ITmedia
Why's our monitor labelling this an incident or hazard?
The system involves AI through autonomous flight control algorithms enabling drones to communicate and avoid collisions without human control, fitting the definition of an AI system. The event is about the development and successful testing of this system, with no reported harm or malfunction causing injury or damage. However, the technology's nature and intended use in crowded airspace imply a plausible risk of future harm (e.g., collisions, interference with manned aircraft), qualifying it as an AI Hazard. There is no indication of realized harm or violation of rights, so it is not an AI Incident. It is more than just complementary information because it highlights a new AI capability with potential safety implications.

NICT succeeds in automatic flight via direct drone-to-drone communication

2022-04-11
Mynavi News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of autonomous flight control algorithms and direct communication enabling drones to fly in formation and avoid collisions autonomously. Although the article reports successful tests without any harm or malfunction, the technology's deployment in real-world crowded airspace carries plausible risks of incidents such as collisions or interference with manned aircraft. Since no actual harm or incident has occurred yet, but there is a credible potential for future harm, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are clearly involved and the event concerns their autonomous use with potential safety implications.

Amazon's drone delivery project faces delays and crashes, with large numbers of employees leaving

2022-04-11
GIGAZINE
Why's our monitor labelling this an incident or hazard?
The drones are AI systems because they autonomously navigate and deliver packages. The article reports actual drone crashes, including one causing a wildfire, which is harm to property and the environment. There are also implied risks to human safety and organizational harm due to employee departures and safety concerns. The AI system's malfunction and use have directly led to these harms. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

NICT succeeds in swarm flight of four drones, with autonomous proximity avoidance | Techable

2022-04-13
Techable
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as the drones use autonomous flight control algorithms and inter-drone communication to coordinate their movements and avoid collisions, which fits the definition of an AI system. However, the article only reports a successful experimental demonstration without any harm occurring or any indication that harm is likely or imminent. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. The article provides complementary information about AI technology development and its potential future applications, making it Complementary Information rather than an Incident or Hazard.

World first: success in automatic follow-the-leader swarm flight and autonomous collision avoidance via direct drone-to-drone communication | 2022 | NICT (National Institute of Information and Communications Technology)

2022-04-11
nict.go.jp
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems in drones for autonomous flight coordination and collision avoidance, which is a clear example of AI system use. However, the article does not report any harm or incident resulting from this technology, nor does it indicate any immediate risk or plausible future harm. It mainly describes a successful technical achievement and potential operational benefits, without mentioning any negative consequences or risks. Therefore, it does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context and updates on AI system development and deployment in drone technology.

Amazon's drone delivery still hasn't taken off after ten years: test crashes and safety concerns

2022-04-11
Bloomberg.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous delivery drones) whose malfunction (failure of safety features leading to a crash and fire) has directly caused harm (property damage and safety risk). This fits the definition of an AI Incident because the AI system's malfunction has led to harm and regulatory concerns. The article reports actual harm (fire from crash) and ongoing safety issues, not just potential future harm, so it is not merely an AI Hazard. It is not Complementary Information because the main focus is on the incident and its consequences, not on responses or broader ecosystem context. Therefore, the classification is AI Incident.

Bezos's drone delivery dream: still unrealized after a decade and nearly 13 billion yuan invested

2022-04-10
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous delivery drones) whose malfunction (safety feature failures leading to loss of control) directly caused harm (a wildfire and injuries). This fits the definition of an AI Incident because the AI system's malfunction led to injury and property/environmental harm. The article also discusses regulatory and operational challenges but the key point is the realized harm from the drone crash.

Amazon's drone delivery program is still struggling

2022-04-12
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous delivery drone) whose malfunction directly caused a significant fire, harming property and the environment. The drone's motor failure and safety system failures indicate malfunction during use. The harm is realized and significant, meeting the criteria for an AI Incident. Although the company disputes some claims, the reported fire and drone crash are factual harms linked to the AI system's malfunction. Hence, this is not merely a hazard or complementary information but an AI Incident.

Bezos's drone delivery dream: still unrealized after a decade and nearly 13 billion yuan invested

2022-04-10
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous drones with AI-based flight control) whose malfunction directly caused harm: a crash leading to a wildfire and injuries. This fits the definition of an AI Incident because the AI system's malfunction led to injury and property/environmental harm. The article also discusses ongoing risks and regulatory challenges, but the realized harm from the crash is the primary focus, making it an AI Incident rather than a hazard or complementary information.

13 billion burned in ten years: drone delivery remains a distant prospect

2022-04-11
MyDrivers
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (autonomous delivery drones) whose malfunction (loss of control and safety feature failure) directly caused harm (a forest fire and personnel injuries). The article explicitly mentions the safety failures and the resulting crash, which meets the criteria for an AI Incident due to harm to property and people. The ongoing development and regulatory challenges are context but do not negate the realized harm from the incident.