Lawsuit Alleges Figure AI's Humanoid Robots Pose Lethal Safety Risks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A former engineer has sued Figure AI, claiming he was fired after warning that the company's humanoid robots could exert enough force to fracture a human skull, posing serious safety risks. The lawsuit highlights concerns over inadequate safety measures in the AI-controlled robots' design and operation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly described as humanoid robots controlled by AI, which can exert physical force. The former employee's allegations indicate that the AI system's use and potential malfunction or unsafe design could directly lead to serious physical harm to humans. The lawsuit and safety concerns highlight a direct link between the AI system's operation and potential injury, fulfilling the criteria for an AI Incident. Although the harm has not yet occurred, the described risk is concrete and tied to the AI system's use, and the firing of the whistleblower suggests a failure to address these risks, reinforcing the incident classification rather than a mere hazard or complementary information.[AI generated]
AI principles
Safety, Accountability

Industries
Robots, sensors, and IT hardware

Affected stakeholders
Workers, General public

Harm types
Physical (injury)

Severity
AI incident

Business function
Research and development

AI system task
Other


Articles about this incident or hazard

Elon Musk has yet another crazy idea: He wants to build a special army, but China announces it has already built one (VIDEO)

2025-11-21
Ziare.com
Why's our monitor labelling this an incident or hazard?
While the article involves AI systems (humanoid robots with AI capabilities), it does not describe any incident where the AI systems have directly or indirectly caused harm, nor does it describe a plausible imminent risk of harm. The mention of an "army of humanoid robots" is speculative and does not indicate an existing or imminent AI hazard. The article is primarily about market competition, technological development, and strategic positioning, which fits the definition of Complementary Information as it provides context and updates on AI ecosystem developments without reporting harm or credible imminent harm.
Lawsuit against Figure AI: "The company's robots could fracture a human skull"

2025-11-22
Mediafax.ro
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as humanoid robots controlled by AI, which can exert physical force. The former employee's allegations indicate that the AI system's use and potential malfunction or unsafe design could directly lead to serious physical harm to humans. The lawsuit and safety concerns highlight a direct link between the AI system's operation and potential injury, fulfilling the criteria for an AI Incident. Although the harm has not yet occurred, the described risk is concrete and tied to the AI system's use, and the firing of the whistleblower suggests a failure to address these risks, reinforcing the incident classification rather than a mere hazard or complementary information.
Elon Musk has yet another crazy idea: He wants to build a special army, but China announces it has already built one (VIDEO)

2025-11-21
Business24
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically humanoid robots with AI capabilities. It describes their development, production, and potential future deployment at scale, including the concept of an 'army' of robots. While no actual harm or incident is reported, the nature of these AI systems and their potential military or widespread use plausibly pose future risks. Hence, the event fits the definition of an AI Hazard, as it could plausibly lead to AI incidents in the future. It is not an AI Incident because no harm has occurred yet, nor is it Complementary Information or Unrelated, as the focus is on the AI systems and their potential impact.
Elon Musk wants to build an "army" of humanoid robots

2025-11-22
TechRider.ro
Why's our monitor labelling this an incident or hazard?
The article primarily focuses on the development and future potential of humanoid robots equipped with AI, without describing any actual harm, malfunction, or misuse that has occurred. There is no indication that these AI systems have directly or indirectly caused injury, rights violations, disruption, or other harms. The discussion of potential future capabilities and industry trends suggests possible future risks but does not specify any credible or imminent hazards. Therefore, the article is best classified as Complementary Information, providing context and updates on AI system development and industry direction rather than reporting an AI Incident or AI Hazard.
"Humanoid robots powerful enough to crack a human skull"... first internal whistleblower report

2025-11-23
www.donga.com
Why's our monitor labelling this an incident or hazard?
The humanoid robot qualifies as an AI system due to its autonomous or semi-autonomous nature implied by its ability to exert physical force and cause damage. The whistleblower's report indicates that the robot's development or malfunction poses direct risks of injury to humans (fracturing skulls) and harm to property (damaging steel refrigerator doors). The firing of the whistleblower after raising safety concerns further highlights the seriousness of the issue. Since the robot's dangerous capabilities have already manifested in damage and pose a direct threat to human safety, this event constitutes an AI Incident.
"로봇이 사람 두개골 깰 수 있어"...피겨AI, 前직원에 피소 | 연합뉴스

2025-11-22
Yonhap News
Why's our monitor labelling this an incident or hazard?
The humanoid robot developed by Figure AI is an AI system capable of autonomous operation with physical strength sufficient to cause serious injury. The former safety engineer's documented warnings about the robot's ability to fracture human skulls and cause damage to steel objects indicate a direct risk of harm to people. The firing of the engineer after raising these concerns and the alleged abandonment of safety plans suggest a failure in managing the AI system's risks. The lawsuit and public disclosure of these facts confirm that harm has either occurred or is highly plausible, meeting the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but involves realized or imminent harm linked to the AI system's use and development.
"로봇, 인간 두개골 깰만큼 강력"...경고했다가 해고당한 직원 [지금이뉴스]

2025-11-24
YTN
Why's our monitor labelling this an incident or hazard?
The humanoid robot developed by Figure AI qualifies as an AI system due to its autonomous robotic capabilities. The engineer's warnings about the robot's strength and malfunction causing physical damage indicate a credible risk of harm to humans. The firing of the engineer after raising these concerns and the subsequent lawsuit highlight the development and use of the AI system with potential safety hazards. Since no actual injury to a person is reported, but the risk is credible and documented, this event fits the definition of an AI Hazard. It is not an AI Incident because harm to people has not yet occurred, nor is it merely complementary information or unrelated news.
"로봇이 두개골 깰 수도"... 휴머노이드 오작동 리스크 부상

2025-11-23
Kukmin Ilbo
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in humanoid robots whose malfunctions have caused or could cause physical harm to humans, fulfilling the criteria for AI Incidents. The former employee's lawsuit highlights ignored safety risks that could lead to serious injury, and documented cases in China show robots acting violently or dangerously. The harms include injury risk to people (a), and the AI system's malfunction or unsafe operation is a direct contributing factor. The article also discusses the need for AI-specific safety procedures, reinforcing the AI system's central role in these harms. Hence, this is an AI Incident rather than a hazard or complementary information.
"로봇이 사람 두개골 깰 수도"⋯해고한 직원에 소송 당한 로봇회사

2025-11-23
inews24
Why's our monitor labelling this an incident or hazard?
The humanoid robot developed by Figure AI is an AI system capable of autonomous or semi-autonomous operation, as inferred from the context of safety concerns and physical damage caused. The engineer's warnings about the robot's strength and incidents of damage to property demonstrate plausible risks of physical harm. Although no actual injury to humans is reported, the potential for serious injury (e.g., fracturing a skull) is credible and significant. The company's alleged dismissal of safety concerns and firing of the whistleblower further heightens the risk. Since no actual harm to people has been reported yet, but the risk is credible and imminent, this event is best classified as an AI Hazard rather than an AI Incident.
"로봇이 인간 두개골 깰 수 있어" 지적한 직원 자른 피겨AI 피소 | 중앙일보

2025-11-23
JoongAng Ilbo
Why's our monitor labelling this an incident or hazard?
The humanoid robot is an AI system with autonomous capabilities. The engineer's safety concerns about the robot's strength and potential to cause serious injury indicate a credible risk of harm to humans. Although no actual injury to a person is reported, the robot has caused physical damage to property, and the safety issues were allegedly ignored by the company. This situation represents a plausible future harm scenario (AI Hazard) rather than a realized harm (AI Incident). The firing of the whistleblower and the lawsuit further emphasize the seriousness of the safety concerns. Hence, the event is best classified as an AI Hazard.
Figure AI in controversy over firing an engineer who raised safety warnings... suspected cover-up of robot risks

2025-11-24
DigitalToday
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses robots developed by an AI startup, which are humanoid and likely AI-powered, posing lethal physical risks if malfunctioning. The safety engineer's warnings about these risks were ignored, and he was fired, which indicates a failure in managing AI system safety. The potential for injury (skull fractures) is a direct harm to human health, fitting the definition of an AI Incident. The company's alleged concealment of these risks and dismissal of the engineer further supports the classification as an incident rather than a hazard or complementary information. The involvement of AI in the robots and the direct link to potential physical harm justifies this classification.
Figure AI sued by whistleblower who warned that startup's robots could 'fracture a human skull'

2025-11-22
CNBC
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of humanoid robots developed by Figure AI. The plaintiff, a safety engineer, alleges that the robots are powerful enough to cause serious physical injury, which is a direct harm to human health. The complaint also indicates that safety concerns were ignored and that the safety roadmap was downgraded, which could lead to harm. The involvement of AI in the robots' operation and the direct link to potential physical injury qualifies this as an AI Incident. The whistleblower lawsuit and the described safety failures demonstrate realized or imminent harm risks, not just potential future harm, thus it is not merely a hazard or complementary information.
You Must Read This Riveting Whistleblower Lawsuit About Allegedly Dangerous Robots

2025-11-22
Gizmodo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (humanoid robot with AI capabilities) whose development and use allegedly led to unsafe conditions and near-injury incidents. The whistleblower's safety concerns about the robot's power and unpredictability, and the company's failure to implement adequate safety measures, directly relate to potential harm to persons. The near-miss incident and the robot's capacity to inflict severe injury demonstrate realized or imminent harm. Therefore, this qualifies as an AI Incident due to injury or harm to persons caused by the AI system's use and malfunction.
AI robots with the power to crush skulls? "Whistleblower" lawsuit sounds alarm on Figure AI

2025-11-22
TechSpot
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-powered humanoid robots whose malfunction or unsafe design has already caused physical damage (a quarter-inch gash in a steel refrigerator door) and poses a credible risk of severe injury to humans (skull-crushing capability). The whistleblower's claims indicate that these safety risks were ignored or downplayed by the company, which continued to seek investment and plan large-scale deployment. This constitutes direct involvement of AI systems in a situation that has led or could lead to harm to people, fulfilling the criteria for an AI Incident. The lawsuit and whistleblower status further confirm the seriousness and direct link to harm.
Whistleblower lawsuit claims Figure AI robots have the strength to fracture skulls

2025-11-23
Notebookcheck
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-powered humanoid robots, which qualify as AI systems due to their autonomous capabilities. The whistleblower alleges that the robots have the physical strength to cause severe injury, and safety concerns were raised but ignored. No actual harm has been reported yet, but the potential for serious injury is credible and significant. The lawsuit and whistleblower status highlight risks in the development and deployment of these AI systems. Since harm is plausible but not yet realized, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information or unrelated, as it centers on safety risks directly linked to AI system development and use.
Whistleblower claims Figure AI fired him for warning their humanoid robot could kill - Cryptopolitan

2025-11-22
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The humanoid robots are AI systems with autonomous capabilities that can physically interact with humans and environments. The whistleblower's claims indicate that these robots have already demonstrated dangerous behavior (e.g., cutting into steel, potential to break human skulls), which constitutes a direct risk of injury or harm to people. The firing of the safety engineer after raising these concerns suggests a failure in safety governance and potential indirect contribution to harm. The lawsuit and the described events show that harm has either occurred or is imminent due to the AI system's malfunction or unsafe use. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Engineer says he was fired after warning Figure AI's robot could 'fracture a human skull' - VnExpress International

2025-11-24
VnExpress International
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (a humanoid robot) whose malfunction has already caused physical damage to property (a steel refrigerator door) and which is described as capable of causing serious injury to humans (fracturing a skull). The engineer's warnings about these risks were ignored, and he was fired after raising safety concerns, indicating a failure to manage AI safety. Although no injury to a person is reported, the malfunction has directly caused property damage and poses a clear risk of injury to people, which meets the criteria for an AI Incident under the framework.
Robots can 'crush human skulls' warns AI whistleblower - Daily Star

2025-11-24
Daily Star
Why's our monitor labelling this an incident or hazard?
The article describes an AI system (humanoid robots) whose malfunction or unsafe design could directly lead to serious physical injury or death, which fits the definition of an AI Hazard because the harm is plausible and credible but not yet realized as an incident. The whistleblower's claims about the robots' power and a prior malfunction causing damage to property support the plausibility of harm. Since no actual injury or harm to persons is reported, this is not an AI Incident. The article is not merely complementary information because it focuses on the safety risks and legal action related to the AI system's potential for harm. Therefore, the event is best classified as an AI Hazard.
Whistleblower Says He Was Fired for Warning Execs That New Robot Could Crush Human Skull

2025-11-24
Futurism
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a humanoid robot AI system deployed in real industrial environments, which malfunctioned and caused property damage and near injury to a human. The whistleblower's safety tests indicate the robot's force exceeds thresholds that could cause serious injury or death. The company's alleged disregard for these safety concerns and firing of the whistleblower further highlight risks and harm linked to the AI system's use and malfunction. This meets the criteria for an AI Incident because the AI system's malfunction and use have directly and indirectly led to harm to persons and property.
Figure AI Worker Sues Over Firing After Raising Safety Risks (1)

2025-11-24
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
The article describes the development and use of AI-powered humanoid robots, which qualifies as AI systems. The safety concerns raised imply potential risks that could plausibly lead to harm, but no actual harm or incident has been reported. The firing of the safety head after raising concerns is related to the use and management of the AI system but does not itself constitute an incident or hazard. Therefore, this event is best classified as Complementary Information, as it provides context on governance and safety issues related to AI development without reporting a realized or imminent harm.
Engineer Accuses Firm of Firing Him for Warning of AI Robot's 'Superhuman Speed' and Ability To Inflict 'Severe Permanent Injury' on Humans

2025-11-25
The New York Sun
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI-powered humanoid robots) whose development and use have directly led to safety risks and near-harm incidents to humans. The engineer's warnings about the robots' superhuman speed and potential to cause severe injury, supported by test data and a near-miss incident, demonstrate a direct link between the AI system's malfunction/use and potential harm. The company's ignoring of these concerns and retaliatory firing of the engineer further highlight the seriousness of the issue. Hence, this is an AI Incident as the AI system's malfunction and unsafe deployment have caused or could cause injury or harm to people.
The new generation of humanoids arrives with a warning. A former engineer claims the Figure 02 robot has enough force to crush a human skull

2025-11-28
Gizmodo en Español
Why's our monitor labelling this an incident or hazard?
The robot Figure 02 is an AI system as it uses autonomous movement algorithms and mechanical actuators to perform complex tasks. The complaint alleges that the robot's force is sufficient to cause serious physical injury, which is a direct safety hazard. However, the article does not report any actual injury or incident caused by the robot, only a legal claim and internal test results. The risk of harm is credible and plausible given the robot's capabilities and the alleged lack of safety protocols. This fits the definition of an AI Hazard, as the event describes circumstances where the AI system's use or malfunction could plausibly lead to harm, but no harm has yet materialized. The article also discusses broader governance and regulatory concerns, reinforcing the hazard classification rather than an incident or complementary information. There is no indication that harm has already occurred, so it is not an AI Incident. It is more than general AI news or product launch, so it is not Unrelated or Complementary Information.
The hidden danger of Figure's domestic humanoid robot: "It has so much force it could shatter a skull"

2025-11-26
El Español
Why's our monitor labelling this an incident or hazard?
The event involves humanoid robots that almost certainly incorporate AI systems for autonomous operation. The lawsuit alleges that the company ignored serious safety issues, including the robots' ability to exert enough force to cause severe physical injury, and that near-accidents have already occurred. These facts indicate direct or indirect harm to human health or safety due to the AI system's malfunction or unsafe use. The presence of an AI system is reasonably inferred from the description of humanoid robots performing tasks autonomously. The harm is materialized or imminent, making this an AI Incident rather than a hazard or complementary information. The event is not merely about potential future harm but about actual safety failures and near-harm events.
Figure AI reported over a robot flagged as highly dangerous

2025-11-30
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (humanoid robots using AI) whose malfunction or unsafe behavior has directly led to physical harm to property and could have caused injury to people. The accidental forceful impact by the robot demonstrates a failure or risk in the AI system's operation. The former security chief's denunciation and the company's response provide context but do not negate the fact that harm has occurred or could have occurred. Therefore, this is an AI Incident due to realized harm and safety concerns linked to the AI system's use and malfunction.
Figure AI sued by its former head of safety: he warns that the Figure 02 robot has enough force to fracture a skull

2025-11-27
20 minutos
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system: a humanoid robot with autonomous or semi-autonomous capabilities that can manipulate objects and exert physical force. The ex-security chief's allegations describe a failure in safety protocols and risk management during development and testing, with the robot capable of causing serious physical injury. The robot's ability to generate force sufficient to fracture a human skull constitutes a direct physical harm risk (harm to health of persons). The internal incident of the robot damaging a refrigerator with significant force further supports the presence of a malfunction or unsafe design. Although no actual human injury is reported, the direct risk and evidence of unsafe operation meet the criteria for an AI Incident because the AI system's malfunction or unsafe design has directly led to a significant harm risk. The company's denial does not negate the presence of the risk or the internal incident. Therefore, this event is best classified as an AI Incident.
Figure AI sued by its former head of safety: he warns that the Figure 02 robot has enough force to fracture a skull

2025-11-28
20 minutos
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system—the Figure 02 humanoid robot with advanced manipulation capabilities powered by AI. The complaint alleges that the robot can exert force sufficient to cause serious injury, and that safety controls were ignored or removed, creating a credible risk of harm. Although no actual injury has been reported, the potential for serious physical harm is clear and directly linked to the robot's design and use. This fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving injury. The event is not an AI Incident because no harm has yet occurred to people. It is not Complementary Information because the main focus is the safety risk and legal complaint, not a follow-up or governance response. Therefore, the correct classification is AI Hazard.
Figure AI sued as the dangers of its domestic humanoid robot are revealed: it can "fracture a human skull"

2025-11-27
La Razón
Why's our monitor labelling this an incident or hazard?
The robot is explicitly described as AI-driven (Helix AI system) and capable of physical actions with force sufficient to cause serious injury, including skull fractures. The safety engineer's warnings about these risks and the near-accident incident (robot hitting a fridge door with significant force) indicate a direct link between the AI system's operation and potential harm. The dismissal of the engineer after raising these concerns suggests negligence in addressing these hazards. Since the robot is intended for consumer use, the risk of injury is not hypothetical but plausible and imminent. Therefore, this event qualifies as an AI Incident due to the direct and significant risk of injury to humans caused by the AI system's use and malfunction.