Pibot: Humanoid AI Robot Developed to Pilot Aircraft Autonomously

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers at KAIST have developed Pibot, a humanoid robot powered by AI and large language models that can pilot aircraft without requiring any cockpit modifications. While no incidents have occurred, its deployment in safety-critical aviation and military roles poses credible future risks if the AI malfunctions or is misused.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (the humanoid AI pilot) that is under development and not yet deployed operationally, so no realized harm or incident has occurred. The AI pilot's capabilities and intended use in operating aircraft imply a credible risk of future harm (e.g., accidents, safety failures) if deployed without sufficient safeguards. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future. The article does not report any actual harm or violation, so it is not an AI Incident. It is not merely complementary information because the main focus is on the unveiling and capabilities of the AI pilot with potential future implications, not on responses or ecosystem updates. It is not unrelated because the AI system is central to the event.[AI generated]
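The same four-way test recurs in every article rationale below: realized harm makes an AI Incident, credible future harm makes an AI Hazard, and governance or follow-up coverage is Complementary Information. As a rough sketch only — the function, parameter names, and precedence order are our own illustration, not the monitor's published pipeline — the rule reduces to:

    from enum import Enum

    class Label(Enum):
        AI_INCIDENT = "AI Incident"
        AI_HAZARD = "AI Hazard"
        COMPLEMENTARY_INFO = "Complementary Information"
        UNRELATED = "Unrelated"

    def classify(involves_ai: bool,
                 harm_realized: bool,
                 plausible_future_harm: bool,
                 response_or_ecosystem_update: bool) -> Label:
        # Hypothetical decision rule inferred from the rationales on this page.
        if not involves_ai:
            return Label.UNRELATED
        if harm_realized:                  # harm has already occurred
            return Label.AI_INCIDENT
        if plausible_future_harm:          # credible path to future harm
            return Label.AI_HAZARD
        if response_or_ecosystem_update:   # governance or follow-up coverage
            return Label.COMPLEMENTARY_INFO
        return Label.UNRELATED

    # Applied to this event: AI is central, no harm realized, credible future risk.
    print(classify(True, False, True, False))  # Label.AI_HAZARD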
AI principles
Safety, Robustness & digital security, Transparency & explainability, Accountability, Respect of human rights, Democracy & human autonomy

Industries
Mobility and autonomous vehicles; Robots, sensors, and IT hardware; Travel, leisure, and hospitality; Government, security, and defence

Affected stakeholders
Workers, General public, Business

Harm types
Physical (death), Physical (injury), Economic/Property, Reputational, Public interest, Human or fundamental rights

Severity
AI hazard

Business function
Research and development

AI system task
Recognition/object detection, Goal-driven organisation, Reasoning with knowledge structures/planning, Interaction support/chatbots


Articles about this incident or hazard

AI pilot unveiled by Korean Scientists | Inquirer Technology

2023-08-16
Inquirer
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the humanoid AI pilot) that is under development and not yet deployed operationally, so no realized harm or incident has occurred. The AI pilot's capabilities and intended use in operating aircraft imply a credible risk of future harm (e.g., accidents, safety failures) if deployed without sufficient safeguards. Hence, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident in the future. The article does not report any actual harm or violation, so it is not an AI Incident. It is not merely complementary information because the main focus is on the unveiling and capabilities of the AI pilot with potential future implications, not on responses or ecosystem updates. It is not unrelated because the AI system is central to the event.

WATCH | Meet 'Pibot,' the humanoid robot that can safely pilot an airplane better than a human | News24

2023-08-15
News24
Why's our monitor labelling this an incident or hazard?
The event involves an AI system under development with advanced autonomous capabilities in piloting aircraft, which could plausibly lead to harm if deployed without sufficient safety measures, especially given its potential military applications. However, since the robot is still in development and no harm or malfunction has occurred, this constitutes a plausible future risk rather than an actual incident. Therefore, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

AI Top Gun: Meet PiBot, a humanoid that has pipped pilots on every count

2023-08-16
Firstpost
Why's our monitor labelling this an incident or hazard?
Pibot is an AI system integrating large language models and robotics to perform complex piloting tasks autonomously. Although no harm has yet occurred, the article highlights its intended use in aviation and military contexts, where errors or malfunctions could lead to injury, disruption, or other harms. The development and planned deployment of such a system in safety-critical and defense roles plausibly could lead to AI Incidents in the future. Since no actual harm or incident is reported, but the potential risk is credible and significant, this qualifies as an AI Hazard rather than an Incident or Complementary Information.

Scientists Build Humanoid Robot That Can Pilot a Plane

2023-08-19
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Pibot) that uses large language models and autonomous control to pilot aircraft. While no harm has yet occurred, the use of such a system in piloting planes, especially in military or extreme environments, carries credible risks of injury, disruption, or other harms if the AI malfunctions or makes errors. Therefore, this qualifies as an AI Hazard due to the plausible future harm from the AI system's use in critical aviation operations.

Meet Pibot, The Humanoid Robot That Can Fly An Aeroplane Just Like...

2023-08-16
2oceansvibe News | South African and international news
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (Pibot) that can autonomously operate airplanes and other vehicles. Although no incident of harm has been reported, the nature of the system and its intended use in critical infrastructure (aviation, military) plausibly could lead to harm such as accidents, operational disruptions, or safety failures if the AI malfunctions or is misused. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident in the future, given the high-risk domain and autonomous capabilities described. There is no indication of realized harm yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the potential risks and capabilities of the AI system.

This Humanoid Robot Can Safely Pilot An Airplane Better Than A Human - Wonderful Engineering

2023-08-17
Wonderful Engineering
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (Pibot) designed to pilot airplanes autonomously or semi-autonomously. Although no harm has yet occurred, the nature of the system's intended use in aviation safety-critical operations means it could plausibly lead to incidents involving injury or disruption if it malfunctions or is misused. Therefore, this qualifies as an AI Hazard due to the credible risk of future harm inherent in deploying such AI systems in piloting aircraft.

This group will soon lose their jobs!

2023-08-17
خبرآنلاین
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Pibot) under development that could plausibly lead to future harm in terms of job loss for human pilots, which is a socio-economic harm. However, no actual harm or incident has occurred yet. Therefore, this qualifies as an AI Hazard because the development and potential use of this AI system could plausibly lead to harm (job displacement). There is no indication of realized harm or malfunction causing injury, rights violations, or other harms at this stage, so it is not an AI Incident. It is more than just general AI news or a product announcement, as it implies a credible risk of future harm.

Humanoid robot Pibot can fly airplanes better than human pilots

2023-08-20
انتخاب
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Pibot) that uses advanced AI capabilities including large language models to pilot airplanes autonomously. Although no harm has yet occurred, the robot's intended function in controlling aircraft directly relates to critical infrastructure and safety. Given the high stakes of aviation safety, the development and future deployment of such an AI system plausibly could lead to incidents involving injury or disruption. Since the robot is still under development and no incident has occurred, this is best classified as an AI Hazard rather than an AI Incident.

Scientific progress, a threat to the piloting profession! | وقایع روز

2023-08-17
وقایع روز (news and analysis outlet)
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system integrated into a humanoid robot capable of autonomous or semi-autonomous piloting of aircraft. Although no harm has yet occurred, the article implies that this technology could plausibly lead to future harms such as job displacement for human pilots, which is a significant societal harm. Since the harm is potential and not realized, this qualifies as an AI Hazard rather than an AI Incident. There is no indication of actual injury, rights violations, or other realized harms at this stage, nor is the article primarily about responses or governance, so it is not Complementary Information.

Watch: this pilot robot flies an airplane better than a human

2023-08-19
زومیت
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Pibot) designed for autonomous piloting, which is clearly an AI system given its capabilities in perception, control, and natural language understanding. However, the article only discusses the development and capabilities of the system without any indication of harm or malfunction. There is no mention of injury, disruption, rights violations, or other harms occurring or having occurred. While the technology could plausibly lead to future hazards if deployed without proper safeguards, the article does not emphasize or warn about such risks explicitly. Therefore, this is best classified as Complementary Information, providing context and insight into AI advancements without reporting an incident or hazard.

Humanoid robot Pibot can fly airplanes better than human pilots [watch]

2023-08-20
دیجیاتو
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Pibot) designed to autonomously pilot airplanes, which is a critical infrastructure domain. The robot's development and intended use could plausibly lead to harm such as injury or disruption if it malfunctions or is misused. Since the robot is still under development and no harm has yet occurred, this fits the definition of an AI Hazard rather than an AI Incident. The article does not report any realized harm or incident, nor does it focus on responses or governance, so it is not Complementary Information. It is clearly related to AI systems and their potential impact, so it is not Unrelated.

PIBOT: the world's first humanoid pilot is developed

2023-08-21
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
PIBOT is an AI system designed to pilot aircraft manually using AI capabilities such as reading manuals, memorizing protocols, and visual perception. Although no harm or incident has occurred yet, the deployment of such AI systems in safety-critical environments like aviation could plausibly lead to injury or disruption if failures happen. The article focuses on the development and potential applications, with no realized harm, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.
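For intuition, the capability chain described here (reading a manual, recalling the relevant protocol, and acting on visual perception) amounts to a retrieve-then-decide loop. The sketch below is purely hypothetical — every name in it is invented for illustration, and KAIST has not published Pibot's implementation:

    from dataclasses import dataclass

    @dataclass
    class Observation:
        instruments: dict      # e.g. {"altitude_ft": 3500.0}
        scene_summary: str     # text produced by a separate vision model

    def retrieve_procedure(manual_index: dict, situation: str) -> str:
        # Stand-in for retrieval over a memorized flight manual.
        return manual_index.get(situation, "no matching checklist")

    def decide_action(obs: Observation, procedure: str) -> str:
        # Placeholder for the language-model call that maps checklist text
        # plus observations to a control input; a real deployment would need
        # deterministic, certified safety interlocks around this step.
        return f"execute first step of '{procedure}' given: {obs.scene_summary}"

    manual = {"engine fire": "Engine Fire checklist..."}
    obs = Observation({"altitude_ft": 3500.0}, "smoke visible from left engine")
    print(decide_action(obs, retrieve_procedure(manual, "engine fire")))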

Attention, passengers: PIBOT is the first humanoid robot capable of piloting an airplane using AI

2023-08-21
Mundo Deportivo
Why's our monitor labelling this an incident or hazard?
PIBOT is an AI system capable of autonomous flight control, which is a high-stakes application with potential for serious harm if malfunction or misuse occurs. The article describes successful simulation tests but no actual incidents or harm. Since the system is still in development and not yet deployed in real aircraft, no direct or indirect harm has occurred. However, the nature of the AI system and its intended use in piloting aircraft means it could plausibly lead to harm in the future, such as accidents or safety failures. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

PIBOT is the first humanoid robot capable of piloting an airplane using AI: would you get on board?

2023-08-21
FayerWayer
Why's our monitor labelling this an incident or hazard?
PIBOT is an AI system designed to pilot aircraft autonomously. Although no harm has yet occurred, the article implies that the system could plausibly lead to incidents if deployed without sufficient testing or safeguards, given the high-risk nature of autonomous flight control. Therefore, this event qualifies as an AI Hazard because it involves the development and potential use of an AI system that could plausibly lead to harm (e.g., injury or disruption) in the future, but no actual harm or incident has been reported yet.

PIBOT: the world's first humanoid pilot is developed

2023-08-22
HoyBolivia.com - Bolivia's first digital newspaper
Why's our monitor labelling this an incident or hazard?
PIBOT is an AI system designed to pilot aircraft manually, involving AI in real-time decision-making and control. The article does not report any harm or malfunction but discusses ongoing development and future applications, implying potential risks. Therefore, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm in the future, especially given the critical nature of piloting aircraft. There is no indication of realized harm or incident yet, so it is not an AI Incident. It is not merely complementary information since the main focus is on the AI system's development and potential impact, not on responses or ecosystem updates. Hence, the classification is AI Hazard.

Humanoid robot built that could replace people in factories, though it has one problem

2023-08-26
LaRepublica.pe
Why's our monitor labelling this an incident or hazard?
Apollo 1 is an AI system (a humanoid robot with autonomous capabilities) being developed for physical labor substitution. The article does not report any realized harm or incident caused by the robot, but its intended use to replace human workers in factories could plausibly lead to future harms such as job displacement or economic disruption. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harms related to labor and communities, even though no harm has yet occurred.

PIBOT, el robot que es capaz de pilotar un avión

2023-08-26
PasionMovil
Why's our monitor labelling this an incident or hazard?
PIBOT is an AI system explicitly described as a humanoid robot with AI that can pilot an aircraft by physically manipulating controls and communicating via language models. The event involves the development and testing phase, with no actual harm reported yet. However, the article highlights plans for real flight tests and potential deployment in dangerous tasks, implying plausible future harm such as accidents or safety failures. Since no harm has occurred yet but there is a credible risk, this fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI systems with potential safety implications.

AI can fly planes! South Korea develops the first robot able to read flight manuals

2023-09-08
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The AI system (the humanoid robot with AI reading and piloting capabilities) is explicitly mentioned and is in development and testing phases. No actual harm or incident has been reported; the AI has not yet flown real planes. The article discusses future potential applications and benefits, but also implies that the system could be used to operate aircraft autonomously. Given the high-stakes nature of piloting, any malfunction or misuse could plausibly lead to injury or harm, qualifying this as an AI Hazard rather than an Incident. The mention of the hydrogen-powered plane is unrelated to AI harm and does not affect the classification.

UNESCO releases guidelines on AI use in education, with privacy protections and age limits

2023-09-08
公共電視
Why's our monitor labelling this an incident or hazard?
The UNESCO guidelines represent a governance response to AI use in education, aiming to mitigate risks such as privacy violations and misuse (e.g., cheating), so this article is Complementary Information. The mention of the AI humanoid pilot robot capable of autonomous flight and potential use in military vehicles suggests a plausible future risk of harm if such AI systems are deployed without adequate safeguards, constituting an AI Hazard. However, no actual harm or incident is reported in the article. Therefore, the overall classification is Complementary Information, with an embedded AI Hazard aspect regarding the development of the AI pilot robot.