
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
During a university sports event in Xi'an, China, a humanoid robot malfunctioned due to drone signal interference and unexpectedly hugged a female student during a dance performance. The incident, attributed to program errors triggered by the interference, raised safety concerns about AI systems in public settings, though no injuries occurred.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (a humanoid robot with programmed dance movements) whose malfunction, caused by signal interference, led it to perform an unintended action: hugging a student. This unexpected physical contact poses a risk of injury, which constitutes a risk of harm to a person. Although no injury occurred in this case, the malfunction directly resulted in a realized safety event rather than a merely potential one. It therefore qualifies as an AI Incident, not just a hazard or complementary information.[AI generated]