AI-Powered Robot Performs First Autonomous Surgery on Patient Model

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers at Johns Hopkins University developed an AI-driven surgical robot that autonomously performed a key phase of gallbladder removal on a realistic patient model. The robot adapted to unexpected scenarios and responded to voice commands, marking a significant milestone for autonomous AI in high-stakes medical procedures; no harm occurred.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (autonomous surgical robot) was used to perform complex surgical procedures on pig organs without human intervention, demonstrating advanced AI capabilities. While the surgeries were successful on dead tissue, the article explicitly states that clinical deployment on humans is still years away and that significant challenges remain before human trials can begin. No actual harm or injury has occurred yet, but the technology's future use on humans could plausibly lead to harm if the system malfunctions or fails to respond to dynamic conditions in live surgery. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system is central to the event and its potential impact.[AI generated]
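The rationale above applies a three-way triage rule: realized harm means an AI Incident, plausible future harm without realized harm means an AI Hazard, and AI-related events with neither mean Complementary Information. The following is a minimal illustrative sketch of that rule, not the actual AIM classifier (whose implementation is not public); the function name and event fields are assumptions for illustration only.

```python
# Hypothetical sketch of the monitor's triage rule as described in the
# rationale above. Not the real AIM classifier; field names are illustrative.

def classify(event: dict) -> str:
    """Map a reported event to one of the AIM categories."""
    if not event.get("involves_ai"):
        return "Unrelated"
    if event.get("harm_occurred"):
        # Realized injury, health harm, or rights violation.
        return "AI Incident"
    if event.get("plausible_future_harm"):
        # No harm yet, but credible risk if deployed without safeguards.
        return "AI Hazard"
    # AI-related context or progress report with no harm described.
    return "Complementary Information"

# The surgical-robot trials: AI involved, no harm yet, plausible clinical risk.
print(classify({"involves_ai": True,
                "harm_occurred": False,
                "plausible_future_harm": True}))  # prints "AI Hazard"
```

Under this sketch, the divergent labels across the articles below come down to whether each rationale judged future harm plausible (AI Hazard) or treated the report as a harmless progress update (Complementary Information).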
Industries
Healthcare, drugs, and biotechnology

Severity
AI hazard

Business function
Research and development

AI system task
Recognition/object detection
Event/anomaly detection
Reasoning with knowledge structures/planning


Articles about this incident or hazard

Experimental surgical robot performs gallbladder procedure autonomously

2025-07-09
Reuters
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous surgical robot) that has been developed and tested successfully, showing advanced autonomous decision-making in surgery. However, there is no indication that the AI system has caused any injury, harm, or violation of rights, nor is there a credible risk of imminent harm described. The testing was controlled and on animal organs, with no reported malfunction or adverse outcome. Thus, it does not meet the criteria for AI Incident or AI Hazard. It is not merely unrelated, as it involves AI system development and testing, but since it does not describe harm or plausible harm, it is best classified as Complementary Information about AI advancements in healthcare.
Robot surgery on humans could be trialled within decade after success on pig organs

2025-07-09
The Guardian
Why's our monitor labelling this an incident or hazard?
An AI system (autonomous surgical robot) was used to perform complex surgical procedures on pig organs without human intervention, demonstrating advanced AI capabilities. While the surgeries were successful on dead tissue, the article explicitly states that clinical deployment on humans is still years away and that significant challenges remain before human trials can begin. No actual harm or injury has occurred yet, but the technology's future use on humans could plausibly lead to harm if the system malfunctions or fails to respond to dynamic conditions in live surgery. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system is central to the event and its potential impact.
Robots 'to be doing human surgery in coming years' after pig success

2025-07-09
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The AI system (robotic surgical arm powered by AI similar to ChatGPT) has been used successfully on pigs, showing autonomous surgical capabilities. No actual harm or injury to humans has occurred yet, but the article clearly discusses the potential future deployment of these AI surgical robots on humans. Given the complexity and risks inherent in surgery, the AI system's use could plausibly lead to harm (injury or health harm) if errors occur during autonomous operations on humans. Thus, this is an AI Hazard rather than an AI Incident. The article does not describe any realized harm or violation of rights, only a promising but still experimental development with foreseeable risks.
Robot performs realistic surgery 'with 100% accuracy'

2025-07-09
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The robot is an AI system performing autonomous surgery, a high-stakes medical task directly related to human health. Although the surgery was conducted on a realistic model rather than a human, the AI system's use in this context could plausibly lead to harm if errors occur in real patients. The article does not report any actual harm or malfunction but emphasizes the system's readiness for clinical trials and the need for safety and training. Hence, this qualifies as an AI Hazard due to the credible risk of harm in future use, rather than an AI Incident where harm has already occurred.
Robot performs realistic surgery 'with 100% accuracy'

2025-07-09
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (an autonomous surgical robot using machine learning architectures similar to ChatGPT) performing complex surgical tasks. Although the robot has not yet been used on real patients and no harm has occurred, the nature of autonomous surgery inherently carries risks of injury or harm to patients if the system malfunctions or makes errors. The article discusses the system's capabilities and future plans for human trials, indicating plausible future harm. Thus, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.
Robot performs surgery with '100% accuracy'

2025-07-09
The Independent
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous surgical robot) that has been developed and tested in a simulated environment performing surgery tasks with high accuracy. The AI system's use is experimental and has not yet been deployed in actual human surgeries, so no direct or indirect harm has occurred. The article focuses on the successful demonstration and potential of the AI system rather than any incident or hazard. Therefore, it does not qualify as an AI Incident or AI Hazard. It is not merely unrelated, as it involves AI development and use, but since it reports on a research advancement without harm or plausible harm, it fits best as Complementary Information, providing context and updates on AI surgical systems.
A robot might perform your next surgery

2025-07-09
The Independent
Why's our monitor labelling this an incident or hazard?
The robot is an AI system using machine learning architecture similar to ChatGPT, trained to perform surgical tasks autonomously. The event involves the use of AI in a realistic surgical simulation, showing the system's ability to operate without human intervention. Although no direct harm has occurred (the surgery was on a lifelike model), the technology's future use in real surgeries could plausibly lead to injury or harm to patients if errors or malfunctions happen. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system is central to the event.
Robot performs surgery for first time powered by ChatGPT's machine learning - The Mirror

2025-07-09
Mirror
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a surgical robot powered by machine learning akin to ChatGPT) performing autonomous surgery. While the surgery was successful and no harm is reported, the development and use of such autonomous surgical AI systems inherently carry plausible risks of harm to patient health if malfunctions or errors occur in future deployments. Therefore, this event represents an AI Hazard, as it plausibly could lead to injury or harm to patients if the technology is used widely without sufficient safeguards. There is no indication that harm has yet occurred, so it is not an AI Incident. The article focuses on the breakthrough itself rather than responses or governance, so it is not Complementary Information. It is clearly related to AI systems, so it is not Unrelated.
Robot performs realistic surgery with 100% accuracy in 'major leap'

2025-07-09
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The robot performing surgery autonomously with 100% accuracy and responding to voice commands implies the presence of an AI system capable of complex decision-making and real-time adaptation. However, the article does not mention any harm or injury resulting from the surgery, nor any malfunction or failure. The event is a demonstration of AI capabilities in surgery without any reported harm or risk. Therefore, it does not qualify as an AI Incident or AI Hazard. It is a significant development in AI application but does not report harm or plausible harm. Hence, it is best classified as Complementary Information, providing context on AI advancements in healthcare.
Robot performs first realistic surgery without human assistance

2025-07-10
Tecnologia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous surgical robot) performing a real surgical procedure autonomously, which qualifies as AI system involvement. However, since the surgery was performed on a pig cadaver and no injury, harm, or violation of rights occurred, there is no realized harm. The article mentions potential future applications and challenges but does not describe any incident or harm caused by the AI system. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides significant context and advancement in AI surgical robotics without reporting harm or plausible imminent harm.
For the first time, a robot has performed realistic surgery without human help

2025-07-09
Hipertextual
Why's our monitor labelling this an incident or hazard?
The robot is an AI system performing autonomous surgery, which is a high-stakes application with potential for serious harm. However, since the surgery was conducted on a realistic replica and no actual harm to a person or property occurred, this event does not qualify as an AI Incident. It also does not describe a near miss or credible risk that could plausibly lead to harm imminently, so it is not an AI Hazard. The article mainly reports a technological breakthrough and experimental success, which is informative but does not itself constitute harm or imminent risk. Therefore, it is best classified as Complementary Information, providing context on AI development and its future implications in surgery.
A robot trained with AI and surgery videos operates on a gallbladder without human help

2025-07-09
Diario1
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the SRT-H surgical robot) that autonomously performs complex surgical tasks, which clearly qualifies as an AI system. The use of the AI system in surgery is described as successful and precise, but the surgeries were conducted on ex vivo pig tissues, so no actual human harm occurred. However, the autonomous surgical operation inherently carries risks of injury or harm if applied in real clinical settings. Therefore, this event represents a plausible future risk of harm due to autonomous AI surgical systems. Given that no actual harm has occurred yet but the AI system's use could plausibly lead to injury or harm in future applications, this event is best classified as an AI Hazard.
Robot performs first realistic surgery without human help: "Transformative advance"

2025-07-09
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as performing autonomous surgery, which is a high-stakes medical procedure. The AI system's use directly led to a surgical operation being performed without human intervention. Although no harm is reported, the nature of autonomous surgery inherently carries risks of injury or harm to patients if the system malfunctions or makes errors. Therefore, this event plausibly leads to potential harm (injury or health harm) due to the AI system's autonomous operation in a critical medical context. As no actual harm is reported yet, but the AI system's use could plausibly lead to injury or harm, this qualifies as an AI Hazard rather than an AI Incident.
Surgical robots take step towards fully autonomous operations

2025-07-09
New Scientist
Why's our monitor labelling this an incident or hazard?
The event involves an AI system actively used to perform surgery autonomously, which is a clear AI system involvement. However, the surgery was performed on a dead pig, so no actual harm to a living being occurred. The article discusses the potential for future use on live animals and humans, where risks of harm (injury or health harm) are plausible. Therefore, this event constitutes an AI Hazard because the AI system's use could plausibly lead to harm in future applications, but no harm has yet materialized. It is not Complementary Information because the article focuses on the AI system's autonomous operation and its implications rather than updates or responses to prior incidents. It is not an AI Incident as no injury or harm has occurred yet.
Robotic surgery hits 'milestone' with autonomous gallbladder removal - UPI.com

2025-07-09
UPI
Why's our monitor labelling this an incident or hazard?
The AI system (SRT-H) is explicitly described as autonomously performing surgery, indicating AI system involvement. However, the procedure was performed on a pig cadaver, so no actual harm to humans or property occurred. The article discusses the potential for future clinical use and the need for further testing to ensure safety, indicating plausible future risks but no current incident. Thus, this qualifies as an AI Hazard because the autonomous surgical system could plausibly lead to harm if deployed clinically without sufficient safeguards, but no harm has yet occurred. It is not Complementary Information because the main focus is on the new AI system's capabilities and milestone achievement, not on responses or updates to prior incidents.
A robot performs the first surgery without human help on a real patient | El Diario Vasco

2025-07-09
El Diario Vasco
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the SRT-H robot) performing autonomous surgery on a human patient, which is a direct use of AI in a context where harm to health is a critical concern. Although the surgery was successful and no harm was reported, the AI system's autonomous operation in a real medical procedure inherently carries risks of injury or harm to the patient. This meets the criteria for an AI Incident because the AI system's use directly relates to potential harm to a person's health, and the event is not merely a future risk but an actual deployment of AI in a sensitive, high-risk context. Therefore, it is classified as an AI Incident rather than a hazard or complementary information.
Pig-Organ Trial Suggests AI Robot Surgeons Are Coming

2025-07-09
Newser
Why's our monitor labelling this an incident or hazard?
The AI system (autonomous surgical robots) is explicitly involved in performing complex operations autonomously, which qualifies as AI system involvement. The event stems from the use and development of the AI system. However, since the surgeries were performed on deceased pig organs and no harm to humans or other harms have occurred, this does not meet the criteria for an AI Incident. The potential for future harm exists but is not immediate or certain, so it does not qualify as an AI Hazard either. The article mainly reports a technological advancement and research progress, which is informative but does not describe an incident or hazard. Therefore, the event is best classified as Complementary Information.
A robot performs the first surgery on "an anatomically realistic human model" without human supervision

2025-07-09
Cadena SER
Why's our monitor labelling this an incident or hazard?
An AI system (the autonomous surgical robot) is explicitly involved, performing complex real-time decision-making and adaptation in surgery. The event involves the use and development of the AI system. However, the surgery was conducted on an anatomically realistic model, not a human, so no actual harm (injury, rights violation, or property/community/environmental harm) has occurred. The robot's demonstrated capabilities indicate a credible risk that such systems could cause harm if deployed on humans without sufficient oversight or safety measures. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
AI-trained robot operates on gallbladder without human help (video)

2025-07-09
MVS Noticias
Why's our monitor labelling this an incident or hazard?
The robot is an AI system performing autonomous surgical tasks, which fits the definition of an AI system. The event involves the use of this AI system in a medical context. However, since the surgeries were performed on ex vivo pig tissues and no harm to living beings occurred, there is no direct or indirect harm as defined for an AI Incident. The event also does not describe a plausible future harm scenario or risk of harm, so it does not qualify as an AI Hazard. Instead, it is a significant development in AI surgical robotics, providing complementary information about AI capabilities and progress in the field without reporting harm or risk. Therefore, the classification is Complementary Information.
In A First, A Robot Listened To Spoken Instructions And Performed Surgery - Just Like A Human Would

2025-07-09
IFLScience
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous surgical robot) that has been developed and tested successfully in a controlled environment. There is no indication of injury, harm, or violation of rights occurring as a result of the AI system's use. The article emphasizes the potential and viability of such AI systems for future surgical autonomy but does not describe any realized harm or incident. Therefore, it does not meet the criteria for an AI Incident. However, given the nature of autonomous surgical robots and their potential to cause harm if malfunctioning or misused in real surgeries, this development plausibly could lead to an AI Incident in the future. Hence, it is best classified as an AI Hazard, reflecting the credible risk associated with deploying such systems in real-world medical settings.
Robot performs the first realistic surgery without human help

2025-07-09
SIC Notícias
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the surgical robot SRT-H) that autonomously performed surgery on a realistic patient model, demonstrating advanced AI capabilities in a high-risk domain. No actual harm or injury occurred since the surgery was on a model, not a human, so it is not an AI Incident. However, the autonomous nature and complexity of the system imply a credible risk of harm if deployed in real clinical environments without sufficient safeguards. Thus, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to injury or harm to persons in the future.
Watch This Robot Surgeon Flawlessly Operate On Pig Organs Without Human Control At Johns Hopkins

2025-07-09
Study Finds
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (a two-part AI controlling a surgical robot) performing autonomous surgery steps. While the current tests are on pig organs in a lab setting with no harm reported, the system's future use on live patients could plausibly lead to harm if errors occur during surgery. The AI system's development and use in this context present a credible risk of injury or harm to patients in the future. Since no actual harm has occurred yet, this qualifies as an AI Hazard rather than an AI Incident.
Robotic surgery hits 'milestone' with autonomous gallbladder removal

2025-07-09
The Island Packet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (SRT-H) performing autonomous surgery on a pig cadaver, which qualifies as AI system involvement. Because the procedure was conducted in a controlled experimental setting on a cadaver, no injury, rights violation, or other harm has occurred, so it does not meet the criteria for an AI Incident. The article mainly reports a successful milestone and proof of concept, highlights potential future clinical applications, and emphasizes the need for further testing and validation before deployment. Since its focus is the advancement and its implications rather than a credible, imminent risk, it is best classified as Complementary Information providing context on AI advancements rather than a direct hazard or incident.
Autonomous gallbladder removal: Robot performs first realistic surgery without human help

2025-07-09
Tech Xplore
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (SRT-H) performing autonomous surgery, which is a direct use of AI in a high-stakes health context. Although no real patient was harmed, the technology's capability to perform complex surgery autonomously implies a plausible risk of injury or harm if deployed clinically without sufficient validation and safety measures. The event does not describe any actual harm or malfunction causing injury, so it is not an AI Incident. It is more than complementary information because it reports a concrete autonomous AI system performing surgery, not just research findings or governance responses. Hence, it fits the definition of an AI Hazard due to the credible potential for future harm.
A robot trained with AI and surgery videos operates on a gallbladder without human help

2025-07-09
Diario de Cádiz
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous surgical robot) performing complex surgery autonomously, which fits the definition of an AI system. The use of the AI system is experimental and has not caused any injury, health harm, or other harms as defined. There is no indication of malfunction or misuse. The event does not describe any realized harm but does highlight the potential for future harm if such autonomous surgical systems are deployed without proper validation and safety measures. Therefore, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system is central to the event.
Johns Hopkins teaches robot to perform a gallbladder removal on a realistic patient

2025-07-09
The Robot Report
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the Surgical Robot Transformer-Hierarchy) that autonomously performs surgery, a high-risk medical procedure. While the robot performed successfully in trials on a lifelike patient model without causing harm, the nature of autonomous surgery inherently carries risks that could plausibly lead to injury or harm to patients in future real-world applications. Since no actual harm has occurred yet, but the potential for harm is credible and significant, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the article focuses on the primary event of autonomous surgery performance, not on responses or updates to prior incidents. It is not Unrelated because the AI system and its operation are central to the event.
Robot performs realistic surgery 'with 100% accuracy'

2025-07-09
Shropshire Star
Why's our monitor labelling this an incident or hazard?
The robot uses AI (machine learning architecture similar to ChatGPT) to perform autonomous surgery, adapting in real-time to complex and unpredictable scenarios. This qualifies as an AI system. The event involves the use and development of this AI system. Although the surgery was successful with 100% accuracy on a realistic patient model, no actual human patients were harmed or involved yet. The article discusses the potential for future clinical use and the importance of safety and training, indicating plausible future harm if the system malfunctions or is misused. Since no harm has occurred yet, but plausible harm exists, this is best classified as an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the demonstration of autonomous surgery and its implications, not on responses or governance.
'Major leap': Robot performs realistic surgery with 100% accuracy

2025-07-09
dpa International
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous surgical robot) developed and used to perform surgery with high precision. Although the surgery was conducted on a realistic patient model and not a real human, the AI system's use in this context could plausibly lead to harm if applied in real clinical settings without adequate validation and safety measures. Therefore, this qualifies as an AI Hazard because it plausibly could lead to injury or harm to patients in the future. There is no indication of actual harm or malfunction causing harm at this stage, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI system's autonomous surgical performance and its implications for future clinical use.
Robot Surgeons Learn Like Residents -- And Just Performed First Autonomous Surgery todayheadline

2025-07-10
Today Headline
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (surgical robots with machine learning capabilities) performing autonomous surgery, a direct use of AI. Although the procedures were conducted on pig cadavers rather than humans, the autonomous operation of a complex medical procedure is a realized event demonstrating AI performing tasks traditionally done by human surgeons, with direct implications for health and safety. The article reports no harm occurring during the procedure, but the autonomous surgery itself is a materialized event involving AI use in a high-stakes domain. On that basis it is classified as an AI Incident: even though no harm occurred this time, the event is treated as a critical milestone in AI systems performing autonomous medical interventions.
Surgical robots take step towards fully autonomous operations todayheadline

2025-07-10
Today Headline
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in performing autonomous surgery, which qualifies as an AI system under the definitions. The event involves the use of AI in a real-world task with potential for future clinical application. However, since the surgery was conducted on a dead pig, no injury or harm to a person or group has occurred, and no violation of rights or other harms are reported. The article highlights the potential for future autonomous surgeries and the need for regulation, indicating plausible future risks but no current incident. Thus, the event fits the definition of Complementary Information, as it provides important context and progress in AI surgical robotics without describing an AI Incident or AI Hazard.
Robot Surgeon Executes Key Phase of Surgery Without Human Assistance

2025-07-10
AIwire
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (SRT-H) performing autonomous surgery tasks without human assistance, indicating AI system involvement. The event stems from the use and development of the AI system. Although no harm has occurred yet (the surgeries were performed on ex vivo models), the autonomous nature of the system and its intended application in real surgeries imply a credible risk of injury or harm to patients if deployed clinically without sufficient safeguards. Hence, it fits the definition of an AI Hazard rather than an AI Incident. The article does not report any actual injury, rights violation, or other harm caused by the AI system at this stage, so it is not an AI Incident. It is also not merely complementary information or unrelated, as the focus is on the AI system's autonomous surgical capabilities and their implications for future harm potential.
First operation performed by a robot in autonomous mode - Science and technology - Ansa.it

2025-07-09
Agenzia ANSA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous surgical robot SRT-H) performing surgery autonomously on a patient simulator. Although the surgery was conducted in a simulated environment and no actual patient harm occurred, the AI system's autonomous operation in a complex, high-stakes task like surgery presents a plausible risk of harm if deployed in real clinical settings. The article highlights the robot's ability to adapt and make decisions in unpredictable scenarios, which could plausibly lead to injury or harm if errors occur in real use. Therefore, this qualifies as an AI Hazard, as it plausibly could lead to an AI Incident in the future if deployed clinically without sufficient safeguards.
A robot shows that machines may one day replace human surgeons

2025-07-09
EL PAÍS English
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (an autonomous surgical robot using generative AI and machine learning) and its use in surgery. However, no harm or injury has occurred; the robot is still in experimental stages, performing surgeries on animal tissues and controlled environments. The article focuses on the progress, potential benefits, challenges, and philosophical questions related to autonomous surgical robots. This fits the definition of Complementary Information, as it provides context and updates on AI development and its societal and ethical implications without describing a realized or imminent harm.
US researchers train robot to autonomously complete gallbladder removal surgery - 每日观点_财报网

2025-07-10
finance.3news.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the SRT-H robot) that autonomously performs complex surgical tasks, making independent decisions during surgery. This involves the use of AI in a high-stakes medical context where malfunction or errors could cause injury or harm to patients. However, the article does not report any actual harm or incidents resulting from the robot's use; it only describes the successful training and capability development. Therefore, this event represents a plausible future risk scenario where autonomous surgical robots could lead to harm if malfunction or misuse occurs, qualifying it as an AI Hazard rather than an AI Incident.

Study describes robot operating on gall bladder autonomously, 'milestone' in use of AI

2025-07-10
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Surgical Robot Transformer-Hierarchy, SRT-H) that autonomously performs complex surgical procedures, adapting in real-time and making decisions without human intervention. Although the surgery was performed on ex vivo human tissue and not on live patients, the system's capability to operate autonomously in a realistic setting indicates a direct use of AI with potential implications for patient safety and clinical outcomes. However, since the surgery was conducted on tissue samples and no actual harm or injury to patients occurred, this event does not describe realized harm but demonstrates a credible advancement that could lead to future AI incidents if deployed clinically without adequate safeguards. Given the current state described, this qualifies as Complementary Information about a significant AI development and its potential clinical impact rather than an AI Incident or AI Hazard at this stage.

For the first time in history, a robot performs AI-guided surgery without any human assistance

2025-07-10
O Globo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the autonomous surgical robot) performing a real medical procedure on human tissue. However, the surgery was successful and comparable to experienced surgeons, with no harm reported. Since no injury, harm, or violation occurred, and the event describes a successful use rather than a malfunction or harm, it does not qualify as an AI Incident. There is also no indication of plausible future harm or risk from this event. Therefore, it is best classified as Complementary Information, as it provides important context and advancement in AI applications in surgery without describing harm or hazard.

A robot performed surgery without human assistance for the first time

2025-07-10
infobae
Why's our monitor labelling this an incident or hazard?
An AI system (SRT-H) was explicitly involved, performing autonomous surgery using AI techniques including imitation learning and language-conditioned hierarchical learning. The event involves the use and development of the AI system. Although no actual harm occurred (the surgeries were on realistic models), the system's autonomous surgical capabilities could plausibly lead to harm if deployed in real medical contexts without proper validation and safety measures. Hence, it fits the definition of an AI Hazard rather than an AI Incident. The article does not report any injury, violation, or damage caused by the AI system, so it is not an Incident. It is not merely complementary information because the main focus is on the AI system's autonomous surgical performance and its implications for future use and risks.

Johns Hopkins' AI-powered, voice-controlled robot performs autonomous surgery

2025-07-10
FierceBiotech - free daily biotech briefing
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems, including large language models and other AI algorithms, to enable a robot to perform autonomous surgery. The robot was trained on surgical videos and demonstrated the ability to adapt to variations and unexpected events during surgery. However, the surgeries were performed in a controlled, ex vivo (outside a living body) environment, with no indication of harm or injury to actual patients. There is no report of any injury, violation of rights, or other harm resulting from the AI system's use. The event represents a technological advancement with potential future implications but does not describe any realized harm or incident. Therefore, it qualifies as Complementary Information, providing important context and progress in AI surgical systems without constituting an AI Incident or AI Hazard at this stage.

US researchers train robot to autonomously perform gallbladder removal surgery

2025-07-10
人民网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the surgical robot with autonomous capabilities) developed and tested for medical procedures. Although the AI system is operational and has demonstrated success in trials, there is no indication that any harm, injury, or violation of rights has occurred. The robot is not yet deployed in clinical practice on humans, so no direct or indirect harm has resulted. The event represents a significant technological advancement with potential future impact but does not describe an incident or hazard causing or plausibly leading to harm at this stage. Therefore, it is best classified as Complementary Information, providing context and updates on AI development in healthcare without reporting an AI Incident or AI Hazard.

2025-07-10
人民网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the surgical robot with autonomous AI capabilities) in its development and use phases. Although the robot has demonstrated successful autonomous surgeries on pig organs, there is no indication of any injury, violation of rights, or other harms occurring. The article highlights the potential for future clinical use and benefits but does not report any realized harm or incident. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm in future clinical applications, but no harm has yet occurred.

AI Morning Briefing | Musk's touted "strongest AI model" Grok4 released; taking aim at Google, Nvidia and OpenAI move into the web browser space

2025-07-11
东方财富网
Why's our monitor labelling this an incident or hazard?
The article primarily covers new AI product releases, research achievements, and educational initiatives without describing any direct or indirect harm caused by AI systems. The mention of ChatGPT citing unreliable sources is a potential concern but is not confirmed as causing harm or incidents. The autonomous surgery robot's successful tests indicate progress rather than malfunction or harm. The AI browsers and model upgrades are market developments. Hence, the content fits the definition of Complementary Information, providing context and updates on AI developments and ecosystem responses without reporting new incidents or hazards.

Intelligent robot autonomously performs gallbladder removal surgery, on par with a senior surgeon

2025-07-10
东方财富网
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (an intelligent surgical robot) autonomously performing surgery on a patient model, demonstrating advanced AI capabilities. There is no indication of any injury, harm, or violation of rights occurring; the surgery was successful and conducted in a controlled experimental setting. While the technology's future deployment could pose risks, the article does not report any actual or imminent harm or credible risk leading to harm at this stage. Thus, it does not meet the criteria for AI Incident or AI Hazard. Instead, it provides important complementary information about AI development and its potential impact on healthcare.

Surgical robot removes gallbladder, entirely without human help

2025-07-10
Bild
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a learning surgical robot performing autonomous operations. Although no harm has yet occurred, the nature of the system and its intended use in autonomous surgery imply a credible risk of injury or harm to patients if failures happen. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm, but no incident has been reported so far.

US robot autonomously performs gallbladder removal surgery, on par with a senior doctor

2025-07-10
驱动之家
Why's our monitor labelling this an incident or hazard?
The AI system (the SRT-H robot) is explicitly described as autonomously performing a complex surgical procedure, which directly affects patient health, and the reported 100% success rate indicates realized use of the AI system in a medical context. Although the article describes no harm or malfunction, the system's direct use in a critical health procedure carries clear potential for harm or benefit. Given the successful outcome and the absence of reported harm, the event is classified as an AI Incident reflecting the AI system's direct involvement in health-related outcomes.

AI robot autonomously performs complex gallbladder removal surgery with 100% accuracy

2025-07-10
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous surgical robot) that has been developed and tested successfully in a research context. There is no indication of any injury, violation of rights, or other harm resulting from its use so far. The AI system's use is experimental and has not yet been deployed in clinical practice where harm could occur. The article focuses on the capabilities and potential future applications of the AI system rather than any realized harm or risk. Hence, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, providing context and updates on AI advancements in medical robotics.

First surgical operation performed by a robot without human help

2025-07-10
La Repubblica.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the surgical robot Srt-h) performing autonomous surgery, which is a direct use of AI. While no actual harm occurred since the operation was on a simulator, the development and use of such autonomous surgical AI systems could plausibly lead to harm if deployed clinically without sufficient safeguards. However, since the article reports a successful experiment without harm or malfunction, and no indication of imminent risk or hazard, it is best classified as Complementary Information about AI progress and potential future impacts rather than an Incident or Hazard.

Goodbye to surgeons: a robot trained with AI and surgery videos operates on a gallbladder without human help

2025-07-10
El Español
Why's our monitor labelling this an incident or hazard?
The robot is an AI system explicitly described as using machine learning and trained on surgical videos to perform autonomous surgery. The event involves the use of this AI system in a real surgical task, albeit on ex vivo animal tissue, with no actual harm reported. However, the nature of the system—autonomous surgery—carries inherent risks of injury or harm to human patients if deployed without proper controls. Since no actual harm has occurred yet, but plausible future harm is credible, this event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it reports a new development with potential risk, not a response or update to a prior incident. It is not Unrelated because the AI system and its use are central to the event.

With AI from ChatGPT: robot performs first independent operation

2025-07-10
N-tv
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as controlling a surgical robot performing autonomous operations. While no injury or harm has occurred yet (the operation was on a pig cadaver), the AI system's autonomous decision-making in surgery could plausibly lead to harm in future human applications if errors or malfunctions happen. The event does not report any actual harm or rights violations but highlights a credible risk inherent in the AI system's intended use. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

First surgical operation performed by a robot without human help

2025-07-10
Tiscali Notizie
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the autonomous surgical robot Srt-H) performing surgery without human help on a simulator, indicating AI system involvement and autonomous decision-making. No actual harm occurred since the procedure was on a simulator, so it is not an AI Incident. However, the autonomous surgical robot's capabilities imply a credible risk of future harm if used on real patients, fitting the definition of an AI Hazard. The event does not describe a response to a past incident or broader governance context, so it is not Complementary Information. It is not unrelated as it clearly involves AI.

First surgical operation performed autonomously by a robot

2025-07-10
ANSA.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous surgical robot Srt-h) performing a complex task (surgery) independently. Although the operation was conducted on a simulator and no actual patient harm occurred, the AI system's autonomous surgical capability could plausibly lead to harm if deployed in real clinical settings without adequate safeguards. Therefore, this event constitutes an AI Hazard, as it demonstrates the potential for future AI incidents involving autonomous surgical robots that might cause injury or harm to patients if malfunctions or errors occur.

A milestone in medicine: a robot trained on videos and ChatGPT removes a gallbladder without human help

2025-07-10
20 minutos
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the surgical robot with AI trained by videos and ChatGPT architecture) performing autonomous surgery. No actual harm has occurred since the surgeries were on ex vivo pig tissues, so it is not an AI Incident. However, the AI system's autonomous surgical capabilities could plausibly lead to harm in future real-world applications if errors or malfunctions occur. Therefore, this is best classified as an AI Hazard, reflecting the credible risk of future harm from autonomous AI surgical systems.

AI brings surgical robots close to "full autonomy"

2025-07-10
科学网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system controlling a surgical robot performing complex autonomous tasks. Although the current tests were on deceased animals and no harm occurred, the planned future use on live animals introduces credible risks of injury or harm to health, fulfilling the criteria for an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI system's development and its plausible future risks.

New era in healthcare? AI robot aces gall bladder operation with zero human help

2025-07-10
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as performing autonomous surgical operations, which fits the definition of an AI system. The surgeries were performed on human tissue outside the body, so no direct harm to patients occurred, ruling out an AI Incident at this stage. However, the autonomous nature of the robot and its intended future use in live surgeries imply a credible risk of harm if deployed prematurely or malfunctioning, meeting the criteria for an AI Hazard. The article does not describe any realized harm or legal issues, nor is it merely a product announcement without risk context, so it is not Complementary Information or Unrelated.

Goodbye to waiting lists? A robot successfully performs the first surgery without human intervention, marking a turning point

2025-07-10
El Confidencial
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Surgical Robot Transformer-Hierarchy) performing autonomous surgery, a direct use of AI affecting human health. The system's development and use led to a successful surgical intervention, a realized use of AI in a high-stakes medical context rather than merely potential harm (a hazard) or complementary information. As a concrete instance of AI system use with direct health impact, the event is classified as an AI Incident.

For the first time in the history of medicine, a robot performs AI-guided surgery without human help

2025-07-10
Terra
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the SRT-H robot) autonomously performing a surgical procedure without human touch, demonstrating real-time learning and adaptation. While the surgery was on extracted tissues, the AI's autonomous operation in a clinical context implies a credible risk and potential for future incidents involving patient health. No harm or injury is reported, so it is not an AI Incident. The event is more than general AI news or product launch; it shows a plausible future scenario where AI could directly impact human health, fitting the definition of an AI Hazard.

Surgical robot performs gallbladder operation on its own

2025-07-10
heise online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the surgical robot using machine learning and imitation learning) performing autonomous surgery. While the current operation was on a dead pig and no harm occurred, the system's intended use is to perform complex surgical procedures autonomously, which inherently carries risks of injury or harm to patients if errors occur. The article highlights the robot's ability to adapt and self-correct, but also notes the operation took longer than a human surgeon, implying potential risks in real-world deployment. Since no actual harm has occurred yet, but plausible future harm exists due to the nature of autonomous surgery, this qualifies as an AI Hazard rather than an AI Incident.

First autonomous surgery performed with ChatGPT: robot removes a gallbladder without human help

2025-07-11
LaRepublica.pe
Why's our monitor labelling this an incident or hazard?
An AI system (the SRT-H surgical robot) was developed and used to perform a complex medical procedure autonomously, demonstrating advanced decision-making and adaptability. Although the surgery was performed on a pig cadaver and not a living patient, the event involves the use of AI in a high-stakes medical context where malfunction or misuse could lead to injury or harm to humans in future applications. However, since the procedure was conducted on a cadaver and no actual harm occurred, this event does not qualify as an AI Incident. Instead, it represents a plausible future risk scenario where autonomous surgical AI could lead to harm if deployed clinically without adequate safeguards. Therefore, it is best classified as an AI Hazard, reflecting the credible potential for harm in future real-world use.

Historic: AI robot performs first surgery on a human organ without doctors' help

2025-07-10
TecMundo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the SRT-H robot, using AI similar to that powering ChatGPT) performing a fully autonomous surgical operation on a human organ, a direct use of AI in a high-stakes medical context where harm could occur. The article reports the surgery was successful and went as expected, with no harm or injury, so the event does not qualify as an AI Incident. Nonetheless, malfunctions or errors in future uses could plausibly cause harm, a credible risk inherent in autonomous surgery. The event is therefore best classified as an AI Hazard, reflecting the plausible future risk of harm from autonomous AI surgical systems.

A robot that learned to operate from videos has performed its first liver surgery without human help

2025-07-10
elEconomista.es
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (the Transformer-Hierarchy surgical robot) that autonomously performed surgery, which is a direct use of AI in a high-stakes health context. Although no injury or harm is reported, the nature of autonomous surgery carries inherent risks that could plausibly lead to harm if the AI malfunctions or makes incorrect decisions. The event is not a Complementary Information piece because it reports a new development with potential for future harm, nor is it unrelated. Since no actual harm occurred, it is not an AI Incident. Therefore, it fits the definition of an AI Hazard, as the autonomous surgical AI system's use could plausibly lead to injury or harm to patients in the future.

AI removes gallbladder: a robot operates independently for the first time

2025-07-10
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous surgical robot trained with AI) performing a complex medical procedure. However, since the surgeries were conducted on ex vivo pig tissue (not living subjects), no direct harm or injury occurred. There is no indication of malfunction or misuse causing harm. The article focuses on the demonstration of capability and potential future implications rather than an incident causing harm. Therefore, this qualifies as an AI Hazard because the autonomous surgical robot's use could plausibly lead to harm in future real-world applications, but no harm has yet materialized.

AI-guided robot performs the first surgery without human help

2025-07-10
Correio Braziliense
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the SRT-H surgical robot) that autonomously performed surgery on human tissue, a direct use of AI in a high-stakes medical context. Although the surgeries were performed on deceased human tissue rather than on live patients, the system's operation bears directly on health outcomes and medical safety. Because the event describes an actual deployment and operation of an AI system with direct implications for health, rather than a hazard or potential future harm, it fits the definition of an AI Incident.

A robot trained with AI and surgery videos operates on a gallbladder without human help

2025-07-10
López-Dóriga Digital
Why's our monitor labelling this an incident or hazard?
An AI system (the surgical robot with machine learning capabilities) was used to perform a complex surgical procedure autonomously. Although the surgery was conducted on ex vivo pig tissues and no direct harm to humans occurred, the event involves the use of AI in a high-stakes medical context where malfunction or misuse could plausibly lead to injury or harm to humans in future applications. Since the event is a proof of concept without actual harm yet, but with clear potential for future harm if deployed clinically, it qualifies as an AI Hazard rather than an AI Incident. There is no indication of realized harm or violation of rights, so it is not an Incident. It is more than complementary information because it reports a concrete autonomous AI system performing surgery, not just a research update or governance response.

For the first time in history, a robot performs surgery guided by artificial intelligence without human assistance

2025-07-10
La Nación, Grupo Nación
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the SRT-H surgical robot) that autonomously performed complex surgical tasks, demonstrating real-time decision-making and adaptation. The surgeries were performed on extracted human organs (ex vivo) rather than on living patients, and the procedures were successful, with no harm, injury, or violation occurring. Rather than describing realized harm, the article reports a significant technological milestone whose future clinical deployment could lead to harm if errors occur. It therefore qualifies as Complementary Information, providing important context on AI development and its potential implications in healthcare.

The day robots began operating on their own has arrived: this was the first autonomous surgery with AI

2025-07-10
La Razón
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the surgical robot SRT-H) that autonomously performed surgical procedures, which fits the definition of an AI system. The use of the AI system is explicit and central to the event. No actual harm (injury, rights violation, property damage, etc.) has occurred since the surgeries were performed on ex vivo pig tissues, not living patients. However, the autonomous surgical capability implies a credible risk of harm if such systems are used in live human surgeries without proper controls. Therefore, the event is best classified as an AI Hazard, as it plausibly could lead to an AI Incident in the future. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI use with potential harm.

Autonomous robot surgeon removes organs with 100% success rate

2025-07-10
New Atlas
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous surgical robot) that has been developed and tested successfully in a controlled environment, with surgeries performed on models rather than live patients and no harm caused to humans. While the technology could plausibly lead to harm if deployed prematurely or if it malfunctions during real surgeries, the article reports no current harm or incidents. The event is therefore a demonstration of AI capability with credible future risks but no realized harm, fitting the definition of an AI Hazard rather than an AI Incident or mere complementary information.

A robot performs the first surgery without human help on a real patient

2025-07-10
LaSexta
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (the surgical robot SRT-H) that autonomously performed surgery. However, the surgery was conducted on ex vivo pig tissue, not on a living human or animal, and no harm or injury occurred. Therefore, this is not an AI Incident. The event demonstrates a plausible future risk and benefit scenario where autonomous surgical robots could be used on humans, which could plausibly lead to harm if malfunction or misuse occurs. Hence, it qualifies as an AI Hazard. It is not merely complementary information because the main focus is on the autonomous AI system performing surgery, not on responses or governance. It is not unrelated because it clearly involves AI and potential harm.

A robot trained with AI and surgery videos operates on a gallbladder without human help

2025-07-10
Zócalo Saltillo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the surgical robot with machine learning capabilities) performing autonomous surgery, which is a clear AI system involvement. The use is experimental and currently limited to ex vivo animal models, so no actual harm has occurred yet. However, the technology could plausibly lead to harm in future clinical applications if errors or malfunctions occur. Therefore, this qualifies as an AI Hazard, as it plausibly could lead to injury or harm to persons in future use, but no incident has yet occurred.

US: Robot Developed by Johns Hopkins University Performs Gallbladder Removal Surgery With 100% Accuracy on Life-Like Model

2025-07-10
LatestLY
Why's our monitor labelling this an incident or hazard?
The robot is an AI system performing autonomous surgical tasks. The event involves the use and development of this AI system. Since the surgery was performed on a life-like model and not on humans, no actual harm has occurred. Therefore, it does not qualify as an AI Incident. However, the potential for future harm exists if the system is deployed clinically without sufficient safeguards, making it an AI Hazard. The article primarily reports on the successful experimental use and potential future applications, so it fits best as an AI Hazard rather than an Incident or Complementary Information.

Autonomous surgery: will robots replace doctors?

2025-07-10
Tom's Hardware
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved, performing autonomous surgical tasks. The event stems from the AI system's use in surgery. No actual harm has occurred since the tests were on deceased animals, but the article discusses the plausible future risk of harm when applied to living patients. This fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to injury or harm. There is no indication of realized harm yet, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the AI system's autonomous surgical operation and its implications for future harm.

Robot performs surgery without human intervention in the US

2025-07-10
Poder360
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous surgical robot) performing a high-stakes medical procedure. Although no injury or harm occurred in the reported tests, the autonomous nature of the system and its application in surgery imply a credible risk of harm if errors or malfunctions occur in real-world use. Therefore, it fits the definition of an AI Hazard, as the development and use of this AI system could plausibly lead to injury or harm to patients in the future. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information or Unrelated.

Just like a real surgeon: Robot removes gallbladder with 100pc accuracy

2025-07-10
Malay Mail
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (an autonomous surgical robot) performing complex surgery with high accuracy and adaptability. The system's use directly relates to human health: because it performs surgical tasks autonomously, any malfunction or error could cause injury or harm. Although the surgery was performed on a life-like patient model rather than a human, the event demonstrates realized use of AI in a context where harm to health is the primary concern, and so meets the criteria for an AI Incident rather than a potential hazard or complementary information.

Experimental surgical robot performs gallbladder procedure autonomously

2025-07-10
Gulf Daily News Online
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous surgical robot) that has been developed and tested in a controlled experimental setting. While the AI system performs complex surgical tasks autonomously, the testing has been limited to pig organs outside of a clinical environment, with no reported injuries, health harms, or violations of rights. Therefore, no AI Incident has occurred. However, the autonomous surgical robot's capabilities and potential future deployment in human surgeries imply a plausible risk of harm if malfunction or misuse occurs. This makes it an AI Hazard, as the AI system's use could plausibly lead to injury or harm to patients in the future. The article does not focus on responses, governance, or updates to prior incidents, so it is not Complementary Information. It is not unrelated because it clearly involves an AI system with potential health impacts.

Artificial intelligence performs surgery

2025-07-10
El Diario de Yucatán
Why's our monitor labelling this an incident or hazard?
The AI system is clearly involved as it autonomously performs surgical tasks with real-time adaptation, fulfilling the definition of an AI system. However, the surgery was performed on ex vivo animal tissue, so no direct or indirect harm to humans or other protected entities occurred. There is no indication of malfunction or misuse causing harm. The article does not suggest plausible future harm from this development, only potential benefits. Thus, the event does not meet criteria for AI Incident or AI Hazard. It is a significant development that enhances understanding of AI capabilities in surgery, fitting the definition of Complementary Information.

Robot performs first fully autonomous surgical task - a blend of AI and precision? - NaturalNews.com

2025-07-10
NaturalNews.com
Why's our monitor labelling this an incident or hazard?
The robotic system SRT-H is an AI system performing autonomous surgery, which is a high-risk application with potential for injury or harm to patients if errors occur. Although the current tests were on pig cadavers and no human harm has occurred, the article explicitly discusses the potential for future clinical use and the need for rigorous testing and regulation. This indicates a credible risk that the AI system could plausibly lead to harm in the future. Therefore, this event is best classified as an AI Hazard rather than an AI Incident, as no actual harm has yet materialized.

SRT-H, the surgeon robot that operates autonomously: successful gallbladder removal

2025-07-10
Hardware Upgrade - Il sito italiano sulla tecnologia
Why's our monitor labelling this an incident or hazard?
The SRT-H robot is an AI system performing autonomous surgery, which directly relates to the development and use of AI. Although the operation was successful and conducted on a model, the nature of autonomous surgery inherently carries risks of injury or harm to patients if deployed in real clinical settings. Since no actual harm has occurred yet, but the system's use could plausibly lead to injury or harm in the future, this event fits the definition of an AI Hazard. It is not Complementary Information because it is not merely an update or governance response, nor is it unrelated as it clearly involves an AI system with potential for significant harm.

How successful is this autonomous surgical robot at removing organs?

2025-07-10
Government Technology
Why's our monitor labelling this an incident or hazard?
The article describes the development and successful testing of an autonomous AI-driven surgical robot that can perform organ removal surgeries independently on realistic models. While the robot has not yet been used on actual patients, its ability to understand and adapt in real time to surgical procedures indicates a significant advancement toward clinical viability. There is no mention of harm occurring or plausible harm from its use at this stage, only successful demonstrations in controlled settings. Therefore, this event does not describe an AI Incident or AI Hazard but rather a significant development in AI technology relevant to surgery, which fits best as Complementary Information.

First autonomous robotic surgery with ChatGPT: a robot performs a...

2025-07-10
Infosalus
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (the autonomous surgical robot SRT-H) performing a complex medical procedure on a real patient. The AI system's development and use are central to the event. Although the surgery was successful and no harm occurred, the nature of autonomous surgery inherently carries risks of injury or harm if the system malfunctions or makes incorrect decisions. The article highlights the system's ability to adapt and learn but does not report any actual injury or adverse outcome. Therefore, the event does not meet the criteria for an AI Incident (which requires realized harm) but fits the definition of an AI Hazard, as the autonomous surgical robot could plausibly lead to harm in future applications. The article does not focus on responses, governance, or updates to prior incidents, so it is not Complementary Information. It is clearly related to an AI system and its use, so it is not Unrelated.

The 'robot doctor' operates on its own, first procedure without human help

2025-07-09
Adnkronos
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (an autonomous surgical robot) performing a complex medical procedure on a human patient. The AI system's use directly affects the health of a person, fulfilling the criteria for an AI Incident under harm category (a) injury or harm to health. Although the article does not report any harm or injury occurring during the procedure, the use of an autonomous AI system in surgery inherently carries risks to patient health. However, since the article emphasizes successful completion with results comparable to expert surgeons and no harm reported, this is a realized use of AI with direct impact on health. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in a high-stakes health context with potential for harm and actual deployment on a patient.

Robot Achieves Complex Surgery Autonomously Through AI Training - Neuroscience News

2025-07-10
Neuroscience News
Why's our monitor labelling this an incident or hazard?
The robot is an AI system performing autonomous surgery, a high-risk application where AI's role is pivotal. The event involves the AI system's use (not just development) in a real surgical procedure on a lifelike patient, with the robot adapting and responding in real time. Although no harm occurred, the AI system's operation in this context directly relates to potential injury or harm to patients in the event of a malfunction. The article emphasizes the system's reliability and robustness, but the nature of autonomous surgery inherently involves significant risk. Therefore, this event is best classified as an AI Incident because it documents the AI system's autonomous operation in a critical task with direct implications for human health and safety, even if the outcome was successful. It is not merely a hazard (potential risk) or complementary information (response or governance), nor unrelated.

Medical revolution: a robot trained by ChatGPT removes a gallbladder without human intervention

2025-07-10
Diario de Noticias
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (an autonomous surgical robot trained with machine learning similar to ChatGPT) performing a complex medical procedure. The robot's operation directly affects patient health, fulfilling the criteria for potential harm. Although the article describes a successful surgery without injury, the nature of autonomous surgery entails credible risks of harm if the AI system malfunctions or makes incorrect decisions. Since no actual harm or injury is reported, this is not an AI Incident. Instead, it is an AI Hazard because the autonomous surgical robot's use could plausibly lead to injury or harm in future operations. The article does not focus on responses to past harm or legal/governance actions, so it is not Complementary Information. It is not unrelated because the AI system is central to the event.

The delicate surgical operation performed by a robot driven by artificial intelligence | NTN24.COM

2025-07-11
NTN24 | Últimas Noticias de América y el Mundo.
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous surgical robot) that has been developed and tested successfully. There is no harm or violation of rights reported; the robot performed accurately, albeit slower than human surgeons. The article focuses on the development and testing phase, with no indication of malfunction, misuse, or harm. Therefore, it does not qualify as an AI Incident or AI Hazard. It is not merely general AI news but provides complementary information about AI advancements in surgery, contributing to understanding AI's evolving role in healthcare without reporting harm or risk.

A robot removes a gallbladder without human help: the AI surgery that marks a turning point

2025-07-10
Radio Duna
Why's our monitor labelling this an incident or hazard?
The robot SRT-H is an AI system as it infers from input (videos, verbal commands) to generate outputs (surgical actions) autonomously and adaptively. The event involves the use of this AI system in a real surgical procedure. However, the article does not mention any harm or injury resulting from the surgery, nor any malfunction or failure. It highlights a technological advancement without reporting any realized or potential harm. Therefore, this is not an AI Incident or AI Hazard. It is a significant development in AI surgical systems but does not report harm or plausible harm. Hence, it is best classified as Complementary Information, providing context and understanding of AI progress in surgery.

Robot performs surgery on its own for the first time powered by ChatGPT

2025-07-10
End Time Headlines
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Surgical Robot Transformer-Hierarchy) performing complex autonomous surgery, which clearly fits the definition of an AI system. The robot's use is described in a research context on animal models, with no reported injury or harm to humans yet. However, the technology's future deployment in human surgeries could plausibly lead to harm (e.g., surgical errors, complications) if not properly controlled. Since no actual harm has occurred yet, but the potential for harm is credible and significant, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses, governance, or updates to prior incidents, so it is not Complementary Information. It is not unrelated because it clearly involves AI and its implications.

AI robot operates on a gallbladder without human assistance - La Verdad

2025-07-10
Diario La Verdad
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the SRT-H robot) performing autonomous surgery, demonstrating advanced AI capabilities in real-time decision-making and adaptation. However, the procedure was conducted on ex vivo pig tissues, not humans, and no harm or injury has been reported. Therefore, this event does not qualify as an AI Incident since no harm has occurred. Given the potential risks associated with autonomous surgical robots if deployed in clinical settings, this event plausibly could lead to harm in the future, qualifying it as an AI Hazard. The article focuses on the demonstration and potential of the AI system rather than reporting any actual harm or incident.

Watch robot perform gallbladder surgery in pig

2025-07-10
Cosmos Magazine
Why's our monitor labelling this an incident or hazard?
The robot uses AI (imitation learning, real-time decision making, and adaptation) to perform surgery autonomously, which qualifies it as an AI system. However, the surgeries were conducted on pig cadavers, so no injury or harm occurred. There is no indication of malfunction or misuse causing harm. The event shows potential for future autonomous surgical applications, but no plausible harm or incident is described as having occurred or being imminent. Therefore, it does not meet the criteria for AI Incident or AI Hazard. It is a significant research development providing context to the AI ecosystem, thus classified as Complementary Information.

Can a Robot Really Perform Surgery Without Human Help? - TechRound

2025-07-10
TechRound
Why's our monitor labelling this an incident or hazard?
The robot SRT-H is an AI system explicitly described as performing autonomous surgery tasks. The event involves the use of AI in a real surgical context (albeit on models and cadavers) with demonstrated autonomous decision-making and adaptation. No actual harm or injury has been reported; the surgeries were performed on anatomical models and pig cadavers, so no patients were harmed. However, the technology's nature and intended use imply a credible risk of injury or harm if deployed in real surgeries without adequate safeguards. Thus, the event represents a plausible future risk (AI Hazard) rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves an AI system with potential for harm.

Robot Performs Simulated Surgery on Its Own in Historic Breakthrough

2025-07-10
La Voce di New York
Why's our monitor labelling this an incident or hazard?
The SRT-H robot is an AI system performing autonomous surgery simulation, clearly involving AI development and use. Although no actual harm occurred since the surgery was simulated, the event plausibly leads to future AI incidents if such autonomous surgical systems are deployed in real clinical settings, where errors could cause injury or harm to patients. Therefore, this event qualifies as an AI Hazard due to the credible risk of future harm stemming from autonomous AI surgical systems.

USA: a robot performs an operation on its own, a historic breakthrough

2025-07-10
La Voce di New York
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the autonomous surgical robot SRT-H) that performed a complex task (surgery) autonomously. Although the operation was on a simulator and no actual harm occurred, the AI system's development and use in this context plausibly could lead to harm (injury or health harm) if deployed in real medical practice prematurely or malfunctioning. Hence, it fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because the article focuses on the event itself, not on responses or updates to prior incidents. It is not Unrelated because the AI system and its potential impact are central to the report.

Breakthrough in Robotic Surgery: Autonomous Robot Successfully Performs Gallbladder Removal - TUN

2025-07-10
tun.com
Why's our monitor labelling this an incident or hazard?
The robotic system is an AI system as it uses machine learning architecture similar to ChatGPT, enabling autonomous decision-making and adaptation during surgery. The event involves the use of AI in a medical procedure, but the surgery was performed on a lifelike model, not a human, so no injury or harm has occurred. There is no indication of malfunction or misuse causing harm. The event highlights a technological breakthrough and progress towards autonomous surgical systems, which is valuable complementary information for understanding AI's evolving role in healthcare. It does not report an AI Incident (no harm occurred) nor an AI Hazard (no plausible imminent harm is described). Hence, it fits the definition of Complementary Information.

Robot performs first realistic gallbladder surgery without human help -- and learns as it

2025-07-10
Knowridge Science Report
Why's our monitor labelling this an incident or hazard?
The robot, an AI system using machine learning similar to large language models, performed gallbladder surgery without human help, adapting to unexpected situations and learning from feedback. This involves the use of AI in a high-stakes medical context where errors could cause injury or harm to patients. Although the article does not report any harm occurring, the event demonstrates the AI system's use and capabilities in a real surgical setting. Since the surgery was completed successfully without harm, and no incident or malfunction causing harm is reported, this is not an AI Incident. However, the deployment of autonomous surgical robots carries plausible risks of injury or harm if errors occur in future uses. Therefore, this event qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm in future applications.

AI-guided autonomous system performs surgical procedure without human help | O Imparcial

2025-07-10
O Imparcial
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system performing autonomous surgery, which qualifies as an AI system by definition. However, the procedure was conducted on human cadaver tissue, with no reported injury or harm to living persons. There is no indication of malfunction or misuse causing harm. While the technology could plausibly lead to future AI Incidents if deployed clinically without adequate safeguards, the article focuses on the successful demonstration and technological advancement rather than potential risks or harms. Thus, it does not meet the criteria for AI Incident or AI Hazard. Instead, it provides complementary information about AI progress in medical robotics.

Robot doctor's autonomous procedure: first operation without human help

2025-07-10
Unica Radio
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the autonomous surgical robot) that performed a complex medical operation independently, using AI technologies such as computer vision and machine learning. The event involves the use of the AI system leading to a real-world outcome affecting a patient's health. Although the operation was successful and no harm occurred, the event is a direct use of AI in a high-stakes medical context, which inherently carries risks of injury or harm if malfunction or errors occur. Since the AI system's use directly impacts patient health, this qualifies as an AI Incident under the framework, as it involves the development and use of an AI system that has directly influenced a health-related outcome. The ethical concerns about responsibility and limits of autonomy further underscore the significance of this event as an AI Incident rather than a mere hazard or complementary information.

A robot performed a surgical operation on its own, a first in history - La Provincia Di Varese

2025-07-10
La Provincia di Varese, Il quotidiano di Varese online
Why's our monitor labelling this an incident or hazard?
The SRT-H robot is an AI system performing autonomous surgery, which is explicitly stated. The event involves the use of the AI system in a simulated environment, with no actual patient harm reported. Since the AI system's autonomous operation in surgery could plausibly lead to harm if deployed in real clinical scenarios, this qualifies as an AI Hazard. There is no indication of actual harm or rights violations yet, so it is not an AI Incident. The article is not merely complementary information or unrelated, as it focuses on the AI system's autonomous operation and its implications for future harm potential.

The 'robot doctor' operates on its own, first procedure without human help

2025-07-09
Sarda News - Notizie in Sardegna
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (an autonomous surgical robot) that has directly performed a medical procedure on a human patient. While the article does not report any harm or injury resulting from the surgery, it describes a successful autonomous operation with outcomes comparable to an expert human surgeon. There is no indication of malfunction, harm, or violation of rights. The event is a milestone in AI development and deployment but does not describe any realized or potential harm. Therefore, it does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides important context and progress in AI capabilities and applications in healthcare without associated harm or risk.

Da Vinci Code: First Autonomous Robot Surgery Achieved in Pig Cadavers

2025-07-10
Inside Precision Medicine
Why's our monitor labelling this an incident or hazard?
The AI system is clearly involved as an autonomous surgical robot performing complex tasks. However, the surgeries were conducted on pig cadavers, so no actual harm to living beings occurred. The article does not report any malfunction or misuse causing harm, nor does it indicate a credible risk of harm from this demonstration itself. The event is primarily a research milestone and demonstration of AI capabilities, with discussion of future potential and challenges. This fits the definition of Complementary Information, as it provides important context and understanding of AI development and its implications without describing an AI Incident or AI Hazard.

Study describes robot operating on gall bladder autonomously, 'milestone' in use of AI in clinical setting

2025-07-10
NewsDrum
Why's our monitor labelling this an incident or hazard?
An AI system (the autonomous surgical robot SRT-H) is explicitly involved, using AI algorithms for real-time decision-making and adaptation during surgery. The event involves the use of the AI system, and the robot has performed surgery successfully on human tissue ex vivo. There is no indication of harm or injury occurring; rather, the results are comparable to expert human surgeons. However, the event demonstrates the deployment of an AI system capable of autonomous surgical operations, which could plausibly lead to harm if malfunction or misuse occurs in future clinical applications. Therefore, this event qualifies as an AI Hazard because it plausibly could lead to harm in real clinical settings, but no harm has yet occurred.

AI-guided robot performs first surgery on a human

2025-07-11
UOL notícias
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the surgical robot guided by AI) that was used in a real human surgery. However, there is no indication of any harm, malfunction, or violation resulting from this use. The article reports a successful operation without any adverse outcomes. Therefore, this is not an AI Incident or AI Hazard. It is a significant development in AI application, but since it does not report harm or potential harm, it is best classified as Complementary Information, providing context and update on AI capabilities in medicine.

Historic breakthrough: a robot performs a realistic surgery without human help for the first time

2025-07-11
Globovision
Why's our monitor labelling this an incident or hazard?
The robot is an AI system as it performs autonomous surgical tasks requiring precise manipulation and adaptive responses to unpredictable biological tissue behavior. The successful completion of surgery without human help indicates the AI system's use. However, the article does not mention any harm or malfunction resulting from this event, nor does it indicate any potential for harm or risk. Therefore, this is a significant AI development but does not constitute an AI Incident or AI Hazard. It is best classified as Complementary Information as it provides important context on AI capabilities and advances in medical robotics without describing harm or risk.

AI robot achieves autonomy in surgery - 11/07/2025 - Equilíbrio e Saúde - Folha

2025-07-11
Folha de S.Paulo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as performing autonomous surgery with real-time adaptation and error correction, indicating AI system involvement. The system has been tested only on non-living tissue, so no actual harm has occurred yet. However, the article highlights the system's potential future use in live surgeries, which could plausibly lead to injury or harm to patients if the AI malfunctions or fails to respond correctly to unpredictable scenarios. Thus, it fits the definition of an AI Hazard, as the development and use of this AI system could plausibly lead to harm in the future. There is no indication of realized harm or legal/governance responses, so it is not an Incident or Complementary Information.

AI robot completes autonomous surgery, successfully removes pig gallbladder

2025-07-12
煎蛋
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as performing autonomous surgery, a complex medical task with direct implications for human health. While the surgeries were conducted on dead pigs, meaning no injury or harm has yet occurred, the AI system's development and use in this context plausibly could lead to harm or benefit when applied to living patients. The article discusses the system's capabilities, self-correction, and the need for regulatory oversight before human use, indicating potential future risks and benefits. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future but has not yet caused harm.

AI-guided robot performs first surgery on a human

2025-07-11
Terra
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the SRT-H surgical robot) that autonomously performed a real human surgery. Although no harm occurred, the AI system's use in a high-stakes medical procedure directly affects patient health and safety. Since the surgery was successful and no injury or harm was reported, this is not an AI Incident. However, the deployment of such an AI system in surgery carries plausible risks of harm if malfunction or errors occur in the future. Therefore, this event qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm in future cases. The article does not describe any actual harm or violation but highlights the potential for autonomous AI surgical systems to impact health outcomes.

"Like an experienced surgeon": a robot performed an operation almost entirely autonomously

2025-07-11
BFMTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (SRT-H) performing autonomous surgery, which is a high-stakes application with direct implications for human health. While the article reports a successful operation on a non-human patient model without harm, the development and use of such autonomous surgical AI systems inherently carry risks that could plausibly lead to injury or harm to patients in future real-world use. Therefore, this qualifies as an AI Hazard because it could plausibly lead to an AI Incident involving injury or harm to persons if errors or malfunctions occur during autonomous surgery.

For the first time, an AI robot performs surgery without any human help; see how it went

2025-07-11
Canaltech
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (robotic surgery system trained with neural networks) performing autonomous surgeries. Although the surgeries were conducted on animal organs and no harm to humans has occurred, the technology's future use in live human surgeries could plausibly lead to injury or harm, which qualifies as an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information since it reports a significant milestone with potential future risks, not just updates or responses to past incidents.

For the first time, a surgeon robot has autonomously completed an entire operation

2025-07-11
lastampa.it
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (SRT-H) autonomously controlling a surgical robot to perform complete surgical procedures on biological tissue, which fits the definition of an AI system influencing physical environments. The autonomous completion of eight gallbladder removals without human correction demonstrates the AI system's use leading to direct physical intervention, fulfilling the criteria for an AI Incident. Although the surgeries were on ex vivo animal organs and no harm occurred, the AI system's autonomous operation in a medical procedure is a realized event, not just a potential risk. The article does not describe harm but the direct use of AI in a high-stakes physical task with health implications, which qualifies as an AI Incident under the framework. The potential future applications and safety considerations discussed do not change the classification, as the autonomous surgical use has already occurred.

AI-guided robot performs first surgery on a human

2025-07-11
ISTOÉ Independente
Why's our monitor labelling this an incident or hazard?
The robotic system described is an AI system as it uses machine learning and AI techniques to perform autonomous surgical tasks, adapting in real-time and responding to voice commands. The event involves the use of this AI system in a real human surgery, which directly relates to human health and safety. Although the surgery was successful and no harm was reported, the involvement of the AI system in performing surgery on a human patient inherently carries risks of injury or harm if malfunction or errors occur. Therefore, this event qualifies as an AI Incident because the AI system's use directly impacts human health, fulfilling the criteria of an AI Incident even if the outcome was positive.

Incredible feat: a robot succeeds at the riskiest step...

2025-07-10
Futura
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the surgical robot controlled by AI) performing autonomous surgery, which is a high-stakes application with potential for serious harm if malfunction or errors occur. However, the described operation was conducted ex-vivo on a model, with no actual patient harm reported. Therefore, no realized harm has occurred yet, but the AI system's use could plausibly lead to harm in future real-world applications. This fits the definition of an AI Hazard, as the development and use of this AI system could plausibly lead to injury or harm to persons if deployed clinically without sufficient safeguards.

AI robot achieves full autonomy in a surgical procedure for the first time

2025-07-11
Jornal Estado de Minas | Notícias Online
Why's our monitor labelling this an incident or hazard?
The SRT-H system is an AI system performing autonomous surgery, which inherently involves significant risks to human health if malfunctioning or misused. However, the current tests were conducted on non-living tissue, with no reported injuries or harm. The article focuses on the system's capabilities and future potential, emphasizing the need for further validation and regulatory approval. Thus, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm in future clinical applications, but no incident has yet occurred.

An AI robot trained on surgery videos operates on a gallbladder without human help

2025-07-11
Vanguardia
Why's our monitor labelling this an incident or hazard?
The robot SRT-H is an AI system explicitly described as using machine learning and language-conditioned imitation learning to perform autonomous surgery. The event involves the use of this AI system to carry out a complex medical procedure without human intervention, which directly relates to potential harm or injury to patients if deployed clinically. Although the current tests were on ex vivo pig tissue (not live patients), the article emphasizes this as a transformative step toward clinical deployment, implying plausible future harm if the system malfunctions or makes errors in real surgeries. Given the direct involvement of an AI system in a high-stakes medical procedure with clear links to health and safety, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to an event with potential for injury or harm to people.

AI-guided robot performs first surgery on a human

2025-07-11
Estadão
Why's our monitor labelling this an incident or hazard?
An AI system (the surgical robot) was used in a real human surgery, demonstrating autonomous and adaptive capabilities. Although the surgery was successful and no harm was reported, the use of AI in autonomous surgery inherently carries risks of injury if errors or malfunctions occur, so the event could plausibly lead to an AI Incident in the future. Since no actual harm occurred, it is an AI Hazard rather than an AI Incident. The article does not focus on responses or governance, so it is not Complementary Information, nor is it unrelated.

An AI-powered robot performs a realistic surgical operation without human help for the first time

2025-07-11
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous surgical robot) that has performed a complex medical procedure without human aid. While the article highlights successful operation with expert-level precision, the nature of autonomous surgery inherently involves risks of injury or harm to patients if the AI system malfunctions or makes incorrect decisions. Since no actual harm is reported, but the potential for serious harm is credible and plausible, this event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves an AI system with potential for harm.

ChatGPT-powered robot successfully completes gallbladder removal surgery - NaturalNews.com

2025-07-11
NaturalNews.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (SRT-H) performing autonomous surgery, which meets the definition of an AI system. The event involves the use and development of this AI system. Although no actual harm has occurred because the surgery was on simulated tissue, the article discusses plausible future risks such as hacking, malfunction, and ethical concerns that could lead to injury or harm to patients if the system is deployed in real clinical settings without sufficient safety validation. Thus, the event represents a credible potential for harm (AI Hazard) rather than an incident with realized harm. The article also does not focus on responses or governance measures, so it is not Complementary Information. It is not unrelated because it clearly involves AI and its implications.

A robot trained with AI and surgery videos operated on a gallbladder without human help

2025-07-11
La Capital MdP
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous surgical robot using machine learning and natural language processing) performing surgery autonomously, which fits the definition of an AI system. The robot's use is described in a research/proof-of-concept context on animal tissues ex vivo, with no actual harm reported. Since no injury or harm has occurred, and the event is about demonstrating capability, it does not meet the criteria for an AI Incident. However, the autonomous surgical robot's development and use plausibly could lead to harm if deployed clinically, meeting the criteria for an AI Hazard. The article does not focus on responses, governance, or updates to prior incidents, so it is not Complementary Information. It is not unrelated because it clearly involves an AI system with potential for harm.

This robot performed a complex surgical operation without human help

2025-07-11
Santé Magazine
Why's our monitor labelling this an incident or hazard?
The robot's autonomous performance of a complex surgery, learning from video data and responding to voice commands, indicates the presence of an AI system. The event involves the use of this AI system in a critical healthcare operation where errors or malfunctions could cause injury or harm to a person. Although the operation was performed on a realistic patient (likely a simulation or model), the scenario includes unexpected emergencies typical of real medical conditions, implying a plausible risk of harm if deployed in real patients. Since no actual harm is reported but the AI system's use in such a high-risk context could plausibly lead to injury or harm, this qualifies as an AI Hazard rather than an AI Incident.

Robot performs a realistic surgery without human help for the first time - Diario Primicia

2025-07-11
Diario Primicia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous surgical robot using machine learning similar to ChatGPT) performing a complex medical procedure autonomously. However, since the surgery was performed on an anatomical model and not on humans, and no harm or injury has occurred, this does not qualify as an AI Incident. The article suggests that future human trials could happen, indicating plausible future harm potential. Therefore, this event fits the definition of an AI Hazard, as the development and use of this AI system could plausibly lead to harm in the future if applied in real human surgeries without adequate safeguards.

A robot surgeon performed the first operation entirely on its own, with 100% precision

2025-07-11
Wired
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous surgical robot) performing a complex medical procedure independently, which fits the definition of an AI system. The robot's use is described, but no harm or injury occurred, nor is there any indication of violation of rights or disruption. Since no harm has occurred, and the event does not describe a plausible risk of harm but rather a successful experiment, it does not qualify as an AI Incident or AI Hazard. It is not merely general AI news because it reports a significant milestone in AI application with potential implications. However, since no harm or plausible harm is described, the best classification is Complementary Information, as it provides important context and understanding of AI capabilities in surgery without reporting harm or risk.

A robot performs a surgical operation

2025-07-11
Le Journal de Québec
Why's our monitor labelling this an incident or hazard?
The robot is an AI system performing autonomous surgery, which is a complex real-time decision-making task. The article reports a successful operation with no harm or injury, so no AI Incident has occurred. Although future risks exist, the article does not emphasize plausible harm or risk, so it does not meet the criteria for an AI Hazard. The main focus is on the advancement and demonstration of AI capabilities in surgery, making this Complementary Information that enhances understanding of AI progress and potential.

ChatGPT assists in its first robotic OPERATION and successfully performs GALLBLADDER removal | El Popular

2025-07-11
Diario El Popular
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (SRT-H) that autonomously performed a surgical operation, a direct use of AI in a high-stakes health context. The AI system's involvement led directly to a medical procedure affecting patient health, fulfilling the criteria for an AI Incident under harm category (a): injury or harm to the health of a person or group of people. Although the surgery was successful and no harm occurred, the AI system's use in a real operation is a materialized event involving AI with direct health implications. This is not merely a potential hazard or complementary information but a concrete instance of AI deployment directly affecting health outcomes.

Autonomous surgical robot succeeds in operating without human help

2025-07-11
El Output
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous surgical robot SRT-H) that has been developed and tested in controlled experimental settings. While the system demonstrates advanced autonomous capabilities and adaptation, the surgeries were performed on models, not humans, so no actual harm or injury has occurred. The article discusses potential future use and challenges but does not describe any realized harm or incidents. Therefore, this event represents a plausible future risk scenario where autonomous surgical AI could lead to harm if deployed prematurely, qualifying it as an AI Hazard rather than an Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves an AI system with potential health impacts.

Historic surgery: robot with AI and ChatGPT removes gallbladder without human help

2025-07-11
MiMorelia.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a surgical robot with AI capabilities including generative AI and real-time error correction) performing a medical operation autonomously. This use of AI directly affects the health of a person (the patient undergoing gallbladder removal). Since the surgery was completed successfully, it implies the AI system's involvement led to a positive health outcome, but the event still qualifies as an AI Incident because it involves direct AI use in a high-stakes health context with potential for harm or benefit. The event is not merely a product announcement but reports a real-world use with direct health implications, fitting the definition of an AI Incident.

World first: a robot follows human voice commands and performs complex surgery on its own!

2025-07-10
Sciencepost
Why's our monitor labelling this an incident or hazard?
The robot is an AI system performing autonomous surgery, a high-risk application where errors could cause serious injury or death. Although the current tests are in research settings without reported harm, the autonomous nature and complexity of the system mean that its deployment could plausibly lead to harm. The article raises questions about responsibility and safety, indicating awareness of potential risks. Since no actual harm has occurred yet, but plausible future harm exists, this fits the definition of an AI Hazard rather than an AI Incident.

Autonomous Robot Surgeon Removes Organs With A 100% Success Rate

2025-07-12
Wonderful Engineering
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous surgical robot SRT-H) whose use directly led to successful surgical procedures on models and animal organs. Because the robot operated in a controlled experimental setting with no human patients, no injury or harm has occurred. The event nonetheless demonstrates the use of AI in a high-stakes medical context with potential for future real-world application. Since the report focuses on the system's successful use and capabilities rather than on harm or risk, it qualifies as neither an AI Incident nor an AI Hazard, and it is not unrelated because it involves AI system use. It is best classified as Complementary Information, providing context on AI development and its potential future impact in healthcare.

World's first surgery performed with ChatGPT: learn the details

2025-07-11
Sú Médico
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (including generative AI and models similar to ChatGPT) in performing autonomous robotic surgery on a human patient. This qualifies as an AI system involvement. However, the surgery was successful without reported injury or harm, so it does not meet the criteria for an AI Incident. There is no indication of plausible future harm or risk beyond general challenges and costs, so it is not an AI Hazard. The article mainly provides detailed information about the development, use, benefits, and challenges of AI in robotic surgery, which fits the definition of Complementary Information.

Robot guided by artificial intelligence performs first surgery on a human

2025-07-12
AGORA MT
Why's our monitor labelling this an incident or hazard?
The event describes the use of an AI system (the SRT-H surgical robot) that autonomously performed a complex surgical procedure on a human, which directly relates to the health and safety of the patient. Since the AI system's use directly led to a medical intervention on a person, this qualifies as an AI Incident under the definition of causing or influencing injury or harm to a person or group of people. Although the surgery was successful and no harm is reported, the involvement of AI in a critical health procedure with direct impact on a human patient fits the criteria for an AI Incident rather than a hazard or complementary information.

Autonomous Robot Surgery Trials Possible in a Decade

2025-07-11
WebProNews
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as an autonomous surgical robot trained via AI techniques to perform complex procedures. The current event is a successful experiment on pig cadavers, so no actual harm has occurred yet. However, the article discusses the potential for future human trials and the associated risks, including safety and ethical concerns. Given the plausible risk of injury or harm to patients if the AI system malfunctions or makes errors during autonomous surgery, this qualifies as an AI Hazard. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information since it focuses on the potential future risks and implications of the AI system's deployment rather than just updates or responses to past incidents.

World first: an AI-powered robot performs complex surgery without human intervention

2025-07-11
Trust My Science
Why's our monitor labelling this an incident or hazard?
The robot is an AI system performing autonomous surgical tasks. The event involves the use and development of this AI system. However, the surgeries were performed on dead pig organs, so no injury, health harm, or other harms occurred. The article discusses the potential for future clinical use, which could plausibly lead to harm if not properly managed, but no such harm has yet occurred. Thus, it qualifies as an AI Hazard due to the plausible future risk of harm from autonomous surgical AI systems, but not an AI Incident since no harm has materialized. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves an AI system with potential impact.

An autonomous robot trained with AI performs the first surgical intervention without human help

2025-07-11
Gaceta Médica
Why's our monitor labelling this an incident or hazard?
The robot is an AI system explicitly described as using machine learning to perform autonomous surgery. The event reports a successful autonomous operation on an animal, a direct use of AI with implications for human health: malfunctions or errors in such a critical task could cause injury or harm. Because the surgery was successful and no harm occurred, this is not an AI Incident; rather, it represents a plausible future risk scenario in which autonomous surgical AI systems could cause harm. It therefore qualifies as an AI Hazard, as the development and use of this AI system could plausibly lead to injury or harm in future clinical applications.

A robot that uses AI and was trained on surgery videos performed part of a gallbladder removal without human help

2025-07-11
Colima Noticias
Why's our monitor labelling this an incident or hazard?
The robot uses AI trained on surgical videos to autonomously perform surgery, directly affecting patient health. The event involves the use of an AI system in a real medical procedure, which inherently carries risk of injury or harm. Even though the surgery was successful and supervised, the AI system's autonomous role in surgery meets the definition of an AI Incident because it directly influences a procedure with potential for harm. The article does not describe a mere potential risk or future hazard but an actual use of AI in a critical health context, thus qualifying as an AI Incident.

AI-guided robot performs first surgery on a human

2025-07-11
RD - Jornal Repórter Diário
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the SRT-H surgical robot) performing a real surgery on a human patient. The AI system's development, use, and autonomous decision-making directly influenced the surgical procedure. Given that surgery inherently involves risks of injury or harm to the patient, the AI system's role in performing the operation means it has directly led to potential harm, fulfilling the criteria for an AI Incident. Although the surgery was successful and no harm was reported, the AI system's involvement in a critical health procedure on a human patient meets the definition of an AI Incident because the AI system's use directly impacts human health and safety.

Historic advance in surgery: autonomous robot operates without direct human help

2025-07-11
Canal 2
Why's our monitor labelling this an incident or hazard?
The robot SRT-H is an AI system as it uses machine learning and autonomous decision-making to perform surgery. The event involves the use of this AI system in a realistic surgical context, demonstrating capabilities that could directly impact human health. Although the current tests are on models and no actual harm has occurred, the article explicitly mentions potential future application in humans, which could plausibly lead to injury or harm if the system malfunctions or makes incorrect decisions. Therefore, this event qualifies as an AI Hazard, as it plausibly could lead to an AI Incident in the future but has not yet caused harm.

Historic advance: a robot performs a realistic surgery without human help for the first time

2025-07-11
NOTICIAS - LA JORNADA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous surgical robot using machine learning) performing a complex medical procedure without human help. However, the surgery was conducted on an anatomical model, not on a living human or animal, and no harm or injury occurred. The article does not report any injury, violation of rights, or harm caused by the AI system. Instead, it reports a successful demonstration of AI capabilities with potential future applications. Since no harm has occurred but the AI system's use could plausibly lead to future incidents (e.g., if deployed in humans), this qualifies as an AI Hazard rather than an AI Incident. It is not Complementary Information because the article focuses on the event itself, not on responses or updates to prior incidents. It is not Unrelated because it clearly involves an AI system with potential implications for harm in the future.

Historic surgery: AI robot operates on a human for the first time - Boca do Povo News

2025-07-11
Boca do Povo News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the SRT-H robot with machine learning capabilities) performing a complex medical procedure autonomously. However, there is no indication of any injury, harm, or malfunction resulting from the AI system's use. The surgery was successful and the outcome positive. Therefore, this does not qualify as an AI Incident since no harm occurred. It also does not represent an AI Hazard because no plausible future harm is indicated or implied. The article reports a milestone achievement and provides context on the AI system's capabilities, which fits the category of Complementary Information as it enhances understanding of AI developments in healthcare without describing harm or risk.

Robot Performs First Realistic Surgery Without Human Help

2025-07-11
Human Progress
Why's our monitor labelling this an incident or hazard?
The robot is an AI system performing complex autonomous surgical tasks, which involves real-time decision-making and learning. Although the surgery was performed on a lifelike patient (likely a simulation), the technology's deployment in real surgeries could lead to injury or harm to patients if malfunctions or errors occur. Since no actual harm has occurred yet, but plausible future harm exists, this event qualifies as an AI Hazard.

An AI-powered surgical robot performs the first complex operation without human intervention | TF1 INFO

2025-07-11
TF1 INFO
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (the autonomous surgical robot) used in a medical operation affecting a human patient. The AI system's development and use are central to the event. No harm or injury is reported; the operation was successful and comparable to expert human surgeons. However, the AI system's autonomous decision-making in surgery inherently carries a credible risk of harm if errors or malfunctions occur in the future. Thus, the event plausibly could lead to an AI Incident (harm to health) if things go wrong. Since no actual harm occurred, it is not an AI Incident but an AI Hazard. The event is not merely complementary information because it reports a concrete autonomous operation, not just a research update or governance response. It is not unrelated because the AI system is central to the event and its potential risks.

Robot surgeons are no longer fiction, thanks to science | Several countries are successfully training 100 percent autonomous surgical systems

2025-07-12
Página/12
Why's our monitor labelling this an incident or hazard?
The event involves an AI system autonomously performing surgical procedures, which qualifies as an AI system by definition. The AI's use is demonstrated in practice on animal models, showing successful autonomous operation. No harm to humans or property has occurred, so it is not an AI Incident. However, the article discusses the plausible future risks and ethical questions related to autonomous surgical AI systems, indicating a credible potential for harm if such systems malfunction or are misused in human surgeries. Therefore, this qualifies as an AI Hazard rather than an Incident or Complementary Information.

Robot trained with artificial intelligence performs successful gallbladder surgeries without human help - La Tercera

2025-07-12
LA TERCERA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the surgical robot trained with AI) performing autonomous surgery, which fits the definition of an AI system. The use of the AI system in this experimental context has not caused any direct or indirect harm yet, as the surgeries were performed on pig organs ex vivo. However, the article highlights the potential for future deployment in live surgeries, where the AI system could plausibly lead to harm (e.g., injury to patients) if it malfunctions or is used prematurely. Thus, the event is best classified as an AI Hazard, reflecting credible future risks rather than an incident with realized harm. It is not Complementary Information because the article focuses on the experimental results and implications rather than updates or responses to prior incidents. It is not Unrelated because the AI system and its potential impacts are central to the report.

An artificial intelligence learns to perform surgical operations on its own, thanks to human language

2025-07-13
Les Numériques
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (SRT-H) that autonomously performs surgical operations on a synthetic patient, guided by human voice commands. Although the tests have not yet been conducted on real humans, the AI system's development and use in this context could plausibly lead to injury or harm if applied in real surgical settings. Hence, this qualifies as an AI Hazard due to the credible risk of harm inherent in autonomous surgical AI systems.

Autonomous ChatGPT-based robot removes gallbladder with 100% precision

2025-07-12
www.expreso.ec
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (SRT-H) performing autonomous surgical tasks, which fits the definition of an AI system. The use of the system in a realistic but ex vivo environment means no actual harm has occurred yet, so it is not an AI Incident. However, the article discusses the potential for future use in live surgeries, where malfunction or errors could plausibly lead to injury or harm to patients. Therefore, this qualifies as an AI Hazard, as the development and use of this AI system could plausibly lead to an AI Incident in the future if safety is not ensured.

A robot performs the first gallbladder operation without human assistance

2025-07-12
Noticias y Protagonistas - Radio
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous surgical robot) performing complex tasks without human intervention, which qualifies as an AI system under the definitions. However, the surgeries were conducted on realistic models, not on actual patients, and no injury, harm, or violation of rights has occurred. Therefore, there is no realized harm to classify as an AI Incident. The successful completion of these surgeries demonstrates the system's capabilities but does not indicate any current or past harm. The potential future use on real patients could plausibly lead to harm if failures occur, but the article does not report any such incidents or near misses. Hence, the event is best classified as Complementary Information, as it provides important context and advances in AI surgical systems without reporting harm or credible imminent risk of harm.

Robot performs first realistic surgery without human help

2025-07-09
The Hub
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (SRT-H) performing autonomous surgery without human intervention, trained via machine learning and capable of real-time decision-making and adaptation. This qualifies as an AI system under the definitions. The event involves the use of the AI system in a surgical context, but the surgery was performed on a lifelike patient model, not a real human, and no harm or injury is reported. Therefore, no AI Incident (harm) has occurred. However, the autonomous surgical system's capabilities imply a credible risk of future harm if deployed clinically without proper controls, making this an AI Hazard. The article focuses on the demonstration and proof of concept rather than any harm or incident. Hence, the classification is AI Hazard.

Robot performs the first realistic surgery without human help

2025-07-09
SAPO
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Surgical Robot Transformer-Hierarchy) performing autonomous surgery, which is a direct use of AI. Although no harm occurred, the autonomous surgical system's deployment in real medical settings could plausibly lead to injury or harm if malfunctions or errors happen. Since the article reports a successful experiment without harm, it does not qualify as an AI Incident. However, the potential for future harm from autonomous surgical robots is credible, making this an AI Hazard. The article focuses on the demonstration and potential of the system rather than reporting harm or legal/governance responses, so it is not Complementary Information. Therefore, the classification is AI Hazard.

For the first time, a robot operates on a person without human interference

2025-07-10
Olhar Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (SRT-H) autonomously performing surgery on a human patient, which is a direct use of AI with potential to cause injury or harm to health (harm category a). The AI system was trained and used to perform a complex surgical procedure without human interference, indicating direct involvement in a high-risk medical intervention. Even though no harm is reported, the event involves actual use of AI in a context where harm is a direct risk, meeting the criteria for an AI Incident. It is not merely a potential risk (hazard) or a response/update (complementary information), nor unrelated to AI. Therefore, the classification is AI Incident.

Autonomous robot performs first realistic surgery without human help

2025-07-07
Pulse24.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (an autonomous surgical robot using machine learning architectures similar to ChatGPT) performing complex surgical tasks without human help. Although the current trials were on pig gallbladders and no harm occurred, the nature of the AI system's use in surgery inherently carries risks of injury or harm to patients if deployed clinically. The event is a proof of concept demonstrating the AI system's capability and potential future use, which could plausibly lead to harm. Since no actual harm has occurred yet, it does not qualify as an AI Incident. It is not merely complementary information because the main focus is on the autonomous AI system's operation and its implications for future harm. Therefore, the classification is AI Hazard.

Robot performs the first realistic surgery without human help - Renascença

2025-07-09
Rádio Renascença
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the surgical robot) performing autonomous surgery, which fits the definition of an AI system. The robot's use in surgery is described, but the surgery was conducted on a realistic model, not a human, and no harm or injury occurred. Therefore, it does not qualify as an AI Incident. However, the autonomous surgical robot's development and potential future deployment could plausibly lead to harm (e.g., injury to patients) if errors occur, making this an AI Hazard. The article does not describe any actual harm or malfunction, so it is not an Incident. It is not merely complementary information because the main focus is on the autonomous surgery event itself, not on responses or governance. Hence, the classification is AI Hazard.
Thumbnail Image

Autonomous robots perform first-of-its-kind surgery using AI to follow senior surgeon's vocal instructions

2025-07-10
pcgamer
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous surgical robot using imitation learning and voice commands) that was used to perform surgery. Although the surgery was on a pig and not a human, the robot's ability to operate autonomously and respond to emergencies indicates a significant advancement in AI-assisted medical procedures. There is no indication of harm occurring; rather, this is a demonstration of capability with potential future benefits for healthcare access. Since no harm has occurred but the AI system's use could plausibly lead to future incidents in human surgery, this qualifies as an AI Hazard rather than an Incident. It is not merely complementary information because the main focus is on the AI system's autonomous operation and its implications, not on responses or governance. Therefore, the classification is AI Hazard.
Thumbnail Image

Historic: a robot with ChatGPT-based AI performed gallbladder surgery without human intervention

2025-07-10
www.elcolombiano.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a surgical robot with AI based on ChatGPT architecture) performing autonomous surgery, which is a clear AI system involvement. The use of the AI system in autonomous surgery could plausibly lead to harm to human health if deployed in real clinical environments, fulfilling the criteria for an AI Hazard. There is no indication of actual harm or malfunction in this reported event, so it does not meet the threshold for an AI Incident. The article focuses on the development and demonstration of the AI system's capabilities, not on a realized harm or legal/governance response, so it is not Complementary Information. Therefore, the classification is AI Hazard.
Thumbnail Image

A robot removes a gallbladder without human intervention: the first autonomous surgery with ChatGPT

2025-07-10
telecinco
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an autonomous surgical robot using machine learning and AI similar to ChatGPT) performing surgery without human intervention. While the surgery was successful and no harm is reported, the use of autonomous AI in surgery inherently carries risks of injury or harm to patients if malfunctions or errors occur. Given the AI system's direct role in performing surgery, this event represents a significant AI Hazard because it plausibly could lead to injury or harm if the system fails or malfunctions in future uses. There is no indication of actual harm occurring yet, so it is not an AI Incident. The article is not merely complementary information since it reports a concrete autonomous surgical procedure, not just a research update or governance response. Therefore, the classification is AI Hazard.
Thumbnail Image

AI surgeon? Robot operates without human help for the first time

2025-07-10
geo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an AI-trained surgical robot) performing autonomous surgery steps, which clearly fits the definition of an AI system. The robot's autonomous operation in surgery could plausibly lead to harm if errors occur in real clinical settings, such as injury to patients or other health-related harms. Since the surgeries were done on non-living animal tissue and no harm to living beings occurred, this is not an AI Incident. However, the autonomous surgical robot's development and use in this experimental context plausibly could lead to future harm if deployed clinically without sufficient validation and safeguards, qualifying it as an AI Hazard. The article does not report any actual harm or legal or rights violations, nor does it focus on governance or societal responses, so it is not Complementary Information. Therefore, the classification is AI Hazard.
Thumbnail Image

Next-Gen Surgical Robots with Enhanced Flexibility

2025-07-10
AZoRobotics.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as performing autonomous surgery, which directly relates to potential injury or harm to patients (harm category a). Although no harm has yet occurred, the nature of the AI system's use in high-stakes medical procedures means it could plausibly lead to injury or harm if deployed widely without sufficient safeguards. Hence, it fits the definition of an AI Hazard rather than an AI Incident. The article focuses on the development and testing phase, with no indication of realized harm or legal or rights violations. Therefore, the classification as AI Hazard is appropriate.
Thumbnail Image

AI-Controlled Robot Performs Gallbladder Removal With "100 Percent Accuracy"

2025-07-12
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (an autonomous surgical robot) performing a complex medical procedure. However, the procedure was conducted on a mannequin, so no injury, harm, or violation of rights occurred. There is no indication of malfunction or misuse leading to harm, nor is there a credible risk of harm described. The event is a research milestone demonstrating AI capabilities and potential future applications, which fits the definition of Complementary Information as it enhances understanding of AI developments without describing an incident or hazard.
Thumbnail Image

Autonomous robot masters gallbladder operation

2025-07-13
futurezone.at
Why's our monitor labelling this an incident or hazard?
An AI system (SRT-H robot) was developed and used to perform autonomous surgery on ex-vivo models, demonstrating advanced AI capabilities in a high-stakes medical context. Although no actual harm occurred, the article highlights the potential for such systems to operate autonomously in real clinical settings, where malfunction or misuse could directly cause injury or harm to patients. This plausible future risk aligns with the definition of an AI Hazard. Since no actual harm has yet occurred, it is not an AI Incident. The article is not merely complementary information as it focuses on the autonomous operation and its implications rather than responses or governance. Hence, AI Hazard is the appropriate classification.
Thumbnail Image

Robot performs surgery without a human controlling its hands - Earth.com

2025-07-13
Earth.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous surgical robot SRT-H) performing complex surgical tasks without human control, which clearly fits the definition of an AI system. The use of the system in surgery inherently carries risks of injury or harm to patients if errors occur. Although the robot has only been tested on lifelike models so far and no actual patient harm has occurred, the article explicitly discusses potential failure modes and the need for regulatory validation before live use. This indicates a credible risk that the AI system could plausibly lead to harm in the future. Hence, the event is best classified as an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the autonomous surgical performance and its implications, not on responses or governance. It is not unrelated because the AI system and its potential for harm are central to the report.
Thumbnail Image

Robot successfully performs gallbladder surgery

2025-07-12
Cybernews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (SRT-H) that autonomously performs surgical tasks, making independent decisions and adapting in real time, which fits the definition of an AI system. The surgeries were performed successfully on pig organs without human mechanical help, indicating the AI system's use in a critical medical procedure. However, no harm or injury to humans or animals is reported; the surgeries were conducted on organs from dead pigs, and the event is presented as a research breakthrough rather than an incident causing harm. There is no indication of malfunction or misuse leading to harm. The event highlights a technological advancement with potential future implications for healthcare but does not describe any realized harm or violation. Therefore, it does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information, providing important context on AI development and its potential impact on surgical autonomy.
Thumbnail Image

AI Robot Performs Gallbladder Surgery, Leaves Researchers Amazed | Watch

2025-07-14
News18
Why's our monitor labelling this an incident or hazard?
The AI system (robotic surgery AI) is explicitly involved in performing surgery autonomously, which fits the definition of an AI system. However, the event describes a successful experiment on a deceased pig with no harm caused. The robot's autonomous corrections and operation demonstrate advanced AI use but do not result in any injury, rights violation, or other harms. The article also notes that full autonomy and safety regulations are still pending before human use. Therefore, this event represents a plausible future risk and advancement but no realized harm or incident. It is best classified as an AI Hazard, as the technology could plausibly lead to incidents in the future once deployed on live subjects, but no incident has yet occurred.
Thumbnail Image

Robot autonomously completes complex surgery guided by voice and video

2025-07-14
Robo Daily
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Surgical Robot Transformer-Hierarchy, SRT-H) that autonomously performs complex surgical procedures. Although the surgeries were conducted on pig cadavers and not humans, the AI system's use in performing intricate surgical tasks with expert-level accuracy and adaptability is clearly described. This demonstrates the AI system's capability to directly influence physical environments in a way that could lead to injury or harm if malfunctioning or misused. Given the critical nature of surgery and the potential for harm to patients, this development constitutes an AI Hazard because it plausibly could lead to harm if deployed in real clinical settings before full validation and safety assurances. However, since no actual harm or injury has been reported in this demonstration, and the surgeries were performed in controlled experimental conditions, it does not qualify as an AI Incident at this stage. Therefore, the event is best classified as an AI Hazard, reflecting the credible risk associated with autonomous surgical AI systems.
Thumbnail Image

A robot that learned surgery just by watching videos

2025-07-12
Khabar Online
Why's our monitor labelling this an incident or hazard?
The robot is an AI system explicitly described as using AI similar to ChatGPT for autonomous surgical operations. It has performed multiple independent surgeries on human-like models, indicating real use of AI in a critical health-related task. Although the article does not mention any harm occurring, the deployment of such autonomous surgical AI systems carries plausible risks of injury or harm to patients if malfunction or errors occur. Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to harm in real surgical settings.
Thumbnail Image

Autonomous robot removed the gallbladder from the body

2025-07-12
Mehr News Agency
Why's our monitor labelling this an incident or hazard?
The robotic system clearly involves AI through machine learning and autonomous decision-making during surgery. The event involves the use and development of an AI system performing complex medical procedures. While no actual harm has occurred since the surgeries were on simulated models, the system's capabilities could plausibly lead to harm if deployed in real surgeries without adequate safeguards. Therefore, this event represents an AI Hazard due to the credible risk of harm from autonomous surgical AI systems in real-world medical contexts.
Thumbnail Image

This robot performs surgery without human help!

2025-07-13
Hamshahri Online
Why's our monitor labelling this an incident or hazard?
The described surgical robot is an AI system capable of autonomous operation in complex, real-world scenarios. While the article reports a successful demonstration on a synthetic patient without actual harm, the nature of the system and its intended use in surgery imply a credible risk of injury or harm to humans if errors occur. Since no actual harm has occurred yet, but plausible future harm is inherent in the system's deployment, this event qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the autonomous AI system's operation and its potential implications for safety.
Thumbnail Image

New AI-assisted surgical robot successfully performed gallbladder surgery

2025-07-13
Jahan Mana (news website)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the autonomous surgical robot) performing a complex task successfully in experimental conditions without causing harm. There is no indication of injury, rights violations, or other harms to humans or communities. The article explicitly states that clinical human trials are not yet underway, so no plausible immediate harm is present. The main focus is on the technological achievement and its potential future impact, fitting the definition of Complementary Information rather than an Incident or Hazard.
Thumbnail Image

When the surgical robot outdid the physician

2025-07-13
Salamat News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an advanced surgical robot with AI capabilities) performing autonomous surgery. However, there is no indication of any injury, harm, or violation of rights occurring due to the robot's operation. The article describes a successful demonstration on a synthetic patient, implying no actual patient harm. Therefore, this is not an AI Incident. Also, since the robot has already performed the task successfully without harm, and the article does not suggest plausible future harm, it does not qualify as an AI Hazard. The article primarily provides information about a technological advancement and research progress, which fits the category of Complementary Information as it enhances understanding of AI developments in surgery without reporting harm or risk.
Thumbnail Image

The robot revolution in medicine: American robot performed surgery without human help

2025-07-12
ana.ir
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an AI-powered surgical robot) that autonomously performed complex surgery on a pig's gallbladder without human help. This clearly meets the definition of an AI system and its use. Although no harm to humans has occurred, the technology's deployment in human surgery could plausibly lead to injury or harm, fulfilling the criteria for an AI Hazard. The article discusses the potential future risks and the need for safety and oversight, indicating plausible future harm. Therefore, the event is best classified as an AI Hazard rather than an AI Incident or Complementary Information.
Thumbnail Image

SRT-H surgical robot successfully removed a gallbladder without human intervention

2025-07-10
Taknak (world and Iran technology news)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the surgical robot) performing autonomous surgery, which is a clear AI system involvement. Although no actual harm has occurred since the procedure was done on a simulated body, the nature of the AI system and its application in surgery implies a credible risk of harm in future real-world use. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to injury or harm to persons if deployed clinically. It is not an AI Incident because no harm has yet occurred, nor is it merely complementary information or unrelated news.
Thumbnail Image

Surgery without human intervention! | A technology that operates precisely and flawlessly

2025-07-14
همشهری آنلاین
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (autonomous surgical robots with AI capabilities similar to ChatGPT or Gemini) in performing surgeries. Although the surgeries on pigs have been successful without harm, the article primarily focuses on the potential future use of these AI systems in human surgeries, which could plausibly lead to significant impacts on health outcomes. Since no actual harm has occurred yet in humans, but there is a credible risk and potential for harm or benefit, this qualifies as an AI Hazard rather than an AI Incident. The article does not report any realized harm or incident but highlights the plausible future implications and challenges.