Scientists Warn of Uncontrollable Superintelligent AI Risk


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple studies by international researchers, including those at the Max Planck Institute, warn that future superintelligent AI systems could become uncontrollable and pose significant risks to humanity. Theoretical calculations suggest that effective containment or control of such advanced AI may be fundamentally impossible, highlighting a credible future hazard. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article does not describe any actual harm or incident caused by AI systems, nor does it report a specific event where AI malfunctioned or was misused. Instead, it presents a theoretical warning about plausible future risks associated with superintelligent AI. Therefore, it fits the definition of an AI Hazard, as it concerns circumstances where AI development could plausibly lead to harm in the future. [AI generated]
AI principles
Safety; Robustness & digital security; Accountability; Democracy & human autonomy; Transparency & explainability

Industries
Government, security, and defence; Digital security

Affected stakeholders
General public

Harm types
Physical (death); Public interest

Severity
AI hazard

AI system task
Goal-driven organisation; Reasoning with knowledge structures/planning


Articles about this incident or hazard


Scientists warn that humanity will not be able to control superintelligent machines

2021-01-13
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The article does not describe any actual harm or incident caused by AI systems, nor does it report a specific event where AI malfunctioned or was misused. Instead, it presents a theoretical warning about plausible future risks associated with superintelligent AI. Therefore, it fits the definition of an AI Hazard, as it concerns circumstances where AI development could plausibly lead to harm in the future.

Scientists warn that humans would be unable to control the coming superintelligent machines

2021-01-13
infobae
Why's our monitor labelling this an incident or hazard?
The article involves AI systems conceptually (superintelligent AI) and discusses their development and control challenges. It highlights a credible risk that such AI could become uncontrollable and dangerous, which could plausibly lead to harm in the future. Since no harm has yet occurred and the discussion is about theoretical limits and future possibilities, this fits the definition of an AI Hazard rather than an AI Incident. It is not merely general AI news or complementary information because it focuses on the plausible risk of harm from AI development.

Humans will not be able to control artificial intelligence, scientists warn

2021-01-13
Entrepreneur
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (robots, autonomous vehicles, superintelligent AI) and discusses their development and use. It does not report any realized harm or incident but warns about plausible future harm due to uncontrollability of AI. Therefore, it fits the definition of an AI Hazard, as it describes a credible risk that AI could lead to harm in the future. It is not Complementary Information because it is not updating or responding to a past incident, nor is it unrelated as it clearly involves AI and its risks.

Theoretical calculations suggest we will not be able to control superintelligent machines

2021-01-13
WWWhat's new
Why's our monitor labelling this an incident or hazard?
The article centers on theoretical findings about the potential uncontrollability of future superintelligent AI systems, which could plausibly lead to significant harms if such AI were developed and misbehaved. Although no harm has yet occurred, the discussion clearly points to credible future risks inherent in the development and use of superintelligent AI. Therefore, this qualifies as an AI Hazard because it describes a plausible future risk of harm stemming from AI systems' development and use, without any current incident or realized harm.

How could an overly powerful artificial intelligence affect humanity? Here is what scientists say

2021-01-13
Noticias de Venezuela y el Mundo - Caraota Digital
Why's our monitor labelling this an incident or hazard?
The article centers on theoretical and future risks related to superintelligent AI, based on academic research. It does not describe any realized harm or incidents caused by AI systems, nor does it report any current malfunction or misuse. Therefore, it fits the definition of an AI Hazard, as it plausibly points to future risks of harm from AI systems that could be uncontrollable and dangerous, but no actual incident has occurred yet.

Controlling a super-developed AI will simply be impossible, new calculations indicate

2021-01-14
TekCrispy
Why's our monitor labelling this an incident or hazard?
The article centers on a scientific study that concludes controlling a superdeveloped AI in the future would be impossible, based on theoretical and mathematical reasoning. This represents a plausible future risk (hazard) rather than a realized harm or incident. There is no mention of any current AI system causing injury, rights violations, or other harms. The discussion is about potential future challenges and risks inherent in AI development, fitting the definition of an AI Hazard. It is not merely general AI news or complementary information because it focuses on the credible risk of uncontrollable AI leading to harm, even if that harm has not yet occurred.

Humanity will not be able to control the machines

2021-01-13
elsiglocomve
Why's our monitor labelling this an incident or hazard?
The article does not describe any actual harm or incident caused by AI, nor does it report on a specific event involving AI malfunction or misuse. Instead, it presents a theoretical exploration and warning about the plausible future risks of superintelligent AI escaping human control. This fits the definition of an AI Hazard, as it concerns a credible potential for harm stemming from AI development, but no harm has yet occurred.

Humans could not control coming superintelligent machines, scientists warn

2021-01-13
Noticias Oaxaca Voz e Imagen
Why's our monitor labelling this an incident or hazard?
The article discusses the potential future risk posed by superintelligent AI systems that could surpass human intelligence and become uncontrollable, which could lead to harm to humanity. The study's theoretical results indicate that containment algorithms are impossible to construct, implying a plausible future hazard. Since no actual harm or incident has occurred yet, but the risk is credible and directly related to AI development and use, this qualifies as an AI Hazard under the OECD framework.

Useful but dangerous allies... could they destroy us?

2021-01-14
Revista Bohemia
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (superintelligent AI) and their potential uncontrollable behavior. The event stems from the development and theoretical understanding of AI capabilities and containment. Although no harm has yet occurred, the research indicates that such AI could plausibly lead to significant harm if developed and deployed without effective control. Therefore, this qualifies as an AI Hazard because it concerns a credible risk of future harm from AI systems, not an actual incident or realized harm. The article does not describe any realized harm or incident, nor does it focus on responses or updates to past events, so it is not Complementary Information.

Artificial intelligence: humanity in danger

2021-01-15
La Pionera de Clorinda
Why's our monitor labelling this an incident or hazard?
The article involves AI systems conceptually, specifically superintelligent AI, and discusses the potential for future harm due to uncontrollability. However, no actual harm or incident has occurred yet. The focus is on theoretical exploration and warnings about possible future dangers, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. There is no mention of a specific AI system malfunction or misuse causing harm, nor is it a governance or societal response update.

What do this depiction of the bald eagle and the neural network have to do with the future artificial intelligence strategy of the U.S.

2021-01-13
Nuevo Periodico
Why's our monitor labelling this an incident or hazard?
The article focuses on the launch of a government office to coordinate AI initiatives and policy, including references to AI strategy and international cooperation. It does not report any AI incident or hazard, nor does it describe any realized or potential harm caused by AI systems. The main narrative is about policy and governance developments, which fits the definition of Complementary Information as it provides context and updates on AI governance without describing specific harms or risks.

KDI: "Only 3.6% of AI technology companies"

2021-01-14
Nuevo Periodico
Why's our monitor labelling this an incident or hazard?
The article does not mention any AI system causing injury, rights violations, infrastructure disruption, or other harms. It also does not describe any incident or hazard involving AI malfunction or misuse. The content is primarily about survey results and policy recommendations, which fits the definition of Complementary Information as it provides context and understanding of AI adoption and perceptions without reporting a specific incident or hazard.

Humans would be unable to control an artificial superintelligence

2021-01-15
futuretimeline.net
Why's our monitor labelling this an incident or hazard?
The article involves an AI system concept—superintelligent AI—and discusses the theoretical impossibility of controlling such a system, which could plausibly lead to significant harms such as destruction or loss of control over critical systems. However, no actual harm has occurred yet; the discussion is about potential future risks based on theoretical analysis. Therefore, this qualifies as an AI Hazard, as it describes a credible risk that a superintelligent AI could become uncontrollable and cause harm in the future.

No stopping AI? Scientists conclude there would be no way to control super-intelligent machines

2021-01-15
Study Finds
Why's our monitor labelling this an incident or hazard?
The article centers on a scientific study that concludes it is theoretically impossible to guarantee control over super-intelligent AI, which could plausibly lead to harm to humanity if such AI were to act maliciously or uncontrollably. No actual harm or incident has occurred yet, but the study identifies a credible future risk. Therefore, this qualifies as an AI Hazard because it describes a plausible future harm scenario stemming from AI development and use, without reporting any realized harm or incident.

AI uprising could be impossible to control, experts warn

2021-01-14
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article involves AI systems conceptually (super-intelligent AI) and discusses their potential to cause harm if uncontrollable. However, it does not report any realized harm or incident caused by AI, nor does it describe a specific event where AI malfunctioned or was misused. Instead, it presents a theoretical analysis and expert warnings about plausible future risks. Therefore, it fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident in the future but no incident has occurred yet.

Containment algorithms won't stop super-intelligent AI, scientists warn

2021-01-12
The Next Web
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by AI, but rather presents a theoretical limitation and a plausible future risk related to super-intelligent AI control. This fits the definition of an AI Hazard, as it plausibly could lead to harm in the future if such AI systems are developed and uncontrollable.

Calculations Show It'll Be Impossible to Control a Super-Intelligent AI

2021-01-14
ScienceAlert
Why's our monitor labelling this an incident or hazard?
The article focuses on the theoretical impossibility of controlling a super-intelligent AI and the potential dangers it could pose in the future. It does not report any realized harm or incident involving AI but rather discusses plausible future risks based on scientific reasoning. Therefore, it fits the definition of an AI Hazard, as it describes circumstances where the development or use of AI systems could plausibly lead to harm, but no harm has yet occurred.

Scientists: It'd Be Impossible to Control Superintelligent AI

2021-01-12
Futurism
Why's our monitor labelling this an incident or hazard?
The article focuses on a theoretical argument and research findings about the challenges of controlling superintelligent AI, which could plausibly lead to harm in the future but has not yet caused any harm or incident. There is no description of an AI system currently causing injury, rights violations, or other harms. Therefore, this qualifies as an AI Hazard, as it highlights a credible potential future risk from AI development rather than an actual incident or complementary information about responses or updates.

Humans won't be able to control artificial intelligence, scientists warn

2021-01-13
Entrepreneur
Why's our monitor labelling this an incident or hazard?
The article involves AI systems conceptually and discusses their potential future risks, specifically the inability to control superintelligent AI. This fits the definition of an AI Hazard, as it plausibly could lead to harm in the future. There is no indication of realized harm or incident, so it is not an AI Incident. It is not merely complementary information because the main focus is on the potential uncontrollability and risks, not on responses or updates to existing incidents. Therefore, the classification as AI Hazard is appropriate.

New Study Shows Humans Can't Control Superintelligent Machines

2021-01-14
Analytics India Magazine
Why's our monitor labelling this an incident or hazard?
The article centers on a theoretical study on the impossibility of controlling superintelligent AI, a credible potential risk rather than an incident of realized harm. It mentions no current AI system causing injury, rights violations, infrastructure disruption, or other harms; the discussion concerns plausible future harm and fundamental limits on AI control, fitting the definition of an AI Hazard. Since it neither reports a specific event or harm caused by AI nor provides updates or responses to past incidents, it is not Complementary Information.

We would have no way to control superintelligent machines, scientists say

2021-01-12
Planeta
Why's our monitor labelling this an incident or hazard?
The article discusses the theoretical impossibility of controlling superintelligent AI, which could plausibly lead to harm such as loss of human control or catastrophic outcomes. Although no actual harm has occurred yet, the study highlights a credible future risk associated with the development and deployment of superintelligent AI systems. Therefore, this qualifies as an AI Hazard because it concerns a plausible future harm stemming from AI development and use, rather than a realized incident or complementary information about responses or governance.

Humanity will not be able to control superintelligent computers

2021-01-12
Inovação Tecnológica
Why's our monitor labelling this an incident or hazard?
The article focuses on the theoretical impossibility of controlling superintelligent AI and the risks it could pose in the future. It reports no realized harm, malfunction, or misuse of an AI system and describes no concrete event in which an AI system caused or nearly caused harm, so it does not qualify as an AI Incident. Because it discusses plausible future risks of uncontrollable superintelligent AI, it fits the definition of an AI Hazard. It is not merely Complementary Information, since its main focus is the potential for harm from AI development rather than responses or ecosystem updates.

Stockholm Institute warns of artificial intelligence applications in nuclear armament

2021-01-19
اليوم السابع
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of nuclear weapons, focusing on the plausible future risks of AI use in nuclear command and control. The article does not describe any realized harm or incident but warns about the potential for AI-related failures or misuse that could lead to nuclear incidents. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving harm to communities or global security. The article is not about an actual incident, nor is it merely complementary information or unrelated news.

Warnings about superintelligent AI

2021-01-17
وكاله عمون الاخباريه
Why's our monitor labelling this an incident or hazard?
The article involves AI systems conceptually, specifically superintelligent AI, and discusses the theoretical limits of controlling such systems. However, no actual AI system has caused harm yet, nor is there an incident described. The study warns about plausible future harm from superintelligent AI that cannot be contained, which fits the definition of an AI Hazard. There is no realized harm or incident, so it is not an AI Incident. The article is not merely complementary information since it focuses on the potential risk itself rather than responses or updates. Therefore, the event is best classified as an AI Hazard.

Warnings about superintelligent AI... it could harm humans

2021-01-17
جريدة الوطن
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in a theoretical and future-oriented context, focusing on the plausible risk that superintelligent AI could harm humans. No actual harm has occurred yet, but the study highlights a credible potential for harm and efforts to mitigate it. Therefore, this qualifies as an AI Hazard, as it concerns plausible future harm from AI development and use.