Researchers Demonstrate Morris II AI Worm Exploiting ChatGPT and Gemini

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Researchers developed the Morris II generative AI worm, capable of autonomously spreading through AI chatbots like ChatGPT and Gemini, stealing data, sending spam, and bypassing security. Demonstrated in controlled tests, the worm exposes critical vulnerabilities in generative AI systems, raising concerns about future real-world cyberattacks exploiting these platforms.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (generative AI models and AI-powered email assistants) and describes a research prototype AI worm that can maliciously use these systems to steal sensitive data and spread malware. While no actual harm has occurred yet since the worm has not been deployed outside the lab, the demonstrated capabilities and described attack vectors plausibly could lead to AI incidents involving harm to individuals' data privacy and security. Therefore, this qualifies as an AI Hazard because it plausibly could lead to harm but has not yet caused realized harm.[AI generated]
AI principles
Robustness & digital securitySafetyPrivacy & data governanceRespect of human rightsAccountabilityTransparency & explainability

Industries
Digital securityIT infrastructure and hostingConsumer servicesMedia, social platforms, and marketingReal estateReal estate

Affected stakeholders
ConsumersGeneral publicBusiness

Harm types
Human or fundamental rightsEconomic/PropertyReputationalPublic interest

Severity
AI hazard

Business function:
ICT management and information securityCitizen/customer service

AI system task:
Content generationInteraction support/chatbotsGoal-driven organisation


Articles about this incident or hazard

Thumbnail Image

Morris II AI worm can steal your confidential data and infect ChatGPT and Gemini

2024-03-03
The Indian Express
Why's our monitor labelling this an incident or hazard?
The AI worm Morris II is an AI system or component that maliciously exploits other AI systems, leading to direct harm such as theft of confidential data and spreading malware. The involvement of AI in both the worm's operation and the targeted AI systems is explicit. The harms described include violation of privacy and potential broader security impacts, fitting the definition of an AI Incident due to realized harm caused by the AI system's use and malfunction.
Thumbnail Image

AI worms can spread through generative AI-powered emails.

2024-03-01
The Verge
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI models like ChatGPT and Gemini) being exploited by a malicious AI-powered worm (Morris II) that causes harm by stealing data and overwhelming the email client, leading to security breaches and operational disruption. This constitutes direct harm linked to the use and malfunction of AI systems, fitting the definition of an AI Incident.
Thumbnail Image

This new AI worm can use email assistants to steal sensitive data, here's how it works | - Times of India

2024-03-05
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI models and AI-powered email assistants) and describes a research prototype AI worm that can maliciously use these systems to steal sensitive data and spread malware. While no actual harm has occurred yet since the worm has not been deployed outside the lab, the demonstrated capabilities and described attack vectors plausibly could lead to AI incidents involving harm to individuals' data privacy and security. Therefore, this qualifies as an AI Hazard because it plausibly could lead to harm but has not yet caused realized harm.
Thumbnail Image

AI worm that infects computers and reads emails created by researchers

2024-03-04
Yahoo Sports Canada
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI models like ChatGPT and Gemini) being exploited to create a self-replicating worm that can steal data and spread malware. While the worm has been demonstrated by researchers and not yet deployed maliciously in the wild, the article clearly states the potential for such AI-powered attacks to cause harm, including data theft and malware spread, which are harms to individuals and communities. Since no actual harm has been reported yet but the risk is credible and plausible, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or a complementary update but a demonstration of a new AI-enabled threat with potential for significant harm.
Thumbnail Image

This AI Worm Can Steal Data, Break Security Of ChatGPT And Gemini

2024-03-04
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI models like ChatGPT and Gemini) and describes a malicious use of AI (an AI worm) that directly leads to harm by stealing confidential data and breaking security. The harm is realized, not just potential, as the researchers demonstrated data theft and system compromise. The involvement of AI is explicit and central to the incident, and the harm includes violation of privacy and security, which are human rights concerns. Hence, this is classified as an AI Incident.
Thumbnail Image

New Malware Worm Can Poison ChatGPT, Gemini-Powered Assistants

2024-03-01
PC Mag Middle East
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT-4, Gemini, LLaVA) and describes a novel malware worm that exploits prompt-injection vulnerabilities to manipulate these AI systems. While the harm (data extraction, spam, propaganda) has been demonstrated only in a controlled test environment, the researchers and the article emphasize the potential for real-world large-scale attacks causing significant harm. No actual harm has yet occurred outside the test setting, so it is not an AI Incident. The credible risk of future harm from this AI worm meets the definition of an AI Hazard.
Thumbnail Image

New Malware Worm Can Poison ChatGPT, Gemini-Powered Assistants

2024-03-01
PC Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT-4, Gemini, LLaVA) and describes a malware worm exploiting these systems' vulnerabilities to perform malicious activities like data extraction and spam sending. The harm described (privacy breaches, spam, misinformation) fits within the AI Incident harm categories (a) and (d). However, the worm was demonstrated only in a test environment, and no real-world harm has been reported yet. The researchers and OpenAI acknowledge the potential for future large-scale attacks, indicating plausible future harm. Thus, the event is best classified as an AI Hazard rather than an AI Incident. It is not Complementary Information because it reports a new primary risk, not a response or update to a past incident. It is not Unrelated because the AI system and its vulnerabilities are central to the event.
Thumbnail Image

Researchers create AI "worms" able to spread between systems -- stealing private data as they go

2024-03-04
TechRadar
Why's our monitor labelling this an incident or hazard?
The researchers developed an AI system (the worm) that exploits generative AI applications to self-replicate and perform malicious actions including stealing private data (social security numbers, credit card details) and spreading harmful content. The worm's operation involves AI systems generating outputs that propagate the worm, directly causing harm to individuals' privacy and potentially to communities through toxic content. This constitutes direct harm caused by the AI system's use and malfunction, meeting the definition of an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Using ChatGPT And Google's Gemini? Beware! This Malware Can Steal Your Personal Information

2024-03-04
Mashable India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (generative AI chatbots and email assistants) being exploited by an AI-powered worm that steals confidential data and propagates itself. This constitutes direct harm through violation of privacy and data security, which falls under violations of human rights and harm to property. The AI system's malfunction or misuse directly leads to realized harm, qualifying this as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Your ChatGPT & Gemini Might Be Infected By Morris II AI Worm. Protect Your Confidential Data Now -- Details Here

2024-03-04
english
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (generative AI models like ChatGPT and Gemini) being exploited by a malicious AI worm that causes direct harm by stealing sensitive personal data. The worm's development and use have directly led to realized harm through data theft and potential further malicious activities like spam and malware propagation. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's misuse and malfunction in security.
Thumbnail Image

Security researchers prove they can exploit chatbot systems to spread AI-powered worms

2024-03-04
TechSpot
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (generative AI chatbots like Bard, ChatGPT, Gemini Pro) and their use and malfunction (exploitation of retrieval-augmented generation and prompt processing) leading directly to harms such as data exfiltration and malware spread. The AI system's role is pivotal in enabling the worm to replicate and propagate autonomously across users and systems, causing harm to property (data) and potentially communities (via spam, abuse, propaganda). This constitutes an AI Incident because the harm is realized and directly linked to the AI system's exploitation.
Thumbnail Image

Researchers create AI worms that can spread from one system to another

2024-03-02
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly—generative AI models like ChatGPT and Gemini—and their use in autonomous AI agents capable of sending and receiving emails. The researchers demonstrated how adversarial self-replicating prompts can cause AI systems to propagate malicious instructions, steal sensitive data, and spread malware. Although no actual harm has occurred yet, the demonstrated capability and credible risk of future attacks constitute a plausible threat. The event does not describe realized harm but warns of a new kind of AI-enabled cyberattack that could lead to significant harms, fitting the definition of an AI Hazard rather than an AI Incident. The article also discusses mitigation strategies and responses, but the main focus is on the potential risk demonstrated by the research.
Thumbnail Image

Researchers develop self-replicating AI worm that can infiltrate, steal data

2024-03-05
The News International
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the self-replicating AI worm 'Morris II') that uses generative AI to autonomously infiltrate systems and steal data, causing direct harm to individuals by compromising personal information. The harm is realized, not just potential, as the worm has already been used against AI-powered email assistants to steal data and initiate spam. This fits the definition of an AI Incident because the AI system's use has directly led to harm (data theft and privacy violations).
Thumbnail Image

Researchers Create AI-Powered Malware That Spreads on Its Own

2024-03-04
Futurism
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI assistants like GPT-4, Gemini Pro, LLaVA) being exploited by a self-replicating AI-powered malware worm. The researchers demonstrated the worm's ability to extract sensitive personal data and spread autonomously in a controlled environment, indicating a credible risk of future harm. No actual harm has yet occurred in the wild, so it is not an AI Incident. The event is not merely complementary information because it reports a new experimental demonstration of a plausible threat rather than updates or responses to existing incidents. Hence, it fits the definition of an AI Hazard due to the plausible future harm from AI-powered malware spreading autonomously and compromising sensitive data.
Thumbnail Image

Zero-Click GenAI Worm Spreads Malware, Poisoning Models

2024-03-04
Dark Reading
Why's our monitor labelling this an incident or hazard?
The event involves generative AI systems explicitly and demonstrates how adversarial prompts can cause these AI systems to propagate malware and exfiltrate data, which are harms to property and potentially to individuals' privacy and security. The researchers' demonstration is a proof-of-concept showing how AI systems can be manipulated to cause harm, but the article does not report actual incidents of harm occurring in the wild. Hence, it is a credible AI Hazard rather than an AI Incident. The event is not merely general AI news or a complementary update; it focuses on a plausible threat scenario involving AI misuse and malfunction.
Thumbnail Image

Researchers create never-before-seen cyberattack using generative AI

2024-03-05
TweakTown
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (generative AI models like GPT-4, Gemini Pro, and LLaVA) in a malicious way to create self-replicating malware that can extract sensitive information and spread autonomously. While the harm has not yet occurred in the wild, the demonstrated capability and credible warning indicate a plausible future risk of significant harm to individuals' privacy and security. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving harm to persons through data breaches and cyberattacks. It is not an AI Incident yet because the malware has only been demonstrated in a closed environment and not caused actual harm in real-world use.
Thumbnail Image

Researchers Give Birth to the First GenAI Worm

2024-03-05
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the AI-powered worm) developed and demonstrated by researchers that can perform malicious activities such as data theft and malware propagation. Although no actual harm has been reported yet, the worm's capabilities clearly indicate a plausible risk of causing harm in the future if weaponized or deployed maliciously. Therefore, this event fits the definition of an AI Hazard, as it could plausibly lead to harms including injury to persons (via data breaches), harm to property (compromised systems), and harm to communities (through malware spread). It is not an AI Incident because no realized harm has occurred, nor is it Complementary Information or Unrelated.
Thumbnail Image

AI Worm Developed by Researchers Spreads Automatically Between AI Agents

2024-03-02
GBHackers On Security
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (generative AI worms targeting AI agents like ChatGPT and Gemini) that autonomously spreads and causes harm by stealing data and deploying malware. This constitutes direct harm resulting from the use and exploitation of AI systems, fulfilling the criteria for an AI Incident. The harm includes breaches of security, unauthorized data access, and potential disruption to AI-dependent services, which align with harms to property and communities. The event is not merely a potential risk but demonstrates realized harm through the worm's capabilities and actions, thus classifying it as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

New worm can propagate through generative AI, researchers warn

2024-03-04
Silicon Republic
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (generative AI models) and describes a malware worm that exploits these AI systems' architecture to propagate and cause harm. While no actual harm has yet occurred, the researchers demonstrate the worm's capabilities and warn about the potential for real harm such as data theft and spam propagation. This fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving harm to users' data and security. The article does not report an actual incident but highlights a credible risk and potential future harm from AI misuse.
Thumbnail Image

OpenAI's Chatgpt & Google's Gemini Under A New AI Worm Attack

2024-03-02
NewsX
Why's our monitor labelling this an incident or hazard?
The event involves the development and demonstration of an AI system (the AI worm) that could plausibly lead to significant harms including data theft, cyberattacks, and disruption of AI ecosystems. While no actual incident of harm has been reported, the article clearly outlines the credible risk and potential for this AI worm to cause harm if deployed maliciously. Therefore, this qualifies as an AI Hazard because it describes a credible and novel AI-driven threat that could plausibly lead to an AI Incident in the future. It is not Complementary Information because the main focus is on the new threat itself, not on responses or updates to past incidents. It is not an AI Incident because no realized harm has yet occurred according to the article.
Thumbnail Image

The Arrival of AI Worms - courierstandardenterprise.com

2024-03-01
Courier Standard Enterprise
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (the generative AI worm) that could plausibly lead to harm such as data theft and malware deployment, which are significant harms to property and potentially to individuals or organizations. Since no actual harm has been reported yet and the demonstration was in a controlled environment, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential risks and the need for security measures, indicating a credible future threat rather than realized harm.
Thumbnail Image

Researchers develop generative AI worm that can steal data and send spam emails

2024-03-03
THE DECODER
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a generative AI system (the AI worm) designed to perform malicious actions like data theft and spam propagation. Although these actions constitute harms (violations of privacy, potential harm to communities), the worm is currently a research prototype and has not caused actual harm yet. The researchers predict such worms could appear in the wild in the next two to three years, indicating a credible future risk. Therefore, this event qualifies as an AI Hazard rather than an AI Incident.
Thumbnail Image

Morris II AI worm can infect ChatGPT and Gemini and steal your data, Details

2024-03-03
DNP INDIA
Why's our monitor labelling this an incident or hazard?
The Morris II AI worm is an AI system or tool that can autonomously propagate malware and steal sensitive information by exploiting AI chatbots. This constitutes direct harm to users' data security and privacy, fulfilling the criteria for an AI Incident under harm category (c) violations of rights (privacy and data protection). The event involves the use and malfunction of AI systems leading to realized harm potential, and the researchers' disclosure and company responses are complementary but do not negate the incident classification. Therefore, this event is best classified as an AI Incident.
Thumbnail Image

How To Protect Your Tech From AI Worms While Using ChatGPT or Gemini

2024-03-05
AugustMan Thailand
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned (generative AI chatbots including ChatGPT and Gemini) and describes how a malicious AI worm exploits these systems' vulnerabilities to cause harm such as data theft, spam, and phishing attacks. These harms fall under injury to persons (via data theft leading to identity fraud), harm to communities (via spam and phishing), and violations of rights (privacy breaches). The worm's creation and demonstration show direct involvement of AI system malfunction and misuse leading to realized harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Here Come the AI Worms

2024-03-01
Wired
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI agents and large language models) and their use in a novel AI worm that can autonomously propagate and perform malicious actions like data theft and spamming. While no actual harm has occurred yet outside the test environment, the researchers demonstrate a credible risk that such AI worms could cause significant harm if deployed maliciously in real-world systems. This fits the definition of an AI Hazard, as the event plausibly leads to AI Incidents involving harm to data security and potentially critical infrastructure. The article does not describe an actual incident with realized harm but warns of a credible future threat, so it is not an AI Incident. It is not merely complementary information because the main focus is on the demonstration of a new AI-driven threat, not on responses or ecosystem context. Therefore, the classification is AI Hazard.
Thumbnail Image

Des chercheurs créent le tout premier ver informatique capable de se répandre dans les systèmes d'IA

2024-03-02
Clubic.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (generative AI agents and autonomous messaging assistants) being attacked by a worm that spreads malware and steals data. This constitutes harm to property and potentially to individuals' privacy and rights. Since the worm has already been used in tests to steal information and send spam, the harm is realized, not just potential. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction directly led to harm.
Thumbnail Image

Morris II, un ver informatique qui se propage en retournant l'IA...

2024-03-05
Futura
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (large language models like ChatGPT, Gemini, LLaVA) and demonstrates how its misuse and architectural vulnerabilities allow a generative worm to propagate and steal data, which constitutes harm to property and potentially to individuals' privacy and security. The harm is realized, not just potential, as the worm successfully corrupts databases and steals data in the demonstration. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's malfunction and misuse.
Thumbnail Image

Cette IA d'un nouveau genre siphonne vos données personnelles depuis ChatGPT et Gemini

2024-03-01
PhonAndroid
Why's our monitor labelling this an incident or hazard?
Morris II is an AI system explicitly described as autonomously spreading between AI agents and performing unauthorized actions such as data extraction and spam dissemination. These actions directly harm users by compromising personal data and security, which fits the definition of harm to rights and property. The involvement of AI in the development and use of this worm is clear, and the harm is realized, not just potential. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Des chercheurs créent des " vers " d'IA, baptisé Morris II, capables de se propager d'un système à l'autre le ver peut déployer des logiciels malveillants en exploitant des failles dans des systèmes

2024-03-04
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative AI agents and ecosystems) and describes a malicious AI-generated worm that has been demonstrated to propagate and cause harm (data exfiltration, malware deployment) in controlled tests. The harm is direct and realized, not merely potential, as the worm's capabilities have been proven. The AI system's malfunction or exploitation is central to the incident, fulfilling the criteria for an AI Incident. The article also discusses the broader implications and risks, but the primary focus is on the demonstrated malicious AI-enabled attack causing harm, not just a warning or complementary information.
Thumbnail Image

Morris II : un vilain ver IA capable de se propager d'un système à l'autre - CNET France

2024-03-04
CNET France
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a generative AI worm that propagates autonomously and performs harmful actions like data theft and malware deployment. These actions have directly led to harm, including breaches of security and privacy, which fall under violations of rights and harm to property and communities. The article details the actual creation and demonstration of this worm, indicating realized harm or at least a concrete demonstration of harm potential in a real environment. Therefore, this qualifies as an AI Incident rather than a mere hazard or complementary information.
Thumbnail Image

Des vers informatiques peuvent infecter des systèmes en exploitant la GenAI

2024-03-04
ICTjournal - Le magazine suisse des technologies de l’information pour l’entreprise
Why's our monitor labelling this an incident or hazard?
The researchers developed an AI system (the generative AI worm) that exploits generative AI models (ChatGPT, Gemini, LLaVA) to steal sensitive data and propagate malicious prompts autonomously. The event involves the use and malfunction of AI systems leading directly to harm (data theft, spam propagation) demonstrated in a test environment, with credible warnings about future real-world harm. This meets the criteria for an AI Incident because the AI system's malfunction and use have directly led to harm and pose a significant threat to security and privacy.
Thumbnail Image

0

2024-03-04
developpez.net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the generative AI worm Morris II) that has been developed and demonstrated to propagate autonomously and perform malicious actions such as data theft and malware deployment. These actions constitute direct harm to property, user data, and the security of AI ecosystems. The worm's ability to spread and cause damage without user interaction further underscores the severity of the harm. Although the current tests are in controlled environments, the demonstrated capabilities and warnings about future real-world deployment indicate that harm is either occurring or imminent. This meets the criteria for an AI Incident, as the AI system's use and malfunction have directly or indirectly led to significant harms.
Thumbnail Image

This AI worm can steal private data and send spam emails

2024-03-07
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI models like ChatGPT, Gemini, and LLaVA) and describes a novel AI-powered worm that can autonomously propagate and cause harm by stealing data and sending spam. While the worm has been demonstrated in a research setting and no actual incidents have been reported, the researchers warn that such attacks are only a matter of time, indicating plausible future harm. The AI system's use and malfunction (or malicious use) could plausibly lead to violations of privacy and harm to communities through spam and data theft. Since no actual harm has yet occurred, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it focuses on the potential new threat posed by the AI worm, not on responses or ecosystem updates.
Thumbnail Image

An AI worm has been developed to burrow its way into generative AI ecosystems, revealing sensitive data as it spreads

2024-03-08
Yahoo News
Why's our monitor labelling this an incident or hazard?
The AI worm is an AI system that uses adversarial prompts to manipulate generative AI models, causing them to reveal sensitive data and self-replicate across AI systems. Although the harm has not yet occurred in the wild, the demonstrated potential for data exfiltration and widespread infection of AI ecosystems constitutes a plausible future harm. Therefore, this event qualifies as an AI Hazard because it highlights a credible risk of significant harm stemming from the use or misuse of AI systems, but no actual harm has been reported yet.
Thumbnail Image

Researchers warn, why using AI chatbots for writing your emails may be 'dangerous' - ET CISO

2024-03-06
ETCISO.in
Why's our monitor labelling this an incident or hazard?
An AI system (the AI worm) has been developed and demonstrated in a controlled environment to perform harmful actions, but no actual harm has occurred yet. The article warns about potential vulnerabilities and the need for security measures, indicating a plausible future risk of harm. Therefore, this qualifies as an AI Hazard rather than an AI Incident, as the harm is potential, not realized.
Thumbnail Image

Morris II AI worm can steal your private data: What is it and how it works

2024-03-07
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI worm exploiting security weaknesses in AI models like ChatGPT and Gemini, which are AI systems. The worm uses adversarial self-replication to spread and steal sensitive personal data, constituting harm to individuals' privacy and potentially violating rights. This is a direct harm caused by the use and malfunction of AI systems, fitting the definition of an AI Incident.
Thumbnail Image

AI worm that can steal private data: What is it, how it works, and how to stay safe

2024-03-06
India Today
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI models) being exploited by a novel AI-powered worm that can steal data and spread autonomously. Although no actual harm has occurred yet, the researchers warn of the plausible risk of data theft and security breaches. The AI worm's development and potential use constitute a credible threat that could lead to significant harm, meeting the criteria for an AI Hazard. Since no realized harm has occurred, it is not an AI Incident. The article is not merely general AI news or a response update, so it is not Complementary Information or Unrelated.
Thumbnail Image

'Telephone numbers, credit cards, SSN' can be stolen by terrifying new AI 'worm'

2024-03-05
The US Sun
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (the AI worm) that can steal sensitive personal data (credit card information, SSNs, photos, texts), which constitutes harm to individuals' privacy and potentially violates their rights. The AI worm's capability to spread and evade security measures indicates a direct risk of harm. Even though no actual incidents have been reported yet, the demonstrated capability and the researchers' warnings establish a credible potential for harm. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving data theft and privacy violations.
Thumbnail Image

This AI malware worm is capable of turning ChatGPT against you

2024-03-08
BGR
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI models like ChatGPT and Gemini) and describes a malware worm that could exploit these systems to cause harm. Although the malware has been demonstrated in a research context, it has not been deployed maliciously in the wild, and no actual harm has been reported. Therefore, this event represents a credible potential threat that could plausibly lead to AI incidents involving harm to data privacy, misinformation, or phishing attacks. This fits the definition of an AI Hazard rather than an AI Incident, as the harm is potential and not realized.
Thumbnail Image

Morris II: AI Worm Capable of Spreading Malware Using ChatGPT, Gemini

2024-03-05
CCN - Capital & Celeb News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-powered worm leveraging generative AI platforms) whose use and malfunction (exploitation of AI systems for malware spread) directly lead to significant harms including potential data theft, disruption of email systems, and cybersecurity breaches. The worm's autonomous spreading and ability to bypass user interaction requirements indicate realized harm or imminent risk. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harms related to cybersecurity and privacy violations. The article does not merely warn of potential harm but reports on an active AI-powered malware tested and demonstrated by researchers, indicating realized or ongoing harm potential.
Thumbnail Image

This Virus Steals Your Data from Generative AI Tools

2024-03-05
AI Business
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI models like Gemini Pro and GPT-4-powered ChatGPT) being exploited by malware to steal confidential data and perform malicious tasks. The harm (data theft and potential further malicious activities) is realized and directly linked to the AI system's manipulation by the worm. The malware's ability to propagate through AI ecosystems and cause unauthorized data extraction and spamming clearly meets the criteria for an AI Incident, as it causes harm to individuals' data privacy and security. Therefore, this is not merely a potential hazard or complementary information but a concrete AI Incident.
Thumbnail Image

Researchers create AI worms to show how they can infect computers

2024-03-06
NewsNation
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems (generative AI models) to create self-replicating malicious prompts (AI worms) that can propagate through systems and cause harm such as data theft or malware deployment. While the researchers conducted this as a demonstration, the AI worms represent a credible and novel cyber threat that could lead to significant harm if exploited maliciously. Therefore, this qualifies as an AI Hazard because it plausibly could lead to AI Incidents involving harm to property, data, and potentially communities through cyberattacks. There is no indication that actual harm has yet occurred outside the research demonstration, so it is not an AI Incident. It is more than complementary information because it reveals a new credible risk rather than just updates or responses.
Thumbnail Image

The first virus that attacks artificial intelligence solutions is born

2024-03-06
newsbeezer.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots and assistants) and describes a malicious AI prompt worm that causes direct harm by stealing personal data, spreading spam, and potentially spreading disinformation. These harms fall under violations of privacy (a form of harm to persons) and harm to communities (disinformation). The AI system's malfunction or exploitation is central to the incident. The harm is realized, not just potential, as the worm was demonstrated to perform these malicious actions. Therefore, this qualifies as an AI Incident.
Thumbnail Image

Artificial intelligence hacked by another artificial intelligence

2024-03-07
Bullfrag
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Morris II) developed by researchers that can autonomously replicate and spread maliciously, demonstrating AI use in cyberattack methods. While no actual harm has occurred yet, the potential for data security breaches and malicious program installation clearly indicates a plausible future harm scenario. Therefore, this event qualifies as an AI Hazard because it could plausibly lead to an AI Incident involving harm to digital information security and potentially to users or organizations relying on these AI systems.
Thumbnail Image

Self-Replicating AI Malware is Here😱 #ComPromptMized

2024-03-05
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI agents) and their malicious subversion to create self-replicating malware that can cause harm such as data theft, spam, disinformation, and potentially ransomware or remote code execution. While no actual harm has yet occurred in the wild, the research demonstrates a credible and plausible risk of AI-driven harm in the near future. This fits the definition of an AI Hazard, as the development and use of these AI worms could plausibly lead to AI Incidents involving harm to individuals, communities, and property. The article focuses on the demonstration and warning of this risk rather than reporting an actual incident, so it is not an AI Incident. It is more than complementary information because it reveals a new credible threat. Therefore, the correct classification is AI Hazard.
Thumbnail Image

Researchers Create AI-Powered Worm That Self-Spreads Across Tools Like ChatGPT - DesignTAXI.com

2024-03-05
DesignTAXI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (generative AI models like ChatGPT) being exploited by a self-replicating AI malware that manipulates AI prompts to steal sensitive data and spread itself. While no actual harm has occurred since the worm was only tested in a controlled environment, the demonstrated vulnerabilities could plausibly lead to incidents involving data breaches and privacy violations. This fits the definition of an AI Hazard, as the development and use of this AI-powered worm could plausibly lead to harms such as violations of privacy and security if misused or unleashed. Since no harm has yet occurred, it is not an AI Incident. The article also does not primarily focus on responses or updates to past incidents, so it is not Complementary Information. Hence, the correct classification is AI Hazard.
Thumbnail Image

Index - Tech-Science - The time of worms that attack using artificial intelligence has come

2024-03-06
newsbeezer.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (language models like GPT-4 and Gemini Pro) being exploited to create a self-propagating worm that autonomously generates malicious prompts and leaks sensitive personal data. This is a direct use and malfunction of AI leading to harm (privacy breaches and potential spam/propaganda spread). The harm is realized in the test environment, demonstrating a concrete AI Incident rather than a mere hazard or complementary information. The article's focus is on the malware's capabilities and the security risks it poses, not just on theoretical risks or responses, so it is not complementary information. Therefore, the classification is AI Incident.
Thumbnail Image

Artificial intelligence is hacked... By another artificial intelligence

2024-03-07
Bullfrag
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (a generative AI computer worm) developed and demonstrated by researchers. The worm's operation involves AI-generated adversarial messages that can propagate malware and compromise email systems. While no actual harm has occurred yet, the potential for such AI-enabled malware to cause data breaches or system compromise is credible and significant. This fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to an AI Incident involving harm to property or communities through cybersecurity breaches. Since no harm has yet materialized, it is not an AI Incident. The article also does not focus on responses or updates to prior incidents, so it is not Complementary Information. Hence, the correct classification is AI Hazard.
Thumbnail Image

Agora é possível 'hackear' o ChatGPT; entenda | Exame

2024-03-04
Exame
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT, Gemini, LLMs) explicitly and describes a security breach caused by an AI-generated worm exploiting vulnerabilities in these systems. The worm's operation leads directly to harm through data theft and malware spread, which are violations of privacy and security rights. The harm is realized, not just potential, as the worm can steal data and propagate malware. Hence, this is an AI Incident due to direct harm caused by the AI system's exploitation and malfunction.
Thumbnail Image

IAs não serão imunes a vírus, malwares e worms - Meio Bit

2024-03-04
Meio Bit
Why's our monitor labelling this an incident or hazard?
The event involves the use and potential misuse of AI systems (generative LLMs) and demonstrates a vulnerability that could directly lead to harms including theft of sensitive personal data and propagation of malware. While the worm has not yet caused harm outside the controlled experiment, the researchers and security experts consider it a matter of time before such AI-targeting malware emerges in real-world scenarios. Therefore, this constitutes an AI Hazard because it plausibly could lead to an AI Incident involving harm to individuals and communities through data breaches and malicious software spread. It is not merely complementary information because the core of the article is about the demonstrated vulnerability and its implications for future harm, not just a response or update to a past incident.
Thumbnail Image

Malware criado com IA pode roubar dados privados

2024-03-07
Portal de Angola
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (the worm leveraging generative AI models) that directly leads to potential harm by stealing private data and spreading malware autonomously. The harm to individuals' privacy and data security constitutes injury or harm to groups of people (a). Since the worm is operational and demonstrated, and the article warns of its imminent deployment, this qualifies as an AI Incident due to realized or imminent harm caused by AI system use and malfunction. The AI system's role is pivotal in enabling the worm's propagation and data theft, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Este worm de IA pode roubar dados e quebrar a segurança do ChatGPT e do Gemini - Tw2sl

2024-03-04
تواصل
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT, Gemini, generative AI email assistants) and describes a novel AI-generated worm that has been demonstrated to steal sensitive data and bypass security measures, constituting direct harm to users' data confidentiality and security. This meets the criteria for an AI Incident because the AI system's use and malfunction have directly led to harm (data theft and security breaches).
Thumbnail Image

Pesquisadores conseguem desenvolver worm que ataca ferramentas de IA - Hardware.com.br

2024-03-05
Hardware.com.br
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (generative AI models like ChatGPT, Gemini, and LLaVA) and their vulnerabilities being exploited by a novel AI worm. The worm was demonstrated in a controlled setting, so no actual harm has yet occurred, but the potential for significant harm (data theft, spam, fraud) is credible and plausible. The article does not report any realized harm but warns of future threats, fitting the definition of an AI Hazard. The researchers' communication with companies and the companies' responses further support this classification as a hazard rather than an incident or complementary information.
Thumbnail Image

Esta "minhoca" criada com IA pode roubar dados privados

2024-03-07
euronews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (generative AI models and AI-powered worm) and describes a direct harm scenario where the AI system's use and exploitation lead to data theft and malicious propagation. This fits the definition of an AI Incident because the AI system's development and use have directly led to harm (unauthorized data access and cyberattack capabilities). The article does not merely warn about potential harm but demonstrates an actual exploit, confirming realized harm rather than just plausible future harm. Therefore, the classification is AI Incident.
Thumbnail Image

برای انتشار ابزارهای هوش مصنوعی مجوز بگیرید

2024-03-04
روزنامه دنیای اقتصاد
Why's our monitor labelling this an incident or hazard?
The article discusses a policy measure aimed at managing the risks associated with AI systems, particularly generative AI tools that may provide incorrect answers. There is no indication that any harm has yet occurred or that a specific AI incident or hazard event has taken place. Instead, this is a governance response to potential risks posed by AI, aiming to prevent future harm by imposing licensing and warning requirements. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI risks without describing a concrete incident or hazard.
Thumbnail Image

هوش مصنوعی باعث تشدید نابرابری‌ها در جهان می‌شود؟

2024-03-03
روزنامه دنیای اقتصاد
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in predictive policing that have caused harm through biased outcomes, which fits the definition of AI systems causing harm. However, the article discusses these harms in a general, illustrative manner without detailing a specific new incident or event. It focuses on expert warnings, societal concerns, and the broader implications of AI, which aligns with Complementary Information. There is no description of a particular AI Incident or a new AI Hazard event. Therefore, the article is best classified as Complementary Information, providing context and expert perspectives on AI-related social harms and inequalities.
Thumbnail Image

افزایش قابل‌توجه مصرف آب شرکت‌های تکنولوژی با رونق هوش مصنوعی

2024-03-05
روزنامه دنیای اقتصاد
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI models and large data centers) and their use leading to increased water consumption, which could plausibly lead to environmental harm. However, since no actual harm or incident is reported, and the article mainly provides context and calls for better data and transparency, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the issue.
Thumbnail Image

چت‌بات روان‌شناس چینی آمد

2024-03-03
عصر ايران،سايت تحليلي خبري ايرانيان سراسر جهان www.asriran.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (EmoAda) designed for emotional support and mental health assistance, involving large language models and multimodal emotion recognition, which fits the definition of an AI system. However, there is no indication that the system has caused any harm or malfunction, nor is there a credible risk of harm described. The system is presented as a supportive tool, not a replacement for professional care, and the article focuses on research progress and future improvements. Thus, it does not meet the criteria for AI Incident or AI Hazard but fits the definition of Complementary Information, providing valuable context about AI applications in mental health support.
Thumbnail Image

این کرم آفت هوش‌مصنوعی است!

2024-03-04
عصر ايران،سايت تحليلي خبري ايرانيان سراسر جهان www.asriran.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (AI-powered worm using large language models and AI assistants) and describes realized harms including data theft and malware installation. The AI system's use is central to the incident, as the worm exploits AI capabilities to propagate and bypass security. This fits the definition of an AI Incident because the AI system's use has directly led to harm (data theft, malware infection). The article does not merely warn of potential harm but reports on an actual AI-powered worm developed and demonstrated by researchers, indicating realized harm or at least a concrete demonstration of harm potential in a real system context.
Thumbnail Image

ابزارهای هوش مصنوعی در هند ملزم به دریافت مجوز شدند

2024-03-04
خبرگزاری مهر | اخبار ایران و جهان | Mehr News Agency
Why's our monitor labelling this an incident or hazard?
The article focuses on a government-issued advisory and regulatory measures aimed at managing potential risks from AI tools, including misinformation and political impact. There is no description of an actual AI system causing harm or malfunction, nor a specific incident of harm occurring. Instead, it is a proactive governance response to potential AI hazards, aiming to prevent incidents such as misinformation affecting elections. Therefore, this qualifies as Complementary Information, providing context on societal and governance responses to AI risks.
Thumbnail Image

حکمرانی بر هوش مصنوعی باید از طریق خود آن انجام شود

2024-03-04
ana.ir
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or incident caused by AI systems, nor does it describe a specific plausible hazard event. It is a discussion on AI governance, policy, and the potential benefits and challenges of AI, which fits the definition of Complementary Information as it provides context and insights into AI ecosystem governance without reporting a new incident or hazard.
Thumbnail Image

برای انتشار ابزارهای هوش مصنوعی مجوز بگیرید

2024-03-04
ایسنا
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI tools) and their use, but the article does not report any realized harm or incident caused by AI. Instead, it discusses government-issued recommendations and regulatory intentions to control AI deployment to prevent potential harms, particularly related to election integrity and misinformation. This fits the definition of Complementary Information, as it provides context on governance responses to AI-related risks without describing a specific AI Incident or AI Hazard.
Thumbnail Image

هوش مصنوعی را قبل از این‌که خیلی دیر شود متوقف کنید

2024-03-05
ایسنا
Why's our monitor labelling this an incident or hazard?
The article is a broad discussion and warning about the potential risks and societal impacts of AI, advocating for human-centered approaches and regulatory oversight. It does not describe a concrete event where an AI system caused harm or malfunctioned, nor does it report a near-miss or credible imminent threat from a specific AI system. Therefore, it fits the definition of an AI Hazard, as it plausibly points to future risks from AI development and use, but no realized harm or incident is described.
Thumbnail Image

هوش مصنوعی باعث تشدید نابرابری‌ها در جهان می‌شود؟

2024-03-03
ایسنا
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in predictive policing that have caused harm by reinforcing racial biases, leading to wrongful detentions and discrimination against Black people and other minorities. This is a direct example of harm to communities and violations of rights caused by AI use. The discussion of AI exacerbating digital divides also points to systemic harms. Since these harms are occurring and linked to AI system use, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

رونمایی چین از چت‌بات روانشناس

2024-03-03
ایسنا
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (EmoAda) that uses large language models and multimodal data to provide emotional support. However, the article does not report any realized harm or incidents caused by the system. It focuses on the system's development, potential benefits, and future improvements. There is no mention of injury, rights violations, or other harms resulting from its use. Therefore, this is not an AI Incident. It also does not describe a plausible risk of harm or hazard scenario but rather a positive development with ongoing research. It is not merely unrelated general AI news because it provides detailed information about the system and its context, but since no harm or plausible harm is described, it fits best as Complementary Information.
Thumbnail Image

این کرم آفت هوش‌مصنوعی است!

2024-03-03
خبرآنلاین
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an AI-powered worm) developed and demonstrated by researchers. The worm can propagate autonomously and perform malicious actions such as data theft and malware installation, which are harms to property and potentially to individuals' privacy and security. The article does not report actual incidents of harm caused by this worm but highlights the plausible future risk it poses. Therefore, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving cybersecurity harm.
Thumbnail Image

هند ابزارهای هوش مصنوعی را ملزم به دریافت مجوز کرد - تسنیم

2024-03-04
خبرگزاری تسنیم
Why's our monitor labelling this an incident or hazard?
The article focuses on a government-issued advisory and regulatory approach to AI tools to mitigate potential risks, such as misinformation or election interference, but does not report any actual harm or incident caused by AI systems. The involvement of AI is clear, but the event is about precautionary measures and policy development, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
Thumbnail Image

هند ارائه محصولات هوش مصنوعی را ملزم به دریافت مجوز کرد

2024-03-04
رادیو فردا
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (AI chat tools, generative AI) and discusses their use and potential for misinformation and political impact, which are recognized AI-related harms. However, the article does not report a specific AI Incident where harm has occurred, nor does it describe a particular AI Hazard event with plausible imminent harm. Instead, it focuses on the Indian government's regulatory measures and warnings to social media platforms to prevent AI-related election interference. This fits the definition of Complementary Information, as it details governance responses and policy developments addressing AI risks, enhancing understanding of the AI ecosystem without reporting a new incident or hazard.
Thumbnail Image

راه اندازی سکوی هوش مصنوعی در هند فقط با موافقت دولت

2024-03-04
IRIB NEWS AGENCY
Why's our monitor labelling this an incident or hazard?
The article discusses a regulatory measure concerning AI system deployment, focusing on governance and safety protocols. There is no mention of any specific AI system causing harm or any incident or hazard occurring or imminent. The content is about policy and governance response to AI development, which fits the definition of Complementary Information rather than an Incident or Hazard.
Thumbnail Image

ابزار‌های هوش مصنوعی باید از دولت هند مجوز بگیرند

2024-03-04
IRIB NEWS AGENCY
Why's our monitor labelling this an incident or hazard?
The article does not describe a direct or indirect AI Incident causing harm but discusses government recommendations and company responses aimed at preventing such harms, especially related to misinformation and election integrity. The mention of Google's AI tool producing a problematic response is background context for the regulatory response, not a detailed report of an incident causing harm. The main focus is on governance measures and risk mitigation, fitting the definition of Complementary Information rather than an Incident or Hazard.
Thumbnail Image

برای انتشار ابزارهای هوش مصنوعی مجوز بگیرید | هند از شرکت

2024-03-04
موتور جستجوی قطره
Why's our monitor labelling this an incident or hazard?
The article focuses on a governmental regulatory recommendation and policy measures aimed at managing AI risks, particularly regarding generative AI tools and their potential to provide incorrect or misleading information. There is no direct or indirect harm reported as having occurred, nor a specific incident of AI malfunction causing harm. Instead, the event is about regulatory and governance responses to potential AI risks, making it Complementary Information rather than an AI Incident or AI Hazard.
Thumbnail Image

هوش مصنوعی را قبل از این که خیلی دیر شود متوقف کنید | کارشناسان

2024-03-05
موتور جستجوی قطره
Why's our monitor labelling this an incident or hazard?
The article is a general warning and advocacy piece about AI risks and the need for human-centered AI. It does not describe any concrete AI incident or hazard event. It focuses on expert opinions and calls for change, which fits the definition of Complementary Information as it provides context and societal response to AI developments without reporting a specific incident or hazard.
Thumbnail Image

هوش مصنوعی باعث تشدید نابرابری ها در جهان می شود؟ | کارشناسان

2024-03-03
موتور جستجوی قطره
Why's our monitor labelling this an incident or hazard?
The text does not describe a specific AI system's development, use, or malfunction leading to harm or plausible harm. Instead, it presents general concerns and warnings about AI's potential effects, which aligns with general AI-related discourse rather than a concrete incident or hazard. Therefore, it fits best as Complementary Information, providing context and societal perspectives on AI rather than reporting a specific AI Incident or Hazard.
Thumbnail Image

مدل‌های هوش مصنوعی در هند به دریافت مجوز دعوت شدند

2024-03-04
ana.ir
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI models) and their development and deployment, but no actual harm or incident has occurred yet. The advisory is a preventive regulatory measure to avoid potential harms such as bias, discrimination, or election interference. Since the article focuses on the government's recommendation and regulatory approach without reporting any realized harm or incident, it constitutes Complementary Information about societal and governance responses to AI rather than an AI Incident or AI Hazard.
Thumbnail Image

هوش مصنوعی زنگ حذف کدام مشاغل را خواهد زد؟

2024-03-02
ana.ir
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or incident caused by AI systems, nor does it describe a specific plausible future harm event. It is a general discussion and study summary about AI's impact on jobs, including concerns and statistics, but without any concrete incident or hazard. Therefore, it fits best as Complementary Information, providing context and understanding about AI's societal effects rather than reporting an AI Incident or AI Hazard.
Thumbnail Image

جمهور - ابزارهای هوش مصنوعی در هند ملزم به دریافت مجوز شدند

2024-03-04
خبرگزاری جمهور
Why's our monitor labelling this an incident or hazard?
The article describes a government-issued advisory and regulatory measures aimed at managing the risks of AI tools, particularly regarding misinformation and election integrity. There is no report of actual harm or misuse occurring, only a precautionary approach to prevent potential harms. Therefore, this event is best classified as Complementary Information, as it provides context on governance responses to AI risks without describing a realized AI Incident or a direct AI Hazard.
Thumbnail Image

هوش مصنوعی جای کارمندان دولتی انگلیس را می‌گیرد - ITMen

2024-03-02
ITMen | آی تی من | پنجره‌ای نو رو به دنیای فناوری
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to replace or augment government employees' work, which could plausibly lead to job displacement (a form of harm to individuals' livelihoods). However, the article does not report any realized harm or incidents resulting from the AI use, only potential future impacts. Therefore, this situation fits the definition of an AI Hazard, as the AI's use could plausibly lead to harm (job loss), but no direct or indirect harm has yet occurred.
Thumbnail Image

کلودفلیر فایروال هوش مصنوعی می‌سازد

2024-03-06
تک ناک
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in both the threat landscape (malicious AI attacks) and the defensive tools (AI firewall). The development and use of these AI systems are aimed at preventing cyber harms that could plausibly occur. Since no actual harm or incident is described, but the article highlights credible potential AI-driven cyber threats and the defensive AI measures being developed, the event fits the definition of an AI Hazard. It is not Complementary Information because the main focus is on the development of new AI defense tools in response to emerging AI threats, not on updates or responses to a past incident. It is not an AI Incident because no harm has yet occurred.
Thumbnail Image

ایلان ماسک: در آستانه بزرگ‌ترین انقلاب فناوری هستیم

2024-03-05
زومیت
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of generative AI, AGI, and robotics, and mentions potential risks if AI is not properly regulated. However, it does not describe any realized harm or a specific event where AI caused injury, rights violations, or other harms. It also does not describe a concrete near-miss or hazard event but rather a general warning and overview. Therefore, it fits best as Complementary Information, providing context and societal/governance-related commentary on AI developments and risks, without reporting a new AI Incident or AI Hazard.
Thumbnail Image

تهدیدی جدید برای کامپیوترها؛ کرمی که سیستم‌های هوش مصنوعی را آلوده می‌کند

2024-03-03
دیجیاتو
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (AI-powered worm targeting AI assistants) and describes realized harm: data theft from AI email assistants and potential malware installation. The worm exploits AI system vulnerabilities to cause harm, fulfilling the criteria for an AI Incident. The harm is direct and materialized, not merely potential, so it is not an AI Hazard. It is not Complementary Information or Unrelated because the article focuses on a specific harmful event involving AI systems.
Thumbnail Image

هوش مصنوعی چطور به اسرائیل در جنگ غزه کمک می‌کند؟

2024-03-04
اعتمادآنلاین
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake images that are actively disseminated to mislead and manipulate public opinion in a conflict zone. This use of AI directly leads to harm by spreading misinformation, which affects communities and violates rights to accurate information. The article details specific instances where AI-generated images were used to falsely represent military actions, thus fulfilling the criteria for an AI Incident involving violations of rights and harm to communities.
Thumbnail Image

دولت هند سیاست‌هایش را به هوش مصنوعی تحمیل می‌کند

2024-03-04
زومیت
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI models) and their regulation by a government authority. However, there is no indication that any AI system has caused harm or malfunctioned, nor that any incident or hazard has occurred or is imminent. The advisory is a policy measure aimed at managing AI risks and ensuring compliance, but it does not describe any realized or plausible harm from AI systems themselves. Therefore, this is Complementary Information about governance and regulatory responses to AI, not an AI Incident or AI Hazard.
Thumbnail Image

هوش مصنوعی باعث تشدید نابرابری‌ها در جهان می‌شود؟ - ITMen

2024-03-04
ITMen | آی تی من | پنجره‌ای نو رو به دنیای فناوری
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used in predictive policing that have caused harm to people, especially racial minorities, through biased predictions and faulty facial recognition leading to wrongful imprisonment. This constitutes a violation of human rights and harm to communities, fitting the definition of an AI Incident. The discussion of AI exacerbating digital divides also relates to harm to communities. Since the harms are ongoing and realized, and AI systems are directly involved, the classification is AI Incident.
Thumbnail Image

Este gusano de inteligencia artificial puede robar datos privados y enviar correos basura Por Euronews

2024-03-07
Investing.com Español
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI models) and describes a malicious AI-powered worm designed to exploit these systems to steal data and send spam. While no actual harm has yet occurred, the researchers warn of a credible risk that such AI-driven cyberattacks will happen. This fits the definition of an AI Hazard, as the event plausibly could lead to harms such as violations of privacy and disruption. Since no realized harm is reported, it is not an AI Incident. The article is not merely complementary information because it focuses on the potential threat posed by the AI worm rather than updates or responses to past incidents.
Thumbnail Image

Víctima de su propio invento: inteligencia artificial es hackeada por sus propios modelos

2024-03-05
infobae
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI-based email assistants like ChatGPT, Gemini, and LLaVA) and describes a malicious AI-generated worm that can propagate and steal data. While no actual harm has yet occurred in the wild, the demonstrated capability to compromise AI systems and cause data breaches and spam dissemination represents a credible risk of harm. This fits the definition of an AI Hazard, as the event plausibly could lead to an AI Incident involving harm to property (data) and communities (users). It is not an AI Incident because the harm is not yet realized, nor is it Complementary Information or Unrelated since the focus is on a new AI-related security threat.
Thumbnail Image

Crean el primer gusano informático capaz de atacar a ChatGPT, Gemini y otros servicios de inteligencia artificial

2024-03-04
La Nacion
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (generative AI assistants like ChatGPT, Gemini, and LLaVA) and describes the use and malfunction of these AI systems due to malware infection. The malware causes direct harm by stealing confidential user data and spreading spam messages, which constitutes harm to individuals and communities. The researchers demonstrate a real exploit, not just a theoretical risk, so the harm is realized in the lab setting. Therefore, this qualifies as an AI Incident because the AI system's malfunction and exploitation directly lead to harm.
Thumbnail Image

Crean un virus informático contra la IA: su objetivo es advertir de un gran peligro

2024-03-04
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI assistants like ChatGPT, Gemini, and LLaVa) and a malicious program designed to exploit them. Although the worm has been demonstrated in a controlled environment, there is no indication that it has caused actual harm or incidents in the wild. The researchers' intent is to warn and prompt preemptive security measures. Therefore, this constitutes an AI Hazard, as the development and demonstration of this worm plausibly could lead to AI Incidents involving data theft and spam distribution if exploited maliciously in real-world settings. It is not an AI Incident because no harm has yet occurred, nor is it Complementary Information or Unrelated.
Thumbnail Image

Cuidado con lo que dices a ChatGPT: crean un virus informático que puede espiarte y robar tus datos

2024-03-06
El Español
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI chatbots like ChatGPT) and a malicious AI-related program designed to exploit vulnerabilities in these systems to steal data and spread infection. While no actual harm has occurred yet, the researchers' creation of this virus highlights a credible risk of future harm to users' data privacy and security. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving harm to persons' data privacy and security. It is not Complementary Information because the main focus is on the potential threat posed by the virus, not on responses or updates to past incidents.
Thumbnail Image

Este gusano de inteligencia artificial puede robar datos privados

2024-03-07
Euronews Español
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (generative AI models) being exploited by a malicious AI-powered worm to steal private data and propagate itself automatically. While no actual harm has yet occurred, the demonstrated capability and credible warnings indicate a plausible risk of future harm. Therefore, this qualifies as an AI Hazard because the AI system's use or misuse could plausibly lead to an AI Incident involving harm to individuals' privacy and security. It is not an AI Incident yet since no realized harm is reported, nor is it merely complementary information or unrelated news.
Thumbnail Image

La IA se está expandiendo a velocidad de vértigo, así que unos expertos han creado el primer gusano para sistemas GenAI

2024-03-05
Xataka
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (generative AI models and AI-enabled email services) and describes the use and malfunction of these AI systems exploited by the malware. The malware's ability to steal data and replicate itself in AI systems constitutes a direct harm to property and potentially to user privacy and security, which falls under harm to property or communities. Although the malware was demonstrated in a controlled lab environment, the demonstrated capability shows a credible and realized AI-enabled cybersecurity threat. Therefore, this qualifies as an AI Incident because the AI system's malfunction and exploitation have directly led to harm (data theft and system compromise) in the experimental setting, highlighting a real and present risk.
Thumbnail Image

Morris II, el gusano informático que puede infectar ChatGPT o Gemini, robar datos e instalar 'malware'

2024-03-07
20 minutos
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (generative AI models like ChatGPT, Gemini, and LLaVA) and describes the use and malfunction of these AI systems exploited by the worm. The worm's actions have directly led to harm, including data theft (harm to persons' privacy and potentially intellectual property rights) and malware installation (harm to property and users' systems). The propagation of spam and unauthorized data sharing also harms communities and individuals. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction have directly caused harm.
Thumbnail Image

Crean un gusano capaz de meterse en ChatGPT e instalar un virus para robarte los datos

2024-03-04
El Confidencial
Why's our monitor labelling this an incident or hazard?
The event involves the use and potential misuse of AI systems (generative AI assistants) to cause harm by stealing personal data and spreading malware. The malware exploits the AI's architecture and capabilities to bypass security and access confidential information, which constitutes a violation of privacy and potentially other rights. While the harm has not yet occurred in the wild, the demonstrated attack plausibly could lead to significant harm if deployed maliciously. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving harm to individuals' data privacy and security. The article does not report an actual incident but a proof-of-concept demonstration in a test environment, so it is not an AI Incident yet.
Thumbnail Image

Morris II: El gusano que se propaga entre asistentes de IA generativa para robar información e instalar 'malware'

2024-03-04
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use and malfunction of AI systems (generative AI assistants) being exploited by a malware worm to steal confidential user information and spread spam. The harm is realized as data theft and malware infection, directly linked to the AI systems' vulnerabilities and their exploitation. Therefore, this qualifies as an AI Incident because the AI system's use and malfunction have directly led to harm (data theft and malware propagation).
Thumbnail Image

Crearon un gusano que se propaga en asistentes de IA generativa, roba información e instala malware

2024-03-04
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI assistants such as ChatGPT, Gemini, and LLaVA) being exploited by a malicious worm to steal confidential user data and spread malware. The harm is realized, including data theft and malware infection, which are direct harms to users and their property (data). The event stems from the use and malfunction (exploitation) of AI systems, fulfilling the criteria for an AI Incident. The article does not merely warn of potential harm but demonstrates actual harm caused by the worm's operation.
Thumbnail Image

Crean un gusano que infecta asistentes de IA generativa, roba información y les instala "malware" | Tecnología | La Voz del Interior

2024-03-04
La Voz
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI assistants such as ChatGPT, Gemini, and LLaVA) and describes a malicious software (worm) that infects these AI systems, steals sensitive user data (names, phone numbers, credit card numbers), and spreads malware. This constitutes direct harm to users' privacy and security, which falls under harm to persons or groups (a). The worm's ability to bypass security and propagate autonomously through AI systems indicates a malfunction and misuse of AI. Since the harm (data theft and malware infection) has been demonstrated and is ongoing in the test environment, this qualifies as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Crean un gusano que se propaga entre asistentes de IA generativa,...

2024-03-04
europa press
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI assistants such as ChatGPT, Gemini, and LLaVA) and describes how a malicious worm exploits these systems to steal confidential user data and spread malware. The harm is direct and realized, including data theft and unauthorized message propagation, which are clear violations of user rights and harm to communities. The AI systems' vulnerabilities and their exploitation are central to the incident, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Crean Morris-II, un gusano IA de primera generación que puede infectar a ChatGPT y Gemini

2024-03-05
La Razón
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Morris-II worm) that exploits AI generative models to cause harm including data leakage and malware propagation. The harm is realized as confidential data is leaked and spam/malware is spread through AI applications. The worm's operation depends on AI systems processing adversarial prompts, which is a malfunction or misuse of the AI system. This meets the criteria for an AI Incident because the AI system's use and malfunction directly lead to harm to individuals' data privacy and security, as well as harm to communities through spam and malware dissemination. The involvement of AI is central and pivotal to the incident, and the harm is clearly articulated and ongoing.
Thumbnail Image

Inteligencia artificial es hackeada... Por otra inteligencia artificial

2024-03-07
FayerWayer
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (a generative AI-based worm) whose development and potential use could plausibly lead to significant harm, such as data breaches or malware installation. Since no actual harm has yet occurred but the risk is credible and clearly articulated, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential threat and the need for stronger security measures and regulations, indicating plausible future harm from the AI system's use or misuse.
Thumbnail Image

Ciberataques autónomos dirigidos por Inteligencia artificial

2024-03-05
WWWhat's new
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems (generative language models like GPT-4, Gemini Pro, LLaVA) in a malicious context to create autonomous malware capable of spreading and stealing sensitive information. While the harm has not yet materialized in real-world incidents, the article clearly states a plausible future risk of significant harm to cybersecurity and digital infrastructure. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident involving harm to property, communities, or critical infrastructure. It is not an AI Incident yet since no actual harm has occurred, nor is it merely complementary information or unrelated news.
Thumbnail Image

Víctima de su propio invento: inteligencia artificial es hackeada por sus modelos

2024-03-05
eju.tv
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (AI-based email assistants using large language models) and describes a malicious use of AI (a generative AI worm) that can propagate and cause harm by stealing data and sending spam. Although no actual harm has yet occurred in the real world, the demonstrated capability and potential for widespread exploitation present a credible risk of harm to users' privacy and information security. This fits the definition of an AI Hazard, as the AI system's use or malfunction could plausibly lead to an AI Incident. Since no realized harm is reported, it is not an AI Incident. The article also discusses the broader implications and responses, but the main focus is on the potential threat posed by the AI worm.
Thumbnail Image

Morris II, el malware que se propaga con ChatGPT y Gemini

2024-03-07
PasionMovil
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (ChatGPT 4.0, Gemini Pro, LLMs) being exploited by malware to propagate and steal data. The malware's use of AI-generated prompts to infect email assistants and extract information indicates AI system involvement in the potential harm. Although no actual harm has occurred yet, the article clearly states the threat is recognized by OpenAI and Google, and the malware could plausibly lead to data theft and privacy violations if deployed. Hence, it fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident but has not yet caused realized harm.
Thumbnail Image

Crean el primer gusano informático impulsado por Inteligencia Artificial

2024-03-06
El Diario de Ibiza
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-powered worm) that exploits generative AI systems to cause harm by stealing personal data and spreading malicious content. Although the worm is currently demonstrated only in a controlled environment and not yet causing real-world harm, the article clearly states that it could become operational and cause significant harm in the near future. Therefore, this constitutes an AI Hazard because the AI system's use could plausibly lead to an AI Incident involving harm to users' privacy and security. It is not an AI Incident yet because no actual harm has occurred in the wild, and it is not merely complementary information since the main focus is on the potential for harm from this AI system.
Thumbnail Image

Morris II, así es el gusano que se autoreplica en ChatGPT y Gemini para infectar PC por todo el mundo

2024-03-04
El Chapuzas Informático
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (ChatGPT 4.0, Gemini Pro) being exploited by a malware worm to generate malicious content and propagate itself, which could plausibly lead to harm such as data theft and widespread infection of PCs. Although the worm is currently theoretical and has not caused actual harm, the described mechanism and potential consequences fit the definition of an AI Hazard, as the development and use of AI systems could plausibly lead to an AI Incident involving harm to individuals' data and property. There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information since it focuses on the potential threat rather than updates or responses. Therefore, the correct classification is AI Hazard.
Thumbnail Image

Avrupa Parlamentosu yapay zeka yasasını onayladı

2024-03-13
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The article focuses on the legislative approval of AI regulations, detailing the rules, restrictions, and enforcement related to AI systems. It does not report any specific AI system causing harm or any incident involving AI malfunction or misuse. Instead, it presents a governance response to potential AI risks and harms, aiming to prevent future incidents. Therefore, this is Complementary Information as it provides important context on societal and governance responses to AI but does not describe a new AI Incident or AI Hazard.
Thumbnail Image

ChatGPT bir günde 17 bin haneden fazla elektrik tüketiyor!

2024-03-12
Haber7.com
Why's our monitor labelling this an incident or hazard?
The article focuses on the environmental impact of AI's energy consumption, which is a significant concern but does not describe any realized harm or incident caused by AI systems. It discusses potential future challenges and the importance of addressing energy use but does not report an AI Incident or AI Hazard. Therefore, it fits the definition of Complementary Information as it provides supporting context and highlights broader implications without detailing a specific harmful event.
Thumbnail Image

Avrupa Parlamentosu dünyanın ilk Yapay Zeka Yasası'nı onayladı - BBC News Türkçe

2024-03-13
BBC
Why's our monitor labelling this an incident or hazard?
This event describes a major governance and regulatory response to AI risks and harms, focusing on preventing potential and existing harms from AI systems. It does not describe a specific AI Incident (harm realized) or AI Hazard (plausible future harm from a particular system or event). Instead, it details the enactment of legal frameworks and societal/governance measures addressing AI risks. Therefore, it fits the definition of Complementary Information, as it provides important context and response to AI-related issues without reporting a direct or indirect harm event.
Thumbnail Image

Avrupa Parlamentosu, yapay zeka yasasını kabul etti

2024-03-13
En Son Haber
Why's our monitor labelling this an incident or hazard?
The article focuses on the legislative approval of AI regulations by the European Parliament, detailing the rules and restrictions that will govern AI systems in the EU. It does not describe any specific event where an AI system caused harm or a plausible imminent harm scenario. Instead, it outlines societal and governance responses to AI, including risk-based regulation and transparency requirements. Therefore, this is Complementary Information as it provides important context and updates on AI governance but does not report a new AI Incident or AI Hazard.
Thumbnail Image

Avrupa Birliği'nde 'yapay zeka' kanunu onaylandı... İhlal edenlere 35 milyon euro ceza

2024-03-13
bigpara.hurriyet.com.tr
Why's our monitor labelling this an incident or hazard?
The article discusses a new AI law passed by the European Parliament that sets rules and penalties to mitigate potential harms from AI systems. It does not report any actual harm or incident caused by AI systems but rather outlines the regulatory response to anticipated risks. Therefore, it is not an AI Incident or AI Hazard but rather Complementary Information providing context on governance and societal responses to AI risks.
Thumbnail Image

Avrupa Birliği'nde 'yapay zeka' kanunu onaylandı... İhlal edenlere 35 milyon euro ceza

2024-03-13
Hürriyet
Why's our monitor labelling this an incident or hazard?
The article describes the enactment of a regulatory framework (the AI Act) by the EU Parliament to address potential and existing risks from AI systems. It does not report a specific AI Incident (harm caused by AI) or an AI Hazard (a specific event with plausible future harm). Instead, it focuses on societal and governance responses to AI-related risks, including rules, compliance requirements, and penalties. Therefore, it fits the definition of Complementary Information, as it provides important context and updates on AI governance without describing a particular incident or hazard.
Thumbnail Image

Avrupa Parlamentosu "Yapay Zeka Yasası"nı onayladı

2024-03-13
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The article focuses on the legislative approval of a regulatory framework for AI systems in the EU, which is a governance response to AI-related risks and harms. It does not describe a specific AI system causing harm or an event where AI use or malfunction led to harm. Nor does it describe a plausible imminent harm event. Therefore, it fits the definition of Complementary Information, as it provides important context and societal/governance response to AI risks and impacts, but does not report a new AI Incident or AI Hazard.
Thumbnail Image

Avrupa Parlamentosu, yapay zeka yasasını ezici çoğunlukla onayladı

2024-03-13
euronews
Why's our monitor labelling this an incident or hazard?
The article describes the legislative approval of an AI regulatory framework by the European Parliament, focusing on future rules and oversight mechanisms for AI systems. It does not report any actual incident or harm caused by AI systems, nor does it describe a specific event where AI use or malfunction led to injury, rights violations, or other harms. Instead, it details a governance response aimed at preventing such harms. Therefore, it fits the definition of Complementary Information as it provides important context and updates on AI governance but does not describe a new AI Incident or AI Hazard.
Thumbnail Image

AB Parlamentosu, yapay zeka yasasını kabul etti

2024-03-13
CNN Türk
Why's our monitor labelling this an incident or hazard?
The article does not report any specific AI incident or harm that has occurred, nor does it describe a particular AI hazard event with imminent risk. Instead, it details a legislative measure addressing AI risks and harms through regulation and safeguards. Therefore, it fits the definition of Complementary Information, as it provides important context on societal and governance responses to AI, enhancing understanding of AI risk management but not describing a direct or potential harm event itself.
Thumbnail Image

Avrupa Parlamentosu 'Yapay Zeka Yasası'nı onayladı

2024-03-13
Yeni Evrensel Gazetesi
Why's our monitor labelling this an incident or hazard?
The event describes a legislative and governance response to AI technologies, focusing on regulation and risk management to prevent potential harms. It does not report any realized harm or incident caused by AI systems, nor does it describe a plausible immediate hazard. Therefore, it fits the definition of Complementary Information as it provides important context and updates on AI governance without detailing a specific AI Incident or AI Hazard.
Thumbnail Image

Yapay zekayı kullandı: Ölen annesiyle sohbet etti

2024-03-13
Cumhuriyet
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, used to simulate conversations with a deceased person. However, the article does not describe any realized harm or violation of rights, nor does it indicate a credible risk of future harm. The user's experience of 'creepy' results is subjective and does not constitute harm under the framework. The AI system's use here is personal and does not lead to injury, rights violations, or community harm. Hence, the event is not an AI Incident or AI Hazard but rather Complementary Information illustrating AI's societal and emotional implications.
Thumbnail Image

Avrupa Parlamentosu 'yapay zeka yasası'nı onayladı | Avrupa Haberleri

2024-03-13
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
This event describes a governance and regulatory response to AI technologies, detailing new legal frameworks and rules for AI system use and deployment. It does not describe a specific AI Incident or AI Hazard but rather a policy development aimed at managing AI risks and harms. Therefore, it fits the definition of Complementary Information as it provides important context and updates on AI governance without reporting a direct or potential harm event.
Thumbnail Image

AP'den ''Yapay Zeka Yasası''na onay

2024-03-13
Star.com.tr
Why's our monitor labelling this an incident or hazard?
This event describes a governance and regulatory response to AI technologies, focusing on preventing potential harms by imposing rules and safeguards. It does not describe a specific AI incident or hazard where harm has occurred or is imminent, nor does it report on a particular AI system malfunction or misuse causing harm. Instead, it provides complementary information about societal and legal measures addressing AI risks and impacts, enhancing understanding of the AI ecosystem and future risk management.
Thumbnail Image

Bir bu eksikti, yapay zeka virüs mü bulaştırıyor?

2024-03-12
Teknolojioku
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (AI-based chat platforms) being targeted by a malware that itself uses AI and machine learning techniques. The malware's use has directly led to harms including personal data theft, propaganda dissemination, and phishing attacks, which are violations of rights and harm to communities. The article clearly states these harms are occurring, not just potential. Hence, this is an AI Incident as per the definitions provided.
Thumbnail Image

Avrupa Parlamentosu'ndan yapay zeka yasasına onay

2024-03-13
Samanyoluhaber.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (ChatGPT, Gemini) and addresses their regulation to prevent harms such as rights violations, manipulation, and risks in high-stakes domains. However, the article does not describe any realized harm or specific incident caused by AI, nor does it describe a plausible immediate hazard event. Instead, it details a legislative and governance response to AI risks, which fits the definition of Complementary Information as it provides context and societal/governance responses to AI-related issues.
Thumbnail Image

Avrupa Parlamentosu'ndan 'Yapay Zeka' yasakları... Listede neler var

2024-03-13
Haber Sitesi ODATV
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly and addresses their development and use. However, it does not report any realized harm or incident caused by AI, but rather the establishment of legal frameworks to mitigate potential harms and regulate AI use. This fits the definition of Complementary Information, as it provides governance and societal response context to AI risks and harms without describing a new AI Incident or AI Hazard.