European Parliament Disables AI Functions on Lawmakers' Devices Over Security Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The European Parliament has disabled AI features, including virtual assistants and writing tools, on lawmakers' tablets and work devices due to cybersecurity and data protection concerns. The precautionary measure follows internal risk assessments about potential data exposure to external AI service providers. No actual harm has been reported. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems (AI features on devices) and their use, but no direct or indirect harm has occurred. The European Parliament's decision to disable these features is a precaution against potential cybersecurity and data protection risks, indicating a plausible risk of harm in the future if these AI functions were left enabled. Therefore, this qualifies as an AI Hazard, as it concerns a credible potential for harm related to AI system use, but no incident has yet materialized. [AI generated]
AI principles
Privacy & data governance; Robustness & digital security

Industries
Government, security, and defence

Severity
AI hazard

Business function:
Other

AI system task:
Interaction support/chatbots; Content generation


Articles about this incident or hazard


Media: European Parliament restricts AI use on official devices

2026-02-16
Європейська правда
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI functions on devices) and their use, but no direct or indirect harm has occurred. The disabling of AI features is a precautionary step to prevent possible data breaches or cybersecurity issues. This fits the definition of Complementary Information, as it details a governance response to potential AI risks rather than reporting an AI Incident or AI Hazard.

European Parliament blocks AI on staff devices over cybersecurity fears -- Politico

2026-02-17
InternetUA
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI features on devices) and their use, but no direct or indirect harm has occurred. The European Parliament's decision to disable these features is a precaution against potential cybersecurity and data protection risks, indicating a plausible risk of harm in the future if these AI functions were left enabled. Therefore, this qualifies as an AI Hazard, as it concerns a credible potential for harm related to AI system use, but no incident has yet materialized.

European Parliament blocks AI features on MEPs' mobile devices

2026-02-17
ipress.ua
Why's our monitor labelling this an incident or hazard?
This event involves the use of AI systems (AI features on mobile devices) and concerns about data security risks related to their use. However, there is no indication that any harm has occurred yet; rather, this is a precautionary measure to prevent potential data breaches or privacy violations. Therefore, it represents a plausible risk of harm due to AI system use, but no realized harm is reported. This fits the definition of an AI Hazard, as the event describes circumstances where AI system use could plausibly lead to harm (data security issues) but no incident has occurred yet.

European Parliament blocks artificial intelligence features on employees' work devices -- Politico

2026-02-17
ms.detector.media
Why's our monitor labelling this an incident or hazard?
The event describes a decision to disable AI features on work devices to prevent possible data security and privacy issues. There is no indication that any harm has occurred yet, only a precautionary measure based on plausible risks. The involvement of AI systems is clear, but the focus is on preventing potential future harm rather than responding to an incident. Therefore, this qualifies as Complementary Information, as it provides context on governance and risk management related to AI use in a workplace setting without reporting an AI Incident or AI Hazard.

European Parliament introduces restrictions on AI use on work devices -- media.

2026-02-17
www.BIN.com.ua Business Information Network
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the disabled features are AI functions embedded in work devices. The event concerns the use of AI and its potential risks to data security and privacy. However, no actual harm has been reported; the action is precautionary to prevent possible cybersecurity incidents or data protection violations. Therefore, this event represents a plausible risk of harm from AI use, qualifying it as an AI Hazard rather than an Incident. It is not merely general AI news or a response to a past incident, so it is not Complementary Information.

Brussels switches off AI on MEPs' tablets

2026-02-16
censor.net
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI functions on tablets) and their development/use context (use of cloud services for data processing). However, no actual harm or incident has occurred; rather, the institution is proactively mitigating potential risks related to data protection. Therefore, this situation represents a plausible risk of harm (data breaches or privacy violations) that could arise if AI functions were used without sufficient safeguards. As such, it fits the definition of an AI Hazard, not an AI Incident or Complementary Information, since it is about potential harm and risk mitigation rather than a response to a past incident or a general update.

European Parliament switches off AI features on the work devices of MEPs and their assistants: here's the reason

2026-02-18
OBOZREVATEL
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI features such as writing assistants and virtual helpers) whose data handling raised security concerns. However, no actual harm or incident has occurred; rather, the Parliament proactively disabled these features to prevent possible cybersecurity threats and data privacy violations. Therefore, this is a plausible risk scenario where AI use could lead to harm if not mitigated, fitting the definition of an AI Hazard. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated since it directly concerns AI system use and associated risks.

European Parliament disables AI features on work devices

2026-02-17
NewsBytes
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (built-in AI features on devices) and concerns about data being sent to cloud servers, which could pose cybersecurity and data protection risks. However, there is no indication that any harm has occurred yet. The action taken is a preventive measure to mitigate plausible future risks. Therefore, this event is best classified as Complementary Information, as it details a governance response to AI-related concerns without describing an AI Incident or AI Hazard.

European Parliament Disables AI Features on Work Devices Over Security Risks

2026-02-17
Windows Report
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI assistants with cloud-based processing) and concerns about their use leading to potential data exposure and cybersecurity risks. However, there is no indication that any actual harm has occurred yet; the Parliament is still assessing the situation and has taken precautionary steps to prevent possible incidents. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm (data breaches, violations of data protection laws), but no direct or indirect harm has been reported so far.

European Parliament bars lawmakers from AI tools

2026-02-17
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI assistants performing email summarization) and concerns about data privacy risks related to their cloud-based operation. However, there is no indication that any harm has occurred yet. The Parliament's decision to disable these features is a preventive measure to avoid potential data breaches or privacy violations. Therefore, this situation represents a plausible risk of harm from AI use but no realized harm. It fits the definition of an AI Hazard, as the development or use of AI systems could plausibly lead to an AI Incident if data security is compromised, but no incident has occurred so far.

EU Parliament bans AI use on government work devices as security fears rise

2026-02-17
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (built-in AI features relying on cloud services) and their use within government work devices. However, there is no indication that any harm has occurred yet; rather, the Parliament is proactively disabling these features to mitigate potential cybersecurity and data protection risks. This fits the definition of an AI Hazard, as the development or use of AI systems could plausibly lead to harm (e.g., data breaches or security incidents) if not controlled. There is no evidence of direct or indirect harm having occurred, so it is not an AI Incident. The event is more than general AI news or product updates, so it is not Unrelated or Complementary Information.

European Parliament blocks AI on work-issued devices of its members

2026-02-17
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as it concerns AI features on devices, but there is no indication that any harm has occurred due to AI malfunction or misuse. The Parliament's action is a response to plausible cybersecurity risks, aiming to prevent potential incidents. Therefore, this qualifies as Complementary Information, reflecting a governance response to AI-related risks rather than an AI Incident or Hazard.

EU Parliament blocks AI features over cyber, privacy fears

2026-02-16
POLITICO
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI features on devices) and concerns about their use leading to potential data security and privacy harms. However, no actual harm has occurred yet; the Parliament's action is preventive to avoid plausible future harm. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm if not mitigated.

The European Parliament pulls back AI from its own devices

2026-02-17
The Next Web
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (built-in AI features such as writing assistants and summarization tools) whose use has been restricted due to concerns about data security and privacy risks. However, there is no indication that any harm has occurred yet. The Parliament's decision is a preventive measure to avoid potential AI-related data breaches or privacy violations. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm, but no incident has materialized. The article does not describe a realized AI Incident or complementary information about a past incident, nor is it unrelated to AI systems.

EU Parliament halts AI features over security, privacy worries | News.az

2026-02-16
News.az
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (built-in AI features such as writing and summarizing assistants) and their use within the Parliament's IT environment. However, the article describes a precautionary disabling of these AI features due to potential security and privacy risks, with no realized harm or incident reported. Therefore, this is a case where the AI system's use could plausibly lead to harm (e.g., data breaches or privacy violations) if left enabled, but no direct or indirect harm has occurred yet. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Parliament blocks AI features on MEPs' tablets over security fears

2026-02-17
Azeri - Press Informasiya Agentliyi
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of AI features like writing aids and virtual assistants on tablets. However, no actual harm has occurred; the Parliament is acting on security concerns to prevent possible data breaches or privacy violations. This is a plausible risk of harm related to AI use, but no incident has materialized. Therefore, this qualifies as an AI Hazard, as the development or use of AI features could plausibly lead to harm (data exposure, privacy violations) if not controlled, but no direct or indirect harm has yet occurred.

EU Parliament Suspends AI Integration on Corporate Devices Over Cybersecurity Fears - IT Security News

2026-02-17
IT Security News
Why's our monitor labelling this an incident or hazard?
The article indicates that AI systems are involved (AI-powered tools on work devices) and that an internal assessment found potential cybersecurity and data protection risks. However, no realized harm or incident has occurred yet. The Parliament's action is preventive, addressing plausible future risks from AI system vulnerabilities. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm but has not yet done so.

EU Parliament bans AI use on government work devices

2026-02-17
Neowin
Why's our monitor labelling this an incident or hazard?
The European Parliament's disabling of AI features on work devices is a response to concerns about cybersecurity and data protection vulnerabilities associated with AI assistants using cloud services. Although no direct harm has occurred, the decision reflects a credible risk that AI systems could lead to data breaches or privacy violations. The event does not describe an actual AI Incident but a preventive measure against plausible future harm, fitting the definition of an AI Hazard. The mention of past incidents (e.g., CISA director uploading sensitive documents to ChatGPT) provides context but does not change the classification of this event.

EU Parliament Blocks AI features on Corporate Devices Over Cybersecurity Concerns

2026-02-17
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems embedded in corporate devices and their use in processing data via cloud services. The European Parliament's decision to disable these AI features stems from concerns about potential data exposure and cybersecurity risks, indicating a plausible risk of harm. However, there is no indication that any actual harm, such as data breaches or violations of rights, has occurred. The event is primarily about preventing possible future harm from AI system use, fitting the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because AI systems and their risks are central to the event.

European Parliament Blocks AI on Lawmakers' Devices Over Security Fears

2026-02-18
eWEEK
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (built-in AI features like writing assistants and summarizers) and concerns about their use and data security. However, there is no indication that any harm has occurred yet. The Parliament's decision to disable these features is a risk mitigation step to prevent possible future harm related to data privacy and security. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm, but no incident has materialized.

European Parliament switches off AI features over data fears

2026-02-18
Computing
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI features relying on cloud processing) and concerns about data privacy risks. However, there is no indication that any harm has occurred yet, only a plausible risk of harm (data leakage or unauthorized data sharing). The Parliament's disabling of AI features is a preventive measure to avoid potential incidents. Therefore, this qualifies as an AI Hazard, as the AI systems' use could plausibly lead to harm but no incident has materialized.

EU Parliament prohibits AI features on lawmaker devices to mitigate cyber risk

2026-02-19
cyberdaily.au
Why's our monitor labelling this an incident or hazard?
The EU Parliament's disabling of AI features is a governance and security response to potential AI-related cyber risks, which fits the definition of Complementary Information as it describes a societal/governance response to AI hazards. The data breach incident involving the contractor uploading personal data to ChatGPT constitutes an AI Incident because the misuse of the AI system directly led to a data breach affecting personal information, which is a violation of privacy rights (a breach of obligations under applicable law). Therefore, the overall article contains both an AI Incident (the data breach) and Complementary Information (the EU Parliament's preventive measures). Since AI Incidents have priority over Complementary Information, the classification is AI Incident.

MEPs warned to avoid AI on phones and tablets. Strict measures for security reasons

2026-02-17
Digi24
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as AI-powered features (such as writing assistants, summarization tools, and virtual assistants) were used on devices. The event stems from the use of AI systems and the potential security risks they pose by transmitting data to external cloud services. Although no direct harm has occurred, the disabling of these AI functions is a response to plausible cybersecurity and data protection risks that could lead to harm if exploited. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to an incident involving data breaches or privacy violations, but no incident has yet materialized.

European Parliament disables AI on MEPs' work phones and tablets

2026-02-17
GAZETA de SUD
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems integrated into devices and their use, which led to concerns about data security and privacy. However, no direct or indirect harm has occurred yet; the AI functions were disabled to prevent potential risks. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm (data breaches, privacy violations) if not mitigated. The article does not describe an actual incident of harm, but a preventive action based on plausible risk. Therefore, the classification is AI Hazard.

AI features on MEPs' tablets disabled for security reasons - they send data outside the devices

2026-02-17
CursDeGuvernare
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (integrated AI functions on tablets) whose use has been suspended due to security concerns about data transmission outside the devices, which could lead to privacy violations or data breaches. However, no direct or indirect harm has occurred yet; the action is preventive. This fits the definition of an AI Hazard, as the development or use of AI systems could plausibly lead to harm (data privacy breaches) if not mitigated. The article does not describe an actual incident of harm, nor does it focus on responses to a past incident, so it is not an AI Incident or Complementary Information. It is not unrelated because AI systems and their risks are central to the event.

European Parliament disables artificial intelligence features on all work devices of MEPs and staff. The institution's specialists cannot guarantee the security of data processed with these tools. - Biziday

2026-02-17
Biziday
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI functions like writing assistants and text summarizers) whose use has been suspended due to security concerns. However, there is no indication that any harm has occurred yet. The institution's IT department cannot guarantee data security, which implies a plausible risk of harm if these AI functions were used. Therefore, this situation represents an AI Hazard because it plausibly could lead to an AI Incident (e.g., data breaches or privacy violations) if the AI functions were used without adequate safeguards. Since no harm has materialized, it is not an AI Incident. It is also not merely complementary information because the main focus is on the potential risk and preventive action, not on broader ecosystem updates or responses to past incidents.

MEPs left without artificial intelligence on their official devices. Parliament blocks AI features

2026-02-17
spotmedia.ro
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (AI-integrated functions on devices) and concerns their use and potential risks. However, the article does not report any realized harm such as data breaches or violations of rights; rather, it reports a precautionary disabling of AI features to mitigate potential risks. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm (data security breaches or privacy violations) if left enabled, but no direct or indirect harm has materialized yet.

MEPs warned to avoid AI on phones and tablets. Strict measures for security reasons

2026-02-17
Sursazilei.ro
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI functions integrated into devices and the decision to disable them due to concerns about data security and potential exposure to external cloud services. Although no incident of data breach or harm has been reported, the precautionary disabling of AI features reflects recognition of plausible future harm from AI system use. The event is about mitigating risks before harm occurs, fitting the definition of an AI Hazard rather than an Incident. It is not merely complementary information because the main focus is on the potential security risks and the preventive action taken, not on a broader governance response or update.

MEPs barred from using AI on mobile devices. What the security services found - Aktual24

2026-02-17
Aktual24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI functions integrated into devices and their use in processing sensitive data. The decision to disable these AI features stems from concerns that data sent to cloud services could compromise confidentiality, indicating a plausible risk of harm to sensitive information and privacy. Since no actual harm has occurred yet, but there is a credible potential for harm if these AI functions continued to operate, this qualifies as an AI Hazard. The event is not an AI Incident because no realized harm is reported, nor is it merely complementary information or unrelated news.

Parliament blocks AI features: MEPs left without artificial intelligence on official devices - Stiripesurse.md

2026-02-17
Stiripesurse.md
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI-integrated functions such as writing assistants and summarization tools) whose use has been disabled due to concerns about data security risks. However, there is no indication that any harm has occurred yet. The decision is based on a risk assessment and precautionary principle to avoid potential future harm related to data security and privacy. Therefore, this event represents a plausible risk scenario where AI use could lead to harm if not controlled, fitting the definition of an AI Hazard rather than an Incident. It is not merely general AI news or a response to a past incident, so it is not Complementary Information. Hence, the classification is AI Hazard.

European Parliament Blocks AI On Lawmakers' Devices, Citing Security Risks

2026-02-18
UrduPoint
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Anthropic's Claude, Microsoft's Copilot, OpenAI's ChatGPT) and concerns their use in a sensitive context. However, no actual harm or incident has occurred yet; the parliament is acting out of caution to prevent possible future harm related to data security and privacy. Therefore, this is a plausible risk scenario rather than a realized harm. The event is best classified as Complementary Information because it reports a governance response to potential AI-related risks rather than an AI Incident or Hazard itself.

European Parliament blocks AI on lawmakers' devices, citing security risks

2026-02-17
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI chatbots like Claude, Copilot, ChatGPT) and concerns their use could lead to data security and privacy harms. However, the article describes a precautionary blocking of AI tools before any harm has occurred. Therefore, this is an AI Hazard, as the development and use of AI systems could plausibly lead to incidents involving data breaches or privacy violations, but no direct or indirect harm has yet materialized.

European Parliament blocks AI on lawmakers' devices, citing security risks - RocketNews

2026-02-17
RocketNews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI chatbots like ChatGPT, Copilot, Claude) and concerns their use and data handling. However, there is no indication that any harm has occurred due to AI system malfunction or misuse. The European Parliament's action is a risk mitigation strategy to prevent potential data breaches or privacy violations. Therefore, this event represents a plausible future risk (AI Hazard) rather than an actual incident. Since the article focuses on the preventive policy decision rather than an ongoing or past harm, it fits best as an AI Hazard.

European Parliament Blocks Ai On Lawmakers' Devices, Citing Security Risks

2026-02-17
Breaking News, Latest News, US and Canada News, World News, Videos
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems and their use by European Parliament lawmakers. The blocking of AI features is a response to potential cybersecurity and privacy risks, indicating a plausible future harm if AI systems were used without restrictions. Since no direct or indirect harm has occurred yet, and the focus is on preventing possible data exposure and privacy violations, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a concrete action taken due to credible risks, nor is it unrelated as it directly concerns AI system use and associated risks.

EU Parliament restricts AI tool use over cybersecurity concerns

2026-02-19
SC Media
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm caused by AI systems, nor does it report an incident where AI use led to injury, rights violations, or other harms. Instead, it details a precautionary measure taken by the EU Parliament to mitigate potential cybersecurity and privacy risks. This fits the definition of Complementary Information as it provides context on governance responses to AI-related risks without reporting a new AI Incident or AI Hazard.

Don't let them tell you they're only "helping" you: ARTIFICIAL INTELLIGENCE BANNED ON EP COMPUTERS

2026-02-18
vecer.mk
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of embedded AI features (e.g., automatic email summarization, writing assistance) that process data in the cloud. However, no actual harm (such as data breaches or privacy violations) has been reported; rather, the decision is a preventive action to mitigate plausible risks. Therefore, this event does not describe an AI Incident or an AI Hazard but rather a governance and operational response to potential AI-related risks, fitting the definition of Complementary Information.

European Parliament switches off artificial intelligence on official devices - МКД.МК

2026-02-16
МКД.мк
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (embedded AI functions like writing assistants and summarization tools) whose use has been disabled to prevent potential data security risks. There is no indication that any harm has occurred yet, but the decision is based on the plausible risk that these AI functions could lead to data exposure or breaches. Therefore, this qualifies as an AI Hazard because it concerns a credible potential for harm that has not yet materialized. The article focuses on preventive measures and risk assessment rather than an actual incident or realized harm.

European Parliament bans built-in AI on MEPs' devices to protect data - Vecer MK

2026-02-16
vecer.press
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as embedded AI functions such as writing assistants and virtual assistants on official devices. The European Parliament's IT service assessed that these AI functions send data to cloud services, raising concerns about data security and privacy. The disabling of these functions is a precautionary response to potential risks, not a report of realized harm. Hence, the event fits the definition of an AI Hazard, where the development or use of AI systems could plausibly lead to harm (data breaches or privacy violations) but no direct harm has yet occurred. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated as the focus is on a concrete risk management action regarding AI systems.

EP bans built-in artificial intelligence

2026-02-17
Trn.mk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (writing assistants, summarization, virtual assistants) whose use has raised concerns about data security and potential compromise of sensitive information. Although no actual harm or data breach is reported, the decision to disable these AI functions is a response to plausible risks of harm (data compromise and cybersecurity threats). Therefore, this event represents an AI Hazard, as the AI system's use could plausibly lead to an AI Incident if the risks materialize. The event is not an incident because no harm has yet occurred, nor is it merely complementary information or unrelated news.

European Parliament bans use of AI tools on official devices

2026-02-19
fakulteti.mk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI tools with cloud-based processing) and their use on official devices. However, the article describes a preventive restriction to avoid potential privacy breaches and cyberattacks, not an actual incident causing harm. The decision is based on concerns about where data is sent and processed, which could plausibly lead to harm if unmitigated. Therefore, this qualifies as an AI Hazard because it concerns plausible future harm from AI system use, but no realized harm or incident is described. The article mainly reports a governance response to AI risks, not a realized AI Incident or complementary information about past incidents.

The European Parliament bans AI on MEPs' official devices

2026-02-19
utro.mk
Why's our monitor labelling this an incident or hazard?
This event involves the use of AI systems (AI writing assistants, summarization tools, virtual assistants) embedded in official devices. The decision to disable these AI functions stems from concerns about the use of cloud-based AI services that process sensitive official data externally, posing a plausible risk of data breaches or unauthorized data sharing. Although no actual harm has been reported yet, the potential for harm to data security and privacy is credible and significant. Therefore, this event constitutes an AI Hazard, as the development and use of these AI systems could plausibly lead to an AI Incident involving data breaches or violations of data protection rights if left unmitigated. The article focuses on the preventive action taken to mitigate this risk, not on an actual incident or harm that has occurred.

The European Parliament pulls the brake: Here is what is happening!

2026-02-20
prv.mk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (writing assistants and email summarization tools) whose cloud-based processing raises concerns about data privacy and potential cyberattacks. The European Parliament's decision to disable these AI features is a response to the plausible risk of harm, specifically privacy violations and security breaches. Since no actual harm has occurred yet but there is a credible risk, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the preventive measure and the potential risks rather than reporting realized harm or incidents.

The European Parliament disables AI features on devices provided to MEPs

2026-02-19
France Soir
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI functionalities on devices and concerns about data being sent to cloud services, which involves AI system use. The decision to disable these functionalities is motivated by cybersecurity and data protection risks, indicating a plausible risk of harm. Since no actual harm or incident is reported, and the focus is on preventive action, this fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main subject is the potential risk and preventive measure, not a response to a past incident. It is not unrelated because AI systems are clearly involved.

At the European Parliament, the DSI puts AI on pause

2026-02-19
Silicon
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI functionalities such as virtual assistants, text summarization, and synthesis) embedded in mobile devices. The action by the DSI (the Parliament's IT directorate) responds to concerns about data-security risks posed by these AI systems, specifically data transfers to cloud services. However, no realized harm or violation has occurred yet. The blocking is a preventive measure to avoid potential future harm, fitting the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information, because the main focus is the potential risk and the preventive action, not broader ecosystem updates or responses to past incidents.

The European Parliament has disabled the AI features built into the work devices used by its staff, citing unresolved cybersecurity and data protection risks

2026-02-19
Developpez.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems integrated into devices used by European Parliament staff. The decision to disable these AI features is due to concerns about data security and potential exposure of sensitive information to third-party cloud providers, which could plausibly lead to harm such as data breaches or violations of data protection rights. However, the article does not report any actual harm or incident resulting from these AI systems, only the potential for such harm. Therefore, this event fits the definition of an AI Hazard, as it highlights credible risks that could plausibly lead to an AI Incident if unmitigated. The focus is on precautionary governance and risk management rather than on a realized incident.


2026-02-19
next.ink
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI features) and their use within the Parliament's devices. However, there is no indication that any harm has occurred or that the AI systems malfunctioned. The disabling of these features is a preventive measure to avoid possible data protection issues. Therefore, this event is best classified as Complementary Information, as it describes a governance response to potential AI risks rather than an incident or hazard causing or plausibly leading to harm.