EU Debates AI-Powered Scanning of Private Messages for Child Protection

The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

The European Union is debating a legislative proposal that would require online platforms to use AI to scan private messages for child sexual abuse material. Child protection groups support the measure, but critics warn that it poses significant privacy risks and could enable mass surveillance if implemented. No actual harm has occurred yet. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems in the form of automated detection technologies for illegal content on messaging platforms, which could plausibly lead to harms such as privacy violations or misuse by authoritarian regimes. However, no actual harm or incident has yet occurred as the proposal is still under discussion and not implemented. Therefore, this qualifies as an AI Hazard because it describes a credible risk of future harm stemming from the use of AI systems for scanning private communications, but no direct or indirect harm has materialized yet. It is not Complementary Information because the article is not updating or responding to a past incident but discussing a potential future risk. It is not an AI Incident because no harm has occurred. [AI generated]
AI principles
Privacy & data governance, Respect of human rights, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers, General public

Harm types
Human or fundamental rights, Public interest

Severity
AI hazard

Business function
Monitoring and quality control

AI system task
Recognition/object detection


Articles about this incident or hazard

Child protection or privacy? The EU reignites an explosive debate

2025-10-08
Courrier international
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of automated detection technologies for illegal content on messaging platforms, which could plausibly lead to harms such as privacy violations or misuse by authoritarian regimes. However, no actual harm or incident has yet occurred as the proposal is still under discussion and not implemented. Therefore, this qualifies as an AI Hazard because it describes a credible risk of future harm stemming from the use of AI systems for scanning private communications, but no direct or indirect harm has materialized yet. It is not Complementary Information because the article is not updating or responding to a past incident but discussing a potential future risk. It is not an AI Incident because no harm has occurred.

Child protection vs privacy: decision time for EU

2025-10-08
Digital Journal
Why's our monitor labelling this an incident or hazard?
The article describes a legislative proposal involving AI-based scanning technology to detect child sexual abuse material in private messages. While the technology's use could plausibly lead to harms such as privacy violations or misuse by authoritarian regimes, no actual harm or incident has occurred yet. The focus is on the potential risks and the policy decision process, not on a realized incident or harm. Therefore, this qualifies as an AI Hazard, as the development and potential use of AI systems for scanning private communications could plausibly lead to significant harms if implemented.

"Chat control": will a European regulation place messaging apps under surveillance?

2025-10-08
RTL.fr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for automated detection of illegal content in private messages, which could plausibly lead to violations of privacy rights and surveillance harms if implemented. However, since the regulation is still under debate and no actual deployment or harm has occurred, this constitutes a plausible future risk rather than a realized incident. Therefore, it fits the definition of an AI Hazard, as the development and potential use of AI systems for mass surveillance and content scanning could plausibly lead to significant harms to privacy and fundamental rights.

Child protection vs privacy: decision time for EU

2025-10-08
SpaceDaily
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for scanning and detecting illegal content, which could plausibly lead to harms such as privacy violations and misuse by authoritarian regimes. However, no actual harm or incident has occurred yet; the proposal is still under debate and not yet implemented. Therefore, this situation represents a potential risk or hazard rather than an incident. The article primarily discusses the potential future impact and societal/governance responses, fitting the definition of an AI Hazard.

Child protection vs privacy: decision time for EU

2025-10-08
Mountain Democrat
Why's our monitor labelling this an incident or hazard?
The event involves the potential use of AI systems to scan private messages for child sexual abuse material, which could plausibly lead to violations of privacy rights and other harms if implemented. Since the proposal is still under debate and no actual deployment or harm has been reported, it fits the definition of an AI Hazard rather than an AI Incident. The article does not describe a realized harm but highlights the credible risk and societal concerns about privacy and misuse, consistent with an AI Hazard classification.

Child protection vs privacy: decision time for EU

2025-10-08
FOX 28 Spokane
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI or automated detection systems to scan private messages for illegal content, which is an AI system use case. The legislative proposal could plausibly lead to violations of privacy rights and potential misuse, constituting a credible risk of harm. However, since the proposal is still under debate and no actual deployment or harm has occurred, it fits the definition of an AI Hazard rather than an AI Incident. The article focuses on the potential implications and ongoing policy discussions rather than reporting an incident of harm or misuse.

Chat Control: what is this digital bill that is deeply dividing Europe?

2025-10-13
Orange Actualités
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (client-side scanning with AI) in a way that could plausibly lead to violations of human rights, specifically privacy rights and freedoms, if implemented. Although the harm is not yet realized, the credible risk of mass surveillance and intrusion into private communications meets the definition of an AI Hazard. The article focuses on the potential negative consequences and societal debate rather than reporting an actual incident of harm, so it is not an AI Incident. It is not merely complementary information or unrelated news, as the AI system's potential for harm is central to the discussion.

Child sexual abuse: with "Chat Control", the EU is accused of threatening citizens' privacy

2025-10-12
France 24
Why's our monitor labelling this an incident or hazard?
The event involves the proposed use of AI systems for automatic detection of illegal content in encrypted communications, which could plausibly lead to significant harms such as violations of privacy rights and mass surveillance if implemented. Since the regulation is still under discussion and no deployment or harm has yet occurred, it does not qualify as an AI Incident. The credible risk and debate about potential harms make it an AI Hazard. The article does not describe a response or update to a past incident, so it is not Complementary Information. It is clearly related to AI systems and their societal impact, so it is not Unrelated.

Chat Control: contested but at our doorstep, why this digital bill is deeply dividing Europe

2025-10-13
BFMTV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (client-side scanning algorithms) for automated detection of illegal content in private communications, including encrypted messages. The AI system's use is intended but not yet fully implemented, so no realized harm is reported yet. However, the article details credible concerns about potential harms such as mass surveillance, privacy violations, and erosion of fundamental rights, which are plausible outcomes of deploying such AI systems at scale. Hence, it fits the definition of an AI Hazard rather than an AI Incident. The article also includes extensive societal and governance responses and debates, but the primary focus is on the potential risks posed by the AI-enabled surveillance system.

Child sexual abuse: "Chat Control", the tool that scans your messaging apps, sparks controversy

2025-10-12
Le Point.fr
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically AI-based client-side scanning tools used to detect illegal content in encrypted messages. The use of AI is central to the proposed regulation. However, the article does not describe any actual harm or incident caused by these AI systems; rather, it discusses the potential risks and controversies surrounding their deployment, including privacy concerns and possible errors. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to significant harms such as privacy violations and surveillance, but no direct harm has yet been reported. It is not Complementary Information because the article is not an update or response to a past incident, nor is it unrelated as it clearly involves AI and its societal implications.

Chat Control: an unlikely alliance against a European "Big Brother"

2025-10-13
LExpress.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly references AI-related technologies like client-side scanning and automated detection systems intended to scan private communications, including encrypted messages. These AI systems are central to the proposed regulation's enforcement. Although the regulation has not been enacted and no direct harm has yet occurred, the article outlines credible risks of significant harm to privacy, fundamental rights, and security if the system is deployed. The concerns about weakening encryption and enabling mass surveillance are plausible future harms directly linked to the AI system's use. Since no actual harm has materialized yet, but the risk is credible and significant, the event fits the definition of an AI Hazard.

Chat Control: Europe torn between child protection and privacy

2025-10-13
Siècle Digital
Why's our monitor labelling this an incident or hazard?
The event involves the proposed use of AI-based client-side scanning algorithms to detect illegal content, which is an AI system development and intended use. The regulation is not yet in force, so no direct harm has occurred, but the potential for significant harm to privacy and human rights is credible and widely debated. The article highlights the plausible future harms from this AI system's deployment, including mass surveillance and privacy breaches, fitting the definition of an AI Hazard rather than an Incident or Complementary Information. There is no indication that the system has malfunctioned or caused harm yet, and the main focus is on the potential risks and societal debate.

Chat Control: Germany halts the EU's surveillance of messaging apps

2025-10-13
Economie Matin
Why's our monitor labelling this an incident or hazard?
The article centers on the political and legal rejection of an AI-enabled surveillance system (client-side scanning) before it was implemented, thus no direct or indirect harm has occurred. The AI system's development and intended use are discussed, but the project is currently blocked, so no incident or hazard has materialized. The discussion of risks and privacy concerns constitutes a governance and societal response to a potential AI hazard, fitting the definition of Complementary Information rather than an Incident or Hazard.

Chat Control: the European regulation that targets "the confidentiality of communications"

2025-10-13
Basta!
Why's our monitor labelling this an incident or hazard?
The event involves the potential use of AI systems to scan private communications, which could plausibly lead to violations of privacy and human rights if implemented. However, since the regulation is still in the proposal stage and no actual scanning or harm has occurred, this constitutes a plausible future risk rather than a realized incident. Therefore, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Chat Control: the text will not be presented to the EU Council on 14 October, but it will be later

2025-10-13
alloforfait.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI-related system conceptually, as it entails automated scanning of messages which likely involves AI technologies for detection. However, since the proposal has not yet been presented or implemented, and no harm has occurred, the event represents a plausible future risk rather than an actual incident. Therefore, it qualifies as an AI Hazard because the development and potential use of such AI-powered scanning systems could plausibly lead to violations of privacy rights and other harms if enacted.

What Chat Control means for your privacy

2025-10-14
IT Security News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of automated scanning for illegal content, which is a use of AI technology. However, the article focuses on the potential risks and privacy concerns that could arise from implementing such systems, rather than describing an actual incident where harm has occurred. Therefore, it fits the definition of an AI Hazard, as the development and use of these AI-based scanning systems could plausibly lead to harms such as privacy violations and security vulnerabilities in the future.

The vote on Chat Control has been postponed, but the "fight isn't over" yet - here's what we know

2025-10-13
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves the potential use of AI systems (automated scanning of encrypted private chats) that could plausibly lead to significant harms including violations of privacy rights, security vulnerabilities, and surveillance abuses. Since the regulation is not yet enacted and no harm has yet occurred, this situation constitutes an AI Hazard rather than an AI Incident. The article focuses on the ongoing debate and the plausible future risks rather than reporting an actual incident of harm. Therefore, the classification is AI Hazard.

EU Chat Control, Explained: Privacy, Encryption and Crypto Risks

2025-10-11
Cointelegraph
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (automated client-side scanning) in the development and intended use of the Chat Control regulation. Although no direct harm has yet occurred, the article outlines credible and significant potential harms including violations of privacy rights, weakening of encryption security, and risks to crypto security. These harms are plausible future outcomes if the regulation is enacted and the AI scanning systems are deployed. The article does not report an actual incident or realized harm but focuses on the potential risks and societal implications, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

EU Chat Control law could scan private chats

2025-10-13
Euro Weekly News Spain
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for automated scanning of private communications, which is explicitly described. The scanning could lead to violations of privacy and human rights, which are harms under the framework. Since the law is still under debate and not yet in effect, no direct harm has occurred, but the proposal plausibly could lead to significant harm if enacted. Therefore, this qualifies as an AI Hazard because it describes a credible future risk stemming from AI-enabled surveillance mandated by law.

Digital privacy under threat? Our private messages might be monitored under pretext of child protection

2025-10-10
Daily News Hungary
Why's our monitor labelling this an incident or hazard?
The proposal involves the use of AI or algorithmic scanning systems to monitor private messages, which fits the definition of an AI system. The legislative proposal's implementation would enable mass surveillance, potentially violating human rights and privacy, which are harms under the framework. Since the proposal is still under debate and not yet enacted, no direct harm has occurred, but the credible risk of future harm is clear. Thus, this is best classified as an AI Hazard, reflecting the plausible future harm from AI-enabled client-side scanning and surveillance.

EU postpones vote on Chat Control, but controversial idea still alive

2025-10-14
Cybernews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for scanning and detecting illegal content in encrypted messages, which is a clear AI system involvement. The regulation's use could lead to violations of privacy rights and breaches of legal protections, constituting harm to human rights if implemented. Since the vote has been postponed and the regulation is not yet in effect, no realized harm has occurred, but the potential for significant harm remains credible. Therefore, this situation qualifies as an AI Hazard, as the development and potential use of AI systems for pervasive content scanning could plausibly lead to an AI Incident involving privacy violations and undermining of encryption.

Privacy versus child protection: the EU's plans to monitor online conversations

2025-10-16
ΠΟΛΙΤΗΣ
Why's our monitor labelling this an incident or hazard?
The event involves the proposed use of AI systems (algorithms scanning online communications) for detecting child sexual abuse material, which could plausibly lead to significant harms such as violations of privacy and human rights if implemented improperly. However, since the proposal has not yet been adopted or deployed, and no actual harm has been reported, this constitutes an AI Hazard rather than an AI Incident. The article primarily reports on the ongoing political and societal debate and the potential risks, not on a realized incident or harm. Therefore, the classification is AI Hazard.

The EU's "chat control" plans to monitor online conversations divide member states

2025-10-16
ΑΘΗΝΑ 9,84
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (algorithms scanning chat messages for CSAM) whose use is proposed but not yet implemented. The article does not report any realized harm from these AI systems but highlights credible concerns about potential privacy violations and mass surveillance if the regulation is enacted. This fits the definition of an AI Hazard, where the development or use of AI systems could plausibly lead to harm (privacy violations, breach of fundamental rights). Since no actual harm has occurred yet, it is not an AI Incident. The article is not merely complementary information because it focuses on the legislative proposal and its implications rather than updates or responses to past incidents. Therefore, the classification is AI Hazard.

The EU's plan to protect children online: the "chat control" that senior officials are rejecting

2025-10-16
emakedonia.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of algorithms (AI systems) for scanning internet communications to detect child sexual abuse material, which involves AI system use. However, it does not describe any actual incident of harm caused by these AI systems, nor does it describe a specific event where harm was narrowly avoided. Instead, it focuses on the ongoing legislative process, political disagreements, and societal debates about privacy and surveillance risks. This fits the definition of Complementary Information, which includes governance responses, policy debates, and societal reactions related to AI systems and their impacts. There is no direct or indirect harm reported, nor a plausible immediate hazard event described. Hence, the classification is Complementary Information.

Privacy versus child protection: the EU's "chat control" plans and the surveillance of online conversations

2025-10-16
Fibernews
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI algorithms to monitor online communications for child sexual abuse material, which involves AI system use. However, no actual harm or incident has occurred yet; the proposal is still under negotiation and has not been implemented. The potential harm includes privacy violations and mass surveillance, which could plausibly lead to violations of human rights and harm to communities if enacted. Since the harm is potential and the AI system's deployment is not yet realized, this fits the definition of an AI Hazard. It is not Complementary Information because the article's main focus is on the legislative proposal and its implications, not on updates or responses to a past incident. It is not Unrelated because AI systems are central to the proposal.

Privacy: the EU drops a flagship measure from a text against child sexual abuse

2025-10-30
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI-enabled technology designed to scan private messages for illegal content, which is an AI system. However, the measure to mandate its use has been withdrawn before implementation, so no direct or indirect harm from the AI system has occurred. The event centers on policy decisions and societal/governance responses to the potential harms of AI surveillance technology, including privacy concerns and child protection. Therefore, it does not describe an AI Incident (no harm realized) nor an AI Hazard (no plausible future harm from the measure as it is withdrawn). Instead, it provides complementary information about AI governance and regulatory developments.

Privacy: Brussels drops "Chat Control", a controversial measure aimed at fighting child sexual abuse

2025-10-30
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The proposed 'Chat Control' measure would have mandated AI-based detection of illegal content, implicating AI system use in monitoring private communications. Although the measure was controversial and ultimately abandoned, its development and intended use posed a credible risk of human rights violations and privacy harms. Since no actual harm occurred due to the abandonment, the event represents a plausible future harm that is now being prevented, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

Child sexual abuse: the European Union drops a text's flagship measure

2025-10-30
Ouest France
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of content detection technology intended to scan private communications for illegal content. However, since the EU has decided to abandon this measure before implementation, no direct or indirect harm has occurred. The article highlights the potential privacy and cybersecurity risks that such AI systems could pose if deployed, but these remain hypothetical. Thus, the event qualifies as an AI Hazard because it concerns a plausible future risk from AI systems that was ultimately not realized due to policy decisions. It is not an AI Incident because no harm has occurred, nor is it Complementary Information or Unrelated since it directly addresses AI-related policy and potential harms.

Privacy: the EU drops a flagship measure from a text against child sexual abuse

2025-10-30
RTBF
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses a proposed AI-enabled measure to scan private messages for illegal content, which involves AI system use. However, the measure is being withdrawn before implementation, so no direct or indirect harm has occurred. The event is a governance response to concerns about privacy and rights, thus it fits the definition of Complementary Information as it updates on societal and policy responses to AI-related issues rather than reporting an AI Incident or AI Hazard.

Privacy: the EU drops a flagship measure from a text against child sexual abuse

2025-10-30
TV5MONDE
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of content scanning technologies designed to detect illegal material in encrypted messages. However, the article does not report any realized harm or incident caused by these AI systems. Instead, it describes a policy decision to withdraw a proposed mandatory scanning measure due to privacy concerns, reflecting a governance and societal response to potential AI-related harms. Therefore, this is best classified as Complementary Information, as it provides context and updates on AI governance and societal debates without describing a specific AI Incident or AI Hazard.

"Chat Control": the EU drops a flagship measure from a text against child sexual abuse

2025-10-30
Le Soir
Why's our monitor labelling this an incident or hazard?
The event involves AI systems intended to scan private messages for illegal content, which could have led to violations of privacy and human rights (harm category c). Since the measure was proposed but not implemented, no actual harm occurred, but the potential for harm was credible and significant. The EU's decision to abandon the measure reduces this risk. Hence, this is best classified as Complementary Information, as it provides an update on governance and policy responses to AI-related privacy and surveillance concerns, rather than reporting an actual AI Incident or an ongoing AI Hazard.

Privacy: the EU drops a flagship measure from a text against child sexual abuse

2025-10-30
Mediapart
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses a proposed AI-enabled technology designed to scan private communications for illegal content, which involves AI system use. The measure was not implemented and has now been withdrawn due to privacy and cybersecurity concerns, meaning no realized harm occurred. The potential for harm to privacy and human rights was credible and significant, fitting the definition of an AI Hazard. Since the event concerns the renouncement of a measure before harm occurred, it is not an AI Incident. It is not merely complementary information because the main focus is on the withdrawal of a hazardous AI-enabled measure, not on responses or updates to past incidents. Therefore, the classification is AI Hazard.

Chat Control: the EU drops the controversial measure to scan private messages

2025-10-30
RTL.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses a proposed AI-enabled technology to scan private messages, including encrypted ones, to detect child sexual abuse content. This involves AI systems analyzing user communications, which fits the definition of an AI system. The measure was intended to be mandatory but has been withdrawn before implementation, so no actual harm (such as privacy violations or rights breaches) has occurred yet. The potential for harm was significant, including violations of privacy and fundamental rights, but since the measure was abandoned, the event is best classified as an AI Hazard, reflecting a credible risk that was averted or prevented.

The European Union has decided: chat scanning voluntary for now

2025-10-31
Svet24.si
Why's our monitor labelling this an incident or hazard?
The event involves AI-related technology (automated detection of abusive content likely involving AI systems), but the article centers on legislative proposals and political debate rather than a realized harm or a concrete incident involving AI malfunction or misuse. Since no direct or indirect harm has occurred, and the article does not describe a credible imminent risk or near miss, it does not qualify as an AI Incident or AI Hazard. The content is best classified as Complementary Information because it provides context on governance and societal responses to AI-related issues in online communications.

The EU proposes that companies search for sexual abuse imagery in online chats on a voluntary basis

2025-10-31
MMC RTV Slovenija
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of automated detection of illegal content, which is a use of AI technology. However, the article does not describe any actual incident or harm caused by such AI systems, nor does it report a near miss or credible risk materializing at this time. Instead, it focuses on policy discussions and proposals, including voluntary versus mandatory scanning and privacy concerns. Therefore, this is best classified as Complementary Information, as it provides context and updates on governance and societal responses related to AI use in content moderation, without describing a specific AI Incident or AI Hazard.

U-turn in the EU: a voluntary approach instead of mandatory chat scanning

2025-10-31
Dnevnik
Why's our monitor labelling this an incident or hazard?
The article centers on the policy debate about mandatory versus voluntary AI-enabled detection of illegal content in online chats. While AI systems are implied as the technology used for scanning content, no actual incident or harm has occurred or is described. The discussion is about potential future regulatory frameworks and the balance between privacy and child protection. This fits the definition of an AI Hazard, as the development and use of AI systems for content detection could plausibly lead to incidents involving privacy violations or other harms if mandatory scanning is implemented. However, since no harm has yet occurred, and the focus is on potential future risks and regulatory approaches, the classification is AI Hazard.

The EU drops the option of mandatory scanning of online chats

2025-10-31
STA d.o.o.
Why's our monitor labelling this an incident or hazard?
The event involves AI systems for detecting illegal content, but no direct or indirect harm has occurred or is described. The focus is on policy and governance decisions regarding AI deployment and privacy, which fits the definition of Complementary Information as it provides context and updates on societal and governance responses to AI-related issues without reporting a new incident or hazard.