Meta Implements Employee Activity Tracking to Train AI Models

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta is installing tracking software on U.S.-based employees' computers to log keystrokes, mouse movements, and screen content for AI training. The initiative, aimed at improving AI agents' ability to perform work tasks, raises concerns about employee privacy and potential labor rights violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI system being trained with employee activity data collected via a new tracking tool. While employees express concerns about potential job cuts and privacy implications, no actual harm or rights violations have been documented as having occurred. The tracking for AI training purposes could plausibly lead to harms such as labor rights violations or privacy breaches, making this an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the new tracking tool and its implications, not on responses or updates to prior incidents. It is not Unrelated because the event clearly involves AI system development and use with potential for harm.[AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Business processes and support services

Affected stakeholders
Workers

Harm types
Human or fundamental rights

Severity
AI hazard


Articles about this incident or hazard

Meta strengthens its AI push as Tulsa, Oklahoma data center breaks ground – MoneyDJ理財網

2026-04-22
MoneyDJ理財網
Why's our monitor labelling this an incident or hazard?
The article focuses on the development and investment in AI-related infrastructure without indicating any direct or indirect harm, malfunction, or plausible future harm caused by AI systems. It is a general news update about AI ecosystem expansion and infrastructure investment, which fits the definition of Complementary Information rather than an Incident or Hazard.

Meta to track workers' clicks and keystrokes to train AI

2026-04-21
BBC
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system being trained with employee activity data collected via a new tracking tool. While employees express concerns about potential job cuts and privacy implications, no actual harm or rights violations have been documented as having occurred. The tracking for AI training purposes could plausibly lead to harms such as labor rights violations or privacy breaches, making this an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is on the new tracking tool and its implications, not on responses or updates to prior incidents. It is not Unrelated because the event clearly involves AI system development and use with potential for harm.

Meta to track workers' clicks and keystrokes to train AI

2026-04-22
BBC
Why's our monitor labelling this an incident or hazard?
An AI system (the tracking tool used to collect employee activity data for AI training) is explicitly involved. The event stems from the use of this AI-related tool. However, there is no evidence of realized harm such as injury, rights violations, or operational disruption. The concerns raised by employees indicate potential future risks related to privacy and labor rights, but these remain speculative at this stage. Hence, the event fits the definition of an AI Hazard, as the development and use of this AI system could plausibly lead to harm, but no harm has yet occurred or been documented.

Meta will capture employees' mouse movements for AI training

2026-04-21
uol.com.br
Why's our monitor labelling this an incident or hazard?
The software described is used to collect detailed user interaction data to train AI models, which qualifies as AI system involvement in development. However, there is no indication that this has directly or indirectly caused harm to employees or others, nor that it has violated rights or laws yet. The article focuses on the deployment of this tracking tool and its purpose, without reporting incidents of harm or complaints. Therefore, this is best classified as Complementary Information, providing context on AI development practices and potential future concerns, but not an AI Incident or Hazard at this time.

Exclusive-Meta to start capturing employee mouse movements, keystrokes for AI training data By Reuters

2026-04-21
Investing.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (tracking software designed to collect detailed user interaction data for AI training) in the workplace. The system's use directly affects employees by capturing sensitive behavioral data, which implicates labor rights and privacy protections. The description indicates the system is already deployed and actively collecting data, meaning harm in the form of rights violations is occurring or has occurred. This fits the definition of an AI Incident under violations of human rights or labor rights. Although Meta claims safeguards and limited use, the intrusive nature of the data collection and potential for misuse or insufficient consent justifies classification as an AI Incident rather than a hazard or complementary information.

Meta to start capturing employee mouse movements, keystrokes for AI training data

2026-04-21
MoneyControl
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Model Capability Initiative) that collects detailed employee behavioral data to train AI models, which is explicitly stated. The use of this AI system directly leads to a violation of labor rights and privacy protections, as it subjects employees to extensive surveillance without clear consent and potentially breaches legal frameworks, especially in Europe. The harm is realized in the form of rights violations and workplace power imbalance. This meets the criteria for an AI Incident under category (c) violations of human rights or labor rights. The event is not merely a potential risk or complementary information but a concrete case of AI-driven labor rights infringement.

Meta will spy on its employees' computers to train its AI

2026-04-21
EL MUNDO
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (training AI agents to perform office tasks) and the collection of employee data through software installed on their work computers. This data collection and use for AI training without explicit employee consent for such purposes, especially in a context of strained labor relations and increased productivity demands, constitutes a violation of labor rights and privacy. The harm is realized as employees are being surveilled and their data used in ways that may breach their rights. Therefore, this is an AI Incident due to the direct involvement of AI system development and use causing harm related to labor rights violations.

Meta starts tracking employee computer use as AI takeover fears grow before layoffs: Story in 5 points

2026-04-22
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the deployment of an AI system (Model Capability Initiative) to collect detailed employee interaction data for AI training purposes. While this raises potential privacy and labor rights concerns, the article does not indicate that any harm has yet occurred or that there has been a breach of rights or other negative outcomes. The data is reportedly not used for performance assessment, and no incidents of misuse or harm are described. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on AI development and organizational responses related to AI adoption and workforce changes, without reporting a specific harm or credible risk of harm.

Meta to track workers' clicks and keystrokes to train AI

2026-04-21
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system being trained with employee activity data collected via a new tracking tool. The involvement of AI in the development and use phases is clear. However, no actual harm has been reported; the concerns are anticipatory, relating to potential job cuts and privacy issues. Since the AI system's use could plausibly lead to harms such as labor rights violations or privacy breaches, this fits the definition of an AI Hazard. It is not Complementary Information because the article is not primarily about responses or governance measures, nor is it unrelated as it directly involves AI system use with potential harm.

Meta will record employees' keystrokes and use them to train its AI models | TechCrunch

2026-04-21
TechCrunch
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of employee interaction data to train AI models, indicating AI system involvement in development and use. However, no direct or indirect harm has been reported yet. The concerns raised are about potential privacy implications, which represent plausible future harm but not a realized incident. Therefore, this event fits the definition of an AI Hazard, as the development and use of this AI system could plausibly lead to privacy-related harms, but no harm has yet occurred or been reported.

Your Work Habits May Be AI's Next Big Dataset

2026-04-22
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to collect and analyze detailed employee behavior data to train AI models, fulfilling the AI system involvement criterion. The use of such data raises privacy and surveillance concerns, which relate to potential violations of fundamental rights and labor rights, fitting the harm categories. However, the article does not report any actual injury, rights violation, or legal penalty having occurred yet, only warnings and concerns from regulators and employees. Thus, the harm is plausible and credible but not realized, making this an AI Hazard. The article also discusses broader implications and regulatory responses, but the primary focus is on the potential for harm from this AI data collection practice.

Read the full memo behind Meta's AI employee tracking rollout

2026-04-21
Business Insider
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system being trained using employee interaction data collected via software installed on work computers. The use of this AI system directly impacts employees' privacy and labor rights, as employees cannot opt out and express discomfort, indicating a violation of rights. The AI system's development and use have directly led to harm in terms of employee rights and workplace conditions. Hence, this qualifies as an AI Incident under the framework, specifically under violations of human rights or labor rights.

Meta to track keystrokes, mouse movements for AI training; employees push back | Company Business News

2026-04-22
mint
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (software collecting data to train AI models). The use of this system is causing employee concern and backlash, indicating potential risks related to privacy and rights. However, the article does not report any actual harm or violation occurring yet, only the rollout and employee reactions. The lack of an opt-out and the nature of data collection suggest plausible future harm, such as privacy violations or misuse of data, fitting the AI Hazard definition. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated since it directly concerns AI system use and potential harm.

Exclusive: Meta to start capturing employee mouse movements, keystrokes for AI training data

2026-04-21
Reuters
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as the tracking software collects behavioral data to train AI models. The event concerns the use and development of AI systems. No actual harm or rights violations are reported, so it is not an AI Incident. However, the nature of the data collection and its potential for privacy or labor rights violations means it could plausibly lead to such harms if safeguards fail or misuse occurs. Hence, it fits the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because the AI system and its data collection are central to the event.

Meta Is Making Workers Train Their AI Replacements

2026-04-21
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
Meta's tracking software collects detailed employee activity data to train AI models intended to replace human workers, leading to layoffs and workforce reductions. This constitutes direct use of AI systems causing harm to labor rights and employment, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as layoffs are already planned and linked to AI deployment. The AI system's role is pivotal in this harm, as it is the tool enabling workforce replacement. Hence, the event is classified as an AI Incident.

Can Your Mouse Clicks Train AI? Meta Tries It With Employees

2026-04-22
TimesNow
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to analyze employee computer interactions, indicating AI system involvement in development and use. However, there is no indication that this has directly or indirectly caused harm such as injury, rights violations, or other harms defined in the framework. The potential for privacy or labor rights concerns exists, but since no harm has yet occurred or been reported, the event is best classified as an AI Hazard, reflecting plausible future harm from the AI system's use in employee monitoring and data collection.

Would you quit? Meta will put keyloggers on employee PCs for AI training

2026-04-21
pcgamer
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI training models requiring real user data) and the deployment of keylogging software to collect detailed employee data. This is a direct use of AI in a way that impacts employee privacy and labor rights, which are protected under applicable laws. The collection of keystrokes and screenshots without clear employee consent or safeguards constitutes a violation of rights. The article describes the event as ongoing or imminent, not merely potential, so it is an AI Incident rather than a hazard. The harm is indirect but real, as employee privacy and labor rights are being compromised through AI-driven surveillance.

Meta to start capturing employee mouse movements, keystrokes for AI training data

2026-04-22
ETTelecom.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the Model Capability Initiative) used to collect detailed behavioral data from employees to train AI models. The use of such invasive monitoring without clear consent or adequate safeguards constitutes a violation of labor rights and privacy, which are protected human rights. The article highlights legal concerns and potential breaches of data protection laws, especially in Europe, indicating that harm to employee rights is occurring or imminent. Since the AI system's use directly leads to these harms, this qualifies as an AI Incident rather than a hazard or complementary information.

Mark Zuckerberg's Meta to all employees in America: We are installing tracking software in your machines as we need your help to ...

2026-04-21
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI agents trained on employee interaction data) and the deployment of tracking software that collects sensitive employee data. This raises potential human rights and labor rights issues, as employee monitoring and data collection without clear consent or transparency can violate rights. However, since no actual harm or complaints are reported, and the article focuses on the announcement and intent rather than realized harm, this situation is best classified as an AI Hazard. It plausibly could lead to violations of rights or other harms if not properly managed, but no incident has yet occurred.

Meta will start tracking employees' screens and keystrokes to train AI tools | Fortune

2026-04-21
Fortune
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems to collect detailed employee interaction data to train AI agents, confirming AI system involvement. However, no harm or violation has been reported or can be reasonably inferred as having occurred yet. The data collection is framed as part of AI development and use, but with safeguards and no mention of misuse or malfunction. While there could be plausible future privacy or labor-related harms, the article does not emphasize or document these risks as imminent or realized. Thus, the event does not meet the criteria for an AI Incident or AI Hazard but provides important complementary information about AI training data practices and corporate AI strategy.

Meta to start capturing employee mouse movements, keystrokes for AI training data - CNBC TV18

2026-04-22
cnbctv18.com
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as Meta is using AI models trained on detailed employee interaction data to build autonomous AI agents. The event stems from the use and development of this AI system through extensive data collection and monitoring. While no explicit harm has been reported yet, the invasive surveillance practices raise credible concerns about privacy violations and labor rights infringements, especially given the legal context in various jurisdictions. This plausible risk of harm aligns with the definition of an AI Hazard rather than an AI Incident, as the harm is potential and not yet realized. The article focuses on the implications and concerns rather than reporting an actual incident of harm or legal breach.

Meta Plans to Turn Its Employees' Clicks and Keystrokes into AI Training Data

2026-04-21
Gizmodo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the Model Capability Initiative) that collects detailed employee activity data to train AI agents for autonomous task performance. The use of AI here is central to the event. Although no direct harm such as layoffs or privacy violations is confirmed, the invasive monitoring and the context of impending layoffs create a credible risk of harm to employees' labor rights and privacy. Since the harms are plausible but not yet realized or documented, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the deployment of the AI system and its potential impacts, not on responses or ecosystem context. It is not unrelated because the AI system and its implications are central to the event.

Meta will capture employees' mouse movements for AI training

2026-04-21
InfoMoney
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems being trained with employee interaction data, confirming AI system involvement. However, it does not report any harm or risk of harm resulting from this activity. The data collection is for model training, and the company claims safeguards and limited use. There is no direct or indirect harm described, nor a credible plausible future harm scenario presented. Thus, it does not meet the criteria for AI Incident or AI Hazard. Instead, it informs about AI development practices and internal data collection, which fits the definition of Complementary Information.

Meta will track employees' keystrokes to train AI models, Reuters reports – By Investing.com

2026-04-21
Investing.com Español
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the deployment of an AI-related monitoring system collecting detailed employee data to train AI models, indicating AI system involvement in development and use. However, there is no mention of actual harm occurring, such as privacy breaches, health issues, or legal violations. The potential for harm exists, especially regarding employee privacy and labor rights, but it remains a plausible future risk rather than a realized incident. The article does not focus on responses or updates to prior incidents, so it is not Complementary Information. Therefore, the classification as an AI Hazard is appropriate.

Meta collects employees' keyboard and mouse activity to train AI, sparking privacy controversy

2026-04-22
工商時報
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI models trained on employee behavior data) and its development through data collection. The collection of detailed employee input data for AI training without clear consent can be considered a violation of privacy rights, which falls under violations of human rights or labor rights. Since the AI system's development and use directly lead to potential harm to employee privacy, this qualifies as an AI Incident.

Meta to track employee keystrokes to train AI models, Reuters reports By Investing.com

2026-04-21
Investing.com UK
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is clear as the data collected is intended to train AI models for autonomous work tasks. The event stems from the use and development of AI systems. However, there is no indication that this has directly or indirectly led to any harm such as violation of rights or other harms defined in the framework. The article does not mention employee complaints, legal issues, or harm caused by this data collection. Therefore, it does not meet the threshold for an AI Incident. It also does not describe a plausible future harm scenario beyond the general concerns about privacy, which are not explicitly stated as risks here. The article mainly provides information about the AI data collection initiative, which fits best as Complementary Information, as it informs about AI development and use practices with potential implications but no realized or imminent harm.

Meta deploys software to log its U.S. employees' clicks and keystrokes so that its SuperIntelligence Labs team can train AI agents capable of performing work tasks autonomously

2026-04-21
EL IMPARCIAL | Noticias de México y el mundo
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it collects detailed interaction data to train autonomous AI agents. The use of this system directly impacts employees by monitoring their activities in a detailed manner, which constitutes a violation of privacy and labor rights under applicable law. The event describes actual deployment and data collection, not just potential risk, so harm is realized. The privacy and labor rights violations fall under category (c) of harms. Hence, this is an AI Incident rather than a hazard or complementary information.

Meta deploys surveillance software to track employees' screen activity

2026-04-21
GEO TV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system being developed and trained using data collected through invasive employee surveillance software. The collection of keystrokes, mouse movements, and screenshots without clear consent or safeguards likely violates employee privacy and labor rights, which are protected under human rights and labor laws. This constitutes a violation of rights (harm category c). Since the AI system's development and use directly rely on this surveillance, the event qualifies as an AI Incident rather than a hazard or complementary information. The harm is realized through the breach of rights due to the surveillance practices tied to AI development.

Meta to start capturing employee mouse movements, keystrokes for AI training data

2026-04-21
GEO TV
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the data collection is explicitly for training AI models to automate work tasks. The event stems from the use and development of AI systems. No direct or indirect harm (such as privacy violations or labor rights breaches) is reported as having occurred. The company claims safeguards and limited use, but the nature of the data collected and its sensitivity imply a credible risk of future harm if misused or if safeguards fail. Hence, this qualifies as an AI Hazard rather than an Incident or Complementary Information.

Report: Meta will train AI agents by tracking employees' mouse, keyboard use

2026-04-21
Ars Technica
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system being developed and used to collect detailed employee interaction data for AI training purposes. Although no direct harm has been reported, the nature of the tracking and data collection could plausibly lead to violations of employee privacy and labor rights, especially given the legal concerns mentioned. Since the harm is potential and not yet realized, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it focuses on the deployment and use of the AI system with potential for harm, nor is it unrelated as it clearly involves AI systems and their impact.

Meta will record its employees' mouse and keyboard movements to train AI

2026-04-21
El Economista
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems being trained with employee interaction data collected via software installed on their computers. The data collection is for AI development purposes, which is the use phase of the AI system lifecycle. Although the article does not describe any realized harm such as privacy breaches or rights violations, the nature of the data collection (tracking detailed user inputs and screenshots) could plausibly lead to violations of labor or privacy rights if misused or inadequately protected. Since no harm has yet materialized, but there is a credible risk, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Meta Installing Software on Employee Computers to Track Everything They Do, Feed the Data to AI

2026-04-22
Futurism
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the deployment of an AI system (the Model Capability Initiative) that collects extensive employee activity data to train AI models for autonomous task completion. This use of AI directly infringes on employee privacy and labor rights, constituting harm under the framework's definition of AI Incident (violation of human rights and labor rights). The surveillance is invasive and ethically problematic, and the data collection is mandatory, which exacerbates the harm. The article also notes the lack of legal protections in the US for such surveillance, reinforcing the significance of the rights violation. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Meta will monitor employees' computers to train AI, report says – Tecnoblog

2026-04-21
Tecnoblog
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (training AI models) through intrusive monitoring of employees without opt-out, leading to a breach of labor rights and privacy. The AI system's development and use directly cause harm by violating employee rights, as evidenced by employee indignation and legal concerns. The monitoring software is explicitly AI-related, collecting data to train AI models for workplace tasks. This meets the criteria for an AI Incident because the AI system's use has directly led to harm (violation of labor rights and privacy).

Mark Gongloff: Meta is making workers train their AI replacements

2026-04-21
ArcaMax
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as being trained on employee computer activity to mimic and replace human work. The use of this AI system is directly linked to layoffs and job cuts at Meta, causing harm to employees' economic and social well-being. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a group of people (employees losing jobs). The article does not merely discuss potential future harm or general AI developments but reports on realized harm due to AI deployment.

Meta collects employees' keyboard and mouse activity to train AI, sparking privacy controversy – MoneyDJ理財網

2026-04-22
MoneyDJ理財網
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system for employee behavior data collection and AI training, which is clearly AI system involvement. The controversy is about privacy concerns, which fall under potential violations of human rights or privacy rights. Since no actual harm or violation has been reported as having occurred, but there is a plausible risk of privacy harm, this situation fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the privacy controversy linked to the AI system's use, not on responses or ecosystem updates. Therefore, the classification is AI Hazard.

Meta will closely watch employee keystrokes for AI training amid layoff speculations: All details

2026-04-22
Digit
Why's our monitor labelling this an incident or hazard?
Meta's use of an AI system to monitor detailed employee activity for AI training involves AI system use and raises privacy concerns. However, the article does not report any actual injury, rights violation, or other harm occurring yet. The concerns are about potential privacy and oversight issues, which could plausibly lead to harm in the future. Hence, this fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving privacy or rights violations, but no such incident has occurred yet.

Meta installs tracking software on employee computers to capture mouse and keyboard data for model training

2026-04-22
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (AI agents) being developed and trained using data collected via tracking software installed on employee computers. This confirms AI system involvement. However, there is no indication that this data collection or AI use has directly or indirectly caused harm to employees or others, such as privacy breaches, health issues, or rights violations. The company states protections are in place and that data is not used for performance evaluation. The event is about ongoing AI development and data collection practices, which is informative but does not describe an incident or a plausible imminent hazard. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Meta will monitor employees to improve its AI

2026-04-21
Tiempo
Why's our monitor labelling this an incident or hazard?
Meta's deployment of software to monitor employees for AI training involves AI system use and development. However, the article does not mention any actual harm or legal violations resulting from this monitoring, nor does it indicate plausible future harm beyond general concerns. The event is informational about AI development practices and internal data collection, fitting the definition of Complementary Information rather than an Incident or Hazard.

Exclusive: Meta to start capturing employee mouse movements, keystrokes for AI training data

2026-04-22
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the Model Capability Initiative) used to collect detailed employee behavioral data for AI training. The use of this system directly impacts employee privacy and labor rights, as it monitors keystrokes, mouse movements, and screen content, which are sensitive personal data. This constitutes a violation of human rights and labor rights protections, fulfilling the criteria for an AI Incident. The article describes the deployment and use of the AI system leading to this harm, not just a potential risk, so it is not merely a hazard or complementary information.

Meta staff protest surveillance software on work PCs

2026-04-22
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system being developed using data collected via invasive surveillance software on employees. The surveillance collects detailed personal and work-related data, which directly infringes on employee privacy rights, a recognized human rights violation. The AI system's development depends on this data collection, making the AI system's use a direct cause of harm. Hence, this is an AI Incident involving violation of human rights through AI system use.

Meta to train AI on employees' clicks and keystrokes, sparking surveillance fears

2026-04-22
The News International
Why's our monitor labelling this an incident or hazard?
An AI system (Model Capability Initiative) is explicitly mentioned as being used to monitor employees' activities in detail, including keystrokes and screen snapshots, to train AI models. This use of AI for surveillance directly affects employees' privacy and labor rights, which are protected under law. The article highlights concerns about intrusive, real-time monitoring and the lack of federal limits on worker surveillance in the U.S., indicating a breach of labor rights. Therefore, the event involves the use of an AI system leading to violations of human and labor rights, qualifying it as an AI Incident.

Meta Will Capture Employees' Mouse Movements for AI Training

2026-04-21
Forbes Brasil
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems being trained with employee interaction data, confirming AI system involvement. The event stems from the use and development of AI. However, no harm or violation has been reported or implied as having occurred. The company states safeguards are in place and data is not used for performance evaluation. The event does not describe any realized harm or plausible immediate harm but informs about AI training practices and internal data collection, which fits the definition of Complementary Information rather than an Incident or Hazard.

Meta to start capturing employee mouse movements, keystrokes for AI training data

2026-04-21
The Manila Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Model Capability Initiative) used to collect detailed behavioral data from employees to train AI models for autonomous task performance. The data collection includes keystrokes and screen snapshots, which are highly intrusive and raise privacy and labor rights concerns. The article highlights that such surveillance practices may violate labor laws and data protection regulations, especially in Europe, indicating a breach of obligations intended to protect fundamental and labor rights. Since the AI system's use has directly led to these rights violations and workplace harm, this meets the criteria for an AI Incident rather than a hazard or complementary information.

Meta to capture U.S. employee mouse movements and keystrokes to train AI

2026-04-22
The Japan Times
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as the data collected is intended to train AI models for autonomous task performance. The event stems from the use and development of AI systems. Although no direct harm has yet been reported, the invasive monitoring of employees' computer interactions and screen content could plausibly lead to violations of privacy and labor rights, which are recognized harms under the framework. Since harm is plausible but not yet realized, this event fits the definition of an AI Hazard rather than an AI Incident.

Meta to track employees' mouse clicks, keystrokes to train AI

2026-04-22
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (AI models being trained) and the collection of employee data via software, which qualifies as AI system involvement. However, there is no report or indication of any harm occurring or any plausible future harm directly linked to this AI system's use. The company claims safeguards and limits on data use, and no rights violations or other harms are reported. The event is about the deployment of AI-related data collection and workforce automation plans, which is informative about AI ecosystem developments and governance but does not describe an incident or hazard. Hence, it fits the definition of Complementary Information.

Meta will monitor its employees' computers to train AI models, agency says

2026-04-21
Folha - PE
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as the monitoring software collects data to train AI models for autonomous task execution. The event stems from the use of AI systems in employee monitoring and data collection. Although no direct harm or rights violation is reported, the invasive nature of the monitoring and potential privacy breaches could plausibly lead to violations of labor rights or privacy, which are harms under the framework. Since no realized harm is described, this is best classified as an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to prior incidents, so it is not Complementary Information. It is not unrelated because AI systems and their use are central to the event.

Meta To Track Employee Clicks and Keystrokes for AI Development Amid May 20 Layoffs

2026-04-22
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the tracking software (MCI) is an AI system designed to collect granular data to train autonomous AI agents. The use of this system directly affects employees by monitoring their every keystroke and mouse movement, which can be considered a violation of labor rights and privacy. The AI system's development and use are directly linked to the planned layoffs, indicating harm to labor rights and employee welfare. Hence, the event meets the criteria for an AI Incident, as the AI system's use has directly led to harm in the workplace context.

Meta to start recording employee mouse and keyboard actions for AI

2026-04-22
TweakTown
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (training AI agents) through detailed employee activity tracking. The AI system's development and use are central to the event, as the collected data is intended to improve AI autonomous task performance. The described mass layoffs linked to this AI deployment imply direct harm to employees' labor rights and job security. The extensive surveillance without clear legal limits also suggests a violation of rights. Hence, the event meets the criteria for an AI Incident involving violations of labor rights and harm to people.

Meta to start capturing employee mouse movements, keystrokes for AI training data

2026-04-22
BusinessWorld
Why's our monitor labelling this an incident or hazard?
Meta's AI system is explicitly involved in collecting detailed employee data to train AI models, which directly impacts employee privacy and labor rights. The article indicates that this surveillance is already occurring, with potential legal and ethical violations, especially in certain jurisdictions. The harm is realized in terms of privacy infringement and potential labor rights violations, meeting the criteria for an AI Incident. The involvement is not merely potential or future harm but an ongoing practice with direct consequences for employees, thus excluding classification as a hazard or complementary information.

Meta To Start Capturing Employees' Mouse Movements And Keystrokes To Train Its AI: Report

2026-04-21
International Business Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems being trained with employee interaction data, confirming AI system involvement. However, no direct or indirect harm has been reported or can be reasonably inferred as having occurred. The event describes ongoing data collection and AI model training, which is a development and use phase of AI systems but without any stated or implied realized harm. While there could be plausible future risks related to privacy or labor rights, the article does not frame these as imminent or credible hazards. The main narrative is about Meta's internal AI development strategy and data collection, which fits the definition of Complementary Information rather than an Incident or Hazard.

Meta Is Tracking Employee Keystrokes, Mouse Data to Train Advanced AI Models

2026-04-22
Tech Times
Why's our monitor labelling this an incident or hazard?
Meta's use of detailed employee interaction data to train AI systems involves AI system development and use. The concerns raised relate to privacy and ethical risks, which could plausibly lead to violations of rights or harm to individuals if mismanaged. However, the article does not report any actual harm or incidents resulting from this practice. Thus, it fits the definition of an AI Hazard, as the event could plausibly lead to an AI Incident involving privacy violations or rights breaches in the future, but no harm has yet occurred.

Exclusive: Meta to Start Capturing Employee Mouse Movements, Keystrokes for AI Training Data

2026-04-21
GV Wire
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the Model Capability Initiative) used to collect detailed behavioral data from employees to train AI agents. The use of this AI system directly leads to a violation of employee rights, including privacy and labor rights, as it subjects employees to extensive surveillance without clear consent or adequate safeguards, which is a breach of applicable laws and fundamental rights. The article reports the deployment and use of this system, indicating realized harm rather than a potential risk. Hence, it meets the criteria for an AI Incident under violations of human and labor rights.

Would you let your boss track your mouse movements? It's happening at Meta, in the name of AI training

2026-04-22
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly (AI agents trained on employee interaction data) and concerns the use of AI in a way that could plausibly lead to harm, specifically privacy violations and labor rights infringements due to extensive employee monitoring. Although no direct harm has been reported, the credible risk of such harm arising from this AI-enabled surveillance justifies classification as an AI Hazard. It is not an AI Incident because no actual harm has occurred yet, and it is not Complementary Information or Unrelated because the focus is on the AI system's use and its potential risks.

Meta's New AI Initiative: Employee Monitoring for Machine Learning | Technology

2026-04-21
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being developed and improved through employee monitoring software that captures detailed interactions. The monitoring is intended to train AI agents for autonomous work tasks, indicating AI system use. Although no direct harm is reported, the concerns about privacy and labor rights violations, especially in the context of workforce reductions and lack of clear data protections, indicate a credible risk of harm. Since harm is not yet realized but plausible, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely general AI news or a response update, so it is not Complementary Information, nor is it unrelated.

Meta will monitor employees' computers in the US to train AI

2026-04-21
Tribuna do Sertão
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as the data collected is intended to train autonomous AI models. The event stems from the use and development of AI systems. Although no direct harm is reported, the invasive monitoring of employees' computer activity could plausibly lead to violations of privacy and labor rights, which are recognized harms under the framework. Since harm is not yet realized but plausible, this is best classified as an AI Hazard rather than an AI Incident. The article focuses on the initiative and its potential implications rather than reporting an actual incident of harm.

Meta will capture employees' mouse movements for AI training

2026-04-21
R7 Notícias
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as the data collected is intended to train AI models for autonomous task performance. The event concerns the development and use of AI systems. Although no harm has been reported or directly linked to this data collection, the nature of the data (detailed user interactions and screenshots) and the context (employee monitoring) imply a credible risk of privacy violations or misuse, which could constitute harm to individuals' rights. Since the event describes a current practice that could plausibly lead to harm but does not report actual harm, it fits the definition of an AI Hazard.

Meta Tracks Employee Mouse Movements, Keystrokes for AI Training

2026-04-21
Global Banking & Finance Review
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (AI agents trained on employee interaction data) and its development through data collection. However, there is no evidence or claim of realized harm or plausible future harm resulting from this practice. The safeguards and stated limitations on data use reduce the likelihood of harm. The event is primarily informative about AI training methods and internal company policies, fitting the definition of Complementary Information rather than an Incident or Hazard.

Meta to start tracking employee keystrokes, mouse movements, and screen activity to train AI models

2026-04-21
Tech News | Startups News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as being developed and trained using detailed employee activity data collected through invasive monitoring software. This monitoring directly affects employees' privacy and labor rights, which are fundamental human rights. The use of such data to train AI systems that aim to automate and potentially replace human work further underscores the impact on labor rights. The article indicates that this practice is already underway, not hypothetical, and thus the harm is realized or ongoing. Given the direct link between AI system use and labor rights violations, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

The AI agent era arrives: Meta trains models on employees' keyboard and mouse data, igniting a privacy controversy

2026-04-21
Anue鉅亨
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the Model Capability Initiative) that collects and processes employee interaction data to train AI agents. The collection and use of such data without clear consent or adequate safeguards can be reasonably inferred to cause harm related to privacy and labor rights violations, which are recognized harms under the AI Incident definition. The article explicitly mentions privacy and labor rights concerns, including expert opinions highlighting potential legal violations under GDPR and U.S. labor laws. Therefore, the event meets the criteria for an AI Incident as the AI system's use has directly led to harm or violations of rights.

According to Zhitong Finance, Meta (META.US) is installing new tracking software on its US employees' computers to capture mouse movements, clicks, and keyboard input in order to train its AI models. An internal memo states this is part of Meta's broader plan to build AI agents capable of autonomously performing work tasks...

2026-04-22
证券之星
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (training AI models for autonomous agents) and the collection of detailed user interaction data from employees. However, it does not describe any direct or indirect harm resulting from this practice, nor does it indicate a plausible future harm that is credible and imminent. The focus is on describing the AI development process and internal data collection, with stated safeguards. This fits the definition of Complementary Information, as it provides supporting context about AI system development and data use without reporting an incident or hazard.

Meta tracks employee activity to train AI systems

2026-04-21
The American Bazaar
Why's our monitor labelling this an incident or hazard?
Meta's use of AI systems to collect and analyze detailed employee activity data for AI training clearly establishes AI system involvement; the nature of that involvement is the use of AI systems in development and training. While there are concerns about privacy and workplace surveillance, the article does not document any actual violation of rights or harm that has occurred. The potential for harm to employee privacy and autonomy is credible and plausible, given the scale and intrusiveness of the monitoring. Therefore, the event fits the definition of an AI Hazard, as it could plausibly lead to violations of rights or other harms in the future, but no harm has yet materialized. It is not Complementary Information because the article is not primarily about responses or updates to a prior incident, nor is it Unrelated, as it clearly involves AI systems and their use.

Meta pushes AI monitoring program; tracking of employees' keyboard and mouse activity raises privacy concerns

2026-04-22
蕃新聞
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the monitoring software collects data to improve AI models. The use of this AI system (data collection and monitoring) raises plausible risks of privacy violations and labor rights concerns, which are recognized harms under the framework. However, the article only reports employee concerns and internal backlash without evidence of actual harm or legal breaches occurring yet. Therefore, this situation represents a credible potential for harm (privacy and rights violations) but not a confirmed incident. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Collecting data for AI training: Meta's tracking of employee computer activity sparks controversy

2026-04-22
蕃新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the tracking software collecting data for AI training) in a way that directly impacts employees' privacy and workplace rights. The collection of detailed behavioral data and screenshots without clear consent or safeguards beyond internal assurances constitutes a violation of labor and privacy rights. The employees' expressed concerns and the context of workforce reductions amplify the harm. This meets the criteria for an AI Incident as the AI system's use has directly led to harm in terms of rights violations and workplace harm, not merely a potential or future risk. Hence, the classification is AI Incident.

Meta to start capturing employee mouse movements, keystrokes for AI training data

2026-04-22
Profit by Pakistan Today
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Model Capability Initiative) that collects detailed employee behavioral data to train AI models. This use directly impacts employees' privacy and labor rights, as it involves extensive surveillance without clear consent or legal safeguards, particularly problematic under European laws. The article states that the system is already in use, meaning the harm is occurring or imminent. The AI system's development and use are central to the event, and the harms relate to violations of labor and privacy rights, fitting the definition of an AI Incident.

Meta plans to install tracking software on US employees' computers to train AI models

2026-04-21
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (training AI models with detailed user interaction data) and concerns the development phase of AI. Although there is potential for privacy and labor rights violations (which would constitute an AI Incident if harm had occurred), the article only describes a planned action without evidence of actual harm or legal breaches. Therefore, it represents a plausible risk scenario rather than a realized incident. It is not merely general AI news, because it details a specific data collection practice with potential rights implications. Hence, it fits best as an AI Hazard, indicating a credible risk of harm due to the invasive data collection for AI training.

Meta will install tracking software on employee computers to train AI, capturing mouse movements and even taking screenshots...

2026-04-22
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved as Meta is using employee interaction data to train AI models intended to automate work tasks. The event stems from the use and development of AI systems. While no direct harm has yet occurred, the article explicitly discusses the plausible future harm of job losses and labor displacement caused by these AI systems. This fits the definition of an AI Hazard, as the AI system's development and use could plausibly lead to violations of labor rights and harm to workers. The event is not a Complementary Information piece because it is not an update or response to a prior incident but a new development with potential future harm. It is not unrelated because AI systems and their impacts are central to the event.

Meta will capture employees' mouse movements and keystrokes for AI model training

2026-04-21
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI system development and use (training AI models with employee interaction data), but no harm or violation has been reported or can be reasonably inferred as having occurred. The data collection is internal and intended for model improvement, with stated safeguards. There is no mention of misuse, malfunction, or direct/indirect harm to individuals or groups. The article mainly informs about the AI training process and organizational plans, fitting the definition of Complementary Information rather than an Incident or Hazard.

Meta Tells U.S. Staff It Is Going to Start Surveilling Their Every Digital Move for A.I. Training

2026-04-22
Pixel Envy
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (tracking software used to train AI models) in the workplace, directly impacting employees by monitoring their digital behavior. This constitutes a violation of labor rights and privacy, which falls under harm category (c) in the AI Incident definition. Since the surveillance is actively occurring and involves the use of AI systems leading to rights violations, this qualifies as an AI Incident rather than a hazard or complementary information.

Meta to leverage employee keystrokes for AI development

2026-04-22
NextBigWhat
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI-related data collection for model training, which is a development activity. While it raises ethical and privacy concerns, there is no indication that any harm has occurred yet. Therefore, it represents a plausible risk or concern but not an actual incident or hazard with realized or imminent harm. It is best classified as Complementary Information as it provides context on AI development practices and associated ethical considerations without reporting an incident or hazard.

Facebook Parent Meta Considering New Tracking Tools For US Employees' Computers. Know Why

2026-04-21
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (autonomous AI models) and the collection of user interaction data to train these models, which qualifies as AI system involvement. However, there is no mention or implication of any realized harm or violation resulting from this activity. The data collection is internal and guarded, and the company states it is not used for employee evaluation, reducing concerns about rights violations. Since no harm has occurred or is described as plausible in the near term, and the article mainly provides information about AI development practices and internal data collection, it fits the category of Complementary Information rather than an Incident or Hazard.

Meta to install tracking software on employees computers: AI data collection strategy

2026-04-22
Techlusive
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI agents trained on collected employee computer interaction data) and the deployment of tracking software to collect data for AI training. Although the company claims privacy protections, the monitoring of employee computer activity for AI training purposes could plausibly lead to violations of privacy or labor rights, which are harms under the AI Incident definition. Since no actual harm or rights violation is reported as having occurred yet, but the potential for such harm is credible, the event fits the definition of an AI Hazard rather than an AI Incident. It is not Complementary Information because it is not an update or response to a prior incident, nor is it unrelated as it clearly involves AI system development and use with potential harm.

Exclusive: Meta to start capturing employee mouse movements, keystrokes for AI training data

2026-04-21
1470 & 100.3 WMBD
Why's our monitor labelling this an incident or hazard?
Meta's installation of tracking software to capture detailed employee inputs for AI training involves an AI system in use. Although the company claims safeguards and limits on data use, the extensive data collection and monitoring could plausibly lead to violations of employee privacy or labor rights, constituting potential harm. Since no actual harm or incident is reported, but there is a credible risk of future harm from this AI system's deployment, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Meta employees are up in arms over a mandatory program to train AI on their mouse movements and keystrokes

2026-04-21
DNYUZ
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the software collects detailed user interaction data to train AI models. The event involves the use of this AI system in a way that directly affects employees' privacy and autonomy, with no opt-out option, causing significant employee backlash and discomfort. This constitutes a violation of labor rights and potentially human rights, fulfilling the criteria for harm under the AI Incident definition. The harm is realized (employees are monitored without consent), not just potential, so it is not merely a hazard or complementary information. Hence, the classification is AI Incident.

Meta will train AI by tracking employees' mouse and keyboard input

2026-04-21
Mundo Conectado
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Model Capability Initiative) that collects detailed employee interaction data to train AI agents. While the monitoring raises privacy and legal concerns, no actual harm or incident is reported. The potential for harm exists, especially regarding employee privacy and labor rights, but it remains a plausible future risk rather than a realized incident. Hence, the event fits the definition of an AI Hazard, as the development and use of this AI system could plausibly lead to harm, but no harm has yet occurred.

Meta monitors employee computers to train AI

2026-04-21
DIÁRIO DO ESTADO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems being trained with data collected from employee monitoring software. While this raises plausible concerns about privacy and labor impacts, no actual harm or incident is reported. The monitoring is intended for AI training, and although privacy risks are acknowledged, they remain potential rather than realized harms. Hence, this fits the definition of an AI Hazard, as the development and use of AI systems here could plausibly lead to incidents involving privacy violations or labor rights issues in the future, but no such incident has yet occurred.

Meta plans to monitor employees' mouse and keyboard activity to collect data for AI training

2026-04-21
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
Meta is explicitly using AI systems to collect and analyze employee interaction data to train AI agents for office tasks. The use of such invasive monitoring raises credible concerns about privacy violations and potential breaches of labor and data protection laws, especially in jurisdictions like the EU. However, the article does not document any actual harm or legal rulings confirming violations; it mainly discusses potential risks and concerns. Hence, the event is best classified as an AI Hazard, reflecting plausible future harm from the AI system's use rather than an AI Incident with realized harm.

Meta Will Record Employees' Keystrokes And Use It To Train Its Ai Models

2026-04-22
Breaking News, Latest News, US and Canada News, World News, Videos
Why's our monitor labelling this an incident or hazard?
Meta's collection of detailed employee interaction data for AI training involves the use of AI systems and raises plausible risks of harm, particularly privacy violations. However, the article does not describe any actual harm or incident occurring so far. Therefore, this situation fits the definition of an AI Hazard, as it could plausibly lead to an AI Incident if privacy breaches or misuse occur, but no direct or indirect harm has been reported yet.