Shadow AI Causes Corporate Data Leaks and IP Violations


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Employees' unauthorized use of generative AI tools, known as 'Shadow AI,' has led to incidents of confidential data leaks and intellectual property violations in workplaces. Notably, Samsung employees accidentally input sensitive code into public AI systems, prompting stricter company controls and highlighting the urgent need for robust AI governance and data protection measures.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article centers on the plausible risk that AI systems could be exploited to leak sensitive information covertly, which could lead to significant harm such as intellectual property theft and security breaches. It discusses research that proposes detection frameworks and calls for legal and governance upgrades to address these risks. Since no actual data leakage or harm has been reported, but the risk is credible and the article emphasizes the need for preventive measures, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their potential misuse.[AI generated]
AI principles
Privacy & data governance, Accountability

Industries
Digital security, IT infrastructure and hosting

Affected stakeholders
Business

Harm types
Economic/Property, Reputational

Severity
AI hazard

Business function:
Research and development

AI system task:
Content generation


Articles about this incident or hazard


Capital Futures Leads the AI Trading Revolution: Debuts a "Strategy Generator" for Zero-Barrier Quantitative Trading | 聯合新聞網

2026-04-15
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system designed to generate trading strategies from natural language input, which qualifies as an AI system. However, there is no mention or implication of any injury, rights violation, disruption, or other harm caused by the system. The article highlights the system's features, including self-correction and risk controls, and advises users to perform backtesting before live trading, indicating responsible deployment. Since no harm or plausible future harm is described, and the article mainly presents the AI system's launch and capabilities, it fits the category of Complementary Information rather than an Incident or Hazard.

Can AI Be an Insider Threat? When AI Becomes a Data Channel, Does the Governance Framework Need an Upgrade?

2026-04-15
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article centers on the plausible risk that AI systems could be exploited to leak sensitive information covertly, which could lead to significant harm such as intellectual property theft and security breaches. It discusses research that proposes detection frameworks and calls for legal and governance upgrades to address these risks. Since no actual data leakage or harm has been reported, but the risk is credible and the article emphasizes the need for preventive measures, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their potential misuse.

Open-Source Monitoring Platform Grafana Patches AI Assistant Vulnerability to Prevent Data Leaks from Prompt Injection Attacks

2026-04-13
iThome Online
Why's our monitor labelling this an incident or hazard?
The Grafana AI assistant is an AI system that processes natural language queries and can be manipulated via malicious prompts embedded in external URLs. The vulnerability allows attackers to induce the AI assistant to bypass security mechanisms and send sensitive data externally. While the company has patched the issue and no evidence of exploitation or data leakage exists, the potential for harm is credible and plausible. The event does not describe realized harm but a credible risk of harm due to the AI system's malfunction or misuse, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Snowflake Launches Data Interoperability Architecture, Targeting Data Silos and Semantic Inconsistency in AI Applications

2026-04-14
iThome Online
Why's our monitor labelling this an incident or hazard?
The article does not describe any event where the development, use, or malfunction of AI systems has directly or indirectly caused harm or violations. It focuses on technical and governance improvements to support AI applications and overcome existing challenges like data silos and semantic inconsistency. There is no mention of realized harm, nor plausible future harm from these developments. Therefore, the content fits the definition of Complementary Information, providing context and updates on AI-related infrastructure and governance without reporting an AI Incident or AI Hazard.

Large Language Models Face Quantum Threats: Post-Quantum Encryption and Decentralized Frameworks Emerge as Security Solutions

2026-04-14
Yahoo!奇摩股市
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, specifically large language models, and discusses the security risks related to their data and operation. The quantum threat described (Harvest Now, Decrypt Later) represents a credible risk that could plausibly lead to AI incidents such as data breaches and violations of privacy or intellectual property rights. The article focuses on the potential for harm and the development of a security framework to prevent such harm, without describing any actual harm occurring. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it directly concerns AI system security and risks.

GitHub Launches AI Agent Security Game to Defuse Potential Threats and Help Developers Build Defense Skills

2026-04-15
Yahoo!奇摩股市
Why's our monitor labelling this an incident or hazard?
The article centers on a training game that simulates AI system behaviors and vulnerabilities to improve developer skills in AI security. While it involves AI systems and addresses potential AI security risks, no actual harm or incident has occurred. The focus is on preventing plausible future harms by educating users, which fits the definition of Complementary Information as it supports understanding and response to AI hazards rather than reporting a new incident or hazard itself.

Shadow AI Lurks in the Workplace, Leaving Companies Mired in a Confidential-Data-Leak Crisis

2026-04-14
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of generative AI systems by employees in ways that have caused or could cause harm, such as confidential data leakage and intellectual property violations. The involvement of AI systems is clear, as generative AI tools and large language models are used to process sensitive information. The harms described include violations of intellectual property rights and data privacy laws, which are recognized categories of AI Incident harms. The mention of actual cases (e.g., Samsung) where sensitive data was input into public AI tools confirms that harm has occurred, not just potential risk. Hence, this is an AI Incident rather than a hazard or complementary information.

Large Language Models Face Quantum Threats: Post-Quantum Encryption and Decentralized Frameworks Emerge as Security Solutions | yam News

2026-04-14
蕃新聞
Why's our monitor labelling this an incident or hazard?
The article centers on the plausible future risks posed by quantum computing to AI systems, specifically LLMs, and the security measures being developed to counter these threats. No realized harm or incident is described; instead, it highlights potential vulnerabilities and mitigation strategies. Therefore, this event fits the definition of an AI Hazard, as it concerns circumstances where AI system development and use could plausibly lead to harm (e.g., data breaches, misuse of AI due to prompt injection) if not properly secured. It is not an AI Incident because no direct or indirect harm has occurred yet, nor is it Complementary Information or Unrelated since it deals explicitly with AI security risks and frameworks.

Can AI Be an Insider Threat? When AI Becomes a Data Channel, Does the Governance Framework Need an Upgrade?

2026-04-14
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article centers on the plausible risk that AI systems could be misused to leak sensitive information covertly, which could lead to significant harm if realized. It describes a credible threat scenario supported by recent research but does not describe any actual incident of harm or data breach having occurred. The discussion of governance upgrades and detection mechanisms is forward-looking and preventive. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm to property or intellectual property rights if exploited, but no harm has yet materialized.

AI Writes Your Resume and Speeds Up Your Tax Filing? 70% of the Public Feel More Anxious Instead; Experts Reveal 3 Ways to Protect Yourself | 壹蘋新聞網

2026-04-15
壹蘋新聞網
Why's our monitor labelling this an incident or hazard?
The article centers on public concerns about privacy risks from AI tool usage and expert recommendations to mitigate these risks. It also covers a cybersecurity company's strategic branding and future AI product plans. Since no actual harm or incident has occurred, and the focus is on awareness, risk management, and strategic responses, this fits the definition of Complementary Information. There is no direct or indirect harm caused by AI described, nor a plausible imminent hazard event. Therefore, the classification is Complementary Information.

Shin Kong Life Hosts "AI-Driven Transformation and Risk Management" Forum to Strengthen Its Fintech Transformation | 聯合新聞網

2026-04-16
UDN
Why's our monitor labelling this an incident or hazard?
The article does not report any incident or hazard involving AI systems causing or potentially causing harm. Instead, it focuses on a professional forum discussing AI applications and risk management strategies in finance. This fits the definition of Complementary Information as it provides context and updates on AI's role in financial technology transformation and governance without describing any specific AI Incident or AI Hazard.

Cal.com Is Going Closed-Source

2026-04-16
iThome Online
Why's our monitor labelling this an incident or hazard?
The article explicitly links the decision to close-source the commercial code to the increased cybersecurity risks caused by generative AI tools that can automate vulnerability scanning and exploitation. Although no actual data breach or harm has been reported, the company perceives a credible risk that AI could be used maliciously to exploit open-source code, which could lead to user data breaches or other harms. This fits the definition of an AI Hazard, where the development or use of AI systems could plausibly lead to an AI Incident. There is no indication of an actual AI Incident occurring yet, nor is this merely complementary information or unrelated news. Hence, AI Hazard is the appropriate classification.

Gemini Lands on the Mac Desktop: AI Assistant Competition Escalates to the Operating-System Level

2026-04-16
Yahoo!奇摩股市
Why's our monitor labelling this an incident or hazard?
The article focuses on the introduction and features of an AI system (Gemini for Mac) and discusses potential privacy issues as considerations, but it does not describe any actual harm, malfunction, or misuse leading to injury, rights violations, or other harms. The privacy concerns are noted as future challenges rather than current incidents or hazards. Therefore, this is best classified as Complementary Information, providing context and updates on AI ecosystem developments without reporting an AI Incident or AI Hazard.

OpenAI Launches Cybersecurity-Focused AI Model to Strengthen Defenses Against Generative Attack Risks

2026-04-16
Yahoo!奇摩股市
Why's our monitor labelling this an incident or hazard?
The article describes the launch of an AI system intended to improve cybersecurity defenses and acknowledges the potential for AI to be misused in cyberattacks. However, it does not report any actual harm or incident caused by the AI system, nor does it describe a specific event where the AI system malfunctioned or was misused resulting in harm. Instead, it discusses the potential risks and the evolving cybersecurity landscape with AI tools. Therefore, this qualifies as an AI Hazard because the AI system's development and deployment could plausibly lead to incidents involving cybersecurity harm in the future, but no realized harm is reported. It is not Complementary Information because the article is not primarily about responses or updates to a past incident, nor is it unrelated as it clearly involves AI systems and cybersecurity risks.

OpenAI Launches Cybersecurity-Focused AI Model to Strengthen Defenses Against Generative Attack Risks | yam News

2026-04-16
蕃新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (GPT-5.4-Cyber) used for cybersecurity defense and acknowledges the dual-use risk of AI in generating sophisticated attacks. However, it does not describe any actual harm or incident caused by the AI system, nor does it describe a specific event where harm was narrowly avoided. The focus is on the release of a new AI model and the broader implications for cybersecurity, including governance and risk management strategies. This fits the definition of Complementary Information, as it provides supporting context and updates on AI developments and responses without reporting a new AI Incident or AI Hazard.

Anthropic AI Model Tested Despite US Government Ban, Sparking Cybersecurity and Policy Controversy | yam News

2026-04-16
蕃新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Anthropic's model) by a government agency for cybersecurity testing, which is a clear AI system involvement. The use is under a regulatory ban, indicating a policy conflict and potential risk. However, the article does not report any actual harm or incident caused by the AI system's use, only the potential for misuse or ethical concerns. Therefore, this situation fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to harms related to national security or policy breaches, but no direct or indirect harm has yet materialized.

AI Competition Heats Up Across the Board: Chips, Satellites, and Military Applications Reshape the Global Tech Landscape | yam News

2026-04-16
蕃新聞
Why's our monitor labelling this an incident or hazard?
The article discusses the expansion and integration of AI technologies across multiple domains, including military and satellite communications, but does not describe any realized harm or incident caused by AI systems. It mainly reports on corporate strategies, technological advancements, and policy debates, which fall under providing contextual and ecosystem information. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.