Korean IT Firms Ban OpenClaw AI Agent Over Security Risks and Data Exposure


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Major South Korean IT companies, including Naver, Kakao, and Danggeun, have banned the use of the OpenClaw AI agent due to security vulnerabilities. The AI's ability to autonomously control computers led to incidents of data exposure and raised concerns about unauthorized data leaks and cyberattacks, prompting both corporate and governmental warnings.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (OpenClaw) whose use is being restricted due to credible and serious security vulnerabilities that could plausibly lead to data breaches and cyberattacks, which constitute harm to property and communities. Since the article does not report any actual harm occurring but focuses on the potential risks and preventive measures taken by companies and authorities, this qualifies as an AI Hazard. The AI system's development and use could plausibly lead to an AI Incident involving data theft and cyberattacks, justifying classification as an AI Hazard rather than an Incident or Complementary Information.[AI generated]
AI principles
Robustness & digital security; Privacy & data governance

Industries
Digital security; IT infrastructure and hosting

Affected stakeholders
Business

Harm types
Human or fundamental rights

Severity
AI hazard

Business function:
ICT management and information security

AI system task:
Goal-driven organisation


Articles about this incident or hazard


"Assistant Manager Kim, never use 'this' at the office"... even Naver and Kakao have issued usage bans - 매일경제

2026-02-08
mk.co.kr

AI operates PCs in place of people to do their work; Naver and Kakao issue an 'OpenClaw ban' - 매일경제

2026-02-08
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) that autonomously controls computer inputs, which fits the definition of an AI system. The article reports actual harms including data breaches and potential privacy violations resulting from the AI's vulnerabilities and misuse, fulfilling the criteria for harm to rights and property (privacy and confidential information). The companies' responses to ban or restrict the AI's use further confirm the recognition of these harms. Hence, this is an AI Incident because the AI system's use and malfunction have directly or indirectly led to realized harms.

The AI that operates your PC for you... Naver, Kakao, and Danggeun issue a 'usage ban' | 연합뉴스

2026-02-07
연합뉴스
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) that autonomously manipulates computers and accesses sensitive data, which has directly caused harm through data leaks and security breaches. The bans and warnings from companies and government authorities confirm the recognition of these harms. The article details realized harms (data exposure) and the AI system's role in causing them, fulfilling the criteria for an AI Incident. Although there is also discussion of potential future risks, the presence of actual harm takes precedence in classification.

'OpenClaw,' the AI that operates your PC for you... IT industry issues usage bans

2026-02-09
아시아경제
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system that autonomously manipulates a PC to perform tasks, clearly fitting the AI system definition. The article reports actual security vulnerabilities and incidents of data exposure linked to its use, which constitute realized harm related to privacy and information security, falling under violations of rights and harm to communities. The bans by companies and warnings by authorities confirm the recognition of these harms. Hence, this qualifies as an AI Incident due to realized harm caused by the AI system's use and malfunction.

AI operates computers at will... the app that drove Naver, Kakao, and Danggeun to issue 'in-house usage bans'

2026-02-08
파이낸셜뉴스
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) that autonomously controls computer inputs and accesses internal information, which qualifies as an AI system. The companies' restriction of its use is a response to potential security risks, but no actual harm or incident has been reported yet. Therefore, the event describes a plausible risk of harm (e.g., data leakage or unauthorized control) that could arise from the AI system's use, making it an AI Hazard rather than an AI Incident. The focus is on preventing possible future harm rather than reporting realized harm or an incident.

Naver and Kakao ban use of the AI agent 'OpenClaw'

2026-02-08
Chosunbiz
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) and concerns about its potential to cause cybersecurity incidents and data leaks. However, the event focuses on preventive bans and security policy changes rather than describing any realized harm or incident caused by the AI system. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm (cyberattacks, data breaches), but no direct or indirect harm has yet occurred according to the article.

The AI that operates your PC for you... Naver, Kakao, and Danggeun issue a 'usage ban' - 전파신문

2026-02-07
jeonpa.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenClaw) that autonomously controls computers and accesses sensitive data. The concerns raised are about potential unauthorized data leaks and cybersecurity vulnerabilities that could lead to harm such as loss of confidential information and privacy violations. Although no actual harm is reported, the warnings and bans by companies and government authorities indicate a credible risk of future incidents. Hence, the event fits the definition of an AI Hazard, as the AI system's use or malfunction could plausibly lead to an AI Incident involving harm to property, communities, or violations of rights. It is not an AI Incident because no realized harm is described, nor is it Complementary Information or Unrelated since the focus is on the risk and restrictions related to the AI system's use.

Naver and Kakao ban 'OpenClaw,' the AI that operates PCs at will

2026-02-08
뉴스핌
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) that autonomously controls PCs by manipulating inputs, which fits the definition of an AI system. The companies' banning of its use is due to concerns about security vulnerabilities that could lead to cyberattacks and data leaks, which are harms to property and potentially to communities. Since no actual harm has been reported but there is a credible risk of such harm, this event is best classified as an AI Hazard. It is not an AI Incident because no realized harm has occurred, nor is it Complementary Information or Unrelated, as the focus is on the AI system's potential to cause harm and the preventive response.

Naver, Kakao, and Danggeun issue an 'OpenClaw ban'... preemptively blocking security leaks

2026-02-09
포인트경제
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenClaw) whose use is being restricted due to plausible security risks including unauthorized data leakage and cyberattack pathways. While no direct harm has yet occurred within these companies, the identified vulnerabilities and external warnings indicate a credible risk that the AI system's use could lead to incidents involving harm to property, data, or organizational security. Therefore, this qualifies as an AI Hazard because the development and use of the AI system could plausibly lead to an AI Incident, but no realized harm within the companies is reported yet.

"What is OpenClaw?"... 'Mac mini' prices jump 40% while Naver, Kakao, and Danggeun ban its use

2026-02-09
파이낸셜뉴스
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system that autonomously operates user PCs, which fits the definition of an AI system. The article highlights security vulnerabilities and the potential for data leaks, which could plausibly lead to violations of privacy and confidentiality, a form of harm under the framework. Since no actual harm or incident has been reported yet, but companies have proactively banned its use due to these risks, the event represents a credible potential for harm rather than a realized incident. Thus, it is best classified as an AI Hazard.

'AI OpenClaw' that operates your PC... Korean firms issue 'bans' over security threats

2026-02-09
파이낸셜뉴스
Why's our monitor labelling this an incident or hazard?
OpenClaw is an AI system explicitly described as controlling PCs and accessing sensitive data. Its use has led to direct security concerns and organizational responses (bans) due to the risk of data leaks and privacy violations. The article reports actual restrictions imposed by companies because of these risks, indicating harm that has materialized to a degree warranting action. The AI system's role in causing or enabling these harms is clear and direct, fulfilling the criteria for an AI Incident rather than a mere hazard or complementary information.

Korean firms that banned 'OpenClaw': a security 'warning light' one year after DeepSeek

2026-02-09
투데이코리아
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OpenClaw) that autonomously controls PCs and is linked to large-scale data exposure incidents, including API keys and personal data leaks. This constitutes harm to property and communities (data privacy and security breaches). The companies' actions to restrict the AI agent's use are responses to these realized harms. Hence, this qualifies as an AI Incident because the AI system's use and associated security vulnerabilities have directly led to harm.

An AI tool has just been banned by a string of major South Korean companies

2026-02-09
VietNamNet News
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies OpenClaw as an AI autonomous agent performing complex tasks autonomously, which fits the definition of an AI system. The reported data leaks and security vulnerabilities caused by design flaws in OpenClaw have resulted in actual harm to personal data and information security, which constitutes harm to property and communities. The restrictions imposed by companies are responses to these harms, not the primary event. Hence, the event is an AI Incident due to realized harm caused by the AI system's malfunction or misuse.