ByteDance AI Smartphone Triggers App Restrictions and User Harms in China

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

ByteDance's AI-powered smartphone, featuring the agentic Doubao assistant, faced widespread backlash in China after major apps restricted or blocked its functions. The AI's autonomous operations led to account suspensions, app crashes, and privacy concerns, prompting ByteDance to disable features in financial and gaming apps to mitigate harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (Doubao) is explicitly mentioned as controlling smartphone functions and interacting with apps. Its use has directly caused account restrictions, app crashes, and login issues, which constitute harm to users' rights and access to services. These harms fall under violations of rights and harm to communities. The article details realized harm rather than potential harm, making this an AI Incident. The company's mitigation efforts are responses to the incident rather than the main focus, so this is not Complementary Information.[AI generated]
AI principles
Accountability, Privacy & data governance, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy, Human wellbeing, Respect of human rights

Industries
Consumer products, Digital security, Financial and insurance services

Affected stakeholders
Consumers

Harm types
Economic/Property, Human or fundamental rights

Severity
AI incident

AI system task
Interaction support/chatbots, Goal-driven organisation


Articles about this incident or hazard

ByteDance's agentic AI smartphone dials up a digital backlash from China's top apps

2025-12-07
Yahoo
Why's our monitor labelling this an incident or hazard?
The AI system (Doubao) is explicitly mentioned as controlling smartphone functions and interacting with apps. Its use has directly caused account restrictions, app crashes, and login issues, which constitute harm to users' rights and access to services. These harms fall under violations of rights and harm to communities. The article details realized harm rather than potential harm, making this an AI Incident. The company's mitigation efforts are responses to the incident rather than the main focus, so this is not Complementary Information.
What is the TikTok owner's Agent AI phone? Find out why it is facing backlash in China

2025-12-08
Mashable ME
Why's our monitor labelling this an incident or hazard?
The Nubia M153 smartphone incorporates an AI system (Doubao) that autonomously interacts with apps and performs multi-step tasks. The AI's operation has directly led to harms such as account suspensions, privacy concerns, and restrictions by major platforms, which are violations of user rights and disruptions to service access. These harms are materialized and directly linked to the AI system's use, fulfilling the criteria for an AI Incident under the OECD framework.
ByteDance's agentic AI smartphone dials up a digital backlash by China's top apps

2025-12-07
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The AI system (the voice-operated assistant embedded in the ByteDance AI smartphone) is explicitly mentioned and is involved in controlling apps including financial and gaming services. The restrictions imposed by major platforms indicate that the AI's use has caused or contributed to harms related to fairness (e.g., suspending AI features in competitive games to preserve fair play) and security (e.g., disabling interaction with financial apps). These harms have materialized as operational blocks and controls, which fit the definition of an AI Incident due to direct impact on users and app ecosystems. The event is not merely a potential risk or a complementary update but a realized issue involving AI use and its consequences.
ByteDance's Agentic AI Phone Sparks Panic and Nationwide App Bans - Gizmochina

2025-12-09
Gizmochina
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Doubao) with agentic capabilities autonomously controlling smartphone apps, which led to account suspensions, app crashes, and bans by major platforms. These consequences constitute harms to users (account freezes, privacy concerns) and disruptions to app operations, fitting the definitions of harm to persons and communities. The AI's autonomous operation is the direct cause of these harms, fulfilling the criteria for an AI Incident. The company's response to scale back features and collaborate on safety controls is a mitigation effort but does not negate the incident classification. The event is not merely a potential hazard or complementary information, as actual harms have occurred.
ByteDance's agentic AI smartphone dials up a digital backlash from China's top apps | Today Headline

2025-12-07
Today Headline
Why's our monitor labelling this an incident or hazard?
The AI system (Doubao) is clearly involved as an AI-powered voice assistant embedded in the smartphone OS. The restrictions imposed by apps and ByteDance's mitigation measures indicate concerns about potential misuse or unfair outcomes, which could plausibly lead to harms such as financial fraud or unfair competition. Since no actual harm or incident is reported, but the potential for harm is credible and recognized by stakeholders, this event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
Doubao "tears apart" the AI phone

2025-12-13
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a system-level AI agent embedded in the Doubao phone, capable of autonomous cross-application operations. The AI's use has directly led to operational disruptions, such as major apps restricting or blocking AI-driven logins and operations, which constitutes harm to user rights and community functioning. The privacy and security concerns raised further support the presence of harm. Although no physical injury or legal rulings are mentioned, the disruption of app operations and the conflict over AI-driven user input simulation represent significant harms under the framework. Hence, the event is best classified as an AI Incident rather than a hazard or complementary information.
Luo Yonghao criticises major phone makers for becoming "toothpaste factories": AI investment is severely insufficient

2025-12-12
驱动之家
Why's our monitor labelling this an incident or hazard?
The article primarily reports on the introduction of an AI-powered phone and the challenges it faces from existing app ecosystems. While it mentions AI system capabilities and industry critiques, it does not describe any incident where the AI system caused harm or where harm is plausibly imminent. The resistance from app companies is a business and policy issue rather than an AI Incident or Hazard. Therefore, this is best classified as Complementary Information, providing context on AI adoption challenges and industry responses without describing a specific AI Incident or Hazard.
Doubao "tears apart" the AI phone

2025-12-13
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The Doubao AI phone is an AI system deeply integrated into the smartphone OS, performing autonomous tasks across multiple apps by simulating user interactions. The article details actual conflicts with major apps (WeChat, Alipay, Taobao) that have restricted or blocked the AI's operations, indicating disruption to app management and operation (harm category b/d). Privacy concerns about data uploading and potential account security risks are discussed, indicating harm to user rights and privacy (harm category c). These harms are realized and ongoing, not merely potential. The AI system's malfunction or use has directly led to these harms, fulfilling the criteria for an AI Incident. Although the article also discusses broader ecosystem and governance implications, the primary focus is on the realized harms caused by the AI system's deployment and use, not just complementary information or future hazards.
Well-known tech giant summoned by regulators? The latest official response!

2025-12-14
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Doubao AI assistant) and discusses its use and potential security/privacy concerns. However, the article does not report any realized harm such as data breaches, privacy violations, or security incidents caused by the AI system. The concerns are about potential risks, but the company denies regulatory action and explains the security measures in place. The article also details the company's planned adjustments and communication efforts to address concerns. Therefore, this is best classified as Complementary Information, as it provides updates and clarifications about an AI system and responses to public and regulatory concerns, without reporting a new AI Incident or AI Hazard.
Insider responds to reports that the Doubao phone was summoned by regulators: the claims are untrue

2025-12-13
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Doubao AI phone assistant) and mentions regulatory scrutiny, but no actual harm or incident has occurred or is credibly imminent. The main content is about denying false reports and providing explanations about security, which fits the definition of Complementary Information as it updates and clarifies the situation without reporting a new AI Incident or AI Hazard.
Insider refutes rumours that the Doubao phone was summoned by regulators: the claims are untrue

2025-12-13
驱动之家
Why's our monitor labelling this an incident or hazard?
The article primarily addresses rumors about regulatory action and provides official clarifications about the AI assistant's security and operational limits. There is no evidence of realized harm or plausible future harm caused by the AI system. The content is informational and relates to responses and explanations rather than incidents or hazards involving AI-related harm.
"Doubao phone" summoned by regulators? Insider: the claims are untrue

2025-12-13
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The AI system (豆包手机 assistant) is clearly involved as it autonomously operates smartphones and interacts with various apps, which fits the definition of an AI system. The concerns raised relate to security, privacy, and market disruption, which could plausibly lead to harms such as rights violations or disruption of app ecosystems. However, the article states no regulatory action has been taken and no direct harm has occurred yet. The main content is about the potential risks, industry responses, and clarifications, making it a discussion of plausible future harms and ecosystem impact rather than a realized incident. Thus, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.
"Doubao phone" summoned by regulators? Insider says the claims are untrue

2025-12-13
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article does not report any actual harm or incident caused by the AI system, nor does it describe a credible risk of future harm. Instead, it centers on the company's public clarifications and regulatory rumors denial, which are responses to concerns about the AI assistant's security and privacy. This fits the definition of Complementary Information, as it provides supporting data and context about the AI system and its governance environment without describing a new AI Incident or AI Hazard.
Doubao AI phone's cross-app operations collectively blocked by mainstream apps

2025-12-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The AI system (豆包AI手机) is explicitly described as using advanced AI-driven automation to simulate user actions across multiple apps, which triggered security mechanisms leading to account freezes and bans. This constitutes direct harm to users (loss of access, potential security risks) and disruption to critical digital infrastructure (major apps' operation and business models). The article details realized harms, not just potential risks, and the AI system's role is pivotal in causing these harms. Hence, this is an AI Incident rather than a hazard or complementary information.
Doubao AI phone's cross-app operations collectively blocked by mainstream apps

2025-12-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The AI system (豆包AI手机) is explicitly described as using AI-driven automation to perform cross-app operations that simulate user behavior, which is a clear AI system involvement. The use of high-risk permissions to automate tasks like price comparison and automatic ordering has led to mainstream apps detecting these as non-human actions, triggering security mechanisms that forcibly log out or freeze user accounts. This constitutes direct harm to users (loss of access, potential security risks) and disruption to app operations and business models. The article also highlights the regulatory and commercial conflicts arising from this AI use, but the realized harm (account freezes, security interventions) makes this an AI Incident rather than a mere hazard or complementary information.
Computer industry commentary: the Volcano Engine FORCE conference is about to open, ...

2025-12-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The article focuses on the announcement and anticipation of new AI models, tools, and ecosystem growth at an industry conference. It does not report any harm or risk of harm caused or potentially caused by AI systems. The content is informational and forward-looking about AI technology and investment opportunities, fitting the definition of Complementary Information rather than an Incident or Hazard.
Convenience or security: which takes priority? Doubao phone assistant's withdrawal from banking apps sparks a debate over responsibility

2025-12-15
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (豆包手机助手) that uses AI-based screen-reading and voice command technology to operate apps, including banking apps. The AI system's use has led to banks restricting its access due to concerns about financial security and compliance risks. While no actual harm (such as fraud or data breaches) has been reported, the article clearly outlines plausible future harms related to financial security, privacy violations, and regulatory non-compliance. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident involving harm to individuals' financial security and privacy. The article does not describe an actual incident of harm but focuses on the potential risks and the proactive limitation of the AI system's capabilities to prevent such harm.
T Morning Report | China's 2025 box office exceeds 50 billion yuan; Doubao again responds to phone assistant privacy risks; Moore Threads plans to put up to 7.5 billion yuan of raised funds into cash management

2025-12-15
caixin.com
Why's our monitor labelling this an incident or hazard?
The article focuses on the company's explanation and defense of its AI assistant's privacy practices, which is a response to previously raised concerns. There is no indication of actual harm, violation, or malfunction caused by the AI system. The content serves to provide additional context and information about the AI system's use and safeguards, fitting the definition of Complementary Information rather than an Incident or Hazard.
2025-12-15
证券之星
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit in the generation of abusive and violent videos using AI techniques. The harm is realized as these videos are spreading on platforms, potentially harming viewers and communities, including children, by exposing them to harmful content. The platforms' responses indicate recognition of the harm and efforts to mitigate it. This meets the definition of an AI Incident because the AI system's use has directly led to harm to communities. Other parts of the article, such as the EU investigation into Google or OpenAI's model releases, are complementary information as they provide context or updates without describing new incidents or hazards.
Nancai Observation No. 148: the Doubao phone "super butler" predicament

2025-12-14
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI-powered mobile assistant) whose use has directly led to disruptions in app operations (login failures, security warnings) and conflicts with app security policies and business models. These disruptions constitute harm to the management and operation of digital infrastructure (mobile apps and services), fitting the definition of an AI Incident under disruption of critical infrastructure or harm to communities through interference with digital services. Although physical injury or direct legal rights violations are not reported, the operational disruptions and security concerns are significant harms directly caused by the AI system's use. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Dingjiao One: Doubao "tears apart" the AI phone

2025-12-15
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI assistant embedded in the '豆包' AI phone) that autonomously operates across multiple apps by simulating user interactions. The AI's use has directly caused harm: app platforms like WeChat and Alipay have restricted or blocked access due to detection of automated operations, which disrupts user access and violates expected user rights. Additionally, privacy concerns arise from the AI's need to upload screen data for processing, raising risks of data leakage. These are direct harms to users' rights and privacy, fulfilling the criteria for an AI Incident. The article also discusses operational errors and security risks, further supporting the classification as an incident rather than a mere hazard or complementary information.
After the Doubao AI phone storm, can we reach some consensus?

2025-12-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (AI-powered mobile assistants acting autonomously across apps) and discusses their use and the resulting conflicts with platform policies. However, it does not describe any actual harm occurring due to these AI systems, such as injury, rights violations, or significant disruption. The harms discussed are potential or systemic challenges requiring governance and consensus. The article mainly provides an analysis of the current situation, technical approaches, and policy considerations, which fits the definition of Complementary Information. It does not describe a new AI Incident or AI Hazard but rather elaborates on the ecosystem and responses to AI developments.
After the Doubao AI phone storm, can we reach some consensus?

2025-12-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article centers on the development and deployment of AI GUI Agents like the Doubao assistant and AutoGLM, which simulate user interactions across apps. While it describes the blocking of these assistants by app platforms (e.g., WeChat), it does not document any realized harm such as injury, rights violations, or property damage. Instead, it highlights the potential for future harm due to unauthorized operations, security risks, and commercial conflicts, and calls for certification systems, user data ownership clarity, and safety protocols. Therefore, the event is best classified as an AI Hazard, reflecting plausible future risks and the need for governance and technical standards, rather than an AI Incident or Complementary Information.
The trend of AI restructuring the operating system is irreversible

2025-12-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves a clearly described AI system (the Doubao AI phone assistant) that autonomously operates apps, which fits the definition of an AI system. The AI's use has directly led to account suspensions, forced logouts, and service denials, which constitute disruption of platform operations and harm to user access, falling under harm categories (b) disruption of critical infrastructure management and (c) violations of rights (user access and platform security). The article describes realized harm rather than potential harm, so it is an AI Incident rather than a hazard. The discussion of broader societal and economic impacts further supports the classification as an incident rather than merely complementary information or unrelated news.
Doubao phone's "intrusive operations" blocked across the board: automatic ordering so powerful that even WeChat and Alipay are wary | ETtoday AI Tech | ETtoday News Cloud

2025-12-14
ai.ettoday.net
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as autonomously operating apps by simulating user actions, which is a clear AI system involvement. The use of this AI assistant has directly led to harms including violations of privacy regulations, security breaches, unfair competitive advantages, and account bans, which constitute harm to rights and communities. Therefore, this event qualifies as an AI Incident because the AI system's use has directly caused realized harms, including violations of user rights and disruptions to app ecosystems.
Rumours of ByteDance being summoned by regulators are untrue: the Doubao phone assistant's security mechanisms explained in detail

2025-12-14
ai.zol.com.cn
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or incident caused by the AI system. Instead, it focuses on clarifying misinformation and explaining the safety and security measures implemented in the AI assistant. There is no indication of direct or indirect harm, nor a plausible future harm event described. The content is primarily an update and explanation about the AI system's operation and governance, fitting the definition of Complementary Information rather than an Incident or Hazard.
ByteDance launches the Doubao AI phone assistant; the Nubia M153, released jointly with ZTE, sparks heated discussion

2025-12-14
ai.zol.com.cn
Why's our monitor labelling this an incident or hazard?
The event describes the release of an AI system and related user and regulatory concerns, but no actual harm or incident caused by the AI system is reported. The company's statements about safety and privacy protections and ongoing optimizations suggest a focus on preventing harm rather than harm occurring. Therefore, this is best classified as Complementary Information, providing context and updates about the AI system and its ecosystem rather than reporting an AI Incident or AI Hazard.
Get everything done with one sentence! The Doubao AI phone sparks a revolution as its powerful features trigger blocks by Taobao and WeChat | TVBS News

2025-12-15
TVBS
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI assistant in the smartphone) and its use, but the article does not report any actual harm or incident caused by the AI system. The blocking by platforms is a response to potential misuse or policy violations but does not itself constitute an AI Incident or Hazard. The article primarily provides information about the AI product and the ecosystem's response, which fits the definition of Complementary Information rather than an Incident or Hazard.
Why I unequivocally support the Doubao phone

2025-12-15
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Doubao AI phone assistant) whose use has led to app malfunctions and privacy concerns, triggering app bans and public debate. While these issues indicate problems related to AI use, the article does not document direct or indirect realized harms such as personal injury, legal rights violations, or significant property/community harm. The concerns and controversies are ongoing and highlight potential risks and regulatory challenges, but no concrete incident of harm has been established. Therefore, this event is best classified as Complementary Information, as it provides important context and updates on AI system deployment, societal reactions, and governance considerations without reporting a specific AI Incident or AI Hazard.
Hexun investment adviser Li Bujuan: new plays after the Doubao phone

2025-12-16
和讯网
Why's our monitor labelling this an incident or hazard?
The article centers on ByteDance's AI product launch and strategic vision for AI agents and ecosystem development. It does not describe any event where the AI system's development, use, or malfunction has directly or indirectly caused harm or violations. Nor does it report any credible imminent risk or hazard of harm. The content is primarily about AI ecosystem evolution, industry impact, and future opportunities, fitting the definition of Complementary Information rather than an Incident or Hazard.
The Doubao phone assistant faces security problems

2025-12-15
爱范儿
Why's our monitor labelling this an incident or hazard?
The '豆包手机助手' is an AI system integrated deeply into the smartphone's operating system with high privileges. The security problems causing WeChat login failure represent a direct harm to users' ability to use critical applications, which can be classified as harm to property or user rights. Since the AI system's malfunction or design is directly linked to this disruption, this qualifies as an AI Incident under the framework.
Doubao AI's "automatic phone operation" sparks controversy! WeChat and Taobao roll out risk controls to block it

2025-12-15
蕃新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Doubao AI assistant) whose use has directly caused harm by triggering security and risk control measures in multiple critical apps, leading to account restrictions and disruptions. The AI's automated cross-App operations bypass security mechanisms, raising concerns about user data privacy, financial transaction risks, and fairness in digital platforms. These impacts constitute violations of user rights and harm to communities through unfair competition and operational disruption. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.
Convenience or security: which takes priority? Doubao phone assistant's withdrawal from banking apps sparks a debate over responsibility

2025-12-15
千龙网
Why's our monitor labelling this an incident or hazard?
The AI system (豆包手机助手) is explicitly described as using AI-based screen-reading technology to operate apps, including banking apps. The article details the banks' refusal to allow the AI assistant to operate their apps due to security and compliance concerns, indicating that the AI system's use could plausibly lead to financial harm or breaches of privacy and regulatory obligations. No actual harm or incident has been reported yet; rather, the AI assistant has proactively limited its functionality to avoid such risks. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm but has not yet done so. The article also discusses the broader implications for compliance, data security, and the need for regulatory frameworks, reinforcing the potential for future harm if these issues are not addressed.
The Doubao phone has everyone panicking! To let AI do the work, must you hand over WeChat and your bank card?

2025-12-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI agent in the Doubao phone) that autonomously operates across multiple apps, including sensitive ones, which clearly fits the definition of an AI system. The article describes the use of this AI system and the resulting ecosystem conflicts and privacy concerns, indicating plausible risks of harm to users' privacy and financial security. However, no actual harm or breach has been reported yet, only potential risks and challenges. Thus, it does not meet the threshold for an AI Incident but qualifies as an AI Hazard due to the credible possibility of harm arising from the AI system's operation and data access. The article also discusses broader industry context and responses but the main focus is on the potential risks posed by this AI agent technology.
The Doubao phone has everyone panicking! To let AI do the work, must you hand over WeChat and your bank card?

2025-12-15
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the AI agent in the Doubao phone) whose use is currently causing ecosystem conflicts and raising privacy and security concerns. However, there is no direct or indirect evidence of realized harm such as data breaches, privacy violations, or other damages. The article mainly discusses plausible future risks and challenges related to AI agent access to sensitive apps and data, as well as the broader implications for mobile ecosystems and user privacy. Therefore, this situation fits the definition of an AI Hazard, as the development and use of the AI agent could plausibly lead to incidents involving privacy breaches or other harms if not properly managed. It is not an AI Incident because no actual harm has occurred yet, nor is it Complementary Information or Unrelated since the focus is on potential risks from an AI system in active use.
Doubao phone rumoured to have been summoned by regulators; officials deny the rumour and strengthen AI assistant safety rules

2025-12-15
ai.zol.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system ('Doubao Phone Assistant') and discusses its use and regulatory scrutiny. However, no actual harm or incident has been reported; the regulatory talks were rumors denied by the company. The article mainly focuses on the company's response and safety measures to address potential concerns, rather than describing any realized harm or direct AI-related incident. Therefore, this is best classified as Complementary Information, providing context and updates on governance and safety practices related to the AI system.
Why the Doubao phone was blocked: AI challenges the internet ecosystem

2025-12-16
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The Doubao AI phone assistant is a system-level AI agent that autonomously operates apps on behalf of users, which fits the definition of an AI system. Its use has directly led to account suspensions, forced logouts, and access denials on multiple platforms, constituting disruption of platform operations and harm to users' access and experience. This disruption aligns with harm category (b) "Disruption of the management and operation of critical infrastructure" (considering major internet platforms as critical infrastructure for digital services) and (e) "Other significant, clearly articulated harms" due to the substantial impact on the internet ecosystem and user rights to access services. The article describes realized harm, not just potential harm, so the event is an AI Incident rather than an AI Hazard. It is not merely complementary information or unrelated news because the core focus is on the AI system's use causing these disruptions and the broader implications for the internet ecosystem.