Massive Leak Exposes AI-Powered Censorship and Surveillance Behind China's Great Firewall

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Over 500GB of internal documents and source code from Geedge Networks and related institutions behind China's Great Firewall were leaked, revealing the use and export of AI-driven censorship and surveillance technologies. These systems, deployed domestically and in countries like Myanmar and Kazakhstan, have enabled large-scale violations of privacy and freedom of expression.[AI generated]

Why's our monitor labelling this an incident or hazard?

The leaked documents pertain to a company that builds and exports advanced network censorship and surveillance technologies integral to the Chinese government's Great Firewall. These technologies include real-time monitoring, filtering, and blocking of internet traffic, which are highly likely to involve AI systems for automated detection and decision-making. The harm caused includes violations of human rights (freedom of expression, access to information), harm to communities through digital authoritarianism, and politically motivated cyberattacks. The AI systems' development and use have directly led to these harms. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Democracy & human autonomy; Transparency & explainability; Accountability

Industries
Government, security, and defence; Digital security

Affected stakeholders
General public; Civil society

Harm types
Human or fundamental rights; Public interest

Severity
AI incident

Business function:
Monitoring and quality control

AI system task:
Recognition/object detection; Event/anomaly detection


Articles about this incident or hazard

Largest-ever CCP firewall document leak implicates company founded by Fang Binxing | GFW | Geedge Networks | NTDTV

2025-09-13
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The leaked documents pertain to a company that builds and exports advanced network censorship and surveillance technologies integral to the Chinese government's Great Firewall. These technologies include real-time monitoring, filtering, and blocking of internet traffic, which are highly likely to involve AI systems for automated detection and decision-making. The harm caused includes violations of human rights (freedom of expression, access to information), harm to communities through digital authoritarianism, and politically motivated cyberattacks. The AI systems' development and use have directly led to these harms. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Massive leak of confidential documents reveals the CCP firewall's censorship mechanisms | Great Firewall | Geedge (Hainan) Information Technology Co., Ltd. | Fang Binxing | The Epoch Times

2025-09-13
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The leaked documents reveal the use of AI-based systems (e.g., machine learning-powered firewalls and monitoring tools) that perform real-time network traffic analysis, censorship, and surveillance. These systems have been deployed in multiple countries, leading to active suppression of internet freedom and privacy violations, which constitute harm to human rights and communities. The event involves the use and development of AI systems that have directly led to these harms. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Massive leak of confidential documents reveals the CCP firewall's censorship mechanisms | Great Firewall | Geedge (Hainan) Information Technology Co., Ltd. | Fang Binxing | The Epoch Times

2025-09-13
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems, notably machine learning-based firewalls and network monitoring tools, used for censorship and surveillance. The leaked documents confirm the development and deployment of these AI systems by Chinese institutions and their export to other countries, where they have been used to block access to information and monitor communications, directly violating human rights and harming communities. The harm is realized, not hypothetical, as the systems are actively used for censorship and surveillance. Thus, the event meets the criteria for an AI Incident because the AI systems' use has directly led to violations of human rights and harm to communities through oppressive internet control.

Large-scale leak of CCP firewall documents exposes its transnational deployment | Great Firewall | Geedge (Hainan) Information Technology Co., Ltd. | Fang Binxing | The Epoch Times

2025-09-13
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for network surveillance, censorship, and traffic manipulation, including advanced capabilities like deep packet inspection, traffic injection, and VPN endpoint detection. These AI systems have been deployed in multiple countries, leading to realized harms such as violations of human rights (freedom of expression, privacy), harm to communities (social control, political repression), and indirect political consequences (government collapse in Nepal). The leak reveals the development and use of these AI systems causing direct and indirect harm, meeting the criteria for an AI Incident. The detailed description of the AI system's role in enabling these harms confirms the classification.

Large-scale leak of CCP firewall documents exposes its transnational deployment | Great Firewall | Geedge (Hainan) Information Technology Co., Ltd. | Fang Binxing | The Epoch Times

2025-09-13
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly described as performing advanced network monitoring, traffic analysis, and censorship functions, including AI-driven VPN detection and behavior profiling. The use of these AI systems has directly led to significant harms: violations of human rights (freedom of expression, privacy), suppression of information, and political consequences such as protests and government resignations. The event meets the criteria for an AI Incident because the AI systems' development and use have directly caused harm to communities and human rights. The detailed description of the AI systems' capabilities and their deployment in multiple countries with documented impacts confirms this classification.

Interview with the president of Dynamic Internet Technology: the CCP's sale of internet-control technology is rogue behavior | Great Firewall | Firewall | Geedge (Hainan) Information Technology Co., Ltd. | The Epoch Times

2025-09-14
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as the Great Firewall and associated network control technologies rely on AI or advanced algorithmic systems for real-time censorship and surveillance. The leak exposes the development and use of these AI systems, which have directly led to violations of human rights by enabling state censorship and repression. The export of these technologies to other countries further extends the harm. The event describes realized harm (human rights violations and harm to communities) caused by the AI systems' use, meeting the criteria for an AI Incident. It is not merely a potential risk or complementary information but a concrete exposure of harmful AI-enabled practices.

Interview with the president of Dynamic Internet Technology: the CCP's sale of internet-control technology is rogue behavior | Great Firewall | Firewall | Geedge (Hainan) Information Technology Co., Ltd. | The Epoch Times

2025-09-14
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of advanced network monitoring and censorship technologies (likely involving AI for content filtering, traffic analysis, and automated blocking). The leak reveals the development and use of these AI systems and their export, which could lead to violations of human rights and harm to communities through censorship and surveillance. However, the article does not report a specific realized harm event caused by these AI systems but rather exposes their existence and commercial use, which is unethical and potentially harmful. Therefore, this qualifies as an AI Hazard because the development and export of these AI-enabled censorship technologies plausibly could lead to AI Incidents involving human rights violations and harm to communities, but no direct harm event is described as having occurred in this report.

CCP exports internet surveillance technology overseas; experts analyze its aims | Great Firewall | Firewall | Geedge (Hainan) Information Technology Co., Ltd. | The Epoch Times

2025-09-15
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of advanced network monitoring and censorship technologies that use AI techniques such as deep packet inspection, real-time traffic analysis, user behavior profiling, and automated blocking of VPNs. The leak reveals that these AI systems have been deployed and used in multiple countries, directly enabling violations of human rights and suppression of dissent, which constitute harm to communities and breaches of fundamental rights. The involvement of AI in the development, deployment, and use of these systems is clear, and the harms are realized and ongoing. The event is not merely a potential risk but documents actual use and impact, thus qualifying as an AI Incident rather than a hazard or complementary information.

CCP exports internet surveillance technology overseas; experts analyze its aims | Great Firewall | Firewall | Geedge (Hainan) Information Technology Co., Ltd. | The Epoch Times

2025-09-15
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for network surveillance, censorship, and control, which have directly led to violations of human rights and harm to communities by enabling authoritarian regimes to suppress dissent and monitor individuals extensively. The leak confirms the development and deployment of these AI systems and their export to other countries, where they are actively used for repression. The harms are realized and ongoing, not merely potential. Hence, this is an AI Incident rather than a hazard or complementary information. The detailed description of the AI system capabilities and their use in harmful ways meets the criteria for an AI Incident under the OECD framework.

Massive leak of confidential documents reveals the censorship mechanisms of the CCP's 'Great Firewall' | Internet firewall | Circumvention | Geedge Networks | NTDTV

2025-09-13
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The leaked documents reveal the use of AI-enabled network monitoring and censorship systems that directly cause harm by violating human rights, including freedom of expression and privacy. The involvement of AI in real-time filtering, user identification, and VPN blocking is explicit or reasonably inferred. The harms are realized and ongoing, as these systems are actively used domestically and exported internationally. This meets the criteria for an AI Incident due to direct harm to human rights caused by the AI systems' use.

[Forbidden News] CCP firewall breached in the largest document leak in its history | Great Firewall | Internal communication records | NTDTV

2025-09-14
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The leaked documents confirm the existence and use of AI systems for internet censorship, real-time monitoring, VPN blocking, and launching network attacks, which are directly linked to human rights violations and suppression of dissent. The AI systems' development and deployment have caused harm to communities and fundamental rights, fulfilling the criteria for an AI Incident. The event is not merely a potential risk but documents actual use and harm, so it is not an AI Hazard or Complementary Information. It is not unrelated because the AI systems are central to the harm described.

CCP firewall confidential documents leaked; analysis: a deterrent to those involved | 'Great Firewall' leak | CCP internet blockade | NTDTV

2025-09-14
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The 'Great Firewall' is an AI system used for real-time network filtering, monitoring, and censorship, which directly violates fundamental rights such as freedom of expression and privacy. The leak confirms the system's deployment and export, showing realized harm through surveillance and repression. The involvement of AI in filtering and user identification is explicit. The harm is ongoing and systemic, affecting millions of users domestically and potentially abroad. Hence, this is an AI Incident due to direct human rights violations and harm to communities caused by the AI system's use.

Massive leak of internal confidential documents from the CCP firewall

2025-09-14
botanwang.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems, such as machine learning-based firewalls and network monitoring tools, used for censorship and surveillance. The leaked documents reveal the deployment of these AI-enabled systems in multiple countries, leading to realized harms including violations of human rights (freedom of expression, privacy), political repression, and social unrest. The AI systems' use in monitoring, blocking, and manipulating internet traffic directly caused or contributed to these harms. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

China's Great Firewall suffers its largest-ever leak: over 500GB of source code and internal documents exposed

2025-09-14
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The Great Firewall employs AI systems for deep packet inspection, filtering, and surveillance, which are explicitly described in the leaked source code and operational documents. The leak reveals how these AI systems function and their vulnerabilities, which directly relate to violations of human rights and harm to communities through censorship and surveillance. The event involves the use and development of AI systems and has led to realized harm by exposing the mechanisms of censorship and surveillance, which restrict fundamental rights. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

CCP exports internet surveillance technology overseas; experts analyze its aims

2025-09-15
botanwang.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for network monitoring, censorship, and surveillance, including advanced capabilities like deep packet inspection and user behavior analysis. The use and export of these AI systems have directly led to violations of human rights and suppression of fundamental freedoms in multiple countries, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, as the technology is actively deployed and used to control populations and restrict information. The detailed leak confirms the AI system's role in these harms, and the event is not merely a potential risk but documents actual harm caused by AI-enabled systems.

Reports emerge of China exporting censorship tools to Myanmar and Pakistan

2025-09-15
Yahoo News
Why's our monitor labelling this an incident or hazard?
The leaked data concerns AI systems (DPI platforms) that perform real-time traffic analysis and filtering, which are AI systems by definition. Their use in censorship and surveillance directly leads to violations of human rights, fulfilling the criteria for an AI Incident. The event describes realized harm through the operation of these systems in multiple countries, not just potential harm. Hence, it is not merely a hazard or complementary information but an incident involving AI systems causing significant rights violations.

Mainland China's 'internet censorship system' secrets leaked in its worst breach ever, drawing intense Western attention

2025-09-15
中時新聞網
Why's our monitor labelling this an incident or hazard?
The Great Firewall is an AI-enabled internet censorship system that uses deep packet inspection and other AI techniques to monitor and control internet traffic. The leak exposes the system's inner workings and deployment, confirming the AI system's role in restricting information and surveilling users. This constitutes a violation of human rights (freedom of expression and information access). The event is not merely a data leak but reveals an AI system whose use has caused harm by enabling censorship and surveillance. Hence, it qualifies as an AI Incident due to the realized harm to rights and communities.

Largest document leak in CCP history reveals the inside story of the 'firewall' (Part 1) | CCP internet firewall | Fang Binxing | Geedge Networks | NTDTV

2025-09-16
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The leaked documents reveal an AI-enabled internet firewall system used for extensive surveillance, censorship, and repression of information, which directly violates human rights and restricts freedom of expression. The system's AI capabilities include autonomous learning to monitor and control network traffic, which is a clear AI system involvement. The harm is realized as these technologies are actively used to suppress information and monitor users, both within China and abroad, causing harm to communities and violating fundamental rights. Hence, this event meets the criteria for an AI Incident.

'Great Firewall' suffers its largest-ever data leak, exposing a CCP scheme (photo) - Mainland politics -

2025-09-15
看中国
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for network filtering, real-time monitoring, and VPN blocking, which are core AI functionalities. The leak reveals the deployment and use of these systems causing direct harm by violating human rights and enabling authoritarian control domestically and internationally. The harm is realized, not just potential, as the systems are actively used for censorship and surveillance. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information. The scale and nature of the harm—mass surveillance, censorship, and suppression of information—constitute significant violations of fundamental rights and harm to communities.

'Great Firewall' suffers its largest-ever data leak, exposing a transnational CCP scheme (photo) - Mainland politics -

2025-09-15
看中国
Why's our monitor labelling this an incident or hazard?
The leaked documents reveal the use of AI systems for censorship and surveillance that directly violate human rights and restrict information access, causing harm to communities. The involvement of AI in real-time network filtering and monitoring is explicit. The harm is realized, not just potential, as the technology is actively deployed and used to suppress VPNs and control internet access in multiple countries. This meets the criteria for an AI Incident due to direct harm to rights and communities through AI-enabled surveillance and censorship.

[CDT Report Roundup] InterSecLab: company under 'father of the firewall' Fang Binxing exports its most advanced censorship technology overseas (plus two other reports)

2025-09-16
China Digital Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems developed and deployed by the Chinese company Geedge Networks (积至) and others, which perform advanced real-time internet traffic monitoring, filtering, VPN blocking, and censorship. The leaked internal data confirms these AI systems are actively used by multiple governments to enforce oppressive internet controls, directly causing harm to human rights (freedom of expression, privacy) and communities (restricted access to information, suppression of dissent). The article documents realized harms, not just potential risks, and the AI systems' role is pivotal in enabling these harms. Hence, this is an AI Incident under the OECD framework.

President of Dynamic Internet Technology: the CCP's sale of internet-control technology is rogue behavior

2025-09-15
botanwang.com
Why's our monitor labelling this an incident or hazard?
The article details the exposure of AI-powered network censorship and surveillance technologies used by the Chinese government and sold abroad, which are known to cause violations of human rights and harm to communities. However, the article focuses on the leak of internal documents and the analysis of these technologies rather than reporting a new AI Incident (direct or indirect harm occurring now) or an AI Hazard (plausible future harm). The harm from these AI systems is established and ongoing, but this event is about revealing information and providing deeper understanding, fitting the definition of Complementary Information. There is no new incident or immediate plausible harm described as resulting from the leak itself.

China's firewall hit by the largest leak in its history: over 500GB of files exposed - International - Liberty Times Net

2025-09-16
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The Chinese Great Firewall is a complex AI-enabled system used for internet censorship and surveillance, which directly impacts human rights by restricting freedom of expression and privacy. The leak of internal source code and operational details reveals how the AI system functions and is deployed, which is a direct consequence of the AI system's development and use. This leak constitutes a breach of obligations intended to protect fundamental rights, fulfilling the criteria for an AI Incident. The harm is realized as the firewall's operation affects rights, and the leak could exacerbate or reveal vulnerabilities in the system, further impacting rights and freedoms.

Largest document leak in CCP history reveals the inside story of the 'firewall' (Part 2) | CCP internet firewall | CCP surveillance technology | Leaked documents | NTDTV

2025-09-17
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for internet censorship and surveillance, which are deployed and actively used to restrict freedom of expression and monitor communications. These actions constitute violations of human rights and fundamental freedoms, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, as the systems are actively deployed and used in multiple countries, not merely a potential future risk. Therefore, this event is classified as an AI Incident.

A deliberate leak? Serious intra-Party split as CCP firewall secrets leak (photo) - Commentary -

2025-09-16
看中国
Why's our monitor labelling this an incident or hazard?
The leaked files pertain to an AI system (the Great Firewall) that performs real-time network monitoring, content filtering, and user identification, which fits the definition of an AI system. The use of this system has directly led to human rights violations, suppression of freedoms, and authoritarian control, constituting harm to communities and violations of fundamental rights. The leak reveals these harms and the system's deployment domestically and abroad, confirming realized harm. Therefore, this qualifies as an AI Incident. The internal leak and political implications do not negate the direct harm caused by the AI system's use.

A deliberate leak? Serious intra-Party split as CCP firewall secrets leak (photo) - Commentary -

2025-09-16
看中国
Why's our monitor labelling this an incident or hazard?
The leaked files pertain to an AI system (the Great Firewall) that performs real-time network monitoring, content filtering, and user identification, which are AI functions. The use and export of this system have directly led to violations of human rights and suppression of freedoms, constituting harm. The event is not merely a potential risk but documents actual use and harm. The internal leak and political implications do not negate the AI system's role in causing harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Kazakhstan buying surveillance system from China? Digital ministry responds

2025-09-15
Tengrinews.kz
Why's our monitor labelling this an incident or hazard?
The leaked documents describe an AI-enabled censorship and surveillance system modeled after China's Great Firewall, which uses AI to filter content and surveil individuals. Such systems can cause violations of human rights and harm communities by restricting access to information and enabling mass surveillance. Although Kazakhstan denies the deployment, the credible report and the nature of the system imply a plausible risk of harm if such systems are or will be used. Since no confirmed harm in Kazakhstan is established, but the potential for harm is clear, this event is best classified as an AI Hazard.

Great Firewall of China Compromised in Historic 600GB Data Exposure - IT Security News

2025-09-15
IT Security News - cybersecurity, infosecurity news
Why's our monitor labelling this an incident or hazard?
The Great Firewall of China is known to use automated and AI-driven technologies for internet censorship and surveillance. The breach exposes sensitive information about this AI system's operation, which directly relates to violations of human rights and privacy. Although the breach itself is a cybersecurity incident, the involvement of an AI system in the surveillance and censorship machinery and the exposure of its internal data constitute an AI Incident due to the direct link to violations of rights and potential harm to individuals and communities.

"Great Firewall in a Box" - How a massive data leak unveiled China's censorship export model

2025-09-17
TechRadar
Why's our monitor labelling this an incident or hazard?
The event involves an AI system, specifically the deep packet inspection engine and associated algorithms that perform real-time traffic analysis and censorship decisions, which are AI-driven. The development and use of this AI system have directly led to harm by enabling authoritarian regimes to restrict access to information, suppress dissent, and surveil citizens, violating their human rights and digital freedoms. The harm is realized and ongoing, affecting millions of people in multiple countries. The article describes the direct impact of the AI system's deployment, not just potential or future harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

China's Great Firewall leak: What we know

2025-09-17
Newsweek
Why's our monitor labelling this an incident or hazard?
The leaked censorship tools are AI systems because they perform complex monitoring, filtering, and traffic analysis tasks that go beyond simple software, indicating AI involvement. Their deployment by governments to suppress dissent and control information directly leads to violations of human rights and harm to communities. The article describes realized harm through surveillance and censorship, not just potential harm. Therefore, this event meets the criteria for an AI Incident due to the direct link between the AI system's use and human rights violations.

China's Great Firewall Leak Exposes Global Export Of Censorship Tools

2025-09-17
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The leaked information explicitly details an AI system designed for content filtering, traffic monitoring, and user-specific censorship, which are AI system characteristics. The system's deployment in countries like Pakistan and Myanmar has directly led to large-scale surveillance and repression, violating human rights and fundamental freedoms. The harm is realized and ongoing, not merely potential, fulfilling the criteria for an AI Incident. The event is not merely a hazard or complementary information but a clear case where AI system use has caused significant harm to communities and rights.

Leaked Geedge files unmask China's global censorship machinery

2025-09-17
News Nation English
Why's our monitor labelling this an incident or hazard?
The leaked documents expose the use of AI-based censorship and surveillance systems by Geedge Networks, which are actively deployed by authoritarian regimes to monitor and restrict citizens' online activities, constituting violations of human rights and harm to communities. The AI systems' development and use have directly contributed to these harms. Hence, this qualifies as an AI Incident under the OECD framework.

How a Chinese company exports the Great Firewall to autocratic regimes

2025-09-18
Global Voices
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the censorship and surveillance toolset similar to the Great Firewall) that performs complex tasks such as real-time online surveillance, filtering content, detecting circumvention tools, and generating behavioral analyses. The system's deployment in various autocratic regimes has directly caused harm by violating human rights, including privacy and freedom of expression, and enabling oppressive government actions like internet shutdowns and censorship. These harms are materialized and ongoing, meeting the criteria for an AI Incident. The involvement of AI in the system's development and use is clear, and the harms are direct and significant.

China's Geedge Breach Exposes Censorship Tools Export to Repressive Regimes

2025-09-17
WebProNews
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used for censorship and surveillance, which are actively causing violations of human rights by enabling repression and stifling free speech. The breach reveals operational details and confirms the use of AI-driven tools in multiple countries, indicating realized harm. Therefore, this qualifies as an AI Incident because the AI systems' use has directly led to violations of fundamental rights and harm to communities. The breach also has implications for cybersecurity and governance, but the primary focus is on the harm caused by the AI-enabled censorship tools.