OpenAI Cracks Down on Malicious AI-Driven Influence Operations

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OpenAI reported banning several accounts belonging to threat actors who used ChatGPT for disinformation, propaganda, and fake job applications. Accounts from China and North Korea, along with Iran-linked groups, exploited AI tools for covert influence operations, highlighting risks to human rights and national security.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI system (ChatGPT) to generate harmful outputs: disinformation articles attacking the US published under false pretenses, and fake resumes/profiles used for fraudulent job applications. These activities constitute violations of rights and harm to communities through misinformation and fraud. Since the AI system's use directly led to these harms, this qualifies as an AI Incident.[AI generated]
AI principles
Respect of human rights; Democracy & human autonomy; Safety; Robustness & digital security; Accountability; Transparency & explainability

Industries
Media, social platforms, and marketing; Government, security, and defence; Digital security

Affected stakeholders
General public; Government

Harm types
Public interest; Human or fundamental rights; Reputational

Severity
AI incident

AI system task
Content generation; Interaction support/chatbots

In other databases

Articles about this incident or hazard

OpenAI bans suspected malicious users in China and North Korea

2025-02-22
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (ChatGPT) to generate harmful outputs: disinformation articles attacking the US published under false pretenses, and fake resumes/profiles used for fraudulent job applications. These activities constitute violations of rights and harm to communities through misinformation and fraud. Since the AI system's use directly led to these harms, this qualifies as an AI Incident.
OpenAI uncovers evidence of AI-powered Chinese surveillance tool

2025-02-22
The Indian Express
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used for malicious purposes such as surveillance, disinformation campaigns, and scams that have already occurred. The AI-powered surveillance tool gathers real-time reports on anti-Chinese posts, and the disinformation campaigns generate and translate content to influence public opinion and facilitate scams. These activities have directly led to harms including violations of rights and harm to communities, meeting the criteria for an AI Incident. The involvement of AI in the development and use of these tools is clear and central to the harms described.
OpenAI bans Chinese accounts for social media surveillance - ET Telecom

2025-02-23
ETTelecom.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's AI models (ChatGPT) to generate descriptions for a social media listening tool used by Chinese security agencies to monitor protests, which is unauthorized surveillance violating personal freedoms and rights. Additionally, the AI was used to generate politically motivated misinformation targeting Latin American audiences. These activities directly lead to violations of human rights and harm to communities. The AI system's development and use are central to these harms, fulfilling the criteria for an AI Incident.
OpenAI bans some Chinese users from using ChatGPT for social media monitoring

2025-02-23
Moneycontrol
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in ways that directly lead to harms: unauthorized surveillance supporting authoritarian regimes (violations of human rights) and the generation of disinformation targeting communities (harm to communities). The misuse of AI for these purposes has already occurred, constituting realized harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
OpenAI bans malicious North Korean, Chinese users

2025-02-24
cyberdaily.au
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT and DeepSeek) being used maliciously to generate harmful content (vilifying news articles, fraudulent social media comments) and unauthorized training of AI models using OpenAI's data. These activities have directly led to harms including misinformation dissemination, financial fraud facilitation, and intellectual property rights violations. OpenAI's use of AI tools to detect and ban malicious users further confirms AI system involvement in both harm and mitigation. Hence, the event meets the criteria for an AI Incident due to realized harms linked to AI system misuse and breaches of legal protections.
OpenAI has been actively banning users if they're suspected of malicious activities

2025-02-24
Tom's Guide
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) for malicious activities such as scams, fake job applications, and disinformation campaigns, which constitute violations of rights and harm to communities. These harms have materialized as the malicious outputs were generated and used. OpenAI's banning of accounts is a response to these AI incidents. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harms, including fraud and misinformation dissemination.
OpenAI removes users suspected of malicious activities

2025-02-23
iTnews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT AI system by malicious actors to generate harmful content such as denigrating news articles, fake resumes for fraud, and social media comments for financial fraud. These activities have directly led to harms including misinformation, fraud, and potential political influence operations, which qualify as harm to communities and violations of rights. The AI system's development and use are central to these harms. Hence, this event meets the criteria for an AI Incident.
OpenAI Removes Chinese Accounts Which Published Propaganda in Latin American Newspapers

2025-02-23
PCMAG
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) to generate and disseminate propaganda content, which has been published in mainstream media and social platforms, thereby causing harm through misinformation and manipulation of public opinion. The involvement of AI in generating and translating the content is clear, and the harm is realized as the propaganda has reached a wide audience, influencing political discourse. Therefore, this qualifies as an AI Incident due to direct harm to communities and violation of rights through disinformation.
OpenAI Discovers Evidence Of AI-Powered Chinese Surveillance Tool Tracking Real-Time Discussions

2025-02-23
news.abplive.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered surveillance tools tracking real-time discussions, AI-generated disinformation campaigns targeting dissidents, and AI-generated content used in scams. These activities have directly led to harms including violations of privacy and human rights, manipulation of public opinion, and financial fraud. The AI systems' development and use are central to these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.
OpenAI removes users in China, North Korea suspected of malicious activities

2025-02-21
Reuters
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (OpenAI's ChatGPT) by malicious users to generate misleading news articles, fake resumes, and fraudulent social media comments, which have caused or facilitated harm such as misinformation, fraud, and influence operations. These harms fall under violations of rights and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use directly led to realized harms.
OpenAI finds new Chinese influence campaigns using its tools

2025-02-21
Axios
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (OpenAI's ChatGPT and other tools) in active disinformation campaigns and surveillance-related activities by a nation-state actor. This misuse has directly led to harm in the form of spreading false information and aiding authoritarian control, which constitutes harm to communities and violations of rights. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is ongoing and directly linked to AI misuse.
China, Iran-based threat actors have found new ways to use American AI models for covert influence: Report

2025-02-21
Fox News
Why's our monitor labelling this an incident or hazard?
The report details concrete instances where AI systems were used by malicious actors to generate and spread disinformation and influence operations, which have already occurred and caused harm to communities by undermining truthful information and democratic discourse. The AI systems' outputs were instrumental in producing and disseminating harmful content, fulfilling the criteria for an AI Incident. The involvement is through the use of AI models for malicious purposes, and the harm is realized, not just potential.
China, Iran-based threat actors have found new ways to use American AI models for covert influence: Report

2025-02-21
foxwilmington.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (OpenAI and Meta models) being used by malicious actors to generate content that was published and spread, causing harm through disinformation and scams. This constitutes a violation of rights and harm to communities. The AI systems' use in these operations directly led to realized harms, qualifying this as an AI Incident rather than a hazard or complementary information.
OpenAI bans accounts used to develop Chinese surveillance tools targeting the West

2025-02-21
Neowin
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and Llama) used to develop surveillance tools targeting political dissent and human rights discussions in Western countries. This use of AI directly relates to violations of human rights and potential coercion by authoritarian regimes, which fits the definition of an AI Incident. The harm is either occurring or highly plausible given the nature of the surveillance and the intended use of the AI-generated outputs. OpenAI's banning of accounts confirms the AI system's involvement and misuse. Therefore, this is not merely a potential hazard or complementary information but a realized AI Incident involving harm to communities and human rights.
OpenAI bans accounts appearing to work on a Chinese surveillance tool

2025-02-21
The Seattle Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and other AI models) being used to develop and support a surveillance tool that collects and reports on individuals' activities without consent, which is a violation of human rights and personal freedoms. The AI's role is pivotal in enabling this unauthorized monitoring and suppression, fulfilling the criteria for an AI Incident. The harms are realized, not just potential, as the surveillance tool is actively being developed and promoted using AI. The involvement of AI in these malicious uses and the direct link to violations of rights justify classification as an AI Incident rather than a hazard or complementary information.
OpenAI Finds Evidence AI-Powered Surveillance Tools Using ChatGPT "Seem to Originate from China"

2025-02-21
eWEEK
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT, Meta's Llama, and other AI models) being used maliciously to create surveillance tools and generate disinformation aimed at influencing political discourse and surveilling protests. These actions have directly led to harms such as violations of rights (privacy, political expression) and harm to communities (through disinformation and political manipulation). Therefore, this qualifies as an AI Incident because the AI systems' use has directly led to realized harms. The report also details OpenAI's response to mitigate these harms, but the primary event is the malicious use causing harm, not just the response.
OpenAI removes users in China, North Korea suspected of malicious activities

2025-02-21
ThePrint
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (ChatGPT) to generate harmful outputs: denigrating news articles, fake resumes for fraudulent job applications, and coordinated social media comments for financial fraud. These uses have directly led to harms such as misinformation, fraud, and undermining security, fitting the definition of an AI Incident. The company's action to ban these accounts is a mitigation response but does not negate the occurrence of harm caused by the AI system's misuse.
OpenAI Removes Users in China, North Korea Suspected of Malicious Activities

2025-02-21
US News & World Report
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose outputs were exploited by malicious users to generate misleading news articles, fake resumes, and fraudulent social media content, directly leading to harms such as misinformation, fraud, and potential repression. These harms fall under violations of rights and harm to communities. Since the harms have occurred and the AI system's use was pivotal in enabling them, this qualifies as an AI Incident.
OpenAI detects AI-powered Chinese surveillance tool - The Times of India

2025-02-21
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system developed and used for surveillance and generating disinformation, which are activities that can violate human rights and harm communities. The AI system's use in real-time monitoring and generating posts directly contributes to these harms. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through surveillance and disinformation campaigns. The involvement of AI in the development and use of the surveillance tool and disinformation generation is clear and central to the event.
OpenAI bans Chinese accounts using ChatGPT to edit code for social media surveillance

2025-02-21
Engadget
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of ChatGPT, an AI system, to assist in creating and refining code for a social media surveillance tool used to monitor and suppress political dissent and human rights activism. This constitutes a violation of human rights and harm to communities, fulfilling the criteria for an AI Incident. The AI system's use is central to the harm caused, and the harm is realized, not merely potential. The involvement of AI in generating phishing emails and politically charged content further supports this classification.
OpenAI removes users in China, North Korea suspected of malicious activities

2025-02-22
The Indian Express
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) for malicious purposes that have directly led to harms such as misinformation spreading, fraudulent job applications, and opinion manipulation. These harms affect communities and violate rights, fitting the definition of an AI Incident. The removal of accounts is a response to these realized harms. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI in causing harm.
OpenAI bans accounts tied to China and North Korea for malicious AI activity - Profit by Pakistan Today

2025-02-22
Profit by Pakistan Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT) being used maliciously to generate misleading news articles, fraudulent resumes, and content for financial fraud, which are harms to communities, violations of rights, and potentially threats to security. These harms have occurred or are ongoing, making this an AI Incident. The detection and banning of accounts is a mitigation response but does not negate the fact that harms have materialized due to AI misuse.
OpenAI cracks down on misuse in China, North Korea

2025-02-24
Mobile World Live
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT, an AI system, being exploited for harmful purposes including surveillance, influence campaigns, and fraud. These activities have directly led to harms such as misinformation dissemination, fraud risks, and suppression of rights, which fall under violations of human rights and harm to communities. The involvement of the AI system in these harms is clear and direct, meeting the criteria for an AI Incident. The report also discusses mitigation efforts but the primary focus is on the realized harms caused by misuse of the AI system.

2025-02-22
brudirect.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and Llama-based models) used in the development and operation of a social media surveillance tool targeting human rights activists and dissidents, which is a violation of human rights. The AI's role is pivotal in generating code, content, and facilitating surveillance activities. The harm is realized, not just potential, as the tool is actively used to monitor and suppress dissent, and disinformation is spread via AI-generated articles. This meets the criteria for an AI Incident due to direct involvement of AI in causing harm to human rights and communities.
OpenAI cracks down on users developing social media surveillance tool using ChatGPT

2025-02-24
TechSpot
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and Meta's Llama models) being used to develop and support a surveillance tool that targets social media users for unauthorized monitoring by an authoritarian regime. This use directly breaches privacy and human rights, fulfilling the criteria for harm under the AI Incident definition (violations of human rights and harm to communities). The misuse is realized, not merely potential, as evidenced by disinformation campaigns and phishing activities linked to AI misuse. The AI system's development and use are central to the harm, and the event describes direct consequences of AI misuse rather than a hypothetical risk or a governance response. Hence, it is classified as an AI Incident.
China Using AI-Powered Surveillance Tools, Says OpenAI

2025-02-24
databreachtoday.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used for surveillance and disinformation campaigns that have already occurred, causing harm such as violations of privacy, unauthorized monitoring, and spreading misleading narratives. The AI systems' development and use are directly linked to these harms, fulfilling the criteria for an AI Incident. The involvement of AI in generating content and monitoring social media, as well as the direct impact on communities and rights, confirms this classification.
Open AI bans multiple accounts found to be misusing ChatGPT

2025-02-24
TechRadar
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that AI systems (ChatGPT and other models) were used to generate disinformation and support surveillance operations, which have directly led to harms such as spreading misinformation, undermining democracy, and facilitating surveillance. These harms fall under violations of rights and harm to communities. The involvement of AI in generating content and code for these campaigns is clear, and the harm is realized, not just potential. Hence, this is an AI Incident.
OpenAI targets AI misuse, removes accounts in China and North Korea

2025-02-24
Tech Monitor
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (OpenAI's ChatGPT) being misused to generate misinformation, fake resumes, and fraudulent content, which have caused direct harms such as misinformation spread and fraud. The removal of accounts is a response to these harms. The involvement of AI in these harmful activities meets the criteria for an AI Incident, as the misuse of AI has directly led to violations including misinformation and fraud. The additional information about policy changes and sanctions provides context but does not overshadow the primary incident of AI misuse causing harm.
OpenAI Disables China and North Korea Accounts for Misuse - TechNadu

2025-02-24
TechNadu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's AI system for harmful activities including propaganda, fraudulent resumes, financial fraud, and influence operations. These activities have caused or are causing harm to communities and potentially threaten security, fitting the definition of an AI Incident. The AI system's misuse is central to the harms described, and OpenAI's disabling of accounts is a response to these realized harms. Hence, the event is classified as an AI Incident.
OpenAI suspends China-linked accounts using ChatGPT for surveillance tool

2025-02-21
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (OpenAI's language models) in the development and operation of a surveillance tool used by an authoritarian regime to monitor and suppress personal freedoms, which is a violation of human rights. The harm is realized as the AI system's outputs were used to facilitate unauthorized surveillance and repression. The suspension of accounts is a response to this misuse, but the incident itself reflects direct harm caused by AI use. Hence, it meets the criteria for an AI Incident due to violations of human rights and harm to communities resulting from the AI system's use.
OpenAI bans accounts linked to surveillance tool development By Investing.com

2025-02-21
Investing.com India
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (OpenAI's models) in the development and operation of a surveillance tool aimed at monitoring individuals and political actors without consent, which is a breach of fundamental rights. The AI system's use directly contributes to activities that violate human rights, fulfilling the criteria for an AI Incident. The harm is realized in the form of unauthorized surveillance and potential suppression of personal freedoms, even if the full impact is not yet fully assessed. Therefore, this is not merely a potential hazard or complementary information but an AI Incident due to the direct involvement of AI in rights violations.
China: OpenAI uncovers Chinese security operation's AI surveillance tool monitoring anti-Chinese social media posts - Business & Human Rights Resource Centre

2025-02-25
Business & Human Rights Resource Centre
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used for surveillance, disinformation, and scam generation, which have directly caused harms such as violations of human rights and harm to communities. The AI system's development and use are central to these harms, fulfilling the criteria for an AI Incident. The involvement of AI in malicious surveillance and disinformation campaigns constitutes direct harm, not just potential or future risk.
Chinese and Other Actors Leverage AI for Censorship, Surveillance, Propaganda

2025-02-25
China Digital Times (中国数字时代)
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (chatbots like ChatGPT and DeepSeek) being used to monitor social media, generate propaganda, and enforce censorship, which are activities that violate human rights and harm communities. The AI systems' outputs are used by authorities to suppress dissent and surveil populations, constituting direct harm. The involvement of AI in these activities is clear and central to the harms described. Hence, this qualifies as an AI Incident under the framework, as the AI systems' use has directly led to violations of rights and harm to communities.
OpenAI report reveals alarming rise in AI-enabled malicious activities

2025-02-25
bobsguide
Why's our monitor labelling this an incident or hazard?
The report explicitly details the use of OpenAI's AI models in harmful activities that have already occurred, such as AI-generated propaganda influencing media, AI-facilitated scams causing fraud, and AI-assisted cyberattacks and surveillance. These constitute realized harms to communities, individuals, and potentially violate rights, fitting the definition of an AI Incident. The AI systems' use is central to these harms, and the event is not merely a warning or general information but documents actual malicious use cases.
OpenAI Bans Chinese Accounts That Used ChatGPT to Create Anti-US Propaganda

2025-02-23
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) to generate and disseminate propaganda and disinformation, which has directly led to harm to communities by spreading misleading political narratives and influencing public opinion covertly. The use of AI-generated content in mainstream media and social platforms to manipulate information and deceive audiences fits the definition of an AI Incident, as it causes significant harm to communities and violates norms of truthful information dissemination. The involvement of AI in generating and translating the content is explicit, and the harm is realized through the spread of propaganda and misinformation.
OpenAI cracks down on ChatGPT scammers

2025-02-25
Digital Trends
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that ChatGPT, an AI system, was used by bad actors for fraudulent and malicious activities, including generating deceptive content and facilitating scams. These activities have caused direct harm to individuals and communities, such as through scams and misinformation. OpenAI's intervention by banning accounts confirms the AI system's involvement in causing harm. Hence, this event meets the criteria for an AI Incident.
OpenAI Bans Accounts in China, North Korea Over AI Misuse

2025-02-24
MEDIANAMA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's AI models for malicious activities including social media surveillance feeding information to Chinese security services and influence operations planting propaganda articles. These uses have directly caused harm to communities and violated rights, fulfilling the criteria for an AI Incident. The involvement of AI systems is clear, and the harms are realized, not merely potential. Hence, the classification as AI Incident is appropriate.
OpenAI removes Chinese and North Korean users implicated in using AI for surveillance and opinion manipulation - 自由財經

2025-02-24
ec.ltn.com.tw
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose misuse has directly led to harms including misinformation dissemination, fraud, and surveillance activities by authoritarian regimes. These harms relate to violations of rights and harm to communities. The deletion of accounts is a response to these realized harms. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harms.
[AI] OpenAI deletes some Chinese and North Korean accounts, citing malicious activity

2025-02-24
etnet 經濟通
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) by malicious actors to generate harmful content and fake profiles, which has led to violations such as misinformation dissemination and fraud attempts. These activities constitute harm to communities and potentially violate rights, thus qualifying as an AI Incident. The deletion of accounts is a response to these realized harms caused by the AI system's misuse.
OpenAI cracks down! Chinese and North Korean accounts blocked over alleged AI-driven surveillance and opinion manipulation | NOWnews今日新聞

2025-02-24
NOWnews今日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT AI system by malicious actors from China and North Korea to conduct harmful activities such as generating disinformation, surveillance, and fraud attempts. These activities have materialized harms including spreading false information and attempting to deceive employers, which constitute harm to communities and violations of rights. The AI system's use is central to these harms, qualifying this as an AI Incident rather than a hazard or complementary information.
Cracking down on malicious operations, OpenAI removes specific Chinese and North Korean accounts

2025-02-24
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's AI system (ChatGPT) by malicious users to generate disinformation and fraudulent content, which has caused harm to communities and individuals through misinformation and scams. The AI system's outputs were instrumental in these harmful activities, fulfilling the criteria for an AI Incident as the harm has materialized and is directly linked to the AI system's use. The removal of accounts is a mitigation response but does not negate the occurrence of harm.
OpenAI deletes Chinese and North Korean users implicated in using AI to monitor and control public opinion | The Epoch Times

2025-02-24
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT) being used maliciously to monitor and control public opinion, generate disinformation, and commit fraud. These activities constitute violations of rights and harm to communities, fulfilling the criteria for an AI Incident. The harms are realized and ongoing, not merely potential. OpenAI's deletion of accounts is a response but does not negate the incident classification. Therefore, this event is best classified as an AI Incident.
OpenAI deletes Chinese and North Korean users implicated in using AI to monitor and control public opinion | 台灣大紀元

2025-02-24
The Epoch Times (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI system involvement (OpenAI's ChatGPT) being used maliciously to surveil, manipulate media, generate fake content for fraud, and spread misinformation. These uses have caused or are causing harm to communities (manipulated public opinion), violations of rights (surveillance, misinformation), and financial harm (fraud). The harms are realized, not just potential. Therefore, this qualifies as an AI Incident. The deletion of accounts is a response but does not negate the incident classification.
Thumbnail Image

Le Monde: Some Chinese Users Suspended for Using ChatGPT to Build Surveillance Tools

2025-02-25
RFI (Persian service)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) to develop surveillance tools and spread false information, which directly harms human rights and communities by enabling government monitoring of protests and disseminating disinformation. The misuse of ChatGPT in this context has led to realized harm, fulfilling the criteria for an AI Incident. The banning of users by OpenAI is a response to this misuse but does not negate the occurrence of harm caused by the AI system's use.

China: OpenAI Finds Chinese Security Operation Developed AI Surveillance Tool to Track Anti-China Content on Social Media

2025-02-25
Business & Human Rights Resource Centre
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (based on Meta's Llama) being used by a Chinese security operation for real-time monitoring of social media content, which is a direct use of AI for surveillance purposes. The use of AI to generate and disseminate politically charged content and scams further indicates active harm to communities and violations of rights. These activities have already occurred and caused harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

[Forbidden News] OpenAI: CCP Uses AI to Monitor Online Speech and Generate Propaganda Articles

2025-02-26
NTDChinese
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems developed and used by Chinese teams to monitor social media for anti-China posts and to generate propaganda articles using ChatGPT. This use of AI directly leads to violations of human rights (freedom of expression and information) and harms communities by spreading disinformation and manipulating public opinion. The AI system's role is pivotal in enabling large-scale surveillance and content generation. Therefore, this qualifies as an AI Incident under the framework, as the harms are realized and directly linked to the AI system's use.

South Korea Bans DeepSeek as ChatGPT Weekly Users Surpass 2 Million for the First Time

2025-02-23
自由時報電子報
Why's our monitor labelling this an incident or hazard?
The article describes a government ban on an AI application due to concerns about data management, which is a precautionary regulatory action. There is no evidence of realized harm or incidents caused by the AI systems. The increase in ChatGPT users is a usage statistic without associated harm. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on AI ecosystem developments and governance responses without reporting a new incident or hazard.

North Korea Used ChatGPT to Write Fake Resumes for Disguised Employment

2025-02-21
jeonpa.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT AI system to create fake resumes and profiles for deceptive employment purposes, which directly led to fraud and financial harm. It also describes AI-generated disinformation and AI-assisted cyberattacks causing significant financial and security harms. The harms include violations of rights, theft of property (cryptocurrency), and harm to communities through disinformation. The AI system's use is central to these harms, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a report of actual harms caused by AI misuse.

North Korea Wrote False Resumes with ChatGPT... "Accounts Deleted"

2025-02-22
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The use of ChatGPT, an AI system, to generate false resumes and profiles directly facilitated cybercrime resulting in substantial financial harm. The AI system's misuse by North Korean actors led to realized harm (theft), meeting the criteria for an AI Incident. The deletion of accounts is a response but does not negate the incident classification.

North Korea Caught Writing False Resumes and Profiles with ChatGPT... Accounts Deleted

2025-02-21
연합뉴스
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT AI system to create false resumes and profiles to deceive companies for employment fraud, which is a direct harm to property and communities. It also details AI-generated disinformation and surveillance activities linked to Chinese accounts, causing violations of rights and manipulation of public discourse. These harms have already occurred, and the AI system's use was pivotal in enabling these malicious activities. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

North Korean Accounts Caught Using ChatGPT to Create "False Resumes" for Jobs at Western Companies

2025-02-21
경향신문
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT) being used maliciously to generate false resumes and disinformation, leading to direct harms such as deception of employers, financial theft exceeding $1 billion, and political disinformation campaigns. The AI system's use is central to these harms, fulfilling the criteria for an AI Incident. The harms include violations of rights (deception, fraud), harm to property (cryptocurrency theft), and harm to communities (disinformation). The event is not merely a potential risk or a complementary update but a report of realized harms caused by AI misuse.

North Korea Caught Writing False Resumes and Profiles with ChatGPT; Accounts Deleted

2025-02-21
Wow TV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) to create false resumes and profiles, which directly led to harms such as fraud, theft of cryptocurrency, and manipulation of public opinion. These harms fall under violations of rights and harm to communities and property. The detection and deletion of these accounts by OpenAI is a response to an ongoing AI Incident involving malicious use of AI-generated content causing real harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

North Korea Caught Writing False Resumes and Profiles with ChatGPT... "Accounts Deleted"

2025-02-21
매일방송
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT AI system to create false resumes and profiles to deceive companies, generate disinformation articles to manipulate public opinion, and produce AI-generated comments to lure people into investment scams. These activities have directly caused harm to individuals and communities through fraud, misinformation, and manipulation, fulfilling the criteria for an AI Incident. The involvement of the AI system is clear, and the harms are realized, not just potential. Hence, the event is classified as an AI Incident.

North Korea Generated False Resumes and Fake Profiles with ChatGPT... OpenAI: "Accounts Deleted"

2025-02-22
조선일보
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) to create false and misleading content that is actively used to deceive and harm individuals and organizations, including attempts to infiltrate companies and spread disinformation. These actions constitute violations of rights and harm to communities, fitting the definition of an AI Incident. The article reports realized harm through malicious use of AI-generated content, not just potential harm. Therefore, this is classified as an AI Incident.

OpenAI Blocks Large Number of Spy Accounts That Monitored Social Media and Sought Employment Fraud

2025-02-23
wowtv.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by malicious actors to collect data, generate misleading content, and create fake resumes for fraudulent employment attempts. These activities have directly led to harms including violations of privacy, misinformation spreading, and deception causing harm to individuals and organizations. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to people and communities, as well as violations of rights.

North Korea Uses ChatGPT to Earn Foreign Currency for Nuclear Development

2025-02-23
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, a generative AI system, to create fake resumes and profiles that enabled North Korean hackers to deceive companies and steal cryptocurrency worth over one billion dollars. This is a direct link between the AI system's use and significant harm to property and financial assets. The harm is realized, not just potential, and the AI system's role is pivotal in enabling the deception and subsequent theft. Hence, this event meets the criteria for an AI Incident.

"북한은 챗GPT를 이렇게 쓰네"...외화벌이용 가짜 이력서 만들다 딱 걸려 - 매일경제

2025-02-23
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (ChatGPT) to generate fake resumes and profiles for fraudulent purposes, leading to direct financial harm and deception. This meets the criteria for an AI Incident because the AI system's use directly caused harm (financial theft, deception, and manipulation). The involvement of AI in generating misleading content and facilitating cybercrime is clear and central to the event. Therefore, this is classified as an AI Incident.

Meta's Llama Used in Opinion Warfare, OpenAI Finds

2025-02-23
wowtv.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Meta's LLaMA, OpenAI's ChatGPT) being used by state actors for surveillance, misinformation, and deception. These activities have already occurred and caused harm, such as privacy violations, spreading disinformation, and manipulation of public opinion, which are harms to communities and violations of rights. The AI systems' development and use have directly led to these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.