Chinese Officials Use ChatGPT for Cross-Border Intimidation and Disinformation Campaigns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OpenAI revealed that Chinese officials used ChatGPT to document and facilitate large-scale cross-border intimidation and disinformation campaigns, including impersonating U.S. officials to threaten dissidents, fabricating false death notices, and attempting to smear Japan's Prime Minister. These AI-enabled actions resulted in real-world harm, violating human rights and spreading misinformation globally. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of an AI system (ChatGPT) in the misuse context, leading to direct harm such as financial fraud (dating scams defrauding victims) and violations of rights (impersonation of law firms and officials). The harms are realized and ongoing, meeting the criteria for an AI Incident. The report details actual misuse and resulting harm, not just potential or hypothetical risks, so it is not an AI Hazard or Complementary Information. Therefore, the classification is AI Incident. [AI generated]
AI principles
Respect of human rights; Accountability

Industries
Government, security, and defence; Media, social platforms, and marketing

Affected stakeholders
General public; Government

Harm types
Human or fundamental rights; Psychological; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

From dating scams to fake lawyers: OpenAI details ChatGPT misuse in new threat report

2026-02-26
The Hindu
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the misuse context, leading to direct harm such as financial fraud (dating scams defrauding victims) and violations of rights (impersonation of law firms and officials). The harms are realized and ongoing, meeting the criteria for an AI Incident. The report details actual misuse and resulting harm, not just potential or hypothetical risks, so it is not an AI Hazard or Complementary Information. Therefore, the classification is AI Incident.
From dating scams to fake lawyers: OpenAI details ChatGPT misuse in new threat report

2026-02-25
Reuters
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the misuse of an AI system (ChatGPT) in conducting cybercrimes, including scams and influence operations that have caused harm such as deception and reputational damage. These harms fall under violations of rights and harm to communities, and the AI system's use is a direct factor in these incidents. Therefore, this qualifies as an AI Incident.
Using ChatGPT as a Diary! Chinese Official's Slip Leaks Secrets, Fully Exposing Transnational Repression and the Smear Campaign Against Takaichi

2026-02-26
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by a Chinese official to record and plan harmful operations that have already caused harm, including misinformation campaigns and suppression of dissent. The AI system's involvement is central to the incident, as it was used to generate plans and document operations that led to real-world harm. Therefore, this qualifies as an AI Incident due to direct involvement of AI in causing violations of rights and harm to communities.
Chinese Officials' Fondness for ChatGPT Accidentally Exposes Smears Against Sanae Takaichi and Transnational Intimidation of Dissidents

2026-02-26
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by Chinese officials to generate false information and intimidate dissidents, which has led to realized harms such as misinformation, harassment, and violation of rights. The AI system's outputs were used to fabricate fake court documents, false death notices, and defamatory content, which were spread online causing harm to individuals and communities. This meets the criteria for an AI Incident as the AI system's use directly led to violations of human rights and harm to communities.
OpenAI Report Reveals China's Large-Scale Cyberattacks on Taiwan; 蕭上農: A Systematic Engineering Effort by the State Apparatus

2026-02-26
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by Chinese state actors to conduct coordinated cyberattacks and disinformation campaigns against Taiwan and dissidents, leading to realized harms including harassment, account restrictions, and physical detention. The AI systems' development and use are central to these harms, fulfilling the criteria for an AI Incident. The report details actual harms rather than potential risks, and the AI role is pivotal in enabling these state-level operations, thus it is not merely complementary information or a hazard but an incident.
Report Says Chinese Officials Used ChatGPT for Transnational Repression, Exposing Tactics of Impersonating Immigration Officers and Spreading Disinformation

2026-02-26
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of ChatGPT, an AI system, in the development and execution of harmful activities including impersonation, spreading false information, and coordinated disinformation campaigns. These actions have directly caused harm to individuals (intimidation, harassment) and communities (disinformation, manipulation), fulfilling the criteria for an AI Incident. The AI system's role is pivotal as it was used to generate and plan these harmful outputs. The harm is realized and ongoing, not merely potential.
CCP Officials' ChatGPT Use Accidentally Exposes Global Intimidation Operation

2026-02-25
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) by CCP officials to carry out harmful activities such as intimidation, misinformation, and suppression of dissent, which constitute violations of human rights and harm to communities. The AI system's outputs were instrumental in these actions, and the harm is ongoing and realized, not merely potential. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use directly led to significant harms including human rights violations and community harm.
'From dating scams to fake lawyers': OpenAI bans ChatGPT accounts over misuse

2026-02-26
The News International
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in the commission of cybercrimes that have caused direct harm to individuals (fraud victims) and communities (smear campaigns, influence operations). The AI system's outputs were used to generate deceptive content and communications that facilitated these harms. Therefore, this qualifies as an AI Incident because the AI system's use directly led to violations of rights and harm to people.
OpenAI Threat Report Revealed: How Is China's "Cyber Special Warfare" Using AI to Suppress Taiwan and Dissenting Voices?

2026-02-26
Business Next (數位時代)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as being used for content generation, translation, monitoring, and manipulation in a coordinated influence operation by a government entity. The use of AI directly contributes to violations of human rights, including suppression of dissent and freedom of expression, and causes harm to individuals (e.g., detention of a dissident) and communities (e.g., disinformation campaigns). The AI's role is pivotal in enabling the scale and coordination of these operations. The harms are realized and documented, not merely potential. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Epic Blunder! Chinese Official Kept a Diary in ChatGPT, and OpenAI Had Enough: Shocking Inside Story of Transnational Repression Fully Disclosed

2026-02-26
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, in the development and execution of coordinated disinformation and intimidation campaigns that have caused harm to individuals and communities, constituting violations of human rights and political repression. The AI system's role is pivotal in generating fake documents and messages used for harassment and misinformation. Therefore, this qualifies as an AI Incident due to realized harm linked directly to the AI system's misuse.
From Dating Scams to Fake Lawyers: OpenAI Details ChatGPT Misuse in New Threat Report

2026-02-26
NTD
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) being used maliciously to generate deceptive content and communications that have caused real harm, including financial fraud and influence operations. These harms fall under violations of rights and harm to communities, meeting the criteria for an AI Incident. The misuse is not hypothetical or potential but has already occurred, with OpenAI banning accounts linked to these activities, confirming realized harm.
From dating scams to fake lawyers: OpenAI details ChatGPT misuse in new threat report

2026-02-25
StreetInsider.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the development and use phases, where the AI was used to generate content facilitating scams, influence operations, and impersonations. These actions have directly caused harm to individuals (financial fraud victims), communities (smear campaigns), and rights (impersonation of officials). Therefore, this qualifies as an AI Incident because the AI system's misuse has directly led to realized harms as defined in the framework.
Chinese Officials' Chatbot Use Accidentally Exposes Smears Against Takaichi and Transnational Intimidation

2026-02-25
udn Money (聯合理財網)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the development and execution of harmful activities such as cross-national intimidation, misinformation campaigns, and defamation. The harms described include violations of human rights and harm to communities through misinformation and political manipulation. The AI system's role is pivotal as it was used to generate false documents and content that facilitated these harms. The harm is realized and ongoing, not merely potential, thus qualifying this as an AI Incident rather than a hazard or complementary information.
Chinese Officials Left a "Diary" in ChatGPT! Accidentally Exposing Smears Against Takaichi and Transnational Intimidation

2026-02-26
TVBS
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, in the development and execution of coordinated disinformation and intimidation campaigns. These campaigns have caused actual harm, including the spread of false information about dissidents' deaths and attempts to discredit political figures, which constitute violations of human rights and harm to communities. Therefore, this event qualifies as an AI Incident due to the direct link between AI use and realized harm.
OpenAI Report: China's Large-Scale Cyberattacks on Taiwan; 蕭上農 Highlights "Risks to Taiwan"

2026-02-26
SETN (三立新聞)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Chinese local AI models like DeepSeek and Qwen) in the development and execution of coordinated disinformation and harassment campaigns by a state actor. These campaigns have directly led to harms such as online harassment, suppression of dissent, and physical detention of individuals, fulfilling the criteria for an AI Incident. The AI systems are not merely potential risks but have been actively used to generate content, monitor targets, and facilitate operations that have caused real harm. The article also notes the failure of AI safety mechanisms in some models, highlighting the systemic nature of the harm. Hence, the classification as AI Incident is appropriate.
From Dating Scams to Fake Lawyers: OpenAI Details ChatGPT Misuse in New Threat Report

2026-02-26
Claims Journal
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions that ChatGPT, an AI system, was used by malicious actors to conduct cybercrimes including scams and influence operations that harmed individuals and communities. The harms include deception, fraud, and reputational damage, which fall under harm to communities and violations of rights. Since the AI system's use directly contributed to these harms, this qualifies as an AI Incident.
OpenAI Reveals China Used ChatGPT to Record Overseas Influence Operations, Impersonating Immigration Officials to Threaten Dissidents

2026-02-25
ETTV America (東森美洲電視)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that ChatGPT, an AI system, was used by Chinese officials to record and coordinate a campaign of harassment and threats against dissidents, a violation of human rights. The officials' use of the AI system to plan and document the operation ties the system directly to the resulting harms, and its misuse is what ultimately exposed the campaign. Because the AI system was employed in an ongoing, realized harm, the event meets the definition of an AI Incident involving violations of human rights and harm to communities.
Chinese Officials Used ChatGPT as a Diary, Accidentally Exposing Beijing's Large-Scale Overseas Repression Operation

2026-02-26
FTV News (民視新聞網)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by Chinese officials to document and facilitate cross-border repression and intimidation campaigns. These campaigns involve impersonation, misinformation, and harassment of dissidents abroad, which are clear violations of human rights and harm to communities. The AI system's role is pivotal as it was used as a tool in these harmful operations. The harm is realized and ongoing, not merely potential, thus classifying this as an AI Incident rather than a hazard or complementary information.
A Secret Chinese Campaign Was Exposed By 1 Mistake: Using ChatGPT As A Diary

2026-02-26
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (ChatGPT and other AI tools) in the development and execution of a large-scale disinformation and repression campaign targeting dissidents and world leaders. The harms include violations of human rights (intimidation, suppression, impersonation, spreading false information) and harm to communities (disinformation campaigns). The AI system's role is pivotal as it was used for planning, record-keeping, and content generation, directly or indirectly leading to these harms. Therefore, this qualifies as an AI Incident.
One Mistake, Big Leak: Chinese Official's ChatGPT 'Diary' Exposes Secret Campaign

2026-02-26
News18
Why's our monitor labelling this an incident or hazard?
The involvement of ChatGPT as a tool for planning and documenting a covert campaign that intimidates dissidents and spreads false information meets the criteria for an AI Incident. The harms include violations of human rights and harm to communities through intimidation, impersonation, and misinformation. Although the AI was not used to generate harmful content directly, its role in enabling the campaign is pivotal. Hence, this event qualifies as an AI Incident rather than a hazard or complementary information.
OpenAI shares details from thwarted romance scams, fake law firms, and an effort to smear Japan's prime minister

2026-02-25
Business Insider
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (ChatGPT and other AI models) in the development and execution of scams and influence operations that have caused direct harm, including financial fraud and political repression. The AI systems were used to generate fake content, impersonate officials, and assist in planning and polishing malicious campaigns. These activities meet the criteria for AI Incidents as they have directly led to harm to individuals and communities, including violations of rights and fraud. Therefore, the event is classified as an AI Incident.
OpenAI uncovers global Chinese intimidation operation through one official's use of ChatGPT | CNN Politics

2026-02-25
CNN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by a Chinese official to document and plan a large-scale influence operation that has already caused harm, such as intimidation of dissidents, impersonation of officials, and spreading false information. These actions constitute violations of human rights and harm to communities. The AI system's role was pivotal in enabling these harms, meeting the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's use was central to the incident.
Chinese law enforcement tried using ChatGPT to discredit Japan's PM, OpenAI says

2026-02-25
Axios
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (ChatGPT and Chinese AI models) used in the development and execution of a disinformation campaign aimed at discrediting a political figure and suppressing dissent. The use of AI in generating and amplifying false information and fake accounts has directly led to violations of human rights and harm to communities by undermining political discourse and spreading misinformation. The harm is realized, not just potential, as the campaign went ahead using AI tools. Therefore, this qualifies as an AI Incident under the framework.
Chinese official accidentally reveals secret operation to ChatGPT: Smear campaign against Japan PM, impersonating US officials - The Times of India

2026-02-25
The Times of India
Why's our monitor labelling this an incident or hazard?
The involvement of ChatGPT, an AI system, is explicit as it was used to plan and track covert operations. The harms include violations of human rights through transnational repression and misinformation campaigns, which have materialized as described (e.g., false obituaries, social media account takedowns). The AI system's use directly contributed to these harms by enabling the planning and coordination of these activities. Hence, this event meets the criteria for an AI Incident.
Chinese law enforcement tried to use ChatGPT to plan influence op against Japan PM: OpenAI

2026-02-26
CNA
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) to generate harmful content for an influence operation, which is a direct use of AI leading to harm to communities and violation of rights through misinformation and manipulation. The operation was active and involved large-scale coordinated efforts, indicating realized harm rather than just potential harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Crooks and Communists Misusing AI Tools

2026-02-25
HotAir
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT and Claude) being used in harmful ways: Chinese operatives used ChatGPT to document and plan repression and misinformation campaigns, which directly violates human rights and harms communities. Separately, a hacker exploited AI to breach Mexican government servers and steal sensitive data, causing harm to property and privacy. These are direct harms caused or facilitated by AI system use, meeting the criteria for an AI Incident. The article also discusses the AI systems' guardrails being bypassed, indicating malfunction or misuse leading to harm. Therefore, this event qualifies as an AI Incident.
Chinese Official Accidentally Reveals Vast Influence Operation Through ChatGPT Use | National Review

2026-02-25
National Review
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) used in the planning and updating of a covert influence operation that includes impersonation and harassment tactics causing harm to communities and violations of rights. The AI system's involvement is in the use phase, and the harm is realized through the ongoing influence campaigns and harassment activities. The disclosure by OpenAI and the evidence of the campaigns confirm that harm has occurred. Hence, this is an AI Incident rather than a hazard or complementary information.
OpenAI: Chinese agent used ChatGPT for smear ops

2026-02-25
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (ChatGPT and other AI models) to carry out malicious operations that have caused harm to individuals and communities, including political repression and psychological harassment. The harms include violations of human rights and harm to communities through coordinated disinformation and harassment campaigns. The AI's role is pivotal as it was used to generate and plan these operations, making this a clear AI Incident rather than a hazard or complementary information.
OpenAI Intelligence Report Identifies New Tactics in AI-Enhanced Scams | PYMNTS.com

2026-02-25
PYMNTS.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI models being used to plan covert influence operations, conduct romance scams, and generate disinformation content, all of which have caused harm to individuals and communities. The harms include violations of rights (e.g., intimidation, impersonation), harm to communities (e.g., disinformation), and fraud-related harms. The AI systems' involvement is direct in generating content and automating interactions that facilitate these harms. Although the report notes that AI-generated content was not always decisive, the AI's role was pivotal in enabling these malicious campaigns. Hence, the event meets the criteria for an AI Incident.
Chinese Police Use ChatGPT to Smear Japan PM Takaichi

2026-02-26
Dark Reading
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the development and use phases to facilitate smear campaigns and influence operations. These campaigns have directly led to harm in the form of reputational damage, political manipulation, and violations of rights, fulfilling the criteria for an AI Incident. The AI system's role is pivotal as it was used to generate and polish the malicious content, making the harm possible and more effective. The description confirms realized harm rather than potential harm, distinguishing it from an AI Hazard or Complementary Information.
OpenAI report reveals Chinese influence campaign exposed through ChatGPT use - Daily Times

2026-02-26
Daily Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves ChatGPT, an AI system, being used by a government official to conduct and document a coordinated influence campaign involving misinformation, harassment, and impersonation. These actions constitute violations of human rights and harm to communities, fulfilling the criteria for an AI Incident. The misuse of the AI system directly contributed to these harms, and the report details actual realized harm rather than potential harm. Hence, this is classified as an AI Incident.
Chinese group's ChatGPT use reveals worldwide harassment campaign against critics

2026-02-25
CyberScoop
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT and other AI models) used in the development and execution of coordinated influence and harassment operations. The use of AI to generate propaganda, impersonate officials, flood social media with fake accounts, and intimidate critics constitutes a direct link to harm, specifically violations of human rights and harm to communities. The report details ongoing and realized harm, not just potential risks, making this an AI Incident rather than a hazard or complementary information. The AI's role is pivotal in enabling the scale and sophistication of these operations.
OpenAI flags China-linked influence ops targeting Japan's Takaichi

2026-02-26
Nikkei Asia
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (ChatGPT, DeepSeek, Alibaba's Qwen) being used to plan and execute influence operations that harm communities and violate rights by spreading disinformation and manipulating political discourse. The AI systems' outputs were instrumental in structuring and refining the malicious campaign, thus directly contributing to the harm. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use has directly led to harm to communities and violations of rights.
ChatGPT Slip Reveals Alleged Chinese Smear Campaign On Japan PM

2026-02-26
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) being used in the development and use phases to create disinformation content, which is a form of harm to communities and a violation of rights. Although the campaign was detected and disrupted before full execution, the misuse and intent to cause harm are clear and directly linked to the AI system's use. This qualifies as an AI Incident because the AI system's misuse has directly led to a significant harm scenario (disinformation campaign) that was interrupted but had already begun. The detection and prevention do not negate the incident classification, as the misuse and partial execution occurred.
OpenAI uncovers global Chinese intimidation operation through one official's use of ChatGPT

2026-02-25
Local3News.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, by a Chinese official to document and facilitate a large-scale influence operation that caused real harm to dissidents abroad. The harms include intimidation, impersonation of officials, spreading false information, and attempts to suppress social media accounts, which are violations of human rights and harm to communities. The AI system's role was pivotal in enabling these activities, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and the AI system's use was central to the incident.
Covert Campaigns and Smear Messages: How Is ChatGPT Being Exploited?

2026-02-25
Sky News Arabia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT AI system in malicious campaigns involving impersonation, fraud, and disinformation. These uses have directly caused harm to individuals (e.g., victims of dating scams), political figures (e.g., targeted disinformation against Japan's prime minister), and broader communities through influence operations. The AI system's misuse is central to these harms, fulfilling the criteria for an AI Incident.
OpenAI Bans Chinese Accounts That Exploited ChatGPT for Fraud

2026-02-26
Okaz
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT AI system in fraudulent and malicious activities that have caused real harm to individuals and communities, including scams and political influence operations. The AI system's misuse directly contributed to these harms, fulfilling the criteria for an AI Incident. Therefore, this event is classified as an AI Incident rather than a hazard or complementary information.
tayyar.org -

2026-02-26
tayyar.org
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (OpenAI's ChatGPT) in the commission of cybercrimes and influence operations that have caused direct harm to individuals and communities. The misuse includes fraudulent schemes that have likely harmed hundreds of victims financially and politically, as well as impersonation that breaches trust and possibly legal rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harms as defined in the framework.
"أوبن.إيه.آي" تصدر تقريراً حول إساءة استخدام "تشات جي.بي.تي"

2026-02-25
Sawt Beirut International
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (OpenAI's ChatGPT) in harmful activities including scams, influence operations, and impersonations. These uses have directly caused harm to individuals (financial fraud victims), political figures (disinformation campaigns), and communities (through deception and manipulation). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harms as defined in the framework.
"أوبن إيه آي" تحظر حسابات "شات جي بي تي" مرتبطة بالسلطات الصينية - صحيفة الوئام

2026-02-25
Al-Weeam
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT AI system by malicious actors to commit cybercrimes and influence operations, leading to realized harms such as fraud, deception, and political interference. The AI system's outputs were exploited to generate fake identities, messages, and content that caused harm to individuals and communities. This fits the definition of an AI Incident because the AI system's use directly led to violations of rights and harm to communities.
OpenAI Reveals ChatGPT Misuse in Cybercrimes and Smear Campaigns

2026-02-25
جريدة حابي
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of OpenAI's ChatGPT in criminal activities such as scams on dating sites, impersonation of officials and lawyers, and covert influence operations against a political leader. These uses have resulted in realized harms including fraud, misinformation, and reputational damage, fulfilling the criteria for an AI Incident. The AI system's misuse directly led to these harms, and the involvement is clear and explicit.