AI deepfake fraud and weaponized misinformation alarm Hong Kong and Taiwan authorities

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Hong Kong employee was defrauded of HK$4 million during a deepfake video call impersonating her boss. Taiwan's National Security Bureau reports finding over 40,000 pieces of AI-generated fake news and deepfake content weekly, warns that foreign powers are weaponizing the technology, and is deploying "AI vs AI" defenses against Chinese misinformation operations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems used for deepfake video generation and the spread of false information, harming communities by undermining election fairness and disseminating misinformation. The harm is realized: misinformation circulated during elections, and victims inadvertently spread false content. This qualifies as an AI Incident because AI-generated deepfakes directly caused harm to communities. The article does not merely discuss potential risks or responses; it reports actual misinformation spread and its consequences, meeting the criteria for an AI Incident.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy; Human wellbeing

Industries
Digital security; Media, social platforms, and marketing; Government, security, and defence; Financial and insurance services

Affected stakeholders
Workers; General public; Government

Harm types
Economic/Property; Reputational; Public interest; Human or fundamental rights; Psychological

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard

Government slow to take down deepfake videos; Hsu Chiao-hsin criticizes: victims are unjustly turned into perpetrators

2024-11-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of deepfake video generation and the spread of false information, which has caused harm to communities by influencing election fairness and misinformation dissemination. The harm is realized as misinformation circulated during elections and victims inadvertently spreading false content. Therefore, this qualifies as an AI Incident due to the direct harm to communities and the role of AI-generated deepfakes in causing this harm. The article does not merely discuss potential risks or responses but reports on actual misinformation spread and its consequences, meeting the criteria for an AI Incident.

Tsai Ing-wen's deepfake culprit caught quickly, but Kao Chia-yu's remains at large? Tsai Ming-yen: the cases differ in level

2024-11-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfake videos created using AI technology that have harmed individuals by falsely portraying them and causing reputational damage. The videos have led to police reports and judicial investigations, indicating realized harm. The harm includes violations of personal rights and potential social disruption, fitting the definition of an AI Incident. The government's involvement and classification of threats related to national security further underscore the seriousness of the harm caused by AI-generated deepfakes.

Tsai Ming-yen: NSB's AI predictions of Chinese Communist Party personnel changes are highly accurate

2024-11-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used for prediction tasks related to political personnel decisions. However, the article does not report any harm or negative consequences resulting from this AI use. There is no indication of injury, rights violations, disruption, or other harms. The event focuses on the AI system's application and its performance, without any mention of incidents or risks of harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides context on AI use in national security and methodological development without describing harm or plausible harm.

Lo Mei-ling warns generative AI is a new weapon for state-sponsored cyberattacks; Taiwan is a prime target

2024-11-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, specifically generative AI, being used in cyberattacks that threaten critical infrastructure and national security. Although no actual harm or incident is reported, the discussion centers on the plausible risk that AI-enabled cyberattacks could cause significant harm, including disruption of critical infrastructure and national security breaches. This fits the definition of an AI Hazard, as the development and use of AI systems could plausibly lead to an AI Incident. The article also includes governmental responses and plans, but the main focus is on the credible threat posed by AI in cyberattacks, not on a realized incident or complementary information about past events.

NSB reveals 40,000 pieces of disinformation per week; principles for reporting to the Executive Yuan disclosed

2024-11-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to detect and manage AI-generated misinformation and deepfake videos that are actively spreading online. The harms include threats to social order and national security, which fall under harm to communities and potentially violations of rights. Since the misinformation and deepfake content are actively found and filtered, and some are reported for further action due to their harm, this constitutes an AI Incident. The article details ongoing harm caused by AI-generated misinformation and the governmental response to it, not just potential future harm or general information about AI.

Kao Chia-yu victimized as AI technology is weaponized; NSB plans to strengthen detection capabilities

2024-11-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves AI deepfake systems used to create false videos that have already caused harm to individuals (politicians) and potentially to social stability by spreading misinformation. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons and communities. The article also discusses responses and mitigation efforts, but the primary focus is on the realized harm from AI deepfake misuse, not just the response, so it is not merely Complementary Information.

Disinformation rampant; Hsu Chiao-hsin snaps at NSB director: finding solutions is your job, not mine

2024-11-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated or AI-amplified misinformation (deepfake videos) spreading on a social media platform, which directly harms communities by misleading voters and potentially undermining democratic processes. The failure to promptly remove such content during the election period constitutes an AI Incident because the AI system's outputs (deepfake videos) have directly led to harm to communities through misinformation. The discussion about the inability to immediately take down such content and the challenges posed by the platform's foreign status further supports the classification as an AI Incident rather than a hazard or complementary information.

NSB finds 40,000 pieces of disinformation weekly; the most harmful will be reported to the Executive Yuan for investigation

2024-11-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems generating misinformation and deepfake content, which are being actively monitored and managed by government agencies. The article describes ongoing harm caused by AI-generated false information affecting social order and national security, with concrete examples of detection and reporting. However, the article focuses on the detection and administrative response rather than a specific new incident of harm or a new hazard. Therefore, this is best classified as Complementary Information, providing context and updates on responses to existing AI-related harms rather than reporting a new AI Incident or AI Hazard.

Deepfakes endanger information security; Lin Yi-chun asks whether TikTok could be banned on civil servants' personal phones or for the general public. Ministry of Digital Affairs: it cannot simply be banned

2024-11-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of AI deepfake technology to generate false videos and audio that have been spread online, causing misinformation and national security risks. This constitutes harm to communities and national security (a form of harm to communities and potentially to critical infrastructure via national security). The AI system's use in creating and disseminating false content has directly led to these harms. Therefore, this qualifies as an AI Incident. The article also discusses government responses and regulatory challenges, but the primary focus is on the realized harm caused by AI deepfake misuse.

Disinformation on social media increasingly rampant; Hsu Chiao-hsin to NSB director: you need to find solutions, not me

2024-11-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos and misinformation that have been actively spreading during elections, causing harm to communities by misleading voters and disrupting the electoral process. The failure to promptly remove such content exacerbates the harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violations of rights. The article focuses on the harm caused and the inadequate mitigation, not just potential future harm or complementary information.

"Ultra-fast, indiscriminate" attacks! NSB warns of three emerging AI challenges

2024-11-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article describes realized and ongoing harms caused or plausibly caused by AI systems: AI-augmented cyberattacks (e.g., automated generation of malware variants, enhanced DDoS attacks), AI-generated deepfake and misinformation campaigns spreading rapidly and indiscriminately, and data leakage incidents due to misuse of generative AI tools. These constitute direct or indirect harms to cybersecurity, public information integrity, and intellectual property/confidentiality rights. Therefore, the event qualifies as an AI Incident. The article also discusses responses but the primary focus is on the harms and risks already materializing or ongoing, not just potential future hazards or complementary information.

Kao Chia-yu victimized! Deepfake videos go viral on TikTok; NSB warns of three challenges and will counter AI with AI

2024-11-17
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system's use in creating deepfake videos that have harmed a public figure by spreading false and misleading content, fulfilling the criteria for an AI Incident (harm to communities and violation of rights). The national security agency's report on AI-enabled cyber threats and misinformation, and their plans to use AI to counter these threats, constitute complementary information as they describe responses and ongoing monitoring rather than new incidents or hazards. Therefore, the primary classification is AI Incident due to the realized harm from the deepfake video, with complementary information aspects present but secondary.

Bo Xilai's son Bo Guagua comes to Taiwan to marry; NSB's two top concerns revealed! Did the Biden-Xi meeting mention Lai Ching-te? Singapore's prime minister was also a victim

2024-11-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology in the creation of deepfake videos that have been used to harm individuals (e.g., the former legislator's manipulated video) and spread misinformation, which affects social stability and personal rights. This constitutes a direct AI Incident as the AI system's use has led to realized harm. The discussion of national security concerns and government responses further supports the classification as an AI Incident rather than a hazard or complementary information.

AI technology weaponized: NSB warns of three major challenges in cybersecurity risk, deepfakes, and disinformation

2024-11-16
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to create deepfake videos and false information that mislead the public and consume government resources, which is a direct harm to communities and national security. It also details AI-enabled cyber threats such as automated vulnerability analysis, generation of malware variants, and enhanced DDoS attacks, which increase cybersecurity risks. These harms have already materialized or are ongoing, not just potential risks. Hence, the event meets the criteria for an AI Incident as AI's use has directly and indirectly led to significant harms including misinformation dissemination and cybersecurity threats.

Generative AI system predicts Chinese Communist Party personnel arrangements; Tsai Ming-yen: accuracy is quite high

2024-11-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used for prediction of political personnel arrangements, which qualifies as an AI system. However, the event does not describe any realized harm or violation resulting from the AI's use. There is no mention of injury, disruption, rights violations, or other harms caused by the AI system. The use is described as intelligence gathering and forecasting, which is a legitimate application without reported negative consequences. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information, as it provides context on AI use in national security and intelligence without describing harm or plausible harm.

NSB warns of AI weaponization, citing three major cybersecurity challenges

2024-11-16
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article centers on the potential and emerging risks of AI weaponization, including AI-driven cyberattacks and misinformation campaigns, which could plausibly lead to harms such as disruption of critical infrastructure and harm to communities. However, it does not report any actual harm or incidents caused by AI systems at this time. The focus is on risk assessment, warnings, and planned responses, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly involves AI systems and their potential misuse.

NSB uses AI to predict CCP personnel; Tsai Ming-yen: accuracy is quite high

2024-11-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems for prediction and analysis, confirming AI system involvement. However, the AI's role is limited to forecasting and intelligence support without any reported harm or plausible risk of harm resulting from its use. The mention of misinformation is attributed to external actors, not the AI system. Thus, the event fits the definition of Complementary Information, as it provides context and insight into AI's role in national security analysis without describing an incident or hazard.

Relentless questions over salacious images of Lo Chih-cheng and Chao Tien-lin! Hsu Chiao-hsin's clash with the NSB revealed

2024-11-18
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfake (AI-generated synthetic media) videos and their possible use in spreading misinformation during elections, which is a credible risk of harm to communities and democratic processes. However, the article does not confirm that these AI-generated materials have definitively caused harm yet; investigations are ongoing. Therefore, the event represents a plausible risk of harm from AI systems rather than a realized harm. This fits the definition of an AI Hazard, as the development, use, or misuse of AI deepfake technology could plausibly lead to an AI Incident involving election interference and misinformation.

Kao Chia-yu deepfake videos go viral; NSB warns of three AI challenges

2024-11-16
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to create deepfake videos that mislead the public and spread false information, which harms communities by distorting public perception. It also details AI-enhanced cyber threats and a concrete case of data leakage due to generative AI tool misuse, indicating direct or indirect harm to property and information security. These meet the criteria for an AI Incident as the harms have occurred and are linked to AI system use and misuse. The article also discusses governmental responses, but the primary focus is on the harms caused by AI, not just responses or general AI news.

Tsai Ming-yen: NSB uses AI to predict CCP personnel arrangements with high accuracy

2024-11-18
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI) for prediction in a security intelligence context, which fits the definition of an AI system. However, the article does not describe any realized harm (injury, rights violations, disruption, or other harms) caused by the AI system, nor does it indicate any plausible risk of harm arising from this use. The focus is on the operational deployment and accuracy of the AI system, with no mention of incidents or hazards. Therefore, this is best classified as Complementary Information, as it provides context and updates on AI use in government intelligence without reporting an AI Incident or AI Hazard.

Tsai Ming-yen: NSB uses AI to predict CCP personnel with high accuracy

2024-11-18
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system for predictive analysis in a national security context, but there is no evidence or claim of any harm caused or plausible harm that could arise from this use as described. The article mainly provides information about the AI system's deployment and ongoing methodological improvements, without reporting any incident or hazard. Therefore, it fits best as Complementary Information, providing context and updates on AI use in government intelligence operations without describing an AI Incident or AI Hazard.

40,000 pieces of disinformation caught weekly! Ministry of Digital Affairs: confident social platforms will cooperate in adding deepfake reporting mechanisms

2024-11-18
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article describes realized harm caused by AI systems generating misinformation and deepfake videos that affect social order, individual rights, and national security. The involvement of AI systems in producing these harmful contents is explicit, and the governmental agencies' efforts to detect and report these incidents indicate that harm is occurring. Therefore, this qualifies as an AI Incident because the AI systems' use has directly or indirectly led to harm to communities and violations of rights. The article also discusses responses and cooperation with social media platforms, but the primary focus is on the harm caused and the detection/reporting of such content, not just complementary information or future risks.

40,000 pieces of disinformation caught weekly! Ministry of Digital Affairs: confident social platforms will cooperate in adding deepfake reporting mechanisms (footage from the Legislative Yuan IVOD)

2024-11-18
Liberty Times
Why's our monitor labelling this an incident or hazard?
The article details existing harms caused by AI-generated misinformation and deepfake videos, such as damage to personal reputation and social order, and the governmental mechanisms to detect and report these harms. However, it primarily discusses the institutional response, cooperation with social media platforms, and policy-level communications rather than describing a new AI Incident or AI Hazard event. Therefore, it fits best as Complementary Information, providing context and updates on ongoing AI-related harm mitigation efforts.

AI deepfakes proliferate! NSB sets up task force; Ministry of Digital Affairs' AI risk classification may be delayed

2024-11-18
UDN
Why's our monitor labelling this an incident or hazard?
The article clearly describes realized harms caused by AI systems: AI-generated deepfake videos and AI-enabled scams have led to significant financial losses and misinformation affecting public figures and society. The involvement of AI in generating and detecting deepfakes and scams is explicit. The harms include violation of rights (fraud victims), harm to communities (misinformation), and potential national security risks. The government's efforts to detect, mitigate, and regulate AI risks are responses to these incidents. Therefore, the event qualifies as an AI Incident due to the direct and indirect harms caused by AI systems in use.

After-the-fact debunking is inefficient; DPP legislators call for proactive countermeasures against disinformation

2024-11-18
UDN
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of misinformation and AI-generated content (e.g., deepfakes, AI chatbots). However, it does not report a specific AI Incident where harm has directly or indirectly occurred due to AI system malfunction or misuse. Nor does it describe a particular AI Hazard event with a clear plausible risk of harm materializing imminently. Instead, it mainly covers governmental and political discourse on AI-related misinformation threats and responses, which fits the definition of Complementary Information as it provides context and updates on societal and governance responses to AI-related challenges.

NSB warns of AI weaponization, citing three major cybersecurity challenges

2024-11-16
UDN
Why's our monitor labelling this an incident or hazard?
The article discusses AI's potential to cause significant harm through misuse such as deepfake misinformation, automated cyberattacks, and rapid spread of false information, which could plausibly lead to incidents affecting national security and public trust. Since no actual harm or incident is reported but credible risks and challenges are emphasized, this fits the definition of an AI Hazard. The article also includes information on responses and mitigation efforts, but the main focus is on the potential threats posed by AI weaponization, not on a past incident or complementary information about responses alone.

AI deepfakes proliferate; Tsai Ming-yen: 40,000 cases weekly show a weaponization trend

2024-11-18
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create and spread deepfake videos and false information, which are actively causing harm to communities by misleading the public and threatening national security. The article details realized harms such as misinformation proliferation, election interference risks, and financial scams linked to AI-generated content. The government's detection and mitigation efforts confirm the presence of ongoing AI-related harms. Therefore, this qualifies as an AI Incident due to direct harm caused by AI misuse in misinformation and deepfake weaponization.

Forensic analysis of Lo Chih-cheng sex videos; Investigation Bureau deputy director, suppressing a laugh: the public deserves an explanation

2024-11-18
Newtalk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for deepfake detection, which are AI systems analyzing multimedia content to identify manipulation. However, the article does not report any direct or indirect harm caused by the AI systems' development, use, or malfunction. The AI tools are used as investigative aids, and the article centers on the status of investigations and calls for transparency. There is no indication that the AI systems caused injury, rights violations, or other harms, nor that they pose a plausible future harm. Therefore, this is best classified as Complementary Information, providing context and updates on AI use in law enforcement and related governance issues.

Tsai Ming-yen: Criminal Investigation Bureau reported over 90,000 pieces of disinformation to Meta in the first three quarters of this year

2024-11-18
Newtalk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated misinformation and deepfake videos as part of the false information being spread online, indicating AI system involvement. The authorities' use of technical systems to detect and report these issues to Meta shows active use of AI-related tools. However, the article does not describe a specific AI Incident where harm has directly or indirectly occurred due to AI misuse or malfunction. Instead, it reports on the scale of the problem and the governmental response, which fits the definition of Complementary Information as it enhances understanding of AI's societal impact and governance efforts without reporting a new incident or hazard.

Tsai Ing-wen deepfake handled at once, but the Kao Chia-yu case unsolved? Tsai Ming-yen: the former involved national security

2024-11-18
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems through the creation and dissemination of deepfake videos, which have directly led to harm such as defamation and potential national security risks. These harms fall under violations of rights and harm to communities, meeting the criteria for an AI Incident. The discussion of government and law enforcement responses further supports that these harms are realized and being addressed, rather than merely potential. Therefore, this event is best classified as an AI Incident.

Tsai Ming-yen: NSB uses AI to predict CCP personnel with high accuracy

2024-11-18
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system for predictive analysis in a national security context. However, there is no indication that this use has directly or indirectly caused any harm such as injury, rights violations, disruption, or other significant harms. The article focuses on the capabilities and internal use of AI for forecasting, without reporting any incident or harm. Therefore, it does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information as it provides context and insight into AI applications in national security without describing harm or plausible harm.

NSB warns of AI weaponization, citing three major cybersecurity challenges

2024-11-16
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used to create deepfake videos, generate misinformation, and automate cyberattacks such as DDoS and botnet control. These uses have already led or are leading to harms including misinformation that misleads the public (harm to communities), increased cybersecurity threats (disruption of critical infrastructure), and national security risks. Since these harms are occurring or actively anticipated and the AI systems' use is central to these harms, this qualifies as an AI Incident. The article also mentions responses and mitigation efforts, but the primary focus is on the realized and ongoing harms caused by AI misuse.

Chinese media claim the Biden-Xi meeting mentioned Lai Ching-te's "Taiwan independence"; National Security Council: the White House statement did not

2024-11-18
PTS (Public Television Service)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by foreign actors to disseminate false information and deepfake content, which has already affected individuals (e.g., a former legislator) and poses significant challenges to information integrity and security. This is a direct harm to communities through misinformation and deception, fitting the definition of an AI Incident. The article also notes governmental responses to counter these harms, but the primary focus is on the realized harm caused by AI misuse, not just potential or complementary information.

Tsai Ing-wen's deepfake culprit caught quickly, but Kao Chia-yu's remains at large? Tsai Ming-yen: the cases differ in level

2024-11-18
SETN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos, which have caused harm by impersonating individuals and potentially misleading the public. The article references actual incidents where deepfake videos have been produced and led to legal actions, indicating realized harm to individuals' reputations and social order. Although national security threats are distinguished, the harm to individuals and social stability qualifies this as an AI Incident due to violations of rights and harm to communities. The ongoing investigations confirm that harm has occurred and is being addressed.

NSB finds 40,000 pieces of disinformation weekly; the most harmful will be reported to the Executive Yuan for investigation

2024-11-18
SETN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to detect AI-generated misinformation, but the article does not report any realized harm or incident caused by AI-generated misinformation. Instead, it describes ongoing surveillance and administrative responses to potential threats. Therefore, this is best classified as Complementary Information, as it provides context on governance and response measures related to AI-generated misinformation without reporting a specific AI Incident or AI Hazard.

NSB uses AI to predict CCP personnel; Tsai Ming-yen: accuracy is quite high

2024-11-18
SETN
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used for predictive analysis of political personnel changes, which fits the definition of an AI system. However, the article does not report any injury, rights violation, disruption, or other harm caused by the AI system's development, use, or malfunction. The AI is used as a tool for intelligence analysis and prediction, with human oversight. The mention of misinformation or cognitive warfare is attributed to Chinese media, not the AI system. There is no credible indication that the AI system's use could plausibly lead to harm. Thus, this is a case of AI use without associated harm or hazard, making it Complementary Information as it provides context on AI application in intelligence analysis.

AI deepfakes hit Taiwan with 40,000 cases weekly; NSB: a weaponization trend

2024-11-18
The Epoch Times - Taiwan
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (deepfake technology, AI-based misinformation generation, and AI detection tools) in the creation and spread of false information that harms social order and national security. The harms are ongoing and substantial, including misinformation campaigns by foreign adversaries aiming to influence elections and public opinion, which qualifies as harm to communities and a violation of rights. The government's response to detect and mitigate these harms further confirms the presence of realized harm. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly and indirectly led to significant harm.

Multinational company falls for deepfake scam: Hong Kong-based employee defrauded of HK$4 million in video call with "fake boss"

2024-11-14
ezone.hk
Why's our monitor labelling this an incident or hazard?
The event involves the use of deepfake AI technology to impersonate a company executive in a video call, which directly led to a significant financial loss due to fraud. The AI system's use here is malicious and caused realized harm (financial loss), fitting the definition of an AI Incident. The harm is to property (financial assets) and potentially to the community of the company employees. Therefore, this is classified as an AI Incident.

Kao Chia-yu victimized! Deepfake videos go viral on TikTok; NSB warns of three challenges and will counter AI with AI

2024-11-17
三立新聞
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated deepfake technology to create and disseminate false and misleading video content about a public figure, which constitutes a violation of rights and harm to communities through misinformation. The harm is realized as the deepfake video has been widely circulated, leading to reputational damage and public deception. The NSB's response and planned use of AI to counter AI threats is complementary information but does not negate the fact that the deepfake incident itself is an AI Incident. Therefore, the primary classification is AI Incident.