AI-Generated Deepfake News Anchors Used in Pro-China Disinformation Campaigns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI video-generation software, notably Synthesia's, was used to create deepfake news anchors for fictitious outlets such as Wolf News, spreading pro-China propaganda and disinformation on social media. Research by Graphika identified these state-aligned operations as the first known use of AI-generated video personas to deceive and influence public opinion, undermining trust in information.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI deepfake technology to create fake news anchors in propaganda videos, which are actively used to spread deceptive political messages. This use of AI-generated content causes harm to communities by misleading the public and undermining truthful information, fitting the definition of an AI Incident. The AI system's development and use are directly linked to the harm caused by the disinformation campaign.[AI generated]
AI principles
Accountability, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

"Deepfake" Technology Passes Fakes Off as Real: First Use in Chinese Propaganda Videos

2023-02-08
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create fake news anchors in propaganda videos, which are actively used to spread deceptive political messages. This use of AI-generated content causes harm to communities by misleading the public and undermining truthful information, fitting the definition of an AI Incident. The AI system's development and use are directly linked to the harm caused by the disinformation campaign.

"Deepfake" Technology Passes Fakes Off as Real: First Use in Chinese Propaganda Videos

2023-02-08
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create fake news anchors in propaganda videos, which are actively used to spread deceptive political messages. This is a direct use of an AI system leading to harm to communities through misinformation and manipulation, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the videos are already disseminated. The AI system's role is pivotal in generating the fake content that misleads viewers, thus meeting the definition of an AI Incident.

"Deepfake" Technology Passes Fakes Off as Real: First Use in Chinese Propaganda Videos

2023-02-08
UDN
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create fabricated video content that impersonates real news anchors, which is used for political propaganda. This directly leads to harm by spreading misinformation and deceptive political content, impacting societal trust and potentially violating rights related to truthful information. The AI system's use here is central to the harm caused, qualifying this as an AI Incident under the framework's criteria for harm to communities and violations of rights.

Research Report: China Uses Deepfake "Virtual Anchors" for Propaganda

2023-02-09
Voice of America
Why's our monitor labelling this an incident or hazard?
The report explicitly states that AI-generated deepfake virtual anchors are being used to produce deceptive political content linked to a state actor. The AI system's outputs (deepfake videos) are directly used to misinform and manipulate public opinion, which harms communities and violates rights to truthful information. The involvement of AI in generating these videos is clear, and the harm is realized as the videos are actively disseminated online. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Experts: Deepfake Technology, Which the CCP Both Loves and Fears, Becomes a New External Propaganda Tactic

2023-02-10
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create fabricated videos used in state propaganda, which directly harms communities by spreading misinformation and undermining trust. The article describes actual use of AI-generated content for political influence, not just potential or hypothetical risks. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to communities through disinformation campaigns. The article also references regulatory measures and concerns, but the primary focus is on the realized harm from AI-generated propaganda, not just complementary information or future risks.

"Deepfake" Technology Passes Fakes Off as Real: First Use in Chinese Propaganda Videos

2023-02-08
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create fake news anchors in propaganda videos, which are disseminated on social media. This use of AI directly leads to harm by spreading deceptive political content, misleading the public, and potentially undermining social stability and trust. Such misinformation campaigns are recognized as harm to communities and violations of rights. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Experts: Deepfake Technology, Which the CCP Both Loves and Fears, Becomes a New External Propaganda Tactic

2023-02-10
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic media. The Chinese government's use of deepfake-generated videos for propaganda purposes directly leads to harm by spreading false information and manipulating public opinion, which harms communities and undermines social stability. The article describes actual use and dissemination of such AI-generated content, not just potential or hypothetical risks. Therefore, this event meets the criteria for an AI Incident due to realized harm caused by AI-generated disinformation used in state propaganda.

Passing Fakes Off as Real: CCP Uses AI Virtual Anchors for External Propaganda for the First Time

2023-02-09
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create virtual news anchors that produce deceptive political content. This content is actively disseminated on social media, misleading audiences and serving propaganda purposes. The harm is realized as the AI system's outputs are used to spread false narratives, impacting public perception and political stability, which fits the definition of harm to communities. The AI system's use is central to the incident, and the harm is direct and ongoing. Hence, the event is classified as an AI Incident.

Fake Media, Deepfake Anchors, Fake Messages: Graphika Reveals New Tactics in China's Information Warfare

2023-02-08
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfake technology used to create fake news anchors, which is an AI system generating misleading content. The use of these AI-generated deepfakes to spread false and politically biased information constitutes harm to communities by manipulating public opinion and spreading disinformation. Since the harm is occurring through the dissemination of false narratives and fake media, this qualifies as an AI Incident under the definition of harm to communities caused directly or indirectly by AI systems.

"Deepfake" Technology Passes Fakes Off as Real: First Use in Mainland Chinese Propaganda Videos

2023-02-09
中時新聞網
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to produce fake news anchors in propaganda videos, which are disseminated to influence public opinion. This is a direct use of an AI system (deepfake generation) leading to harm in the form of misinformation and political manipulation, which harms communities and violates informational rights. Therefore, this qualifies as an AI Incident under the framework.

US Analysis Firm: CCP Uses "Deepfake Technology" for False Propaganda

2023-02-08
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The use of AI deepfake technology to produce fabricated political content that is disseminated to the public constitutes a violation of rights and causes harm to communities by spreading misinformation and undermining trust in information sources. Since the AI system's use has directly led to the dissemination of false political propaganda, this qualifies as an AI Incident under the framework, specifically as harm to communities through misinformation.

Cross-Strait Roundup, February 8

2023-02-09
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of "deepfake" AI technology to create realistic AI virtual anchors producing pro-China, anti-US political propaganda videos on major social media platforms. This is a clear example of an AI system (deepfake generative AI) being used to cause harm to communities by spreading disinformation and manipulating political narratives. The harm is realized as the videos are actively disseminated and influence public opinion. Other parts of the article do not involve AI or AI-related harm. Hence, the event is classified as an AI Incident due to the direct use of AI-generated deepfake content causing harm.

"Deepfake" Technology Passes Fakes Off as Real: First Use in Chinese Propaganda Videos

2023-02-08
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create fake news anchors in propaganda videos, which directly leads to harm by spreading deceptive political content. This constitutes an AI Incident because the AI system's use has directly caused harm to communities by disseminating false information and manipulating public opinion, fulfilling the criteria of harm to communities and violation of rights. The involvement of AI in generating the fake content is clear and central to the incident.

Crude Deepfake Anchors: This Company Exposes Beijing's New Information Warfare Tactics

2023-02-09
看中国
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake AI technology) to generate fake video content that spreads disinformation. The disinformation has been actively disseminated and is misleading the public, with potential consequences for political stability and national security. Because the AI system's use has directly led to harm to communities through misinformation and manipulation, meeting criterion (d), harm to communities, this event is classified as an AI Incident.

CCP External Propaganda Adopts "Deepfake Technology": Highly Realistic English-Speaking Anchors Tell China's Story Well

2023-02-09
Radio Free Asia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create virtual news anchors spreading Chinese government propaganda videos. This involves an AI system (Synthesia's deepfake video-generation platform) used to produce AI-generated content for political propaganda. The harm is the violation of informational integrity and harm to communities through disinformation and manipulation of public opinion. Although the impact is currently limited, the AI system's role in producing and disseminating misleading propaganda is direct and material, so this qualifies as an AI Incident rather than a mere hazard or complementary information. The article also discusses governance and ethical concerns, but its primary focus is the realized use of AI for propaganda, confirming the incident classification.

Research: China Uses Deepfake "Virtual Anchors" for External Propaganda

2023-02-12
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (deepfake technology) used to create virtual news anchors that disseminate false and misleading political content. This use of AI directly leads to harm to communities by spreading deceptive propaganda, which fits the definition of an AI Incident. The harm is realized, not just potential, as the content is actively circulating and influencing public discourse. Therefore, this event qualifies as an AI Incident due to the direct involvement of AI in causing harm through misinformation and political manipulation.

Research: Deepfake Anchors Launch a Propaganda Offensive for China

2023-02-08
Rti 中央廣播電臺
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create fake news anchors that disseminate deceptive political propaganda, which directly harms communities by spreading misinformation and undermining trust online. The AI system's use is central to the incident, and the harm is occurring, not hypothetical. This fits the definition of an AI Incident due to realized harm to communities through misinformation and political manipulation enabled by AI-generated content.

Crude Deepfake Anchors: This Company Exposes Beijing's New Information Warfare Tactics

2023-02-09
看中国
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI deepfake systems to produce fabricated news anchors and false news content, which is a direct use of AI technology. The harm caused is the spread of disinformation that can mislead the public, manipulate political opinions, and undermine trust in information sources, constituting harm to communities. Since the disinformation is actively being disseminated and causing harm, this qualifies as an AI Incident under the framework.

Passing Fakes Off as Real: CCP Uses AI Virtual Anchors for External Propaganda for the First Time

2023-02-09
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create virtual news anchors that produce deceptive political content. This use of AI has directly led to the spread of false narratives and propaganda, which harms communities by undermining truthful information and potentially influencing public perception and political stability. The AI system's role is pivotal as it enables the creation of highly realistic fake videos that are difficult to detect, facilitating the misinformation campaign. Hence, this event meets the criteria for an AI Incident involving violations of rights and harm to communities through misinformation.

"Deepfake" Technology Passes Fakes Off as Real: First Use in Chinese Propaganda Videos

2023-02-08
RFI
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake technology was used to create fake news anchors in propaganda videos, which are intended to deceive viewers and spread political misinformation. This use of AI has directly caused harm by misleading the public and potentially destabilizing social trust, fitting the definition of an AI Incident due to harm to communities. The involvement of AI in generating the deepfake content is clear and central to the incident.

CCP External Propaganda Adopts "Deepfake Technology": Highly Realistic English-Speaking Anchors Tell China's Story Well

2023-02-09
Radio Free Asia
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology and AI video generation platforms) in the creation and dissemination of propaganda videos by the Chinese government. This use is a form of AI system deployment with potential to cause harm through misinformation and manipulation of public opinion, which can harm communities and violate rights to truthful information. However, the article states that the videos have low viewership and limited impact so far, indicating that the harm is not yet realized but plausible in the future. Therefore, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the use of AI deepfake technology in propaganda, not on responses or governance measures. It is not unrelated because AI systems are central to the event.

Research Report: China Uses Deepfake "Virtual Anchors" for Propaganda

2023-02-09
Voice of America
Why's our monitor labelling this an incident or hazard?
The report explicitly states that AI systems were used to create deepfake virtual news anchors that disseminate deceptive political content linked to a state actor. The harm is realized as these videos spread false narratives and propaganda, misleading the public and damaging community trust and information integrity. This fits the definition of an AI Incident because the AI system's use directly leads to harm to communities through misinformation and manipulation, fulfilling harm criteria (c) and (d).

The People Onscreen Are Fake. The Disinformation Is Real.

2023-02-07
The New York Times
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deepfake video generation software by Synthesia) being used to create fake news anchors and videos that were distributed as part of a coordinated disinformation campaign. The disinformation is real and has been disseminated, causing harm to communities by spreading false narratives and undermining public trust. The AI system's use is central to the incident, as it enabled the creation of realistic but fake personas and videos that would be difficult to detect otherwise. This meets the definition of an AI Incident because the AI system's use directly led to harm to communities through misinformation and manipulation.

AI Deepfake 'News Anchors' Used in Pro-China Videos on Social Media: Report

2023-02-08
www.theepochtimes.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake videos (an AI system) to produce fake news anchors for propaganda purposes. The deployment of these videos on social media platforms constitutes the use of AI leading to harm, specifically harm to communities through misinformation and manipulation. The harm is realized, not just potential, as the videos have been actively shared and viewed, even if the reach was limited. The involvement of AI in generating the deepfakes is central to the incident. Hence, this event meets the criteria for an AI Incident.

Deepfake 'news presenters' appear in pro-China footage on social media, research group says

2023-02-08
Australian Broadcasting Corporation
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake videos of fictitious news anchors used for political disinformation, which is a clear violation of rights and harms communities by spreading false narratives. The AI system's use directly leads to the harm described. Although the videos have low engagement, the harm is realized and ongoing. This fits the definition of an AI Incident rather than a hazard or complementary information.

China Uses AI Deepfake avatars as 'news anchors' to spread disinformation

2023-02-08
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems to create deepfake avatars used in a disinformation campaign aligned with a political agenda. The harm caused is the spread of false information and manipulation of public discourse, which is a clear harm to communities. The AI system's use in generating realistic but fake news anchors directly contributed to this harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Deepfake 'news anchors' in pro-China footage: Report | Al Arabiya English

2023-02-08
Al Arabiya
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generation technology) to create and spread false political content, which directly harms communities by spreading misinformation and undermining trust in information sources. The involvement of AI in generating realistic fake news anchors is explicit, and the harm (disinformation and political manipulation) is occurring. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to communities through deceptive propaganda.

Deepfake 'news anchors' in pro-China footage: research

2023-02-08
France 24
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake video generation software) to create and disseminate false political content, which directly harms communities by spreading misinformation and deceptive narratives. The harm is realized, not just potential, as the videos are actively shared on social media. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through disinformation. The report also highlights the novelty of state-aligned use of AI-generated video for political deception, reinforcing the significance of the incident.

Deepfake 'news anchors' in pro-China footage: Research

2023-02-08
CNA
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate deepfake videos that are deployed in a coordinated disinformation campaign. This use of AI has directly led to harm by spreading misleading political content, which affects communities and violates rights related to truthful information and political integrity. Therefore, it qualifies as an AI Incident due to the realized harm caused by the AI-generated deepfakes in propaganda videos.

Fake goods, fake spirit: China is using Deepfake anchors for propaganda masquerading as news

2023-02-08
Firstpost
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Deepfake generation technology) used to create realistic fake video content. The AI-generated content was used maliciously to spread propaganda and disinformation, which harms communities by misleading the public and undermining trust in information sources. The harm is realized and ongoing, not merely potential. Hence, it meets the criteria for an AI Incident due to direct involvement of AI in causing harm through misinformation and political manipulation.

The people onscreen are fake. The disinformation is real. - The Boston Globe

2023-02-07
The Boston Globe
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create fake news anchors and distribute disinformation aligned with a state actor's interests. The disinformation campaign has already occurred and caused harm by misleading viewers and potentially influencing public opinion, which constitutes harm to communities. The AI system's use in generating and disseminating these videos is central to the incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

China's deepfake anchors spread disinformation on social media, Graphika says

2023-02-08
Radio Free Asia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake anchors created using Synthesia's technology to spread disinformation and propaganda aligned with Chinese state interests. The disinformation campaign harms communities by misleading social media users and undermining democratic processes and public trust. The AI system's use directly leads to this harm, fulfilling the criteria for an AI Incident under the OECD framework, specifically harm to communities through disinformation dissemination. The event is not merely a potential risk but an ongoing harm, so it is not an AI Hazard or Complementary Information.

Deepfake 'News Anchors' In Pro-China Footage: Research

2023-02-08
International Business Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos used in deceptive political content, which is a direct use of AI systems. The harm is realized as the videos are actively disseminated on social media, spreading misinformation and propaganda, which harms communities and potentially violates rights to accurate information. The involvement of AI in creating these deepfakes is central to the incident. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

China used AI-generated news anchors to propagandize political content on social media: Report | International

2023-02-09
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions AI-generated news anchors used to spread disinformation and propaganda, which is a direct use of AI systems to cause harm by misleading the public and interfering with political discourse. The harm is realized as the disinformation campaign is active and ongoing, affecting social media users and potentially influencing political views and social stability. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and political manipulation.

World News | China Used AI-generated News Anchors to Propagandize Political Content on Social Media: Report | LatestLY

2023-02-09
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated news anchors (deepfake videos) to disseminate political propaganda and disinformation. The AI system's outputs are used to influence public opinion and spread misleading content, which harms communities and violates rights related to truthful information and political expression. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident due to the direct role of AI in causing harm through disinformation campaigns.

China used AI-generated news anchors to propagandize political content on social media: Report

2023-02-09
Asian News International (ANI)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated news anchors used to disseminate political propaganda and disinformation, which is a direct use of AI systems to cause harm by misleading and manipulating social media users. The disinformation campaign harms communities by undermining truthful information and democratic processes. Therefore, this event meets the criteria for an AI Incident due to the realized harm caused by the AI system's use in spreading harmful political content.

Deepfake 'news anchors' in pro-China footage: research

2023-02-08
Digital Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake videos were used in deceptive political content by state-aligned actors, which directly leads to harm to communities through misinformation and propaganda. The AI system's involvement is clear (deepfake generation software), and the harm is realized (disinformation spreading). This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities.

News anchors in pro-China videos are AI-made - report

2023-02-08
The Manila Times
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake videos were used by Chinese state-aligned actors to produce deceptive political content, which is a form of misinformation harming communities and potentially violating rights. The AI system's use in creating these fake news anchors directly contributed to this harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's outputs in a political disinformation context.

Deepfake 'news anchors' in pro-China footage: research

2023-02-08
RTL Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake videos used in political disinformation, which is a direct cause of harm to communities by spreading false narratives and deceptive content. The involvement of AI in creating realistic fake news anchors is central to the incident. The harm is realized as the videos are actively disseminated on social media, influencing public opinion and potentially undermining trust in information sources. This fits the definition of an AI Incident due to the direct link between AI-generated content and harm to communities through misinformation.

China's deepfake anchors spread disinformation on social media, Graphika says

2023-02-09
GlobalSecurity.org
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake anchors used by the Spamouflage operation to spread disinformation and propaganda, which constitutes harm to communities through misinformation. The AI system's use directly leads to the dissemination of false narratives, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the disinformation is actively being spread on major social media platforms. Therefore, this event qualifies as an AI Incident due to the direct role of AI in causing harm through disinformation campaigns.

China Employs AI News Anchors To Spread Disinformation

2023-02-09
CTN News l Chiang Rai Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI-generated news anchors) used in the development and deployment of disinformation campaigns. The AI system's outputs have directly led to harm by spreading false political propaganda and disinformation, which harms communities and potentially violates rights. The use of AI to create realistic but fake news anchors and videos that mislead the public is a clear example of AI misuse causing harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Graphika report: China's deepfake anchors spread disinformation on social media

2023-02-08
BenarNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake video generation) used to create fictitious news anchors spreading disinformation. The disinformation campaign is ongoing and has caused harm by misleading social media users and influencing public opinion, which fits the definition of an AI Incident due to harm to communities and violation of rights. The AI system's use in this context is central to the harm caused, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Deepfake 'news anchors' in pro-China footage: research

2023-02-08
SpaceDaily
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake video generation software) to create and disseminate false political content, which directly harms communities by spreading disinformation. The harm is realized as the videos are already circulating on social media promoting misleading narratives aligned with a state actor's interests. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through deceptive political influence operations.

Deepfake 'news anchors' in pro-China footage: research

2023-02-08
SpaceWar
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos of fictitious news anchors, which were deployed in a coordinated disinformation campaign promoting political interests. This use of AI-generated content has directly caused harm by misleading the public and spreading false information, which harms communities and undermines social trust. Therefore, it meets the definition of an AI Incident due to realized harm caused by the AI system's outputs.

Deepfake 'news anchors' in pro-China footage: research - The Online Citizen Asia

2023-02-08
The Online Citizen
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generation technology) to create realistic but fake news anchors for propaganda purposes. The AI-generated content directly contributes to misinformation campaigns, which harm communities by spreading deceptive political narratives. The involvement of AI in producing and disseminating this content meets the criteria for an AI Incident, as the harm (disinformation and manipulation) is occurring and the AI system's role is pivotal. The report also highlights the novelty and significance of this use of AI in state-aligned influence operations, reinforcing the classification as an AI Incident.

Deepfake 'news anchors' in pro-China footage: research

2023-02-08
Iraqi News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate deepfake videos that spread pro-China propaganda, misleading viewers with fabricated content. This use of AI has directly led to harm by deceiving the public and manipulating political discourse, which harms communities and violates informational rights. The involvement of AI in producing these deepfakes and their deployment in disinformation campaigns meets the definition of an AI Incident due to realized harm from the AI system's use.

How Deepfake videos are used to spread misinformation - ExBulletin

2023-02-08
ExBulletin
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Synthesia's deepfake video generation software) used to create realistic fake videos deployed in disinformation campaigns. The article provides concrete examples of such misuse, indicating realized rather than merely potential harm: the spread of misinformation that harms communities and undermines trust in information. The event therefore qualifies as an AI Incident under the framework.

Deepfakes used in a Chinese propaganda video

2023-02-10
L'Éclaireur Fnac
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (deepfake technology and Synthesia's AI video generation) used to create realistic but fake news presenters. The videos are part of a propaganda campaign spreading misleading political messages, which harms communities by distorting information and undermining trust. The harm is realized and ongoing, not merely potential. Hence, it meets the criteria for an AI Incident due to direct involvement of AI in causing harm through misinformation and political manipulation.

For the first time, a "deepfake" has been used in a pro-Beijing propaganda video

2023-02-08
BFMTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating deepfake videos used in political propaganda, which is a direct use of AI leading to the spread of misinformation. This misinformation harms communities by undermining trust and potentially destabilizing social and political environments. The harm is realized, not just potential, as the videos are already circulating. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated disinformation.

"Deepfakes" feature pro-China TV presenters: the deepfake technique poses a "danger to national security..."

2023-02-08
La Libre.be
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos used to spread misleading political content, which is a direct use of AI systems causing harm by threatening national security and social stability. The harm is realized, not just potential, as the videos are already circulating and influencing public perception. The involvement of AI in creating these deepfakes is clear, and the harm falls under harm to communities and possibly violations of rights. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Artificial intelligence - "Deepfakes" feature pro-China TV presenters

2023-02-08
24heures
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake presenters used in propaganda videos, which have been disseminated on social media to promote political positions. This use of AI has directly caused harm by spreading false information and manipulating public discourse, which fits the definition of an AI Incident under harm to communities. The AI system's development and use have directly led to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

"Deepfakes" feature pro-China TV presenters

2023-02-08
RJB Radio Jura Bernois SA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos used to spread false political narratives, which is a direct use of AI systems causing harm to communities through misinformation. The harm is realized and ongoing, not merely potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

"Deepfakes" feature pro-China TV presenters

2023-02-09
Libération
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating deepfake videos used for political propaganda, which is a direct use of AI leading to harm through misinformation and manipulation of public discourse. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities by spreading false and misleading information. The harm is materialized, not just potential, and the AI system's role is pivotal in creating the deceptive content. Therefore, the event qualifies as an AI Incident.

Fake anchors reading fake news: Chinese online media use "deepfake technology" to smear the US

2023-02-13
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI deepfake systems to produce and spread false news content, which has already occurred and is causing harm by misleading the public and potentially influencing political outcomes. The AI system's use is central to the harm, as the realistic fake videos rely on AI-generated virtual anchors. The harm to communities through misinformation and potential election interference fits the definition of an AI Incident. The article describes realized harm, not just potential risk, so it is classified as an AI Incident rather than an AI Hazard or Complementary Information.

Pro-China group uses deepfake technology for external propaganda; its news anchors are all AI creations

2023-02-14
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI algorithms to create deepfake virtual news anchors and produce fake news content. The dissemination of this AI-generated misinformation is ongoing and directly harms communities by spreading false narratives and propaganda. The harm is realized, not just potential, as the fake news is actively broadcast and viewed, even if the audience is limited. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and propaganda.

The anchors are all fake! Pro-China group uses "deepfake technology" to produce fake news as external propaganda for the CCP

2023-02-14
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI deepfake systems to produce and spread false news, which directly harms communities by misleading the public and manipulating information. The presence of AI systems is explicit (deepfake technology), and the harm (disinformation and propaganda) is realized, not just potential. Therefore, this meets the criteria for an AI Incident rather than a hazard or complementary information.

The anchors are all fake! Pro-China group uses "deepfake technology" to produce fake news as external propaganda for the CCP

2023-02-14
HiNet
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI deepfake technology to create fake news anchors and videos that spread false information and propaganda. The use of the AI system to produce and disseminate these videos directly harms communities by misleading the public and manipulating information, so the event qualifies as an AI Incident.

Passing the fake off as real: the CCP's AI virtual anchors push external propaganda

2023-02-12
大紀元時報 - 台灣(The Epoch Times - Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI deepfake technology was used to create virtual news anchors spreading false political propaganda, a direct cause of harm to communities through misinformation. The AI system's role is pivotal and the harm has already been realized, not merely potential, so the event qualifies as an AI Incident rather than a hazard or complementary information.