U.S. Lawmakers Warn of Pro-Beijing Bias in Google’s Gemini AI


The information displayed in the AIM (the OECD AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

Google's generative AI model Gemini, when tested in Simplified Chinese, praised Xi Jinping as an "outstanding leader," echoed Chinese Communist Party propaganda on Taiwan, and refused to address human rights issues in Xinjiang. U.S. legislators warn that its pro-Beijing bias could spread misinformation and urge Google to filter its training data more robustly.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (Google's Gemini) is explicitly mentioned and is shown to produce outputs that align with a foreign government's propaganda, effectively spreading biased and censored information. This dissemination of misleading information can harm communities by influencing public opinion and violating rights to accurate information. The involvement of the AI system in producing these outputs is direct, and the harm (misinformation and influence) is occurring, as evidenced by U.S. lawmakers' concerns and calls for action. Hence, this qualifies as an AI Incident under the framework.[AI generated]
AI principles
Fairness; Transparency & explainability; Accountability; Respect of human rights; Democracy & human autonomy; Privacy & data governance; Safety; Robustness & digital security

Industries
Media, social platforms, and marketing; Government, security, and defence; Consumer services; IT infrastructure and hosting

Affected stakeholders
General public

Harm types
Public interest; Human or fundamental rights; Reputational

Severity
AI incident

Business function:
Citizen/customer service

AI system task:
Content generation; Interaction support/chatbots


Articles about this incident or hazard


Is Google's Chatbot Becoming a Mouthpiece for the Chinese Government? U.S. Lawmakers Express Concern

2024-06-12
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Gemini) is explicitly mentioned and is shown to produce outputs that align with a foreign government's propaganda, effectively spreading biased and censored information. This dissemination of misleading information can harm communities by influencing public opinion and violating rights to accurate information. The involvement of the AI system in producing these outputs is direct, and the harm (misinformation and influence) is occurring, as evidenced by U.S. lawmakers' concerns and calls for action. Hence, this qualifies as an AI Incident under the framework.

Google's "Pro-China" Generative AI Risks Abetting Wrongdoing

2024-06-12
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini language model) whose use and training have resulted in the dissemination of biased, propaganda-aligned content on sensitive political topics. This constitutes a violation of rights and harm to communities through misinformation and censorship. The harm is realized and ongoing, as the AI outputs are actively influencing users. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use has directly led to harm in the form of misinformation and suppression of human rights discourse.

Praising Xi Jinping as an "Outstanding Leader": Google's AI Chatbot Is Very Politically Correct

2024-06-12
UDN
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Gemini) is explicitly involved as it generates politically biased content due to its training data. The use of the AI system has indirectly led to harm by disseminating misleading political information aligned with a particular government's propaganda, which can harm communities and violate rights to accurate information. This fits the definition of an AI Incident because the AI's outputs have directly contributed to a form of harm (misinformation and political bias). The article does not merely warn of potential harm but documents actual biased outputs and their societal implications. Therefore, the event qualifies as an AI Incident rather than a hazard or complementary information.

Google AI Calls Xi Jinping an "Outstanding Leader," Drawing U.S. Congressional Attention

2024-06-13
UDN
Why's our monitor labelling this an incident or hazard?
The AI system Gemini is explicitly involved, as it generates politically biased responses based on its training data. The event stems from the AI system's use and the nature of its training data. While the biased outputs raise concerns about misinformation and potential influence on public opinion, the article does not report any realized harm such as injury, rights violations, or disruption. The main focus is on the political and governance response to the AI's behavior, which fits the definition of Complementary Information rather than an Incident or Hazard.

Praising Xi Jinping as an "Outstanding Leader": Is Google AI Politically Correct? U.S. Lawmakers Voice Concern

2024-06-13
UDN
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Gemini) is explicitly involved as it generates politically biased content that aligns with Chinese government propaganda. This use of AI has directly led to harm in the form of misinformation and potential influence operations, which affect communities and violate rights to accurate information. The article describes actual outputs from the AI causing concern and harm, not just potential risks. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Has Google's AI Chatbot Been Trained into a CCP Mouthpiece? U.S. Lawmakers Worried

2024-06-13
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The AI system Gemini is explicitly involved, as it is an AI language model producing outputs aligned with Chinese Communist Party propaganda. The event stems from the AI's use and the nature of its training data, which leads to biased outputs. While no direct harm or incident is reported, the biased AI responses could plausibly lead to harm such as misinformation, manipulation of public opinion, or violation of rights if deployed widely without mitigation. The concerns raised by U.S. lawmakers underscore the potential future risk. Since no actual harm has yet occurred, and the article focuses on the potential for harm and calls for better data filtering, the event fits the definition of an AI Hazard.

Has Google's AI Chatbot Been Trained into a CCP Mouthpiece? U.S. Lawmakers Worried

2024-06-13
The Epoch Times
Why's our monitor labelling this an incident or hazard?
Gemini is an AI system (a large language model) whose use in answering political questions has resulted in biased outputs that align with CCP propaganda, effectively spreading misinformation and suppressing critical information on sensitive topics. This constitutes a violation of rights (freedom of information, right to truthful information) and harm to communities by influencing public opinion with biased content. The AI system's development and use have directly led to these harms, as bias in the training data causes the AI to produce misleading answers. This event therefore qualifies as an AI Incident under the framework.

Yuan Bin: Google's Gemini Parroting CCP Talking Points Is Worrying

2024-06-14
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Gemini) is explicitly involved as it generates text responses. Its use has led to the dissemination of politically biased and propagandistic content aligned with CCP narratives, which can be considered harm to communities through misinformation and manipulation of public discourse. The article documents realized harm (not just potential), including concerns from U.S. lawmakers about the AI's role in spreading propaganda. This fits the definition of an AI Incident because the AI system's outputs have directly or indirectly led to harm (misinformation and influence on public opinion).

Yuan Bin: Google's Gemini Parroting CCP Talking Points Is Worrying

2024-06-14
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Gemini) is explicitly involved as it generates text responses. The harm arises from the AI's outputs reflecting and amplifying CCP propaganda, which can misinform users and influence public opinion, constituting harm to communities and potentially violating rights to truthful information. The article documents that this is occurring, not just a potential risk, and includes political and societal concern, including from US lawmakers. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Praising the CCP, Criticizing the U.S.: Google AI's Pro-CCP Stance Worries U.S. Lawmakers

2024-06-13
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Gemini) is explicitly involved, as it generates outputs reflecting a pro-CCP stance and misinformation on sensitive geopolitical issues. The event stems from the AI system's use and its biased outputs. While the article details the biased responses and the concerns raised by U.S. lawmakers, it does not report any realized harm, such as injury, disruption, or rights violations, that has already occurred due to these outputs; it focuses instead on the potential for harm through misinformation and influence operations. This situation therefore represents a plausible risk of harm (informational and rights-related) that could become an AI Incident if realized, and it is best classified as an AI Hazard rather than an AI Incident or Complementary Information.

[NTD News Flash] Is Google's AI Chatbot a CCP Mouthpiece? U.S. Lawmakers Worried

2024-06-13
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's Gemini language model) whose use has led to biased and politically skewed outputs that align with CCP propaganda. This bias can be seen as a violation of rights, particularly informational rights and potentially human rights, by promoting disinformation and suppressing critical perspectives on sensitive issues. The AI's outputs have already manifested these harms by spreading misleading political narratives, which has drawn concern from US legislators. Therefore, this constitutes an AI Incident due to the realized harm caused by the AI system's biased outputs influencing public discourse and potentially undermining democratic processes.

June 13 International Focus: Li Qiang Visits New Zealand as Falun Gong Practitioners Protest the CCP On-Site

2024-06-13
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The article describes Google's AI system Gemini providing politically biased answers that align with CCP propaganda and avoiding sensitive questions, raising concerns among US lawmakers. This suggests a plausible risk of AI-driven misinformation or propaganda influencing public opinion, which could harm communities or violate rights. Since no actual harm is reported yet, but the risk is credible and recognized by policymakers, this qualifies as an AI Hazard rather than an Incident. The other parts of the article (political visit, protests, weather events) do not involve AI systems or AI-related harm.

[News Flash, Full Edition] Myanmar Civil War: Both Sides Using Chinese-Made Drones

2024-06-13
www.ntdtv.com
Why's our monitor labelling this an incident or hazard?
The drones described are AI systems as they perform autonomous or semi-autonomous tasks such as navigation and targeting. Their use in armed conflict has directly led to injuries among combatants, fulfilling the criteria for harm to persons. The article explicitly states that these drones have been used to attack and injure resistance fighters, indicating realized harm caused by AI system use. Hence, this is an AI Incident rather than a hazard or complementary information.

Google AI Calls Xi Jinping an "Outstanding Leader," Drawing U.S. Congressional Attention

2024-06-13
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Google's Gemini) and its use in generating politically biased responses. While this raises concerns about misinformation and bias, the article does not document any realized harm such as injury, rights violations, or disruption caused by the AI outputs. Instead, it highlights the reaction of U.S. lawmakers urging better AI data filtering and testing. This fits the definition of Complementary Information, as it provides context and governance response to an AI-related issue without reporting a specific AI Incident or AI Hazard.

This Is How Google's Generative AI Describes Xi Jinping, Shaped by Beijing's Strict Censorship (Photo)

2024-06-14
Secret China (看中国)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Google's Gemini language model and AI-generated video content) whose outputs are aligned with a specific political agenda, spreading misinformation and propaganda. This use of AI has directly led to harm to communities by influencing public opinion with biased or false information, fulfilling the criteria for an AI Incident under harm to communities. The article also highlights concerns from US lawmakers about the impact on US foreign policy and calls for better AI training and filtering, reinforcing the recognition of harm. Therefore, this is classified as an AI Incident.

This Is How Google's Generative AI Describes Xi Jinping, Shaped by Beijing's Strict Censorship (Photo)

2024-06-14
Secret China (看中国)
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Gemini) is explicitly mentioned and is shown to produce outputs that align with a particular political agenda, effectively spreading misinformation and biased narratives. This use of AI has directly led to harm to communities by influencing public opinion with misleading information, which is a recognized form of harm under the framework. The article also references AI-generated videos used for propaganda, further supporting the presence of AI systems causing harm. The involvement is through the use of the AI system's outputs, which have real-world impacts on information integrity and political discourse. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Praising Xi Jinping as an "Outstanding Leader": Google AI Chatbot's Pro-Beijing "Stance" Worries U.S. Lawmakers

2024-06-12
Voice of America
Why's our monitor labelling this an incident or hazard?
The AI system (Google's Gemini) is explicitly involved as it generates text responses that propagate biased and misleading information aligned with the Chinese government's propaganda. The harm is realized as the AI's outputs misinform users on sensitive political and human rights issues, which can harm communities by spreading disinformation and violating rights to accurate information. The involvement is through the AI's use and its training data, which is influenced by censored and biased sources. The article also notes concerns from U.S. lawmakers about the AI's role in promoting Beijing's narratives, confirming the significance of the harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm in the form of misinformation and potential influence operations.