Google Manager Alleges Search Algorithm Manipulation to Influence US Elections

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Google Cloud program manager, Ritesh Lakhkar, was recorded by Project Veritas admitting that Google's search algorithms are intentionally skewed to favor Democratic candidates and harm Donald Trump. The manipulation of AI-driven search results is alleged to distort political information, potentially interfering with elections and violating rights to unbiased information.[AI generated]

Why's our monitor labelling this an incident or hazard?

The search algorithm is an AI system that generates outputs (search results) influencing virtual environments (information access). The manager's admission confirms the algorithm is intentionally or unintentionally biased, skewing results politically. This bias can harm communities by limiting fair access to information and violating rights to free speech and information. The harm is realized as the biased search results are actively produced and affect users. Hence, this qualifies as an AI Incident due to indirect harm caused by the AI system's use.[AI generated]
AI principles
Accountability, Fairness, Respect of human rights, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public, Other

Harm types
Public interest, Human or fundamental rights, Reputational

Severity
AI incident

AI system task
Organisation/recommenders


Articles about this incident or hazard

Watch undercover video: See Google manager confirm company's anti-Trump bias

2020-10-20
WND
Watch: Google employee claims company manipulating search results to benefit Biden, hurt Trump

2020-10-20
Law Enforcement Today
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's search algorithm) allegedly manipulated to bias search results, which could plausibly lead to harm such as misinformation, censorship, and election interference. The article does not provide verified evidence that harm has actually occurred, but it raises credible concerns that the AI system's use could lead to significant societal harm. The involvement concerns the use of the AI system, and the potential harm aligns with violations of rights and harm to communities. Since the harm is plausible but not confirmed or demonstrated, the classification as an AI Hazard is appropriate.
BREAKING: O'Keefe gets Google program manager to admit Google skews their searches toward Joe Biden [VIDEO]

2020-10-20
The Right Scoop
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's search ranking algorithm) whose use is admitted to be biased in a way that skews search results toward a political candidate. This manipulation of information can be considered a violation of rights and harm to communities by influencing public opinion and access to balanced information. Since the harm is occurring through the use of the AI system, this qualifies as an AI Incident under the definitions provided.
Breaking Video: Google Manager Admits to Election Interference to Support Biden and Harm Trump..."When Trump won the first time, people were crying in the corridors of Google"

2020-10-20
100 Percent Fed Up
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's search algorithm) whose outputs are intentionally manipulated to influence election outcomes, constituting election interference. This manipulation harms communities by distorting public information and violates rights related to freedom of speech and fair democratic processes. The harm is realized and directly linked to the AI system's use, meeting the criteria for an AI Incident.
'Playing God': New video exposes Google as DOJ files suit after months of apparent election interference

2020-10-20
World Tribune: Window on the Real World
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's search algorithm) whose use is alleged to have directly caused harm by manipulating election-related information, thereby interfering with democratic processes and potentially violating rights to fair information access. The harm is realized and ongoing, as the manipulation affects public knowledge and election fairness. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's use, harm to communities, and rights violations. The DOJ lawsuit provides complementary context but does not change the classification.
"We Play God" Google Manager Exposes Search Engine "Pow

2020-10-20
USSA News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's search algorithms) whose use is described as intentionally biased and suppressive, leading to harm in the form of rights violations (freedom of speech, access to information) and harm to communities (manipulation of political information). The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm as defined in the framework.
O'Keefe Strikes Again! Google Program Manager Confirms Election I

2020-10-20
USSA News
Why's our monitor labelling this an incident or hazard?
Google's search algorithm is an AI system that generates search results influencing users' perceptions. The program manager's admission that the algorithm is intentionally skewed constitutes use of the AI system leading to harm by interfering with election fairness and freedom of speech. This meets the criteria for an AI Incident because the AI system's use has directly led to harm to communities and violations of rights. The event is not merely potential harm or complementary information, but a direct admission of harm caused by the AI system's outputs.
Project Veritas Does It Again...Google Exposed | USSA News | The Tea Par

2020-10-19
USSA News
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly, as Google's search and YouTube's content moderation rely on AI algorithms for ranking and filtering content. The claims describe potential misuse of these AI systems to manipulate information and censor content, which could lead to harm such as rights violations or harm to communities. However, the article does not provide evidence that these harms have actually occurred or that a specific incident has taken place. Instead, it consists of whistleblower testimony and reporting that highlights concerns about AI system use and corporate culture. This fits the definition of Complementary Information, as it enhances understanding of AI's societal impact and governance issues without documenting a concrete AI Incident or AI Hazard.
Video: Google Whistleblower Tells Veritas Search Engine Is 'Skewing Results' To Benefit Democrats

2020-10-20
SGT Report
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's search engine algorithms) whose use is alleged to have directly caused harm by skewing political information, which constitutes harm to communities and a violation of the right to unbiased information. The manipulation is described as intentional and ongoing, indicating realized harm rather than a potential risk. Therefore, this fits the definition of an AI Incident rather than an AI Hazard or Complementary Information.
'Playing selective god': Google 'whistleblower' tells Project Veritas that search engine 'skews' results in Democrats' favor

2020-10-20
SGT Report
Why's our monitor labelling this an incident or hazard?
The search engine uses AI algorithms to generate search results, which fits the definition of an AI system. The whistleblower alleges that the algorithm is intentionally skewed to favor a political party, a misuse of the AI system's outputs that leads to harm in the form of biased information dissemination and potential violation of the right to free and fair access to information. This harm is direct and ongoing, as the skewed results influence public perception and political discourse. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's biased outputs.