Google Autocomplete AI Mislabels Conspiracy Theorists, Amplifying Extremist Views

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Simon Fraser University study found that Google's AI-driven autocomplete and subtitle algorithms consistently assign neutral or positive labels to known conspiracy theorists and extremists, such as calling Proud Boys founder Gavin McInnes a 'writer.' This misrepresentation misleads users, normalizes harmful ideologies, and indirectly amplifies extremist views.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Google's autocomplete algorithm) whose outputs indirectly contribute to harm by legitimizing conspiracy theorists and extremists through misleading or neutral framing. This can be seen as harm to communities by amplifying extremist views and misinformation. The harm is indirect, stemming from the AI system's use and its influence on public perception. Therefore, this qualifies as an AI Incident under the definition of harm to communities caused directly or indirectly by an AI system's outputs.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest, Reputational

Severity
AI incident

Business function
Other

AI system task
Content generation, Organisation/recommenders, Reasoning with knowledge structures/planning


Articles about this incident or hazard

Google autocomplete legitimizes conspiracy theorists and extremism: study | Venture

2022-04-01
dailyhive.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's autocomplete algorithm) whose outputs indirectly contribute to harm by legitimizing conspiracy theorists and extremists through misleading or neutral framing. This can be seen as harm to communities by amplifying extremist views and misinformation. The harm is indirect, stemming from the AI system's use and its influence on public perception. Therefore, this qualifies as an AI Incident under the definition of harm to communities caused directly or indirectly by an AI system's outputs.

2022-03-31
Simon Fraser University
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Google's complex algorithms generating subtitles) whose outputs misrepresent harmful individuals by omitting negative or accurate descriptors. This mislabeling indirectly leads to harm by normalizing extremist and conspiratorial figures, thus harming communities and public trust. The harm is realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident due to indirect harm caused by the AI system's outputs.

SFU researchers find Google algorithms place innocuous job titles on prominent conspiracy theorists

2022-03-31
The Vancouver Sun
Why's our monitor labelling this an incident or hazard?
Google's automatic subtitle generation involves AI systems that analyze and label individuals. The study shows these AI-generated labels are misleading and inconsistent with the subjects' known harmful activities, such as leading terrorist or hate groups. While no direct physical harm is reported, the mislabeling can indirectly harm communities by obscuring the true nature of these figures, potentially enabling misinformation or minimizing perceived risks. Therefore, this event involves an AI system's use leading indirectly to harm to communities through misinformation, qualifying it as an AI Incident.

Google autocomplete helps legitimize conspiracy theorists, study says

2022-04-01
Study Finds
Why's our monitor labelling this an incident or hazard?
Google's autocomplete feature uses AI algorithms to generate subtitles for search results. The study shows that these AI-generated subtitles consistently fail to accurately represent individuals known for harmful conspiracy theories or extremist actions, instead providing neutral or positive labels. These inaccurate labels can mislead users, normalize harmful ideologies, and contribute to social harm, fulfilling the criteria for an AI Incident due to indirect harm to communities. The AI system's outputs directly influence public perception, leading to potential societal harm.

Google Autocomplete Helps Mislead Public, Legitimize Conspiracy Theorists: SFU Study

2022-03-31
Scienmag: Latest Science and Health News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's autocomplete algorithm) whose outputs indirectly lead to harm by misleading users and normalizing conspiracy theorists and terrorists, which can sow distrust and harm vulnerable groups. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to communities through misinformation and legitimization of harmful actors. The study's findings demonstrate realized harm rather than just potential harm, and the AI system's role is pivotal in this harm.

Researchers at SFU have discovered that Google's algorithms place innocuous job titles on prominent conspiracy theorists

2022-04-02
ExBulletin
Why's our monitor labelling this an incident or hazard?
Google's algorithms, which are AI systems generating automatic subtitles based on web-wide sources, are directly involved in producing misleading information about individuals known for harmful conspiracies and terrorism. This misrepresentation has already led to harm by misleading the public and potentially normalizing extremist views, which constitutes harm to communities and a violation of rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and mislabeling.