AI-Generated Fake Reviews Deceive Consumers and Undermine Online Trust


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Generative AI systems such as ChatGPT are increasingly being used to create fake online reviews and profiles that are nearly indistinguishable from genuine ones. This misuse has already resulted in consumer deception and unfair competition, and platforms such as Amazon have taken legal action against providers of AI-generated fake reviews.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions generative AI being used to create fake profiles and fake customer reviews, which constitutes the use of an AI system. Although no actual harm is reported yet, the concern is that such AI-generated fake reviews could mislead consumers and cause harm. This fits the definition of an AI Hazard, as the development and use of AI systems could plausibly lead to harm through deception and manipulation in online reviews. There is no indication that an incident has already occurred, so it is not an AI Incident. The article is not merely complementary information, since it focuses on the potential risk rather than on updates or responses to past events.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Safety, Transparency & explainability

Industries
Media, social platforms, and marketing; Logistics, wholesale, and retail; Consumer products

Affected stakeholders
Consumers, Business

Harm types
Economic/Property, Reputational, Public interest

Severity
AI hazard

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard


Verbraucherzentrale fears AI-generated fake reviews

2023-06-14
wallstreet:online

Masses of AI-faked customer reviews online: what does this mean for consumers?

2023-06-16
T-online.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to produce fake customer reviews, which directly harms consumers by misleading them about product quality. This is a violation of consumer rights and causes harm to communities by spreading misinformation. The article explicitly states that these AI-generated fake reviews are widespread and difficult to detect, indicating realized harm. Hence, it meets the criteria for an AI Incident as the AI system's use has directly led to harm.

Der Tag: AI could fake online reviews on a massive scale

2023-06-15
N-tv
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create fake online reviews, which directly harms consumers by misleading them and undermining trust in online information, a form of harm to communities. The involvement of AI in producing these fake reviews is explicit, and the harm is realized as the fake reviews are already widespread. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.

"We should take this seriously": AI could fake online reviews on a massive scale

2023-06-15
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI like ChatGPT) to create fake online reviews, which is a misuse of AI technology. While the article does not report actual incidents of harm, it clearly states a credible risk that such AI-generated fake reviews could lead to consumer deception and harm to communities. This fits the definition of an AI Hazard, as the development and use of AI systems could plausibly lead to an AI Incident involving harm to communities and consumers. The article also mentions regulatory responses but focuses mainly on the warning about potential harm rather than on actual incidents or responses, so it is not Complementary Information or an Incident.

Consumer protection - Warning about fake reviews on the internet

2023-06-15
Deutschlandfunk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI) used to create fake reviews and profiles. Although the article does not describe a realized harm incident, it highlights a credible risk that such AI-generated fake content could mislead consumers, constituting harm to communities and consumers. Therefore, this situation represents a plausible future harm scenario, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Better fraud with AI

2023-06-15
Frankfurter Rundschau
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to produce fake product reviews, which directly harms consumers by misleading them and violates fair market competition. The article states that this misuse of AI is already occurring and causing harm, fulfilling the criteria for an AI Incident. The harm includes consumer deception and distortion of competition, which fall under violations of rights and harm to communities. The AI system's use in generating realistic fake reviews is central to the harm described, not merely a potential future risk or a general discussion, thus it is not a hazard or complementary information.

Consumer advocates warn of masses of AI-faked online reviews

2023-06-15
TAH - Täglicher Anzeiger Holzminden
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI) to create fake online reviews, which could plausibly lead to harm to consumers through deception and misinformation, a form of harm to communities and individuals. However, the article focuses on warnings and potential risks rather than describing realized harm or a specific incident. Therefore, this qualifies as an AI Hazard rather than an AI Incident. The mention of regulatory measures and enforcement challenges supports the assessment of a plausible future risk rather than a current incident.