AI-Generated Fake Content Misleads Baseball Fans During Playoffs


A clickbait network in Southeast Asia used AI to generate phony articles and fan pages on Facebook, deceiving baseball fans during the MLB playoffs. These AI-driven pages mimic genuine fan communities, spread misinformation, and drive ad revenue, causing harm through deception and potential disinformation campaigns.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system is explicitly involved in generating fake content. The use of AI-generated phony articles to mislead fans and drive clicks constitutes harm to communities through misinformation and financial exploitation. Since the harm is occurring (fans are being deceived and scammed), this qualifies as an AI Incident under the harm to communities category.[AI generated]
AI principles
Accountability; Transparency & explainability; Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Reputational; Public interest

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard


Scammers drawing in fans with fake AI content of MLB stars during playoffs

2025-10-20
The Japan Times

Phony AI content stealing fan attention during MLB playoffs

2025-10-20
Toronto Sun
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deceptive content that is actively used to mislead and manipulate social media users, which constitutes harm to communities through misinformation and disinformation. The article indicates that such content is being used to attract engagement and may be rented or sold for nefarious purposes, implying ongoing harm. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI-generated misinformation affecting public discourse and social trust.

Phony AI content stealing fan attention during baseball playoffs

2025-10-20
The Manila Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content being used to produce false and misleading posts that attract large audiences and generate revenue. This disinformation campaign is causing harm to communities by spreading false narratives and potentially enabling more nefarious disinformation efforts. The AI system's role in generating and scaling this content is pivotal to the harm occurring, meeting the criteria for an AI Incident involving harm to communities through misinformation.

Phony AI content stealing fan attention during baseball playoffs

2025-10-20
RTL Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content being used to create false narratives and phony fan pages that mislead and deceive baseball fans, causing harm to communities by spreading misinformation and manipulating public opinion. Although the investigation and platform response are ongoing, the harm is occurring now rather than remaining merely potential, so the event is classified as an AI Incident.

Phony AI content stealing fan attention during baseball playoffs

2025-10-20
The Anniston Star
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated articles used to produce phony content that mimics genuine fan accounts, misleading users and causing harm to the community by spreading false information and manipulating user engagement. This constitutes harm to communities, fulfilling the criteria for an AI Incident due to the direct role of AI in generating misleading content that causes harm.