AI-Generated Fake Article Exposes Predatory Scientific Journal


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Pascual D. Diago, a professor at the University of Valencia, used ChatGPT to generate a fake, nonsensical scientific article, which was accepted and published by the Clinical Journal of Obstetrics and Gynecology. This incident highlights how AI-generated content can facilitate academic fraud and undermine scientific integrity.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system (ChatGPT) was explicitly used to generate a fake scientific article that was published in a predatory journal. The misuse of AI in this context directly led to the dissemination of fraudulent scientific content, which harms the scientific community's integrity and trust. This constitutes a violation of ethical and intellectual property standards, fitting the definition of harm to communities and breach of obligations under applicable law. The harm is realized, not just potential, as the article remains published and contributes to the problem of predatory journals. Hence, this event is classified as an AI Incident.[AI generated]
AI principles
Accountability; Transparency & explainability

Industries
Media, social platforms, and marketing; Education and training

Affected stakeholders
Business; General public

Harm types
Reputational; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


The scientist Me-Lo I Nvent O and the prime numbers of pregnant women: a mathematician exposes the predatory journal scam

2026-02-05
EL PAÍS
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly used to generate a fake scientific article that was published in a predatory journal, leading directly to the dissemination of fraudulent scientific content and harming the scientific community's integrity and trust. Because the article remains published, the harm is realized rather than potential, so this event is classified as an AI Incident.

The Spaniard who slipped a fake study written with ChatGPT past a journal: "It took me two hours"

2026-02-05
El Confidencial
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) to generate false academic content that was published as genuine research, causing harm in the form of academic fraud and potential damage to scientific integrity and trust. This fits the definition of an AI Incident because the AI system's use directly led to a violation of intellectual property and academic rights and harmed the community by enabling fraudulent publications. The harm is realized, not just potential: the article remains published and has prompted conference invitations, demonstrating actual impact.

A UV professor publishes a "deliberately absurd" article on mathematics and pregnancy

2026-02-06
Las Provincias
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used to generate a deliberately absurd paper as part of an experiment to expose predatory publishing practices. The AI's involvement is in the development and use of the paper, but no harm resulted from this use. The event does not describe any injury, rights violation, or other harms caused by the AI system. The professor's action is a critique and demonstration rather than a harmful incident. The article's main focus is on revealing the predatory journal's practices and the AI-generated paper's acceptance, which is informative and contextual rather than reporting an AI Incident or Hazard. Therefore, the event fits the definition of Complementary Information.

A UV professor unmasks a fake scientific journal with an article generated by artificial intelligence

2026-02-05
Levante
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate the fake scientific article, confirming AI system involvement. However, the AI-generated content was used intentionally by the professor to expose a predatory journal rather than causing harm. There is no indication that the AI system's use led to injury, rights violations, or other harms. The event documents a societal and academic response to unethical publishing practices using AI, which fits the definition of Complementary Information. It does not describe an AI Incident or AI Hazard because no harm occurred or is plausibly expected from the AI system's use in this context.

The "ChatGPT trickster" publishes a fake gynecology article

2026-02-05
Redacción Médica
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was explicitly used to generate a false scientific article that was accepted and published by a journal, demonstrating a failure of quality control and enabling misinformation. This constitutes a violation of intellectual property and scientific integrity, harming the academic community and public trust. The harm is realized rather than merely potential because the article was published and disseminated. The event therefore meets the criteria for an AI Incident: the AI's use directly harmed communities (academic and public) through misinformation and the undermining of scientific standards.

He created a delirious text about anxiety in pregnant women and exposed the fake-journal scam

2026-02-06
Diario EL PAIS Uruguay
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) to produce a fabricated scientific article, which was then published by a predatory journal. The AI system was involved in both the development and use stages that led to harm: undermining scientific integrity and enabling fraudulent publication practices. The harm is indirect but significant, affecting the scientific community and trust in research publications. The article explicitly states the AI-generated nature of the text and the resulting exposure of predatory journals, confirming the AI system's pivotal role. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Me-Lo I Nvent O

2026-02-07
Granada Hoy
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used to generate a fake scientific article, which was then published by a predatory journal. While this misuse of AI-generated content reveals risks to scientific credibility and potential societal harm through misinformation, the article does not report actual harm caused by the AI system's malfunction or use, nor does it describe a direct causal link to injury, rights violations, or other harms defined as AI Incidents. It also does not describe a specific plausible future harm scenario caused by AI alone. Instead, it highlights a broader issue and calls for improved governance and controls, fitting the definition of Complementary Information.