AI-Generated References Lead to Academic Scandal at University of Hong Kong


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A doctoral thesis at the University of Hong Kong included numerous fabricated references generated by AI, which were not properly checked before publication. The incident, involving student Yiming Bai and faculty including Paul Yip, raised concerns about academic integrity and the misuse of AI in scholarly work. Authors have apologized and corrective actions are underway.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI by a doctoral student to assist in organizing citation data, which was not properly checked, resulting in fabricated references. This misuse of AI has directly led to harm in the form of academic dishonesty and potential violation of intellectual property rights. The involvement of AI in producing false citations that were published in a reputable journal meets the criteria for an AI Incident, as it has caused realized harm to the academic community and undermines trust. The university and publisher's responses confirm the seriousness of the issue. Therefore, this is not merely a hazard or complementary information but an AI Incident.[AI generated]
AI principles
Accountability, Transparency & explainability, Robustness & digital security

Industries
Education and training

Affected stakeholders
Consumers, Business

Harm types
Reputational

Severity
AI incident

Business function
Research and development

AI system task
Content generation


Articles about this incident or hazard


HKU thesis embroiled in "AI hallucination" controversy over citations of fabricated literature; Paul Yip says doctoral student failed to check; HKU launches review | Yahoo

2025-11-10
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI by a doctoral student to assist in organizing citation data, which was not properly checked, resulting in fabricated references. This misuse of AI has directly led to harm in the form of academic dishonesty and potential violation of intellectual property rights. The involvement of AI in producing false citations that were published in a reputable journal meets the criteria for an AI Incident, as it has caused realized harm to the academic community and undermines trust. The university and publisher's responses confirm the seriousness of the issue. Therefore, this is not merely a hazard or complementary information but an AI Incident.

HKU thesis cited fabricated references; Paul Yip apologizes: doctoral student used AI without checking, not an integrity issue

2025-11-09
HK01
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to generate references, which produced fabricated citations (AI hallucinations). The AI's malfunction (hallucination) directly led to the inclusion of false information in a published academic paper, which is a violation of intellectual property rights and harms the academic community's trust. The harm has already occurred as the paper was published with these false citations. Although the authors argue it is not an academic integrity issue, the AI's role in producing false references and the failure to check them caused a clear harm. Hence, this is an AI Incident rather than a hazard or complementary information.

HKU doctoral thesis cited AI-fabricated literature; co-author Paul Yip: student failed to check, research content not fabricated - 20251110 - Hong Kong News

2025-11-09
Ming Pao
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate or organize references, which were not properly checked, resulting in fabricated citations in a published academic paper. This misuse of AI has directly led to a violation of intellectual property rights and academic integrity, which are recognized harms under the AI Incident definition. The harm is realized, not merely potential, as the fabricated references have been published and publicly questioned. The institutional investigation and public acknowledgment further confirm the seriousness of the incident. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

HKU doctoral student's thesis suspected of citing AI-fabricated literature; Associate Dean of Social Sciences admits and apologizes

2025-11-09
on.cc
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate fabricated citations in a published academic paper, which has caused harm to the academic community's trust and the reputations of the involved institutions. The AI system's misuse in generating false references directly led to misinformation and potential violation of intellectual property rights, fitting the definition of an AI Incident. The harm is realized, not just potential, as the paper was published and is accessible with false citations. The involvement of AI in the development and use of the paper's references is clear, and the incident has prompted remedial actions such as retraction and correction, further confirming the incident's nature.

HKU thesis involved AI-generated fabricated literature; Paul Yip apologizes: not an integrity issue, stresses content was not falsified - Wen Wei Po

2025-11-09
Wen Wei Po
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate references, which led to the inclusion of fabricated citations in a published academic paper. This constitutes a violation of intellectual property rights and academic integrity, which are recognized harms under the AI Incident definition (violations of intellectual property rights and breach of obligations intended to protect fundamental rights). The harm has materialized as the paper was published with false references, causing reputational damage and raising concerns about scholarly trustworthiness. The event is not merely a potential risk but an actual incident involving AI misuse. The corrective measures and apologies are responses to this incident, making them complementary information but not changing the classification of the primary event as an AI Incident.

Academic controversy | HKU doctoral candidate's thesis lists fictitious references; scholars say it casts doubt on the credibility of the entire study

2025-11-09
HKET (Hong Kong Economic Times)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI hallucination as the cause of fabricated references in a doctoral thesis, indicating AI system involvement in generating false information. The harm is realized in the form of academic dishonesty and damage to the credibility of the research and institution, which falls under violations of intellectual property and academic rights. Since the AI system's malfunction directly contributed to this harm, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

AI literature | HKU doctoral candidate's thesis lists fictitious references; scholars say it casts doubt on the credibility of the entire study

2025-11-09
HKET (Hong Kong Economic Times)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI hallucination as the cause of fabricated references in the doctoral thesis, indicating AI system involvement in the development or use of the research content. The harm is realized in the form of damage to the credibility and trustworthiness of academic research, which falls under violations of intellectual property rights and academic integrity obligations. Although no physical harm occurred, the reputational and ethical harm to the academic community is significant and directly linked to the AI system's malfunction or misuse. Therefore, this qualifies as an AI Incident.

AI hallucinates, the professor doesn't notice | am730

2025-11-09
am730
Why's our monitor labelling this an incident or hazard?
The article centers on the risks and characteristics of AI hallucinations and their potential to spread false information, which is a recognized AI-related risk. However, it does not document a concrete AI Incident where harm has occurred, nor does it describe a specific AI Hazard event with imminent plausible harm. Instead, it offers explanatory and advisory content about AI hallucinations and their implications, which fits the definition of Complementary Information as it enhances understanding and awareness of AI-related risks without reporting a new harm or imminent hazard.
HKU thesis hit by AI hallucination, cited fabricated literature | Finance KOL jailed for six weeks over unlicensed advice | Cheung Wing-hong (張頴康) announces departure from TVB | 10 November · Yahoo Evening News

港大論文爆 AI 幻覺涉引虛假文獻|金融 KOL 無牌意見判囚 6 週|張頴康宣布離巢 TVB|11 月 10 日・Yahoo 晚報

2025-11-10
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI-assisted citation organization) in the development and preparation of a research paper. The AI's malfunction or misuse (hallucination leading to fabricated citations) directly led to the inclusion of false references, which is a breach of academic and intellectual property norms. This harm to academic integrity and potential violation of rights qualifies as an AI Incident under the framework, as the AI system's role was pivotal in causing the harm. Therefore, the classification is AI Incident.

HKU doctoral thesis cited "AI-fabricated" literature; credited co-author and Associate Dean of Social Sciences apologizes | UDN

2025-11-10
UDN
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used in the development of the thesis references, which malfunctioned or was misused by the student leading to fabricated citations. This has directly caused harm in the form of academic misinformation and reputational damage, which falls under violations of intellectual property rights and academic integrity obligations. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm (misinformation and breach of academic standards).

Editorial: Beware of "AI hallucinations", guard against falling into traps - 20251112 - Editorial

2025-11-11
Ming Pao
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems (AI-assisted research tools) that produced fabricated references ('AI hallucinations') in a doctoral thesis, which is a direct misuse or malfunction of AI leading to harm in the form of academic integrity violation and intellectual property rights breach. The harm is realized, not just potential, as the fabricated references were included in a published academic paper, causing reputational damage and undermining trust. The article also discusses broader risks and responses but the core event meets the criteria for an AI Incident because the AI system's use directly led to a significant harm (academic misconduct through false citations).

Editorial: Beware of "AI hallucinations", guard against falling into traps - 20251112 - Editorial

2025-11-11
Ming Pao
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of generative AI systems (AI hallucination) that directly caused harm by producing fabricated academic references, undermining research integrity and trust. The article also references other harms caused by AI chatbots, such as psychological harm and the spread of misinformation. These harms fall under violations of intellectual property and harm to communities. The involvement of AI in generating false information, together with the resulting academic and societal impacts, meets the criteria for an AI Incident rather than a hazard or complementary information. The article's focus is on realized harms and institutional responses, not merely potential risks or general AI news.