AI-Generated Errors in Deloitte's Canadian Healthcare Report Spark Scrutiny


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Deloitte's $1.6 million healthcare report for Newfoundland and Labrador contained fabricated or inaccurate citations attributed to AI assistance. The incident has led to public criticism, calls for stricter AI regulations, and concerns over misinformation in official government documents, highlighting risks of AI use in consulting and policy-making.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of generative AI (large language models) to produce false research citations in official reports, which were then used to support government strategies. This misuse of AI directly caused harm by spreading misinformation and potentially influencing policy and investment decisions based on fabricated data. The harm includes the misattribution of research through fabricated citations and harm to communities through misleading information affecting the public health and education sectors. The AI system's role is pivotal, as the hallucinated citations stem directly from its outputs. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability, Robustness & digital security, Transparency & explainability

Industries
Healthcare, drugs, and biotechnology; Government, security, and defence

Affected stakeholders
Business, Government, General public

Harm types
Reputational, Public interest

Severity
AI incident

Business function
Research and development

AI system task
Content generation


Articles about this incident or hazard


How AI May Be Undermining Your Investments

2025-11-29
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI (large language models) to produce false research citations in official reports, which were then used to support government strategies. This misuse of AI directly caused harm by spreading misinformation and potentially influencing policy and investment decisions based on fabricated data. The harm includes the misattribution of research through fabricated citations and harm to communities through misleading information affecting the public health and education sectors. The AI system's role is pivotal, as the hallucinated citations stem directly from its outputs. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Not again: Deloitte's $1.6 million report contains AI hallucinations

2025-11-28
Cybernews
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (large language models) used in the development of government reports. The AI hallucinations produced fabricated citations, errors that misinform and potentially harm public decision-making and trust. This fits the definition of an AI Incident because the AI system's malfunction (hallucination) directly led to misinformation in official documents. Although the harm is not physical, it is significant: it undermines the reliability of government reports and public trust, which affects communities. The repeated nature of the issue and the financial cost further support classification as an AI Incident rather than a hazard or complementary information.

PCs suggest A.I. errors in Education Accord originated within government

2025-11-29
thepeterboroughexaminer.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false citations in official government reports, which directly caused harm by disseminating inaccurate information and damaging the credibility of important public documents. The government's acknowledgment of the AI-generated errors and the subsequent removal and revision of the reports confirm that harm has materialized. The involvement of AI in the development and use of these reports, and the resulting misinformation, fits the definition of an AI Incident due to harm to communities and violation of trust in public information.

PCs suggest A.I. errors in Education Accord originated within government

2025-11-29
The Indy
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false citations in official government reports, which directly led to misinformation and credibility harm. This fits the definition of an AI Incident because the AI system's use in producing the reports caused harm to communities through dissemination of false information. The government's response and plans for stricter review and human verification are complementary information but do not negate the fact that harm occurred. Therefore, this event is best classified as an AI Incident.

Deloitte's $1.6 million Canadian healthcare report flagged for AI errors, weeks after its Australia report scandal

2025-11-26
Mint
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-related errors in Deloitte's reports, including fabricated citations and false academic references, which are direct consequences of AI system use in report generation. These errors have materialized and caused harm by misleading government decision-making processes and potentially affecting public policy and trust. The dissemination and use of AI-generated erroneous content by government bodies meets the criteria for an AI Incident, as it has directly led to harm to communities.

Newfoundland NDP calls for A.I. regulations after errors found in Deloitte healthcare report

2025-11-26
Prince Albert Daily Herald
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the probable use of AI in generating fabricated citations in a government healthcare report, which has already caused harm by undermining confidence in government decision-making and healthcare policy. The AI system's outputs (fabricated citations) directly contributed to misinformation in an official document, leading to harm to communities through erosion of trust and potential misinformed policy decisions. This fits the definition of an AI Incident as the AI system's use has directly or indirectly led to harm. The calls for regulation and review are responses to this incident, not the primary event itself.

Another government paid Deloitte for work with AI-generated hallucinations

2025-11-27
Muvi TV
Why's our monitor labelling this an incident or hazard?
The event describes a report produced with AI assistance that contained hallucinated citations, a malfunction of AI in content generation. While no direct harm such as injury, rights violations, or disruption of critical infrastructure is mentioned, the AI system's malfunction introduced misinformation into an official government document, harming the integrity of information relied on by communities and prompting public scrutiny and calls for refunds. Because this informational harm has materialized, the event fits the definition of an AI Incident.

AI Again? Deloitte Draws Flak For $1.6-Million Canadian Healthcare Report, Weeks After Row In Australia

2025-11-26
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The event describes a government-commissioned report that contained multiple AI-related errors, specifically fabricated or incorrect citations generated with AI support. This misinformation in a government healthcare report can harm communities by misleading policy decisions and undermining trust. The AI system's use in generating citations is explicitly mentioned and directly linked to the errors; although Deloitte states AI was only used selectively, the errors are attributed to AI involvement. This meets the criteria for an AI Incident because the AI system's use directly led to harm in the form of misinformation and a potential breach of professional standards. The event is not merely a potential hazard or complementary information, as the harm has already occurred and is documented.

Deloitte faces new scrutiny over suspected AI-generated mistakes

2025-11-26
semafor.com
Why's our monitor labelling this an incident or hazard?
The reports contained inaccuracies suspected to have been caused by AI, introducing errors into official government documents. This constitutes harm to the integrity of information relied upon by public institutions, potentially affecting communities and public trust. The AI system's involvement in generating these errors is explicit and central to the incident. Although the fundamental findings were not altered, the errors represent a clear harm linked to AI use in consulting, meeting the criteria for an AI Incident.

Another government paid Deloitte for work with AI-generated hallucinations

2025-11-27
Straight Arrow News
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI assistance in generating citations and content in government reports, which led to the inclusion of fabricated information (hallucinations). This misinformation in official reports can cause harm to communities and public trust, fulfilling harm criterion (d) for AI Incidents. The AI system's malfunction (hallucination) directly contributed to the harm, and the company's response to fix the issues does not negate the fact that harm occurred. Therefore, this event is best classified as an AI Incident.