AI Hallucinations in Legal Practice Lead to Court Sanctions and Fines


The information displayed in the AIM (AI Incidents Monitor) should not be reported as representing the official views of the OECD or of its member countries.

AI tools like ChatGPT and Perplexity are increasingly used for legal research and drafting in the US. However, their tendency to generate fabricated legal citations ('hallucinations') has led to court sanctions, fines, and procedural disruptions, harming litigants and undermining legal standards.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems being used in legal contexts, including for research and drafting. While some users have benefited, there are multiple documented cases where AI-generated hallucinations (false legal citations) led to sanctions and fines against attorneys and litigants. This is a clear example of harm caused by AI malfunction or misuse, specifically violations of legal obligations and professional standards, which fits the definition of an AI Incident. The harm is direct and realized, not merely potential, and involves breaches of legal and ethical duties.[AI generated]
AI principles
Accountability, Robustness & digital security, Safety, Transparency & explainability

Industries
Other

Affected stakeholders
Consumers, Business

Harm types
Economic/Property, Reputational, Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Organisation/recommenders, Content generation


Articles about this incident or hazard


The new public defender: Some are turning to ChatGPT for legal advice and winning

2025-10-08
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in legal contexts, including for research and drafting. While some users have benefited, there are multiple documented cases where AI-generated hallucinations (false legal citations) led to sanctions and fines against attorneys and litigants. This is a clear example of harm caused by AI malfunction or misuse, specifically violations of legal obligations and professional standards, which fits the definition of an AI Incident. The harm is direct and realized, not merely potential, and involves breaches of legal and ethical duties.

These people ditched lawyers for ChatGPT in court

2025-10-08
NBC News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI chatbots) used to draft legal documents. Hallucinated citations have directly led to court sanctions, fines, and procedural disruptions, which fall under violations of legal rights and harm to the justice system. These harms are realized and documented, not merely potential, and stem from both the AI's malfunction (hallucination) and its misuse by litigants and lawyers. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Part 2 - AI for legal professionals: Hallucinations

2025-10-09
Lexology
Why's our monitor labelling this an incident or hazard?
The article explicitly describes real incidents in which AI systems (large language models) generated fabricated legal case citations that were then filed in court documents, misleading courts and jeopardizing legal outcomes. This constitutes a violation of professional and legal standards, which falls under harm to rights and communities. The AI systems' use directly led to these harms, fulfilling the criteria for an AI Incident. The article also discusses mitigation efforts, but its primary focus is on the realized harm from AI hallucinations in legal practice.

The new public defender: Some are turning to ChatGPT to offer legal advice and win small claims cases

2025-10-08
AOL.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT and similar generative AI) used for legal advice and document drafting. The AI's malfunction in generating false legal citations ('hallucinations') has directly caused harm, including fines against attorneys and misled courts, violating legal obligations and harming both individuals and the legal system. These harms fall under violations of applicable law and of human rights protections for fair legal processes. Hence, this is an AI Incident rather than a hazard or complementary information.

Americans turn to AI over lawyers and win

2025-10-10
AzerNews
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (ChatGPT, Perplexity AI) used in legal research and drafting. However, it describes positive outcomes without any reported harm or violation. There is no indication that the AI use caused injury, rights violations, or other harms, nor does it suggest plausible future harm from the use described. The article mainly provides context on AI adoption in legal services and expert opinions on its limitations, which aligns with Complementary Information rather than an Incident or Hazard.