Law Firms Sanctioned Over AI-Generated Fake Citations in Brief


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Two California law firms, K&L Gates and Ellis George, faced a $31,100 sanction from Judge Michael R. Wilner after submitting an AI-generated brief containing fabricated legal citations. The firms' undisclosed reliance on AI produced misleading research, breaching legal standards and undermining the integrity of the proceedings.[AI generated]

Why's our monitor labelling this an incident or hazard?

The use of AI-generated research that included fabricated legal citations directly caused harm by misleading the court and risking incorrect judicial outcomes. The AI system's involvement in producing false information that was relied upon in official legal documents is a clear case of harm resulting from AI use. The judge's sanctions and statements confirm the harm and the AI's pivotal role. Therefore, this event qualifies as an AI Incident due to the realized harm to legal integrity and the violation of legal standards.[AI generated]
AI principles
Accountability
Transparency & explainability
Robustness & digital security
Safety
Democracy & human autonomy

Industries
Government, security, and defence
Business processes and support services

Affected stakeholders
Business
General public

Harm types
Economic/Property
Reputational
Public interest

Severity
AI incident

Business function:
Compliance and justice

AI system task:
Content generation
Reasoning with knowledge structures/planning

In other databases


Articles about this incident or hazard


Judge slams lawyers for 'bogus AI-generated research'

2025-05-13
The Verge

Judge fines lawyers $31k after they use AI to generate brief with made-up citations

2025-05-14
Boing Boing
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate legal documents with fabricated citations, which directly misled a judge and could have led to incorrect judicial rulings. The AI's role in producing false information that was relied upon in a legal proceeding caused harm to the justice system's integrity and violated legal obligations. This meets the criteria for an AI Incident because the AI system's use directly led to harm (misleading the court and breaching legal standards).

Judge Slams Firms for Secret AI Use with Fake Citations

2025-05-14
Android Headlines
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems to generate legal research content that included fabricated citations and quotations. The undisclosed and unchecked reliance on AI outputs led to the submission of false information to a court, which is a direct harm to the legal process and a violation of legal obligations. The judge sanctioned the firms for this misconduct, confirming the harm caused. Therefore, this qualifies as an AI Incident due to the direct link between AI use and realized harm involving legal rights and judicial integrity.

Law Firms Use Artificial Intelligence To Earn Very Real $31K Sanction!

2025-05-14
Above the Law
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI (ChatGPT) by lawyers to generate legal citations, many of which were fabricated or incorrect. This misuse of AI led to misleading the court, which is a violation of legal and professional standards, thus constituting harm to the legal system and a breach of obligations under applicable law. The sanctions imposed on the law firms confirm that harm occurred. The AI system's involvement is direct, as the AI-generated content was submitted as part of official legal documents. Hence, this event meets the criteria for an AI Incident.

Judge Sanctions Law Firms $31,000 for Error-Filled AI-Generated Brief

2025-05-14
WebProNews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for legal research and brief generation. The AI's hallucinations and inaccuracies directly caused the submission of false legal citations and quotations, misleading the court and affecting judicial decision-making. The harm is realized and significant, as it undermines the legal process and led to sanctions against the law firms. The AI system's malfunction and the firms' improper use of AI are central to the incident. Hence, this is an AI Incident due to direct harm caused by AI-generated misinformation in a legal context.

Sanctions imposed for 'collective debacle' involving AI hallucinations and 2 firms, including K&L Gates

2025-05-14
ABA Journal - Law News Now
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to prepare an outline containing legal research, which included fabricated and incorrect citations. The use of this AI-generated content in a legal brief led to misleading the court, which is a violation of legal and ethical standards, constituting harm to the legal process and potentially to the rights of involved parties. The sanctions imposed reflect the materialized harm caused by the AI system's outputs. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in legal proceedings.

Lawyers Used AI to Make a Legal Brief -- and Got Everything Wrong

2025-05-15
VICE
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate legal content that was not properly vetted, resulting in the filing of false information in court documents. This misuse of AI directly caused harm by misleading the court and breaching legal and ethical standards, which falls under violations of human rights or breach of obligations under applicable law. Therefore, this qualifies as an AI Incident due to the realized harm stemming from the AI system's use and the resulting sanctions.

Anthropic's law firm blames Claude hallucinations for errors

2025-05-15
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The article explicitly details how the AI system (Claude) produced inaccurate legal citations that were used in official court documents, leading to sanctions and legal consequences for the attorneys involved. This shows direct involvement of an AI system's malfunction causing harm in the form of legal and professional violations. The harm is realized and materialized, not merely potential. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly led to violations of legal obligations and professional standards, which is a breach of applicable law and rights.

Lawyers Sanctioned Again for Relying on Bogus Legal AI Citations

2025-05-16
LawFuel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating bogus legal citations that were incorporated into official court documents, resulting in sanctions against lawyers and judicial concern about the reliability of AI-generated legal research. The harm includes violations of legal professional duties and risks to the judicial process, which fall under violations of obligations under applicable law and harm to communities (the legal system and public trust). The AI systems' outputs directly contributed to these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Law Firms Caught and Punished for Passing Around "Bogus" AI Slop in Court

2025-05-15
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to produce legal research and citations that were false and misleading. The AI's outputs were incorporated into an official court filing without proper verification, leading to the risk of judicial orders being based on fabricated information. This constitutes a violation of legal obligations and harms the judicial process, fitting the definition of an AI Incident where the AI system's use directly led to harm (misinformation in legal proceedings). The fines and judicial response further confirm the recognition of harm caused by AI misuse.

Judge initially fooled by fake AI citations, nearly put them in a ruling

2025-05-14
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fake legal citations that were submitted in court briefs, misleading a judge and nearly affecting a court ruling. The AI system's outputs were directly used in the legal process, causing harm by violating legal and ethical standards and misleading the court. This fits the definition of an AI Incident as the AI system's use directly led to a violation of legal obligations and harm to the judicial process. The sanctions and court actions further confirm the materialized harm. Hence, the classification is AI Incident.