US Lawyers Sanctioned for Submitting AI-Generated Fake Legal Citations

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple US attorneys, including in California and New Jersey, were fined and sanctioned for submitting court filings containing fabricated legal citations generated by AI tools like ChatGPT. These incidents highlight the risks of unverified AI outputs in legal proceedings and have prompted courts to issue warnings and impose record fines.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (ChatGPT) that generated fake legal citations, which the attorney submitted without verification. This misuse of AI directly caused harm by misleading the court, violating legal norms, and wasting judicial resources. The fine and court opinion demonstrate that the AI system's outputs led to a breach of obligations under applicable law and harmed the integrity of the legal process. Therefore, this qualifies as an AI Incident due to realized harm stemming from AI use in a professional legal context.[AI generated]
AI principles
Accountability, Fairness, Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
Consumers, General public

Harm types
Economic/Property, Reputational, Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Content generation


Articles about this incident or hazard

California Attorney Fined $10k for Filing an Appeal With Fake Legal Citations Generated by AI

2025-09-22
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) that generated fake legal citations, which the attorney submitted without verification. This misuse of AI directly caused harm by misleading the court, violating legal norms, and wasting judicial resources. The fine and court opinion demonstrate that the AI system's outputs led to a breach of obligations under applicable law and harmed the integrity of the legal process. Therefore, this qualifies as an AI Incident due to realized harm stemming from AI use in a professional legal context.
What this N.J. lawyer did with AI landed him a hefty fine and a warning to all attorneys

2025-09-23
NJ.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system to create fabricated case law, which was submitted in a legal proceeding. This misuse of AI directly caused harm by misleading the court and breaching professional and legal obligations, constituting a violation of applicable law protecting the integrity of legal proceedings and procedural rights. Therefore, this qualifies as an AI Incident due to the realized harm stemming from the AI system's misuse in a critical legal context.
Bergen County lawyer fined $3,000 for misuse of artificial intelligence

2025-09-23
North Jersey
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI to create fake legal case law that was submitted to a court, which is a misuse of AI in a legal context. This misuse directly led to a legal sanction (fine) and disciplinary measures, indicating a violation of legal and professional obligations. The harm here is a violation of legal rights and the integrity of the judicial process, fitting the definition of an AI Incident under violations of human rights or breach of obligations under applicable law. Therefore, this event qualifies as an AI Incident.
LA attorney issued historic fine over ChatGPT fabrications

2025-09-22
LAist
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the attorney used ChatGPT to generate text and citations without verifying them, resulting in fabricated legal quotes being submitted to the court. This misuse of the AI system directly caused harm: a frivolous appeal was filed, wasting court time and taxpayer money and violating court rules. The harm is realized and significant, involving legal and ethical breaches. Hence, this qualifies as an AI Incident due to the direct link between AI-generated fabrications and the harm caused in the judicial context.
US Courts Sanction Lawyers for AI-Generated Fake Citations

2025-09-19
WebProNews
Why's our monitor labelling this an incident or hazard?
The article describes generative AI tools whose use by lawyers led to the submission of fake legal citations, a direct violation of legal and ethical obligations and thus a breach of applicable law protecting fundamental rights and the integrity of the justice system. The harms are realized and materialized, including sanctions and disqualifications, showing direct causation from AI misuse. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to violations of law and harm to the legal process and involved parties.
California Court Fines Lawyer $10K for Fake ChatGPT Quotes in Brief

2025-09-22
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of an AI system (ChatGPT), which generated fabricated legal citations and led to a court ruling imposing a fine on the lawyer. The harm includes violation of legal obligations, erosion of judicial trust, and waste of judicial resources. This fits the definition of an AI Incident because the AI system's malfunction directly led to realized legal and professional harm. The article also discusses broader implications and regulatory responses, but the primary focus is on the harm caused by the AI's outputs in this case.
California Issues Historic Fine Over Lawyer's ChatGPT Fabrications

2025-09-23
GV Wire
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) whose outputs included fabricated legal citations that were submitted in a court appeal. This misuse directly caused harm by violating court rules, misleading the court, and wasting taxpayer money. The harm is a violation of legal and professional obligations, which falls under violations of applicable law protecting fundamental rights and legal integrity. The fine and court opinion confirm the harm has materialized, not just a potential risk. Hence, this is an AI Incident rather than a hazard or complementary information.
Lawyer who relied on ChatGPT for courtroom quotes fined in California

2025-09-23
Cybernews
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose outputs were fabricated legal quotes and citations. The lawyer's failure to verify these AI-generated outputs led to the submission of false information in a legal proceeding, causing harm to the judicial system and violating court rules. This is a direct harm linked to the AI system's malfunction (hallucination) and misuse, fulfilling the criteria for an AI Incident under violations of legal obligations and harm to institutional processes. The article also discusses regulatory responses, but the primary focus is on the realized harm caused by the AI system's use in this case.