Utah Lawyer Sanctioned for AI-Generated False Legal Citations


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Utah lawyer Richard Bednar was sanctioned by the Utah Court of Appeals after filing a brief that included fabricated case law generated using ChatGPT. The document, prepared by a law clerk, contained false legal citations, including a nonexistent case, raising concerns about the misuse of AI in legal filings.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of ChatGPT, an AI system, to generate content for a court brief. The AI-generated content included fabricated legal citations that were submitted to a court, constituting a breach of legal and professional obligations. The harm is clear: the AI system's outputs led to false legal claims being made, undermining the integrity of the legal process and resulting in sanctions against the lawyer. This fits the definition of an AI Incident, as the AI system's use directly led to a violation of legal obligations and harm to the legal process. The event is not merely a potential risk or complementary information but a realized incident with consequences.[AI generated]
AI principles
Accountability, Transparency & explainability, Robustness & digital security, Safety, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
Workers, Government, General public

Harm types
Reputational, Public interest, Economic/Property

Severity
AI incident

Business function
Compliance and justice

AI system task
Content generation, Interaction support/chatbots


Articles about this incident or hazard


US lawyer sanctioned after caught using ChatGPT for court brief

2025-05-31
The Guardian

Lawyer in deep water after using AI to prepare brief

2025-05-31
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the lawyer used ChatGPT, an AI system, to prepare a legal filing that contained fabricated case references. This misuse of AI led to sanctions, including a reprimand and financial penalties, which constitute harm related to violations of legal and professional obligations. The AI system's involvement in the incident is direct and causal. This therefore qualifies as an AI Incident under the framework, as it involves harm to legal rights and professional standards resulting from the AI system's use.

US lawyer sanctioned after court discovers false citations filed using ChatGPT

2025-06-01
The Indian Express
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) whose outputs were used in a legal filing containing fabricated citations. The AI's involvement directly led to harm in the form of a violation of legal and ethical standards, resulting in court sanctions against the attorney. This fits the definition of an AI Incident because the AI system's use directly caused harm related to a breach of legal obligations and professional misconduct.

'Fake precedent': Lawyer punished for filing brief with case made up by artificial intelligence

2025-05-29
The Salt Lake Tribune
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used in the preparation of a legal brief, producing fabricated case precedents that were submitted to a court. This directly caused harm by misleading the court and violating legal and ethical standards, resulting in sanctions against the lawyer. The harm is a violation of legal obligations and professional conduct, which fits the definition of an AI Incident. The event is not a potential risk or complementary information but a realized harm caused by AI-generated misinformation in a legal context.

Utah lawyer sanctioned for court filing that used ChatGPT and referenced nonexistent court case

2025-05-30
ABC 4
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of ChatGPT, an AI system, to generate a court filing that included false legal citations. The failure to review and verify these AI-generated citations led to sanctions by the court, indicating a direct link between the AI system's use and a violation of legal and ethical obligations. This breach harms the integrity of the legal process and the rights of the parties involved, fulfilling the criteria for an AI Incident. The harm is realized, not merely potential, as sanctions and reputational damage have occurred. The event is not merely complementary information or a hazard, because the AI system's role in causing harm is clear and direct.