Federal Judge Bans Unverified AI-Generated Legal Filings After ChatGPT Hallucination Incident


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Attorney Steven Schwartz used ChatGPT to draft legal filings, resulting in fabricated case citations submitted to a federal court. Judge Brantley Starr responded by banning unverified AI-generated content in his courtroom, requiring lawyers to certify that filings are either AI-free or thoroughly fact-checked by humans to prevent future misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (ChatGPT) was explicitly used in the development of legal documents that contained false information, directly leading to harm in the form of misleading the court and potentially violating the rights of involved parties. This is a clear AI Incident because the AI's outputs were relied upon and caused harm in a critical societal domain (justice system). The article also includes complementary information about calls for regulation, but the primary focus is the incident itself involving misuse of AI-generated content in court.[AI generated]
AI principles
Accountability, Transparency & explainability, Robustness & digital security, Safety, Respect of human rights, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
Government, General public

Harm types
Reputational, Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Content generation, Interaction support/chatbots

In other databases

Articles about this incident or hazard


Intervention by the European lawyers (CCBE) on artificial intelligence

2023-05-29
Liberal.gr
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by AI systems but focuses on the potential risks and the need for regulatory and governance measures. It is primarily about societal and governance responses to AI developments, including calls for legislation and standards to prevent future harms. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Council of European Lawyers: Intervention on the issue of artificial intelligence applications

2023-05-29
Η Ναυτεμπορική
Why's our monitor labelling this an incident or hazard?
The content centers on the CCBE's advocacy and policy recommendations in response to potential risks from AI systems like ChatGPT. It highlights concerns about fairness, discrimination, and human oversight but does not document any realized harm or incident. Therefore, it is best classified as Complementary Information, as it provides important context and governance-related responses to AI developments without reporting a new AI Incident or AI Hazard.

EU Council of Bar Associations calls for conditions on artificial intelligence

2023-05-29
CNN.gr
Why's our monitor labelling this an incident or hazard?
The article focuses on societal and governance responses to AI, including calls for legislation and standards to prevent potential harms and ensure ethical use, especially in justice. There is no description of an actual AI system causing harm or a specific event where AI use led or could lead to harm. Therefore, it fits the definition of Complementary Information, as it provides context and advocacy related to AI governance rather than reporting a new incident or hazard.

"Take measures now": European lawyers on the risks of using artificial intelligence in the justice system

2023-05-29
ΣΚΑΪ
Why's our monitor labelling this an incident or hazard?
The article discusses potential risks and calls for regulatory and safety measures regarding AI use in the judicial system, but does not report any realized harm or incident. It is therefore best classified as Complementary Information, as it provides important context and governance response to AI-related concerns without describing a specific AI Incident or AI Hazard.

Artificial intelligence: Uproar over its malicious use by a lawyer in New York - European intervention to restrict it

2023-05-29
enikos.gr
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was explicitly used in the development of legal documents that contained false information, directly leading to harm in the form of misleading the court and potentially violating the rights of involved parties. This is a clear AI Incident because the AI's outputs were relied upon and caused harm in a critical societal domain (justice system). The article also includes complementary information about calls for regulation, but the primary focus is the incident itself involving misuse of AI-generated content in court.

Intervention by the European lawyers (CCBE) on the issue of artificial intelligence

2023-05-29
bankingnews.gr
Why's our monitor labelling this an incident or hazard?
The article focuses on the societal and governance response to the potential risks of AI systems, particularly in the judicial context. It highlights calls for regulation and precautionary measures but does not report any realized harm or direct incident involving AI malfunction or misuse. Therefore, it fits the definition of Complementary Information as it provides context and response to AI developments without describing a new AI Incident or AI Hazard.

European lawyers intervene on artificial intelligence and call for conditions and prohibitions

2023-05-29
cyprustimes.com
Why's our monitor labelling this an incident or hazard?
The article does not describe any specific AI Incident where harm has already occurred due to AI systems, nor does it report a particular AI Hazard event with imminent risk. Instead, it focuses on calls for regulation and preventive measures to address potential risks associated with AI use, especially in legal contexts. Therefore, it constitutes Complementary Information as it provides governance and societal response context to AI developments and their potential impacts, without reporting a concrete incident or hazard.

Dallas Judge Bans Legal Filings That Rely On AI Content

2023-06-03
710 KURV - The Valley's News/Talk Station
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI platforms like ChatGPT) in legal filings, and the judge's order addresses risks of AI-generated hallucinations and bias that could lead to misinformation in legal proceedings. However, the article describes a judicial policy response to potential AI-related harms rather than an actual incident of harm occurring. Therefore, this is Complementary Information about governance and societal response to AI risks, not an AI Incident or Hazard.

Texas judge bans filings solely created by AI after ChatGPT made up cases

2023-06-02
CBS News
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of an AI system (ChatGPT) in the legal domain, where its outputs have directly led to the submission of false information (made-up court cases) in legal briefs. This constitutes a violation of legal and professional obligations, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law. The judge's order is a response to an AI Incident where the AI's hallucinations caused harm by misleading the court. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm in the legal process.

Texas judge bans legal filings that rely on AI-generated content

2023-06-03
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article centers on a judicial order responding to prior misuse of AI-generated legal filings that contained fabricated information, which could cause harm to the legal process and rights. However, the article does not report a new AI Incident causing harm directly but rather a governance measure to mitigate such risks. The judge's order is a societal and governance response to known AI risks, enhancing understanding and management of AI harms in legal contexts. Hence, it fits the definition of Complementary Information, as it provides an update on responses to AI-related issues rather than describing a new incident or hazard.

No ChatGPT in my court: Judge orders all AI-generated content must be declared and checked

2023-05-31
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI, ChatGPT) in legal filings that led to the submission of hallucinated, inaccurate legal precedents, which could have caused harm to the legal process and justice. Although no direct harm such as injury or property damage is described, the submission of false legal information constitutes a violation of legal obligations and could undermine the integrity of the judicial process, which is a form of harm to the legal system and potentially to rights. The judge's order to require declaration and human verification of AI-generated content is a governance response to prevent such harms. Since the event describes a realized misuse of AI that led to harm (submission of false legal arguments), it qualifies as an AI Incident rather than a hazard or complementary information.

US judge orders lawyers to sign AI pledge, warning 'they make stuff up'

2023-05-31
Reuters
Why's our monitor labelling this an incident or hazard?
The event describes a direct harm caused by the use of an AI system (generative AI) in legal filings, where the AI produced hallucinated or fabricated content (false citations), which can mislead the court and undermine legal processes. This constitutes a violation of legal and professional obligations, thus a breach of applicable law and fundamental rights related to justice. The judge's order is a response to this harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm in the legal context.

Texas judge bans legal filings that rely on AI-generated content

2023-06-03
MSN International Edition
Why's our monitor labelling this an incident or hazard?
The article centers on a court's regulatory response to the misuse of AI in legal filings after an incident where AI-generated content caused misinformation in court documents. The AI system's malfunction (hallucinations) indirectly led to harm by potentially misleading the court. However, the article's main focus is on the judge's order and the legal framework to prevent future harm, which is a governance response. Therefore, this is best classified as Complementary Information, as it provides an update on societal and governance responses to AI-related risks rather than reporting a new AI Incident or Hazard itself.

US Judge bans lawyers from using ChatGPT-drafted content at court

2023-05-31
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI like ChatGPT) and their use in legal filings, but no actual harm has been reported. Instead, the judge is proactively establishing rules to mitigate risks associated with AI-generated misinformation in court documents. This constitutes a governance response to AI-related risks rather than an incident or hazard. Therefore, it fits the definition of Complementary Information, as it provides context on societal and governance responses to AI use without describing a specific AI Incident or AI Hazard.

No ChatGPT in my court: Judge orders all AI-generated content must be declared and checked

2023-05-30
TechCrunch
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems in legal filings, which previously led to a case of hallucinated and fabricated legal citations. The judge's new rule aims to prevent harm by ensuring AI-generated content is declared and verified by humans, thereby mitigating risks of misinformation and legal errors. Although no direct harm is reported in this event, the policy is a response to a realized AI-related problem and aims to prevent future harm. This constitutes Complementary Information because it is a governance response to an AI-related issue, enhancing understanding and management of AI risks in the legal domain, rather than reporting a new AI Incident or AI Hazard itself.

Texas judge says no AI in courtroom unless lawyers certify it was verified by human

2023-06-01
Fox Business
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems (e.g., ChatGPT, Google Bard) in drafting legal documents. The judge's order addresses the risk of harm from AI-generated inaccuracies (hallucinations) that could mislead legal proceedings, which implicates potential violations of legal rights and justice. However, the article does not report any actual harm or incident caused by AI use but rather a preventive measure to mitigate such risks. Therefore, this is a governance response to a potential AI-related risk rather than an incident or hazard itself.

Judge Bans AI-Generated Filings In Court Because It Just Makes Stuff Up

2023-06-01
VICE
Why's our monitor labelling this an incident or hazard?
The article does not report an actual harm caused by AI-generated filings but addresses the potential risk of AI hallucinations leading to inaccurate legal documents. The judge's order is a governance measure to mitigate this risk. Therefore, this is Complementary Information about societal and governance responses to AI-related risks, not an AI Incident or Hazard itself.

Federal judge: No AI in my courtroom unless a human verifies its accuracy

2023-05-31
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event involves the use and potential misuse of AI systems (generative AI tools like ChatGPT) in legal filings, which previously led to the submission of fabricated case citations—a direct harm to the integrity of the legal process and potential violation of legal standards. The judge's order aims to prevent further harm by mandating human verification of AI outputs. Since the AI's malfunction (hallucinations) has already caused harm (submission of false legal documents), this qualifies as an AI Incident. The judge's order is a response to an incident where AI use directly led to harm in the legal system.

'Prone to hallucinations and bias': A Texas judge puts A.I. in its place

2023-05-31
Fortune
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI like ChatGPT and GPT-4) used in legal research and discovery. The submission of fabricated case law based on AI hallucinations directly harmed the legal process, constituting an AI Incident due to misinformation and potential violation of legal integrity. The judge's order and concerns about AI bias and hallucinations further underscore realized harms. The mention of AI tools potentially being used to 'crush' civil rights claims indicates possible indirect harm to human rights. Therefore, the event qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm in the justice system and risks violations of rights.

Federal Judge Requires All Lawyers to File Certificates Related to Use of Generative AI

2023-05-30
Reason
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI tools) and their use in legal document preparation, but it does not describe any realized harm or incident caused by AI. Instead, it is a regulatory or procedural response by a court to manage potential risks associated with AI use. There is no direct or indirect harm reported, nor a plausible future harm event described here. Therefore, this is Complementary Information as it provides governance and societal response to AI use in a specific domain, enhancing understanding and risk management without reporting an AI Incident or Hazard.

Texas Federal Judge Implements Measures to Prevent AI-Generated Arguments in Court

2023-05-31
Tech Times
Why's our monitor labelling this an incident or hazard?
The article references a past AI incident where AI-generated legal arguments caused harm by presenting fabricated information, which is a violation of legal and ethical standards. However, the current event is about the judge's implementation of a procedural rule to prevent recurrence, which is a governance response. The main focus is on the new certification requirement as a preventive measure, not on a new incident or hazard. Therefore, this is Complementary Information providing context and response to a prior AI Incident.

Attorneys Must Certify AI Policy Compliance, Judge Orders

2023-05-31
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI) and addresses their use in legal document drafting. However, it does not describe any realized harm or incident caused by AI, nor does it indicate a plausible future harm from AI use beyond the court's precautionary measures. Instead, it is a governance response to potential risks associated with AI-generated content in legal filings. Therefore, it fits the definition of Complementary Information as it provides a societal/governance response to AI-related issues without reporting a specific AI Incident or Hazard.

Texas judge demands lawyers declare AI-generated docs

2023-05-31
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in legal document preparation, which produced hallucinated (false) court cases. This misuse led to a direct harm: violation of legal and ethical obligations, misleading the court, and potential miscarriage of justice. The Texas judge's order to require certification about AI use is a response to this harm. The AI system's malfunction (hallucination) and the attorneys' reliance on it caused a breach of legal rights and professional standards, fitting the definition of an AI Incident due to violation of obligations under applicable law and harm to the legal process.

Judge bans ChatGPT from courtroom after lawyer's mishap

2023-05-31
WPTV
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose outputs included fabricated legal cases, directly leading to misinformation in a federal court filing. This misinformation can be considered harm to the legal process and a violation of obligations under applicable law, fitting the definition of an AI Incident. The judge's response to ban AI-generated content without human verification further confirms the recognition of harm caused by the AI system's malfunction. Therefore, this event qualifies as an AI Incident.

If You Aren't The One Writing Your Briefs In Texas, It Better Be Some Other Human!

2023-05-31
Above the Law
Why's our monitor labelling this an incident or hazard?
The event centers on a judicial order addressing the use of generative AI in legal filings, focusing on preventing potential harms such as inaccuracies and bias. However, no actual harm or incident caused by AI is reported, nor is there a direct or indirect link to injury, rights violations, or other harms. The order is a proactive governance measure to mitigate risks, making this a case of Complementary Information rather than an AI Incident or AI Hazard.

Federal Judge Orders Against Using ChatGPT In Court Proceedings; Asks Lawyers To Check All AI-Generated Content

2023-05-31
The Tech Report
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (ChatGPT) in legal filings that contained fabricated facts, which is a direct misuse of AI leading to harm in the legal domain. The harm includes potential violations of legal rights and undermining the judicial process, which falls under violations of obligations under applicable law protecting fundamental rights. The judge's order to require certification and human verification is a governance response to this AI Incident. Therefore, the event qualifies as an AI Incident due to the realized harm caused by the AI system's use in court filings.

US judge orders lawyers not to use ChatGPT-drafted content

2023-06-01
Local News for British Asian and Indian Community in London
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) whose outputs were directly used in legal filings, resulting in bogus citations and false accusations. This misuse of AI led to harm in the form of misinformation in official court documents, risking judicial errors and reputational damage. The judge's response and the affidavit admission confirm the AI's role in causing these harms. Hence, it meets the criteria for an AI Incident as the AI system's use directly led to harm in a legal and reputational context.

Judge bans ChatGPT from courtroom after lawyer's mishap

2023-05-31
Scripps News
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was explicitly used and malfunctioned by generating false legal citations, which directly caused harm by misleading the court. This constitutes a violation of legal obligations and undermines the integrity of judicial proceedings, fitting the definition of an AI Incident. The judge's order and the lawyer's apology confirm the harm occurred and the AI's role was pivotal. Hence, the event is classified as an AI Incident.

Judge bans AI-generated filings -- unless they get human oversight

2023-06-02
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI (ChatGPT) to prepare legal briefs that contained fabricated facts and errors, which were discovered and led to a judicial ban on AI-generated filings without human oversight. The AI system's malfunction (hallucination) directly contributed to the risk of harm in legal proceedings, including misinformation and potential miscarriage of justice. This fits the definition of an AI Incident, as the AI's use has directly or indirectly led to harm (violation of legal obligations and potential harm to parties). The judge's order is a response to this incident, not merely a general policy or future risk, so it is not just complementary information or a hazard. Hence, the classification is AI Incident.

Steven Schwartz, a New York lawyer, used ChatGPT to search for prior rulings...

2023-05-30
DAGOSPIA
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used in a legal context to provide case precedents, but it produced fabricated information (hallucinations). The lawyer relied on this false output without verification, which directly caused harm in the form of professional embarrassment and undermined the legal process. The harm is realized and directly linked to the AI system's malfunction. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

The lawyer in a New York courtroom with ChatGPT, which gets it wrong and cites bogus rulings

2023-05-28
Corriere della Sera
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in a legal context where it generated false information (fake case citations). This misuse directly led to harm in the form of misinformation in a judicial setting, which can be considered a violation of legal and professional standards, thus constituting harm to rights and the legal process. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's erroneous outputs in a critical context.

USA: lawyer uses ChatGPT for an appeal, but the cited rulings are false

2023-05-29
Tgcom24
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was explicitly involved in the development and use stages, providing fabricated legal precedents that the attorney used in court. This led to misinformation that misled the court and could undermine the legal process, constituting a breach of legal obligations and harm to the judicial system's integrity. The harm is direct and material, as the false information was presented in a formal legal proceeding. Hence, this event meets the criteria for an AI Incident.

A lawyer uses ChatGPT in court but gets everything wrong: now he will have to answer for his errors

2023-05-29
Fanpage
Why's our monitor labelling this an incident or hazard?
The article explicitly involves the use of ChatGPT, an AI system, in generating legal case citations that were fabricated and led to a legal issue. The AI system's use directly caused harm by misleading the court and potentially affecting legal proceedings, which constitutes a violation of legal and professional standards. The harm is realized, not just potential, as the lawyer faces sanctions and the court must address the issue. This fits the definition of an AI Incident because the AI system's malfunction (producing false information) and the user's reliance on it led to harm related to legal rights and obligations.

ChatGPT, in big trouble again. This time in court

2023-05-29
il Giornale.it
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was directly involved in generating fabricated legal information that was used in court, leading to harm in the form of reputational damage and professional consequences for the lawyer. This constitutes a breach of obligations under applicable law (legal professional standards) and thus fits the definition of an AI Incident. The harm is realized and directly linked to the AI system's malfunction (hallucination and fabrication of false data).

ChatGPT's first time in court: in New York it cites bogus rulings and the judge rules against it

2023-05-29
Open
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was explicitly used in the legal defense and produced fabricated legal precedents, which the lawyer relied upon in court. This led to a failure in the legal process and a breach of trust in the legal defense, which is a violation of legal rights and harms the administration of justice. The harm is realized and directly linked to the AI system's malfunction (hallucination). Therefore, this qualifies as an AI Incident under the definitions provided.

The embarrassment of the lawyer who cited false rulings "suggested" by ChatGPT

2023-06-01
Giornalettismo
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was explicitly involved in the development and use of legal documents. Its malfunction—providing fabricated legal precedents—directly led to harm, including potential legal sanctions against the lawyer and damage to the judicial process's integrity. This fits the definition of an AI Incident because the AI's outputs caused real-world harm through misuse in a legal context, violating trust and potentially legal obligations.

Lawyer uses ChatGPT rulings for an appeal: but they were all false

2023-05-29
Il Primato Nazionale
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used in the legal argumentation process, and its fabricated outputs were relied upon by the lawyer, leading to the presentation of false legal precedents in court. This constitutes a violation of legal obligations and professional ethics, which falls under violations of applicable law intended to protect fundamental and legal rights. The harm is realized as the legal process was misled by AI-generated false information, thus qualifying as an AI Incident due to the direct role of the AI system in causing harm through misinformation in a legal context.

Lawyer uses ChatGPT for legal research. Ends up in court.

2023-05-29
Tra me & Tech
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose outputs were directly relied upon by a professional (lawyer) without proper verification, leading to the presentation of false legal cases in court. This caused harm in the form of legal and professional consequences for the lawyer and potentially affected the legal process. The AI system's malfunction or misuse is a direct factor in the harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a violation of legal obligations and harm to the professional's standing and possibly to the justice process.