Chicago Housing Authority Lawyers Cite Fake AI-Generated Case in Court Filing

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Lawyers for the Chicago Housing Authority used ChatGPT to draft a legal motion that cited a nonexistent court case, without verifying the AI's output. The error, discovered in a post-trial motion, led to professional embarrassment and potential sanctions, and further complicated the authority's efforts to overturn a $24 million jury verdict.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of an AI system (ChatGPT) whose output was incorporated into legal documents without verification, leading to the citation of a nonexistent court case. This misuse directly introduced misinformation into a legal proceeding, violating professional and ethical standards and potentially breaching legal obligations. Although no physical harm or direct injury occurred, the incident damaged the integrity of the legal process and could affect the rights of the parties involved. The AI system's malfunction or misuse was pivotal in causing this harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability, Robustness & digital security, Safety, Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
Government, Workers

Harm types
Reputational, Economic/Property, Public interest, Psychological

Severity
AI incident

Business function:
Compliance and justice

AI system task:
Content generation

Articles about this incident or hazard

Lawyers for Chicago Housing Authority used ChatGPT to cite nonexistent court case

2025-07-17
Chicago Tribune
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) whose output was incorporated into legal documents without verification, leading to the citation of a nonexistent court case. This misuse directly introduced misinformation into a legal proceeding, violating professional and ethical standards and potentially breaching legal obligations. Although no physical harm or direct injury occurred, the incident damaged the integrity of the legal process and could affect the rights of the parties involved. The AI system's malfunction or misuse was pivotal in causing this harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Lawyers for Chicago Housing Authority used ChatGPT to cite nonexistent court case

2025-07-18
ArcaMax
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in a legal context, where it generated a fictitious court case citation that was used in a legal motion. This misuse led to a significant error that could affect the legal rights and outcomes for the parties involved, constituting a violation of legal and ethical standards. The harm is indirect but material, as it undermines the integrity of the legal process and could lead to unjust outcomes. Therefore, this qualifies as an AI Incident under the definition of violations of human rights or breach of obligations under applicable law intended to protect fundamental rights, including legal rights. The firm's failure to verify the AI output and the subsequent disciplinary actions further confirm the seriousness of the incident.

Housing authority lawyer cites fake AI-generated case in court filing

2025-07-18
The Real Deal New York
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was used to draft a legal motion that contained fabricated case law, which is a direct misuse of AI-generated content leading to harm in the form of legal and reputational damage to the Chicago Housing Authority and its stakeholders. The fabricated information in a legal filing constitutes a breach of legal obligations and could be seen as a violation of rights related to fair legal process. The harm is realized and significant, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Lawyers for Chicago Housing Authority used ChatGPT to cite nonexistent court case

2025-07-18
The Quad-City Times
Why's our monitor labelling this an incident or hazard?
The article states that ChatGPT was used to generate legal citations, resulting in the citation of a fictitious case — an error caused by AI use. This is a misuse, or a failure to verify AI output, occurring in the use phase. However, the harm is limited to a procedural/legal error without direct or indirect physical harm, rights violations, or other significant harms as defined. The firm has taken responsibility and implemented corrective measures, and the court is addressing the issue. This fits the definition of Complementary Information, as it reports on an AI-related issue and the responses to it rather than describing an AI Incident or Hazard.

Lawyers for Chicago Housing Authority used ChatGPT to cite nonexistent court case

2025-07-17
pantagraph.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (ChatGPT) used in the development of legal documents. The AI-generated fabricated citation was included in a court filing, which is a misuse of the AI system and a failure to verify its outputs. This led to a breach of professional and ethical standards in the legal process, which can be considered a violation of obligations under applicable law protecting fundamental rights (legal rights to fair process). The harm is realized, not just potential, as the fabricated citation was actually submitted to the court, causing disruption and requiring remedial actions. Therefore, this qualifies as an AI Incident due to the direct involvement of AI in causing harm related to legal rights and professional integrity.

Lawyers for Chicago Housing Authority used ChatGPT to cite nonexistent court case

2025-07-18
pantagraph.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves the use of an AI system (ChatGPT) whose output was relied upon without proper verification, leading to the citation of a fictitious court case. This misuse directly led to a professional and procedural harm in the legal process, which can be considered a violation of legal and ethical standards. While no physical harm or direct violation of human rights is reported, the incident constitutes a breach of obligations under applicable law related to professional conduct and legal integrity. Therefore, it meets the criteria for an AI Incident due to the direct harm caused by the AI system's misuse in a legal context.

Lawyers for Chicago Housing Authority used ChatGPT to cite nonexistent court case

2025-07-18
Denver Gazette
Why's our monitor labelling this an incident or hazard?
The event clearly involves the use of an AI system (ChatGPT) in a professional legal context. The AI's output directly led to the citation of a nonexistent court case, which is a form of misinformation causing harm to the integrity of legal proceedings and professional standards. This constitutes a violation of obligations under applicable law and professional ethics, fitting the definition of an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights (here, legal procedural rights and professional conduct). The harm is realized, not just potential, as it has led to court hearings, sanctions motions, and professional consequences. Therefore, this event qualifies as an AI Incident.