Sullivan & Cromwell Apologizes for AI-Generated Errors in Court Filing

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Sullivan & Cromwell, a leading Wall Street law firm, apologized to a federal judge after submitting a court filing containing numerous fabricated legal citations generated by an AI system. The errors, discovered by an opposing firm, led to a review of the firm's internal processes and raised concerns about AI reliability in legal practice.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI to generate legal citations, which were fabricated ('hallucinations'), leading to errors in a court filing. This directly caused harm by misleading the court and opposing counsel, constituting a violation of legal and professional standards. The AI system's malfunction or misuse is central to the incident. The harm is realized, not just potential, as the false citations were submitted and discovered, prompting an apology and review. Hence, it meets the criteria for an AI Incident under violations of legal obligations and harm to the judicial process.[AI generated]
AI principles
Robustness & digital security
Transparency & explainability

Industries
Other

Affected stakeholders
Business

Harm types
Reputational
Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Content generation


Articles about this incident or hazard

A.I. 'Hallucinations' Created Errors in Court Filing, Top Law Firm Says

2026-04-21
The New York Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate legal citations, which were fabricated ('hallucinations'), leading to errors in a court filing. This directly caused harm by misleading the court and opposing counsel, constituting a violation of legal and professional standards. The AI system's malfunction or misuse is central to the incident. The harm is realized, not just potential, as the false citations were submitted and discovered, prompting an apology and review. Hence, it meets the criteria for an AI Incident under violations of legal obligations and harm to the judicial process.

AI hallucinated -- and now an elite law firm is profusely apologizing to a federal judge

2026-04-21
Business Insider
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating fabricated legal citations and errors (hallucinations) that were included in official court filings, which is a malfunction of the AI system. This malfunction directly led to harm in the form of misinformation in legal proceedings, which can be considered a violation of legal standards and potentially a breach of obligations under applicable law protecting intellectual property and procedural rights. The law firm's apology and corrective actions confirm the recognition of harm caused. Hence, the event meets the criteria for an AI Incident as the AI system's malfunction directly led to harm in a legal context.

Sullivan & Cromwell law firm apologizes for AI 'hallucinations' in court filing

2026-04-21
Reuters
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate legal citations and content, and its malfunction (hallucinations) directly led to the submission of inaccurate and fabricated legal information in a court filing. This constitutes a violation of legal and ethical standards, which falls under violations of applicable law and obligations protecting fundamental rights (here, the right to a fair legal process). The harm is realized as the court was presented with false information, which could have influenced judicial decisions. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's malfunction in a legal setting.

Top Law Firm Apologizes to Bankruptcy Judge for AI Hallucination

2026-04-21
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of an AI system (generative AI producing hallucinated citations) in a legal context, leading to a breach of legal obligations and potential harm to the court and involved parties. This fits the definition of an AI Incident because the AI system's malfunction directly caused a violation of legal rights and obligations (harm category c). The apology and corrective steps do not negate the fact that harm occurred. Therefore, this is classified as an AI Incident.

AI 'hallucinations' created errors in US court filing, top law firm says

2026-04-21
The Straits Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems in generating legal documents, which produced fabricated and erroneous content ('hallucinations'). This misuse or malfunction of AI led to the submission of false information in a court filing, which is a breach of legal and professional standards, thus constituting harm under the framework's category of violations of human rights or breach of obligations under applicable law. The harm is realized, not just potential, as the court filing contained errors that could affect judicial processes. Hence, this qualifies as an AI Incident.

Top US law firm Sullivan & Cromwell apologises for AI 'hallucinations' in court filing

2026-04-22
CNA
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating false legal citations and information ('hallucinations') that were submitted in a court filing, which is a misuse or malfunction of the AI system. This led to a violation of legal and ethical obligations, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law. The harm is realized as the inaccurate information was submitted to a federal court, potentially misleading the court and affecting legal outcomes. Although the errors were later corrected, the initial submission constitutes an AI Incident due to the direct involvement of AI in causing the harm. The firm's failure to follow AI policies and the secondary review's failure to detect the errors further support this classification.

AI Hallucinations in Filing by a Top Law Firm

2026-04-21
Reason
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to generate legal filings, which contained hallucinations (fabricated or false information). This misuse or malfunction of the AI system directly led to harm by introducing incorrect information into official court documents, potentially affecting legal outcomes and violating professional and ethical standards. The harm is realized and not merely potential, meeting the criteria for an AI Incident. The article also highlights the irony of a firm advising on safe AI deployment yet making such errors, underscoring the direct link between AI use and harm.

A.I. 'Hallucinations' Created Errors in Court Filing, Top Law Firm Says

2026-04-22
Democratic Underground
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the court filing, and its malfunction (hallucinations) produced fabricated citations and errors. This directly led to harm in the form of misinformation in a legal proceeding, which can be considered a violation of legal obligations and potentially of human rights related to a fair legal process. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's erroneous outputs in a critical legal context.

Elite law firm Sullivan & Cromwell admits to AI 'hallucinations'

2026-04-21
Financial Times News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT or similar) used in legal document preparation, whose malfunction (hallucinations) caused incorrect legal citations and misquotations. These errors were formally recognized in court and could undermine legal rights and the administration of justice, which are protected under applicable law. The AI's role was pivotal in causing these errors, fulfilling the criteria for an AI Incident. Although the firm is taking steps to improve policies, the harm has already occurred through the filing of erroneous legal documents.

Premier Wall Street law firm apologizes for AI 'hallucinations'

2026-04-21
Maryland Daily Record
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating erroneous legal citations and fabrications that were submitted in a court filing, a direct consequence of AI malfunction (hallucinations). This led to a breach of legal and ethical obligations, constituting harm under the framework's category of violations of obligations under applicable law. The firm acknowledged the errors and apologized, indicating that the harm materialized. Hence, this is an AI Incident rather than a hazard or complementary information.

Sullivan & Cromwell Apologizes to Judge for AI Hallucinations

2026-04-21
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
The AI system's hallucinations directly caused the filing of erroneous legal documents, which is a malfunction in the use of AI. This led to harm in the form of violations of legal procedural obligations and potential undermining of judicial processes, fitting the definition of harm under (c) violations of human rights or breach of obligations under applicable law. The event is not merely a potential risk but a realized harm, as evidenced by the apology and corrective filings. Hence, it is classified as an AI Incident rather than a hazard or complementary information.

Top Law Firm Apologizes to Bankruptcy Judge for AI Hallucination

2026-04-21
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate legal citations but produced inaccurate information (hallucinations), which led to an apology from the law firm. This indicates a malfunction or misuse of the AI system's outputs. While the harm is not physical or severe, it affects the integrity of legal proceedings and could undermine trust in legal documents. This fits the definition of an AI Incident because the AI system's malfunction directly led to harm: inaccurate legal citations causing procedural and reputational damage.

A.I. 'Hallucinations' Created Errors in Court Filing, Top Law Firm Says

2026-04-21
DNYUZ
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems in generating legal documents, which produced fabricated and erroneous content ('hallucinations'). These errors were submitted to a federal court, leading to a breach of legal and professional obligations. The harm is realized as the integrity of the legal process was compromised, and the law firm had to apologize and review other filings. This meets the criteria for an AI Incident because the AI system's malfunction directly led to a violation of legal obligations and potential harm to the judicial process.