Sullivan & Cromwell Apologizes for AI-Generated Errors in Court Filing

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Sullivan & Cromwell, a leading Wall Street law firm, apologized to a federal judge after submitting a court filing containing numerous fabricated legal citations generated by an AI system. The errors, discovered by an opposing firm, led to a review of the firm's internal processes and raised concerns about AI reliability in legal practice.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI to generate legal citations, which were fabricated ('hallucinations'), leading to errors in a court filing. This directly caused harm by misleading the court and opposing counsel, constituting a violation of legal and professional standards. The AI system's malfunction or misuse is central to the incident. The harm is realized, not just potential, as the false citations were submitted and discovered, prompting an apology and review. Hence, it meets the criteria for an AI Incident under violations of legal obligations and harm to the judicial process.[AI generated]
AI principles
Robustness & digital security
Transparency & explainability

Industries
Other

Affected stakeholders
Business

Harm types
Reputational
Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Content generation


Articles about this incident or hazard

A.I. 'Hallucinations' Created Errors in Court Filing, Top Law Firm Says

2026-04-21
The New York Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate legal citations, which were fabricated ('hallucinations'), leading to errors in a court filing. This directly caused harm by misleading the court and opposing counsel, constituting a violation of legal and professional standards. The AI system's malfunction or misuse is central to the incident. The harm is realized, not just potential, as the false citations were submitted and discovered, prompting an apology and review. Hence, it meets the criteria for an AI Incident under violations of legal obligations and harm to the judicial process.
AI hallucinated -- and now an elite law firm is profusely apologizing to a federal judge

2026-04-21
Business Insider
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating fabricated legal citations and errors (hallucinations) that were included in official court filings, which is a malfunction of the AI system. This malfunction directly led to harm in the form of misinformation in legal proceedings, which can be considered a violation of legal standards and a breach of obligations under applicable law protecting procedural rights. The law firm's apology and corrective actions confirm the recognition of harm caused. Hence, the event meets the criteria for an AI Incident as the AI system's malfunction directly led to harm in a legal context.
Sullivan & Cromwell law firm apologizes for AI 'hallucinations' in court filing

2026-04-21
Reuters
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate legal citations and content, and its malfunction (hallucinations) directly led to the submission of inaccurate and fabricated legal information in a court filing. This constitutes a violation of legal and ethical standards, which falls under violations of applicable law and obligations protecting fundamental rights (here, the right to a fair legal process). The harm is realized as the court was presented with false information, which could have influenced judicial decisions. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's malfunction in a legal setting.
Top Law Firm Apologizes to Bankruptcy Judge for AI Hallucination

2026-04-21
Bloomberg Business
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of an AI system (generative AI producing hallucinated citations) in a legal context, leading to a breach of legal obligations and potential harm to the court and involved parties. This fits the definition of an AI Incident because the AI system's malfunction directly caused a violation of legal rights and obligations (harm category c). The apology and corrective steps do not negate the fact that harm occurred. Therefore, this is classified as an AI Incident.
AI 'hallucinations' created errors in US court filing, top law firm says

2026-04-21
The Straits Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems in generating legal documents, which produced fabricated and erroneous content ('hallucinations'). This misuse or malfunction of AI led to the submission of false information in a court filing, which is a breach of legal and professional standards, thus constituting harm under the framework's category of violations of human rights or breach of obligations under applicable law. The harm is realized, not just potential, as the court filing contained errors that could affect judicial processes. Hence, this qualifies as an AI Incident.
Top US law firm Sullivan & Cromwell apologises for AI 'hallucinations' in court filing

2026-04-22
CNA
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating false legal citations and information ('hallucinations') that were submitted in a court filing, which is a misuse or malfunction of the AI system. This led to a violation of legal and ethical obligations, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law. The harm is realized as the inaccurate information was submitted to a federal court, potentially misleading the court and affecting legal outcomes. Although the errors were later corrected, the initial submission constitutes an AI Incident due to the direct involvement of AI in causing the harm. The firm's failure to follow AI policies and the secondary review's failure to detect the errors further support this classification.
AI Hallucinations in Filing by a Top Law Firm

2026-04-21
Reason
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to generate legal filings, which contained hallucinations (fabricated or false information). This misuse or malfunction of the AI system directly led to harm by introducing incorrect information into official court documents, potentially affecting legal outcomes and violating professional and ethical standards. The harm is realized and not merely potential, meeting the criteria for an AI Incident. The article also highlights the irony of a firm advising on safe AI deployment yet making such errors, underscoring the direct link between AI use and harm.
A.I. 'Hallucinations' Created Errors in Court Filing, Top Law Firm Says

2026-04-22
Democratic Underground
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the court filing, and its malfunction (hallucinations) caused fabricated citations and errors. This directly led to harm in the form of misinformation in a legal proceeding, which can be considered a violation of legal obligations and potentially human rights related to fair legal process. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's erroneous outputs in a critical legal context.
Elite law firm Sullivan & Cromwell admits to AI 'hallucinations'

2026-04-21
Financial Times News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT or similar) used in legal document preparation, whose malfunction (hallucinations) caused incorrect legal citations and misquotations. These errors were formally recognized in court and could undermine legal rights and the administration of justice, which are protected under applicable law. The AI's role was pivotal in causing these errors, fulfilling the criteria for an AI Incident. Although the firm is taking steps to improve policies, the harm has already occurred through the filing of erroneous legal documents.
Premier Wall Street law firm apologizes for AI 'hallucinations'

2026-04-21
Maryland Daily Record
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating erroneous legal citations and fabrications that were submitted in a court filing, which is a direct consequence of AI malfunction (hallucinations). This led to a breach of legal and ethical obligations, constituting harm under the framework's category of violations of obligations under applicable law. The firm acknowledged the errors and apologized, indicating the harm occurred and was materialized. Hence, this is an AI Incident rather than a hazard or complementary information.
Sullivan & Cromwell Apologizes to Judge for AI Hallucinations

2026-04-21
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
The AI system's hallucinations directly caused the filing of erroneous legal documents, which is a malfunction in the use of AI. This led to harm in the form of violations of legal procedural obligations and potential undermining of judicial processes, fitting the definition of harm under (c) violations of human rights or breach of obligations under applicable law. The event is not merely a potential risk but a realized harm, as evidenced by the apology and corrective filings. Hence, it is classified as an AI Incident rather than a hazard or complementary information.
Top Law Firm Apologizes to Bankruptcy Judge for AI Hallucination

2026-04-21
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate legal citations, but it produced inaccurate information (hallucinations), which led to an apology from the law firm. This indicates a malfunction or misuse of the AI system's outputs. While the harm is not physical or severe, it affects the integrity of legal proceedings and could undermine trust in legal documents. This fits the definition of an AI Incident because the AI system's malfunction directly led to a harm (inaccurate legal citations causing procedural and reputational harm).
A.I. 'Hallucinations' Created Errors in Court Filing, Top Law Firm Says

2026-04-21
DNYUZ
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems in generating legal documents, which produced fabricated and erroneous content ('hallucinations'). These errors were submitted to a federal court, leading to a breach of legal and professional obligations. The harm is realized as the integrity of the legal process was compromised, and the law firm had to apologize and review other filings. This meets the criteria for an AI Incident because the AI system's malfunction directly led to a violation of legal obligations and potential harm to the judicial process.
AI hallucinations found in high-profile Wall Street law firm filing

2026-04-22
The Guardian
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions that AI-generated hallucinations caused errors in a court filing, which is a direct consequence of AI system malfunction or misuse. The harm involves inaccurate legal documents submitted to a federal court, which can disrupt legal proceedings and violate ethical/legal obligations. This fits the definition of an AI Incident because the AI system's use directly led to harm (errors in legal filings) affecting rights and legal processes. The firm's failure to follow AI policies and secondary review processes further supports the classification as an incident rather than a mere hazard or complementary information.
Law firm that charges $3,000 an hour caught using AI

2026-04-22
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The law firm's use of AI to draft legal documents that contained false information constitutes an AI Incident because the AI system's malfunction (hallucinations) directly led to harm in the form of misinformation in legal proceedings. This impacts the integrity of the judicial process and could be seen as a violation of legal obligations and professional rights. The harm is realized, not just potential, as the flawed submission was made to a court and required an apology and acknowledgment of error. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
US Law Firm Apologizes After AI Hallucinations Made It To Legal Filing

2026-04-22
ZeroHedge
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating hallucinated (fabricated) citations that were included in a legal filing, which is a direct malfunction of the AI system. This malfunction led to the submission of incorrect legal information to a court, which can be considered a violation of legal obligations and professional standards, thus constituting harm under the framework's category (c) - violations of human rights or breach of obligations under applicable law. The harm is realized, not just potential, as the filing was submitted and required an apology and remedial action. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Founder of one of Wall Street's biggest law firms Sullivan & Cromwell sends apology letter to judge after AI 'messes up' court filing; says: We sincerely ...

2026-04-23
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating erroneous legal content that was filed in court, producing inaccuracies and misleading information. The law firm's failure to follow verification protocols and the resulting errors disrupted the court process and violated legal standards, which aligns with harm under violations of applicable law protecting fundamental rights. The harm is realized, not just potential, as the court and the parties were burdened by the errors. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.
Even the fancy lawyers are getting pantsed by AI.

2026-04-22
The Verge
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate legal citations, but it produced fabricated references, which were then submitted in official court filings. This misuse of AI led to a direct harm in the form of legal procedural violations and misinformation in a judicial context. The harm is realized and significant, as it affects the administration of justice and legal rights. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs in a legal setting.
Top Wall Street Law Firm Apologizes to Judge for AI Hallucination in Court Filing

2026-04-22
www.theepochtimes.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI hallucinations causing inaccurate citations and errors in a court filing, which is a direct consequence of AI system malfunction. The harm here is the submission of false legal information, which can undermine legal proceedings and violate legal standards, thus constituting harm under violations of obligations under applicable law. The law firm's apology confirms recognition of the AI system's role in causing this harm. Hence, this is an AI Incident rather than a hazard or complementary information.
US Law Firm Apologizes For AI Hallucinations in Filing

2026-04-22
Cointelegraph
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of an AI system (AI hallucinations causing incorrect citations) in a high-stakes legal context. The AI system's errors directly led to harm in the form of inaccurate court filings, which breach legal obligations and could undermine the integrity of judicial processes. The law firm's failure to follow AI policies and oversight contributed to the incident. This fits the definition of an AI Incident because the AI system's malfunction directly caused harm related to violations of applicable law and legal obligations. The firm's remedial actions and investigation are complementary information but do not change the classification of the event as an AI Incident.
AI hallucinations found in high-profile Wall Street law firm filing - AOL

2026-04-22
AOL.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system in generating legal content that contained hallucinated errors, leading to inaccurate court filings. These errors were material enough to require a formal apology and corrected submission, indicating realized harm to the legal process and possibly to the parties' rights. The AI system's malfunction directly caused these harms, fulfilling the criteria for an AI Incident under the framework. The event is not merely a potential risk or a complementary update but a realized harm caused by AI.
Elite law firm apologises for AI 'hallucinations' in bankruptcy case

2026-04-22
CityAM
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate legal content that was flawed and misleading, resulting in harm to the integrity of legal processes and potentially violating legal obligations. The AI's malfunction (hallucinations) directly caused the submission of incorrect information in a high-profile bankruptcy case, which is a breach of obligations under applicable law protecting fundamental rights and legal standards. Therefore, this qualifies as an AI Incident due to the realized harm stemming from AI-generated errors in a legal context.
28 fake citations: How AI failed major law firm in court

2026-04-22
The News International
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI in drafting legal documents, which produced hallucinated and fabricated citations. This misuse directly led to harm by undermining the accuracy and reliability of court filings, potentially prejudicing the opposing party and the court. The law firm's failure to follow internal AI use policies and the resulting submission of false information meets the criteria for an AI Incident due to violation of legal rights and harm to the judicial process. The harm is realized, not just potential, and the AI system's malfunction or misuse is a direct contributing factor.
Top Law Firm Admits to AI 'Hallucinations' in Bankruptcy Filing Tied to Alleged Scam Network - Decrypt

2026-04-22
Decrypt
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system in generating legal filings, which produced fabricated and erroneous citations ('hallucinations'). This misuse directly led to harm in the form of misleading the court and potentially prejudicing the legal process, which is a violation of legal rights and obligations. The law firm's failure to follow AI use policies and the resulting submission of incorrect information to the court demonstrate a malfunction or misuse of the AI system causing harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
US Law Firm Issues Apology After AI-Generated Errors Appear in Court Filing

2026-04-22
FinanceFeeds
Why's our monitor labelling this an incident or hazard?
An AI system was used in the preparation of a court filing, and its hallucinated outputs directly led to the submission of incorrect legal citations and errors in an official legal document. This constitutes a failure in the use and oversight of the AI system, resulting in harm to the legal process and potentially violating legal obligations. The harm is realized, as the court filing contained false information that could mislead judicial decision-making. Therefore, this qualifies as an AI Incident due to the direct involvement of AI-generated errors causing harm in a legal context.
Elite Wall Street law firm apologizes for error-laden motion created by AI

2026-04-22
ABA Journal - Law News Now
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a legal motion with fabricated citations, which is a malfunction of the AI system. This malfunction directly led to the filing of an erroneous legal document, which can be considered harm to the legal process and potentially to the parties' rights and interests. The firm's apology and disclosure to the court confirm the recognition of harm caused. Therefore, this qualifies as an AI Incident because the AI system's malfunction directly led to harm (errors in legal proceedings).
Wall Street Law Firm Apologises For AI Errors | Silicon UK Tech

2026-04-22
Silicon UK
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated fabricated content ('hallucinations') that was used in an official legal filing, leading to misinformation presented to a federal judge. This constitutes a violation of legal obligations and intellectual property rights, fulfilling the criteria for harm under the AI Incident definition (specifically, violations of human rights or breach of legal obligations). The AI system's malfunction (fabrication of false citations and quotes) directly led to the harm. The apology and acknowledgment of policy failure confirm the AI system's role in causing the incident. Hence, the classification as AI Incident is appropriate.
Sullivan & Cromwell Issues Court Apology After AI Generates False Legal Citations

2026-04-22
Blockonomi
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system for legal research and document creation, which produced fabricated case citations and incorrect legal references. These AI-generated inaccuracies were submitted to a federal court, constituting a direct harm to the legal process and potentially to the parties involved in the litigation. The law firm acknowledged verification failures and has taken remedial actions, confirming the AI system's malfunction and misuse led to the incident. This meets the criteria for an AI Incident because the AI system's malfunction directly caused harm through the submission of false legal information, violating legal obligations and potentially impacting judicial decisions.
Sullivan & Cromwell Apologises to Judge After Fabricated Citations Exposed - LawFuel

2026-04-22
LawFuel
Why's our monitor labelling this an incident or hazard?
The article describes a case where AI-generated hallucinations caused fabricated legal citations and misquotations in an official court document. This is a direct harm resulting from the AI system's malfunction or misuse, impacting legal proceedings and trust in legal processes. Therefore, it meets the criteria for an AI Incident due to the direct harm caused by the AI system's outputs.
Top US law firm admits AI hallucinations in legal filing

2026-04-22
crypto.news
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating hallucinated (false) citations and errors in a legal filing, which were submitted to a court. This directly led to harm in the form of inaccurate legal documents, which can disrupt legal proceedings and violate legal standards. The AI system's malfunction and the failure of human review caused this harm. The harm is realized, not just potential, and relates to violations of legal obligations and risks to the integrity of judicial processes. Hence, it meets the criteria for an AI Incident.
Top Wall Street Law Firm Embarrassed As AI Hallucinations Derail Bankruptcy Motion

2026-04-23
TimesNow
Why's our monitor labelling this an incident or hazard?
The AI system's hallucinations directly led to the submission of inaccurate legal documents, which is a misuse of AI in a critical legal context. This caused harm by misleading the court and potentially affecting the administration of justice, which is a violation of legal obligations. The event is not merely a product launch or update but a concrete incident where AI malfunction caused harm, fitting the definition of an AI Incident.
Another 'hallucinated' court filing highlights the difference between Silicon Valley and the rest of the world

2026-04-23
CNN International
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI that produced fabricated legal citations and errors in a court filing, which were only caught after submission. This constitutes a malfunction of the AI system that directly caused harm by introducing false information into a legal process, potentially violating legal and professional obligations. The harm is a violation of legal standards and could be considered a breach of obligations under applicable law protecting procedural rights. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's malfunction in a critical legal context.
US law firm apologizes after AI hallucinations made it to a legal filing

2026-04-23
Signs Of The Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating hallucinated (false) citations in a legal filing, which is a direct malfunction of the AI system's outputs. The law firm used AI tools in preparing the filing, but the review process failed to catch the errors, leading to the submission of inaccurate legal information to a court. This constitutes a violation of legal obligations and risks harm to the legal process and potentially to individuals' rights. The harm is realized in the form of incorrect court filings, and the AI system's malfunction is a pivotal factor. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Sullivan & Cromwell Legal Snafu Shreds AI's Hype: Gautam Mukunda

2026-04-23
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate citations in a court filing, but it produced hallucinated (false or fabricated) citations. This malfunction directly led to a legal procedural error requiring an apology to a judge. The harm here relates to a violation of legal obligations and potentially undermines the rights of parties involved in the legal process, fitting the definition of an AI Incident due to breach of obligations under applicable law. The event involves the use and malfunction of an AI system causing realized harm, not just a potential risk or complementary information.
Law firm that bills over $2,000 per hour apologizes for error-riddled court documents

2026-04-24
The Cool Down
Why's our monitor labelling this an incident or hazard?
The AI system was used in the development and use phases (document preparation) and malfunctioned by generating hallucinated content, which directly led to harm in the form of misinformation in official court filings. This undermines the legal process and could lead to violations of legal rights or mismanagement of justice. The harm is realized, not just potential, as the erroneous documents were submitted to the court. Hence, this fits the definition of an AI Incident rather than a hazard or complementary information.
Sullivan & Cromwell law firm apologizes for AI 'hallucinations' in court filing

2026-04-24
Missouri Lawyers Media
Why's our monitor labelling this an incident or hazard?
An AI system was used in the preparation of a court filing, and its malfunction (hallucinations) produced inaccurate and fabricated legal citations. This malfunction led to a breach of legal and ethical obligations in court submissions, which is a violation of applicable law protecting the integrity of the legal process and professional standards. The harm materialized in the form of misleading court documents, which could have damaged the judicial process or the parties involved had they gone uncorrected. Therefore, this qualifies as an AI Incident due to the realized harm of submitting inaccurate legal information generated by AI in a formal legal context.
Elite Wall Street Law Firm Sullivan & Cromwell Apologizes to Federal Judge for AI Hallucinations in Court Filing

2026-04-25
Breitbart
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating fabricated legal citations ('hallucinations') that were included in an official court filing, leading to a breach of legal standards and professional obligations. The harm is realized as the court was presented with false information, which could have misled judicial decision-making. The firm's failure to detect these errors before submission further implicates the AI system's malfunction or misuse. Hence, the AI system's use directly led to a violation of legal obligations, qualifying this as an AI Incident rather than a hazard or complementary information.
Wall Street scandal: a major law firm used AI to draft court documents and apologizes for the "hallucinations" generated by the artificial intelligence

2026-04-22
Clarin
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system for drafting judicial documents, which malfunctioned by generating false and fabricated legal citations ('hallucinations'). This misuse led to the submission of erroneous documents in a federal court, directly impacting legal proceedings and violating legal obligations. The harm includes violation of legal rights and undermining the judicial process, fitting the definition of harm to human rights and breach of legal obligations. The incident is materialized and not merely potential, thus qualifying as an AI Incident rather than a hazard or complementary information.
Wall Street scandal: the law firm involved in the YPF case drafted documents with AI and apologized for the hallucinations it generated

2026-04-22
Ambito
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to draft legal documents, and its malfunction (hallucinations) led to errors requiring an apology. The involvement of AI in producing inaccurate legal content in a serious legal case implies direct harm to the legal process and potentially to the rights of affected parties. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's malfunction in a critical context.
A law firm that charges 8,000 euros an hour apologizes for errors in documents created by AI

2026-04-22
elEconomista.es
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system for legal document generation, which produced inaccurate and fabricated content ('hallucinations'). These errors were not caught because established AI use policies and review processes were not followed. The harm includes a violation of legal and ethical duties, which falls under breaches of obligations intended to protect fundamental rights. The AI system's malfunction and misuse directly contributed to this harm, meeting the criteria for an AI Incident.

Law firm representing Trump apologizes to a judge for submitting citations invented by an AI

2026-04-22
www.elcolombiano.com
Why's our monitor labelling this an incident or hazard?
The law firm explicitly used AI tools to generate legal citations, and the AI produced fabricated information ('hallucinations'). This led to the submission of incorrect legal documents to a court, which is a violation of legal obligations and ethical standards, thus constituting harm under the category of violations of human rights or breach of legal obligations. The incident is materialized (not just potential), and the AI system's malfunction is the direct cause. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Law firm Sullivan & Cromwell acknowledges AI 'hallucinations'

2026-04-22
Expansión
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (licensed ChatGPT) in generating legal documents. The AI's malfunction (hallucinations) directly caused the submission of incorrect legal citations and references, which is a breach of legal standards and could harm the legal process and parties involved. This fits the definition of an AI Incident as the AI system's malfunction directly led to harm in the form of violations of legal obligations and potential harm to the rights of parties in the case. The firm's apology and acknowledgment of policy violations further confirm the AI's role in causing harm. Thus, the event is classified as an AI Incident.

Sullivan & Cromwell admits errors from AI hallucinations in case linked to Prince Group

2026-04-22
DiarioBitcoin
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system in generating legal documents, where the AI's hallucinations caused factual errors in a court submission. These errors represent a breach of professional and legal standards, constituting harm to the legal process and potentially to the parties involved. The AI system's malfunction directly led to this harm, fulfilling the criteria for an AI Incident. Although the firm corrected the errors and is reviewing procedures, the harm occurred and was acknowledged publicly, making this more than a mere hazard or complementary information.

Law firm apologizes for AI errors in legal document

2026-04-22
Cointelegraph
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating hallucinated (false) citations in a legal document, which is a malfunction of the AI system. This malfunction directly led to the submission of incorrect legal information, which can be considered harm to the legal process and a violation of legal obligations (harm under category (c): violations of obligations under applicable law). The law firm admits responsibility and has taken corrective measures, but the harm has already occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

A prestigious law firm filed court documents riddled with AI "hallucinations"

2026-04-23
La Nacion
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems in the preparation of legal documents, which contained fabricated and inaccurate information due to AI 'hallucinations'. The flawed documents were submitted in a legal case, from which harm can reasonably be inferred: violations of legal obligations and professional standards. The AI system's malfunction or misuse directly led to this harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly resulted in a breach of legal obligations and potential harm to the judicial process.

Another court document with AI "hallucinations" highlights the gap between Silicon Valley and the rest of the world | CNN

2026-04-23
CNN Español
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI/large language model) that produced fabricated and incorrect legal citations, which were then submitted in a judicial document. This misuse directly caused harm by misleading the court and undermining legal processes, constituting a violation of professional and legal standards. The harm is realized and not merely potential, as the document was officially submitted and required correction. Hence, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs in a legal context.

Another court document with AI "hallucinations" highlights the gap between Silicon Valley and the rest of the world

2026-04-23
Local3News.com
Why's our monitor labelling this an incident or hazard?
The event involves the use and malfunction of an AI system (generative AI producing legal text) that directly led to harm: the submission of false legal citations and errors in a judicial document. This constitutes a breach of professional and legal obligations, impacting the integrity of legal proceedings and potentially violating rights related to fair legal process. The harm is realized and documented, not merely potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.