Michael Cohen Submits AI-Generated Fake Legal Citations in Court Filing

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Michael Cohen, former lawyer for Donald Trump, submitted a court motion containing fake legal case citations generated by Google Bard, an AI chatbot. Although the incident caused embarrassment and procedural issues, a federal judge declined to impose sanctions, finding no evidence of bad faith or intentional misconduct.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (Google Bard) was used for legal research and generated fake cases that were mistakenly believed to be real and cited in court filings. This misuse of AI-generated content led to false legal claims, which is a violation of legal obligations and could be considered a breach of law and ethical standards. The event involves realized harm in the form of misleading the court and potential perjury, directly linked to the AI system's outputs. Therefore, it meets the criteria for an AI Incident due to the direct role of AI in causing harm related to legal rights and obligations.[AI generated]
AI principles
Accountability, Safety, Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
Government

Harm types
Reputational, Public interest

Severity
AI incident

AI system task
Content generation

Articles about this incident or hazard

Michael Cohen will not face sanctions after generating fake cases...

2024-03-20
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Google Bard) generating false information that was used in a legal context. However, the harm was limited to reputational embarrassment and procedural issues rather than direct or indirect harm as defined by the framework (e.g., injury, rights violations, or operational disruption). The judge declined to impose sanctions, indicating no legal harm or violation occurred. Therefore, this event does not meet the threshold for an AI Incident. It also does not represent a plausible future harm scenario (AI Hazard) since the harm has already occurred and was minimal. Instead, it provides contextual information about AI's impact on legal proceedings and the need for caution, fitting the definition of Complementary Information.
Michael Cohen Won't Be Sanctioned For Citing Fake Cases -- Though He Likely Committed Perjury, Judge Rules

2024-03-20
Forbes
Why's our monitor labelling this an incident or hazard?
The AI system (Google Bard) was used for legal research and generated fake cases that were mistakenly believed to be real and cited in court filings. This misuse of AI-generated content led to false legal claims, which is a violation of legal obligations and could be considered a breach of law and ethical standards. The event involves realized harm in the form of misleading the court and potential perjury, directly linked to the AI system's outputs. Therefore, it meets the criteria for an AI Incident due to the direct role of AI in causing harm related to legal rights and obligations.
Judge won't sanction Michael Cohen for citing fake cases in AI-generated legal filing

2024-03-20
Fox News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Google Bard) generating fake legal citations that were used in a court filing. This is a clear example of AI involvement in the development and use of content. However, the judge ruled no sanctions were warranted, indicating no direct or indirect harm meeting the definitions of injury, rights violation, or other significant harm occurred. The event highlights risks and challenges of AI-generated misinformation in legal documents but does not describe an incident causing harm or a plausible future harm scenario. It mainly provides an update on the judicial handling of AI-generated misinformation, fitting the definition of Complementary Information rather than an Incident or Hazard.
UPDATE 1-Michael Cohen will not face sanctions after generating fake cases with AI

2024-03-20
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google Bard) generating false legal citations that were used in court filings, which is a misuse or malfunction of the AI system leading to harm in the form of misleading the court and undermining legal integrity. This constitutes a violation of legal procedural norms and could be seen as harm to the judicial process and potentially a breach of obligations under applicable law. Although no sanctions were imposed and no physical or direct personal harm occurred, the AI's role was pivotal in causing this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Ex-Michael Cohen attorney who used AI in court docs made...

2024-03-20
New York Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google Bard) generating false legal citations that were used in court documents. This constitutes a malfunction or misuse of AI outputs in a legal setting. However, the court ruled the mistake as negligent but not malicious or harmful in a way that meets the criteria for an AI Incident. No direct or indirect harm as defined (injury, rights violations, disruption, etc.) is reported. The event is primarily about an error due to AI-generated content, with no evidence of resulting harm or legal breach caused by the AI system. Therefore, it does not qualify as an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on the challenges and risks of AI use in legal practice without a new primary harm event.
Manhattan Judge Blasts Michael Cohen's Lawyer For Citing Fake Cases Generated By AI In Court

2024-03-21
The Daily Caller
Why's our monitor labelling this an incident or hazard?
The AI system (Google Bard) generated false legal cases that were used by a lawyer in court, leading to misinformation and undermining the integrity of the legal process. This is a direct consequence of the AI system's outputs being relied upon without proper verification, causing harm to the administration of justice and potentially violating legal obligations. Therefore, this qualifies as an AI Incident due to the realized harm stemming from the AI system's use in a critical legal context.
Judge declines to sanction Michael Cohen, lawyer over AI-generated fake case citations

2024-03-20
The Hill
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating fake case citations that were submitted in court, constituting misuse of AI in a legal context. This misuse directly led to harm in the form of fraudulent legal filings, which is a violation of legal obligations and undermines the judicial process. Although no sanctions were imposed, the harm and violation occurred. Therefore, this qualifies as an AI Incident due to the direct involvement of AI misuse causing harm related to legal rights and obligations.
Michael Cohen and lawyer avoid sanctions for citing fake cases invented by AI

2024-03-20
Ars Technica
Why's our monitor labelling this an incident or hazard?
The AI system (Google Bard) generated fake legal citations that were used in a court filing, directly leading to a violation of legal and professional standards. This constitutes a breach of obligations under applicable law and professional conduct rules protecting the integrity of legal proceedings, fitting the definition of an AI Incident. Although no sanctions were imposed and no physical harm occurred, the incident caused harm to the legal process and professional conduct, which is a recognized form of harm under the framework. The event is not merely a potential risk or a complementary update but a realized incident involving AI misuse or malfunction leading to harm.
Judge Suggests Michael Cohen Perjured Himself, Won't End Supervised Release

2024-03-20
NewsMax
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI (Google Bard) generating fake case citations, which is an AI system involvement. The AI output was used in legal filings, leading to embarrassment and judicial criticism. However, the harm is limited to professional misconduct and reputational damage, not meeting the threshold for injury, rights violations, or other significant harms defined for AI Incidents. There is no indication of plausible future harm beyond the existing embarrassment, and the judge did not impose sanctions, indicating no severe legal harm occurred. The article mainly discusses the judicial response and implications for AI use in law, fitting the definition of Complementary Information rather than an Incident or Hazard.
No Sanctions in Michael Cohen Hallucinated Citations Matter

2024-03-20
Reason
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Google Bard, a generative AI) whose outputs were used in legal filings. The AI generated hallucinated (non-existent) case citations, which were incorporated into a court motion, leading to misinformation submitted to the court. This constitutes an AI Incident because the AI system's use directly led to a harm: the submission of false legal citations, which can be considered harm to the integrity of legal proceedings and potentially a violation of legal norms. Although no physical harm or direct legal sanctions were imposed, the reputational and procedural harm is significant and directly linked to the AI's malfunction (hallucination). The court's decision not to impose sanctions does not negate the fact that harm occurred due to the AI system's outputs. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Michael Cohen will not face sanctions after generating fake cases with AI | Politics

2024-03-20
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Google Bard) generating fake legal citations that were mistakenly used in court filings. This shows AI system involvement and misuse. However, the judge declined to sanction the involved parties, indicating no direct harm or legal violation was established from the AI-generated content. The event is more about the implications and caution needed when using AI-generated information in legal settings. It does not describe realized harm or a plausible future harm scenario that would qualify as an AI Incident or AI Hazard. Instead, it informs about the judiciary's stance and the risks of AI-generated misinformation in legal documents, fitting the definition of Complementary Information.
Michael Cohen Lawyer Who Cited Bogus Cases Avoids Sanctions

2024-03-20
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
An AI system was involved in assisting the creation of a legal motion that contained false information (bogus case citations). This constitutes a misuse or error in the use of AI, but the court explicitly ruled no sanctions due to lack of bad faith. There is no indication that this led to any direct or indirect harm as defined by the framework (e.g., injury, rights violation, or disruption). Therefore, this event does not meet the threshold for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on AI's role in legal proceedings and the court's response to AI-related errors without a new primary harm occurring.
Judge won't sanction Michael Cohen over AI-generated fake legal cases

2024-03-20
Court House News Service
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system that produced fabricated legal case citations. These false citations were submitted in a court filing, which directly implicates the AI system's outputs in causing misinformation within a legal context. This meets the criteria for an AI Incident because the AI system's use directly led to a violation of legal obligations and potential harm to the judicial process. Although no sanctions were imposed, the harm occurred through the submission of phony legal texts. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.
Michael Cohen will not face sanctions after generating fake cases with AI

2024-03-20
Colorado Springs Gazette
Why's our monitor labelling this an incident or hazard?
An AI system (Google Bard) was used to generate fake legal case citations, which were then mistakenly submitted in court documents. This constitutes a misuse of AI-generated content leading to misinformation in a legal setting. However, the judge explicitly declined to impose sanctions, indicating no direct legal harm or violation resulted. The harm is reputational and procedural embarrassment rather than injury, rights violation, or operational disruption. Therefore, this event does not meet the threshold for an AI Incident, as no qualifying harm occurred. It also does not qualify as an AI Hazard, since the event has already taken place and is not merely potential. The article primarily reports on the legal and procedural response to the AI-generated misinformation, making it Complementary Information about AI misuse and its implications in legal practice.
Michael Cohen's Fake Legal Cases Exposed: AI-generated Citations Wreak Havoc on Former Trump Lawyer's Legal Troubles

2024-03-20
DC Weekly
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Google Bard) generating fake legal citations that were used in a court filing, which directly led to harm in the form of misinformation and potential legal procedural issues. This fits the definition of an AI Incident because the AI's outputs were used in a way that caused harm related to legal rights and obligations (a breach of applicable law and legal process). Although no sanctions were imposed, the harm to the legal process and the credibility of the involved parties is clear and material. Therefore, this is classified as an AI Incident.