Lawyer Sanctioned for Submitting AI-Generated Fake Cases in B.C. Court

A Vancouver lawyer used ChatGPT to generate legal materials containing fictitious case citations, which were submitted in a B.C. Supreme Court case. The incident led to a judicial order for the lawyer to pay costs, a Law Society investigation, and warnings about the risks of uncritical AI use in legal practice.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of an AI system (ChatGPT) to generate fake legal cases that were submitted in court filings. This use led to harm: disruption of legal proceedings, extra work and costs for opposing counsel, and potential risk to the justice system's integrity. Although there was no intent to deceive, the AI-generated false information caused real harm and was a direct factor in the incident. Hence, it meets the criteria for an AI Incident as the AI system's use directly led to harm (disruption and breach of legal process).[AI generated]
AI principles
Accountability, Transparency & explainability, Robustness & digital security, Safety, Respect of human rights, Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
Government, General public

Harm types
Reputational, Economic/Property, Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Content generation


Articles about this incident or hazard

Court hits B.C. lawyer with costs over fake AI-generated cases, despite no intent to deceive | Globalnews.ca

2024-02-26
Global News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) to generate fake legal cases that were submitted in court filings. This use led to harm: disruption of legal proceedings, extra work and costs for opposing counsel, and potential risk to the justice system's integrity. Although there was no intent to deceive, the AI-generated false information caused real harm and was a direct factor in the incident. Hence, it meets the criteria for an AI Incident as the AI system's use directly led to harm (disruption and breach of legal process).

Court Orders Lawyer to Pay Opposing Counsel After Citing Fake AI-Generated Cases | Law.com

2024-02-29
Law.com
Why's our monitor labelling this an incident or hazard?
The lawyer's use of ChatGPT to generate fake legal cases constitutes misuse of an AI system in a legal context, directly causing harm by misleading the court and wasting opposing counsel's time and resources. The judge's order to pay costs reflects the realized harm and abuse of process stemming from the AI-generated false information. This meets the criteria for an AI Incident because the AI system's use directly led to a violation of legal process and potential miscarriage of justice.

An AI 'hallucination' turned up in a B.C. court case. Experts say it's a wake-up call

2024-02-29
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in a legal proceeding where the AI-generated content contained false information (hallucinated case citations). This misuse directly caused harm by misleading the court and breaching legal professional standards. The incident is not merely a potential risk but a realized harm with consequences including an investigation and public admonishment. Therefore, it qualifies as an AI Incident due to the direct link between AI use and harm to the legal process and professional obligations.

Canada lawyer under fire for submitting fake cases created by AI chatbot

2024-02-29
The Guardian
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was explicitly used for legal research and produced fabricated case law, which was then submitted to the court. This directly led to harm in the form of misleading the court and opposing counsel, causing significant time and expense, and risking miscarriage of justice. The lawyer's conduct is under investigation, highlighting the seriousness of the incident. The AI's hallucination is a malfunction that caused a violation of legal process and professional conduct, fitting the definition of an AI Incident as it caused harm to rights and the justice system.

B.C. lawyer reprimanded for citing fake cases invented by ChatGPT

2024-02-27
Yahoo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) whose outputs were relied upon in a legal proceeding. The AI system generated false case citations (hallucinations), which were then used in court filings, causing procedural harm and additional costs to the opposing party. Although there was no intent to deceive, the AI's malfunction led to a breach of professional legal obligations and disrupted the judicial process. This fits the definition of an AI Incident because the AI system's use directly led to harm (legal procedural harm and breach of obligations).

An AI 'hallucination' turned up in a B.C. court case. Experts say it's a wake-up call

2024-02-29
The Star
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI, ChatGPT) was used in the preparation of legal documents submitted to a court, and its malfunction (hallucination producing false citations) directly led to misinformation being presented in a legal case. This constitutes a violation of professional and legal standards, potentially harming the integrity of the justice system, which is a harm to a community and a breach of obligations under applicable law. The event involves the use and malfunction of an AI system leading to realized harm (misleading court submissions). Therefore, it qualifies as an AI Incident.

Canada lawyer under fire for submitting fake cases created by AI chatbot

2024-02-29
AOL
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was used to generate legal case references, and its hallucinated outputs were submitted directly to the court, causing harm by misleading the judicial process and wasting resources, which fits the definition of an AI Incident. The harm includes a potential miscarriage of justice and a violation of legal professional conduct, both significant harms under the framework. The event is not merely a potential risk or complementary information but a realized incident involving AI misuse and malfunction (hallucination).

Supreme Court Chastises B.C. Lawyer for Citing AI-Generated Cases

2024-02-28
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) to generate legal case citations that were fabricated ('hallucinations'). The lawyer's reliance on these AI-generated false cases led to a direct harm: misleading the court and wasting opposing counsel's time, which is a breach of legal obligations and professional conduct. The judge's reprimand and order for compensation reflect recognition of this harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a violation of legal obligations and harm to the judicial process.

B.C. lawyer reprimanded for inserting fake cases invented by ChatGPT into court documents | CBC News

2024-02-27
CBC News
Why's our monitor labelling this an incident or hazard?
The event clearly involves the use of an AI system (ChatGPT) whose outputs (fabricated legal cases) were incorporated into official court documents. Although the lawyer did not intend to deceive, the AI-generated false information caused harm by misleading the court and wasting opposing counsel's time, which is a breach of professional and legal standards. This harm, though indirect, is material and traceable to the AI system's malfunction (hallucination). Therefore, this qualifies as an AI Incident under the framework, as it involves realized harm (legal procedural harm and ethical violations) caused by the AI system's outputs.

B.C. lawyer who submitted ChatGPT 'hallucinations' to the court ordered to review files, pay costs

2024-02-28
CTV Newsnet
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) whose outputs were incorporated into legal filings without verification, resulting in the submission of fabricated case citations to a court. This misuse of AI led to harm including misleading the court, additional costs for opposing counsel, and potential risk to the integrity of the justice system. The harm is direct and material, fulfilling the criteria for an AI Incident under the framework. The lawyer's failure to verify AI-generated content and the resulting consequences demonstrate the AI system's role in causing harm through its use.

AI 'hallucination' in B.C. court prompts caution | Globalnews.ca

2024-02-29
Global News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that a lawyer used ChatGPT to generate legal submissions containing false citations, which were discovered and acknowledged as AI hallucinations. This misuse led to a direct harm in the legal process, including a breach of professional conduct and potential harm to the parties' legal rights. The AI system's involvement is clear and central to the incident. The event meets the criteria for an AI Incident because it involves realized harm (violation of legal and professional standards) caused directly by the AI system's outputs. The article also discusses responses and guidance, but the primary focus is on the incident itself.

B.C. lawyer reprimanded for citing fake cases invented by ChatGPT | RCI

2024-02-27
Radio Canada
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly involved as the source of fabricated legal precedents used in court documents. The use of these AI-generated hallucinations directly led to harm by misleading the court and opposing counsel, causing wasted resources and procedural disruption. Although the lawyer did not intend to deceive, the AI's malfunction (hallucination) and the lawyer's reliance on it caused a breach of professional standards and legal obligations. This fits the definition of an AI Incident because the AI system's use directly led to a violation of legal and ethical standards, harming the judicial process and parties involved.

An AI 'hallucination' turned up in a B.C. court case. Experts say it's a wake-up call

2024-02-29
National Post
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) whose fabricated outputs were submitted in court documents, directly giving rise to a legal problem. This constitutes an AI Incident because the AI system's use directly caused harm to legal process integrity and professional conduct, which can be considered a violation of legal and professional obligations. The harm is realized: the court ruling and professional consequences have already occurred.

Fake case law in B.C. divorce court points up pitfalls with AI tools for lawyers

2024-02-28
The Vancouver Sun
Why's our monitor labelling this an incident or hazard?
The article describes a lawyer who used AI tools to generate case law that was submitted in court and later discovered to be fabricated. This misuse of AI in legal practice directly led to harm by undermining the integrity of the legal process and potentially affecting the rights of the parties involved. The AI system's role in producing false legal documents is pivotal to the incident. Hence, it meets the criteria for an AI Incident as it involves the use of AI leading to a breach of legal obligations and harm to the legal system.

An AI 'hallucination' turned up in a B.C. court case. Experts say it's a wake-up call

2024-02-29
The Vancouver Sun
Why's our monitor labelling this an incident or hazard?
The event describes a direct consequence of using an AI system (ChatGPT) that generated false information used in a court case, which is a violation of legal obligations and professional conduct. This constitutes harm under the definition of AI Incident, specifically a breach of obligations under applicable law. The AI system's malfunction (hallucination) directly led to this harm, making this an AI Incident rather than a hazard or complementary information.

B.C. Law Society investigates lawyer who used AI to make fake case law

2024-02-28
Times Colonist
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) to generate false legal information that was submitted in court, which constitutes a violation of legal and professional obligations. Although the fake case law was withdrawn before the hearing and no direct physical harm occurred, the incident undermines the integrity of the justice system and breaches legal and ethical standards, qualifying as harm under the framework. The Law Society's investigation and the judge's ruling confirm the seriousness of the incident. Therefore, this qualifies as an AI Incident due to the realized harm related to legal rights and the justice system's integrity caused by the AI system's misuse.

An AI 'hallucination' turned up in a B.C. court case. Experts say it's a wake-up call

2024-02-29
Sudbury.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that a lawyer used generative AI (ChatGPT) to prepare legal materials that contained false citations, an AI 'hallucination.' This misuse directly led to harm by misleading the court and opposing counsel, which is a violation of legal and professional obligations. The involvement of AI in producing false information that was submitted in a court case meets the criteria for an AI Incident due to the direct harm to the justice system's integrity and potential violation of legal rights. The subsequent investigation and guidance from the Law Society further confirm the seriousness of the incident.

AI 'hallucination' in B.C. court case called wakeup call for justice system

2024-02-29
Salmon Arm Observer
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI, ChatGPT) in a legal proceeding where the AI-generated content contained false information (non-existent case citations). This directly led to harm in the form of a violation of legal and professional obligations, which falls under breaches of obligations under applicable law (legal integrity and due process). The incident is a clear example of AI misuse causing harm, meeting the criteria for an AI Incident. The article also discusses responses and guidance from legal authorities, but the primary focus is the incident itself.

An AI 'hallucination' turned up in a B.C. court case. Experts say it's a wake-up call

2024-02-29
CHAT News Today
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (generative AI ChatGPT) in a legal context, where its outputs included false information that was submitted in court. This misuse of AI led to a direct harm: undermining the integrity of legal proceedings and professional misconduct, which can be considered a violation of legal and professional obligations. The incident is a clear example of harm caused by AI use, meeting the criteria for an AI Incident. The article also discusses responses and guidance, but the primary focus is the realized harm from AI hallucination in court submissions.

D.C. Circuit to Opine on Whether AI May Be Author of Copyrightable Work in Thaler v. Perlmutter

2024-03-27
Lexology
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating content autonomously, but the focus is on the legal question of copyright eligibility rather than any harm caused by the AI system. There is no injury, rights violation, property harm, or other significant harm resulting from the AI's use described. The article discusses a court case and legal arguments, which constitute a societal and governance response to AI developments. Therefore, this is Complementary Information as it provides important context and updates on AI-related legal governance without describing an AI Incident or AI Hazard.

Understanding IP Risks in the Age of AI

2024-03-29
Lexology
Why's our monitor labelling this an incident or hazard?
The article centers on the potential legal risks and uncertainties arising from the use of generative AI, particularly regarding IP rights and confidentiality. It does not report any realized harm, incident, or event involving AI systems causing injury, rights violations, or other harms. Nor does it describe a specific plausible future harm event or hazard scenario. The content is primarily informative and advisory, discussing the evolving legal landscape and recommending risk mitigation. Therefore, it fits the definition of Complementary Information, as it provides context and understanding about AI-related risks without reporting a new AI Incident or AI Hazard.

Intellectual Property Analysis of Digital Contents and Works in the Background of AIGC (II)

2024-03-29
Lexology
Why's our monitor labelling this an incident or hazard?
The article focuses on legal and intellectual property discussions surrounding AI-generated content, including court cases and policy considerations. It does not report any incident of harm or risk of harm caused by AI systems, nor does it describe a new AI hazard or an update on a previously reported incident. Therefore, it fits the category of Complementary Information as it enhances understanding of AI-related legal and governance issues without describing a new AI Incident or AI Hazard.

Balancing Innovation + Integrity: Navigating AI's Impact On The Legal Profession - Patent - Canada

2024-03-25
Mondaq Business Briefing
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (ChatGPT) generating false legal cases that were cited in court documents, directly leading to harm by undermining legal integrity and consuming judicial resources to verify the false information. This meets the definition of an AI Incident as the AI system's use directly caused harm to the legal profession and judicial process. The article also covers regulatory and policy responses, but these serve as context to the primary incident. Therefore, the event is best classified as an AI Incident due to the realized harm from AI misuse in legal practice.

The IP In AI: Can AI Infringe IP Rights? - Copyright - European Union

2024-03-29
Mondaq Business Briefing
Why's our monitor labelling this an incident or hazard?
The article explicitly concerns AI systems in the context of copyright and patent infringement claims, which are violations of intellectual property rights, a recognized category of AI harm. However, it does not report a new specific incident causing direct or indirect harm, nor does it describe a plausible future hazard event. Instead, it summarizes ongoing litigation, government responses, and practical challenges in addressing these harms. This aligns with the definition of Complementary Information, as it enhances understanding of the AI ecosystem and responses to AI-related IP issues without introducing a new primary harm event.

US copyright laws regarding AI set to come later this year

2024-03-28
Android Headlines
Why's our monitor labelling this an incident or hazard?
The article focuses on the planned legal and regulatory responses to AI-related copyright issues, such as digital replicas and copyrightability of AI-generated works. There is no mention of any realized harm or direct AI system involvement causing harm, nor does it describe a plausible future harm event. Therefore, this is best classified as Complementary Information, as it provides context and updates on governance responses to AI rather than reporting an incident or hazard.

Sponsored content: Understanding IP risks in the age of AI - Business Leader

2024-03-27
Business Leader
Why's our monitor labelling this an incident or hazard?
The article is an informative piece outlining potential legal risks and uncertainties in IP law as it relates to generative AI. It does not report any realized harm or incident caused by AI, nor does it describe a specific hazard event where AI use could plausibly lead to harm. The content is primarily educational and advisory, aimed at helping businesses understand and mitigate IP risks. Therefore, it fits the definition of Complementary Information, as it provides context and guidance without reporting a new AI Incident or AI Hazard.

Can a fashion designer use GenAI to continue their legacy?

2024-03-28
World IP Review
Why's our monitor labelling this an incident or hazard?
The article centers on the use of generative AI as a tool for fashion design and the associated intellectual property issues. It does not describe any event where AI use has directly or indirectly caused harm, nor does it present a credible risk of harm occurring imminently. The content is primarily informational and advisory, discussing legal frameworks and potential risks without reporting an incident or hazard. Therefore, it fits the category of Complementary Information, as it provides context and understanding about AI's role in fashion design and IP law without describing a specific AI Incident or AI Hazard.

Balancing Innovation + Integrity: Navigating AI's Impact On The Legal Profession - Patent - Canada

2024-03-25
Mondaq Business Briefing
Why's our monitor labelling this an incident or hazard?
The core event is the misuse of AI-generated content (nonexistent legal cases) in official court documents, which directly led to harm by undermining legal integrity and placing additional burdens on opposing counsel. This meets the definition of an AI Incident due to harm to the legal profession and judicial process (harm to communities and violation of legal obligations). The article also details regulatory and policy responses, which are complementary information that enhances understanding of the ecosystem but is secondary to the primary incident. Therefore, the event is classified as an AI Incident; the complementary elements are present but do not override the primary classification.