South African Lawyers Investigated After AI Tool Generates Fake Legal Citations in Court


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A South African legal team used the AI tool Legal Genius to generate court submissions, resulting in fabricated case citations being presented in a licensing dispute. The court discovered the false references, prompting a referral to the Legal Practice Council for professional misconduct investigation and highlighting risks of unverified AI use in legal proceedings.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system was used in legal research and produced fabricated case citations ('hallucinations'), which were included in legal arguments before a court. This misuse or malfunction of the AI system directly led to misinformation presented in a legal proceeding, undermining the administration of justice and potentially violating legal standards. The judge's concern and referral for investigation further underscore the seriousness of the harm. Hence, the event meets the criteria for an AI Incident due to indirect harm to the justice system and legal rights.[AI generated]
AI principles
Accountability, Robustness & digital security, Transparency & explainability, Safety, Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
Workers, Government, General public

Harm types
Reputational, Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Content generation

Articles about this incident or hazard


When using Artificial Intelligence goes wrong: Judge slams lawyers for legal bungle

2025-07-02
IOL
Why's our monitor labelling this an incident or hazard?
The AI system was used in legal research and produced fabricated case citations ('hallucinations'), which were included in legal arguments before a court. This misuse or malfunction of the AI system directly led to misinformation presented in a legal proceeding, undermining the administration of justice and potentially violating legal standards. The judge's concern and referral for investigation further underscore the seriousness of the harm. Hence, the event meets the criteria for an AI Incident due to indirect harm to the justice system and legal rights.

AI mishap in court: Judge criticises lawyers for citation errors

2025-07-03
IOL
Why's our monitor labelling this an incident or hazard?
The AI system (Legal Genius) was used in the development and preparation of legal arguments, and it malfunctioned or produced erroneous outputs (hallucinated citations). This directly led to harm by misleading the court and potentially affecting judicial decisions, which is a violation of legal rights and harms the justice system's integrity. The event clearly involves an AI system's malfunction leading to realized harm, fitting the definition of an AI Incident. The involvement of AI is explicit, and the harm is direct and significant.

Another South African lawyer caught using AI has landed in big trouble

2025-07-03
MyBroadband
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (Legal Genius) whose outputs (hallucinated case law) were relied upon in legal proceedings, leading to professional misconduct investigations and reputational harm. The AI system's malfunction (hallucination) directly caused harm by introducing false information into court documents, which is a violation of legal and ethical obligations. The harm is realized and significant, affecting the integrity of the legal process and the rights of involved parties. Hence, this is an AI Incident rather than a hazard or complementary information.

South Africa: Lawyers Used Fake Cases From AI in Court Papers

2025-07-03
allAfrica
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (an online AI tool generating legal case citations) whose outputs were incorporated into official court documents without proper verification. This misuse led to the submission of fabricated legal cases, constituting a breach of legal and professional obligations. The harm is realized in the form of undermining the integrity of the legal process and potential professional misconduct, which fits the definition of an AI Incident under violations of obligations under applicable law. Therefore, this event qualifies as an AI Incident.

Lawyers face probe for using 'hallucinating' GenAI in court

2025-07-02
ITWeb
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system for legal research, which produced false citations that were submitted in court documents. This misuse of AI has directly led to harm in the form of undermining the integrity of the legal process and potential violations of professional and legal standards. The harm is realized and significant, as it affects the justice system and public trust. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm through the submission of fabricated legal references in official court proceedings.

SA Court Refers Advocate to Legal Council After AI Tool Cites Fake Case Law

2025-07-03
iAfrica
Why's our monitor labelling this an incident or hazard?
The AI system (Legal Genius) was used in the development of court submissions, and it generated false legal citations (hallucinations). This directly caused harm by misleading the court and resulting in professional disciplinary action against the advocate. The event describes realized harm linked to the AI system's malfunction and misuse, meeting the criteria for an AI Incident rather than a hazard or complementary information. The harm is significant as it undermines legal integrity and professional conduct.

South African Lawyer Faces Discipline Over AI-Generated Case Citations

2025-07-05
News Ghana
Why's our monitor labelling this an incident or hazard?
The AI system ('Legal Genius') was explicitly used to generate legal citations, but it produced false information ('hallucinated' case law). This led to the submission of misleading court documents, which is a breach of legal ethics and professional conduct, causing reputational harm and triggering disciplinary action. The harm is directly linked to the AI system's malfunction during its use, fulfilling the criteria for an AI Incident. The event does not merely describe potential or future harm, nor is it a general update or unrelated news; it documents a concrete incident of harm caused by AI outputs.

Keep ChatGPT out of court, warns law expert after AI blunder

2025-07-16
Times LIVE
Why's our monitor labelling this an incident or hazard?
The article references a case where AI-generated citations were false, indicating a malfunction or misuse of an AI system that could lead to harm in legal proceedings. However, it does not detail direct harm caused by this malfunction, nor does it describe a broader incident or ongoing harm. The focus is on potential risks, ethical concerns, and the need for safeguards, which aligns more with Complementary Information that provides context and expert analysis on AI's impact in law. There is no clear evidence of an AI Incident or AI Hazard as defined, since the harm is not explicitly realized and no plausible future harm event is described in detail.

Attorneys -- Track AI Hallucination Case Citations With This New Tool

2025-07-18
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses documented cases where AI systems generated fabricated legal citations that attorneys submitted to courts, resulting in sanctions and monetary penalties. This directly links AI system use and malfunction to realized harm (violation of legal obligations and reputational damage). The presence of AI systems is clear, and the harm is concrete and ongoing. Hence, the event qualifies as an AI Incident rather than a hazard or complementary information.

AI and Professional Negligence: Lessons from Ayinde

2025-07-18
Lexology
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of generative AI tools for legal research that produced fabricated and inaccurate legal citations, which were then submitted in court proceedings. This misuse directly led to judicial findings of negligence and professional misconduct, harming the administration of justice and public confidence in the legal system. The harm includes violations of professional and ethical obligations, which fall under breaches of applicable law protecting fundamental rights and the integrity of legal processes. The AI system's malfunction (hallucinations) and misuse by legal professionals were pivotal in causing these harms. Hence, this is an AI Incident rather than a hazard or complementary information.