Generative AI Hallucinations Undermine Legal Proceedings

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Generative AI is now widely used in legal practice but has produced fabricated case citations that were submitted in real proceedings, jeopardizing legal outcomes. Experts warn that such AI hallucinations, alongside emerging political deepfakes, reveal acute risks to justice and democracy, underscoring the urgent need for stronger human oversight.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes real cases where generative AI produced false legal citations that were used in legal proceedings, which constitutes a direct or indirect harm to the legal process and potentially to the rights of involved parties. This meets the definition of an AI Incident because the AI system's use has directly led to harm in the form of misinformation affecting legal outcomes. Although the article stresses human oversight and responsibility, the harm from AI hallucinations has already occurred. Therefore, this is not merely a hazard or complementary information but an AI Incident involving harm to rights and the legal system.[AI generated]
AI principles
Robustness & digital security
Safety
Transparency & explainability
Accountability
Respect of human rights
Democracy & human autonomy

Industries
Government, security, and defence
Media, social platforms, and marketing

Affected stakeholders
Government
General public

Harm types
Public interest
Human or fundamental rights
Economic/Property
Reputational

Severity
AI incident

Business function
Compliance and justice

AI system task
Content generation
Reasoning with knowledge structures/planning

In other databases

Articles about this incident or hazard

Beware the 'botshit': why generative AI is such a real and imminent threat to the way we live | André Spicer

2024-01-03
The Guardian
Why's our monitor labelling this an incident or hazard?
The article explicitly involves generative AI systems and their use in creating false or misleading political content. While it describes the potential for significant harm to democratic processes and communities, it does not document an actual incident where such harm has materialized. Therefore, the event is best classified as an AI Hazard, as it plausibly could lead to an AI Incident involving harm to communities and violations of democratic rights in the near future if unmitigated.
What are AI hallucinations?

2024-01-03
Android Police
Why's our monitor labelling this an incident or hazard?
The article provides an educational overview of AI hallucinations as a phenomenon inherent to generative AI systems. It does not describe a particular incident of harm caused by AI hallucinations, nor does it report a specific plausible future harm event. Instead, it offers general information and advice on recognizing and reducing hallucinations, which fits the definition of Complementary Information as it enhances understanding of AI-related issues without reporting a new incident or hazard.
Use of generative AI in the legal profession accelerating despite accuracy concerns

2024-01-03
ITPro
Why's our monitor labelling this an incident or hazard?
The article describes real cases where generative AI produced false legal citations that were used in legal proceedings, which constitutes a direct or indirect harm to the legal process and potentially to the rights of involved parties. This meets the definition of an AI Incident because the AI system's use has directly led to harm in the form of misinformation affecting legal outcomes. Although the article stresses human oversight and responsibility, the harm from AI hallucinations has already occurred. Therefore, this is not merely a hazard or complementary information but an AI Incident involving harm to rights and the legal system.