
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Generative AI is now widely used in legal practice but has produced fabricated case citations that were submitted in real proceedings, jeopardizing legal outcomes. Experts warn that such AI hallucinations, alongside emerging political deepfakes, pose acute risks to justice and democracy, underscoring the urgent need for stronger human oversight.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article describes real cases in which generative AI produced false legal citations that were then used in legal proceedings, constituting direct or indirect harm to the legal process and potentially to the rights of the parties involved. This meets the definition of an AI Incident because the AI system's use directly led to harm in the form of misinformation affecting legal outcomes. Although the article stresses human oversight and responsibility, the harm from AI hallucinations has already occurred. This is therefore not merely a hazard or complementary information, but an AI Incident involving harm to rights and to the legal system.[AI generated]