Generative AI Causes Errors and Raises Fairness Concerns in US Courts

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Generative AI is increasingly used in US courts for legal research, drafting documents, and even creating virtual testimonies. However, its use has produced legal filings with significant errors and fabricated citations, resulting in court sanctions and fines and raising concerns about the integrity and fairness of judicial proceedings.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (generative AI like ChatGPT and specialized legal AI tools) used in the judicial process. It reports actual incidents where AI-generated legal documents contained significant errors, leading to court sanctions and financial penalties, which are harms to the legal process and potentially to the rights of involved parties. The AI's malfunction (producing incorrect or fabricated legal citations) and use have directly led to these harms. The article also discusses the broader impact on justice and fairness, which falls under violations of rights and harm to communities. Hence, this qualifies as an AI Incident rather than a hazard or complementary information, as harms have already materialized due to AI use in courts.[AI generated]
AI principles
Accountability
Fairness
Robustness & digital security
Safety
Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
Workers
Business
Government
General public

Harm types
Economic/Property
Reputational
Public interest

Severity
AI incident

Business function:
Compliance and justice

AI system task:
Content generation


Articles about this incident or hazard

The Role of Generative AI Is Growing in the US Judiciary

2025-06-20
العربي الجديد
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used in legal research and document drafting, confirming AI system involvement. While it notes errors and legal sanctions related to AI-generated filings, these are described as past issues or risks rather than as a specific AI Incident causing direct harm. The discussion focuses on the evolving use, benefits, and challenges of AI in courts, including expert opinions and judicial reactions, which fits the definition of Complementary Information. No clear direct or indirect harm currently caused by AI use is reported, nor is a plausible imminent hazard described. Hence, the classification as Complementary Information is appropriate.
AI Enters the Courtroom: A New Tool in Judges' Hands

2025-06-20
العين الإخبارية
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI like ChatGPT and specialized legal AI tools) used in the judicial process. It reports actual incidents where AI-generated legal documents contained significant errors, leading to court sanctions and financial penalties, which are harms to the legal process and potentially to the rights of involved parties. The AI's malfunction (producing incorrect or fabricated legal citations) and use have directly led to these harms. The article also discusses the broader impact on justice and fairness, which falls under violations of rights and harm to communities. Hence, this qualifies as an AI Incident rather than a hazard or complementary information, as harms have already materialized due to AI use in courts.
AI Enters the Courtrooms: A Step Toward More Efficient Justice or a High-Risk Gamble?

2025-06-20
Alwasat News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI systems in courts, including AI-generated virtual testimonies and AI-assisted legal research and drafting. It reports actual harms such as legal documents containing multiple errors leading to financial penalties and the submission of false citations, which are direct harms to the legal process and potentially to the rights of involved parties. The AI systems' involvement in these harms meets the definition of an AI Incident, as the AI use has directly led to violations of legal procedural integrity and risks to justice outcomes. The article also discusses the broader impact on the justice system, confirming the AI's pivotal role in these harms.
The Rise of Generative AI and Its Growing Role in the Judicial Arena

2025-06-21
annahar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI systems being used in courts for research, drafting legal documents, and even generating virtual testimonies. It reports concrete harms: AI-generated legal filings with multiple errors causing court penalties, and concerns about AI inaccuracies affecting judicial fairness. These harms relate to violations of legal obligations and potentially human rights (fair trial, due process). The AI's malfunction or misuse in producing inaccurate legal content has directly led to these harms. Hence, this qualifies as an AI Incident rather than a hazard or complementary information, as the harms are realized and significant.
Generative AI Breaks Into the World of the Judiciary

2025-06-20
مركز الاتحاد للأخبار
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI systems by judges, lawyers, and individuals in court cases, including AI-generated virtual testimonies and AI-drafted legal documents. It reports actual harms such as legal filings containing multiple errors due to AI use, resulting in court fines and sanctions. These errors and inaccuracies can undermine the administration of justice, a violation of legal rights and due process, thus meeting the criteria for an AI Incident. The article also discusses the potential for AI to influence case outcomes, further indicating direct or indirect harm linked to AI use in the judicial system.
A Growing Role for Generative AI in the World of the Judiciary

2025-06-21
https://www.alanba.com.kw/newspaper/
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI) in legal proceedings, which can plausibly lead to harms such as miscarriages of justice or violations of legal rights if AI-generated errors influence court decisions. However, the article does not describe any specific incident where harm has already occurred. Instead, it discusses the potential risks and the growing adoption of AI tools in courts, making it an AI Hazard rather than an AI Incident. The focus is on plausible future harm and the need for careful integration and training.
Algorithms Enter the Courtroom

2025-06-21
صحيفة العرب
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (e.g., ChatGPT, legal AI tools) being used in court-related activities, with concrete examples of harm such as legal filings containing false citations and errors that resulted in court sanctions. This shows direct involvement of AI in causing harm to the legal process and potentially violating rights related to fair trial and justice. The use of AI-generated virtual victim testimony also raises questions about the impact on judicial decisions. These factors meet the criteria for an AI Incident because the AI's use has directly led to harm in the judicial context, including violations of legal rights and risks to fairness and justice.
AI Storms the Courts: A Rescue Tool or a Gateway to Chaos?

2025-06-21
MEO
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems in drafting legal documents and assisting in court cases is explicit. The article reports actual harms resulting from AI-generated errors, including a federal judge imposing fines due to flawed AI-generated legal filings. These errors can undermine the administration of justice, constituting harm to the legal process and potentially violating rights to fair trial and due process. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to realized harm in the judicial context.