South Korean Courts Respond to AI-Generated Fake Legal Documents


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

South Korean courts have seen a growing number of AI-generated fake legal precedents and fabricated evidence submitted in legal proceedings, causing delays and unnecessary costs. In response, the judiciary has proposed measures including cost penalties, disciplinary action for lawyers, mandatory disclosure of AI use, and system upgrades to verify the authenticity of legal documents.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating fake legal precedents and laws that have actually been submitted to courts, causing delays and unnecessary costs; this constitutes harm to the legal system and potentially violates legal rights. Because the AI produced false information that disrupts court operations and leads to financial and procedural harm, the event fits the definition of an AI Incident. The article focuses on harm that has already occurred and the responses to it, not merely potential future harm or general AI news, so it is classified as an AI Incident.[AI generated]
AI principles
Accountability; Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
Government; General public

Harm types
Economic/Property; Public interest

Severity
AI incident

Business function
Compliance and justice

AI system task
Content generation


Articles about this incident or hazard


[Subtitled News] Precedents rattling the courts, one after another ...fines and lawyer discipline on the way

2026-04-01
YTN
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake legal precedents and laws that have actually been submitted to courts, causing delays and unnecessary costs; this constitutes harm to the legal system and potentially violates legal rights. Because the AI produced false information that disrupts court operations and leads to financial and procedural harm, the event fits the definition of an AI Incident. The article focuses on harm that has already occurred and the responses to it, not merely potential future harm or general AI news, so it is classified as an AI Incident.

Courts move against AI 'hallucinations'... "Submit a fake precedent and you foot the entire litigation bill"

2026-03-31
Chosun.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false legal precedents ('hallucinations') that are being submitted in court, causing direct harm by increasing litigation costs and potentially undermining judicial integrity. This fits the definition of an AI Incident because the AI system's malfunction (hallucination) has directly led to harm to persons and groups (legal and financial harm) and to violations of legal rights and the judicial process. The judiciary's response and proposed legal changes are complementary information, but the core event is an AI Incident due to realized harm from AI-generated false evidence.

Courts grappling with 'AI fake precedents'... charging litigation costs and weighing discipline - 매일경제

2026-03-31
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false legal precedents and evidence that have been submitted in court, causing harm by increasing litigation costs, delaying justice, and potentially violating legal rights. The judiciary's response to impose sanctions and develop detection systems confirms the recognition of actual harm caused by AI misuse. Since the harm is realized and the AI system's role is pivotal, this qualifies as an AI Incident rather than a hazard or complementary information.

Courts move against 'AI fake precedents'... charging litigation costs and even imposing discipline

2026-03-31
경향신문
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake legal precedents and evidence that have been submitted to courts, causing actual harm such as unnecessary litigation costs and potential violations of legal rights. The court's response to mitigate these harms confirms that the AI system's misuse has materialized into a real incident. The AI system's involvement is explicit and central to the harm described, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Supreme Court responds to submission of false AI precedents and evidence... charging litigation costs and even imposing discipline

2026-03-31
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems to generate false legal documents and evidence, which directly harms the judicial process and legal rights, constituting violations of legal obligations and potentially human rights. The judiciary's measures are responses to these harms. Since the article focuses on the harms caused by AI-generated false evidence and the judiciary's response to these harms, this qualifies as an AI Incident. The AI system's misuse has directly led to harm in the form of legal process disruption and potential rights violations.

Court task force studying 'AI side effects' proposes litigation-cost liability and fines for citing fake precedents

2026-03-31
매일방송
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating legal content (AI-generated false precedents) and the misuse of such outputs in court proceedings, which can cause harm (delays, costs, legal process disruption). However, the article does not report a specific AI Incident where harm has already occurred or a near-miss hazard event. Instead, it details the formation of a task force and proposed measures to prevent and respond to such harms. This fits the definition of Complementary Information, as it focuses on governance responses and system improvements to address AI-related risks, rather than reporting a new incident or hazard.

'Citing false AI precedents'... courts to impose liability, including litigation costs

2026-03-31
이투데이
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating false legal precedents, which could plausibly lead to harm in the form of litigation delays, increased costs, and undermining of judicial processes. The judiciary's task force is proposing rules and tools to prevent such harms before they materialize. Since the article does not report actual incidents of harm caused by AI-generated false precedents but focuses on the risk and preventive responses, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. The involvement of AI is clear, the harm is plausible and credible, and the event concerns the development of mitigation strategies to address this risk.

Supreme Court: "Lawyers who use false AI information should face discipline"

2026-03-31
채널A
Why's our monitor labelling this an incident or hazard?
The article focuses on the judiciary's formation of a task force and proposed legal and procedural reforms to address the misuse of AI-generated false information in legal proceedings. While AI-generated false evidence and claims can cause harm (e.g., misleading courts, unfair trials), the article does not report a specific realized harm or incident but rather the response and preventive measures being developed. Therefore, this is Complementary Information providing context and governance responses to potential AI harms in the legal domain.

Ghost precedents and fake cases... courts' 'AI hallucination' countermeasures: "sanctions and notation in rulings"

2026-03-31
대한전문건설신문
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false legal information (hallucinations) that have been submitted in courts, causing harm such as unnecessary litigation costs, delays, and potential undermining of judicial integrity. The AI's malfunction directly contributes to these harms. The judicial system's responses are reactive measures to an ongoing AI Incident rather than mere potential hazards or complementary information. The presence of realized harm and direct AI involvement in producing false information justifies classification as an AI Incident.

Fines and discipline for submitting AI fake precedents without verification

2026-03-31
YTN
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, as it generates fake legal precedents and laws. The misuse of, or failure to verify, these AI-generated outputs leads to direct harm by causing delays, unnecessary costs, and misinformation in the judicial system, which constitutes disruption of critical infrastructure (the judicial process) and violation of legal rights. This therefore qualifies as an AI Incident, because the AI system's use has directly led to harm. The article focuses on the harm caused and the responses to it, not just potential future harm or general information.