Spanish Judge Fined for Using ChatGPT to Draft Judicial Ruling


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Spanish judge was fined €1,000 by the General Council of the Judiciary (CGPJ) for using ChatGPT to draft a judicial ruling, breaching confidentiality and judicial protocols. The incident highlights legal and ethical concerns over AI use in sensitive judicial processes, as the judge failed to protect case data or to inform colleagues. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (ChatGPT) to draft a judicial ruling, a direct use of AI. The sanction arose because this use violated legal obligations related to confidentiality and judicial procedure, a breach of applicable law protecting fundamental rights and legal frameworks. Because the AI system's use directly led to a violation of legal obligations, this qualifies as an AI Incident under the framework. [AI generated]
AI principles
Privacy & data governance
Respect of human rights

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Human or fundamental rights
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Spanish judge sanctioned for using AI to draft a ruling: must pay a fine of one thousand euros

2026-04-27
El Mercurio de Santiago
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) to draft a judicial ruling, a direct use of AI. The sanction arose because this use violated legal obligations related to confidentiality and judicial procedure, a breach of applicable law protecting fundamental rights and legal frameworks. Because the AI system's use directly led to a violation of legal obligations, this qualifies as an AI Incident under the framework.

The CGPJ fines a judge €1,000 for using ChatGPT to draft a ruling

2026-04-27
20 minutos
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (ChatGPT) by a judge to draft a ruling, a misuse of AI in judicial functions. However, there is no indication that this misuse caused direct or indirect harm such as injury, violation of rights, or harm to communities. The disciplinary sanction and the judicial body's instructions on AI use represent governance and regulatory responses to AI misuse. Since the event focuses on the sanction and regulatory framework rather than on an AI-caused harm or plausible future harm, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

The first disciplinary case against a judge for using AI in a ruling ends in a fine: he was reported by fellow judges

2026-04-27
El Español
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in the judicial process, which led to a disciplinary sanction for the judge. The AI's involvement was central to the incident: the judge uploaded confidential judicial documents to a general-purpose AI platform without authorization, violating legal and ethical obligations. Although no physical harm or direct injury occurred, the breach of confidentiality and professional duties constitutes a violation of applicable law protecting fundamental rights, which qualifies as harm under the AI Incident definition. The disciplinary sanction and investigation confirm the realized harm linked to AI use. Hence, this is an AI Incident rather than a hazard or complementary information.

A €1,000 fine for the judge who used artificial intelligence to issue a ruling

2026-04-27
LaVanguardia
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in drafting a judicial ruling, which is explicitly stated. The judge's use of AI without proper oversight or data protection led to a breach of judicial duties and confidentiality, which are legal obligations protecting fundamental rights. This constitutes a violation of applicable law and rights, fulfilling the criteria for an AI Incident. The disciplinary sanction confirms that the harm and breach occurred. The AI system's involvement is direct in the use phase, and the harm is realized rather than merely potential, so the classification as an AI Incident is appropriate.

Judge fined €1,000: he drafted a ruling with AI... and forgot to delete his ChatGPT query

2026-04-27
El Financiero
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in drafting a judicial ruling, fulfilling the AI system involvement criterion. However, the harm is not direct or indirect physical, legal, or societal harm caused by the AI system's malfunction or outputs. Instead, the sanction arises from the judge's procedural failure to remove AI-generated content and to comply with judicial confidentiality and oversight rules. The AI system was used as an auxiliary tool, not as a replacement for judicial functions, and no harm to persons, rights, or infrastructure is reported. The main focus is the disciplinary and governance response to AI use in the judiciary, which fits the definition of Complementary Information rather than an Incident or Hazard.

Spanish judge fined for using ChatGPT in a judicial ruling

2026-04-27
Montevideo Portal / Montevideo COMM
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in a judicial process, where confidential information was shared with the AI, leading to a breach of judicial confidentiality and procedural rules. This constitutes a violation of legal obligations protecting fundamental rights and judicial integrity, which fits the definition of an AI Incident under violations of human rights or breach of applicable law. The sanction and disciplinary action confirm that harm occurred due to the AI system's use. Although the AI was used as support and not as a substitute for judicial decision-making, the misuse and resulting breach are sufficient to classify this as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Judge fined €1,000 for using ChatGPT in the draft of a ruling

2026-04-27
Canarias7
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in drafting a judicial ruling, a use of AI in a critical legal context. The disciplinary sanction was imposed for the improper sharing of information, not directly for a malfunction of the AI or harm caused by its outputs. However, the use of AI in this sensitive domain raises concerns about compliance with legal and ethical standards, and the AI system's use is directly linked to the disciplinary action. Because the event concerns a breach of judicial duties arising from AI use, rather than physical harm or damage, it qualifies as an AI Incident under violations of legal and professional obligations.

Fined €1,000 for using ChatGPT to draft a ruling

2026-04-27
Telemadrid
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in drafting a judicial ruling, a core judicial function. The disciplinary sanction was imposed because the AI was used in a way that bypassed proper judicial responsibilities, constituting a breach of legal and professional duties. This use of AI directly led to a violation of obligations under applicable law intended to protect fundamental rights and judicial independence. Although the AI was used as an aid rather than a full replacement, the improper reliance on AI outputs in judicial reasoning harmed the integrity of the judicial process. Hence, it meets the criteria for an AI Incident, as the AI system's use directly led to a breach of legal obligations and professional standards.

Spanish judge who used ChatGPT to draft a ruling fined €1,000

2026-04-27
www.diariolibre.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in drafting a legal document. However, the sanction was imposed not because of the AI use per se, but because the judge bypassed judicial protocols, which is a violation of legal obligations. There is no indication that the AI use directly or indirectly caused harm such as injury, rights violations, or other harms defined as AI Incidents. Nor is there a plausible future harm described. The event is primarily about governance and legal response to AI use in judiciary, making it Complementary Information rather than an Incident or Hazard.

Spain sanctions judge for drafting a ruling with ChatGPT

2026-04-27
SDPnoticias.com
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly involved in the drafting process, and its use led to a breach of confidentiality, which is a violation of legal and ethical obligations. Although no physical harm or injury occurred, the event constitutes a violation of legal obligations protecting confidentiality and judicial independence, which falls under violations of applicable law intended to protect fundamental rights. Therefore, this qualifies as an AI Incident due to the realized harm (breach of confidentiality and legal principles) directly linked to the AI system's use.

Spanish judge who used ChatGPT to draft a ruling fined €1,000

2026-04-27
La Capital MdP
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in drafting a judicial ruling, a clear instance of AI system involvement. However, the harm is not a direct injury or rights violation but a breach of judicial confidentiality and procedural rules, leading to a disciplinary sanction. The main focus is the governance and regulatory response to AI use in the judiciary, including the sanction and official instructions limiting AI's role. There is no evidence of direct or indirect harm caused by the AI system itself, nor of plausible future harm from the AI's use in this context. The event is therefore best classified as Complementary Information: it provides important context on societal and governance responses to AI use in sensitive domains such as the judiciary, rather than reporting an AI Incident or AI Hazard.

Judge copies and pastes ChatGPT response into ruling in Spain; fined more than 20,000 pesos

2026-04-27
El Mañana de Nuevo Laredo
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in drafting a judicial ruling, a critical legal document. The misuse of the AI system, including the lack of supervision, the failure to protect personal data, and the failure to inform colleagues, directly led to a breach of legal obligations and confidentiality, a violation of applicable law protecting fundamental rights. This meets the criteria for an AI Incident because the AI system's use directly led to harm in the form of legal and ethical violations. The sanction imposed confirms the recognition of harm caused by the AI system's misuse.

Spanish judge who used ChatGPT to draft a ruling fined €1,000

2026-04-27
Noticias SIN
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in drafting a judicial ruling, which is explicitly mentioned. The judge's use of AI led directly to a disciplinary sanction for breaching judicial procedural rules, a violation of legal obligations. This constitutes a breach of obligations under applicable law intended to protect fundamental rights (a fair judicial process), fitting the definition of an AI Incident. The event is not merely a potential risk or a complementary update; it involves realized harm in the form of a legal sanction and procedural violation caused by AI use. Therefore, it is classified as an AI Incident.

Spanish judge who used ChatGPT to draft a ruling fined one thousand euros

2026-04-27
unionradio.net
Why's our monitor labelling this an incident or hazard?
The article describes a judge using ChatGPT to draft a ruling, which led to a sanction for procedural violations, not for harm caused by the AI system itself. The AI system's involvement is clear, but no direct or indirect harm (such as injury, rights violations, or operational disruption) is reported. The sanction relates to the judge's failure to follow judicial protocols rather than to the AI system malfunctioning or causing harm. This fits the definition of Complementary Information, as it details a governance response to AI use in the judiciary without reporting an AI Incident or Hazard.

Judge drafts ruling with AI and ends up paying a thousand euros

2026-04-27
SIC Notícias
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly involved in the drafting of a judicial ruling, a critical legal decision. The improper reliance on AI in this context led to a disciplinary sanction against the judge, constituting a violation of judicial responsibilities and potentially undermining legal rights and principles. Although no direct harm such as injury or property damage occurred, the event involves a breach of legal and professional obligations related to the use of AI, which fits the definition of an AI Incident under violations of human rights or breaches of obligations under applicable law protecting fundamental rights. Therefore, this event qualifies as an AI Incident.

Spanish judge fined a thousand euros for using AI in a ruling

2026-04-27
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) used by a judge in drafting a ruling, a clear instance of AI system involvement. The disciplinary action stems from the use of AI in judicial decision-making, a misuse or improper use of AI. However, the article does not report any direct or indirect harm resulting from this use, such as wrongful convictions, rights violations, or other harms defined under the AI Incident category. Instead, it focuses on regulatory and governance responses, including sanctions and instructions to prevent improper AI use in the judiciary. This fits the definition of Complementary Information, as it reports governance and oversight measures related to AI use rather than a new AI Incident or AI Hazard.

Spanish judge fined a thousand euros for using artificial intelligence in a ruling

2026-04-27
Correio da Manha
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in the judicial decision-making process, which is prohibited without proper human oversight and authorization. The judge's use of AI to draft a ruling without following the required legal and procedural safeguards led to a disciplinary sanction, indicating a breach of legal obligations and judicial responsibilities. This breach can be classified as a violation of fundamental rights and legal frameworks, as it undermines judicial independence and the proper administration of justice. Hence, the event meets the criteria for an AI Incident due to the direct involvement of AI in causing a legal and rights-related harm.

Spanish judge fined a thousand euros for using artificial intelligence in a ruling

2026-04-27
ECO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (ChatGPT) by a judge to draft a ruling, a clear instance of AI system involvement. The event stems from the use of the AI system in judicial work. However, the sanction was imposed for procedural and disciplinary reasons, not because the AI system caused direct or indirect harm such as injury, rights violations, or disruption. The article focuses on the governance and regulatory response to AI use in the judiciary, including official instructions prohibiting AI from replacing judicial functions. There is no indication that the AI use led to an AI Incident or that it plausibly could lead to harm (an AI Hazard). Instead, it is a governance and disciplinary update, fitting the definition of Complementary Information.

Spanish judge fined a thousand euros for using AI in a ruling

2026-04-27
AngolaPress
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) used in the drafting of a judicial ruling. However, the main issue is the regulatory breach and disciplinary sanction for improper use, not a harm caused by the AI system itself. There is no reported injury, rights violation, or other harm resulting from the AI's use. The disciplinary action and regulatory instructions represent a governance response to AI use in the judiciary. This event is therefore best classified as Complementary Information, as it provides important context on societal and governance responses to AI use in sensitive domains rather than describing an AI Incident or AI Hazard.