Portuguese Judge Disciplined for Using AI to Draft Court Ruling with Fabricated Legal References


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Portuguese Superior Council of the Judiciary opened a disciplinary process against judges from the Lisbon Court of Appeal for allegedly using ChatGPT to draft a court ruling. The AI-generated decision contained fabricated laws and jurisprudence, undermining the legal process and prompting complaints from defense lawyers.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of ChatGPT, an AI system, in drafting a judicial decision. The suspected AI involvement has led to an investigation due to irregularities and errors in the legal text, which could undermine the legal process and the rights of the parties involved. This constitutes an AI Incident because the AI system's use has directly led to potential harm to legal rights and the integrity of judicial decisions, fitting the definition of violations of human rights or breach of legal obligations.[AI generated]
AI principles
Accountability; Fairness; Transparency & explainability; Robustness & digital security; Respect of human rights; Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
General public

Harm types
Reputational; Public interest; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Portugal: judge allegedly turned to ChatGPT to write a court ruling

2025-10-04
Pplware
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of ChatGPT, an AI system, in drafting a judicial decision. The suspected AI involvement has led to an investigation due to irregularities and errors in the legal text, which could undermine the legal process and the rights of the parties involved. This constitutes an AI Incident because the AI system's use has directly led to potential harm to legal rights and the integrity of judicial decisions, fitting the definition of violations of human rights or breach of legal obligations.

CSM opens disciplinary proceedings against judge who allegedly used ChatGPT for a ruling

2025-10-03
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of ChatGPT, an AI system, in producing a court ruling that contained fabricated legal references. This misuse of AI has directly led to harm by undermining the legal process and causing procedural and reputational damage, which falls under violations of legal obligations and rights. The disciplinary action and the controversy around the ruling confirm that harm has materialized. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Court of Appeal judge faces disciplinary proceedings over suspected use of AI in a ruling

2025-10-03
Diário de Notícias
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the suspected use of AI in producing a judicial decision with fabricated legal references, which directly affects the integrity of the legal process and potentially violates legal rights. The disciplinary process and complaints by defense lawyers confirm that harm has occurred or is ongoing. The AI system's involvement is central to the incident, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The harm is indirect but significant, relating to violations of legal rights and procedural fairness.

Council of the Judiciary opens disciplinary proceedings against judge who allegedly used AI in a ruling

2025-10-03
Observador
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the alleged use of AI in drafting a judicial decision, which led to errors such as citing non-existent laws and jurisprudence. This has caused harm by undermining the legal process and the rights of the parties involved, which is a violation of legal obligations and fundamental rights. The disciplinary process and investigation confirm the seriousness of the issue. The AI system's involvement is central to the harm, fulfilling the criteria for an AI Incident. Although the investigation is ongoing, the harm from the flawed ruling has already occurred, making this more than a potential hazard or complementary information.

Superior Council of the Judiciary opens disciplinary proceedings against judge who allegedly used ChatGPT to draft a ruling

2025-10-03
SIC Notícias
Why's our monitor labelling this an incident or hazard?
The use of ChatGPT by judges to produce a legal decision with fabricated laws and jurisprudence constitutes a misuse of an AI system in a critical legal context. This misuse has directly caused harm by compromising the legal validity and reliability of the judicial decision, which is a violation of legal obligations and rights. Therefore, this event qualifies as an AI Incident due to the realized harm stemming from the AI system's use in judicial decision-making.

Council of the Judiciary opens disciplinary proceedings against judge who allegedly used AI in a ruling

2025-10-03
Açoriano Oriental
Why's our monitor labelling this an incident or hazard?
The article describes a concrete event where an AI system was allegedly used in producing a judicial ruling that contained fabricated legal references, which is a direct harm to the legal process and the rights of the parties involved. The disciplinary process and complaints by lawyers indicate that the AI's involvement has already caused harm, fulfilling the criteria for an AI Incident. The AI system's malfunction or misuse has led to violations of legal rights and undermined judicial integrity, which fits under violations of human rights and breach of legal obligations. Hence, this is not merely a potential hazard or complementary information but an AI Incident.