AI Use in Judicial Decision Leads to Acquittal in Child Rape Case in Brazil


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A judge in Minas Gerais, Brazil, used an AI tool (ChatGPT) to draft a judicial opinion that acquitted a man accused of raping a 12-year-old girl. The prompt given to the AI was left in the official court document, raising concerns about AI's influence on legal decisions and potential human rights violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event describes the use of AI to rewrite parts of a judicial decision that acquitted a defendant of child sexual abuse, a crime legally defined as rape of a minor under 14 years old. The AI system's involvement in drafting the decision is directly linked to the outcome, which is legally and ethically problematic. This constitutes a violation of legal obligations and fundamental rights, as the acquittal contradicts the law protecting children. The use of AI in this context, especially under judicial secrecy, also raises concerns about compliance with regulations governing AI use in the judiciary. Given the AI's pivotal role in producing the text that influenced the decision, this event meets the criteria for an AI Incident involving violation of human rights and legal obligations.[AI generated]
AI principles
Respect of human rights, Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
Children, General public

Harm types
Human or fundamental rights, Public interest, Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Appellate judge's opinion acquitting defendant of raping a 12-year-old child contains AI prompt

2026-02-24
Brasil 247

Appellate judge uses AI in decision acquitting man of statutory rape of a 12-year-old in MG - Bahia Notícias

2026-02-24
Marcelo Bittencourt
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly used in the drafting of a judicial decision that led to the acquittal of a defendant in a serious criminal case. The AI's role in shaping the legal reasoning and decision-making process directly influenced the outcome, which has significant implications for human rights and justice. This meets the criteria for an AI Incident because the AI system's use in the judicial decision-making process has directly led to a significant impact on fundamental rights and legal outcomes. The event is not merely about AI use or development but about its concrete influence on a legal decision with potential harm or controversy regarding justice and rights.

Decision acquitting man of child rape in MG contains AI prompt

2026-02-24
UOL notícias
Why's our monitor labelling this an incident or hazard?
While the AI system was used to assist in drafting the judicial opinion, there is no evidence that this use led to any harm or legal violation. The article focuses on the presence of an AI prompt in the document, not on any consequences caused by the AI. This event therefore does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context about AI's role in legal document preparation without indicating harm or risk of harm.

Opinion acquitting man accused of raping 12-year-old girl in MG contains AI-written passage

2026-02-25
InfoMoney
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate part of the judicial decision text. The decision led to the acquittal of a man accused of sexually abusing a minor, which constitutes a violation of fundamental rights and harm to the victim and community. The AI's role in drafting the decision is a contributing factor to that harm, even though the final legal judgment remains the responsibility of the human judges. The event involves realized harm (violation of rights and potential harm to the victim and society) linked to the AI system's use, meeting the criteria for an AI Incident.

Appellate judge uses AI in rape ruling in MG

2026-02-25
pleno.news
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in drafting a judicial ruling, confirming AI system involvement. However, the AI use is limited to text improvement and does not directly or indirectly cause harm as defined by the framework. The harms discussed stem from the judicial decision and alleged misconduct, not from AI malfunction or misuse. There is no indication that the AI use could plausibly lead to harm beyond the current context. The main focus is on the AI's role in the drafting process and the institutional responses, fitting the definition of Complementary Information rather than an Incident or Hazard.

Reporting judge uses AI in rape acquittal and leaves prompt in opinion

2026-02-25
O Liberal
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used by the judge to assist in drafting the legal decision, demonstrating AI system involvement. The AI's output influenced the acquittal of a defendant accused of statutory rape, which is a violation of the minor's rights and the law. This constitutes harm to human rights and a breach of legal obligations protecting vulnerable individuals. The AI's role was pivotal in the decision's formulation, and the harm (violation of rights) has materialized through the acquittal. Hence, this is an AI Incident rather than a hazard or complementary information.

Decision acquitting man accused of raping 12-year-old girl contains AI prompt and AI-rewritten passage

2026-02-24
Alagoas 24 Horas: Líder em Notícias On-line de Alagoas
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to rewrite parts of a judicial decision, indicating AI system involvement in the writing of the document. However, the harm (acquittal of an accused rapist) is a legal outcome decided by human judges, not directly caused by the AI system. There is no clear causal link showing that the AI system's use led to a violation of rights or other harms as defined. The AI was used as a tool to improve text, not to decide or autonomously influence the legal judgment. The event thus does not meet the criteria for an AI Incident or AI Hazard; it is Complementary Information about AI's role in legal processes and its implications.

Decision acquitting man accused of raping 12-year-old girl was written with AI - Brazil Urgente

2026-02-25
brazilurgente.com.br
Why's our monitor labelling this an incident or hazard?
The AI system was used to draft the judicial decision text, a form of AI use in document generation. However, the harm (acquittal in a statutory rape case) stems from the legal reasoning and judgment, not from AI malfunction or misuse. The AI did not cause or contribute to the harm; it was a tool for text improvement. The article focuses on the presence of AI in the judicial writing process, which is a governance and societal issue rather than a direct or plausible harm caused by AI. The event is therefore Complementary Information rather than an Incident or Hazard.