
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A judge in Minas Gerais, Brazil, used an AI tool (ChatGPT) to draft a judicial opinion acquitting a man accused of raping a 12-year-old girl. The prompt given to the AI was reportedly left in the official court document, raising concerns about AI's influence on legal decisions and potential human rights violations.[AI generated]
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to draft parts of a judicial decision that acquitted a defendant of child sexual abuse, a crime legally defined in Brazil as rape when the victim is under 14 years old. The AI system's involvement in drafting the decision is directly linked to the outcome, which is legally and ethically problematic: the acquittal contradicts the law protecting children and therefore constitutes a violation of legal obligations and fundamental rights. The use of AI in this context, particularly under judicial secrecy, also raises concerns about compliance with regulations governing AI use in the judiciary. Given the AI's pivotal role in producing the text that influenced the decision, this event meets the criteria for an AI incident involving a violation of human rights and legal obligations.[AI generated]