Lawyer Sanctioned for Submitting AI-Generated Fake Legal Precedents in Siracusa Court


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A lawyer in Siracusa, Italy, was sanctioned after submitting four fabricated legal precedents generated by an AI system in a civil case. The court found that the cited rulings did not exist, a breach of professional conduct that highlights the risks of unverified AI-generated content in legal proceedings.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves a lawyer's use of AI to produce legal citations that turned out to be fabricated, leading to a sanction. The AI system's use directly contributed to a violation of legal obligations and professional conduct, which falls under violations of applicable law and of legal rights. This therefore constitutes an AI Incident: AI-generated false information caused the realized harm of legal misconduct and a breach of legal standards.[AI generated]
AI principles
Safety
Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
Government
General public

Harm types
Reputational
Public interest

Severity
AI incident

Business function:
Compliance and justice

AI system task:
Content generation


Articles about this incident or hazard


Siracusa: lawyer uses AI, sanctioned for citing non-existent rulings

2026-03-18
SiciliaNews24
Why's our monitor labelling this an incident or hazard?
The event involves a lawyer's use of AI to produce legal citations that turned out to be fabricated, leading to a sanction. The AI system's use directly contributed to a violation of legal obligations and professional conduct, which falls under violations of applicable law and of legal rights. This therefore constitutes an AI Incident: AI-generated false information caused the realized harm of legal misconduct and a breach of legal standards.

Filings drafted with AI suggestions: lawyer sanctioned by the court

2026-03-17
ANSA.it
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI in drafting legal filings, which produced false citations. This misuse led to a court sanction against the lawyer, a realized harm involving the violation of legal obligations and professional standards. The AI system was directly involved in generating the misleading content, and the resulting breach of legal and ethical duties fits the definition of an AI Incident under violations of law and rights.

Remarkable case in Sicily: lawyer cites non-existent rulings generated by artificial intelligence, sanctioned

2026-03-17
Stretto Web
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of artificial intelligence to generate false legal precedents that were cited in court documents. This misuse of AI led to a sanction, indicating a direct harm related to legal rights and obligations. The AI system's malfunction or misuse caused a violation of legal norms, fitting the definition of an AI Incident due to harm to rights and breach of legal obligations.

Siracusa court: lawyer relies on non-existent precedents suggested by AI, sanctioned

2026-03-18
webmarte.tv
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI to produce fabricated legal precedents that were presented as factual in a legal proceeding. The AI system's outputs were not verified against primary sources, resulting in the lawyer citing non-existent rulings. This misuse directly led to a sanction and represents a violation of legal obligations and professional conduct, fitting the definition of an AI Incident due to harm to legal rights and the judicial process.

Filings drafted with AI suggestions: lawyer sanctioned by the Court of Siracusa

2026-03-18
NewSicilia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the lawyer's use of generative AI tools to create legal arguments that included fabricated case law. This misuse directly led to a sanction by the court, a realized harm involving a violation of legal obligations and professional misconduct. Because the AI system produced false information that influenced a judicial process, the event fits the definition of an AI Incident: it caused a breach of obligations under applicable law and harmed the integrity of the legal process. This is not merely a potential risk or complementary information but a realized harm caused by AI misuse.

Brings rulings suggested by artificial intelligence into court: lawyer sanctioned in Siracusa

2026-03-18
Siracusa Post
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system to produce false legal citations that were submitted in a court filing. This misuse of AI caused direct harm in the form of legal and ethical violations, as the fabricated precedents misled the court and undermined the judicial process. It therefore qualifies as an AI Incident due to the direct involvement of AI in causing harm to legal rights and obligations.

Lawyer "betrayed" by artificial intelligence in Siracusa: cites invented rulings and is sanctioned

2026-03-18
Siracusa Press
Why's our monitor labelling this an incident or hazard?
The article explicitly states that a generative AI system produced false legal precedents ('hallucinations') that a lawyer used in court, leading to sanctions. The AI system's malfunction, generating fabricated content, directly caused harm by misleading the court and breaching legal obligations. This fits the definition of an AI Incident: the harm is realized and concrete rather than hypothetical, and the AI system's role in causing the breach of legal rights and professional standards was pivotal.