Madrid Bar Association Proposes Penal Reform to Address AI Legal Advice Risks


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Madrid Bar Association (ICAM) has proposed a reform of Spain's Penal Code to criminalize the unauthorized provision of legal advice by AI platforms and chatbots operating without professional oversight. The initiative aims to prevent harm to citizens who rely on unsupervised automated legal advice, citing the risks of errors and the lack of accountability.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems providing legal advice, which fits the definition of an AI system. The ICAM's proposal addresses the use and commercialization of such AI systems without professional oversight, which could plausibly lead to harm (e.g., individuals making important legal decisions based on inaccurate AI advice). No actual harm or incident is reported; rather, the article is about a preventive legal proposal to address potential risks. This aligns with the definition of an AI Hazard, as it concerns plausible future harm from AI use. The article also includes governance response elements, but the primary focus is on the hazard posed by unregulated AI legal advice tools. Therefore, the correct classification is AI Hazard.[AI generated]
AI principles
Accountability, Safety

Industries
Government, security, and defence

Affected stakeholders
Consumers

Harm types
Economic/Property, Public interest

Severity
AI hazard

Business function
Compliance and justice

AI system task
Interaction support/chatbots


Articles about this incident or hazard


ICAM proposes that Congress ban AI legal advice tools

2026-02-18
20 minutos
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the risks posed by AI legal advisory tools and the need for legal safeguards to prevent harm, but it does not report any actual harm or incident caused by these AI systems. The AI systems are involved in providing legal advice, which can plausibly lead to harm if errors occur, but the event focuses on a proposed legal reform to prevent such harm. This fits the definition of Complementary Information, as it details a governance response to an AI-related risk rather than describing a realized AI Incident or an immediate AI Hazard.

ICAM calls for AI applications that provide legal advice to be punished as unauthorized practice of law

2026-02-18
elEconomista.es

The new digital threat on Congress's table: the danger of AI "consultations"

2026-02-19
Confidencial Digital
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI systems providing legal advice without human supervision, which can produce serious errors that harm users (e.g., incorrect legal decisions). Although no actual harm is reported, the potential for widespread harm is credible and is recognized by the legal professional body proposing regulatory reform. The focus is on preventing harm before it occurs, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. The event is not unrelated, as it centers on AI systems and their risks.

ICAM proposes reforming the Penal Code to curb "digital neo-intrusion" in AI legal advice

2026-02-18
El Derecho
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative AI legal advisory tools) whose use could plausibly lead to harm by misleading citizens into making important legal decisions without proper professional oversight, which constitutes a risk to public interest and legal rights. Although no specific harm has yet occurred, the article clearly identifies the potential for significant harm and the need for preventive legal measures. Therefore, this is an AI Hazard, as it concerns plausible future harm from AI systems in legal advisory roles without adequate human-in-the-loop safeguards.

ICAM proposes classifying digital unauthorized practice in AI legal advice as a criminal offence

2026-02-18
Cinco Días
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (automated legal advice platforms and chatbots) and addresses the potential harm from their unsupervised use, which could lead to violations of legal rights and professional misconduct. Since no actual harm or incident is reported but a credible risk is identified and legislative action is proposed to prevent it, this qualifies as an AI Hazard. The event focuses on plausible future harm from AI systems providing unauthorized legal advice without professional oversight, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

ICAM proposes a reform of the Penal Code to prevent unauthorized professional practice through chatbots and automated systems

2026-02-19
Lawyerpress NEWS
Why's our monitor labelling this an incident or hazard?
The article centers on a legislative initiative aimed at preventing harm that could plausibly arise from the use of AI systems (chatbots and automated platforms) providing legal advice without professional oversight. Although no actual harm or incident is reported, the described AI systems' potential to cause significant harm to individuals by misleading them in legal matters is clearly articulated. Therefore, this event fits the definition of an AI Hazard, as it involves the plausible future risk of harm stemming from the development and use of AI systems in legal advisory roles without proper human supervision. The article does not describe an AI Incident (no realized harm), nor is it merely complementary information or unrelated news.

ICAM proposes that Congress ban the advisory tools ...

2026-02-21
Quinta’s weblog
Why's our monitor labelling this an incident or hazard?
The article discusses a legislative proposal addressing the use of AI chatbots for legal advice, which is an AI-related governance response. There is no indication that harm has already occurred or that an AI system malfunctioned or was misused leading to harm. Instead, the proposal aims to prevent potential future harm by regulating AI use in legal consultancy. Therefore, this is Complementary Information about societal and governance responses to AI, not an AI Incident or Hazard.