Turkish Bar Associations Oppose AI-Based Legal Defense Platform

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Turkey's Justice Minister Akın Gürlek proposed an AI-supported platform to assist citizens in legal processes without lawyers. In response, 78 bar associations issued a joint statement warning that such AI use could undermine the right to defense and weaken the legal profession, emphasizing the risks to justice and constitutional rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly describes an AI system planned for use in legal processes to generate legal documents and guide users — a sensitive domain affecting fundamental rights (the right to legal defense). Although no actual harm has yet occurred, the bar associations' objections point to credible risks to legal rights and justice, which fits the definition of an AI Hazard. The event does not describe a realized harm or incident, nor is it merely complementary information or unrelated news; it is therefore best classified as an AI Hazard because of the plausible future harm from deploying the system in legal proceedings.[AI generated]
AI principles
Respect of human rights; Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
Workers; General public

Harm types
Human or fundamental rights; Public interest

Severity
AI hazard

Business function
Citizen/customer service

AI system task
Interaction support/chatbots; Reasoning with knowledge structures/planning


Articles about this incident or hazard

Bar associations react to the project to integrate artificial intelligence into the legal system - ensonhaber.com

2026-04-25
En Son Haber
Why's our monitor labelling this an incident or hazard?
The event explicitly describes an AI system planned for use in legal processes to generate legal documents and guide users — a sensitive domain affecting fundamental rights (the right to legal defense). Although no actual harm has yet occurred, the bar associations' objections point to credible risks to legal rights and justice, which fits the definition of an AI Hazard. The event does not describe a realized harm or incident, nor is it merely complementary information or unrelated news; it is therefore best classified as an AI Hazard because of the plausible future harm from deploying the system in legal proceedings.
Joint Statement from 78 Bar Associations: The Right to Defense Cannot Be Delegated to Artificial Intelligence

2026-04-25
Haberler
Why's our monitor labelling this an incident or hazard?
The article centers on a public and professional reaction to the idea of AI use in legal defense and judicial processes, highlighting concerns about potential weakening of defense rights. There is no mention of an AI system currently causing harm or malfunctioning, nor is there a direct or indirect link to realized harm. The focus is on the implications and governance challenges of AI adoption in the judiciary, which fits the definition of Complementary Information as it informs about societal and governance responses to AI developments without describing a new AI Incident or AI Hazard.
Joint statement from 78 bar associations: "The right to defense cannot be delegated to artificial intelligence"

2026-04-25
birgun.net
Why's our monitor labelling this an incident or hazard?
The article centers on a public and institutional reaction to proposed AI use in legal defense, emphasizing the importance of human judgment and legal rights. There is no description of an AI system causing harm or malfunctioning, nor is there a direct or indirect link to realized harm. The concerns are about potential impacts and the preservation of legal principles, which aligns with governance and societal response. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.
"Artificial intelligence" statement from 78 bar associations: "The defense cannot be weakened on the grounds of technological tools" - Evrensel

2026-04-25
Yeni Evrensel Gazetesi
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of their potential use in legal processes, which could plausibly lead to harm such as undermining defense rights if implemented improperly. However, no actual harm or incident has occurred yet, and the article mainly presents a societal and professional response to proposed AI use. This fits the definition of Complementary Information, as it provides context and governance-related reactions to AI developments without reporting a specific AI Incident or AI Hazard.
Joint reaction from bar associations to Minister Gürlek: he had presented an AI-supported legal representation project

2026-04-25
Yeniçağ Gazetesi
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly described: an AI-supported platform to generate legal documents and guide users through legal procedures. The bar associations' reaction highlights potential violations of rights and harm to the justice system if the AI system is used as proposed. Since the AI system is still in development or planning and no actual harm or incident has occurred, this qualifies as an AI Hazard. The event focuses on the plausible future harm that could arise from deploying such AI in legal contexts, rather than reporting an incident or complementary information about responses to a past incident.
78 bar associations react to Akın Gürlek: The right to defense cannot be delegated to artificial intelligence

2026-04-25
Gazete Pencere
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the context of their potential use in legal defense and judicial processes, which could plausibly lead to harm such as violations of rights or undermining justice. However, no actual harm or incident has occurred yet; the bar associations are expressing concerns and opposition to proposed or ongoing AI use in this domain. Therefore, this is best classified as an AI Hazard, as it highlights credible risks and potential future harms from AI deployment in the justice system, but does not report a realized AI Incident or provide complementary information about an existing incident.