UK Barrister Faces Disciplinary Action for Submitting AI-Generated Fictitious Legal Cases


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

UK immigration barrister Chowdhury Rahman used AI tools such as ChatGPT to prepare legal submissions, which cited fictitious and irrelevant cases in an asylum hearing. Rahman attempted to conceal his use of AI, wasting the tribunal's time and prompting a potential disciplinary investigation for professional misconduct.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of generative AI (ChatGPT-like software) in a professional legal context, where the AI's outputs were inaccurate and fictitious. The barrister's reliance on these AI-generated false cases without proper checks led to misleading the tribunal and wasting its time, which is a clear harm to the legal process and potentially to the rights of the asylum seekers represented. This meets the criteria for an AI Incident because the AI system's use directly contributed to a violation of legal and professional obligations, causing harm to the administration of justice and potentially to the individuals involved. The harm is realized, not just potential, and the AI's role is pivotal in the incident.[AI generated]
AI principles
Accountability; Transparency & explainability; Robustness & digital security; Respect of human rights; Democracy & human autonomy

Industries
Government, security, and defence

Affected stakeholders
Government

Harm types
Economic/Property; Reputational; Public interest; Human or fundamental rights

Severity
AI incident

Business function:
Compliance and justice

AI system task:
Content generation


Articles about this incident or hazard


Barrister found to have used AI to prepare for hearing after citing 'fictitious' cases

2025-10-16
Yahoo! Finance

Barrister found to have used AI to prepare for hearing after citing 'fictitious' cases

2025-10-16
The Guardian
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system to produce legal research and citations that were inaccurate or fabricated, which directly caused harm by misleading the tribunal and wasting its time. This constitutes a violation of professional and legal obligations, fitting the definition of an AI Incident under violations of human rights or breach of obligations under applicable law. The misuse of the AI system, namely the lack of proper accuracy checks on its outputs, led to this harm. Therefore, this is classified as an AI Incident.

Barrister could face disciplinary probe after he was caught using AI

2025-10-16
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The barrister explicitly used generative AI to prepare legal research and submissions, which contained fabricated and irrelevant legal authorities. This misuse of AI led to misleading the tribunal, wasting judicial time, and undermining the integrity of legal proceedings. These outcomes constitute violations of legal and professional standards, which fall under harm to rights and the justice system. The AI system's involvement is direct and central to the harm, qualifying this as an AI Incident rather than a hazard or complementary information.

Judge blasts lawyer for using AI after he cited 'entirely fictitious' cases

2025-10-16
The Independent
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (ChatGPT) in legal research, which led to the citation of entirely fictitious cases and misleading submissions. This misuse of AI directly harmed the judicial process by wasting court time and potentially affecting the outcome of an asylum appeal. The harm includes violation of professional and ethical standards, which falls under violations of obligations intended to protect fundamental rights and the integrity of legal processes. Hence, the event meets the criteria for an AI Incident as the AI system's use directly led to significant harm.

Judge blasts lawyer for using AI after he cited 'entirely fictitious' cases in asylum appeal

2025-10-16
AOL.com
Why's our monitor labelling this an incident or hazard?
The lawyer used generative AI to prepare legal research, resulting in fabricated and irrelevant case citations. This misuse of AI directly caused harm by misleading the court and wasting judicial resources, which is a breach of legal and professional obligations. The AI system's involvement is explicit and central to the incident, and the harm is realized, not merely potential. Hence, this is an AI Incident involving violations of legal and professional standards.

Barrister found to have used AI to prepare for hearing after citing 'fictitious' cases

2025-10-16
AOL.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI to prepare legal documents that contained fabricated and irrelevant case citations. This misuse of AI led to misleading the tribunal and wasting judicial resources, constituting a violation of legal and professional obligations. The harm is realized and directly linked to the AI system's use, fulfilling the criteria for an AI Incident under violations of law and harm to the legal process. Therefore, this is classified as an AI Incident.

Barrister May Undergo Disciplinary Investigation Following AI Usage Allegations - Internewscast Journal

2025-10-16
internewscast.com
Why's our monitor labelling this an incident or hazard?
The barrister's reliance on AI-generated content that included fictitious legal authorities directly led to misleading the tribunal and wasting judicial time, which constitutes harm to the legal system's proper functioning. The AI system was used in the preparation of submissions, and its outputs were not adequately checked, resulting in professional misconduct allegations. This meets the criteria for an AI Incident because the AI system's use directly caused harm (misleading the court and wasting judicial resources) and raises concerns about legal and ethical violations in professional practice.

Asylum judge rebukes barrister whose AI created fictitious cases

2025-10-16
thetimes.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (generative AI like ChatGPT) used in the development and submission of legal documents. The AI's outputs were inaccurate and fictitious, directly causing harm by misleading the court and wasting judicial resources. This misuse breaches professional and ethical duties, impacting the administration of justice and public trust, which aligns with violations of legal obligations and harm to communities. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's misuse in a legal context.

Barrister cited 'entirely fictitious' AI legal cases to defend migrants

2025-10-17
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative AI like ChatGPT) used in the preparation of legal documents. The AI's outputs included fabricated legal cases, which were presented as factual by the barrister, leading to misleading submissions and wasting judicial resources. This misuse of AI directly caused harm by breaching legal and professional obligations, thus qualifying as an AI Incident under the framework. The harm is a violation of legal procedural rights and professional standards, which falls under category (c) violations of human rights or breach of obligations under applicable law.