
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
UK immigration barrister Chowdhury Rahman used AI tools such as ChatGPT to prepare legal submissions, resulting in the citation of fictitious and irrelevant cases in an asylum hearing. Rahman attempted to conceal his use of AI, wasting tribunal time and prompting a potential disciplinary investigation for professional misconduct.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI (ChatGPT-like software) in a professional legal context, where the AI's outputs were inaccurate and fictitious. The barrister's reliance on these AI-generated false cases, without proper verification, misled the tribunal and wasted its time: a clear harm to the legal process and potentially to the rights of the asylum seekers he represented. This meets the criteria for an AI Incident because the use of the AI system directly contributed to a violation of legal and professional obligations, harming the administration of justice and potentially the individuals involved. The harm is realised, not merely potential, and the AI's role in the incident is pivotal.[AI generated]