
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Sullivan & Cromwell, a leading Wall Street law firm, apologized to a federal judge after submitting a court filing containing numerous fabricated legal citations generated by an AI system. The errors, discovered by an opposing firm, led to a review of the firm's internal processes and raised concerns about AI reliability in legal practice.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to generate legal citations that were fabricated ('hallucinations'), leading to errors in a court filing. This directly caused harm by misleading the court and opposing counsel, constituting a violation of legal and professional standards. The AI system's malfunction or misuse is central to the event. The harm is realized, not merely potential: the false citations were submitted and discovered, prompting an apology and an internal review. The event therefore meets the criteria for an AI Incident under violations of legal obligations and harm to the judicial process.[AI generated]