AI Legal Citation Errors on the Rise

Source: businessinsider.com

Published on May 28, 2025

AI-Generated Legal Errors Increase

New data indicates that judges are increasingly catching fake legal citations in court filings, often the product of lawyers' over-reliance on AI. Legal data analyst and consultant Damien Charlotin has created a public database of 120 cases in which courts found that AI had hallucinated quotes, invented cases, or cited nonexistent legal authorities. The true number of AI-generated errors is likely higher, since some go unnoticed.

Lawyers Increasingly at Fault

While most of the errors were initially made by individuals representing themselves (pro se litigants), lawyers and paralegals now account for a growing share. In 2023, pro se litigants were responsible for seven of the ten detected hallucination cases, while lawyers were at fault in three. Last month, legal professionals were at fault in at least 13 of the 23 cases in which AI errors were discovered. Charlotin noted on his website that mistakenly citing hallucinated cases has become commonplace.

Global Problem

The database includes 10 rulings from 2023, 37 from 2024, and 73 from the first five months of 2025, primarily from the US. Other countries where judges have identified AI errors include the UK, South Africa, Israel, Australia, and Spain.

Punishments Issued

Courts around the world are imposing monetary fines for AI misuse, with sanctions of $10,000 or more in five cases, four of them this year. Many of the individuals involved lack the resources or expertise to conduct thorough legal research. A South African court described an "elderly" lawyer who submitted fake AI-generated citations as "technologically challenged."

High-Profile Cases

Recently, attorneys at top US law firms have been caught citing AI-fabricated cases in high-profile matters. Lawyers at K&L Gates and Ellis George admitted that miscommunication and a failure to verify their work led them to rely on made-up cases, resulting in a sanction of about $31,000.

ChatGPT Most Often Named

In many of the cases in Charlotin's database, the specific AI website or software used was not identified. In some instances, judges concluded AI had been used despite denials. When a specific tool was named, however, ChatGPT was the most frequently mentioned.