A disturbing trend is sweeping through courtrooms worldwide as lawyers increasingly rely on artificial intelligence to draft legal documents, only to discover the technology is inventing fake cases and citations. A growing network of legal professionals has taken it upon itself to track and expose these AI-generated errors, documenting more than 500 instances of what they call "AI slop" in court filings.
The Rise of Legal Vigilantes
Robert Freund, a Los Angeles-based attorney, represents this new breed of legal watchdog. Earlier this year, Freund identified a Texas bankruptcy court motion that referenced a 1985 case called Brasher v. Stewart. The problem? No such case exists. Artificial intelligence had fabricated the citation, along with 31 others in the same document.
The judge handling the case didn't take the error lightly. In a strongly worded opinion, the judge referred the lawyer to the state bar's disciplinary committee and mandated six hours of specialized AI training. This case became one of many that Freund and his colleagues have added to a global database tracking legal AI misuse.
Freund explained his motivation: "These cases are damaging the reputation of the entire legal profession. We need to bring attention to this problem before it undermines the justice system."
The Global Scale of AI Legal Misuse
Damien Charlotin, a French lawyer and researcher, created a comprehensive online database in April 2025 to track these incidents globally. What started as a trickle has become a flood: where he initially found three or four examples a month, Charlotin now receives that many reports in a single day.
The database has documented 509 cases to date, with contributions from lawyers across multiple countries. These legal vigilantes use sophisticated legal research tools like LexisNexis, setting up alerts for keywords including "artificial intelligence," "fabricated cases," and "nonexistent cases" to catch new instances as they emerge.
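In spirit, these alerts amount to a simple keyword scan over newly published opinions. The sketch below is purely illustrative and assumes a hypothetical feed of opinion text; it does not reflect any real LexisNexis interface:

```python
# Illustrative sketch of the keyword-alert approach described above.
# The opinion feed and its fields are assumptions, not a real API.

ALERT_TERMS = [
    "artificial intelligence",
    "fabricated cases",
    "nonexistent cases",
]

def matching_terms(opinion_text: str) -> list[str]:
    """Return the alert terms that appear in an opinion, case-insensitively."""
    lowered = opinion_text.lower()
    return [term for term in ALERT_TERMS if term in lowered]

def scan_feed(opinions: list[dict]) -> None:
    """Flag opinions whose text mentions any alert term.

    `opinions` is a hypothetical feed: dicts with 'case_name' and 'text'.
    """
    for opinion in opinions:
        hits = matching_terms(opinion["text"])
        if hits:
            print(f"ALERT: {opinion['case_name']} mentions {', '.join(hits)}")

# Example run with stand-in data (not real filings):
scan_feed([
    {"case_name": "In re Example Debtor",
     "text": "The brief relies on nonexistent cases generated by a chatbot."},
    {"case_name": "Doe v. Roe",
     "text": "A routine discovery dispute."},
])
```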
Stephen Gillers, an ethics professor at New York University School of Law, expressed concern about the trend: "Lawyers everywhere should be ashamed of what members of their profession are doing. This isn't just about technical errors—it's about fundamental failures in professional responsibility."
Courts Respond with Fines and Discipline
As the problem escalates, courts are beginning to establish clear consequences. While judges generally agree that using AI for legal research is acceptable, they emphasize that lawyers bear ultimate responsibility for verifying the accuracy of their filings.
The legal profession has become a particular hotbed for AI blunders in recent months, according to court filings and interviews with legal scholars. While some errors come from individuals representing themselves without legal training, an increasing number originate from practicing attorneys.
Penalties have included:
- Monetary fines up to $5,000
- Mandatory AI training courses
- Referral to state bar disciplinary committees
- Professional sanctions
One notable case involved Tyrone Blackburn, a New York employment and discrimination lawyer, who used AI to draft briefs containing numerous hallucinated citations. Blackburn initially dismissed allegations about the errors but eventually admitted his mistake and was fined $5,000. His client, whom he was representing pro bono, fired him and filed a complaint with the bar.
Jesse Schaefer, a North Carolina lawyer who contributes to the database, noted the dual nature of the technology: "Chatbots can help self-represented individuals speak in language judges understand, but professionals have higher obligations. The convenience of AI cannot override our duty to verify."
Why the Problem Keeps Growing
Despite increasing awareness and penalties, the volume of AI-generated errors continues to rise. Freund, who has publicly flagged more than four dozen examples this year alone, observed: "Court-ordered penalties are not having a deterrent effect. The proof is that it continues to happen."
The types of errors vary widely. Some filings include completely fabricated cases, while others contain fake quotes from real cases or cite real cases that are irrelevant to the legal arguments being made.
Academic institutions are joining the fight against AI legal misuse. Peter Henderson, a Princeton computer science professor, started his own database and is developing technology to detect fake citations automatically, moving beyond the current hit-or-miss keyword search approach.
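One plausible first pass at such a detector, sketched below under stated assumptions, is to extract reporter-style citations from a brief and verify each against an index of known cases. The regex is a deliberate simplification of real citation grammars, the example citation is invented on purpose, and case_exists() is a hypothetical stub; the tools Henderson is building have not been described in this detail:

```python
import re

# Matches simplified reporter-style citations such as
# "Fabricated v. Example, 123 F.2d 456". Real citation formats are far richer.
CITATION_RE = re.compile(
    r"(?P<case>[A-Z][\w.'-]*(?:\s[\w.'-]+)*\sv\.\s[A-Z][\w.'-]*(?:\s[\w.'-]+)*),\s*"
    r"(?P<volume>\d+)\s(?P<reporter>[A-Za-z.0-9]+)\s(?P<page>\d+)"
)

def case_exists(volume: str, reporter: str, page: str) -> bool:
    """Hypothetical lookup against a case-law index. A real implementation
    might query a public service such as CourtListener; stubbed here to
    always report the citation as unknown."""
    return False

def flag_suspect_citations(brief_text: str) -> list[str]:
    """Return citations in a brief that cannot be matched to a known case."""
    suspects = []
    for m in CITATION_RE.finditer(brief_text):
        if not case_exists(m.group("volume"), m.group("reporter"), m.group("page")):
            suspects.append(m.group(0))
    return suspects

# Example with a deliberately made-up citation:
print(flag_suspect_citations(
    "Relying on Fabricated v. Example, 123 F.2d 456, the motion argues..."
))
```

The design choice here mirrors the shift the researchers describe: rather than waiting for a judge's opinion to mention "nonexistent cases" after the fact, the check runs on the filing itself, before any harm is done.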
Eugene Volokh, a UCLA law professor who blogs frequently about AI misuse on The Volokh Conspiracy and contributes to Charlotin's database, sees a pattern in these incidents: "I like sharing with my readers little stories like this—stories of human folly. They reveal how new technology interacts with human fallibility."
As AI becomes more integrated into legal practice, the tension between efficiency and accuracy continues to challenge the profession. The legal vigilantes tracking these errors emphasize that their goal isn't to shame individual lawyers but to protect the integrity of the legal system itself.