Case Insights
- Despite judicial warnings and sanctions dating back to 2023 in Mata v. Avianca, attorneys continue submitting AI-generated documents with fabricated citations to courts nationwide
- Recent cases show AI misinformation has spread beyond legal briefs to expert testimony, with even state Attorney General offices falling victim to hallucinated citations
- California’s adoption of new AI rules for judges and court staff signals an urgent need for comprehensive AI compliance training across the legal profession
By Samuel Lopez – USA Herald
The legal profession finds itself at a critical juncture as mounting cases of AI-generated hallucinations continue to plague courtrooms across the nation, despite clear judicial precedents and escalating sanctions. What began as isolated technological mishaps has evolved into a systemic crisis threatening the integrity of legal proceedings and the credibility of practitioners nationwide.
The Persistent Pattern of AI Negligence
The landmark case of Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 466 (S.D.N.Y. 2023), served as what many hoped would be a definitive wake-up call for the legal profession. The court’s sanctioning of an attorney for submitting fake, AI-generated legal citations should have been the shot heard round the legal world. Yet, despite widespread publication and commentary, the pattern continues with disturbing regularity.
Recent cases demonstrate that the problem has not only persisted but expanded in scope. In Park v. Kim, 91 F.4th 610, 614–16 (2d Cir. 2024), another attorney faced potential disciplinary action for including fabricated AI-generated citations. The Missouri Court of Appeals went further in Kruse v. Karlan, 692 S.W.3d 43, 53 (Mo. Ct. App. 2024), dismissing an entire appeal due to multiple fake citations generated by artificial intelligence.