Hallucinating AI at it again in the courts

From Robert Ambrogi’s LawSites blog:

Not Again! Two More Cases, Just This Week, of Hallucinated Citations in Court Filings Leading to Sanctions

For all the discussion of how generative AI will impact the legal profession, maybe one answer is that it will weed out the lazy and incompetent lawyers.

By now, after several cases in which lawyers landed in hot water for citing hallucinated cases generated by ChatGPT, most notoriously Mata v. Avianca, and after all the publicity those cases received, you would think most lawyers had gotten the message not to rely on ChatGPT for legal research, at least not without checking the results.

Yet it happened again this week, not once but in two separate cases: one in Missouri and the other in Massachusetts. In fairness, the Missouri case involved a pro se litigant, not a lawyer, but that pro se litigant claimed to have gotten the citations from a lawyer he hired through the internet.

The Massachusetts case did involve a lawyer, as well as the lawyer’s associate and two recent law school graduates not yet admitted to practice.

In the Missouri case, Kruse v. Karlen, the unwitting litigant filed an appellate brief in which 22 of the 24 cases cited were fictitious. Not only that, but they were fictitious in ways that should have raised red flags, including generic, made-up-sounding names such as Smith v. ABC Corporation and Jones v. XYZ Corporation.

In the Massachusetts case, Smith v. Farwell, the lawyer filed three separate legal memoranda that cited and relied on fictitious cases. He blamed the mistake on his own ignorance of AI and attributed the inclusion of the cases to two recent law school grads and an associate who worked on the memoranda.

Let’s dive into the details.

Read here: https://www.lawnext.com/2024/02/not-again-two-more-cases-just-this-week-of-hallucinated-citations-in-court-filings-leading-to-sanctions.html