Picture this: You’re reading a judicial opinion that perfectly analyzes complex precedent, beautifully articulates legislative intent, and arrives at a thoughtful conclusion. The writing is crisp, the reasoning sound. Then you discover it was drafted primarily by AI, with the judge serving as editor and ultimate decision maker.

Does that change how you feel about the opinion?

As ChatGPT and other LLMs become ubiquitous in law firms (with proper oversight), we’re inching toward an inflection point. Lawyers are already using AI to draft documents without disclosure. But judges… that’s different, isn’t it? Or is it?

If a judge’s AI-assisted opinion contains hallucinations, what then? If we accept AI-drafted briefs without disclosure, should judicial opinions be any different? After all, judges, like lawyers, are responsible for what bears their signature.

But are we ready for AI to help write the very decisions that shape our law? I'm not talking about research or summarization, but the actual judicial reasoning itself.

I suspect your gut reaction to this scenario says a lot about the future of AI in our justice system.

For more posts like this, visit www.judgeschlegel.com.
