Some Legal Ethics Quandaries on Use of AI, the Duty of Competence, and AI Practice as a Legal Specialty

EDRM – Electronic Discovery Reference Model

by Ralph Losey
Image: Ralph Losey with his GPT Visual Muse.

This blog considers some of the ethical issues of competence that arise when a lawyer or law firm uses generative AI to assist in rendering services. Prior to the advent of artificial intelligence, the legal profession devised many ways to meet the duty of competence, including continuing education and the creation of legal specialties. The profession is now supplementing these methods with the use of AI. This raises new ethical and practical issues of competence discussed here. All words by human Ralph Losey alone without AI assistance. All images created by Ralph using AI.

The legal specialty tradition allows different client needs to be met by different attorneys. This involves splitting legal work into subareas. A law firm with a number of different specialists uses different lawyers to perform particular tasks as a team effort. Litigation is one example today: there are often attorneys who specialize in pleadings and motions practice, others who specialize in discovery or e-discovery, others who specialize in the conduct of trials, and still others who only handle appeals. The specialist attorneys collaborate with each other, and with the prime client-interface lawyer, to perform the work competently. This allows for both very high quality work and more efficient, cost-effective services in complex cases.

Many simple cases today are still handled by a solo general practice attorney, often economically and sometimes with good quality too, but not always. Could AI help both law firms and solo practitioners? This article addresses the ethical issues of using AI as a specialist co-counsel. When and how can generative AI be used by lawyers, as a collaborator, to meet their ethical duty of competent legal services?

Specialties and Complex Legal Work

The legal practice of specialization and collaboration allows a lawyer to competently represent a client in very complex situations. These are situations where one lawyer’s skills alone would not be adequate to meet their duty of competence. Competence is required by Rule 1.1 of the ABA Model Rules of Professional Conduct. That is one reason that law firms have evolved and grown ever larger to include attorneys with a variety of legal skills. This allows lawyers to more easily assist each other in the representation of the firm’s clients.

In today’s world lawyers routinely delegate some of the work involved in representing a client to other attorneys with skills in a field they themselves lack. Some lawyers may not like to hear this, but the truth is, no one lawyer knows it all. For example, a corporate lawyer specializing in mergers will routinely delegate the complex electronic discovery issues they encounter. Moreover, few litigation lawyers would dare approach estate planning or tax issues, and vice versa. Will the advent of AI change this?

Task Splitting is also a Prompt Engineering Strategy

This strategy of splitting tasks is also one of the six strategies recommended by OpenAI for best-practice use of its generative AI. See Transform Your Legal Practice with AI: A Lawyer’s Guide to Embracing the Future (OpenAI’s third strategy is “Splitting Complex Tasks Into Simpler Subtasks”). Because the strategy is already familiar, lawyers can easily learn this particular prompt engineering strategy for the competent use of generative AI. The idea in AI is to split up a single task into subparts. That makes the task easier for the generative AI to understand and follow, which in turn improves the quality of the AI’s output and reduces errors and hallucinations.
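For readers who use the API directly, the strategy can be sketched in a few lines of Python. This is a minimal illustration only: the matter description and the four subtasks below are hypothetical examples, not a prescribed workflow, and the code simply builds the chained prompts rather than calling any model.

```python
# Sketch of OpenAI's "split complex tasks into simpler subtasks" strategy.
# The task and subtasks are hypothetical examples for illustration.

COMPLEX_TASK = "Evaluate our client's position in a breach-of-contract dispute."

SUBTASKS = [
    "Summarize the key facts from the engagement memo.",
    "List the elements of a breach-of-contract claim in the governing jurisdiction.",
    "Map each element to the summarized facts, noting any evidentiary gaps.",
    "Draft a short risk assessment based only on the element-by-element mapping.",
]

def build_prompts(task: str, subtasks: list[str]) -> list[str]:
    """Turn one complex task into a sequence of narrow prompts.
    Each prompt restates the overall matter plus a single subtask, so the
    model answers one focused question instead of the whole matter at once."""
    return [
        f"Overall matter: {task}\nStep {i} of {len(subtasks)}: {sub}"
        for i, sub in enumerate(subtasks, start=1)
    ]

prompts = build_prompts(COMPLEX_TASK, SUBTASKS)

# Each prompt would then be sent to the model in turn (for example, via the
# OpenAI chat-completions API), with each answer fed into the next step's
# context, and a human lawyer reviewing every intermediate result.
for p in prompts:
    print(p)
```

The key design point is the same one the article makes about human specialists: each prompt carries only one narrow question, which narrows the model’s scope and makes its output easier for the supervising lawyer to verify step by step.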

That is like human lawyers splitting up a single task – litigation – into many subtasks. That also reduces minor errors, and reduces the colossal, near-hallucinatory mistakes which humans, much like AI, can sometimes make. It typically happens to human lawyers when they are acting far out of their depth. The same thing tends to occur with generative AI.

Questions Raised by Lawyer Use of Generative AI to Meet their Duty of Competence

What happens when a lawyer seeks to meet their ethical duty of competence by delegating some of their work to an AI? It appears that more and more lawyers are trying this now. There are many reasons for this. First of all, generative AI and various LLM applications have knowledge of almost all legal fields, all specialties. Plus, many work for free, or nearly so, and do not request a share of the client’s fee, as a human specialist lawyer would. Not only that, they make the human lawyer look good; well, usually.

To get away with using AI to meet your duty of competence in a particular matter, lawyers must, however, first be competent in the use of AI. They must know how to properly delegate work to it. For example, should they use a Centaur method or go full Cyborg? See From Centaurs To Cyborgs: Our evolving relationship with generative AI (April 24, 2024).

Legal professionals must know all about GPT errors and hallucinations and not be fooled by false claims to the contrary. They should know what kinds of prompts and methods are most likely to generate errors and hallucinations, and what to do about them. They should know about basic prompt engineering strategies, including splitting complex tasks.

There are a host of questions raised concerning competence and the use of AI by legal professionals. Here are some thoughts on competence and the work-splitting strategy, spoken through an AI image with a Nigerian-accented voice that I like. The transcript follows below. Here you will find many questions. None of them have simple answers.

Cyborg woman
Left-click the image to see the YouTube video. Image by Ralph Losey using his custom GPT Visual Muse.

Transcript of the Video

Hello, human friends. Let’s talk about the legal ethics issues inherent in this strategy. In law, you almost always split your work into many different tasks. You have to do that because your work is usually very complicated. Lawyers long ago figured out that the best way to perform complex actions like litigation is to split the work into subtasks. For instance, a lawsuit usually begins with talking to your client. Next, the pleading is prepared and then timely filed with the proper court. Then there may be motions and discovery and arguing to a judge. Ultimately, if the process continues, there may be a trial and then an appeal. Each step is an important part of the whole process of dispute resolution.

In today’s world, there are attorneys who specialize in each of these tasks. Some, for instance, are great at discovery, but not so good at trials. One ethics issue is when a lawyer should bring in another lawyer to help them with one or more of the tasks. What should you do if you are not competent in all parts of litigation? Ethics rules require that a lawyer have the necessary skills and knowledge required to do their work competently. Either that, or they should bring in another lawyer who is competent. For instance, many trial lawyers routinely bring in an appellate law specialist to help with appeals. Sometimes the help will be behind the scenes and the trial lawyer remains in charge. Other times, the appellate lawyer makes an appearance and handles everything, and the trial lawyer takes the second chair, just to help.

What happens if a lawyer uses an AI as the expert to handle a particular subtask in which that lawyer is inexperienced? Obviously, the AI cannot just take over and appear in court. Not yet anyway, so the human lawyer remains in the first chair, but has a whispering AI expert to help them. That can work, but only if the human checks everything the AI does.

Plus, one other key condition must be met. Do you know what that is? The human-AI team must together be competent. They must meet the minimum standards of professional skills and knowledge required by legal ethics.

Here are more questions for you to ponder. Could a lawyer bring a GPT chatbot into a court to help them? Could the AI whisper into the lawyer’s ear to give them advice? For instance, could an AI suggest how to respond to a judge’s question? What if the AI also explained the reason for the suggestion to the human lawyer’s satisfaction? How about this? Should the judge allow the AI to speak directly to them? Should the judge ask the AI questions? There are so many new and interesting questions ahead of us.

Could Use of AI Become a Specialty?

Expertise in artificial intelligence is already a legal specialty for some lawyers. I predict this new specialty in generative types of AI will quickly grow in popularity and importance. It requires significant skill and experience to use generative AI competently. Some argue AI is just a passing fad; it is hard to take those arguments seriously. Others admit it is here to stay, but argue the need for this specialty will quickly pass: the software will get so good, so fast, that there will be no need for AI specialists. Typically there is an economic motive for this argument, as it is usually made by vendors and their experts. But putting motives aside, the argument goes that sometime in the near future the proper use of generative AI, and other forms of AI, will become so easy that any lawyer can use it.

The hard now, but easy soon argument often uses the analogy of email. They predict that AI use will become like email use. At first, in the eighties and early nineties, only a few tech expert attorneys could send and receive emails, typically through CompuServe, The Source, and the like. With the advent of the internet, that became easier. Today almost all attorneys can send and receive emails. The same thing happened with word processing, although perhaps fewer attorneys today are in fact expert at word processing, with many still yearning for the days of tape dictation and secretaries. You know who you are. Many are my friends. I am pretty sure some still have their secretaries print and send emails for them, and ask about faxing too. In the medical field, in Florida at least, the use of fax machines is still widespread and often used to send paper medical records. Every medical office uses fax machines all of the time; a few law firms do too. Hey, I studied the patent for fax machines as one of my first assignments as a young lawyer in 1980. Incredibly, it is still widely used today.

The hard now, but easy soon argument does have some merit. Email is now far easier than it was in 1980, and any attorney can do it. Most do it very well with no training at all; they grew up with it. But I do not think email and AI are comparable. I was practicing law and began using email in 1980, while the fax machine was still just a patent. As one of the first lawyers to use email, faxes and word processors (first Wang, then WordPerfect), I can say with confidence that these technologies are not at all comparable to artificial intelligence, not even close. So the argument is flawed. Even if you accept exponential change, which I do, I am very skeptical of AI ever becoming so easy that every lawyer can use it the way they now use email, faxes and word processors.

Artificial intelligence is a far different creature. It is far more complex and far more difficult to learn how to use. For example, compare the review of paper documents in discovery with predictive coding review of ESI. Predictive coding is a type of AI: active machine learning for binary classifications. It is easier to use than the new LLM types of generative AI. Yet the vast majority of attorneys still do not use predictive coding, although to specialists in predictive coding, many of whom have been using it for well over ten years now, it seems pretty darn easy. Admittedly, it did start off challenging, but we figured out the best methods to use predictive coding. Within ten years it was so easy as to be boring for me (and many others). That is one reason I moved on to generative AI. It is a breakthrough technology with new challenges and many open-ended legal uses, not just discovery.

But look around in the law today. Years after Judge Peck approved the use of predictive coding in Da Silva Moore in 2012, the legal profession has still not fully adopted it. Most discovery today is still done with keywords (a method that started in the 1980s), or worse yet, done manually by linear review. Incredible but true. Even worse, a lot of it is still done with paper. You know who you are, and you are legion. So please, do not talk to me about AI becoming so easy to use that even a partner can do it. Change is coming much faster than ever before, but it still comes relatively slowly in the legal profession. Bottom line: legal specialization in the use of generative AI is here to stay for the next twenty to thirty years, at least.

To summarize, as generative AIs love to do, there are two main reasons that special AI tech skills are here to stay, no matter how fast the software improves. Number one, the improvements in generative AI will create as many new complexities and challenges as they solve. Overall, it will not become easier, because the AI will keep on doing new and even more incredible things. Sure, the summary part may be easy, or easier, but what about the new skills that the next versions of AI will bring? For example, how will the new expert panels work? The AI judges? The use of AI will change, and fast, and the learning curve will have to speed up too. Only specialists will be able to keep up.

Number two, the parts of generative AI that do become easy in the future, such as, perhaps, legal research, will still be done better and faster by specialists. It will be like predictive coding in e-discovery. Although it is now almost boringly simple to specialists, and many other lawyers could learn it, they do not. The pros still do most of this work, even after it has become easy, because the specialists are still much faster and make fewer mistakes than the dabblers. Ah, the stories on this I could tell, but don’t worry, I won’t.


The words in the following avatar video are by Ralph, not an AI. But the image was generated by AI using Ralph’s prompts, as was the voice. A transcript follows the video.

Sci fi male image with intense look
Left-click the image to see the YouTube video. Image by Ralph Losey using Visual Muse.

Transcript of the Centaur Video

This is Ralph Losey in one of his avatar forms. I want to conclude this blog with final comments on AI competence and whether AI specialists will continue to be needed in the future.

I am sure the software will improve; GPT-5 will be smarter than GPT-4. But I am also sure that, insofar as legal use is concerned, as opposed to making a new website or drafting a sales email, the use of AI by lawyers will still require extensive training. It will still require skill and experience to use competently. There will still be errors and hallucinations, even with next-generation AI, especially in the hands of amateur jockeys. That is just how predictive word and image generation works. Perfection is a myth!

Prompt engineering will, for many years to come, be a critical skill for any attorney who wants to use AI as part of their legal work. It will be of great importance, imperative even, for anyone who wants to specialize in the professional use of generative AI. The competency requirement of Rule 1.1 of the Model Rules of Professional Conduct demands it for law. Other professions, such as medicine (AMA Code of Medical Ethics), have similar or even more stringent requirements. AI is a far, far more powerful tool than email and word processing. It must be used skillfully and carefully to avoid harm to your clients.

Dabblers will continue to get sanctioned; specialists will not. Put another way, a little knowledge is a dangerous thing. Goodbye. I have to trot off and talk to my Cyborg, Wilbur! Do you remember him?