Legal IT Professionals
Artificial intelligence (AI) is a growing presence in today’s corporate legal and compliance departments, and for good reason. With ever-increasing pressure on GCs and compliance officers to demonstrate value, any tool that increases efficiency and lowers costs without sacrificing quality or accuracy will be of great interest. But the growth of AI brings with it a duty to ensure the technology is used ethically, which creates unique responsibilities for in-house counsel and compliance officers.
At the core of the problem is that AI tools are not ready-made to solve an organization’s problems out of the box. Legal and compliance teams must train AI programs to analyze information and make decisions, just as they would train any employee, and in doing so their own biases and blind spots may creep in. Given how opaque an AI’s algorithms can be, the system might then make decisions influenced by those biases and blind spots without any indication that this is happening. That could lead to a whole host of ethical issues once the AI is put to work.
To address this, in-house legal and compliance teams must institute effective processes and systems to ensure their AI tools operate ethically and fairly. Fortunately, in-house attorneys and compliance officers can be proactive in how they train their AI to make fair and balanced decisions. This article outlines some steps to take.