The systems and processes your legal team and broader organization have in place are crucial if you're to avoid the kind of disaster widely reported last week, when a lawyer using a generative AI tool cited cases in a federal personal injury lawsuit that were later flagged as fake.
“To leverage AI, you have to get your house in order, so knowledge management in some ways becomes more important in the near-term,” Mary O’Carroll, chief community officer at digital contracts company Ironclad, said last month in a Corporate Legal Operations Consortium (CLOC) Global Institute session.
The CLOC panel preceded the fake-cases incident, but the group, which included OpenAI General Counsel Jason Kwon, anticipated exactly this scenario: people rushing to use generative AI tools without properly integrating them into their broader systems and setting up controls.
“When you’re looking at generative AI, is it telling you the truth?” said Christina Wojcik, director of innovation and technology for Citi’s global legal department. “How do you validate it’s actually telling you the truth and you have confidence over the truth it’s telling you?”
Enterprise-level complexity
A big part of Wojcik’s job at Citi, a company with hundreds of thousands of employees and a legal team of more than 2,000 people, is to make sure any technology that’s adopted includes guardrails to protect and validate information while letting those who need it get access to it.
“You don’t know what an enterprise is until you have tens of millions of documents that are legacy and millions of documents that could be created on an annual basis,” she said.
Read the full piece at https://www.legaldive.com/news/knowledge-management-generative-ai-disasters-fake-cases-legal-operations-GC-tech/651975/