The Law vs AI: Now the legal battles are starting to intensify

With OpenAI’s Voice Engine promising to convincingly replicate an individual’s speech from just a 15-second clip, the focus on AI regulation is sharpening, and legal challenges to the technology are intensifying.

While the astonishing progress toward photorealistic generative video from OpenAI’s Sora has been getting an enormous amount of attention, behind the scenes a lot of legal battles are under way. These involve most of the biggest players in the field of generative AI, including Nvidia and Microsoft, OpenAI’s largest investor, and concern allegations of both copyright violation and defamation.

Several copyright lawsuits are currently under way. Here’s a quick summary.

A group of book authors alleges that Nvidia used pirated copies of their books to train custom chatbots on its NeMo platform. They are seeking damages for lost income and an order forcing Nvidia to destroy all copies of the dataset containing their pirated works.

OpenAI is facing several similar suits, though the plaintiffs there, including the New York Times and several well-known authors such as Sarah Silverman and Christopher Golden, say they have evidence that OpenAI directly copied copyrighted books to train ChatGPT. The NY Times has also alleged that ChatGPT would repeat direct copies of copyrighted NYT content, effectively giving users a way around the NYT paywall.

Google faced a similar copyright suit when it launched its book search, and successfully defended itself by showing that it delivered only snippets to search users, thus encouraging book sales rather than depriving authors of sales revenue. The difference here is that the Times says ChatGPT regurgitated several paragraphs of NYT articles verbatim in a chat. Essentially, the Times is alleging that OpenAI stole and reproduced copyrighted works.

It is telling that, in its response filing, OpenAI does not dispute the Times’ claim that it copied millions of the NYT’s works to train its AI without permission.

Hallucinatory experiences

The Times also provided examples of ChatGPT hallucinations generating realistic-looking fake articles, which has led to another suit.

Hallucinations are not a new phenomenon; lawyers and students alike have been caught using AI-generated text that turned out to be false. Some lawyers have even filed court papers citing cases that an AI chatbot simply invented, whether or not they knew beforehand that the citations were fictional.

Hallucinations have also led to another, more insidious issue.

An AI chatbot cost Air Canada money when it misled a passenger, telling him that he could buy his plane ticket and then apply for a bereavement fare after the funeral. That contradicted Air Canada’s official policy of not allowing refunds after travel, but the company lost the case in small claims court and had to pay the refund.

Some other hallucinations have been outright defamatory, such as when ChatGPT falsely claimed that Australian regional mayor Brian Hood was a criminal. He had his lawyer give OpenAI 28 days to clean up the lies or face a defamation lawsuit. OpenAI filtered out the false statements that time.

Some hallucinations have been even more damaging, and have led to defamation lawsuits. One, against Microsoft, comes from an author who discovered that Bing search and Bing chat falsely labeled him a convicted terrorist, ruining his reputation and, he claims, costing him millions in revenue from sales of his book. Elsewhere, a radio host sued OpenAI, alleging that ChatGPT falsely claimed he had been charged with embezzlement.

Some AI companies are working on the hallucination issue. Nvidia’s NeMo Guardrails software, for example, looks to prevent chatbots from publishing false statements, but its effectiveness is an open question. It appears to rely on prior knowledge of the prompts that generate defamatory responses, which could turn defamation filtering into a game of whack-a-mole.
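To illustrate why a prompt-blocklist approach invites whack-a-mole, here is a minimal sketch in Python. This is not NeMo Guardrails’ actual API; the patterns and function names are hypothetical, standing in for any filter that blocks only prompts already known to elicit defamatory output:

```python
import re

# Hypothetical blocklist of prompt patterns already known to elicit
# defamatory responses. Every new rephrasing requires a new entry.
KNOWN_BAD_PROMPT_PATTERNS = [
    re.compile(r"\bwhat crimes (has|did) .+ commit", re.IGNORECASE),
    re.compile(r"\bis .+ a (criminal|terrorist|fraudster)\b", re.IGNORECASE),
]

def is_blocked(prompt: str) -> bool:
    """Return True if the prompt matches a known-bad pattern."""
    return any(p.search(prompt) for p in KNOWN_BAD_PROMPT_PATTERNS)

def guarded_chat(prompt: str, model_reply) -> str:
    """Refuse known-bad prompts; pass everything else to the model."""
    if is_blocked(prompt):
        return "I can't make claims about named individuals."
    return model_reply(prompt)

print(is_blocked("Is Brian Hood a criminal?"))               # True - caught
print(is_blocked("Tell me about Brian Hood's convictions"))  # False - missed
```

As the last two lines show, a trivially reworded request sails straight past the blocklist: each new phrasing needs a new pattern, which is the whack-a-mole dynamic in miniature.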

There are other solutions in development for preventing chatbots from engaging in this type of overt character assassination, such as detecting linguistic patterns common to defamatory statements in order to filter them out of chatbot outputs. However, such filtering still cannot fact-check those statements, which remains a problem.
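A minimal sketch of that output-side approach, assuming a simple keyword heuristic standing in for a trained classifier (the names and patterns are illustrative, not from any shipping product):

```python
import re

# Illustrative output-side filter: scan the model's reply for the
# linguistic shape of a defamatory claim (an accusation of criminality)
# and suppress the reply if one is found.
ACCUSATION_PATTERNS = [
    re.compile(r"\b(convicted of|charged with|guilty of)\b", re.IGNORECASE),
    re.compile(r"\bis a (criminal|terrorist|fraudster|embezzler)\b", re.IGNORECASE),
]

def looks_defamatory(reply: str) -> bool:
    """Heuristic: flag replies phrased as accusations about someone."""
    return any(p.search(reply) for p in ACCUSATION_PATTERNS)

def filter_reply(reply: str) -> str:
    if looks_defamatory(reply):
        return "I can't verify claims about this person's legal history."
    return reply

print(filter_reply("He was charged with embezzlement in 2019."))  # suppressed
print(filter_reply("He hosts a syndicated radio show."))          # passes through
```

Note that such a filter only recognises the shape of an accusation; whether the accusation is actually true is beyond it, which is why fact-checking remains the open problem.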

The ongoing and likely fallout

While the hallucination-driven defamation issue might be solved with technology, the copyright issue still looms large over the AI industry. The copyright lawsuits facing Nvidia and OpenAI are ongoing, and the outcomes are far from certain. Should the plaintiffs win, statutory damages could be as high as $150,000 per infringed work, and the courts could potentially go so far as to force OpenAI to rebuild its training dataset from scratch, a costly endeavour.

However, even in the unlikely event that these lawsuits end in total victories for the plaintiffs, the overall impact on the AI industry will be relatively small. The industry is huge, and public-facing generative AIs are a relatively small part of it. Given how much more computing power is available now, even retraining models from scratch would not take all that long. Most likely the outcome will be some fines, fees, and stricter licensing agreements.
