Article: Library of Congress Offers AI Legal Guidance to Researchers

Researchers testing generative AI systems can use prompt injection, re-register after being banned, and bypass rate limits without running afoul of the Digital Millennium Copyright Act.

In a net positive for researchers testing the security and safety of AI systems and models, the US Library of Congress ruled that certain types of offensive activities — such as prompt injection and bypassing rate limits — do not violate the Digital Millennium Copyright Act (DMCA), a law used in the past by software companies to push back against unwanted security research.

The Library of Congress, however, declined to create an exemption for security researchers under the fair use provisions of the law, arguing that an exemption alone would not be enough to provide security researchers safe harbor.

Overall, the triennial update to the legal framework around digital copyright works in security researchers’ favor, as does having clearer guidelines on what is permitted, says Casey Ellis, founder of and adviser to the crowdsourced penetration testing service Bugcrowd.

“Clarification around this type of thing — and just making sure that security researchers are operating in as favorable and as clear an environment as possible — that’s an important thing to maintain, regardless of the technology,” he says. “Otherwise, you end up in a position where the folks who own the [large language models], or the folks that deploy them, they’re the ones that end up with all the power to basically control whether or not security research is happening in the first place, and that nets out to a bad security outcome for the user.”

Security researchers have increasingly gained hard-won protections against prosecution and lawsuits for conducting legitimate research. In 2022, for example, the US Department of Justice stated that its prosecutors would not charge security researchers with violating the Computer Fraud and Abuse Act (CFAA) if they did not cause harm and pursued the research in good faith. Companies that sue researchers are regularly shamed, and groups such as the Security Legal Research Fund and the Hacking Policy Council provide additional resources and defenses to security researchers pressured by large companies.

In a post to its site, the Center for Cybersecurity Policy and Law called the clarifications by the US Copyright Office “a partial win” for security researchers — providing more clarity but not safe harbor. The Copyright Office operates under the purview of the Library of Congress.

“The gap in legal protection for AI research was confirmed by law enforcement and regulatory agencies such as the Copyright Office and the Department of Justice, yet good faith AI research continues to lack a clear legal safe harbor,” the group stated. “Other AI trustworthiness research techniques may still risk liability under DMCA Section 1201, as well as other anti-hacking laws such as the Computer Fraud and Abuse Act.”

Read the full article

https://www.darkreading.com/cyber-risk/library-congress-ai-legal-guidance-researchers