AGs and AI: Transparency is Key

As we have previously reported, State Attorneys General have joined other enforcers in addressing the latest AI technology. At the recent 2023 NAAG Consumer Protection Spring Conference, two separate panels discussed how the AGs are focusing on AI.

When asked about concerns with AI, New Hampshire Attorney General Formella explained that technology often moves faster than government. He is working to engage with the private sector to better understand what emerging technologies are doing, and he encourages an open line of communication. New York’s First Assistant Attorney General, Jennifer Levy, noted that her office has brought recent actions involving algorithmic decision-making, including: 1) working with the state education department to put guardrails around a contract with a vendor using facial recognition for school discipline, given the potential for algorithmic bias; 2) suing Credit Acceptance Corporation alongside the CFPB, alleging the company used algorithms to skew the ratio of principal to interest in auto loans; and 3) settling with lead generators that supplied fake public comments on the repeal of net neutrality. She echoed that laws don’t always keep up with practices.

Later in the day, attendees were treated to a panel on “Artificial Intelligence & Deep Fakes: The Good, The Bad & The Ugly.” Kashif Chand, Chief of the New Jersey Division of Law’s Data Privacy & Cybersecurity Section, co-moderated with Patrice Malloy, Chief of the Multistate and Privacy Bureau of the Florida Attorney General’s Office; they were joined by panelists Santiago Lyon, Head of Advocacy and Education for the Adobe-led Content Authenticity Initiative, and Serge Jorgensen, Founding Partner & CTO of the Sylint Group. Chand began by explaining that, years ago, states relied on general UDAP laws to address new technologies; now many states have technologists on staff and additional laws to handle privacy and technology issues. To address deepfakes, for instance, states can bring misrepresentation and deception claims as well as unfairness and unconscionability claims. Turning to AI, Chand focused on whether consumers are being told the intended use of the AI. Creators may make significant omissions that lead consumers to believe something will happen when it will not, which could give rise to an unfairness claim. Chand pointed to Italy’s block of ChatGPT over potential data processing issues and children’s access, which relied not on new laws but on the GDPR generally. Even states without specific data privacy laws can rely on UDAP theories to address these same concerns.

Lyon described the importance of provenance to the future of AI: the Internet must allow for transparency and labeling of content’s origins so that authenticity can be determined. Jorgensen echoed that consumers may not even know when AI is in use, such as when meeting software transcribes notes or an algorithm makes hiring decisions. Malloy raised the question of how consumers can consent if they don’t know the technology is being used at all. Jorgensen responded that developers can build in security and privacy by design, and that the industry will have to think more about this.
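
To make the provenance idea concrete, here is a minimal illustrative sketch in Python of what a content label might carry. The `ProvenanceRecord` structure and its field names are our own assumptions for illustration only; they are not the Content Authenticity Initiative’s actual C2PA schema, which among other things cryptographically signs its records.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical provenance record -- field names are illustrative,
# not the actual C2PA / Content Authenticity Initiative schema.
@dataclass
class ProvenanceRecord:
    creator: str        # person or organization that produced the content
    tool: str           # software used (e.g., an editor or AI model)
    ai_generated: bool  # disclose whether AI produced or altered the content
    created_at: str     # ISO-8601 timestamp

def label_content(content: bytes, record: ProvenanceRecord) -> dict:
    """Bundle content with a human- and machine-readable provenance label."""
    return {"content": content.hex(), "provenance": asdict(record)}

labeled = label_content(
    b"example image bytes",
    ProvenanceRecord(
        creator="Acme Media",
        tool="ExampleGen v1 (generative AI)",
        ai_generated=True,
        created_at=datetime.now(timezone.utc).isoformat(),
    ),
)
print(json.dumps(labeled["provenance"], indent=2))
```

The point, as Lyon framed it, is that the label travels with the content, so a consumer or platform can check where the content came from and whether AI was involved.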

Lyon and Jorgensen both raised concerns that training data sets could become tainted with copyrighted or illicitly obtained data. However, as the panelists pointed out, if more limits are placed on data sets, it is an open question whether certain AI models can gather enough data to generate useful output. Chand emphasized that transparency is key so consumers understand what they are giving up and what they are getting in return. He also observed that once a company makes claims about its data practices, those claims are hard to verify other than through white-hat hackers and researchers. And as AI learns more, businesses need to monitor how it is being used to ensure they do not engage in deceptive trade practices.
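
As a concrete example of the monitoring point, the sketch below shows one common pattern: wrapping model calls in an audit log so a business can later review what its AI actually did. The `audited_predict` wrapper and the toy model are hypothetical illustrations, not any particular vendor’s API.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited_predict(model_fn, model_version: str, features: dict):
    """Call a model while recording what went in and what came out.

    `model_fn` is any callable scoring function; the logged record gives
    compliance teams a trail to review for drift or unfair outcomes.
    """
    decision = model_fn(features)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": features,
        "decision": decision,
    }))
    return decision

# Toy model: approve when the score clears a threshold.
def toy_model(features: dict) -> str:
    return "approve" if features.get("score", 0) >= 0.7 else "review"

audited_predict(toy_model, "toy-model-0.1", {"score": 0.82})
```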

With misinformation becoming tougher to spot, the panelists emphasized the need for increased transparency and consumer education. Chand noted that future generations will have a better understanding of technology and of controls over their privacy as they benefit from today’s regulations and education.

Based on this panel, businesses adopting AI should consider the following questions:

  • How will you disclose the use of AI technology?
  • How will you educate consumers about the potential risks, benefits, and limitations?
  • How can you consider consumer choice when training AI?
  • How will you monitor how your AI is evolving?
  • How will you prevent potential algorithmic bias?
  • How will you protect children’s data?
  • How will you protect proprietary or copyrighted data?

While the answers will differ depending on each business’s specific situation, remember that transparency with consumers and the public is key to staying off enforcers’ radar.