A Stanford professor who claims to be an expert on how “people use deception with technology” has been accused of using an AI chatbot to draft an affidavit submitted in support of an anti-deepfake law in Minnesota.
As the Minnesota Reformer reports, lawyers challenging the law on behalf of a far-right YouTuber and Republican state representative Mary Franson found that Stanford Social Media Lab founding director Jeff Hancock’s affidavit included references to studies that don’t appear to exist, a telltale sign of AI text generators, which often “hallucinate” facts and reference material.
While it’s far from the first time a lawyer has been accused of citing made-up court cases generated by AI chatbots like OpenAI’s ChatGPT, it’s an especially ironic development given the subject matter.
The law, which bans the use of deepfakes to influence an election, was challenged in federal court by Franson on the grounds that such a ban violates First Amendment rights.
But in an attempt to defend the law, Hancock, or possibly one of his staff, appears to have stepped in it, handing the plaintiffs’ attorneys a golden opportunity.
Law Fare
One study cited in Hancock’s affidavit, titled “The Influence of Deepfake Videos on Political Attitudes and Behavior,” doesn’t appear to exist.
“The citation bears the hallmarks of being an artificial intelligence (AI) ‘hallucination,’ suggesting that at least the citation was generated by a large language model like ChatGPT,” Franson’s attorneys wrote in a memorandum. “Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question.”