Harvard Law Today, “Justice is not justice if it is a desiccated calculating machine”

Legal experts discuss the impact of artificial intelligence on elections, the law and justice

There was the phony robocall from President Joe Biden asking New Hampshire voters not to cast a primary ballot. There was the doctored image of Taylor Swift endorsing Donald Trump, and the fake Kamala Harris ad rife with misleading information. But did that content, altered with the help of artificial intelligence, change voters’ minds in this election?

It’s likely too soon to tell, according to two legal experts well-versed in law and politics in the United States and the United Kingdom who discussed the impact of AI on justice systems and democracy at a recent Harvard Law School talk, “AI, the Law, and the 2024 Election.”

During the lunchtime discussion, Nicholas Stephanopoulos, Kirkland & Ellis Professor of Law at Harvard Law School, and Sir Robert Buckland, former lord chancellor and justice secretary of the United Kingdom, agreed that AI can help streamline some legal processes as long as it is used with caution and careful human oversight. They also acknowledged that the impact of AI on the U.S. presidential election and its potential to affect future races need to be studied in greater detail.

“The known examples of AI-produced disinformation in the 2024 election are pretty paltry,” said Stephanopoulos, who has spoken and written about the voting shifts among the American electorate based on income, education level, race, and geography. He said it was too soon to know if AI was also a factor in helping drive voter behavior more broadly, or whether the impact of AI-generated falsehoods was “greater than or different from the historical impact of ‘regular old political lies’ spread in the days before social media.”

While open to being convinced, Stephanopoulos said at the Nov. 20 session that he wanted “to see a lot more proof before I would leap to regulations of AI that I wouldn’t support for newspapers or television or speeches by politicians. This could be some kind of paradigm shift, but I don’t think that’s evident at this point, and so I would advise caution until we have more information.”

Buckland agreed, adding the caveat that people are becoming skeptical of everything due to the “background noise” of disinformation. “Whether it’s real or not, perception is everything. And there’s a whole cadre of people out there who will just not believe anything they hear or see, even though it’s patently, accurately true. And I think that’s deeply worrying,” said Buckland. “That means there’s a whole section of people [who] are very hard to reach.”

Discussing how AI might ease the severe backlog of cases in Britain and the burden on overstretched judges, Buckland said he doesn’t consider technology “a quick fix,” but he thinks its use “in a measured, ethical way could well help in the administration of justice quite significantly.”


Buckland, who is currently a senior fellow at Harvard Kennedy School studying the impact of AI and machine learning on the ethics of administrative justice, said he sees augmented decision-making — the use of technology to supply analysis, facts, and recommendations to decision makers — as one possible way forward. “Sentencing now in England and Wales is quite a formulaic exercise with guidelines that you have to follow. It’s a bit like a decision tree, and that can take time. Immediately, I think minds are turning to whether or not augmented decision-making can indeed help speed up that process for busy judges.”
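To make that “decision tree” analogy concrete, here is a minimal sketch in Python of what a guideline-style lookup might look like. Everything in it, including the culpability and harm categories and the month ranges, is a hypothetical placeholder rather than the actual Sentencing Council guidelines; the point is only that a rules-based structure of this kind is the sort of formulaic step an augmented tool could surface quickly for a judge, who would still weigh the factors and make the final decision.

```python
# Illustrative sketch only: a toy, decision-tree-style guideline lookup.
# The categories and month ranges below are invented placeholders, not the
# real sentencing guidelines for England and Wales.

# (culpability, harm) -> (starting point, (low, high)) in months -- hypothetical
GUIDELINE_GRID = {
    ("A", 1): (36, (24, 48)),
    ("A", 2): (24, (18, 36)),
    ("B", 1): (18, (12, 30)),
    ("B", 2): (12, (6, 18)),
    ("C", 1): (9, (6, 12)),
    ("C", 2): (6, (3, 9)),
}


def suggest_starting_point(culpability: str, harm: int) -> dict:
    """Look up a guideline starting point and range for a judge to adjust.

    This is a recommendation step, not a decision: aggravating and mitigating
    factors, and the final sentence, remain with the human judge.
    """
    starting_point, sentence_range = GUIDELINE_GRID[(culpability, harm)]
    return {
        "starting_point_months": starting_point,
        "range_months": sentence_range,
    }


if __name__ == "__main__":
    # Example: medium culpability ("B"), lower harm (2) under this toy grid.
    print(suggest_starting_point("B", 2))
```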

More at https://hls.harvard.edu/today/stephanopoulos-buckland-discuss-the-impact-of-ai-on-justice-systems-and-democracy/