SLAW – ESSAY: How Black Is The AI Black Box

Is it just us, or are the SLAW writers the only ones writing with real insight when it comes to AI and the law?

Tim Knight authors this piece, and his concluding paragraph is a great précis of the issue the article tackles:

“leaves both lawyers and research companies fumbling in the dark: Lawyers don’t have a complete picture of what is happening, and research companies are relying on the lawyers to teach their machines.”

Here’s the introduction:

It’s always interesting to me how things can sometimes coalesce and synchronize around an idea. For example, I’ve been thinking about a comment that Nicole Shanahan made in a recent collection of presentations delivered at Codex, the Stanford Center for Legal Informatics. She was talking about “lawyering in the AI age” and touched on “predictive policing” where the computer is used to predict human behaviour. Based on her experience with how algorithms and data work Shanahan characterizes this as “not really a rational goal.”

However, she notes, there are products on the market today and,

“… no one is reviewing what those algorithms are doing, and some are even proprietary so we can’t even access them. I think we as a legal community need to decide if we have computers doing our policing for us we should probably have some standards of reviewing those machines. And we don’t. We don’t.”

And she’s absolutely right, we need standards for reviewing machine algorithms. We cannot blindly rah rah our way toward an AI future without taking a close look at the processes that often manifest themselves to us as a “black box” full of machine learning algorithms.

Cathy O’Neil writes about this in her recent book, “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.” Despite being written by a “data scientist,” this is a very accessible book that provides some good cautionary tales illustrating how things can go bad under the algorithmic hood.

She considers some “critical life moments”: going to university, interacting with banks or the justice system, or trying to find and keep a job. “All of these life domains are increasingly controlled by secret models wielding arbitrary punishments.” And she wonders whether “we’ve eliminated human bias or simply camouflaged it with technology.”

And later, in reference to gaming Google search, O’Neil observes:

“Our livelihoods increasingly depend on our ability to make our case to machines. … The key is to learn what the machines are looking for. But there too, in a digital universe touted to be fair, scientific, and democratic, the insiders find a way to gain a crucial edge.”

And this reminded me of Lawrence Lessig’s “Code and Other Laws of Cyberspace,” originally written in 1999 and updated using a “collaborative Wiki” in 2005. Specifically his exploration of regulation in cyberspace and the idea that “code is law.”

“We can build, or architect, or code cyberspace to protect values that we believe are fundamental. Or we can build, or architect, or code cyberspace to allow those values to disappear. There is no middle ground. There is no choice that does not include some kind of building. Code is never found; it is only ever made, and only ever made by us.”

And to help us get a sense of what might be inside the black box he suggests we ask,

“Who are the lawmakers? Who writes this law that regulates us? What role do we have in defining this regulation? What right do we have to know of the regulation? And how might we intervene to check it?”

And while thinking about all of this, a colleague* was kind enough to send around a link to a recent post by Brian Sheppard over on the Legal Rebels blog called, “Does machine-learning-powered software make good research decisions?: Lawyers can’t know for sure.” A provocative title to be sure. And, for the nice short primers he includes on algorithms and machine learning alone, it is well worth the read.

Read on at http://www.slaw.ca/2016/11/28/how-black-is-the-ai-black-box/