Findlaw Article: Robots Are Doing Pretrial Risk Assessments, and They’re Not Great at It

In the American legal system, you are presumed innocent until proven guilty. But that doesn’t mean you can’t be incarcerated before being proven guilty. Whether it’s because courts have deemed them a flight or safety risk, or because they can’t afford bail, almost half a million people are jailed before their criminal trials ever take place.

This kind of imprisonment, and its attendant costs (loss of income, a job, and even a place to live), runs counter to our guiding principles of criminal justice. Reformers have proposed fixes, like eliminating money bail and detaining people only based on their risk of harming others or fleeing their trial. This sounds great, especially if we can harness the latest technology to make those risk assessments instead of relying on fallible judges or magistrates. But it turns out the machines aren't any better at this process than humans, so where does that leave us?

Fundamentally Flawed

“As researchers in the fields of sociology, data science and law, we believe pretrial risk assessment tools are fundamentally flawed,” the authors of a New York Times op-ed declare. “They give judges recommendations that make future violence seem more predictable and more certain than it actually is. In the process, risk assessments may perpetuate the misconceptions and fears that drive mass incarceration.”

The article, written by Chelsea Barabas, Karthik Dinakar, and Colin Doyle, notes the explosion of the pretrial detention population in the U.S.

Read the full article at

https://blogs.findlaw.com/blotter/2019/07/robots-are-doing-pretrial-risk-assessments-and-theyre-not-great-at-it.html