SLAW Article: Artificial Intelligence and Law Reform: Justice System

John Gregory writes:

Artificial intelligence (AI) is sometimes thought of as a cure for the complexities of the world, but perhaps even more often as a threat to humans. Stephen Hawking said that “[w]hereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”

At a somewhat less general level, a good deal of concern has been expressed about the impact of artificial intelligence on the law, and notably on the criminal justice system. My own musings are here. That article considered the evolution of AI from painstaking mimicry of human decision-making to machine learning, where computers review huge amounts of data and decide what patterns they show and how to achieve specified ends. It also considered some of the problems encountered in doing so, including the familiar but still important limitation that a computer can only do what it is told: if there are policy limits on what counts as an acceptable solution, the machine has to be told about them.

A dramatic example is asking a computer to increase the number of fish caught, and having it recommend draining the lake.
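For readers who want to see the mechanism, here is a minimal sketch in Python of that failure mode. The function, names, and numbers are invented for illustration; the point is only that an optimizer given the bare objective, with no stated policy limit, takes the destructive path:

    # Toy illustration of the "drain the lake" problem: an optimizer told
    # only to maximize this season's catch will take every fish, unless
    # the policy limit is stated explicitly as a constraint.
    # All names and numbers here are hypothetical.

    def best_catch(stock, max_harvest_rate=None):
        """Pick the harvest that maximizes this season's catch."""
        candidates = range(stock + 1)  # every possible harvest: 0..stock
        if max_harvest_rate is not None:
            # The policy limit has to be told to the machine explicitly.
            candidates = [h for h in candidates if h <= stock * max_harvest_rate]
        return max(candidates)  # the bare objective says: take everything

    stock = 10_000
    print(best_catch(stock))                        # 10000 -- drains the lake
    print(best_catch(stock, max_harvest_rate=0.2))  # 2000 -- only because we said so

The arithmetic is trivial; the asymmetry is not. The objective takes one line, while every value we care about preserving has to be added by hand.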

Some years ago, a lot of people were very enthusiastic about the potential for AI in the justice system – it would enable the system to base its decisions on a better understanding of what actually happens, with less human discretion and thus – it was thought – less susceptibility to discrimination and arbitrary judgments. Then it turned out that the available data sometimes normalized the results of other social or justice system problems.

For example, AI might extrapolate – accurately – from justice system data that certain crimes tended to be committed by certain types of people. Based on that extrapolation, it might “find” that a person of that type in a particular case was more likely to commit such a crime, or more likely to repeat an offence. However, the data “showing” this tendency might be the result of social or human factors, like police practices or jury bias, that reflected neither the individuals about whom judgments were to be made nor even the group to which they belonged.
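Since that mechanism is easy to miss, here is a minimal sketch of it. The group names, rates, and bias factor are all invented; the sketch shows only that uneven policing can, by itself, produce records that make one group look higher-risk even when actual offending is identical:

    # Minimal sketch of biased records producing a biased "finding".
    # Everyone offends at the same base rate; only the chance that an
    # offence is detected and recorded differs between groups.
    import random

    random.seed(0)

    def make_records(n=100_000, base_rate=0.05, patrol_bias=3.0):
        records = []
        for _ in range(n):
            group = random.choice(["A", "B"])
            offended = random.random() < base_rate
            # Group B is patrolled more heavily, so its offences are
            # recorded patrol_bias times as often.
            detection = 0.1 * (patrol_bias if group == "B" else 1.0)
            records.append((group, offended and random.random() < detection))
        return records

    records = make_records()
    for g in ("A", "B"):
        members = [recorded for group, recorded in records if group == g]
        print(g, round(sum(members) / len(members), 4))
    # Prints roughly: A 0.005, B 0.015. A model trained on these records
    # will "find" that group B is three times as risky -- a fact about
    # police practices, not about the people being judged.

A risk tool trained on such records would faithfully reproduce the patrol pattern as a prediction about individuals, which is exactly the normalization problem described above.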

As a result, many of those who were enthusiastic about algorithmic analysis, say 20 years ago, became much less enthusiastic about its results in practice.

Cathy O’Neil, whose thorough examination of such issues appeared in 2016 under the brilliant title Weapons of Math Destruction, set out, in a 2017 article about whether algorithms could lie, four factors producing harmful results:

  1. Unintentional problems that reflect cultural bias, e.g. results that reflect unstated bias in the records.
  2. Algorithms that go bad due to neglect, e.g. scheduling part-time workers in ways that do not give them any opportunity to do child care or further education, or failing to check the quality of results before using the algorithms widely.
  3. Nasty but legal algorithms, e.g. targeting poor people for lower-quality goods and services, or raising prices for those who seem willing or able to pay more.
  4. Intentionally nefarious or outright illegal algorithms, e.g. mass surveillance tools that allow targeting of legal protesters, or tools that detect regulatory testing and adjust results accordingly (think of Volkswagen and emissions controls).

A number of the policy challenges and possible ways forward were reviewed in the text Responsible AI with contributions from around the legal world. My view of that text is here.

Among several writers on Slaw who have commented on the issues, it is worth noting the many contributions of F. Tim Knight as recently as 2020, including a number of reports of conferences.

A knowledgeable analysis of the use of AI by law enforcement was published in September 2020 by the Citizen Lab and the International Human Rights Program at the University of Toronto, under the title To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada. An executive summary is here.

Recently the law and policy have been reviewed by the Law Commission of Ontario with its characteristic thoroughness. Under the general rubric of Digital Rights, its AI work has three parts: criminal justice, civil justice, and regulatory uses, notably consumer protection. (There is also a good deal of work being done in Canada on AI and privacy, worth a column here on its own. A recent comment by Martin Kratz outlines some issues.)

Read the full article at http://www.slaw.ca/2021/02/15/artificial-intelligence-and-law-reform-justice-system/