Issues with the Use of AI in Criminal Justice Risk Assessment

Algorithmic Racism – Old Wine in a New Bottle

By algorithmic racism, I refer to systemic, race-based bias arising from the use of AI-powered tools in data-driven decision making, resulting in unfair outcomes for individuals from a particular segment of society distinguished by race. AI depends on big data: vast amounts of data are required to train an AI algorithm so that it can make predictions. Some of the data used to train recidivism risk assessment algorithms are historical data drawn from eras of mass incarceration, biased policing, and biased bail and sentencing regimes characterised by systemic discrimination against particular sections of society.
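
As a simple, hypothetical illustration of how historical bias flows through training (the numbers and code below are invented for illustration and do not describe any real tool): if past policing recorded more arrests in one neighbourhood than in another with identical underlying behaviour, a model trained on those records simply learns and reproduces the policing disparity.

```python
# Toy sketch of label bias: two neighbourhoods with the same underlying
# offending rate, but historical over-policing records twice as many arrests
# in neighbourhood B. A model "trained" on arrest records learns the policing
# pattern, not the behaviour. All values are hypothetical.

TRUE_OFFENDING_RATE = 0.10                    # identical in both neighbourhoods
POLICING_INTENSITY = {"A": 1.0, "B": 2.0}     # B is patrolled twice as heavily

def recorded_arrest_rate(neighbourhood: str) -> float:
    """Arrests in the historical data = behaviour x policing intensity."""
    return TRUE_OFFENDING_RATE * POLICING_INTENSITY[neighbourhood]

# "Training" here is trivial: the learned risk is just the historical arrest rate.
learned_risk = {n: recorded_arrest_rate(n) for n in POLICING_INTENSITY}

print(learned_risk)  # {'A': 0.1, 'B': 0.2} -- B scored twice as "risky"
                     # despite identical underlying behaviour
```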

Canada is not immune to the problems associated with data from biased policing, as is evident from the decades-old “carding” practice of some police departments. Toronto, Edmonton and Halifax are notorious for this practice, which has been routinely criticized for disproportionately targeting young Black and Indigenous people. Of more serious concern is the fact that some of this data is blind to recent risk-reduction and anti-discrimination reforms aimed at addressing the over-representation of particular segments of society in the criminal justice system. Unlike explicit racism, which is self-evident and obvious, algorithmic racism is not overtly manifest; it is obscured and buried in data. It is further blurred by a belief system that tends to portray technology as race neutral and colour blind. Algorithmically biased assessments by AI tools (unlike expert evidence) are accepted in the criminal justice system without further examination or cross-examination. This is related to “anchoring” – a term used by behavioural scientists to describe the cognitive bias that arises from the human tendency to rely on an available piece of data in decision making with little regard, if any, to flaws in that data.

Understanding algorithmic racism requires an appropriate lens to examine its hidden tentacles embedded or obscured in the AI risk assessment technologies used in the criminal justice system. Critical Race Theory (CRT) provides a fitting lens for the study of algorithmic racism. CRT was developed by legal scholars intent on understanding the lived experiences of people of colour in a judicial system that presented itself as objective and race neutral. CRT adopts the notion that racism is endemic in society. According to Devon W. Carbado in “Critical What What?” (2011) 43:5 Connecticut Law Review 1593 at 1609, CRT challenges the dominant principles that colour-blindness results in race neutrality and that colour consciousness generates racial preferences. CRT sees the notion of colour-blindness as a mechanism that instead blinds people to racist policies, thereby further perpetuating racial inequality.

Bennett Capers has noted that the writings that influenced the critical race movement tend to centre on recurring themes: that colour-blind laws tend to conceal real inequality in society, that reforms which apparently benefit minorities are possible only when they are in the interest of the white majority, and that race tends to be avoided in the law. CRT scholars are increasingly drawing on research studies on implicit bias to illustrate these assertions. Examination of these studies and their data tends to unmask the implicit racism buried in laws and social practices that produces unfair outcomes for, or bias against, individuals from a particular segment of society characterised by race.

Using CRT to study these AI risk assessment tools and their operation will reveal how these new technologies reinforce implicit and explicit bias against minority groups – especially Black and Indigenous offenders, who are disproportionately represented in the Canadian criminal justice system.

“Madea Goes to Jail” – Individualized versus Generalized Sentence

An important issue that goes to the legality of risk assessment tools in criminal justice sentencing is the use of group-based analytics in sentencing decisions about an individual offender, as opposed to an individualized sentence based on accurate information specific to that offender. Even in the best-case scenario, algorithmic risk assessment tools base their assessment on general factors resembling the offender’s background rather than on factors specific to the offender. In R. v. Jackson, 2018 ONSC 2527 (CanLII), Justice Nakatsuru of the Ontario Superior Court noted that “[s]entencing is and has always been a very individual process. A judge takes into account the case-specific facts of the offence and the offender to determine a just and fit sentence… The more a sentencing judge truly knows about the offender, the more exact and proportionate the sentence can be.” (at para 3 [emphasis added])

Modern recidivism risk assessment tools built on algorithms and big data provide anything but an individualized assessment or prediction of recidivism. At best, they provide predictions based on the average recidivism of a general population of people who share characteristics similar to those of the accused. This process has an inadvertent tendency to perpetuate stereotypes associated with certain groups (e.g. racial minorities). Sentencing judges, as front-line workers in the criminal justice system, have an obligation to ensure that the information they rely on in their sentencing decisions does not directly or indirectly contribute to negative stereotypes and discrimination: R. v. Ipeelee, 2012 SCC 13 (CanLII) at para 67.
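
To make the distinction concrete, the sketch below (a deliberately simplified, hypothetical illustration, not the methodology of COMPAS or any other actual product) shows how a purely group-based score works: the “prediction” for an individual is just the average historical outcome of everyone who shares the same recorded traits, so two different people with identical recorded traits receive identical scores.

```python
# Minimal sketch of group-based risk scoring: the score assigned to a person
# is the historical recidivism rate of their group, not anything specific to
# the individual. Data and brackets are hypothetical.

from collections import defaultdict
from statistics import mean

# Hypothetical historical records: (age_bracket, prior_convictions, reoffended)
history = [
    ("18-25", "2+", 1), ("18-25", "2+", 1), ("18-25", "2+", 0),
    ("26-40", "0-1", 0), ("26-40", "0-1", 1), ("26-40", "0-1", 0),
]

# "Training" = computing the average outcome for each group.
outcomes_by_group = defaultdict(list)
for age, priors, reoffended in history:
    outcomes_by_group[(age, priors)].append(reoffended)
group_rates = {group: mean(vals) for group, vals in outcomes_by_group.items()}

def risk_score(age_bracket: str, priors_bracket: str) -> float:
    """Return the group average; nothing here is specific to the individual."""
    return group_rates.get((age_bracket, priors_bracket), 0.0)

# Two different people with the same recorded traits get an identical score.
print(risk_score("18-25", "2+"))  # ~0.67 for anyone in this bracket
```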

The use of recidivism risk assessment tools is very common in the Canadian criminal justice system. Kelly Hannah-Moffat, in “Actuarial Sentencing: An ‘Unsettled’ Proposition”, noted the tendency of lawyers and probation officers to classify individuals who obtain high risk assessment scores as high-risk offenders, rather than simply as individuals who share characteristics with the average members of that group. She noted that “[I]nstead of being understood as correlations, risk scores are misconstrued in court submissions, pre-sentence reports, and the range of institutional file narratives that ascribe the characteristics of a risk category to the individual.” (at page 12)

Our criminal justice system is founded on the idea that people should be treated as individuals under the law and not as statistics – and this applies even at sentencing. Hence, recidivism risk assessment technologies built on AI and big data have been criticized as depriving the accused of the right to an individualized sentence based on accurate information (see the US case of State v. Loomis, 881 N.W.2d 749 (Wis. 2016), hereafter Loomis). In Canada, this raises a section 7 Charter issue as to the constitutionality of assessments made by the technology, particularly in relation to a convicted offender’s right to individualized sentencing based on accurate information.

In Loomis, the offender challenged the use of algorithmic risk assessment in his sentencing. He argued that the risk score generated by the COMPAS algorithmic risk assessment tool violated his right to an individualized sentence because the tool relied on information about a broader group to draw an inference about his likelihood of recidivism, and that any consideration of such group information in determining his likelihood of recidivism violated his right to due process. The Loomis court noted the importance of individualized sentencing in the criminal justice system and acknowledged that COMPAS data on recidivism is not individualized, but is instead based on data about groups similar to the offender.

Sentencing is a critical aspect of our criminal justice system. The more sentencing judges know about an offender’s past, present and likely future behaviour, including their personal background – historical, social, and cultural – the more exact and proportionate a sentence they are able to craft. While risk scores can usefully complement a judge’s effort to craft an appropriate sentence, judges should always bear in mind that an algorithmic risk score is only one of many factors in the determination of an appropriate sentence. It should be given appropriate weight alongside those other factors, so that the sentence imposed on the offender is as individualized as it can be. Justice Nakatsuru in R. v. Jackson rightly observed that:

A sentence imposed based upon a complex and in-depth knowledge of the person before the court, as they are situated in the past and present reality of their lived experience, will look very different from a sentence imposed upon a cardboard cut-out of an “offender” (at para 103).

Sentencing judges should not, at any point in the sentencing process, hesitate to use their discretion to overrule or ignore algorithmic risk scores that seem out of step with the other factors under consideration, especially where such scores tend to aggravate rather than mitigate the sentence.

Another problem that may further impair the ability of AI risk assessment tools to produce an individualized risk score arises where tools developed and tested on data from one group are used on another group that is not homogeneous with the original. The result is representation bias: deploying AI technology on a group not effectively represented in the data used to develop and train it will usually produce flawed and inaccurate results. This problem has been evident in facial recognition software. A report in The New York Times shows that much of the AI facial recognition software on the market today is developed and trained on data consisting predominantly of white males. While such software has achieved 99 percent accuracy in recognizing the faces of white males, the same has not been true for other races or for women. The darker the skin, the more inaccurate and flawed the result – with error rates of up to 35 percent for darker-skinned women.
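
The sketch below gives a toy, synthetic illustration of representation bias (it does not reproduce any real facial recognition or risk assessment system): a decision threshold tuned only on data from one group degrades noticeably when applied to a group whose data was absent from training.

```python
# Toy representation-bias demo: a threshold classifier is "trained" (tuned)
# only on group A, then applied to group B, whose feature distribution is
# shifted. The seemingly neutral rule is much less accurate for group B.
# All data is synthetic and purely illustrative.

import random
random.seed(0)

def make_group(n: int, shift: float):
    """Hypothetical data: feature = true signal + group shift + noise."""
    rows = []
    for _ in range(n):
        signal = random.gauss(0, 1)
        feature = signal + shift + random.gauss(0, 0.3)
        rows.append((feature, int(signal > 0)))
    return rows

group_a = make_group(1000, shift=0.0)   # represented in "training"
group_b = make_group(1000, shift=1.0)   # absent from "training"

def accuracy(rows, threshold):
    return sum((f > threshold) == bool(label) for f, label in rows) / len(rows)

# Tune the threshold to maximize accuracy on group A only.
best_threshold = max((t / 10 for t in range(-30, 31)), key=lambda t: accuracy(group_a, t))

print("accuracy on group A:", round(accuracy(group_a, best_threshold), 2))  # high
print("accuracy on group B:", round(accuracy(group_b, best_threshold), 2))  # markedly lower
```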

In Ewert v. Canada, 2018 SCC 30 (CanLII), an Indigenous man challenged his risk assessment by the Correctional Service of Canada (CSC) under the Charter and the Corrections and Conditional Release Act. The tools used in the assessment had been developed and tested on a predominantly non-Indigenous population. The Supreme Court ruled that the CSC’s obligation under s. 24(1) of the Corrections and Conditional Release Act – to take all reasonable steps to ensure that the information it uses about an offender is accurate – applies to results generated by risk assessment tools. An algorithmic tool developed and trained on data from one predominant cultural group will, more likely than not, be cross-culturally variant to some extent when applied to another cultural group that is not represented (or not adequately represented) in its training data. Such a tool is unlikely to generate an individualized assessment of the offender and is more likely to produce a flawed assessment of the risk the offender poses.

Proprietary Right versus Charter Right

The methodologies used by AI risk assessment tools to assess recidivism are considered proprietary trade secrets and are not generally available for scrutiny by the court, the accused, or the prosecution. The proprietary rights attached to these tools restrict the ability of the judge, the prosecution, or the accused to determine what factors are taken into consideration in the assessment and how much weight is attached to each of them. This secrecy becomes problematic when offenders challenging an adverse sentence resulting from such an assessment seek access to the proprietary information in order to prove the arbitrariness of the deprivation of their liberty or to invalidate the sentence.

In our criminal justice system, an accused person has Charter rights to personal liberty and procedural fairness both at trial and at sentencing. These rights also arise during incarceration, when decisions affecting the liberty of the offender are made by correctional officials (e.g. security classification): see May v. Ferndale Institution, 2005 SCC 82 (CanLII) at para 76 (hereafter May v. Ferndale). The imposition of a criminal sentence requiring incarceration clearly involves a deprivation of the offender’s Charter rights, and such deprivation must be in accordance with the law. But what if a convicted offender seeks access to the proprietary trade secrets in a commercial AI tool used to assess the risk of recidivism that informed the sentence? This gives rise to a conflict between the proprietary right of a business corporation to its trade secrets and the Charter rights of the offender.

In May v. Ferndale, the Correctional Service of Canada (CSC) had used a computerized tool – the Security Reclassification Scale (SRS) – in reviewing the security classification of certain inmates, reclassifying them from minimum to medium security. The inmates sought access to the scoring matrix used by the computerized SRS tool, and the CSC denied them access. The Supreme Court of Canada ruled that the inmates were clearly entitled to access the SRS scoring matrix and that the failure to disclose that information constituted a breach of procedural fairness. According to the court:

The appellants were deprived of information essential to understanding the computerized system which generated their scores. The appellants were not given the formula used to weigh the factors or the documents used for scoring questions and answers. The appellants knew what the factors were, but did not know how values were assigned to them or how those values factored into the generation of the final score. (at para 117 [Emphasis added])

The Supreme Court also noted that, as a matter of common sense, the scoring tabulation and the methodology used in arriving at the security classification should have been made available to the inmates:

As a matter of logic and common sense, the scoring tabulation and methodology associated with the SRS classification score should have been made available. The importance of making that information available stems from the fact that inmates may want to rebut the evidence relied upon for the calculation of the SRS score and security classification. This information may be critical in circumstances where a security classification depends on the weight attributed to one specific factor. (at para 118 [Emphasis added])
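
To illustrate why the scoring tabulation and methodology matter, the sketch below uses a hypothetical weighted scoring matrix (the factors, weights and cut-off are invented for illustration; they are not the actual SRS formula, which is precisely the information that was withheld): the final classification can hinge on the weight attached to a single factor, and without the weights the inmate cannot tell which factor drove the result or how to rebut it.

```python
# Hypothetical scoring matrix in the style of a security reclassification
# tool: a weighted sum of recorded factors compared against a cut-off.
# Factors, weights and cut-off are invented for illustration only.

FACTOR_WEIGHTS = {
    "institutional_incidents": 4.0,       # strongly increases the score
    "program_participation": -2.0,        # reduces the score
    "months_since_last_incident": -1.0,
}
MEDIUM_SECURITY_CUTOFF = 5.0

def classification_score(factors: dict) -> float:
    """Weighted sum of the recorded factor values."""
    return sum(FACTOR_WEIGHTS[name] * value for name, value in factors.items())

inmate = {"institutional_incidents": 2, "program_participation": 1, "months_since_last_incident": 1}
score = classification_score(inmate)
print(score, "-> medium" if score >= MEDIUM_SECURITY_CUTOFF else "-> minimum")
# With these assumed weights the single "institutional_incidents" factor
# dominates; change its weight to 2.0 and the same inmate stays at minimum.
```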

Conclusion

Artificial intelligence technologies will continue to revolutionize our justice system. Properly used, they could greatly enhance the efficient and effective administration of criminal justice. However, the use of AI in our criminal justice system raises serious and novel legal issues that must be addressed. It is important to study these issues with the ultimate objective of developing a framework that mitigates the adverse and discriminatory impacts of these technologies on the rights of accused persons and offenders.

________

Gideon Christian, PhD
Assistant Professor (AI and Law)
Faculty of Law, University of Calgary