SLAW: Regulating Artificial Intelligence and Automated Decision-Making

The Law Commission of Ontario (LCO) has been reviewing the principles and impact of artificial intelligence (AI) in the Canadian justice system for some years. Its three points of focus have been the use of AI in criminal justice, in civil justice and in government. A report on the criminal justice aspects was issued in late 2020; it was described on Slaw.ca here.

The second report addresses government uses, under the title Regulating AI: Critical Issues and Choices. As with the criminal justice report, there is a helpful Executive Summary as well.

Regulating AI presents many challenges, starting with defining just what needs to be regulated. Definitions of AI can be very broad, while regulations should be precise, both to be enforceable and to be consistent with legal principle. The LCO speaks of AI and of “ADM”, automated decision-making; the two terms appear to be largely interchangeable in the report.

There is a long list of the kinds of tasks for which governments around the world are using AI and ADM. Besides the detection and analysis of criminal activity, one finds the allocation of government benefits, the setting of priorities for access to public services such as housing, education and health, the determination of eligibility or priority for immigration, and hiring decisions or employee evaluations.

The report recommends a combination of “hard” and “soft” law for regulation. “Hard” law consists of firm rules with prohibitions and penalties. “Soft” law takes the form of ethical guidelines: “this is what you should do.” The European Commission’s High-Level Expert Group on Artificial Intelligence has taken a soft law approach, “intended for flexible use.” But what if those using AI – including governments – are not inclined to be ethical?

Hard law could prohibit, for example, the use of facial recognition software, on the basis that the potential for abuse is just too serious to count on proper restrictions being imposed.

In Canada, the federal government has recently issued a Directive on Automated Decision-Making to make its own uses transparent and fair. The LCO takes a positive view of this Directive as a whole but points out that it applies only to the federal government itself, not to the private sector or other levels of government, and that even some parts of the federal structure are exempted. It also notes Professor Teresa Scassa’s observation that a directive confers no private rights and provides for no independent enforcement.

The LCO recommends a mixed model, with some hard and some soft provisions. One hopes, of course, for a “smart mix” of methods. The hard provisions would cover overall direction and public accountability mechanisms. Guidelines, standards and best practices can have their uses as well, “to supplement or expand upon mandatory legal obligations.”

The content of the regulations will depend on an assessment of the risks presented by various applications of AI. The EU group divided systems into high or low risk. The LCO agrees with other critics it cites that this is too simple: a binary high or low rating leaves too much space in between. The LCO prefers the Canadian federal directive, which has four risk levels but which also “establishes baseline requirements that apply to all ADM systems, regardless of impact [i.e. risk] level.” Among these requirements are notice to those affected, employee training, and human intervention in the operation of the AI.
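To make the shape of such a tiered framework concrete, here is a minimal sketch in Python. The level names and the specific requirements listed are illustrative assumptions only, not the Directive’s actual schedule; the point it shows is the one the LCO emphasizes, namely that baseline obligations apply to every ADM system while additional obligations scale with the assessed impact level.

from enum import Enum

class ImpactLevel(Enum):
    # Hypothetical four-tier scale, loosely modelled on a four-level impact rating.
    LOW = 1
    MODERATE = 2
    HIGH = 3
    VERY_HIGH = 4

# Baseline requirements that apply to every ADM system, regardless of level.
BASELINE = ["notice to affected individuals", "employee training"]

# Illustrative additional requirements that grow with the assessed impact.
BY_LEVEL = {
    ImpactLevel.LOW: [],
    ImpactLevel.MODERATE: ["plain-language explanation of decisions"],
    ImpactLevel.HIGH: ["plain-language explanation of decisions",
                       "human intervention before a final decision"],
    ImpactLevel.VERY_HIGH: ["plain-language explanation of decisions",
                            "human intervention before a final decision",
                            "independent review"],
}

def requirements(level: ImpactLevel) -> list[str]:
    # Every system gets the baseline; higher-impact systems add obligations on top.
    return BASELINE + BY_LEVEL[level]

for level in ImpactLevel:
    print(level.name, "->", requirements(level))

Whatever the risk rating, the baseline never drops out; the tiers only add to it.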

Accountability and transparency are essential to proper legal regulation of AI. Accountability is not possible without sufficient transparency. This can be achieved, says the LCO, by a mix of disclosure, impact assessments and procurement policy.

Read the full article: http://www.slaw.ca/2021/05/07/regulating-artificial-intelligence-and-automated-decision-making/