Science fiction’s visions of the future include many versions of artificial intelligence (AI), but relatively few examples where software replaces human judges. For once, the real world seems to be changing in ways that are not predicted in stories.
In February, a Colombian judge asked ChatGPT for guidance on how to decide an insurance case. Around the same time, a Pakistani judge used ChatGPT to confirm his decisions in two separate cases. There are also reports of judges in India and Bolivia seeking advice from ChatGPT.
These are unofficial experiments, but some systematic efforts at reform do involve AI. In China, judges are advised and assisted by AI, and this development is likely to continue. In a recent speech, the master of the rolls, Sir Geoffrey Vos – the second most senior judge in England and Wales – suggested that, as the legal system in that jurisdiction is digitised, AI might be used to decide some “less intensely personal disputes”, such as commercial cases.
AI isn’t really that smart
This might initially seem to be a good idea. The law is supposed to be applied impartially and objectively, “without fear or favour”. Some ask: what better way to achieve this than to use a computer program? AI doesn’t need a lunch break, can’t be bribed, and doesn’t want a pay rise. AI justice can be applied more quickly and efficiently. Will we, therefore, see “robot judges” in courtrooms in the future?
There are four principal reasons why this might not be a good idea. The first is that, in practice, AI generally acts as either an expert system or a machine learning system. Expert systems encode rules in software as a model of decisions and their consequences, called a decision tree. These had their heyday in law in the 1980s. However, they ultimately proved unable to deliver good results on a large scale.
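To make the idea concrete, here is a minimal sketch of how such rules might be encoded, with an invented insurance-claim example; the rules and outcomes are hypothetical, and a real expert system would contain thousands of hand-written branches like these:

```python
# Hypothetical rules for illustration only; real 1980s expert systems
# encoded thousands of such hand-written branches.
def assess_insurance_claim(policy_active: bool,
                           loss_covered: bool,
                           fraud_suspected: bool) -> str:
    """Walk a hand-coded decision tree to a fixed outcome."""
    if not policy_active:
        return "deny: policy had lapsed"
    if fraud_suspected:
        return "refer: manual investigation required"
    if loss_covered:
        return "approve: loss falls within the policy terms"
    return "deny: loss not covered by the policy"

print(assess_insurance_claim(policy_active=True,
                             loss_covered=True,
                             fraud_suspected=False))
# -> approve: loss falls within the policy terms
```

The output is entirely determined by the encoded logic, which is precisely why such systems struggled at scale: every situation the law might meet had to be anticipated and written down in advance.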
Machine learning is a form of AI that improves at what it does over time. It is often quite powerful, but amounts to no more than a very educated guess. One strength is that it can find correlations and patterns in data that we lack the capacity to calculate ourselves. One of its weaknesses, however, is that it fails in ways that differ from how people fail, reaching conclusions that are obviously incorrect.
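A toy illustration of that failure mode, using scikit-learn and invented numbers: a model trained on a narrow range of examples will confidently extrapolate to an answer no person would accept.

```python
# Invented data for illustration: claim size (in £1,000s) against the
# number of days a hypothetical insurer took to settle.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[10], [20], [30], [40]])  # claim size
y = np.array([5, 9, 14, 18])            # days to settle

model = LinearRegression().fit(X, y)

# Inside the range it was trained on, the guess looks sensible...
print(model.predict(np.array([[25]])))   # about 11.5 days
# ...but asked about an input unlike anything it has seen, it
# confidently predicts a settlement time of minus 21 days.
print(model.predict(np.array([[-50]])))  # about -21.5 days
```

A person would instantly recognise a negative settlement time as nonsense; the model has no such common sense, which is the kind of error that matters when the output is a legal decision.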