Commentary and opinion: On our guard against AI legal imperialism

Author Jonathan Goldsmith

A long time ago, I used to become exercised about American cultural imperialism (music, films, words). I gave it up because I realised that the American version was a good deal better than any offered by alternative powers – and there was nothing I could do about it anyway.

Now, looking to the future, I wonder about a creeping American legal imperialism. Again, it is obviously a good deal better than any other on offer, and we may not notice it much because of the similarity between our legal systems. But we should at least be aware of it.

It has risen to recent prominence because of the ongoing spat between Twitter/X and the Supreme Court of Brazil. Should an American behemoth, operating under an understanding of the First Amendment to the US constitution, be able to impose its version of free speech on another country, which has a different legal framework for free speech? The legal answer may be ‘no’, but politics and money can settle these issues, and so we will see.

The Brazil instance reminds me of developments in AI, where again we operate under an American umbrella. Good for the USA: it has the brains, energy, money and legal framework to become a world leader. It hasn’t conquered the AI landscape through war, but through size, wealth and talent. Yet should its AI legal framework become our legal framework by default?

We know that the EU is trying to regulate AI, to much criticism because it has not developed economic or technological competitors to the US models. But I am not speaking about regulation, rather the development of AI systems.

Large language models of AI operate by being fed huge quantities of other people’s work. (That gives rise to separate legal complaints around copyright, theft and breach of contract, but those are not my concern here either.)

My focus is mainly on countries which don’t speak English. If those countries have large populations and a commonly spoken language – think France or Germany – then little or no problem might arise, since there might be enough material to feed the machine to come up with a properly operating large language model, and there might be sufficient investment to back its development.

As we know, what is fed in is what comes out. There are already concerns that AI-produced material is re-entering the feeding cycle, with the possibility (given the tendency for the current state of AI to come up with nonsense) that rubbish is fed in and more rubbish comes out.

But what if you are a little country like Latvia or Slovenia, with a language little-spoken within or outside your borders (compared to English) – is enough material going in to ensure that quality comes out? And is the market big enough to ensure that someone somewhere is feeding the machine with up-to-date legal material to ensure that your lawyers and courts can rely on it? If AI is the future for competitiveness and access to justice by citizens, will the lawyers and citizens of those countries be stuck with a second-class service? More to the point, will they rely instead on English language models, so easily available, but which have been fed with documents which do not reflect their legal culture?

Even if you speak a widely-spoken language, you may live in a small neighbouring country with a separate legal system – say, Austria or Luxembourg. Will the AI product be focused on Germany and France, and so unsuitable for your own jurisdiction? (All these problems are compounded if you work in a small law firm, since access to AI costs money, and small law firms may not be in a position to compete in a market where efficiency may increasingly be defined by access to AI tools.)

As I have said repeatedly, I salute America for being the brains and power behind AI. But in a myriad of ways, our legal world is likely to become more Americanised. AI is not just present in obvious uses like ChatGPT, but in many tools we will use in the law, where the fundamentals will have been set by US companies, programmers, algorithms – whatever.

In the US, despite the differences in states’ laws, there is a huge single common law market. In the EU, by contrast, the fractured legal landscape poses a grave challenge for training AI models.

We in the UK are closer to the small neighbouring countries I mentioned above. We are fortunate to speak the same language as in the US. Nevertheless, we will need to guard against the specifics of our legal system being overlooked, or gradually merged into an American perspective which may not suit us.

I conclude: since the major AI companies are largely based in the US, and there is much more US material available for AI training, our own legal values will likely be blunted in the future.

Jonathan Goldsmith is Law Society Council member for EU & International, chair of the Law Society’s Policy & Regulatory Affairs Committee and a member of its board. All views expressed are personal and are not made in his capacity as a Law Society Council member, nor on behalf of the Law Society


Source https://www.lawgazette.co.uk/commentary-and-opinion/on-our-guard-against-ai-legal-imperialism/5120805.article