The Internet For Lawyers Newsletter writes
The UK Government recently published a White Paper, “A pro-innovation approach to AI regulation,” setting out its proposals for AI regulation, together with an impact assessment and a consultation paper. Jo Frears, IP & Technology Leader at Lionshead Law, considers some of the key points.
The meaning of “AI”
The White Paper acknowledges that there is no single accepted definition of artificial intelligence (“AI”), and so it defines AI as technology that has the characteristics of “adaptivity”, “autonomy” or both.
Adaptivity refers to the initial or continual training of the technology, through which it comes to operate and perform both in ways that are anticipated and expected and in ways that were not planned or intended by its programmers.
Autonomy refers to the ability of the technology to make determinations or decisions without human input.
By choosing not to define AI rigidly by reference to specific technologies or applications, but by the characteristics of its functional abilities, the Government hopes to “future-proof” the regulatory framework. IP lawyers might equate this to a “look and feel” test, and the idea of being non-specific has some merit. Where the lack of a specific definition lets the framework down is in its claim that there is no need for “rigid legal definitions … as these can quickly become outdated and restrictive”: without them, the range of AI is already so vast, and the scope for unanticipated new technologies so broad, that this may become a framework that is at once too broad for small applications such as chatbots and not broad enough for the kinds of generative AI already finding new ways to deploy themselves.
As lawyers, we need to be aware of the risk this poses for the law: the lack of clarity around the expectation and intention of outcomes, combined with the autonomy of a non-corporeal entity to make decisions without human input or judgement, makes it difficult to assign responsibility for the outcomes that technology produces. In lay terms, how do you provide human-centric regulation for a non-human decision-maker operating adaptively (ie in ways the AI devises) and autonomously (ie in ways the AI determines based on the data it has learnt)? If, however, you can determine what the field markings of the regulation should be, you can at least get onto a level playing field. If that regulatory field can be marked out and the players identified, it then becomes possible, from a legal perspective, to determine who is responsible and who therefore owns the AI’s decisions, output and risk.
Where the White Paper succeeds is in identifying that a lack of regulation poses a risk to public trust and that there is a need to build confidence in innovation. It is this driver, together with the desire to make the UK a leader in AI, that ultimately led to the consultation that has been ongoing since 2017 and has culminated in this White Paper. What is ironic is that this White Paper, with its stated desire for “an AI-enabled country”, has been published just as the tide of opinion seems to be turning against AI and as thought leaders and technologists call for the brakes to be applied to AI development.
Legal challenges posed by AI
Since the Statute of Anne (1709), the law has sought to protect the creators of original works and to control the right to copy creative endeavours: first to control the means to print, then to ensure that patrons did not find their investment undermined by multiple copies, and later to encourage and reward inventiveness.
Different types of AI create different legal issues.
Read Full Article: