This article addresses the question of granting artificial intelligence (AI) the status of a legal subject, particularly from the standpoint of civil law. Legal identity is defined here as a concept inseparable from legal capacity; this does not, however, imply that moral subjectivity is identical to moral personality. Legal identity is a complex attribute that can be recognized in certain subjects or assigned to others.
I argue that this attribute is graded, discrete, discontinuous, multifaceted, and changeable: it can comprise more or fewer elements of different kinds (e.g., duties, rights, competencies), most of which can be added or removed by the legislator. The exception is human rights, which, on the prevailing view, cannot be taken away.
Humanity is now undergoing a social transformation driven by the replacement of one technological mode with another: "smart" machines and software learn quickly, and artificial intelligence systems are increasingly capable of replacing people in many activities. One question arising ever more frequently as AI technologies improve is whether artificial intelligence systems should be recognized as legal subjects, since they have reached the level of making fully autonomous decisions and potentially manifesting "subjective will". The question was raised hypothetically in the 20th century; in the 21st, the scholarly debate has steadily intensified, swinging to new extremes with each introduction of new AI models into practice, such as the appearance of self-driving cars on the streets or the unveiling of robots with new sets of functions.
The legal problem of determining the status of artificial intelligence is of a general theoretical nature, owing to the objective impossibility of predicting all possible outcomes of developing new AI models. Yet AI systems are already actual participants in certain social relations, which calls for the establishment of "benchmarks", that is, the resolution of fundamental issues in this area for the purpose of legislative consolidation and, thereby, the reduction of uncertainty in predicting how relations involving AI systems will develop.
The alleged identity of artificial intelligence as an object of research, referred to in the article's title, certainly does not cover all AI systems; it excludes the many "electronic assistants" that make no claim to legal personality. Their set of functions is limited, and they represent narrow (weak) artificial intelligence. We refer instead to "smart machines" (cyber-physical intelligent systems) and generative models of virtual intelligent systems, which increasingly approach general (strong) artificial intelligence comparable to human intelligence and, in the future, may even exceed it.
By 2023, the question of creating strong artificial intelligence had been raised with new urgency by multimodal neural networks such as ChatGPT and DALL-E, whose intellectual capabilities are improved by increasing the number of parameters (perception modalities, including some inaccessible to humans) and by training on amounts of data that humans cannot physically process. Multimodal generative neural network models, for example, can produce images and literary and scientific texts that cannot always be distinguished from those created by a human.