Juridification of the concept of "artificial intelligence" and the limits of using AI technology in litigation

The active development of artificial intelligence (AI) technology raises the problem of integrating this phenomenon into legal reality, of determining the limits of using this technology in social practices regulated by law and, ultimately, of developing an optimal model for the legal regulation of AI. This article focuses on the problem of developing the legal content of the concept of AI, including certain methodological and ontological foundations of such work. The author suggests a set of invariant characteristics of AI significant for legal regulation, which, if adopted by the legal scientific community, could serve as a scientifically grounded basis for constructing specific options for legal regulation corresponding to the needs of a particular sphere of social practice. The author believes that a scientifically grounded legal concept of AI is largely able to determine the direction and framework of applied legal research into the multifaceted problems of using AI technology in social interactions, including the administration of justice, and to separate the related legal issues from questions of an ethical, philosophical, technological or other nature.


Introduction
The philosophy of artificial intelligence (hereinafter referred to as "AI") distinguishes between weak AI and strong AI [1][2]. This article focuses on weak AI. Discussion of the legal aspects of the functioning of strong AI seems premature, since its creation is a scientific and technical problem whose fundamental solvability remains debatable today.
In October 2019, the National Strategy for the Development of AI in the Russian Federation for the period up to 2030 was approved [3]. Following it, in August 2020, the Concept for the development of regulation of relations in the field of artificial intelligence and robotics technologies until 2024 (hereinafter, the Concept) was approved [4].
The goals of the Concept imply the definition of approaches to the legal regulation of the use of AI in various spheres of social relations and the creation of prerequisites for such legal regulation. At the same time, the Concept describes a number of problems of regulating relations in the field of AI, among which there are problems that can be placed in the procedural context of the administration of justice, namely: 1) the possibility of legally delegating decision-making to AI systems (a conceptual problem of a legal nature); 2) the use of probabilistic assessments in decision-making by AI systems and the impossibility, in some cases, of fully explaining their decisions; 3) maintaining a balance between the requirements for the protection of personal data and the need to use such data to train AI systems; 4) the tasks of developing and clarifying terms and definitions in the field of AI technologies.
The identified problems can be reduced to issues of a more general theoretical nature: 1) the problem of developing a legal concept of AI (juridification of the AI concept); 2) the problem of determining the limits of the use of AI technology in judicial enforcement.

Juridification of AI concept
The Concept draws attention to the lack of a clear understanding of the term AI and to the fact that this leads to terminological problems in building regulation. At the same time, in solving this problem it proposes: 1) to allow for the variability of specific definitions depending on the industry in which AI technology is applied; 2) where possible, to avoid introducing into Russian legislation a normative definition of the term that is uniform for all sectors.
It seems that the problem is somewhat deeper than formulating legal definitions in normative acts (such definitions, incidentally, already exist [3,5]). Before proposing specific formulations for the definition of the term, it is necessary to determine the very concept of AI that can be used in law: to understand what AI is, not so much (and not only) from a technical as from a legal perspective; to understand how to inscribe this phenomenon, new to law, into the system of generally accepted legal concepts; how law should relate to AI; and what place this phenomenon should be given in legal taxonomy. Juridification of the concept of AI means the conceptualization of this phenomenon in the system of specific concepts and categories with which a lawyer's professional consciousness operates. Specific definitions may indeed differ depending on their purposes. However, before considering definitions of the term AI, it is necessary to define its invariant characteristics, since the invariant sets the framework of legal regulation and the permissible limits of definitional variability. Such an invariant must be scientifically grounded, since it is the scientific approach that enables consideration of the methodological and ontological foundations of the analysis.
I.N. Tarasov expressed and argued the opinion that the use of definitions of AI based on the description of technical and technological characteristics is pointless in jurisprudence, as it does not affect legal regulation; that the incorporation of AI into legal reality requires either an independent term to designate it in the field of law or the filling of the concept with specific legal content; and that the development of a conceptual-categorical apparatus according to the rules of, and with regard to, the science of law is a necessary condition for effective and correct legal regulation [6]. We do not entirely agree with the statement regarding the use of an independent term (since the choice of the signifier here can be arbitrary), but we fully share I.N. Tarasov's opinion on the need for theoretical work aimed at a properly legal understanding of AI and the ways of integrating it into legal reality. Perhaps the claim that definitions of AI based on the description of its technical and technological characteristics are entirely useless is too categorical, but in any case such definitions should not be transferred to the sphere of law without a preliminary theoretical analysis of the possibility, methods and limits of their use for the purposes of legal regulation.

Methodological bases of juridification of the AI concept
In the literature devoted to the methodology of legal science, it is noted that legal concepts are formed in different ways [7]. First, there are organic concepts, formed as a result of professional reflection on legal practice or its doctrinal elaboration, or arising from the theoretical organization of legal reality, its conceptualization within the framework of a certain legal theory. In both cases, such concepts are originally a product of legal thinking. Second, there are concepts associated with law, which arise in non-legal spheres of thought and practice and are brought into legal circulation without any change to their scope and content. Such concepts have no properly legal content. Third, there are concepts consolidated by law, which arise in non-legal spheres of knowledge but are subsequently adapted to the needs of legal practice; consequently, their content changes and acquires a specifically legal character. In this way the concept undergoes juridification.
The Concept of legal regulation of relations in the field of AI proceeds from the association model for this concept (Section 6): it proposes that the ontology of the subject area be built and harmonized by the efforts of the expert community and specialized technical committees under the Federal Agency for Technical Regulation and Metrology of the Russian Federation, and that, where necessary, legal regulation use the definitions contained in standardization documents or give definitions relevant specifically to the given area of regulation.
In my opinion, technical characteristics contribute poorly to fitting AI as a phenomenon organically into legal reality, although as a palliative solution this is perfectly acceptable. Perhaps it is worth trying to consolidate the non-legal concept of AI into law, to breathe legal content into it. Indeed, the desire of lawyers to consolidate the concept of AI can be traced in the scientific literature. However, such a desire does not exclude cases of abundant borrowing, the inclusion of technical aspects in the content of the legal concept being developed [8].
How can the concept of AI be consolidated into law? Two approaches are possible here: the essentialist and the constructivist.
The essentialist approach presupposes the desire to identify and fix in the concept the essence of the phenomenon under study, to grasp in the legal concept what artificial intelligence is. This approach implicitly assumes that there is some immutable essence of AI, which can and should be identified by lawyers and recorded in the corresponding legal concept. The essentialist approach describes the properties of artificial intelligence; it is descriptive.
The constructivist approach implies rejecting the aspiration to analyze, identify and represent the essence of artificial intelligence in the legal concept. This approach does not describe an entity but ascribes properties to it; it is ascriptive. The legal concept of artificial intelligence here may not coincide at all with its technological content. This approach seems to me more promising from a practical perspective.
The essentialist approach leads representatives of legal science to include in the definitions of the concept of "artificial intelligence" a variety of features of a technological nature [8,9], which is more characteristic of association than of consolidation of the concept. Within the framework of the same approach, a discussion arises about endowing artificial intelligence with the attribute of subjectivity (discussed below), when such a possibility is considered from the standpoint of assessing the cognitive properties of artificial intelligence [10]. At the same time, it seems that we, lawyers, can say nothing about the essence of artificial intelligence, since it is originally not a legal but a technological phenomenon. However, we can ascribe to artificial intelligence those legally significant properties that correspond to the tasks of legal regulation.
The constructivist approach is utilitarian. If artificial intelligence technology needs to be included in legal circulation, no identification of the technological essence of this phenomenon is required. It is enough to ascribe to artificial intelligence those properties, significant from the viewpoint of law, that would serve the implementation of certain practically significant tasks.
Thus, it seems that the concept of AI can and should be consolidated within the framework of the utilitarian-constructivist approach rather than the essentialist one.

Ontological foundations of consolidating the AI concept into law

The issue of subjectivity is a key issue raised, explicitly or implicitly, in the legal literature in relation to the ontological legal status of AI. Researchers in the field of jurisprudence often try to include the property of subjectivity in the concept of AI. Such proposals are substantiated either through arguments of a utilitarian nature (the need to include AI in legal circulation), or through attempts to reveal an essence of AI that constitutes subjectivity; sometimes such proposals are not substantiated at all but are put forward de lege ferenda (the Grishin draft law) [5, 11-20].
In order to include a phenomenon in legal circulation, it is not necessary to ascribe the properties of subjectivity to it; the subject of law and the subject of activity should not be conflated. According to the apt remark of Professor S.I. Arkhipov, the subject of law need not be identified with a participant in legal relations or with the legal role he plays [21]. In this sense, artificial intelligence technology can, if necessary, fulfill the legal role assigned to it by a human even without the status of a subject of law. If we associate subjectivity with cognitive properties, then we enter the subject field of the general theory of law and the philosophy of AI.
From the standpoint of the general theory of law, the question arises as to which features constitute legal subjectivity. As applied to a human being, subjectivity is, as a rule, substantiated either through the attributive properties of consciousness (will, the ability to think and make autonomous decisions, self-awareness) or through an axiological approach. Under the first option, we cannot assert that weak AI thinks, a point long discussed in the philosophy of AI [1,2,22,23]. Moreover, some authoritative researchers deny the possibility of a positive answer to the question of consciousness even in the prospect of creating strong AI [1]. Finally, the scientific concept of consciousness itself still remains somewhat vague.
According to the apt remark of Prof. E.A. Mamchur, no one today is able to answer unambiguously the question of what consciousness is [24]. In relation to law, this approach cannot be considered universal. For example, a person who does not possess the full range of mental properties is nonetheless a subject of law (a legally incapacitated person). At the same time, some animals, such as the great apes, are not endowed with legal subjectivity despite demonstrating certain mental properties characteristic of humans (the ability to experience psychological reactions, self-identification, empathy, etc.) [25]. As for the axiological approach, in which arguments of an ethical nature come to the fore and the attitude towards a person as a social value recognized by law is a sufficient and limiting basis for granting him the status of a subject [26,27], this approach is obviously inapplicable to AI, since AI is perceived as a means owing to its service nature [28].
In addition, the anthropocentrism of law prevents a serious discussion of the issue of AI subjectivity within this field. Reflecting on the subjectivity of AI means striving to overcome the anthropocentrism of law, which is in itself doubtful (more detailed argumentation is given in [6]). The attitude towards weak AI is an attitude towards a means, not a value, while the status of a subject of law presupposes treating its bearer as a self-sufficient value, not as a means.
In contrast to the essentialist approach, the utilitarian-constructivist approach to defining the legal content of the AI concept makes it possible to overcome all the controversial aspects identified above. Thus, from the standpoint of utilitarian constructivism, the question of whether artificial intelligence has mental properties that reproduce or imitate individual cognitive abilities of human consciousness is deactivated. Law regulates not thinking but behavior. In this regard, it does not matter whether artificial intelligence thinks or only imitates thinking, how complete this imitation is, whether it is sufficient to raise the question of subjectivity, and so on. What matters is that artificial intelligence is capable of autonomous rational action, specifically, of making decisions without an operator's intervention. Owing to the anthropocentrism of law, the issue of the subjectivity of artificial intelligence becomes irrelevant.
Artificial intelligence technology can, if necessary, perform the legal role assigned to it by a human even without possessing the status of a subject of law. Here, too, the numerous technological aspects of the concept of AI lose their significance, since they do not matter for the legal regulation of the use of this technology in legal practice. Lawyers can simply state the technological nature of AI without disclosing its essence in the content of the concept, if only to emphasize the anthropogenic nature of AI. However, even this seems redundant, since the concept of technology itself presupposes and includes this feature. Thus, all these issues not only cease to be debatable; they are simply removed from the current agenda.
When constructing a scientifically grounded invariant of the legal concept of AI, it is important not so much to formulate a specific definition as to fix the key characteristics important for legal regulation, since these characteristics will remain unchanged when options for specific regulatory definitions are formulated. Taking into account the above argumentation, we believe that such characteristics could be as follows: 1) the capacity of AI technology for autonomous rational action (independent decision-making without an operator's intervention); 2) the objectivity of AI technology (understood as a fundamental orientation towards the absence of legal subjectivity in artificial intelligence); 3) the service nature of artificial intelligence (understood as the limitation of the goals and objectives of AI functioning by human needs).

Limits of using AI technology in judicial enforcement
Is it possible to entrust judicial enforcement to a non-subject of law? Can AI perform law-enforcement functions in litigation? This formulation of the question avoids the well-known discussion in procedural doctrine about the scope of the concept of justice (whether the concept of justice covers judicial enforcement not related to the resolution of disputes about law, such as writ proceedings and special proceedings).
I believe that the first question can be answered in the affirmative, and the second in the negative. The arguments for my position are as follows.
Judicial enforcement requires not only formal operations but also meaningful legal reasoning, including work with evaluative categories. Even in non-contentious proceedings, the judge evaluates the written evidence according to his or her inner conviction, exercises judicial discretion, qualifies the legal relationship, assesses it for indisputability, and so on. The principle of free, rather than formal, assessment of evidence is implemented in the trial. All this requires operating with specifically legal meanings.
Unlike humans, AI is incapable of operating with meanings. Back in the early 1980s, J. Searle, a well-known expert in the philosophy of AI, conducted the "Chinese room" thought experiment and convincingly demonstrated that AI operates exclusively with syntax, not semantics [1,23].
The space of AI activity is a sphere of bare form without content. Consequently, it is hardly possible to entrust AI with the function of making value judgments in the framework of judicial enforcement. In addition, the use of AI in assessing evidence would contradict the principle of free assessment of evidence based on the inner conviction of the court (AI cannot have an inner conviction, since AI does not operate with meanings). The limits of using AI technology in litigation are set by the concept of "predictive justice" [28]. Within the framework of this concept, AI can be used only in an instrumental sense, as a means of analyzing large amounts of data.
At the same time, this issue can be resolved differently in relation to alternative forms of dispute resolution, in particular in arbitration, whose competence is of a contractual rather than a public-law nature. When referring a dispute to arbitration, the parties must understand the possibilities and limitations of this form of enforcement; their will and its expression in appealing to an arbitrator must coincide and be free of defects. Therefore, if the parties voluntarily and consciously submit a dispute to AI for resolution, and understand and accept the impossibility of meaningful enforcement, then there is no reason to deprive them of such an opportunity.

Conclusion
The effectiveness of legal regulation of the use of AI technology in various areas of social practice, including law enforcement, largely depends on a scientifically grounded legal concept of AI, that is, on the legal characteristics of this non-legal phenomenon, on the way lawyers perceive and understand it, and on the place of AI in legal taxonomy.

The task of forming the legal concept of AI is not reduced to the formulation of specific legal definitions and cannot be solved at that level. Juridification of the concept of AI requires theoretical legal reflection, thinking through the content of this concept in the context of law, and applying the means and methods of legal science; it means the expansion of the content of the concept through the definition of legally significant characteristics and properties.

The result of the juridification of the AI concept should be a set of invariable (invariant) legal characteristics, while specific definitions of the term in regulatory acts may differ depending on the needs of the practice of legal regulation. The legal concept of AI will determine the proper legal framework for the study of this multidimensional phenomenon and promising directions of applied work on the formation and optimization of the model of legal regulation.

When forming a legally significant concept of AI, it is proposed to abandon the descriptive essentialist approach, aimed at identifying the essence of AI, in favor of the ascriptive constructivist approach, which involves attributing legal properties to the content of the concept of AI: on the one hand, properties significant for the purposes of legal regulation; on the other, properties that restrain the limits of legal regulation. The following are proposed as invariant legally significant elements of the content of the AI concept: a) the objectivity of AI (understood as the refusal of attempts to ascribe any properties of legal subjectivity to AI); b) autonomy (understood as the ability of AI to act, including making decisions, without an operator's intervention); c) service nature (understood as the limitation of the goals and objectives of AI functioning by human needs).

The limits of using AI in law enforcement depend on the form of law enforcement. In the administration of justice, AI can be used only for the purpose of analyzing large amounts of data, since AI does not operate with meanings but acts at a formal, syntactic level, which excludes the possibility of meaningful work with evaluative legal concepts and contradicts the principle of free evaluation of evidence. In those areas of law enforcement where the competence is of a contractual rather than a public-law nature, such as arbitration, the parties may voluntarily and consciously entrust the resolution of a dispute to AI.