Application of human-computer interaction technology in rehabilitation treatment of mental and nervous system diseases

With the development of human-computer interaction technology, how to use intelligent, natural and efficient interaction to advance medicine has gradually become a research hotspot. Mental and nervous system diseases have a great impact on patients' quality of daily life. Applying human-computer interaction technology to the rehabilitation of these diseases can improve treatment outcomes and reduce the workload of doctors, and therefore has far-reaching clinical significance. This paper first reviews the development of human-computer interaction technology, and then focuses on the application of technologies such as the interactive pen, voice interaction, gait/gesture interaction and physiological computing in the rehabilitation treatment of mental and nervous system diseases, which is of practical significance for using computer technology to improve traditional medical treatment methods.


Introduction
Human-computer interaction is a discipline that studies the interaction between users and systems. Its main purpose is to enhance the interaction experience between users and computer systems and make it easier for users to complete interactive tasks. At present, human-computer interaction technology is applied in the medical field mainly to evaluate changes in patients' sensory functions through the pen, voice, gait and so on, and to assist doctors in diagnosing and treating patients. Human-computer interaction technology is noninvasive: by analyzing the disease-related characteristics in the natural sensory interaction between patients and computers, doctors can objectively determine a patient's current health status and take corresponding rehabilitation measures. On the one hand, this saves patients' treatment time and costs and improves treatment outcomes; on the other hand, it reduces the workload of doctors. Therefore, how to use human-computer interaction for early warning and rehabilitation treatment of disease has attracted wide attention from scholars at home and abroad. Mental illness usually manifests as emotional loss, low spirits, memory decline and so on, while neurological diseases are characterized by dyskinesia, impaired language ability, abnormal physiological signals and other problems expressed through multiple sensory functions such as touch and hearing, making them a natural application field for human-computer interaction technology. This paper mainly analyzes the application status of pen interaction, voice interaction, gait interaction, physiological computing and other human-computer interaction technologies in the rehabilitation treatment of mental and nervous system diseases, and looks ahead to the future of this field.

Development history of human-computer interaction technology
Human-computer interaction technology refers to technology by which users exchange information with the virtual scene generated by intelligent devices in a convenient and natural way [1]. In an ideal state, human-computer interaction would not rely on external devices such as the keyboard, mouse and touch screen; people and computers could communicate naturally anytime and anywhere, finally realizing the seamless integration of virtual and real scenes. This ideal state has not yet been reached. At this stage, intelligent computers and mobile devices can only convert a user's behavioral input, such as muscle movement, posture or language, into information that an intelligent device can understand and operate on, and provide the user with real-time interactive feedback by simulating multiple human perceptions. The development of human-computer interaction technology has greatly affected the user experience and application fields of augmented reality, for which it is a key supporting technology [2]. Figure 1 shows the development process of human-computer interaction technology.

Traditional interactive technology
Traditional hardware devices for human-computer interaction include the mouse, keyboard and the like. With the mouse and keyboard, the user can select a point in the two-dimensional coordinate system of the display and select, drag, zoom in or zoom out by pressing and dragging. This method is simple and easy to operate, but it requires the support of external devices, which reduces the immersion of the interactive experience, and it cannot adapt well to real-time human-computer interaction in three-dimensional space.

Touch interaction technology
Touch interaction technology takes the human hand as the main input mode and still requires contact with the device, but compared with traditional hardware devices it is more user-friendly. The touch screen can serve as an input device that captures the user's actions and as an output device that provides tactile feedback, giving the user a real sense of immersion. Touch interaction has developed from single-touch to multi-touch, realizing the transition from single-finger to multi-finger and even multi-user interaction, which is more readily accepted by users. In terms of recognizing instructions, apart from traditional interactive technology, touch interaction currently has the highest accuracy among interaction technologies, requiring only that users operate in the right way in the right place. Representative touch interaction technologies include 3D tactile feedback touch screens and TouchSense® tactile feedback technology [3].

Voice interaction technology
Language is the most natural and direct way for people to communicate. Speech recognition has become a mature human-computer interaction technology in current augmented reality systems. In environments where a traditional keyboard cannot be used to input information, speech is the preferred friendly alternative. In augmented reality, voice can both drive interaction directly and assist the user. The core of voice interaction is the speech recognition engine. Existing engines, including the Speech API, ViaVoice and the domestic iFLYTEK, perform well and provide a powerful platform for the development of human-computer interaction technology.

Somatosensory interaction technology
Somatosensory interaction technology uses sensors or computer vision-based gesture recognition or motion capture as the interaction strategy, tracking the position of key parts of the human body, such as the hands, head or legs, and analyzing the user's movement posture in the real world as interaction input [4]. The Kinect somatosensory device has strong motion capture ability and can accurately track the skeleton and map joint and limb movements, bringing revolutionary progress to somatosensory interaction. In addition, the emergence of devices such as Leap Motion, HoloLens and Magic Leap has popularized augmented reality for entertainment interaction. The somatosensory interaction mode conforms to natural human habits and is one of the current hotspots of human-computer interaction technology.

Physiological interaction technology
Physiological interaction technology refers to interaction methods that analyze and recognize a user's psychological state and interaction intention, and provide feedback, through real-time measurement of human physiological signals such as the electroencephalogram (EEG), respiratory signal, electrocardiogram (ECG) and electromyogram (EMG). The multi-parameter biofeedback instrument developed by Thought Technology in Canada provides a rich concentration feedback training system [5]. At present, physiological interaction technology places high demands on system hardware and software; its operation is complex and the equipment is expensive, which increases development difficulty and limits some users' research on and use of augmented reality devices. Nevertheless, it is crucial for enriching the interaction modes of augmented reality devices.

Hybrid interaction technology
Hybrid interaction technology is one of the main research directions of human-computer interaction and augmented reality in recent years. It supports users interacting with computers in multiple ways, which gives full play to the unique advantages of each interaction method and enriches the interaction information. Hybrid interaction exploits the complementarity of information across modalities to improve the user's perception efficiency and interaction efficiency with the computer: the user's cognitive load from a single interaction mode is reduced, and the computer is not overburdened by any single mode, improving the overall interaction effect. Within hybrid interaction technology, how to integrate interaction information, infer the user's true interaction intention and design the interaction fusion mechanism has been little studied and remains a key issue of current research [6].

Interactive pen
Pen interaction uses an electronic pen for input on a touch screen, recording the user's movement trajectory while writing symbols or signs, and thereby obtaining interactive data that reflect the user's pen-operation state and cognitive ability. When using a pen, patients with mental and nervous system diseases may exhibit slowness, tremor, cognitive impairment and other conditions. Therefore, by analyzing motor function indicators (such as speed and acceleration) or graphic drawing results during pen use, we can assess whether a patient has impaired motor or cognitive function, providing a diagnostic basis for doctors [7]. Common handwriting interaction tasks include the Archimedes spiral and repeated letter writing, and drawing tasks include the TMT (Trail Making Test) and CDT (Clock Drawing Test) (Fig. 2).
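As a minimal illustration of the kinematic analysis described above, the Python sketch below derives mean speed and mean absolute acceleration from timestamped pen samples. The `kinematic_features` helper, the `(t, x, y)` sampling format and the units are hypothetical choices for illustration, not a published clinical protocol.

```python
import math

def kinematic_features(samples):
    """Compute (mean speed, mean absolute acceleration) from pen samples.

    samples: chronologically ordered list of (t, x, y) tuples,
    with t in seconds and x, y in millimetres (assumed units).
    """
    speeds, mid_times = [], []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        if dt <= 0:
            continue  # skip duplicate or out-of-order timestamps
        speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
        mid_times.append(0.5 * (t0 + t1))
    # Acceleration magnitude between consecutive speed estimates.
    accels = [
        abs(v1 - v0) / (m1 - m0)
        for (m0, v0), (m1, v1) in zip(
            zip(mid_times, speeds), zip(mid_times[1:], speeds[1:])
        )
        if m1 > m0
    ]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(speeds), mean(accels)
```

In practice such features would be computed per stroke and compared against norms; tremor analysis would additionally look at frequency content rather than averages alone.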

Voice interaction
Voice interaction is the process of expressing interaction intention through the user's speech and obtaining a system response. Patients with mental illness may be depressed and need to regulate their mood through guided voice dialogue. Nervous system diseases can partially damage the vocal tract and change pronunciation. Compared with pathological features such as limb movement disorders and brain injury, language expression and pronunciation disorders often appear in the early stage of disease, resulting in problems such as tone change, aphasia and dysarthria [8]. At present, voice-interaction-based treatment of mental and nervous system diseases mainly analyzes the disease through the patient's speech or pronunciation state and provides the doctor with a diagnostic basis.
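To illustrate the kind of acoustic analysis involved, the sketch below estimates the fundamental frequency (pitch) of a voiced frame via autocorrelation, one simple way to quantify the tone changes mentioned above. The function name `estimate_f0` and the 60-400 Hz search range are illustrative assumptions; clinical systems use far more robust pitch trackers.

```python
def estimate_f0(samples, fs, f_min=60.0, f_max=400.0):
    """Estimate the fundamental frequency of a voiced audio frame.

    Finds the lag (pitch period in samples) that maximizes the raw
    autocorrelation within the plausible pitch range, then converts
    it back to Hz. samples: list of floats; fs: sample rate in Hz.
    """
    lag_min = int(fs / f_max)            # shortest plausible period
    lag_max = int(fs / f_min)            # longest plausible period
    best_lag, best_corr = 0, float("-inf")
    for lag in range(lag_min, min(lag_max, len(samples) - 1) + 1):
        corr = sum(samples[i] * samples[i - lag]
                   for i in range(lag, len(samples)))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return fs / best_lag if best_lag else 0.0
```

A pitch contour over successive frames, rather than a single estimate, is what would actually reveal monotone speech or abnormal intonation.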

Gait/gesture interaction
Gait/gesture, as a common interactive behavior in daily life, is the result of the cooperation of the body's motor organs under the control of the nervous system, and different gaits and gestures can represent different commands and actions [9]. Patients with mental disorders can express their emotions through gait/gestures. In addition, when a user suffers from nervous system disease, motor function may be damaged, and the stress state and movement pattern of the lower limbs may change during walking. An automatic gait diagnosis system is very important for the quantitative evaluation of therapeutic interventions, so establishing a low-cost gait analysis system is of great significance. Chakraborty et al. [10] recruited 40 users (20 patients with cerebral palsy and 20 healthy users) and used Kinect to record the users' walking. By extracting features such as step length, stride length and stride time, their extreme learning machine reached a classification accuracy, sensitivity and specificity of 98.59%, 100% and 96.87%, respectively. Quantitative computer analysis of gait can effectively reflect the patient's coordination, motor function and other physical conditions, and the equipment does not need accurate calibration, reducing the cost of clinical evaluation. The disadvantages are that sensor-based acquisition requires wearing special sensor equipment, while vision-based acquisition is easily limited by background environmental factors; the presence of multiple people in the frame also affects data acquisition, making it unsuitable for noisy, crowded consulting-room environments.
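The gait features named above (step length, stride time) can be computed once heel-strike events have been detected from skeleton or sensor data. The sketch below is a minimal version under stated assumptions: the event format and the `gait_features` helper are hypothetical, and the extreme learning machine classifier used in [10] is omitted.

```python
def gait_features(heel_strikes):
    """Summarise gait from chronologically ordered heel-strike events.

    heel_strikes: list of (time_s, x_position_m, foot) tuples,
    with foot 'L' or 'R' and feet assumed to alternate.
    Returns (mean_step_length, mean_stride_time):
      step length - distance between consecutive (opposite-foot) strikes
      stride time - interval between consecutive strikes of the same foot
    """
    step_lengths = [abs(b[1] - a[1])
                    for a, b in zip(heel_strikes, heel_strikes[1:])]
    stride_times = []
    last_strike = {}  # foot -> time of its previous heel strike
    for t, _x, foot in heel_strikes:
        if foot in last_strike:
            stride_times.append(t - last_strike[foot])
        last_strike[foot] = t
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return mean(step_lengths), mean(stride_times)
```

Feature vectors of this kind, computed per walking trial, are what a downstream classifier would consume to separate pathological from healthy gait.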

Physiological computing
As the interface between human and computer, physiological computing converts parsed human physiological signals into real-time computer input, which can enrich and improve the user's interaction experience. Motor dysfunction caused by nervous system diseases can lead to specific changes in the user's physiological electrical signals, and different signals with different characteristics can be obtained depending on sensor placement. Psychological problems caused by mental diseases can be reflected in ECG signals. Typical physiological signals include the EEG, ECG and EMG (Figure 3). The EMG measures the electrical signal of muscle contraction, which can not only distinguish whether a user is healthy but is also related to disease severity [11]. Subtle EEG changes can describe specific types of brain abnormality, so the EEG is often used clinically to monitor epileptic seizures. Das et al. [12] collected EEG data from 10 children with intractable epilepsy during seizure and non-seizure periods, calculated key features such as the mean and root mean square of the signal, and used an Ensemble Bagged Tree classifier to predict seizure activity, with an accuracy of 91.09%, sensitivity of 87.83% and specificity of 94.35%. Physiological computing can directly detect users' physiological signals and is not limited by disease presentation or onset time, so it is highly interpretable. The disadvantage is that users need to wear a number of electrodes, the signals are easily disturbed by external factors, and the signal noise is large.
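The mean and root-mean-square features mentioned for [12] are typically computed per signal window, as in the sketch below. The non-overlapping windowing scheme and the `eeg_window_features` helper are illustrative assumptions; the Ensemble Bagged Tree classifier that consumes such features is omitted.

```python
import math

def eeg_window_features(signal, fs, win_sec=1.0):
    """Split one EEG channel into non-overlapping windows and return
    a list of (mean, rms) feature pairs, one per window.

    signal: list of amplitude samples; fs: sampling rate in Hz;
    win_sec: window length in seconds (assumed choice).
    """
    n = int(fs * win_sec)          # samples per window
    features = []
    for start in range(0, len(signal) - n + 1, n):
        window = signal[start:start + n]
        mean = sum(window) / n
        rms = math.sqrt(sum(v * v for v in window) / n)
        features.append((mean, rms))
    return features
```

Each window's feature pair becomes one row of the training matrix, labelled seizure or non-seizure, for whatever classifier is chosen downstream.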

Conclusion
This paper describes the development of six human-computer interaction technologies: traditional interaction, touch interaction, voice interaction, somatosensory interaction, physiological interaction and hybrid interaction. Then, according to the language, motor and other physiological characteristics of patients with mental and nervous system diseases, the application of human-computer interaction technologies such as the interactive pen, voice interaction, gait/gesture interaction and physiological computing in rehabilitation treatment is analysed in detail. At this stage, human-computer interaction technology can already evaluate patients with obvious symptoms of nervous system diseases and provide clinicians with an auxiliary diagnostic basis, but many problems still require further exploration and research. First, the naturalness of interactive tasks should be realized. Users currently need to perform specific interactive tasks, which reduces the naturalness of data collection. In the future, interactive systems that do not require specific tasks should be developed: on the one hand, assisted diagnosis systems for doctors; on the other hand, intelligent home disease early warning systems for users, which can more naturally collect behavioral data reflecting the user's current physiological state. Second, the integration of interaction modes should be promoted. A single interaction mode cannot fully reflect the complex physiological state of a patient. At this stage, auxiliary diagnosis technology mainly distinguishes healthy users from users with psychological and neurological diseases; in the future, multimodal fusion methods can use multimodal features to better distinguish similar types of nervous system disease.