1 Introduction

1.1 Background

In recent years, VR has drawn attention not only in the entertainment industry but also in fields such as health care, education, and training. For example, the entertainment industry first adopted VR in games using head mounted displays (hereinafter referred to as HMDs) and has since constructed amusement facilities dedicated to VR [1]. In the medical industry, VR is used for simulation to mitigate the risks of dangerous surgery and for rehabilitation [2]. In the field of education, VR systems that present temperature and force sensations are making instruction easier to understand [3]. VR is also actively used in many other fields, and the number of people working on VR development is growing accordingly. Many companies and universities are working on a wide variety of VR technologies, such as HMDs with higher image quality [4] and devices that make it possible to feel force in virtual space as in real space [5]. We live in an era in which the progress of VR technology is in sharp focus.

According to “Virtual Reality Science,” supervised by Tachi et al. [6], VR has three elements: “three-dimensional spatiality,” “real-time interaction,” and “self-projection property.” Among these three elements, the “self-projection property” is the feeling of being present in the VR space without conflict. Because it is a subjective concept, it is hard to evaluate.

This concept is deeply related to the “Sense of Agency (SoA),” which originated in research carried out in the field of neuroscience. Studies on the feeling that a subject in a VR environment is really manipulating his or her own body when it moves have since progressed, and research revealing the characteristics of this feeling is still under way. In such research, the usefulness of applying the SoA concept from neuroscience to VR research has been proposed [7]. However, since VR technology is still under development, studies have focused mainly on the performance and functions of VR devices, and methods and indicators for evaluating this concept remain unclear. If this concept can be evaluated quantitatively, technological development to enhance virtual spaces can improve greatly.

1.2 Purpose and Overview

The final goal of this research is to clarify the relationship between the force and visual senses and the “Sense of Agency” in a VR environment through measurement of physiological behavior. To achieve this goal, the subgoal of this study is to establish a human-scale VR environment with visual, auditory, and haptic presentation and to prepare a VR environment that can be used to evaluate the “Sense of Agency” through measurements of performance and physiological behavior.

In this study, we constructed an experimental environment using a human-scale haptic device (SPIDAR-HS) and measured physiological behaviors as a preliminary experiment. This paper mainly describes the construction of the experimental equipment.

2 VR System Constructed in this Study

2.1 System Configuration

Figure 1 shows the constructed system. Visual and auditory information is provided by an HTC VIVE, and force information is provided through the end effector of the SPIDAR-HS. The flow of force presentation in this system is explained using the example of pressing the bottom of a cube held in the hand in Unity against the ground. First, the Unity program issues winding instructions to the motors. When the cube is pushed against the ground, the upper four of the eight motors attached to the SPIDAR-HS are instructed to wind up so that the cube held in the hand cannot move further downward. When a motor winds with the indicated force, the force is transmitted through the thread connecting it to the end effector. This is the flow of force presentation. In addition, when visual information showing the cube pressing against the ground is given to a person who is already feeling the corresponding force, the person clearly feels the contact with the ground; this is not the case when only the force information is provided. Furthermore, this system must maintain a minimum tension so that position and orientation can be detected even when only a weak force is being presented. This prevents the threads from slackening, so that the thread winding values are measured accurately.
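As a rough illustration of this flow, the following sketch distributes a desired end-effector force over the eight strings while keeping every tension above a minimum value. The pulley layout, the least-squares solver, and the clamping step are illustrative assumptions, not the actual SPIDAR-HS controller.

```python
import numpy as np

def cable_tensions(unit_dirs, desired_force, t_min=0.5):
    """Simplified tension distribution for a string-driven haptic display.

    unit_dirs:     (8, 3) unit vectors from the end effector toward each
                   motor pulley (hypothetical layout).
    desired_force: (3,) force to present at the end effector [N].
    t_min:         minimum tension kept on every string so the encoders
                   can keep tracking position even when little or no
                   force is being presented [N].
    """
    A = np.asarray(unit_dirs).T                      # 3 x 8, so F = A @ t
    # Minimum-norm least-squares tensions for the desired force...
    t, *_ = np.linalg.lstsq(A, np.asarray(desired_force), rcond=None)
    # ...then clamp so no string goes slack.  A real controller would
    # re-solve with the clamped tensions as constraints.
    return np.clip(t, t_min, None)
```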

Fig. 1. Configuration of experimental system using SPIDAR-HS

2.2 SPIDAR

SPIDAR-HS.

In this study, we employed the SPIDAR system [8,9,10,11,12] as the force presentation device. SPIDAR is a haptic device in which a motor, a threaded pulley, and an encoder that reads the amount of wound thread form one module. Force presentation and position and orientation detection are performed by a plurality of such motor modules. This SPIDAR system was further expanded to a human scale; the scaled-up system is called SPIDAR-HS (Human Scale).
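Position detection from the wound-thread lengths can be sketched as a simple multilateration problem: find the point whose distances to the pulley anchor points best match the lengths reported by the encoders. The anchor coordinates and the SciPy solver below are assumptions for illustration, not the actual SPIDAR implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_position(anchors, lengths, x0=(0.0, 0.0, 0.0)):
    """Estimate the end-effector position from thread lengths (sketch).

    anchors: (N, 3) pulley positions on the frame (hypothetical values).
    lengths: (N,) thread lengths derived from the encoder readings [m].
    """
    anchors = np.asarray(anchors, dtype=float)
    lengths = np.asarray(lengths, dtype=float)

    def residuals(p):
        # Difference between geometric distances and measured lengths.
        return np.linalg.norm(anchors - p, axis=1) - lengths

    return least_squares(residuals, np.asarray(x0, dtype=float)).x
```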

2.3 Outline of the Experimental System Using SPIDAR-HS

The construction of the SPIDAR-HS is shown in Fig. 2. One side of the aluminum frame cube is 2.5 m; thus, it is possible to secure a wider working space than with a conventional SPIDAR. Because an HTC VIVE is used as the HMD and its position is tracked at room scale, tasks can also be performed while moving within the 2.5 m cube. Moreover, since the shape of the force presentation part (hereinafter referred to as the end effector) and the motor attachment positions can be changed, the experimental environment can be adapted to various physiological behavior measurement experiments by replacing these components according to the application. The specifications of the motors and encoders are shown in Tables 1 and 2. The thread is 0.37 mm fishing line, and a fishing snap is attached to its tip so that it can be connected, with a quick knot, to a ring attached to the end effector body. An HTC VIVE is used as the HMD; its specifications are shown in Table 3.

Fig. 2. Construction of SPIDAR-HS

Table 1. Motor specification
Table 2. Encoder specification
Table 3. HMD specification

2.4 Measurement of Position Error of Working Space

Purpose.

The purpose of this experiment is to measure the position error in the working space and thereby evaluate the constructed environment. The working space was defined as a cube with 1 m sides because the tasks planned for this fiscal year will be carried out within this space.

Method.

The center of the working space was defined as (0, 0, 0), and the end effector was moved slowly from there to one vertex (−0.5, 0.5, 0.5) of the cube defined as the working space. The coordinates on the Unity side were measured ten times after the movement, and the mean error and standard deviation were recorded. The end effector used at this time was the one shown in Fig. 3.

Fig. 3. End effector in this measurement (also used in Experiment 2)
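The mean error and standard deviation described above can be computed as in the following sketch; the measurement values here are placeholders, since the actual readings come from the system log.

```python
import numpy as np

# Target vertex of the 1 m working-space cube and ten Unity-side readings
# (placeholder values for illustration only).
target = np.array([-0.5, 0.5, 0.5])
measured = np.array([[-0.4996, 0.5003, 0.5004]] * 10)

errors = measured - target               # per-trial error on each axis
mean_error = errors.mean(axis=0)         # mean error for x, y, z
std_dev = measured.std(axis=0, ddof=1)   # sample standard deviation
print(mean_error, std_dev)
```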

Result and Discussion.

Table 4 shows the measured x, y, z coordinates, the mean error, and the standard deviation. From Table 4, the error is small for a working space that is a cube with 1 m sides. However, depending on the task, even an error of 0.5 mm may have a large influence, in which case the error must be reduced. A likely cause of the error is how the thread is wound. The position on the Unity side is calculated on the assumption that no thread is wound on the pulley. In practice, the thread wraps around the pulley and increases its effective radius, so the length wound per revolution differs from the estimate; this difference produces errors. To reduce this error as much as possible, the amount of thread wound on the pulley should be kept small, for example by using a thinner thread.
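The effect of the wound thread on the effective pulley radius can be illustrated with the following deliberately crude sketch, which compares the naive bare-pulley estimate with one in which the radius grows by one thread diameter per turn; the growth model and parameter values are assumptions, not the actual SPIDAR-HS geometry.

```python
import math

def wound_length(turns, pulley_radius, thread_dia=0.00037):
    """Thread length taken up after `turns` pulley revolutions (sketch).

    The naive estimate assumes a bare pulley; the corrected estimate
    grows the effective radius by one thread diameter per turn, a rough
    stand-in for the buildup of wound thread described above.
    """
    naive = 2 * math.pi * pulley_radius * turns
    corrected, r, remaining = 0.0, pulley_radius, turns
    while remaining > 0:
        step = min(remaining, 1.0)        # integrate one turn at a time
        corrected += 2 * math.pi * r * step
        r += thread_dia                   # radius grows as thread piles up
        remaining -= step
    return naive, corrected
```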

Table 4. Relative error, standard deviation, average value of x, y, z

3 Experiment 1: Ball-Catching Task

3.1 Overview

Purpose.

The purpose of this experiment is to determine whether the forearm muscle activity differs between the Real task and the VR task. This research is a collaboration with the Kotani Laboratory (Ergonomics Laboratory), Faculty of Systems Science and Engineering, Kansai University.

Methods.

The experimental conditions are shown in Table 5. Under these conditions, experiments were carried out according to the following procedure.

Table 5. Experimental conditions

The procedure of the experiment is as follows.

1. Measure the resting electromyograms of the extensor carpi radialis longus and palmaris longus muscles for 15 s.

2. The subject wears the HMD and catches a ball dropped from a height of 80 cm in the VR environment.

3. Measure the electromyogram while the subject catches the ball.

The task of catching the ball was performed three times. Next, the HMD was removed, and the ball-catching task in the Real environment was performed three times using the same procedure as in the VR environment.

3.2 Implementation

End Effector.

Figure 4 shows the end effector used in the experiment. The frame is made of resin fabricated with a 3D printer. A Styrofoam hemisphere of the same size as the real ball is attached to the lower part of the frame.

Fig. 4. End effector used in the ball-catching task

Force Sense Presentation.

In this experiment, it was assumed that the falling ball was caught in 0.1 s, and the force sense was presented vertically downward for 0.1 s.
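A minimal sketch of this timed presentation is shown below; `set_force` is a hypothetical callback standing in for the command that the Unity program sends to the device, and the force magnitude is only a placeholder.

```python
import time

def present_catch_impulse(set_force, magnitude_n=2.0, duration_s=0.1):
    """Present a vertically downward force for a fixed duration (sketch)."""
    set_force((0.0, -magnitude_n, 0.0))   # downward along the y axis
    time.sleep(duration_s)                # hold for the assumed catch time
    set_force((0.0, 0.0, 0.0))            # release the force
```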

VR Environment.

The hand and ball in the VR task are shown in Fig. 5. The hand and ball in the Real task are shown in Fig. 6. The colors of the balls and hands in the VR task are nearly the same as those in the Real task. However, the hand in the VR task cannot move like the hand in the Real task. Also, subjects cannot see their own bodies.

Fig. 5. The hand and ball in the VR task (Color figure online)

Fig. 6. The hand and ball in the Real task (Color figure online)

3.3 Results and Discussions

The results analyzed by Kansai University indicate that the level of muscle activity during catching tended to be smaller in the VR task than in the Real task. It is inferred that the force presented during the VR task was smaller than the force experienced during the Real task.

The impact force applied to the hand when a 220 g ball falls from a height of 80 cm was calculated. The momentum of the ball at the moment it contacts the hand after falling 80 cm is as follows.

$$ mv = 0.22 \times \sqrt {2 \times 9.81 \times 0.8} $$

If the falling ball is received in 0.1 s, the required force magnitude F [N] is as follows. An illustration of how the force is applied while catching the ball is shown in Fig. 7.

$$ F = \frac{mv}{t} = \frac{{0.22 \times \sqrt {2 \times 9.81 \times 0.8} }}{0.1} \times 2 \approx 8.18 \left[ {\text{N}} \right] $$
Fig. 7. Force sense presentation while catching the ball

The maximum force that the SPIDAR-HS can present was measured with a force gauge (RZ-2) and found to be 6.23 N. Because the SPIDAR-HS cannot output the required force of 8.18 N, muscle activity exactly like that in the Real task cannot be measured. This is thought to be why the muscle activity level in the VR task was clearly smaller than in the Real task.

4 Experiment 2: Rod-Tracking Task

4.1 Overview

Purpose.

The aim of this experiment is to clarify, using EMG in a VR environment, the influence of the prototyped task on the sense of agency over the movement. This research is a collaboration with the Kobayashi Laboratory, Department of Information Systems Engineering, Faculty of Science and Technology, Chitose University of Science and Technology.

Methods.

The experimental conditions are shown in Table 6. The minimum force required in the vertically upward direction was measured with a force gauge (RZ-2).

Table 6. Rod-tracking task experiment conditions

The procedure of the experiment is as follows.

1. To measure the myoelectric potential at rest, keep still for 10 s with the right arm extended.

2. Put on the HMD and grasp the end effector at the position of the yellow tape.

3. With the arm extended, bring the rod into contact with the start point (far side) of the route; then count 5 s and start.

4. On reaching the turning point, count 5 s again and start back from there.

5. Return to the starting point and count 5 s.

This procedure is repeated until the subject no longer touches the route. During the task, the subject sits with the body parallel to the desk and tries to move the body and head as little as possible.

4.2 Implementation

End Effector.

The end effector used is the one shown in Fig. 3. A part made with a 3D printer is attached to the end of a cypress rod, and the thread from the motor is attached to this part. Vinyl tape is wrapped around the part of the rod that is held.

Force Sense Presentation.

When the rod sinks into an object, a force is presented that pulls it back in the direction opposite to the object. The magnitude of the force depends on the penetration depth and the speed at the moment of sinking.
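A penalty-style model of this behavior is sketched below: the force grows with penetration depth and penetration speed and is capped at the measured maximum output of the SPIDAR-HS. The stiffness and damping gains are hypothetical; the paper does not state the actual values.

```python
def reaction_force(depth, depth_rate, k=300.0, b=5.0, f_max=6.23):
    """Reaction force pulling the rod back out of the wall (sketch).

    depth:      penetration depth of the rod into the object [m], >= 0
    depth_rate: penetration speed [m/s]
    k, b:       hypothetical stiffness and damping gains
    f_max:      cap at the measured maximum SPIDAR-HS output [N]
    """
    if depth <= 0.0:
        return 0.0
    return min(k * depth + b * max(depth_rate, 0.0), f_max)
```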

VR Environment.

Figure 8 shows the route in the VR task (left) and the route in the Real task (right). The route in the VR task was created with reference to the route in the Real task. In both the VR task and the Real task, parts of the route lie in blind spots and are difficult to see. In the VR task, the subject's own arm and body cannot be seen.

Fig. 8. A route in the VR task (left) and a route in the Real task (right)

4.3 Result and Discussion

Subjects reported that they could not tell whether the rod was in contact with the upper or the lower side of the route. When the rod contacts the path, the SPIDAR-HS applies a force in the direction that keeps the end effector from sinking further into the wall. This is felt more as the rod simply stopping than as a reaction force being received. In addition, when the rod rubs against or only lightly touches the wall of the route, the presented force becomes small. When such a small force is felt in the absence of visual information, it becomes difficult to distinguish upper from lower contact because the reaction force at contact can hardly be felt.

5 Conclusion and Future Work

The purpose of this research was to construct an experimental environment for devising a method for quantitative evaluation of SoA. For this purpose, this year we conducted physiological behavior measurements in cooperation with the Ergonomics Laboratory of the Faculty of Systems Science and Engineering at Kansai University and the Kobayashi Laboratory of the Faculty of Science and Technology at Chitose University of Science and Technology, and we evaluated the constructed environment based on the results.

The problem with the constructed environment that manifested in the ball-catching task was that the force that can be presented is small. In the future, depending on the task, this limitation must be addressed by using more powerful motors, changing the motor mounting positions, or changing the force presentation method.

Another problem observed is that upper and lower contact are difficult to distinguish with the current SPIDAR system. If physiological behavior is to be measured in tasks requiring weak forces in the future, some countermeasure is necessary, for example, a mechanism that generates a weak vibration on the side of the end effector that makes contact. When creating an end effector that includes such a mechanism, it must be made small and light.

In the two tasks performed this time, the subjects could not see their own hands and bodies. In the ball-catching task, the hand in the virtual space did not move at all, so the sense of immersion appears to be lost. In the rod-tracking task as well, some subjects reported that, because they could not see their own arms and bodies, they did not notice the tilt of the rod or hand that they would normally notice. In the future, we will make the subject's body visible in the virtual space using chroma key processing. When the subject's hand moves, the hand displayed in the virtual space will then move to the same extent, and the sense of immersion should be much higher than that observed in the present experiments.

In the experimental environment constructed using the SPIDAR-HS, various end effectors can be attached, and the method of presenting force can be changed freely. This year, by using different end effectors and force presentation methods for each of the two tasks, we obtained different kinds of physiological behavior measurements. In addition, compared with the conventional SPIDAR series, the working space became very wide by adopting the shape of a cube with 2.5 m sides, which makes it possible to handle tasks that a conventional SPIDAR could not. From the above, by making full use of the wide working space, the end effector that can be made into various shapes, and the freedom of the motor mounting positions, we conclude that this fiscal year we were able to construct an experimental environment in which a wide variety of physiological behavior measurements can be conducted under various conditions in virtual space. From next fiscal year, we plan to improve the performance of the whole experimental system using the SPIDAR-HS and to design VR guidelines for the visual-haptic presentation environment, which is the final goal of this research.