
1 Introduction

We previously suggested that behavioral adjustments to environmental properties could be an indicator of presence in virtual environments (VE) [1]. Using Warren and Whang's original experimental protocol [2], the subject’s task was to walk through a virtual aperture of variable width. In this particular affordance [3], subjects exhibited a behavioral transition from frontal walking (for broad apertures) to body rotation (for narrow apertures). This adaptive behavior was also observed in a VE, using a CAVE setup [1]. In a CAVE, subjects see their own body. This is no longer the case when they wear a head-mounted display (HMD). This raises the question of the role of the perception of one’s own body (a virtual self) during behavioral adjustments to environmental properties (here, the virtual door’s variable width). Our general hypothesis was that, when the subject wears an HMD, the presence of a (visual, co-localized) virtual self favors perceptual calibration of the body/environment relationships (and the processing of body-scaled information [2]).

Another reason to test the subject’s behavior while wearing an HMD is that we previously observed [4] that, when the subject’s body approaches a virtual object (or, worse, passes through it), the latter tends to become transparent (destroying the feeling of presence). This, of course, is no longer the case with an HMD (and a virtual body representation).

Furthermore, in line with recent studies, we supposed that vibrotactile feedback (signaling, in our study, that one’s shoulder is approaching the virtual door) might also contribute to behavioral calibration. While visual rendering of virtual environments is now satisfactory, haptic rendering remains difficult to use without physical hardware (a force-feedback arm and/or physical objects inside the virtual environment, constraining the subject’s posture and movements). The lack of haptic feedback results in incomplete sensory feedback as soon as the subject interacts with virtual objects. For example, collisions of the body with virtual objects do not typically result in proprioceptive and haptic feedback: nothing is actually there to stop the subject’s movement. This deficiency might reduce the user’s presence in the VE, and be one reason for behavior that is inappropriate with respect to reality. In a recent study [4], we started investigating the effect of vibrotactile stimulation while interacting with virtual objects, and asked whether vibrotactile stimulation might act as a substitute for haptic stimulation (see also [5–7]). The general hypothesis is that vibrotactile feedback (signaling approach to, or contact with, a physical surface) enhances collision awareness, and spatial perception in general.

2 Materials and Methods

2.1 Subjects

Twenty male subjects voluntarily took part in this experiment. They were recruited from among students and university staff, with an age range of 20 to 39 years (mean = 24.15; SD = 5.1). All participants gave written informed consent prior to the study, in accordance with the 1964 Declaration of Helsinki. The study was approved by the local institutional review board (IRB) of the Institute of Movement Sciences. The rationale for recruiting only male subjects was the same as in [1], and is mainly morphological: “In males, the body rotation while walking through an aperture is known to depend upon the shoulder width, i.e., the widest frontal body dimension. In females, the same behavior is potentially more complex since it could depend not only on the shoulder width but also on the bust size” (quoted from [1]).

Subjects were naïve as to the purpose of the experiment. All subjects reported normal vision and sense of touch and were free from any locomotor disorder. Their stereoscopic acuity was tested using the Randot® Graded Circles test (Stereo Optical Company Inc, Chicago, IL, USA) and their inter-pupillary distance (IPD) was measured (using the Oculus configuration utility software). IPD ranged from 61.4 to 67.8 mm (mean = 64.1; SD = 1.4), and was used to adjust stereoscopic rendering to each individual. The subjects were not selected with respect to stature. Their shoulder width ranged from 42.5 to 51 cm (mean = 45.9; SD = 2.2) and their standing height from 168 to 185 cm (mean = 177.1; SD = 5.1).

2.2 Apparatus

The experiment was conducted in a square area (3 × 3 meters), inside the CAVE setup at CRVM (www.crvm.eu). Subjects were equipped with an Oculus Rift DK2 device. This HMD allows stereoscopic 3D rendering of virtual environments, with a resolution of 960 × 1080 pixels per eye. It uses a combination of 3-axis gyros, accelerometers, and magnetometers, enabling precise absolute (relative to Earth) head-orientation tracking. The HMD was connected by wire to the graphics PC running the Unity software. The wire came from above and was long enough to minimally disturb the subject’s locomotion (see Fig. 1, left). The CAVE tracking system (ArtTrack®), with eight cameras using infrared recognition of passive markers, was used to monitor the subject’s translational movements, in particular the head position, and to update the stereoscopic image relative to the subject’s point of view (a configuration of markers was placed on the HMD, see Fig. 1). Additionally, the ART tracking system was used to monitor the subject’s whole-body position, with passive markers placed all over the body (feet, legs, thighs, waist, hands, arms, forearms and shoulders), in order to record the subject’s movements and, optionally, drive a real-time, co-localized virtual representation of the subject’s body (see Fig. 1, right). The real-time VR system operated at 60 Hz.

Fig. 1.

Left: a subject in the experimental setup. He wears the HMD, equipped with an ART target for tracking the head’s translational movements. He also wears an ART whole-body capture set. Right: the recorded whole-body movements were used to build a co-localized avatar (the subject’s head was not actually represented in the experiment). The spheres’ centers represent the positions of the vibrating actuators on the subject’s shoulders. Their size represents the spatial domain in which the actuators were active (that is, when the distance between the subject’s shoulder and a virtual object was smaller than their radius). See text for details.

Moreover, the visual feedback (virtual environment and, optionally, self-avatar) could be augmented with vibrotactile feedback. The vibrotactile device was developed previously [1]. It is based on an Arduino-like microcontroller (ChipKit Max32™), equipped with a Bluetooth module for communicating with the PC on which the simulation was running. The controller can address up to 20 vibrators with 11 levels of amplitude (from 0 to 100 % of the maximal amplitude), using pulse-width modulation (PWM), and two vibration patterns (continuous or discontinuous). The controller activates the vibrators (DC motors with an eccentric mass), connected by wires, and is powered by two rechargeable batteries (one for the board and one for the vibrators). In the present experiment, two actuators were used, positioned on the subject’s shoulders (see Fig. 1, right).
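The level-to-duty-cycle mapping implied above can be sketched as follows. This is a minimal illustration rather than the actual microcontroller firmware; the function name and the 8-bit PWM resolution are our assumptions.

```python
def level_to_pwm(level, resolution=255):
    """Map one of the 11 amplitude levels (0..10, i.e. 0-100 %) to a PWM
    compare value, assuming a counter top of `resolution` (255 for the
    common 8-bit PWM found on Arduino-class boards).
    """
    if not 0 <= level <= 10:
        raise ValueError("amplitude level must be in 0..10")
    return round(level / 10 * resolution)
```

Level 0 leaves the motor off, level 10 drives it at full amplitude, and intermediate levels scale linearly in between.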

The VR loop (from motion capture to sensory rendering) was controlled by a PC, and a 5.1 surround sound system was used to render spatialized sound (door sliding movements). The experimental application was built with Unity3D, which handled experimental control, data recording and all scenario actions. The logical, geometrical, real-time connection between the Oculus and the ART tracking system was realized using in-house distribution software (VRdistrib, developed by J.M. Pergandi at CRVM).

2.3 Virtual Environment

The virtual environment (VE) was designed using 3D modeling software (3DSmax®) and imported into Unity3D to control the experimental scenario. The VE was composed of a small corridor, enclosed by a fence on one side and opening onto a shed on the other side (see Figs. 2 and 3, below). There was some furniture behind the fence and in the shed, in order to provide static and dynamic depth cues. This corridor was 3 meters wide (the actual size of the physical space in the CAVE). A sliding door was positioned in the middle of the corridor (also the middle of the physical space). The size congruence between the virtual environment and the physical space was meant to enable real walking during the experiment, which appears to be an effective way to dispense with locomotion interfaces, avoid cybersickness, and favor presence [1].

Fig. 2.

The virtual environment (with representation of the subject crossing the aperture)

The door consisted of two mobile panels (height = 250 cm, thickness = 20 cm). It could be opened or closed by lateral translation. There were five different aperture widths, from 40 cm to 80 cm in 10 cm steps (40, 50, 60, 70, 80 cm). A spatialized rattling sound was associated with the closing and opening movements of the door. Two dark-gray disks/marks were positioned on the floor, 1 m from the door on each side. These disks indicated the departure and arrival points.

Fig. 3.

Subjective view inside the HMD (with representation of the virtual body). The reader may see the 3D scene by cross-fusing both images.

2.4 Procedure

Each subject was first received in a meeting room, where the experimenter read the instructions. The subject’s stereoscopic acuity was tested and he was asked to fill in a questionnaire about his susceptibility to motion sickness (Motion Sickness Susceptibility Questionnaire Short-form [8]). This procedure was used to exclude subjects who had experienced significant motion sickness symptoms during transportation in real life, to prevent potential cybersickness. Once a subject was recruited for the study, his shoulder width was measured with an anthropometric device, from the tip of the left humerus to the tip of the right humerus, with the shoulders relaxed. His inter-pupillary distance was measured using the Oculus Rift Configuration Utility, and this value was used to calibrate the stereoscopic rendering of the VE.

The subject was equipped with tracking markers and with the vibrotactile device. The two vibrotactile actuators were placed on the tips of the left and right humerus. Once equipped, the subject was taken to the experimentation area and fitted with the head-mounted display. The initial VE was displayed with the door completely open and an avatar in a T-pose at the doorstep. Several calibration and configuration steps were then performed. The initial Oculus viewpoint was calibrated to match the tracking-system reference frame. The Bluetooth connection between the PC and the vibrotactile device was established and tested.

The avatar’s morphology was calibrated to match the subject’s shoulder width and height. Avatar calibration was performed to align the avatar’s limbs with the subject’s limbs, as measured by the tracking system. At the end of this setup, the avatar was optimally co-localized with the subject’s own body, reproducing the subject’s movements in real time.

The subject then went through a training session. He was required to walk straight from the starting point to the arrival point and to stop there, then to turn around and do the same in the other direction. The door remained fully open. During this phase, he could become familiar with the task and with his virtual body. Next, he experienced a short vibrotactile training session. The door was entirely closed and the subject was asked to slowly approach it with one of his shoulders until he felt the vibrotactile feedback. He could then try the different feedback modes (see below), by getting closer to, or in contact with, the door.

After these training sessions, the subject was asked to return to the starting point. The door was fully opened, and the first trial could begin. One trial consisted of the following sequence: the subject stood on one of the marks on the floor while the door moved to the next aperture width. At the beep, the subject had to walk straight to the other mark, on the other side of the door. When he arrived at the mark, the door was opened. The subject then turned around, the door was moved to the next aperture width, and another trial could start in the other direction.

2.5 Experimental Conditions

There were two independent, crossed variables. The first one was the type of feedback: visual only, or augmented with vibrotactile stimulation. Vibrotactile feedback operated like a radar (similar to the parking sensors found in cars). When the subject’s shoulder approached a virtual object (a side of the door, in our case), a discontinuous vibration was sent to the corresponding actuator. The closer the shoulder was to the object, the more intense the vibration. If the shoulder collided with the door, a continuous vibration at maximum intensity was sent. The “radar” was a sphere with a radius of about 10 cm, centered on each shoulder (see Fig. 1, right). This size was chosen with reference to the minimal security distance adopted by subjects in [1]. Collision detection was realized by placing a collider (a simple collision-detection volume) covering the shoulder and forearm of the avatar. The system always computed the radar and collision detection, but the vibrotactile controller was activated or not depending on the feedback condition. All collisions and distances to the doorposts were recorded during the experiment. The amplitude level of the radar was recorded on a scale from 0 to 11, 0 meaning no detection (distance between either shoulder and either doorpost greater than 10 cm), 1 to 10 graded proximity, and 11 meaning collision.
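The radar logic described above can be sketched as follows. This assumes a linear quantization of distance into the ten intermediate levels; the source only specifies the ~10 cm radius, the 0/1–10/11 coding, and the closer-is-stronger rule, and the function name is ours.

```python
def radar_level(distance_m, radius_m=0.10, collided=False):
    """Recorded radar amplitude level for one shoulder.

    0     -> no detection (shoulder farther than the radar radius)
    1..10 -> discontinuous vibration, stronger as the shoulder gets closer
    11    -> collision: continuous vibration at maximum intensity
    """
    if collided:
        return 11
    if distance_m >= radius_m:
        return 0
    # Linear quantization (assumed): split the radius into ten 1 cm bands.
    return 10 - int(distance_m / radius_m * 10)
```

A shoulder 5 cm from the doorpost would thus trigger level 5, while touching the door saturates the actuator at level 11.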

The second variable was the presence (or not) of a virtual body (co-localized with the subject’s body). As with collision detection, the avatar was always active in the simulation software; however, it could be present or absent in the displayed visual scene.

2.6 Task

Subjects were simply asked to walk straight from a starting point to a target position (both marked on the floor). Each subject carried out four sessions: one for each of four conditions, resulting from the combination of two “Avatar” conditions (with and without a virtual body representation in the HMD) and two “Vibrotactile” conditions (with or without activation of the actuators).

Sessions were separated by short breaks, during which the subject could rest and filled in a questionnaire about cybersickness (SSQ [9]). Each session consisted of 20 trials. Each trial corresponded to one of the five aperture widths (40, 50, 60, 70, 80 cm), and each aperture condition was repeated four times. Within a given session, the succession order of aperture widths was randomized across subjects. The subjects were split into four groups; subjects were randomly assigned to one group, each group having a different (pseudo-random) succession order of the four experimental conditions.

2.7 Data Recording and Analysis

During the experiment, several behavioral indicators and events were recorded. In this paper, we focus on 1) the maximal angle of shoulder rotation while the subject was passing through the aperture, indicating a qualitatively appropriate behavior as a function of the door’s width, and 2) the possible occurrence of a collision between a shoulder and a doorpost, which is a finer indicator of adaptive behavior.
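As an illustration of how indicator 1) can be derived from the tracked shoulder markers, here is a minimal sketch; the array layout, axis convention and function name are our assumptions, as the authors' actual processing pipeline is not detailed.

```python
import numpy as np

def max_shoulder_rotation(left_xy, right_xy):
    """Maximal shoulder rotation (degrees) over one crossing trial.

    left_xy, right_xy: (N, 2) arrays of the left/right shoulder marker
    positions projected onto the floor plane, with the walking direction
    along +y. A frontal posture puts the shoulder axis along x (0 degrees);
    walking fully sideways aligns it with y (90 degrees).
    """
    axis = np.asarray(right_xy) - np.asarray(left_xy)  # shoulder axis, per frame
    # Unsigned angle between the shoulder axis and the frontal (x) axis.
    angles = np.degrees(np.arctan2(np.abs(axis[:, 1]), np.abs(axis[:, 0])))
    return float(angles.max())
```

With this convention, a subject who never deviates from frontal walking scores 0 degrees, while a full sideways crossing scores 90 degrees, matching the range reported in the Results.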

Fig. 4.

Average maximal shoulder rotation when the subject crossed the aperture, as a function of the ratio between the aperture width and each subject’s shoulder width. As the aperture becomes smaller, subjects systematically change their posture from frontal walking (shoulder rotation close to zero for ratios greater than 1.6) to sideways walking (shoulder rotation close to 80–90 degrees for ratios smaller than 1). Average values observed in the four experimental conditions were fitted with a logistic function (noA-noV: no avatar, no vibrotactile feedback; noA-V: no avatar, vibrotactile feedback; A-noV: avatar, no vibrotactile feedback; AV: avatar, vibrotactile feedback).

3 Results

Of the 20 subjects included in the experiment, two were removed from data analysis due to recording issues. Results show that, in all conditions, the door’s aperture width had a significant effect on the subjects’ shoulder rotation (ranging from 0 to 90 degrees), with a marked increase in shoulder rotation when the ratio between the aperture width and the subject’s shoulder width dropped below approximately 1.4 (see Fig. 4).

This result is coherent with the outcome of previous studies [1, 2]. More precisely, [1] found a critical aperture ratio (the ratio below which a significant shoulder rotation is observed) of about 1.3 (similar to the value observed in [2]). Here, we calculated the average shoulder rotation for all subjects, as a function of the four experimental conditions and the five aperture widths (Fig. 4). We fitted these values with a four-parameter logistic curve (Eq. 1).

$$ y = min + \frac{max - min}{1 + \left( \frac{x}{ec} \right)^{slope}} $$
(1)

The obtained regression coefficient was greater than .999. The ec parameter (see Eq. 1) represents the inflexion point of the fitted curve; we took it as an approximation of the critical aperture ratio. It is represented in Fig. 4 as a vertical dashed line.
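For reference, Eq. 1 can be written out directly: at x = ec the term (x/ec)^slope equals 1, so the curve sits exactly halfway between min and max, which is why ec serves as the inflexion-point (critical-ratio) estimate. The fit itself would typically be obtained with a nonlinear least-squares routine such as scipy.optimize.curve_fit; the function name below is ours.

```python
def logistic4(x, ymin, ymax, ec, slope):
    """Four-parameter logistic of Eq. 1:
    y = ymin + (ymax - ymin) / (1 + (x / ec) ** slope).
    """
    return ymin + (ymax - ymin) / (1.0 + (x / ec) ** slope)
```

With illustrative parameters min = 0, max = 90 and ec = 1.4 (not the fitted values), the curve yields 45 degrees at a ratio of exactly 1.4, approaches 90 degrees well below it, and approaches 0 well above it.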

This first result shows that, across all four experimental conditions, subjects exhibited an adapted behavior, starting to rotate their shoulders when the ratio between the aperture width and their shoulder width dropped below about 1.4.

This result indicates that, overall, subjects exhibited an adapted behavior, using body-scaled information [2], and that our experimental setup triggered behavioral presence. Using a repeated-measures ANOVA, a significant effect of door width was observed (as expected). No significant simple effect of the experimental conditions was observed on shoulder rotation. However, there was a significant interaction between the “Vibrotactile” condition and door width: shoulder rotation was significantly higher (closer to 90 degrees) as the door aperture became smaller when the vibrotactile feedback was present (p < .01).

We further analyzed the percentage of collisions between the subjects’ shoulders and the door as they crossed the aperture. In short, for small apertures, results show that both factors significantly affected the occurrence of collisions, collisions being minimal when both vibrotactile feedback and a virtual body were present (p < .01), with no significant interaction between the two factors. In other words, the positive effects of these factors appear to be additive.

Fig. 5.

Average probability of a collision between the body and the door (with standard deviation), as a function of aperture width. Top: without an avatar, vibrotactile feedback reduces the occurrence of collisions (for small apertures). Bottom: the same effect is observed in the presence of an avatar; collisions are minimal when both the avatar and vibrotactile feedback are present (AV).

4 Conclusion

In the present experiment, we confirmed that the adaptive behavior (rotating the shoulders to pass through a narrow aperture) previously observed in natural conditions and in a CAVE setup is essentially preserved when participants wear a head-mounted display (HMD).

The experimental interest of using an HMD is that, by default, the subject does not see his/her own body when using such a device. This enabled us to investigate the effect of seeing, or not, a (co-localized) representation of one’s own body (avatar) when interacting with a virtual environment. A first outcome of this experiment is that subjects exhibited an adaptive behavior even in the absence of an avatar. This result is not surprising and confirms that presence is obtained while wearing an HMD, even without a representation of one’s own body.

However, looking closer into the data, it appears that, in the absence of an avatar, subjects collided with the doorpost (for small apertures) in almost 50 % of the trials (Fig. 5, top). Here, adding vibrotactile “radar” feedback (signaling approach to the doorpost) helps, enabling the subjects to calibrate the perception of their body-environment relationships (using body-scaled information). This result can be taken as further evidence of the compression of visual distance perception in virtual environments [10].

On the other hand, the presence of a co-localized avatar also improved distance perception, compared to the condition without either an avatar or vibrotactile feedback. However, optimal performance was only observed when both the avatar and vibrotactile feedback were present. This last result requires further investigation, since it can be related to different hypotheses.

It might be that the avatar helps the subjects feel present in the virtual environment and consequently improves distance perception [11]. Subjects using an HMD without any self-representation often complain of a feeling of “floating” (you do not see your feet, so you do not know where you are relative to the ground surface). However, such body appropriation would not be sufficient to get rid of the depth-compression effect in VEs, such that vibrotactile feedback would be necessary to further calibrate spatial perception.

It might also be that the “quality” of the avatar was not sufficient in our conditions to be fully effective, and/or that the co-localization between the real and the virtual body was imperfect (spatially and temporally), resulting in distorted spatial perception.