1 Introduction

Although the concept of Brain-Computer Interface (BCI) was introduced by Vidal [1] in 1973, it really emerged as a new field of research in the early nineties, when systems allowing real-time processing of brain signals became available. Since then, BCI research has seen impressive growth in neuroscience, computer science and clinical research. However, a significant number of theoretical and technological issues still have to be dealt with, and they will probably be addressed most efficiently through an interdisciplinary approach.

BCI is considered an effective tool for the rehabilitation and/or assistance of severely impaired patients. Our team works in particular on BCI for Duchenne Muscular Dystrophy (DMD), a severe pathology of the skeletal musculature. This genetic disorder causes an absence of dystrophin, a protein that supports muscle strength and muscle fiber cohesion, which leads to progressive muscle degeneration and weakness. This dystrophy is described by Moser as the most common sex-linked lethal disease in man (about one case in 4000 male live births). Patients are wheelchair-bound around the age of 8-10 years and usually die before the age of 20 years [2]. The disease mostly affects males and causes a progressive degeneration not only of the skeletal muscles but also of the digestive, respiratory and heart muscles. Other disorders also appear, such as bone fragility, microcirculation disorders, nutritional disorders and anxiety syndromes. Recent advances in medicine may help delay the symptoms of DMD and increase the life expectancy of patients, but unfortunately fail to stop the natural evolution of the disease, which leads to a progressive loss of motor skills and ultimately to extremely severe quadriplegia (see Fig. 1).

Fig. 1.

In the early stages, DMD affects the shoulder and upper arm muscles and the muscles of the hips and thighs (in red). These weaknesses lead to difficulty in rising from the floor, climbing stairs, maintaining balance and raising the arms (Colour figure online). (Source: http://mda.org/disease/duchenne-muscular-dystrophy/overview)

The residual motor ability of the most advanced patients is characterized by very low-amplitude movements associated with a loss of degrees of freedom and severe muscle weakness in the fingers. The residual distal movements are exploited as much as possible to control an electric wheelchair or a computer, thanks to existing palliative solutions on the market such as mini-joysticks, mini-trackballs or touchpads. Using these devices requires an extremely precise installation. However, at a very advanced stage, these assistive technologies no longer meet the patients' needs: efficiency drops despite custom configurations, and the usability of the equipment fluctuates. These fluctuations are related to the duration of use combined with excessive physical load, or may be due to the environment: humidity, coldness (causing microcirculation disorders) or minor modifications of the system (when moving a wheelchair, for example).

In the context of a progressive, degenerative and highly disabling disease such as DMD, recommendations for assistive technology must adapt to the constant changes in the functional state of the user. Anticipating needs is not trivial: we need human-computer interfaces that remain usable at the different stages of the evolution of the user's functional profile, without requiring a change of strategy (material, equipment, behavior) each time the disease evolves.

The recent development of BCI allows one to consider new control options for these patients, eager for autonomy, particularly in the use of computers, electric wheelchairs and home automation [3, 4].

The purpose of our study is to observe various muscle and brain signals while users push joysticks, typically to move a character in a 2D or 3D maze, or a car along a road.

We want to check the level of correlation between movements performed by the patients (finger and foot) and changes in electrophysiological potentials (EEG and EMG). This correlation is well known for healthy people.

For example, during the preparation of a motor action, a desynchronization of cortical rhythms in the mu and beta frequency bands can be detected in the EEG signals recorded over motor areas. This desynchronization precedes the appearance of bursts of action potentials in the EMG signals and the realization of the movement. Our goal is to check whether similar correlations are present in DMD subjects and, if so, at what level of repeatability and robustness they can be identified at different stages of disease progression.
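Such a desynchronization is commonly quantified as a relative drop in band power with respect to a baseline period (event-related desynchronization, ERD). The following minimal sketch illustrates the computation on a synthetic 10 Hz mu rhythm rather than on real recordings; the function names, the 2-second windows and the band limits are our own illustrative assumptions, not part of the study's processing pipeline.

```python
import numpy as np
from scipy.signal import welch

FS = 512  # sampling frequency (Hz), as in our acquisition setup

def band_power(sig, fs, fmin, fmax):
    """Mean power spectral density in [fmin, fmax] Hz (Welch's method)."""
    freqs, psd = welch(sig, fs=fs, nperseg=fs)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].mean()

def erd_percent(baseline, epoch, fs=FS, band=(8, 12)):
    """Relative band-power change vs. baseline; negative values indicate ERD."""
    p_ref = band_power(baseline, fs, *band)
    return 100.0 * (band_power(epoch, fs, *band) - p_ref) / p_ref

# Synthetic example: a 10 Hz mu rhythm whose amplitude halves before movement
t = np.arange(0, 2, 1.0 / FS)
rng = np.random.default_rng(0)
baseline = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
epoch = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
print(round(erd_percent(baseline, epoch)))  # strongly negative, i.e. a clear desynchronization
```

In practice the same computation would be run on epochs time-locked to movement onset, channel by channel, over the electrodes covering the motor areas.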

In this context, the challenge is therefore to propose to the patient a series of manipulations that he/she is able to perform now (time t), and to examine how it will be possible in the future (time t + n months) to reproduce them, without necessarily using the same data source as the one initially used to control the system (EEG instead of EMG, or a mix of both, for example).

In this study, we focus our attention particularly on asynchronous non-invasive BCI [4]. Our work is oriented towards two main questions: (a) how can we help people with better human-computer interfaces for BCI? and (b) how can we propose hybrid BCIs able to adapt themselves to the context of use (user profile, progression of the disease over time, type of wheelchair, intra- and inter-session adaptation, etc.)?

The paper is organized as follows: Part 2 presents our motivation for helping people suffering from muscular dystrophy or schizophrenia with better HCI for BCI; Part 3 describes our approach combining hybrid BCI and virtual reality; Parts 4 and 5 describe the experiments conducted and the results obtained; the last part gives a short conclusion and some perspectives for this work.

2 Helping People with Better HCI for BCI

We have noticed that most of the BCIs proposed to people are neither very pleasant nor very ergonomic to use. Most of the time, users can only see some graphics on the screen, and it is not easy for them to understand how to interact with such systems. For instance, how is one supposed to mentally perform the task "push" with an Emotiv EPOC system, when nothing in the documentation explains concretely how to do this? We believe that giving users more feedback and feedforward before, during and after performing a task will lead to better user experiences and better results.

In the early stages of a progressive disease such as DMD, the patient is still able to use his/her muscles. Thus, he/she can be trained to use a hybrid BCI with channels measuring his/her motion, his/her muscle activity through electromyography, and the activity of his/her motor and pre-motor cortical areas [5]. Later, as the disease evolves, the motion-related channels will become less informative, and the challenge will be to allow the control channels related to brain activity to take over. Clinicians of the physical medicine and rehabilitation service of Lille University Regional Hospital will define user needs and will recruit and manage DMD patients for long-term experiments. Colleagues of the clinical neurophysiology service at Lille University Hospital are helping us define the most appropriate markers of cortical activity for each interaction task.

2.1 For People Suffering from Muscular Dystrophy

As stated before, our first target population will be people suffering from Duchenne Muscular Dystrophy (DMD), but for the moment we are improving our interactive system by testing it on non-disabled people, as presented in other work on DMD [6].

Concerning HCI aspects, we focus on virtual reality in order to provide better feedback to users. The goal of virtual reality is to enable a person to engage in sensorimotor and cognitive activity in an artificial, digitally created world, which can be imaginary, symbolic or a simulation of certain aspects of the real world. For instance, Fig. 2 presents hybrid BCIs developed in our laboratory, where the user can see, in a virtual world (NeoAxis or Unity3D), realistic effects of his/her interaction in the world when he/she pushes Arduino-scanned buttons (left hand, right hand or both simultaneously) and/or when he/she thinks of pushing the buttons.

Fig. 2.

(Left): Hybrid BCI developed in our team (EEG, EMG and Arduino). (Right): Hybrid BCI and virtual reality showing on the screen the hand(s) to move

We are also planning to use this approach (hybrid HCI and virtual reality) with other kinds of patients, such as people with schizophrenia, for whom serious games will hopefully help detect and manage psychiatric disorders.

2.2 For People Suffering from Schizophrenia

As explained previously, DMD patients sometimes suffer from anxiety syndromes. We are therefore also interested in observing the brain signals of people suffering from schizophrenia. Thus, the maze we designed with Unity3D to study DMD pathologies could also be used with psychiatric disorders such as schizophrenia.

The maze would allow us to work on two items: the occurrence of Auditory Verbal Hallucinations (AVHs) and spatial memory. Many studies have characterized profiles of abnormal neuronal oscillations in schizophrenic (SCZ) patients. For example, a reduced amplitude of theta waves and a delayed phase are significant clues for detecting such profiles [7]. We propose the use of a virtual maze because theta waves are involved in spatial orientation. The complexity of the maze would depend on the amplitude of the SCZ patient's theta waves, measured over the temporal area (see Fig. 3, left).

Fig. 3.

Left: International 10-20 system and chosen electrodes. Right: Neurofeedback effect on gamma power (30-60 Hz) [8].

A recent study [8] demonstrated that the repeated use of a playful serious game can increase the amplitude of gamma waves. Our working assumption is that the same phenomenon could be observed with theta waves.

First, an increase in theta-wave amplitude should help resolve the problems of spatial orientation. Second, as McCarthy supposes [9], this increase could reduce the severity of the AVHs or their occurrence.

The use of this serious game would also allow interaction with objects that trigger the AVHs. A recent study [10] demonstrated that SCZ patients can identify these objects. We propose to place/hide/animate such objects in the maze. The aim would be to observe the effect these objects have on the neuronal-oscillation profiles of schizophrenic patients.

In parallel, we propose to work with SCZ patients to develop strategies to adopt during AVHs. McCarthy [8] proposes the use of humming, based on studies showing that fewer AVHs occur during periods when SCZ patients are humming [11].

The aim would be to develop these song-based strategies and then ask the SCZ patient to apply them when facing the elements encountered in the maze, among which the triggering objects would be hidden. The objective would be to re-educate the patient with respect to his/her fear and to validate the use of humming.

3 Hybrid BCI and Virtual Reality

To improve the speed and robustness of communication, Leeb et al. [12] have recently introduced the so-called "hybrid BCI" notion, in which brain activity and one or more other signals are analyzed jointly. We consider that each channel of our hybrid BCI carries some relevant information for understanding the achievement of a particular task, and we apply sensor fusion techniques to improve human-machine communication and interface robustness. Our hybrid BCI is designed according to the HCI multimodal interaction paradigm using the CARE properties (Complementarity, Assignment, Redundancy, and Equivalence) [13]. Our goal is to provide various data (EEG, EMG and actual movements measured by the Arduino) to the kernel of our system (OpenVibe), which performs a data fusion to derive the command sent to the avatar through the VRPN protocol [14].
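As an illustration of such a fusion, the sketch below combines per-channel confidence scores with weights that can be shifted from motion-related channels towards EEG as the disease progresses. The function name, the command labels and the weight values are hypothetical illustrations; the actual fusion in our system is performed inside OpenVibe.

```python
def fuse_commands(channel_scores, weights):
    """Weighted fusion of per-channel confidence scores; returns the winning command."""
    commands = channel_scores[0].keys()
    fused = {c: sum(w * s[c] for w, s in zip(weights, channel_scores)) for c in commands}
    return max(fused, key=fused.get)

# Hypothetical per-command confidences from each channel of the hybrid BCI
eeg = {"left": 0.2, "right": 0.5, "both": 0.3}  # motor-cortex classifier output
emg = {"left": 0.1, "right": 0.7, "both": 0.2}  # forearm muscle activity
btn = {"left": 0.0, "right": 1.0, "both": 0.0}  # Arduino-scanned joystick state

# Early disease stage: trust the motion-related channels more; later on,
# the weights can be shifted towards the EEG channel as muscles weaken.
print(fuse_commands([eeg, emg, btn], weights=[0.2, 0.3, 0.5]))  # → right
```

This weighted-sum scheme realizes the Redundancy and Complementarity properties of CARE: channels vote on the same command, and reweighting changes which modality dominates without any change of strategy for the user.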

The analysis of EEG signals collected over the scalp provides information on the amplitude and phase of cortical rhythms, making it possible to identify synchronizations or desynchronizations related to a real or an imagined event (the movement of a limb, for example) [15, 16].

Electromyography (EMG) allows the analysis of the electrical phenomena that occur in a muscle during voluntary contraction. This examination detects the electrical signal transmitted by a peripheral motor neuron to the muscle fibers it innervates, called the motor unit action potential (MUAP) [17]. Standard techniques use invasive electrodes, usually concentric needles: the explored area is reduced and the exploration is very precise.

There are also non-invasive "global" EMGs, using surface electrodes, which explore a larger territory and are applicable to many muscles. A compound muscle (or motor) action potential (CMAP) is then collected [18]. They are mainly used in the study of muscle strength and fatigue in patients with neuromuscular diseases [19].

Patients with Duchenne muscular dystrophy have a myogenic EMG pattern during physical effort. Abnormally rich relative to the effort, it is composed of polyphasic motor unit potentials with low amplitude and short duration. Abnormal spontaneous activity can also be detected, in the form of fibrillation potentials, positive sharp waves and complex repetitive discharges. In very advanced forms, some areas may become electrically silent [20].

The analysis of the EMG signal can be based on motor unit potential (MUP) [21, 22] or compound muscle action potential (CMAP) [19] parameters, checking various elements such as duration, peak-to-peak amplitude, total area under the curve, number of phases, etc.
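A minimal sketch of such a parameter extraction is given below, applied to a synthetic triphasic waveform. The function name, the baseline-crossing definition of a phase and the rectangle-rule estimate of the rectified area are our own simplifying assumptions, not the clinical measurement procedure.

```python
import numpy as np

def muap_features(wave, fs):
    """Basic MUAP/CMAP descriptors: duration, peak-to-peak amplitude,
    rectified area and number of phases (segments between baseline crossings)."""
    duration_ms = 1000.0 * len(wave) / fs
    p2p = float(wave.max() - wave.min())
    area = float(np.sum(np.abs(wave)) / fs)  # rectangle-rule area under the rectified curve
    signs = np.sign(wave[np.abs(wave) > 1e-6])  # ignore samples sitting on the baseline
    phases = 1 + int(np.sum(signs[1:] != signs[:-1]))
    return {"duration_ms": duration_ms, "p2p": p2p, "area": area, "phases": phases}

# Synthetic triphasic waveform (positive-negative-positive deflections)
bump = np.sin(np.linspace(0.0, np.pi, 50))
wave = np.concatenate([bump, -2.0 * bump, 0.5 * bump])
print(muap_features(wave, fs=512))
```

In the myogenic pattern described above, one would expect such descriptors to show an increased phase count together with reduced amplitude and duration.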

4 Experimentation

The experiment was carried out with nine healthy subjects (6 men and 3 women) aged between 20 and 53 years. The subjects' consent was obtained verbally. Two of them had previously used the system. Participants were asked to sit comfortably in an armchair and to keep an index finger on each joystick. They were asked to avoid blinking their eyes or contracting their jaws during the experiment, in order to prevent the recording of ocular and muscular artifacts.

Subjects wore an electrode cap (GAMMAcap, g.tec) with Ag/AgCl electrodes located according to the international 10/20 system. Ten monopolar channels were recorded over the central, parietal and frontal areas: C1, C2, C3, C4, C5, C6, FC3, CP3, FC4 and CP4, with an electrode clipped on the right ear as reference and another one on the forehead as ground. A gel was applied between the skin and the electrodes to increase the conduction of the electrical signal. Signals were amplified and sampled at Fe = 512 Hz.

Two bipolar channels were placed on the forearms in order to record muscular activity while users manipulated the joysticks.

Figure 4 shows our system, which is composed of a 16-channel physiological signal amplifier (g.USBamp, g.tec) and two DELL computers running the Windows XP and Windows 7 operating systems. Two joysticks are connected to the USB port of one computer via an Arduino UNO. This computer runs the OpenVibe software, which collects data from the EEG, EMG and Arduino sources; OpenVibe controls online signal acquisition, signal processing and the sending of commands for application control. The other computer runs the application developed with Unity 3D. The two computers are linked by an Ethernet cable, which enables commands to be sent from OpenVibe to the Unity 3D application via the VRPN protocol.

Fig. 4.

Experiment setup

The experiment was composed of a calibration phase followed by two online phases. During the second and third parts of the experiment, participants were asked to control a character in a maze using the joysticks. The experiment lasted about one hour. The calibration phase makes it possible to record EMG and EEG data at specific moments. During this phase, the user has to push the left joystick, the right joystick or both, according to orange arrows displayed on the screen (see Fig. 4). Virtual hands are displayed and move according to the activated joystick.

During the online phases, users have to get the character out of the maze by following a yellow path. The character turns right, turns left or goes straight when the right, left or both joysticks are activated, respectively. In the first online phase the user controls the character in an immersive view, whereas an aerial view is used during the second online phase. No time indication is displayed, to avoid competition effects and stress. The experiment ends with a questionnaire in order to collect qualitative data.

5 Results

Figure 5 shows the average time needed by users to get the character out of the maze in the immersive (A) and aerial (B) views. The average time in the immersive view (152 s) is slightly higher than in the aerial view (130 s). Nevertheless, a Wilcoxon non-parametric test does not show a significant difference in time between the immersive and aerial views (p-value = 0.141), at an alpha risk of 5 %.
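Since the same nine subjects performed both conditions, this comparison is a paired Wilcoxon signed-rank test, which can be reproduced with scipy as sketched below. The per-subject times are hypothetical placeholders, since the individual measurements are not reported here.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-subject completion times in seconds (9 subjects);
# these are placeholders, not the data collected in the study.
immersive = np.array([160.0, 138.0, 171.0, 150.0, 141.0, 132.0, 149.0, 163.0, 128.0])
aerial = np.array([131.0, 140.0, 136.0, 125.0, 122.0, 139.0, 120.0, 132.0, 125.0])

res = wilcoxon(immersive, aerial)  # paired, two-sided by default
print(f"W = {res.statistic}, p = {res.pvalue:.3f}")
```

The signed-rank test is appropriate here because, with only nine participants, normality of the time differences cannot reasonably be assumed.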

Fig. 5
figure 5figure 5

Mean time of users to get out of the maze when leading the character in immersive (A) and aerial (B) view. Vertical bars show standard deviation.

Moreover, qualitative data indicate that the users' concentration was slightly better in the aerial view than in the immersive view. This may explain why the average time to get out of the maze is lower in the aerial view. This result suggests that users can easily switch from one view to the other.

According to Fig. 5, the time to get out of the maze is satisfactory in both the immersive and aerial views. Our control mode, with two degrees of freedom for rotating the character and moving it forward in the maze, therefore seems efficient.

This is supported by the qualitative data, which indicate that all users felt comfortable with the proposed HCI. Moreover, a majority of participants (55 %) found the character control intuitive, while the others found it moderately intuitive.

6 Conclusion and Perspectives

In this paper we have shown the relevance of our approach to controlling a character with two degrees of freedom. Indeed, the time to get out of the maze was satisfactory for all participants. This is supported by the qualitative data, which indicate that participants felt comfortable with the proposed HCI.

Moreover, a time comparison between character control in the immersive and aerial views, using a Wilcoxon test, does not indicate a significant difference.

According to the users, their concentration in the aerial view was slightly better than in the immersive view. These results suggest that users can easily switch from one view to the other. This is interesting if we want to switch from a 3D to a 2D application: a user can first learn to control the proposed HCI through a 3D game, then switch to a 2D application such as controlling cursor movements on a computer screen. This proposed HCI is promising for disabled people such as DMD (Duchenne Muscular Dystrophy) patients, who have weak motor activity and can control a system with only very few degrees of freedom.

During the calibration and online phases, virtual hands were displayed on the screen; the right and left virtual hands move when users handle the right and left joysticks, respectively. According to the qualitative data, half of the participants considered that the virtual hands provided good feedback on their interaction with the system during the calibration phase. It is therefore beneficial to display virtual-hand animations as feedback during the calibration phase, as they let users know whether they performed the correct movement. However, a large majority of participants (89 %) did not find these animations useful during the online phases, so they may be removed from the application in future experiments.

Furthermore, this first study indicates that sending commands via the VRPN protocol allows efficient application control in real time. Nevertheless, when the Arduino data recorded during the online phases are replayed several times, the character's trajectories are never the same. This is a drawback if an analysis of the character's movements is needed after the experiment.

Further experiments will be performed soon in order to assess our system with DMD patients. Given that the muscular strength of a DMD patient is lower than that of healthy people, the joysticks need to be adapted. Movement sensors requiring less effort and offering better sensitivity to finger movements will also be integrated.