
1 Introduction

Medical simulation modalities can be broadly divided into three categories: standardized patients, human patient simulators, and virtual-reality trainers. Standardized patients are individuals trained to play the role of a patient in a medical scenario. Human patient simulators are computerized mannequins capable of modeling and presenting physiology to learners. Virtual-reality trainers present computer-generated scenarios designed to develop cognitive as well as dexterous skills. Relatively little has been done to combine all three modalities.

The Wide Area Virtual Environment (WAVE) is an 8,000 sq. ft. facility designed to provide an immersive virtual environment for medical team training. It integrates all three modalities. The WAVE is capable of rendering highly realistic training scenarios that challenge a team’s ability to provide care under difficult conditions. The WAVE represents a novel application of human-computer interaction. It forms the basis for a synergistic amalgamation of live, virtual, and constructive simulation for medical instruction.

This paper briefly explores the three modalities of simulation, describes the design and functionality of the WAVE, and explains how the three modalities are combined in the WAVE to present a unique learning environment. Finally, we discuss our experience in using this capability and the implications for the future of medical simulation.

2 Background

The practice of medicine requires the application of knowledge, dexterous skills, and experience. Knowledge can be acquired through classroom learning, but skills acquisition and experience require practice. Traditionally, these skills were acquired through patient interactions, and the notion of ‘see one, do one, teach one’ was widely accepted [1,2,3]. Learning on patients, however, is not without risk. To ameliorate harm, cadavers and live animals have substituted for patients. These methods have their disadvantages: cadavers do not model a live patient’s physiology, animal anatomy can be a poor substitute for human anatomy, and there are ethical concerns over the use of live animals for teaching. Medical simulation seeks to address these limitations. It provides a consistent, repeatable environment for learning dexterous skills and developing experience.

Modern medical simulation modalities fall into three categories: standardized patients, part task trainers, and virtual simulations.

Standardized Patients (SPs) are individuals trained to portray patients in a learning scenario [4]. SPs maintain consistent portrayal of an individual with a medical condition. SPs can be used in clinical settings such as a doctor’s examination room. They can also be used in unconventional scenarios, such as a mass casualty or natural disaster event. Generally, SPs are used when human interaction is part of the learning objective. They are also used for practicing non-invasive medical skills.

Part task trainers are designed to facilitate the practice of specific medical or surgical skills. Examples include surgical airway, heart catheterization, and venipuncture. Part task trainers generally involve invasive procedures, e.g., puncturing the skin, cutting, or inserting medical devices into the body. Due to their specific focus, part task trainers may not replicate full human anatomy; venipuncture trainers, for example, may replicate just the arm. Since they focus only on specific tasks, they are not well suited to scenarios where multiple skills must be integrated with cognitive assessment to render lifesaving aid to the victim.

The Human Patient Simulator (HPS) addresses this limitation. An HPS is a computer-controlled mannequin that incorporates mechanical devices to simulate physiological activity. For example, learners can feel a pulse, detect breathing through airflow and chest movement, and observe pupillary response. Some procedures normally performed on part task trainers, e.g., chest tube insertion and cricothyroidotomy, can also be performed on an HPS. HPS also incorporate human physiological models and can respond to treatment by altering heart rate and breathing. Ruggedized HPS can be used in field conditions to simulate casualties.

Despite their utility, HPS have their limitations too. Commercially available HPS are not designed to operate autonomously. Each HPS requires human oversight to ensure specified learning objectives are being addressed. As with part task trainers, HPS require a supply of consumables. Skin patches and body inserts are used up as learners practice invasive procedures. They must be replaced prior to use by another learner.

In contrast to SPs and HPS, virtual reality trainers do not rely on a physical representation of the patient. Instead, a computer-generated analogue is employed. Being virtual, they can be deployed in both mobile and desktop environments, expanding the learning opportunities available to the student. Virtual reality trainers have capabilities that were traditionally the domain of SPs or HPS. For example, virtual standardized patient simulators [5] incorporating speech recognition and natural language processing capabilities have been developed. Virtual SPs have been used in limited settings for medical student education [6].

Virtual simulators have also been used for dexterous skills training. The Haptic Workbench [7] incorporates a 3D stereoscopic display and Phantom haptic interface devices [8] co-located within the same virtual space. Within this space, learners can see and touch computer-generated 3D objects. Medical simulators developed using this approach include cricothyroidotomy [9] and craniotomy [10].

Being fully virtual, these simulators generally do not require consumables. A greater range of invasive procedures can be simulated. Virtual simulators can objectively track learner performance. Despite these advantages, virtual simulators have not replaced the other modalities. They cannot fully replace SPs. Virtual simulators still cannot simulate human interaction at the level SPs can. At the same time, tactile feedback is still limited compared to part task trainers.

Few attempts have been made to combine modalities and overcome their limitations. [11] describes using an HPS within a CAVE environment, with the CAVE simulating an operating room around the learner practicing on the HPS. The virtual environment was passive and did not interact with the learner. In contrast, [12] incorporated virtual avatars capable of providing feedback to the learner during the procedure.

These experiments demonstrated the utility of combining disparate modalities: limitations inherent in one approach are overcome by leveraging the strengths of another. While promising, previous attempts have been limited in scope, focusing on a single learner.

In the next section we describe the WAVE, a large scale facility integrating SPs, part task trainers, and interactive virtual reality for medical team training.

3 Methods

The Wide Area Virtual Environment (WAVE) is an immersive virtual reality theater for medical team training. The WAVE seeks to combine modalities. The objective is to use the strengths of one modality to overcome limitations found in others. The WAVE is designed to support scenarios involving small medical teams over a period of up to four days. In this section, we describe the operation and design of the WAVE.

3.1 Layout

The WAVE comprises screens arranged to form two circular pods connected by a corridor. Each pod is approximately 25 ft. in diameter. The corridor is 20 ft. long; each end is 12 ft. wide, tapering to 9 ft. in the middle, where an electrically operated curtain is positioned. The curtain can be raised or lowered remotely based on scenario requirements. The total usable area of the WAVE is approximately 1,100 sq. ft.
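As a rough cross-check of the quoted dimensions (assuming a linear corridor taper and ignoring the overlap where the corridor joins each pod):

```latex
A_{\text{pods}} = 2\pi r^{2} = 2\pi\,(12.5\ \text{ft})^{2} \approx 982\ \text{sq. ft.}
\qquad
A_{\text{corridor}} \approx 20\ \text{ft} \times \frac{12 + 9}{2}\ \text{ft} = 210\ \text{sq. ft.}
```

The resulting gross figure of roughly 1,190 sq. ft. is consistent with the quoted usable area of approximately 1,100 sq. ft. once the pod/corridor junctions and clearance around the screens are excluded.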

Within the WAVE, learners wear lightweight stereoscopic glasses. They perceive an immersive 3D virtual environment while moving about freely. During a training exercise, this virtual environment is augmented with live actors, human patient simulators and props to simulate an actual environment. Depending on course requirements, this environment may represent combat, humanitarian, or civilian scenarios.

The WAVE is accessed via an entrance to each pod. In each pod, two screens are mounted on wheels and hinged so that they can be swung open or closed; these screen doors are balanced to facilitate ease of operation. Learners enter through this opening. Once learners are inside, the doors can be closed and illuminated to form a seamless environment. Figure 1 illustrates.

3.2 Concept of Operation

The WAVE is designed to support training activities of up to four days in duration. During this time, the nature and scope of the environment can change as the exercise progresses. The WAVE accomplishes this by changing both the virtual and physical environment during a training event, using the pods alternately. A hypothetical scenario is described below.

In this training activity, learners enter Pod A to rescue soldiers wounded by an Improvised Explosive Device (IED) attack. The wounded may be portrayed by SPs or HPS, depending on training objectives. As patients are being treated, the team encounters hostile fire and responds by returning fire while the medic performs first aid. The team succeeds in repelling the attack and calls for air evacuation.

While learners are engaged in the point of injury scenario, Pod B is being prepared for the next step of the exercise. A mobile motion platform is brought into Pod B and a mockup of a UH-60 helicopter assembled. As the medical team makes patients ready for transport, the center curtain lifts, allowing the team to begin moving patients into the UH-60. The scenario continues into the air evacuation phase of the exercise. The medical team provides life support to the patient while inside the UH-60.

During this time, Pod A is reconfigured so it is no longer the scene of an IED attack. Instead, medical equipment found in a forward operating base is brought in. The IED virtual environment is replaced with a virtual operating room. After the UH-60 lands, the patient is wheeled back into Pod A. The scenario then continues into the operating room phase of the exercise.

By alternating pods and configuring them in parallel with an ongoing scenario, training exercises can continue indefinitely. Currently, the WAVE and its auxiliary infrastructure are designed to support a continuous exercise of up to four days (96 h) in length.

3.3 Visual and Audio Rendering

The WAVE uses an array of back-projected screens to generate 3D stereoscopic images. The WAVE’s visual rendering components are modular. The basic component is the display module. Each display module consists of a screen, a projector pair, and a pair of image generators. Front-coated Perspex screens are used as projection surfaces. Each screen is 108 in. tall and 81 in. wide, and is back-projected by two 15,000 ANSI lumen projectors. Light from each projector is polarized before reaching the screen to facilitate 3D viewing. Each projector is driven by one image generator, a commercial off-the-shelf computer with a high-end consumer graphics card (nVidia GTX 980 at the time of writing). Twenty-four display modules are used in the WAVE: ten in each pod and four in the corridor. Screens serving as doors (Fig. 1) are hinged and have supporting wheels to facilitate operation. Wheel positions are indexed relative to the floor to ensure consistent screen positioning. Figure 2 illustrates display modules.
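The paper does not describe how the rendering software enumerates these modules; as a minimal sketch, assuming hypothetical host names and field layout, the inventory might be modeled as follows:

```python
from dataclasses import dataclass

# Hypothetical inventory of the 24 display modules. Field names and
# host names are illustrative; the paper specifies only the physical
# composition of a module (screen, projector pair, image generator pair).

@dataclass
class DisplayModule:
    index: int            # position within its location
    location: str         # "pod_a", "pod_b", or "corridor"
    left_eye_host: str    # image generator driving the left-eye projector
    right_eye_host: str   # image generator driving the right-eye projector

MODULES = (
    [DisplayModule(i, "pod_a", f"ig-a{i}-l", f"ig-a{i}-r") for i in range(10)]
    + [DisplayModule(i, "pod_b", f"ig-b{i}-l", f"ig-b{i}-r") for i in range(10)]
    + [DisplayModule(i, "corridor", f"ig-c{i}-l", f"ig-c{i}-r") for i in range(4)]
)

assert len(MODULES) == 24  # ten per pod, four in the corridor
```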

Fig. 1. WAVE layout

Fig. 2. Back of WAVE showing two display modules

Audio rendering complements the realism of the visual environment. Audio consistent with the scenario is generated during a training exercise. Audio rendering in each pod is accomplished by a seven-speaker system arranged in a ring above the display modules. Both ambient audio and directional sounds can be rendered. A 3 kW subwoofer is positioned directly over each pod to provide high-intensity, low-frequency acoustic effects, e.g., explosions. A separate four-speaker, one-subwoofer audio system is used in the corridor.
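The paper does not specify how directional sounds are positioned across the ring; a minimal constant-power pairwise panning sketch for a hypothetical seven-speaker ring might look like this:

```python
import math

# Seven speakers assumed evenly spaced in a ring above one pod.
# Angles are in radians, measured counterclockwise from the pod's front.
NUM_SPEAKERS = 7
SPACING = 2 * math.pi / NUM_SPEAKERS

def pan_gains(source_angle: float) -> list[float]:
    """Amplitude gains that place a sound at source_angle using the
    adjacent speaker pair (constant-power pairwise panning)."""
    i = int(source_angle // SPACING) % NUM_SPEAKERS   # nearest speaker below
    j = (i + 1) % NUM_SPEAKERS                        # its neighbor
    frac = (source_angle % SPACING) / SPACING         # position between them
    gains = [0.0] * NUM_SPEAKERS
    # The cos/sin law keeps total power constant while panning.
    gains[i] = math.cos(frac * math.pi / 2)
    gains[j] = math.sin(frac * math.pi / 2)
    return gains

# Example: render an explosion 90 degrees to the learners' left.
print(pan_gains(math.pi / 2))
```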

3.4 Tracking and Monitoring

An array of 24 Vicon motion tracking cameras is mounted just above the screens. This system tracks specially marked individuals and equipment during training exercises, facilitating an interactive experience in the WAVE. For example, learners in a hazardous material training exercise can use tracked radiation detectors in the WAVE to receive real-time feedback on simulated hazards.
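The radiation-detector example suggests a simple feedback loop: each tracking frame, the detector’s reported position is converted into a simulated reading. A sketch under assumed names and units (the paper gives no model) could be:

```python
# Hypothetical dose model for a tracked radiation detector.
# Source position, strength, and the inverse-square law are assumptions.

VIRTUAL_SOURCE = (4.0, 0.0, 1.0)   # virtual source position, meters
SOURCE_STRENGTH = 50.0             # dose rate at 1 m, arbitrary units

def simulated_dose_rate(detector_pos: tuple[float, float, float]) -> float:
    """Inverse-square dose rate at the tracked detector position."""
    dx, dy, dz = (d - s for d, s in zip(detector_pos, VIRTUAL_SOURCE))
    r_sq = max(dx * dx + dy * dy + dz * dz, 0.01)  # clamp near the source
    return SOURCE_STRENGTH / r_sq

# Each tracking frame, push the reading to the prop's display.
print(simulated_dose_rate((1.0, 0.0, 1.0)))  # ~5.6 units at 3 m from source
```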

An array of 20 video cameras allows training exercises to be recorded. The cameras have pan-tilt-zoom capability, allowing any point within the WAVE to be examined. These cameras are sensitive to IR illumination and are thus capable of operating in low-light conditions or total darkness.

3.5 Theatrical Effects

Suspension of disbelief is necessary for learners to remain engaged within the WAVE. In addition to 3D graphics and directional audio, the WAVE employs a series of theatrical effects. They are synchronized to the virtual environment. The virtual environment and theatrical effects work collectively to extend suspension of disbelief into the space physically occupied by learners. Currently, theatrical elements include: scent generators, smoke generators, air cannons, computer-controlled lighting, and a portable motion platform. We discuss each in turn.

Scent Generators. Olfaction is a primal sense [13]. The WAVE engages the olfactory sense with an array of six scent generators. A scent generator is a compact device consisting of a network interface, a small air compressor, bottles of scent liquid, and an exhaust fan. During operation, compressed air forces a fine spray of scent liquid from the bottle into the airflow of the exhaust fan. Of the six scent generators available for training exercises, two are mounted directly over the pods (one per pod); the remaining four are self-contained portable units positioned as desired. Each scent generator can release up to six distinct odors. A range of scents can be used based on the scenario, including: burnt flesh, diesel fuel, burnt wood, urine, gunpowder, and gangrene. Figure 3 illustrates.

Fig. 3. Scent generator

Fig. 4. Smoke generator

Smoke Generators. Smoke generators can simulate the effect of fire or explosions in a training scenario. WAVE smoke generators are self-contained, portable units with a network (WiFi) interface, controlled automatically by the WAVE scenario. Smoke generators are synchronized with on-screen explosions, fires, or other virtual environmental features. Figure 4 illustrates.
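Synchronizing a physical effect with an on-screen event implies compensating for the device’s own delay. A minimal sketch, assuming a hypothetical fire_smoke() trigger and a measured device latency:

```python
import threading
import time

SMOKE_LATENCY_S = 0.8   # assumed delay from trigger to visible smoke

def fire_smoke() -> None:
    # Stand-in for the generator's network trigger (see Sect. 3.6).
    print("smoke generator triggered")

def schedule_explosion(explosion_time: float) -> None:
    """Trigger the smoke generator early so physical smoke appears at
    the same moment as the on-screen explosion."""
    delay = max(explosion_time - SMOKE_LATENCY_S - time.monotonic(), 0.0)
    threading.Timer(delay, fire_smoke).start()

# Example: the scenario engine plans an explosion two seconds from now.
schedule_explosion(time.monotonic() + 2.0)
time.sleep(2.5)  # keep the demo script alive until the trigger fires
```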

Air Cannons. Air cannons add a kinetic element to the range of theatrical effects. Each air cannon is a self-contained unit. It consists of a compressed air tank, an electrically operated air valve, a WiFi trigger and batteries. A large directional barrel is fitted over the air release. Figure 5 illustrates. The barrel can contain harmless lightweight material to be launched toward learners. Typical materials include: textured cork (resembling dirt and asphalt), and clear silicone caulk pieces (resembling broken glass).

Computer Controlled Stage Lighting. Full-spectrum LED stage lighting is used to illuminate the interior of the WAVE. Ambient lighting is matched to the scene displayed on screen and changes with the virtual environment. In addition to ambient illumination, these lights can be programmed to match training activities. For example, they can strobe in a manner consistent with a fire emergency, and lights close to emergency vehicles can strobe using a color consistent with emergency vehicle lights.

Fig. 5. Air cannon

Fig. 6. Motion platform with helicopter mockup

Motion Platform. Some simulation exercises involve an air or ground evacuation phase. For these scenarios, the WAVE incorporates a portable motion platform. The device is wheeled to facilitate rapid deployment and movement in and out of WAVE pods as required. The motion platform provides three degrees of freedom: heave, pitch, and roll. Coupled with a mockup of the transport vehicle, it generates motion consistent with a helicopter in flight or ground transport over varying terrain. Vehicle movement can make some treatment procedures difficult; with the motion platform, learners gain experience tending to patients under these conditions. Figure 6 illustrates.
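The paper does not describe the cueing law used to drive the platform; as an illustrative sketch, a helicopter phase might stream setpoints combining low-frequency sway with a small rotor-frequency vibration (all amplitudes and frequencies below are assumptions):

```python
import math

def helicopter_cues(t: float) -> dict[str, float]:
    """Hypothetical 3-DOF setpoints (heave in meters, pitch/roll in
    degrees) for simulated rotary-wing flight at time t seconds."""
    heave = 0.05 * math.sin(2 * math.pi * 0.30 * t)        # slow rise/fall
    pitch = 2.0 * math.sin(2 * math.pi * 0.20 * t + 1.0)   # gentle nose sway
    roll = 3.0 * math.sin(2 * math.pi * 0.15 * t)          # banking sway
    heave += 0.005 * math.sin(2 * math.pi * 4.3 * t)       # rotor vibration
    return {"heave": heave, "pitch": pitch, "roll": roll}

# Stream setpoints to the platform controller at, say, 60 Hz.
for frame in range(3):
    print(helicopter_cues(frame / 60.0))
```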

3.6 Command and Control

Sections 3.3 through 3.5 described the various elements that comprise the WAVE. These elements are coordinated in a synchronized fashion by the WAVE software. The command and control system of the WAVE is based on standard networking protocols (i.e., TCP/IP). Web-enabled relays are used for devices that require triggers (e.g., air cannons, smoke and scent generators). Hardwired Ethernet connections are used for fixed items; portable units are WiFi connected. Protocol bridges are used for devices whose native control standard is not IP-based, e.g., a DMX [14]-to-IP bridge is used to control the stage lighting.
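The exact relay API is not given; many commercial web-enabled relays accept a plain HTTP request per channel, so a trigger from the scenario engine might reduce to a sketch like the following (host and URL scheme are hypothetical):

```python
import urllib.request

RELAY_HOST = "10.0.0.42"   # assumed address of an air cannon relay

def pulse_relay(channel: int, seconds: float = 0.5) -> None:
    """Close a relay channel briefly to fire the attached device."""
    url = f"http://{RELAY_HOST}/relay?ch={channel}&pulse={seconds}"
    with urllib.request.urlopen(url, timeout=2) as resp:
        resp.read()  # most relays answer with a short status page

# Fired by the scenario engine when, e.g., a virtual grenade detonates:
# pulse_relay(channel=1)
```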

4 Results

The WAVE has been in continuous operation since 2012. During this time, the WAVE has supported the learning objectives of the Uniformed Services University of the Health Sciences as well as regional military and federal emergency response teams. Smaller systems, termed WAVElets, have been deployed within the Air Force, Army, and Navy. Both military and civilian emergency response scenarios have been developed. Medical scenarios covering the Continuum of Care [15] Role 1 (point of injury care) through Role 4 (definitive care) are available. Joint en-route care scenarios are currently being developed. Scenarios supporting chemical casualty care are also available. Civilian emergency response scenarios include: civil disturbance, active shooter, improvised explosive detonation in enclosed (subway) and outdoor venues. We describe some of these scenarios.

Figure 7 illustrates a point of injury scenario. This scenario combines 3D computer-generated audio-visual rendering with SPs and multiple theatrical effects. Combat medics enter the WAVE to treat an SP playing the role of a wounded soldier; in the figure, the instructor is at the extreme right. Learners must provide immediate aid to the wounded soldier while maintaining awareness of their surroundings. In this scenario, the WAVE provides context to the training exercise. Enemy combatants can be activated to test situational awareness. An air cannon is hidden behind the sandbag barriers, and unchallenged enemy fighters will throw grenades, causing smoke and air cannons to be discharged in close proximity. The multi-sensory response reinforces the importance of situational awareness without risk to learners.

Figure 8 illustrates a civilian mass casualty pipe bomb scenario. Multiple SPs play the role of bombing victims. Additional casualties are depicted virtually in the background. Learners must quickly identify casualties with varying severity of injuries. They must decide how to deploy limited resources to save as many individuals as possible. Background audio effects include: emergency vehicle sirens, screams from victims, and 2-way radio chatter. SPs playing the role of traumatized victims increase the level of chaos. Both the virtual and physical environment work to portray a realistic mass casualty scene. Students learn to perform the correct lifesaving measures without being overwhelmed by the situation.

Fig. 7. Point of injury–military scenario

Fig. 8. Mass casualty–civilian scenario

Figure 9 shows a wounded soldier being removed from a UH-60 helicopter after air evacuation. The UH-60 is mounted on the motion platform. During the scenario, the WAVE display simulates flight over varying terrain. The UH-60 sways in synchronization with the displayed terrain, simulating rotary wing flight. The WAVE’s audio system generates background engine noise. Sound levels are set high enough to interfere with speech, simulating actual flight conditions. Learners practice en-route care in this environment. They learn how to provide care under conditions where sound, movement and vibrations can interfere with patient monitoring. Both SPs and HPS have been used to play the role of patient.

Figure 10 depicts an emergency room in a field hospital. The WAVE simulates the interior of a busy emergency room environment. HPS are used to simulate patients. Actual emergency room equipment, such as EEG monitors, is attached to the HPS to provide physiological feedback on the patient’s condition. Auditory cues include medical instrument signals, patient screams, and urgent conversation from other medical teams in the background. Additional stressors, such as an air raid siren or simulated incoming mortar explosions, can be activated as required. When a mortar round lands close to the tent, WAVE projectors flicker in synchronization with the stage lights to simulate a momentary power outage.

Fig. 9. Aeromedical evacuation

Fig. 10. Field hospital

5 Discussion

Medical education and training have generally used simulation modalities in isolation. In contrast, the WAVE adopts a tightly integrated approach to combining modalities. The WAVE is a unique environment for conducting capstone exercises, i.e., training events requiring learners to apply a combination of skills to address complex problems under stressful conditions. The WAVE fills a gap between classroom learning and actual deployment, delivering training exercises that are more flexible and lower in cost than field exercises. Since all major elements are computer-controlled, exercises can be repeated, or stopped and restarted at a specific point. This ability reinforces learning and allows the team to focus on deficiencies. In contrast, field exercises involve coordination between many disparate components; repeating or restarting an exercise is not always practical. The WAVE is also well suited for mission-specific training, e.g., medical support for VIPs. The virtual environment can be configured to exactly match event venues, allowing first responders to become familiar with evacuation routes and safe zones without physically practicing in that venue.

As hardware costs decrease, we anticipate increased application of fused modality training environments for medical instruction. Since individuals, equipment, and patients can be tracked within the WAVE, a distributed training capability involving multiple networked WAVE environments is possible. Teams can practice in real-time on the same mission, even though they are geographically separated.

The WAVE is best suited to supporting small, specialized medical teams, particularly in training scenarios that cannot be readily replicated, e.g., chemical/biological attacks or suicide bombings, and in capstone exercises. It is not suited to learning individual procedures or skills where a stressful environment detracts from instruction.

6 Conclusion

The WAVE is a novel simulation platform whose objective is to deliver training scenarios with an unprecedented level of realism. It combines the three primary modalities of medical simulation (live actors, human patient simulators, and virtual reality) with a high level of integration to deliver a seamless learning experience. By using a dual-pod configuration, the WAVE is capable of supporting training exercises of indefinite duration. The WAVE is best suited for training specialized medical teams in difficult-to-replicate scenarios. Blended modality environments such as the WAVE hold promise in distributed training applications.