

Virtual Reality for Immersive Human
Machine Teaming with Vehicles

Michael Novitzky1(B), Rob Semmens2, Nicholas H. Franck1,
Christa M. Chewar1, and Christopher Korpela1

1 United States Military Academy, West Point, NY 10996, USA
{michael.novitzky,christa.chewar,christopher.korpela}@westpoint.edu
2 Naval Postgraduate School, Monterey, CA 93943, USA
semmens@nps.edu
https://www.westpoint.edu/, https://my.nps.edu/

Abstract. We present developments in constructing a 3D environment and integrating a virtual reality headset into our Project Aquaticus platform. We designed Project Aquaticus to examine the interactions between human-robot teammate trust, cognitive load, and perceived robot intelligence levels while humans and robots compete in games of capture the flag on the water. Further, this platform will allow us to study human learning of tactical judgment under a variety of robot capabilities. To enable human-machine teaming (HMT), we created a testbed where humans operate motorized kayaks while the robots are autonomous catamaran-style surface vehicles. MOOS-IvP provides autonomy for the robots. After receiving an order from a human, the autonomous teammates can perform tasks conducive to capturing the flag, such as defending or attacking a flag. In the Project Aquaticus simulation, the humans control their virtual vehicle with a joystick and communicate with their robots via radio. Our current simulation is not engaging or realistic for participants because it presents a top-down, omniscient view of the field. This fully observable representation of the world is well suited for managing operations from the shore and teaching new players game mechanics and strategies; however, it does not accurately reflect the limited and almost chaotic view of the world a participant experiences while in their motorized kayak on the water. We present the creation of a 3D visualization in Unity that users experience through a virtual reality headset. Such a system allows us to perform experiments without the need for a significant investment in on-water experiment resources while also permitting us to gather data year-round through the cold winter months.

Keywords: Underlying and supporting technologies · VAMR technologies and techniques for human-robot interaction · Virtual reality · Human-robot teaming · Simulation

The views and conclusions contained herein are those of the authors and should not
be interpreted as necessarily representing the official policies or endorsements, either
expressed or implied, of the U.S. Department of Defense or the U.S. Government.
© Springer Nature Switzerland AG 2020
J. Y. C. Chen and G. Fragomeni (Eds.): HCII 2020, LNCS 12190, pp. 575–590, 2020.
https://doi.org/10.1007/978-3-030-49695-1_39

1 Introduction
Modern systems are combining manned vehicles with autonomous vehicles to
perform tasks in challenging environments. For example, the U.S. Army has the
Manned-Unmanned Teaming (MUM-T) program in which manned aircraft work
with unmanned aerial systems (UAS) [40]. The U.S. Air Force’s “Loyal Wing-
man” project is exploring manned-unmanned teaming in which a UAS and a
manned aircraft work directly on missions such as air interdiction, attack on
integrated air defense systems, and offensive counter air [17]. We have devel-
oped a similar manned-unmanned teaming concept in the marine domain called
Project Aquaticus [21,23–25,27–29,31–33]. The marine domain is more accessi-
ble for deploying autonomous vessels (no approval is required from government
agencies and vehicles can be easily stopped on the water) and yet still challeng-
ing, given the elements in the environment. As seen in Fig. 1, our manned vessels
are motorized kayaks and our autonomous teammates are autonomous surface
vehicles (ASVs). Project Aquaticus has been designed to explore the interplay
between human cognitive load, robot autonomy, and human-robot teammate
trust. The nominal composition of Project Aquaticus is to have two humans and
two robots on the same team competing in a game of capture the flag against
a similarly situated team. It is our goal to provide lessons learned from our
platform in the marine domain to other challenging environments.
The testbed has already demonstrated a level of maturity [26] as it has col-
lected and published real-world data for publicly available data sets at: www.
aquaticus.org. Researchers have been able to leverage the datasets to extract
successful tactics [36]. However, running these real-world experiments requires a
substantial amount of resources in terms of hardware and personnel, and they can
only be performed during the ideal summer months. As an alternative to real-world
testing, we have begun utilizing our Project Aquaticus simulation in order to gather
participants in larger numbers and at any time of the year. A drawback of our
Project Aquaticus simulator is that it offers participants an overhead 2D view of the
playing field, which is a completely different experience from what participants
encounter in our real-world testbed.
Our goal with this work is to have participants immersed in a virtual world
playing capture the flag games. We will leverage the Unity 3D gaming environ-
ment to do this. A first step for this effort is to create the Project Aquaticus field
and import vehicles into the Unity 3D space. A second and vital step is to then
connect the Unity 3D vehicles to our MOOS-IvP Project Aquaticus simulation.
This way, the Unity 3D environment acts as a 3D visualization and allows us to
leverage its built-in support for virtual reality headsets. A key component of this
will be adding a camera view that is anchored to the human-operated vehicle
relevant to that participant.

Fig. 1. A view from a vessel on the water. In the distance is the blue flag surrounded
by blue and red vehicles. (Color figure online)

2 Related Work

Simulation has been a key technology to enable robotics. In particular for field
robotics, the use of simulation allows developers and experimenters to focus on
development prior to tackling the difficulties of real-world testing and deploy-
ment. Additionally, if a simulator can better approximate the environment in
terms of sensing, it greatly reduces development time. There are a number of
3D simulators that are leveraged for robotics development including 4DV-Sim,
Actin, Gazebo, Morse, OpenHRP, OpenRAVE, RoboDK, Unity, V-Rep, and
Webots.

2.1 Virtual Reality for Human-Machine Teaming

Virtual Reality headsets are a natural way to study human-robot interaction.


Lipton et al. [18] use virtual reality headsets and controllers to teleport the
human inside the robot’s mind to teleoperate a humanoid robot. Taheri et al.
immersed people in virtual 3D car simulations where participants wore virtual
reality headsets to study their driving characteristics [37]. Ropelato et al. also
used 3D virtual reality headsets but to observe humans driving so that an arti-
ficial tutor could improve their driving [34].

2.2 Virtual Reality for Training of Tactical Judgment

Virtual reality has been the military’s training environment of the future for the
past thirty years [20]. There have been many efforts and studies to determine the
utility and training benefits of virtual reality for the military [1,5]. In Experience
on Demand: What Virtual Reality Is, How It Works, and What It Can Do,
Bailenson discusses all of the benefits of virtual reality for learning. In a perfect
system, virtual reality enables learning by doing, spaced practice and repetition,
practice in decision making, the low-cost opportunity for productive failure, and
arousal and affect [4]. It is no wonder that the military desires to take advantage
of these benefits, even without considering the benefits of cost-savings or safety.
Simulation, in a broad sense, is how those in the military learn professional
judgment. Simulation can be very low fidelity, such as tactical decision games.
For example, the Infantry Journal published Combat Problems for Small Units
in 1943. This book contains 27 tactical situations in which the reader is asked
what commands they would issue at several points in the scenario. This is
intended to put the reader in the situation as much as possible. After making
a decision, the reader then turns the page and sees an exemplar answer. The
reader can then compare their commands to suggested commands. Committing
to an answer before being shown an exemplar answer is one example of active
learning [6] and Ai-Lim Lee and colleagues have shown that virtual reality has a
significant relation with active learning, which in turn, has a significant relation
with learning outcomes [3]. It should be the case that virtual reality offers an
environment for similar decision making without the reliance on the individual’s
imagination, which can vary.
However, some learning concerns have impeded the realization of a virtual
reality nirvana. One concern is transfer [38]. Concerning virtual reality, issues
of transfer take two forms. The first is that the fidelity of the simulation must be
sufficient to mimic reality, so that when trainees experience similar stimuli in the
real world, they benefit from the training and make the appropriate
decision. The second is that virtual reality should avoid negative transfer, where
attentional cues learned or ways of interacting that only work in the virtual
world are misused in the actual world. In educational settings, transfer is often
assumed, but not tested. Virtual reality can add to the complexity of measuring
transfer, as Zaal and Sweet found in examining stall recovery training for pilots
[41].
Outside of learning issues, the military has other practical issues [8]. Iden-
tifying what tasks one should practice, to what level of proficiency, and how
frequently is a persistent issue because there is a multivariate solu-
tion space that includes the capability and availability of the simulator, and the
cost and danger of performing the same training in reality. For example, interest
in purchasing flying simulators and tank simulators is often cost-driven because
the actual equipment is expensive to operate.
We believe the real value of virtual reality is not in learning procedures; it is in
learning judgment. It is learning what attentional cues hold valuable information,
making a decision based on that information, and then seeing the consequences.
Learning to recognize or recall tactics is easy. Learning the judgment to employ
any given tactic requires experience.
A tactic is simply an action taken to achieve a goal. There are individual
tactics, such as individual movement techniques that allow a soldier to move to a
better position while minimizing the chances of getting shot. The Army teaches
three of these: low crawling, high crawling, and the 3–5 second rush. These individual
tactics support collective tactics, such as fire and maneuver, where some people
shoot while others move forward, and then switch.
These particular tactics exist due to a combination of technology and human
ability (or inability). Concerning technology, the current rifle is accurate and
allows one person to place well-aimed fire quickly. By contrast, muskets were
neither and required a different tactic. Concerning the human, it is much easier
to point the rifle accurately when one is on the ground than when running.
This is also true for sports. In most sports, the ball moves faster than the
players. Therefore, players arrange themselves to receive the ball where they can
do something with it, or allow a teammate to receive the ball. The other team tries
to prevent that. By contrast, in a game called pushball, the ball moves slower
than the players. As a result, the players arrange themselves very differently.
Formations are similar to a scrum.
In the current work, virtual reality allows us to change the capability of the
robot. We can make it faster or slower, more adaptable or less so. This is what
is required to study the development and judgment of tactics. We will say more
about this in future work.

2.3 Previous Project Aquaticus Virtual Reality Efforts


For our previous work [22], we investigated the integration of the Gazebo 3D
simulator [15] which is maintained by OSRF. Gazebo was featured as the sim-
ulation environment for the Virtual Robotics Challenge (VRC), a component
in the DARPA Robotics Challenge in July of 2013 [2]. Gazebo also supports
the Oculus Rift VR headsets [19]. In our previous report [22], we described how
we were able to import some of our vehicle models into Gazebo. However, this
became a time sink for our team. Utilizing all that Gazebo had to offer required
more specialized skills, specifically 3D CAD modeling, in which we were not yet
proficient at the time. Oftentimes, the models would fail to load. If they did load,
then the scale would be off or the texture would not load properly. Our initial
difficulties with Gazebo models were only part of our frustration. Only the
earlier SDK versions of Oculus Rift VR headsets, which have been discontin-
ued, are supported. This means that in order to use this system, we would have
to find previously used devices through sites like eBay. Thus, our attempts at
incorporating a 3D visualization and virtual reality headset stalled.
As part of our second attempt at virtual reality for Project Aquaticus [25],
we came upon meshcat-python, which is a WebGL 3D visualizer for Python [13].
MeshCat is a remotely-controllable 3D viewer, built on top of Three.js [7]. It
was heavily used by the MIT DRC Team for the DARPA Robotics Challenge
as a command and control interface for their humanoid robot [12]. Our ini-
tial experience with meshcat-python was phenomenal [25], as we began using
meshcat-python as a 3D visualizer for our Project Aquaticus datasets. With
minimal trial and error, we were able to import all of our vehicle models and
create a simulated Project Aquaticus field. We connected the meshcat-python
visualizer to our logged vehicle datasets. Our tool can play back previously
recorded games in the 3D environment and provides a floating camera for
observing each game. A compelling part of meshcat-python is that it is based on
Three.js which has support for Google Cardboard [9]. Google Cardboard [16] is
a virtual reality (VR) platform developed by Google for use with a head mount
that contains a smartphone. We were able to enable Google Cardboard support
from within meshcat-python and use smartphones on hand for a virtual reality
experience.
While meshcat-python was a great start to implementing a 3D version with
Virtual Reality support for Project Aquaticus, it lacked certain tools needed to
create an immersive world for our participants. We now describe our inte-
gration of Unity 3D into the Project Aquaticus simulation. Unity 3D allows us
to create higher fidelity 3D simulations of our capture the flag playing field.

3 System Design
3.1 Game Mechanics
Initially, our virtual reality experiments will continue the game mechanics that
were used during on-water data gathering [26]. In the on-water experimental
work on Aquaticus, the nominal composition of Project Aquaticus is to have
two humans and two robots on the same team competing in a game of capture
the flag against a similarly composed team.
Each Project Aquaticus game took place in the area of the Charles River
immediately adjacent to the MIT Sailing Pavilion. The field for the game extends
along the entire 160 m dock and 80 m into the river, as seen in Fig. 2. Boundaries
of the field are marked with floating buoys. The field is divided in half: one half
(to the left of Fig. 2) of the field is the blue team’s territory and the other half
is the red team’s territory. A team’s flag, a small buoy in the team’s color at a
known GPS coordinate, is located approximately in the rear center of each of
their territories. Teams start with all players near their respective flag. The flag
buoys are not moved during the game; participants and robots instead grab a
virtual flag (for safety, to avoid collisions) and take it back to their flag zone.
To score a point, the participant must go to their opponent’s side of the field,
virtually grab the flag and make it back to their home flag without being tagged
or going out of bounds. A buoy is located at the end of each side to indicate
the original flag position. However, the players manipulate a virtual flag to keep
the complexity low (no need for physical interaction) and focus on human-robot
interaction with fully autonomous robots while avoiding unnecessary collisions.
Virtual flag management is coordinated by the autonomous central game con-
troller on the shoreside computer. Players request flag grabs from the central
game controller; a grab is granted if (a) the requester is within the opposing
team’s flag zone, (b) the virtual flag is located within that zone, and (c) the
requesting vehicle is untagged. Players can defend their side of the field through
the use of tags. To tag an opponent, a request is made to the central game man-
ager, which checks that (x) the requesting vehicle is on its home side of the field, (y)
the opponent vehicle is both on the requesting vehicle’s side and within 10 m
of the requesting vehicle, and (z) the tagger is untagged. A tagged player must
untag themselves by returning to their home flag zone. If a player is carrying
a virtual flag and is tagged, then the flag is automatically reset to its original
location.
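To make these rules concrete, the following is a minimal sketch in C# of the flag-grab and tag checks described above. It is illustrative only: the actual central game controller is a MOOS-IvP application on the shoreside computer, and every class, field, and zone name here is a placeholder.

using System.Numerics;  // Vector2 for planar (x, y) positions

// Axis-aligned rectangular region of the field, e.g. a flag zone or a
// team's home half.  Illustrative; the real field geometry is GPS-based.
public class Region
{
    public float XMin, XMax, YMin, YMax;
    public bool Contains(Vector2 p) =>
        p.X >= XMin && p.X <= XMax && p.Y >= YMin && p.Y <= YMax;
}

public class Player
{
    public Vector2 Position;
    public bool IsTagged;
    public Region HomeSide;   // the player's half of the field
}

public class Flag
{
    public Vector2 Position;  // current virtual flag position
    public Region Zone;       // the flag zone around the flag buoy
}

public static class GameRules
{
    public const float TagRangeMeters = 10f;

    // Flag grab is granted if (a) the requester is inside the opposing
    // flag zone, (b) the virtual flag is currently in that zone, and
    // (c) the requester is untagged.
    public static bool CanGrabFlag(Player requester, Flag opposingFlag) =>
        opposingFlag.Zone.Contains(requester.Position)
        && opposingFlag.Zone.Contains(opposingFlag.Position)
        && !requester.IsTagged;

    // A tag is granted if (x) the tagger is on its home side, (y) the
    // target is on that same side and within 10 m, and (z) the tagger
    // itself is untagged.
    public static bool CanTag(Player tagger, Player target) =>
        tagger.HomeSide.Contains(tagger.Position)
        && tagger.HomeSide.Contains(target.Position)
        && Vector2.Distance(tagger.Position, target.Position) <= TagRangeMeters
        && !tagger.IsTagged;
}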

3.2 Autonomous Teammates and Interaction


The robots are truly autonomous teammates. As described in [26], the robots
leverage the MOOS-IvP marine autonomy open source project. During on-water
experiments, humans put the robots into one of several modes that are conducive
to playing games of capture the flag, such as attack the flag, defend the flag, cover
me, and follow me. These robot teammates continue to perform this task while
avoiding collisions and untagging themselves until told to switch tasks. As seen
in Fig. 3, the participants interact with their teammates and simulated vehicle
through an audio headset and game controller.

Fig. 2. An image of our pMarineViewer tool. It provides command and control during
experiments and the view for each simulation. As can be seen, the top-down view is
perfect for getting an overall picture of the game state. However, the top-down view
gives the participants a fully observable view of the game, which is great for training
but a poor approximation of what a human can perceive while performing on-water
experiments.

3.3 Current Simulation


The current simulation topology is seen in Fig. 3. The topology includes a central
server to run the Aquaticus game mechanics while we run hardware-in-the-loop
for all of the simulated vehicles. The current simulation is supported by sev-
eral MOOS-IvP applications including pMarineViewer and uSimMarine. The
pMarineViewer application, seen in Fig. 2, is a MOOS application written using
FLTK and OpenGL for rendering vehicles and associated information and his-
tory during operation or simulation. The user is able to manipulate a map display
to see multiple vehicle tracks and monitor key information about individual vehi-
cles. In the primary interface mode the user is a passive observer, only able to
manipulate what the user sees and not able to initiate communications to the
vehicles. However, there are hooks available to allow the interface to accept field
control commands. The uSimMarine application is a simple 3D vehicle simulator
that updates vehicle state, position and trajectory, based on the present actuator
values and prior vehicle state. The typical usage scenario has a single instance of
uSimMarine associated with each simulated vehicle. During the Aquaticus sim-
ulation, the humans operate their respective virtual kayaks through the use of
USB game controllers. In order to interact with the game and their teammates,
the participants use the human-robot speech interface.
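As a rough illustration of the kind of update uSimMarine performs each tick, the C# sketch below propagates a single vehicle's position and heading toward commanded values. The first-order lags and gain values are assumptions made for this sketch; the actual application models thrust, turn-rate limits, and other dynamics in far more detail.

using System;

public class SimpleMarineSim
{
    public double X, Y;          // meters in a local east/north grid
    public double HeadingDeg;    // 0 = north, clockwise positive
    public double SpeedMps;      // speed over ground

    // Advance the state by dt seconds toward the commanded heading and speed.
    public void Step(double desiredHeadingDeg, double desiredSpeedMps, double dt)
    {
        // First-order lag toward the commanded values (gains are made up).
        HeadingDeg += 0.5 * AngleDiffDeg(desiredHeadingDeg, HeadingDeg) * dt;
        SpeedMps   += 0.8 * (desiredSpeedMps - SpeedMps) * dt;

        // Move along the current heading.
        double h = HeadingDeg * Math.PI / 180.0;
        X += SpeedMps * Math.Sin(h) * dt;   // east component
        Y += SpeedMps * Math.Cos(h) * dt;   // north component
    }

    // Smallest signed angular difference, in degrees.
    static double AngleDiffDeg(double target, double current)
    {
        double d = (target - current) % 360.0;
        if (d > 180.0)  d -= 360.0;
        if (d < -180.0) d += 360.0;
        return d;
    }
}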
Project Aquaticus comprises both real-world vessels and a simulation environment.
The benefit of using the simulator is that it allows for relatively inexpensive
data collection as compared to maintaining experimental vessels in a maritime
environment during periods of inclement weather. The current limitation of the
Project Aquaticus simulator is that it gives the participant(s) an allocentric or
god’s eye view of the environment, as seen in Fig. 2. This view provides perfect
situational awareness. This reduces the cognitive load as compared to the ego-
centric perspective used while conducting operations. Maintaining situational aware-
ness is much more challenging egocentrically, especially while coordinating with
teammates and directing a robot on the water. To create a first-person per-
spective for each participant that is reflective of the actual on-water experience,
we created a plug-in for the Project Aquaticus simulator that connects to a 3D
visualization and a virtual reality headset, as seen in Fig. 5.

3.4 Project Aquaticus Field in Unity 3D


In order to leverage Unity 3D’s capability as a visualization engine, models of the
vehicles and objects have been imported and organized to reflect the real-world
field, as seen in Fig. 4. Care must be given to keep the ratio of the vehicle models
to the field size as close as possible to the on-water games. As described further
below, each Unity 3D vehicle will connect to the Project Aquaticus simulation
for their pose information.
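As an illustration of that layout step, a helper along the following lines could place the boundary buoys and the two flags at the on-water scale of roughly 160 m by 80 m (Sect. 3.1), treating one Unity unit as one meter. The prefab names and coordinates are placeholders rather than the project's actual assets or surveyed flag positions.

using UnityEngine;

public class AquaticusFieldLayout : MonoBehaviour
{
    public GameObject buoyPrefab;      // boundary marker (placeholder asset)
    public GameObject blueFlagPrefab;  // blue flag buoy (placeholder asset)
    public GameObject redFlagPrefab;   // red flag buoy (placeholder asset)

    void Start()
    {
        const float length = 160f;  // along the dock, in meters
        const float width  = 80f;   // into the river, in meters

        // Corner buoys marking the play area boundary.
        Vector3[] corners =
        {
            new Vector3(0f, 0f, 0f),    new Vector3(length, 0f, 0f),
            new Vector3(0f, 0f, width), new Vector3(length, 0f, width)
        };
        foreach (Vector3 corner in corners)
            Instantiate(buoyPrefab, corner, Quaternion.identity, transform);

        // Flags roughly in the rear center of each team's half of the field;
        // the real positions are known GPS coordinates.
        Instantiate(blueFlagPrefab, new Vector3(length * 0.25f, 0f, width * 0.5f),
                    Quaternion.identity, transform);
        Instantiate(redFlagPrefab,  new Vector3(length * 0.75f, 0f, width * 0.5f),
                    Quaternion.identity, transform);
    }
}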

3.5 Bridge Between Project Aquaticus 2D Simulation and Unity 3D


The integration between the Project Aquaticus simulator and Unity 3D is seen in
Fig. 5. A new MOOS-IvP application named pUnity was created for the purpose of
bridging the Project Aquaticus simulation to the Unity 3D visualization.

Fig. 3. The simulation network topology for our Project Aquaticus capture the flag
competition. Each vehicle communicates with our centralized game server which han-
dles the game mechanics such as tagging, flag management, and safety penalties. Each
nominal team is composed of two humans and two autonomous robots. Depicted are
human interfaces where each participant uses a laptop that presents a simulated over-
head view of the field. The participants use a headset for speech recognition and audio
cues and a game controller to control their vehicle. The autonomous robots are run
by our payload autonomy boxes (called PABLOs) which include a Raspberry Pi that
runs our autonomy software. Our autonomous surface vehicles also utilize their own
PABLO box. All simulations are run as hardware-in-the-loop.

Fig. 4. A view of the Project Aquaticus capture the flag field in the Unity 3D world.

Fig. 5. The main contribution of this work is the integration of the Unity 3D visu-
alization framework with the Project Aquaticus simulation. This is accomplished by
the creation of the MOOS-IvP application pUnity and the Unity 3D C# script called
aquaticus_connect. Each vehicle has a respective pUnity and aquaticus_connect through which
the Project Aquaticus simulator provides location information for the Unity 3D visu-
alizer to place the vehicle in the appropriate pose. Virtual reality immersion for Project
Aquaticus is created by leveraging Unity 3D’s support for virtual reality headsets.

A pUnity application is associated with each MOOS-IvP vehicle community.
pUnity application subscribes to the MOOS variable NODE MESSAGE which
contains the vehicle’s name, x location, y location, and heading. In order to con-
nect to the Unity 3D visualizer, pUnity acts as a TCP server. Once connected,
pUnity forwards the NODE MESSAGEs to the corresponding TCP client pro-
gram attached to each Unity 3D vehicle.
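The actual pUnity application is written against the MOOS-IvP API in C++; as a hedged illustration of the bridge, the C# stand-in below plays the role of a pUnity TCP server and streams NODE_MESSAGE-style lines for a single fake vehicle, which can be handy for exercising a Unity client without a running MOOS community. The port number, update rate, vehicle name, and comma-separated wire format are all assumptions made for this sketch.

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class MockPUnityServer
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 9010);
        listener.Start();
        Console.WriteLine("Waiting for a Unity client on port 9010...");

        using TcpClient client = listener.AcceptTcpClient();
        using NetworkStream stream = client.GetStream();

        double t = 0.0;
        while (client.Connected)
        {
            // Drive a fake vehicle ("evan" is an illustrative name) in a slow circle.
            double x = 40.0 * Math.Cos(t);
            double y = 40.0 * Math.Sin(t);
            double heading = (t * 180.0 / Math.PI) % 360.0;

            string msg = $"NAME=evan,X={x:F2},Y={y:F2},HDG={heading:F2}\n";
            byte[] bytes = Encoding.ASCII.GetBytes(msg);
            stream.Write(bytes, 0, bytes.Length);

            Thread.Sleep(100);   // roughly 10 Hz pose updates
            t += 0.05;
        }
    }
}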
A new C# script called aquaticus_connect was created to attach each vehi-
cle object in Unity 3D to its respective MOOS-IvP vehicle community. Each
aquaticus_connect instance is a TCP client that connects to a specific pUnity MOOS-
IvP application based on startup parameters of IP address and port. As pUnity
passes NODE_MESSAGEs to aquaticus_connect, the script in turn changes the
pose of the 3D vehicle within the Unity 3D engine. This setup allows for a
flexible number of vehicles to be visualized.
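A client in the spirit of aquaticus_connect might look like the sketch below: it connects to the pUnity server for its vehicle, parses the x, y, and heading fields from each line on a background thread, and applies the latest pose to the attached GameObject on the main thread. The line format, field names, and axis mapping are assumptions consistent with the description above, not the project's exact implementation.

using System;
using System.IO;
using System.Net.Sockets;
using System.Threading;
using UnityEngine;

public class AquaticusConnectSketch : MonoBehaviour
{
    public string serverAddress = "127.0.0.1";  // pUnity host (startup parameter)
    public int serverPort = 9010;               // pUnity port (startup parameter)

    Thread readerThread;
    volatile bool running;
    readonly object poseLock = new object();
    Vector3 latestPosition;
    float latestHeadingDeg;

    void Start()
    {
        latestPosition = transform.position;
        running = true;
        readerThread = new Thread(ReadLoop) { IsBackground = true };
        readerThread.Start();
    }

    void Update()
    {
        // Apply the most recently received pose on the main thread.
        lock (poseLock)
        {
            transform.position = latestPosition;
            transform.rotation = Quaternion.Euler(0f, latestHeadingDeg, 0f);
        }
    }

    void ReadLoop()
    {
        try
        {
            using var client = new TcpClient(serverAddress, serverPort);
            using var reader = new StreamReader(client.GetStream());
            while (running)
            {
                // Expected form (assumed): NAME=evan,X=12.30,Y=-4.50,HDG=270.00
                string line = reader.ReadLine();
                if (line == null) break;

                float x = 0f, y = 0f, hdg = 0f;
                foreach (string field in line.Split(','))
                {
                    string[] kv = field.Split('=');
                    if (kv.Length != 2) continue;
                    if (kv[0] == "X")   float.TryParse(kv[1], out x);
                    if (kv[0] == "Y")   float.TryParse(kv[1], out y);
                    if (kv[0] == "HDG") float.TryParse(kv[1], out hdg);
                }

                lock (poseLock)
                {
                    // MOOS-IvP x/y (east/north) map here to Unity x/z; the
                    // heading becomes a yaw about Unity's vertical axis.
                    latestPosition = new Vector3(x, 0f, y);
                    latestHeadingDeg = hdg;
                }
            }
        }
        catch (Exception)
        {
            // Connection closed or the component was destroyed; simply exit.
        }
    }

    void OnDestroy()
    {
        running = false;
    }
}

Reading on a background thread keeps the blocking socket read off Unity's main loop; only the cached pose is touched from Update().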

Virtual Reality Headset. The second step is to integrate a virtual reality


headset into our 3D visualization. The most popular virtual reality headsets are
the Oculus [14], HTC Vive [10], and Valve Index [11]. Our top choice is the Valve
Index, as it has the best resolution and highest frame rate, which is key to reducing
the possibility of motion sickness and thus helps us recruit and retain a higher
number of participants. We can anchor the headset’s view to the participant’s
vehicle location, thereby creating a 3D simulated world in which the headset view
is anchored to the participant’s virtual vehicle yet remains responsive to the
participant’s head motions, letting them look around the environment while playing
virtual games of capture the flag.
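One way to realize this anchoring in Unity is to have the parent of the head-tracked camera (the VR rig) follow the participant's vehicle each frame, leaving head rotation entirely to the headset's own tracking. The script below is a sketch under that assumption; the seat offset is a placeholder.

using UnityEngine;

// Attach to the VR rig (the parent of the head-tracked camera).
public class VehicleCameraAnchor : MonoBehaviour
{
    public Transform vehicle;                                  // the participant's kayak
    public Vector3 seatOffset = new Vector3(0f, 1.0f, -0.3f);  // approximate eye position

    void LateUpdate()
    {
        if (vehicle == null) return;

        // Place the rig at the seat position and align it with the hull's yaw;
        // the headset supplies the local head rotation on top of this.
        transform.position = vehicle.TransformPoint(seatOffset);
        transform.rotation = Quaternion.Euler(0f, vehicle.eulerAngles.y, 0f);
    }
}

Parenting the rig directly under the vehicle object would achieve a similar effect; the explicit LateUpdate() form simply makes the yaw-only coupling obvious.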

3.6 Single Player Experience


As seen in Fig. 5, the single-player experience will continue to use the game con-
troller to move the vehicle around and interact with the game mechanics, such as
tagging opponents and grabbing the flag, while an audio headset is used to commu-
nicate with teammates and receive game events. Instead of the participant
looking at a monitor displaying the Unity 3D visualization from their vehicle’s
perspective, they will wear a virtual reality headset. Thus, when the participant
moves their head around, the virtual reality view will adjust accordingly. As the
view moves with the vehicle within the simulation, the participant will experi-
ence a view more akin to what they would experience playing games of capture
the flag on the water in their motorized kayaks.
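As a sketch of the controller side, the Unity script below reads the default joystick axes and turns them into desired-speed and desired-heading style commands. In the actual system the game controller drives the MOOS-IvP simulation, which then reports the resulting pose back to Unity; the axis names, limits, and button binding here are placeholders.

using UnityEngine;

public class KayakControllerInput : MonoBehaviour
{
    public float maxSpeedMps = 2.5f;        // placeholder top speed
    public float turnRateDegPerSec = 40f;   // placeholder turn rate

    public float DesiredSpeedMps { get; private set; }
    public float DesiredHeadingDeg { get; private set; }

    void Update()
    {
        // Default Unity input axes: left stick vertical = throttle,
        // left stick horizontal = rudder.
        float throttle = Input.GetAxis("Vertical");
        float rudder   = Input.GetAxis("Horizontal");

        DesiredSpeedMps = Mathf.Max(0f, throttle) * maxSpeedMps;
        DesiredHeadingDeg = Mathf.Repeat(
            DesiredHeadingDeg + rudder * turnRateDegPerSec * Time.deltaTime, 360f);

        // A button press could issue a flag-grab or tag request to the
        // central game controller ("Fire1" is an illustrative binding).
        if (Input.GetButtonDown("Fire1"))
            Debug.Log("Flag grab requested");
    }
}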

3.7 Local Multiplayer


The 2D version of Project Aquaticus is capable of having multiple participants in
the same competition. As seen in Fig. 3, the original system is capable of being
run on Raspberry Pis. Extending it to multi-player virtual reality means
upgrading each participant’s machine to a PC with a video
card capable of smoothly re-creating the 3D environment.

4 Future Work
4.1 Work to Finish the VR Platform
Visually, the Project Aquaticus capture the flag field within Unity 3D requires a
few additions. This includes the “chalk” on the water that marks the boundaries
of the play area. Additionally, some scenery, using assets included with Unity 3D,
will help participants gain a visual estimate of their speed.
We need to include collision physics, as that is related to some of the tac-
tics we wish to study. While the robots in the simulation run collision
avoidance algorithms, just as they would on the water, there is no simulation
of collision physics. In our current Project Aquaticus 2D simulation the vehicles
can go through each other. To improve the study of these tactics, a basic collision
model will need to be implemented.
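One basic approach, sketched below under the assumption that Unity's built-in physics is used, is to give each hull a Rigidbody and a collider and to report contacts so that the game controller could assess a safety penalty. The mass, drag, and constraint values are placeholders, and any such model would have to be reconciled with the kinematic pose updates arriving from the MOOS-IvP simulator.

using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class HullCollision : MonoBehaviour
{
    // Called in the editor when the component is added; sets illustrative defaults.
    void Reset()
    {
        var body = GetComponent<Rigidbody>();
        body.mass = 180f;          // roughly a loaded kayak (placeholder)
        body.drag = 2f;            // crude water resistance (placeholder)
        body.useGravity = false;   // vehicles stay on the water plane
        body.constraints = RigidbodyConstraints.FreezePositionY |
                           RigidbodyConstraints.FreezeRotationX |
                           RigidbodyConstraints.FreezeRotationZ;
    }

    void OnCollisionEnter(Collision collision)
    {
        // Report the contact; a penalty or a physical response would go here.
        Debug.Log($"{name} collided with {collision.gameObject.name}");
    }
}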

4.2 Future Potential Experiments

In concurrent work, we analyzed four of the existing Aquaticus games [36]. We


used video analysis to identify tactics that players used. We identified five tac-
tics, but there could be more. We also attempted to characterize differences in
game play between experienced players and brand-new players. The results were
inconclusive, in part because one novice performed very well. There is the poten-
tial that “the novice” was on the sailing team and that he was able to transfer his
knowledge of watercraft performance into the game. This is speculation, as we
were not systematically collecting data about prior experience.
This experimental platform offers several advantages in studying the devel-
opment of tactical judgment. First, we can assume that few participants will
have prior familiarity with playing capture the flag with a robot teammate in a virtual
reality kayak. This will help control for prior tactical knowledge. The second is
that games are short. This provides the opportunity to conduct a game, have an
after-action review where we record their learning, have participants develop a
plan, play again, review and plan again, play one more time, and then debrief the
experiment, all within an hour. The relatively small number of teammates puts a
limit on the complexity of their plan. This helps the research team identify the
tactics, even when they enact them concurrently. Finally, VR offers the opportu-
nity to easily change the characteristics of the entities and the environment. For
example, we can easily make the humans faster than the robots, or vice versa.
We can introduce a strong current, where it is easy for everybody to go in one
direction, but hard to go in the other. Either of these should eventually lead to
a change in tactics, but how that change is realized remains to be seen.
Another question to explore is when and how often robots should provide
status updates to the human. In the game, the humans are under some cognitive
load as they maneuver themselves and send maneuver commands to the robot.
Semmens et al. asked people who were driving if it was a good time to receive
non-essential information [35]. They found that there were many times when it
was okay to receive information, and some times when it was not. It was not a good
time when people were trying to do something, such as turning left, or figuring
out the GPS. This platform will allow us to explore the same type of question
for people in a contested environment. This might inform future communications
systems that could hold information until the human is ready.
Finally, this platform may be used to investigate adaptive thinking skills
or tactical planning skills, as others have before [30,39]. It is unclear what the
benefits of training repetition and low-cost failure are for tactical judgment
specifically, and for human development in general. There could be benefits to tactical
confidence, increases in growth mindset and willingness to learn, and fortification
of grit. None of that may prove true, but this platform will allow us to find out.

5 Conclusions
We have introduced the utility and our plan to integrate a 3D virtual reality
simulation into our experiments for Project Aquaticus. Aquaticus captures
the richness (and chaos) of real-world human-robot teaming within a structured
game of capture-the-flag on the water. As described in Sect. 3.3, our current 2D
top-down simulator is a great learning tool and useful for command and control
from the shore. Yet, the current simulation view does not capture the chaos that
a participant experiences from being in their own vehicle with a view from the
water. We presented our previous efforts in trying to bring about a 3D visualizer
and VR headset and why they stalled. Lastly, we presented our current progress
using the Unity 3D visualization tool with our real-time simulator.

Acknowledgments. We thank Hugh R. R. Dougherty, Caileigh Fitzgerald, Paul


Robinette, Michael R. Benjamin, and Henrik Schmidt for their contributions to this
initial work with Gazebo and Meshcat-python. We also thank Clearpath Robotics for
furnishing us with a model of their Heron M300 autonomous surface vehicle.

References
1. Virtual Reality: State of Military Research and Applications in Member Countries
(La réalité virtuelle: l’état actuel des travaux de recherche et des applications mili-
taires dans les pays membres de l’Alliance). Technical report, RTO-TR-018, NATO
Research and Technology Organization, Neuilly-sur-Seine, France, February 2003.
https://apps.dtic.mil/docs/citations/ADA411978
2. Aguero, C., et al.: Inside the virtual robotics challenge: simulating real-time robotic
disaster response. IEEE Trans. Autom. Sci. Eng. 12(2), 494–506 (2015). https://
doi.org/10.1109/TASE.2014.2368997
3. Lee, E.A., Wong, K.W., Fung, C.C.: How does desktop virtual reality
enhance learning outcomes? A structural equation modeling approach. Comput.
Educ. 55(4), 1424–1442 (2010). https://doi.org/10.1016/j.compedu.2010.06.006.
https://linkinghub.elsevier.com/retrieve/pii/S0360131510001661
4. Bailenson, J.: Experience on demand: what virtual reality is, how it works, and
what it can do (2019). oCLC: 1037811122
5. Beal, S.A.: Using games for training dismounted light infantry leaders: emer-
gent questions and lessons learned. Technical report, 1841, US Army Research
Institute for the Behavioral and Social Sciences (2005). https://doi.org/10.1037/
e423262005-001. http://doi.apa.org/get-pe-doi.cfm?doi=10.1037/e423262005-001,
type: dataset
6. Bonwell, C.C., Eison, J.A.: Active Learning: Creating Excitement in the Classroom.
1991 ASHE-ERIC Higher Education Reports. ERIC (1991)
7. Cabello, R.: Three.js, November 2018. https://threejs.org/. Accessed 02 Nov 2018
8. Campbell, C.H., Knerr, B.W., Lampton, D.R.: Virtual environments for infantry
soldiers: virtual environments for dismounted soldier simulation, training and mis-
sion rehearsal. Technical report, ARI-SP-59, Army Research Institute for the
Behavioral and Social Sciences, Alexandria, VA, May 2004.
https://apps.dtic.mil/docs/citations/ADA425082

9. Catanzariti, P.: Bringing VR to the web with Google Cardboard and Three.js,
November 2018. https://www.sitepoint.com/bringing-vr-to-web-google-cardboard-three-js/.
Accessed 02 Dec 2018
10. HTC Corporation: HTC VIVE. https://www.vive.com/. Accessed 14 Aug 2019
11. Valve Corporation: Valve index. https://www.valvesoftware.com/en/index/
headset/. Accessed 14 Aug 2019
12. DARPA: MIT DARPA robotics challenge team, November 2018. http://drc.mit.
edu/. Accessed 02 Nov 2018
13. Deits, R.: Meshcat-Python, November 2018. https://github.com/rdeits/meshcat-
python/. Accessed 02 Nov 2018
14. Facebook Technologies, LLC: Oculus Rift. https://www.oculus.com/. Accessed 14
Aug 2019
15. Open Source Robotics Foundation: Gazebo, February 2018. http://gazebosim.org/.
Accessed 02 Feb 2018
16. Google: Google cardboard, November 2018. https://vr.google.com/cardboard/.
Accessed 02 Dec 2018
17. Kearns, K.: RFI: Autonomy for Loyal Wingman. Air Force Research Laboratory
(AFRL), July 2015
18. Lipton, J.I., Fay, A.J., Rus, D.: Baxter’s homunculus: virtual reality spaces for
teleoperation in manufacturing. IEEE Robot. Autom. Lett. 3(1), 179–186 (2018)
19. Oculus VR, LLC: Oculus Rift, February 2018. https://www.oculus.com/. Accessed
02 Feb 2018
20. Lynn, V.L., Cherry, P., Brady, E., Droulihet, P., Evers, W.: Army science board
1991 summer study - army simulation strategy. Technical report, Army Sci-
ence Board, Washington, December 1991. https://apps.dtic.mil/docs/citations/
ADA250382
21. Novitzky, M., Benjamin, M.R., Robinette, P., Dougherty, H.R., Fitzgerald, C.,
Schmidt, H.: Virtual reality for immersive simulated experiments of human-robot
interactions in the marine environment. In: Workshop on Virtual, Augmented and
Mixed Reality for Human-Robot Interaction at HRI 2018, Chicago, March 2018
22. Novitzky, M., Benjamin, M.R., Robinette, P., Dougherty, H.R., Fitzgerald, C.,
Schmidt, H.: Virtual reality for immersive simulated experiments of human-robot
interactions in the marine environment. In: ACM/IEEE International Conference
on Human-Robot Interaction (HRI) 2018 Workshop on Virtual, Augmented, and
Mixed Reality (VAM) for Human-Robot Interaction. ACM (2018)
23. Novitzky, M., Dougherty, H.R.R., Benjamin, M.R.: A human-robot speech interface
for an autonomous marine teammate. In: Agah, A., Cabibihan, J.-J., Howard,
A.M., Salichs, M.A., He, H. (eds.) ICSR 2016. LNCS (LNAI), vol. 9979, pp. 513–
520. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-47437-3 50
24. Novitzky, M., Fitzgerald, C., Gleason, D., Robinette, P., Benjamin, M.R., Schmidt,
H.: Integrating a multi-modal interface in a marine human-robot teaming testbed.
In: IEEE International Conference on Robotics and Automation (ICRA) work-
shop on Human-Robot Teaming Beyond Human Operational Speeds (RT-DUNE).
IEEE, Montreal, May 2019
25. Novitzky, M., Fitzgerald, C., Robinette, P., Benjamin, M.R., Schmidt, H.: Updated:
virtual reality for immersive simulated experiments of human-robot interactions in
the marine environment. In: Proceedings of the Workshop Virtual, Augmented, and
Mixed Reality for Human-Robot Interaction ACM/IEEE International Conference
on Human-Robot Interaction. ACM/IEEE, Daegu, March 2019

26. Novitzky, M., Robinette, P., Benjamin, M.R., Fitzgerald, C., Schmidt, H.: Aquati-
cus: publicly available datasets from a marine human-robot teaming testbed. In:
Companion of the 2019 ACM/IEEE International Conference on Human-Robot
Interaction. Daegu, March 2019
27. Novitzky, M., Robinette, P., Benjamin, M.R., Gleason, D.K., Fitzgerald, C.,
Schmidt, H.: Preliminary interactions of human-robot trust, cognitive load, and
robot intelligence levels in a competitive game. In: Companion of the 2018
ACM/IEEE International Conference on Human-Robot Interaction, pp. 203–204
(2018)
28. Novitzky, M., Robinette, P., Fitzgerald, C., Dougherty, H.R.R., Benjamin, M.,
Schmidt, H.: Issues and mitigation strategies for deploying human-robot experi-
ments on the water for competitive games in an academic environment. In: Proceed-
ings of the Workshop Dangerous HRI: Testing Real-World Robots has Real-World
Consequences ACM/IEEE International Conference on Human-Robot Interaction.
ACM/IEEE, Daegu (2019)
29. Novitzky, M., Robinette, P., Gleason, D.K., Benjamin, M.R.: A platform for study-
ing human-machine teaming on the water with physiological sensors. In: Workshop
on Human-Centered Robotics: Interaction, Physiological Integration and Auton-
omy at RSS 2017, Cambridge, July 2017
30. Pleban, R.J., Vaughn, E.D., Sidman, J., Geyer, A., Semmens, R.: Training platoon
leader adaptive thinking skills in a classroom setting. Research Report 1948, U.S.
Army Research Institute for the Behavioral & Social Sciences, Arlington (2011).
http://www.dtic.mil/dtic/tr/fulltext/u2/a544978.pdf
31. Robinette, P., Novitzky, M., Benjamin, M.R.: Trusting a robot as a user versus as
a teammate. In: Workshop on Morality and Social Trust in Autonomous Robots
at RSS 2017, Cambridge, July 2017
32. Robinette, P., Novitzky, M., Benjamin, M.R.: Longitudinal interactions between
human and robot teammates in a marine environment. In: Workshop on Longi-
tudinal Human-Robot Teaming at HRI 2018, Chicago, March 2018
33. Robinette, P., Novitzky, M., Benjamin, M.R., Fitzgerald, C., Schmidt, H.: Explor-
ing human-robot trust during teaming in a real-world testbed. In: Companion
of the 2019 ACM/IEEE International Conference on Human-Robot Interaction,
March 2019
34. Ropelato, S., Zünd, F., Magnenat, S., Menozzi, M., Sumner, R.: Adaptive tutor-
ing on a virtual reality driving simulator. In: 1st Workshop on Artificial Intel-
ligence Meets Virtual and Augmented Worlds (AIVRAR) in Conjunction with
SIGGRAPH Asia 2017, ETH Zurich (2017)
35. Semmens, R., Martelaro, N., Kaveti, P., Stent, S., Ju, W.: Is now a good time?
An empirical study of vehicle-driver communication timing. In: Proceedings of the
2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, pp.
1–12. Association for Computing Machinery, Glasgow, May 2019. https://doi.org/
10.1145/3290605.3300867
36. Semmens, R., Novitzky, M., Robinette, P., Lieberman, G.: Insights into Expertise
and Tactics in Human-Robot Teaming
37. Taheri, S.M., Matsushita, K., Sasaki, M.: Development of a driving simulator with
analyzing driver’s characteristics based on a virtual reality head mounted display.
J. Transp. Technol. 7(03), 351 (2017)
38. Thorndike, E.L., Woodworth, R.S.: The influence of improvement in one mental
function upon the efficiency of other functions: III. Functions involving attention,
observation and discrimination. Psychol. Rev. 8(6), 553–564 (1901). https://doi.
org/10.1037/h0071363

39. Tucker, J., Semmens, R., Sidman, J., Geyer, A., Vaughn, E.: Training tactical-
level planning skills: an investigation of problem-centered and direct instruc-
tion approaches. Technical report, U. S. Army Research Institute for the Behav-
ioral & Social Sciences, Arlington (2011). http://www.dtic.mil/dtic/tr/fulltext/
u2/a545362.pdf
40. Whittle, R.: MUM-T is the word for AH-64E: helos fly, use drones. Breaking
Defense, January 2015
41. Zaal, P.M.T., Sweet, B.T.: The challenges of measuring transfer of stall recovery
training. In: 2014 IEEE International Conference on Systems, Man, and Cyber-
netics (SMC), pp. 3138–3143, October 2014. https://doi.org/10.1109/SMC.2014.
6974410, ISSN 1062-922X
