
The 14th International Scientific Conference

eLearning and Software for Education


Bucharest, April 19-20, 2018
10.12753/2066-026X-18-079

Using Artificial Intelligence in Augmented Environments for Complex Training Scenarios in the Defense Industry

Constantin NICA
“Mihai Viteazul” National Intelligence Academy, Management of National Security Information,
Bucharest, Romania,
constantin.nica@gmail.com

Tiberiu TANASE
Intelligence Analyst, Associated Member of the Romanian Committee for the History and Philosophy of Science and
Technology, History Division, Romanian Academy
tiberiutanase26@gmail.com

Abstract: Classical training scenarios in defense-industry exercises and practices focus on realism:
complex scenarios are simulated to achieve maximum impact on unit training. Thousands of hours are
spent training units and providing real-life scenarios, to improve the survivability and efficiency of
units if and when the time comes, and to prove organizational and practical effectiveness. Using
Augmented Reality and Artificial Intelligence to create realistic simulated environments would create
an entirely new learning platform, providing a new learning experience and approach for specialized
situations in the defense industry. Many other solutions have attempted to provide interactive content
without achieving the much-needed interaction between subjects participating in training courses,
where complex environmental relations need to be analyzed. Artificial Intelligence algorithms may
help identify patterns and may be trained to provide specific input on human interaction, while the
Augmented Reality environment may provide the interaction and practice normally required for
real-life situations. Combining both elements into a single eLearning platform may establish an
entirely new paradigm in the training of military and intelligence units, by reducing costs within
already expansive military budgets, providing a safer training environment and ultimately improving
the available training methods for a special domain. The present work analyzes available technology,
software and hardware, to outline a scenario in which Artificial Intelligence and Augmented Reality
are united into a new e-learning concept. A functional architecture is presented, identifying the most
likely existing technologies that could be used to reach a proof-of-concept solution.

Keywords: Augmented Reality; Artificial Intelligence; Defense; Research; eLearning; Intelligence.

I. INTRODUCTION

The present paper is the result of research into conceptual Artificial Intelligence (AI) and
Augmented Reality (AR) integration in complex systems involving wearable technology (or
‘wearables’), carried out by SuperSix, with the intent to build a final product that encompasses the
findings and the concept briefly described in this paper. The project has been given the internal code
name “The Spartan”. The paper puts forth a new way of combining technologies of the future, a
concept which, used the right way, could change many branches of activity, not just learning. The
technologies connected to these concepts are many, but the combination of Big Data and Analytics in
the context of AI and AR is entirely new. The future of technology revolves around cloud-based
platforms, mobile and Big Data, and for anything past 2020 developments are focused on the
Human-Machine Interface (HMI) and the Internet of Things (IoT). Complex architectures involving
all of these are currently under development in Research and Development and are not
production-ready in a stable environment, partly because such technologies are emergent compared to
the classical computer screen.
In non-military applications, the conditions for implementing various software and hardware
solutions are somewhat less strict, and testing an infrastructure usually does not require large amounts
of time and money. In military settings, products take years until they are considered stable enough for
a viable release; the more advanced the technology, the more time and money is required. When it
comes to human capital in military settings, especially in developed countries that can afford to invest
in future technologies, finances are secondary to the training of skill and the speed at which a specific
unit may become operational and effective. Using AR, AI, wearables and HMI to reach a better result
is a view of the future, especially in the defence industry. As has already been proven, new
technologies have expanded the battlefield and improved the presence of combatants and how
operational they can be on the battlefield.
When it comes to how “fast” a unit may be trained, in some cases, like Special Operations
Forces (SOF), a time factor cannot and should not be considered, as it would only be limiting in the
context of “continuous training”. After analyzing the evolution of technology in military applications
in relation to the time factor, and to a lesser degree the financial spending, time only matters in terms
of how fast a stable technology can be released and a new development cycle started. For human
capital, where special applications are concerned, training should rather be measured against specific
Key Performance Indicators (KPIs) than against the amount of money or financial investment in
training. Better results in the training of human capital will obviously also allow for capital and time
adjustments.
The reason for combining such technology stacks into a single stack is to set the base for an
entirely new, standardized way of presenting training content and performing the art of training other
people. Wearable technology provides levels of interaction that not even AR can provide, up to the
level of biofeedback [1] to the wearer, as well as active real-time monitoring of body signals [2]. The
final result is easily quantifiable in various models already available, which could be used for
applications [3] other than medical experimental treatments. The development of sensors that can read
biological data without intrusive methods has indeed opened the gate to a variety of applications that
are barely at their beginning.
AR, for its part, obviously provides immersion, an aspect that traditional eLearning systems
severely lack, even those that provide streaming and communication (like video interaction). In
applications where the trainer is actively involved and monitors the evolution of a student, traditional
eLearning systems are of even less use.
The subject of merging AI and AR is not new, and there are several interesting developments
in this area, like the ARC4 system by ARA (www.ara.com), under development for the US Army.
Transforming tomorrow is based on the ability to bring together the power of decision, provided by
the natural capacity of the human brain, and the speed with which software systems can gather data on
the battlefield or during a training process.
The paper does not explain the concepts it brings together, since detailed definitions and
various other applications are easily accessible via other resources. Specific to the concept presented
in this paper, the design of a system that brings together AI, AR, Big Data and wearables is new in the
area of eLearning. Designing a system targeted at training specialists in intelligence and operational
units makes the work even more attractive, as these categories of users have special needs, hence it is
a challenge in itself. The work presented herein is barely at its beginning, and by maintaining a steady
stream of developments we hope the final result will be used in more than just a single domain, to
improve the learning process.

II. TRUE CHANGES COME WITH A NEW PARADIGM OF DOING TRADITIONAL TRAINING
Traditional eLearning is no longer enough, even in traditional classrooms, let alone in
complex situations where human interaction is required for a good outcome of the learning process.
Not only in specialized environments, eLearning tools are reaching stale growth precisely because
human interaction and repetition are required to achieve a good result (let alone to apply concepts like
“continuous learning”, which is paramount in special training applications). This is certified by the
Ebbinghaus [4] “forgetting curve”, which shows how fast and easily some skills can be lost – up to
90% of learning within just a single week after the training has finished. Statically served content over
the Internet, or simply reading some material, is never enough. Traditional eLearning tools display
simple content in text or video format, with limited interaction between the users and on a specific
platform, regardless of the number of ways to deliver the information [5].
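The exponential form of the forgetting curve is the standard textbook approximation and is not taken from [4] itself; as a quick sanity check of the “90% within a week” figure quoted above, a memory stability of roughly three days reproduces it:

```python
import math

def retention(t_days, stability_days):
    """Ebbinghaus-style exponential forgetting curve: R = exp(-t/S),
    where R is the fraction retained, t the elapsed time and S the
    memory stability (both in days)."""
    return math.exp(-t_days / stability_days)

# With stability S = 3.04 days, about 90% of the material is gone
# one week after training ends, matching the figure cited above.
week_later = retention(7, 3.04)  # roughly 0.10
```

Without repetition the retained fraction decays quickly, which is exactly the argument for “continuous learning” made in this section.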
During this study of how information is delivered within an eLearning system, a link between spatial
geometrics and methods of delivering data was established, resulting in the following product
generation classification set:

Method | Generation | Time-related segregation
Written, simple static text. | 2nd generation eLearning | Past and present.
Video and combined video with text. | 2.5th generation eLearning | Valid through the end of 2019.
Video, text and online testing. | 3rd generation eLearning | Some free platforms will end their life cycle, while 3.5th generation eLearning will become more available without cost.
Instructor interaction via communication links, worldwide accessible content with virtual classrooms. This includes simulators. | 3.5th generation eLearning | (shares the outlook of the 3rd generation above)
Virtual classrooms, wearables, AI and AR integration. The system presents Big Data and complex self-evolving algorithms. | 4th generation eLearning | Near-present and present new technologies with VR-ready technology support. Available starting with 2018 and throughout 2025, with a phasing-out cycle similar to the predecessor generations.
Self-learning systems, with AI, AR, HMI and biofeedback. The system evolves and presents scenarios with real-life events, presents the environment and adds various outcomes to it. The IoT helps the sensor arrays and the AR perform differently than before. | 5th generation eLearning | The future of eLearning and training platforms. Available starting with 2025 and beyond. There is no control once 5th generation eLearning platforms have become available, even on a premium basis.
AI and AR fully integrated; the system is fully interactive through the HMI and provides real-time reactions to human decision making. The system is able to go beyond Big Data Analytics and to present complex scenarios that actually test human reactions, and is able to cause specific reactions in order to observe human behavior. | 6th generation eLearning | The AI core will have to be a 4th generation AI, which involves self-awareness and the capacity to interact with humans in a natural interactive way, where information exchange happens on all levels – speech, behavior, decision.

Table 1. eLearning product classification generation set

Currently, the presence of gen 3.5+ features in the most popular eLearning platforms
represents an attempt by some vendors to keep up with the evolution of the current technology trend,
but it is not enough. The above table also splits the coming generations of eLearning solutions across
the years, as a predicted evolution model of eLearning platforms based on the evolution of third-party
technologies like AR and AI. The prediction is based on observed events, which may or may not
unfold similarly in the complex environment that the new IoT 5G paradigm, with its ultra-fast
networks, will bring. The reason simulators have been placed within the 3.5 generation systems in this
study is that AI and AR bring a new way to create content for training purposes, and simple
problem-solution content is no longer enough, as it does not require real AI capacities.
Some AR systems allow for complex interactions by putting controls in the hands of users,
like Microsoft is doing with HoloLens. The issue is that legacy platforms do not make use of
advanced AR technologies, which is to be expected, since classic content is usually developed around
old static material and classic standards for presenting it.
The standardization of eLearning tools around specific ways to import or export data is not
helpful, and it does not accommodate external developments of third-party technology not included in
the most used platforms in the eLearning industry. A report on the evolution of present-day classic
eLearning solutions foretells the death of classical eLearning by 2021, due to lack of innovation and
the amount of already existing technology providing near-free content [6].

III. THE CONCEPT SYSTEM AND NEW ARCHITECTURAL MODEL FOR THE
FUTURE LEARNING SYSTEMS
Building the architectural model involved settling on a set of desired target features, which
then provided the core of the system. These core features are:

 Integration of Big Data and Analytics, as this is the single technology currently available
that can be considered fast and efficient enough to provide high-speed data streaming and
stream analytics, in a context where sensor, content and other data are gathered, processed
and transformed continuously. [7]
 The presence of a deep-learning engine. The proposed AI implementation model is a
neural network working in distributed environments with multiple nodes, using
cloud-based hardware resources. [8]
 The HMI gathers data on subjects constantly, with sensory data collected via multiple
types of interfaces.
 Biofeedback sensors transmit data via the HMI and react dynamically to the state of the
user of the system.
 The core system uses Big Data Analytics and AI to interact with the final user. The
system itself adapts to the user and supports decisions within the training session,
upgrading or downgrading the scenario in which the operator is currently involved.
 The AI is an extension of the user and does not require human interaction with a trainer;
however, it does facilitate it on an entirely new level. The system is able to personalize
itself to the trainees, supports permanent spatial simulation of a training environment, and
presents real-life cases based on backward analytics.
 The training session supports AR environments where content is automatically generated,
making present-day content designers obsolete. The AI must be able to create content
based on predefined data sets, in a given environment.
 Spatial data support (GIS).
 Support for wearable technologies, exoskeletons and dynamic interaction between
participants in a training course, thus making each participant an active data-transmission
router.
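As a minimal sketch of the biofeedback-driven adaptation described above (all names and thresholds are hypothetical illustrations, not part of any existing “Spartan” codebase), a sliding window over heart-rate samples could drive the upgrade/downgrade decision for a scenario:

```python
from collections import deque

class ScenarioAdapter:
    """Hypothetical sketch: adapt scenario difficulty from biofeedback.

    A sliding window of heart-rate samples (beats per minute) is kept;
    sustained high stress downgrades the scenario, sustained low stress
    upgrades it, mirroring the 'upgrade or downgrade the scenario'
    core feature listed above."""

    def __init__(self, window=5, high_bpm=150, low_bpm=100):
        self.samples = deque(maxlen=window)
        self.high_bpm = high_bpm
        self.low_bpm = low_bpm
        self.difficulty = 3  # arbitrary 1..5 scale, 5 = hardest

    def feed(self, bpm):
        """Ingest one sensor reading; return the (possibly new) difficulty."""
        self.samples.append(bpm)
        if len(self.samples) == self.samples.maxlen:
            avg = sum(self.samples) / len(self.samples)
            if avg > self.high_bpm and self.difficulty > 1:
                self.difficulty -= 1   # trainee overloaded: ease off
            elif avg < self.low_bpm and self.difficulty < 5:
                self.difficulty += 1   # trainee under-challenged: push harder
        return self.difficulty
```

A real system would fuse many more signals through the HMI, but the design point is the same: the scenario reacts to the user's state rather than to explicit input.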

The above project guidelines have sparked the following architectural model:

[Figure 1 depicts the “Spartan” system architecture as a data-streams pipeline spanning
projection and prediction: users communicate peer-to-peer and independently with the “Spartan”
system; sensor data is transmitted constantly and biofeedback sensors respond actively to user
decisions; a Big Data cluster with RDF stores and a communication mainframe feeds functional
scenarios and the Artificial Intelligence layer, with continuous improvement, in all-terrain,
all-weather conditions and with multiple models.]

Figure 1. Project “The Spartan” system architecture

“The Spartan” development aims to be the beginning of the 5th generation systems, where
dynamic content is generated, user and operator interaction is facilitated, on-the-fly changes in
scenario-based training are allowed, and complex simulations, like AR-based training, are supported.
Where controlled environments are available, such a system would allow for highly complex and
interactive training scenarios, where SOF and intelligence operators could be trained and evaluated on
an entirely new level.
The availability of training scenarios for SOF operators and intelligence personnel varies in
scope and size, but it also requires the “continuous” aspect, since in the case of SOF and intelligence
the focus is on the person and on how difficult it is to train a unit. The present eLearning system would
allow for complex trainer-trainee interactions, advanced environmental simulations, true system
evolution via personal trainer and trainee profiling, self-evolving scenarios and true HMI.

The sheer amount of data available in such a system would require data structures that support
real-time streaming, so a kappa architectural model would be the closest fit. [9] Structuring the data in
Big Data structures would allow for redundancy, availability and high-speed processing. The AI
engine would be plugged into large amounts of data and able to make decisions based on observable
data sets (historical data).
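The kappa idea referenced above can be sketched in a few lines (names hypothetical; a real deployment would use a distributed log such as Apache Kafka with a stream processor): all data lives in a single append-only log, and every derived view is produced by replaying that log through a stream job, so reprocessing historical data is simply replaying from offset zero.

```python
class KappaLog:
    """Toy append-only log standing in for a distributed log (e.g. Kafka)."""

    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)

    def replay(self, from_offset=0):
        """Streams can always be reprocessed by replaying the log."""
        yield from self.events[from_offset:]

def heart_rate_view(log):
    """A materialized view (max heart rate per user), rebuilt by replay.

    In a kappa architecture there is no separate batch layer: the same
    streaming code serves both live processing and recomputation."""
    view = {}
    for event in log.replay():
        user, bpm = event["user"], event["bpm"]
        view[user] = max(bpm, view.get(user, 0))
    return view
```

This is why the model suits the system described here: sensor streams keep arriving, yet the AI engine can always recompute its view of the historical data from the same log.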

The training models are scenarios which change constantly based on user behavior and user
profiles. They are generated by the AI engine itself and provide a truly interactive environment. The
system itself would provide reaction-based scenarios in all types of environments and would generate
the AR overlay for user interaction.
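As a toy illustration of profile-driven scenario generation (entirely hypothetical; the paper does not specify a generation algorithm, and the field names are invented for the sketch), scenario parameters for the AR overlay could be derived from a trainee profile and an environment description:

```python
import random

def generate_scenario(profile, environment, seed=None):
    """Hypothetical sketch: derive AR-overlay scenario parameters from a
    trainee profile and an environment description.

    The seed makes a training session reproducible for later evaluation."""
    rng = random.Random(seed)
    skill = profile.get("skill", 1)           # assumed 1..5 proficiency scale
    hostiles = skill * 2 + rng.randint(0, 2)  # scale opposition with skill
    return {
        "terrain": environment["terrain"],
        "visibility": "low" if environment.get("night") else "high",
        "hostiles": hostiles,
        "objective": rng.choice(["recon", "extraction", "surveillance"]),
    }
```

A production engine would learn these mappings from historical training data rather than use fixed rules, but the interface is the point: profile in, overlay parameters out.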

The communication infrastructure must resemble a Command-Control-Communications-
Computers-Intelligence-Surveillance-Reconnaissance (C4ISR) infrastructure, to make full use of the
entire range of technologies. Real-time positioning of units on a GIS-like map with 3D support is
desirable as an implementation model. The data overlay takes into consideration the multiple layers of
available data types:

[Figure 2 depicts the aggregate data model as nested layers: sensor data, scenarios, user profile
and environment.]

Figure 2. Aggregate data model layers

The users would interact with the system through wearables consisting of various sensors and,
most likely, an exoskeleton, which would both augment and limit movement based on the scenarios
run during training.

IV. TRAINING SCENARIOS OF TOMORROW


One important aspect is how a training scenario will evolve using such a system. Interactive
content is highly desirable, especially since intelligence operators and SOF personnel do not stand by
on static content in front of a computer screen. The scenarios change and evolve based on AI input,
while in controlled test environments the environment itself could be changed by inserting various
factors. A standard training session with present-day interactive content would most likely hold
content that is selectable and reacts, at most, to user touch. Such complex training systems must react
to user touch, decisions or the user's medical state. The content will be changed interactively based on
multiple factors, or simply based on an ever-evolving training scenario. This is the next level that the
new eLearning system seeks to bring to the world of training: influencing the reactions of users. The
difference between present-day ways of training and this one is exactly the difference between
“projection” and “prediction”. By using a naturally evolving scenario, the AI system can virtually
predict various outcomes for human subjects participating in an ever-changing environment.
Controlling behavior is the key aspect in training SOF and intelligence personnel, as teaching trainees
how to behave in various key moments is crucial to long-term survival.
In this instance, the difference between simple Big Data Analytics and a true AI-enabled
system can be seen (based on the Gartner analytics model [10]):

Observable data (descriptive model) | Predictable events (predictive model) | Desirable events (prescriptive model)
Timeframe: past & present | …still the present | the future
Data methods: data mining, real-time analytics, various reports, bulk data | Data methods: simulation data, statistic data, predictive data, forecasts & trends | Data methods: simulation data, optimized models, influenced results, complex event data
Outcomes: unstructured data, structured data | Outcomes: functional data models, structured data | Outcomes: unstructured data
All three stages are underpinned by Machine Learning & Reasoning.

Figure 3. Data model segmentation

The role of an AI-enabled complex learning platform is to transform the legacy data of today
into the predictive models of tomorrow. The environment and sensor data are still data, regardless of
how advanced the data acquisition process is. The true power of this system lies in the AI's ability to
change scenarios based on real-time data feeds and reasoning, and to improve the training process.
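As a minimal numeric illustration of the descriptive-to-predictive step (a sketch under stated assumptions, not the paper's method), even a least-squares trend fitted over historical readings already moves from describing the past to forecasting the next value:

```python
def linear_forecast(series):
    """Fit y = a + b*x by ordinary least squares over a historical
    series (the descriptive data) and predict the next point (the
    predictive step in the Gartner model of Figure 3)."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var          # slope of the fitted trend
    a = mean_y - b * mean_x  # intercept
    return a + b * n       # forecast for the next time step
```

The prescriptive stage would then go one step further: not merely forecasting the next value, but choosing the scenario change that influences it.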

V. CONCLUSIONS
The proposed eLearning model for specialized personnel like SOF and intelligence operators
will more than likely change training. The ability to use wearables to provide constant real-time data,
where an AI engine could provide input, change scenarios and generate a new environment via an AR
interface, has never been realized before. It is also true that such a task would be difficult to bring to
completion, as the work involved would indeed be significant. “The Spartan” system would be the
bridge combining physical and virtual realities to help the final users in the training process. The
system would also open the road for an entirely new training paradigm, which the learning industry
needs at the moment in order to evolve to the next stage.

Acknowledgements
Special thanks to Mr. Tiberiu Tanase, Intelligence Analyst with the Romanian Secret Service,
for providing support and proofreading the paper, as well as making various resources available along
the way.

Reference Text and Citations


[1] Aminian K, Robert P, Buchser EE, Rutschmann B, Hayoz D, Depairon M: Physical activity monitoring based on
accelerometry: validation and comparison with video observation. Medical & Biological Engineering &
Computing 1999, 37(3):304-308. 10.1007/BF02513304
[2] Bonato P, Hughes R, Sherrill DM, Black-Schaffer R, Akay M, Knorr B, Stein J: Using Wearable Sensors to
Assess Quality of Movement After Stroke. 65th Annual Assembly American Academy of Physical Medicine and
Rehabilitation, Phoenix (Arizona) October 7–9, 2004
[3] Park S, Jayaraman S: Enhancing the quality of life through wearable technology. IEEE Eng Med Biol Mag
2003,22(3):41-48. 10.1109/MEMB.2003.1213625
[4] Wixted, J.T. & Ebbesen, E.B.: Memory & Cognition (1997) 25: 731. https://doi.org/10.3758/BF03211316.
[5] Xiaofei Liu; A. El Saddik; N.D. Georganas: An implementable architecture of an e-learning system. CCECE
2003 - Canadian Conference on Electrical and Computer Engineering. Toward a Caring and Humane Technology
(Cat. No.03CH37436), DOI: 10.1109/CCECE.2003.1225995.
[6] Sam S. Adkins: The 2016-2021 Worldwide Self-paced eLearning Market: Global eLearning Market in Steep
Decline. Ambient Insight, 2016, available at: http://www.ambientinsight.com/Resources/
Documents/AmbientInsight_2015-2020_US_Self-paced-eLearning_Market_Abstract.pdf (accessed 20.01.2018)
[7] S. Sagiroglu and D. Sinanc, "Big data: A review," 2013 International Conference on Collaboration Technologies
and Systems (CTS), San Diego, CA, 2013, pp. 42-47. doi: 10.1109/CTS.2013.6567202
[8] Gerhard Weiss et al.: Multiagent Systems. A modern Approach to Distributed Artificial Intelligence. MIT Press,
Cambridge Massachusetts, London, England.
[9] Julien Forgeat: Data Processing Architectures – Lambda and Kappa. Ericsson AB available at:
https://www.ericsson.com/research-blog/data-processing-architectures-lambda-and-kappa/ (accessed 20.01.2018)
[10] John Hagerty: 2017 Planning Guide for Data and Analytics. Gartner. Published: 13 October 2016. Available at:
https://www.gartner.com/binaries/content/assets/events/keywords/catalyst/catus8/2017_planning_guide_for_data_a
nalytics.pdf (accessed: 21.01.2018)
