
Grand Challenges in Modeling and Simulation:

Expanding Our Horizons


Simon J. E. Taylor
ICT Innovation Group, Department of Information Systems and Computing
Brunel University, Uxbridge, UK

Osman Balci
Department of Computer Science
Virginia Tech, Blacksburg, VA, USA

Wentong Cai
School of Computer Engineering
Nanyang Technological University, Singapore

Margaret L. Loper
Information & Communications Lab
Georgia Tech Research Institute, Atlanta, GA, USA

David M. Nicol
Information Trust Institute
University of Illinois at Urbana-Champaign, IL, USA

George Riley
School of Electrical and Computer Engineering
Georgia Tech, Atlanta, GA, USA

ABSTRACT

There continue to be many advances in the theory and practice of Modeling and Simulation (M&S). However, some of these can be considered Grand Challenges: issues whose solutions require significant focused effort across a community, sometimes with ground-breaking collaborations with new disciplines. In 2002, the first M&S Grand Challenges Workshop was held in Dagstuhl, Germany, in an attempt to focus efforts on key areas. In 2012, a new initiative was launched to continue these Grand Challenge efforts. Panel members of this third Grand Challenge event present their views on M&S Grand Challenges. Themes presented in this panel include M&S Methodology; Agent-based M&S; M&S in Systems Engineering; Cyber Systems Modeling; and Network Simulation.

Categories and Subject Descriptors
I.6.0 [Simulation and Modeling]: General

General Terms
Algorithms and Theory

Keywords

Modeling and Simulation Methodology; Agent-based Modeling and Simulation; Modeling and Simulation Life Cycle; Cyber Systems Modeling; Network Simulation.

1. INTRODUCTION

After several decades of progress, Modeling and Simulation (M&S) continues to produce advancements in theory and practice. There continues to be innovation in existing application areas of M&S, and new application areas continue to be identified. Some of these advances can be considered Grand Challenges: issues whose solutions require significant focused effort across a community, sometimes with ground-breaking collaborations with new disciplines. In 2002, the first M&S Grand Challenges Workshop was held in Dagstuhl, Germany, in an attempt to focus efforts on key areas (www.dagstuhl.de/02351). In 2012, a new initiative was launched to continue these Grand Challenge efforts. The first event in this new phase of activities was the M&S Grand Challenge Panel held at the 2012 Winter Simulation Conference [1]. This discussed issues including interaction of models from different paradigms, parallel and distributed simulation, ubiquitous computing, supercomputing, grid computing, cloud computing, big data and complex adaptive systems, model abstraction, embedded simulation for real-time decision support, simulation on-demand, simulation-based acquisition, simulation interoperability, high-speed optimization, web simulation science, spatial simulation, and ubiquitous simulation. The second event was another Grand Challenge Panel that took place at the Symposium on Theory of Modeling and Simulation (TMS'13) during SpringSim 2013 in San Diego [2]. This addressed a range of topics across big simulation applications (data, models, systems), coordinated modeling, human behavior, composability, sustainable funding, cloud-based M&S, engineering replicability into computational models, democratization of M&S, multi-domain design, hardware platforms, and education.

Panel members of this third Grand Challenge event present their views on M&S Grand Challenges. Themes presented in this panel include M&S Methodology; Agent-based M&S; M&S in Systems Engineering; Cyber Systems Modeling; and Network Simulation.

2. OSMAN BALCI: Developing a New Modeling and Simulation Methodology

2.1 Overview

In many cases, M&S stands to be the only applicable technique for bringing solutions to complex problems. However, M&S is currently applied in an ad hoc manner without an underlying holistic methodology. Seventeen types of M&S are used across dozens of different disciplines, which shows how diverse M&S is. Despite this diversity, much of the underpinning of current M&S methodologies dates back to the 1960s. Current M&S methodologies do not sufficiently meet the requirements for solving complex, multifaceted problems. Development of a holistic M&S methodology for solving such complex problems is a grand challenge. This position statement justifies the need for a new M&S methodology and identifies a set of requirements for its development.



2.2 INTRODUCTION

A model is a representation and abstraction of anything such as a real system, a proposed system, a futuristic system design, an entity, a phenomenon, or an idea. Modeling is the act of developing a model. Simulation is the act of executing, experimenting with, or exercising a model or a set of models for a specific objective (intended use) such as analysis (problem solving), training, acquisition, entertainment, research, or education. Simulation cannot be conducted without a model, and modeling is an integral part of simulation. Therefore, we refer to both modeling and simulation activities as M&S.

Many areas or types of M&S exist. A taxonomy of M&S areas is presented in Table 1. Use of the 17 M&S areas listed in Table 1 spans dozens of different disciplines for many objectives / intended uses. Each M&S area possesses its own characteristics and methodologies, is applicable for solving certain problems, and has its own community of users. Some M&S areas have their own societies, conferences, books, journals, and software tools.

Table 1. M&S Areas (Types)

A. Based on Model Representation
   1. Discrete M&S
   2. Continuous M&S
   3. Monte Carlo M&S
   4. System Dynamics M&S
   5. Gaming-based M&S
   6. Agent-based M&S
   7. Artificial Intelligence-based M&S
   8. Virtual Reality-based M&S
B. Based on Model Execution
   9. Distributed / Parallel M&S
   10. Web-based M&S
C. Based on Model Composition
   11. Live Exercises
   12. Live Experimentations
   13. Live Demonstrations
   14. Live Trials
D. Based on What is in the Loop
   15. Hardware-in-the-loop M&S
   16. Human-in-the-loop M&S
   17. Software-in-the-loop M&S

For a description of the M&S areas, the reader is referred to the ACM SIGSIM M&S Knowledge Repository at http://www.acm-sigsim-mskr.org/MSAreas/msAreas.htm

2.3 THE NEED FOR A NEW M&S METHODOLOGY
Employment of M&S as a technique to solve complex problems
includes the M&S of many diverse systems (problem domains),
each with its own unique characteristics. In the current state of the
art, the M&S methodology that is effective for one problem
domain typically does not satisfy the needs of M&S for another
problem domain. However, many problem domains dictate the
creation of a simulation model that represents many diverse
systems in an integrated manner.


Two paradigms have been primarily used for discrete M&S development: procedural and object-oriented. Under the procedural paradigm, discrete simulation models have been developed using the following conceptual frameworks (a.k.a. world views or simulation strategies): activity scanning, event scheduling, the three-phase approach, and process interaction [3]. These four conceptual frameworks were created in the early 1960s and some of them are still being used. It is time for new ideas! The object-oriented paradigm originated in the SIMULA simulation programming language in 1967. Smalltalk, C++, Objective-C, Java, and C# followed as the contemporary object-oriented programming languages commonly used for software development today.

There is a need for a holistic M&S methodology to meet the grand challenges we face today. The development of such a holistic M&S methodology is itself a grand challenge because of the requirements stated below.
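
To make the world-view distinction concrete, the following minimal sketch (in Python, invented for this discussion rather than taken from [3]) shows the core of the event-scheduling framework: a future event list ordered by time stamp drives the model, which is exactly the mechanism the 1960s frameworks formalized. The arrival and departure handlers are placeholders.

    import heapq

    def run_event_scheduling(end_time):
        # Future event list: (timestamp, sequence number, handler, data)
        fel, clock, seq = [], 0.0, 0

        def schedule(delay, handler, data=None):
            nonlocal seq
            heapq.heappush(fel, (clock + delay, seq, handler, data))
            seq += 1

        def arrival(_):
            schedule(1.0, arrival)      # schedule the next arrival
            schedule(0.8, departure)    # schedule this entity's service completion

        def departure(_):
            pass                        # update state and statistics here

        schedule(0.0, arrival)
        while fel:
            t, _, handler, data = heapq.heappop(fel)
            if t > end_time:
                break
            clock = t                   # advance simulation time to the event
            handler(data)               # execute the event routine
        return clock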

2.4 REQUIREMENTS FOR A NEW M&S METHODOLOGY
The Merriam-Webster dictionary defines methodology as "a body of methods, rules, and postulates employed by a discipline." The new M&S methodology should present identifiable techniques, approaches, and strategies, and provide effective guidance to M&S engineers, analysts, and managers. We provide some requirements below under which the new M&S methodology should be developed.

Top 10 requirements for the new M&S methodology:

(1) The new M&S methodology must be structured based on a comprehensive and effective M&S life cycle.

An M&S life cycle [4]:

- Represents a framework for organization of the processes, work products, quality assurance activities, and project management activities required to develop, use, maintain, and reuse an M&S application from birth to retirement.
- Specifies the work products to be created under the designated processes together with the integrated verification and validation (V&V) and quality assurance (QA) activities.
- Is critically needed for project management to modularize and structure an M&S application development and to provide guidance to an M&S developer (engineer), manager, organization, and community of interest.
- Identifies areas of expertise in which to employ qualified people.
- Is required to show the V&V and QA activities as integrated within the M&S development activities, based on the principle that V&V and QA must go hand in hand with M&S development.
- Enables M&S engineering to be viewed from the four Ps (perspectives): Process, Product, People, and Project.

The author has developed such an M&S life cycle [4] based on his
experience with DoD-related complex M&S development
projects. The new M&S methodology can be created based on that
life cycle representation.
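
As a rough, purely illustrative sketch of requirement (1), the fragment below shows one way the elements the life cycle must organize (processes, work products, and the V&V and QA activities attached to them) could be represented in code; the names are hypothetical, and this is not the life cycle defined in [4].

    from dataclasses import dataclass, field

    @dataclass
    class WorkProduct:
        name: str                                           # e.g. "conceptual model"
        vv_checks: list = field(default_factory=list)       # integrated V&V activities
        qa_checks: list = field(default_factory=list)       # integrated QA activities

    @dataclass
    class Process:
        name: str                                           # e.g. "requirements engineering"
        products: list = field(default_factory=list)

    # A hypothetical slice of a life cycle; every work product carries its own V&V/QA record.
    life_cycle = [
        Process("problem formulation", [WorkProduct("formulated problem")]),
        Process("conceptual modeling", [WorkProduct("conceptual model",
                vv_checks=["face validation"], qa_checks=["traceability review"])]),
        Process("design and implementation", [WorkProduct("executable model",
                vv_checks=["code verification", "output validation"])]),
    ]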


(2) The new M&S methodology must be applicable for providing effective and integrated M&S-based solutions to complex problems.

An effective M&S-based solution is one that is sufficiently credible, accepted, and used by the decision makers. The new methodology should assist in reducing the M&S builder's risk and the M&S user's risk.

M&S Builder's Risk is the probability that the M&S application is rejected although it is sufficiently credible and acceptable. The consequences of this risk are higher cost and prolonged project duration.

M&S User's Risk is the probability that the M&S application is accepted in spite of the fact that it is not sufficiently credible. The consequences of this risk can be catastrophic, since incorrect decisions will be made based on the M&S results.
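
Because both risks are defined as conditional probabilities over accept/reject decisions, they can be estimated empirically once an acceptability assessment procedure is in place. The sketch below is a generic Monte Carlo illustration with synthetic settings (p_credible, test_power, and false_alarm are invented values), not a procedure prescribed by the methodology.

    import random

    def estimate_risks(trials=100_000, p_credible=0.7, test_power=0.9, false_alarm=0.1):
        builder_reject, builder_total = 0, 0
        user_accept, user_total = 0, 0
        for _ in range(trials):
            credible = random.random() < p_credible   # true (unknown) credibility
            accepted = random.random() < (test_power if credible else false_alarm)
            if credible:
                builder_total += 1
                builder_reject += not accepted        # rejected although credible
            else:
                user_total += 1
                user_accept += accepted               # accepted although not credible
        return builder_reject / builder_total, user_accept / user_total

    print(estimate_risks())   # roughly (0.1, 0.1) for these illustrative settings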

(3) The new M&S methodology must be a holistic methodology applicable for M&S of many diverse systems in an integrative manner.

Many problem domains (universes of discourse) contain diverse systems embedded within each other, forming a system of systems. Each system possesses its own characteristics, e.g., discrete, continuous, real-time, or distributed. Each system can require a type (area) of M&S listed in Table 1. The new methodology must accommodate as many M&S areas (types) as possible.

(4) The new M&S methodology must provide a unifying conceptual framework throughout the entire M&S development life cycle.

Using different conceptual frameworks from one phase of the M&S development life cycle to another increases the complexity of development and the probability of inducing errors. The new methodology must employ a conceptual framework that can be used in each life cycle phase.

(5) The new M&S methodology must enable network-centric M&S application development.

Two main reasons exist for creating an M&S application as network-centric. (i) To be able to train geographically dispersed people using M&S, the M&S application must be accessible over a network such as the Internet, a local area network, a virtual private network, or the Secret Internet Protocol Router Network (SIPRNET). (ii) M&S application execution can be improved by distributing the execution of its components to different server computers at different network nodes. The new methodology must enable the construction of a network-centric M&S application.
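
A minimal illustration of the network-centric idea in requirement (5), using only the Python standard library: a simulation component is wrapped behind a small HTTP endpoint so that other components or remote clients can drive it over a LAN, a VPN, or the Internet. The endpoint, port, and message format are invented for this sketch.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def advance(state, dt):
        # placeholder for one simulation step of this component
        state["t"] += dt
        return state

    class SimComponent(BaseHTTPRequestHandler):
        state = {"t": 0.0}

        def do_POST(self):
            length = int(self.headers["Content-Length"])
            req = json.loads(self.rfile.read(length))
            SimComponent.state = advance(SimComponent.state, req.get("dt", 1.0))
            body = json.dumps(SimComponent.state).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # other components POST {"dt": ...} to this node to advance and query it
        HTTPServer(("0.0.0.0", 8000), SimComponent).serve_forever()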

(6) The new M&S methodology must enable reuse and component-based M&S application development using a library of reusable components.

Undoubtedly, reuse provides significant economic and technical benefits that should not be underestimated [6]. The new methodology must enable the creation and use of a library of reusable components specifically created for a problem domain (universe of discourse) of interest.
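
One way to picture such a library of reusable components is sketched below; the interface and registry are hypothetical and are not part of any existing M&S standard. Each component declares the inputs it consumes and the outputs it produces, so a model can be composed from parts that were verified once and reused many times.

    from abc import ABC, abstractmethod

    class ReusableComponent(ABC):
        inputs: tuple = ()
        outputs: tuple = ()

        @abstractmethod
        def step(self, t, inputs: dict) -> dict:
            """Advance the component to time t and return its outputs."""

    REGISTRY = {}

    def register(name):
        def wrap(cls):
            REGISTRY[name] = cls          # publish the component in the domain library
            return cls
        return wrap

    @register("triage_queue")
    class TriageQueue(ReusableComponent):
        inputs, outputs = ("arrivals",), ("treated",)
        def step(self, t, inputs):
            return {"treated": min(len(inputs["arrivals"]), 5)}   # toy capacity rule

    model = [REGISTRY["triage_queue"]()]  # compose a model from library components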
(7) The new M&S methodology must facilitate verification, validation, and quality assurance throughout the entire M&S life cycle.

V&V aims to assess the transformational accuracy (verification) and behavioral/representational accuracy (validation) of an M&S application. QA aims to assess the other M&S quality characteristics such as interoperability, fidelity, credibility, and acceptability. V&V and QA are not stages but activities conducted continuously, hand in hand with the development activities throughout the entire M&S life cycle. The new methodology must facilitate the effective application of V&V and QA principles [5].
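
As a small, generic example of the kind of check that integrated V&V can automate throughout the life cycle (the reference data, critical value, and tolerance below are placeholders), a replicated simulation output can be compared against observed data using a confidence interval on the difference of means.

    from statistics import mean, stdev
    from math import sqrt

    def validate(sim_replications, observed, t_crit=2.0, tolerance=0.5):
        """Flag the model as questionable if the observed mean lies outside the
        confidence interval of the simulated mean widened by a practical tolerance."""
        m, s, n = mean(sim_replications), stdev(sim_replications), len(sim_replications)
        half_width = t_crit * s / sqrt(n) + tolerance
        return abs(m - mean(observed)) <= half_width

    print(validate([10.2, 9.8, 10.5, 10.1, 9.9], [10.6, 10.4, 10.8]))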


(8) The new M&S methodology must support integration with real-life hardware and software systems for performing Live Exercises.

Live exercises, experimentations, demonstrations, and trials are very much needed for solving some complex problems such as emergency response management training, military training, and technology assessment. It is critically important that the M&S application be integrated with real-life systems.

(9) The new M&S methodology must support real-time decision making.

An M&S application can be embedded within a real-time decision support system to enable real-time decision making during, for example, an emergency situation.

(10) The new M&S methodology must enable the development of M&S applications for Analysis as well as for Training objectives.

Analysis (problem solving) is conducted under such objectives as comparison of different operating policies, evaluation of a given emergency response management plan, prediction, and sensitivity analysis. Training refers to simulation-based training of, e.g., first responders and decision makers. The same methodology must be usable for developing an M&S application for multiple intended uses (objectives).

2.5 CONCLUDING REMARKS

The U.S. Government is the largest sponsor and consumer of M&S applications in the world. Billions of dollars are spent annually by the U.S. Government for developing, using, and maintaining M&S applications. However, the M&S methodologies in use today date back to the 1960s. It is critically important to conduct research to develop new methodologies to address the ever-increasing complexity of M&S development.

Reuse has been very difficult, or in some cases impossible, in the M&S discipline. However, the issue of reuse is extremely important and should be effectively addressed [6]. Component-based M&S development is an unsolved problem for stand-alone M&S applications due to differences in programming languages, operating systems, and hardware. However, network-centric M&S development promises new advances in component-based M&S engineering and should be fully investigated.

3. WENTONG CAI: Agent-based Modeling and Simulation for Complex Adaptive Systems

Traditionally, M&S involves the development of a dynamic, stochastic, and discrete model of the physical system of interest. Once the model is shown to exhibit the expected behaviour for a given set of known inputs, it can be used to predict the system behaviour for a set of unknown scenarios or inputs. For many systems which are well understood, or whose behaviour can be characterized by a chronological sequence of events, this approach to modelling has been very successful. There is, however, a large set of systems in which a global or macro-scale understanding is simply not possible to conceive. Simulating and modelling these systems in a traditional manner is often too challenging, as system-level trends and properties are difficult to characterize.

At the heart of agent-based modelling is the principle of emergence. For studying and attempting to understand natural complex phenomena, agent-based models offer an attractive and intuitive approach of describing the rules and parameters of individual entities and letting the system-level properties emerge. From a modelling perspective this is an attractive method for reasoning about complex adaptive systems, where the overall behaviour of a system is the result of a huge number of decisions made by the inter-connected individual components [7]. Given this understanding, it is clear to see why agent-based M&S has seen a dramatic increase in its application in a wide variety of application domains [8]. While agent-based M&S offers such great promise, there are still fundamental and challenging issues.
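
The sketch below is a deliberately tiny illustration of emergence, using a one-dimensional, Schelling-style relocation rule invented for this example rather than a model from the literature: each agent follows a purely local rule, yet the population typically ends up clustered, a system-level property that no individual rule mentions.

    import random

    def schelling_1d(n=60, steps=2000, want_similar=0.5, seed=1):
        random.seed(seed)
        cells = [random.choice("AB.") for _ in range(n)]   # two agent types plus vacancies

        def unhappy(i):
            nbrs = [cells[(i - 1) % n], cells[(i + 1) % n]]
            same = sum(x == cells[i] for x in nbrs)
            occupied = sum(x != "." for x in nbrs)
            return occupied and same / occupied < want_similar

        for _ in range(steps):
            i = random.randrange(n)
            if cells[i] != "." and unhappy(i):
                vacancies = [k for k in range(n) if cells[k] == "."]
                if not vacancies:
                    break
                j = random.choice(vacancies)
                cells[j], cells[i] = cells[i], "."          # relocate to a random vacancy
        return "".join(cells)

    print(schelling_1d())  # typically long runs of A's and B's: emergent clustering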

Behaviour Modelling. The effectiveness of agent-based simulation is determined by the integrity of the simulation model with respect to its ability to capture the true characteristics of individual entities in a typical environment. An important modelling decision is the level of detail: whether a simple rule-based agent model is sufficient or a complex agent behaviour model that incorporates social, psychological, and cultural factors is required. Creating a complex agent behaviour model itself also poses a great challenge. This usually requires studying theories of a specific domain and extracting the essence from the theories. With the current wealth of data availability, there is a possibility to obtain behaviour rules inductively from the data using data analytics techniques.

Verification and Validation. The major motivation for using agent-based M&S to understand complex adaptive systems has an undesirable consequence when ascertaining the validity of models. The central argument for an agent-based model is that the behaviour of the system emerges in some unknown and unexpected way. This unpredictable nature of the system makes tracing cause and effect relationships a very challenging issue.

The most effective approach to establishing model validity is to compare the simulation results of the model with the output data from the system being modelled. However, this approach may not be applicable to agent-based M&S for complex adaptive systems, as there is usually no existing system to compare with. The complexity and arrangement of the huge numbers of agents will also typically lead to an overly parameterized model. So, even if data are available, evolving model parameters to match the data is not a straightforward task either.
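
A toy version of "evolving model parameters to match the data", assuming only that the agent-based model can be run as a function of its parameters and scored against a reference output. The stand-in model, metric, and naive random search below are placeholders for much more careful calibration and validation procedures.

    import random

    def run_abm(infection_rate, recovery_rate):
        # deterministic stand-in for a real agent-based run; returns the final attack rate
        s, i, r = 990, 10, 0
        for _ in range(100):
            new_i = min(s, int(infection_rate * s * i / 1000))
            new_r = int(recovery_rate * i)
            s, i, r = s - new_i, i + new_i - new_r, r + new_r
        return r / 1000

    observed_attack_rate = 0.62

    best = None
    for _ in range(500):                                   # naive random search
        p = (random.uniform(0.1, 1.0), random.uniform(0.05, 0.5))
        err = abs(run_abm(*p) - observed_attack_rate)
        if best is None or err < best[0]:
            best = (err, p)
    print("best parameters found:", best)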

Scalability. In computational terms an autonomous agent can be viewed as an independently executing process, with its own state and thread of control. The complexity of agents can vary widely depending on their model. In agent-based M&S there is a traditional view of large numbers of simple (or lightweight) agents that are interacting with each other. However, for many applications agents themselves can have significant computational capabilities and therefore significant processing requirements. Whether agent models involve heavyweight or lightweight agents, the scale and complexity of the studied systems often makes executing these models extremely compute intensive.
Interoperability and Reusability. A large-scale model can be constructed from sub-models created using existing models. However, these models may not work with each other: they may operate at different levels of abstraction and be created using different modelling paradigms. This makes the communication between different models difficult. In a typical crowd simulation, for example, the crowd model needs to pass along various information about the actions of individuals and also needs to understand the events in the simulated world so that the individuals can determine how to respond to these events. But different crowd models may use different models of perception and action, and may respond to events at different levels of abstraction.

The generality of agent-based M&S has seen its application across multiple disciplines. As a consequence, researchers working in different communities (such as computational social science, artificial intelligence, and M&S) have addressed challenges, developed solutions, and implemented systems to enable this type of computational modelling. Obviously, to deal with the above-mentioned challenges and to push the research forward in agent-based M&S for complex adaptive systems, a well-organized and orchestrated effort is required across multiple disciplines and research communities.

4. MARGARET LOPER: Modeling & Simulation in the Systems Engineering Life Cycle

Many of the most important problems facing the world are rooted in science: energy, infectious disease, climate change, and education. The National Academy of Engineering has identified a set of grand challenges in broad realms of human concern that are in need of engineering solutions [9]. All of these problems are complex systems, and finding solutions to these grand challenges will inevitably use M&S as a problem-solving technique. In order to create solutions to these problems, systems engineering is needed to focus on how complex engineering projects should be designed and managed over their life cycles.

Systems engineering is an interdisciplinary approach to translating users' needs into the definition of a system, its architecture and design through an iterative process that results in an effective operational system [10]. The systems engineering V process, shown in Figure 1, represents the sequence of steps in a product or project's development. The V process was designed to simplify the understanding of the complexity associated with developing systems.

Figure 1. Systems Engineering V Process


We can also think about systems engineering from a life cycle perspective. A life cycle is a categorization of the systems development activities into distinct, controllable phases. All systems follow a general life cycle from concept definition through development and testing to deployment, operations, and deactivation. The systems engineering V process can be mapped to a simple life cycle model [11], as shown in Figure 2.

Figure 2. Systems Engineering Life Cycle

M&S has become an important tool across all phases of the systems engineering life cycle, including requirements definition, program management, design and engineering, efficient test planning, result prediction, supplement to test and evaluation, manufacturing, and logistics support. There are many opportunities to use M&S in the systems engineering life cycle, and when appropriately applied, four major benefits can be achieved: cost savings, accelerated schedule, improved product quality, and cost avoidance [12].

To solve the grand challenge problems of our times, we need to use systems engineering to design and manage the solutions, but we as an M&S community don't have a good understanding of what types of M&S are best used or needed in each systems engineering life cycle phase (i.e., concept definition, technology development, engineering and manufacturing development, production and deployment, and operations and support). Once we do have this understanding, we can design better-integrated M&S tools and solutions across the systems engineering life cycle phases for solving these grand challenges.

For example, we commonly use low-fidelity, aggregate-level, constructive simulations (e.g., war games) in concept definition to examine alternative solutions, look at operational effectiveness, or evaluate the cost of different concepts. Later in the life cycle (i.e., engineering and manufacturing development) we use high-fidelity, physics-based simulations for detailed design and for developing prototypes. Once we are in the engineering and manufacturing development phase we have committed to a design, and investments have been made in a specific solution (so there is no opportunity for changes). What if we had the ability to produce detailed designs and prototypes earlier in the life cycle and put them in an operational setting to see how they perform? For example, this might lead to the challenge of bringing high- and low-fidelity simulations together that are at different levels of resolution, in a quick and easy way. If we can do that early enough in the systems engineering process, we can design and develop more effective solutions by letting the user actually interact with the system early on and have substantial input on system requirements (whereas now users interact with the final system once it has been produced and changes are cost-prohibitive to make).

A grand challenge for M&S is then to understand how M&S is used in each of the systems engineering life cycle phases, and what types of models and simulations are most effective in each phase. Once this is better understood, we will be able to develop better M&S tools and techniques that can be reused or integrated across different phases of the systems engineering life cycle.

5. DAVID NICOL: Uncertainty and Trust in Models of Cyber Systems

Model validation - determining whether a model correctly represents a system of interest - is widely recognized as a crucial and difficult aspect of M&S. Failure to properly validate may lead to disastrous results. Protective tiles on the space shuttle Columbia were damaged during liftoff by a small piece of foam insulation from a fuel tank, and the shuttle burned up on re-entry. As part of a team that reviewed NASA-developed standards on model validation (in response to the disaster), I learned that the effect of the foam's impact on the tiles had actually been simulated, but the results suggested that only an acceptable level of damage would ensue. Post-disaster analysis revealed that a number of sub-models involved in the simulation were deeply flawed, and values for model parameters were unknown, while other sub-models and their parameters had acceptable validity. The faulty sub-models and parameters contaminated the end-to-end system results.

As our nation's defensive capabilities rely increasingly on computing and communication, simulation of cyber-centric systems grows in critical importance. How trustworthy are our cyber-simulation models? How can we develop trustworthy models? The problems are deep and can be subtle, and there are many sources for untrustworthiness to creep in, propagate, and contaminate a system model. There are serious gaps that need to be closed lest the cyber equivalent of the Columbia disaster occur in the heat of a cyber-dependent conflict.
The first gap relates to trust in modeling parameters and data. Every model depends on parameters and processes data. Not all data or parameters used have the same level of certainty, trust, or meaning. Data trustworthiness depends on data provenance, on engineering accuracy, and on levels of confidence in that data. Our work in simulating wireless communications has shown us how easy it is to introduce uncertainty in detailed models simply by lacking good information about the parameters called for in the models, e.g., reflective coefficients.

The second gap is trust in model formulation. Part of model validation needs to be a quantified consideration of the degree to which the abstractions of the model match the available data or parameters. For example, why include detailed sub-models of network switches, with their variations in scheduling and packet-drop policies, if (a) knowledge of the switches used in the system of interest is lacking, and (b) the impact of queuing policy on measures of interest (e.g., application throughput) is significant? A detailed model using uncertain parameters may be less trustworthy than a more abstract model using trusted parameters. Our work in analyzing abstractions in switching and background traffic has shown that network utilization is largely unaffected by different switching policies, but protocols involving feedback (e.g., TCP) are very much affected by packet-drop policies and by different switches.

A third gap is trust in model composition. Data and parameters used have different levels of trustworthiness, and sub-models have different levels of trust relative to the data and parameters they use. The final question asks how trustworthy the end-to-end predictions of a model built from sub-models are. This is closely related to the well-developed field of Uncertainty Quantification in continuous system models, which seeks to explain how uncertainty and error in continuous models propagate through composition. Discrete-system models, particularly those of cyber systems, need a corresponding theory. The path forward here is difficult because classical uncertainty quantification relies on derivatives that are lacking in discrete models. The key notion, though, is model sensitivity, and so developing methods of quantifying the sensitivity of metrics in cyber models to parameters and data may lead to a corresponding theory of uncertainty quantification for cyber simulations.
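
In the absence of derivatives, one pragmatic starting point is one-at-a-time perturbation: re-run the discrete model with each parameter nudged by a relative step and record how much the metric of interest moves. The sketch below is generic (the model callable and parameters are illustrative) and is meant only to make the notion of model sensitivity concrete, not to stand in for a theory of uncertainty quantification.

    def sensitivities(model, baseline, rel_step=0.1):
        """Return, per parameter, the relative change in the model metric when that
        parameter alone is perturbed by rel_step (one-at-a-time sensitivity)."""
        base_out = model(**baseline)
        result = {}
        for name, value in baseline.items():
            perturbed = dict(baseline, **{name: value * (1 + rel_step)})
            out = model(**perturbed)
            result[name] = (out - base_out) / base_out if base_out else float("nan")
        return result

    # toy cyber model: expected packets delivered through a lossy, rate-limited link
    def delivered(load_pps, loss_prob, capacity_pps):
        return min(load_pps, capacity_pps) * (1 - loss_prob)

    print(sensitivities(delivered, {"load_pps": 800, "loss_prob": 0.02, "capacity_pps": 1000}))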



6. GEORGE RILEY: Grand Challenges in Network Simulation

The use of discrete event simulation methods to attempt to predict the performance of telecommunications networks has been an important part of nearly all research in computer networks. In nearly all cases it is simply too expensive and time consuming to attempt to create an actual network to experiment with and study network behavior under controlled conditions. However, the use of network simulation tools is also fraught with pitfalls and difficulties, such that it becomes difficult to draw meaningful conclusions from many simulation studies. There are several difficult problems that need solutions in order to continue the widespread use of network simulation tools.



Performance and Scale. Virtually all network simulation tools quickly run into performance issues when modeling any non-trivial network. As the size of the network topology grows and the link capacity increases, the total number of simulation events per simulated second quickly leads to excessive running times, sometimes several hours or even days per simulated second. Clearly, meaningful research is difficult to achieve when waiting long time periods between experimental runs. Further, processor memory use also grows quickly as the topology size grows, often leading to experiment failure due to memory exhaustion. Distributed simulation methods on supercomputers or networks of workstations can ease this problem somewhat, at the expense of extra simulator overhead for message exchanges and time management.

Accuracy of simulation parameters. The behaviors of the various models in the simulation environment are intended to reflect accurately the behavior of the same element in a real-world network. However, in many cases the actual internal behavior of the element in question in a real network (queuing discipline, queue size, queue size units, for example) is simply not known. Thus, when trying to understand the effect of proposed changes on a real network, the observed metrics from the simulator might not match the actual metrics on the real network, due to incorrect assumptions about how the existing network is configured.

Accuracy of simulation models. Clearly, models in a network simulation environment are intended to behave identically to the same network elements in real systems. However, models for complex protocols such as TCP are difficult to create and difficult to compare to actual networks. The number of different TCP variations and behaviors in deployed systems is surprisingly large, and it is nearly impossible to determine a priori which of the many variants are in use in a network. Further, models for queuing methods also have a number of configuration parameters (see the RED queuing method for an example) for which the modeler often does not know the correct or accurate settings. In short, the behavior of nearly all network element models in network simulators is intended to mimic some real-world system for which the modeler has incomplete or inaccurate information.

Modeling physical layer performance in wireless simulations. Network simulation has become nearly ubiquitous in the arena of wireless networks. It is well known that the performance of such networks is a function of many variables such as transmission power, antenna gain and orientation, network density, routing protocol in use, node mobility, and of course the network traffic demand by the applications. Of these, the physical layer characteristics are among the most important, and unfortunately accurate modeling of the PHY layer is extremely challenging and not very well understood. Many network simulation designers have created models for PHY layer behavior based on well-known theory, only to find that real-world experiments lead to vastly different results.
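
As a concrete illustration of how sensitive wireless results can be to poorly known physical-layer parameters, the sketch below uses the standard log-distance path-loss model and simply sweeps the path-loss exponent over a plausible range; the transmit power, receiver sensitivity, and distance are illustrative and not calibrated to any real environment.

    from math import log10

    def path_loss_db(distance_m, exponent, pl_d0=40.0, d0=1.0):
        # log-distance path loss: PL(d) = PL(d0) + 10 * n * log10(d / d0)
        return pl_d0 + 10 * exponent * log10(distance_m / d0)

    def link_ok(distance_m, exponent, tx_power_dbm=20.0, rx_sensitivity_dbm=-85.0):
        return tx_power_dbm - path_loss_db(distance_m, exponent) >= rx_sensitivity_dbm

    # the same 120 m link "works" or "fails" depending only on the assumed exponent
    for n in (2.0, 2.7, 3.5):
        print(f"path-loss exponent {n}: link up = {link_ok(120.0, n)}")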

The field of network M&S has been active for decades, with bigger and better tools being developed. Progress has been made, but the use of network simulation to accurately predict network performance still leads to inaccurate conclusions in many cases. The above list of challenges is by no means complete, but it is a good starting point for researchers wishing to create better simulation tools.

7. CONCLUSIONS

The panel members of this third M&S Grand Challenges activity present their views on why multifaceted M&S Methodology, Agent-based M&S, M&S in Systems Engineering, Cyber Systems Modeling, and Network Simulation pose serious methodological and technical challenges that are considered grand challenges.

8. REFERENCES

[1] Taylor, S.J.E., Fishwick, P.A., Fujimoto, R., Page, E.H., Uhrmacher, A.M., Wainer, G. 2012. Panel on Grand Challenges for Modeling and Simulation. In Proceedings of the Winter Simulation Conference 2012. ACM Press, NY.

[2] Taylor, S.J.E., Khan, A., Morse, K.L., Tolk, A., Yilmaz, L., Zander, J. 2013. Grand Challenges on the Theory of Modeling and Simulation. In Proceedings of the 2013 Symposium on the Theory of Modeling and Simulation. SCS, Vista, CA. To appear.

[3] Balci, O. 1988. The Implementation of Four Conceptual


Frameworks for Simulation Modeling in High-Level Languages.
In Proceedings of the 1988 Winter Simulation Conference. ACM
Press, NY. 287-295.

[4] Balci, O. 2012. A life cycle for modeling and simulation. Simulation: Transactions of the Society for Modeling and Simulation International. 88, 7, 870-883.

[5] Balci, O. 2010. Golden rules of verification, validation, testing,


and certification of modeling and simulation applications. SCS
M&S Magazine. Oct. 2010 Issue 4, The Society for Modeling
and Simulation International (SCS), Vista, CA.


[6] Balci, O., Arthur, J. D., and Ormsby, W. F. 2011. Achieving


reusability and composability with a simulation conceptual
model. Journal of Simulation 5, 3, 157-165.

[7] Holland, J. 1999. Emergence, From Chaos to Order. Basic


Books.

[8] Siebers, P. O., Macal, C. M., Garnett, J., Buxton, D., and Pidd, M. 2010. Discrete-event Simulation is Dead, Long Live Agent-based Simulation! Journal of Simulation, 4, 3, 204-210.

[9] National Academy of Engineering. 2008. Introduction to the


Grand Challenges for Engineering. Website accessed 15
February 2013: http://www.engineeringchallenges.org.

[10] Committee on Pre-Milestone A Systems Engineering. 2009. Pre-Milestone A and Early-Phase Systems Engineering: A Retrospective Review and Benefits for Future Air Force Acquisition. The National Academies Press.


[11] Stevens, R., Brook, P., Jackson, K., and Arnold, S. 1998.
Systems Engineering: Coping with Complexity. Prentice Hall.

[12] Haskins, Cecilia (ed.). 2007. Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities, INCOSE-TP-2003-002-03.1, version 3.1.
