
Executing Models for Ultra-Large Networks: Parallel Discrete Event Simulation and Beyond

Richard Fujimoto College of Computing and the Georgia Tech Modeling and Simulation Research and Education Center

Network Simulation Tools


Most network simulation tools are based on discrete event simulation techniques
- State changes occur at discrete points in simulation time (message generated, packet arrival, packet departure, etc.)
- Computation consists of a sequence of event computations, processed in time stamp order
- Amount of computation is proportional to the number of events; the number of events is proportional to the amount of traffic being simulated
- Several software packages available: NS, Opnet, GloMoSim/Qualnet, many others
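The event-processing loop described above can be sketched in a few lines. This is a minimal illustration of a discrete event simulator (a pending-event priority queue popped in time stamp order), not the API of NS, Opnet, or any other package; all names are illustrative.

```python
import heapq

class Simulator:
    """Minimal discrete event simulator: events are processed in time stamp order."""
    def __init__(self):
        self.now = 0.0
        self._queue = []   # heap of (time, seq, handler, data)
        self._seq = 0      # tie-breaker so simultaneous events pop in schedule order

    def schedule(self, delay, handler, data=None):
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler, data))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler, data = heapq.heappop(self._queue)
            handler(self, data)

# Example: two packets arrive at a link with a 1 ms service time
log = []

def arrival(sim, pkt):
    log.append((sim.now, 'arrive', pkt))
    sim.schedule(0.001, departure, pkt)   # departure event 1 ms later

def departure(sim, pkt):
    log.append((sim.now, 'depart', pkt))

sim = Simulator()
sim.schedule(0.0, arrival, 1)
sim.schedule(0.0005, arrival, 2)
sim.run(until=1.0)
```

Note how the total work done is exactly one loop iteration per event, which is why computation scales with the simulated traffic volume.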

Some work in fluid flow models
- UMass, UCLA, Rutgers/GT, others

Parallel Network Simulation


Build parallel network simulator from scratch
- Build parallel simulation engine; build models over the engine
- TeD (Georgia Tech); Scalable Simulation Framework (Dartmouth, Renesys); GloMoSim (UCLA), Qualnet (Scalable Network Solutions); TeleSim (Univ. Calgary); Ultra-Large Scale Simulation Framework (Univ. Cincinnati)

Extend sequential network simulation software
- Instantiate an instance of the simulator for each subnet/protocol layer; build infrastructure to interconnect the simulators
- Leverages prior investments in sequential network simulation models (software, verification and validation)
- Provides a more familiar environment to current users
- Inherits the limitations of the sequential simulator
- Easier to integrate heterogeneous simulation models
- Parallel and Distributed NS [PDNS], some work with GloMoSim, Opnet (Georgia Tech); other work at Univ. Cincinnati, RPI, Opnet

10^5 - 10^6 nodes, 10^6 - 10^7 events/sec (simple, packet level)

Simulation of the Internet (Riley/Ammar 2002)


Scenario (conservative)
- 10^8 network nodes
- Mix of link speeds (56 Kbps to 2.4 Gbps)
- 50% utilization on host-router links, 10% on router-router links
- 1% of hosts have a connection to another host

Simulating one second of network operation (packet level) requires
- 3×10^11 events
- 4 days CPU time (at 10^6 events/second)
- 300 terabytes memory
- 14 terabytes secondary storage (per second of simulation time)
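The CPU-time figure above follows directly from the event count and event rate; a quick back-of-envelope check (illustrative arithmetic only, using the slide's numbers):

```python
# Back-of-envelope check of the Riley/Ammar (2002) scenario figures.
events = 3e11          # events per simulated second (from the slide)
event_rate = 1e6       # events/second achievable on one CPU (from the slide)

cpu_seconds = events / event_rate      # 3e5 wall-clock seconds
cpu_days = cpu_seconds / 86400         # ~3.5 days, i.e. the "4 days" on the slide

print(f"{cpu_days:.1f} CPU days per simulated second")
```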

and it gets worse*


[Chart: projected Internet host count, 2001-2009 (vertical axis 0 to 80000)]

- 25% annual growth in host count
- Traffic doubles every 6 months
- Computer speed doubles every 1.5 years (Moore's law)
- Computers won't be big/fast enough in the foreseeable future
- Parallel simulation is necessary, but not sufficient
* Riley and Ammar (2002)
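The mismatch between the growth rates cited above compounds quickly. A rough sketch (doubling periods taken from the slide; the 8-year horizon is arbitrary) shows why faster hardware alone cannot close the gap:

```python
# Traffic doubles every 0.5 years; CPU speed doubles every 1.5 years (Moore's law).
# The ratio of simulation work required to compute speed available grows without bound.
years = 8
traffic = 2 ** (years / 0.5)   # traffic growth factor over the horizon
speed = 2 ** (years / 1.5)     # compute speed growth factor over the horizon
gap = traffic / speed          # how much further behind simulation falls

print(f"traffic x{traffic:.0f}, speed x{speed:.0f}, gap x{gap:.0f}")
```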

Role of Simulation
Design and analysis of scalable networks
Protocol design & evaluation, network deployment, etc.

Find breaking points in existing/planned networks (anticipated heavy workloads, attacks, instabilities)
Generate scenarios to stress test the network

After-the-fact evaluation of incidents (e.g., attacks, major failures): use simulation to observe unobservable network state and behaviors
- Couple with data measurement
- Reproduce incidents
- Develop and evaluate countermeasures

On-Line Simulation to rapidly optimize or diagnose and repair networks


Automatically detect and address problems on-line as they begin to occur

On-line Simulation to Diagnose/Repair Networks


Real-time monitoring tools characterize network traffic flows

Operational Network

Back-end CPU farm at remote high-performance computing site

Reconfigure network to meet QOS objectives for time critical traffic


Graphical display of simulation outputs

Fast on-line simulations forecast behavior of alternate network configurations

Some Research Questions


How big is big enough?
- Methodologies and techniques to analyze ultra-large networks

How precise a network model is needed?


Certain information will be unknown

How far can we go with parallel simulation? Ultra-large computers (10^3 processors and beyond)?
- No shortage of cycles, but can they be exploited?

Memory remains a problem. What's the solution?
- Compression techniques
- Out-of-core simulation

What about multiple simulation runs?
- Most experiments require multiple runs
- The runs usually have much in common
- Can this be exploited?
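One way the commonality across runs could be exploited is to execute the shared portion of the scenario once and then branch ("clone") the simulation state for each alternative. A toy sketch, with all names and state illustrative:

```python
import copy
import random

# Run the common warm-up phase once; its cost is shared by all runs.
def warm_up():
    rng = random.Random(42)
    return {"queue": [rng.random() for _ in range(5)], "clock": 100.0}

# Each experimental run branches from a copy of the shared state.
def branch(state, extra_load):
    s = copy.deepcopy(state)          # clone so runs do not interfere
    s["queue"].extend([extra_load] * 2)
    return s

base = warm_up()
runs = [branch(base, load) for load in (0.1, 0.5, 0.9)]
```

The saving comes from amortizing the warm-up over all runs; only the divergent suffixes are computed per run.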

Some Research Questions (cont.)


On-line simulation
- What data to collect? How much?
- Ultra-fast model execution: real-time, faster-than-real-time performance
- Automated scenario generation, model construction, experiment design, execution, output analysis
- Self-validating simulators?

Multi-resolution models: How accurate? How big?


- Fluid models simulating background traffic, packet-level models simulating delay-sensitive traffic
- Detailed models of critical subnets, coarser models of others
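The fluid-model idea is that background traffic is tracked as a rate rather than as individual packets, so one update replaces thousands of per-packet events at high link speeds. A minimal sketch (discrete-time update; variable names are illustrative):

```python
# Fluid approximation of a link: the queue level q(t) integrates
# (arrival_rate - capacity) over each time step, clipped at zero.
def fluid_queue(rates, capacity, dt):
    """rates: arrival rate per time step (e.g. bits/s); returns queue levels."""
    q, levels = 0.0, []
    for r in rates:
        q = max(0.0, q + (r - capacity) * dt)
        levels.append(q)
    return levels

# Link with capacity 1 Mb/s: 2 s of 2 Mb/s overload, then 2 s of 0.5 Mb/s.
levels = fluid_queue([2e6, 2e6, 0.5e6, 0.5e6], capacity=1e6, dt=1.0)
# Queue builds during the overload, then drains once arrivals drop below capacity.
```

In a multi-resolution model, packet-level foreground traffic would then see this fluid queue level as added delay, without paying per-packet event cost for the background flows.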

Is widespread model composability and reuse possible?


Big bag of simulation models (available on the web) developed using different simulation packages that can be easily composed

Related Conferences & Workshops


Grand Challenges in Modeling and Simulation Conference
- SCS Western Multiconference, San Diego, January 28-30, 2002

Dagstuhl Workshop
- Dagstuhl Castle, Germany, August 26-30, 2002
