
11 Data Analysis
You can use discrete event simulations to generate different forms of output, as
described in Chapter 10 Simulation Design on page MC-10-1. These forms
include several types of numerical data, animation, and detailed statistics
provided by the debugger included with OPNET Modeler.

The most commonly used forms of output data for discrete event simulations
are those that are directly supported by Simulation Kernel interfaces for
collection and by existing tools for viewing and analysis. This data falls into two
primary categories:

• Statistic data is generated by choosing statistics for collection and/or by using the KPs of the Simulation Kernel's Stat package; OPNET Modeler's Results Browser can then be used to view and manipulate the statistical data.

• Animation data is generated either by using automatic animation probes or by developing custom animations with the KPs of the Simulation Kernel's Anim package; the animation viewer is then used to view the animations.

In addition, because discrete event simulations support open interfaces to the C/C++ languages and the host computer's operating system, simulation developers can generate proprietary forms of output, ranging from messages printed in the console window, to ASCII or binary files, to live interactions with other programs.

Results Browser
The Results Browser displays information in the form of graphs. Graphs are presented within rectangular areas called analysis panels. Each analysis panel can contain one or more graphs; a graph is the part of the analysis panel that contains statistics. A number of different operations create graphs and analysis panels, all with the same basic purpose: to display a new set of data or to transform an existing one. An analysis panel consists of a plotting area with two numbered axes, generally referred to as the horizontal axis (abscissa) and the vertical axis (ordinate). The plotting area can contain one or more graphs describing relationships between variables mapped to the two axes. For example, the graph in the following figure shows how the size of a queue varies as a function of time.

Figure 11-1 Analysis Panel Components

Analysis Data: Statistics


The graphs displayed in analysis panels represent data sets referred to as
statistics. Each statistic consists of a sequence of data points called entries. An
entry in turn consists of two real numbers, called the entry’s abscissa and
ordinate.

Table 11-1 Statistic Content

  entry index   entry abscissa   entry ordinate
  0             x0               y0
  1             x1               y1
  2             x2               y2
  3             x3               y3
  4             x4               y4
  5             x5               y5
  6             x6               y6

The relationship between the abscissa and ordinate variables is then described
by the correspondence established by each of the entries. For a given entry this
relationship can usually be read “When the abscissa variable takes on the value
x, the ordinate variable takes on the value y”, where x and y are the values
stored in the entry. In the analysis panel, this entry may be represented by a
point located at the intersection of the lines represented by the equations
abscissa = x and ordinate = y, as shown in the following figure.
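To make the entry structure concrete, the following minimal Python sketch models a statistic as an ordered list of (abscissa, ordinate) pairs. The layout is illustrative only; it is not OPNET Modeler's internal representation.

    # A statistic modeled as an ordered sequence of (abscissa, ordinate) entries.
    queue_size = [
        (0.0, 0.0),   # entry 0: at time 0.0 the queue is empty
        (1.2, 1.0),   # entry 1: a packet arrives at time 1.2
        (1.2, 2.0),   # entry 2: a second arrival at the same abscissa
        (3.5, 1.0),   # entry 3: a departure at time 3.5
    ]

    for index, (x, y) in enumerate(queue_size):
        print(f"entry {index}: abscissa = {x}, ordinate = {y}")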


Figure 11-2 Graphing of an Entry Based on Its Abscissa and Ordinate (the plotted entry has abscissa = 2.5 and ordinate = 3.5)

Because each statistic may consist of multiple entries, panels usually contain
many points. The resulting graph describes the relationship between the
abscissa and the ordinate not only in terms of the dependency at each point, but
also by expressing sensitivity of one variable to the other; in other words, graphs
can give an indication of the effect that changing one variable has on the other.
Usually, if one variable is considered to be varied intentionally, or is treated as
a system input parameter, it is called an independent variable and is placed on
the horizontal axis. The second variable is called a dependent variable and is
mapped to the vertical axis.

Figure 11-3 Graphing of Dependent Variable vs. Independent Variable (the dependent variable "Throughput" is mapped to the vertical axis and the independent variable "TTRT" to the horizontal axis; the graphed entries show the relationship between them)


Statistics represented in the Results Browser are intrinsically discrete, even though they might represent relationships between variables of a continuous nature. The spacing of entries within a statistic can be arbitrary, and depends solely on the application that generates the statistic data (usually a simulation). Irregular spacing of entries on the horizontal axis is a common case because entries are often created as a function of unevenly spaced events in a simulation.

Figure 11-4 Variable Density of Entry Spacing on Horizontal Axis (some entries in the statistic have a denser distribution along the horizontal axis than others)

It is possible for a statistic to represent a mapping between abscissa and ordinate variables that is one-to-many. Therefore, a statistic does not necessarily behave as a mathematical function. For instance, multiple ordinate values may occur for one abscissa value as a result of multiple events occurring at the same simulation time, with each event generating an additional entry. One such situation is that of a queue that receives many packets at once: as each packet is enqueued, the queue size statistic increases by one, and if the queue is capable of accepting multiple packets at once, multiple values of this statistic will be coincident in time.

Figure 11-5 Statistic with Multiple Entries Sharing Same Abscissa


Several special entries are defined to represent features of statistics that do not
correspond to ordinary numerical data. In some cases, these features are
naturally incorporated into certain statistics based on their definitions; in other
cases they result from transformations of ordinary statistical data by the
operations of the Results Browser.

• Undefined Value—This type of entry represents a point in the statistic where the system has no knowledge of, or cannot compute, the ordinate variable, or where it does not make sense to attribute a value to the ordinate variable, based on its definition. For example, the "average queuing delay" statistic for packets in a queue is considered undefined until at least one packet has entered the queue. Similarly, the "signal-to-noise ratio" measured at a receiver for transmissions arriving from a particular source is considered undefined while that source is not active. Undefined values also frequently result from mathematical operations that are not well defined, such as dividing one zero-valued entry by another, or multiplying an entry with an infinite value by one with a zero value.

In a graph using the discrete draw style, undefined values are simply omitted from the graph.

In a graph using the linear draw style, the line is suspended until the next defined value. When a defined value is surrounded by undefined values, it appears as a point. The following example shows how undefined value entries appear as gaps in a line graph.

Figure 11-6 Graph Discontinuities Caused by Undefined Entries (undefined entries cause a break in the statistic's graph)

The default graph style changes from linear to discrete if a graph has ten or
more disconnected lines or points. (This threshold may be lower for smaller
data sets.) You can switch the graph to linear mode in this case, but the line
will appear highly fractured.

• Infinite Value—Like an undefined value entry, this type of entry can arise
either from the definition of a statistic or from numerical manipulations
performed in the Results Browser. For example, the space available in a
queue with unlimited capacity is a statistic that has a permanently infinite
value. A common numerical manipulation that generates infinite value
entries is dividing a nonzero value entry by one with a zero value. Negative
infinite values are differentiated from positive infinite values.


Like an undefined value, an infinite value is omitted from a graph drawn with
the discrete style. The linear style distinguishes an infinite value from an
undefined value by drawing a line straight up or down (to positive or negative
infinity) from the last finite value and then back again to the next finite value.
A positive-infinity-to-negative-infinity transition appears as a straight
top-to-bottom line. The following panel depicts a statistic for the function
f(x) = 1 / x, containing a negative infinite value followed immediately by
a positive infinite value.

Figure 11-7 Example of Statistic Containing Infinite Entries

• End of statistic—Certain statistics can be thought of as approximations of continuous mappings between the abscissa and ordinate variables. Because the tool is limited to a discrete representation, an interpolation method for estimating values between samples is sometimes needed. A particular case of this issue arises at the end of a statistic, when the last entry is recorded at a time that is strictly less than the ending time of the simulation. While no specific ordinate value can be given for the final abscissa, it is useful to know what this abscissa value is, both for plotting purposes and possibly for users to make their own interpolations. The statistic generation mechanisms of the Simulation Kernel and the Results Browser retain this information by placing a special entry at the final abscissa position of each statistic. This entry is called an end-of-statistic entry.

Data Sources
Analysis panels can be created by a number of different operations in the
Results Browser. Because all panels must contain at least one statistic, these
operations require a source of data on which to base the new panel. There are
two possible sources of data (only one might be applicable, depending on the
operation).

• Output Vector Files

• ASCII Representations of Statistic Data


Output Vector Files

Output vector files are usually generated by simulations to store dynamic statistics (i.e., statistics that vary as a function of simulation time). However, they can also be generated via the External Model Access (Ema) interface, which is OPNET Modeler's general API (application program interface) for proprietary file formats. In both cases, an output vector file consists of a set of vectors and a directory that describes the vectors' locations within the file, to support fast access. Each vector has essentially the same content as a statistic, including a series of abscissa-ordinate entries. In an output vector generated by a simulation, the abscissa variable generally represents simulation time; however, if Ema is used to build the file, the abscissa can represent any user-defined variable.

Vectors can be loaded into the Results Browser to serve as the basis of most of
the available operations. The simplest vector-loading operation allows one
statistic to be viewed in a panel. Numerous filters can also be applied to the
data. For more information, see View Results on page ITG-3-37.

Some cosimulation objects produce enumerated output vector files. These vector files do not contain a scalar vertical axis, but rather a vertical axis with an enumerated set of values. A specified dictionary file defines the possible ordinate values. Statistics produced from enumerated output vector files will not work with some of the filters and functions available in the Results Browser (a Probability Density panel, for example).

Output vector files also store scalar statistics, which are stored as individual real
numbers. Typically, each scalar statistic accumulates one value per simulation,
although it is possible to accumulate multiple values in one simulation. A scalar
statistic can be thought of as a “summary” of some aspect of the system’s
behavior or performance, as evidenced during one particular simulation run.
Scalar statistics can also represent system input or operating conditions,
obtained either from model attributes or from measurements made during the
simulation. See Chapter 10 Simulation Design on page MC-10-1 for more
information about generating output scalars.

Because scalar statistics do not depend on time, but on other quantities in the
system, they cannot be plotted without choosing another variable with which a
dependency can be expressed. The Results Browser therefore supports plotting
of scalar statistics “against” one another, using the DES Parametric Studies
page. Plotting scalar Y against scalar X shows the possible values of scalar Y
for individual values of scalar X. If there are several values of Y for a given value
of X (e.g., in different simulations using distinct random number seeds), then
several vertically “stacked” data points appear in the graph.
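The stacking effect can be mimicked outside the tool. The sketch below (with invented numbers, unrelated to any actual OPNET output file) groups scalar samples from several seeded runs by their shared abscissa value:

    from collections import defaultdict

    # One (processing_speed, peak_queue_size) pair per simulation run; several
    # runs share an X value but were executed with different random seeds.
    runs = [(100, 12), (100, 15), (100, 14), (200, 8), (200, 9), (400, 5)]

    stacks = defaultdict(list)
    for x, y in runs:
        stacks[x].append(y)        # Y values "stack" vertically at each X

    for x in sorted(stacks):
        print(x, stacks[x])        # e.g. 100 [12, 15, 14]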


The following figure of a scalar panel in the Results Browser shows this stacking effect.

Figure 11-8 Two Output Scalars Plotted Against Each Other (for each value of processing speed, several different values of Peak Queue Size have been recorded in different vector files)

Note that the relationship that is shown in a scalar plot is not necessarily due to
an inherent dependency between the output scalars. The plot merely shows
how the two quantities varied simultaneously over a series of experiments. The
causal nature of the relationship between the two variables must be inferred by
the user based on additional knowledge about the actual meaning of these
variables. OPNET Modeler is not able to make such inferences in an automated
fashion.

The Results Browser supports a second approach to visualizing scalar data that is useful when the relationship between three scalar variables is of interest. The supporting panel is called a parametric scalar panel and is created on the DES Parametric Studies tabbed page. In a parametric scalar panel, an abscissa variable and an ordinate variable play the same roles as in an ordinary scalar panel. However, a third variable, called the "parameter", is used to separate the sets of resulting points into distinct subsets. In each subset, the parameter has a fixed value which is distinct from the parameter's value in each of the other subsets. The result is a "family" of curves plotted in the panel, as shown in the following example.

Figure 11-9 Three Output Scalars in Parametric Scalar Form (each curve corresponds to a distinct value of the "Packet Size" scalar parameter; for that value of "Packet Size", it shows the relationship between "Queue Size" and "Processing Speed")

ASCII Representations of Statistic Data

Graphical plotting is the main form of information display used in the Results Browser to visualize relationships between variables. However, in some cases, visual representation may be ambiguous due to limited screen space and resolution. For example, two values that are distinct but extremely close to each other may be interpreted as being equal, or the points that represent them may occlude each other. In addition, it is sometimes necessary to know the exact, or near-exact, values shown in the statistics. The Results Browser provides two operations that allow users to obtain more detailed knowledge of statistic contents:

• The Statistic Data option

• The General Statistic Information option

For instructions on both options, see Project Editor on page ER-3-1.

Statistic Data Option The most detailed view of a statistic's data can be obtained by using the Statistic Data option, which is provided by the Statistic Information operation. This option displays the explicit contents of the statistics that the panel includes. The statistics' lengths and axis labels are given, as well as each entry's abscissa and ordinate value. This operation applies to the visible portion of the panel, meaning that if a panel's axes bounds have been modified, or if the zoom operation has been used, less than the statistic's full content may be displayed. The following panel and editing pad illustrate the capability provided by the Statistic Data option.

Figure 11-10 Analysis Panel and Equivalent Textual Representation

The Statistic Data option shows abscissa and ordinate values for each point in the statistic. The reported "number of values" does not include undefined entries or the end-of-statistic marker; the "length" includes both. You can edit out undesired values, then use Build New Statistic to create a new graph with only the relevant data.

General Statistic Information Option To allow users to quickly obtain high-level information about the statistic(s) in an existing panel, the Results Browser provides the General Statistic Info option of the Statistic Info… operation. The information is provided on a per-statistic basis and applies to the portion of a statistic that falls within the abscissa range of the panel (though entries immediately preceding and immediately following the range may be included as well). Therefore, if a panel's full span has been reduced by editing the vertical or horizontal scales, or by zooming, less than the full content of the statistic will be taken into account. The following table explains the information provided by this option:

Table 11-2 General Statistic Information

  length                Number of entries in the statistic (including the special
                        end-of-statistic entry).

  number of values      Number of entries in the statistic, not including undefined
                        values or the end-of-statistic marker.

  horizontal min, max   Minimum and maximum abscissa values.

  vertical min, max     Minimum and maximum ordinate values.

  initial value         Ordinate value of the first entry.

  final value           Ordinate value of the last entry.

  expected value        Average value of the ordinate variable treated as a step
                        function (i.e., using a sample-and-hold interpretation of the
                        data), weighting each entry by the abscissa interval until the
                        next entry; corresponds to the calculation performed by the
                        "time-average" filter.

  sample mean           Mean value of the entries' ordinates computed by weighting
                        all entries equally; corresponds to the calculation performed
                        by the "average" filter.

  variance              Variance of the ordinate values; this is the mean value of
                        the squared deviation from the sample mean.

  standard deviation    Square root of the variance; represents the typical distance
                        between an ordinate value and the mean ordinate value.

  confidence intervals  Intervals estimated to contain the true mean of the entries'
                        ordinate values, at five separate levels of confidence;
                        calculations are based on principles described in Computing
                        Confidence in Simulation Results on page MC-11-20; these
                        results are meaningful only if the entries are independent
                        measurements.
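The difference between "expected value" and "sample mean" in the table above is easiest to see in code. The sketch below mirrors the stated definitions (it is an illustration, not the Results Browser's implementation), treating a statistic as a list of (time, value) entries:

    def sample_mean(entries):
        # "average" filter: every entry weighted equally.
        return sum(y for _, y in entries) / len(entries)

    def time_average(entries):
        # "time-average" filter: each ordinate is held (sample-and-hold) and
        # weighted by the abscissa interval until the next entry.
        total = 0.0
        for (x0, y0), (x1, _) in zip(entries, entries[1:]):
            total += y0 * (x1 - x0)
        return total / (entries[-1][0] - entries[0][0])

    entries = [(0.0, 0.0), (1.0, 4.0), (9.0, 1.0), (10.0, 1.0)]
    print(sample_mean(entries))    # 1.5: all four entries weighted equally
    print(time_average(entries))   # 3.3: (0*1 + 4*8 + 1*1) / 10

The two results differ sharply here because the value 4.0 is held for most of the time span but accounts for only one of the four entries.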

Exporting Vectors and Statistics

Three interfaces allow output vectors and statistics to be exported to and from the OPNET Modeler environment:

• The External Model Access (Ema) interface allows an output vector file to be created, or data to be extracted from one. See External Model Access on page MFA-1-1 for details.


• The Statistic Information operation displays statistic data as text. You can
then use the edit pad operations to export the data to a text file. For more
information, see Project Editor on page ER-3-1.

• The Export Data to Spreadsheet operation converts the data to a text file that
can be opened and converted by a spreadsheet program. For more
information, see Project Editor on page ER-3-1.

Template Statistics
Users of the Results Browser frequently find that they must execute the same
operations to view data from different simulation runs. In other words, after each
simulation, or set of simulations, the same statistics are loaded into the Results
Browser, with only the content of those statistics changing. This leads to the
notion that the specification for the manipulations and presentation of data can
be saved independently from the data itself. The specifications can then be
simply “applied” to data resulting from new simulations, to automatically obtain
processed and displayed information. The Results Browser supports this
capability with a feature called template statistics.

Each graph in an analysis panel can be given a special status called “template”.
A template graph contains no data (it is stripped of its data at the time that it
becomes a template). However, it does contain all of the configuration
information, such as the name of the original vectors or scalars that were used
to create it, and the operations that might have been applied to that data. It also
contains display information such as draw style and color. In other words, only
the graph’s entries (abscissa-ordinate pairs) are missing. The Results Browser
provides several operations that support converting graphs from ordinary form
to template form. See Project Editor on page ER-3-1 for more information.

The utility of a template graph is that it can again become an ordinary graph by using its configuration information to display new data that is "applied" to it. The new graph data need only match the graph's requirements—namely, that the names of the original scalar or vector statistics be the same. Using this feature, the output statistics from many different simulations can be automatically processed and displayed in an identical manner, without having to go through the individual steps required to generate each graph. The Results Browser supports applying data to template panels when the data is loaded from output files. In other words, when an output vector file is opened, the Results Browser provides the option to match the data against the template graphs' specifications and "fill in" the data if possible.

Figure 11-11 Successive Applications of OV Data to a Template Panel (four separate output vector (OV) files are applied to the template panel to generate graphs of the same statistic for four separate simulations)


The Results Browser provides the flexibility of changing individual graphs to templates, allowing panels to simultaneously contain template and ordinary graphs. However, in typical cases, all of the graphs in a panel are converted to template form at the same time (the Results Browser provides a "global" operation to do this). A panel containing only template graphs is referred to as a template panel. The following figure shows examples of these cases.

Figure 11-12 Application of OV Data to a Mixed Template Panel

The panel originally contains two statistics, but the "Simulated Queue Size" statistic is a template, allowing its data to be supplied later. The "Measured Queue Size" is an ordinary statistic, containing "reference data". After the OV file is applied, the "Simulated Queue Size" becomes an ordinary statistic, allowing the OV data to be compared to the "reference data".

Note—Output files can be applied to graphs at any time and only modify those
graphs that are selected and that match the data. This allows successive
applications to be performed to the same template panel to progressively fill in
additional data.

Data Presentation
The Results Browser offers a number of options with regard to the graphical
presentation of a panel. These options never affect the data content of the
panel, but only the manner in which the data is displayed. Access to the
presentation options is via the Edit Panel Properties and Edit Graph Properties
operations, which are activated by clicking with the right mouse button while the
cursor is in a panel or graph, respectively.


Graphs
The figures shown previously in this section show analysis panels that contain one graph. An analysis panel can have more than one graph, however, as long as all graphs share the same horizontal axis. While the vertical axes of the graphs in a panel may differ, the horizontal axis is common to all of them. Because of this, separate graphs in an analysis panel stack vertically, as shown in the following figure.

Figure 11-13 Two Graphs In One Analysis Panel

You can create a panel with multiple graphs or add graphs to the analysis panel
later.

Panels
Graphs reside in an analysis panel. Right-clicking in the panel, as opposed to in a graph, brings up the Edit Panel Properties dialog box, which allows the appearance of the horizontal axis to be changed or the draw style for all statistics in the panel to be set globally. The Statistic Info operation provides useful information about the statistics contained in the panel, including the data points themselves. The Edit Panel Properties dialog box also allows additional graphs to be added to the panel.

Drawing Style
Each graph within a panel can be assigned one of five possible graphical representations, called the graph's draw style. Each graph's draw style is controlled independently of the draw styles of other graphs. The drawing styles are discrete, linear, sample-hold, bar (bar chart), and square-wave.

The discrete draw style provides the most direct view of the actual data content
because one “dot” is used to represent each entry in the statistic (provided that
the ordinate is not undefined). Because no attempt is made to attribute ordinate
values to intermediate abscissa values, as is intrinsically done by the other draw
styles, the discrete drawing style is most appropriate for graphs that represent
a set of independent samples where intermediate values are not well defined.


For example, a typical statistic resulting from measuring the end-to-end delay of each packet at the time it is received is plotted below. Though it may be of interest in some cases to use the linear draw style to emphasize a trend in the discrete points, estimating the delay value at times between packet arrivals does not correspond to a measurement that could actually be taken.

Figure 11-14 Statistic Plotted using Discrete Draw Style

The linear draw style consists of drawing line segments between the points that
are defined by a statistic’s entries. One of the uses of this style is to represent
intermediate points for which the statistic contains no samples, but which can
be assumed to exist nonetheless. A common example of this is for panels
containing scalar data, where each point represents the result collected by a
simulation; the linear draw style can be used to “fill in” or approximate parts of
the curve that lie between available data points, as shown in the following
example.

Figure 11-15 Scalar Data Plotted with Linear Draw Style


Because the resulting graph is without breaks (except at undefined points), the linear draw style is also sometimes used simply to emphasize the trend in a statistic, even if the statistic is discrete in nature. An example of this is shown in the panel below, which contains the statistic for the size of a queue (i.e., the number of packets it contains) as it varies over time.

Figure 11-16 Discrete Variable Plotted Using Linear Draw Style

The sample-hold draw style is based on the notion that between abscissa
values, no new information is known about certain types of statistics, and
therefore these statistics should be assumed to maintain their previous ordinate
value. This interpretation of a statistic’s discrete set of entries makes sense for
many statistics collected in OPNET Modeler-based simulations. Any statistic
that represents a counter of some type, such as a queue size, the number of
packets received without errors, or the number of times a queue has
overflowed, inherently maintains its value until a new sample is obtained.
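As a hedged illustration of this sample-and-hold interpretation (the helper below is hypothetical, not a Results Browser function), the value of a counter-like statistic at an arbitrary time is the ordinate of the most recent entry at or before that time:

    import bisect

    def sample_hold_value(entries, t):
        # entries: (time, value) pairs sorted by time.
        times = [x for x, _ in entries]
        i = bisect.bisect_right(times, t) - 1
        if i < 0:
            return None            # before the first entry: undefined
        return entries[i][1]       # hold the most recent ordinate

    entries = [(0.0, 0), (1.2, 1), (1.5, 2), (4.0, 1)]
    print(sample_hold_value(entries, 3.0))   # 2: the sample at t=1.5 is held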

Figure 11-17 Counter Variable Plotted Using Sample-Hold Draw Style


The bar draw style is essentially a simple extension of the sample-hold draw style, in which the horizontal segment drawn at each entry is extended into a filled-in bar that reaches down to the horizontal axis. This is the traditional "bar chart", which is useful for expressing the weight associated with each recorded abscissa value. This style is therefore often used to represent histogram data and probability distributions.

Figure 11-18 Probability Distribution Plotted Using Bar Draw Style

The square wave draw style is similar to both the sample-hold draw style and
the bar draw style. It is, in effect, a bar graph that is not filled in. Vertical lines
connect each horizontal segment, but the horizontal segments do not extend to
the abscissa.

Figure 11-19 Data Plotted Using Square Wave Draw Style

Analysis-Panel Annotation Objects

You can create "analysis panel" annotation objects to display simulation results directly in the network (rather than in a separate window).

Analysis-panel annotations behave much like other annotations, such as text, lines, circles, and squares. One major advantage of using panel annotations is that they appear when you print your topology or export it to bitmaps, Visio files, and web reports.


Creating Analysis-Panel Annotations

To create a panel annotation, open an analysis panel and configure it the way you want the annotation to appear. Then right-click in the panel background and choose Make Panel Annotation in Network. You can also create an annotation by double-clicking in the panel background.

When you create a panel annotation, the original analysis panel window
disappears and the annotation displays simulation results as they appeared in
the panel window.

Figure 11-20 An Analysis-Panel Annotation Object

The annotation displays simulation results and panel/graph properties as they appeared in the original window. Right-click on an annotation to edit attributes, set view properties, or open the original analysis panel.

Editing Panel Annotations

You can edit the graph and panel properties of an annotation by double-clicking on it, or by right-clicking and choosing Open Analysis Panel. Either action opens the original analysis panel; at the same time, the message PANEL OPEN appears in the annotation.

After you edit the panel, double-click in the panel background (or right-click and
choose Make Panel Annotation in Network). The window again disappears and
the annotation displays the updated results.

Annotation View Properties

You can set an annotation's view properties by right-clicking on it. You can specify one of three settings:

• Fixed size (default)—The annotation is fixed at the same size as the original panel window and retains its position in the Project Editor window (regardless of the view's zoom level or location in the network).


• Resizable—The annotation is initially the original panel size but can be manually resized. A resizable annotation has a fixed position in the network, rather than in the Project Editor window. As a result, the annotation responds to changes in the view (by resizing in response to changes in the zoom level, and by staying "pinned" to a particular network object when you scroll or move to a different subnetwork).

• Thumbnail—The annotation is reduced to a small fixed-size icon. Like a resizable annotation, a thumbnail annotation is pinned to a specific location in the network. Thumbnail icons are useful for creating "placeholders" for objects for which you want to view results. You can create a thumbnail icon next to a node or link, and then simply click on the thumbnail to open a panel and view results for that object.

Computing Confidence in Simulation Results

As explained in Chapter 10 Simulation Design on page MC-10-1, system models that include stochastic behavior have results that depend on the initial seeding of the random number generator. Because a particular random seed selection can potentially result in anomalous, or non-representative, behavior, it is important for each model configuration to be exercised with several random number seeds, to be able to determine standard, or typical, behavior. The basic principle applied here is that if a typical behavior exists, and if many independent trials are performed, it is likely that a significant majority of these trials will fall within a close range of the standard.

One of the important issues a simulation designer must confront when performing simulations that incorporate stochastic processes is deciding how many separately seeded trials to run to have sufficient data to make a statement about the system's typical behavior. Of course, the simulation designer can never be absolutely certain that the results obtained are representative, because it is possible to be "unlucky" in the sense that all, or most, of the chosen random seeds could lead to anomalous behavior. However, as more seeds are chosen, the possibility of this happening becomes more remote, because standard behavior should, after all, be observed more frequently than anomalous behavior. This leads to the notion of attempting to achieve a certain level of confidence in the results obtained from a simulation study, by ensuring that a sufficient number of simulations are performed.

Confidence Intervals
The field of statistics provides methods for calculating confidence in an
estimate, based on a trial or series of random trials. The techniques that it
provides are also frequently used in applied sciences where field
measurements are subject to error, and multiple measurements are taken to
attempt to place a bound on the magnitude of that error. The Results Browser
provides a basic capability in this area by automatically calculating and
displaying confidence intervals for statistics already contained within panels.
This capability is supported by the Show Confidence Interval checkbox in the
Edit Graph Properties dialog box.


The confidence intervals calculated by the Results Browser are for the mean ordinate value of a set of entries. For the purposes of this operation, entry sets are defined by collocation at the same abscissa. This approach to calculating confidence intervals is designed primarily to support confidence estimation for scalar data collected in multi-seed parametric experiments, where one or more input parameters are varied and, for each input parameter value, multiple random number seeds are used to obtain multiple output values. The type of statistic that results from this kind of simulation study (prior to confidence interval calculation) is illustrated by the example below. The vertical "columns" of entries correspond to the multiple experiments run by varying the random seed while maintaining a fixed value for an input parameter.

Figure 11-21 Statistic Consisting of Scalar Data from Multiple Simulation Runs (the stacking of values along the same vertical line corresponds to multiple simulation runs with different random seeds)

Suppose that a number of simulations of a system have been run with different random number seeds to obtain N samples of the statistic X. Even though X may take on many values, and X's precise distribution is unknown, it is possible to define a value µ, which is the true mean of the random variable X. One way to think of µ is as the mean value of an extremely large set of samples of X, if it were possible to run such a large number of simulations to obtain this sample set. The reason µ is interesting as the true mean of X is that it represents the typical behavior of the modeled system with regard to the statistic X.

Because it is not usually possible to run a very large number of simulations to determine µ (theoretically, an infinite number would be required), it is interesting to determine the degree of precision with which the mean value x̄ of an N-sample set approximates µ. This determines whether the value x̄ can be used with confidence to make statements about the typical behavior of the modeled system.


The fundamental principle used in establishing confidence in a measurement x̄ is called the central limit theorem. To understand this theorem, consider the experiment consisting of the collection of N samples of X and the calculation of the average value x̄ of the N samples; in other words, x̄ is considered to be the result of one trial of the experiment. Then consider that this experiment may be performed many times, and that the resulting statistic X̄ has its own distribution. The central limit theorem states that regardless of X's actual distribution (this is an important generalization, because very little may be known about X), as the number of samples N grows large, the random variable X̄ has a distribution that approaches that of a normal random variable with mean µ, the same mean as the random variable X itself. The theorem further states that if the true variance of X is σ², then the variance of the statistic X̄ is σ²/N.

The utility of this theorem, with respect to establishing confidence in a measurement, lies in the fact that if sufficient samples are taken, a normal distribution (which has known properties) can be worked with, rather than the unknown distribution of X. Assuming that a sufficient sample size N has been chosen, the sample x̄ of the variable X̄ taken from a sequence of simulations can be placed on the hypothetical sampling distribution of X̄, as shown below.

Figure 11-22 Sampling Distribution of X̄ (a normal curve F(X̄) centered at µ with standard deviation σ/√N; a randomly obtained sample x̄ is marked on the horizontal axis)

Because the distribution of X̄ is normal, the probability that the random sample x̄ falls within a particular distance of µ can be computed. Usually this distance is measured in terms of the number of standard deviations that separate the random sample from the mean. This way, a "standardized normal variable" $z = (\bar{x} - \mu) / \sigma_{\bar{X}}$ is defined, for which the standard deviation is unity and the mean is zero, as shown below.

Figure 11-23 Sampling Distribution of $z = (\bar{x} - \mu) / \sigma_{\bar{X}}$ (a standard normal curve F(z) centered at zero)


If the positive value zα is defined such that Prob(−zα < z < zα) = α, then the following statement can be made by substituting for z (note: most standard statistics textbooks provide tables mapping α to zα, or equivalent variables). This statement can simply be thought of as defining the probability that x̄ is within a particular distance of µ, based on the fact that the distribution of X̄ is normal.

$$\mathrm{Prob}\left( \left| \frac{\bar{x} - \mu}{\sigma_{\bar{X}}} \right| < z_\alpha \right) = \alpha$$

Conversely, this statement can be thought of as assigning a probability to the condition that the true mean µ is within a particular distance of the random sample x̄, as shown below (note that the standard deviation of X̄ is expressed in terms of the standard deviation of X, based on the central limit theorem).

$$\mathrm{Prob}\left( \bar{x} - z_\alpha \frac{\sigma}{\sqrt{N}} < \mu < \bar{x} + z_\alpha \frac{\sigma}{\sqrt{N}} \right) = \alpha$$

This statement introduces the notion of a confidence interval for µ, which is defined to be the interval of real numbers [ΘL, ΘR] such that the probability that µ lies within this interval has a particular value α. This interval is referred to as the 100α percent confidence interval for µ. In other words, if α = 0.95, the interval is called the 95% confidence interval for µ, because there is a 95% certainty level that the true mean of X lies within the interval's bounds. It is clear from the preceding inequalities that the confidence interval limits can be expressed as follows:

$$\Theta_L = \bar{x} - z_\alpha \frac{\sigma}{\sqrt{N}}, \qquad \Theta_R = \bar{x} + z_\alpha \frac{\sigma}{\sqrt{N}}$$

From these definitions, it is clear that the confidence interval widens as the degree of confidence increases. This makes sense: to achieve a high level of confidence that the true mean is within a particular interval, one can expect to make a less restrictive hypothesis about that interval; similarly, if one is willing to accept a lower degree of confidence, a more constraining hypothesis can be made about the interval. As an extreme example, one can be 100% confident that the value µ lies between negative and positive infinity. In practice, a few particular confidence levels are chosen, as shown in the following table.

Table 11-3 Selected Confidence Levels

  confidence level α    zα
  99%                   2.575
  98%                   2.327
  95%                   1.960
  90%                   1.645
  80%                   1.282
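As a quick worked example (all numbers invented for illustration): suppose N = 36 independent delay scalars have sample mean x̄ = 42.0 ms and standard deviation σ = 6.0 ms. Applying the confidence limit expressions with the 95% value from the table:

    import math

    x_bar, sigma, n = 42.0, 6.0, 36
    z_alpha = 1.96                                  # 95% level (Table 11-3)
    half_width = z_alpha * sigma / math.sqrt(n)     # 1.96 * 6.0 / 6.0 = 1.96
    print(x_bar - half_width, x_bar + half_width)   # 40.04 .. 43.96 ms

That is, there is 95% confidence that the true mean delay µ lies between roughly 40.0 and 44.0 ms.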

The expressions for a confidence interval on µ may also be viewed as providing information concerning the number of trials N that must be executed to achieve a particular degree of confidence that the error in estimating µ is less than a specified value. This can easily be seen by rearranging the above equations.

$$\mathrm{Prob}\left( |\bar{x} - \mu| < z_\alpha \frac{\sigma}{\sqrt{N}} \right) = \alpha$$

Because x̄ is the estimator for µ, the error is simply the absolute value of the difference between these two values. Then, if e is the upper bound on the error with certainty α, the number of required samples N is given by:

$$e = z_\alpha \frac{\sigma}{\sqrt{N}} \;\Rightarrow\; N = \left( \frac{z_\alpha \sigma}{e} \right)^2$$
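Continuing the invented example above: to be 95% confident that x̄ estimates µ to within e = 0.5 ms when σ = 6.0 ms, the required number of trials is:

    z_alpha, sigma, e = 1.96, 6.0, 0.5
    n_required = (z_alpha * sigma / e) ** 2   # (1.96 * 6.0 / 0.5)^2 ~= 553.2
    print(n_required)                         # round up: run 554 trials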

Small Sample Confidence Estimates

Note that the above expressions for confidence limits on the mean µ rely on knowledge of the standard deviation σ. However, this quantity is not necessarily known, because the actual distribution of X is not known. If the sample size N is sufficiently large (generally >= 30), the sample variance can be used in place of the true variance to compute confidence intervals.

For cases where the variance is unknown and the sample size is small, a method is used that is similar to the one described above, but is based on the T-distribution rather than the normal distribution. The T-distribution resembles the normal distribution in its characteristic "bell curve" shape. However, this distribution is based on the use of the sample variance rather than the assumed or known variance. It is therefore useful for simulation studies where fewer than 30 samples are used to estimate µ, which is actually a frequent case.


As the number of samples becomes large, the T-distribution begins to approximate the normal distribution. In calculating confidence intervals, the Results Browser uses the sample size 30 as the threshold for switching to the normal distribution. For all sample sets with fewer than 30 values, the T-distribution is used instead. The expressions for the confidence limits in this case are similar, except that the true variance σ² is replaced by the sample variance s², and the constant tα is used rather than zα to represent areas under the distribution's curve:

$$\Theta_L = \bar{x} - t_\alpha \frac{s}{\sqrt{N}}, \qquad \Theta_R = \bar{x} + t_\alpha \frac{s}{\sqrt{N}}$$

Some common values of tα are provided in the following table. More extensive
tables are available in standard statistics textbooks.

Table 11-4 Common Values of tα

  confidence level α    tα, N = 3    tα, N = 5    tα, N = 10    tα, N = 20
  99%                   9.925        4.604        3.250         2.861
  98%                   6.965        3.747        2.821         2.539
  95%                   4.303        2.776        2.262         2.093
  90%                   2.920        2.132        1.833         1.729
  80%                   1.886        1.533        1.383         1.328
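The 30-sample threshold described above can be sketched as follows. This is an illustration of the stated rule, not the Results Browser's actual code; it uses scipy for the normal and T-distribution quantiles, and the unbiased sample variance (dividing by N − 1), a common convention for confidence intervals.

    import math
    from scipy import stats

    def confidence_interval(samples, level=0.95):
        n = len(samples)
        mean = sum(samples) / n
        s2 = sum((v - mean) ** 2 for v in samples) / (n - 1)  # sample variance
        if n >= 30:
            crit = stats.norm.ppf((1 + level) / 2)            # z_alpha
        else:
            crit = stats.t.ppf((1 + level) / 2, df=n - 1)     # t_alpha
        half = crit * math.sqrt(s2 / n)
        return mean - half, mean + half

    # Five samples: the T-distribution is used, with t_alpha = 2.776 (Table 11-4).
    print(confidence_interval([4.1, 3.8, 4.4, 4.0, 3.9]))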

When the Show Confidence Interval operation is applied to a panel, the statistics are shown such that entries aligned on the same abscissa are treated as groups. Each group is collapsed into one entry whose ordinate is the mean of the group. The confidence interval for the mean of each group is calculated using the methods described above and is then displayed as a vertical bar centered at the mean and ending at the upper and lower confidence limits. For entries that are unique at a particular abscissa, no confidence interval can be calculated; the lack of a confidence bar is indicated by a small circle surrounding the point of interest. This operation offers a choice of five confidence levels: 80%, 90%, 95%, 98%, and 99%. An example of a graph with confidence limits appears below.

Figure 11-24 Statistic with Confidence Intervals

It is usually possible to control an abscissa variable mapped to the horizontal axis such that it has the exact same value across multiple simulations. This way, the columns of scalar statistic values are perfectly vertical. However, in some cases, the independent parameter may have some variability, due to the fact that it is specified on a stochastic basis. For example, consider a simulation study where the performance characteristics of a modeled computer system are to be measured as a function of the job load applied to the system. Suppose, in addition, that the job load is specified as a parameter of the simulation, but that the specified value is actually the average job load, which controls stochastic job generators within the model. In this case, even with the job load parameter maintained at a constant value, the actual job load would itself vary from simulation to simulation, as random number seeds are changed. Therefore, if the actual job load is taken as the abscissa variable of a scalar-versus-scalar plot, the groups of points corresponding to a set of identically parameterized simulations will usually not be perfectly vertical.

Because the Show Confidence Interval operation applies only to vertically aligned groups of entries, it is sometimes necessary to eliminate the abscissa variability of a group of entries before using this operation. One possible approach that could be adopted in the computer system example, to ensure the alignment of scalar-based entries, is to record the prescribed average job load via the Kernel Procedure op_stat_scalar_write(), rather than the actual measured average. This way, the exact same abscissa value would be obtained, regardless of the random number seed.


Confidence intervals are computed regardless of the actual content of a panel, but the provided result is meaningful only for data sets of a certain nature. In particular, the entries of the panel's statistic(s) must be considered independent samples of a random variable; this determination is left to the user, because the Results Browser has no information concerning the source of the statistic data. Under appropriate circumstances, the reported confidence limits for the five confidence levels 80%, 90%, 95%, 98%, and 99% may be used to provide an interval estimate for the mean value of a series of samples contained in a statistic. To display confidence intervals, choose Edit Graph Properties from a graph pop-up menu and select the Show Confidence Intervals checkbox.

Vector/Statistic Operations
In addition to displaying statistical data, the Results Browser provides a number
of operations that can be used to transform this data to generate new statistics.
Because vectors stored in output vector files have the same data content as
statistics, these operations can also be applied directly to vectors. However, to
simplify discussion, all operations are described in terms of their application to
statistics.

The operations described are

• Histograms and Probability Profiles on page MC-11-27

• Filter Operations on page MC-11-37

• Predefined Filters on page MC-11-40

Histograms and Probability Profiles

The following operations are provided for the purpose of establishing a distribution of a sample set of collected values:

• Probability Density (PDF)

• Cumulative Distribution (CDF)

• Probability Mass (PMF)

• Histogram (Sample-Distribution)

• Histogram (Time-Distribution)

• Scatter Plot (Time-Join)

Each operation is unary (i.e., requires only one statistic as input) and produces
a new single-statistic panel to hold its result when it completes. The
computations done by each of these operations are described in this section.
See Project Editor on page ER-3-1 for instructions on their use.


Probability Density Function

The probability density function (PDF) operation can be thought of as a continuous equivalent of the PMF described later in this section. Like probability mass, probability density corresponds to the likelihood that the input statistic's ordinate lies within a specific range; however, density is evaluated in proportion to the interval of interest. Therefore, if the statistic's ordinate value has a likelihood 0.1 of falling in a given interval with width d, and also has a likelihood 0.1 of falling in a second interval with width d/2, then the probability density in the second interval is twice as high. In other words, probability density is highest where a small set of possible values has a high associated probability mass.

The actual definition of a probability density function is based on the fact that its
integral over a given interval yields the probability mass associated with that
interval. The probability mass associated with an interval can also be obtained
by computing the difference in the CDF for the upper and lower limits of the
interval. As interval widths become infinitesimally small, it can be seen that the
PDF is therefore the derivative of the CDF with respect to the outcome (i.e.,
ordinate) variable.

The relationship between a PDF and a CDF is in fact the basis for the method used by the Results Browser to compute PDFs. A CDF is first computed, as described later in this section, and a differentiation is performed to construct a PDF. Because the original statistic data is necessarily discrete, differentiation is performed in an approximate manner, by dividing the probability mass associated with an interval by the interval's width. In other words, the difference between two consecutive CDF values is divided by the difference in the corresponding ordinates. The resulting value is taken as the density associated with the interval and is placed at the interval's lower limit. Therefore, if a statistic contains two consecutive ordinate values y1 and y2, the PDF is computed as follows:

$$\mathrm{PDF}(y_1) = \frac{\mathrm{CDF}(y_2) - \mathrm{CDF}(y_1)}{y_2 - y_1}$$
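A hedged sketch of this differentiation step (mirroring the formula above, not the tool's implementation):

    def pdf_from_cdf(cdf_points):
        # cdf_points: (ordinate value, cumulative probability) pairs,
        # sorted by ordinate value.
        pdf = []
        for (y1, c1), (y2, c2) in zip(cdf_points, cdf_points[1:]):
            pdf.append((y1, (c2 - c1) / (y2 - y1)))  # density placed at y1
        return pdf                                   # one less entry than the CDF

    cdf = [(0.0, 0.25), (1.0, 0.50), (3.0, 1.00)]
    print(pdf_from_cdf(cdf))   # [(0.0, 0.25), (1.0, 0.25)]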

An immediate consequence of this computation method is that PDFs can have extremely large values when the input statistic has distinct but closely spaced ordinate values, because the (y2 − y1) difference becomes small. Therefore, if the input statistic's ordinate values are unevenly spaced (i.e., some very small differences exist, but also some significantly larger ones), PDFs can have a "spiky" or discontinuous appearance, with certain density values dwarfing others. In such cases, PDFs tend not to be as useful as the PMF or histogram operations.

A second consequence of this calculation is that the PDF contains one less entry than the CDF, due to the fact that no forward-looking difference can be calculated for the final (i.e., maximum) ordinate value.


Finally, the integral of the PDF statistic, which can be computed using the appropriate filter, produces a statistic that is identical to the CDF in its shape. However, the initial value of the CDF is lost in computing the PDF, meaning that the two statistics differ by a constant. This difference is particularly noticeable when the original statistic has a small number of distinct ordinate values, because the CDF's value for the minimum ordinate is at least the reciprocal of this number (i.e., this is the probability mass associated with the first ordinate value).

Figure 11-25 Results of PDF Operation for Regularly and Irregularly Spaced Ordinates (regularly spaced ordinates yield a smooth PDF; irregularly spaced ordinates yield a spiky PDF)

Cumulative Distribution Function

Like the probability mass function, the cumulative distribution function (CDF) of a statistic relates to the likelihood of occurrence of the statistic's ordinate values. However, rather than providing the probability mass of each ordinate's occurrence, the CDF shows the accumulated probability mass of all ordinates less than or equal to a particular ordinate, hence the term "cumulative". This form of presentation is useful when particular ordinate value thresholds are of interest. For example, it may be of interest to determine the likelihood of receiving a message whose delay exceeds a particular value, because under such conditions the packet cannot be of any practical use. In fact, in many cases, system performance requirements are stated in terms of maximum tolerances for such probabilities; e.g., "the probability of receiving a packet with delay >= 20 ms must be no greater than 0.1". The CDF's resulting statistic allows compliance with such a requirement to be readily determined by finding the threshold value on the horizontal axis and the corresponding probability on the vertical axis. In the example of this paragraph, a compliant system would be characterized by a CDF value of at least 0.90 at the 20 ms abscissa position. A possible CDF is shown below for a non-compliant system under the conditions of this example.

Figure 11-26 Using a CDF to Determine the Proportion of Entries below a Threshold (roughly 60% of throughput values are less than 120,000,000)

The computation of a CDF resembles that of a PMF in the sense that proportions for each ordinate value in the original statistic are computed. The same weight, which is the reciprocal of the number of entries, is attributed to each entry. Therefore, if there are 100 entries, each entry has a weight of 0.01, and if there are five entries whose ordinate value is y, then the ordinate y has a total probability mass of 0.05. The entries of the CDF are constructed by positioning the distinct ordinate values of the original statistic in increasing order on the abscissa, with one entry for each such value. The CDF value for the initial entry is simply the probability mass of the corresponding ordinate. The CDF value for the second entry is equal to the CDF value of the first entry augmented by the probability mass of its corresponding ordinate value, and so on. The CDF is essentially a running sum of the values of the PMF.

Two simple properties of the CDF result from the method of computation
described above: (1) because each CDF value is computed by adding a positive
probability mass to the previous value, CDFs are monotonically increasing;
(2) because the sum of all probability masses must add up to unity, all CDFs
must have a final value of 1.0; this also makes sense under the definition of the
CDF, because one would expect the likelihood of obtaining an ordinate value
less than or equal to the maximum ordinate value to simply be 1.0.
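Both properties can be seen in a short sketch of the computation (illustrative only, not the tool's code):

    from collections import Counter

    def cdf(ordinates):
        counts = Counter(ordinates)      # distinct ordinate values and frequencies
        n = len(ordinates)
        running = 0.0
        result = []
        for y in sorted(counts):
            running += counts[y] / n     # add this ordinate's probability mass
            result.append((y, running))  # monotonically increasing
        return result                    # final cumulative value is 1.0

    print(cdf([3, 3, 4, 4, 4, 5]))  # [(3, 0.333...), (4, 0.833...), (5, 1.0)]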

Probability Mass Function

The Probability Mass Function (PMF) operation allows the distribution of a statistic's ordinate values to be obtained, much in the same manner as a sample-distribution histogram. While PMFs do count entries to measure the frequency with which ordinate values occur, counters are not maintained on a per-interval basis. Instead, each distinct ordinate value is counted separately, so that only entries with the exact same ordinate can combine to produce higher PMF values.


The counters used by the PMF operation to compute the frequency of each ordinate value are normalized with respect to the total number of entries in the original statistic. In other words, the resulting PMF represents the frequency of occurrence of a particular ordinate value as a proportion of the number of occurrences of all ordinate values. Therefore, the measurement provided by a PMF can be thought of as the likelihood that an entry chosen at random among all the entries of the original statistic would have a particular ordinate value. For such a selection experiment, the likelihood of choosing a particular ordinate value is also sometimes called the probability mass of that outcome, hence the name of the operation.

The following set of data, and the accompanying statistic, illustrate the
calculation of a PMF.

Figure 11-27 Calculation of Probability Mass Function

statistic “y” ordinate values (20 samples):

    0.0   3.0
    1.0   3.0
    1.0   4.0
    1.0   4.0
    2.0   4.0
    2.0   5.0
    2.0   5.0
    2.0   6.0
    2.0   6.0
    3.0   7.0

(Each interval of the resulting PMF indicates the probability mass of the abscissa on its left edge. For example, the interval at 2.0 shows that the value 2 represents 25% of the ordinate values of the variable Y.)
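As a check on the figure's annotation, the following illustrative Python fragment (not an OPNET Modeler interface) computes the PMF of these 20 samples:

    from collections import Counter

    samples = [0.0, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 2.0, 2.0, 3.0,
               3.0, 3.0, 4.0, 4.0, 4.0, 5.0, 5.0, 6.0, 6.0, 7.0]
    pmf = {v: count / len(samples) for v, count in Counter(samples).items()}
    print(pmf[2.0])   # 0.25 -- the value 2 carries 25% of the probability mass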

The fact that distinct ordinate values are not aggregated on the basis of intervals
makes PMFs appropriate to apply to statistics that contain a relatively small
number of discrete ordinate values. In such cases, sample-distribution
histograms may be less appropriate than PMFs due to one primary problem: if
the discrete values that are present are unevenly spaced, it may be difficult to
choose a histogram interval width that provides for both good separation of the
values and a reasonable number of intervals. For example, consider a statistic
containing the three ordinate values 0.0, 0.001, and 1000.0. To treat the values
distinctly, a sample-distribution histogram would require an interval width that is
the smallest difference between consecutive values, or in this case 0.001.
However, the highest value, 1000.0, can only be encompassed with one million
intervals in this case, causing the sample-distribution histogram to produce an
extremely large statistic.


Conversely, PMFs may not provide significant insight into the characteristics of
statistics containing a very diverse set of ordinate values. This is due to the fact
that each ordinate value is separately counted and that as a result, little can be
said about which ordinate region(s) exhibit the highest density in terms of the
statistic’s presence. In the extreme case, if each ordinate value in the original
statistic is unique, then the resulting PMF is flat, with a constant value equal to
the reciprocal of the number of entries, providing almost no visually apparent
information on the distribution of the values.

Histogram (Sample-Distribution)
The sample-distribution histogram of a statistic reflects the distribution of its
ordinate values over evenly spaced intervals of the vertical axis. The vertical
axis is divided into N distinct intervals beginning at the lower bound and ending
at the upper bound. By default, N is 100, but this value may vary according to a
user-selected interval width. For each interval, the Sample-Distribution
Histogram operation then creates and initializes a separate counter to represent
the frequency with which entries occur in that interval. Subsequently, the entire
statistic is traversed and each entry analyzed; the counter whose interval
contains the entry’s ordinate value is incremented by one.

The statistic that results from this operation contains N entries corresponding to
the N intervals; because these intervals divide the vertical axis of the original
statistic, they appear on the horizontal axis of the new statistic, and the vertical
axis corresponds to the frequencies of occurrence held in the N counters. Note
from the description of this computation, that abscissa values in the original
statistic are not relevant to the sample-distribution histogram. As an example,
consider computing a histogram for the following set of entries:

abscissa “x”   ordinate “y”
1.0            1.0
2.0            4.0
3.0            1.0
4.0            2.0
5.0            1.0
6.0            3.0
7.0            4.0
8.0            6.0
9.0            1.0
10.0           0.0


The ordinate values of this statistic range from 0.0 to 6.0 and are all integers.
The default setting of 100 intervals would create far more intervals than there
are values, yielding an essentially empty histogram. An interval size of 1.0 is
more sensible. The counting process performed by the sample-distribution
histogram operation is summarized by the table below. Notice that intervals are
inclusive of their lower bound, but not of their upper bound, so that they provide
a complete partitioning of the vertical axis within its range, but do not overlap
with each other.

Interval         Frequency of Occurrence
0.0 <= y < 1.0   1
1.0 <= y < 2.0   4
2.0 <= y < 3.0   1
3.0 <= y < 4.0   1
4.0 <= y < 5.0   2
5.0 <= y < 6.0   0
6.0 <= y < 7.0   1
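The counting procedure can be summarized by the following Python sketch; it is illustrative only (the function name, the list-of-(abscissa, ordinate)-tuples representation, and the use of the data minimum as the lower bound are assumptions for the example):

    def sample_histogram(entries, width=1.0):
        # Count entry ordinates into evenly spaced intervals that include
        # their lower bound but not their upper bound; abscissas are ignored.
        ordinates = [y for _, y in entries]
        lower = min(ordinates)
        n_bins = int((max(ordinates) - lower) // width) + 1
        counts = [0] * n_bins
        for y in ordinates:
            counts[int((y - lower) // width)] += 1
        return [(lower + i * width, c) for i, c in enumerate(counts)]

    entries = [(1.0, 1.0), (2.0, 4.0), (3.0, 1.0), (4.0, 2.0), (5.0, 1.0),
               (6.0, 3.0), (7.0, 4.0), (8.0, 6.0), (9.0, 1.0), (10.0, 0.0)]
    print(sample_histogram(entries))
    # [(0.0, 1), (1.0, 4), (2.0, 1), (3.0, 1), (4.0, 2), (5.0, 0), (6.0, 1)]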

The statistic resulting from the sample-distribution histogram operation is easily
obtained from the frequency table above. The “y” label now appears on the
horizontal axis and the vertical axis measures the frequency of occurrence. The
profile of the histogram indicates the degree to which regions of the variable y’s
range are occupied by the original statistic. In general, this is viewed as directly
related to the likelihood that a randomly selected sample of the variable y will
fall within a particular interval. This is illustrated for the example data set shown
in the following figure.

Figure 11-28 Sample-Distribution Histogram for Y Variable
(One interval indicates that there are 4 entries with y = 1.0; another indicates that there are zero entries with y = 5.0.)


For more complex input statistics containing a richer set of ordinate values,
sample-distribution histograms can be interpreted as a density profile, showing
where the ordinate values are concentrated.

The following figure shows this interpretation of the sample-distribution
histogram.

Figure 11-29 Sample-Distribution Histogram Showing Density Profile of Input Statistic
(The marked intervals indicate where the ordinate values of the input statistic are most concentrated.)

Histogram (Time-Distribution)
Time-distribution histograms resemble sample-distribution histograms in the
sense that they establish a profile for the ordinate value of a statistic. The
resulting profile shows how frequently the ordinate value of the statistic lies
within specific ranges. Therefore, this operation divides the vertical axis into
intervals in the same manner as the sample-distribution histogram. However,
rather than use the number of entries falling within each interval as the measure
of frequency, a time-distribution histogram is based on the “time spent” by the
statistic within the intervals. In other words, ordinate values are still the basis for
the histogram, but weighting of each entry is performed differently:
sample-distribution histograms weight each entry with a coefficient of 1.0;
time-distribution histograms weight each entry with the difference between its
abscissa value and the abscissa value of the next entry.

Time-distribution histograms are computed in much the same way as
sample-distribution histograms, except that for each interval, an accumulator is
used to total the abscissa interval widths. The following example shows the
computation procedure for a specific set of data. Note that the ordinate values
are identical to those used in the example for the sample-distribution histogram
in the previous section; only the abscissas are changed. Because
sample-distribution histograms are not sensitive to abscissa values, the result
for this statistic would be identical to the one shown in the previous section.
Therefore, the result shown here can be used to contrast the behavior of the two
histogram methods.

abscissa “x”   ordinate “y”
1.0            1.0
1.5            4.0
3.0            1.0
3.25           2.0
4.0            1.0
5.25           3.0
7.0            4.0
7.75           6.0
8.0            1.0
8.5            0.0
9.0            end-of-statistic

The ordinate values of this statistic range from 0.0 to 6.0 and are all integers.
The default setting of 100 intervals would create far more intervals than there
are values, yielding an essentially empty histogram. An interval size of 1.0 is
therefore more sensible. The calculation performed by the time-distribution
histogram operation is summarized by the table below. For each interval an
accumulator variable is maintained to compute the total abscissa span for which
the statistic’s ordinate falls within the interval. Notice that intervals are inclusive
of their lower bound, but not of their upper bound, so that they provide a
complete partitioning of the vertical axis within its range, but do not overlap with
each other.

Interval         abscissa-span accumulator
0.0 <= y < 1.0   0.5
1.0 <= y < 2.0   2.5
2.0 <= y < 3.0   0.75
3.0 <= y < 4.0   1.75
4.0 <= y < 5.0   2.25
5.0 <= y < 6.0   0.0
6.0 <= y < 7.0   0.25
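The accumulation procedure differs from the sample-distribution sketch shown earlier only in the weight attributed to each entry, as the following illustrative Python fragment shows (the end_x argument stands in for the end-of-statistic abscissa; the names and data representation are assumptions for the example):

    def time_histogram(entries, end_x, width=1.0):
        # Accumulate, per interval, the abscissa span over which the
        # statistic's ordinate lies in that interval (sample-and-hold).
        ordinates = [y for _, y in entries]
        lower = min(ordinates)
        n_bins = int((max(ordinates) - lower) // width) + 1
        spans = [0.0] * n_bins
        xs = [x for x, _ in entries] + [end_x]
        for i, (x, y) in enumerate(entries):
            spans[int((y - lower) // width)] += xs[i + 1] - x
        return [(lower + i * width, s) for i, s in enumerate(spans)]

    entries = [(1.0, 1.0), (1.5, 4.0), (3.0, 1.0), (3.25, 2.0), (4.0, 1.0),
               (5.25, 3.0), (7.0, 4.0), (7.75, 6.0), (8.0, 1.0), (8.5, 0.0)]
    print(time_histogram(entries, end_x=9.0))
    # [(0.0, 0.5), (1.0, 2.5), (2.0, 0.75), (3.0, 1.75), (4.0, 2.25),
    #  (5.0, 0.0), (6.0, 0.25)] -- matches the accumulator table above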


Figure 11-30 Time-Distribution Histogram for Y Variable

Comparing Time-Distribution and Sample-Distribution Histograms


For certain input statistics, a time-distribution histogram yields results that have
a very similar profile to a sample-distribution histogram. In particular, for input
statistics that have regularly spaced entries in terms of abscissa values, such
as the example in the previous section, the shapes of the two histograms are
identical (note that the values shown in the histogram are not necessarily
identical because abscissa values may be spaced by a value other than 1.0).
However, for cases where abscissa values of entries are not regularly spaced,
results can vary significantly between the two methods.

In general, a time-distribution histogram is most appropriate for statistics that
measure a quantity representing state information. For these types of quantities,
the entries of the statistic are merely samples of a statistic that is defined at all
times. Examples of such quantities include the size of a queue, and the average
utilization of a channel, both measured over time. In the case of queue size, the
time-distribution histogram represents the amount of time that the queue size
actually holds a particular value, or falls within a range of values; similarly, for
average channel utilization, a time-distribution histogram indicates the total
duration during which this statistic falls in the selected intervals.

In contrast, sample-distribution histograms are more appropriate for
instantaneously measured quantities such as delays associated with received
messages, and error rates in transmitted packets. These statistics tend to
characterize the occurrence of particular events, whose frequency of
occurrence can be counted. Therefore, the sample-distribution histogram is
useful to indicate how many messages experienced a particular level of delay,
regardless of when these delays were measured. However, a
sample-distribution histogram applied to a queue size statistic indicates only
how many times the queue size changed and arrived at a particular new size;
this provides no definite information about how often one might expect to find
the queue at a particular size.


Scatter Plot (Time-Join)


The time-join operation of the Results Browser accepts two statistics and/or
vectors as inputs to create a new statistic that shows the relationship (or lack
thereof) between them. The ordinate variable of the first statistic that is selected
becomes the abscissa variable of the new statistic; the ordinate variable of the
second statistic is mapped into the ordinate variable of the new statistic. The
abscissa variable of both statistics is assumed (but not verified) to be the same
and becomes the implicit parameter used to relate the two input statistics.

This operation uses a simple mechanism to create the entries of the new
statistic. For each entry in the first statistic, an entry with equal abscissa is
searched for in the second statistic; if no exact match is found, then the nearest
entry with a lesser abscissa value is selected. The ordinate values of each of
this pair of entries are used to form an entry for the new statistic (i.e., the ordinate
of the first entry becomes the abscissa of the new entry, and the ordinate of the
second entry becomes the ordinate of the new entry).
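The pairing mechanism can be sketched as follows in illustrative Python (entries are assumed to be sorted by increasing abscissa, and the names are invented for the example):

    import bisect

    def time_join(stat_a, stat_b):
        # For each entry of stat_a, find the entry of stat_b with equal
        # abscissa, or failing that, the nearest entry with a lesser one.
        xs_b = [x for x, _ in stat_b]
        joined = []
        for x, y_a in stat_a:
            i = bisect.bisect_right(xs_b, x) - 1    # equal or nearest-lesser
            if i >= 0:
                # (ordinate of first, ordinate of second) forms the new entry
                joined.append((y_a, stat_b[i][1]))
        return joined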

The scatter plot statistic that results from this operation shows a possible
correlation between the two input statistics based on their abscissa variables. In
general, the resulting statistic is viewed using the discrete draw style and
appears as a cloud of points. If the cloud appears relatively shapeless with many
ordinates for each abscissa, and vice versa, then it can be assumed that there
is no strong correlation between the two input statistics. Otherwise, the scatter
plot statistic provides a mapping indicating either a dependency between the
ordinate variables of the two input statistics, or a correlated dependency on one
or more other factors. The following scatter plots provide examples of the
operation’s result for correlated and uncorrelated pairs of input statistics.

Figure 11-31 Examples of Scatter Plots
(One plot shows scattered variables that are uncorrelated; the other shows scattered variables that are correlated.)

Filter Operations
In addition to histogram and probability distribution functions, the Results
Browser provides the ability to transform and combine statistic data with a
variety of mathematical operators, including arithmetic, calculus, and statistical
functions. Statistics and/or vectors may be fed through computational block
diagrams called filters to generate and plot new statistics. Filters are developed
using the Filter Editor. You apply the filter to a statistic in the Results Browser
by selecting the filter from the filter pull-down menu.


Filter Models
A filter model is a specification for a computation that operates on one or more
statistics to create exactly one new statistic. Abstractly, a filter can be thought
of as one system that has a defined set of inputs and an algorithm for computing
its output. In addition to inputs and outputs, a filter also has associated
parameters that may factor into the execution of its algorithm. Inputs and
parameters are given names when a filter model is created in the Filter Editor.

Figure 11-32 Abstract Model of a Filter
(inputs 0 through n and parameters 0 through n feed the filter, which produces a single output)

Filter Model Structure Internally, a filter has a hierarchical structure, meaning
that it can be composed of other, subordinate filters. These filters can also be
composed of other subordinate filters, and so on. A filter that is composed of
other filters is referred to as a macro filter. Ultimately, at the lowest levels, all
macro filter models must consist of predefined filters provided by
OPNET Modeler. The available predefined filters are described in Predefined
Filters on page MC-11-40.

To form macro filters, existing filter models are used to create subordinate filters
and are attached using filter connections. A filter connection is defined between
the output of a filter and the input of another filter to specify the flow of statistic
data. Each filter has only one output, but this output can support outgoing
connections to any number of destination filters. However, each filter input can
be the recipient of at most one connection.

Figure 11-33 Hierarchical Structure of Filters


Exactly one output of one subordinate filter must be left unattached when
compiling a filter model. This output becomes the output of the encompassing
macro filter. Therefore, the subordinate filter with no connections attached to its
output is the last subordinate filter to execute; its output data is then
made available to the encompassing filter or to the Results Browser to create a
new panel.

Filter connections may not be used to create feedback paths within a filter
model. Feedback paths are individual connections, or sequences of
connections that would create a flow of data such that a filter input would receive
data more than once in one filter execution. Feedback conditions are detected
during the compilation process in the Filter Editor (refer to Filter Execution on
page MC-11-40). Some examples of feedback paths are shown below.

Figure 11-34 Examples of Feedback in Filter Models

In addition to preventing feedback paths, the Filter Editor also disallows circular
inclusion of filter models within macro filters. In other words, a filter model may
not appear at any level of depth within its own definition. Therefore if filter model
A incorporates a subordinate filter with model B, and model B incorporates
model C, then it would be illegal for models B or C to incorporate model A.

Promotion of Filter Inputs and Parameters


Filter parameters and inputs can be “passed up” from subordinate filters to
encompassing macro filters via a mechanism called promotion. Promotion for
filter models is similar to promotion for modeling attributes in the Process, Node,
and Network domains. For a description of the promotion concept as it applies
to these domains, see Chapter 4 Modeling Framework on page MC-4-1. The
promotion mechanism allows a property of a lower level object or model to
become the property of an encompassing, higher level object.

In the case of a filter, a numeric parameter can be set to the promoted status to
automatically become a parameter of the macro filter. That is, the promoted filter
parameter will appear in the parameter menu of the macro filter when the latter
is deployed as part of a higher-level macro filter, or when it is executed in the
Results Browser.


The inputs of a subordinate filter can be promoted simply by leaving them
unconnected. In the same manner as promoted parameters, promoted inputs
automatically become inputs of the higher-level macro filter. The promotion of
an input is apparent in that it appears in the connection input menu of the macro
filter when this macro filter is used as a component in a higher-level macro filter
model. In addition, promoted inputs are referred to in the Results Browser when
the macro filter is executed, to prompt for the selection of an input statistic.

Filter Execution
A filter can only be executed to operate on statistics in the Results Browser if
the filter model has been compiled at some earlier time in the Filter Editor.
Successful compilation is also required for a macro filter to be usable as a
component in a still higher-level macro filter. When a filter model is compiled, all
promoted inputs and parameters must be given names. These names serve to
identify the inputs and parameters when the filter model is used, both for
execution in the Results Browser and for deployment in other filter models.

When a macro filter is executed, all unconnected filter inputs must be provided
with either a statistic or a vector from an output vector file. This data, together
with assignments for promoted parameters, constitutes the input of the filter’s
computation and is responsible for directly or indirectly triggering all
computations of subordinate filters. The filter execution method follows the
data-flow paradigm, meaning that each subordinate filter may only be executed
after all its connections have received data, either directly from the
encompassing filter’s inputs, or from another subordinate filter. After a
subordinate filter is executed, the new statistic that results from its computation
is transferred from its output to each of the connected filter inputs. This may in
turn trigger the destination subordinate filters to be executed, provided that their
other inputs have also received data.

Execution completes when all subordinate filters have executed. The final filter
to execute must have no connection attached to its output. The output that it
produces is instead made available to the Results Browser to incorporate into a
new panel.
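The data-flow paradigm described above can be illustrated with a small Python sketch (purely illustrative; the representation of filters as functions and connections as tuples is an assumption for the example, not the Filter Editor's internal form):

    def run_macro_filter(filters, connections, external_inputs):
        # filters: name -> (function, arity); connections: list of
        # (source, destination, input_port) tuples;
        # external_inputs: (name, port) -> statistic.
        data = dict(external_inputs)
        pending, result = set(filters), None
        while pending:
            ready = [n for n in pending
                     if all((n, p) in data for p in range(filters[n][1]))]
            if not ready:
                raise ValueError("unfed input or feedback path")
            for name in ready:
                func, arity = filters[name]
                out = func(*[data[(name, p)] for p in range(arity)])
                pending.discard(name)
                dests = [(d, p) for s, d, p in connections if s == name]
                for dest in dests:
                    data[dest] = out        # trigger downstream filters
                if not dests:
                    result = out            # unconnected output: macro output
        return result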

Predefined Filters
Macro filter models are based at the lowest level on predefined filters that
actually do computations to generate new statistic data. The top level macro
filter and intermediate levels of macro filters merely serve to structure the user’s
approach to designing a filter model that performs a particular computation.

The predefined filters can do a variety of computations including arithmetic,
basic calculus, and statistical operations. Currently, there is no open interface
for defining new filter models other than macro filters; as a result, fundamentally
new types of computations cannot be performed with the filter facility. In the
following sections, the terms unary and binary are used to refer to filters that
require one and two inputs, respectively. Also, a filter can be assumed to have
no parameters, unless these parameters are explicitly mentioned.


The descriptions of some of the filters include equations and tables to explain
the computations that are performed. These tables and equations make use of
the following notations:

T_α(x)    ordinate value of statistic T_α when the abscissa is x
y_α[n]    ordinate value of the entry with index n of statistic T_α
x_α[n]    abscissa value of the entry with index n of statistic T_α

Note: entry indices begin at zero.

Arithmetic Filters
Adder Filter The “adder” filter is a binary filter used to combine two input
statistics T0 and T1 to generate a third statistic, Tout, which represents their
sum.

Table 11-5 Synopsis of Adder Filter

Input Statistics             Parameters   Output
two interchangeable inputs   none         sum of two input statistics

If the two input statistics T0 and T1 have exactly the same number of entries and
these entries are aligned with respect to their abscissa values, then Tout can be
computed simply by adding ordinate values for entries of equal abscissa, as
follows:

T_out(x) = T_0(x) + T_1(x)

However, if the input statistics are not initially perfectly aligned with respect to
each other, then an abscissa alignment mechanism is automatically applied by
this filter before adding is performed. Alignment consists of two steps:

1) T0 and T1 are truncated to the same minimum and maximum abscissa
values, by removing entries as necessary. This essentially corresponds to
finding the intersection of the two statistics on the horizontal axis.

2) The truncated statistics resulting from step 1 are augmented to ensure that
each statistic contains entries at the same abscissas as the other; this
involves inserting points into each of the statistics. For example, if the two
statistics had no abscissa values in common, once augmented they would
contain a number of entries equal to the sum of their original lengths. When
an entry is inserted, its ordinate value is taken to be the ordinate value of
the previous entry in the same statistic (i.e., it is assumed that the statistic’s
ordinate value remains constant until the next original entry).
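A minimal sketch of this two-step alignment, in illustrative Python (assuming statistics are lists of (abscissa, ordinate) tuples in increasing abscissa order), is shown below:

    def align(stat_a, stat_b):
        # Step 1: truncate both statistics to their common abscissa range.
        lo = max(stat_a[0][0], stat_b[0][0])
        hi = min(stat_a[-1][0], stat_b[-1][0])
        xs = sorted({x for x, _ in stat_a + stat_b if lo <= x <= hi})

        def resample(stat):
            # Step 2: insert sample-and-hold entries at every shared abscissa.
            out, y, i = [], None, 0
            for x in xs:
                while i < len(stat) and stat[i][0] <= x:
                    y = stat[i][1]      # hold the most recent ordinate
                    i += 1
                out.append((x, y))
            return out

        return resample(stat_a), resample(stat_b)

    a, b = align([(0.0, 1.0), (2.0, 3.0)], [(1.0, 10.0), (3.0, 20.0)])
    total = [(x, ya + yb) for (x, ya), (_, yb) in zip(a, b)]   # the "adder"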


After alignment has completed, the two resulting statistics can be added
directly, entry by entry. When adding entries, the rules presented in the following
table are applied (because T0 and T1 are treated identically, the table’s
corresponding column headings can be inverted to address symmetric cases).

Table 11-6 Entry Calculation Rules for Adder Filter

T0          T1          Tout
a¹          b           a + b
a           +infinity   +infinity
a           -infinity   -infinity
*           undefined   undefined
+infinity   +infinity   +infinity
-infinity   -infinity   -infinity
+infinity   -infinity   undefined

1. (a, b = real constants, * = any value)

Constant Shift Filter The “constant_shift” filter is a unary filter used to operate
on one input statistic T0 to generate a second statistic, Tout, which is a
translation of T0 by an amount ∆ along the direction of the vertical axis. The shift
quantity ∆ is a real number specified as a parameter of the filter.

Table 11-7 Synopsis of Constant Shift Filter

Input Statistics   Parameters                                         Output
single input       shift: translation distance along vertical axis   translated version of input statistic

The Tout statistic has the same number of entries as T0 and the two statistics are
aligned with each other with respect to the entries’ abscissa values. Only the
ordinate values differ by the constant ∆ as follows:

y_out[n] = y_0[n] + ∆


When computing the entries of Tout, the rules summarized in the following table
are applied (the content of the T0 and ∆ columns can be interchanged to
address symmetric cases).

Table 11-8 Entry Calculation Rules for Constant Shift Filter

T0          ∆           Tout
a¹          b           a + b
+infinity   a           +infinity
-infinity   a           -infinity
undefined   *           undefined
+infinity   +infinity   +infinity
-infinity   -infinity   -infinity
-infinity   +infinity   undefined

1. (a, b = real constant, * = any value)

Gain Filter The “gain” filter is a unary filter used to operate on one input statistic
T0 to generate a second statistic, Tout, which is a scaled version of T0 by a factor
K along the direction of the vertical axis. The scaling factor, or gain K, is a real
number specified as a parameter of the filter.

Table 11-9 Synopsis of Gain Filter

Input Statistics   Parameters                                  Output
single input       gain: scaling factor along vertical axis   scaled version of input statistic

The Tout statistic has the same number of entries as T0 and the two statistics are
aligned with each other with respect to the entries’ abscissa values. Only the
ordinate values differ by the factor K, as follows:

y_out[n] = y_0[n] · K


When computing the entries of Tout, the rules summarized in the following table
are applied (note: the content of the T0 and K columns can be interchanged to
address symmetric cases).

Table 11-10 Entry Calculation Rules for Gain Filter

T0          K           Tout
a¹          b           a · b
+infinity   a > 0       +infinity
+infinity   a < 0       -infinity
±infinity   a = 0       undefined
-infinity   a > 0       -infinity
-infinity   a < 0       +infinity
undefined   *           undefined
+infinity   +infinity   +infinity
-infinity   -infinity   +infinity
-infinity   +infinity   -infinity

1. (a, b = real constant, * = any value)
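Both unary filters above amount to a single arithmetic operation per entry, and IEEE floating-point arithmetic happens to reproduce most of the special-value rows of the tables (infinity plus a constant stays infinite, infinity times zero is undefined, and undefined values propagate). A minimal illustrative sketch, using NaN to stand in for "undefined":

    import math

    def constant_shift(stat, delta):
        # Translate every ordinate by delta along the vertical axis.
        return [(x, y + delta) for x, y in stat]

    def gain(stat, k):
        # Scale every ordinate by the factor k.
        return [(x, y * k) for x, y in stat]

    stat = [(0.0, 2.0), (1.0, math.inf), (2.0, math.nan)]
    print(constant_shift(stat, 1.5))   # [(0.0, 3.5), (1.0, inf), (2.0, nan)]
    print(gain(stat, 0.0))             # [(0.0, 0.0), (1.0, nan), (2.0, nan)]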

Multiplier Filter The “multiplier” filter is a binary filter used to combine two input
statistics T0 and T1 to generate a third statistic, Tout, which represents their
product by real-number multiplication.

Table 11-11 Synopsis of Multiplier Filter

Input Statistics             Parameters   Output
two interchangeable inputs   none         product of the two input statistics

If the two input statistics T0 and T1 have the same number of entries and these
entries are aligned with respect to their abscissa values, then Tout can be
computed simply by multiplying ordinate values for entries of equal abscissa, as
follows:

T_out(x) = T_0(x) · T_1(x)

However, if the input statistics are not initially perfectly aligned with respect to
each other, then an abscissa alignment mechanism is automatically applied by
this filter before multiplication is performed. This alignment process is identical
to that performed for the “adder” filter; a complete explanation of this
mechanism appears in the corresponding section.


After alignment has completed, the two resulting statistics can be multiplied
directly, entry by entry. When multiplying entries the rules presented in the
following table are applied (because T0 and T1 are treated identically, the table’s
corresponding column headings can be inverted to address symmetric cases).

Table 11-12 Entry Calculation Rules for Multiplier Filter

T0          T1          Tout
a¹          b           a · b
+infinity   a > 0       +infinity
+infinity   a < 0       -infinity
±infinity   a = 0       undefined
-infinity   a > 0       -infinity
-infinity   a < 0       +infinity
undefined   *           undefined
+infinity   +infinity   +infinity
-infinity   -infinity   +infinity
-infinity   +infinity   -infinity

1. (a, b = real constant, * = any value)

Reciprocal Filter The “reciprocal” filter is a unary filter used to operate on one
input statistic T0 to generate a second statistic, Tout, which is obtained by
inverting the ordinate values in T0.

Table 11-13 Synopsis of Reciprocal Filter

Input Statistics   Parameters   Output
single input       none         statistic composed of reciprocal values

Because special manipulations may be required to handle undefined points in
Tout and indeterminate forms, the resulting statistic may not have exactly the
same length as the input statistic. In particular, when a zero ordinate value is
encountered in the input statistic, and a sign change occurs around that entry
(i.e., the entries on either side have opposite signs), then three additional entries
are generated by this filter: two infinite entries (one of each sign) are generated
to represent the positive and negative vertical asymptotes, and a third entry
indicates that the statistic is not defined at the abscissa where the input statistic
is zero.


In general however, the Tout statistic has a set of entries with abscissa values
that match those of T0. For entries where the reciprocal is defined, the
relationship between input and output statistic is given by the following:

T_out(x) = 1 / T_0(x)

When computing the entries of Tout, the rules summarized in the following table
are applied. The notation T0(x-) and T0(x+) refers to the ordinate values of the
entries immediately preceding and following the entry whose ordinate value is
being inverted.

Table 11-14 Entry Calculation Rules for Reciprocal Filter

T0(x)       T0(x-)   T0(x+)   Tout(x)
a ≠ 0¹      *        *        1/a
a = 0       > 0      > 0      +infinity
a = 0       > 0      < 0      +infinity, undefined, -infinity
a = 0       < 0      < 0      -infinity
a = 0       < 0      > 0      -infinity, undefined, +infinity
+infinity   *        *        0.0
-infinity   *        *        0.0
undefined   *        *        undefined

1. (a = real constant, * = any value)
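A sketch of the asymptote handling appears below (illustrative Python, using NaN for "undefined"; the treatment of zeros at the very ends of the statistic is a simplifying assumption):

    import math

    def reciprocal(stat):
        out = []
        for i, (x, y) in enumerate(stat):
            if math.isnan(y):
                out.append((x, math.nan))
            elif y != 0.0:
                out.append((x, 1.0 / y))    # 1/inf is 0.0 under IEEE rules
            else:
                prev = stat[i - 1][1] if i > 0 else 0.0
                nxt = stat[i + 1][1] if i + 1 < len(stat) else 0.0
                if prev > 0 > nxt or prev < 0 < nxt:
                    # Sign change around the zero: emit both asymptotes
                    # plus an undefined entry, three entries in all.
                    s = 1.0 if prev > 0 else -1.0
                    out += [(x, s * math.inf), (x, math.nan),
                            (x, -s * math.inf)]
                else:
                    s = 1.0 if (prev > 0 or nxt > 0) else -1.0
                    out.append((x, s * math.inf))
        return out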

Statistical Filters
Average Filter The “average” filter is a unary filter used to operate on one input
statistic T0 to generate a second statistic, Tout, which represents the running
mean of the ordinate values of T0 beginning with the first entry.

Table 11-15 Synopsis of Average Filter

Input Statistics   Parameters   Output
single input       none         running mean of ordinate values of input statistic


The Tout statistic has the same number of entries as T0 and the two statistics are
aligned with each other with respect to the entries’ abscissa values. The
ordinate value of the i-th Tout entry is computed as a function of all entries up to
and including the i-th entry of T0. Because there is not necessarily a one-to-one
correspondence between the indices of the entries and their abscissa values,
discrete notation is used in the following expression for the value of the n-th
entry of Tout.
y_out[n] = ( Σ_{i=0}^{n} y_0[i] ) / (n + 1)

Note in the above expression that the denominator is the entry index plus one,
due to the fact that entries are zero-indexed (in other words, n + 1 is the number
of entries).

When using this filter, it is important to realize how special values such as
+infinity, -infinity, and undefined are treated. If an undefined value is
encountered in the input statistic a new entry is created at the same abscissa in
Tout. If no defined entries precede this entry in T0 then the new entry is marked
as undefined in Tout as well; however, in the opposite case, the new entry is
simply marked with the value of the preceding entry in Tout. In both cases,
undefined entries are not considered part of the mean that is computed, and
therefore their occurrence affects neither the numerator nor the denominator of
the expression above. Therefore, a discrepancy between the expression for Tout
and the actual algorithm of the filter can develop as undefined entries occur; a
correction to this discrepancy can be made by subtracting from the denominator
the number of undefined entries whose indices are less than or equal to n.

The first infinite value encountered in T0 causes the value of Tout to become
infinite as well (with same sign). The filter’s output will continue to be infinite for
all remaining entries unless an infinite value of opposite sign is encountered at
which point Tout will become undefined and remain so up to and including the
final entry.


The following table lists the rules applied in computing the values of Tout.
Because this filter incorporates the history of the input statistic into the
calculation of each entry, the table relies on a variable S_n, which is the sum of all
defined entry ordinates of T0 up to and including the n-th entry. In addition, the
variable U_n is defined as the number of undefined entries in T0 with index less
than or equal to n.

Table 11-16 Entry Calculation Rules for Average Filter

S_{n-1}     T0          U_{n-1}   S_n         U_n          Tout
a¹          b           c         a + b       c            (a + b) / (n + 1 - c)
a           +infinity   c         +infinity   c            +infinity
a           -infinity   c         -infinity   c            -infinity
0.0         undefined   c = n     a           c + 1        undefined
a           undefined   c < n     a           c + 1        a / (n - c)
+infinity   b           c         +infinity   c            +infinity
+infinity   -infinity   c         undefined   c            undefined
-infinity   b           c         -infinity   c            -infinity
-infinity   +infinity   c         undefined   c            undefined
+infinity   undefined   c         +infinity   c + 1        +infinity
-infinity   undefined   c         -infinity   c + 1        -infinity
undefined   *           c         undefined   c or c + 1   undefined

1. (a, b, c = real constant, * = any value)
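A compact sketch of this computation (illustrative Python, with NaN standing in for "undefined") follows; note how undefined entries repeat the previous output and are excluded from both the numerator and the denominator:

    import math

    def average_filter(stat):
        out, total, count = [], 0.0, 0
        for x, y in stat:
            if math.isnan(y):
                # Undefined entry: repeat the previous output value.
                out.append((x, out[-1][1] if out else math.nan))
            else:
                total += y              # inf + (-inf) yields NaN (undefined)
                count += 1
                out.append((x, total / count))
        return out

    print(average_filter([(0, 2.0), (1, 4.0), (2, math.nan), (3, 6.0)]))
    # [(0, 2.0), (1, 3.0), (2, 3.0), (3, 4.0)]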

Time Average Filter The “time_average” filter is a unary filter used to operate on
one input statistic T0 to generate a second statistic, Tout, which represents the
running continuous average of the ordinate values of T0 beginning with the first
entry. The difference between this filter and the “average” filter, described
previously, is that entry values are not weighted equally, but are instead
weighted by the difference between their own abscissa and that of the
subsequent entry. The filter is named “time average” due to the fact that it is
frequently applied to statistics whose horizontal axis represents time; however,
it is also applicable to other types of statistics.

Table 11-17 Synopsis of Time Average Filter

Input Statistics   Parameters   Output
single input       none         running continuous average of ordinate values of input statistic


The Tout statistic has the same number of entries as T0 and the two statistics are
aligned with each other with respect to the entries’ abscissa values. The
ordinate value of the i-th Tout entry is computed as a function of all entries up to
and including the i-th entry of T0. Because there is not necessarily a one-to-one
correspondence between the indices of the entries and their abscissa values,
discrete notation is used in the following expression for the value of the n-th
entry of Tout.
y_out[n] = ( Σ_{i=0}^{n-1} y_0[i] · (x_0[i+1] - x_0[i]) ) / ( Σ_{i=0}^{n-1} (x_0[i+1] - x_0[i]) )

With regard to treatment of the special ordinate values +infinity, -infinity, and
undefined, this filter behaves in a manner that is analogous to the “average”
filter, described earlier. In particular, undefined values are essentially ignored:
the numerator of the above expression is not modified by their occurrence, and
the width of an undefined interval does not contribute to the denominator term.
Also, infinite ordinate values in the input statistic result in infinite values of the
same sign for all subsequent entries of the output statistic. However, if infinite
values of opposite sign are encountered, Tout becomes undefined. See Average
Filter on page MC-11-46 for a general understanding of how the time average
filter is implemented.
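A sketch of the weighting scheme (illustrative Python; the convention that the first entry simply reproduces its own ordinate is an assumption, since no span precedes it):

    import math

    def time_average(stat):
        out, num, den = [], 0.0, 0.0
        for i, (x, y) in enumerate(stat):
            if i == 0:
                out.append((x, y))          # no preceding span yet
                continue
            px, py = stat[i - 1]
            if not math.isnan(py):
                num += py * (x - px)        # weight = abscissa span
                den += x - px
            out.append((x, num / den if den > 0 else math.nan))
        return out

    print(time_average([(0.0, 2.0), (1.0, 4.0), (3.0, 0.0)]))
    # [(0.0, 2.0), (1.0, 2.0), (3.0, 3.333...)] -- spans of 1 and 2 weight the mean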

Moving Average Filter The “moving_average” filter is a unary filter used to
operate on one input statistic T0 to generate a second statistic, Tout, which
represents the continuous average of the ordinate values of T0 over intervals of
a specified width. These intervals slide over the range of T0’s abscissa so that
Tout represents the average over the most recent interval. The interval width is
set by the filter’s only parameter, which is called window.

Table 11-18 Synopsis of Moving Average Filter

Input Statistics   Parameters                       Output
single input       window: positive real number    continuous average of input statistic
                   used to set averaging           ordinate values over most recent
                   interval width                  interval of width equal to “window”

When the window parameter is set to a large value relative to the typical
abscissa spacing between entries of the input statistic, the “moving_average”
filter provides a smooth result that averages values together on a local basis.
However, the window parameter may be selected to be any positive size,
including extremely small values, making it possible for consecutive points to be
more than one window apart. In such cases, the moving average should still
exhibit change over the period of one window because it attempts to emulate a
continuous averaging calculation. To improve the smoothness of its results, the
“moving_average” filter may insert additional samples in the output statistic.
Therefore, there is not necessarily a direct correspondence between the lengths
of T0 and Tout.

The normal value for the minimum abscissa distance is min_dt/2, where
min_dt equals the minimum distance between abscissa points in the original
statistic. This results in a given number of entries for the new statistic. The
moving average filter limits the number of entries in the new statistic, however,
to no more than 10 times the number of original entries. When this occurs, the
x-axis increment is scaled up to result in a lower density of points on the x-axis
and fewer points overall.

The general calculation of Tout’s values is quite complicated and involves many
special cases for special undefined and infinite values. Due to the complexity of
the algorithm, this manual does not present a full description, as was done for
other predefined filters.

However, the following statements are useful to describe the filter’s behavior for
most purposes:

• The “moving_average” filter’s computations are essentially composed of
additions and multiplications of entry ordinates and abscissas. With regard to
the special values for infinite and undefined entries, the same basic rules
applied in other averaging filters are used, namely:
— Addition of two infinite values of same sign also yields an infinite value of
same sign.
— Addition of two infinite values of opposite signs yields an undefined value.
— Undefined ordinate values in the input statistic are essentially ignored.
— Multiplication of an infinite value by a finite value yields an infinite value
of appropriate sign.

• Given that the “moving_average” filter selects a new collection of abscissa
points at which to compute the values of Tout, the following general
expression represents the calculation performed by the filter. The expression
shows that the moving average is calculated by integrating the input statistic
over all the intervals that are included in the window that extends from
x_out[n] - W to x_out[n]. Because Tout and T0 are not aligned, it is possible that
the window’s boundaries do not coincide perfectly with entry abscissas in T0.
As a result, two additional terms are needed in the numerator to compute the
contribution of these “partial” intervals. Note that because averaging is over
a fixed window, the denominator is simply the constant that corresponds to
the filter’s window parameter. As with other averaging filters, the general
expression provided here is not precisely accurate in the case where
undefined points appear in the input statistic.

y_out[n] = ( Σ_{i=k}^{m-1} y_0[i] · (x_0[i+1] - x_0[i]) + y_0[k-1] · (x_0[k] - (x_out[n] - W)) + y_0[m] · (x_out[n] - x_0[m]) ) / W

where:
W is the window size;
k is the minimum index of an entry in T0 such that the previous entry is outside the window;
m is the maximum index of an entry in T0 such that the following entry is outside the window.

• The “moving_average” filter uses a backwards-looking integration. Therefore,
for an initial part of the output statistic, equal in abscissa span to the width of
the window, the average value reflected in Tout will not be with respect to a
full window, but rather with respect to a truncated window. This window
grows progressively until it reaches the full size specified in the window
parameter.
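For most purposes, the behavior can be approximated by the following illustrative Python sketch, which integrates the sample-and-hold statistic over a backward-looking window truncated at the start of the statistic (the choice of output abscissas, xs_out, is left to the caller; special values are ignored here):

    def moving_average(stat, window, xs_out):
        out = []
        for xo in xs_out:
            left = max(xo - window, stat[0][0])    # truncated initial window
            num = 0.0
            for i, (x, y) in enumerate(stat):
                nxt = stat[i + 1][0] if i + 1 < len(stat) else xo
                a, b = max(x, left), min(nxt, xo)  # overlap with the window
                if b > a:
                    num += y * (b - a)
            out.append((xo, num / (xo - left)))
        return out

    print(moving_average([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)],
                         window=2.0, xs_out=[1.0, 2.0, 3.0]))
    # [(1.0, 1.0), (2.0, 2.0), (3.0, 4.0)]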

Sample Sum Filter The “sample_sum” filter is a unary filter used to operate on
one input statistic T0 to generate a second statistic, Tout, which represents the
running total of the ordinate values of T0 beginning with the first entry.

Table 11-19 Synopsis of Sample Sum Filter

Input Statistics   Parameters   Output
single input       none         running total of ordinate values of input statistic

The Tout statistic has the same number of entries as T0 and the two statistics are
aligned with each other with respect to the entries’ abscissa values. The
ordinate value of the i-th Tout entry is computed as a function of all entries up to
and including the i-th entry of T0. Because there is not necessarily a one-to-one
correspondence between entry indices and abscissa values, discrete notation
is used in the following expression for the value of the n-th entry of Tout.

y_out[n] = Σ_{i=0}^{n} y_0[i]


When using this filter, it is important to realize how special values such as
+infinity, -infinity, and undefined are treated. If an undefined value is
encountered in the input statistic, a new entry is created at the same abscissa
in Tout. If no defined entries precede this entry in T0 then the new entry is marked
as undefined in Tout as well; however, in the opposite case, the new entry is
simply marked with the value of the preceding entry in Tout. In both cases,
undefined entries are not considered part of the sum that is computed, and
therefore their occurrence is essentially ignored.

The first infinite value encountered in T0 causes the value of Tout to become
infinite as well (with same sign). The filter’s output will continue to be infinite for
all remaining entries unless an infinite value of opposite sign is encountered, at
which point Tout will become undefined and remain so up to and including the
final entry.

The following table lists the rules applied in computing the values of Tout.
Because this filter incorporates the history of the input statistic into the
calculation of each entry, the table uses the previous value of Tout as an input
appearing on the left side of the table. Note that the contents of the two left
columns of the table can be interchanged to address symmetric cases.

Table 11-20 Entry Calculation Rules for Sample Sum Filter

Tout[n-1]   T0[n]       Tout[n]
a¹          b           a + b
a           +infinity   +infinity
a           -infinity   -infinity
+infinity   -infinity   undefined
*           undefined   * (i.e., same)

1. (a, b = real constant, * = any value)
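An illustrative Python sketch (NaN plays the role of "undefined"):

    import math

    def sample_sum(stat):
        out, total = [], 0.0
        for x, y in stat:
            if math.isnan(y):
                # Undefined entry: repeat the previous output value.
                out.append((x, out[-1][1] if out else math.nan))
            else:
                total += y        # inf + (-inf) yields NaN, i.e. undefined
                out.append((x, total))
        return out

    print(sample_sum([(0, 1.0), (1, math.nan), (2, 2.0)]))
    # [(0, 1.0), (1, 1.0), (2, 3.0)]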

Calculus Operators and Related Analytical Filters


Differentiator Filter The “differentiator” filter is a unary filter used to operate on
one input statistic T0 to generate a second statistic, Tout, which represents the
derivative of T0 with respect to its abscissa variable.

Table 11-21 Synopsis of Differentiator Filter

Input Statistics   Parameters   Output
single input       none         statistic representing derivative of input statistic w.r.t. abscissa variable


The derivative is computed using the forward differences in ordinate and
abscissa. The ratio of these two quantities provides a correct value for the
derivative over each interval, under the assumption that the statistic represents
a continuous, piecewise linear function (i.e., the actual statistic behaves as
shown when it is plotted with a linear draw style). If the statistic in fact represents
a different type of continuous function, then this filter still provides an accurate
estimate of the derivative, provided that the spacing between abscissa points is
small. The calculation generally performed by the filter is the following (with the
exception of entries that contain special values):

y_out[n] = (y_0[n+1] - y_0[n]) / (x_0[n+1] - x_0[n])

The input and output statistics are aligned with respect to the abscissa values
of their entries. However, note that using the above expression, the
“differentiator” filter is unable to compute a value for the final entry of the input
statistic, because there is no following entry to calculate the required
differences. Therefore, Tout is shorter than T0 by one entry.

When computing the entries of Tout, the rules summarized in the following table
are applied.

Table 11-22 Entry Calculation Rules for Differentiator Filter

y0[n+1] - y0[n]   x0[n+1] - x0[n]   yout[n]
a¹                b ≠ 0             a / b
a                 ±infinity         0.0
*                 0                 undefined
undefined         *                 undefined
*                 undefined         undefined
+infinity         a > 0             +infinity
+infinity         a < 0             -infinity
-infinity         a > 0             -infinity
-infinity         a < 0             +infinity

1. (a, b = real constant, * = any value)
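A forward-difference sketch in illustrative Python (the output is one entry shorter than the input, as noted above):

    import math

    def differentiator(stat):
        out = []
        for (x0, y0), (x1, y1) in zip(stat, stat[1:]):
            dx, dy = x1 - x0, y1 - y0
            out.append((x0, dy / dx if dx != 0 else math.nan))
        return out

    print(differentiator([(0.0, 0.0), (1.0, 2.0), (3.0, 2.0)]))
    # [(0.0, 2.0), (1.0, 0.0)]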


Integrator Filter The “integrator” filter is a unary filter used to operate on one
input statistic T0 to generate a second statistic, Tout, which represents the
integral of T0 with respect to its abscissa variable.

Table 11-23 Synopsis of Integrator Filter

Input Statistics   Parameters   Output
single input       none         statistic representing integral of input statistic w.r.t. abscissa variable

The integral statistic is computed as the area under the input statistic using the
“sample-and-hold” interpretation of the statistic. In other words, the area
between the statistic and the horizontal axis is obtained for a particular entry by
multiplying the width of the abscissa interval until the next entry by the ordinate
value of the entry. The calculation generally performed by the filter is the
following (with the exception of special handling for entries that contain special
values):
y_out[n] = Σ_{i=0}^{n-1} y_0[i] · (x_0[i+1] - x_0[i])

The input and output statistics are aligned with respect to the abscissa values
of their entries. The output statistic may have one additional entry relative to the
input statistic to account for the area contributed by the final entry of the latter.
This depends upon whether the input statistic’s end-of-statistic marker has an
abscissa that exceeds that of the final entry with an ordinary value.

The “integrator” filter treats special values much in the same way as other
accumulating filters such as the “average”, “time_average”, and “sample_sum”
filters. That is primarily to say that adding infinite values of opposite sign yields
an undefined value, and that undefined entries in the input statistic are
essentially ignored (i.e., in this case, these entries are treated as though their
ordinate value were zero, since they contribute nothing to the integral). The
following table lists the rules applied in computing the values of Tout. Because
this filter incorporates the history of the input statistic into the calculation of each
entry, the table uses the previous value of Tout as an input appearing on the left
side of the table. Note that the contents of the two left columns of the table can
be interchanged to address symmetric cases.

Table 11-24 Entry Calculation Rules for Integrator Filter

yout[n-1]   y0[n-1]     x0[n] - x0[n-1]   yout[n]
a¹          b           c                 a + b·c
a           ±infinity   c                 ±infinity
a           undefined   c                 a
+infinity   b           c                 +infinity
-infinity   b           c                 -infinity
+infinity   -infinity   c                 undefined
-infinity   +infinity   c                 undefined
undefined   *           c                 undefined

1. (a, b, c = real constant, * = any value)
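A sample-and-hold integration sketch in illustrative Python (end_x stands in for the end-of-statistic abscissa; NaN ordinates contribute nothing, per the rules above):

    import math

    def integrator(stat, end_x):
        out, total = [], 0.0
        xs = [x for x, _ in stat] + [end_x]
        for i, (x, y) in enumerate(stat):
            out.append((x, total))
            if not math.isnan(y):
                total += y * (xs[i + 1] - x)   # area of this interval
        return out + [(end_x, total)]

    print(integrator([(0.0, 2.0), (1.0, 4.0)], end_x=3.0))
    # [(0.0, 0.0), (1.0, 2.0), (3.0, 10.0)]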

Exponentiator Filter The “exponentiator” filter is a unary filter used to operate on
one input statistic T0 to generate a second statistic, Tout, which is obtained by
raising the ordinate values in T0 to a fixed power. This power value is designated
via the filter’s single parameter called exponent.

Table 11-25 Synopsis of Exponentiator Filter

Input Statistics   Parameters                      Output
single input       exponent: real number used     single statistic composed of values of
                   as “power” of exponentiation   input statistic raised to the power of exponent

Because special manipulations might be required to handle undefined points in
Tout and indeterminate forms, the resulting statistic might not have the exact
same length as the input statistic. In particular, when the specified exponent is
negative, additional points might have to be inserted. For more information, see
Reciprocal Filter on page MC-11-45.

In general however, the Tout statistic has a set of entries with abscissa values
that match those of T0. For entries where the exponentiation result is defined,
the relationship between input and output statistic is given by the following:

T_out(x) = (T_0(x))^p, where p is the specified exponent


The following table lists the rules applied in the computation of the entries of Tout
in the case where the exponent is positive or zero. For cases where the
exponent is negative, this table can be used to determine an intermediate result
and subsequently, the calculation rules of the reciprocal filter can be applied.

Table 11-26 Entry Calculation Rules for Exponentiator Filter (with Nonnegative or Infinite Exponent)

T0(x)       exponent             Tout(x)
a ≥ 0¹      b ≥ 0                a^b
a < 0       b integer            a^b
a < 0       b non-integer        undefined
a > 1.0     -infinity            0.0
a > 1.0     +infinity            +infinity
a = 1.0     ±infinity            1.0
a < 1.0     +infinity            0.0
a < 1.0     -infinity            +infinity
a < 0       ±infinity            undefined
±infinity   0.0                  1.0
+infinity   b > 0.0              +infinity
-infinity   b even integer ≠ 0   +infinity
-infinity   b odd integer        -infinity
+infinity   +infinity            +infinity
+infinity   -infinity            undefined
-infinity   b non-integer        undefined
-infinity   +infinity            undefined

1. (a, b = real constant, * = any value)


Logarithm Filter The “logarithm” filter is a unary filter used to operate on one
input statistic T0 to generate a second statistic, Tout, which is obtained by
computing the base ten logarithm, or common logarithm, of the ordinate entries
of T0.

Table 11-27 Synopsis of Logarithm Filter

Input Statistics   Parameters   Output
single input       none         single statistic composed of base 10 logarithms of values of input statistic

The number of entries in the Tout statistic is identical to that of T0 and the entries
of both statistics are aligned with each other on the horizontal axis. The
logarithm function is not defined for all values, meaning that some entries with
special values may be present in Tout. However, for entries where the logarithm
result is defined, the relationship between input and output statistic is given by
the following:

y_out[n] = log_10(y_0[n])

The following table lists the rules applied in the computation of the entries of
Tout.

Table 11-28 Entry Calculation Rules for Logarithm Filter

T0(x)       Tout(x)
a > 0.0¹    log10(a)
a = 0.0     -infinity
a < 0.0     undefined
+infinity   +infinity
-infinity   undefined
undefined   undefined

1. (a = real constant, * = any value)

Miscellaneous Filters
Abscissa Filter The “abscissa” filter is a unary filter used to operate on one input
statistic T0 to generate a second statistic, Tout, whose ordinate values are simply
the abscissa values of the entries of T0.

Table 11-29 Synopsis of Abscissa Filter

Input Statistics   Parameters   Output
single input       none         single statistic composed of abscissa values of input statistic


The number of entries in the Tout statistic is identical to that of T0 and the entries
of both statistics are aligned with each other on the horizontal axis. Note that
only the abscissa values of the input statistic are relevant to the computation
performed by this filter, as shown in the following equation relating input and
output statistics.

y_out[n] = x_0[n]

Delay Element Filter The “delay_element” filter is a unary filter used to operate
on one input statistic T0 to generate a second statistic, Tout, which is obtained
by translating T0 along the horizontal axis by a fixed amount. The translation
distance is controlled by the sole parameter of the filter, called delay. A positive
value for delay causes a translation of the statistic in the positive abscissa
direction.

Table 11-30 Synopsis of Delay Element Filter

Input Statistics   Parameters               Output
single input       delay: amount to shift   single statistic composed of same entries as
                   statistic                input statistic, shifted in the abscissa direction

The lengths of T0 and Tout are identical, with the entries of Tout computed as
follows (note: all special values are simply translated to new abscissas).

y_out[n] = y_0[n]
x_out[n] = x_0[n] + delay

Glitch Notch Filter The “glitch_notch” filter is a unary filter used to operate on
one input statistic T0 to generate a second statistic, Tout, by eliminating the
occurrence of multiple entries that share the same abscissa value (such an
occurrence is referred to as a glitch). Tout is constructed by copying entries from
T0. However, if a sequence of consecutive entries (at least two) with the same
abscissa is encountered in T0, all but the last entry in this sequence are ignored,
and the last entry is copied into Tout. Therefore, Tout is glitch-free.

Table 11-31 Synopsis of Glitch Notch Filter

Input Statistics   Parameters   Output
single input       none         single statistic composed of same entries as input statistic, but with at most one entry at each distinct abscissa

Because the purpose of this filter is to eliminate entries to ensure uniqueness of
entries at each abscissa value, the length of Tout might be less than that of T0.
If no glitches exist in T0, then Tout is simply an identical copy of T0.
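The elimination procedure reduces to keeping the last entry of every run of equal abscissas, as in this illustrative Python sketch:

    def glitch_notch(stat):
        out = []
        for x, y in stat:
            if out and out[-1][0] == x:
                out[-1] = (x, y)    # overwrite: the last entry of a glitch wins
            else:
                out.append((x, y))
        return out

    print(glitch_notch([(0.0, 1.0), (1.0, 2.0), (1.0, 5.0), (2.0, 3.0)]))
    # [(0.0, 1.0), (1.0, 5.0), (2.0, 3.0)]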


Limiter Filter The “limiter” filter is a unary filter used to operate on one input
statistic T0 to generate a second statistic, Tout, which is obtained by constraining
the ordinate values of T0 within a specified range. The lower and upper limits of
this range are controlled by the filter parameters called min_val and max_val,
respectively. These are inclusive bounds for the range.

The “limiter” filter performs a transformation on T0 by copying entries into Tout
and simultaneously clipping them to ensure that they are within the specified
interval. Entries whose ordinate values exceed max_val are modified to have
the value max_val. Similarly, entries whose ordinate values are less than
min_val are modified to have the value min_val.

Table 11-32 Synopsis of Limiter Filter

Input Statistics   Parameters                Output
single input       min_val: lower bound of   single statistic composed of same entries as
                   vertical range            input statistic, but constrained within the
                   max_val: upper bound of   specified range [min_val, max_val]
                   vertical range

The lengths of T0 and Tout are identical, and the two statistics are aligned with
each other along the horizontal axis. The entries of Tout are computed as shown
below. Undefined entries are not transformed by this filter. Positive infinite
ordinates are clipped to the upper bound, and negative infinite ordinates are
clipped to the lower bound.

y_out[n] = min(max_val, max(min_val, y_0[n]))

Time Window Filter The “time_window” filter is a unary filter used to operate on
one input statistic T0 to generate a second statistic, Tout, which is obtained by
eliminating all entries whose abscissa values lie outside a specified range. The
lower and upper limits of this range are controlled by the filter parameters called
min_time and max_time, respectively. These bounds themselves are also
included in the range. The names min_time and max_time are used due to the
fact that this filter is frequently applied to statistics and/or output vectors that use
time as their abscissa variable. However, the “time window” filter is equally
applicable to other types of statistics.


The “time_window” filter performs a transformation on T0 by copying only those
entries into Tout that have abscissa values within the specified range. Entries
whose abscissa values exceed max_time are discarded, as are those whose
abscissa values are less than min_time. Other entries are copied into Tout
unmodified.

Table 11-33 Synopsis of Time Window Filter

Input Statistics   Parameters                 Output
single input       min_time: lower bound of   single statistic composed of only those entries
                   horizontal range           in T0 whose abscissa values are in the
                   max_time: upper bound of   inclusive range [min_time, max_time]
                   horizontal range

Value Notch Filter The “value_notch” filter is a unary filter used to operate on
one input statistic T0 to generate a second statistic, Tout. Tout is obtained by
eliminating all entries whose ordinate values are approximately equal to the
filter’s value parameter. “Approximately equal” is defined as falling within a
range determined by the specified ordinate value (x) and the value of the
value_notch_filter_tolerance preference (tol). If an entry’s ordinate value is
greater than x - tol and less than x + tol, the entry is eliminated; otherwise it is
simply copied to the output statistic Tout.
(value_notch_filter_tolerance has a default value of 10^-9.)

Table 11-34 Synopsis of Value Notch Filter

Input Statistics   Parameters                 Output
single input       value: entries with this   single statistic composed of only those entries
                   ordinate value are         in T0 whose ordinate values differ from the
                   eliminated                 specified value by at least the tolerance set by
                                              the value_notch_filter_tolerance preference
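An illustrative Python sketch of the notching test (the tol default mirrors the documented preference value):

    def value_notch(stat, value, tol=1e-9):
        # Drop entries whose ordinate lies strictly within tol of value.
        return [(x, y) for x, y in stat
                if not (value - tol < y < value + tol)]

    print(value_notch([(0.0, 5.0), (1.0, 0.0), (2.0, 1e-12)], value=0.0))
    # [(0.0, 5.0)] -- both near-zero entries are notched out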
