
J. Barceló and J. Casas 1

METHODOLOGICAL NOTES ON THE CALIBRATION AND VALIDATION OF MICROSCOPIC TRAFFIC SIMULATION MODELS

By

Jaime Barceló1 and Jordi Casas2

1 Dept. of Statistics and Operations Research
Technical University of Catalonia
Pau Gargallo 5
08028 Barcelona, Spain
Phone: +34 93 401 7033
Fax: +34 93 401 5881
Email: jaume.barcelo@upc.es

2 TSS-Transport Simulation Systems
Paris 101, 3rd Floor
08029 Barcelona, Spain
Phone: +34 93 532 6077
Fax: +34 93 532 6067
Email: casas@aimsun.com

Submitted for presentation
Transportation Research Board 2004 Annual Meeting
January 2004
Washington, D.C.

# WORDS: 7,410

November 2003

TRB 2004 Annual Meeting CD-ROM Paper revised from original submittal.

ABSTRACT

From a methodological point of view it is widely accepted that simulation is a useful technique to provide an experimental test bed to compare alternate system designs, replacing the experiments on the physical system by experiments on its formal representation in a computer in terms of a simulation model. Simulation may thus be seen as a sampling experiment on the real system through its model. The reliability of this decision-making process depends on the ability to produce a simulation model representing the system behavior closely enough for the purpose of using the model as a substitute for the actual system for experimental purposes. This reliability is established in terms of the calibration and validation of the model. Model calibration and validation is inherently a statistical process in which the uncertainty due to data and model errors should be accounted for. This paper proposes an explicit method to take into account the autocorrelation dependencies between traffic data, and the specific time-dependent characteristics of traffic data whose emulation is one of the main abilities of microscopic simulation. The paper also proposes guidelines for the calibration of the route choice models in route-based simulations. All cases are illustrated numerically with examples from real-life simulation projects with the AIMSUN microscopic simulator.


1. INTRODUCTION: GENERAL CONCEPTS

From a methodological point of view it is widely accepted that simulation is a useful technique to provide an experimental test bed to compare alternate system designs, replacing the experiments on the physical system by experiments on its formal representation in a computer in terms of a simulation model. The outcomes of the computer experiment provide in this way the basis for a quantitative support to decision-makers. According to this conception, the simulation model can be seen as a computer laboratory to conduct experiments with the model of the system, with the purpose of drawing valid conclusions for the real system. In other words, the simulation model is used to answer what-if questions about the system.

Simulation may thus be seen as a sampling experiment on the real system through its model [1]. In other words, assuming that the evolution over time of the system model properly imitates the evolution over time of the modeled system, samples of the observational variables of interest are collected, from which, using statistical analysis techniques, conclusions on the system behavior can be drawn. Figure 1 illustrates this methodology conceptually.

The reliability of this decision-making process depends on the ability to produce a simulation model representing the system behavior closely enough for the purpose of using the model as a substitute for the actual system for experimental purposes. This is true for any simulation analysis in general and obviously for traffic simulation. The process of determining whether the simulation model is close enough to the actual system is usually achieved through the validation of the model, an iterative process involving calibrating the model parameters, comparing the model to the actual system behavior, and using the discrepancies between the two, and the insight gained, to improve the model until the accuracy is judged acceptable. Validation of a simulation model is a concept that should be kept in mind throughout the whole model-building process.

According to Law and Kelton [2], the key methodological steps for building valid and
credible simulation models are:

• Verification: consists of determining that the simulation computer program performs as intended; it is concerned with building the model right.
• Validation: is concerned with determining whether the conceptual simulation model (as opposed to the computer program) is an accurate representation of the system under study. Validation deals with building the right model.
• A model is credible when its results are accepted by the user and are used as an aid in making decisions. Animation is an effective way for an analyst to establish credibility.

Balci [3] defines a successful simulation study “to be the one that produces a sufficiently credible solution that is accepted and used by decision makers”. This implies assessing the quality of the simulation model through its verification and validation.

Verification usually implies running the simulation model under a variety of settings of the input parameters and checking that the output is reasonable. In some cases, certain measures of performance may be computed exactly and used for comparison. Animation can also be of great help for this purpose; with some types of simulation models (traffic models are

just a good example) it may be helpful to observe an animation of the simulation output to establish whether the computer model is working as expected. In validating a simulation model the analyst should not forget that:

• A simulation model of a complex system can only be an approximation to the actual system. There is no such thing as an absolutely valid model of a system.
• A simulation model should always be developed for a particular set of purposes.
• A simulation model should be validated relative to those measures of performance that will actually be representative of these purposes.
• Model development and validation should be done hand in hand throughout the entire simulation study.

Validation means the process of testing that the model actually represents a viable and useful alternative to real experimentation. This requires calibrating the model, that is, adjusting the model parameters until the resulting output data agree closely with the observed system data. The validation of the simulation model will be established on the basis of a comparison analysis between the observed output data from the actual system and the output data provided by the simulation experiments conducted with the computer model.

Model calibration and validation is inherently a statistical process in which the uncertainty due to data and model errors should be accounted for. Depending on the variables selected, the system and simulated data available, and their characteristics and statistical behavior, a variety of statistical techniques, either for paired comparisons or for multiple comparisons and time series analysis, have been proposed. The conceptual framework for this validation methodology is described in the diagram of figure 2 (adapted from reference [2]). According to this logic, when the results of the comparison analysis are not acceptable at the degree of significance defined by the analyst, the rejection of the simulation results implies the need to recalibrate some aspects of the simulation model. The process is repeated until a significant degree of similarity according to some statistical analysis techniques is achieved.

2. SPECIFICS FOR THE VERIFICATION AND VALIDATION OF TRAFFIC SIMULATION MODELS

In the case of traffic systems, the behavior of the actual system is usually defined in terms of traffic variables (flows, speeds, occupancies, queue lengths, and so on) which can be measured by traffic detectors at specific locations in the road network. To validate the traffic simulation model, the simulator should be capable of emulating the traffic detection process and producing series of simulated observations whose comparison with the actual measurements will be used to determine whether the desired accuracy in reproducing the system behavior is achieved. Rouphail and Sacks [4] propose the following set of guiding principles:

1. The analyst must be aware that calibration and validation are conducted in particular contexts.
2. Depending on the context, the model requires specific sets of relevant data.
3. Both models and field data contain uncertainties.
4. Feedback is necessary for model use and development.
5. Model validation must be exercised on a data set independent from the calibration data set.


The analyst will have to identify which data are relevant for the planned study, collect them, identify the uncertainties, filter the data accordingly, and use two independent sets of data. The first set should be used for calibrating the model parameters, and the second for running the calibrated model and validating it.

The key question in the diagram of figure 2, is the model valid?, can then be reformulated as: do model results faithfully represent reality? The statistical techniques provide a quantified answer to this question, a quantification that, according to [4], can be formally stated in the following terms: the probability that the difference between “reality” and the simulated output is less than a specified tolerable difference, within a given level of significance:

P{ |“reality” − simulated output| ≤ d } > α

where d is the tolerable difference threshold indicating how close the model is to reality, and α is the level of significance that tells the analyst how certain the achieved result is.

This formulation immediately raises the questions of what “reality” is, and how to set d and α. In this framework the analyst's perception of reality relies on the information gathered through the data collection and the subsequent data processing to account for the aforementioned uncertainties. The available data and their uncertainties will determine what can be said about d and α. To produce input data for the simulation model of the quality required to conduct an accurate statistical analysis, a careful data collection process is necessary to ensure that the desired correspondence is achieved at an acceptable significance level. Detailed examples of data collection for microscopic simulation can be found in references [5] and [6].

2.1 Verification: Develop a traffic simulation model with high face validity

The main components of a traffic micro-simulation model are:

a. The geometric representation of any component of the road traffic network (freeways, arterials, roundabouts, etc.) and related traffic devices, i.e. traffic detectors, Variable Message Panels, traffic lights, etc.
b. The representation of traffic control schemes (phasing, timings, offsets) and traffic management schemes (directions of vehicle movements, allowed and banned turnings, etc.)
c. The individual vehicle behavioral models: car-following, lane change, gap acceptance, etc.
d. The representation of the traffic demand, either as:

d.1 Input flow patterns at input sections to the road model and turning percentages at intersections, or as
d.2 Time-sliced OD matrices for each vehicle class

e. The route-choice models, for OD-based simulations.

The verification of the traffic simulation model, that is, the process of building the model right according to the above definitions, must be assisted by a friendly graphic user interface designed with the objective of supporting the modeller in tasks a and b of the process of building the road network model. The interface must accept as background a digital map of

the road network, in terms of a DXF file from a GIS or an AutoCAD system, a JPEG or a bitmap file, etc., so that sections and nodes can subsequently be built in the foreground.

The graphic interface must provide the model builder with tools to go straight from the natural system to its computer representation in terms of the micro-simulation model. The use of the graphic editors on the digital maps of the road networks must provide the basis for a continuous visual validation of the quality of the geometric model. The interface must also include auxiliary on-line debugging tools to prevent mistakes in building the geometric representation and to warn the modeller when obvious inconsistencies occur. The graphic interface should ensure in this way a geometric model exhibiting a “high face validity” that can be further validated by the modeller through visual inspection of the graphic display of the model.

The validation of the geometric model should also take into account other aspects that may not be so evident at first glance, i.e. the connectivity of the network and the existence of paths connecting every origin to every destination in a route-based simulation. The system must assist the modeller in this additional validation task by identifying the types of inconsistencies and where in the model they are located. Examples of how this methodological process is implemented in the microscopic simulator AIMSUN can be found in reference [7].

Figure 3 depicts an example of a microscopic simulation model built with AIMSUN [8] according to the above recommendations. The network geometry has been drawn with a set of graphic editors, TEDI [9], on top of a digital map imported as a .dxf background file from a CAD system. The attributes and parameters of any object in the model are defined and assigned values by means of window dialogues such as the one in box A of figure 3, which shows the definition of the shared movements in a phase of a pre-timed signal control, the allocation of the timings, and the messages and related management actions associated with the highlighted VMS panel.

2.2 Calibration: Test the assumptions of the model empirically.

In the case of a microscopic traffic simulation model, the model behavior depends on a rich variety of model parameters. The model is composed of entities (vehicles, sections, junctions, intersections, and so on), each of them described by a set of attributes (parameters of the car-following, lane-change and gap-acceptance models, speed limits and speed acceptance on sections, and so on), and the model behavior is determined by the numerical values of these parameters. The calibration process has the objective of finding the values of these parameters that will produce a valid model. Model parameters must be supplied with values; calibration is the process of obtaining such values from field data in a particular setting.

Some examples will help to illustrate this dependency between parameter values and model behavior. Vehicle lengths have a clear influence on flows: as vehicle lengths increase, flows decrease and queue lengths increase. In typical car-following models the target speed, the section speed limit and the speed acceptance, among others, define the desired speed for each vehicle on each section. The higher the target speed, the higher the desired speed for any given section, resulting in an increase in flow according to the flow-speed relationships. Thus, as part of the calibration process one should establish for a particular model the influence of the acceleration and braking parameters on the capacity of the sections, namely for weaving sections. Similarly, depending on how lane changing is


modeled, the lengths of the zones where the lane-changing decision can be made influence the capacity of the weaving sections, especially when these lengths are local parameters whose values may depend on traffic conditions. These effects also occur when parameters influencing the lane distribution are included in the model.

Obviously the most exact procedure to calibrate car-following parameters is to conduct specific experiments in which accurate field data are recorded on the relative distances and speeds between pairs of leader-follower vehicles, and the simulation model is calibrated against the field data. Recent examples of this type of experiment can be found in references [10], [11] and [12]; see also [7]. Unfortunately such experiments are expensive and can seldom be conducted in current professional practice.

However, taking into account that correct, acceptably calibrated car-following models must be capable of reproducing accurately enough macroscopic observable phenomena, such as flow-occupancy or flow-density relationships, additional tests to analyze the quality of the microscopic simulator can be conducted to check its ability to reproduce such macroscopic behavior. Manstetten et al. [10] propose a test based on simulating increasing flows on a closed ring model, like the one depicted in figure 4, to reproduce a priori estimated flow-density relationships. A steadily increasing flow is injected into the model at on-ramp A until reaching saturation after a pre-defined time horizon. A detector at B collects the traffic data. Figure 4 also displays the speed-flow graphics for a reaction time of RT = 0.85 seconds, acceleration normally distributed with mean 2.8 m/s² and standard deviation 0.56 m/s², normal deceleration 4.0 m/s², and maximum deceleration 6.0 m/s², providing a capacity of 2,320 vphpl.

The results of AIMSUN for the simulated flow-density curve versus the empirical one for this test are displayed in figure 5, and they appear to be fairly reasonable, as confirmed by the values of the rms error measuring the fit between the measured and simulated values. The graphics in figure 5 also show the sensitivity of the AIMSUN car-following model to variations in the values of two model parameters: the reaction time and the minimum vehicle-to-vehicle distance (effective length). The AIMSUN car-following model [8] is inspired by Gipps' model [13]. A subset of the simulation experiments to determine the values of the model parameters best fitting the observed values is summarized in Table 1, showing that the best fit is achieved in simulation experiment 1b, with a reaction time of 0.9 seconds and an effective length equal to the vehicle length plus 0.75 meters.

The graph in figure 6 displays the variation of the rms error as a function of the reaction time parameter in the car-following model as implemented in AIMSUN; the blue curve corresponds to a fixed value of the minimum distance between vehicles of 0.75 meters and the red one to 1 meter. This illustrates in more detail the use of pilot runs of the simulation model to calibrate the parameters of the car-following model for a given context. Similar simple models to understand how to adjust the parameter values to fit the situations under study have been proposed by Yoshii [14].
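The calibration-by-pilot-runs idea above (varying the reaction time and the minimum vehicle-to-vehicle distance and keeping the combination with the lowest rms error) can be sketched as a simple grid search. The `run_simulation` function below is a hypothetical stand-in for an actual simulator run (e.g. invoking the microsimulator in batch mode), and all numbers are illustrative:

```python
import math

# Hypothetical stand-in for a simulator run: returns "simulated" flows for a
# given reaction time and minimum inter-vehicle distance. The toy response
# surface below is for illustration only; a real study would call the
# simulator itself here.
def run_simulation(reaction_time, min_distance):
    base = [2000, 2100, 2200, 2300]          # toy simulated flows (veh/h)
    bias = 200 * (reaction_time - 0.9) + 100 * (min_distance - 0.75)
    return [f - bias for f in base]

def rms_error(simulated, observed):
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed)) / len(observed))

observed = [2000, 2100, 2200, 2300]          # toy field measurements
candidates = [(rt, md) for rt in (0.75, 0.85, 0.9, 1.0) for md in (0.75, 1.0)]
best = min(candidates, key=lambda p: rms_error(run_simulation(*p), observed))
print(best)  # (0.9, 0.75): the pair with zero toy bias
```

Because each candidate requires a full (replicated) simulation run, the candidate grid is normally kept small and refined around the best region, as in the experiments of Table 1.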

3. VALIDATION: DETERMINE HOW REPRESENTATIVE THE SIMULATION OUTPUT DATA ARE

The statistical methods and techniques for validating simulation models are clearly presented in most textbooks and specialized papers [2], [15-18]. From the general methodology, three main principles establish a framework for model validation:


1. The measured data in the actual system should be split into two data sets: the data set that will be used to develop and calibrate the model, and a separate data set that will be used for the validation test.
2. Specify the data collection process in the system as well as in the simulation model: the traffic variables or MOEs (i.e. flows, occupancies, speeds, service levels, travel times, etc.) whose values will be collected for the calibration and validation phases, and the collection frequency (i.e. 30 seconds, 1 minute, 5 minutes, etc.).
3. According to the methodological diagram in Figure 2, validation should be considered an iterative process: at each step of the iterative validation process a simulation experiment will be conducted. Each of these simulation experiments will be defined by the data input to the simulation model, the set of values of the model parameters that identify the experiment, and the sampling interval.

3.3.1 First approach: validation based on standard statistical comparison between model
and system outputs

3.3.1.1. Comparison based on global measurements

A method that has been widely used in validating transport planning models, in the typical situation in which only aggregated values are available (i.e. flow counts at detection stations aggregated to the hour), has been to analyze the scattergram or, alternatively, to use a global indicator such as the GEH index, widely used in United Kingdom practice [19]. Figure 7 depicts an example of such an analysis. The regression line of observed versus simulated flows at 76 detection stations, for the aggregated 1-hour values, is plotted along with the 95% prediction interval. The R² value of 93.4%, and the fact that only three points lie outside the confidence band, would lead to the conclusion that the model could be accepted as significantly close to reality.

On the other hand, the GEH index for n pairs of (observed, simulated) values is calculated by the following algorithm:

For i = 1 to n calculate
    GEH_i = sqrt( 2(ObsVal_i − SimVal_i)² / (ObsVal_i + SimVal_i) )
    If GEH_i ≤ 5 then GEH_i = 1
    Otherwise GEH_i = 0
    Endif
Endfor

Let GEH = (1/n) Σ_{i=1}^{n} GEH_i
If GEH ≥ 85% then ACCEPT the model
Otherwise REJECT the model
Endif


For the same example the GEH value is 72%; therefore the model would have been rejected, leading to a conclusion contradicting the previous one.

Independently of considerations on whether one criterion is better than the other, it should be noted that this type of indicator can be considered only a primary indicator for acceptance or rejection in the case of microscopic simulation models. Obviously the level of aggregation depends on the purposes of the application, but using aggregated values, averaged over periods of time of similar length to those typically used with static equilibrium models for planning purposes, carries the risk of not capturing what is considered the essence of microscopic traffic simulation: the ability to capture the time variability of traffic phenomena. Therefore other types of statistical comparisons should be proposed.

3.3.1.2 Comparisons based on disaggregated measurements

For example, assume that in the definition of the simulation experiment the sampling interval is five minutes, that is, the model statistics are gathered every five minutes, and that the sampling variable is the simulated flow w. The output of the simulation model will then be characterized by the set of values w_ij of the simulated flow at detector i at time j, where index i identifies the detector (i = 1, 2, …, n, n being the number of detectors) and index j the sampling interval (j = 1, 2, …, m, m being the number of sampling intervals in the simulation horizon T). If v_ij are the corresponding actual measurements for detector i at sampling interval j, a typical statistical technique to validate the model would be to compare both series of observations to determine whether they are close enough. For detector i the comparison could be based on testing whether the difference

D_ij = w_ij − v_ij,  j = 1, …, m

has a mean d̄_i significantly different from zero or not. This can be determined using the t-statistic:

t_{m−1} = (d̄_i − δ_i) / (s_d / √m)

where δ_i is the expected value of d_i and s_d the standard deviation of d_i, for testing the null hypothesis H0: δ_i = 0.

a. If for δ_i = 0 the calculated value t_{m−1} of the Student's t distribution is significant at the specified significance level α, that is |t_{m−1}| > t_{m−1; α/2}, then we have to conclude that the model does not reproduce the system behavior closely enough, and we have to reject the model.
b. If δ_i = 0 gives a non-significant t_{m−1}, then we conclude that the simulated and the real means are “practically” the same, so the simulation is “valid enough”.

This process will be repeated for each of the n detectors. The model is accepted when all detectors (or a specific subset of detectors, depending on the model purposes, and taking into account that the simulation is only a model, and therefore an approximation, so δ_i will never be exactly zero) pass the test.


However, there are some considerations to take into account in the case of traffic simulation analysis.

1. The statistical procedure assumes independently and identically distributed (i.i.d.) observations, whereas the actual system measures and the corresponding simulated output are time series. Therefore it would be desirable that at least the m paired (correlated) differences d_ij = w_ij − v_ij, j = 1, …, m are i.i.d. This can be achieved when the w_ij and the v_ij are average values of independently replicated experiments.
2. The bigger the sample, the smaller the critical value t_{m−1; α/2}, which implies that a simulation model has a higher chance of being rejected as the sample grows. Therefore the t statistic may be significant and yet unimportant if the sample is very large, and the simulation model may still be good enough for practical purposes.

These considerations lead to the recommendation not to rely on only one type of statistical test for validating the simulation model; other authors, see [20], have also proposed less stringent validation tests for traffic simulation based on classical two-means comparisons.

3.3.2 An alternative approach based on time series analysis

A different family of statistical tests for the validation of a traffic simulation model is rooted in the observation that the measured and the simulated series, v_ij and w_ij respectively, are time series. In this case the measured series can be interpreted as the original one and the simulated series as the “prediction” of the observed series. The quality of the simulation model can then be established in terms of the quality of the prediction, which means resorting to time series forecasting techniques for that purpose. Consider that what is observed as output of the system, as well as output of the model representing the system, depends on two types of components, the functional relationships governing the system (the pattern) and the randomness (the error), and that the measured as well as the simulated data are related to these components by the relationship:

Data = pattern + error

The critical task in forecasting can then be interpreted in terms of separating the pattern from the error component so that the former can be used for forecasting. The general procedure for estimating the pattern of a relationship is to fit some functional form in such a way as to minimize the error component. One way of achieving that is regression analysis.

If for detector i the error of the j-th “prediction” is d_ij = w_ij − v_ij, j = 1, …, m, then a typical way of estimating the error of the predictions for detector i is the Root Mean Square Error, rms_i, defined by:

rms_i = sqrt( (1/m) Σ_{j=1}^{m} (w_ij − v_ij)² )

This error estimate has been perhaps the most used in traffic simulation, and although obviously the smaller rms_i is, the better the model, it has a quite important drawback: since it squares the errors, it emphasizes large ones. Therefore it would be helpful to have a measure that both accounts for the disproportionate weight of large errors and provides a basis for comparison with other methods. On the other hand, it is quite frequent in traffic simulation

that neither the observed values nor the simulated ones are independent, namely when only single sets of traffic observations are available (i.e. flows, speeds and occupancies for one day of the week during the rush hour). The following example is taken from a simulation study of the I-35W freeway in Minneapolis [21], in which capturing in detail the time variability of flows and occupancies was essential.

3.3.2.1 Autocorrelation analysis of observed flows

Figure 8 depicts the scattergram and the autocorrelation spectrum of the observed flows for detector 426 of the site under study. Observed points are scattered along the diagonal, indicating the presence of autocorrelation between the observations. This is corroborated by the analysis of the correlation spectrum, which shows a clear pattern of dependencies. Similar patterns could be observed for all detectors in the site.

3.3.2.2 Autocorrelation analysis of simulated flows

In a similar way, the scattergram and the autocorrelation spectrum of the simulated flows for this detector in Figure 9, measured by the detector emulation, exhibit an analogous behavior.

Theil's U-statistic [22] is a measure that achieves the above-mentioned objectives: it overcomes the drawbacks of the rms error index and explicitly takes into account the fact that we are comparing two autocorrelated time series, where the objective of the comparison is to determine how close both time series are. In general, if X_j is the observed and Y_j the forecasted series, j = 1, …, m, and FRC_{j+1} = (Y_{j+1} − X_j) / X_j is the forecasted relative change and ARC_{j+1} = (X_{j+1} − X_j) / X_j the actual relative change, Theil's U-statistic is defined as:

U = sqrt( [Σ_{j=1}^{m−1} (FRC_{j+1} − ARC_{j+1})² / (m−1)] / [Σ_{j=1}^{m−1} (ARC_{j+1})² / (m−1)] )
  = sqrt( [Σ_{j=1}^{m−1} ((Y_{j+1} − X_{j+1}) / X_j)²] / [Σ_{j=1}^{m−1} ((X_{j+1} − X_j) / X_j)²] )

An immediate interpretation of Theil's U-statistic is the following:

U = 0 ⇔ FRC_{j+1} = ARC_{j+1}, and the forecast is perfect
U = 1 ⇔ FRC_{j+1} = 0, and the forecast is as bad as possible

In this last case the forecast is the same as would be obtained by forecasting no change in the actual values. When the forecasts Y_{j+1} are in the opposite direction of X_{j+1}, the U statistic will be greater than unity. Therefore the closer to zero Theil's U-statistic is, the better the forecasted series, or, in other words, the better the simulation model. When Theil's U-statistic is close to or greater than 1, the forecasted series, and therefore the

simulation model, should be rejected. Taking into account that the average squared forecast error:

D²_m = (1/m) Σ_{j=1}^{m} (Y_j − X_j)²

can be decomposed (Theil) in the following way:

D²_m = (1/m) Σ_{j=1}^{m} (Y_j − X_j)² = (Ȳ − X̄)² + (S_Y − S_X)² + 2(1 − ρ) S_Y S_X

where Ȳ and X̄ are the sample means of the forecasted and the observed series respectively, S_Y and S_X are the sample standard deviations and ρ is the sample correlation coefficient between the two series, the following indices can be defined:

UM = (Ȳ − X̄)² / D²_m
US = (S_Y − S_X)² / D²_m          ⇒  UM + US + UC = 1
UC = 2(1 − ρ) S_Y S_X / D²_m

$U^M$ is the "bias proportion" index and can be interpreted as a measure of systematic error; $U^S$ is the "variance proportion" index and indicates the forecasted series' ability to replicate the degree of variability of the original series or, in other words, the simulation model's ability to replicate the variability of the variable of interest in the actual system. Finally, $U^C$, the "covariance proportion" index, is a measure of the unsystematic error. The best forecasts, and hence the best simulation models, are those for which $U^M$ and $U^S$ do not differ significantly from zero and $U^C$ is close to unity. It can be shown that this happens when $\beta_0$ and $\beta_1$ in the regression do not differ significantly from zero and unity respectively.
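These indices are straightforward to compute from the two series. The sketch below, in Python with NumPy (an illustrative assumption; the paper does not prescribe any implementation), evaluates U and its decomposition; population standard deviations (ddof = 0) are used so that the identity $U^M + U^S + U^C = 1$ holds exactly. A constant offset between the series concentrates the error almost entirely in the bias proportion $U^M$:

```python
import numpy as np

def theil_indices(x, y):
    """Theil's U-statistic and its bias/variance/covariance decomposition.

    x: observed series X_j, y: simulated series Y_j (same length m).
    Returns (U, UM, US, UC); UM + US + UC sums to 1 by construction.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # U compares forecasted vs actual one-step relative changes;
    # FRC - ARC reduces to (Y_{j+1} - X_{j+1}) / X_j.
    frc = (y[1:] - x[:-1]) / x[:-1]
    arc = (x[1:] - x[:-1]) / x[:-1]
    u = np.sum((frc - arc) ** 2) / np.sum(arc ** 2)
    # Decomposition of the average squared forecast error D^2_m
    # (population standard deviations, ddof=0, so the identity is exact).
    d2 = np.mean((y - x) ** 2)
    rho = np.corrcoef(x, y)[0, 1]
    um = (y.mean() - x.mean()) ** 2 / d2
    us = (y.std() - x.std()) ** 2 / d2
    uc = 2.0 * (1.0 - rho) * y.std() * x.std() / d2
    return u, um, us, uc

# A hypothetical constant 4-unit shift between observed and simulated flows:
# almost all of the error is systematic, so UM is close to 1 while US and UC vanish.
x = np.array([100.0, 110.0, 105.0, 120.0, 115.0, 130.0])
u, um, us, uc = theil_indices(x, x + 4.0)
```

This is exactly the pattern discussed below for detector 426: R² and U can look excellent while a large $U^M$ exposes a systematic shift.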

The example of this detector is an interesting case that demonstrates how these statistical techniques can reveal hidden information, critical for some validation aspects, that the traditional techniques cannot. The plot of the observed and simulated series is shown in Figure 10. Visual inspection reveals a very good agreement between both series, confirmed by the value 0.9999 of R². The analysis of Theil's coefficients corroborates the quality of the simulation model (very low values of U = 0.015348, U^C = 0.000362 and U^S = 0.055073), but the very high value of U^M = 0.920005 reveals the presence of a systematic bias. There is an almost constant difference of four units between the observed and the simulated series; that is, the simulated series is shifted 4 units with respect to the observed one. The discrepancy could be explained in this case, see [21] for details, by the misplacement of the detector.

3.3.3 Another alternative to explore: band analysis

When data collection can be automated and traffic data can be collected over long time periods (e.g. flow counts for n Mondays during the rush hour from 7:00 until 9:00 am), the comparison between the measured and the simulated data can consist of comparing two bundles of time series: the set of measured time series and the set of time series resulting from independent replications of the simulation model. Validation could then be based on developing suitable statistical procedures to compare:

§ single/mean pattern to single/mean pattern
§ mean pattern to bandwidth
§ bandwidth to bandwidth

as illustrated in Figure 11. This methodology has been applied in the ISM project in Hessen, Germany [23], [24]. The model spanned 1800 km of motorways and highways in the federal state of Hessen around the city of Frankfurt. The data available since 1999 were one-minute volumes and speeds, for cars and trucks, from 700 cross-section detectors covering 2-3 lanes each. Figure 12 depicts the typical band distribution for one of the detectors. The blue band between the upper and lower envelopes represents the time distribution of the observed flows; any value lying within the band can be considered a valid observed value. The methodological principle for the validation of the simulation model can then be stated in the following terms: if the average of the replications of the simulation model lies within the band limits, the red line in the example, then the simulated values cannot be distinguished from the observed values and the model can be accepted.
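The acceptance rule above can be sketched in a few lines. In the fragment below (Python/NumPy; the data layout, one row per observed day or per simulation replication, is an assumption for illustration), the envelopes are the pointwise minimum and maximum of the observed series, and the model is accepted when the mean of the replications lies inside the band at every interval:

```python
import numpy as np

def band_validation(observed_runs, simulated_runs):
    """Accept the model if the average of the simulation replications lies
    within the envelope (band) of the observed time series at every interval.

    observed_runs:  array (n_days, n_intervals) of measured flows
    simulated_runs: array (n_reps, n_intervals) of independent replications
    Returns (accepted, per-interval mask).
    """
    obs = np.asarray(observed_runs, dtype=float)
    sim = np.asarray(simulated_runs, dtype=float)
    lower = obs.min(axis=0)        # lower envelope of the observed band
    upper = obs.max(axis=0)        # upper envelope of the observed band
    mean_sim = sim.mean(axis=0)    # average over replications (the "red line")
    inside = (mean_sim >= lower) & (mean_sim <= upper)
    return bool(inside.all()), inside

# Hypothetical flows for four intervals of a morning peak.
base = np.array([1200.0, 1500.0, 1800.0, 1600.0])
observed = np.stack([base - 50.0, base + 50.0, base + 10.0])  # 3 observed Mondays
replications = np.stack([base + 5.0, base - 5.0])             # 2 simulation runs
accepted, mask = band_validation(observed, replications)
```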

Figure 13 illustrates the application of this method to some relevant detectors in Hessen for the morning peak hour.

4. A METHOD TO ASSIST THE ANALYST IN THE CALIBRATION OF ROUTE CHOICE MODELS

The assessment by simulation of ITS applications requires a substantial change in the traditional paradigms of microscopic simulation, in which vehicles are generated at the input sections of the model and perform turnings at intersections according to probability distributions. In such models vehicles have neither origins nor destinations and move randomly across the network. The required approach is based on a new simulation paradigm: route-based microscopic simulation. In this approach, vehicles are input into the network according to the demand data defined by an O/D matrix (preferably time dependent) and drive along the network following specific paths in order to reach their destination. In route-based simulation new routes have to be calculated periodically during the simulation, and a route choice model is needed when alternative routes are available. This process can be interpreted as a heuristic approach to dynamic traffic assignment [25] consisting of:

1. A method to determine the path-dependent flow rates on the paths of the network, based on a route choice function, and
2. A dynamic network loading method, which determines how these path flows give rise to time-dependent arc volumes, arc travel times and path travel times, heuristically implemented by microscopic simulation.

The implemented simulation process [7], [26], based on time-dependent routes, consists of the following procedure:

Procedure heuristic dynamic assignment

Step 0: Calculate initial shortest path(s) for each O/D pair using the defined initial costs.
Step 1: Simulate for a time interval ∆t, assigning to the available paths $K_i$ the fraction of the trips between each O/D pair i for that time interval according to the probabilities $P_k$, $k \in K_i$, estimated by the selected route choice model.
Step 2: Update the link cost functions and recalculate the shortest paths with the updated link costs.
Step 3: If there are guided vehicles, or variable message panels proposing a rerouting, provide the information calculated in Step 2 to the drivers that are dynamically allowed to reroute en route.
Step 4: Case a (preventive dynamic assignment):
        If all the demand has been assigned, stop; otherwise go to Step 1.
        Case b (reactive dynamic assignment):
        If all the demand has been assigned and the convergence criterion holds, stop.
        Otherwise:
        go to Step 1 if not all the demand has been assigned yet, or
        go to Step 0 and start a new major iteration.
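The preventive variant of the procedure can be sketched for a single O/D pair. In the toy fragment below (Python; the two-path network, the logit split and the linear volume-delay update are illustrative assumptions, not the AIMSUN implementation — in particular the cost update stands in for the microscopic network loading of Steps 1-2):

```python
import math

def heuristic_assignment(demand, paths, intervals, theta=0.1):
    """Toy preventive assignment for one O/D pair.

    paths maps a path name to (free_flow_time, capacity).  Each interval,
    the demand is split over the paths with a logit rule based on the
    costs experienced in the previous interval (Step 1), then the costs
    are updated from the loaded flows (Step 2).
    """
    # Step 0: initial costs are the free-flow travel times.
    cost = {p: fft for p, (fft, cap) in paths.items()}
    history = []
    for _ in range(intervals):
        # Step 1: logit split (utility V_k = -cost_k, scale theta).
        expo = {p: math.exp(-theta * c) for p, c in cost.items()}
        total = sum(expo.values())
        flow = {p: demand * expo[p] / total for p in paths}
        # Step 2: update path costs from the loaded flows; this linear
        # volume-delay relation is a stand-in for microscopic loading.
        cost = {p: fft * (1.0 + flow[p] / cap) for p, (fft, cap) in paths.items()}
        history.append(flow)
    return history

splits = heuristic_assignment(
    demand=1000.0,
    paths={"main": (10.0, 1800.0), "bypass": (12.0, 1200.0)},
    intervals=8,
)
```

The per-interval flows always sum to the demand, and the cheaper path retains the larger share, as the route choice rule intends.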

Depending on how the link cost functions are defined, and on whether the procedure is applied as a one-pass method completed when all the demand has been loaded or as part of an iterative scheme repeated until a certain convergence criterion is satisfied, it corresponds either to a "preventive", or en-route, dynamic traffic assignment, or to a "reactive", or heuristic equilibrium, assignment. In the first case route choice decisions are made for drivers entering the network in a time interval based on the experienced travel times, i.e. the travel times of the previous time interval, and the link cost function is defined in terms of the average link travel times in the previous interval. Alternatively, a heuristic approach to equilibrium can be based on repeating the simulation scheme a number of times and defining a link cost function including predictive terms, as proposed by Friesz et al. [27], [28]. This can be interpreted in terms of a day-to-day learning mechanism. In the computational experiments discussed in this paper a simplified version of that proposed in [7] is used, consisting of a link cost function defined as:

\[
c_{it}^{k+1} = \lambda c_{it}^{k} + \left(1 - \lambda\right)\tilde{c}_{it}^{k} \tag{1}
\]

where $c_{it}^{k+1}$ is the cost of using link i at time t at iteration k+1, and $c_{it}^{k}$ and $\tilde{c}_{it}^{k}$ correspond respectively to the expected and the experienced link costs for that time interval at the previous iteration. The simulation experiments reported in this paper have been implemented in AIMSUN, selecting the logit and proportional route choice functions from the default route choice functions available in the simulator. The multinomial logit route choice model defines the choice probability $P_k$ of alternative path k, $k \in K_i$, as a function of the differences between the perceived utilities of that path and all the other alternative paths:

\[
P_k = \frac{e^{\theta V_k}}{\sum_{l \in K_i} e^{\theta V_l}} = \frac{1}{1 + \sum_{l \neq k} e^{\theta\left(V_l - V_k\right)}}
\]

where $V_i$ is the perceived utility of alternative path i (i.e. the opposite of the path cost, or path travel time), and θ is a scale factor that plays a two-fold role: it makes the decision based on differences between utilities independent of the measurement units, and it influences the standard error of the distribution of expected utilities, thereby determining the trend towards utilizing many alternative routes or concentrating on very few. It thus becomes the critical parameter determining whether the logit route choice model leads to a meaningful selection of routes or not. Another option is the estimation of the choice probability $P_k$ of path k, $k \in K_i$, in terms of a generalization of Kirchhoff's laws given by the function
\[
P_k = \frac{CP_k^{-\alpha}}{\sum_{l \in K_i} CP_l^{-\alpha}}
\]

where $CP_l$ is the cost of path l, and α is in this case the parameter whose value has to be calibrated.
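Both functions are simple to evaluate, and a small numerical experiment makes the role of the scale parameters concrete. In the sketch below (Python, with hypothetical path costs), a small θ spreads the trips over the alternatives, a large θ concentrates them on the cheapest path, and α plays the analogous role in the Kirchhoff-type rule:

```python
import math

def logit_probabilities(costs, theta):
    """Multinomial logit choice over path costs, with utility V_k = -cost_k."""
    # Shifting by the minimum cost leaves the probabilities unchanged
    # and avoids overflow for large theta.
    c0 = min(costs)
    expo = [math.exp(-theta * (c - c0)) for c in costs]
    total = sum(expo)
    return [e / total for e in expo]

def kirchhoff_probabilities(costs, alpha):
    """Kirchhoff-type rule: P_k proportional to CP_k^(-alpha)."""
    weights = [c ** (-alpha) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# Three hypothetical paths with costs 10, 11 and 15 minutes.
spread = logit_probabilities([10.0, 11.0, 15.0], theta=0.05)       # low theta: flow spread out
concentrated = logit_probabilities([10.0, 11.0, 15.0], theta=2.0)  # high theta: cheapest path dominates
kirchhoff = kirchhoff_probabilities([10.0, 11.0, 15.0], alpha=3.0)
```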

The method explored in this paper to guide the user in the calibration of the θ or α parameter, depending on the route choice function selected, is based on the following assumption: since the described assignment process, depending on how it is implemented, can be associated with a heuristic realization of a preventive or a reactive dynamic assignment, a proper route selection should lead to the realization of some equilibrium. A way of measuring the progress towards equilibrium in an assignment, and therefore of qualifying the solution, is the relative gap function Rgap(t) [25], [29], which estimates at time t the relative difference between the total travel time actually experienced and the total travel time that would have been experienced if all vehicles had a travel time equal to that of the current shortest path:

\[
Rgap(t) = \frac{\displaystyle\sum_{i \in I}\sum_{k \in K_i} h_k(t)\left[s_k(t) - u_i(t)\right]}{\displaystyle\sum_{i \in I} g_i(t)\,u_i(t)}
\]

where $u_i(t)$ is the travel time on the shortest path for the i-th OD pair at time interval t, $s_k(t)$ is the travel time on path k connecting the i-th OD pair at time interval t, $h_k(t)$ is the flow on path k at time t, $g_i(t)$ is the demand for the i-th OD pair at time interval t, $K_i$ is the set of paths for the i-th OD pair, and I is the set of all OD pairs.
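The formula transcribes directly into code. The fragment below (Python; the data layout is an illustrative assumption) computes Rgap(t) for one interval and shows that it vanishes when all flow travels on the shortest paths:

```python
def relative_gap(od_data):
    """Rgap(t) for a single time interval.

    od_data: list of tuples (g_i, u_i, paths) per OD pair, where g_i is the
    demand, u_i the shortest-path travel time, and paths is a list of
    (h_k, s_k) pairs: flow and travel time on each used path k in K_i.
    """
    # Numerator: excess travel time experienced over the shortest-path time.
    excess = sum(h * (s - u) for g, u, paths in od_data for h, s in paths)
    # Denominator: total travel time if every trip used the shortest path.
    shortest = sum(g * u for g, u, _ in od_data)
    return excess / shortest

# One OD pair with 100 trips and a 10-minute shortest path: 60 trips on the
# shortest path and 40 trips on a 12-minute path give 40*2 / (100*10) = 0.08.
gap = relative_gap([(100.0, 10.0, [(60.0, 10.0), (40.0, 12.0)])])
```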

Figures 14 and 15 depict the time evolution of the Rgap(t) function for various route choice functions, for the preventive, or en-route, version of the assignment procedure, using a K-shortest-path algorithm, for two test models:

• The borough of Amara in the city of San Sebastián, Spain: a model with 365 road sections, 100 nodes and 225 OD pairs.
• The Brunnsviken network in Stockholm: a model with 493 road sections, 260 nodes and 576 OD pairs.

In these figures, "Logit n" corresponds to the logit function defined above with value n for the scale parameter θ, and "proportional" corresponds to a path probability inversely proportional to the path cost. The expected role of the θ parameter in terms of the Rgap function becomes evident in the combination of the logit function with the assignment procedure. Improper choices of the parameter value tend to produce a bang-bang effect, a consequence of the tendency to move most of the flow to the current shortest path, as the oscillations of the Rgap function show, while a more appropriate θ value (θ = 30 in Amara, or θ = 900 in Brunnsviken) not only smooths out the Rgap oscillations significantly but also yields a path selection with acceptable path cost differences (about 10% in Amara and around 1% in Brunnsviken).


Figures 16 and 17 depict the time evolution of the Rgap function for the same logit route choice function, for the reactive version of the assignment procedure using the costs defined in (1), at iteration k = 20, with λ = 0.25, 0.5 and 0.75, for θ values of 30 in Amara and 900 in Brunnsviken. The Rgap values tend almost to zero, as expected in equilibrium terms, and the variations for the various values of λ show that λ = 0.75 performs best.

5. CONCLUSIONS

The paper proposes methodological patterns for calibrating the parameters of microscopic traffic simulation models, based on simple models to check the adequacy of the car-following parameters. The proposed method is illustrated with examples built with the AIMSUN microsimulator. Model validation is discussed, and a paradigm based on time series analysis, accounting explicitly for the autocorrelation processes of traffic data, is proposed; numerical results are presented. A band comparison process is also proposed for those cases in which the amount of available data allows the explicit variability of traffic data to be taken into account, and the proposed method is applied to a real case. Finally, assuming that the "dynamic equilibrium" exists, the empirical results show that a proper time-varying k-shortest-path calculation, with a suitable definition of link costs and adequate stochastic route choice functions, using a microscopic network loading mechanism, achieves a network state that can replicate the observed flows acceptably over the simulation horizon and leads to a reasonable set of used paths between OD pairs, as the oscillations of the empirical Rgap function within a narrow band indicate.

6. REFERENCES

[1] M. Pidd, Computer Simulation in Management Science, John Wiley, 1992.
[2] A.M. Law and W.D. Kelton, Simulation Modeling and Analysis, McGraw-Hill, 1991.
[3] O. Balci, Verification, Validation and Testing, in: Handbook of Simulation: Principles,
Methodology, Advances, Applications and Practice, Ed. by J. Banks, John Wiley, 1998.
[4] N.M. Rouphail and J. Sacks, Thoughts on Traffic Models Calibration and Validation, paper presented at the Workshop on Traffic Modeling, Sitges, Spain, June 2003.
[5] J.T. Hughes, 1998, Intensive Traffic Data collection for Simulation of congested
Auckland Motorway, Proceedings 19th ARRB Transport Research Conference, Sydney.
[6] J. T. Hughes, AIMSUN2 Simulation of a Congested Auckland Freeway, in Transportation
Planning: state of the art, Ed. by M. Patriksson and M. Labbé, Kluwer, 2002.
[7] J. Barceló, and J. Casas, Dynamic Network Simulation with AIMSUN, Proceedings of
the International Symposium on Traffic Simulation, Yokohama, Kluwer, 2003.
[8] AIMSUN Version 4.1 User’s Manual, TSS-Transport Simulation Systems, 2002.
[9] TEDI Version 4.1 User’s Manual, TSS-Transport Simulation Systems, 2002.
[10] D. Manstetten, W. Krautter and T. Schwab, Traffic Simulation Supporting Urban
Control System Development, Robert Bosch GmbH, Corporate Research and Development,
Information and Systems Technology, P.O. Box 10 60 50, 70049 Stuttgart, Germany,1998.
[11] D. Manstetten, W. Krautter and T. Schwab, Traffic Simulation Supporting Urban Control
System Development. Proceedings of the 4th World Conference on ITS, Seoul, 1998b.
[12] T. Bleile, W. Krautter, D. Manstetten and T. Schwab Traffic Simulation at Robert Bosch
GmbH, Proc. Euromotor Seminar Telematic / Vehicle and Environment, Aachen, Germany,
Nov. 11-12, 1996.


[13] P.G. Gipps A behavioral car- following model for computer simulation. Transp. Res. B,
Vol. 15, pp 105-111, 1981.
[14] T. Yoshii, Standard Verification Process for Traffic Simulation Model – Verification Manual, Kochi University of Technology, Kochi, Japan, 1999.
[15] J.P.C. Kleijnen and W. Van Groenendaal, Simulation: A Statistical Perspective, John
Wiley, 1992.
[16] J.P.C. Kleijnen, Theory and Methodology: Verification and Validation of Simulation Models, European Journal of Operational Research, Vol. 82, pp. 145-162, 1995.
[17] J.P.C. Kleijnen, Validation of Models: Statistical Techniques and Data Availability,
Proceedings of the 1999 Winter Simulation Conference.
[18] J.P.C.Kleijnen and R.G.Sargent, A Methodology for Fitting and Validating Metamodels
in Simulation, European Journal of Operational Research, Vol. 120 pp. 14-29, 2000.
[19] Traffic Appraisal in Urban Areas, Highways Agency, Manual for Roads and Bridges, Volume 2, Section 2, Department for Transport, London, UK, 1966.
[20] L.Rao, L. Owen and D. Goldsman, Development and Application of a Validation
Framework for Traffic Simulation Models, Proceedings of the 1998 Winter Simulation
Conference.
[21] J. Hourdakis and P. Michalopoulos, Evaluating Ramp Metering with AIMSUN,
Proceedings of the 81st TRB Meeting, Washington, 2002.
[22] H. Theil, Applied Economic Forecasting, North-Holland, 1966.
[23] J. Barceló, D. García and H. Kirschfink, "Scenario Analysis: a Simulation Based Tool for Regional Strategic Traffic Management", 9th World Conference on ITS, Chicago, 2002.
[24] J. Barceló, D. García and H. Kirschfink, "Scenario Analysis: a Simulation Based Tool for Regional Strategic Traffic Management", Traffic Technology International, Annual Review, 2001.
[25] M. Florian, M. Mahut and N. Tremblay (2001), A Hybrid Optimization-Mesoscopic
Simulation Dynamic Traffic Assignment Model, Proceedings of the 2001 IEEE Intelligent
Transport Systems Conference, Oakland, pp. 120-123.
[26] J. Barceló, J.L. Ferrer, R. Grau, M. Florian, I. Chabini and E. Le Saux, A Route Based Variant of the AIMSUN Microsimulation Model, Proceedings of the 2nd World Congress on Intelligent Transport Systems, Yokohama, 1995.
[27] T.L. Friesz, D. Bernstein, T.E. Smith, R.L. Tobin and B.W. Wie, A Variational Inequality Formulation of the Dynamic Network User Equilibrium Problem, Operations Research, Vol. 41, No. 1, pp. 179-191, 1993.
[28] Y.W. Xu, J.H. Wu, M. Florian, P. Marcotte and D.L. Zhu, Advances in the Continuous
Dynamic Network Problem, Transportation Science, Vol. 33, No. 4, pp. 341-353,1999.
[29] B. N. Janson, Dynamic Assignment for Urban Road Networks, Transpn. Res. B, Vol. 25,
Nos. 2/3, pp. 143-161, 1991.


LIST OF TABLES

Table 1: Model quality as a function of reaction time and effective vehicle length

LIST OF FIGURES

Figure 1. Experimental Nature of Simulation
Figure 2: Logic diagram for model validation
Figure 3: Example of the AIMSUN graphic user interface for building and verifying microscopic simulation models
Figure 4: Model to estimate speed-flow curves, and example
Figure 5. Empirical versus simulated flow-density curves
Figure 6. RMS versus reaction time
Figure 7. Example of scattergram analysis to compare observed versus simulated aggregated flows
Figure 8: Scattergram and autocorrelation spectrum of the observed flows
Figure 9. Scattergram and autocorrelation spectrum of the simulated flows
Figure 10: Observed and simulated series for detector 426
Figure 11: Possibilities of comparison
Figure 12. Principle of band comparison for the model validation
Figure 13. Example of band comparison for four detectors in Hessen
Figure 14. Time evolution of the Rgap function for various route choice functions for the Amara model (Preventive case)
Figure 15. Rgap function for Brunnsviken (Preventive case)
Figure 16. Rgap for Amara (Reactive case)
Figure 17. Rgap for Brunnsviken (Reactive case)


TABLES

Model parameters: reaction time (RT, seconds) and effective vehicle length (vehicle length + DM, meters)

Simulation      Parameters          RMS
Simulation 1a   RT 0.90 / DM 1.00   0.0645901
Simulation 2a   RT 0.95 / DM 1.00   0.091316
Simulation 3a   RT 1.00 / DM 1.00   0.121131
Simulation 1b   RT 0.90 / DM 0.75   0.0518984
Simulation 2b   RT 0.95 / DM 0.75   0.0620237
Simulation 3b   RT 1.00 / DM 0.75   0.0920621

Table 1: Model quality as a function of reaction time and effective vehicle length


FIGURES

Figure 1. Experimental Nature of Simulation
[Diagram: inputs (alternatives, policies, what-if questions) enter the simulation model; experimentation yields outputs (answers).]

Figure 2: Logic diagram for model validation
[Diagram: system input data and estimated system input data feed the actual system and the simulation model; observed system output data are compared with simulated model output data; if the model is not valid, it is calibrated and the loop repeats.]

Figure 3: Example of the AIMSUN graphic user interface for building and verifying microscopic simulation models
[Screenshot not reproduced.]

Figure 4: Model to estimate speed-flow curves, and example
[Plot: speed/flow scatter for Lane 1 and Lane 2, R = 0.85, accelerations 2.8, 4, 6.5; speeds 0-120 against flows 0-2500.]

Figure 5: Empirical versus simulated flow-density curves
[Plot: observed flow and simulated flows for cases 1a-3b (veh/h) against density 0-140 veh/km.]

Figure 6. RMS versus reaction time
[Plot: RMS against reaction time for DM 0.75 and DM 1.0.]

Figure 7. Example of scattergram analysis to compare observed versus simulated aggregated flows
[Regression plot: Obs-Flows = 35.8639 + 1.18427 Sim-Flow; S = 111.222, R-Sq = 93.4%, R-Sq(adj) = 93.3%; regression line with 95% prediction interval.]

Figure 8: Scattergram and autocorrelation spectrum of the observed flows
[Scattergram of OBSFLW against its lagged series OFLWLG, and autocorrelation spectrum for lags 1-18 (lag 1: 0.69, lag 2: 0.77, decaying to -0.01 at lag 18) with T and LBQ statistics.]

Figure 9. Scattergram and autocorrelation spectrum of the simulated flows
[Scattergram of SIMFLW against its lagged series SFLWLG, and autocorrelation spectrum for lags 1-18 (lag 1: 0.70, lag 2: 0.78, decaying to -0.01 at lag 18) with T and LBQ statistics.]

Figure 10: Observed and simulated series for detector 426
[Plot: measured (X) and simulated (Y) series over 69 intervals, values 0-300.]

Figure 11: Possibilities of comparison
[Diagram: single/mean pattern to single/mean pattern, single/mean pattern to bandwidth, and bandwidth to bandwidth comparisons between pattern 1 (simulated values, v) and pattern 2 (measured values, w), with the difference to be validated.]

Figure 12. Principle of band comparison for the model validation
[Figure not reproduced.]

Figure 13. Example of band comparison for four detectors in Hessen
[Four panels (detectors A5_16GN, A3_12S, A3_20N, A3_10N): measured and simulated car and truck flows and speeds from 7:00 to 8:00.]

Figure 14. Time evolution of the Rgap function for various route choice functions for the Amara model (Preventive case)
[Plot: Rgap 0-60% over 29 intervals for Logit 30, Logit 60, Logit 90, Logit 300 and Proportional.]

Figure 15. Rgap function for Brunnsviken (Preventive case)
[Plot: Rgap 0-70% from 7:05 AM to 8:30 AM for Logit 1, 60, 90, 300, 900, 3600 and Proportional.]

Figure 16. Rgap for Amara (Reactive case)
[Plot: Rgap 0-0.12 over 24 intervals for λ = 0.25, 0.5, 0.75.]

Figure 17. Rgap for Brunnsviken (Reactive case)
[Plot: Rgap 0-0.3 over 18 intervals for λ = 0.25, 0.5, 0.75.]
