
Distribution-valued heavy-traffic limits for the G/GI/∞ queue

Josh Reed (Stern School of Business, New York University, New York, NY)
Rishi Talreja (Knight Capital Group, Jersey City, NJ)

June 16, 2012
Abstract

We study the G/GI/∞ queue in heavy traffic using tempered distribution-valued processes which track the age and residual service time of each customer in the system. In both cases, we use the continuous mapping theorem together with functional central limit theorem results in order to obtain fluid and diffusion limits for these processes in the space of tempered distribution-valued processes. We find that our diffusion limits are tempered distribution-valued Ornstein-Uhlenbeck processes.
1 Introduction

Limit theorems for the infinite-server queue in heavy traffic have a rich history starting with the seminal work of Iglehart [17] on the M/M/∞ queue. This work then inspired a line of research aimed at extending the results of [17] to additional classes of service time distributions. Whitt [32] studies the GI/PH/∞ queue, having phase-type service-time distributions, and Glynn and Whitt [11] consider the GI/GI/∞ queue with service times taking values in a finite set. In [5], [25] and [31], the G/GI/∞ queue is studied with general service time distributions. [29] gives a survey of these results.
In this paper, we study two Markov processes associated with the G/GI/∞ queue. The first process which we study is a tempered distribution-valued process which tracks the age of each customer in the system. We refer to this process as the age process. The second process which we study is also a tempered distribution-valued process and it tracks the residual service time of each customer in the system as well as the amount of time since departure for each customer who has left the system. We refer to this process as the residual service time process. Although analyzing either of these processes might at first appear to be a difficult task, one of the key themes that runs throughout the present paper is that techniques originally developed for establishing heavy-traffic limits in the finite-dimensional setting may also be successfully applied in the more abstract infinite-dimensional setting.

Our main results in this paper are to obtain fluid and diffusion limits for both the age and residual service time processes. In particular, for both the age and the residual service time process, we use the continuous mapping theorem together with functional central limit theorem results in order to establish our main results and we find that the corresponding diffusion limits that we obtain are tempered distribution-valued Ornstein-Uhlenbeck processes. The particular representation which we use for the residual service time process was also used by Decreusefond and Moyal [8] in order to analyze the M/G/∞ queue. Indeed, many of the results and techniques found in the present paper were inspired by [8]. However, our proofs are quite different.
A second major contribution of our work is to make a connection between the literature on infinite-dimensional heavy-traffic limits for queueing systems [7, 8, 9, 12, 13, 14, 24] and the vast literature on infinite-dimensional Ornstein-Uhlenbeck processes motivated by applications to interacting particle systems [2, 3, 4, 15, 16, 19, 20, 27, 28]. Our work especially relies upon [19] and [20] in order to prove continuity of a particular regulator map.
Another set of papers related to ours are those of Kaspi and Ramanan [23, 24]. Although these works analyze the many-server queue with general service time distributions, their infinite-dimensional representation of the system is similar to ours. Fluid limits are established for the system in [24] in the space of Radon measure-valued processes. However, when establishing corresponding diffusion limits, the limit process evidently falls out of the space of Radon measure-valued processes and distribution-valued processes are used instead in [23]. In the present paper, we follow the work of [8] in which the space of tempered distributions is used. This space may be characterized as the topological dual of Schwartz space, the space of rapidly decreasing, infinitely differentiable functions.
We also mention the work [30], where the authors build upon the work of [11] and [25] in order to prove heavy-traffic limits for the G/GI/∞ queue in a two-parameter function space. They analyze both the age and the residual service time process as we do. The main difference between the present work and [30] is that in the present work tempered distribution-valued processes are used, which allows one to apply the continuous mapping theorem and other standard results in order to obtain heavy-traffic limits.
The remainder of this paper is now organized as follows. In §2, we derive basic system equations for both the age process and the residual service time process. These equations serve as the starting point for our analysis in the remainder of the paper. In §3, we present a regulator map result to be used in conjunction with the continuous mapping theorem in order to prove our main results. In §4, we provide martingale results that are used together with the regulator map of §3 in order to obtain our fluid and diffusion limits. In §5 and §6, we prove our fluid and diffusion limits, respectively. In the Appendix, we provide the proofs of several technical lemmas that are used throughout the paper.
1.1 Technical Background
We now provide some technical background which is useful for the remainder of the paper. We
begin with some preliminary details.
1.1.1 Preliminaries
All random variables and processes in this paper are assumed to be defined on a common probability space (Ω, F, P) and are measurable maps from (Ω, F, P) to an arbitrary topological space with an associated Borel σ-algebra. It turns out that many of the random quantities which we study in this paper take values in a topological space which is not metrizable and so we now provide the definition of weak convergence on an arbitrary topological space. We follow the approach of [21].
Let X be an arbitrary topological space with associated Borel σ-algebra B(X). We say that a sequence of probability measures (P_n)_{n≥1} on B(X) weakly converges to a probability measure P on B(X), abbreviated as P_n ⇒ P, if ∫ f dP_n → ∫ f dP for every bounded, continuous real functional f on X (see Definition 2.2.1 of [21]). We also use the notation →_P to denote convergence in probability. For any two topological spaces X and Y, we denote by X × Y the cartesian product of X and Y and we associate with X × Y the product topology. Note that using the above definition of weak convergence, it is straightforward to show that the continuous mapping theorem continues to hold (see, for instance, the proof of Theorem 3.4.1 of [33]). In particular, we have the following.
Proposition 1.1. Let X and Y be two topological spaces and let (x_n)_{n≥1} be a sequence of random elements of X such that x_n ⇒ x. If g : X → Y is a continuous function, then g(x_n) ⇒ g(x) in Y.
Next, for each 0 < T < ∞, let D([0,T], R) denote the space of functions from [0,T] to R that are right-continuous on [0,T) with left limits everywhere on (0,T]. We equip D([0,T], R) with the Skorokhod J_1-topology [1]. We also note that we will commonly abbreviate the notation D([0,T], R) by simply writing D. Next, let D([0,T], D) denote the space of functions from [0,T] to D that are right-continuous on [0,T) with left limits everywhere on (0,T]. We equip D([0,T], D) with the Skorokhod J_1-topology [1] as well. For an element x ∈ D([0,T], R), we set

    ‖x‖_T = sup_{0≤t≤T} |x(t)|.

We denote by e = (t, t ∈ [0,T]) the identity process on [0,T].
1.1.2 Schwartz Space
The space of rapidly decreasing functions, also known as Schwartz space, plays an important role in this paper and so we now provide a brief review of some of the relevant facts concerning this space. Much of the material found in this subsubsection may also be found in [21].
Let R, R_+ and R_− denote the set of reals, non-negative reals and non-positive reals, respectively. Also, denote by N = {0, 1, 2, ...} the set of non-negative integers. Let C^∞(R) denote the set of infinitely differentiable functions from R to R and let C^∞(R_+) denote the set of infinitely differentiable functions from R_+ to R. Also, let C_b^∞(R) denote the set of infinitely differentiable, bounded functions from R to R whose derivatives of all orders are bounded and, similarly, let C_b^∞(R_+) denote the set of infinitely differentiable, bounded functions from R_+ to R whose derivatives of all orders are bounded.
Now define

    S ≡ {φ ∈ C^∞(R) : ‖φ‖_{α,β} < ∞ for all α, β ∈ N},   (1)

where

    ‖φ‖_{α,β} ≡ sup_{x∈R} |x^α φ^(β)(x)|   (2)

and φ^(β) denotes the βth derivative of φ. The space S is commonly referred to as Schwartz space or the space of rapidly decreasing functions [21].
The topology of S is given by the family of seminorms {‖·‖_{α,β} : α, β ∈ N} defined in (2). In particular, φ_n → φ in S if ‖φ_n − φ‖_{α,β} → 0 for all α, β ∈ N. We note that by Lemma 1.3.2 and Theorem 1.3.2 of [21], the space S is a nuclear Fréchet space. Moreover, we also note that one may construct a sequence of seminorms {‖·‖_p : p ∈ N} with the property that ‖·‖_p ≤ ‖·‖_{p+1} for each p ∈ N and which also induces the same topology on S as the seminorms {‖·‖_{α,β} : α, β ∈ N} given above. In particular, by Lemma 1.3.3 of [21], one has that for each p ∈ N, there exist a k ∈ N and C > 0 such that

    ‖φ‖_p ≤ C max_{0≤α,β≤k+1} ‖φ‖_{α,β} for all φ ∈ S,   (3)

and, by Lemma 1.3.4 of [21], one has that for each k ∈ N, there exist a p ∈ N and M > 0 such that

    max_{0≤α,β≤k+1} ‖φ‖_{α,β} ≤ M ‖φ‖_p for all φ ∈ S.   (4)

For a precise construction of ‖·‖_p for each p ∈ N, one may consult page 24 of [21].
The set of all linear maps from S to S is denoted by L(S, S) and the strong topology on L(S, S) is defined in the following manner. A subset B of S is said to be bounded if for any neighborhood U of 0 ∈ S, there exists a constant λ > 0 such that λ^{−1}B ⊆ U (see Definition 1.1.7 of [21]). The strong topology on L(S, S) is then given by the following definition (see Theorem 1.2.1 of [21]).

Definition 1.2. For each bounded subset B of S and p ∈ N, let

    q_{B,p}(T) ≡ sup_{φ∈B} ‖Tφ‖_p for all T ∈ L(S, S).

Then, {q_{B,p}} constitutes a family of seminorms on L(S, S) and the topology given by these seminorms is referred to as the strong topology on L(S, S).
1.1.3 The Space of Tempered Distributions
Many of the processes studied in this paper take values in the topological dual of S, which we denote by S′. Recall that S′ is the space of all continuous linear functionals on S. Elements of S′ are referred to as tempered distributions and we now review some relevant facts concerning tempered distributions as well as tempered distribution-valued processes.
For each μ ∈ S′ and φ ∈ S, we denote the duality product of μ and φ by ⟨μ, φ⟩ ≡ μ(φ). The distributional derivative of μ ∈ S′ is denoted by μ′ and is defined to be the unique element of S′ such that

    ⟨μ′, φ⟩ = −⟨μ, φ′⟩ for all φ ∈ S.

It is clear by the definition of S that μ′ is well-defined. For each μ ∈ S′ and t ∈ R, we also define μ_t as the unique element of S′ such that

    ⟨μ_t, φ⟩ = ⟨μ, φ_t⟩ for all φ ∈ S,

where φ_t ∈ S is the function defined by φ_t(·) ≡ φ(· − t).
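For a quick sanity check on these definitions, note that the empirical distributions appearing later in the paper are finite sums of Dirac masses, and translating their atoms at unit rate turns differentiation in t into a pairing with φ′: d/dt ⟨μ_t, φ⟩ = ⟨μ_t, φ′⟩ when μ_t places unit mass at each x_i + t. A minimal numerical sketch (the atoms and test function are hypothetical choices):

```python
import math

phi = lambda x: math.exp(-x * x)               # a rapidly decreasing test function
dphi = lambda x: -2.0 * x * math.exp(-x * x)   # its derivative phi'

pts = [0.3, 1.1, 2.4]                          # atoms of a finite empirical distribution

def pair(t, f):
    # <mu_t, f> where mu_t is the sum of Dirac masses at the points x_i + t
    return sum(f(x + t) for x in pts)

h = 1e-6
numeric = (pair(h, phi) - pair(0.0, phi)) / h  # forward difference of t -> <mu_t, phi>
exact = pair(0.0, dphi)                        # <mu_0, phi'>
```

This transport identity is exactly the mechanism behind the ∫⟨·, φ′⟩ ds terms that appear in the system equations of §2.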
All statements in this paper regarding convergence in S′ are with respect to the strong topology on S′, which we now define. One may consult Section 1.1 of [21] for further details.

Definition 1.3. For each bounded subset B ⊆ S, let

    q_B(μ) ≡ sup_{φ∈B} |⟨μ, φ⟩| for all μ ∈ S′.   (5)

Then, the strong topology on S′ is the topology induced by the family of seminorms {q_B}.
Unfortunately, the space S′ is not metrizable with respect to the strong topology (see Section 2 of [21]). Nevertheless, as we discuss below, one may still usefully speak of weak convergence of S′-valued random elements and processes taking values in S′.
Let D([0,T], S′) denote the space of functions from [0,T] to S′ that are right-continuous on [0,T) with left limits everywhere on (0,T]. If (μ_t)_{t≥0} ∈ D([0,T], S′) and t ∈ [0,T], we then define the tempered distribution ∫_0^t μ_s ds to be the unique element of S′ (see Section 2 of [19]) such that

    ⟨∫_0^t μ_s ds, φ⟩ = ∫_0^t ⟨μ_s, φ⟩ ds for all t ≥ 0, φ ∈ S.
As noted immediately following Definition 1.3 above, the space S′ equipped with the strong topology is not metrizable and so the Skorokhod metric and ensuing Skorokhod J_1-topology may not be defined on D([0,T], S′) in the usual manner. We therefore follow the approach of [21, 26] in defining an appropriate topology on D([0,T], S′). Let Λ be the set of strictly increasing continuous maps from [0,T] onto itself such that for each λ ∈ Λ,

    γ(λ) = sup_{0≤s<t≤T} | ln( (λ_t − λ_s)/(t − s) ) | < ∞.
We then have the following definition (see [21, 26]).

Definition 1.4. For each seminorm q_B defining the strong topology on S′, let

    d^o_{q_B}(μ, ν) = inf_{λ∈Λ} { sup_{0≤t≤T} [ q_B(μ_t − ν_{λ_t}) + γ(λ) ] } for all μ, ν ∈ D([0,T], S′).

The topology on D([0,T], S′) is then defined by the family of pseudometrics d^o_{q_B}.
By part (c) of Theorem 2.4.1 of [21], the topology given in Definition 1.4 above is equivalent to the topology defined by the family of pseudometrics d_{q_B}, where

    d_{q_B}(μ, ν) = inf_{λ∈Λ} { sup_{0≤t≤T} q_B(μ_t − ν_{λ_t}) + sup_{0≤t≤T} |λ_t − t| } for all μ, ν ∈ D([0,T], S′).

We also note that under this topology, D([0,T], S′) is a completely regular topological space [26].


The following result is an important consequence of Proposition 5.2 of [26] regarding weak convergence of processes taking values in the dual of a nuclear Fréchet space. It provides a convenient characterization of weak convergence of processes taking values in S′.

Theorem 1.5 (Mitoma's Theorem). Let (μ^n)_{n≥1} be a sequence of random elements of D([0,T], S′). Then,

    μ^n ⇒ μ in D([0,T], S′)

if the following two statements hold:

1. For each φ ∈ S, the sequence (⟨μ^n, φ⟩)_{n≥1} is tight in D([0,T], R).
2. For φ_1, ..., φ_m ∈ S and t_1, ..., t_m ∈ [0,T],

    (⟨μ^n_{t_1}, φ_1⟩, ..., ⟨μ^n_{t_m}, φ_m⟩) ⇒ (⟨μ_{t_1}, φ_1⟩, ..., ⟨μ_{t_m}, φ_m⟩) in R^m.
We now conclude the technical background section with some comments regarding martingales and, in particular, S′-valued martingales. Let (F_t)_{t≥0} be a filtration on an underlying probability space (Ω, F, P) and let M and N be two R-valued F_t-martingales. The quadratic covariation of M and N is denoted by (<M, N>_t)_{t≥0} and the quadratic variation of M is denoted by (<<M>>_t)_{t≥0} ≡ (<M, M>_t)_{t≥0}. An S′-valued process M is said to be an S′-valued F_t-martingale if for all φ ∈ S, (⟨M_t, φ⟩)_{t≥0} is an R-valued F_t-martingale. For two S′-valued martingales M and N, their tensor quadratic covariation (<M, N>_t)_{t≥0} is given for all t ≥ 0 and all φ, ψ ∈ S by

    <M, N>_t(φ, ψ) ≡ < ⟨M_·, φ⟩, ⟨N_·, ψ⟩ >_t,

and the tensor quadratic variation (<<M>>_t)_{t≥0} of an S′-valued martingale M is given by (<<M>>_t)_{t≥0} ≡ (<M, M>_t)_{t≥0}. Two S′-valued martingales, M and N, are said to be orthogonal if <M, N> = 0 identically. Corresponding notions for the optional quadratic variation process [M] are defined analogously.
2 System Equations

In this section, we obtain semi-martingale decompositions of the tempered distribution-valued age process 𝒜 ≡ (𝒜_t)_{t≥0} and the tempered distribution-valued residual service time process ℛ ≡ (ℛ_t)_{t≥0}. We begin in §2.1 by treating the age process 𝒜 and then move on in §2.2 to treating the residual service time process ℛ.
2.1 Age Process
We consider a G/GI/∞ queue with general arrival process (E_t)_{t≥0} ∈ D([0,∞), R). We assume that E_0 = 0, P-a.s., and, for convenience in our proofs, we also define E_t = 0 for t < 0. We also make the assumption that for each t ≥ 0, we have that E[E_t²] < ∞. Next, for each i ≥ 1, we denote by

    τ_i = inf{t ≥ 0 : E_t ≥ i}

the time of the arrival of the ith customer to the system after time t = 0. We assume that E[τ_i] < ∞ for each i = 1, 2, .... We denote by η_i the service time of the ith customer to arrive to the system after time t = 0 and we assume that {η_i, i ≥ 1} is an i.i.d. sequence of non-negative, mean 1 random variables with cumulative distribution function (cdf) F, complementary cumulative distribution function (ccdf) F̄ = 1 − F, and probability density function (pdf) f. We also assume that the hazard rate function h of F satisfies the following assumption.

Assumption 2.1. The function h ∈ C_b^∞(R_+).
Now let (A_t)_{t≥0} ∈ D([0,∞), D) be such that for each t ≥ 0 and y ≥ 0, the quantity A_t(y) represents the number of customers in the system at time t ≥ 0 that have been in the system for less than or equal to y units of time at time t. For y < 0, we set A_t(y) = 0. At time t = 0, we assume that there are A_0(y) customers present who have been in the system for less than or equal to y ≥ 0 units of time and that there are a total of A_0(∞) customers present. We assume that E[A_0²(∞)] < ∞. For each i = 1, ..., A_0(∞), we denote by

    ã_i = inf{y ≥ 0 : A_0(y) ≥ i}

the arrival time of the ith initial customer to the system. We denote by η̃_i the remaining service time at time t = 0 of the ith initial customer in the system. The distribution of η̃_i, conditional on the arrival time ã_i, is given by

    P(η̃_i > x | ã_i) = (1 − F(ã_i + x)) / (1 − F(ã_i)),  x ≥ 0.   (6)

We denote by f_{ã_i} the conditional pdf associated with this distribution and we set h_{ã_i}(·) = h(ã_i + ·).
We now derive a convenient representation for the system equations for (A_t)_{t≥0} and its tempered distribution-valued counterpart, 𝒜, which we define shortly. We begin by noting that by first principles we have that for each t ≥ 0 and y ≥ 0,

    A_t(y) = Σ_{i=1}^{A_0(∞)} 1{t + ã_i ≤ y} 1{t < η̃_i} + Σ_{i=1}^{E_t} 1{t − τ_i ≤ y} 1{t − τ_i < η_i}.   (7)
Our first result provides an alternative way to write (7). In the following, we set Σ_{i=1}^{0} ≡ 0.
Proposition 2.2. For each t ≥ 0 and y ≥ 0,

    A_t(y) = A_0(y) − Σ_{i=1}^{A_0(∞)} 1{η̃_i ≤ t ∧ (y − ã_i)} − Σ_{i=A_0(y−t)+1}^{A_0(y)} 1{ã_i + η̃_i > y}
             + E_t − Σ_{i=1}^{E_t} 1{η_i ≤ (t − τ_i) ∧ y} − Σ_{i=1}^{E_{(t−y)−}} 1{η_i > y}.   (8)
Proof. By (7), we have that

    A_t(y) = Σ_{i=1}^{A_0(∞)} 1{t + ã_i ≤ y} 1{t < η̃_i} + Σ_{i=1}^{E_t} 1{t − τ_i ≤ y} 1{t − τ_i < η_i}
           = A_0(y) + Σ_{i=1}^{A_0(y)} ( 1{t + ã_i ≤ y} 1{t < η̃_i} − 1 ) + E_t + Σ_{i=1}^{E_t} ( 1{t − τ_i ≤ y} 1{t − τ_i < η_i} − 1 ).   (9)

However,

    1 − 1{t + ã_i ≤ y} 1{t < η̃_i} = 1{t + ã_i ≤ y} 1{η̃_i ≤ t} + 1{t + ã_i > y}   (10)
      = [ 1{t + ã_i ≤ y} 1{η̃_i ≤ t} + 1{t + ã_i > y} 1{ã_i + η̃_i ≤ y} ] + 1{t + ã_i > y} 1{ã_i + η̃_i > y},

and, similarly,

    1 − 1{t − τ_i ≤ y} 1{t − τ_i < η_i} = 1{t − τ_i ≤ y} 1{η_i ≤ t − τ_i} + 1{t − τ_i > y}   (11)
      = [ 1{t − τ_i ≤ y} 1{η_i ≤ t − τ_i} + 1{t − τ_i > y} 1{η_i ≤ y} ] + 1{t − τ_i > y} 1{η_i > y}.

Substituting (11) and (10) into (9) and summing over A_0(y) and E_t completes the proof.
We now provide an intuitive description of each of the terms appearing in (8). The first term represents the number of customers in the system at time t = 0 that have been in the system for less than or equal to y units of time, the second term represents the number of departures by time t ≥ 0 of those initial customers that had total service less than or equal to y units of time at time t = 0, and the third term represents the number of initial customers whose total service time is greater than y units of time and who had been in the system for less than or equal to y units of time at time t = 0 but have been in the system for greater than y units of time at time t ≥ 0. The fourth, fifth and sixth terms represent similar quantities but for those customers that arrived to the system after time t = 0.
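Since Proposition 2.2 is a pathwise identity, it can be sanity-checked on a simulated finite instance. The sketch below builds a hypothetical set of initial ages, residuals, arrival times and service times, and verifies that the direct count (7) agrees with the decomposition (8); all numerical choices are illustrative:

```python
import random

random.seed(7)

# hypothetical finite instance of the system at time 0 and afterwards
a = sorted(random.uniform(0.0, 3.0) for _ in range(25))    # initial ages, increasing as under A_0
eta0 = [random.uniform(0.0, 4.0) for _ in a]               # remaining service times of initial customers
tau = sorted(random.uniform(0.0, 5.0) for _ in range(40))  # arrival times tau_i after time 0
eta = [random.uniform(0.0, 4.0) for _ in tau]              # service times eta_i

def A0(y):
    return sum(1 for ai in a if ai <= y)

def lhs(t, y):
    # direct count, equation (7)
    s = sum(1 for ai, ei in zip(a, eta0) if t + ai <= y and t < ei)
    s += sum(1 for ti, ei in zip(tau, eta) if ti <= t and t - ti <= y and t - ti < ei)
    return s

def rhs(t, y):
    # decomposition, equation (8)
    Et = sum(1 for ti in tau if ti <= t)
    s = A0(y) + Et
    s -= sum(1 for ai, ei in zip(a, eta0) if ei <= min(t, y - ai))             # departed initial customers
    s -= sum(1 for ai, ei in zip(a, eta0) if y - t < ai <= y and ai + ei > y)  # initial customers aged past y
    s -= sum(1 for ti, ei in zip(tau, eta) if ti <= t and ei <= min(t - ti, y))  # departed arrivals
    s -= sum(1 for ti, ei in zip(tau, eta) if ti < t - y and ei > y)           # arrivals aged past y
    return s
```

The two counts agree exactly for every (t, y) pair, which is just the indicator algebra of (10) and (11) summed over customers.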
Now let D⁰ = (D⁰_t)_{t≥0} ∈ D([0,∞), D) be defined by setting

    D⁰_t(y) = Σ_{i=1}^{A_0(∞)} [ 1{η̃_i ≤ t ∧ (y − ã_i)} − ∫_0^{η̃_i ∧ t ∧ (y − ã_i)} h_{ã_i}(u) du ],  t ≥ 0, y ≥ 0,   (12)

and set D⁰_t(y) = 0 for y < 0, and let D = (D_t)_{t≥0} ∈ D([0,∞), D) be defined by setting

    D_t(y) = Σ_{i=1}^{E_t} [ 1{η_i ≤ (t − τ_i) ∧ y} − ∫_0^{η_i ∧ (t − τ_i) ∧ y} h(u) du ],  t ≥ 0, y ≥ 0,   (13)

and set D_t(y) = 0 for y < 0. It then follows from (8) that for each t ≥ 0 and y ≥ 0, we may write

    A_t(y) = A_0(y) + E_t − D⁰_t(y) − D_t(y) − Σ_{i=1}^{A_0(∞)} ∫_0^{η̃_i ∧ t ∧ (y − ã_i)} h_{ã_i}(u) du
             − Σ_{i=1}^{E_t} ∫_0^{η_i ∧ (t − τ_i) ∧ y} h(u) du − Σ_{i=A_0(y−t)+1}^{A_0(y)} 1{ã_i + η̃_i > y} − Σ_{i=1}^{E_{(t−y)−}} 1{η_i > y}.   (14)
The above expression for A_t(y) will become useful in a moment. However, we next move on to expressing the age process as a tempered distribution-valued process using the Schwartz space S defined in (1). In particular, we associate with the process A defined in (7) the S′-valued process 𝒜 = (𝒜_t)_{t≥0} such that for each t ≥ 0 and φ ∈ S we set

    ⟨𝒜_t, φ⟩ = ∫_R φ(y) dA_t(y).   (15)
In a similar manner, we associate the S′-valued processes 𝒟⁰ = (𝒟⁰_t)_{t≥0} and 𝒟 = (𝒟_t)_{t≥0} with D⁰ and D, respectively. That is, for each t ≥ 0 and φ ∈ S we set

    ⟨𝒟⁰_t, φ⟩ = ∫_R φ(y) dD⁰_t(y) and ⟨𝒟_t, φ⟩ = ∫_R φ(y) dD_t(y).

We also associate the S′-valued random variable 𝒜_0 with A_0 by setting

    ⟨𝒜_0, φ⟩ = ∫_R φ(y) dA_0(y),  φ ∈ S.

It is straightforward to see that for each t ≥ 0, the quantities 𝒜_t, 𝒟⁰_t and 𝒟_t are well-defined elements of S′. Moreover, since for each fixed φ ∈ S the sample paths of (⟨𝒜_t, φ⟩)_{t≥0}, (⟨𝒟⁰_t, φ⟩)_{t≥0} and (⟨𝒟_t, φ⟩)_{t≥0} all lie in D([0,∞), R), P-a.s., it follows that 𝒜, 𝒟⁰, 𝒟 ∈ D([0,∞), S′), P-a.s.
For the remainder of the paper, we now replace the hazard rate function h : R_+ → R with a function h̃ : R → R such that h̃(x) = h(x) for x ≥ 0 and h̃ ∈ C_b^∞(R). In a similar manner, we replace the cdf F and the ccdf F̄ with corresponding extensions F̃, F̄̃ ∈ C_b^∞(R). For ease of notation, we continue to refer to h̃, F̃ and F̄̃ as h, F and F̄, respectively.
Our next step is to use the expression (14) in order to provide a convenient expression for the tempered distribution-valued process 𝒜. We begin by noting that integrating test functions φ ∈ S term-by-term in (14), it follows that for each t ≥ 0 one has that

    ⟨𝒜_t, φ⟩ = ⟨𝒜_0, φ⟩ − ⟨𝒟⁰_t + 𝒟_t, φ⟩ − Σ_{i=1}^{A_0(∞)} ∫_{ã_i}^{ã_i+(η̃_i ∧ t)} φ(y)h(y) dy − Σ_{i=1}^{E_t} ∫_0^{η_i ∧ (t−τ_i)} φ(y)h(y) dy
              − ∫_{R_+} φ(y) d( Σ_{i=A_0(y−t)+1}^{A_0(y)} 1{ã_i + η̃_i > y} + Σ_{i=1}^{E_{(t−y)−}} 1{η_i > y} ).   (16)
The following two propositions now allow us to further simplify the expression in (16). We first have the following.

Proposition 2.3. For each t ≥ 0,

    Σ_{i=1}^{A_0(∞)} ∫_{ã_i}^{ã_i+(η̃_i ∧ t)} φ(y)h(y) dy + Σ_{i=1}^{E_t} ∫_0^{η_i ∧ (t−τ_i)} φ(y)h(y) dy = ∫_0^t ⟨𝒜_s, hφ⟩ ds.
Proof. For each t ≥ 0,

    Σ_{i=1}^{A_0(∞)} ∫_{ã_i}^{ã_i+(η̃_i ∧ t)} φ(y)h(y) dy + Σ_{i=1}^{E_t} ∫_0^{η_i ∧ (t−τ_i)} φ(y)h(y) dy
      = Σ_{i=1}^{A_0(∞)} ∫_0^t 1{0 ≤ s < η̃_i} φ(s + ã_i) h(s + ã_i) ds + Σ_{i=1}^{E_t} ∫_0^t 1{0 ≤ s − τ_i < η_i} φ(s − τ_i) h(s − τ_i) ds
      = ∫_0^t ( Σ_{i=1}^{A_0(∞)} 1{0 ≤ s < η̃_i} φ(s + ã_i) h(s + ã_i) + Σ_{i=1}^{E_t} 1{0 ≤ s − τ_i < η_i} φ(s − τ_i) h(s − τ_i) ) ds
      = ∫_0^t ⟨𝒜_s, hφ⟩ ds.

This completes the proof.
Next, we have the following.

Proposition 2.4. For each t ≥ 0,

    −∫_{R_+} φ(y) d( Σ_{i=A_0(y−t)+1}^{A_0(y)} 1{ã_i + η̃_i > y} + Σ_{i=1}^{E_{(t−y)−}} 1{η_i > y} ) = E_t φ(0) + ∫_0^t ⟨𝒜_s, φ′⟩ ds.
Proof. Let t ≥ 0. Then, integrating by parts we have that

    −E_t φ(0) − ∫_{R_+} φ(y) d( Σ_{i=A_0(y−t)+1}^{A_0(y)} 1{ã_i + η̃_i > y} + Σ_{i=1}^{E_{(t−y)−}} 1{η_i > y} )
      = ∫_{R_+} ( Σ_{i=1}^{A_0(∞)} 1{ã_i ≤ y, ã_i + η̃_i > y, y − ã_i < t} + Σ_{i=1}^{E_t} 1{η_i > y, τ_i + y < t} ) φ′(y) dy
      = Σ_{i=1}^{A_0(∞)} ∫_{R_+} 1{ã_i ≤ y, ã_i + η̃_i > y, y − ã_i < t} φ′(y) dy + Σ_{i=1}^{E_t} ∫_{R_+} 1{η_i > y, τ_i + y < t} φ′(y) dy
      = Σ_{i=1}^{A_0(∞)} ∫_0^t 1{0 ≤ s < η̃_i} φ′(s + ã_i) ds + Σ_{i=1}^{E_t} ∫_0^t 1{0 ≤ s − τ_i < η_i} φ′(s − τ_i) ds
      = ∫_0^t ( Σ_{i=1}^{A_0(∞)} 1{0 ≤ s < η̃_i} φ′(s + ã_i) + Σ_{i=1}^{E_t} 1{0 ≤ s − τ_i < η_i} φ′(s − τ_i) ) ds
      = ∫_0^t ( ∫_{R_+} φ′(u) dA_s(u) ) ds
      = ∫_0^t ⟨𝒜_s, φ′⟩ ds.

This completes the proof.
Now note that combining Propositions 2.3 and 2.4 with system equation (16), one finds that for each t ≥ 0 and φ ∈ S,

    ⟨𝒜_t, φ⟩ = ⟨𝒜_0, φ⟩ + ⟨ℰ_t, φ⟩ − ⟨𝒟⁰_t + 𝒟_t, φ⟩ − ∫_0^t ⟨𝒜_s, hφ⟩ ds + ∫_0^t ⟨𝒜_s, φ′⟩ ds,   (17)

where we define the S′-valued process ℰ = (ℰ_t)_{t≥0} to be such that ⟨ℰ_t, φ⟩ = E_t φ(0) for each φ ∈ S and t ≥ 0. In general, we refer to (17) as the semi-martingale decomposition of 𝒜. This will become clear in §4 where we show that the process (𝒟⁰_t + 𝒟_t)_{t≥0} is a martingale.
2.2 Residual Service Time Process
We next move on to analyzing the residual service time process ℛ. As in §2.1, we assume that we have a G/GI/∞ queue in which customers arrive to the system according to a general arrival process (E_t)_{t≥0} ∈ D([0,∞), R), where we assume that E_0 = 0, P-a.s. For each i ≥ 1, we denote by

    τ_i = inf{t ≥ 0 : E_t ≥ i}

the time of the ith customer arrival to the system after time t = 0 and we let η_i be the service time of the ith customer to arrive to the system after time t = 0. We assume that {η_i, i ≥ 1} is an i.i.d. sequence of non-negative, mean 1 random variables with cumulative distribution function F and probability density function f. We also assume in this subsection that the hazard rate function h of F satisfies Assumption 2.1 of §2.1.
Now let R = (R_t)_{t≥0} ∈ D([0,∞), D) be such that for each t ≥ 0 and y ∈ R, the quantity R_t(y) denotes the number of customers in the system at time t ≥ 0 that have less than or equal to y ∈ R units of service remaining. For y < 0, we interpret R_t(y) as the number of customers who have departed from the system by time t + y. Thus, (R_t(y))_{y∈R} not only keeps track of the residual service times of those customers present in the system at time t, but it also records the departure times of all customers who have departed from the system by time t. We assume that at time t = 0 there are R_0(y) customers in the system that have less than or equal to y units of service time remaining. By first principles, it then follows that for each t ≥ 0 and y ∈ R we may write

    R_t(y) = R_0(t + y) + Σ_{i=1}^{E_t} 1{η_i − (t − τ_i) ≤ y}.   (18)
The following proposition now presents an alternative expression for the righthand side of (18).

Proposition 2.5. For each t ≥ 0 and y ∈ R,

    R_t(y) = R_0(y) + (R_0(t + y) − R_0(y)) + Σ_{i=1}^{E_t} 1{η_i ≤ y} + Σ_{i=1}^{E_t} 1{η_i > y, (τ_i + η_i) − t ≤ y}.   (19)
Proof. By (18),

    R_t(y) = R_0(t + y) + Σ_{i=1}^{E_t} 1{η_i − (t − τ_i) ≤ y}
           = R_0(y) + (R_0(t + y) − R_0(y)) + Σ_{i=1}^{E_t} 1{η_i − (t − τ_i) ≤ y}.   (20)

However, note that

    1{η_i − (t − τ_i) ≤ y} = 1{η_i ≤ y} + 1{η_i > y, η_i − (t − τ_i) ≤ y}.   (21)

Substituting (21) into (20) and summing over E_t completes the proof.
We now give an intuitive description for each of the terms appearing in (19). The first term represents the number of customers in the system at time t = 0 with less than or equal to y units of service time remaining. The second term represents the number of customers in the system at time t = 0 with greater than y units of service time remaining but which at time t ≥ 0 have less than or equal to y units of service time remaining. The third and fourth terms in (19) have analogous descriptions but for those customers that arrive to the system after time t = 0.
Now let G = (G_t)_{t≥0} ∈ D([0,∞), D) be defined by setting

    G_t(y) = Σ_{i=1}^{E_t} ( 1{η_i ≤ y} − F(y) ),  t ≥ 0, y ∈ R.

By Proposition 2.5, it then follows that for each t ≥ 0 and y ∈ R, R_t(y) may be written as

    R_t(y) = R_0(y) + (R_0(t + y) − R_0(y)) + G_t(y) + E_t F(y) + Σ_{i=1}^{E_t} 1{η_i > y, (τ_i + η_i) − t ≤ y}.   (22)
The above representation for R_t(y) will be useful in a moment. However, we first proceed to define tempered distribution-valued versions of the processes defined above. In particular, we let ℛ = (ℛ_t)_{t≥0} be the S′-valued process associated with R such that for each t ≥ 0 and φ ∈ S we have that

    ⟨ℛ_t, φ⟩ = ∫_R φ(y) dR_t(y)

and we let 𝒢 = (𝒢_t)_{t≥0} be the S′-valued process associated with G such that for each t ≥ 0 and φ ∈ S we have that

    ⟨𝒢_t, φ⟩ = ∫_R φ(y) dG_t(y).

We also define the elements ℱ and ℱ_e of S′ by setting

    ⟨ℱ, φ⟩ ≡ ∫_0^∞ φ(x) dF(x),  φ ∈ S,

and

    ⟨ℱ_e, φ⟩ ≡ ∫_0^∞ φ(x)(1 − F(x)) dx,  φ ∈ S.
Now note that integrating test functions φ ∈ S against each of the terms in (22), it follows that for each φ ∈ S and t ≥ 0,

    ⟨ℛ_t, φ⟩ = ⟨ℛ_0, φ⟩ + ∫_R φ(y) d(R_0(t + y) − R_0(y)) + ⟨𝒢_t, φ⟩ + E_t ⟨ℱ, φ⟩ + ∫_R φ(y) d( Σ_{i=1}^{E_t} 1{η_i > y, (τ_i + η_i) − t ≤ y} ).   (23)
The following proposition now allows us to simplify the righthand side of (23).

Proposition 2.6. For each t ≥ 0,

    ∫_R φ(y) d(R_0(t + y) − R_0(y)) + ∫_R φ(y) d( Σ_{i=1}^{E_t} 1{η_i > y, (τ_i + η_i) − t ≤ y} ) = −∫_0^t ⟨ℛ_s, φ′⟩ ds.   (24)
Proof. The proof parallels the proof of Proposition 2.4. Let t ≥ 0. Then, integrating by parts, we have that

    −∫_R φ(y) d(R_0(t + y) − R_0(y)) − ∫_R φ(y) d( Σ_{i=1}^{E_t} 1{η_i > y, (τ_i + η_i) − t ≤ y} )
      = ∫_R ( Σ_{i=1}^{R_0(∞)} 1{y < η̃_i ≤ t + y} ) φ′(y) dy + ∫_R ( Σ_{i=1}^{E_t} 1{η_i > y, (τ_i + η_i) − t ≤ y} ) φ′(y) dy
      = Σ_{i=1}^{R_0(∞)} ∫_R 1{y < η̃_i ≤ t + y} φ′(y) dy + Σ_{i=1}^{E_t} ∫_R 1{η_i > y, (τ_i + η_i) − t ≤ y} φ′(y) dy
      = Σ_{i=1}^{R_0(∞)} ∫_R 1{η̃_i − t ≤ y < η̃_i} φ′(y) dy + Σ_{i=1}^{E_t} ∫_R 1{(τ_i + η_i) − t ≤ y < η_i} φ′(y) dy
      = Σ_{i=1}^{R_0(∞)} ∫_0^t φ′(η̃_i − s) ds + Σ_{i=1}^{E_t} ∫_0^t 1{τ_i ≤ s} φ′((τ_i + η_i) − s) ds
      = ∫_0^t ( Σ_{i=1}^{R_0(∞)} φ′(η̃_i − s) + Σ_{i=1}^{E_t} 1{τ_i ≤ s} φ′((τ_i + η_i) − s) ) ds
      = ∫_0^t ( ∫_R φ′(u) dR_s(u) ) ds
      = ∫_0^t ⟨ℛ_s, φ′⟩ ds,

where η̃_i denotes the remaining service time at time t = 0 of the ith initial customer in the system. This completes the proof.
Substituting (24) into (23), one now obtains that for each t ≥ 0 and φ ∈ S,

    ⟨ℛ_t, φ⟩ = ⟨ℛ_0, φ⟩ + ⟨𝒢_t, φ⟩ + E_t ⟨ℱ, φ⟩ − ∫_0^t ⟨ℛ_s, φ′⟩ ds.   (25)

We refer to (25) as the semi-martingale decomposition of ℛ. This is due to the fact that in §4 it will be shown that the process 𝒢 in (25) is a martingale. We also point out the similarity of (25) with (4) of [8].
3 Regulator Map Result

Let B : S → S be a continuous linear operator and for each μ ∈ D([0,T], S′), consider the solution ν ∈ D([0,T], S′) to the integral equation

    ⟨ν_t, φ⟩ = ⟨μ_t, φ⟩ + ∫_0^t ⟨ν_s, Bφ⟩ ds,  t ∈ [0,T], φ ∈ S.   (26)

In this section, we first show that under some mild restrictions on B, (26) defines a continuous function Ψ_B : D([0,T], S′) → D([0,T], S′) mapping μ to ν. We then proceed in §3.1 and §3.2 to study particular continuous linear operators associated with the age process and the residual service time process, respectively.
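Before treating the general operator setting, the structure of (26) and its resolvent-type solution can already be seen in a one-dimensional analogue: take the test-function space to be R and Bφ = bφ, so that the associated semi-group is S_tφ = e^{bt}φ and the solution becomes ν_t = μ_t + ∫_0^t e^{b(t−s)} b μ_s ds. The sketch below (with an arbitrary choice of b and a hypothetical forcing path μ) checks this closed form against a Picard iteration for the integral equation:

```python
import math

b = -0.8                              # generator B phi = b * phi in the scalar analogue
mu = lambda t: math.sin(t) + 1.0      # hypothetical forcing path mu_t

def nu_formula(t, n=4000):
    # variation-of-constants formula, the analogue of (27): nu_t = mu_t + int_0^t e^{b(t-s)} b mu_s ds
    h = t / n
    return mu(t) + h * sum(math.exp(b * (t - (i + 0.5) * h)) * b * mu((i + 0.5) * h)
                           for i in range(n))

def nu_picard(t, n=4000, iters=60):
    # solve the fixed-point equation, the analogue of (26), by Picard iteration on a grid
    h = t / n
    grid = [h * i for i in range(n + 1)]
    nu = [mu(ti) for ti in grid]
    for _ in range(iters):
        new, integ = [mu(0.0)], 0.0
        for i in range(1, n + 1):
            integ += 0.5 * h * (b * nu[i - 1] + b * nu[i])   # trapezoid rule for int_0^{t_i} b * nu_s ds
            new.append(mu(grid[i]) + integ)
        nu = new
    return nu[-1]
```

The agreement of the two computations is the scalar shadow of the uniqueness statement in Theorem 3.3 below; the infinite-dimensional case replaces e^{bt} with the semi-group (S_t) acting on test functions.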
We begin with the following definition from [19].

Definition 3.1. A family (S_t)_{t≥0} of linear operators on S is said to be a strongly-continuous (C_0, 1) semi-group if the following three conditions are satisfied:

1. S_0 = I, where I is the identity operator, and, for all s, t ≥ 0, S_s S_t = S_{s+t}.

2. The map t ↦ S_t is continuous in the strong topology of L(S, S). That is, if t_n → t in R_+, then for any bounded subset K ⊆ S and p ≥ 1,

    sup_{φ∈K} ‖S_{t_n}φ − S_tφ‖_p → 0.

3. For each q ≥ 0, there exist numbers M_q, σ_q and p ≥ q such that for all φ ∈ S and t ≥ 0,

    ‖S_tφ‖_q ≤ M_q e^{σ_q t} ‖φ‖_p.
Remark 3.2. Note that Condition 2 of Definition 3.1 is stronger than the corresponding condition for a (C_0, 1) semi-group as given, for example, in [19]. The weaker definition in [19] only requires that the map t ↦ S_t be continuous in the weak topology of L(S, S). That is, if t_n → t in R_+, then for all φ ∈ S and m ≥ 1, ‖S_{t_n}φ − S_tφ‖_m → 0.
Recall now that for a family (S_t)_{t≥0} of linear operators on S, the infinitesimal generator B of (S_t)_{t≥0} is defined to be such that

    Bφ = lim_{t↓0} (S_tφ − φ)/t in S,

for all such φ ∈ S that the limit on the righthand side above exists. We refer to the set of such φ as D(B), the domain of B. We now have the following result, which is our main result regarding the integral equation (26).
Theorem 3.3. Let B ∈ L(S, S) be the infinitesimal generator of a strongly-continuous (C_0, 1) semi-group (S_t)_{t≥0}. Then, for each μ ∈ D([0,T], S′), the equation (26) has a unique solution given by

    ⟨ν_t, φ⟩ = ⟨μ_t, φ⟩ + ∫_0^t ⟨μ_s, S_{t−s}Bφ⟩ ds,  t ∈ [0,T], φ ∈ S.   (27)

Furthermore, (27) defines a continuous function Ψ_B : D([0,T], S′) → D([0,T], S′) mapping μ to ν.
Proof. That Ψ_B is a well-defined function from D([0,T], S′) to D([0,T], S′) and the form of the solution (27) follows from Theorem 2.1 of [19] (see also Corollary 2.2 of [19]).

We now show that Ψ_B is continuous. By (27), it suffices to show that the function mapping D([0,T], S′) to D([0,T], S′) defined by μ ↦ ∫_0^· S*_{·−s}B*μ_s ds, where B* and S*_t denote the adjoint operators of B and S_t, respectively, is continuous. Let (μ^n)_{n≥1} be a sequence converging to μ in D([0,T], S′). Then, by Definition 1.4 and the comment below it (see also Theorem 2.4.1 of [21]), there exists a sequence (λ^n)_{n≥1} of strictly increasing homeomorphisms of the interval [0,T] such that for each bounded set K ⊆ S,

    sup_{0≤t≤T} sup_{φ∈K} |⟨μ^n_{λ^n_t} − μ_t, φ⟩| → 0 and sup_{0≤t≤T} |λ^n_t − t| → 0 as n → ∞.   (28)

Moreover, it suffices to consider homeomorphisms (λ^n)_{n≥1} that are absolutely continuous with respect to Lebesgue measure on [0,T] having corresponding derivatives (λ̇^n)_{n≥1} satisfying ‖λ̇^n − 1‖_T → 0 as n → ∞ (see pages 112-114 of [1]). It then follows that for each t ∈ [0,T] and φ ∈ S we may write
    |⟨∫_0^t B*S*_{t−s}μ^n_s ds − ∫_0^{λ^n_t} B*S*_{λ^n_t−s}μ_s ds, φ⟩|
      = |⟨∫_0^t B*S*_{t−s}μ^n_s ds − ∫_0^t B*S*_{λ^n_t−λ^n_s}μ_{λ^n_s} λ̇^n_s ds, φ⟩|
      ≤ ‖λ̇^n − 1‖_T |∫_0^t ⟨μ_{λ^n_s}, S_{λ^n_t−λ^n_s}Bφ⟩ ds| + |∫_0^t ⟨μ_{λ^n_s}, (S_{t−s} − S_{λ^n_t−λ^n_s})Bφ⟩ ds|
        + |∫_0^t ⟨μ^n_s − μ_{λ^n_s}, S_{t−s}Bφ⟩ ds|.   (29)
In order to complete the proof, it now suffices to show that for each bounded subset K ⊆ S, the following three limits hold:

    sup_{t∈[0,T]} sup_{φ∈K} ‖λ̇^n − 1‖_T |∫_0^t ⟨μ_{λ^n_s}, S_{λ^n_t−λ^n_s}Bφ⟩ ds| → 0 as n → ∞,   (30)

    sup_{t∈[0,T]} sup_{φ∈K} |∫_0^t ⟨μ_{λ^n_s}, (S_{t−s} − S_{λ^n_t−λ^n_s})Bφ⟩ ds| → 0 as n → ∞,   (31)

    sup_{t∈[0,T]} sup_{φ∈K} |∫_0^t ⟨μ^n_s − μ_{λ^n_s}, S_{t−s}Bφ⟩ ds| → 0 as n → ∞.   (32)

We begin with (30). First note that for each bounded subset K ⊆ S,

    sup_{s∈[0,T]} sup_{φ∈K} |⟨μ_s, φ⟩| < ∞.   (33)
Now let K ⊆ S be an arbitrary bounded set. We show that the set K′ ≡ {S_uBφ, u ∈ [0,T], φ ∈ K} is bounded in S as well. Indeed, note that by Definition 3.1, we have that for each q ≥ 0 there exist M_q, σ_q and p ≥ q such that

    sup_{ψ∈K′} ‖ψ‖_q = sup_{u∈[0,T], φ∈K} ‖S_uBφ‖_q ≤ M_q e^{σ_q T} sup_{φ∈K} ‖Bφ‖_p < ∞,   (34)

where the final inequality follows from the fact that continuous linear operators from S to S map bounded sets of S to bounded sets of S (see Theorem 1.2.1 of [21]). Thus, K′ = {S_uBφ, u ∈ [0,T], φ ∈ K} is bounded and so by (33),

    sup_{s∈[0,T]} sup_{u∈[0,T], φ∈K} |⟨μ_s, S_uBφ⟩| < ∞.

It therefore follows, since ‖λ̇^n − 1‖_T → 0 as n → ∞, that

    sup_{t∈[0,T]} sup_{φ∈K} ‖λ̇^n − 1‖_T |∫_0^t ⟨μ_{λ^n_s}, S_{λ^n_t−λ^n_s}Bφ⟩ ds| ≤ ‖λ̇^n − 1‖_T T sup_{s∈[0,T]} sup_{u∈[0,T], φ∈K} |⟨μ_s, S_uBφ⟩| → 0,   (35)

as n → ∞, which implies (30).
We next prove the limit (31). First note that by Lemma 2.2 of [18], there exist 0 and q 1
such that
sup
t[0,T]
sup
K

_
t
0

n
s
, (S
ts
S

n
t

n
s
)B
_
ds

(36)
T sup
s[0,T]
sup
0wvT
sup
K

s
, (S
vw
S

n
v

n
w
)B
_

T sup
0wvT
sup
K
|(S
vw
S

n
v

n
w
)B|
q
.
Next, note that for w v we may write
S
vw
S

n
v

n
w
= 1
(vw)(
n
v

n
w
)0
(S
(vw)(
n
v

n
w
)
I)S

n
v

n
w
+1
(
n
v

n
w
)(vw)0
(I S
(
n
v

n
w
)(vw)
)S
vw
.
Thus, recalling the denition of the set K

= S
u
B, u [0, T], K, it follows that
sup
0wvT
sup
K
|(S
vw
S

n
v

n
w
)B|
q
(37)
sup
0wvT
sup
K
|1
(vw)(
n
v

n
w
)0
(S
(vw)(
n
v

n
w
)
I)S

n
v

n
w
B|
q
+ sup
0wvT
sup
K
|1
(
n
v

n
w
)(vw)0
(I S
(
n
v

n
w
)(vw)
)S
vw
B|
q
sup
0wvT
sup
K

|1
(vw)(
n
v

n
w
)0
(S
(vw)(
n
v

n
w
)
I)|
q
+ sup
0wvT
sup
K

|1
(
n
v

n
w
)(vw)0
(I S
(
n
v

n
w
)(vw)
)|
q
.
However, note that since |

n
1|
T
0 as n , it follows that
sup
0wvT
[(v w) (
n
v

n
w
)[ 0 as n .
Thus, since by (34) the set K

o is bounded, it follows by Part 2 of Denition 3.1 that


sup
0wvT
sup
K

|1
(vw)(
n
v

n
w
)0
(S
(vw)(
n
v

n
w
)
I)|
q
+ sup
0wvT
sup
K

|1
(
n
v

n
w
)(vw)0
(I S
(
n
v

n
w
)(vw)
)|
q
0 as n ,
which, by (36) and (37), implies (31).
Finally, (32) follows from the fact that
sup
t[0,T]
sup
K

_
t
0

n
s

n
s
, S
ts
B
_
ds

T sup
s[0,T]
sup
u[0,T],K

n
s

n
s
, S
u
B
_

0, (38)
as n , where the nal convegence follows from (28) and (34). This completes the proof.
3.1 Age Process

Now let B_A be the linear operator defined on 𝒮 such that

    B_A φ = φ′ − hφ  for all φ ∈ 𝒮.   (39)

Our main result in this subsection is to verify that B_A generates a strongly-continuous (C_0, 1) semi-group. This will then be useful in §5.1 and §6.1, where we prove our fluid and diffusion limits, respectively, for the age process. In particular, by Theorem 3.3 of the preceding subsection and (17) of §2.1, this will then allow us to write

    𝒜 = Φ_{B_A}(𝒜_0 + ℰ − (ℳ_0 + ℳ)),   (40)

where the map Φ_{B_A} : D([0, T], 𝒮′) → D([0, T], 𝒮′) is a continuous map. We begin with the following lemma. Its proof may be found in the Appendix.

Lemma 3.4.

1. For each n ≥ 1 and t ≥ 0,

    sup_{x≥0} |(F̄(x + t)/F̄(x))^{(n)}| < ∞.   (41)

2. For each T > 0, there exists a sequence (M_n)_{n≥0} such that for each s, t ∈ [0, T],

    sup_{x≥0} |(F̄(x + t)/F̄(x) − F̄(x + s)/F̄(x))^{(n)}| ≤ M_n |t − s|.   (42)

Proof. See Appendix.

Next, we have the following.

Lemma 3.5. For each φ ∈ 𝒮, t ∈ R and λ, κ ∈ N, we have

    ‖τ_t φ‖_{λ,κ} ≤ (1 + |t|^λ) 2^λ max_{0≤i≤λ} ‖φ‖_{i,κ}.   (43)

Proof. For each φ ∈ 𝒮, t ∈ R and λ, κ ∈ N, we have

    ‖τ_t φ‖_{λ,κ} = ‖φ(· − t)‖_{λ,κ} = sup_{x∈R} |x^λ φ^{(κ)}(x − t)|
    = sup_{x∈R} |[t + (x − t)]^λ φ^{(κ)}(x − t)|
    = sup_{x∈R} |Σ_{i=0}^λ (λ choose i) t^{λ−i} (x − t)^i φ^{(κ)}(x − t)|
    ≤ Σ_{i=0}^λ (λ choose i) |t|^{λ−i} ‖φ‖_{i,κ}
    ≤ (1 + |t|^λ) 2^λ max_{0≤i≤λ} ‖φ‖_{i,κ}.

This completes the proof.
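The translation bound (43) can be sanity-checked numerically; the sketch below (an illustration, not part of the paper) does so for a Gaussian test function with λ = 2 and κ = 0, approximating each sup norm by a maximum over a wide grid. All concrete choices here are illustrative.

```python
import numpy as np

phi = lambda x: np.exp(-x ** 2)      # Schwartz-class test function (illustrative)
x = np.linspace(-30.0, 30.0, 60001)  # wide grid standing in for the sup over R

def seminorm(f, lam):
    # ||f||_{lam,0} = sup_x |x^lam f(x)|, approximated on the grid
    return np.max(np.abs(x ** lam * f(x)))

lam = 2
norms = [seminorm(phi, i) for i in range(lam + 1)]  # ||phi||_{i,0}, i = 0..lam

def check(t):
    lhs = seminorm(lambda u: phi(u - t), lam)        # ||tau_t phi||_{2,0}
    rhs = (1.0 + abs(t) ** lam) * 2 ** lam * max(norms)
    return lhs <= rhs

ok = all(check(t) for t in (-3.0, -0.5, 0.0, 0.5, 3.0))
```

The bound is far from tight (the factor 2^λ comes from summing the binomial coefficients), so the inequality holds with a wide margin here.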
The following is now our main result of this subsection.

Proposition 3.6. The linear operator B_A defined by (39) generates a strongly-continuous (C_0, 1) semi-group (S^A_t)_{t≥0} given by

    S^A_t φ = (1/F̄) τ_{−t}(F̄ φ)  for all φ ∈ 𝒮,   (44)

that is, S^A_t φ(x) = (F̄(x + t)/F̄(x)) φ(x + t).

Proof. We first check that B_A is indeed the infinitesimal generator of the semi-group given by (44). In order to do so, it suffices to check that for each λ, κ ∈ N,

    lim_{t↓0} ‖(S^A_t φ − φ)/t − (φ′ − hφ)‖_{λ,κ} = 0.   (45)

We begin by noting that for each t ≥ 0, x ∈ R and φ ∈ 𝒮, we have that

    |S^A_t φ(x)| = |(1 − F(x + t))/(1 − F(x))| |τ_{−t} φ(x)| ≤ e^{‖h‖_∞ t} |τ_{−t} φ(x)|,

and so it follows that S^A_t φ ∈ 𝒮. Now let x ∈ R be fixed and note that for each t ≥ 0, we may write

    S^A_t φ(x) = exp(−∫_x^{x+t} h(v) dv) φ(x + t).

Hence, by Taylor's theorem, expanding in terms of t we obtain that we may write

    S^A_t φ(x) = φ(x) + (φ′(x) − h(x)φ(x)) t + R(x, t),   (46)

where the remainder term R(x, t) has the form

    R(x, t) = (1/2) ∫_0^t (d²/du²)[exp(−∫_x^{x+u} h(v) dv) φ(x + u)] u du.

Now differentiating with respect to x in (46), we obtain that for each λ, κ ∈ N,

    lim_{t↓0} ‖(S^A_t φ − φ)/t − (φ′ − hφ)‖_{λ,κ} = lim_{t↓0} sup_{x∈R} |x^λ R^{(κ)}(x, t)/t|,

where R^{(κ)}(x, t) denotes the κth derivative of R(x, t) with respect to x. Hence, in order to verify (45) it now suffices to show that for each λ, κ ∈ N,

    lim_{t↓0} sup_{x∈R} |x^λ R^{(κ)}(x, t)/t| = 0.

First note that for each x ∈ R fixed, we may write

    x^λ R^{(κ)}(x, t) = (1/2) ∫_0^t x^λ [(d²/du²)(exp(−∫_x^{x+u} h(v) dv) φ(x + u))]^{(κ)} u du.   (47)

However, since h ∈ C_b^∞(R_+) and φ ∈ 𝒮, it is straightforward to show that for each λ, κ ∈ N, there exists a constant C_{λ,κ} < ∞ such that for u sufficiently small,

    sup_{x∈R} |x^λ [(d²/du²)(exp(−∫_x^{x+u} h(v) dv) φ(x + u))]^{(κ)}| < C_{λ,κ}.

Hence, by (47) we obtain that

    lim_{t↓0} sup_{x∈R} |x^λ R^{(κ)}(x, t)/t| ≤ lim_{t↓0} (C_{λ,κ}/2t) ∫_0^t u du = lim_{t↓0} (C_{λ,κ}/4) t = 0.

This now completes the verification of the fact that B_A is the infinitesimal generator of the semi-group (S^A_t)_{t≥0} given by (44).

We next verify that the semi-group (S^A_t)_{t≥0} is a strongly-continuous (C_0, 1) semi-group. It is straightforward to see that Part 1 of Definition 3.1 is satisfied. We next check that Part 2 of Definition 3.1 is satisfied. Consider 0 ≤ s < t, a bounded set K ⊂ 𝒮, and λ, κ ∈ N. We then have that we may write

    sup_{φ∈K} ‖S^A_s φ − S^A_t φ‖_{λ,κ} = sup_{φ∈K} ‖(1/F̄) τ_{−s}(F̄ φ) − (1/F̄) τ_{−t}(F̄ φ)‖_{λ,κ}
    = sup_{φ∈K} sup_{x∈R} |x^λ [(F̄(x+s)/F̄(x)) φ(x+s) − (F̄(x+t)/F̄(x)) φ(x+t)]^{(κ)}|
    ≤ sup_{φ∈K} sup_{x∈R} |x^λ [((F̄(x+s) − F̄(x+t))/F̄(x)) φ(x+s)]^{(κ)}| + sup_{φ∈K} sup_{x∈R} |x^λ [(F̄(x+t)/F̄(x)) (φ(x+s) − φ(x+t))]^{(κ)}|.   (48)

We now handle each of the terms in (48) separately.

For the first term in (48), note that by Lemmas 3.4 and 3.5 we have that

    sup_{φ∈K} sup_{x∈R} |x^λ [((F̄(x+s) − F̄(x+t))/F̄(x)) φ(x+s)]^{(κ)}|
    = sup_{φ∈K} sup_{x∈R} |x^λ Σ_{i=0}^κ (κ choose i) (F̄(x+s)/F̄(x) − F̄(x+t)/F̄(x))^{(i)} φ^{(κ−i)}(x+s)|
    ≤ |t − s| Σ_{i=0}^κ M_i (κ choose i) sup_{φ∈K} sup_{x∈R} |x^λ φ^{(κ−i)}(x+s)|
    ≤ |t − s| (1 + |t|^λ) 2^λ Σ_{i=0}^κ M_i (κ choose i) sup_{φ∈K} max_{0≤j≤λ} ‖φ‖_{j,κ−i}.   (49)

We next focus on the second term in (48). For each n ≥ 1, denote the left-hand side of (41) by L_n. By the mean value theorem, for each x ∈ R there exists an r_x ∈ [s, t] such that

    sup_{φ∈K} sup_{x∈R} |x^λ [(F̄(x+t)/F̄(x)) (φ(x+s) − φ(x+t))]^{(κ)}|
    = sup_{φ∈K} sup_{x∈R} |x^λ Σ_{i=0}^κ (κ choose i) (F̄(x+t)/F̄(x))^{(i)} (φ^{(κ−i)}(x+s) − φ^{(κ−i)}(x+t))|
    ≤ sup_{φ∈K} sup_{x∈R} Σ_{i=0}^κ (κ choose i) L_i |x^λ (φ^{(κ−i)}(x+s) − φ^{(κ−i)}(x+t))|
    ≤ |t − s| sup_{φ∈K} sup_{x∈R} Σ_{i=0}^κ (κ choose i) L_i |x^λ φ^{(κ−i+1)}(x + r_x)|
    ≤ |t − s| (1 + |t|^λ) 2^λ Σ_{i=0}^κ (κ choose i) L_i sup_{φ∈K} max_{0≤j≤λ} ‖φ‖_{j,κ−i+1},   (50)

where the final inequality follows as a consequence of Lemma 3.5. Combining (49) and (50) with (48) and taking the limit as s → t now yields Part 2 of Definition 3.1.

We now complete the proof by verifying that Part 3 of Definition 3.1 is satisfied. First note that for each φ ∈ 𝒮, t ≥ 0 and λ, κ ∈ N, we may write

    ‖S^A_t φ‖_{λ,κ} = sup_{x∈R} |x^λ [(F̄(x+t)/F̄(x)) φ(x+t)]^{(κ)}|
    = sup_{x∈R} |x^λ Σ_{i=0}^κ (κ choose i) (F̄(x+t)/F̄(x))^{(i)} φ^{(κ−i)}(x+t)|
    ≤ Σ_{i=0}^κ (κ choose i) L_i sup_{x∈R} |x^λ φ^{(κ−i)}(x+t)|
    ≤ (1 + |t|^λ) 2^λ Σ_{i=0}^κ (κ choose i) L_i max_{0≤j≤λ} ‖φ‖_{j,κ−i}
    ≤ (1 + |t|^λ) 2^{λ+κ} max_{0≤i≤κ} L_i max_{0≤j≤λ} max_{0≤i≤κ} ‖φ‖_{j,i},

where the final inequality above follows from Lemma 3.5. Part 3 of Definition 3.1 now follows from (3) and (4) of §1.1 and the fact that ‖·‖_p ≤ ‖·‖_{p+1} for each p ∈ N. This completes the proof.
3.2 Residual Service Time Process

Now let B_R be the linear operator defined on 𝒮 such that

    B_R φ = −φ′  for all φ ∈ 𝒮.   (51)

In this subsection, we verify that B_R generates a strongly-continuous (C_0, 1) semi-group. This will be useful in §5.2 and §6.2, where we prove our fluid and diffusion limits, respectively, for the residual service time process. In particular, by (25) of §2.2 and Theorem 3.3, this then implies that we may write

    ℛ = Φ_{B_R}(ℛ_0 + E F + 𝒢),   (52)

where the map Φ_{B_R} : D([0, T], 𝒮′) → D([0, T], 𝒮′) is a continuous map.

Our main result of this subsection is the following.

Proposition 3.7. The linear operator B_R defined by (51) generates the strongly-continuous (C_0, 1) semi-group of shift operators (τ_t)_{t≥0}.

Proof. First note that it is clear by Lemma 3.5 that for each t ≥ 0 and φ ∈ 𝒮, we have that τ_t φ ∈ 𝒮. We next check that for each λ, κ ∈ N, we have the convergence

    lim_{t↓0} ‖(τ_t φ − φ)/t + φ′‖_{λ,κ} = 0.   (53)

This will then be sufficient to verify that B_R as defined by (51) generates the semi-group (τ_t)_{t≥0}. Let x ∈ R and φ ∈ 𝒮 be fixed. It then follows by Taylor's theorem, expanding in terms of t, that we may write

    τ_t φ(x) = φ(x) − φ′(x) t + R(x, t),   (54)

where the remainder term R(x, t) is given by

    R(x, t) = (1/2) ∫_0^t φ″(x − u)(t − u) du.   (55)

Now differentiating in (54) with respect to x, we obtain that

    lim_{t↓0} ‖(τ_t φ − φ)/t + φ′‖_{λ,κ} = lim_{t↓0} sup_{x∈R} |x^λ R^{(κ)}(x, t)/t|,

where R^{(κ)}(x, t) denotes the κth derivative of R with respect to x. Thus, in order to prove (53), it now suffices to check that

    lim_{t↓0} sup_{x∈R} |x^λ R^{(κ)}(x, t)/t| = 0.   (56)

First note that by (55) we may write

    x^λ R^{(κ)}(x, t) = (x^λ/2) ∫_0^t φ^{(κ+2)}(x − u)(t − u) du = (1/2) ∫_0^t x^λ φ^{(κ+2)}(x − u)(t − u) du.

However, by Lemma 3.5 there exists a constant C_{λ,κ} < ∞ such that for sufficiently small u,

    sup_{x∈R} |x^λ φ^{(κ+2)}(x − u)| < C_{λ,κ}.

We therefore obtain that

    lim_{t↓0} sup_{x∈R} |x^λ R^{(κ)}(x, t)/t| ≤ lim_{t↓0} (C_{λ,κ}/2t) ∫_0^t (t − u) du = lim_{t↓0} (C_{λ,κ}/4) t = 0,

thus proving (56). This completes our verification of the fact that B_R as defined by (51) generates the semi-group (τ_t)_{t≥0}.

We next proceed to verify that (τ_t)_{t≥0} is a strongly-continuous (C_0, 1) semi-group. In order to check this fact, we will verify that (τ_t)_{t≥0} satisfies Parts 1 through 3 of Definition 3.1. It is clear that (τ_t)_{t≥0} satisfies Part 1 of Definition 3.1. We next check that (τ_t)_{t≥0} satisfies Part 2 of Definition 3.1. Let s < t, let K ⊂ 𝒮 be a bounded set, and let λ, κ ∈ N. Then, by the mean value theorem, for each x ∈ R there exists an r_x ∈ [x − t, x − s] such that

    sup_{φ∈K} ‖τ_s φ − τ_t φ‖_{λ,κ} = sup_{φ∈K} sup_{x∈R} |x^λ (φ^{(κ)}(x − s) − φ^{(κ)}(x − t))|
    = sup_{φ∈K} sup_{x∈R} |[r_x + (x − r_x)]^λ (φ^{(κ)}(x − s) − φ^{(κ)}(x − t))|
    = sup_{φ∈K} sup_{x∈R} |Σ_{i=0}^λ (λ choose i) r_x^i (x − r_x)^{λ−i} (φ^{(κ)}(x − s) − φ^{(κ)}(x − t))|
    = |t − s| sup_{φ∈K} sup_{x∈R} |Σ_{i=0}^λ (λ choose i) r_x^i (x − r_x)^{λ−i} φ^{(κ+1)}(r_x)|
    ≤ |t − s| Σ_{i=0}^λ (λ choose i) (t + 1)^{λ−i} sup_{φ∈K} ‖φ‖_{i,κ+1},

where the final inequality uses |x − r_x| ≤ t. Part 2 of Definition 3.1 now follows from the bound (3) and the fact that K is a bounded set.

We now complete the proof by verifying that (τ_t)_{t≥0} satisfies Part 3 of Definition 3.1. Note that for each φ ∈ 𝒮, t ≥ 0 and λ, κ ∈ N, we have that

    ‖τ_t φ‖_{λ,κ} = ‖φ(· − t)‖_{λ,κ} = sup_{x∈R} |x^λ φ^{(κ)}(x − t)|
    = sup_{x∈R} |[t + (x − t)]^λ φ^{(κ)}(x − t)|
    = sup_{x∈R} |Σ_{i=0}^λ (λ choose i) t^{λ−i} (x − t)^i φ^{(κ)}(x − t)|
    ≤ Σ_{i=0}^λ (λ choose i) t^{λ−i} ‖φ‖_{i,κ}
    ≤ (1 + t^λ) 2^λ max_{0≤i≤λ} ‖φ‖_{i,κ}.

Part 3 of Definition 3.1 now follows as a result of the bounds (3) and (4).
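A quick numerical check of the generator identity B_R φ = −φ′ behind (53) (an illustration only; the test function and step sizes are arbitrary): the difference quotient (τ_t φ − φ)/t approaches −φ′ at rate O(t), as the remainder bound predicts.

```python
import numpy as np

phi = lambda x: np.exp(-x ** 2)              # test function (illustrative choice)
dphi = lambda x: -2.0 * x * np.exp(-x ** 2)  # its exact derivative

x = np.linspace(-5.0, 5.0, 1001)

def quotient_error(t):
    # sup-norm distance on the grid between (tau_t phi - phi)/t and -phi'
    q = (phi(x - t) - phi(x)) / t
    return np.max(np.abs(q + dphi(x)))

# Error shrinks roughly linearly in t, matching the O(t) remainder in (55)-(56).
e1, e2 = quotient_error(0.1), quotient_error(0.01)
```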
4 Martingale Results

In this section, we show that the process ℳ_0 + ℳ defined in §2.1 and the process 𝒢 defined in §2.2 are both 𝒮′-valued martingales. The fact that ℳ_0 + ℳ is an 𝒮′-valued martingale will ultimately be used together with the martingale functional central limit theorem [10] and the continuous mapping theorem [1] in §5.1 and §6.1 in order to prove our fluid and diffusion limits, respectively, for the age process. The fact that 𝒢 is an 𝒮′-valued martingale is not necessarily needed in order to prove limit theorems for the residual service time process but may be used to show that the residual service time process is in fact a Markov process. We begin by studying ℳ_0 + ℳ.

4.1 Age Process

In this subsection, we show that the process ℳ_0 + ℳ defined in §2.1 is an 𝒮′-valued martingale with respect to the filtration (F^A_t)_{t≥0} defined by

    F^A_t = σ{ {1{η̃_i ≤ s}, s ≤ t, i = 1, 2, ..., A_0(∞)}, {ã_i, i = 1, ..., A_0(∞)}, {1{η_i ≤ s − τ_i}, s ≤ t, i = 1, 2, ...}, {E_s, s ≤ t} } ∨ 𝒩,

where 𝒩 denotes the collection of P-null sets. Moreover, we explicitly identify the tensor quadratic variation of ℳ_0 + ℳ. The following is our main result of this subsection.
Proposition 4.1. The process ℳ_0 + ℳ is an 𝒮′-valued F^A_t-martingale with tensor quadratic variation process given for all φ, ψ ∈ 𝒮 by

    <<ℳ_0 + ℳ>>_t(φ, ψ) = Σ_{i=1}^{A_0(∞)} ∫_0^{η̃_i ∧ t} φ(x + ã_i) ψ(x + ã_i) h_{ã_i}(x) dx + Σ_{i=1}^{E_t} ∫_0^{η_i ∧ (t−τ_i)^+} φ(x) ψ(x) h(x) dx.   (57)

Proof. Let φ ∈ 𝒮. We claim that (⟨(ℳ_0 + ℳ)_t, φ⟩)_{t≥0} is an R-valued F^A_t-martingale, which is sufficient to show that ℳ_0 + ℳ is an 𝒮′-valued F^A_t-martingale. Let t ≥ 0. We first show that E[|⟨(ℳ_0 + ℳ)_t, φ⟩|] < ∞. Note that by (12) and (13) we may write

    E[|⟨(ℳ_0 + ℳ)_t, φ⟩|] ≤ E[| Σ_{i=1}^{A_0(∞)} ∫_0^t φ(x + ã_i) d(1{η̃_i ≤ x} − ∫_0^{η̃_i ∧ x} h_{ã_i}(u) du) |]   (58)
    + E[| Σ_{i=1}^{E_t} ∫_0^{(t−τ_i)^+} φ(x) d(1{η_i ≤ x} − ∫_0^{η_i ∧ x} h(u) du) |]
    ≤ E[ sup_{0≤s<∞} |φ(s)| Σ_{i=1}^{A_0(∞)} (1{η̃_i ≤ t} + ∫_0^{η̃_i ∧ t} h_{ã_i}(u) du) ]
    + E[ sup_{0≤s<∞} |φ(s)| Σ_{i=1}^{E_t} (1{η_i ≤ (t−τ_i)^+} + ∫_0^{η_i ∧ (t−τ_i)^+} h(u) du) ]
    ≤ sup_{0≤s<∞} |φ(s)| (1 + t‖h‖_∞)(E[A_0(∞)] + E[E_t]) < ∞,

where the final inequality follows from the assumptions made on E[A_0(∞)] and E[E_t] in §2.1. Thus, E[|⟨(ℳ_0 + ℳ)_t, φ⟩|] < ∞ as desired.
Next, we show that (⟨(ℳ_0 + ℳ)_t, φ⟩)_{t≥0} possesses the martingale property with respect to the filtration (F^A_t)_{t≥0}. That is, we show that for each 0 ≤ s ≤ t,

    E[⟨(ℳ_0 + ℳ)_t, φ⟩ | F^A_s] = ⟨(ℳ_0 + ℳ)_s, φ⟩.   (59)

First note that by (12) and (13), we may write

    ⟨(ℳ_0 + ℳ)_t, φ⟩ = Σ_{i=1}^∞ 1{i ≤ A_0(∞)} ⟨ℳ^{0,i}_t, φ⟩ + Σ_{i=1}^∞ ⟨ℳ^i_t, φ⟩,   (60)

where, for each i ≥ 1, we set

    1{i ≤ A_0(∞)} ⟨ℳ^{0,i}_t, φ⟩ = 1{i ≤ A_0(∞)} ∫_0^t φ(x + ã_i) d(1{η̃_i ≤ x} − ∫_0^{η̃_i ∧ x} h_{ã_i}(u) du)

and

    ⟨ℳ^i_t, φ⟩ = ∫_0^{(t−τ_i)^+} φ(x) d(1{η_i ≤ x} − ∫_0^{η_i ∧ x} h(u) du).   (61)

We now show that for each i ≥ 1,

    E[⟨ℳ^i_t, φ⟩ | F^A_s] = ⟨ℳ^i_s, φ⟩.   (62)

The proof that

    E[1{i ≤ A_0(∞)} ⟨ℳ^{0,i}_t, φ⟩ | F^A_s] = 1{i ≤ A_0(∞)} E[⟨ℳ^{0,i}_t, φ⟩ | F^A_s] = 1{i ≤ A_0(∞)} ⟨ℳ^{0,i}_s, φ⟩,   (63)

for each i ≥ 1, is similar and will not be included. For each i ≥ 1 and y ≥ 0, set

    D^i_t(y) = 1{η_i ≤ (t−τ_i)^+ ∧ y} − ∫_0^{η_i ∧ (t−τ_i)^+ ∧ y} h(u) du.   (64)

We now claim that in order to show (62), it suffices to show that E[D^i_t(y) | F^A_s] = D^i_s(y) for each y ≥ 0. This is true since it will then follow that

    E[⟨ℳ^i_t, φ⟩ | F^A_s] = E[ −∫_{R_+} D^i_t(y) φ′(y) dy | F^A_s ] = −∫_{R_+} E[D^i_t(y) | F^A_s] φ′(y) dy = −∫_{R_+} D^i_s(y) φ′(y) dy = ⟨ℳ^i_s, φ⟩.

First note that since y ∧ (t − τ_i)^+ = ((t ∧ (τ_i + y)) − τ_i)^+, we may write

    D^i_t(y) = D^i_{t ∧ (τ_i + y)}(∞).

Next note that since by the assumptions in §2.1 we have that E[τ_i] < ∞, it is straightforward to verify that τ_i + y is an F^A_t-stopping time for each y ≥ 0. Thus, by Problem 3.2.4 of [22], it suffices to show that D^i(∞) = (D^i_t(∞))_{t≥0} is an R-valued F^A_t-martingale. First note that since ‖h‖_∞ < ∞, it is straightforward to show that E[|D^i_t(∞)|] < ∞ for each t ≥ 0. Next note that using the independence of τ_i and η_i, one may also verify that

    E[D^i_t(∞) | σ(1{τ_i ≤ u}, 1{η_i ≤ u − τ_i}, u ≤ s)] = D^i_s(∞).

Hence, since η_i is assumed to be independent of A_0, {ã_k, k = 1, ..., A_0(∞)}, E = (E_t)_{t≥0} and {η_k, k ≠ i}, we obtain that

    E[D^i_t(∞) | F^A_s] = E[D^i_t(∞) | σ(1{τ_i ≤ u}, 1{η_i ≤ u − τ_i}, u ≤ s)] = D^i_s(∞),

and so D^i(∞) is an R-valued F^A_t-martingale as desired. This completes the proof of (62). The proof of (63) is similar.
We now show that (62) and (63) imply (59). We claim that for each k ≥ 1, the sum

    Σ_{i=1}^k 1{i ≤ A_0(∞)} ⟨ℳ^{0,i}_t, φ⟩ + Σ_{i=1}^k ⟨ℳ^i_t, φ⟩

is dominated uniformly over k ≥ 1 by an integrable random variable. By Lebesgue's dominated convergence theorem for conditional expectations [6], this then implies that

    E[⟨ℳ_t, φ⟩ | F^A_s] = E[ Σ_{i=1}^∞ ⟨ℳ^i_t, φ⟩ | F^A_s ] = Σ_{i=1}^∞ E[⟨ℳ^i_t, φ⟩ | F^A_s] = Σ_{i=1}^∞ ⟨ℳ^i_s, φ⟩ = ⟨ℳ_s, φ⟩,   (65)

as desired, and similarly for the sum involving ℳ_0. However, to obtain the bound is straightforward since

    Σ_{i=1}^k 1{i ≤ A_0(∞)} |⟨ℳ^{0,i}_t, φ⟩| + Σ_{i=1}^k |⟨ℳ^i_t, φ⟩|
    ≤ Σ_{i=1}^∞ 1{i ≤ A_0(∞)} | ∫_0^t φ(x + ã_i) d(1{η̃_i ≤ x} − ∫_0^{η̃_i ∧ x} h_{ã_i}(u) du) | + Σ_{i=1}^∞ | ∫_0^{(t−τ_i)^+} φ(x) d(1{η_i ≤ x} − ∫_0^{η_i ∧ x} h(u) du) |
    ≤ sup_{0≤s<∞} |φ(s)| (1 + t‖h‖_∞)(A_0(∞) + E_t),

and as in (58) we have that

    sup_{0≤s<∞} |φ(s)| (1 + t‖h‖_∞)(E[A_0(∞)] + E[E_t]) < ∞.

We have therefore shown that (⟨(ℳ_0 + ℳ)_t, φ⟩)_{t≥0} is an R-valued F^A_t-martingale for each φ ∈ 𝒮, which implies that ℳ_0 + ℳ is an 𝒮′-valued F^A_t-martingale.
We next proceed to calculate the tensor quadratic variation of ℳ_0 + ℳ. Let i ≥ 1 and recall that by (61) we have that for each φ ∈ 𝒮 and t ≥ 0,

    ⟨ℳ^i_t, φ⟩ = ∫_0^{(t−τ_i)^+} φ(x) d(1{η_i ≤ x} − ∫_0^{η_i ∧ x} h(u) du).   (66)

Therefore, as on page 259 of [25], it follows that

    <<ℳ^i, φ>>_t = ∫_0^{(t−τ_i)^+} φ²(x) d( ∫_0^{η_i ∧ x} h(u) du ) = ∫_0^{η_i ∧ (t−τ_i)^+} φ²(x) h(x) dx,   (67)

which implies that

    <<ℳ^i>>_t(φ, ψ) = <⟨ℳ^i, φ⟩, ⟨ℳ^i, ψ⟩>_t
    = (1/4)( <<ℳ^i, φ + ψ>>_t − <<ℳ^i, φ − ψ>>_t )
    = (1/4)( ∫_0^{η_i ∧ (t−τ_i)^+} (φ(x) + ψ(x))² h(x) dx − ∫_0^{η_i ∧ (t−τ_i)^+} (φ(x) − ψ(x))² h(x) dx )
    = ∫_0^{η_i ∧ (t−τ_i)^+} φ(x) ψ(x) h(x) dx,

where the second equality in the above follows from the polarization identity and the third equality follows from (67). In a similar manner, one may also show that for all i ≥ 1,

    <<1{i ≤ A_0(∞)} ℳ^{0,i}>>_t(φ, ψ) = 1{i ≤ A_0(∞)} ∫_0^{η̃_i ∧ t} φ(x + ã_i) ψ(x + ã_i) h_{ã_i}(x) dx,

for all φ, ψ ∈ 𝒮.

We now claim that in order to show that the tensor quadratic variation of ℳ_0 + ℳ is given by (57), it suffices to show the following three facts:

1. ℳ^i is orthogonal to ℳ^j for i ≠ j,
2. 1{i ≤ A_0(∞)} ℳ^{0,i} is orthogonal to 1{j ≤ A_0(∞)} ℳ^{0,j} for i ≠ j,
3. 1{i ≤ A_0(∞)} ℳ^{0,i} is orthogonal to ℳ^j for all i, j ≥ 1.

The fact that the tensor quadratic variation of ℳ_0 + ℳ is given by (57) can then be shown in the following manner. For each k ≥ 1, φ ∈ 𝒮 and t ≥ 0, let

    ⟨(ℳ_0 + ℳ)^k_t, φ⟩ = Σ_{i=1}^k 1{i ≤ A_0(∞)} ⟨ℳ^{0,i}_t, φ⟩ + Σ_{i=1}^k ⟨ℳ^i_t, φ⟩,

and set (ℳ_0 + ℳ)^k = ((ℳ_0 + ℳ)^k_t, t ≥ 0). It is then clear that Claims 1 through 3 above imply that

    <<(ℳ_0 + ℳ)^k>>_t(φ, ψ) = Σ_{i=1}^k 1{i ≤ A_0(∞)} ∫_0^{η̃_i ∧ t} φ(x + ã_i) ψ(x + ã_i) h_{ã_i}(x) dx + Σ_{i=1}^k ∫_0^{η_i ∧ (t−τ_i)^+} φ(x) ψ(x) h(x) dx.   (68)
Moreover, using similar arguments as above and the simple inequality (x_1 + x_2)² ≤ 4(x_1² + x_2²), it is straightforward to show that for each k ≥ 1, one has P-a.s. the bound

    | ⟨(ℳ_0 + ℳ)^k_t, φ⟩⟨(ℳ_0 + ℳ)^k_t, ψ⟩ − ( Σ_{i=1}^k 1{i ≤ A_0(∞)} ∫_0^{η̃_i ∧ t} φ(x + ã_i) ψ(x + ã_i) h_{ã_i}(x) dx + Σ_{i=1}^k ∫_0^{η_i ∧ (t−τ_i)^+} φ(x) ψ(x) h(x) dx ) |
    ≤ 4 ( sup_{0≤s<∞} (|φ(s)| + |ψ(s)|) )² (1 + t‖h‖_∞)² (E_t² + A_0²(∞)) + sup_{0≤s<∞} (|φ(s)||ψ(s)|) (1 + t‖h‖_∞)(E_t + A_0(∞)).

However, by the assumptions in §2.1, one has that E[E_t² + A_0²(∞)] < ∞ and E[E_t + A_0(∞)] < ∞. Hence, using the dominated convergence theorem for conditional expectations [6], it follows that for 0 ≤ s ≤ t,

    E[⟨(ℳ_0 + ℳ)_t, φ⟩⟨(ℳ_0 + ℳ)_t, ψ⟩ − <<ℳ_0 + ℳ>>_t(φ, ψ) | F^A_s]
    = E[ lim_{k→∞} ( ⟨(ℳ_0 + ℳ)^k_t, φ⟩⟨(ℳ_0 + ℳ)^k_t, ψ⟩ − <<(ℳ_0 + ℳ)^k>>_t(φ, ψ) ) | F^A_s]
    = lim_{k→∞} E[ ⟨(ℳ_0 + ℳ)^k_t, φ⟩⟨(ℳ_0 + ℳ)^k_t, ψ⟩ − <<(ℳ_0 + ℳ)^k>>_t(φ, ψ) | F^A_s]
    = lim_{k→∞} ( ⟨(ℳ_0 + ℳ)^k_s, φ⟩⟨(ℳ_0 + ℳ)^k_s, ψ⟩ − <<(ℳ_0 + ℳ)^k>>_s(φ, ψ) )
    = ⟨(ℳ_0 + ℳ)_s, φ⟩⟨(ℳ_0 + ℳ)_s, ψ⟩ − <<ℳ_0 + ℳ>>_s(φ, ψ).

This then implies that the tensor quadratic variation of ℳ_0 + ℳ is given by (57). We now proceed to prove Claims 1 through 3, which is sufficient to complete the proof.
We begin with Claim 1. Let φ, ψ ∈ 𝒮 and i ≠ j. We show that (⟨ℳ^i_t, φ⟩⟨ℳ^j_t, ψ⟩)_{t≥0} is an R-valued F^A_t-martingale, which is sufficient to show that ℳ^i is orthogonal to ℳ^j. First note that it is clear as in (58) that for each t ≥ 0, we have that E[|⟨ℳ^i_t, φ⟩⟨ℳ^j_t, ψ⟩|] < ∞. Next, let 0 ≤ s ≤ t. By the independence of η_i from A_0, {ã_k, k = 1, ..., A_0(∞)}, E = (E_t)_{t≥0} and {η_k, k ≠ i}, and, similarly, the independence of η_j from A_0, {ã_k, k = 1, ..., A_0(∞)}, E = (E_t)_{t≥0} and {η_k, k ≠ j}, it follows that

    E[⟨ℳ^i_t, φ⟩⟨ℳ^j_t, ψ⟩ | F^A_s] = E[⟨ℳ^i_t, φ⟩⟨ℳ^j_t, ψ⟩ | σ(1{τ_i ≤ u}, 1{τ_j ≤ u}, 1{η_i ≤ u − τ_i}, 1{η_j ≤ u − τ_j}, u ≤ s)].

However, by the independence of η_i from τ_j and η_j, and, similarly, the independence of η_j from τ_i and η_i, we have that

    E[⟨ℳ^i_t, φ⟩⟨ℳ^j_t, ψ⟩ | σ(1{τ_i ≤ u}, 1{τ_j ≤ u}, 1{η_i ≤ u − τ_i}, 1{η_j ≤ u − τ_j}, u ≤ s)]
    = E[⟨ℳ^i_t, φ⟩ | σ(1{τ_i ≤ u}, 1{η_i ≤ u − τ_i}, u ≤ s)] E[⟨ℳ^j_t, ψ⟩ | σ(1{τ_j ≤ u}, 1{η_j ≤ u − τ_j}, u ≤ s)]
    = ⟨ℳ^i_s, φ⟩⟨ℳ^j_s, ψ⟩,

where the final equality follows from (62) and the fact that for k ≥ 1,

    E[⟨ℳ^k_t, φ⟩ | σ(1{τ_k ≤ u}, 1{η_k ≤ u − τ_k}, u ≤ s)] = E[⟨ℳ^k_t, φ⟩ | F^A_s].   (69)

Thus, it is clear that (⟨ℳ^i_t, φ⟩⟨ℳ^j_t, ψ⟩)_{t≥0} possesses the martingale property and so (⟨ℳ^i_t, φ⟩⟨ℳ^j_t, ψ⟩)_{t≥0} is an R-valued F^A_t-martingale, and hence ℳ^i is orthogonal to ℳ^j. The proof of Claim 2 above follows similarly. The proof of Claim 3 above follows in a similar manner as well. In particular, let i, j ≥ 1. We show that (1{i ≤ A_0(∞)}⟨ℳ^{0,i}_t, φ⟩⟨ℳ^j_t, ψ⟩)_{t≥0} is an R-valued F^A_t-martingale for each φ, ψ ∈ 𝒮, which is sufficient to show that 1{i ≤ A_0(∞)}ℳ^{0,i} is orthogonal to ℳ^j. For each t ≥ 0, it is clear as in (58) that E[|1{i ≤ A_0(∞)}⟨ℳ^{0,i}_t, φ⟩⟨ℳ^j_t, ψ⟩|] < ∞. Next, note that since η̃_i is independent of A_0, {ã_k, k = 1, ..., A_0(∞), k ≠ i}, E = (E_t)_{t≥0} and {η_k, k ≥ 1}, and, similarly, η_j is independent of A_0, {ã_k, k = 1, ..., A_0(∞)}, E = (E_t)_{t≥0} and {η_k, k ≠ j}, we have that

    E[1{i ≤ A_0(∞)}⟨ℳ^{0,i}_t, φ⟩⟨ℳ^j_t, ψ⟩ | F^A_s] = E[1{i ≤ A_0(∞)}⟨ℳ^{0,i}_t, φ⟩⟨ℳ^j_t, ψ⟩ | σ(ã_i, 1{τ_j ≤ u}, 1{η̃_i ≤ u}, 1{η_j ≤ u − τ_j}, u ≤ s)].

However, by the independence of η̃_i from τ_j and η_j, and, similarly, the independence of η_j from ã_i and η̃_i, we have that

    E[1{i ≤ A_0(∞)}⟨ℳ^{0,i}_t, φ⟩⟨ℳ^j_t, ψ⟩ | σ(ã_i, 1{τ_j ≤ u}, 1{η̃_i ≤ u}, 1{η_j ≤ u − τ_j}, u ≤ s)]
    = E[1{i ≤ A_0(∞)}⟨ℳ^{0,i}_t, φ⟩ | σ(ã_i, 1{η̃_i ≤ u}, u ≤ s)] E[⟨ℳ^j_t, ψ⟩ | σ(1{τ_j ≤ u}, 1{η_j ≤ u − τ_j}, u ≤ s)]
    = 1{i ≤ A_0(∞)}⟨ℳ^{0,i}_s, φ⟩⟨ℳ^j_s, ψ⟩,

where the final equality follows by (62), (63), (69) and the fact that for k ≥ 1,

    E[1{k ≤ A_0(∞)}⟨ℳ^{0,k}_t, φ⟩ | σ(ã_k, 1{η̃_k ≤ u}, u ≤ s)] = E[1{k ≤ A_0(∞)}⟨ℳ^{0,k}_t, φ⟩ | F^A_s].

Thus, it is clear that (1{i ≤ A_0(∞)}⟨ℳ^{0,i}_t, φ⟩⟨ℳ^j_t, ψ⟩)_{t≥0} possesses the martingale property and so (1{i ≤ A_0(∞)}⟨ℳ^{0,i}_t, φ⟩⟨ℳ^j_t, ψ⟩)_{t≥0} is an R-valued F^A_t-martingale, and hence ℳ^{0,i} is orthogonal to ℳ^j. This proves Claim 3, which completes the proof.
4.2 Residuals

In this subsection, we show that the process 𝒢 defined in §2.2 is a martingale. This fact may be useful in future work where one wishes to show that the residual service time process is a Markov process. Let (F^𝒢_t)_{t≥0} be the natural filtration generated by 𝒢. We then have the following result.

Proposition 4.2. The process 𝒢 is an 𝒮′-valued F^𝒢_t-martingale with tensor optional quadratic variation process given for all φ, ψ ∈ 𝒮 by

    [𝒢]_t(φ, ψ) = Σ_{i=1}^{E_t} (φ(η_i) − ⟨F, φ⟩)(ψ(η_i) − ⟨F, ψ⟩).   (70)

Proof. Let φ ∈ 𝒮. We first show that (⟨𝒢_t, φ⟩)_{t≥0} is an R-valued F^𝒢_t-martingale. Define the filtration (H_k)_{k≥1} by setting H_k = σ{E_t, t ≥ 0; η_1, η_2, ..., η_k} ∨ 𝒩 for each k ≥ 1. Next, define the discrete-time D-valued process (G_k)_{k≥1} by

    G_k(y) = Σ_{i=1}^k (1{η_i ≤ y} − F(y)), y ≥ 0,   (71)

and, for convenience, let G_k(y) = 0 for y < 0. Then, let (𝒢_k)_{k≥1} be the 𝒮′-valued process associated with G_k. Since each φ ∈ 𝒮 is bounded, it is clear that E[|⟨𝒢_k, φ⟩|] < ∞ for each k ≥ 1. Moreover, by the independence of the service times from the arrival process, one has that (⟨𝒢_k, φ⟩)_{k≥1} possesses the martingale property with respect to (H_k)_{k≥1}. Hence, (⟨𝒢_k, φ⟩)_{k≥1} is an R-valued H_k-martingale. However, since for each t ≥ 0 we have by the assumptions in §2.1 that E[E_t] < ∞, it is straightforward to see that E_t is a stopping time with respect to the filtration (H_k)_{k≥1}. Thus, the filtration (H_{E_t})_{t≥0} is well-defined and, furthermore, it follows by the optional sampling theorem [22] that (⟨𝒢_t, φ⟩)_{t≥0} = (⟨𝒢_{E_t}, φ⟩)_{t≥0} is an H_{E_t}-martingale. The result now follows since any martingale is a martingale relative to its natural filtration.

The form of the tensor optional quadratic variation (70) is immediate by Theorem 3.3 of [29].
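As an illustrative check (not from the paper), pairing 𝒢_k of (71) with a test function gives ⟨𝒢_k, φ⟩ = Σ_{i=1}^k (φ(η_i) − ⟨F, φ⟩), a centered random walk in k. The Monte Carlo sketch below verifies the zero-mean martingale-increment property and the variance implied by (70) with ψ = φ; the Exp(1) service distribution and the test function are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

phi = lambda x: np.exp(-x)   # bounded test function (illustrative)
# For eta ~ Exp(1): <F, phi> = E[phi(eta)] = 1/2 and Var(phi(eta)) = 1/3 - 1/4 = 1/12.
mean_phi = 0.5

k, nsim = 50, 20000
eta = rng.exponential(1.0, size=(nsim, k))   # i.i.d. service times
G_k = (phi(eta) - mean_phi).sum(axis=1)      # <G_k, phi> across nsim replications

emp_mean = G_k.mean()   # should be near 0 (martingale property)
emp_var = G_k.var()     # should be near k * Var(phi(eta)) = k/12, matching (70)
```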
5 Fluid Limits

In this section, we provide our main fluid limit results. We begin in §5.1 by studying the age process and in §5.2 we study the residual service time process. Our setup in both subsections is the same. In particular, we consider a sequence of G/GI/∞ queues indexed by n ≥ 1, where the arrival rate to the system grows large with n while the service time distribution does not change with n.

5.1 Ages

We begin by studying the age process 𝒜 defined in §2.1. For each n ≥ 1, define the fluid scaled quantities

    𝒜̄ⁿ_0 ≡ 𝒜ⁿ_0 / n,  Ēⁿ ≡ Eⁿ / n,  ℳ̄^{0,n} ≡ ℳ^{0,n} / n,  ℳ̄ⁿ ≡ ℳⁿ / n,  𝒜̄ⁿ ≡ 𝒜ⁿ / n,   (72)

and set ℰ̄ⁿ ≡ Ēⁿ δ_0. Using (17), Theorem 3.3 and Proposition 3.6, it is straightforward to show that one may write

    𝒜̄ⁿ = Φ_{B_A}(𝒜̄ⁿ_0 + ℰ̄ⁿ − (ℳ̄^{0,n} + ℳ̄ⁿ)),   (73)

where the map Φ_{B_A} : D([0, T], 𝒮′) → D([0, T], 𝒮′) is continuous. We now prove that if (𝒜̄ⁿ_0 + ℰ̄ⁿ)_{n≥1} weakly converges, then so too does (𝒜̄ⁿ_0 + ℰ̄ⁿ − (ℳ̄^{0,n} + ℳ̄ⁿ))_{n≥1}.
Proposition 5.1. If 𝒜̄ⁿ_0 + ℰ̄ⁿ ⇒ 𝒜̄_0 + ℰ̄ in D([0, T], 𝒮′) as n → ∞, then

    𝒜̄ⁿ_0 + ℰ̄ⁿ − (ℳ̄^{0,n} + ℳ̄ⁿ) ⇒ 𝒜̄_0 + ℰ̄ in D([0, T], 𝒮′) as n → ∞.

Proof. We first note that by Theorem 1.5, it is sufficient to show that if 𝒜̄ⁿ_0 + ℰ̄ⁿ ⇒ 𝒜̄_0 + ℰ̄ as n → ∞, then

    ℳ̄^{0,n} + ℳ̄ⁿ ⇒ 0 in D([0, T], 𝒮′) as n → ∞.   (74)

Let T > 0 and 0 ≤ t ≤ T. Then, for each φ ∈ 𝒮, we have by Proposition 4.1 that

    |<<ℳ̄^{0,n} + ℳ̄ⁿ>>_t(φ, φ)|
    = (1/n²) | Σ_{i=1}^{Aⁿ_0(∞)} ∫_0^{η̃ⁿ_i ∧ t} φ²(x + ãⁿ_i) h_{ãⁿ_i}(x) dx + Σ_{i=1}^{Eⁿ_t} ∫_0^{η_i ∧ (t−τⁿ_i)^+} φ²(x) h(x) dx |
    ≤ (‖h‖_∞ / n²) Σ_{i=1}^{Aⁿ_0(∞)} ∫_0^t φ²(x + ãⁿ_i) dx + (T ‖φ² h‖_∞ / n) Ēⁿ_T.   (75)

Thus, from (75) we obtain that for each 0 ≤ t ≤ T,

    |<<ℳ̄^{0,n} + ℳ̄ⁿ>>_t(φ, φ)|   (76)
    ≤ (‖h‖_∞ / n²) Σ_{i=1}^{Aⁿ_0(∞)} ∫_0^t φ²(x + ãⁿ_i) dx + (T ‖φ² h‖_∞ / n) Ēⁿ_T
    = (‖h‖_∞ / n²) ∫_0^t Σ_{i=1}^{Aⁿ_0(∞)} φ²(x + ãⁿ_i) dx + (T ‖φ² h‖_∞ / n) Ēⁿ_T
    = (‖h‖_∞ / n) ∫_0^t ⟨𝒜̄ⁿ_0, τ_{−x} φ²⟩ dx + (T ‖φ² h‖_∞ / n) Ēⁿ_T
    ≤ (‖h‖_∞ t / n) q_K(𝒜̄ⁿ_0) + (T ‖φ² h‖_∞ / n) Ēⁿ_T,

where the set K is given by K = {τ_{−x} φ², 0 ≤ x ≤ t}. By Lemma 3.5, the set K is bounded in 𝒮 and hence q_K is a semi-norm on 𝒮′ by Definition 1.3. This then implies that q_K is a continuous function on 𝒮′. Hence, since by assumption 𝒜̄ⁿ_0 ⇒ 𝒜̄_0 and Ēⁿ_T ⇒ Ē_T, it follows by Slutsky's theorem that

    (‖h‖_∞ t / n) q_K(𝒜̄ⁿ_0) + (T ‖φ² h‖_∞ / n) Ēⁿ_T ⇒ 0 as n → ∞.

By (76), this then implies that

    <<ℳ̄^{0,n} + ℳ̄ⁿ>>(φ, φ) ⇒ 0 in D([0, T], R) as n → ∞.   (77)

We now verify that Parts 1 and 2 of Theorem 1.5 are satisfied for the sequence (ℳ̄^{0,n} + ℳ̄ⁿ)_{n≥1}, with the limit point being the function which is identically 0. We begin with Part 1. Using the fact that the maximum jump of both ⟨ℳ̄^{0,n} + ℳ̄ⁿ, φ⟩ and <<ℳ̄^{0,n} + ℳ̄ⁿ>>(φ, φ) is bounded over the interval [0, T] uniformly in n, we obtain by (77) and the martingale FCLT (see Theorem 7.1.4 of [10] or [34]) that

    ⟨ℳ̄^{0,n} + ℳ̄ⁿ, φ⟩ ⇒ 0 in D([0, T], R) as n → ∞.   (78)

Thus, Part 1 of Theorem 1.5 holds. We next check that Condition 2 holds. Let m ≥ 1 and let t_1, ..., t_m ∈ [0, T] and φ_1, ..., φ_m ∈ 𝒮. By (78), we have that for each 1 ≤ i ≤ m,

    ⟨(ℳ̄^{0,n} + ℳ̄ⁿ)_{t_i}, φ_i⟩ ⇒ 0 in R as n → ∞.

However, by Theorem 3.9 of [1] this now implies that

    (⟨(ℳ̄^{0,n} + ℳ̄ⁿ)_{t_1}, φ_1⟩, ..., ⟨(ℳ̄^{0,n} + ℳ̄ⁿ)_{t_m}, φ_m⟩) ⇒ (0, ..., 0) in R^m as n → ∞.

Thus, we have shown that Part 2 of Theorem 1.5 holds and so (74) is proven. This completes the proof.
We are now in a position to prove the main result of this subsection. We have the following.

Theorem 5.2. If 𝒜̄ⁿ_0 + ℰ̄ⁿ ⇒ 𝒜̄_0 + ℰ̄ in D([0, T], 𝒮′) as n → ∞, then

    𝒜̄ⁿ ⇒ 𝒜̄ in D([0, T], 𝒮′) as n → ∞,

where 𝒜̄ is the unique solution to the integral equation

    ⟨𝒜̄_t, φ⟩ = ⟨𝒜̄_0, φ⟩ + ⟨ℰ̄_t, φ⟩ − ∫_0^t ⟨𝒜̄_s, hφ⟩ ds + ∫_0^t ⟨𝒜̄_s, φ′⟩ ds, t ∈ [0, T],   (79)

for all φ ∈ 𝒮.

Proof. By the assumption that 𝒜̄ⁿ_0 + ℰ̄ⁿ ⇒ 𝒜̄_0 + ℰ̄, it follows immediately by Proposition 5.1 that

    𝒜̄ⁿ_0 + ℰ̄ⁿ − (ℳ̄^{0,n} + ℳ̄ⁿ) ⇒ 𝒜̄_0 + ℰ̄ in D([0, T], 𝒮′) as n → ∞.   (80)

Next, recall that by (73) we have that 𝒜̄ⁿ = Φ_{B_A}(𝒜̄ⁿ_0 + ℰ̄ⁿ − (ℳ̄^{0,n} + ℳ̄ⁿ)), where the map Φ_{B_A} : D([0, T], 𝒮′) → D([0, T], 𝒮′) is continuous. The result now follows by (80) and Proposition 1.1 applied to Φ_{B_A}.
Remark 5.3. Note that one may now use Theorem 5.2 along with Theorem 3.3 in order to obtain an explicit expression for 𝒜̄. Similarly, one may obtain explicit expressions for ℛ̄, 𝒜̂ and ℛ̂ in Theorems 5.6, 6.5 and 6.9, respectively, below.

We also note that at a heuristic level, one may attempt to substitute the function 1{x ≥ 0} into the explicit formula provided by Theorem 3.3 for 𝒜̄ in order to obtain an expression for the limiting, fluid scaled total number of customers in the system. For instance, suppose that 𝒜̄_0 = 0, so that the system is initially empty, and that ⟨ℰ̄, φ⟩ = λφ(0)e for each φ ∈ 𝒮, where e denotes the identity function. Then, using the form of the generator B_A from (39) and the semi-group (S^A_t)_{t≥0} from Proposition 3.6, one obtains after substituting into Theorem 3.3 that heuristically the total number of customers in the system at time t ≥ 0 is given by

    λt − λ ∫_0^t s f(t − s) ds = ∫_0^t λ F̄(t − s) ds.
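This heuristic fluid formula can be checked by simulation (an illustration, not part of the paper): in an initially empty M/GI/∞ system with arrival rate nλ, the scaled number of customers in the system at time t concentrates around ∫_0^t λ F̄(t − s) ds. The sketch below uses exponential service times, for which the integral equals λ(1 − e^{−μt})/μ; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

lam, mu, t, n = 1.0, 1.0, 2.0, 20000  # illustrative parameters

# Simulate the n-th system from primitives: Poisson(n*lam*t) arrivals on [0, t]
# with uniform arrival epochs, each carrying an Exp(mu) service time.
K = rng.poisson(n * lam * t)
arrivals = rng.uniform(0.0, t, size=K)
services = rng.exponential(1.0 / mu, size=K)
in_system = np.sum(arrivals + services > t)  # customers still present at time t

scaled = in_system / n
fluid = lam * (1.0 - np.exp(-mu * t)) / mu   # = integral_0^t lam * Fbar(t - s) ds
```

With n this large, the Poisson fluctuations of the scaled count are of order 1/sqrt(n), so `scaled` and `fluid` agree to a few decimal places.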
We now conclude this subsection by providing an additional condition on the arrival process under which a stationary solution to the fluid limit equation (79) may be explicitly found. Note also that our condition in Proposition 5.4 below holds, for example, if the arrival process to the nth system is a renewal process which has been sped up by a factor of n (as will be the case for the GI/GI/∞ queue).

Proposition 5.4. If ⟨ℰ̄, φ⟩ = λφ(0)e for each φ ∈ 𝒮, then 𝒜̄ = λF_e is a stationary solution to the fluid limit equation (79), where F_e denotes the measure on R_+ satisfying dF_e(y) = F̄(y) dy.

Proof. Substituting 𝒜̄_t = λF_e for all t ∈ [0, T] and ⟨ℰ̄_t, φ⟩ = λtφ(0) into (79), we see that it suffices to verify that

    λ ∫_{R_+} φ(y) dF_e(y) = λ ∫_{R_+} φ(y) dF_e(y) + λtφ(0) − λt ∫_{R_+} (h(y)φ(y) − φ′(y)) dF_e(y).

However, this follows since

    λt ∫_{R_+} (h(y)φ(y) − φ′(y)) dF_e(y) = λt ∫_{R_+} (h(y)φ(y) − φ′(y)) F̄(y) dy
    = λt ∫_{R_+} (f(y)φ(y) − F̄(y)φ′(y)) dy
    = −λt ∫_{R_+} (F̄(y)φ(y))′ dy
    = λtφ(0).

This completes the proof.
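The key identity in the proof of Proposition 5.4, ∫_{R_+} (h(y)φ(y) − φ′(y)) F̄(y) dy = φ(0), can be checked numerically; the sketch below does so for an exponential service distribution (so h ≡ μ and F̄(y) = e^{−μy}) and an arbitrary test function, both assumptions made for illustration only.

```python
import numpy as np

mu = 0.7                             # service rate; h(y) = mu, Fbar(y) = exp(-mu*y)
y = np.linspace(0.0, 60.0, 600001)   # truncate R_+ well into the tail

phi = np.exp(-(y - 1.0) ** 2)        # test function (illustrative)
dphi = -2.0 * (y - 1.0) * np.exp(-(y - 1.0) ** 2)
Fbar = np.exp(-mu * y)

# Trapezoidal approximation of integral over R_+ of (h*phi - phi')*Fbar dy.
g = (mu * phi - dphi) * Fbar
lhs = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(y))

phi0 = np.exp(-1.0)                  # phi(0) = e^{-1}, the claimed value
```

The agreement is limited only by the quadrature error; the identity itself is exact by integration by parts, as in the proof.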
5.2 Residuals

We next proceed to analyze the residual service time process ℛ of §2.2. Our setup in this subsection is the same as that in the previous subsection. However, in addition to the fluid scaled quantities already defined in §5.1, we also now define for each n ≥ 1 the new fluid scaled quantities

    ℛ̄ⁿ ≡ ℛⁿ / n,  ℛ̄ⁿ_0 ≡ ℛⁿ_0 / n  and  𝒢̄ⁿ ≡ 𝒢ⁿ / n.

Using (25), Theorem 3.3 and Proposition 3.7, it is now straightforward to show that

    ℛ̄ⁿ = Φ_{B_R}(ℛ̄ⁿ_0 + Ēⁿ F + 𝒢̄ⁿ),   (81)

where the map Φ_{B_R} : D([0, T], 𝒮′) → D([0, T], 𝒮′) is continuous. In our first result of this subsection, we prove that if (ℛ̄ⁿ_0 + Ēⁿ F)_{n≥1} weakly converges, then so too does (ℛ̄ⁿ_0 + Ēⁿ F + 𝒢̄ⁿ)_{n≥1}.
Proposition 5.5. If ℛ̄ⁿ_0 + Ēⁿ F ⇒ ℛ̄_0 + Ē F in D([0, T], 𝒮′) as n → ∞, then

    ℛ̄ⁿ_0 + Ēⁿ F + 𝒢̄ⁿ ⇒ ℛ̄_0 + Ē F in D([0, T], 𝒮′) as n → ∞.   (82)

Proof. We first note that by Theorem 1.5, it is sufficient to show that if ℛ̄ⁿ_0 + Ēⁿ F ⇒ ℛ̄_0 + Ē F as n → ∞, then

    𝒢̄ⁿ ⇒ 0 in D([0, T], 𝒮′) as n → ∞.   (83)

Let T > 0 and 0 ≤ t ≤ T. We then have by Proposition 4.2 and the assumption that ℛ̄ⁿ_0 + Ēⁿ F ⇒ ℛ̄_0 + Ē F, that for each φ, ψ ∈ 𝒮,

    |[𝒢̄ⁿ]_t(φ, ψ)| = (1/n²) | Σ_{i=1}^{Eⁿ_t} (φ(η_i) − ⟨F, φ⟩)(ψ(η_i) − ⟨F, ψ⟩) | ≤ (4/n²) Eⁿ_T sup_{0≤s<∞} |φ(s)| sup_{0≤s<∞} |ψ(s)| ⇒ 0 in R as n → ∞.   (84)

The remainder of the proof now proceeds in a similar manner to the proof of Proposition 5.1. We omit the details.

The following is now our main result of this subsection.
Theorem 5.6. If ℛ̄ⁿ_0 + Ēⁿ F ⇒ ℛ̄_0 + Ē F in D([0, T], 𝒮′) as n → ∞, then

    ℛ̄ⁿ ⇒ ℛ̄ in D([0, T], 𝒮′) as n → ∞,

where ℛ̄ is the unique solution to the integral equation

    ⟨ℛ̄_t, φ⟩ = ⟨ℛ̄_0, φ⟩ + ⟨Ē_t F, φ⟩ − ∫_0^t ⟨ℛ̄_s, φ′⟩ ds, t ∈ [0, T],   (85)

for all φ ∈ 𝒮.

Proof. By the assumption that ℛ̄ⁿ_0 + Ēⁿ F ⇒ ℛ̄_0 + Ē F, it follows immediately by Proposition 5.5 that

    ℛ̄ⁿ_0 + Ēⁿ F + 𝒢̄ⁿ ⇒ ℛ̄_0 + Ē F in D([0, T], 𝒮′) as n → ∞.   (86)

Next, recall that by (81) we have that ℛ̄ⁿ = Φ_{B_R}(ℛ̄ⁿ_0 + Ēⁿ F + 𝒢̄ⁿ), where the map Φ_{B_R} : D([0, T], 𝒮′) → D([0, T], 𝒮′) is continuous. The result now follows by (86) and Proposition 1.1 applied to Φ_{B_R}.
6 Diffusion Limits

In this section, we prove our main diffusion limit results. In §6.1, we study the age process and in §6.2 we study the residual service time process. Before we provide our main results, however, we first must provide the definition of an 𝒮′-valued Wiener process and a generalized 𝒮′-valued Ornstein-Uhlenbeck process. Our definitions are the same as those in [2].

Definition 6.1. A continuous 𝒮′-valued Gaussian process W = (W_t)_{t≥0} is called a generalized 𝒮′-valued Wiener process with covariance functional

    K(s, φ; t, ψ) = E[⟨W_s, φ⟩⟨W_t, ψ⟩], s, t ≥ 0 and φ, ψ ∈ 𝒮,

if it has continuous trajectories and, for each s, t ≥ 0 and φ, ψ ∈ 𝒮, K(s, φ; t, ψ) is of the form

    K(s, φ; t, ψ) = ∫_0^{s∧t} ⟨Q_u φ, ψ⟩ du,

where the operators Q_u : 𝒮 → 𝒮′, u ≥ 0, possess the following two properties:

1. Q_u is linear, continuous, symmetric and positive for each u ≥ 0,
2. the function u ↦ ⟨Q_u φ, ψ⟩ is in D([0, ∞), R) for each φ, ψ ∈ 𝒮.

If Q_u does not depend on u ≥ 0, then the process W is called an 𝒮′-valued Wiener process.

Now, using the above definition of a generalized 𝒮′-valued Wiener process, we may provide the following definition of a generalized 𝒮′-valued Ornstein-Uhlenbeck process.

Definition 6.2. An 𝒮′-valued process X = (X_t)_{t≥0} is called a (generalized) 𝒮′-valued Ornstein-Uhlenbeck process if for each φ ∈ 𝒮 and t ≥ 0,

    ⟨X_t, φ⟩ = ⟨X_0, φ⟩ + ∫_0^t ⟨X_u, Aφ⟩ du + ⟨W_t, φ⟩,

where W = (W_t)_{t≥0} is a (generalized) 𝒮′-valued Wiener process and A : 𝒮 → 𝒮 is a continuous operator.
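For intuition (an illustration, not part of the paper): if a test function φ happens to satisfy Aφ = −αφ for some α > 0, then the real-valued projection ⟨X_t, φ⟩ in Definition 6.2 is an ordinary one-dimensional Ornstein-Uhlenbeck process dY = −αY dt + σ dW. The sketch below simulates it with the exact Gaussian one-step transition and checks the transient variance Var(Y_t) = σ²(1 − e^{−2αt})/(2α) starting from Y_0 = 0; the values of α, σ and t are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

alpha, sigma, t, nsteps, npaths = 1.0, 1.0, 1.0, 100, 50000
dt = t / nsteps

# Exact one-step transition of dY = -alpha*Y dt + sigma dW:
# Y_{k+1} = e^{-alpha*dt} Y_k + sqrt(sigma^2 (1 - e^{-2 alpha dt}) / (2 alpha)) * N(0,1)
decay = np.exp(-alpha * dt)
noise_sd = np.sqrt(sigma ** 2 * (1.0 - np.exp(-2.0 * alpha * dt)) / (2.0 * alpha))

Y = np.zeros(npaths)
for _ in range(nsteps):
    Y = decay * Y + noise_sd * rng.standard_normal(npaths)

emp_var = Y.var()
exact_var = sigma ** 2 * (1.0 - np.exp(-2.0 * alpha * t)) / (2.0 * alpha)
```

Because the Gaussian transition is exact rather than an Euler discretization, the only discrepancy between `emp_var` and `exact_var` is Monte Carlo error of order 1/sqrt(npaths).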
33
6.1 Ages

In this subsection, we prove our main diffusion limit result for the age process $\mathcal{A}$ defined in §2.1. Our setup is the same as that in §5. That is, we consider a sequence of G/GI/$\infty$ queues indexed by $n \geq 1$, where the arrival rate to the system grows large with $n$ while the service time distribution does not change with $n$. For the remainder of this subsection, we assume that $\bar{\mathcal{A}}_0^n + \bar{\mathcal{E}}^n \Rightarrow \bar{\mathcal{A}}_0 + \bar{\mathcal{E}}$ as $n \to \infty$, where $\bar{\mathcal{A}}_0 + \bar{\mathcal{E}}$ is a non-random quantity. By Theorem 5.2 of §5.1, this then implies that $\bar{\mathcal{A}}$ is a non-random quantity as well. Setting $\bar{A}_0^n(\infty) = n^{-1} A_0^n(\infty)$ for each $n \geq 1$ and letting $T \geq 0$, we also assume that the sequences $\{\bar{A}_0^n(\infty),\ n \geq 1\}$ and $\{\bar{E}_T^n,\ n \geq 1\}$ are uniformly integrable.
Now, for each $n \geq 1$, define the diffusion scaled quantities
$$\hat{\mathcal{A}}^n \equiv \sqrt{n}\left(\bar{\mathcal{A}}^n - \bar{\mathcal{A}}\right), \qquad \hat{\mathcal{A}}_0^n \equiv \sqrt{n}\left(\bar{\mathcal{A}}_0^n - \bar{\mathcal{A}}_0\right), \qquad \hat{E}^n \equiv \sqrt{n}\left(\bar{E}^n - \bar{E}\right), \qquad \hat{\mathcal{M}}^{0,n} \equiv \sqrt{n}\,\bar{\mathcal{M}}^{0,n}, \qquad \hat{\mathcal{M}}^n \equiv \sqrt{n}\,\bar{\mathcal{M}}^n,$$
and set $\hat{\mathcal{E}}^n \equiv \hat{E}^n \delta_0$.
Then, recalling the form of the fluid limit $\bar{\mathcal{A}}$ from Theorem 5.2, note that using system equation (17) in conjunction with Theorem 3.3 and Proposition 3.6, one has that for each $n \geq 1$,
$$\hat{\mathcal{A}}^n = \Psi_A\left(\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}^n - (\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n)\right), \tag{87}$$
where the map $\Psi_A : D([0,T], \mathcal{S}') \to D([0,T], \mathcal{S}')$ is continuous. Our strategy now is to first prove a weak convergence result for the sequence $(\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}^n - (\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n))_{n \geq 1}$ and then to apply Theorem 1.1 together with (87) in order to prove our diffusion limit result for the sequence $(\hat{\mathcal{A}}^n)_{n \geq 1}$.
We begin with the following result. Its proof may be found in the Appendix.

Lemma 6.3. If $\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}^n \Rightarrow \hat{\mathcal{A}}_0 + \hat{\mathcal{E}}$ in $D([0,T], \mathcal{S}')$ as $n \to \infty$, then
$$\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n \Rightarrow \hat{\mathcal{M}}^0 + \hat{\mathcal{M}} \text{ in } D([0,T], \mathcal{S}') \text{ as } n \to \infty, \tag{88}$$
where $\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}$ is a generalized $\mathcal{S}'$-valued Wiener process with covariance functional given for each $\varphi, \psi \in \mathcal{S}$ and $s, t \geq 0$ by
$$K_{\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}}(s, \varphi; t, \psi) = \int_0^{s \wedge t} \langle\bar{\mathcal{A}}_u, \varphi \psi h\rangle \, du. \tag{89}$$

Proof. See Appendix.
We next have the following result, which provides a weak limit for the sequence $(\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}^n - (\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n))_{n \geq 1}$.

Proposition 6.4. If $\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}^n \Rightarrow \hat{\mathcal{A}}_0 + \hat{\mathcal{E}}$ in $D([0,T], \mathcal{S}')$ as $n \to \infty$, then
$$\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}^n - (\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n) \Rightarrow \hat{\mathcal{A}}_0 + \hat{\mathcal{E}} - (\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}) \text{ in } D([0,T], \mathcal{S}') \text{ as } n \to \infty, \tag{90}$$
where $\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}$ is as given in Lemma 6.3 and is independent of $\hat{\mathcal{A}}_0 + \hat{\mathcal{E}}$.
Proof. We will check that the sequence $(\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}^n - (\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n))_{n \geq 1}$ satisfies Parts 1 and 2 of Theorem 1.5. We begin with Part 1. Let $\varphi \in \mathcal{S}$. By assumption, the sequence $(\langle\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}^n, \varphi\rangle)_{n \geq 1}$ weakly converges and hence is tight in $D([0,T], \mathbb{R})$, and, by Lemma 6.3, the sequence $(\langle\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n, \varphi\rangle)_{n \geq 1}$ weakly converges and hence is tight in $D([0,T], \mathbb{R})$. Therefore, there must exist a subsequence along which we have the joint convergence
$$\left(\langle\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}^n, \varphi\rangle, \langle\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n, \varphi\rangle\right) \Rightarrow \left(\langle\tilde{\mathcal{A}}_0 + \tilde{\mathcal{E}}, \varphi\rangle, \langle\tilde{\mathcal{M}}^0 + \tilde{\mathcal{M}}, \varphi\rangle\right) \text{ in } D^2([0,T], \mathbb{R}) \text{ as } n \to \infty.$$
Now note that clearly $\langle\tilde{\mathcal{A}}_0 + \tilde{\mathcal{E}}, \varphi\rangle$ has the same distribution as $\langle\hat{\mathcal{A}}_0 + \hat{\mathcal{E}}, \varphi\rangle$ and, similarly, $\langle\tilde{\mathcal{M}}^0 + \tilde{\mathcal{M}}, \varphi\rangle$ has the same distribution as $\langle\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}, \varphi\rangle$. We now verify that $\langle\tilde{\mathcal{A}}_0 + \tilde{\mathcal{E}}, \varphi\rangle$ and $\langle\tilde{\mathcal{M}}^0 + \tilde{\mathcal{M}}, \varphi\rangle$ are independent of one another. This will then imply the convergence
$$\langle\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}^n, \varphi\rangle - \langle\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n, \varphi\rangle \Rightarrow \langle\hat{\mathcal{A}}_0 + \hat{\mathcal{E}}, \varphi\rangle - \langle\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}, \varphi\rangle \text{ in } D([0,T], \mathbb{R}) \text{ as } n \to \infty,$$
along the given subsequence. However, since the subsequence was arbitrary, this will then imply convergence along the entire sequence, thus verifying Part 1 of Theorem 1.5.
Let $t_1, t_2 \in [0,T]$ with $t_1 \leq t_2$, let $a_1, a_2, b_1, b_2 \in \mathbb{R}$ and let $x, y \in \mathbb{R}$. We will show that
$$P\left(a_1 \langle\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}_{t_1}^n, \varphi\rangle + a_2 \langle\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}_{t_2}^n, \varphi\rangle \leq x,\ b_1 \langle\hat{\mathcal{M}}_{t_1}^{0,n} + \hat{\mathcal{M}}_{t_1}^n, \varphi\rangle + b_2 \langle\hat{\mathcal{M}}_{t_2}^{0,n} + \hat{\mathcal{M}}_{t_2}^n, \varphi\rangle \leq y\right)$$
$$\to P\left(a_1 \langle\hat{\mathcal{A}}_0 + \hat{\mathcal{E}}_{t_1}, \varphi\rangle + a_2 \langle\hat{\mathcal{A}}_0 + \hat{\mathcal{E}}_{t_2}, \varphi\rangle \leq x\right) P\left(b_1 \langle\hat{\mathcal{M}}_{t_1}^0 + \hat{\mathcal{M}}_{t_1}, \varphi\rangle + b_2 \langle\hat{\mathcal{M}}_{t_2}^0 + \hat{\mathcal{M}}_{t_2}, \varphi\rangle \leq y\right) \tag{91}$$
as $n \to \infty$. The analogous proof for $t_1, \ldots, t_m \in [0,T]$ with $m > 2$ follows similarly. This will then be sufficient to show that $\langle\hat{\mathcal{A}}_0 + \hat{\mathcal{E}}, \varphi\rangle$ and $\langle\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}, \varphi\rangle$ are independent of one another. First note that we may write
$$P\left(a_1 \langle\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}_{t_1}^n, \varphi\rangle + a_2 \langle\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}_{t_2}^n, \varphi\rangle \leq x,\ b_1 \langle\hat{\mathcal{M}}_{t_1}^{0,n} + \hat{\mathcal{M}}_{t_1}^n, \varphi\rangle + b_2 \langle\hat{\mathcal{M}}_{t_2}^{0,n} + \hat{\mathcal{M}}_{t_2}^n, \varphi\rangle \leq y\right)$$
$$= \mathbb{E}\left[1_{\{a_1 \langle\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}_{t_1}^n, \varphi\rangle + a_2 \langle\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}_{t_2}^n, \varphi\rangle \leq x\}}\, 1_{\{b_1 \langle\hat{\mathcal{M}}_{t_1}^{0,n} + \hat{\mathcal{M}}_{t_1}^n, \varphi\rangle + b_2 \langle\hat{\mathcal{M}}_{t_2}^{0,n} + \hat{\mathcal{M}}_{t_2}^n, \varphi\rangle \leq y\}}\right].$$
However, by the tower property of conditional expectations [6], we have that
$$\mathbb{E}\left[1_{\{a_1 \langle\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}_{t_1}^n, \varphi\rangle + a_2 \langle\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}_{t_2}^n, \varphi\rangle \leq x\}}\, 1_{\{b_1 \langle\hat{\mathcal{M}}_{t_1}^{0,n} + \hat{\mathcal{M}}_{t_1}^n, \varphi\rangle + b_2 \langle\hat{\mathcal{M}}_{t_2}^{0,n} + \hat{\mathcal{M}}_{t_2}^n, \varphi\rangle \leq y\}}\right]$$
$$= \mathbb{E}\left[1_{\{a_1 \langle\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}_{t_1}^n, \varphi\rangle + a_2 \langle\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}_{t_2}^n, \varphi\rangle \leq x\}}\, \mathbb{E}\left[1_{\{b_1 \langle\hat{\mathcal{M}}_{t_1}^{0,n} + \hat{\mathcal{M}}_{t_1}^n, \varphi\rangle + b_2 \langle\hat{\mathcal{M}}_{t_2}^{0,n} + \hat{\mathcal{M}}_{t_2}^n, \varphi\rangle \leq y\}} \,\middle|\, \mathcal{A}_0^n, E^n\right]\right].$$
We now claim that
$$\mathbb{E}\left[1_{\{b_1 \langle\hat{\mathcal{M}}_{t_1}^{0,n} + \hat{\mathcal{M}}_{t_1}^n, \varphi\rangle + b_2 \langle\hat{\mathcal{M}}_{t_2}^{0,n} + \hat{\mathcal{M}}_{t_2}^n, \varphi\rangle \leq y\}} \,\middle|\, \mathcal{A}_0^n, E^n\right] \stackrel{P}{\to} P\left(b_1 \langle\hat{\mathcal{M}}_{t_1}^0 + \hat{\mathcal{M}}_{t_1}, \varphi\rangle + b_2 \langle\hat{\mathcal{M}}_{t_2}^0 + \hat{\mathcal{M}}_{t_2}, \varphi\rangle \leq y\right) \tag{92}$$
as $n \to \infty$. Then, since by assumption
$$\mathbb{E}\left[1_{\{a_1 \langle\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}_{t_1}^n, \varphi\rangle + a_2 \langle\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}_{t_2}^n, \varphi\rangle \leq x\}}\right] \to P\left(a_1 \langle\hat{\mathcal{A}}_0 + \hat{\mathcal{E}}_{t_1}, \varphi\rangle + a_2 \langle\hat{\mathcal{A}}_0 + \hat{\mathcal{E}}_{t_2}, \varphi\rangle \leq x\right)$$
as $n \to \infty$, this will then imply (91), thus verifying Part 1 of Theorem 1.5.
In order to see that (92) holds, first note that
$$\mathbb{E}\left[1_{\{b_1 \langle\hat{\mathcal{M}}_{t_1}^{0,n} + \hat{\mathcal{M}}_{t_1}^n, \varphi\rangle + b_2 \langle\hat{\mathcal{M}}_{t_2}^{0,n} + \hat{\mathcal{M}}_{t_2}^n, \varphi\rangle \leq y\}} \,\middle|\, \mathcal{A}_0^n, E^n\right] = P\left(b_1 \langle\hat{\mathcal{M}}_{t_1}^{0,n} + \hat{\mathcal{M}}_{t_1}^n, \varphi\rangle + b_2 \langle\hat{\mathcal{M}}_{t_2}^{0,n} + \hat{\mathcal{M}}_{t_2}^n, \varphi\rangle \leq y \,\middle|\, \mathcal{A}_0^n, E^n\right).$$
Now recall from (60) that we may write
$$b_1 \langle\hat{\mathcal{M}}_{t_1}^{0,n} + \hat{\mathcal{M}}_{t_1}^n, \varphi\rangle + b_2 \langle\hat{\mathcal{M}}_{t_2}^{0,n} + \hat{\mathcal{M}}_{t_2}^n, \varphi\rangle = \sum_{i=1}^{A_0^n(\infty)} \left(\langle\hat{\mathcal{M}}_{t_1}^{0,n,i}, b_1 \varphi\rangle + \langle\hat{\mathcal{M}}_{t_2}^{0,n,i}, b_2 \varphi\rangle\right) + \sum_{i=1}^{E_{t_2}^n} \left(\langle\hat{\mathcal{M}}_{t_1}^{n,i}, b_1 \varphi\rangle + \langle\hat{\mathcal{M}}_{t_2}^{n,i}, b_2 \varphi\rangle\right).$$
Moreover, given $\mathcal{A}_0^n$ and $E^n$, we have that the random variables $\langle\hat{\mathcal{M}}_{t_1}^{0,n,i}, b_1 \varphi\rangle + \langle\hat{\mathcal{M}}_{t_2}^{0,n,i}, b_2 \varphi\rangle$, $i = 1, \ldots, A_0^n(\infty)$, and $\langle\hat{\mathcal{M}}_{t_1}^{n,i}, b_1 \varphi\rangle + \langle\hat{\mathcal{M}}_{t_2}^{n,i}, b_2 \varphi\rangle$, $i = 1, \ldots, E_{t_2}^n$, are mutually independent, with mean zero. In addition, it is straightforward to calculate that
$$\mathbb{E}\left[\left(\sum_{i=1}^{A_0^n(\infty)} \left(\langle\hat{\mathcal{M}}_{t_1}^{0,n,i}, b_1 \varphi\rangle + \langle\hat{\mathcal{M}}_{t_2}^{0,n,i}, b_2 \varphi\rangle\right)\right)^2 \,\middle|\, \mathcal{A}_0^n, E^n\right] + \mathbb{E}\left[\left(\sum_{i=1}^{E_{t_2}^n} \left(\langle\hat{\mathcal{M}}_{t_1}^{n,i}, b_1 \varphi\rangle + \langle\hat{\mathcal{M}}_{t_2}^{n,i}, b_2 \varphi\rangle\right)\right)^2 \,\middle|\, \mathcal{A}_0^n, E^n\right] \tag{93}$$
$$= \mathbb{E}\left[\int_0^{t_1} \langle\bar{\mathcal{A}}_u^n, (b_1 + b_2)^2 \varphi^2 h\rangle \, du \,\middle|\, \mathcal{A}_0^n, E^n\right].$$
Also, note that for each $i = 1, \ldots, A_0^n(\infty)$,
$$\mathbb{E}\left[\left|\langle\hat{\mathcal{M}}_{t_1}^{0,n,i}, b_1 \varphi\rangle + \langle\hat{\mathcal{M}}_{t_2}^{0,n,i}, b_2 \varphi\rangle\right|^3 \,\middle|\, \mathcal{A}_0^n, E^n\right] = \mathbb{E}\left[\left(\langle\hat{\mathcal{M}}_{t_1}^{0,n,i}, b_1 \varphi\rangle + \langle\hat{\mathcal{M}}_{t_2}^{0,n,i}, b_2 \varphi\rangle\right)^2 \left|\langle\hat{\mathcal{M}}_{t_1}^{0,n,i}, b_1 \varphi\rangle + \langle\hat{\mathcal{M}}_{t_2}^{0,n,i}, b_2 \varphi\rangle\right| \,\middle|\, \mathcal{A}_0^n, E^n\right] \tag{94}$$
$$\leq \frac{(|b_1| + |b_2|)\, \|\varphi\|_\infty \left(1 + t_2 \|h\|_\infty\right)}{\sqrt{n}}\, \mathbb{E}\left[\left(\langle\hat{\mathcal{M}}_{t_1}^{0,n,i}, b_1 \varphi\rangle + \langle\hat{\mathcal{M}}_{t_2}^{0,n,i}, b_2 \varphi\rangle\right)^2 \,\middle|\, \mathcal{A}_0^n, E^n\right],$$
and, similarly, for each $i = 1, \ldots, E_{t_2}^n$,
$$\mathbb{E}\left[\left|\langle\hat{\mathcal{M}}_{t_1}^{n,i}, b_1 \varphi\rangle + \langle\hat{\mathcal{M}}_{t_2}^{n,i}, b_2 \varphi\rangle\right|^3 \,\middle|\, \mathcal{A}_0^n, E^n\right] \leq \frac{(|b_1| + |b_2|)\, \|\varphi\|_\infty \left(1 + t_2 \|h\|_\infty\right)}{\sqrt{n}}\, \mathbb{E}\left[\left(\langle\hat{\mathcal{M}}_{t_1}^{n,i}, b_1 \varphi\rangle + \langle\hat{\mathcal{M}}_{t_2}^{n,i}, b_2 \varphi\rangle\right)^2 \,\middle|\, \mathcal{A}_0^n, E^n\right]. \tag{95}$$
Now let $\Phi$ denote the CDF of a standard normal random variable. It then follows by (93), (94), (95) and an application of the Berry-Esseen Theorem [6] for independent (but not necessarily identically distributed) random variables that there exists a constant $C < \infty$ such that
$$\left|P\left(\frac{b_1 \langle\hat{\mathcal{M}}_{t_1}^{0,n} + \hat{\mathcal{M}}_{t_1}^n, \varphi\rangle + b_2 \langle\hat{\mathcal{M}}_{t_2}^{0,n} + \hat{\mathcal{M}}_{t_2}^n, \varphi\rangle}{\left(\mathbb{E}\left[\int_0^{t_1} \langle\bar{\mathcal{A}}_u^n, (b_1 + b_2)^2 \varphi^2 h\rangle \, du \,\middle|\, \mathcal{A}_0^n, E^n\right]\right)^{1/2}} \leq y \,\middle|\, \mathcal{A}_0^n, E^n\right) - \Phi(y)\right|$$
$$\leq \frac{C\, (|b_1| + |b_2|)\, \|\varphi\|_\infty \left(1 + t_2 \|h\|_\infty\right)}{\sqrt{n} \left(\mathbb{E}\left[\int_0^{t_1} \langle\bar{\mathcal{A}}_u^n, (b_1 + b_2)^2 \varphi^2 h\rangle \, du \,\middle|\, \mathcal{A}_0^n, E^n\right]\right)^{1/2}}.$$
Hence, in order to complete the proof of (92) and hence verify Part 1 of Theorem 1.5, it suffices to show that
$$\mathbb{E}\left[\int_0^{t_1} \langle\bar{\mathcal{A}}_u^n, (b_1 + b_2)^2 \varphi^2 h\rangle \, du \,\middle|\, \mathcal{A}_0^n, E^n\right] \stackrel{P}{\to} \int_0^{t_1} \langle\bar{\mathcal{A}}_u, (b_1 + b_2)^2 \varphi^2 h\rangle \, du \text{ as } n \to \infty. \tag{96}$$
However, note that by Theorem 5.2 and the continuity of the integral map [1], we have that
$$\int_0^{t_1} \langle\bar{\mathcal{A}}_u^n, (b_1 + b_2)^2 \varphi^2 h\rangle \, du \stackrel{P}{\to} \int_0^{t_1} \langle\bar{\mathcal{A}}_u, (b_1 + b_2)^2 \varphi^2 h\rangle \, du \text{ as } n \to \infty. \tag{97}$$
Next, note that the uniform integrability of $\{\bar{A}_0^n(\infty),\ n \geq 1\}$ and $\{\bar{E}_T^n,\ n \geq 1\}$ implies the uniform integrability of
$$\left\{\int_0^{t_1} \langle\bar{\mathcal{A}}_u^n, (b_1 + b_2)^2 \varphi^2 h\rangle \, du,\ n \geq 1\right\}. \tag{98}$$
It is then straightforward to show that (97) implies (96), thus completing the verification of Part 1 of Theorem 1.5. The proof of the verification of Part 2 of Theorem 1.5 follows in a similar manner to the above and has been omitted for the sake of brevity. This completes the proof.
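The Berry-Esseen step in the proof above can be illustrated numerically in a case where everything is computable exactly. The following Python sketch (purely illustrative, not part of the proof) takes a sum of i.i.d. Bernoulli variables, computes the exact Kolmogorov distance between its standardized CDF and the standard normal CDF, and checks it against the Berry-Esseen bound; the constant 0.4748 is Shevtsova's value for the i.i.d. case.

```python
import math

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n, p = 200, 0.3
mean, sd = n * p, math.sqrt(n * p * (1 - p))

# exact CDF of Binomial(n, p) at each lattice point
pmf = [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
cdf, acc = [], 0.0
for w in pmf:
    acc += w
    cdf.append(acc)

# Kolmogorov distance between standardized binomial and normal,
# checking both sides of every jump of the step-function CDF
dist = 0.0
for k in range(n + 1):
    z = (k - mean) / sd
    left = cdf[k - 1] if k > 0 else 0.0
    dist = max(dist, abs(cdf[k] - norm_cdf(z)), abs(left - norm_cdf(z)))

# Berry-Esseen bound 0.4748 * rho / (sigma^3 * sqrt(n)) for i.i.d. summands
rho = p * (1 - p) * (p**2 + (1 - p)**2)   # E|X - p|^3 for Bernoulli(p)
sigma3 = (p * (1 - p)) ** 1.5
bound = 0.4748 * rho / (sigma3 * math.sqrt(n))
print(dist, bound)
```

The proof's non-identically-distributed version works the same way, with the sum of individual third moments replacing $n\rho$.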
The following is now our main result of this section. Its proof is a straightforward consequence of Theorem 1.1, (87) and Proposition 6.4.

Theorem 6.5. If $\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}^n \Rightarrow \hat{\mathcal{A}}_0 + \hat{\mathcal{E}}$ in $D([0,T], \mathcal{S}')$ as $n \to \infty$, then
$$\hat{\mathcal{A}}^n \Rightarrow \hat{\mathcal{A}} \text{ in } D([0,T], \mathcal{S}') \text{ as } n \to \infty, \tag{99}$$
where $\hat{\mathcal{A}}$ is the solution to the stochastic integral equation
$$\langle\hat{\mathcal{A}}_t, \varphi\rangle = \langle\hat{\mathcal{A}}_0, \varphi\rangle + \langle\hat{\mathcal{E}}_t, \varphi\rangle - \langle\hat{\mathcal{M}}_t^0 + \hat{\mathcal{M}}_t, \varphi\rangle - \int_0^t \langle\hat{\mathcal{A}}_s, \varphi h\rangle \, ds + \int_0^t \langle\hat{\mathcal{A}}_s, \varphi'\rangle \, ds, \quad t \in [0,T],\ \varphi \in \mathcal{S}. \tag{100}$$
In addition, if $\hat{\mathcal{E}}$ is an $\mathcal{S}'$-valued Wiener process with covariance functional $K_{\hat{\mathcal{E}}}(s, \varphi; t, \psi) = \alpha^2 (s \wedge t) \varphi(0) \psi(0)$, then $\hat{\mathcal{A}}$ is a generalized $\mathcal{S}'$-valued Ornstein-Uhlenbeck process driven by a generalized $\mathcal{S}'$-valued Wiener process with covariance functional
$$K_{\hat{\mathcal{E}} - (\hat{\mathcal{M}}^0 + \hat{\mathcal{M}})}(s, \varphi; t, \psi) = \alpha^2 (s \wedge t) \varphi(0) \psi(0) + \int_0^{s \wedge t} \langle\bar{\mathcal{A}}_u, \varphi \psi h\rangle \, du. \tag{101}$$
Proof. First note that by (87) we have that $\hat{\mathcal{A}}^n = \Psi_A(\hat{\mathcal{A}}_0^n + \hat{\mathcal{E}}^n - (\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n))$, where the map $\Psi_A : D([0,T], \mathcal{S}') \to D([0,T], \mathcal{S}')$ is a continuous map. The convergence (99) now follows by Theorem 1.1 and Proposition 6.4.

Next, suppose that $\hat{\mathcal{E}}$ is an $\mathcal{S}'$-valued Wiener process with covariance functional $K_{\hat{\mathcal{E}}}(s, \varphi; t, \psi) = \alpha^2 (s \wedge t) \varphi(0) \psi(0)$. Then, combining this with (89) and the fact that $\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}$ and $\hat{\mathcal{A}}_0 + \hat{\mathcal{E}}$ are independent from Proposition 6.4, yields (101). Thus, by Definition 6.2, $\hat{\mathcal{A}}$ is a generalized $\mathcal{S}'$-valued Ornstein-Uhlenbeck process.
Recall now from Proposition 5.4 of §5.1 that if $\langle\bar{\mathcal{E}}, \varphi\rangle = \lambda \varphi(0) e$ for each $\varphi \in \mathcal{S}$ and some $\lambda \geq 0$, then a stationary solution to the fluid limit equation (79) is given by $\bar{\mathcal{A}} = \lambda F_e$. We now show that under the additional condition that $\hat{\mathcal{E}}$ is an $\mathcal{S}'$-valued Wiener process with covariance functional $K_{\hat{\mathcal{E}}}(s, \varphi; t, \psi) = \alpha^2 (s \wedge t) \varphi(0) \psi(0)$, the resulting limiting diffusion scaled age process $\hat{\mathcal{A}}$ of Theorem 6.5 is a time-homogeneous Markov process. Our result is the following. Note also that a similar approach may be used to analyze the diffusion limit of the residual service time process in Theorem 6.9 in the following subsection.
Proposition 6.6. If $\langle\bar{\mathcal{E}}, \varphi\rangle = \lambda \varphi(0) e$ for each $\varphi \in \mathcal{S}$, $\bar{\mathcal{A}}_0 = \lambda F_e$ and $\hat{\mathcal{E}}$ is an $\mathcal{S}'$-valued Wiener process with covariance functional $K_{\hat{\mathcal{E}}}(s, \varphi; t, \psi) = \alpha^2 (s \wedge t) \varphi(0) \psi(0)$, then $\hat{\mathcal{A}}$ is an $\mathcal{S}'$-valued Ornstein-Uhlenbeck process driven by an $\mathcal{S}'$-valued Wiener process with covariance functional given for each $\varphi, \psi \in \mathcal{S}$ and $s, t \geq 0$ by
$$K_{\hat{\mathcal{E}} - (\hat{\mathcal{M}}^0 + \hat{\mathcal{M}})}(s, \varphi; t, \psi) = (s \wedge t)\, \langle\alpha^2 \delta_0 + \lambda F, \varphi \psi\rangle. \tag{102}$$
Proof. It is clear that the covariance functional of $\hat{\mathcal{E}}$ is given by
$$K_{\hat{\mathcal{E}}}(s, \varphi; t, \psi) = (s \wedge t)\, \alpha^2 \langle\delta_0, \varphi \psi\rangle. \tag{103}$$
We now show that the covariance functional of $\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}$ is given by
$$K_{\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}}(s, \varphi; t, \psi) = (s \wedge t)\, \lambda \langle F, \varphi \psi\rangle. \tag{104}$$
Then, since $\hat{\mathcal{E}}$ and $\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}$ are independent, summing (103) and (104) will prove (102), which will complete the proof.

Note that by Proposition 5.4 we have that since by assumption $\bar{\mathcal{A}}_0 = \lambda F_e$, it follows that $\bar{\mathcal{A}} = \lambda F_e$ is the unique solution to the fluid limit equation (79). Therefore, by Lemma 6.3 we have that for each $\varphi, \psi \in \mathcal{S}$ and $s, t \geq 0$,
$$K_{\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}}(s, \varphi; t, \psi) = \int_0^{s \wedge t} \langle\bar{\mathcal{A}}_u, \varphi \psi h\rangle \, du = \int_0^{s \wedge t} \int_{\mathbb{R}_+} \varphi(x) \psi(x) h(x) \, d\bar{\mathcal{A}}_u(x) \, du = (s \wedge t) \int_{\mathbb{R}_+} \varphi(x) \psi(x) h(x)\, \lambda \bar{F}(x) \, dx = (s \wedge t) \int_{\mathbb{R}_+} \varphi(x) \psi(x)\, \lambda f(x) \, dx.$$
This proves (104), which completes the proof.
We now note that using (27) of Theorem 3.3 and (44) of Proposition 3.6, one may obtain an explicit representation of $\hat{\mathcal{A}}$ in terms of the $\mathcal{S}'$-valued Wiener process given in Proposition 6.6 above. Direct calculations may then be used in order to obtain the transient and limiting distribution of $\hat{\mathcal{A}}$. In particular, assuming that $\hat{\mathcal{A}}_0$ is a Gaussian random variable, one may then show that for each $t \in [0,T]$, $\hat{\mathcal{A}}_t$ is a Gaussian random variable with mean
$$\mathbb{E}[\langle\hat{\mathcal{A}}_t, \varphi\rangle] = \langle\hat{\mathcal{A}}_0, \bar{F}^{-1} \tau_t(\varphi \bar{F})\rangle, \quad \varphi \in \mathcal{S}, \tag{105}$$
and covariance functional given for each $\varphi, \psi \in \mathcal{S}$ and $t \in [0,T]$ by
$$\mathbb{E}[\langle\hat{\mathcal{A}}_t, \varphi\rangle \langle\hat{\mathcal{A}}_t, \psi\rangle] = \langle\lambda F_e, \bar{F}^{-1} \tau_t(\varphi \psi \bar{F})\left(1 - \bar{F}^{-1} \tau_t \bar{F}\right)\rangle + \int_0^t \varphi(u) \psi(u) \left(\lambda F(u) + \alpha^2 \bar{F}(u)\right) \bar{F}(u) \, du. \tag{106}$$
In addition, taking limits as $t \to \infty$, one also finds that $\hat{\mathcal{A}}_t$ weakly converges as $t \to \infty$ to a Gaussian random variable $\hat{\mathcal{A}}_\infty$ with mean zero and covariance functional given for each $\varphi, \psi \in \mathcal{S}$ by
$$\mathbb{E}[\langle\hat{\mathcal{A}}_\infty, \varphi\rangle \langle\hat{\mathcal{A}}_\infty, \psi\rangle] = \langle\lambda F_e, \varphi \psi \left(F + \lambda^{-1} \alpha^2 \bar{F}\right)\rangle.$$
We now conclude this subsection by noting that one may heuristically attempt to substitute the test function $1_{\{x \geq 0\}}$ into the formula for $\hat{\mathcal{A}}$ provided by Theorem 3.3 in order to obtain an expression for the limiting diffusion scaled total number of customers in the system. For instance, suppose that $\hat{\mathcal{A}}_0 = 0$ and that, as in the statement of Proposition 6.6, we have that $\langle\bar{\mathcal{E}}, \varphi\rangle = \lambda \varphi(0) e$ for each $\varphi \in \mathcal{S}$, $\bar{\mathcal{A}}_0 = \lambda F_e$ and $\hat{\mathcal{E}}$ is an $\mathcal{S}'$-valued Wiener process with covariance functional $K_{\hat{\mathcal{E}}}(s, \varphi; t, \psi) = \alpha^2 (s \wedge t) \varphi(0) \psi(0)$. Then, using the form of the generator from (39), the semigroup from Proposition 3.6 and (102) of Proposition 6.6, one obtains after a substitution into Theorem 3.3 that the limiting diffusion scaled number of customers in the system at time $t \geq 0$ is heuristically given by
$$\hat{B}_t - \int_0^t \hat{B}_s f(t-s) \, ds = \int_0^t \bar{F}(t-s) \, d\hat{B}_s,$$
where $\hat{B} = (\hat{B}_t)_{t \geq 0}$ is a Brownian motion with infinitesimal variance $\alpha^2 + \lambda$.
6.2 Residuals

We next proceed to study the residual service time process $\mathcal{R}$ defined in §2.2. Our setup is the same as in §5.2. That is, we consider a sequence of G/GI/$\infty$ queues indexed by $n$, where the arrival rate to the $n$th system is of order $n$ and the service time distribution does not change with $n$. For the remainder of this subsection, we also assume that $\bar{\mathcal{R}}_0^n + \bar{E}^n F \Rightarrow \bar{\mathcal{R}}_0 + \bar{E} F$ as $n \to \infty$, where $\bar{\mathcal{R}}_0 + \bar{E} F$ is a non-random quantity. By Theorem 5.6 of §5.2, this implies that $\bar{\mathcal{R}}$ is non-random as well.

Now, for each $n \geq 1$, in addition to the diffusion scaled quantities defined in §6.1, let us also now define the diffusion scaled quantities
$$\hat{\mathcal{R}}^n \equiv \sqrt{n}\left(\bar{\mathcal{R}}^n - \bar{\mathcal{R}}\right), \qquad \hat{\mathcal{R}}_0^n \equiv \sqrt{n}\left(\bar{\mathcal{R}}_0^n - \bar{\mathcal{R}}_0\right) \qquad \text{and} \qquad \hat{\mathcal{G}}^n \equiv \sqrt{n}\,\bar{\mathcal{G}}^n.$$
Then, after recalling the form of the fluid limit $\bar{\mathcal{R}}$ from Theorem 5.6, note that using system equation (25) in conjunction with Theorem 3.3 and Proposition 3.7, one has that
$$\hat{\mathcal{R}}^n = \Psi_R\left(\hat{\mathcal{R}}_0^n + \hat{E}^n F + \hat{\mathcal{G}}^n\right), \tag{107}$$
where the map $\Psi_R : D([0,T], \mathcal{S}') \to D([0,T], \mathcal{S}')$ is a continuous map. Our strategy now is to proceed similarly to §6.1. That is, we first prove a weak convergence result for the sequence $(\hat{\mathcal{R}}_0^n + \hat{E}^n F + \hat{\mathcal{G}}^n)_{n \geq 1}$ and then we apply Theorem 1.1 together with (107) in order to obtain a diffusion limit result for the sequence $(\hat{\mathcal{R}}^n)_{n \geq 1}$.
We first show that for each $n \geq 1$, the process $\hat{\mathcal{G}}^n$ may be well approximated by a process which is independent of $\hat{\mathcal{R}}_0^n + \hat{E}^n F$. For each $n \geq 1$, let $\tilde{\mathcal{G}}^n$ be the $\mathcal{S}'$-valued process defined for $\varphi \in \mathcal{S}$ by
$$\langle\tilde{\mathcal{G}}_t^n, \varphi\rangle = \frac{1}{\sqrt{n}} \sum_{i=1}^{\lfloor n \bar{E}_t \rfloor} \left(\varphi(\eta_i) - \langle F, \varphi\rangle\right), \quad t \geq 0.$$
Note that it is clear that $\tilde{\mathcal{G}}^n$ is independent of $\hat{\mathcal{R}}_0^n + \hat{E}^n F$. We now have the following result. Its proof may be found in the Appendix.
Lemma 6.7. If $\hat{\mathcal{R}}_0^n + \hat{E}^n F \Rightarrow \hat{\mathcal{R}}_0 + \hat{E} F$ in $D([0,T], \mathcal{S}')$ as $n \to \infty$, then
$$\hat{\mathcal{G}}^n - \tilde{\mathcal{G}}^n \Rightarrow 0 \text{ in } D([0,T], \mathcal{S}') \text{ as } n \to \infty, \tag{108}$$
and
$$\tilde{\mathcal{G}}^n \Rightarrow \hat{\mathcal{G}} \text{ in } D([0,T], \mathcal{S}') \text{ as } n \to \infty, \tag{109}$$
where $\hat{\mathcal{G}}$ is an $\mathcal{S}'$-valued Wiener process with covariance functional given for $\varphi, \psi \in \mathcal{S}$ and $s, t \geq 0$ by
$$K_{\hat{\mathcal{G}}}(s, \varphi; t, \psi) = \left(\bar{E}_s \wedge \bar{E}_t\right) \mathrm{Cov}(\varphi(\eta), \psi(\eta)), \tag{110}$$
where $\eta$ is a random variable with cdf $F$.

Proof. See Appendix.
Using Lemma 6.7, we may now prove the following result on the weak convergence of $(\hat{\mathcal{R}}_0^n + \hat{E}^n F + \hat{\mathcal{G}}^n)_{n \geq 1}$. We have the following.

Proposition 6.8. If $\hat{\mathcal{R}}_0^n + \hat{E}^n F \Rightarrow \hat{\mathcal{R}}_0 + \hat{E} F$ in $D([0,T], \mathcal{S}')$ as $n \to \infty$, then
$$\hat{\mathcal{R}}_0^n + \hat{E}^n F + \hat{\mathcal{G}}^n \Rightarrow \hat{\mathcal{R}}_0 + \hat{E} F + \hat{\mathcal{G}} \text{ in } D([0,T], \mathcal{S}') \text{ as } n \to \infty, \tag{111}$$
where $\hat{\mathcal{G}}$ is as given in Lemma 6.7 and is independent of $\hat{\mathcal{R}}_0 + \hat{E} F$.
Proof. Since $\tilde{\mathcal{G}}^n$ is independent of $\hat{\mathcal{R}}_0^n + \hat{E}^n F$ for each $n \geq 1$, it follows from Theorem 1.5 and (109) of Lemma 6.7 that we have the convergence
$$\hat{\mathcal{R}}_0^n + \hat{E}^n F + \tilde{\mathcal{G}}^n \Rightarrow \hat{\mathcal{R}}_0 + \hat{E} F + \hat{\mathcal{G}} \text{ in } D([0,T], \mathcal{S}') \text{ as } n \to \infty. \tag{112}$$
The result now follows by (112), Theorem 1.5, (108) of Lemma 6.7 and the fact that we may write
$$\hat{\mathcal{R}}_0^n + \hat{E}^n F + \hat{\mathcal{G}}^n = \hat{\mathcal{R}}_0^n + \hat{E}^n F + \tilde{\mathcal{G}}^n + (\hat{\mathcal{G}}^n - \tilde{\mathcal{G}}^n).$$
The following is now the main result of this subsection. It provides a weak limit for the sequence $(\hat{\mathcal{R}}^n)_{n \geq 1}$.

Theorem 6.9. If $\hat{\mathcal{R}}_0^n + \hat{E}^n F \Rightarrow \hat{\mathcal{R}}_0 + \hat{E} F$ in $D([0,T], \mathcal{S}')$ as $n \to \infty$, then
$$\hat{\mathcal{R}}^n \Rightarrow \hat{\mathcal{R}} \text{ in } D([0,T], \mathcal{S}') \text{ as } n \to \infty, \tag{113}$$
where $\hat{\mathcal{R}}$ is the solution to the stochastic integral equation
$$\langle\hat{\mathcal{R}}_t, \varphi\rangle = \langle\hat{\mathcal{R}}_0, \varphi\rangle + \langle\hat{\mathcal{G}}_t, \varphi\rangle + \hat{E}_t \langle F, \varphi\rangle - \int_0^t \langle\hat{\mathcal{R}}_s, \varphi'\rangle \, ds, \quad t \in [0,T],\ \varphi \in \mathcal{S}. \tag{114}$$
In addition, if $\hat{E}$ is a Brownian motion with diffusion coefficient $\alpha$, then $\hat{\mathcal{R}}$ is a generalized $\mathcal{S}'$-valued Ornstein-Uhlenbeck process driven by a generalized $\mathcal{S}'$-valued Wiener process with covariance functional
$$K_{\hat{E} F + \hat{\mathcal{G}}}(s, \varphi; t, \psi) = \alpha^2 (s \wedge t)\, \mathbb{E}[\varphi(\eta)]\, \mathbb{E}[\psi(\eta)] + \left(\bar{E}_s \wedge \bar{E}_t\right) \mathrm{Cov}(\varphi(\eta), \psi(\eta)), \tag{115}$$
where $\eta$ is a random variable with cdf $F$.
Proof. First note that by (107) we have that $\hat{\mathcal{R}}^n = \Psi_R(\hat{\mathcal{R}}_0^n + \hat{E}^n F + \hat{\mathcal{G}}^n)$, where the map $\Psi_R : D([0,T], \mathcal{S}') \to D([0,T], \mathcal{S}')$ is a continuous map. The convergence (113) now follows by Theorem 1.1 and Proposition 6.8.

Next, note that if $\hat{E}$ is a Brownian motion with diffusion coefficient $\alpha$, then it is easily checked that $\hat{E} F$ is an $\mathcal{S}'$-valued Wiener process with covariance functional
$$K_{\hat{E} F}(s, \varphi; t, \psi) = \alpha^2 (s \wedge t)\, \mathbb{E}[\varphi(\eta)]\, \mathbb{E}[\psi(\eta)]. \tag{116}$$
Combining (110) with (116) and the fact that $\hat{E} F$ and $\hat{\mathcal{G}}$ are independent, yields (115).
Remark 6.10. Note that in the special case when the arrival process to the $n$th system is a Poisson process with rate $\lambda n$, we then have that $\bar{E} = \lambda e$ and so $\hat{E}$ turns out to be a Brownian motion with diffusion coefficient $\sqrt{\lambda}$. It then follows that $K_{\hat{E} F + \hat{\mathcal{G}}}(s, \varphi; t, \psi) = \lambda (s \wedge t)\, \mathbb{E}[\varphi(\eta) \psi(\eta)]$ and so Theorem 6.9 gives us a version of Theorem 3 of [8].
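The collapse of the two covariance terms in Remark 6.10 can be checked numerically: for Poisson arrivals, the driving term evaluated at a test function, $\hat{E}_t \langle F, \varphi\rangle + \langle\hat{\mathcal{G}}_t, \varphi\rangle$, is the diffusion-scaled centered sum $n^{-1/2}\left(\sum_{i=1}^{E_t^n} \varphi(\eta_i) - n\lambda t\, \mathbb{E}[\varphi(\eta)]\right)$, whose variance converges to $\lambda t\, \mathbb{E}[\varphi(\eta)^2]$. The Python sketch below (illustrative parameter choices, not from the paper) estimates this variance by Monte Carlo for exponential service times and $\varphi(x) = e^{-x}$.

```python
import numpy as np

lam, mu, t, n = 2.0, 1.0, 1.5, 400      # illustrative parameters
phi = lambda x: np.exp(-x)              # test function
rng = np.random.default_rng(2)

# E[phi(eta)] and E[phi(eta)^2] for eta ~ Exp(mu): mu/(mu+1) and mu/(mu+2)
m1, m2 = mu / (mu + 1.0), mu / (mu + 2.0)

n_reps = 20000
vals = np.empty(n_reps)
for r in range(n_reps):
    k = rng.poisson(n * lam * t)        # number of arrivals by time t
    eta = rng.exponential(1.0 / mu, size=k)
    vals[r] = (phi(eta).sum() - n * lam * t * m1) / np.sqrt(n)

print(vals.var(), lam * t * m2)         # Monte Carlo vs. limit lam*t*E[phi^2]
```

Here the mean-term variance $\lambda t\, \mathbb{E}[\varphi(\eta)]^2$ and the centered-sum variance $\lambda t\, \mathrm{Cov}(\varphi(\eta), \varphi(\eta))$ add up to $\lambda t\, \mathbb{E}[\varphi(\eta)^2]$, exactly as in the remark.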
7 Acknowledgements
The authors would like to thank the referees for their numerous helpful comments and suggestions
which have helped to improve the clarity and overall exposition of the paper.
8 Appendix
In the Appendix, we provide the proofs of several supporting lemmas from the main body of the
paper. We begin with the proof of Lemma 3.4.
Proof of Lemma 3.4. We prove Part 1 by induction. For each $n \geq 0$ and $t \geq 0$ fixed, denote the quantity on the left-hand side of (41) by $L_n$. For the base case of $n = 0$, it is straightforward to see that $L_0 \leq 1$. Next, for the inductive step, suppose that (41) holds for $n = 0, \ldots, k-1$, and $t \geq 0$. Then, we have that
$$\left\|\left(\frac{\bar{F}(x+t)}{\bar{F}(x)}\right)^{(k)}\right\|_\infty = \left\|\left(\left(\frac{\bar{F}(x+t)}{\bar{F}(x)}\right)^{(1)}\right)^{(k-1)}\right\|_\infty = \left\|\left((h(x) - h(x+t))\,\frac{\bar{F}(x+t)}{\bar{F}(x)}\right)^{(k-1)}\right\|_\infty$$
$$= \left\|\sum_{i=0}^{k-1} \binom{k-1}{i} \left(h(x) - h(x+t)\right)^{(k-1-i)} \left(\frac{\bar{F}(x+t)}{\bar{F}(x)}\right)^{(i)}\right\|_\infty \leq 2 \sum_{i=0}^{k-1} \binom{k-1}{i} \|h^{(k-1-i)}\|_\infty\, L_i < \infty.$$
This completes the proof of Part 1.
We next prove Part 2 by induction as well. First recall that for $s, t \geq 0$, we may write
$$\frac{\bar{F}(x+t)}{\bar{F}(x+s)} = \exp\left(-\int_{x+s}^{x+t} h(u) \, du\right). \tag{117}$$
Hence, for the base case of $n = 0$, we have that
$$\sup_{x \geq 0}\left|\frac{\bar{F}(x+t)}{\bar{F}(x)} - \frac{\bar{F}(x+s)}{\bar{F}(x)}\right| = \sup_{x \geq 0} \frac{\bar{F}(x+s)}{\bar{F}(x)}\left|1 - \frac{\bar{F}(x+t)}{\bar{F}(x+s)}\right| \leq \sup_{x \geq 0} \frac{\bar{F}(x+s)}{\bar{F}(x)} \cdot \sup_{x \geq 0}\left|1 - \frac{\bar{F}(x+t)}{\bar{F}(x+s)}\right| \leq \sup_{x \geq 0}\left|1 - \frac{\bar{F}(x+t)}{\bar{F}(x+s)}\right| \leq 1 - e^{-\|h\|_\infty |t-s|} \leq \|h\|_\infty |t-s|,$$
where the third inequality above follows from (117) and the final inequality follows from the mean value theorem. Next, for the inductive step, suppose that (42) holds for $n = 0, 1, \ldots, k-1$. We then have that
$$\sup_{x \geq 0}\left|\left(\frac{\bar{F}(x+t)}{\bar{F}(x)} - \frac{\bar{F}(x+s)}{\bar{F}(x)}\right)^{(k)}\right| = \sup_{x \geq 0}\left|\left((h(x) - h(x+s))\,\frac{\bar{F}(x+s)}{\bar{F}(x)} - (h(x) - h(x+t))\,\frac{\bar{F}(x+t)}{\bar{F}(x)}\right)^{(k-1)}\right|$$
$$= \sup_{x \geq 0}\left|\left((h(x+t) - h(x+s))\,\frac{\bar{F}(x+s)}{\bar{F}(x)} - (h(x) - h(x+t))\left(\frac{\bar{F}(x+t)}{\bar{F}(x)} - \frac{\bar{F}(x+s)}{\bar{F}(x)}\right)\right)^{(k-1)}\right|$$
$$= \sup_{x \geq 0}\left|\sum_{i=0}^{k-1} \binom{k-1}{i} \left(h(x+t) - h(x+s)\right)^{(k-1-i)} \left(\frac{\bar{F}(x+s)}{\bar{F}(x)}\right)^{(i)} - \sum_{i=0}^{k-1} \binom{k-1}{i} \left(h(x) - h(x+t)\right)^{(k-1-i)} \left(\frac{\bar{F}(x+t)}{\bar{F}(x)} - \frac{\bar{F}(x+s)}{\bar{F}(x)}\right)^{(i)}\right|. \tag{118}$$
Now note that since by Assumption 2.1 we have that $h \in C_b^\infty(\mathbb{R}_+)$, it follows that all of the derivatives of $h$ are bounded and hence uniformly continuous as well. Using this fact, Part 1 and the inductive hypothesis, it now follows that (118) is less than or equal to
$$2|t-s| \sum_{i=0}^{k-1} \binom{k-1}{i} \left(\|h^{((k-1-i)+1)}\|_\infty\, L_i + \|h^{(k-1-i)}\|_\infty\, M_i\right) \leq M_k |t-s|.$$
This proves Part 2 and completes the proof.
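The first-derivative identity driving both inductions above, $\left(\bar{F}(x+t)/\bar{F}(x)\right)' = (h(x) - h(x+t))\,\bar{F}(x+t)/\bar{F}(x)$ with $h = f/\bar{F}$, can be checked numerically. The following Python sketch (illustrative only) verifies it for a concrete Weibull-type survival function $\bar{F}(x) = e^{-x^2}$, whose hazard rate is $h(x) = 2x$, using a central finite difference.

```python
import math

# concrete example: survival Fbar(x) = exp(-x**2), hazard h(x) = 2x
Fbar = lambda u: math.exp(-u * u)
h = lambda u: 2.0 * u

def ratio(x, t):
    return Fbar(x + t) / Fbar(x)

def d_ratio_dx(x, t, eps=1e-6):
    # central finite difference in x
    return (ratio(x + eps, t) - ratio(x - eps, t)) / (2 * eps)

x0, t0 = 0.7, 1.3
lhs = d_ratio_dx(x0, t0)
rhs = (h(x0) - h(x0 + t0)) * ratio(x0, t0)
print(lhs, rhs)   # agree to numerical precision
```

Repeated application of the Leibniz rule to this identity is exactly what generates the binomial sums in (118).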
We next provide the proof of Lemma 6.3.
Proof of Lemma 6.3. We must show that
$$\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n \Rightarrow \hat{\mathcal{M}}^0 + \hat{\mathcal{M}} \text{ in } D([0,T], \mathcal{S}') \text{ as } n \to \infty. \tag{119}$$
Let $\varphi, \psi \in \mathcal{S}$ and let $t \geq 0$. It then follows by Proposition 4.1 that we may write
$$\langle\langle\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n\rangle\rangle_t(\varphi, \psi) = \frac{1}{n}\left(\sum_{i=1}^{A_0^n(\infty)} \int_0^{\tau_i \wedge t} \varphi(u + a_i^n)\, \psi(u + a_i^n)\, h_{a_i^n}(u) \, du + \sum_{i=1}^{E_t^n} \int_0^{\tau_i \wedge (t - \sigma_i^n)} \varphi(u)\, \psi(u)\, h(u) \, du\right) = \int_0^t \langle\bar{\mathcal{A}}_s^n, \varphi \psi h\rangle \, ds, \tag{120}$$
where the second equality above follows as a result of Proposition 2.3. However, by the continuity of the integral mapping on $D([0,T], \mathbb{R})$, it now follows by Theorem 5.2 that
$$\int_0^{\cdot} \langle\bar{\mathcal{A}}_s^n, \varphi \psi h\rangle \, ds \Rightarrow \int_0^{\cdot} \langle\bar{\mathcal{A}}_s, \varphi \psi h\rangle \, ds \text{ in } D([0,T], \mathbb{R}) \text{ as } n \to \infty. \tag{121}$$
We now verify that $(\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n)_{n \geq 1}$ satisfies Parts 1 and 2 of Theorem 1.5. We begin with Part 1. Let $\psi = \varphi$ in (121) and note that using the fact that the maximum jumps of both $\langle\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n, \varphi\rangle$ and $\langle\langle\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n\rangle\rangle(\varphi, \varphi)$ over the interval $[0,T]$ are uniformly bounded in $n$, along with the martingale FCLT [10] and (121), yields the limit
$$\langle\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n, \varphi\rangle \Rightarrow \langle\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}, \varphi\rangle \text{ in } D([0,T], \mathbb{R}) \text{ as } n \to \infty.$$
Thus, Part 1 of Theorem 1.5 is satisfied. We next proceed to verify Part 2 of Theorem 1.5. Let $m \geq 1$, let $\varphi_1, \ldots, \varphi_m \in \mathcal{S}$ and let $1 \leq i, j \leq m$. Then, using the fact that the maximum jumps of both $(\langle\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n, \varphi_1\rangle, \ldots, \langle\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n, \varphi_m\rangle)$ and $\langle\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n\rangle(\varphi_i, \varphi_j)$ are bounded over the interval $[0,T]$, uniformly in $n$, it follows by the martingale FCLT [10] and (121) that
$$\left(\langle\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n, \varphi_1\rangle, \ldots, \langle\hat{\mathcal{M}}^{0,n} + \hat{\mathcal{M}}^n, \varphi_m\rangle\right) \Rightarrow \left(\langle\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}, \varphi_1\rangle, \ldots, \langle\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}, \varphi_m\rangle\right)$$
in $D^m([0,T], \mathbb{R})$ as $n \to \infty$. This limit then provides convergence of the finite dimensional distributions of the random vector on the left-hand side above, which is sufficient to verify Part 2 of Theorem 1.5. Thus, (119) holds, where $\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}$ is an $\mathcal{S}'$-valued Gaussian martingale with tensor quadratic covariation given by (121). (89) now holds since $\langle\hat{\mathcal{M}}^0 + \hat{\mathcal{M}}, \varphi\rangle$ has independent increments for each $\varphi \in \mathcal{S}$.
Next, we provide the proof of Lemma 6.7.

Proof of Lemma 6.7. We first prove that
$$\tilde{\mathcal{G}}^n \Rightarrow \hat{\mathcal{G}} \text{ in } D([0,T], \mathcal{S}') \text{ as } n \to \infty. \tag{122}$$
In order to do so, we will verify that Parts 1 and 2 of Theorem 1.5 are satisfied. We begin with Part 1. Let $\varphi, \psi \in \mathcal{S}$ and note that by Proposition 4.2, the functional strong law of large numbers [33] and the random time change theorem [1], we have that
$$[\tilde{\mathcal{G}}^n]_t(\varphi, \psi) = \frac{1}{n} \sum_{i=1}^{\lfloor n \bar{E}_t \rfloor} \left(\varphi(\eta_i) - \langle F, \varphi\rangle\right)\left(\psi(\eta_i) - \langle F, \psi\rangle\right) \Rightarrow \bar{E}_t\, \mathrm{Cov}(\varphi(\eta), \psi(\eta)) \text{ in } D([0,T], \mathbb{R}) \text{ as } n \to \infty, \tag{123}$$
where $\eta$ is a random variable with CDF $F$. Now letting $\psi = \varphi$ in (123) and using the fact that the maximum jump of $\langle\tilde{\mathcal{G}}^n, \varphi\rangle$ over the interval $[0,T]$ is bounded uniformly in $n$, along with the martingale FCLT [10], yields the limit
$$\langle\tilde{\mathcal{G}}^n, \varphi\rangle \Rightarrow \langle\hat{\mathcal{G}}, \varphi\rangle \text{ in } D([0,T], \mathbb{R}) \text{ as } n \to \infty.$$
Thus, Part 1 of Theorem 1.5 holds. We next prove that Part 2 of Theorem 1.5 holds. Let $m \geq 1$ and let $\varphi_1, \ldots, \varphi_m \in \mathcal{S}$. Then, using the limit (123) and the fact that the maximum jump of $(\langle\tilde{\mathcal{G}}^n, \varphi_1\rangle, \ldots, \langle\tilde{\mathcal{G}}^n, \varphi_m\rangle)$ over the interval $[0,T]$ is bounded uniformly in $n$, along with the martingale FCLT [10], yields the limit
$$\left(\langle\tilde{\mathcal{G}}^n, \varphi_1\rangle, \ldots, \langle\tilde{\mathcal{G}}^n, \varphi_m\rangle\right) \Rightarrow \left(\langle\hat{\mathcal{G}}, \varphi_1\rangle, \ldots, \langle\hat{\mathcal{G}}, \varphi_m\rangle\right) \text{ in } D^m([0,T], \mathbb{R}) \text{ as } n \to \infty.$$
This limit then provides convergence of the finite dimensional distributions of the random vector on the left-hand side above, which shows that Part 2 of Theorem 1.5 holds. Thus, (122) is proven.
In order to complete the proof, it now suffices to show that
$$\hat{\mathcal{G}}^n - \tilde{\mathcal{G}}^n \Rightarrow 0 \text{ in } D([0,T], \mathcal{S}') \text{ as } n \to \infty. \tag{124}$$
However, in order to show (124), it suffices by Theorem 1.5 to show that for each $\varphi \in \mathcal{S}$,
$$\langle\hat{\mathcal{G}}^n, \varphi\rangle - \langle\tilde{\mathcal{G}}^n, \varphi\rangle \Rightarrow 0 \text{ in } D([0,T], \mathbb{R}) \text{ as } n \to \infty. \tag{125}$$
We proceed as follows. In a similar manner to the above, one may show using the martingale FCLT that
$$\hat{\mathcal{G}}^n \Rightarrow \hat{\mathcal{G}} \text{ in } D([0,T], \mathcal{S}') \text{ as } n \to \infty. \tag{126}$$
Hence, for each $\varphi \in \mathcal{S}$, the sequence $(\langle\hat{\mathcal{G}}^n, \varphi\rangle - \langle\tilde{\mathcal{G}}^n, \varphi\rangle)_{n \geq 1}$ is tight in $D([0,T], \mathbb{R})$ and so in order to show (125), it suffices to show that for each $0 \leq t \leq T$,
$$\langle\hat{\mathcal{G}}_t^n, \varphi\rangle - \langle\tilde{\mathcal{G}}_t^n, \varphi\rangle \stackrel{P}{\to} 0 \text{ as } n \to \infty. \tag{127}$$
First note that we may write
$$\langle\hat{\mathcal{G}}_t^n, \varphi\rangle - \langle\tilde{\mathcal{G}}_t^n, \varphi\rangle = 1_{\{E_t^n \geq \lfloor n \bar{E}_t \rfloor\}} \frac{1}{\sqrt{n}} \sum_{i=\lfloor n \bar{E}_t \rfloor + 1}^{E_t^n} \left(\varphi(\eta_i) - \langle F, \varphi\rangle\right) - 1_{\{\lfloor n \bar{E}_t \rfloor > E_t^n\}} \frac{1}{\sqrt{n}} \sum_{i=E_t^n + 1}^{\lfloor n \bar{E}_t \rfloor} \left(\varphi(\eta_i) - \langle F, \varphi\rangle\right).$$
Now squaring both sides of the above and using the basic inequality $(x_1 + x_2)^2 \leq 2(x_1^2 + x_2^2)$, it is straightforward to show that one may write
$$\left(\langle\hat{\mathcal{G}}_t^n, \varphi\rangle - \langle\tilde{\mathcal{G}}_t^n, \varphi\rangle\right)^2 \leq 1_{\{\bar{E}_t^n \leq 2 \bar{E}_t\}} \frac{2}{n} \left(\sum_{i=(E_t^n \wedge \lfloor n \bar{E}_t \rfloor)+1}^{E_t^n \vee \lfloor n \bar{E}_t \rfloor} \left(\varphi(\eta_i) - \langle F, \varphi\rangle\right)\right)^2 \tag{128}$$
$$+\, 1_{\{\bar{E}_t^n > 2 \bar{E}_t\}} \frac{2}{n} \left(\sum_{i=(E_t^n \wedge \lfloor n \bar{E}_t \rfloor)+1}^{E_t^n \vee \lfloor n \bar{E}_t \rfloor} \left(\varphi(\eta_i) - \langle F, \varphi\rangle\right)\right)^2. \tag{129}$$
We now show that each of the terms (128) and (129) converges to 0 in probability as $n$ tends to $\infty$, which implies (127) and completes the proof.
We begin with (128). First note that by the independence of $\eta_i$, $i \geq 1$, from the arrival process $E^n$, and the i.i.d. nature of the sequence $\eta_i$, $i \geq 1$, we have that
$$\mathbb{E}\left[1_{\{\bar{E}_t^n \leq 2 \bar{E}_t\}} \frac{2}{n} \left(\sum_{i=(E_t^n \wedge \lfloor n \bar{E}_t \rfloor)+1}^{E_t^n \vee \lfloor n \bar{E}_t \rfloor} \left(\varphi(\eta_i) - \langle F, \varphi\rangle\right)\right)^2\right] = \frac{2}{n}\, \mathbb{E}\left[1_{\{\bar{E}_t^n \leq 2 \bar{E}_t\}} \left(\sum_{i=(E_t^n \wedge \lfloor n \bar{E}_t \rfloor)+1}^{E_t^n \vee \lfloor n \bar{E}_t \rfloor} \left(\varphi(\eta_i) - \langle F, \varphi\rangle\right)\right)^2\right]$$
$$\leq \frac{2}{n}\, \mathbb{E}\left[\left(\sum_{i=((E_t^n \wedge 2n\bar{E}_t) \wedge \lfloor n \bar{E}_t \rfloor)+1}^{(E_t^n \wedge 2n\bar{E}_t) \vee \lfloor n \bar{E}_t \rfloor} \left(\varphi(\eta_i) - \langle F, \varphi\rangle\right)\right)^2\right] \leq 8 \|\varphi\|_\infty^2\, \mathbb{E}\left[\left|\left(\bar{E}_t^n \wedge 2\bar{E}_t\right) - \bar{E}_t\right|\right].$$
However, since $\bar{E}_t^n \Rightarrow \bar{E}_t$ as $n \to \infty$, it follows that
$$\mathbb{E}\left[\left|\left(\bar{E}_t^n \wedge 2\bar{E}_t\right) - \bar{E}_t\right|\right] \to 0 \text{ as } n \to \infty.$$
This then implies that (128) converges to 0 in probability as $n$ tends to $\infty$, as desired.

We next proceed to (129). Note that since $\bar{E}_t^n \Rightarrow \bar{E}_t$ as $n \to \infty$, it follows that for each $\varepsilon > 0$,
$$\lim_{n \to \infty} P\left(1_{\{\bar{E}_t^n > 2 \bar{E}_t\}} \frac{2}{n} \left(\sum_{i=(E_t^n \wedge \lfloor n \bar{E}_t \rfloor)+1}^{E_t^n \vee \lfloor n \bar{E}_t \rfloor} \left(\varphi(\eta_i) - \langle F, \varphi\rangle\right)\right)^2 > \varepsilon\right) = 0.$$
This shows that (129) converges to 0 in probability as $n$ tends to $\infty$, which completes the proof.
References

[1] P. Billingsley. Convergence of Probability Measures. John Wiley and Sons, New York, 1999.

[2] T. Bojdecki and L. G. Gorostiza. Langevin equations for $\mathcal{S}'$-valued Gaussian processes and fluctuation limits of infinite particle systems. Probab. Th. Rel. Fields, 73:227–244, 1986.

[3] T. Bojdecki and L. G. Gorostiza. Gaussian and non-Gaussian distribution-valued Ornstein-Uhlenbeck processes. Can. J. Math., 43:1136–1149, 1991.

[4] T. Bojdecki, L. G. Gorostiza, and S. Ramaswamy. Convergence of $\mathcal{S}'$-valued processes and space-time random fields. Journal of Functional Analysis, 66:21–41, 1986.

[5] A. A. Borovkov. On limit laws for service processes in multi-channel systems. Siberian Journal of Mathematics, 8:983–1004, 1967.

[6] K. L. Chung. A Course in Probability Theory. Academic Press, 2001.

[7] L. Decreusefond and P. Moyal. Fluid limit of a heavily loaded EDF queue with impatient customers. Markov Processes and Related Fields, 14:131–158, 2008.

[8] L. Decreusefond and P. Moyal. A functional central limit theorem for the M/GI/∞ queue. Annals of Applied Probability, 18:2156–2178, 2008.

[9] D. G. Down, H. C. Gromoll, and A. L. Puha. Fluid limits for shortest remaining processing time queues. Mathematics of Operations Research, 34(4):880–911, 2009.

[10] S. Ethier and T. Kurtz. Markov Processes: Characterization and Convergence. John Wiley & Sons, New York, 1986.

[11] P. W. Glynn and W. Whitt. A new view of the heavy-traffic limit theorem for the infinite-server queue. Adv. Appl. Prob., 23(1):188–209, 1991.

[12] H. C. Gromoll. Diffusion approximation of a processor sharing queue in heavy traffic. Annals of Applied Probability, 14:555–611, 2004.

[13] H. C. Gromoll, A. L. Puha, and R. J. Williams. The fluid limit of a heavily loaded processor sharing queue. Annals of Applied Probability, 12:797–859, 2002.

[14] H. C. Gromoll, Ph. Robert, and B. Zwart. Fluid limits for processor sharing queues with impatience. Math. of Op. Res., 33:375–402, 2008.

[15] M. Hitsuda and I. Mitoma. Tightness problems and stochastic evolution equation arising from fluctuation phenomena for interacting diffusions. Journal of Multivariate Analysis, 19:311–328, 1986.

[16] R. A. Holley and D. W. Stroock. Generalized Ornstein-Uhlenbeck processes and infinite particle branching Brownian motions. Publ. RIMS, Kyoto University, 14:741–788, 1978.

[17] D. L. Iglehart. Limit diffusion approximations for the many server queue and the repairman problem. J. of Applied Probability, 2:429–441, 1965.

[18] G. Kallianpur. Stochastic Differential Equations in Duals of Nuclear Spaces with Some Applications. IMA Preprint Series No. 244. Institute for Mathematics and its Applications, University of Minnesota, 1986.

[19] G. Kallianpur and V. Perez-Abreu. Stochastic evolution equations driven by nuclear-space-valued martingales. Appl. Math. Optim., 17:237–272, 1988.

[20] G. Kallianpur and V. Perez-Abreu. Weak convergence of solutions of stochastic evolution equations on nuclear spaces. In G. Da Prato and L. Tubaro, editors, Stochastic Partial Differential Equations and Applications II, volume 1390 of Lecture Notes in Mathematics, pages 119–131. Springer, 1989.

[21] G. Kallianpur and J. Xiong. Stochastic Differential Equations in Infinite Dimensional Spaces. Lecture Notes - Monograph Series. Institute of Mathematical Statistics, 1995.

[22] I. Karatzas and S. E. Shreve. Brownian Motion and Stochastic Calculus. Springer, 1998.

[23] H. Kaspi and K. Ramanan. SPDE limits of many server queues. arXiv preprint arXiv:1010.0330, 2010.

[24] H. Kaspi and K. Ramanan. Law of large numbers limits for many-server queues. The Annals of Applied Probability, 21(1):33–114, 2011.

[25] E. V. Krichagina and A. A. Puhalskii. A heavy traffic analysis of a closed queueing system with a GI/∞ service center. Queueing Systems, 25:235–280, 1997.

[26] I. Mitoma. Tightness of probabilities on $C([0,1]; \mathcal{S}')$ and $D([0,1]; \mathcal{S}')$. Annals of Probability, 11(4):989–999, 1983.

[27] I. Mitoma. An ∞-dimensional inhomogeneous Langevin's equation. Journal of Functional Analysis, 61:342–359, 1985.

[28] I. Mitoma. Generalized Ornstein-Uhlenbeck process having a characteristic operator with polynomial coefficients. Probab. Th. Rel. Fields, 76:533–555, 1987.

[29] G. Pang, R. Talreja, and W. Whitt. Martingale proofs of many-server heavy-traffic limits for Markovian queues. Probab. Surv., 4:193–267, 2007.

[30] G. Pang and W. Whitt. Two-parameter Markov heavy-traffic limits for infinite-server queues. Queueing Systems, 65:325–364, 2010.

[31] A. A. Puhalskii and J. E. Reed. On many-server queues in heavy traffic. The Annals of Applied Probability, 20(1):129–195, 2010.

[32] W. Whitt. On the heavy-traffic limit theorem for GI/G/∞ queues. Adv. Appl. Prob., 14(1):171–190, 1982.

[33] W. Whitt. Stochastic-Process Limits. Springer, New York, 2002.

[34] W. Whitt. Proofs of the martingale FCLT. Probability Surveys, 4:268–302, 2007.