Let $\mathbb{R}$, $\mathbb{R}_+$ and $\mathbb{R}_-$ denote the set of reals, nonnegative reals and nonpositive reals, respectively. Also, denote by $\mathbb{N} = \{0, 1, 2, \dots\}$ the set of nonnegative integers. Let $C^\infty(\mathbb{R}_+)$ denote the set of infinitely differentiable functions from $\mathbb{R}_+$ to $\mathbb{R}$. Also, let $C_b^\infty(\mathbb{R})$ denote the set of infinitely differentiable, bounded functions from $\mathbb{R}$ to $\mathbb{R}$ whose derivatives of all orders are bounded and, similarly, let $C_b^\infty(\mathbb{R}_+)$ denote the set of infinitely differentiable, bounded functions from $\mathbb{R}_+$ to $\mathbb{R}$ whose derivatives of all orders are bounded.

Now define
\[
\mathcal{S} \;=\; \big\{ \varphi \in C^\infty(\mathbb{R}) : \|\varphi\|_{\alpha,\beta} < \infty \text{ for all } \alpha,\beta \in \mathbb{N} \big\}, \tag{1}
\]
where
\[
\|\varphi\|_{\alpha,\beta} \;=\; \sup_{x\in\mathbb{R}} \big| x^\alpha \varphi^{(\beta)}(x) \big| \tag{2}
\]
and $\varphi^{(\beta)}$ denotes the $\beta$th derivative of $\varphi$. The space $\mathcal{S}$ is commonly referred to as Schwartz space or the space of rapidly decreasing functions [21].
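As a quick numerical illustration of the seminorms in (2) — not part of the development above — the sketch below approximates $\|\varphi\|_{\alpha,\beta}$ on a grid for the Gaussian $\varphi(x) = e^{-x^2/2}$, which lies in $\mathcal{S}$. All names are illustrative choices, not notation from the paper.

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 200001)
phi  = lambda x: np.exp(-x**2 / 2.0)          # a Gaussian, an element of S
dphi = lambda x: -x * np.exp(-x**2 / 2.0)     # its first derivative

def seminorm(g, alpha):
    # ||phi||_{alpha,beta} = sup_x |x^alpha phi^(beta)(x)|, with g = phi^(beta),
    # approximated by a maximum over the grid
    return float(np.max(np.abs(x**alpha * g(x))))

norm_00 = seminorm(phi, 0)    # sup |phi| = 1
norm_20 = seminorm(phi, 2)    # sup |x^2 phi(x)| = 2/e
norm_11 = seminorm(dphi, 1)   # sup |x phi'(x)| = sup |x^2 e^{-x^2/2}| = 2/e
print(norm_00, norm_20, norm_11)
```

Each seminorm is finite, consistent with membership in $\mathcal{S}$; the exact suprema for the Gaussian can be computed in closed form and match the grid values.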
The topology of $\mathcal{S}$ is given by the family of seminorms $\{\|\cdot\|_{\alpha,\beta} : \alpha,\beta\in\mathbb{N}\}$ defined in (2). In particular, $\varphi_n \to \varphi$ in $\mathcal{S}$ if $\|\varphi_n - \varphi\|_{\alpha,\beta} \to 0$ for all $\alpha,\beta\in\mathbb{N}$. We note that by Lemma 1.3.2 and Theorem 1.3.2 of [21], the space $\mathcal{S}$ is a nuclear Fréchet space. Moreover, we also note that one may construct a sequence of seminorms $\{\|\cdot\|_p : p\in\mathbb{N}\}$ with the property that $\|\varphi\|_p \le \|\varphi\|_{p+1}$ for each $p\in\mathbb{N}$ and which also induces the same topology on $\mathcal{S}$ as the seminorms $\{\|\cdot\|_{\alpha,\beta} : \alpha,\beta\in\mathbb{N}\}$ given above. In particular, by Lemma 1.3.3 of [21], one has that for each $p\in\mathbb{N}$, there exist $k\in\mathbb{N}$ and $C>0$ such that
\[
\|\varphi\|_p \;\le\; C \max_{0\le\alpha,\beta\le k+1} \|\varphi\|_{\alpha,\beta} \quad \text{for all } \varphi\in\mathcal{S}, \tag{3}
\]
and, by Lemma 1.3.4 of [21], one has that for each $k\in\mathbb{N}$, there exist $p\in\mathbb{N}$ and $M>0$ such that
\[
\max_{0\le\alpha,\beta\le k+1} \|\varphi\|_{\alpha,\beta} \;\le\; M\|\varphi\|_p \quad \text{for all } \varphi\in\mathcal{S}. \tag{4}
\]
For a precise construction of $\|\cdot\|_p$ for each $p\in\mathbb{N}$, one may consult page 24 of [21].
The set of all linear maps from $\mathcal{S}$ to $\mathcal{S}$ is denoted by $L(\mathcal{S},\mathcal{S})$ and the strong topology on $L(\mathcal{S},\mathcal{S})$ is defined in the following manner. A subset $B$ of $\mathcal{S}$ is said to be bounded if for any neighborhood $U$ of $0\in\mathcal{S}$, there exists a constant $\lambda>0$ such that $\lambda^{-1}B \subseteq U$ (see Definition 1.1.7 of [21]). The strong topology on $L(\mathcal{S},\mathcal{S})$ is then given by the following definition (see Theorem 1.2.1 of [21]).

Definition 1.2. For each bounded subset $B$ of $\mathcal{S}$ and $p\in\mathbb{N}$, let
\[
q_{B,p}(T) \;=\; \sup_{\varphi\in B} \|T\varphi\|_p \quad \text{for all } T\in L(\mathcal{S},\mathcal{S}).
\]
Then, $\{q_{B,p}\}$ constitutes a family of seminorms on $L(\mathcal{S},\mathcal{S})$ and the topology given by these seminorms is referred to as the strong topology on $L(\mathcal{S},\mathcal{S})$.
1.1.3 The Space of Tempered Distributions

Many of the processes studied in this paper take values in the topological dual of $\mathcal{S}$, which we denote by $\mathcal{S}'$. Recall that the elements of $\mathcal{S}'$ are referred to as tempered distributions and we now review some relevant facts concerning tempered distributions as well as tempered distribution-valued processes.

For each $\mu\in\mathcal{S}'$, the distributional derivative of $\mu$ is denoted by $\mu'$ and is the unique element of $\mathcal{S}'$ such that
\[
\langle \mu', \varphi\rangle \;=\; -\langle \mu, \varphi'\rangle \quad \text{for all } \varphi\in\mathcal{S}.
\]
It is clear by the definition of $\mathcal{S}$ that $\varphi'\in\mathcal{S}$ for each $\varphi\in\mathcal{S}$, and so $\mu'$ is well defined. Similarly, for each $t\in\mathbb{R}$ we define the translation $\tau_t\mu$ as the unique element of $\mathcal{S}'$ such that
\[
\langle \tau_t\mu, \varphi\rangle \;=\; \langle \mu, \tau_{-t}\varphi\rangle \quad \text{for all } \varphi\in\mathcal{S},
\]
where $\tau_t\varphi\in\mathcal{S}$ is the function defined by $\tau_t\varphi(\cdot) = \varphi(\cdot - t)$.

All statements in this paper regarding convergence in $\mathcal{S}'$ are with respect to the strong topology on $\mathcal{S}'$, which we now define. One may consult Section 1.1 of [21] for further details.

Definition 1.3. For each bounded subset $B\subseteq\mathcal{S}$, let
\[
q_B(\mu) \;=\; \sup_{\varphi\in B} |\langle \mu,\varphi\rangle| \quad \text{for all } \mu\in\mathcal{S}'. \tag{5}
\]
Then, the strong topology on $\mathcal{S}'$ is the topology induced by the family of seminorms $\{q_B\}$.
Let $D([0,T],\mathcal{S}')$ denote the space of functions from $[0,T]$ to $\mathcal{S}'$ that are right-continuous with left limits, where $\mathcal{S}'$ is equipped with the strong topology. For $\mu\in D([0,T],\mathcal{S}')$, the integral $\int_0^t \mu_s\,ds$ is defined as the unique element of $\mathcal{S}'$ such that
\[
\Big\langle \int_0^t \mu_s\,ds,\ \varphi \Big\rangle \;=\; \int_0^t \langle \mu_s,\varphi\rangle\,ds \quad \text{for all } t\ge0,\ \varphi\in\mathcal{S}.
\]
As noted immediately following Definition 1.3 above, the strong topology on the space $\mathcal{S}'$ is generated by the family of seminorms $\{q_B\}$. Next, let $\Lambda$ denote the set of strictly increasing homeomorphisms $\lambda$ of $[0,T]$ onto itself such that
\[
\gamma(\lambda) \;=\; \sup_{0\le s<t\le T} \bigg| \ln \frac{\lambda_t - \lambda_s}{t-s} \bigg| \;<\; \infty.
\]
We then have the following definition (see [21, 26]).

Definition 1.4. For each seminorm $q_B$ defining the strong topology on $\mathcal{S}'$, let
\[
d_{q_B}(\mu,\nu) \;=\; \inf_{\lambda\in\Lambda} \Big\{ \sup_{0\le t\le T} q_B\big( \mu_t - \nu_{\lambda_t} \big) + \gamma(\lambda) \Big\} \quad \text{for all } \mu,\nu\in D([0,T],\mathcal{S}').
\]
The topology on $D([0,T],\mathcal{S}')$ is the topology induced by the family of pseudometrics $\{d_{q_B}\}$; it is equivalently induced by the pseudometrics
\[
\inf_{\lambda\in\Lambda} \Big\{ \sup_{0\le t\le T} q_B\big( \mu_t - \nu_{\lambda_t} \big) + \sup_{0\le t\le T} |\lambda_t - t| \Big\} \quad \text{for all } \mu,\nu\in D([0,T],\mathcal{S}').
\]
We also note that under this topology, $D([0,T],\mathcal{S}')$ is a completely regular topological space.
Theorem 1.5 (Mitoma's Theorem). Let $(X^n)_{n\ge1}$ be a sequence of random elements of $D([0,T],\mathcal{S}')$. Then, $X^n \Rightarrow X$ in $D([0,T],\mathcal{S}')$ if the following two statements hold:

1. For each $\varphi\in\mathcal{S}$, the sequence $(\langle X^n,\varphi\rangle)_{n\ge1}$ is tight in $D([0,T],\mathbb{R})$.

2. For each $\varphi_1,\dots,\varphi_m\in\mathcal{S}$ and $t_1,\dots,t_m\in[0,T]$,
\[
\big( \langle X^n_{t_1},\varphi_1\rangle, \dots, \langle X^n_{t_m},\varphi_m\rangle \big) \;\Rightarrow\; \big( \langle X_{t_1},\varphi_1\rangle, \dots, \langle X_{t_m},\varphi_m\rangle \big) \quad \text{in } \mathbb{R}^m.
\]
We now conclude the technical background section with some comments regarding martingales and, in particular, $\mathcal{S}'$-valued martingales. A process $M = (M_t)_{t\ge0}$ with values in $\mathcal{S}'$ is said to be an $\mathcal{S}'$-valued $\mathcal{F}_t$-martingale if for all $\varphi\in\mathcal{S}$, $(\langle M_t,\varphi\rangle)_{t\ge0}$ is an $\mathbb{R}$-valued $\mathcal{F}_t$-martingale. For two $\mathcal{S}'$-valued martingales $M$ and $N$ and test functions $\varphi,\psi\in\mathcal{S}$, we write $\langle \langle M,\varphi\rangle, \langle N,\psi\rangle\rangle_t$ for the mutual quadratic variation of the real-valued martingales $\langle M,\varphi\rangle$ and $\langle N,\psi\rangle$, and the tensor quadratic variation $(\langle\langle M\rangle\rangle_t)_{t\ge0}$ of an $\mathcal{S}'$-valued martingale $M$ is defined by $\langle\langle M\rangle\rangle_t(\varphi,\psi) = \langle \langle M,\varphi\rangle, \langle M,\psi\rangle\rangle_t$ for $\varphi,\psi\in\mathcal{S}$.

2.1 Age Process

We consider a $G/GI/\infty$ queue in which customers arrive to the system according to a general arrival process $E = (E_t)_{t\ge0} \in D([0,\infty),\mathbb{R})$, where we assume that $E_0 = 0$, $P$-a.s. For each $i\ge1$, we denote by
\[
\tau_i \;=\; \inf\{t\ge0 : E_t \ge i\}
\]
the time of the arrival of the $i$th customer to the system after time $t=0$. We assume that $E[\tau_i]<\infty$ for each $i=1,2,\dots$. We denote by $\nu_i$ the service time of the $i$th customer to arrive to the system after time $t=0$ and we assume that $\{\nu_i,\ i\ge1\}$ is an i.i.d. sequence of nonnegative, mean 1 random variables with cumulative distribution function (cdf) $F$, complementary cumulative distribution function (ccdf) $\bar F = 1-F$, and probability density function (pdf) $f$. We also assume that the hazard rate function $h = f/\bar F$ of $F$ satisfies the following assumption.

Assumption 2.1. The function $h \in C_b^\infty(\mathbb{R}_+)$.
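As a concrete illustration of Assumption 2.1 — the distribution here is an illustrative choice, not one fixed by the paper — the sketch below computes the hazard rate of a mean-1 Erlang-2 service distribution, for which $h$ is smooth, bounded, and has bounded derivatives on $\mathbb{R}_+$.

```python
import numpy as np

def trap(y, x):                    # simple trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Mean-1 Erlang-2 service distribution (rate 2): f(x) = 4x e^{-2x},
# Fbar(x) = (1 + 2x) e^{-2x}, hence h(x) = f(x)/Fbar(x) = 4x/(1 + 2x).
f    = lambda x: 4.0 * x * np.exp(-2.0 * x)
Fbar = lambda x: (1.0 + 2.0 * x) * np.exp(-2.0 * x)
h    = lambda x: f(x) / Fbar(x)

x = np.linspace(0.0, 50.0, 500001)
mean = trap(x * f(x), x)       # service-time mean, should be (approximately) 1
hmax = h(x).max()              # h is bounded: h(x) -> 2 as x -> infinity
print(mean, hmax)
```

The hazard rate increases from $h(0)=0$ toward the limit $2$, so it is bounded with bounded derivatives of all orders, consistent with Assumption 2.1.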
Now let $(A_t)_{t\ge0} \in D([0,\infty),D)$ be such that for each $t\ge0$ and $y\ge0$, the quantity $A_t(y)$ represents the number of customers in the system at time $t\ge0$ that have been in the system for less than or equal to $y$ units of time at time $t$. For $y<0$, we set $A_t(y)=0$. At time $t=0$, we assume that there are $A_0(y)$ customers present who have been in the system for less than or equal to $y\ge0$ units of time and that there are a total of $A_0(\infty)$ customers present. We assume that $E[A_0^2(\infty)]<\infty$. For each $i=1,\dots,A_0(\infty)$, we denote by
\[
\tilde a_i \;=\; \inf\{y\ge0 : A_0(y)\ge i\}
\]
the arrival time of the $i$th initial customer to the system, measured as the time elapsed since arrival at time $t=0$. We denote by $\tilde\sigma_i$ the remaining service time at time $t=0$ of the $i$th initial customer in the system. The distribution of $\tilde\sigma_i$, conditional on the arrival time $\tilde a_i$, is given by
\[
P(\tilde\sigma_i > x \mid \tilde a_i) \;=\; \frac{1-F(\tilde a_i + x)}{1-F(\tilde a_i)}, \quad x\ge0. \tag{6}
\]
We denote by $f_i$ the conditional pdf associated with this distribution and we set $h_i(\cdot) = h(\tilde a_i + \cdot)$.
We now derive a convenient representation for the system equations for $(A_t)_{t\ge0}$ and its tempered distribution-valued counterpart, $\mathcal{A}$, which we define shortly. We begin by noting that by first principles we have that for each $t\ge0$ and $y\ge0$,
\[
A_t(y) \;=\; \sum_{i=1}^{A_0(\infty)} 1_{\{t+\tilde a_i\le y\}}\, 1_{\{t<\tilde\sigma_i\}} \;+\; \sum_{i=1}^{E_t} 1_{\{t-\tau_i\le y\}}\, 1_{\{t-\tau_i<\nu_i\}}. \tag{7}
\]
Our first result provides an alternative way to write (7). In the following, we set $\sum_{i=1}^{0} = 0$.

Proposition 2.2. For each $t\ge0$ and $y\ge0$,
\[
A_t(y) \;=\; A_0(y) - \sum_{i=1}^{A_0(\infty)} 1_{\{\tilde\sigma_i \le t\wedge(y-\tilde a_i)\}} - \sum_{i=A_0(y-t)+1}^{A_0(y)} 1_{\{\tilde a_i+\tilde\sigma_i > y\}} + E_t - \sum_{i=1}^{E_t} 1_{\{\nu_i \le (t-\tau_i)\wedge y\}} - \sum_{i=1}^{E_{(t-y)^-}} 1_{\{\nu_i > y\}}, \tag{8}
\]
where $E_{s^-}$ denotes the left limit of $E$ at $s$, with $E_{s^-} = 0$ for $s\le0$.
Proof. By (7), we have that
\begin{align*}
A_t(y) &= \sum_{i=1}^{A_0(\infty)} 1_{\{t+\tilde a_i\le y\}} 1_{\{t<\tilde\sigma_i\}} + \sum_{i=1}^{E_t} 1_{\{t-\tau_i\le y\}} 1_{\{t-\tau_i<\nu_i\}} \\
&= A_0(y) + \sum_{i=1}^{A_0(y)} \big( 1_{\{t+\tilde a_i\le y\}} 1_{\{t<\tilde\sigma_i\}} - 1 \big) + E_t + \sum_{i=1}^{E_t} \big( 1_{\{t-\tau_i\le y\}} 1_{\{t-\tau_i<\nu_i\}} - 1 \big). \tag{9}
\end{align*}
However,
\begin{align}
1 - 1_{\{t+\tilde a_i\le y\}} 1_{\{t<\tilde\sigma_i\}} &= 1_{\{t+\tilde a_i\le y\}} 1_{\{\tilde\sigma_i\le t\}} + 1_{\{t+\tilde a_i>y\}} \tag{10}\\
&= \big[ 1_{\{t+\tilde a_i\le y\}} 1_{\{\tilde\sigma_i\le t\}} + 1_{\{t+\tilde a_i>y\}} 1_{\{\tilde a_i+\tilde\sigma_i\le y\}} \big] + 1_{\{t+\tilde a_i>y\}} 1_{\{\tilde a_i+\tilde\sigma_i>y\}}, \nonumber
\end{align}
and, similarly,
\begin{align}
1 - 1_{\{t-\tau_i\le y\}} 1_{\{t-\tau_i<\nu_i\}} &= 1_{\{t-\tau_i\le y\}} 1_{\{\nu_i\le t-\tau_i\}} + 1_{\{t-\tau_i>y\}} \tag{11}\\
&= \big[ 1_{\{t-\tau_i\le y\}} 1_{\{\nu_i\le t-\tau_i\}} + 1_{\{t-\tau_i>y\}} 1_{\{\nu_i\le y\}} \big] + 1_{\{t-\tau_i>y\}} 1_{\{\nu_i>y\}}.\nonumber
\end{align}
Noting that the bracketed term in (10) is equal to $1_{\{\tilde\sigma_i\le t\wedge(y-\tilde a_i)\}}$ and the bracketed term in (11) is equal to $1_{\{\nu_i\le(t-\tau_i)\wedge y\}}$, substituting (11) and (10) into (9) and summing over $A_0(y)$ and $E_t$ completes the proof.
We now provide an intuitive description of each of the terms appearing in (8). The first term represents the number of customers in the system at time $t=0$ that have been in the system for less than or equal to $y$ units of time, the second term represents the number of departures by time $t\ge0$ of those initial customers that had total service time less than or equal to $y$ units of time, and the third term represents the number of initial customers whose total service time is greater than $y$ units of time and who had been in the system for less than or equal to $y$ units of time at time $t=0$ but have been in the system for greater than $y$ units of time at time $t\ge0$. The fourth, fifth and sixth terms represent similar quantities but for those customers that arrived to the system after time $t=0$.
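As a sanity check on the decomposition asserted by Proposition 2.2 — a simulation sketch under illustrative arrival/service laws, with the six terms implemented as read from (8) above — the snippet below compares both sides of (7) and (8) on random sample paths. All variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n0 = 40
a_tld = np.sort(rng.uniform(0.0, 3.0, n0))   # ages of initial customers at time 0
s_tld = rng.exponential(1.0, n0)             # their remaining service times
tau = np.sort(rng.uniform(0.0, 6.0, 80))     # post-time-0 arrival times
nu = rng.exponential(1.0, tau.size)          # their service times

def A0(y):
    return int(np.sum(a_tld <= y))

def A_first_principles(t, y):   # equation (7)
    init = np.sum((t + a_tld <= y) & (t < s_tld))
    arr = np.sum((tau <= t) & (t - tau <= y) & (t - tau < nu))
    return int(init + arr)

def A_decomposed(t, y):         # equation (8)
    Et = np.sum(tau <= t)
    idx = np.arange(n0)         # 0-based index of the i-th smallest initial age
    t2 = np.sum(s_tld <= np.minimum(t, y - a_tld))
    t3 = np.sum((idx >= A0(y - t)) & (idx < A0(y)) & (a_tld + s_tld > y))
    t5 = np.sum((tau <= t) & (nu <= np.minimum(t - tau, y)))
    t6 = np.sum((tau < t - y) & (nu > y))
    return int(A0(y) - t2 - t3 + Et - t5 - t6)

ok = all(A_first_principles(t, y) == A_decomposed(t, y)
         for t in rng.uniform(0.0, 7.0, 25) for y in rng.uniform(0.0, 5.0, 25))
print(ok)
```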
Now let $D^0 = (D^0_t)_{t\ge0} \in D([0,\infty),D)$ be defined by setting
\[
D^0_t(y) \;=\; \sum_{i=1}^{A_0(\infty)} \Big( 1_{\{\tilde\sigma_i \le t\wedge(y-\tilde a_i)\}} - \int_0^{\tilde\sigma_i\wedge t\wedge(y-\tilde a_i)^+} h_i(u)\,du \Big), \quad t\ge0,\ y\ge0, \tag{12}
\]
and set $D^0_t(y)=0$ for $y<0$, and let $D = (D_t)_{t\ge0} \in D([0,\infty),D)$ be defined by setting
\[
D_t(y) \;=\; \sum_{i=1}^{E_t} \Big( 1_{\{\nu_i \le (t-\tau_i)\wedge y\}} - \int_0^{\nu_i\wedge(t-\tau_i)\wedge y} h(u)\,du \Big), \quad t\ge0,\ y\ge0, \tag{13}
\]
and set $D_t(y)=0$ for $y<0$. It then follows from (8) that for each $t\ge0$ and $y\ge0$, we may write
\begin{align}
A_t(y) \;=\;& A_0(y) + E_t - D^0_t(y) - D_t(y) - \sum_{i=1}^{A_0(\infty)} \int_0^{\tilde\sigma_i\wedge t\wedge(y-\tilde a_i)^+} h_i(u)\,du - \sum_{i=1}^{E_t} \int_0^{\nu_i\wedge(t-\tau_i)\wedge y} h(u)\,du \nonumber\\
&- \sum_{i=A_0(y-t)+1}^{A_0(y)} 1_{\{\tilde a_i+\tilde\sigma_i > y\}} - \sum_{i=1}^{E_{(t-y)^-}} 1_{\{\nu_i>y\}}. \tag{14}
\end{align}
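The integrals subtracted in (12) and (13) are the hazard-rate compensators of the departure indicators, so each summand has conditional mean zero. The sketch below checks this for a single initial customer with exp(1) service, in which case by (6) and memorylessness the remaining service is again exp(1) and $h_i = h \equiv 1$; the parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# One initial customer of age a with exp(1) service: its remaining service sigma
# is exp(1) and h_i = h = 1, so the corresponding summand of D^0_t(y) in (12) is
# 1{sigma <= m} - min(sigma, m) with m = min(t, max(y - a, 0)).
t, y, a = 2.0, 1.5, 0.4
m = min(t, max(y - a, 0.0))

sigma = rng.exponential(1.0, 400000)
terms = (sigma <= m).astype(float) - np.minimum(sigma, m)
print(terms.mean())   # compensated jump has mean zero
```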
The above expression for $A_t(y)$ will become useful in a moment. However, we next move on to expressing the age process as a tempered distribution-valued process using the Schwartz space $\mathcal{S}$ defined in (1). In particular, we associate with the process $A$ defined in (7) the $\mathcal{S}'$-valued process $\mathcal{A} = (\mathcal{A}_t)_{t\ge0}$ such that for each $t\ge0$ and $\varphi\in\mathcal{S}$ we set
\[
\langle \mathcal{A}_t, \varphi\rangle \;=\; \int_{\mathbb{R}} \varphi(y)\,dA_t(y). \tag{15}
\]
In a similar manner, we associate the $\mathcal{S}'$-valued processes $\mathcal{D}^0 = (\mathcal{D}^0_t)_{t\ge0}$ and $\mathcal{D} = (\mathcal{D}_t)_{t\ge0}$ with $D^0$ and $D$, respectively. That is, for each $t\ge0$ and $\varphi\in\mathcal{S}$ we set
\[
\langle \mathcal{D}^0_t, \varphi\rangle \;=\; \int_{\mathbb{R}} \varphi(y)\,dD^0_t(y) \quad \text{and} \quad \langle \mathcal{D}_t, \varphi\rangle \;=\; \int_{\mathbb{R}} \varphi(y)\,dD_t(y).
\]
We also note that each of these quantities is well defined, $P$-a.s.
For the remainder of the paper, we now replace the hazard rate function $h : \mathbb{R}_+ \to \mathbb{R}$ with a function $\check h : \mathbb{R} \to \mathbb{R}$ such that $\check h(x) = h(x)$ for $x\ge0$ and $\check h \in C_b^\infty(\mathbb{R})$. In a similar manner, we replace the cdf $F$ and ccdf $\bar F$ with corresponding functions $\check F$ and $\check{\bar F}$ such that $\check F, \check{\bar F} \in C_b^\infty(\mathbb{R})$. For ease of notation, we continue to refer to $\check h$, $\check F$ and $\check{\bar F}$ as $h$, $F$ and $\bar F$, respectively.
Our next step is to use the expression (14) in order to provide a convenient expression for the tempered distribution-valued process $\mathcal{A}$. We begin by noting that integrating test functions $\varphi\in\mathcal{S}$ term-by-term in (14), it follows that for each $t\ge0$ one has that
\begin{align}
\langle \mathcal{A}_t,\varphi\rangle \;=\;& \langle \mathcal{A}_0,\varphi\rangle - \langle \mathcal{D}^0_t + \mathcal{D}_t, \varphi\rangle - \sum_{i=1}^{A_0(\infty)} \int_{\tilde a_i}^{\tilde a_i+(\tilde\sigma_i\wedge t)} \varphi(y)h(y)\,dy - \sum_{i=1}^{E_t} \int_0^{\nu_i\wedge(t-\tau_i)} \varphi(y)h(y)\,dy \nonumber\\
&+ \bigg( -\int_{\mathbb{R}_+} \varphi(y)\, d\Big( \sum_{i=A_0(y-t)+1}^{A_0(y)} 1_{\{\tilde a_i+\tilde\sigma_i>y\}} + \sum_{i=1}^{E_{(t-y)^-}} 1_{\{\nu_i>y\}} \Big) \bigg). \tag{16}
\end{align}
The following two propositions now allow us to further simplify the expression in (16). We first have the following.

Proposition 2.3. For each $t\ge0$,
\[
\sum_{i=1}^{A_0(\infty)} \int_{\tilde a_i}^{\tilde a_i+(\tilde\sigma_i\wedge t)} \varphi(y)h(y)\,dy + \sum_{i=1}^{E_t} \int_0^{\nu_i\wedge(t-\tau_i)} \varphi(y)h(y)\,dy \;=\; \int_0^t \langle \mathcal{A}_s, \varphi h\rangle\,ds.
\]
Proof. For each $t\ge0$,
\begin{align*}
&\sum_{i=1}^{A_0(\infty)} \int_{\tilde a_i}^{\tilde a_i+(\tilde\sigma_i\wedge t)} \varphi(y)h(y)\,dy + \sum_{i=1}^{E_t} \int_0^{\nu_i\wedge(t-\tau_i)} \varphi(y)h(y)\,dy \\
&\quad= \sum_{i=1}^{A_0(\infty)} \int_0^t 1_{\{0\le s<\tilde\sigma_i\}}\, \varphi(s+\tilde a_i)h(s+\tilde a_i)\,ds + \sum_{i=1}^{E_t} \int_0^t 1_{\{0\le s-\tau_i<\nu_i\}}\, \varphi(s-\tau_i)h(s-\tau_i)\,ds \\
&\quad= \int_0^t \Big( \sum_{i=1}^{A_0(\infty)} 1_{\{0\le s<\tilde\sigma_i\}}\, \varphi(s+\tilde a_i)h(s+\tilde a_i) + \sum_{i=1}^{E_t} 1_{\{0\le s-\tau_i<\nu_i\}}\, \varphi(s-\tau_i)h(s-\tau_i) \Big)\,ds \\
&\quad= \int_0^t \langle \mathcal{A}_s, \varphi h\rangle\,ds.
\end{align*}
This completes the proof.
Next, we have the following.

Proposition 2.4. For each $t\ge0$,
\[
-\int_{\mathbb{R}_+} \varphi(y)\, d\Big( \sum_{i=A_0(y-t)+1}^{A_0(y)} 1_{\{\tilde a_i+\tilde\sigma_i>y\}} + \sum_{i=1}^{E_{(t-y)^-}} 1_{\{\nu_i>y\}} \Big) \;=\; E_t\varphi(0) + \int_0^t \langle \mathcal{A}_s, \varphi'\rangle\,ds.
\]
Proof. Let $t\ge0$ and write
\[
U_t(y) \;=\; \sum_{i=A_0(y-t)+1}^{A_0(y)} 1_{\{\tilde a_i+\tilde\sigma_i>y\}} + \sum_{i=1}^{E_{(t-y)^-}} 1_{\{\nu_i>y\}}, \quad y\ge0.
\]
Then, integrating by parts and noting that $U_t(0)=E_t$ and $U_t(y)\to0$ as $y\to\infty$, we have that
\begin{align*}
-E_t\varphi(0) - \int_{\mathbb{R}_+} \varphi(y)\,dU_t(y) &= \int_{\mathbb{R}_+} U_t(y)\,\varphi'(y)\,dy \\
&= \int_{\mathbb{R}_+} \Big( \sum_{i=1}^{A_0(\infty)} 1_{\{y-t<\tilde a_i\le y,\ \tilde a_i+\tilde\sigma_i>y\}} + \sum_{i=1}^{E_t} 1_{\{\tau_i+y<t,\ \nu_i>y\}} \Big)\varphi'(y)\,dy \\
&= \sum_{i=1}^{A_0(\infty)} \int_{\mathbb{R}_+} 1_{\{y-t<\tilde a_i\le y,\ \tilde a_i+\tilde\sigma_i>y\}}\,\varphi'(y)\,dy + \sum_{i=1}^{E_t} \int_{\mathbb{R}_+} 1_{\{\tau_i+y<t,\ \nu_i>y\}}\,\varphi'(y)\,dy \\
&= \sum_{i=1}^{A_0(\infty)} \int_0^t 1_{\{0\le s<\tilde\sigma_i\}}\,\varphi'(s+\tilde a_i)\,ds + \sum_{i=1}^{E_t} \int_0^t 1_{\{0\le s-\tau_i<\nu_i\}}\,\varphi'(s-\tau_i)\,ds \\
&= \int_0^t \Big( \sum_{i=1}^{A_0(\infty)} 1_{\{0\le s<\tilde\sigma_i\}}\,\varphi'(s+\tilde a_i) + \sum_{i=1}^{E_t} 1_{\{0\le s-\tau_i<\nu_i\}}\,\varphi'(s-\tau_i) \Big)\,ds \\
&= \int_0^t \Big( \int_{\mathbb{R}_+} \varphi'(u)\,dA_s(u) \Big)\,ds \\
&= \int_0^t \langle \mathcal{A}_s, \varphi'\rangle\,ds.
\end{align*}
This completes the proof.
Now note that combining Propositions 2.3 and 2.4 with the system equation (16), one finds that for each $t\ge0$ and $\varphi\in\mathcal{S}$,
\[
\langle \mathcal{A}_t,\varphi\rangle \;=\; \langle \mathcal{A}_0,\varphi\rangle + \langle \mathcal{E}_t,\varphi\rangle - \langle \mathcal{D}^0_t + \mathcal{D}_t, \varphi\rangle - \int_0^t \langle \mathcal{A}_s, \varphi h\rangle\,ds + \int_0^t \langle \mathcal{A}_s, \varphi'\rangle\,ds, \tag{17}
\]
where we define the $\mathcal{S}'$-valued process $\mathcal{E} = (\mathcal{E}_t)_{t\ge0}$ to be such that $\langle \mathcal{E}_t,\varphi\rangle = E_t\varphi(0)$ for each $\varphi\in\mathcal{S}$ and $t\ge0$. In general, we refer to (17) as the semimartingale decomposition of $\mathcal{A}$. This terminology will become clear in Section 4, where we show that the process $(\mathcal{D}^0_t + \mathcal{D}_t)_{t\ge0}$ is a martingale.
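As a pathwise sanity check of the decomposition (17) — a numerical sketch, not part of the proof — consider a single customer arriving at a deterministic time $\tau$ with deterministic service time $\nu$ and exp(1) hazard $h\equiv1$. All names and the test function are illustrative assumptions.

```python
import numpy as np

def trap(y, x):                    # simple trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

tau, nu = 1.0, 2.0                 # one arrival at tau with service time nu
phi  = lambda x: np.exp(-(x - 1.0)**2)
dphi = lambda x: -2.0 * (x - 1.0) * np.exp(-(x - 1.0)**2)

def lhs(t):                        # <A_t, phi>: the customer's age is t - tau while in system
    return phi(t - tau) if tau <= t < tau + nu else 0.0

def rhs(t):                        # right-hand side of (17) with h = 1
    s = np.linspace(0.0, t, 200001)
    in_sys = ((tau <= s) & (s < tau + nu)).astype(float)
    age = s - tau
    x_up = min(nu, max(t - tau, 0.0))
    x = np.linspace(0.0, x_up, 200001)
    D = (phi(nu) if nu <= t - tau else 0.0) - trap(phi(x), x)   # <D_t, phi>
    E_phi0 = phi(0.0) if t >= tau else 0.0                      # <E_t, phi>
    return E_phi0 - D - trap(in_sys * phi(age), s) + trap(in_sys * dphi(age), s)

errs = [abs(lhs(t) - rhs(t)) for t in (0.5, 2.5, 4.0)]
print(max(errs))
```

The three sampled times cover a customer not yet arrived, in service, and departed; in each case the two sides agree up to quadrature error.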
2.2 Residual Service Time Process

We next move on to analyzing the residual service time process $\mathcal{R}$. As in §2.1, we assume that we have a $G/GI/\infty$ queue in which customers arrive to the system according to a general arrival process $(E_t)_{t\ge0} \in D([0,\infty),\mathbb{R})$, where we assume that $E_0=0$, $P$-a.s. For each $i\ge1$, we denote by
\[
\tau_i \;=\; \inf\{t\ge0 : E_t\ge i\}
\]
the time of the $i$th customer arrival to the system after time $t=0$ and we let $\nu_i$ be the service time of the $i$th customer to arrive to the system after time $t=0$. We assume that $\{\nu_i,\ i\ge1\}$ is an i.i.d. sequence of nonnegative, mean 1 random variables with cumulative distribution function $F$ and probability density function $f$. We also assume in this subsection that the hazard rate function $h$ of $F$ satisfies Assumption 2.1 of §2.1.

Now let $R = (R_t)_{t\ge0} \in D([0,\infty),D)$ be such that for each $t\ge0$ and $y\in\mathbb{R}$, the quantity $R_t(y)$ denotes the number of customers in the system at time $t\ge0$ that have less than or equal to $y$ units of service remaining. For $y<0$, we interpret $R_t(y)$ as the number of customers who have departed from the system by time $t+y$. Thus, $(R_t(y))_{y\in\mathbb{R}}$ not only keeps track of the residual service times of those customers present in the system at time $t$, but it also records the departure times of all customers who have departed from the system by time $t$. We assume that at time $t=0$ there are $R_0(y)$ customers in the system that have less than or equal to $y$ units of service time remaining. By first principles, it then follows that for each $t\ge0$ and $y\in\mathbb{R}$ we may write
\[
R_t(y) \;=\; R_0(t+y) + \sum_{i=1}^{E_t} 1_{\{\nu_i-(t-\tau_i)\le y\}}. \tag{18}
\]
The following proposition now presents an alternative expression for the right-hand side of (18).

Proposition 2.5. For each $t\ge0$ and $y\in\mathbb{R}$,
\[
R_t(y) \;=\; R_0(y) + \big( R_0(t+y) - R_0(y) \big) + \sum_{i=1}^{E_t} 1_{\{\nu_i\le y\}} + \sum_{i=1}^{E_t} 1_{\{\nu_i>y,\ (\tau_i+\nu_i)-t\le y\}}. \tag{19}
\]

Proof. By (18),
\[
R_t(y) \;=\; R_0(t+y) + \sum_{i=1}^{E_t} 1_{\{\nu_i-(t-\tau_i)\le y\}} \;=\; R_0(y) + \big( R_0(t+y) - R_0(y) \big) + \sum_{i=1}^{E_t} 1_{\{\nu_i-(t-\tau_i)\le y\}}. \tag{20}
\]
However, note that for each $i\le E_t$,
\[
1_{\{\nu_i-(t-\tau_i)\le y\}} \;=\; 1_{\{\nu_i\le y\}} + 1_{\{\nu_i>y,\ \nu_i-(t-\tau_i)\le y\}}. \tag{21}
\]
Substituting (21) into (20) and summing over $E_t$ completes the proof.
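The indicator identity (21) can be verified directly by simulation; the sketch below checks that both sides of (19) agree on random sample paths, including negative values of $y$. The arrival and service laws are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
tau = np.sort(rng.uniform(0.0, 5.0, 200))   # arrival times
nu = rng.exponential(1.0, tau.size)         # service times

def sides(t, y):
    arrived = tau <= t
    lhs = int(np.sum(arrived & (nu - (t - tau) <= y)))               # summand of (18)
    rhs = int(np.sum(arrived & (nu <= y))                            # third term of (19)
              + np.sum(arrived & (nu > y) & ((tau + nu) - t <= y)))  # fourth term
    return lhs, rhs

ok = all(l == r for l, r in (sides(t, y)
         for t in np.linspace(0.0, 6.0, 13) for y in np.linspace(-2.0, 3.0, 11)))
print(ok)
```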
We now give an intuitive description of each of the terms appearing in (19). The first term represents the number of customers in the system at time $t=0$ with less than or equal to $y$ units of service time remaining. The second term represents the number of customers in the system at time $t=0$ with greater than $y$ units of service time remaining that at time $t\ge0$ have less than or equal to $y$ units of service time remaining. The third and fourth terms in (19) have analogous descriptions but for those customers that arrive to the system after time $t=0$.
Now let $G = (G_t)_{t\ge0} \in D([0,\infty),D)$ be defined by setting
\[
G_t(y) \;=\; \sum_{i=1}^{E_t} \big( 1_{\{\nu_i\le y\}} - F(y) \big), \quad t\ge0,\ y\in\mathbb{R}.
\]
By Proposition 2.5, it then follows that for each $t\ge0$ and $y\in\mathbb{R}$, $R_t(y)$ may be written as
\[
R_t(y) \;=\; R_0(y) + \big( R_0(t+y)-R_0(y) \big) + G_t(y) + E_tF(y) + \sum_{i=1}^{E_t} 1_{\{\nu_i>y,\ (\tau_i+\nu_i)-t\le y\}}. \tag{22}
\]
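The centering by $E_tF(y)$ in (22) makes $G_t(y)$ a compensated sum: when the service times are independent of the arrival process, $E[G_t(y)] = 0$ by a Wald-type argument. The Monte Carlo sketch below illustrates this under illustrative assumptions (Poisson arrivals, exp(1) services).

```python
import numpy as np

rng = np.random.default_rng(3)

t, y = 4.0, 0.8
F_y = 1.0 - np.exp(-y)          # exp(1) service-time cdf at y

reps = 20000
vals = np.empty(reps)
for k in range(reps):
    Et = rng.poisson(2.0 * t)                # Poisson arrivals at rate 2
    nu = rng.exponential(1.0, Et)
    vals[k] = np.sum(nu <= y) - Et * F_y     # G_t(y)
print(vals.mean())                           # close to zero
```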
The above representation for $R_t(y)$ will be useful in a moment. However, we first proceed to define tempered distribution-valued versions of the processes defined above. In particular, we let $\mathcal{R} = (\mathcal{R}_t)_{t\ge0}$ be the $\mathcal{S}'$-valued process associated with $R$ by setting $\langle \mathcal{R}_t,\varphi\rangle = \int_{\mathbb{R}} \varphi(y)\,dR_t(y)$ for each $t\ge0$ and $\varphi\in\mathcal{S}$ and, in a similar manner, we associate with $G$ the $\mathcal{S}'$-valued process $\mathcal{G} = (\mathcal{G}_t)_{t\ge0}$. We also define the tempered distributions $\mathcal{F}$ and $\mathcal{F}_e$ by setting
\[
\langle \mathcal{F},\varphi\rangle \;=\; \int_0^\infty \varphi(x)\,dF(x), \quad \varphi\in\mathcal{S},
\]
and
\[
\langle \mathcal{F}_e,\varphi\rangle \;=\; \int_0^\infty \varphi(x)\,(1-F(x))\,dx, \quad \varphi\in\mathcal{S}.
\]
Now note that integrating test functions $\varphi\in\mathcal{S}$ against each of the terms in (22), it follows that for each $\varphi\in\mathcal{S}$ and $t\ge0$,
\begin{align}
\langle \mathcal{R}_t,\varphi\rangle \;=\;& \langle \mathcal{R}_0,\varphi\rangle + \int_{\mathbb{R}} \varphi(y)\,d\big( R_0(t+y)-R_0(y) \big) \nonumber\\
&+ \langle \mathcal{G}_t,\varphi\rangle + E_t\langle \mathcal{F},\varphi\rangle + \int_{\mathbb{R}} \varphi(y)\,d\Big( \sum_{i=1}^{E_t} 1_{\{\nu_i>y,\ (\tau_i+\nu_i)-t\le y\}} \Big). \tag{23}
\end{align}
The following proposition now allows us to simplify the right-hand side of (23).

Proposition 2.6. For each $t\ge0$,
\[
\int_{\mathbb{R}} \varphi(y)\,d\big( R_0(t+y)-R_0(y) \big) + \int_{\mathbb{R}} \varphi(y)\,d\Big( \sum_{i=1}^{E_t} 1_{\{\nu_i>y,\ (\tau_i+\nu_i)-t\le y\}} \Big) \;=\; -\int_0^t \langle \mathcal{R}_s, \varphi'\rangle\,ds. \tag{24}
\]
Proof. The proof parallels the proof of Proposition 2.4. Let $t\ge0$, let $R_0(\infty)$ denote the total number of customers that have arrived to the system by time $t=0$, and for each $i=1,\dots,R_0(\infty)$ let $\tilde\rho_i$ denote the (possibly negative) remaining service time at time $t=0$ of the $i$th such customer. Then, integrating by parts, we have that
\begin{align*}
&\int_{\mathbb{R}} \varphi(y)\,d\big( R_0(t+y)-R_0(y) \big) + \int_{\mathbb{R}} \varphi(y)\,d\Big( \sum_{i=1}^{E_t} 1_{\{\nu_i>y,\ (\tau_i+\nu_i)-t\le y\}} \Big) \\
&\quad= -\int_{\mathbb{R}} \Big( \sum_{i=1}^{R_0(\infty)} 1_{\{y<\tilde\rho_i\le t+y\}} \Big)\varphi'(y)\,dy - \int_{\mathbb{R}} \Big( \sum_{i=1}^{E_t} 1_{\{\nu_i>y,\ (\tau_i+\nu_i)-t\le y\}} \Big)\varphi'(y)\,dy \\
&\quad= -\sum_{i=1}^{R_0(\infty)} \int_{\mathbb{R}} 1_{\{\tilde\rho_i-t\le y<\tilde\rho_i\}}\,\varphi'(y)\,dy - \sum_{i=1}^{E_t} \int_{\mathbb{R}} 1_{\{(\tau_i+\nu_i)-t\le y<\nu_i\}}\,\varphi'(y)\,dy \\
&\quad= -\sum_{i=1}^{R_0(\infty)} \int_0^t \varphi'(\tilde\rho_i-s)\,ds - \sum_{i=1}^{E_t} \int_0^t 1_{\{\tau_i\le s\}}\,\varphi'(\tau_i+\nu_i-s)\,ds \\
&\quad= -\int_0^t \Big( \sum_{i=1}^{R_0(\infty)} \varphi'(\tilde\rho_i-s) + \sum_{i=1}^{E_t} 1_{\{\tau_i\le s\}}\,\varphi'(\tau_i+\nu_i-s) \Big)\,ds \\
&\quad= -\int_0^t \Big( \int_{\mathbb{R}} \varphi'(u)\,dR_s(u) \Big)\,ds \\
&\quad= -\int_0^t \langle \mathcal{R}_s, \varphi'\rangle\,ds.
\end{align*}
This completes the proof.
Substituting (24) into (23), one now obtains that for each $t\ge0$ and $\varphi\in\mathcal{S}$,
\[
\langle \mathcal{R}_t,\varphi\rangle \;=\; \langle \mathcal{R}_0,\varphi\rangle + \langle \mathcal{G}_t,\varphi\rangle + E_t\langle \mathcal{F},\varphi\rangle - \int_0^t \langle \mathcal{R}_s,\varphi'\rangle\,ds. \tag{25}
\]
We refer to (25) as the semimartingale decomposition of $\mathcal{R}$. This is due to the fact that in §4 it will be shown that the process $\mathcal{G}$ in (25) is a martingale. We also point out the similarity of (25) with (4) of [8].
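The transport term in (25) carries a minus sign because residual service times decrease at unit rate. For a single initial customer with residual $\rho$ and no arrivals, (25) reduces to $\varphi(\rho-t) = \varphi(\rho) - \int_0^t \varphi'(\rho-s)\,ds$; the sketch below checks this numerically (all names and values are illustrative).

```python
import numpy as np

def trap(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

rho, t = 2.0, 1.5
phi  = lambda x: np.exp(-x**2)
dphi = lambda x: -2.0 * x * np.exp(-x**2)

s = np.linspace(0.0, t, 100001)
rhs = phi(rho) - trap(dphi(rho - s), s)   # right-hand side of the reduced (25)
err = abs(phi(rho - t) - rhs)
print(err)
```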
3 Regulator Map Result

Let $B : \mathcal{S} \to \mathcal{S}$ be a continuous linear operator and for each $Y \in D([0,T],\mathcal{S}')$ consider the equation
\[
\langle X_t,\varphi\rangle \;=\; \langle Y_t,\varphi\rangle + \int_0^t \langle X_s, B\varphi\rangle\,ds, \quad t\in[0,T],\ \varphi\in\mathcal{S}. \tag{26}
\]
In this section, we first show that under some mild restrictions on $B$, (26) defines a continuous function $\Phi_B : D([0,T],\mathcal{S}') \to D([0,T],\mathcal{S}')$ mapping each $Y$ to the unique solution $X$ of (26). The restriction that we place on $B$ is that it be the infinitesimal generator of a strongly continuous $(C_0,1)$-semigroup $(S_t)_{t\ge0}$ of continuous linear operators on $\mathcal{S}$, in the sense of the following definition.

Definition 3.1. A family $(S_t)_{t\ge0}$ of continuous linear operators on $\mathcal{S}$ is said to be a strongly continuous $(C_0,1)$-semigroup if the following three conditions hold:

1. $S_0 = I$ and $S_{s+t} = S_sS_t$ for all $s,t\ge0$.

2. For each bounded subset $K\subseteq\mathcal{S}$ and each $\alpha,\beta\in\mathbb{N}$, $\sup_{\varphi\in K} \|S_s\varphi - S_t\varphi\|_{\alpha,\beta} \to 0$ as $s\to t$.

3. For each $q\in\mathbb{N}$, there exist $M_q>0$, $\sigma_q\ge0$ and $p\in\mathbb{N}$ such that $\|S_t\varphi\|_q \le M_qe^{\sigma_qt}\|\varphi\|_p$ for all $t\ge0$ and $\varphi\in\mathcal{S}$.

In this case, the solution of (26) admits the explicit representation
\[
\langle X_t,\varphi\rangle \;=\; \langle Y_t,\varphi\rangle + \int_0^t \langle Y_s, S_{t-s}B\varphi\rangle\,ds, \quad t\in[0,T],\ \varphi\in\mathcal{S}. \tag{27}
\]
Furthermore, (27) defines a continuous function $\Psi_B : D([0,T],\mathcal{S}') \to D([0,T],\mathcal{S}')$ mapping $Y$ to $X$.
Proof. That $\Psi_B$ is a well-defined function from $D([0,T],\mathcal{S}')$ to $D([0,T],\mathcal{S}')$ mapping each $Y$ to the unique solution of (26) is straightforward; we therefore focus on continuity. It suffices to show that the map from $D([0,T],\mathcal{S}')$ to $D([0,T],\mathcal{S}')$ defined by $Y \mapsto \int_0^{\cdot} S^*_{\cdot-s}B^*Y_s\,ds$, where $B^*$ and $S^*_t$ denote the adjoint operators of $B$ and $S_t$, respectively, is continuous. Let $(Y^n)_{n\ge1}$ be a sequence converging to $Y$ in $D([0,T],\mathcal{S}')$. Then, by Definition 1.4 and the comment below it (see also Theorem 2.4.1 of [21]), there exists a sequence $(\lambda^n)_{n\ge1}$ of strictly increasing homeomorphisms of the interval $[0,T]$ such that for each bounded set $K\subseteq\mathcal{S}$,
\[
\sup_{0\le t\le T}\ \sup_{\varphi\in K} \big| \big\langle Y^n_{\lambda^n_t} - Y_t,\ \varphi \big\rangle \big| \to 0 \quad \text{and} \quad \sup_{0\le t\le T} |\lambda^n_t - t| \to 0 \quad \text{as } n\to\infty. \tag{28}
\]
Moreover, it suffices to consider homeomorphisms $(\lambda^n)_{n\ge1}$ that are absolutely continuous with respect to Lebesgue measure on $[0,T]$, having corresponding derivatives $(\dot\lambda^n)_{n\ge1}$ satisfying $\|\dot\lambda^n - 1\|_T \to 0$ as $n\to\infty$, where $\|\cdot\|_T$ denotes the supremum norm on $[0,T]$ (see pages 112–114 of [1]). It then follows that for each $t\in[0,T]$ and $\varphi\in\mathcal{S}$ we
may write
\begin{align}
&\Big| \Big\langle \int_0^t S^*_{t-s}B^*Y^n_s\,ds - \int_0^{\lambda^n_t} S^*_{\lambda^n_t-s}B^*Y_s\,ds,\ \varphi \Big\rangle \Big| \nonumber\\
&\quad= \Big| \Big\langle \int_0^t S^*_{t-s}B^*Y^n_s\,ds - \int_0^t \dot\lambda^n_s\, S^*_{\lambda^n_t-\lambda^n_s}B^*Y_{\lambda^n_s}\,ds,\ \varphi \Big\rangle \Big| \nonumber\\
&\quad\le \|\dot\lambda^n - 1\|_T \int_0^t \big| \big\langle Y_{\lambda^n_s},\ S_{\lambda^n_t-\lambda^n_s}B\varphi \big\rangle \big|\,ds + \Big| \int_0^t \big\langle Y_{\lambda^n_s},\ (S_{t-s} - S_{\lambda^n_t-\lambda^n_s})B\varphi \big\rangle\,ds \Big| \nonumber\\
&\qquad+ \Big| \int_0^t \big\langle Y^n_s - Y_{\lambda^n_s},\ S_{t-s}B\varphi \big\rangle\,ds \Big|. \tag{29}
\end{align}
In order to complete the proof, it now suffices to show that for each bounded subset $K\subseteq\mathcal{S}$, the following three limits hold:
\begin{align}
&\sup_{t\in[0,T]}\ \sup_{\varphi\in K}\ \|\dot\lambda^n - 1\|_T \int_0^t \big| \big\langle Y_{\lambda^n_s},\ S_{\lambda^n_t-\lambda^n_s}B\varphi \big\rangle \big|\,ds \to 0 \quad \text{as } n\to\infty, \tag{30}\\
&\sup_{t\in[0,T]}\ \sup_{\varphi\in K}\ \Big| \int_0^t \big\langle Y_{\lambda^n_s},\ (S_{t-s}-S_{\lambda^n_t-\lambda^n_s})B\varphi \big\rangle\,ds \Big| \to 0 \quad \text{as } n\to\infty, \tag{31}\\
&\sup_{t\in[0,T]}\ \sup_{\varphi\in K}\ \Big| \int_0^t \big\langle Y^n_s - Y_{\lambda^n_s},\ S_{t-s}B\varphi \big\rangle\,ds \Big| \to 0 \quad \text{as } n\to\infty. \tag{32}
\end{align}
We begin with (30). First note that since $Y\in D([0,T],\mathcal{S}')$, for each bounded subset $K\subseteq\mathcal{S}$,
\[
\sup_{s\in[0,T]}\ \sup_{\varphi\in K} |\langle Y_s,\varphi\rangle| \;<\; \infty. \tag{33}
\]
Now let $K\subseteq\mathcal{S}$ be an arbitrary bounded set. We show that the set $K' = \{S_uB\varphi : u\in[0,T],\ \varphi\in K\}$ is bounded in $\mathcal{S}$ as well. Indeed, note that by Part 3 of Definition 3.1, for each $q\ge0$ there exist $M_q$, $\sigma_q$ and $p\ge q$ such that
\[
\sup_{\psi\in K'} \|\psi\|_q \;=\; \sup_{u\in[0,T],\,\varphi\in K} \|S_uB\varphi\|_q \;\le\; M_qe^{\sigma_qT} \sup_{\varphi\in K} \|B\varphi\|_p \;<\; \infty, \tag{34}
\]
where the final inequality follows from the fact that continuous linear operators from $\mathcal{S}$ to $\mathcal{S}$ map bounded sets of $\mathcal{S}$ to bounded sets of $\mathcal{S}$ (see Theorem 1.2.1 of [21]). Thus, $K' = \{S_uB\varphi : u\in[0,T],\ \varphi\in K\}$ is bounded and so by (33),
\[
\sup_{s\in[0,T]}\ \sup_{u\in[0,T],\,\varphi\in K} |\langle Y_s, S_uB\varphi\rangle| \;<\; \infty.
\]
It therefore follows, since $\|\dot\lambda^n-1\|_T\to0$ as $n\to\infty$, that
\[
\sup_{t\in[0,T]}\ \sup_{\varphi\in K}\ \|\dot\lambda^n-1\|_T \int_0^t \big| \big\langle Y_{\lambda^n_s},\ S_{\lambda^n_t-\lambda^n_s}B\varphi \big\rangle \big|\,ds \;\le\; \|\dot\lambda^n-1\|_T\, T \sup_{s\in[0,T]}\ \sup_{u\in[0,T],\,\varphi\in K} |\langle Y_s, S_uB\varphi\rangle| \;\to\; 0, \tag{35}
\]
as $n\to\infty$, which implies (30).
We next prove the limit (31). First note that by Lemma 2.2 of [18], there exist $\Theta\ge0$ and $q\ge1$ such that
\begin{align}
\sup_{t\in[0,T]}\ \sup_{\varphi\in K}\ \Big| \int_0^t \big\langle Y_{\lambda^n_s},\ (S_{t-s}-S_{\lambda^n_t-\lambda^n_s})B\varphi \big\rangle\,ds \Big| &\le T \sup_{s\in[0,T]}\ \sup_{0\le w\le v\le T}\ \sup_{\varphi\in K}\ \big| \big\langle Y_s,\ (S_{v-w}-S_{\lambda^n_v-\lambda^n_w})B\varphi \big\rangle \big| \tag{36}\\
&\le \Theta T \sup_{0\le w\le v\le T}\ \sup_{\varphi\in K}\ \big\| (S_{v-w}-S_{\lambda^n_v-\lambda^n_w})B\varphi \big\|_q. \nonumber
\end{align}
Next, note that for $w\le v$ we may write
\begin{align*}
S_{v-w} - S_{\lambda^n_v-\lambda^n_w} \;=\;& 1_{\{(v-w)-(\lambda^n_v-\lambda^n_w)\ge0\}} \big( S_{(v-w)-(\lambda^n_v-\lambda^n_w)} - I \big) S_{\lambda^n_v-\lambda^n_w} \\
&+ 1_{\{(\lambda^n_v-\lambda^n_w)-(v-w)\ge0\}} \big( I - S_{(\lambda^n_v-\lambda^n_w)-(v-w)} \big) S_{v-w}.
\end{align*}
Thus, recalling the definition of the set $K' = \{S_uB\varphi : u\in[0,T],\ \varphi\in K\}$, it follows that
\begin{align}
&\sup_{0\le w\le v\le T}\ \sup_{\varphi\in K}\ \big\| (S_{v-w}-S_{\lambda^n_v-\lambda^n_w})B\varphi \big\|_q \tag{37}\\
&\quad\le \sup_{0\le w\le v\le T}\ \sup_{\varphi\in K}\ \big\| 1_{\{(v-w)-(\lambda^n_v-\lambda^n_w)\ge0\}} \big( S_{(v-w)-(\lambda^n_v-\lambda^n_w)} - I \big) S_{\lambda^n_v-\lambda^n_w}B\varphi \big\|_q \nonumber\\
&\qquad+ \sup_{0\le w\le v\le T}\ \sup_{\varphi\in K}\ \big\| 1_{\{(\lambda^n_v-\lambda^n_w)-(v-w)\ge0\}} \big( I - S_{(\lambda^n_v-\lambda^n_w)-(v-w)} \big) S_{v-w}B\varphi \big\|_q \nonumber\\
&\quad\le \sup_{0\le w\le v\le T}\ \sup_{\psi\in K'}\ \big\| 1_{\{(v-w)-(\lambda^n_v-\lambda^n_w)\ge0\}} \big( S_{(v-w)-(\lambda^n_v-\lambda^n_w)} - I \big)\psi \big\|_q \nonumber\\
&\qquad+ \sup_{0\le w\le v\le T}\ \sup_{\psi\in K'}\ \big\| 1_{\{(\lambda^n_v-\lambda^n_w)-(v-w)\ge0\}} \big( I - S_{(\lambda^n_v-\lambda^n_w)-(v-w)} \big)\psi \big\|_q. \nonumber
\end{align}
However, note that since $\|\dot\lambda^n-1\|_T\to0$ as $n\to\infty$, it follows that
\[
\sup_{0\le w\le v\le T} \big| (v-w)-(\lambda^n_v-\lambda^n_w) \big| \;\to\; 0 \quad \text{as } n\to\infty.
\]
Thus, since by (34) the set $K'$ is bounded, it follows from Part 2 of Definition 3.1 that
\begin{align*}
&\sup_{0\le w\le v\le T}\ \sup_{\psi\in K'}\ \big\| 1_{\{(v-w)-(\lambda^n_v-\lambda^n_w)\ge0\}} \big( S_{(v-w)-(\lambda^n_v-\lambda^n_w)} - I \big)\psi \big\|_q \\
&\quad+ \sup_{0\le w\le v\le T}\ \sup_{\psi\in K'}\ \big\| 1_{\{(\lambda^n_v-\lambda^n_w)-(v-w)\ge0\}} \big( I - S_{(\lambda^n_v-\lambda^n_w)-(v-w)} \big)\psi \big\|_q \;\to\; 0 \quad \text{as } n\to\infty,
\end{align*}
which, by (36) and (37), implies (31).
Finally, (32) follows from the fact that
\[
\sup_{t\in[0,T]}\ \sup_{\varphi\in K}\ \Big| \int_0^t \big\langle Y^n_s - Y_{\lambda^n_s},\ S_{t-s}B\varphi \big\rangle\,ds \Big| \;\le\; T \sup_{s\in[0,T]}\ \sup_{u\in[0,T],\,\varphi\in K}\ \big| \big\langle Y^n_s - Y_{\lambda^n_s},\ S_uB\varphi \big\rangle \big| \;\to\; 0, \tag{38}
\]
as $n\to\infty$, where the final convergence follows from (28) and (34). This completes the proof.
3.1 Age Process

Now let $B_A$ be the linear operator defined on $\mathcal{S}$ such that
\[
B_A\varphi \;=\; \varphi' - h\varphi, \quad \varphi\in\mathcal{S}. \tag{39}
\]
In this subsection, we show that $B_A$ generates a strongly continuous $(C_0,1)$-semigroup, so that, by the results above, the associated regulator map on $D([0,T],\mathcal{S}')$ is a continuous map. We first have the following preliminary lemma, in which the derivatives are taken with respect to $x$.

Lemma 3.4. The following two statements hold:

1. For each $n\in\mathbb{N}$,
\[
\sup_{t\ge0}\ \sup_{x\ge0}\ \bigg| \bigg( \frac{\bar F(x+t)}{\bar F(x)} \bigg)^{(n)} \bigg| \;<\; \infty. \tag{41}
\]

2. For each $T>0$, there exists a sequence $(M_n)_{n\ge0}$ such that for each $s,t\in[0,T]$,
\[
\sup_{x\ge0}\ \bigg| \bigg( \frac{\bar F(x+t)}{\bar F(x)} - \frac{\bar F(x+s)}{\bar F(x)} \bigg)^{(n)} \bigg| \;\le\; M_n|t-s|. \tag{42}
\]

Proof. See Appendix.
Next, we have the following.

Lemma 3.5. For each $\varphi\in\mathcal{S}$, $t\in\mathbb{R}$ and $\alpha,\beta\in\mathbb{N}$, we have
\[
\|\tau_t\varphi\|_{\alpha,\beta} \;\le\; (1\vee|t|)^\alpha\, 2^\alpha \max_{0\le i\le\alpha} \|\varphi\|_{i,\beta}. \tag{43}
\]

Proof. For each $\varphi\in\mathcal{S}$, $t\in\mathbb{R}$ and $\alpha,\beta\in\mathbb{N}$, we have
\begin{align*}
\|\tau_t\varphi\|_{\alpha,\beta} \;=\; \|\varphi(\cdot-t)\|_{\alpha,\beta} &= \sup_{x\in\mathbb{R}} \big| x^\alpha \varphi^{(\beta)}(x-t) \big| \\
&= \sup_{x\in\mathbb{R}} \big| [t+(x-t)]^\alpha \varphi^{(\beta)}(x-t) \big| \\
&= \sup_{x\in\mathbb{R}} \bigg| \sum_{i=0}^{\alpha} \binom{\alpha}{i} t^{\alpha-i}(x-t)^i\, \varphi^{(\beta)}(x-t) \bigg| \\
&\le \sum_{i=0}^{\alpha} \binom{\alpha}{i} |t|^{\alpha-i}\, \|\varphi\|_{i,\beta} \\
&\le (1\vee|t|)^\alpha\, 2^\alpha \max_{0\le i\le\alpha} \|\varphi\|_{i,\beta}.
\end{align*}
This completes the proof.
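The bound (43) can be checked numerically on a grid; the sketch below evaluates both sides for the derivative of a Gaussian with illustrative choices $\alpha=2$, $\beta=1$, $t=1.7$ (all names are assumptions for the example).

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 600001)
dphi = lambda u: -u * np.exp(-u**2 / 2.0)     # phi'(u) for the Gaussian phi

def seminorm(g, alpha):
    return float(np.max(np.abs(x**alpha * g(x))))

alpha, t = 2, 1.7
lhs = float(np.max(np.abs(x**alpha * dphi(x - t))))        # ||tau_t phi||_{2,1}
rhs = max(1.0, abs(t))**alpha * 2**alpha \
      * max(seminorm(dphi, i) for i in range(alpha + 1))   # right-hand side of (43)
print(lhs, rhs)
```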
The following is now our main result of this subsection.

Proposition 3.6. The linear operator $B_A$ defined by (39) generates a strongly continuous $(C_0,1)$-semigroup $(S^A_t)_{t\ge0}$ given by
\[
S^A_t\varphi \;=\; \frac{1}{\bar F}\, \tau_{-t}\big( \varphi\bar F \big) \quad \text{for all } \varphi\in\mathcal{S}; \tag{44}
\]
that is, $S^A_t\varphi(x) = \big( \bar F(x+t)/\bar F(x) \big)\varphi(x+t)$ for $x\in\mathbb{R}$.
Proof. We first check that $B_A$ is indeed the infinitesimal generator of the semigroup given by (44). In order to do so, it suffices to check that for each $\alpha,\beta\in\mathbb{N}$,
\[
\lim_{t\downarrow0} \Big\| \frac{S^A_t\varphi - \varphi}{t} - (\varphi'-h\varphi) \Big\|_{\alpha,\beta} \;=\; 0. \tag{45}
\]
We begin by noting that for each $t\ge0$, $x\in\mathbb{R}$ and $\varphi\in\mathcal{S}$, we have that
\[
\big| S^A_t\varphi(x) \big| \;=\; \bigg| \frac{1-F(x+t)}{1-F(x)}\, \varphi(x+t) \bigg| \;\le\; e^{\|h\|_\infty t}\, |\tau_{-t}\varphi(x)|,
\]
and so it follows by Lemma 3.5 that $S^A_t\varphi\in\mathcal{S}$. Now let $x\in\mathbb{R}$ be fixed and note that for each $t\ge0$, we may write
\[
S^A_t\varphi(x) \;=\; \exp\Big( -\int_x^{x+t} h(v)\,dv \Big)\, \varphi(x+t).
\]
Hence, by Taylor's theorem, expanding in terms of $t$ we obtain that we may write
\[
S^A_t\varphi(x) \;=\; \varphi(x) + t\,\big( \varphi'(x)-h(x)\varphi(x) \big) + R(x,t), \tag{46}
\]
where
\[
R(x,t) \;=\; \int_0^t \frac{d^2}{du^2} \Big[ \exp\Big( -\int_x^{x+u} h(v)\,dv \Big) \varphi(x+u) \Big]\,(t-u)\,du.
\]
Now differentiating with respect to $x$ in (46), we obtain that for each $\alpha,\beta\in\mathbb{N}$,
\[
\lim_{t\downarrow0} \Big\| \frac{S^A_t\varphi-\varphi}{t} - (\varphi'-h\varphi) \Big\|_{\alpha,\beta} \;=\; \lim_{t\downarrow0} \sup_{x\in\mathbb{R}} \bigg| x^\alpha\, \frac{R^{(\beta)}(x,t)}{t} \bigg|,
\]
where $R^{(\beta)}(x,t)$ denotes the $\beta$th derivative of $R(x,t)$ with respect to $x$. Hence, in order to verify (45) it now suffices to show that for each $\alpha,\beta\in\mathbb{N}$,
\[
\lim_{t\downarrow0} \sup_{x\in\mathbb{R}} \bigg| x^\alpha\, \frac{R^{(\beta)}(x,t)}{t} \bigg| \;=\; 0.
\]
First note that for each $x\in\mathbb{R}$ fixed, we may write
\[
x^\alpha R^{(\beta)}(x,t) \;=\; \int_0^t x^\alpha\, \frac{\partial^\beta}{\partial x^\beta}\, \frac{d^2}{du^2} \Big[ \exp\Big( -\int_x^{x+u} h(v)\,dv \Big) \varphi(x+u) \Big]\,(t-u)\,du. \tag{47}
\]
However, since $h\in C_b^\infty(\mathbb{R})$ and $\varphi\in\mathcal{S}$, it is straightforward to show that for each $\alpha,\beta\in\mathbb{N}$, there exists a constant $C_{\alpha,\beta}<\infty$ such that for $u$ sufficiently small,
\[
\sup_{x\in\mathbb{R}} \bigg| x^\alpha\, \frac{\partial^\beta}{\partial x^\beta}\, \frac{d^2}{du^2} \Big[ \exp\Big( -\int_x^{x+u} h(v)\,dv \Big) \varphi(x+u) \Big] \bigg| \;<\; C_{\alpha,\beta}.
\]
Hence, by (47) we obtain that
\[
\lim_{t\downarrow0} \sup_{x\in\mathbb{R}} \bigg| x^\alpha\, \frac{R^{(\beta)}(x,t)}{t} \bigg| \;\le\; \lim_{t\downarrow0} \frac{C_{\alpha,\beta}}{t} \int_0^t (t-u)\,du \;=\; \lim_{t\downarrow0} \frac{C_{\alpha,\beta}}{2}\, t \;=\; 0.
\]
This now completes the verification of the fact that $B_A$ is the infinitesimal generator of the semigroup $(S^A_t)_{t\ge0}$ given by (44).
We next verify that the semigroup $(S^A_t)_{t\ge0}$ is a strongly continuous $(C_0,1)$-semigroup. It is straightforward to see that Part 1 of Definition 3.1 is satisfied. We next check that Part 2 of Definition 3.1 is satisfied. Consider $0\le s<t$, a bounded set $K\subseteq\mathcal{S}$, and $\alpha,\beta\in\mathbb{N}$. We then have that we may write
\begin{align}
\sup_{\varphi\in K} \|S^A_s\varphi - S^A_t\varphi\|_{\alpha,\beta} &= \sup_{\varphi\in K} \Big\| \frac{1}{\bar F}\tau_{-s}(\varphi\bar F) - \frac{1}{\bar F}\tau_{-t}(\varphi\bar F) \Big\|_{\alpha,\beta} \nonumber\\
&= \sup_{\varphi\in K}\ \sup_{x\in\mathbb{R}} \bigg| x^\alpha \Big( \frac{\bar F(x+s)}{\bar F(x)}\varphi(x+s) - \frac{\bar F(x+t)}{\bar F(x)}\varphi(x+t) \Big)^{(\beta)} \bigg| \nonumber\\
&\le \sup_{\varphi\in K}\ \sup_{x\in\mathbb{R}} \bigg| x^\alpha \Big( \Big( \frac{\bar F(x+s)}{\bar F(x)} - \frac{\bar F(x+t)}{\bar F(x)} \Big)\varphi(x+s) \Big)^{(\beta)} \bigg| \nonumber\\
&\quad+ \sup_{\varphi\in K}\ \sup_{x\in\mathbb{R}} \bigg| x^\alpha \Big( \frac{\bar F(x+t)}{\bar F(x)} \big( \varphi(x+s)-\varphi(x+t) \big) \Big)^{(\beta)} \bigg|. \tag{48}
\end{align}
We now handle each of the terms in (48) separately. For the first term in (48), note that by Lemmas 3.4 and 3.5 we have that
\begin{align}
&\sup_{\varphi\in K}\ \sup_{x\in\mathbb{R}} \bigg| x^\alpha \Big( \Big( \frac{\bar F(x+s)}{\bar F(x)} - \frac{\bar F(x+t)}{\bar F(x)} \Big)\varphi(x+s) \Big)^{(\beta)} \bigg| \nonumber\\
&\quad= \sup_{\varphi\in K}\ \sup_{x\in\mathbb{R}} \bigg| x^\alpha \sum_{i=0}^{\beta} \binom{\beta}{i} \Big( \frac{\bar F(x+s)}{\bar F(x)} - \frac{\bar F(x+t)}{\bar F(x)} \Big)^{(\beta-i)} \varphi^{(i)}(x+s) \bigg| \nonumber\\
&\quad\le |t-s| \sum_{i=0}^{\beta} M_{\beta-i} \binom{\beta}{i}\ \sup_{\varphi\in K}\ \sup_{x\in\mathbb{R}} \big| x^\alpha \varphi^{(i)}(x+s) \big| \nonumber\\
&\quad\le |t-s|\,(1\vee|t|)^\alpha\, 2^\alpha \sum_{i=0}^{\beta} M_{\beta-i} \binom{\beta}{i}\ \sup_{\varphi\in K}\ \max_{0\le j\le\alpha} \|\varphi\|_{j,i}. \tag{49}
\end{align}
We next focus on the second term in (48). For each $n\ge0$, denote the left-hand side of (41) by $L_n$. By the mean value theorem, for each $x\in\mathbb{R}$ there exists an $r_x\in[s,t]$ such that $\varphi^{(i)}(x+s)-\varphi^{(i)}(x+t) = (t-s)\,\varphi^{(i+1)}(x+r_x)$, and hence
\begin{align}
&\sup_{\varphi\in K}\ \sup_{x\in\mathbb{R}} \bigg| x^\alpha \Big( \frac{\bar F(x+t)}{\bar F(x)} \big( \varphi(x+s)-\varphi(x+t) \big) \Big)^{(\beta)} \bigg| \nonumber\\
&\quad= \sup_{\varphi\in K}\ \sup_{x\in\mathbb{R}} \bigg| x^\alpha \sum_{i=0}^{\beta} \binom{\beta}{i} \Big( \frac{\bar F(x+t)}{\bar F(x)} \Big)^{(\beta-i)} \big( \varphi^{(i)}(x+s)-\varphi^{(i)}(x+t) \big) \bigg| \nonumber\\
&\quad\le \sup_{\varphi\in K}\ \sup_{x\in\mathbb{R}} \sum_{i=0}^{\beta} \binom{\beta}{i} L_{\beta-i}\, \big| x^\alpha \big( \varphi^{(i)}(x+s)-\varphi^{(i)}(x+t) \big) \big| \nonumber\\
&\quad\le |t-s|\ \sup_{\varphi\in K}\ \sup_{x\in\mathbb{R}} \sum_{i=0}^{\beta} \binom{\beta}{i} L_{\beta-i}\, \big| x^\alpha \varphi^{(i+1)}(x+r_x) \big| \nonumber\\
&\quad\le |t-s|\,(1\vee|t|)^\alpha\, 2^\alpha \sum_{i=0}^{\beta} \binom{\beta}{i} L_{\beta-i}\ \sup_{\varphi\in K}\ \max_{0\le j\le\alpha} \|\varphi\|_{j,i+1}, \tag{50}
\end{align}
where the final inequality follows as a consequence of Lemma 3.5. Combining (49) and (50) with (48) and taking the limit as $s\to t$ now yields Part 2 of Definition 3.1.
We now complete the proof by verifying that Part 3 of Definition 3.1 is satisfied. First note that for each $\varphi\in\mathcal{S}$, $t\ge0$ and $\alpha,\beta\in\mathbb{N}$, we may write
\begin{align*}
\|S^A_t\varphi\|_{\alpha,\beta} &= \sup_{x\in\mathbb{R}} \bigg| x^\alpha \Big( \frac{\bar F(x+t)}{\bar F(x)}\varphi(x+t) \Big)^{(\beta)} \bigg| \\
&= \sup_{x\in\mathbb{R}} \bigg| x^\alpha \sum_{i=0}^{\beta} \binom{\beta}{i} \Big( \frac{\bar F(x+t)}{\bar F(x)} \Big)^{(\beta-i)} \varphi^{(i)}(x+t) \bigg| \\
&\le \sum_{i=0}^{\beta} \binom{\beta}{i} L_{\beta-i}\ \sup_{x\in\mathbb{R}} \big| x^\alpha \varphi^{(i)}(x+t) \big| \\
&\le (1\vee|t|)^\alpha\, 2^\alpha \sum_{i=0}^{\beta} \binom{\beta}{i} L_{\beta-i}\ \max_{0\le j\le\alpha} \|\varphi\|_{j,i} \\
&\le (1\vee|t|)^\alpha\, 2^{\alpha+\beta} \max_{0\le i\le\beta} L_i\ \max_{0\le j\le\alpha}\ \max_{0\le i\le\beta} \|\varphi\|_{j,i},
\end{align*}
where the second inequality above follows from Lemma 3.5. Part 3 of Definition 3.1 now follows from (3) and (4) of §1.1 and the fact that $\|\varphi\|_p\le\|\varphi\|_{p+1}$ for each $p\in\mathbb{N}$. This completes the proof.
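The explicit form $S^A_t\varphi(x) = (\bar F(x+t)/\bar F(x))\varphi(x+t)$ in (44) makes the semigroup property $S^A_sS^A_t = S^A_{s+t}$ an algebraic identity of ratios; the sketch below checks it on a grid for an illustrative Erlang-2 ccdf (the distribution and names are assumptions made for the example).

```python
import numpy as np

Fbar = lambda x: (1.0 + 2.0 * x) * np.exp(-2.0 * x)   # mean-1 Erlang-2 ccdf
phi = lambda x: np.exp(-x**2 / 2.0)

def S(t, g):     # S_t g(x) = (Fbar(x+t)/Fbar(x)) g(x+t), as in (44)
    return lambda x: Fbar(x + t) / Fbar(x) * g(x + t)

x = np.linspace(0.0, 10.0, 10001)
s, t = 0.3, 1.1
err = float(np.max(np.abs(S(s, S(t, phi))(x) - S(s + t, phi)(x))))
print(err)   # zero up to floating-point roundoff
```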
3.2 Residual Service Time Process

Now let $B_R$ be the linear operator defined on $\mathcal{S}$ such that
\[
B_R\varphi \;=\; -\varphi', \quad \varphi\in\mathcal{S}. \tag{51}
\]
In this subsection, we show that $B_R$ generates a strongly continuous $(C_0,1)$-semigroup, so that, as above, the associated regulator map on $D([0,T],\mathcal{S}')$ is a continuous map. Our main result of this subsection is the following.

Proposition 3.7. The linear operator $B_R$ defined by (51) generates the strongly continuous $(C_0,1)$-semigroup $(\tau_t)_{t\ge0}$ of translation operators.
Proof. First note that it is clear by Lemma 3.5 that for each $t\ge0$ and $\varphi\in\mathcal{S}$, we have that $\tau_t\varphi\in\mathcal{S}$. We next check that for each $\alpha,\beta\in\mathbb{N}$, we have the convergence
\[
\lim_{t\downarrow0} \Big\| \frac{\tau_t\varphi-\varphi}{t} - (-\varphi') \Big\|_{\alpha,\beta} \;=\; 0. \tag{53}
\]
This will then be sufficient to verify that $B_R$ as defined by (51) generates the semigroup $(\tau_t)_{t\ge0}$. Let $x\in\mathbb{R}$ and $\varphi\in\mathcal{S}$ be fixed. It then follows by Taylor's theorem, expanding in terms of $t$, that we may write
\[
\tau_t\varphi(x) \;=\; \varphi(x-t) \;=\; \varphi(x) - t\varphi'(x) + R(x,t), \tag{54}
\]
where
\[
R(x,t) \;=\; \int_0^t \varphi^{(2)}(x-u)\,(t-u)\,du. \tag{55}
\]
As in the proof of Proposition 3.6, it follows that
\[
\lim_{t\downarrow0} \Big\| \frac{\tau_t\varphi-\varphi}{t} + \varphi' \Big\|_{\alpha,\beta} \;=\; \lim_{t\downarrow0} \sup_{x\in\mathbb{R}} \bigg| x^\alpha\, \frac{R^{(\beta)}(x,t)}{t} \bigg|,
\]
where $R^{(\beta)}(x,t)$ denotes the $\beta$th derivative of $R(x,t)$ with respect to $x$. Thus, in order to prove (53), it now suffices to check that
\[
\lim_{t\downarrow0} \sup_{x\in\mathbb{R}} \bigg| x^\alpha\, \frac{R^{(\beta)}(x,t)}{t} \bigg| \;=\; 0. \tag{56}
\]
First note that by (55) we may write
\[
x^\alpha R^{(\beta)}(x,t) \;=\; x^\alpha\, \frac{\partial^\beta}{\partial x^\beta} \int_0^t \varphi^{(2)}(x-u)\,(t-u)\,du \;=\; \int_0^t x^\alpha \varphi^{(\beta+2)}(x-u)\,(t-u)\,du.
\]
However, by Lemma 3.5 there exists a constant $C_{\alpha,\beta}<\infty$ such that for sufficiently small $u$,
\[
\sup_{x\in\mathbb{R}} \big| x^\alpha \varphi^{(\beta+2)}(x-u) \big| \;<\; C_{\alpha,\beta}.
\]
We therefore obtain that
\[
\lim_{t\downarrow0} \sup_{x\in\mathbb{R}} \bigg| x^\alpha\, \frac{R^{(\beta)}(x,t)}{t} \bigg| \;\le\; \lim_{t\downarrow0} \frac{C_{\alpha,\beta}}{t} \int_0^t (t-u)\,du \;=\; \lim_{t\downarrow0} \frac{C_{\alpha,\beta}}{2}\, t \;=\; 0,
\]
thus proving (56). This completes our verification of the fact that $B_R$ as defined by (51) generates the semigroup $(\tau_t)_{t\ge0}$.
We next proceed to verify that $(\tau_t)_{t\ge0}$ is a strongly continuous $(C_0,1)$-semigroup. In order to check this fact, we will verify that $(\tau_t)_{t\ge0}$ satisfies Parts 1 through 3 of Definition 3.1. It is clear that $(\tau_t)_{t\ge0}$ satisfies Part 1 of Definition 3.1. We next check that $(\tau_t)_{t\ge0}$ satisfies Part 2 of Definition 3.1. Let $0\le s<t$, let $K\subseteq\mathcal{S}$ be a bounded set, and let $\alpha,\beta\in\mathbb{N}$. Then, by the mean value theorem, for each $x\in\mathbb{R}$ there exists an $r_x\in[x-t,x-s]$ such that
\begin{align*}
\sup_{\varphi\in K} \|\tau_s\varphi - \tau_t\varphi\|_{\alpha,\beta} &= \sup_{\varphi\in K}\ \sup_{x\in\mathbb{R}} \big| x^\alpha \big( \varphi^{(\beta)}(x-s) - \varphi^{(\beta)}(x-t) \big) \big| \\
&= \sup_{\varphi\in K}\ \sup_{x\in\mathbb{R}} \big| \big[ r_x + (x-r_x) \big]^\alpha \big( \varphi^{(\beta)}(x-s) - \varphi^{(\beta)}(x-t) \big) \big| \\
&= \sup_{\varphi\in K}\ \sup_{x\in\mathbb{R}} \bigg| \sum_{i=0}^{\alpha} \binom{\alpha}{i} r_x^i\,(x-r_x)^{\alpha-i}\, \big( \varphi^{(\beta)}(x-s) - \varphi^{(\beta)}(x-t) \big) \bigg| \\
&= |t-s|\ \sup_{\varphi\in K}\ \sup_{x\in\mathbb{R}} \bigg| \sum_{i=0}^{\alpha} \binom{\alpha}{i} r_x^i\,(x-r_x)^{\alpha-i}\, \varphi^{(\beta+1)}(r_x) \bigg| \\
&\le |t-s| \sum_{i=0}^{\alpha} \binom{\alpha}{i} (t+1)^{\alpha-i}\ \sup_{\varphi\in K} \|\varphi\|_{i,\beta+1},
\end{align*}
where in the final inequality we have used the fact that $|x-r_x|\le t\le t+1$. Part 2 of Definition 3.1 now follows from the bound (3) and the fact that $K$ is a bounded set.
We now complete the proof by verifying that $(\tau_t)_{t\ge0}$ satisfies Part 3 of Definition 3.1. Note that for each $\varphi\in\mathcal{S}$, $t\ge0$ and $\alpha,\beta\in\mathbb{N}$, we have, as in the proof of Lemma 3.5, that
\begin{align*}
\|\tau_t\varphi\|_{\alpha,\beta} \;=\; \|\varphi(\cdot-t)\|_{\alpha,\beta} &= \sup_{x\in\mathbb{R}} \big| x^\alpha \varphi^{(\beta)}(x-t) \big| \\
&= \sup_{x\in\mathbb{R}} \big| [t+(x-t)]^\alpha \varphi^{(\beta)}(x-t) \big| \\
&= \sup_{x\in\mathbb{R}} \bigg| \sum_{i=0}^{\alpha} \binom{\alpha}{i} t^{\alpha-i}(x-t)^i\, \varphi^{(\beta)}(x-t) \bigg| \\
&\le \sum_{i=0}^{\alpha} \binom{\alpha}{i} t^{\alpha-i}\, \|\varphi\|_{i,\beta} \\
&\le (1\vee t)^\alpha\, 2^\alpha \max_{0\le i\le\alpha} \|\varphi\|_{i,\beta}.
\end{align*}
Part 3 of Definition 3.1 now follows as a result of the bounds (3) and (4).
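The generator computation in (53) can be visualized numerically: the difference quotient of the translation semigroup converges to $-\varphi'$ at a rate linear in $t$. The sketch below uses a Gaussian test function (an illustrative choice).

```python
import numpy as np

phi  = lambda x: np.exp(-x**2 / 2.0)
dphi = lambda x: -x * np.exp(-x**2 / 2.0)

x = np.linspace(-10.0, 10.0, 20001)
errs = []
for t in (1e-2, 1e-3, 1e-4):
    diff_quot = (phi(x - t) - phi(x)) / t            # (tau_t phi - phi)/t
    errs.append(float(np.max(np.abs(diff_quot + dphi(x)))))
print(errs)   # decreases roughly linearly in t
```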
4 Martingale Results

In this section, we show that the process $\mathcal{D}^0+\mathcal{D}$ defined in §2.1 and the process $\mathcal{G}$ defined in §2.2 are both $\mathcal{S}'$-valued martingales with respect to appropriately defined filtrations. We begin with the process $\mathcal{D}^0+\mathcal{D}$, for which we define the filtration $(\mathcal{F}^{\mathcal{A}}_t)_{t\ge0}$ by setting
\[
\mathcal{F}^{\mathcal{A}}_t \;=\; \sigma\big\{ 1_{\{\tilde\sigma_i\le s\}},\ s\le t,\ i=1,2,\dots,A_0(\infty);\ \tilde a_i,\ i=1,\dots,A_0(\infty);\ 1_{\{\nu_i\le s-\tau_i\}},\ s\le t,\ i=1,2,\dots;\ E_s,\ s\le t \big\} \vee \mathcal{N},
\]
where $\mathcal{N}$ denotes the collection of $P$-null sets. Moreover, we explicitly identify the tensor quadratic variation of $\mathcal{D}^0+\mathcal{D}$. The following is our main result of this subsection.
Proposition 4.1. The process $\mathcal{D}^0+\mathcal{D}$ is an $\mathcal{S}'$-valued $\mathcal{F}^{\mathcal{A}}_t$-martingale with tensor quadratic variation process $(\langle\langle \mathcal{D}^0+\mathcal{D} \rangle\rangle_t)_{t\ge0}$ given for all $\varphi,\psi\in\mathcal{S}$ by
\begin{align}
\langle\langle \mathcal{D}^0+\mathcal{D} \rangle\rangle_t(\varphi,\psi) \;=\;& \sum_{i=1}^{A_0(\infty)} \int_0^{\tilde\sigma_i\wedge t} \varphi(x+\tilde a_i)\psi(x+\tilde a_i)\,h_i(x)\,dx \nonumber\\
&+ \sum_{i=1}^{E_t} \int_0^{\nu_i\wedge(t-\tau_i)^+} \varphi(x)\psi(x)\,h(x)\,dx. \tag{57}
\end{align}
Proof. Let $\varphi\in\mathcal{S}$. We claim that $(\langle(\mathcal{D}^0+\mathcal{D})_t,\varphi\rangle)_{t\ge0}$ is an $\mathbb{R}$-valued $\mathcal{F}^{\mathcal{A}}_t$-martingale, which is sufficient to show that $\mathcal{D}^0+\mathcal{D}$ is an $\mathcal{S}'$-valued $\mathcal{F}^{\mathcal{A}}_t$-martingale. Let $t\ge0$. We first show that $E[|\langle(\mathcal{D}^0+\mathcal{D})_t,\varphi\rangle|]<\infty$. Note that by (12) and (13) we may write
\begin{align}
E\big[ \big| \langle(\mathcal{D}^0+\mathcal{D})_t,\varphi\rangle \big| \big] \;\le\;& E\bigg[ \bigg| \sum_{i=1}^{A_0(\infty)} \int_0^t \varphi(x+\tilde a_i)\, d\Big( 1_{\{\tilde\sigma_i\le x\}} - \int_0^{\tilde\sigma_i\wedge x} h_i(u)\,du \Big) \bigg| \bigg] \tag{58}\\
&+ E\bigg[ \bigg| \sum_{i=1}^{E_t} \int_0^{(t-\tau_i)^+} \varphi(x)\, d\Big( 1_{\{\nu_i\le x\}} - \int_0^{\nu_i\wedge x} h(u)\,du \Big) \bigg| \bigg] \nonumber\\
\le\;& E\bigg[ \sup_{0\le s<\infty} |\varphi(s)| \sum_{i=1}^{A_0(\infty)} \Big( 1_{\{\tilde\sigma_i\le t\}} + \int_0^{\tilde\sigma_i\wedge t} h_i(u)\,du \Big) \bigg] \nonumber\\
&+ E\bigg[ \sup_{0\le s<\infty} |\varphi(s)| \sum_{i=1}^{E_t} \Big( 1_{\{\nu_i\le(t-\tau_i)^+\}} + \int_0^{\nu_i\wedge(t-\tau_i)^+} h(u)\,du \Big) \bigg] \nonumber\\
\le\;& \sup_{0\le s<\infty} |\varphi(s)|\,\big( 1+t\|h\|_\infty \big)\,\big( E[A_0(\infty)] + E[E_t] \big) \;<\; \infty, \nonumber
\end{align}
where the final inequality follows from the assumptions made on $E[A_0(\infty)]$ and $E[E_t]$ in §2.1. Thus, $E[|\langle(\mathcal{D}^0+\mathcal{D})_t,\varphi\rangle|]<\infty$ as desired.
Next, we show that $(\langle(\mathcal{D}^0+\mathcal{D})_t,\varphi\rangle)_{t\ge0}$ possesses the martingale property with respect to the filtration $(\mathcal{F}^{\mathcal{A}}_t)_{t\ge0}$. That is, we show that for each $0\le s\le t$,
\[
E\big[ \langle(\mathcal{D}^0+\mathcal{D})_t,\varphi\rangle \,\big|\, \mathcal{F}^{\mathcal{A}}_s \big] \;=\; \langle(\mathcal{D}^0+\mathcal{D})_s,\varphi\rangle. \tag{59}
\]
First note that by (12) and (13), we may write
\[
\langle(\mathcal{D}^0+\mathcal{D})_t,\varphi\rangle \;=\; \sum_{i=1}^{\infty} 1_{\{i\le A_0(\infty)\}} \langle \mathcal{D}^{0,i}_t,\varphi\rangle + \sum_{i=1}^{\infty} \langle \mathcal{D}^i_t,\varphi\rangle, \tag{60}
\]
where, for each $i\ge1$, we set
\[
1_{\{i\le A_0(\infty)\}} \langle \mathcal{D}^{0,i}_t,\varphi\rangle \;=\; 1_{\{i\le A_0(\infty)\}} \int_0^t \varphi(x+\tilde a_i)\, d\Big( 1_{\{\tilde\sigma_i\le x\}} - \int_0^{\tilde\sigma_i\wedge x} h_i(u)\,du \Big)
\]
and
\[
\langle \mathcal{D}^i_t,\varphi\rangle \;=\; \int_0^{(t-\tau_i)^+} \varphi(x)\, d\Big( 1_{\{\nu_i\le x\}} - \int_0^{\nu_i\wedge x} h(u)\,du \Big). \tag{61}
\]
The proof that
E[1
iA
0
()
T
0,i
t
, )[T
,
s
] = 1
iA
0
()
E[T
0,i
t
, )[T
,
s
] = 1
iA
0
()
T
0,i
s
, ), (63)
for each i 1, is similar and will not be included. For each i 1 and y 0, set
D
i
t
(y) = 1
i
(t
i
)
+
y
_
i
(t
i
)
+
y
0
h(u) du. (64)
We now claim that in order to show (62), it suces to show that E[D
i
t
(y)[T
,
s
] = D
i
s
(y) for each
y 0. This is true since it will then follow that
E
_
T
i
t
,
_
[T
,
s
= E
__
R
+
D
i
t
(y)
(y) dy[T
,
s
_
=
_
R
+
E
_
D
i
t
(y)[T
,
s
(y) dy
=
_
R
+
D
i
s
(y)
(y) dy,
=
T
i
s
,
_
.
First note that since y (t
i
)
+
= (t (
i
+y)
i
)
+
, we may write
D
i
t
(y) = D
i
t(
i
+y)
().
Next note that since by the assumptions in 2.1, we have that E[
i
] < , it is straightforward
to verify that
i
+ y is an T
,
t
stopping time for each y 0. Thus, by Problem 3.2.4 of [22], it
24
suces to show that D
i
() = (D
i
t
())
t0
is an Rvalued T
,
t
martingale. First note that since
h
i
s
, 1
i
s
i
] = D
i
s
().
Hence, since
i
is assumed to be independent of A
0
,
k
, k = 1, ..., A
0
(), E = (E
t
)
t0
and
k
, k ,=
i, we obtain that that
E[D
i
t
()[T
,
s
] = E[D
i
t
()[1
i
s
, 1
i
s
i
] = E[D
i
s
()[T
,
s
] = D
i
s
(),
and so D
i
() is an Rvalued T
,
t
martingale as desired. This completes the proof of (62). The
proof of (63) is similar.
We now show that (62) and (63) imply (59). We claim that for each $k\ge1$, the sum
\[
\bigg| \sum_{i=1}^{k} 1_{\{i\le A_0(\infty)\}} \langle \mathcal{D}^{0,i}_t,\varphi\rangle + \sum_{i=1}^{k} \langle \mathcal{D}^i_t,\varphi\rangle \bigg|
\]
is dominated uniformly over $k\ge1$ by an integrable random variable. By Lebesgue's dominated convergence theorem for conditional expectations [6], this then implies that
\[
E\big[ \langle(\mathcal{D}^0+\mathcal{D})_t,\varphi\rangle \,\big|\, \mathcal{F}^{\mathcal{A}}_s \big] \;=\; \sum_{i=1}^{\infty} E\big[ 1_{\{i\le A_0(\infty)\}}\langle \mathcal{D}^{0,i}_t,\varphi\rangle \,\big|\, \mathcal{F}^{\mathcal{A}}_s \big] + \sum_{i=1}^{\infty} E\big[ \langle \mathcal{D}^i_t,\varphi\rangle \,\big|\, \mathcal{F}^{\mathcal{A}}_s \big] \;=\; \langle(\mathcal{D}^0+\mathcal{D})_s,\varphi\rangle, \tag{65}
\]
as desired.
as desired. However, to obtain the bound is straightforward since
i=1
1
iA
0
()
T
0,i
t
, ) +
k
i=1
T
i
t
,
_
i=1
1
iA
0
()
_
t
0
(x
i
) d
_
1
i
x
_
i
x
0
h
i
(u) du
_
i=1
_
(t
i
)
+
0
(x) d
_
1
i
x
_
i
x
0
h(u) du
_
sup
0s<
[(s)[(1 +th
)(A
0
() +E
t
),
and as in (58) we have that
sup
0s<
[(s)[(1 +th
)(E[A
0
()] +E[E
t
]) < .
We have therefore shown that ((T
0
+ T)
t
, ))
t0
is an Rvalued T
,
t
martingale for each o,
which implies that T
0
+T is an o
valued T
,
t
martingale
We next proceed to calculate the tensor quadratic variation of $\mathcal{D}^0+\mathcal{D}$. Let $i\ge1$ and recall that by (61) we have that for each $\varphi\in\mathcal{S}$ and $t\ge0$,
\[
\langle \mathcal{D}^i_t,\varphi\rangle \;=\; \int_0^{(t-\tau_i)^+} \varphi(x)\, d\Big( 1_{\{\nu_i\le x\}} - \int_0^{\nu_i\wedge x} h(u)\,du \Big). \tag{66}
\]
Therefore, as on page 259 of [25], it follows that
\[
\langle\langle \mathcal{D}^i,\varphi\rangle\rangle_t \;=\; \int_0^{(t-\tau_i)^+} \varphi(x)^2\, d\Big( \int_0^{\nu_i\wedge x} h(u)\,du \Big) \;=\; \int_0^{\nu_i\wedge(t-\tau_i)^+} \varphi(x)^2\,h(x)\,dx, \tag{67}
\]
which implies that
\begin{align*}
\langle\langle \mathcal{D}^i \rangle\rangle_t(\varphi,\psi) \;=\; \big\langle \langle \mathcal{D}^i,\varphi\rangle, \langle \mathcal{D}^i,\psi\rangle \big\rangle_t &= \frac{1}{4}\Big( \langle\langle \mathcal{D}^i,\varphi+\psi\rangle\rangle_t - \langle\langle \mathcal{D}^i,\varphi-\psi\rangle\rangle_t \Big) \\
&= \frac{1}{4}\bigg( \int_0^{\nu_i\wedge(t-\tau_i)^+} \big( \varphi(x)+\psi(x) \big)^2 h(x)\,dx - \int_0^{\nu_i\wedge(t-\tau_i)^+} \big( \varphi(x)-\psi(x) \big)^2 h(x)\,dx \bigg) \\
&= \int_0^{\nu_i\wedge(t-\tau_i)^+} \varphi(x)\psi(x)\,h(x)\,dx,
\end{align*}
where the second equality in the above follows from the polarization identity and the third equality follows from (67). In a similar manner, one may also show that for all $i\ge1$,
\[
\langle\langle 1_{\{i\le A_0(\infty)\}}\mathcal{D}^{0,i} \rangle\rangle_t(\varphi,\psi) \;=\; 1_{\{i\le A_0(\infty)\}} \int_0^{\tilde\sigma_i\wedge t} \varphi(x+\tilde a_i)\psi(x+\tilde a_i)\,h_i(x)\,dx,
\]
for all $\varphi,\psi\in\mathcal{S}$.
We now claim that in order to show that the tensor quadratic variation of $\mathcal{D}^0+\mathcal{D}$ is given by (57), it suffices to show the following three facts:

1. $\mathcal{D}^i$ is orthogonal to $\mathcal{D}^j$ for $i\ne j$;

2. $1_{\{i\le A_0(\infty)\}}\mathcal{D}^{0,i}$ is orthogonal to $1_{\{j\le A_0(\infty)\}}\mathcal{D}^{0,j}$ for $i\ne j$;

3. $1_{\{i\le A_0(\infty)\}}\mathcal{D}^{0,i}$ is orthogonal to $\mathcal{D}^j$ for all $i,j\ge1$.

The fact that the tensor quadratic variation of $\mathcal{D}^0+\mathcal{D}$ is given by (57) can then be shown in the following manner. For each $k\ge1$, $\varphi\in\mathcal{S}$ and $t\ge0$, let
\[
\langle(\mathcal{D}^0+\mathcal{D})^k_t,\varphi\rangle \;=\; \sum_{i=1}^{k} 1_{\{i\le A_0(\infty)\}} \langle \mathcal{D}^{0,i}_t,\varphi\rangle + \sum_{i=1}^{k} \langle \mathcal{D}^i_t,\varphi\rangle,
\]
and set $(\mathcal{D}^0+\mathcal{D})^k = ((\mathcal{D}^0+\mathcal{D})^k_t,\ t\ge0)$. It is then clear that Claims 1 through 3 above imply that
\begin{align}
\langle\langle (\mathcal{D}^0+\mathcal{D})^k \rangle\rangle_t(\varphi,\psi) \;=\;& \sum_{i=1}^{k} 1_{\{i\le A_0(\infty)\}} \int_0^{\tilde\sigma_i\wedge t} \varphi(x+\tilde a_i)\psi(x+\tilde a_i)\,h_i(x)\,dx \nonumber\\
&+ \sum_{i=1}^{k} \int_0^{\nu_i\wedge(t-\tau_i)^+} \varphi(x)\psi(x)\,h(x)\,dx. \tag{68}
\end{align}
26
Moreover, using similar arguments as above and the simple inequality $(x_1+x_2)^2 \le 4(x_1^2 + x_2^2)$, it is straightforward to show that for each $k \ge 1$ one has, $P$-a.s., the bound

\begin{align*}
&\Big| \langle(\mathcal{T}^0+\mathcal{T})^k_t, \varphi\rangle \langle(\mathcal{T}^0+\mathcal{T})^k_t, \psi\rangle
- \langle\langle (\mathcal{T}^0+\mathcal{T})^k \rangle\rangle_t(\varphi,\psi) \Big| \\
&\le 4\Big( \sup_{0\le s<\infty}\big(|\varphi(s)|+|\psi(s)|\big)\Big)^2 (1+t\|h\|_\infty)^2 \big(E^2_t + A^2_0(\infty)\big)
+ \sup_{0\le s<\infty}\big(|\varphi(s)||\psi(s)|\big)(1+t\|h\|_\infty)\big(E_t + A_0(\infty)\big).
\end{align*}

However, by the assumptions in 2.1, one has that $E[E^2_t + A^2_0(\infty)] < \infty$ and $E[E_t + A_0(\infty)] < \infty$. Hence, using the dominated convergence theorem for conditional expectations [6], it follows that for $0 \le s \le t$,

\begin{align*}
&E\big[\langle(\mathcal{T}^0+\mathcal{T})_t,\varphi\rangle\langle(\mathcal{T}^0+\mathcal{T})_t,\psi\rangle - \langle\langle\mathcal{T}^0+\mathcal{T}\rangle\rangle_t(\varphi,\psi) \,\big|\, \mathcal{F}_s\big] \\
&= E\big[\lim_{k\to\infty}\big(\langle(\mathcal{T}^0+\mathcal{T})^k_t,\varphi\rangle\langle(\mathcal{T}^0+\mathcal{T})^k_t,\psi\rangle - \langle\langle(\mathcal{T}^0+\mathcal{T})^k\rangle\rangle_t(\varphi,\psi)\big) \,\big|\, \mathcal{F}_s\big] \\
&= \lim_{k\to\infty} E\big[\langle(\mathcal{T}^0+\mathcal{T})^k_t,\varphi\rangle\langle(\mathcal{T}^0+\mathcal{T})^k_t,\psi\rangle - \langle\langle(\mathcal{T}^0+\mathcal{T})^k\rangle\rangle_t(\varphi,\psi) \,\big|\, \mathcal{F}_s\big] \\
&= \lim_{k\to\infty} \big(\langle(\mathcal{T}^0+\mathcal{T})^k_s,\varphi\rangle\langle(\mathcal{T}^0+\mathcal{T})^k_s,\psi\rangle - \langle\langle(\mathcal{T}^0+\mathcal{T})^k\rangle\rangle_s(\varphi,\psi)\big) \\
&= \langle(\mathcal{T}^0+\mathcal{T})_s,\varphi\rangle\langle(\mathcal{T}^0+\mathcal{T})_s,\psi\rangle - \langle\langle\mathcal{T}^0+\mathcal{T}\rangle\rangle_s(\varphi,\psi).
\end{align*}

This then implies that the tensor quadratic variation of $\mathcal{T}^0+\mathcal{T}$ is given by (57). We now proceed to prove Claims 1 through 3, which is sufficient to complete the proof.
We begin with Claim 1. Let $\varphi, \psi \in \mathcal{S}$ and $i \neq j$. We show that $(\langle\mathcal{T}^i_t,\varphi\rangle\langle\mathcal{T}^j_t,\psi\rangle)_{t\ge0}$ is an $\mathbb{R}$-valued $\mathcal{F}_t$-martingale, which is sufficient to show that $\mathcal{T}^i$ is orthogonal to $\mathcal{T}^j$. First note that it is clear, as in (58), that for each $t\ge0$ we have $E[|\langle\mathcal{T}^i_t,\varphi\rangle\langle\mathcal{T}^j_t,\psi\rangle|]<\infty$. Next, let $0\le s\le t$. By the independence of $\eta_i$ from $A_0$, $\tilde\eta_k$, $k=1,\dots,A_0(\infty)$, $E=(E_t)_{t\ge0}$ and $\eta_k$, $k\neq i$, and, similarly, the independence of $\eta_j$ from $A_0$, $\tilde\eta_k$, $k=1,\dots,A_0(\infty)$, $E=(E_t)_{t\ge0}$ and $\eta_k$, $k\neq j$, it follows that

\[
E\big[\langle\mathcal{T}^i_t,\varphi\rangle\langle\mathcal{T}^j_t,\psi\rangle \,\big|\, \mathcal{F}_s\big]
= E\big[\langle\mathcal{T}^i_t,\varphi\rangle\langle\mathcal{T}^j_t,\psi\rangle \,\big|\, 1\{\tau_i\le s\},\ 1\{\tau_j\le s\},\ 1\{\eta_i\le s-\tau_i\}\eta_i,\ 1\{\eta_j\le s-\tau_j\}\eta_j\big].
\]

However, by the independence of $\eta_i$ from $\tau_j$ and $\eta_j$, and, similarly, the independence of $\eta_j$ from $\tau_i$ and $\eta_i$, we have that

\begin{align*}
&E\big[\langle\mathcal{T}^i_t,\varphi\rangle\langle\mathcal{T}^j_t,\psi\rangle \,\big|\, 1\{\tau_i\le s\},\ 1\{\tau_j\le s\},\ 1\{\eta_i\le s-\tau_i\}\eta_i,\ 1\{\eta_j\le s-\tau_j\}\eta_j\big] \\
&= E\big[\langle\mathcal{T}^i_t,\varphi\rangle \,\big|\, 1\{\tau_i\le s\},\ 1\{\eta_i\le s-\tau_i\}\eta_i\big]\;
E\big[\langle\mathcal{T}^j_t,\psi\rangle \,\big|\, 1\{\tau_j\le s\},\ 1\{\eta_j\le s-\tau_j\}\eta_j\big] \\
&= \langle\mathcal{T}^i_s,\varphi\rangle\langle\mathcal{T}^j_s,\psi\rangle,
\end{align*}

where the final equality follows from (62) and the fact that for $k\ge1$,

\[
E\big[\langle\mathcal{T}^k_t,\varphi\rangle \,\big|\, 1\{\tau_k\le s\},\ 1\{\eta_k\le s-\tau_k\}\eta_k\big] = E\big[\langle\mathcal{T}^k_t,\varphi\rangle \,\big|\, \mathcal{F}_s\big]. \tag{69}
\]

Thus, it is clear that $(\langle\mathcal{T}^i_t,\varphi\rangle\langle\mathcal{T}^j_t,\psi\rangle)_{t\ge0}$ possesses the martingale property and so is an $\mathbb{R}$-valued $\mathcal{F}_t$-martingale, and hence $\mathcal{T}^i$ is orthogonal to $\mathcal{T}^j$. The proof of Claim 2 above follows similarly. The proof of Claim 3 above follows in a similar manner as well. In particular,
let $i, j \ge 1$. We show that $(1\{i\le A_0(\infty)\}\langle\mathcal{T}^{0,i}_t,\varphi\rangle\langle\mathcal{T}^j_t,\psi\rangle)_{t\ge0}$ is an $\mathbb{R}$-valued $\mathcal{F}_t$-martingale for each $\varphi,\psi\in\mathcal{S}$, which is sufficient to show that $1\{i\le A_0(\infty)\}\mathcal{T}^{0,i}$ is orthogonal to $\mathcal{T}^j$. For each $t\ge0$, it is clear, as in (58), that $E[|1\{i\le A_0(\infty)\}\langle\mathcal{T}^{0,i}_t,\varphi\rangle\langle\mathcal{T}^j_t,\psi\rangle|]<\infty$. Next, note that since $\tilde\eta_i$ is independent of $A_0$, $\tilde\eta_k$, $k=1,\dots,A_0(\infty)$, $k\neq i$, $E=(E_t)_{t\ge0}$ and $\eta_k$, $k\ge1$, and, similarly, $\eta_j$ is independent of $A_0$, $\tilde\eta_k$, $k=1,\dots,A_0(\infty)$, $E=(E_t)_{t\ge0}$ and $\eta_k$, $k\neq j$, we have that

\[
E\big[1\{i\le A_0(\infty)\}\langle\mathcal{T}^{0,i}_t,\varphi\rangle\langle\mathcal{T}^j_t,\psi\rangle \,\big|\, \mathcal{F}_s\big]
= E\big[1\{i\le A_0(\infty)\}\langle\mathcal{T}^{0,i}_t,\varphi\rangle\langle\mathcal{T}^j_t,\psi\rangle \,\big|\, a_i,\ 1\{\tau_j\le s\},\ 1\{\tilde\eta_i\le s\}\tilde\eta_i,\ 1\{\eta_j\le s-\tau_j\}\eta_j\big].
\]

However, by the independence of $\tilde\eta_i$ from $\tau_j$ and $\eta_j$, and, similarly, the independence of $\eta_j$ from $a_i$ and $\tilde\eta_i$, we have that

\begin{align*}
&E\big[1\{i\le A_0(\infty)\}\langle\mathcal{T}^{0,i}_t,\varphi\rangle\langle\mathcal{T}^j_t,\psi\rangle \,\big|\, a_i,\ 1\{\tau_j\le s\},\ 1\{\tilde\eta_i\le s\}\tilde\eta_i,\ 1\{\eta_j\le s-\tau_j\}\eta_j\big] \\
&= E\big[1\{i\le A_0(\infty)\}\langle\mathcal{T}^{0,i}_t,\varphi\rangle \,\big|\, a_i,\ 1\{\tilde\eta_i\le s\}\tilde\eta_i\big]\;
E\big[\langle\mathcal{T}^j_t,\psi\rangle \,\big|\, 1\{\tau_j\le s\},\ 1\{\eta_j\le s-\tau_j\}\eta_j\big] \\
&= 1\{i\le A_0(\infty)\}\langle\mathcal{T}^{0,i}_s,\varphi\rangle\langle\mathcal{T}^j_s,\psi\rangle,
\end{align*}

where the final equality follows by (62), (63), (69) and the fact that for $k\ge1$,

\[
E\big[1\{k\le A_0(\infty)\}\langle\mathcal{T}^{0,k}_t,\varphi\rangle \,\big|\, a_k,\ 1\{\tilde\eta_k\le s\}\tilde\eta_k\big]
= E\big[1\{k\le A_0(\infty)\}\langle\mathcal{T}^{0,k}_t,\varphi\rangle \,\big|\, \mathcal{F}_s\big].
\]

Thus, it is clear that $(1\{i\le A_0(\infty)\}\langle\mathcal{T}^{0,i}_t,\varphi\rangle\langle\mathcal{T}^j_t,\psi\rangle)_{t\ge0}$ possesses the martingale property and so is an $\mathbb{R}$-valued $\mathcal{F}_t$-martingale, and hence $1\{i\le A_0(\infty)\}\mathcal{T}^{0,i}$ is orthogonal to $\mathcal{T}^j$. This proves Claim 3, which completes the proof.
4.2 Residuals

In this subsection, we show that the process $\mathcal{G}$ defined in 2.2 is a martingale. This fact may be useful in future work where one wishes to show that the residual service time process is a Markov process. Let $(\mathcal{F}_t)_{t\ge0}$ be the natural filtration generated by $\mathcal{G}$. We then have the following result.

Proposition 4.2. The process $\mathcal{G}$ is an $\mathcal{S}'$-valued $\mathcal{F}_t$-martingale with tensor optional quadratic variation process given for all $\varphi,\psi\in\mathcal{S}$ by

\[
[\mathcal{G}]_t(\varphi,\psi) = \sum_{i=1}^{E_t} \big(\varphi(\eta_i) - \langle F,\varphi\rangle\big)\big(\psi(\eta_i) - \langle F,\psi\rangle\big). \tag{70}
\]
Proof. Let $\varphi\in\mathcal{S}$. We first show that $(\langle\mathcal{G}_t,\varphi\rangle)_{t\ge0}$ is an $\mathbb{R}$-valued $\mathcal{F}_t$-martingale. Define the filtration $(\mathcal{H}_k)_{k\ge1}$ by setting $\mathcal{H}_k = \sigma\{E_t,\ t\ge0,\ \eta_1,\eta_2,\dots,\eta_k\}$ for each $k\ge1$. Next, define the discrete-time $D$-valued process $(G_k)_{k\ge1}$ by

\[
G_k(y) = \sum_{i=1}^{k} \big( 1\{\eta_i \le y\} - F(y) \big), \qquad y \ge 0, \tag{71}
\]
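For each fixed $y$, the centered process $G_k$ of (71) is a mean-zero random walk in $k$ with $\mathrm{Var}(G_k(y)) = k\,F(y)(1-F(y))$. The sketch below is an illustration only (the unit-rate exponential service distribution is an assumption made here for concreteness); it checks these two facts by Monte Carlo.

```python
import math
import random

def G_k(etas, y, F):
    # Centered empirical process of (71): sum of 1{eta_i <= y} - F(y).
    return sum((1.0 if e <= y else 0.0) - F(y) for e in etas)

random.seed(1)
# Illustrative choice: Exp(1) service times, so F(y) = 1 - exp(-y).
F = lambda y: 1.0 - math.exp(-y) if y >= 0 else 0.0
k, y, reps = 50, 1.0, 4000
samples = [G_k([random.expovariate(1.0) for _ in range(k)], y, F)
           for _ in range(reps)]
mean = sum(samples) / reps
var = sum((s - mean) ** 2 for s in samples) / reps
target_var = k * F(y) * (1.0 - F(y))  # binomial variance k p (1 - p)
```

The sample mean should be near 0 and the sample variance near $kF(y)(1-F(y))$, reflecting that each summand is a centered Bernoulli variable.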
and, for convenience, let $G_k(y) = 0$ for $y < 0$. Then, let $(\mathcal{G}_k)_{k\ge1}$ be the $\mathcal{S}'$-…
For each $n \ge 1$, define the fluid-scaled quantities

\[
\bar{\mathcal{A}}^n_0 \equiv \frac{\mathcal{A}^n_0}{n},\qquad
\bar E^n \equiv \frac{E^n}{n},\qquad
\bar{\mathcal{T}}^{0,n} \equiv \frac{\mathcal{T}^{0,n}}{n},\qquad
\bar{\mathcal{T}}^n \equiv \frac{\mathcal{T}^n}{n},\qquad
\bar{\mathcal{A}}^n \equiv \frac{\mathcal{A}^n}{n}, \tag{72}
\]

and set $\bar{\mathcal{E}}^n \equiv \bar E^n\delta_0$.
Using (17), Theorem 3.3 and Proposition 3.6, it is straightforward to show that one may write

\[
\bar{\mathcal{A}}^n = \mathcal{B}_A\big(\bar{\mathcal{A}}^n_0 + \bar{\mathcal{E}}^n - (\bar{\mathcal{T}}^{0,n} + \bar{\mathcal{T}}^n)\big), \tag{73}
\]

where the map $\mathcal{B}_A : D([0,T],\mathcal{S}') \to D([0,T],\mathcal{S}')$ is continuous. We then have the following result.

Proposition 5.1. If $\bar{\mathcal{A}}^n_0 + \bar{\mathcal{E}}^n \Rightarrow \bar{\mathcal{A}}_0 + \bar{\mathcal{E}}$ in $D([0,T],\mathcal{S}')$ as $n\to\infty$, then

\[
\bar{\mathcal{A}}^n_0 + \bar{\mathcal{E}}^n - (\bar{\mathcal{T}}^{0,n} + \bar{\mathcal{T}}^n) \Rightarrow \bar{\mathcal{A}}_0 + \bar{\mathcal{E}} \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty.
\]

Proof. We first note that, by Theorem 1.5, it is sufficient to show that if $\bar{\mathcal{A}}^n_0 + \bar{\mathcal{E}}^n \Rightarrow \bar{\mathcal{A}}_0 + \bar{\mathcal{E}}$ as $n\to\infty$, then

\[
\bar{\mathcal{T}}^{0,n} + \bar{\mathcal{T}}^n \Rightarrow 0 \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty. \tag{74}
\]
Let $T>0$ and $0\le t\le T$. Then, for each $\varphi\in\mathcal{S}$, we have by Proposition 4.1 that

\begin{align*}
|\langle\langle \bar{\mathcal{T}}^{0,n} + \bar{\mathcal{T}}^n \rangle\rangle_t(\varphi,\varphi)|
&= \frac{1}{n^2} \Big( \sum_{i=1}^{A^n_0(\infty)} \int_0^{\tilde\eta^n_i \wedge t} \varphi^2(x + a^n_i)\, h^n_i(x)\,dx
+ \sum_{i=1}^{E^n_t} \int_0^{\eta^n_i \wedge (t-\tau^n_i)^+} \varphi^2(x)\, h(x)\,dx \Big) \\
&\le \frac{\|h\|_\infty}{n^2} \sum_{i=1}^{A^n_0(\infty)} \int_0^t \varphi^2(x + a^n_i)\,dx
+ \frac{\|h\|_\infty\,\|\varphi\|_2^2}{n}\,\bar E^n_T. \tag{75}
\end{align*}

Thus, from (75) we obtain that for each $0\le t\le T$,

\begin{align*}
|\langle\langle\bar{\mathcal{T}}^{0,n}+\bar{\mathcal{T}}^n\rangle\rangle_t(\varphi,\varphi)|
&\le \frac{\|h\|_\infty}{n^2}\sum_{i=1}^{A^n_0(\infty)}\int_0^t \varphi^2(x+a^n_i)\,dx + \frac{\|h\|_\infty\,\|\varphi\|_2^2}{n}\,\bar E^n_T \tag{76}\\
&= \frac{\|h\|_\infty}{n^2}\int_0^t \sum_{i=1}^{A^n_0(\infty)} \varphi^2(x+a^n_i)\,dx + \frac{\|h\|_\infty\,\|\varphi\|_2^2}{n}\,\bar E^n_T \\
&= \frac{\|h\|_\infty}{n}\int_0^t \langle \bar{\mathcal{A}}^n_0, \varphi^2_x\rangle\,dx + \frac{\|h\|_\infty\,\|\varphi\|_2^2}{n}\,\bar E^n_T \\
&\le \frac{\|h\|_\infty\, t}{n}\, q_K(\bar{\mathcal{A}}^n_0) + \frac{\|h\|_\infty\,\|\varphi\|_2^2}{n}\,\bar E^n_T,
\end{align*}

where the set $K$ is given by $K = \{\varphi^2_x,\ 0\le x\le t\}$, with $\varphi^2_x(\cdot)\equiv\varphi^2(\cdot+x)$. By Lemma 3.5, the set $K$ is bounded in $\mathcal{S}$ and hence $q_K$ is a seminorm on $\mathcal{S}'$. Since $\bar{\mathcal{A}}^n_0$ and $\bar E^n_T$ converge weakly, it follows that

\[
\frac{\|h\|_\infty\, t}{n}\, q_K(\bar{\mathcal{A}}^n_0) + \frac{\|h\|_\infty\,\|\varphi\|_2^2}{n}\,\bar E^n_T \Rightarrow 0 \quad \text{as } n\to\infty.
\]

By (76), this then implies that

\[
\langle\langle\bar{\mathcal{T}}^{0,n}+\bar{\mathcal{T}}^n\rangle\rangle(\varphi,\varphi) \Rightarrow 0 \quad \text{in } D([0,T],\mathbb{R}) \text{ as } n\to\infty. \tag{77}
\]
We now verify that Parts 1 and 2 of Theorem 1.5 are satisfied for the sequence $(\bar{\mathcal{T}}^{0,n}+\bar{\mathcal{T}}^n)_{n\ge1}$, with the limit point being the function which is identically 0. We begin with Part 1. Using the fact that the maximum jump of both $\langle\bar{\mathcal{T}}^{0,n}+\bar{\mathcal{T}}^n,\varphi\rangle$ and $\langle\langle\bar{\mathcal{T}}^{0,n}+\bar{\mathcal{T}}^n\rangle\rangle(\varphi,\varphi)$ is bounded over the interval $[0,T]$ uniformly in $n$, we obtain by (77) and the martingale FCLT (see Theorem 7.1.4 of [10] or [34]) that

\[
\langle\bar{\mathcal{T}}^{0,n}+\bar{\mathcal{T}}^n,\varphi\rangle \Rightarrow 0 \quad \text{in } D([0,T],\mathbb{R}) \text{ as } n\to\infty. \tag{78}
\]

Thus, Part 1 of Theorem 1.5 holds. We next check that Part 2 holds. Let $m\ge1$, and let $t_1,\dots,t_m\in[0,T]$ and $\varphi_1,\dots,\varphi_m\in\mathcal{S}$. By (78), we have that for each $1\le i\le m$,

\[
\langle(\bar{\mathcal{T}}^{0,n}+\bar{\mathcal{T}}^n)_{t_i},\varphi_i\rangle \Rightarrow 0 \quad \text{in } \mathbb{R} \text{ as } n\to\infty.
\]

However, by Theorem 3.9 of [1] this now implies that

\[
\big(\langle(\bar{\mathcal{T}}^{0,n}+\bar{\mathcal{T}}^n)_{t_1},\varphi_1\rangle,\dots,\langle(\bar{\mathcal{T}}^{0,n}+\bar{\mathcal{T}}^n)_{t_m},\varphi_m\rangle\big) \Rightarrow (0,\dots,0) \quad \text{in } \mathbb{R}^m \text{ as } n\to\infty.
\]

Thus, we have shown that Part 2 of Theorem 1.5 holds and so (74) is proven. This completes the proof.
We are now in a position to prove the main result of this subsection. We have the following.
Theorem 5.2. If $\bar{\mathcal{A}}^n_0+\bar{\mathcal{E}}^n \Rightarrow \bar{\mathcal{A}}_0+\bar{\mathcal{E}}$ in $D([0,T],\mathcal{S}')$ as $n\to\infty$, then

\[
\bar{\mathcal{A}}^n \Rightarrow \bar{\mathcal{A}} \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty,
\]

where $\bar{\mathcal{A}}$ is the unique solution to the integral equation

\[
\langle\bar{\mathcal{A}}_t,\varphi\rangle = \langle\bar{\mathcal{A}}_0,\varphi\rangle + \langle\bar{\mathcal{E}}_t,\varphi\rangle - \int_0^t \langle\bar{\mathcal{A}}_s,\varphi h\rangle\,ds + \int_0^t \langle\bar{\mathcal{A}}_s,\varphi'\rangle\,ds. \tag{79}
\]

Proof. Since $\bar{\mathcal{A}}^n_0+\bar{\mathcal{E}}^n \Rightarrow \bar{\mathcal{A}}_0+\bar{\mathcal{E}}$, it follows immediately by Proposition 5.1 that

\[
\bar{\mathcal{A}}^n_0+\bar{\mathcal{E}}^n - (\bar{\mathcal{T}}^{0,n}+\bar{\mathcal{T}}^n) \Rightarrow \bar{\mathcal{A}}_0+\bar{\mathcal{E}} \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty. \tag{80}
\]

Next, recall that by (73) we have that $\bar{\mathcal{A}}^n = \mathcal{B}_A(\bar{\mathcal{A}}^n_0+\bar{\mathcal{E}}^n-(\bar{\mathcal{T}}^{0,n}+\bar{\mathcal{T}}^n))$, where the map $\mathcal{B}_A: D([0,T],\mathcal{S}')\to D([0,T],\mathcal{S}')$ is continuous. The result then follows by the continuous mapping theorem.

We now conclude this subsection by providing an additional condition on the arrival process under which a stationary solution to the fluid limit equation (79) may be explicitly found. Note also that our condition in Proposition 5.4 below holds, for example, if the arrival process to the $n$th system is a renewal process which has been sped up by a factor of $n$ (as will be the case for the GI/GI/$\infty$ queue).
Proposition 5.4. If $\langle\bar{\mathcal{E}}_t,\varphi\rangle = t\varphi(0)$ for all $t\ge0$ and $\varphi\in\mathcal{S}$, and $\bar{\mathcal{A}}_0 = F_e$, then $\bar{\mathcal{A}}_t = F_e$ for all $t\ge0$ solves (79).

Proof. By (79), it suffices to verify that for each $\varphi\in\mathcal{S}$ and $t\ge0$,

\[
\int_{\mathbb{R}_+} \varphi(y)\,dF_e(y) = \int_{\mathbb{R}_+}\varphi(y)\,dF_e(y) + t\varphi(0) - t\int_{\mathbb{R}_+}\big(h(y)\varphi(y) - \varphi'(y)\big)\,dF_e(y).
\]

However, this follows since

\begin{align*}
t\int_{\mathbb{R}_+}\big(h(y)\varphi(y)-\varphi'(y)\big)\,dF_e(y)
&= t\int_{\mathbb{R}_+}\big(h(y)\varphi(y)-\varphi'(y)\big)\,\bar F(y)\,dy \\
&= t\int_{\mathbb{R}_+}\big(f(y)\varphi(y)-\bar F(y)\varphi'(y)\big)\,dy \\
&= -t\int_{\mathbb{R}_+}\big(\bar F(y)\varphi(y)\big)'\,dy \\
&= t\varphi(0),
\end{align*}

where the first equality uses $dF_e(y)=\bar F(y)\,dy$ and the second uses $h=f/\bar F$. This completes the proof.
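The integration-by-parts computation above can be verified numerically. The sketch below is an illustration under the assumption of unit-mean exponential service, so that $h\equiv1$ and $dF_e(y)=\bar F(y)\,dy=e^{-y}\,dy$; it evaluates $\int_0^\infty (h(y)\varphi(y)-\varphi'(y))\bar F(y)\,dy$ for a rapidly decreasing test function and compares the result with $\varphi(0)$.

```python
import math

def stationarity_integral(phi, dphi, h, Fbar, upper=20.0, n=20000):
    # Numerically evaluate int_0^inf (h(y) phi(y) - phi'(y)) Fbar(y) dy
    # by the midpoint rule; per the computation above it should equal phi(0).
    dy = upper / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * dy
        total += (h(y) * phi(y) - dphi(y)) * Fbar(y) * dy
    return total

# Illustrative test function phi(y) = exp(-y^2), with phi(0) = 1.
phi = lambda y: math.exp(-y * y)
dphi = lambda y: -2.0 * y * math.exp(-y * y)
# Unit-mean exponential service: hazard h = 1, Fbar(y) = exp(-y).
value = stationarity_integral(phi, dphi, lambda y: 1.0,
                              lambda y: math.exp(-y))
```

Truncating the integral at $y=20$ is harmless here, since the integrand decays like $e^{-y}$.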
5.2 Residuals

We next proceed to analyze the residual service time process $\mathcal{R}$ of 2.2. Our setup in this subsection is the same as that in the previous subsection. However, in addition to the fluid-scaled quantities already defined in 5.1, we also now define for each $n\ge1$ the new fluid-scaled quantities

\[
\bar{\mathcal{R}}^n \equiv \frac{\mathcal{R}^n}{n}, \qquad \bar{\mathcal{R}}^n_0 \equiv \frac{\mathcal{R}^n_0}{n} \qquad \text{and} \qquad \bar{\mathcal{G}}^n \equiv \frac{\mathcal{G}^n}{n}.
\]

Using (25), Theorem 3.3 and Proposition 3.7, it is now straightforward to show that

\[
\bar{\mathcal{R}}^n = \mathcal{B}_R\big(\bar{\mathcal{R}}^n_0 + \bar E^n F + \bar{\mathcal{G}}^n\big), \tag{81}
\]

where the map $\mathcal{B}_R: D([0,T],\mathcal{S}')\to D([0,T],\mathcal{S}')$ is continuous. We then have the following result.

Proposition 5.5. If $\bar{\mathcal{R}}^n_0 + \bar E^n F \Rightarrow \bar{\mathcal{R}}_0 + \bar E F$ in $D([0,T],\mathcal{S}')$ as $n\to\infty$, then

\[
\bar{\mathcal{R}}^n_0 + \bar E^n F + \bar{\mathcal{G}}^n \Rightarrow \bar{\mathcal{R}}_0 + \bar E F \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty. \tag{82}
\]
Proof. We first note that, by Theorem 1.5, it is sufficient to show that if $\bar{\mathcal{R}}^n_0+\bar E^n F \Rightarrow \bar{\mathcal{R}}_0+\bar E F$ as $n\to\infty$, then

\[
\bar{\mathcal{G}}^n \Rightarrow 0 \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty. \tag{83}
\]

Let $T>0$ and $0\le t\le T$. We then have by Proposition 4.2 and the assumption that $\bar{\mathcal{R}}^n_0+\bar E^n F \Rightarrow \bar{\mathcal{R}}_0+\bar E F$, that for each $\varphi,\psi\in\mathcal{S}$,

\[
|[\bar{\mathcal{G}}^n]_t(\varphi,\psi)|
= \frac{1}{n^2}\Big|\sum_{i=1}^{E^n_t}\big(\varphi(\eta_i)-\langle F,\varphi\rangle\big)\big(\psi(\eta_i)-\langle F,\psi\rangle\big)\Big|
\le \frac{E^n_T}{n^2}\; 4\sup_{0\le s<\infty}|\varphi(s)|\,\sup_{0\le s<\infty}|\psi(s)|
\Rightarrow 0 \ \text{in } \mathbb{R} \text{ as } n\to\infty. \tag{84}
\]

The remainder of the proof now proceeds in a similar manner to the proof of Proposition 5.1. We omit the details.
The following is now our main result of this subsection.

Theorem 5.6. If $\bar{\mathcal{R}}^n_0+\bar E^n F \Rightarrow \bar{\mathcal{R}}_0+\bar E F$ in $D([0,T],\mathcal{S}')$ as $n\to\infty$, then

\[
\bar{\mathcal{R}}^n \Rightarrow \bar{\mathcal{R}} \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty,
\]

where $\bar{\mathcal{R}}$ is the unique solution to the integral equation

\[
\langle\bar{\mathcal{R}}_t,\varphi\rangle = \langle\bar{\mathcal{R}}_0,\varphi\rangle + \bar E_t\langle F,\varphi\rangle - \int_0^t\langle\bar{\mathcal{R}}_s,\varphi'\rangle\,ds. \tag{85}
\]

Proof. Since $\bar{\mathcal{R}}^n_0+\bar E^n F \Rightarrow \bar{\mathcal{R}}_0+\bar E F$, it follows by Proposition 5.5 that

\[
\bar{\mathcal{R}}^n_0+\bar E^n F+\bar{\mathcal{G}}^n \Rightarrow \bar{\mathcal{R}}_0+\bar E F \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty. \tag{86}
\]

Next, recall that by (81) we have that $\bar{\mathcal{R}}^n = \mathcal{B}_R(\bar{\mathcal{R}}^n_0+\bar E^n F+\bar{\mathcal{G}}^n)$, where the map $\mathcal{B}_R: D([0,T],\mathcal{S}')\to D([0,T],\mathcal{S}')$ is continuous. The result then follows by the continuous mapping theorem.

We next recall the definition of a generalized $\mathcal{S}'$-valued Ornstein-Uhlenbeck process. Our definitions are the same as those in [2].
Definition 6.1. A continuous $\mathcal{S}'$-valued process $X=(X_t)_{t\ge0}$ is called a (generalized) $\mathcal{S}'$-valued Ornstein-Uhlenbeck process if for each $\varphi\in\mathcal{S}$ and $t\ge0$,

\[
\langle X_t,\varphi\rangle = \langle X_0,\varphi\rangle + \int_0^t \langle X_u, A\varphi\rangle\,du + \langle W_t,\varphi\rangle,
\]

where $W \equiv (W_t)_{t\ge0}$ is a (generalized) $\mathcal{S}'$-valued Wiener process.

For the remainder of this subsection, we assume that $\bar{\mathcal{A}}^n_0+\bar{\mathcal{E}}^n \Rightarrow \bar{\mathcal{A}}_0+\bar{\mathcal{E}}$ as $n\to\infty$, where $\bar{\mathcal{A}}_0+\bar{\mathcal{E}}$ is a nonrandom quantity. By Theorem 5.2 of 5.1, this then implies that $\bar{\mathcal{A}}$ is a nonrandom quantity as well. Setting $\bar A^n_0(\infty) = n^{-1}A^n_0(\infty)$ for each $n\ge1$ and fixing $T\ge0$, we also assume that the sequences $\{\bar A^n_0(\infty),\ n\ge1\}$ and $\{\bar E^n_T,\ n\ge1\}$ are uniformly integrable.

Now, for each $n\ge1$, define the diffusion-scaled quantities

\[
\hat{\mathcal{A}}^n \equiv \sqrt n\big(\bar{\mathcal{A}}^n - \bar{\mathcal{A}}\big), \quad
\hat{\mathcal{A}}^n_0 \equiv \sqrt n\big(\bar{\mathcal{A}}^n_0 - \bar{\mathcal{A}}_0\big), \quad
\hat E^n \equiv \sqrt n\big(\bar E^n - \bar E\big), \quad
\hat{\mathcal{T}}^{0,n} \equiv \sqrt n\,\bar{\mathcal{T}}^{0,n}, \quad
\hat{\mathcal{T}}^n \equiv \sqrt n\,\bar{\mathcal{T}}^n,
\]

and set $\hat{\mathcal{E}}^n \equiv \hat E^n\delta_0$. Then, recalling the form of the fluid limit $\bar{\mathcal{A}}$ from Theorem 5.2, note that using system equation (17) in conjunction with Theorem 3.3 and Proposition 3.6, one has that for each $n\ge1$,

\[
\hat{\mathcal{A}}^n = \mathcal{B}_A\big(\hat{\mathcal{A}}^n_0 + \hat{\mathcal{E}}^n - (\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n)\big), \tag{87}
\]

where the map $\mathcal{B}_A: D([0,T],\mathcal{S}')\to D([0,T],\mathcal{S}')$ is as in 5.1. We then have the following result.

Lemma 6.3. If $\bar{\mathcal{A}}^n_0+\bar{\mathcal{E}}^n \Rightarrow \bar{\mathcal{A}}_0+\bar{\mathcal{E}}$ in $D([0,T],\mathcal{S}')$ as $n\to\infty$, then

\[
\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n \Rightarrow \hat{\mathcal{T}}^0+\hat{\mathcal{T}} \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty, \tag{88}
\]

where $\hat{\mathcal{T}}^0+\hat{\mathcal{T}}$ is a generalized $\mathcal{S}'$-valued Wiener process with covariance functional

\[
K_{\hat{\mathcal{T}}^0+\hat{\mathcal{T}}}(s,\varphi;t,\psi) = \int_0^{s\wedge t}\langle\bar{\mathcal{A}}_u,\varphi\psi h\rangle\,du. \tag{89}
\]
Proof. See Appendix.
We next have the following result, which provides a weak limit for the sequence $(\hat{\mathcal{A}}^n_0+\hat{\mathcal{E}}^n-(\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n))_{n\ge1}$.

Proposition 6.4. If $\hat{\mathcal{A}}^n_0+\hat{\mathcal{E}}^n \Rightarrow \hat{\mathcal{A}}_0+\hat{\mathcal{E}}$ in $D([0,T],\mathcal{S}')$ as $n\to\infty$, then

\[
\hat{\mathcal{A}}^n_0+\hat{\mathcal{E}}^n-(\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n) \Rightarrow \hat{\mathcal{A}}_0+\hat{\mathcal{E}}-(\hat{\mathcal{T}}^0+\hat{\mathcal{T}}) \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty, \tag{90}
\]

where $\hat{\mathcal{T}}^0+\hat{\mathcal{T}}$ is as given in Lemma 6.3 and is independent of $\hat{\mathcal{A}}_0+\hat{\mathcal{E}}$.
Proof. We will check that the sequence $(\hat{\mathcal{A}}^n_0+\hat{\mathcal{E}}^n-(\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n))_{n\ge1}$ satisfies Parts 1 and 2 of Theorem 1.5. We begin with Part 1. Let $\varphi\in\mathcal{S}$. By assumption, the sequence $(\langle\hat{\mathcal{A}}^n_0+\hat{\mathcal{E}}^n,\varphi\rangle)_{n\ge1}$ weakly converges and hence is tight in $D([0,T],\mathbb{R})$, and, by Lemma 6.3, the sequence $(\langle\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n,\varphi\rangle)_{n\ge1}$ weakly converges and hence is tight in $D([0,T],\mathbb{R})$. Therefore, there must exist a subsequence $(n_\ell)_{\ell\ge1}$ along which we have the joint convergence

\[
\big(\langle\hat{\mathcal{A}}^{n_\ell}_0+\hat{\mathcal{E}}^{n_\ell},\varphi\rangle,\ \langle\hat{\mathcal{T}}^{0,n_\ell}+\hat{\mathcal{T}}^{n_\ell},\varphi\rangle\big)
\Rightarrow
\big(\langle\hat{\mathcal{A}}_0+\hat{\mathcal{E}},\varphi\rangle,\ \langle\hat{\mathcal{T}}^0+\hat{\mathcal{T}},\varphi\rangle\big)
\quad \text{in } D^2([0,T],\mathbb{R}) \text{ as } \ell\to\infty.
\]

Now note that clearly the first coordinate of the limit has the same distribution as $\langle\hat{\mathcal{A}}_0+\hat{\mathcal{E}},\varphi\rangle$ and, similarly, the second coordinate has the same distribution as $\langle\hat{\mathcal{T}}^0+\hat{\mathcal{T}},\varphi\rangle$. We now verify that $\langle\hat{\mathcal{A}}_0+\hat{\mathcal{E}},\varphi\rangle$ and $\langle\hat{\mathcal{T}}^0+\hat{\mathcal{T}},\varphi\rangle$ are independent of one another. This will then imply the convergence

\[
\langle\hat{\mathcal{A}}^n_0+\hat{\mathcal{E}}^n,\varphi\rangle - \langle\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n,\varphi\rangle
\Rightarrow
\langle\hat{\mathcal{A}}_0+\hat{\mathcal{E}},\varphi\rangle - \langle\hat{\mathcal{T}}^0+\hat{\mathcal{T}},\varphi\rangle
\quad \text{in } D([0,T],\mathbb{R}) \text{ as } n\to\infty,
\]

along the given subsequence. However, since the subsequence was arbitrary, this will then imply convergence along the entire sequence, thus verifying Part 1 of Theorem 1.5.
Let $t_1,t_2\in[0,T]$ with $t_1\le t_2$, let $a_1,a_2,b_1,b_2\in\mathbb{R}$ and let $x,y\in\mathbb{R}$. For brevity, set

\[
\hat X^n \equiv a_1\langle\hat{\mathcal{A}}^n_0+\hat{\mathcal{E}}^n_{t_1},\varphi\rangle+a_2\langle\hat{\mathcal{A}}^n_0+\hat{\mathcal{E}}^n_{t_2},\varphi\rangle,
\qquad
\hat Y^n \equiv b_1\langle\hat{\mathcal{T}}^{0,n}_{t_1}+\hat{\mathcal{T}}^n_{t_1},\varphi\rangle+b_2\langle\hat{\mathcal{T}}^{0,n}_{t_2}+\hat{\mathcal{T}}^n_{t_2},\varphi\rangle,
\]

and let $\hat X$ and $\hat Y$ denote the corresponding quantities with the limits $\hat{\mathcal{A}}_0+\hat{\mathcal{E}}$ and $\hat{\mathcal{T}}^0+\hat{\mathcal{T}}$ in place of their prelimit versions. We will show that

\[
P\big(\hat X^n\le x,\ \hat Y^n\le y\big) \to P\big(\hat X\le x\big)\,P\big(\hat Y\le y\big) \tag{91}
\]

as $n\to\infty$. The analogous proof for $t_1,\dots,t_m\in[0,T]$ with $m>2$ follows similarly. This will then be sufficient to show that $\langle\hat{\mathcal{A}}_0+\hat{\mathcal{E}},\varphi\rangle$ and $\langle\hat{\mathcal{T}}^0+\hat{\mathcal{T}},\varphi\rangle$ are independent of one another. First note that we may write

\[
P\big(\hat X^n\le x,\ \hat Y^n\le y\big) = E\big[1\{\hat X^n\le x\}\,1\{\hat Y^n\le y\}\big].
\]

However, by the tower property of conditional expectations [6], we have that

\[
E\big[1\{\hat X^n\le x\}\,1\{\hat Y^n\le y\}\big]
= E\big[1\{\hat X^n\le x\}\,E[1\{\hat Y^n\le y\}\mid \mathcal{A}^n_0, E^n]\big].
\]

We now claim that

\[
E[1\{\hat Y^n\le y\}\mid \mathcal{A}^n_0, E^n] \stackrel{P}{\longrightarrow} P\big(\hat Y\le y\big) \tag{92}
\]

as $n\to\infty$. Then, since by assumption

\[
E\big[1\{\hat X^n\le x\}\big] \to P\big(\hat X\le x\big)
\]

as $n\to\infty$, this will then imply (91), thus verifying Part 1 of Theorem 1.5.

In order to see that (92) holds, first note that

\[
E[1\{\hat Y^n\le y\}\mid \mathcal{A}^n_0, E^n] = P\big(\hat Y^n\le y\mid \mathcal{A}^n_0, E^n\big).
\]

Now recall from (60) that we may write

\[
\hat Y^n = \sum_{i=1}^{A^n_0(\infty)} \big(\langle\hat{\mathcal{T}}^{0,n,i}_{t_1}, b_1\varphi\rangle+\langle\hat{\mathcal{T}}^{0,n,i}_{t_2}, b_2\varphi\rangle\big)
+ \sum_{i=1}^{E^n_{t_2}} \big(\langle\hat{\mathcal{T}}^{n,i}_{t_1}, b_1\varphi\rangle+\langle\hat{\mathcal{T}}^{n,i}_{t_2}, b_2\varphi\rangle\big).
\]

Moreover, given $\mathcal{A}^n_0$ and $E^n$, the random variables $\langle\hat{\mathcal{T}}^{0,n,i}_{t_1},b_1\varphi\rangle+\langle\hat{\mathcal{T}}^{0,n,i}_{t_2},b_2\varphi\rangle$, $i=1,\dots,A^n_0(\infty)$, and $\langle\hat{\mathcal{T}}^{n,i}_{t_1},b_1\varphi\rangle+\langle\hat{\mathcal{T}}^{n,i}_{t_2},b_2\varphi\rangle$, $i=1,\dots,E^n_{t_2}$, are mutually independent, with mean zero. In addition, it is straightforward to calculate that

\[
E\Big[\Big(\sum_{i=1}^{A^n_0(\infty)}\big(\langle\hat{\mathcal{T}}^{0,n,i}_{t_1},b_1\varphi\rangle+\langle\hat{\mathcal{T}}^{0,n,i}_{t_2},b_2\varphi\rangle\big)\Big)^2\,\Big|\,\mathcal{A}^n_0,E^n\Big]
+E\Big[\Big(\sum_{i=1}^{E^n_{t_2}}\big(\langle\hat{\mathcal{T}}^{n,i}_{t_1},b_1\varphi\rangle+\langle\hat{\mathcal{T}}^{n,i}_{t_2},b_2\varphi\rangle\big)\Big)^2\,\Big|\,\mathcal{A}^n_0,E^n\Big]
= E\Big[\int_0^{t_1}\langle\bar{\mathcal{A}}^n_u,(b_1+b_2)^2\varphi^2h\rangle\,du\,\Big|\,\mathcal{A}^n_0,E^n\Big]. \tag{93}
\]

Also, note that for each $i=1,\dots,A^n_0(\infty)$,

\[
E\Big[\big|\langle\hat{\mathcal{T}}^{0,n,i}_{t_1},b_1\varphi\rangle+\langle\hat{\mathcal{T}}^{0,n,i}_{t_2},b_2\varphi\rangle\big|^3\,\Big|\,\mathcal{A}^n_0,E^n\Big]
\le \frac{(|b_1|+|b_2|)\,\|\varphi\|_\infty\,(1+t_2\|h\|_\infty)}{\sqrt n}\,
E\Big[\big(\langle\hat{\mathcal{T}}^{0,n,i}_{t_1},b_1\varphi\rangle+\langle\hat{\mathcal{T}}^{0,n,i}_{t_2},b_2\varphi\rangle\big)^2\,\Big|\,\mathcal{A}^n_0,E^n\Big], \tag{94}
\]

and, similarly, for each $i=1,\dots,E^n_{t_2}$,

\[
E\Big[\big|\langle\hat{\mathcal{T}}^{n,i}_{t_1},b_1\varphi\rangle+\langle\hat{\mathcal{T}}^{n,i}_{t_2},b_2\varphi\rangle\big|^3\,\Big|\,\mathcal{A}^n_0,E^n\Big]
\le \frac{(|b_1|+|b_2|)\,\|\varphi\|_\infty\,(1+t_2\|h\|_\infty)}{\sqrt n}\,
E\Big[\big(\langle\hat{\mathcal{T}}^{n,i}_{t_1},b_1\varphi\rangle+\langle\hat{\mathcal{T}}^{n,i}_{t_2},b_2\varphi\rangle\big)^2\,\Big|\,\mathcal{A}^n_0,E^n\Big]. \tag{95}
\]
Now let $\Phi$ denote the CDF of a standard normal random variable. It then follows by (93), (94), (95) and an application of the Berry-Esseen theorem [6] for independent (but not necessarily identically distributed) random variables that

\[
\Bigg| P\Bigg( \frac{b_1\langle\hat{\mathcal{T}}^{0,n}_{t_1}+\hat{\mathcal{T}}^n_{t_1},\varphi\rangle+b_2\langle\hat{\mathcal{T}}^{0,n}_{t_2}+\hat{\mathcal{T}}^n_{t_2},\varphi\rangle}
{\big(E\big[\int_0^{t_1}\langle\bar{\mathcal{A}}^n_u,(b_1+b_2)^2\varphi^2h\rangle\,du\mid\mathcal{A}^n_0,E^n\big]\big)^{1/2}} \le y \,\Bigg|\, \mathcal{A}^n_0,E^n\Bigg) - \Phi(y) \Bigg|
\le \frac{(|b_1|+|b_2|)\,\|\varphi\|_\infty\,(1+t_2\|h\|_\infty)}
{\sqrt n\,\big(E\big[\int_0^{t_1}\langle\bar{\mathcal{A}}^n_u,(b_1+b_2)^2\varphi^2h\rangle\,du\mid\mathcal{A}^n_0,E^n\big]\big)^{1/2}}.
\]
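The Berry-Esseen theorem invoked above applies to sums of independent but not necessarily identically distributed random variables. As a standalone illustration (not tied to the particular summands of the proof, which are assumptions of this sketch to be unavailable here), the code below forms a standardized sum of independent uniform random variables with varying ranges and compares its distribution with the standard normal CDF $\Phi$ by Monte Carlo.

```python
import math
import random

def phi_cdf(y):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

random.seed(2)
# Independent, mean-zero, non-identically distributed summands:
# uniform on [-a_i, a_i] with varying half-widths a_i.
widths = [0.5 + (i % 7) * 0.25 for i in range(400)]
total_var = sum(a * a / 3.0 for a in widths)  # Var of U[-a, a] is a^2 / 3
reps, y = 5000, 0.5
hits = 0
for _ in range(reps):
    s = sum(random.uniform(-a, a) for a in widths)
    if s / math.sqrt(total_var) <= y:
        hits += 1
estimate = hits / reps  # Monte Carlo estimate of P(S_n / sd <= y)
```

With 400 bounded summands, the Berry-Esseen bound already guarantees the standardized sum is close to normal, so the empirical frequency should match $\Phi(y)$ to within Monte Carlo error.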
Hence, in order to complete the proof of (92), and hence verify Part 1 of Theorem 1.5, it suffices to show that

\[
E\Big[\int_0^{t_1}\langle\bar{\mathcal{A}}^n_u,(b_1+b_2)^2\varphi^2h\rangle\,du \,\Big|\, \mathcal{A}^n_0,E^n\Big]
\stackrel{P}{\longrightarrow} \int_0^{t_1}\langle\bar{\mathcal{A}}_u,(b_1+b_2)^2\varphi^2h\rangle\,du \quad \text{as } n\to\infty. \tag{96}
\]

However, note that by Theorem 5.2 and the continuity of the integral map [1], we have that

\[
\int_0^{t_1}\langle\bar{\mathcal{A}}^n_u,(b_1+b_2)^2\varphi^2h\rangle\,du
\stackrel{P}{\longrightarrow} \int_0^{t_1}\langle\bar{\mathcal{A}}_u,(b_1+b_2)^2\varphi^2h\rangle\,du \quad \text{as } n\to\infty. \tag{97}
\]

Next, note that the uniform integrability of $\{\bar A^n_0(\infty),\ n\ge1\}$ and $\{\bar E^n_T,\ n\ge1\}$ implies the uniform integrability of

\[
\Big\{\int_0^{t_1}\langle\bar{\mathcal{A}}^n_u,(b_1+b_2)^2\varphi^2h\rangle\,du,\ n\ge1\Big\}. \tag{98}
\]

It is then straightforward to show that (97) implies (96), thus completing the verification of Part 1 of Theorem 1.5. The proof of the verification of Part 2 of Theorem 1.5 follows in a similar manner to the above and has been omitted for the sake of brevity. This completes the proof.
The following is now our main result of this section. Its proof is a straightforward consequence of Theorem 1.1, (87) and Proposition 6.4.

Theorem 6.5. If $\hat{\mathcal{A}}^n_0+\hat{\mathcal{E}}^n \Rightarrow \hat{\mathcal{A}}_0+\hat{\mathcal{E}}$ in $D([0,T],\mathcal{S}')$ as $n\to\infty$, then

\[
\hat{\mathcal{A}}^n \Rightarrow \hat{\mathcal{A}} \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty, \tag{99}
\]

where $\hat{\mathcal{A}}$ is the solution to the stochastic integral equation

\[
\langle\hat{\mathcal{A}}_t,\varphi\rangle = \langle\hat{\mathcal{A}}_0,\varphi\rangle + \langle\hat{\mathcal{E}}_t,\varphi\rangle - \langle\hat{\mathcal{T}}^0_t+\hat{\mathcal{T}}_t,\varphi\rangle - \int_0^t\langle\hat{\mathcal{A}}_s,\varphi h\rangle\,ds + \int_0^t\langle\hat{\mathcal{A}}_s,\varphi'\rangle\,ds. \tag{100}
\]

Moreover, if the covariance functional of $\hat{\mathcal{E}}$ is given by $K_{\hat{\mathcal{E}}}(s,\varphi;t,\psi)=\sigma^2(s\wedge t)\varphi(0)\psi(0)$, then $\hat{\mathcal{A}}$ is a generalized $\mathcal{S}'$-valued Ornstein-Uhlenbeck process driven by the generalized $\mathcal{S}'$-valued Wiener process $\hat{\mathcal{E}}-(\hat{\mathcal{T}}^0+\hat{\mathcal{T}})$ with covariance functional

\[
K_{\hat{\mathcal{E}}-(\hat{\mathcal{T}}^0+\hat{\mathcal{T}})}(s,\varphi;t,\psi) = \sigma^2(s\wedge t)\varphi(0)\psi(0) + \int_0^{s\wedge t}\langle\bar{\mathcal{A}}_u,\varphi\psi h\rangle\,du. \tag{101}
\]

Proof. First note that by (87) we have that $\hat{\mathcal{A}}^n = \mathcal{B}_A(\hat{\mathcal{A}}^n_0+\hat{\mathcal{E}}^n-(\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n))$, where the map $\mathcal{B}_A:D([0,T],\mathcal{S}')\to D([0,T],\mathcal{S}')$ is continuous; together with Proposition 6.4, this yields (99) and (100). Next, suppose that $K_{\hat{\mathcal{E}}}(s,\varphi;t,\psi)=\sigma^2(s\wedge t)\varphi(0)\psi(0)$. Then, combining this with (89) and the fact that $\hat{\mathcal{T}}^0+\hat{\mathcal{T}}$ and $\hat{\mathcal{A}}_0+\hat{\mathcal{E}}$ are independent from Proposition 6.4, yields (101). Thus, by Definition 6.2, $\hat{\mathcal{A}}$ is an $\mathcal{S}'$-valued Ornstein-Uhlenbeck process.
Recall now from Proposition 5.4 of 5.1 that if $\langle\bar{\mathcal{E}}_t,\varphi\rangle = t\varphi(0)$ and $\bar{\mathcal{A}}_0=F_e$, then $\bar{\mathcal{A}}=F_e$ is a stationary solution to the fluid limit equation (79). In this case, if $K_{\hat{\mathcal{E}}}(s,\varphi;t,\psi)=\sigma^2(s\wedge t)\varphi(0)\psi(0)$, then the resulting limiting diffusion-scaled age process $\hat{\mathcal{A}}$ of Theorem 6.5 is a time-homogeneous Markov process. Our result is the following. Note also that a similar approach may be used to analyze the diffusion limit of the residual service time process in Theorem 6.9 in the following subsection.

Proposition 6.6. If $K_{\hat{\mathcal{E}}}(s,\varphi;t,\psi)=\sigma^2(s\wedge t)\varphi(0)\psi(0)$ and $\bar{\mathcal{A}}_0=F_e$, then $\hat{\mathcal{A}}$ is an $\mathcal{S}'$-valued Ornstein-Uhlenbeck process driven by an $\mathcal{S}'$-valued Wiener process with covariance functional

\[
K_{\hat{\mathcal{E}}-(\hat{\mathcal{T}}^0+\hat{\mathcal{T}})}(s,\varphi;t,\psi) = (s\wedge t)\,\langle \sigma^2\delta_0+F,\ \varphi\psi\rangle. \tag{102}
\]

Proof. It is clear that the covariance functional of $\hat{\mathcal{E}}$ is given by

\[
K_{\hat{\mathcal{E}}}(s,\varphi;t,\psi)=(s\wedge t)\,\sigma^2\langle\delta_0,\varphi\psi\rangle. \tag{103}
\]

We now show that the covariance functional of $\hat{\mathcal{T}}^0+\hat{\mathcal{T}}$ is given by

\[
K_{\hat{\mathcal{T}}^0+\hat{\mathcal{T}}}(s,\varphi;t,\psi)=(s\wedge t)\,\langle F,\varphi\psi\rangle. \tag{104}
\]

Then, since $\hat{\mathcal{E}}$ and $\hat{\mathcal{T}}^0+\hat{\mathcal{T}}$ are independent, summing (103) and (104) will prove (102), which will complete the proof.

Note that by Proposition 5.4 we have that, since by assumption $\bar{\mathcal{A}}_0=F_e$, it follows that $\bar{\mathcal{A}}=F_e$ is the unique solution to the fluid limit equation (79). Therefore, by Lemma 6.3 we have that for each $\varphi,\psi\in\mathcal{S}$ and $s,t\ge0$,

\begin{align*}
K_{\hat{\mathcal{T}}^0+\hat{\mathcal{T}}}(s,\varphi;t,\psi)
&= \int_0^{s\wedge t}\langle\bar{\mathcal{A}}_u,\varphi\psi h\rangle\,du
= \int_0^{s\wedge t}\int_{\mathbb{R}_+}\varphi(x)\psi(x)h(x)\,d\bar{\mathcal{A}}_u(x)\,du \\
&= (s\wedge t)\int_{\mathbb{R}_+}\varphi(x)\psi(x)h(x)\,\bar F(x)\,dx
= (s\wedge t)\int_{\mathbb{R}_+}\varphi(x)\psi(x)f(x)\,dx.
\end{align*}

This proves (104), which completes the proof.
We now note that using (27) of Theorem 3.3 and (44) of Proposition 3.6, one may obtain an explicit representation of $\hat{\mathcal{A}}$ in terms of the $\mathcal{S}'$-valued Wiener process driving it, with mean given by

\[
E[\langle\hat{\mathcal{A}}_t,\varphi\rangle] = \Big\langle \hat{\mathcal{A}}_0,\ \varphi_t\,\frac{\bar F_t}{\bar F} \Big\rangle,\qquad \varphi\in\mathcal{S}, \tag{105}
\]

where $\varphi_t(x)\equiv\varphi(x+t)$ and $(\bar F_t/\bar F)(x)\equiv\bar F(x+t)/\bar F(x)$, and covariance functional given for each $\varphi,\psi\in\mathcal{S}$ and $t\in[0,T]$ by

\[
E[\langle\hat{\mathcal{A}}_t,\varphi\rangle\langle\hat{\mathcal{A}}_t,\psi\rangle]
= \Big\langle F_e,\ \varphi_t\psi_t\,\frac{\bar F_t}{\bar F}\Big(1-\frac{\bar F_t}{\bar F}\Big)\Big\rangle
+ \int_0^t \varphi(u)\psi(u)\big(\bar F(u)+\sigma^2 F(u)\big)\,\bar F(u)\,du. \tag{106}
\]

In addition, taking limits as $t\to\infty$, one also finds that $\hat{\mathcal{A}}_t$ weakly converges as $t\to\infty$ to a Gaussian random variable $\hat{\mathcal{A}}_\infty$ with

\[
E[\langle\hat{\mathcal{A}}_\infty,\varphi\rangle\langle\hat{\mathcal{A}}_\infty,\psi\rangle] = \big\langle F_e,\ \varphi\psi\,(\bar F+\sigma^2F)\big\rangle.
\]

We now conclude this subsection by noting that one may heuristically attempt to substitute the test function $1\{x\ge0\}$ into the formula for $\hat{\mathcal{A}}$ provided by Theorem 3.3 in order to obtain an expression for the limiting diffusion-scaled total number of customers in the system. For instance, suppose that $\hat{\mathcal{A}}_0 = 0$ and that, as in the statement of Proposition 6.6, we have that $\langle\bar{\mathcal{E}},\varphi\rangle = \varphi(0)e$ for each $\varphi\in\mathcal{S}$ (where $e(t)\equiv t$), $\bar{\mathcal{A}}_0 = F_e$, and $\hat{\mathcal{E}}$ is an $\mathcal{S}'$-valued Wiener process with $K_{\hat{\mathcal{E}}}(s,\varphi;t,\psi)=\sigma^2(s\wedge t)\varphi(0)\psi(0)$. Then, using the form of the generator $B$ from (39), the semigroup $(S_t)_{t\ge0}$ from Proposition 3.6 and (102) of Proposition 6.6, one obtains after a substitution into Theorem 3.3 that the limiting diffusion-scaled number of customers in the system at time $t\ge0$ is heuristically given by

\[
\hat B_t-\int_0^t \hat B_s\, f(t-s)\,ds = \int_0^t \bar F(t-s)\,d\hat B_s,
\]

where $\hat B=(\hat B_t)_{t\ge0}$ is a Brownian motion with infinitesimal variance $\sigma^2+1$.
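The final equality above, $\hat B_t-\int_0^t \hat B_s f(t-s)\,ds=\int_0^t\bar F(t-s)\,d\hat B_s$, is an integration by parts, and its discretized form can be checked on a simulated Brownian path. The sketch below is illustrative only; the exponential service distribution and a unit-variance Brownian motion are assumptions made here for concreteness.

```python
import math
import random

random.seed(3)
# Exponential(1) service: Fbar(u) = exp(-u), density f(u) = exp(-u).
Fbar = lambda u: math.exp(-u)
f = lambda u: math.exp(-u)

t, N = 1.0, 2000
dt = t / N
# Discretized Brownian path B_0, ..., B_N with unit variance parameter.
B = [0.0]
for _ in range(N):
    B.append(B[-1] + random.gauss(0.0, math.sqrt(dt)))

# Left-hand side: B_t - int_0^t B_s f(t - s) ds (Riemann sum).
lhs = B[N] - sum(B[j] * f(t - j * dt) * dt for j in range(N))
# Right-hand side: int_0^t Fbar(t - s) dB_s (left-point stochastic sum).
rhs = sum(Fbar(t - j * dt) * (B[j + 1] - B[j]) for j in range(N))
```

On a grid of 2000 steps the two discretizations agree up to an $O(\Delta t)$ error, reflecting the pathwise integration-by-parts identity.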
6.2 Residuals

We next proceed to study the residual service time process $\mathcal{R}$ defined in 2.2. Our setup is the same as in 5.2. That is, we consider a sequence of G/GI/$\infty$ queues indexed by $n$, where the arrival rate to the $n$th system is of order $n$ and the service time distribution does not change with $n$. For the remainder of this subsection, we also assume that $\bar{\mathcal{R}}^n_0+\bar E^n F \Rightarrow \bar{\mathcal{R}}_0+\bar E F$ as $n\to\infty$, where $\bar{\mathcal{R}}_0+\bar E F$ is a nonrandom quantity. By Theorem 5.6 of 5.2, this implies that $\bar{\mathcal{R}}$ is nonrandom as well.

Now, for each $n\ge1$, in addition to the diffusion-scaled quantities defined in 6.1, let us also now define the diffusion-scaled quantities

\[
\hat{\mathcal{R}}^n \equiv \sqrt n\big(\bar{\mathcal{R}}^n-\bar{\mathcal{R}}\big), \qquad
\hat{\mathcal{R}}^n_0 \equiv \sqrt n\big(\bar{\mathcal{R}}^n_0-\bar{\mathcal{R}}_0\big) \qquad \text{and} \qquad
\hat{\mathcal{G}}^n \equiv \frac{\mathcal{G}^n}{\sqrt n}.
\]

Then, after recalling the form of the fluid limit $\bar{\mathcal{R}}$ from Theorem 5.6, note that using system equation (25) in conjunction with Theorem 3.3 and Proposition 3.7, one has that

\[
\hat{\mathcal{R}}^n = \mathcal{B}_R\big(\hat{\mathcal{R}}^n_0+\hat E^n F+\hat{\mathcal{G}}^n\big), \tag{107}
\]

where the map $\mathcal{B}_R: D([0,T],\mathcal{S}')\to D([0,T],\mathcal{S}')$ is as in 5.2. We also define the process $\tilde{\mathcal{G}}^n$ by setting, for each $\varphi\in\mathcal{S}$,

\[
\langle\tilde{\mathcal{G}}^n_t,\varphi\rangle = \frac{1}{\sqrt n}\sum_{i=1}^{\lfloor n\bar E_t\rfloor}\big(\varphi(\eta_i)-\langle F,\varphi\rangle\big), \qquad t\ge0.
\]

Note that it is clear that $\tilde{\mathcal{G}}^n$ is independent of $\hat{\mathcal{R}}^n_0+\hat E^n F$. We now have the following result. Its proof may be found in the Appendix.
proof may be found in the Appendix.
Lemma 6.7. If
1
n
0
+
E
n
T
1
0
+
ET in D([0, T], o
) as n , then
(
n
(
n
0 in D([0, T], o
) as n , (108)
39
and
(
n
( in D([0, T], o
) as n , (109)
where
( is an o
G
(s, ; t, ) = (
E
s
E
t
)Cov((), ()), (110)
where is a random variable with cdf F.
Proof. See Appendix.
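The covariance functional (110) reflects the law-of-large-numbers limit of the scaled optional bracket, $\tfrac1n\sum_{i\le\lfloor n\bar E_t\rfloor}(\varphi(\eta_i)-\langle F,\varphi\rangle)(\psi(\eta_i)-\langle F,\psi\rangle) \to \bar E_t\,\mathrm{Cov}(\varphi(\eta),\psi(\eta))$. The sketch below is an illustration only; Exp(1) service times and $\bar E_t = t$ are assumptions made here so the target covariance has a closed form.

```python
import math
import random

random.seed(4)
# Illustrative test functions phi, psi and Exp(1) service times eta.
phi = lambda x: math.exp(-x)
psi = lambda x: math.exp(-2.0 * x)

# For eta ~ Exp(1): E[exp(-a * eta)] = 1 / (1 + a).
m_phi, m_psi = 1.0 / 2.0, 1.0 / 3.0
m_cross = 1.0 / 4.0                   # E[phi(eta) psi(eta)] = E[exp(-3 eta)]
target_cov = m_cross - m_phi * m_psi  # Cov = 1/4 - 1/6 = 1/12

n, t = 20000, 1.0  # scaling parameter n and time horizon t (Ebar_t = t)
count = int(n * t)
etas = [random.expovariate(1.0) for _ in range(count)]
bracket = sum((phi(e) - m_phi) * (psi(e) - m_psi) for e in etas) / n
```

The empirical bracket should be close to $\bar E_t\,\mathrm{Cov}(\varphi(\eta),\psi(\eta)) = t/12$ for large $n$.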
Using Lemma 6.7, we may now prove the following result on the weak convergence of $(\hat{\mathcal{R}}^n_0+\hat E^n F+\hat{\mathcal{G}}^n)_{n\ge1}$.

Proposition 6.8. If $\hat{\mathcal{R}}^n_0+\hat E^n F \Rightarrow \hat{\mathcal{R}}_0+\hat E F$ in $D([0,T],\mathcal{S}')$ as $n\to\infty$, then

\[
\hat{\mathcal{R}}^n_0+\hat E^n F+\hat{\mathcal{G}}^n \Rightarrow \hat{\mathcal{R}}_0+\hat E F+\hat{\mathcal{G}} \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty, \tag{111}
\]

where $\hat{\mathcal{G}}$ is as given in Lemma 6.7 and is independent of $\hat{\mathcal{R}}_0+\hat E F$.

Proof. Since $\tilde{\mathcal{G}}^n$ is independent of $\hat{\mathcal{R}}^n_0+\hat E^n F$ for each $n\ge1$, it follows by Theorem 1.5 and (109) of Lemma 6.7 that we have the convergence

\[
\hat{\mathcal{R}}^n_0+\hat E^n F+\tilde{\mathcal{G}}^n \Rightarrow \hat{\mathcal{R}}_0+\hat E F+\hat{\mathcal{G}} \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty. \tag{112}
\]

The result now follows by (112), Theorem 1.5, (108) of Lemma 6.7 and the fact that we may write

\[
\hat{\mathcal{R}}^n_0+\hat E^n F+\hat{\mathcal{G}}^n = \hat{\mathcal{R}}^n_0+\hat E^n F+\tilde{\mathcal{G}}^n+\big(\hat{\mathcal{G}}^n-\tilde{\mathcal{G}}^n\big).
\]
The following is now the main result of this subsection. It provides a weak limit for the sequence $(\hat{\mathcal{R}}^n)_{n\ge1}$.

Theorem 6.9. If $\hat{\mathcal{R}}^n_0+\hat E^n F \Rightarrow \hat{\mathcal{R}}_0+\hat E F$ in $D([0,T],\mathcal{S}')$ as $n\to\infty$, then

\[
\hat{\mathcal{R}}^n \Rightarrow \hat{\mathcal{R}} \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty, \tag{113}
\]

where $\hat{\mathcal{R}}$ is the solution to the stochastic integral equation

\[
\langle\hat{\mathcal{R}}_t,\varphi\rangle = \langle\hat{\mathcal{R}}_0,\varphi\rangle + \langle\hat{\mathcal{G}}_t,\varphi\rangle + \hat E_t\langle F,\varphi\rangle - \int_0^t\langle\hat{\mathcal{R}}_s,\varphi'\rangle\,ds. \tag{114}
\]

Moreover, if the covariance functional of $\hat E$ is given by $K_{\hat E}(s,t)=\sigma^2(s\wedge t)$, then $\hat{\mathcal{R}}$ is an $\mathcal{S}'$-valued Ornstein-Uhlenbeck process driven by a generalized $\mathcal{S}'$-valued Wiener process with covariance functional

\[
K_{\hat E F+\hat{\mathcal{G}}}(s,\varphi;t,\psi) = \sigma^2(s\wedge t)\,E[\varphi(\eta)]\,E[\psi(\eta)] + \big(\bar E_s\wedge\bar E_t\big)\,\mathrm{Cov}\big(\varphi(\eta),\psi(\eta)\big), \tag{115}
\]

where $\eta$ is a random variable with cdf $F$.

Proof. First note that by (107) we have that $\hat{\mathcal{R}}^n = \mathcal{B}_R(\hat{\mathcal{R}}^n_0+\hat E^n F+\hat{\mathcal{G}}^n)$, where the map $\mathcal{B}_R: D([0,T],\mathcal{S}')\to D([0,T],\mathcal{S}')$ is continuous; together with Proposition 6.8, this yields (113) and (114). Next, note that if $K_{\hat E}(s,t)=\sigma^2(s\wedge t)$, then

\[
K_{\hat E F}(s,\varphi;t,\psi) = \sigma^2(s\wedge t)\,E[\varphi(\eta)]\,E[\psi(\eta)]. \tag{116}
\]

Combining (110) with (116) and the fact that $\hat E F$ and $\hat{\mathcal{G}}$ are independent yields (115).

Remark 6.10. Note that in the special case when the arrival process to the $n$th system is a Poisson process with rate $n$, we then have that $\bar E = e$ (where $e(t)\equiv t$) and that $\hat E$ turns out to be a Brownian motion with diffusion coefficient 1. It then follows that $K_{\hat E F+\hat{\mathcal{G}}}(s,\varphi;t,\psi)=(s\wedge t)\,E[\varphi(\eta)\psi(\eta)]$ and so Theorem 6.9 gives us a version of Theorem 3 of [8].
7 Acknowledgements
The authors would like to thank the referees for their numerous helpful comments and suggestions
which have helped to improve the clarity and overall exposition of the paper.
8 Appendix
In the Appendix, we provide the proofs of several supporting lemmas from the main body of the
paper. We begin with the proof of Lemma 3.4.
Proof of Lemma 3.4. We prove Part 1 by induction. For each $n\ge0$ and $t\ge0$ fixed, denote the quantity on the left-hand side of (41) by $L_n$. For the base case of $n=0$, it is straightforward to see that $L_0\le1$. Next, for the inductive step, suppose that (41) holds for $n=0,\dots,k-1$, and $t\ge0$. Then, we have that

\begin{align*}
\sup_{x\ge0}\left|\left(\frac{\bar F(x+t)}{\bar F(x)}\right)^{(k)}\right|
&= \sup_{x\ge0}\left|\left(\big(h(x)-h(x+t)\big)\frac{\bar F(x+t)}{\bar F(x)}\right)^{(k-1)}\right| \\
&\le \sup_{x\ge0}\sum_{i=0}^{k-1}\binom{k-1}{i}\left|\big(h(x)-h(x+t)\big)^{(k-1-i)}\right|\left|\left(\frac{\bar F(x+t)}{\bar F(x)}\right)^{(i)}\right| \\
&\le 2\sum_{i=0}^{k-1}\binom{k-1}{i}\|h^{(k-1-i)}\|_\infty\, L_i \;<\; \infty.
\end{align*}

This completes the proof of Part 1.
We next prove Part 2 by induction as well. First recall that for $s,t\ge0$ we may write

\[
\frac{\bar F(x+t)}{\bar F(x+s)} = \exp\left(-\int_{x+s}^{x+t}h(u)\,du\right). \tag{117}
\]

Hence, for the base case of $n=0$, we have (assuming without loss of generality that $s\le t$) that

\begin{align*}
\sup_{x\ge0}\left|\frac{\bar F(x+t)}{\bar F(x)}-\frac{\bar F(x+s)}{\bar F(x)}\right|
&= \sup_{x\ge0}\frac{\bar F(x+s)}{\bar F(x)}\left(1-\frac{\bar F(x+t)}{\bar F(x+s)}\right) \\
&\le \sup_{x\ge0}\frac{\bar F(x+s)}{\bar F(x)}\;\sup_{x\ge0}\left(1-\frac{\bar F(x+t)}{\bar F(x+s)}\right)
\le \sup_{x\ge0}\left(1-\frac{\bar F(x+t)}{\bar F(x+s)}\right) \\
&\le 1-e^{-\|h\|_\infty|t-s|}
\le \|h\|_\infty\,|t-s|,
\end{align*}

where the third inequality above follows from (117) and the final inequality follows from the mean value theorem. Next, for the inductive step, suppose that (42) holds for $n=0,1,\dots,k-1$. We then have that

\begin{align}
&\sup_{x\ge0}\left|\left(\frac{\bar F(x+t)}{\bar F(x)}-\frac{\bar F(x+s)}{\bar F(x)}\right)^{(k)}\right| \nonumber\\
&= \sup_{x\ge0}\left|\left(\big(h(x)-h(x+t)\big)\frac{\bar F(x+t)}{\bar F(x)}-\big(h(x)-h(x+s)\big)\frac{\bar F(x+s)}{\bar F(x)}\right)^{(k-1)}\right| \nonumber\\
&= \sup_{x\ge0}\left|\left(\big(h(x+s)-h(x+t)\big)\frac{\bar F(x+s)}{\bar F(x)}+\big(h(x)-h(x+t)\big)\left(\frac{\bar F(x+t)}{\bar F(x)}-\frac{\bar F(x+s)}{\bar F(x)}\right)\right)^{(k-1)}\right| \nonumber\\
&\le \sup_{x\ge0}\sum_{i=0}^{k-1}\binom{k-1}{i}\left|\big(h(x+s)-h(x+t)\big)^{(k-1-i)}\right|\left|\left(\frac{\bar F(x+s)}{\bar F(x)}\right)^{(i)}\right| \nonumber\\
&\qquad+\sum_{i=0}^{k-1}\binom{k-1}{i}\left|\big(h(x)-h(x+t)\big)^{(k-1-i)}\right|\left|\left(\frac{\bar F(x+t)}{\bar F(x)}-\frac{\bar F(x+s)}{\bar F(x)}\right)^{(i)}\right|. \tag{118}
\end{align}

Now note that since by Assumption 2.1 we have that $h\in C^\infty_b(\mathbb{R}_+)$, it follows that all of the derivatives of $h$ are bounded and hence uniformly continuous as well. Using this fact, Part 1 and the inductive hypothesis, it now follows that (118) is less than or equal to

\[
2|t-s|\sum_{i=0}^{k-1}\binom{k-1}{i}\Big(\|h^{(k-i)}\|_\infty\, L_i + \|h^{(k-1-i)}\|_\infty\, M_i\Big) \;\le\; M_k\,|t-s|.
\]

This proves Part 2 and completes the proof.
We next provide the proof of Lemma 6.3.
Proof of Lemma 6.3. We must show that

\[
\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n \Rightarrow \hat{\mathcal{T}}^0+\hat{\mathcal{T}} \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty. \tag{119}
\]

Let $\varphi,\psi\in\mathcal{S}$ and let $t\ge0$. It then follows by Proposition 4.1 that we may write

\begin{align}
\langle\langle\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n\rangle\rangle_t(\varphi,\psi)
&= \frac1n\Big(\sum_{i=1}^{A^n_0(\infty)}\int_0^{\tilde\eta^n_i\wedge t}\varphi(u+a^n_i)\psi(u+a^n_i)\,h^n_i(u)\,du
+ \sum_{i=1}^{E^n_t}\int_0^{\eta^n_i\wedge(t-\tau^n_i)^+}\varphi(u)\psi(u)\,h(u)\,du\Big) \nonumber\\
&= \int_0^t\langle\bar{\mathcal{A}}^n_s,\varphi\psi h\rangle\,ds, \tag{120}
\end{align}

where the second equality above follows as a result of Proposition 2.3. However, by the continuity of the integral mapping on $D([0,T],\mathbb{R})$, it now follows by Theorem 5.2 that

\[
\int_0^{\,\cdot}\langle\bar{\mathcal{A}}^n_s,\varphi\psi h\rangle\,ds
\Rightarrow \int_0^{\,\cdot}\langle\bar{\mathcal{A}}_s,\varphi\psi h\rangle\,ds
\quad \text{in } D([0,T],\mathbb{R}) \text{ as } n\to\infty. \tag{121}
\]

We now verify that $(\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n)_{n\ge1}$ satisfies Parts 1 and 2 of Theorem 1.5. We begin with Part 1. Let $\psi=\varphi$ in (121) and note that using the fact that the maximum jumps of both $\langle\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n,\varphi\rangle$ and $\langle\langle\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n\rangle\rangle(\varphi,\varphi)$ over the interval $[0,T]$ are uniformly bounded in $n$, along with the martingale FCLT [10] and (121), yields the limit

\[
\langle\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n,\varphi\rangle \Rightarrow \langle\hat{\mathcal{T}}^0+\hat{\mathcal{T}},\varphi\rangle \quad \text{in } D([0,T],\mathbb{R}) \text{ as } n\to\infty.
\]
Thus, Part 1 of Theorem 1.5 is satisfied. We next proceed to verify Part 2 of Theorem 1.5. Let $m\ge1$, let $\varphi_1,\dots,\varphi_m\in\mathcal{S}$ and let $1\le i,j\le m$. Then, using the fact that the maximum jumps of both

\[
\big(\langle\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n,\varphi_1\rangle,\dots,\langle\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n,\varphi_m\rangle\big)
\]

and $\langle\langle\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n\rangle\rangle(\varphi_i,\varphi_j)$ are bounded over the interval $[0,T]$, uniformly in $n$, it follows by the martingale FCLT [10] and (121) that

\[
\big(\langle\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n,\varphi_1\rangle,\dots,\langle\hat{\mathcal{T}}^{0,n}+\hat{\mathcal{T}}^n,\varphi_m\rangle\big)
\Rightarrow
\big(\langle\hat{\mathcal{T}}^0+\hat{\mathcal{T}},\varphi_1\rangle,\dots,\langle\hat{\mathcal{T}}^0+\hat{\mathcal{T}},\varphi_m\rangle\big)
\]

in $D^m([0,T],\mathbb{R})$ as $n\to\infty$. This limit then provides convergence of the finite-dimensional distributions of the random vector on the left-hand side above, which is sufficient to verify Part 2 of Theorem 1.5. Thus, (119) holds, where $\hat{\mathcal{T}}^0+\hat{\mathcal{T}}$ is an $\mathcal{S}'$-valued process such that $\langle\hat{\mathcal{T}}^0+\hat{\mathcal{T}},\varphi\rangle$ has independent increments for each $\varphi\in\mathcal{S}$.
Next, we provide the proof of Lemma 6.7.

Proof of Lemma 6.7. We first prove that

\[
\tilde{\mathcal{G}}^n \Rightarrow \hat{\mathcal{G}} \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty. \tag{122}
\]
In order to do so, we will verify that Parts 1 and 2 of Theorem 1.5 are satisfied. We begin with Part 1. Let $\varphi,\psi\in\mathcal{S}$ and note that by Proposition 4.2, the functional strong law of large numbers [33] and the random time change theorem [1], we have that

\[
[\tilde{\mathcal{G}}^n]_t(\varphi,\psi)
= \frac1n\sum_{i=1}^{\lfloor n\bar E_t\rfloor}\big(\varphi(\eta_i)-\langle F,\varphi\rangle\big)\big(\psi(\eta_i)-\langle F,\psi\rangle\big)
\longrightarrow \bar E_t\,\mathrm{Cov}\big(\varphi(\eta),\psi(\eta)\big)
\quad \text{in } D([0,T],\mathbb{R}) \text{ as } n\to\infty, \tag{123}
\]

where $\eta$ is a random variable with CDF $F$. Now letting $\psi=\varphi$ in (123) and using the fact that the maximum jump of $\langle\tilde{\mathcal{G}}^n,\varphi\rangle$ over the interval $[0,T]$ is bounded uniformly in $n$, along with the martingale FCLT [10], yields the limit

\[
\langle\tilde{\mathcal{G}}^n,\varphi\rangle \Rightarrow \langle\hat{\mathcal{G}},\varphi\rangle \quad \text{in } D([0,T],\mathbb{R}) \text{ as } n\to\infty.
\]

Thus, Part 1 of Theorem 1.5 holds. We next prove that Part 2 of Theorem 1.5 holds. Let $m\ge1$ and let $\varphi_1,\dots,\varphi_m\in\mathcal{S}$. Then, using the limit (123) and the fact that the maximum jump of $(\langle\tilde{\mathcal{G}}^n,\varphi_1\rangle,\dots,\langle\tilde{\mathcal{G}}^n,\varphi_m\rangle)$ over the interval $[0,T]$ is bounded uniformly in $n$, along with the martingale FCLT [10], yields the limit

\[
\big(\langle\tilde{\mathcal{G}}^n,\varphi_1\rangle,\dots,\langle\tilde{\mathcal{G}}^n,\varphi_m\rangle\big)
\Rightarrow
\big(\langle\hat{\mathcal{G}},\varphi_1\rangle,\dots,\langle\hat{\mathcal{G}},\varphi_m\rangle\big)
\quad \text{in } D^m([0,T],\mathbb{R}) \text{ as } n\to\infty.
\]

This limit then provides convergence of the finite-dimensional distributions of the random vector on the left-hand side above, which shows that Part 2 of Theorem 1.5 holds. Thus, (122) is proven.

In order to complete the proof, it now suffices to show that

\[
\hat{\mathcal{G}}^n-\tilde{\mathcal{G}}^n \Rightarrow 0 \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty. \tag{124}
\]

However, in order to show (124), it suffices by Theorem 1.5 to show that for each $\varphi\in\mathcal{S}$,

\[
\langle\hat{\mathcal{G}}^n,\varphi\rangle-\langle\tilde{\mathcal{G}}^n,\varphi\rangle \Rightarrow 0 \quad \text{in } D([0,T],\mathbb{R}) \text{ as } n\to\infty. \tag{125}
\]

We proceed as follows. In a similar manner to the above, one may show using the martingale FCLT that

\[
\hat{\mathcal{G}}^n \Rightarrow \hat{\mathcal{G}} \quad \text{in } D([0,T],\mathcal{S}') \text{ as } n\to\infty. \tag{126}
\]

Hence, for each $\varphi\in\mathcal{S}$, the sequence $(\langle\hat{\mathcal{G}}^n,\varphi\rangle-\langle\tilde{\mathcal{G}}^n,\varphi\rangle)_{n\ge1}$ is tight in $D([0,T],\mathbb{R})$ and so, in order to show (125), it suffices to show that for each $0\le t\le T$,

\[
\langle\hat{\mathcal{G}}^n_t,\varphi\rangle-\langle\tilde{\mathcal{G}}^n_t,\varphi\rangle \Rightarrow 0 \quad \text{as } n\to\infty. \tag{127}
\]
First note that we may write

\[
\langle\hat{\mathcal{G}}^n_t,\varphi\rangle-\langle\tilde{\mathcal{G}}^n_t,\varphi\rangle
= 1\{E^n_t\ge\lfloor n\bar E_t\rfloor\}\frac{1}{\sqrt n}\sum_{i=\lfloor n\bar E_t\rfloor+1}^{E^n_t}\big(\varphi(\eta_i)-\langle F,\varphi\rangle\big)
- 1\{\lfloor n\bar E_t\rfloor > E^n_t\}\frac{1}{\sqrt n}\sum_{i=E^n_t+1}^{\lfloor n\bar E_t\rfloor}\big(\varphi(\eta_i)-\langle F,\varphi\rangle\big).
\]

Now, squaring both sides of the above and using the basic inequality $(x_1+x_2)^2\le2(x_1^2+x_2^2)$, it is straightforward to show that one may write

\begin{align}
\big(\langle\hat{\mathcal{G}}^n_t,\varphi\rangle-\langle\tilde{\mathcal{G}}^n_t,\varphi\rangle\big)^2
&\le 1\{\bar E^n_t\le2\bar E_t\}\,\frac2n\Big(\sum_{i=E^n_t\wedge\lfloor n\bar E_t\rfloor+1}^{E^n_t\vee\lfloor n\bar E_t\rfloor}\big(\varphi(\eta_i)-\langle F,\varphi\rangle\big)\Big)^2 \tag{128}\\
&\quad+1\{\bar E^n_t>2\bar E_t\}\,\frac2n\Big(\sum_{i=E^n_t\wedge\lfloor n\bar E_t\rfloor+1}^{E^n_t\vee\lfloor n\bar E_t\rfloor}\big(\varphi(\eta_i)-\langle F,\varphi\rangle\big)\Big)^2. \tag{129}
\end{align}

We now show that each of the terms (128) and (129) converges to 0 in probability as $n$ tends to $\infty$, which implies (127) and completes the proof.
We begin with (128). First note that, by the independence of $\{\eta_i,\ i\ge1\}$ from the arrival process $E^n$, and the i.i.d. nature of the sequence $\{\eta_i,\ i\ge1\}$, we have that

\begin{align*}
&E\Big[\frac2n\,1\{\bar E^n_t\le2\bar E_t\}\Big(\sum_{i=E^n_t\wedge\lfloor n\bar E_t\rfloor+1}^{E^n_t\vee\lfloor n\bar E_t\rfloor}\big(\varphi(\eta_i)-\langle F,\varphi\rangle\big)\Big)^2\Big] \\
&= \frac2n\,E\Big[1\{\bar E^n_t\le2\bar E_t\}\sum_{i=E^n_t\wedge\lfloor n\bar E_t\rfloor+1}^{E^n_t\vee\lfloor n\bar E_t\rfloor}E\big[\big(\varphi(\eta_i)-\langle F,\varphi\rangle\big)^2\big]\Big] \\
&\le 8\,\|\varphi\|^2_\infty\,E\big[\big|(\bar E^n_t\wedge2\bar E_t)-\bar E_t\big|\big].
\end{align*}

However, since $\bar E^n_t\Rightarrow\bar E_t$ as $n\to\infty$, it follows that

\[
E\big[\big|(\bar E^n_t\wedge2\bar E_t)-\bar E_t\big|\big]\to0 \quad \text{as } n\to\infty.
\]

This then implies that (128) converges to 0 in probability as $n$ tends to $\infty$, as desired.

We next proceed to (129). Note that since $\bar E^n_t\Rightarrow\bar E_t$ as $n\to\infty$, it follows that for each $\varepsilon>0$,

\[
\lim_{n\to\infty}P\Big(1\{\bar E^n_t>2\bar E_t\}\,\frac2n\Big(\sum_{i=E^n_t\wedge\lfloor n\bar E_t\rfloor+1}^{E^n_t\vee\lfloor n\bar E_t\rfloor}\big(\varphi(\eta_i)-\langle F,\varphi\rangle\big)\Big)^2>\varepsilon\Big)=0.
\]

This shows that (129) converges to 0 in probability as $n$ tends to $\infty$, which completes the proof.
References

[1] P. Billingsley. Convergence of Probability Measures. John Wiley and Sons, New York, 1999.

[2] T. Bojdecki and L. G. Gorostiza. Langevin equations for $\mathcal{S}'$-valued processes. Annals of Probability, 11(4):989–999, 1983.

[27] I. Mitoma. An $\infty$-dimensional inhomogeneous Langevin's equation. Journal of Functional Analysis, 61:342–359, 1985.

[28] I. Mitoma. Generalized Ornstein-Uhlenbeck process having a characteristic operator with polynomial coefficients. Probab. Th. Rel. Fields, 76:533–555, 1987.

[29] G. Pang, R. Talreja, and W. Whitt. Martingale proofs of many-server heavy-traffic limits for Markovian queues. Probab. Surv., 4:193–267, 2007.

[30] G. Pang and W. Whitt. Two-parameter Markov heavy-traffic limits for infinite-server queues. Queueing Systems, 65:325–364, 2010.

[31] A. A. Puhalskii and J. E. Reed. On many-server queues in heavy traffic. The Annals of Applied Probability, 20(1):129–195, 2010.

[32] W. Whitt. On the heavy-traffic limit theorem for GI/G/$\infty$ queues. Adv. Appl. Prob., 14(1):171–190, 1982.

[33] W. Whitt. Stochastic-Process Limits. Springer, New York, 2002.

[34] W. Whitt. Proofs of the martingale FCLT. Probability Surveys, 4:268–302, 2007.