Alexander Iksanov
Renewal Theory for Perturbed Random Walks and Similar Processes
Probability and Its Applications
Series editors
Steffen Dereich
Davar Khoshnevisan
Andreas Kyprianou
Sidney I. Resnick
Alexander Iksanov
Faculty of Computer Science
and Cybernetics
Taras Shevchenko National
University of Kyiv
Kyiv, Ukraine
The present book offers a detailed treatment of perturbed random walks, perpetuities, and random processes with immigration. These objects are of major importance in modern probability theory, both theoretical and applied, and they have been used to model various phenomena. Areas of possible application include most of the natural sciences as well as insurance and finance. Recent years have seen an explosion of activity around perturbed random walks, perpetuities, and random processes with immigration. Over the last decade, several nice results have been proved, and some efficient techniques and methods have been worked out. This book results from the author's growing conviction that the time has come to present in book format the main developments in the area accumulated to date. Summarizing, the first purpose of this book is to provide a thorough discussion of the state of the art in the area, with special emphasis on the methods employed. Although most of the results are given in final form, as ultimate criteria, there are still a number of open questions. Some of these are stated in the text.
Formally, the main objects are related because each of them is derived from a sequence of i.i.d. pairs $(X_1,\xi_1), (X_2,\xi_2),\ldots$. Here $\xi_1,\xi_2,\ldots$ are real-valued random variables, whereas $X_1,X_2,\ldots$ are real-valued random variables in the case of perturbed random walks and perpetuities (with nonnegative entries), and $X_1,X_2,\ldots$ are $D[0,\infty)$-valued random processes in the case of random processes with immigration. As far as perturbed random walks $(T_n)_{n\in\mathbb{N}}$ defined by
$$T_n := \xi_1+\ldots+\xi_{n-1}+X_n, \quad n\in\mathbb{N},$$
are concerned, the main motivation behind our interest is to what extent classical results (some of these are given in Section 6.3) for ordinary random walks $(\xi_1+\ldots+\xi_n)_{n\in\mathbb{N}}$ must be adjusted in the presence of a perturbing sequence. A similar motivation also drives our study of weak convergence of random processes with immigration $(X(t))_{t\geq 0}$ defined by
$$X(t) := \sum_{k\geq 0}X_{k+1}(t-\xi_1-\ldots-\xi_k)\,1_{\{\xi_1+\ldots+\xi_k\leq t\}}, \quad t\geq 0.$$
If $X_k(t)\equiv 1$ and $\xi_k\geq 0$ for all $k\in\mathbb{N}$, then $X(t)$ is nothing else but the first time the ordinary random walk exits the interval $(-\infty,t]$. This is a classical object of renewal theory, and it is well known that $\big(\sum_{k\geq 0}1_{\{\xi_1+\ldots+\xi_k\leq ut\}}\big)_{u\geq 0}$ satisfies a functional limit theorem as $t\to\infty$. If $X_k(t)$ is not identically one, the asymptotic behavior of $X(t)$ is affected both by the first-passage time process above and by the fluctuations of the "perturbing" sequence $(X_k)_{k\in\mathbb{N}}$. From this point of view, the subject matter of the book is a generalization of renewal theory. Thus, the second purpose of the book is to work out the theoretical grounds of such a generalization. Actually, the connections between the main objects extend far beyond the formal definition. The third purpose of the book is to exhibit these links in full. As a warm-up, we now give two examples in which perturbed random walks are linked to perpetuities and random processes with immigration, respectively.
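For intuition, the defining sum for $X(t)$ can be simulated directly. The following is a minimal sketch, not taken from the book: the exponential increment law and the constant response function are illustrative choices.

```python
import random

def immigration_process(t, response, xi_sampler, rng):
    """Evaluate one realization of X(t) = sum_{k>=0} X_{k+1}(t - S_k) 1{S_k <= t},
    where S_k = xi_1 + ... + xi_k and S_0 = 0."""
    total, s_k = 0.0, 0.0
    while s_k <= t:
        total += response(t - s_k, rng)   # contribution of the (k+1)-st response
        s_k += xi_sampler(rng)            # advance the underlying random walk
    return total

rng = random.Random(1)
exp_xi = lambda r: r.expovariate(1.0)     # positive increments with mean 1

# With X_k(t) identically 1, the sum counts the indices k with S_k <= t,
# i.e. X(t) is the first time the ordinary random walk exits (-infty, t].
exit_count = immigration_process(5.0, lambda u, r: 1.0, exp_xi, rng)
```

Since $S_0=0\leq t$, the count is at least 1, and in this special case it is an integer-valued first-passage quantity.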
(a) To avoid introducing additional notation at this point, we only discuss perpetuities with nonnegative entries that are almost surely convergent series of the form $Y_\infty := \sum_{n\geq 1}\exp(T_n)$. It turns out that whenever the tail of the distribution of $Y_\infty$ is sufficiently heavy, the asymptotic behavior of $P\{Y_\infty>x\}$ as $x\to\infty$ is completely determined by that of $P\{\sup_{n\geq 1}T_n>\log x\}$. In particular, if the power or logarithmic moments of $\sup_{n\geq 1}\exp(T_n)$ are finite, so are those of $Y_\infty$; see Sections 1.3.1 and 2.1.4. A similar relation also exists between the finite-$n$ perpetuities $\sum_{k=1}^{n}\exp(T_k)$ and the maxima $\max_{1\leq k\leq n}\exp(T_k)$, though this time with respect to weak convergence; see Section 2.2.
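The pathwise domination of the perpetuity by its largest term can be checked numerically. A hedged Monte Carlo sketch, not from the book: the Gaussian parameters are illustrative, and truncation at 200 terms stands in for the a.s. convergent series.

```python
import math
import random

def perpetuity_vs_max(n_terms, rng):
    """Return (truncated sum of exp(T_n), max of exp(T_n)) along one path of a
    negatively divergent PRW T_n = xi_1 + ... + xi_{n-1} + eta_n."""
    s, total, largest = 0.0, 0.0, 0.0
    for _ in range(n_terms):
        xi = rng.gauss(-0.5, 1.0)   # E xi < 0, so T_n -> -infty and the series converges
        eta = rng.gauss(0.0, 1.0)
        term = math.exp(s + eta)    # exp(T_n)
        total += term
        largest = max(largest, term)
        s += xi
    return total, largest

rng = random.Random(7)
paths = [perpetuity_vs_max(200, rng) for _ in range(50)]
```

On every path $\exp(\sup_n T_n)$ is one of the summands, so it never exceeds $Y_\infty$; the heavy-tail statement above sharpens this trivial inequality into asymptotic equivalence of the tails.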
(b) The number of visits to $(-\infty,t]$ of the perturbed random walk is a certain random process with immigration evaluated at the point $t$. The moment results for general random processes with immigration derived in Section 3.4 are a key tool in the analysis of the moments of the numbers of visits (see Section 1.4 for the latter).
As has already been mentioned, the random processes treated here allow for numerous applications. The fourth purpose of the book is to add two less-known examples to the list of possible applications. In Section 4 we show that a criterion for the finiteness of perpetuities can be used to prove an ultimate version of Biggins' martingale convergence theorem, which is concerned with the intrinsic martingales in supercritical branching random walks. For the proof, we describe and exploit an interesting connection between these at first glance unrelated models, which emerges when studying the weighted random tree associated with the branching random walk under the so-called size-biased measure. In Section 5 we investigate weak convergence of the number of empty boxes in the Bernoulli sieve, which is a random allocation scheme generated by a multiplicative random walk and a uniform sample on $[0,1]$. We demonstrate that the problem amounts to studying weak convergence of a particular random process with immigration, which is actually an operator defined on a particular perturbed random walk. We emphasize that the connection between the Bernoulli sieve and certain random processes with immigration remains veiled unless we consider the Bernoulli sieve as the occupancy scheme in a random environment, and the functionals in question are analyzed conditionally on the environment.
I close this preface with thanks and acknowledgments. I thank my family for all their love and for creating a nice working atmosphere, both at home and at the dacha, where most of the research and writing of this monograph were done. The other portion of the research underlying the present document was mostly undertaken during my frequent visits to Münster under the generous support of the University of Münster and DFG SFB 878 "Geometry, Groups and Actions." I thank Gerold Alsmeyer for making these visits to Münster possible and for always being ready to help. Matthias Meiners helped in arranging my visits to Münster, too, which is highly appreciated. I thank Oleg Zakusylo, my former supervisor, for all-round support at various stages of my scientific career. I thank my colleagues and friends (in alphabetical order) Gerold Alsmeyer, Darek Buraczewski, Sasha Gnedin, Zakhar Kabluchko, Sasha Marynych, Matthias Meiners, Andrey Pilipenko, Uwe Rösler, Zhora Shevchenko, and Vladimir Vatutin, in collaboration with whom many of the results presented in this book were originally obtained. Sasha Marynych scrutinized the entire book and found many typos, inconsistencies, and other blunders of mine. Apart from this, I owe special thanks to Sasha for his ability to be helpful at almost any time. I am grateful to Darek Buraczewski, Matthias Meiners, Andrey Pilipenko, and Igor Samoilenko, who read some chapters of the book, gave me useful advice concerning the presentation, and detected several errors.
Bibliography  237
Index  249
List of Notation

$\stackrel{d}{=}$ – equality of one-dimensional distributions
$X\stackrel{d}{\leq}Y$ – means that $P\{X>z\}\leq P\{Y>z\}$ for all $z\in\mathbb{R}$
$\stackrel{d}{\to}$ – convergence in distribution of random variables or random vectors
$V_t(u)\stackrel{\mathrm{f.d.}}{\Longrightarrow}V(u)$ as $t\to\infty$ – weak convergence of finite-dimensional distributions, i.e., for any $n\in\mathbb{N}$ and any $0<u_1<u_2<\ldots<u_n<\infty$,
$$(V_t(u_1),\ldots,V_t(u_n))\stackrel{d}{\to}(V(u_1),\ldots,V(u_n)),\quad t\to\infty$$
$\stackrel{P}{\to}$ – convergence in probability
$\Rightarrow$ – convergence in distribution in a function space
$f_1(t)\sim f_2(t)$ as $t\to A$ – means that $\lim_{t\to A}(f_1(t)/f_2(t))=1$

We stipulate hereafter that all unspecified limit relations hold as $t\to\infty$ or $n\to\infty$. Which of the alternatives prevails should be clear from the context.
Chapter 1
Perturbed Random Walks
Let $(\xi_k,\eta_k)_{k\in\mathbb{N}}$ be a sequence of i.i.d. two-dimensional random vectors with generic copy $(\xi,\eta)$. No condition is imposed on the dependence structure between $\xi$ and $\eta$. Let $(S_n)_{n\in\mathbb{N}_0}$ be the zero-delayed ordinary random walk with increments $\xi_n$ for $n\in\mathbb{N}$, i.e., $S_0=0$ and $S_n=\xi_1+\ldots+\xi_n$, $n\in\mathbb{N}$. Then define its perturbed variant $(T_n)_{n\in\mathbb{N}}$, which we call a perturbed random walk (PRW), by
$$T_n := S_{n-1}+\eta_n, \quad n\in\mathbb{N}. \tag{1.1}$$
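Definition (1.1) translates directly into code. A minimal sketch, not from the book: the dependent pair law below, with $\eta=|\xi|$ plus noise, is an arbitrary illustration of the fact that no independence of $\xi$ and $\eta$ is assumed.

```python
import random

def perturbed_random_walk(n, pair_sampler, rng):
    """Return (S_0..S_n, T_1..T_n) with S_0 = 0, S_k = S_{k-1} + xi_k and
    T_k = S_{k-1} + eta_k, as in (1.1)."""
    S, T = [0.0], []
    for _ in range(n):
        xi, eta = pair_sampler(rng)
        T.append(S[-1] + eta)   # the perturbation replaces only the last increment
        S.append(S[-1] + xi)
    return S, T

def dependent_pair(rng):
    xi = rng.gauss(0.2, 1.0)
    return xi, abs(xi) + rng.random()   # eta >= 0 and correlated with xi

rng = random.Random(0)
S, T = perturbed_random_walk(1000, dependent_pair, rng)
```

Because $\eta\geq 0$ in this illustration, every $T_k=S_{k-1}+\eta_k$ lies above $S_{k-1}$, which the assertions below check pathwise.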
$$R_0 := 1, \qquad R_k := \prod_{i=1}^{k}W_i, \quad k\in\mathbb{N},$$
where $(W_k)_{k\in\mathbb{N}}$ are independent copies of a random variable $W$ taking values in the open interval $(0,1)$. Also, let $(U_j)_{j\in\mathbb{N}}$ be independent random variables which are independent of $(R_k)_{k\in\mathbb{N}_0}$ and have the uniform distribution on $[0,1]$. A random allocation scheme in which 'balls' $U_1$, $U_2$, etc. are allocated over an infinite array of 'boxes' $(R_k,R_{k-1}]$, $k\in\mathbb{N}$, is called a Bernoulli sieve.
Since a particular ball falls into the box $(R_k,R_{k-1}]$ with random probability $P_k := R_{k-1}-R_k = W_1\cdots W_{k-1}(1-W_k)$, the Bernoulli sieve is also the classical infinite allocation scheme with the random frequencies $(P_k)_{k\in\mathbb{N}}$. In this setting it is assumed that, given the random frequencies $(P_k)$, the balls are allocated over an infinite collection of boxes $(R_1,R_0], (R_2,R_1],\ldots$ independently, with probability $P_j$ of hitting box $j$. Assuming that the number of balls equals $n$, denote by $K_n$ the number of nonempty boxes and by $L_n$ the number of empty boxes within the occupancy range.
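The quantities $K_n$ and $L_n$ are easy to read off from a simulation. A hedged sketch of the sieve, with $W$ uniform on $(0,1)$ as an illustrative choice:

```python
import random

def sieve_occupancy(n_balls, rng):
    """Throw n_balls uniform 'balls' into the boxes (R_k, R_{k-1}], where
    R_k = W_1 * ... * W_k; return occupancy counts indexed by box k >= 1."""
    balls = [1.0 - rng.random() for _ in range(n_balls)]   # values in (0, 1]
    counts = {}
    right, k, remaining = 1.0, 0, n_balls
    while remaining > 0:
        k += 1
        left = right * rng.random()                # R_k = R_{k-1} * W_k
        c = sum(1 for u in balls if left < u <= right)
        if c:
            counts[k] = c
            remaining -= c
        right = left
    return counts

rng = random.Random(3)
occ = sieve_occupancy(100, rng)
K_n = len(occ)             # number of nonempty boxes
L_n = max(occ) - K_n       # empty boxes within the occupancy range
```

The occupancy range is $\{1,\ldots,\max\{k: \text{box }k\text{ occupied}\}\}$, so $L_n$ is that maximum minus $K_n$.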
From the very definition it is clear that the Bernoulli sieve is connected with $(\widehat T_k)_{k\in\mathbb{N}}$, the PRW generated by the couples $(|\log W_k|,|\log(1-W_k)|)_{k\in\mathbb{N}}$. For instance, the logarithmic size of the largest box in the Bernoulli sieve equals $\log\sup_{k\geq 1}(W_1W_2\cdots W_{k-1}(1-W_k)) = \sup_{k\geq 1}(-\widehat T_k)$, the supremum of the PRW $(-\widehat T_k)$. There is a deeper relation between the Bernoulli sieve and the PRW $(\widehat T_k)$. In particular, it was proved in [100] that the weak convergence of $K_n$, properly normalized and centered, is completely determined by the weak convergence of
$$\widehat N(x) := \#\{k\in\mathbb{N}: P_k\geq e^{-x}\} = \#\{k\in\mathbb{N}: W_1\cdots W_{k-1}(1-W_k)\geq e^{-x}\}, \quad x>0,$$
are i.i.d. Then V may be called a process with regenerative increments. For a simple
example, take a Lévy process V with negative mean and n D n. For n 2 N, put
i.e., the supremum of the process with regenerative increments can be represented
as the supremum of an appropriate PRW.
The supremum of the PRW is a relatively simple functional that has received
considerable attention in the literature. The corresponding results concerning
finiteness, existence of moments, and tail behavior will be presented in Section 1.3.
Queues and Branching Processes Suppose that $\xi$ and $\eta$ are both positive and define
$$Y_1(t) := \sum_{k\geq 0}1_{\{S_k+\eta_{k+1}\leq t\}} \quad\text{and}\quad Y_2(t) := \sum_{k\geq 0}1_{\{S_k\leq t<S_k+\eta_{k+1}\}}, \quad t\geq 0.$$
Alternating Renewal Process Let $(\eta_k^{(1)},\eta_k^{(2)})_{k\in\mathbb{N}}$ be a sequence of i.i.d. copies of a $[0,\infty)\times[0,\infty)$-valued random vector $(\eta^{(1)},\eta^{(2)})$. The sequence $\eta_1^{(1)}$, $\eta_1^{(1)}+\eta_1^{(2)}$, $\eta_1^{(1)}+\eta_1^{(2)}+\eta_2^{(1)},\ldots$ is sometimes called an alternating renewal process. Related to this sequence is a perturbed random walk $(T_k)_{k\in\mathbb{N}}$ with $\xi=\eta^{(1)}+\eta^{(2)}$ and $\eta=\eta^{(1)}$, which is especially simple, for it forms a nondecreasing sequence. This is in contrast to general perturbed random walks, which do not normally possess such a property.
It is well known that a nontrivial zero-delayed ordinary random walk $(S_n)_{n\in\mathbb{N}_0}$ (i.e., a random walk starting at the origin with increment distribution not degenerate at 0) exhibits one of the following three regimes:
1) drift to $+\infty$ (positive divergence): $\lim_{n\to\infty}S_n=+\infty$ a.s.;
2) drift to $-\infty$ (negative divergence): $\lim_{n\to\infty}S_n=-\infty$ a.s.;
3) oscillation: $\liminf_{n\to\infty}S_n=-\infty$ and $\limsup_{n\to\infty}S_n=+\infty$ a.s.
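A quick numerical illustration of the trichotomy. Gaussian increments with drift $\pm 0.5$ or $0$ are illustrative choices; a simulation can only suggest, not prove, the a.s. statements.

```python
import random

def walk_extremes(drift, n, seed):
    """Return (min, max, last) of S_0..S_n for Gaussian(drift, 1) increments."""
    rng = random.Random(seed)
    s = lo = hi = 0.0
    for _ in range(n):
        s += rng.gauss(drift, 1.0)
        lo, hi = min(lo, s), max(hi, s)
    return lo, hi, s

pos = walk_extremes(+0.5, 20000, 42)   # regime 1: drift to +infinity
neg = walk_extremes(-0.5, 20000, 42)   # regime 2: drift to -infinity
osc = walk_extremes(0.0, 20000, 42)    # regime 3: both extremes keep growing
```

With $20000$ steps the drifting walks end far from the origin, while the driftless walk visits both half-lines.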
PRWs exhibit the same trichotomy. In order to state the result precisely some further notation is needed. As usual,¹ let $\xi^{+}=\xi\vee 0$ and $\xi^{-}=(-\xi)\vee 0=-(\xi\wedge 0)$. Then, for $x>0$, define
$$A_{\pm}(x) := \int_0^x P\{\pm\xi>y\}\,dy = E(\xi^{\pm}\wedge x) \quad\text{and}\quad J_{\pm}(x) := \frac{x}{A_{\pm}(x)} \tag{1.3}$$
whenever the denominators are nonzero. Notice that $J_{\pm}(x)$ for $x>0$ is well defined if, and only if, $P\{\pm\xi>0\}>0$. In this case, we set $J_{\pm}(0):=1/P\{\pm\xi>0\}$. The following theorem, though not stated explicitly, can be read off from Theorem 2.1 in [109].
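The quantities in (1.3) are easy to evaluate numerically. A sketch using $\xi\sim\mathrm{Exp}(1)$, chosen because $A_+$ has the closed form $E(\xi\wedge x)=1-e^{-x}$ (an illustrative choice, not from the book):

```python
import math

def A_plus(tail, x, steps=100000):
    """Midpoint-rule evaluation of A_+(x) = int_0^x P{xi > y} dy = E(xi^+ ^ x)."""
    h = x / steps
    return h * sum(tail((i + 0.5) * h) for i in range(steps))

exp_tail = lambda y: math.exp(-y)   # P{xi > y} for Exp(1)
x = 2.0
a_num = A_plus(exp_tail, x)
a_exact = 1.0 - math.exp(-x)
J_plus = x / a_num                  # J_+(x) = x / A_+(x)
```

The numerical value agrees with the closed form to high accuracy, which serves as a sanity check on the identity $\int_0^x P\{\xi>y\}\,dy = E(\xi\wedge x)$.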
Theorem 1.2.1 Any PRW $(T_n)_{n\in\mathbb{N}}$ satisfying the standing assumption is either positively divergent, negatively divergent, or oscillating. Positive divergence takes place if, and only if,
¹ We use $x\vee y$ or $\max(x,y)$, and $x\wedge y$ or $\min(x,y)$, interchangeably, depending on typographical convenience.
Oscillation occurs in the remaining cases, thus if, and only if, either
or
or
² To give a better feeling for the result, consider the simplest situation when $E\xi\in(-\infty,0)$ and $E\eta^{+}<\infty$. Then, by the strong law of large numbers, $S_n$ drifts to $-\infty$ at a linear rate. On the other hand, $\lim_{n\to\infty}n^{-1}\eta_n^{+}=0$ a.s. by the Borel–Cantelli lemma, which shows that $\eta_n^{+}$ grows at most sublinearly. Combining the pieces shows $\lim_{n\to\infty}(S_{n-1}+\eta_n)=-\infty$ a.s.
Theorem 1.3.5 Let $a>0$ and³ $P\{\eta=-\infty\}\in[0,1)$. The following assertions are equivalent:
Remark 1.3.7 The well-known Breiman⁴ theorem states that if $U$ and $V$ are nonnegative independent random variables such that $P\{U>x\}$ is regularly varying at $+\infty$ of index $-\alpha$, $\alpha\geq 0$, and $EV^{\alpha+\epsilon}<\infty$ for some $\epsilon>0$, then
$$P\{UV>x\} \sim EV^{\alpha}\,P\{U>x\}, \quad x\to\infty. \tag{1.15}$$
It is known that in some cases, for instance, if $P\{U>x\}\sim\mathrm{const}\cdot x^{-\alpha}$, relation (1.15) holds under the sole assumption $EV^{\alpha}<\infty$ (see Lemma 2.1 in [110]). Thus, if $\ell$ in (1.13) is equivalent to a constant, the equivalence (1.13)$\Leftrightarrow$(1.14) holds whenever $Ee^{a\eta}<\infty$, irrespective of the condition $Ee^{(a+\epsilon)\eta}<\infty$.
Recall that a distribution is called nonlattice if it is not concentrated on any lattice $\delta\mathbb{Z}$, $\delta>0$. A distribution is called $\delta$-lattice if it is concentrated on the lattice $\delta\mathbb{Z}$ and not concentrated on any lattice $\delta_1\mathbb{Z}$ for $\delta_1>\delta$.
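For a distribution with finitely many rational atoms, the span $\delta$ of the definition is the gcd of the atoms. A small sketch with a hypothetical helper, not from the book:

```python
from fractions import Fraction
from math import gcd

def lattice_span(atoms):
    """Largest delta such that every atom lies in delta*Z (atoms given as Fractions);
    the distribution is then delta-lattice and not delta_1-lattice for delta_1 > delta."""
    den = 1
    for f in atoms:
        den = den * f.denominator // gcd(den, f.denominator)   # lcm of denominators
    g = 0
    for f in atoms:
        g = gcd(g, abs(int(f * den)))                          # gcd of scaled atoms
    return Fraction(g, den)

# atoms 1/2, 3/2, 2 lie in (1/2)*Z but in no coarser lattice
span = lattice_span([Fraction(1, 2), Fraction(3, 2), Fraction(2)])
```

For instance, a distribution on $\{2,4\}$ has span 2, not 1, matching the maximality clause of the definition.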
Theorem 1.3.8 Suppose that there exists a positive $a$ such that
³ The strange assumption $P\{\eta=-\infty\}\in[0,1)$, which is made here and in Lemma 1.3.12, is of principal importance for the proof of Theorem 2.1.5.
⁴ Actually, Breiman (Proposition 3 in [52]) only proved the result for $\alpha\in(0,1)$. The whole range $\alpha>0$ was later covered by Corollary 3.6(iii) in [70].
where $C := E\big[\big(e^{a\eta_1}-e^{a(\xi_1+T^{0})}\big)1_{\{\xi_1+T^{0}<\eta_1\}}\big]\in(0,\infty)$ and $T^{0} := \sup_{n\geq 2}(T_n-\xi_1)$.
If the distribution of $\xi$ is $\delta$-lattice, then, for each $x\in\mathbb{R}$,
Lemma 1.3.9, given next, collects some relevant properties of the functions $f$ introduced in Theorem 1.3.1.
Lemma 1.3.9 Let $f$ be a function as defined in Theorem 1.3.1. Then there exists a function $h$, differentiable and nondecreasing on $\mathbb{R}^{+}$, which further satisfies $h(0)=0$,
h.x C y/ h.2x/
h.x/ C h.y/ h.x/
$$a := P\{\sup\nolimits_{n\geq 1}T_n\leq b\}>0,$$
the function $V(x) := 1+\sum_{n\geq 1}P\{\max_{1\leq k\leq n}T_k\leq b,\ S_n>x\}$ satisfies
Then
X
ETx D PfTx ng D V.x/
n1
.x/
ESTx D E. ^ x/ ETx D A .x/V.x/; x > 0: (1.20)
.x/
x1B ..STx / ^ x/1B .STx / ^ x STx :
Consequently,
.x/
ESTx ax
Proof We first note that the moment assumption and $\lim_{x\to\infty}f(x)=\infty$ together ensure $\sup_n T_n<\infty$ a.s. Therefore, there exists $b>0$ such that $a=P\{\sup_n T_n\leq b\}>0$. In view of Lemma 1.3.9, in what follows we can and do assume that $f$ is differentiable with $f'(x)\geq 0$ on $\mathbb{R}^{+}$.
Now fix any $c>b$ and infer, for $x\geq b$ (with $V$ as in the previous lemma),
$$P\{\sup\nolimits_n T_n>x\} = P\{\eta_1>x\}+\sum_{n\geq 1}P\Big\{\max_{1\leq k\leq n}T_k\leq x,\ T_{n+1}>x\Big\}$$
$$\geq P\{\eta_1>c+x\}+\sum_{n\geq 1}P\Big\{\max_{1\leq k\leq n}T_k\leq b,\ T_{n+1}>x,\ \eta_{n+1}>c+x\Big\}$$
$$\geq \int_{(c+x,\infty)}\Big(1+\sum_{n\geq 1}P\Big\{\max_{1\leq k\leq n}T_k\leq b,\ S_n>x-y\Big\}\Big)dP\{\eta\leq y\},$$
the last inequality following by Lemma 1.3.10. With this at hand, we further obtain
$$\infty > Ef\big(\sup\nolimits_n T_n\big) \geq \int_b^\infty f'(x)\,P\{\sup\nolimits_n T_n>x\}\,dx$$
$$\geq a\int_b^\infty f'(x)\,E\big[J_{-}(\eta-x)1_{\{\eta>c+x\}}\big]\,dx$$
$$= a\,E\Big[1_{\{\eta>b+c\}}\int_b^{\eta-c}f'(x)J_{-}(\eta-x)\,dx\Big]$$
$$\geq a\,E\Big[1_{\{\eta>2c\}}\int_b^{\eta/2}f'(x)J_{-}(\eta-x)\,dx\Big]$$
$$\geq a\,E\big[1_{\{\eta>2c\}}(f(\eta/2)-f(b))J_{-}(\eta/2)\big]$$
$$\geq 2^{-1}a\,E\big[1_{\{\eta>2c\}}(f(\eta/2)-f(b))J_{-}(\eta)\big],$$
having utilized $J_{-}(x/2)\geq J_{-}(x)/2$ for the last inequality. Recalling that $\sup_n T_n<\infty$ a.s. ensures $EJ_{-}(\eta^{+})<\infty$ by Theorem 1.2.1, we infer $Ef(\eta^{+}/2)J_{-}(\eta^{+})<\infty$. The proof of Lemma 1.3.11 is complete because $f$ varies regularly. □
The proofs of the implication (1.6)⇒(1.7) in Theorem 1.3.1 and the implication (1.11)⇒(1.12) in Theorem 1.3.5, as well as the proof of Theorem 1.3.8, are (partially) based on the following lemma.
Lemma 1.3.12 Let $P\{\eta=-\infty\}\in[0,1)$. The following inequalities hold:
and
In order to obtain (1.24), fix any $c>1$ such that $P\{\eta>c\}\leq 1/c$. Then (1.22) with $y=c$ provides us with
and the finiteness of the left-hand side entails $Ef(\eta^{+})J_{-}(\eta^{+})<\infty$ by Lemma 1.3.11 because $\eta_k\wedge 0\leq 0$ a.s.
(1.7)⇒(1.8). By Lemma 1.3.9, we can assume that $f$ is nondecreasing and satisfies (1.18). Since $1/J_{-}(t)=\int_0^1 P\{\xi^{-}>xt\}\,dx$, we conclude that $J_{-}$ is nondecreasing with $\lim_{t\to\infty}J_{-}(t)=\infty$. Hence we can assume that $f\,J_{-}$ is nondecreasing.
Since
$$\sup_{1\leq k\leq\nu}(S_{k-1}+\eta_k)^{+} \leq \sup_{0\leq k\leq\nu-1}S_k+\sup_{1\leq k\leq\nu}\eta_k^{+} \quad\text{a.s.}$$
we infer
$$f\Big(\sup_{1\leq k\leq\nu}(S_{k-1}+\eta_k)^{+}\Big)J_{-}\Big(\sup_{1\leq k\leq\nu}(S_{k-1}+\eta_k)^{+}\Big)$$
$$\leq c\Big(f\Big(\sup_{0\leq k\leq\nu-1}S_k\Big)J_{-}\Big(\sup_{0\leq k\leq\nu-1}S_k\Big)+f\Big(\sup_{1\leq k\leq\nu}\eta_k^{+}\Big)J_{-}\Big(\sup_{1\leq k\leq\nu}\eta_k^{+}\Big)\Big) \quad\text{a.s.}$$
Observe that $\nu$ is a stopping time w.r.t. the filtration $(\mathcal{F}_n)_{n\in\mathbb{N}_0}$, where $\mathcal{F}_0 := \{\emptyset,\Omega\}$ and, for $n\in\mathbb{N}$, $\mathcal{F}_n$ is the $\sigma$-algebra generated by $(\xi_k,\eta_k)_{1\leq k\leq n}$. Hence
$$Ef\Big(\sup_{1\leq k\leq\nu}\eta_k^{+}\Big)J_{-}\Big(\sup_{1\leq k\leq\nu}\eta_k^{+}\Big) \leq E\sum_{k=1}^{\nu}f(\eta_k^{+})J_{-}(\eta_k^{+}) = E\nu\,Ef(\eta^{+})J_{-}(\eta^{+})<\infty$$
by Wald's identity.
(1.8)⇒(1.6). Without loss of generality (see Lemma 1.3.9), we can assume that $f$ is nondecreasing and differentiable with $f(0)=0$.
Define the sequence $(\nu_n)_{n\in\mathbb{N}_0}$ of ladder epochs associated with $(S_n)$, given by $\nu_0:=0$, $\nu_1:=\nu$ and, for $n\geq 2$, $\nu_n:=\inf\{k>\nu_{n-1}: S_k>S_{\nu_{n-1}}\}$, and put
$$\widehat\eta_n := \sup\big(\eta_{\nu_{n-1}+1},\ \xi_{\nu_{n-1}+1}+\eta_{\nu_{n-1}+2},\ \ldots,\ \xi_{\nu_{n-1}+1}+\cdots+\xi_{\nu_n-1}+\eta_{\nu_n}\big).$$
$$= I_1+I_2.$$
Since $Ef(\widehat\eta^{+})J(\widehat\eta^{+})<\infty$ trivially entails $Ef(\widehat\eta^{+})<\infty$, and the renewal function $\widehat U(x) := \sum_{n\geq 1}P\{\widehat S_{n-1}<x\}$ is finite for all $x\geq 0$ (see (6.1)), the second integral is easily estimated as
$$I_2 \leq \Big(\sum_{n\geq 1}P\{\widehat S_{n-1}<y\}\Big)\int_0^\infty f'(x)\,P\{\widehat\eta>x\}\,dx = \widehat U(y)\,Ef(\widehat\eta^{+})<\infty.$$
$$I_1 = \int_0^\infty f'(x)\,E\big[\widehat U(\widehat\eta-x)1_{\{\widehat\eta>x+y\}}\big]\,dx \leq E\Big[\widehat U(\widehat\eta^{+})\int_0^{\widehat\eta^{+}}f'(x)\,dx\Big] = E\,\widehat U(\widehat\eta^{+})f(\widehat\eta^{+})$$
$$\leq 2\,E\frac{\widehat\eta^{+}f(\widehat\eta^{+})}{\int_0^{\widehat\eta^{+}}P\{\widehat S_1>z\}\,dz} \leq 2\,E\frac{\widehat\eta^{+}f(\widehat\eta^{+})}{\int_0^{\widehat\eta^{+}}P\{S_1>z\}\,dz} = Ef(\widehat\eta^{+})J(\widehat\eta^{+})<\infty,$$
having utilized Erickson's inequality (formula (6.5)) for the penultimate inequality and the easy observation that $\{S_1>z\}\subseteq\{\widehat S_1>z\}$ for $z>0$ for the last inequality. The proof of Theorem 1.3.1 is complete. □
Proof of Proposition 1.3.4 (1.10)⇒(1.9). By Lemma 1.3.9, we can and do assume that $f$ satisfies (1.17). According to the implication (6.18)⇒(6.19) of Theorem 6.3.1, the condition $Ef(\xi^{+})J_{-}(\xi^{+})<\infty$ entails $Ef(S_{\sigma})1_{\{\sigma<\infty\}}<\infty$. Further, $Ef(\eta_{\sigma+1}^{+})1_{\{\sigma<\infty\}} = Ef(\eta^{+})P\{\sigma<\infty\}$, whence the desired conclusion follows in view of (1.17).
(1.9)⇒(1.10). Pick $\gamma\in\mathbb{R}$ such that $P\{\eta>\gamma\}>0$. Then
$$\infty > Ef(T_{\sigma+1}^{+})1_{\{\sigma<\infty\}} = \sum_{k\geq 1}Ef\big((S_k+\eta_{k+1})^{+}\big)1_{\{S_1<0,\ldots,S_{k-1}<0,\,S_k\geq 0\}}$$
$$= \int_{\mathbb{R}}Ef\big((S_{\sigma}+x)^{+}\big)1_{\{\sigma<\infty\}}\,dP\{\eta\leq x\}$$
$$\geq \int_{(\gamma,\infty)}Ef\big((S_{\sigma}+x)^{+}\big)1_{\{\sigma<\infty\}}\,dP\{\eta\leq x\}$$
$$\geq E\big[f(\eta^{+})1_{\{\eta>\gamma\}}\big]P\{\sigma<\infty\} \vee Ef\big((S_{\sigma}+\gamma)^{+}\big)1_{\{\sigma<\infty\}}P\{\eta>\gamma\}$$
as desired, where the independence of $S_n$ and $\eta_{n+1}$ for each $n$ has been used.
(1.11)⇒(1.12). Using formulae (1.23) and (1.24) of Lemma 1.3.12 with $\Phi(x)=e^{ax}$, we conclude that $Ee^{a\eta}<\infty$, which is the second part of (1.12), and also $E\exp(a\sup_{n\geq 0}S_n)<\infty$. We are left with proving that the last inequality entails⁵ $Ee^{a\xi}<1$.
We start by noting that the assumptions entail $E\xi\in[-\infty,0)$ and $E\eta^{+}<\infty$. Hence $\sup_n T_n$ is a.s. finite by Theorem 1.2.1. Write
$$\liminf_{x\to\infty}\frac{P\{e^{\sup_n T_n}>x\}}{1-F(x)} \geq 1+\sum_{n\geq 1}Ee^{aS_n} = \frac{1}{1-Ee^{a\xi}}.$$
⁵ Actually, $E\exp(a\sup_{n\geq 0}S_n)<\infty$ if, and only if, $Ee^{a\xi}<1$. To prove the implication $\Leftarrow$, just use the inequality $E\exp(a\sup_{n\geq 0}S_n) \leq E\sum_{n\geq 0}e^{aS_n} = (1-Ee^{a\xi})^{-1}$.
$$\lim_{x\to\infty}E\frac{1-F(xe^{S_n})}{1-F(x)} = Ee^{aS_n}.$$
The converse inequality
$$\limsup_{x\to\infty}\frac{P\{e^{\sup_n T_n}>x\}}{1-F(x)} \leq 1+\sum_{n\geq 1}Ee^{aS_n} = \frac{1}{1-Ee^{a\xi}}$$
follows once we can find a sequence $(u_n)_{n\in\mathbb{N}}$ such that $E\frac{1-F(xe^{S_n})}{1-F(x)}\leq u_n$ for each $n\in\mathbb{N}$ and all $x$ large enough, and $\sum_{n\geq 1}u_n<\infty$.
Pick $\delta\in(0,\min(a,\epsilon))$ that satisfies $Ee^{(a+\delta)\xi}<\infty$. Since the function $t\mapsto Ee^{t\xi}$ is convex on $(0,a+\epsilon)$, we also have $Ee^{(a-\delta)\xi}<\infty$. For this $\delta$ and any positive $A_1$ there exists a positive $x_1$ such that
$$1-F(x) \geq A_1^{-1}x^{-a-\delta}$$
whenever $x\geq x_1$. Further, Potter's bound (Theorem 1.5.6(iii) in [44]) tells us that for any positive $A_2$ there exists a positive $x_2$ such that
$$\frac{1-F(ux)}{1-F(x)} \leq A_2\max\big(u^{-a+\delta},\,u^{-a-\delta}\big)$$
$$E\Big[\frac{1-F(xe^{S_n})}{1-F(x)}\,1_{\{e^{S_n}>x/x_0\}}\Big] \leq \frac{x_0^{a+\delta}\,Ee^{(a+\delta)S_n}}{x^{a+\delta}(1-F(x))} \leq A_1x_0^{a+\delta}\,Ee^{(a+\delta)S_n} \quad\text{and}$$
$$E\Big[\frac{1-F(xe^{S_n})}{1-F(x)}\,1_{\{e^{S_n}\leq x/x_0\}}\Big] \leq A_2\,E\max\big(e^{(a-\delta)S_n},\,e^{(a+\delta)S_n}\big) \leq A_2\big(Ee^{(a-\delta)S_n}+Ee^{(a+\delta)S_n}\big),$$
where $T^{0} = \sup_{n\geq 2}(T_n-\xi_1)$ is independent of $(\xi_1,\eta_1)$ and has the same distribution as $\sup_{n\geq 1}T_n$. On the one hand,
$$P\{e^{\sup_n T_n}>x\} \leq 1-F(x)+P\{e^{\xi_1+T^{0}}>x\},$$
whence
$$\liminf_{x\to\infty}\frac{1-F(x)}{P\{e^{\sup_n T_n}>x\}} \geq 1-\lim_{x\to\infty}\frac{P\{e^{\xi_1+T^{0}}>x\}}{P\{e^{\sup_n T_n}>x\}} = 1-Ee^{a\xi}, \tag{1.27}$$
having utilized Breiman's theorem for the last equality. On the other hand,
$$P\{e^{\sup_n T_n}>x\} = 1-F(x)+E\big[1_{\{e^{\eta_1}\leq x\}}P\{e^{T^{0}}>xe^{-\xi_1}\,|\,\mathcal{F}_1\}\big],$$
whence
$$\limsup_{x\to\infty}\frac{1-F(x)}{P\{e^{\sup_n T_n}>x\}} \leq 1-Ee^{a\xi} \tag{1.28}$$
and
$$P(x) := e^{ax}P\{\sup\nolimits_n T_n>x\}, \qquad Q(x) := e^{ax}\big(P\{\sup\nolimits_n T_n>x\}-P\{\xi_1+T^{0}>x\}\big).$$
Since
$$e^{ax}P\{\xi_1+T^{0}>x\} = \int_{\mathbb{R}}P(x-t)\,dP\{\xi^{0}\leq t\}, \quad x\in\mathbb{R},$$
where $\xi^{0}$ is a random variable with distribution $P\{\xi^{0}\in dx\} = e^{ax}P\{\xi\in dx\}$, we conclude that $P$ is a (locally bounded) solution to the renewal equation
$$P(x) = \int_{\mathbb{R}}P(x-t)\,dP\{\xi^{0}\leq t\}+Q(x), \quad x\in\mathbb{R}. \tag{1.29}$$
where $(S_k^{0})_{k\in\mathbb{N}_0}$ is a zero-delayed ordinary random walk with jumps having the distribution of $\xi^{0}$. Observe that $Ee^{b\xi}<\infty$ for all $b\in[0,a]$. In particular, $E\xi^{-}e^{a\xi}<\infty$, which in combination with the second condition in (1.16) ensures $E\xi e^{a\xi}\in\mathbb{R}$. The convexity of $m(x) := Ee^{x\xi}$ on $[0,a]$ together with $m(0)=m(a)=1$ implies that $m$ is increasing in the left neighborhood of $a$, whence the left derivative $m'(a)$ is positive. Since $E\xi^{0}=E\xi e^{a\xi}=m'(a)$, we have proved that $E\xi^{0}\in(0,\infty)$. Further,
$$0 \leq \sum_{j\in\mathbb{Z}}e^{-ax}Q(x+\delta j) = \sum_{j\in\mathbb{Z}}e^{a\delta j}\big(P\{\max(\eta_1,\xi_1+T^{0})>x+\delta j\}-P\{\xi_1+T^{0}>x+\delta j\}\big)$$
$$= \sum_{j\in\mathbb{Z}}e^{a\delta j}\big(P\{\eta_1>x+\delta j,\ \xi_1+T^{0}<\eta_1\}-P\{\xi_1+T^{0}>x+\delta j,\ \xi_1+T^{0}<\eta_1\}\big)$$
$$\leq \sum_{j\in\mathbb{Z}}e^{a\delta j}P\{\eta_1>x+\delta j\}.$$
The assumption $Ee^{a\eta}<\infty$ guarantees that the last series converges for each $x\in\mathbb{R}$. Thus, we have checked that the series $\sum_{j\in\mathbb{Z}}Q(x+\delta j)$ converges for each $x\in\mathbb{R}$.
By the key renewal theorem for the lattice case (Proposition 6.2.6),
$$\lim_{n\to\infty}P(x+\delta n) = \frac{\delta}{E\xi e^{a\xi}}\sum_{j\in\mathbb{Z}}Q(x+\delta j) =: C(x). \tag{1.30}$$
It remains to show that $C(x)>0$. To this end, pick $y\in\mathbb{R}$ such that $p := P\{\eta>y\}>0$. For any fixed $x>0$, there exists $i\in\mathbb{Z}$ such that $x-y\in[\delta i,\delta(i+1))$. With
For $x\geq 0$, set $\nu(x) := \inf\{k\in\mathbb{N}: S_k>x\}$, with the usual convention that $\inf\emptyset=\infty$, and $\nu := \nu(0)$. Define a new probability measure⁶ $P_a$ by
for each Borel function $h:\mathbb{R}^{n+1}\to[0,\infty)$, where $E_a$ is the corresponding expectation. Since the $P$-distribution of $\xi^{0}$ is the same as the $P_a$-distribution of $S_1$, we have $E_aS_1=E\xi^{0}\in(0,\infty)$. Therefore, $(S_n)_{n\in\mathbb{N}_0}$, under $P_a$, is an ordinary random walk with positive drift, whence $E_a\nu(x)<\infty$ for each $x\geq 0$ and thereupon $E_aS_{\nu}=E_aS_1\,E_a\nu\in(0,\infty)$. Further, for each $x>0$,
$$e^{ax}P\{\sup_{k\geq 0}S_k>x\} = e^{ax}P\{\nu(x)<\infty\} = e^{ax}E_a\big[e^{-aS_{\nu(x)}}1_{\{\nu(x)<\infty\}}\big],$$
having utilized (1.32) for the second equality. Since $S_1$, under $P_a$, has a $\delta$-lattice distribution, an application of Theorem 10.3(ii) on p. 104 in [119] allows us to conclude that $S_{\nu(\delta n)}-\delta n$ converges in $P_a$-distribution as $n\to\infty$ to a random variable $Y$ with $P_a\{Y=\delta k\} = \frac{\delta}{E_aS_{\nu}}P_a\{S_{\nu}\geq\delta k\}$, $k\in\mathbb{N}_0$. This immediately implies that
$$\lim_{n\to\infty}e^{a\delta n}P\{\sup_{k\geq 0}S_k>\delta n\} = \lim_{n\to\infty}E_ae^{-a(S_{\nu(\delta n)}-\delta n)} = E_ae^{-aY}>0,$$
a result that is stronger than (1.31). The proof of Theorem 1.3.8 is complete. □
⁶ This is indeed a probability measure because, in view of the first condition in (1.16), $(e^{aS_n})_{n\in\mathbb{N}_0}$ is a nonnegative martingale with respect to the natural filtration.
Then
$$\max_{0\leq k\leq[n\cdot]}(S_k+\eta_{k+1})/a(n) \Rightarrow \sup_{t_k^{(1,\alpha)}\leq\cdot}j_k^{(1,\alpha)}, \quad n\to\infty. \tag{1.36}$$
holds for some $c>0$, then the contributions of $\max_{0\leq k\leq n}S_k$ and $\max_{1\leq k\leq n+1}\eta_k$ to the asymptotic behavior of $\max_{0\leq k\leq n}(S_k+\eta_{k+1})$ are comparable. This situation, which is more interesting than the other two, is treated in Theorem 1.3.14 given below.
Theorem 1.3.14 Suppose that (1.33) and (1.37) hold. Then
$$n^{-1/2}\max_{0\leq k\leq[n\cdot]}(S_k+\eta_{k+1}) \Rightarrow \sup_{t_k^{(c,2)}\leq\cdot}\big(vS_2(t_k^{(c,2)})+j_k^{(c,2)}\big), \quad n\to\infty. \tag{1.38}$$
Remark 1.3.16 The marginal distribution of the right-hand side of (1.39) can be explicitly computed and is given by
$$P\Big\{\sup_{t_k^{(c,1)}\leq u}\big(\gamma t_k^{(c,1)}+j_k^{(c,1)}\big)\leq x\Big\} = \begin{cases}\Big(\dfrac{x-\gamma u}{x}\Big)^{c/\gamma}, & x\geq\gamma u, \text{ if }\gamma>0,\\[4pt] \Big(\dfrac{x}{x+|\gamma|u}\Big)^{c/|\gamma|}, & x\geq 0, \text{ if }\gamma<0,\\[4pt] \exp(-cu/x), & x\geq 0, \text{ if }\gamma=0. \end{cases} \tag{1.40}$$
⁷ The only principal difference is that one should use $S_{[n\cdot]}/n \Rightarrow \gamma\Upsilon(\cdot)$ on $D$, where $\Upsilon(t)=t$ for $t\geq 0$, rather than Donsker's theorem in the form (1.54).
because $N^{(c,1)}\{(t,y): t\leq u,\ \gamma t+y>x\}$ is a Poisson random variable. It remains to note that
$$EN^{(c,1)}\{(t,y): t\leq u,\ \gamma t+y>x\} = \int_0^u\int_{[0,\infty)}1_{\{\gamma t+y>x\}}\,\mu_{c,1}(dy)\,dt = c\int_0^u(x+|\gamma|t)^{-1}\,dt = (c/|\gamma|)\big(\log(x+|\gamma|u)-\log x\big).$$
Using an analogous argument we can obtain a (rather implicit) formula for the marginal distribution of the right-hand side of (1.38):
$$P\Big\{\sup_{t_k^{(c,2)}\leq u}\big(vS_2(t_k^{(c,2)})+j_k^{(c,2)}\big)\leq x\Big\} = E\exp\Big(-c\int_0^u\frac{1_{\{vS_2(t)<x\}}}{(x-vS_2(t))^2}\,dt\Big).$$
It does not seem possible to simplify the last expression. A similar formula for the finite-dimensional distributions can be found in Proposition 1 of [256].
Let $C := C[0,\infty)$ be the set of continuous functions defined on $[0,\infty)$. Denote by $M_p$ the set of Radon point measures $\mu$ on $[0,\infty)\times(-\infty,+\infty]$ which satisfy
$$\mu\big([0,T]\times((-\infty,-\delta]\cup[\delta,+\infty])\big)<\infty \tag{1.41}$$
for all $\delta>0$ and all $T>0$. The set $M_p$ is endowed with the vague topology. Define the mapping $F$ from $D\times M_p$ to $D$ by
$$F(f,\mu)(t) := \begin{cases}\sup_{k:\,\tau_k\leq t}\big(f(\tau_k)+y_k\big), & \text{if }\tau_k\leq t\text{ for some }k,\\ f(0), & \text{otherwise,}\end{cases}$$
where $\mu=\sum_k\varepsilon_{(\tau_k,y_k)}$. Assumption (1.41) ensures that $F(f,\mu)\in D$. If (1.41) does not hold, $F(f,\mu)$ may lose right-continuity.
Theorem 1.3.17 For $n\in\mathbb{N}$, let $f_n\in D$ and $\mu_n\in M_p$. Assume that $f_0\in C$ and
• $\mu_0\big([0,\infty)\times(-\infty,0)\big)=0$ and $\mu_0\big(\{0\}\times(-\infty,+\infty]\big)=0$;
• $\mu_0\big((r_1,r_2)\times(0,+\infty]\big)\geq 1$ for all positive $r_1$ and $r_2$ such that $r_1<r_2$;
• $\mu_0=\sum_k\varepsilon_{(\tau_k^{(0)},y_k^{(0)})}$ does not have clustered jumps, i.e., $\tau_k^{(0)}\neq\tau_j^{(0)}$ for $k\neq j$.
If
$$\lim_{n\to\infty}f_n = f_0 \tag{1.42}$$
on $D$ and
$$\lim_{n\to\infty}\mu_n = \mu_0 \tag{1.43}$$
on $M_p$, then
$$\lim_{n\to\infty}F(f_n,\mu_n) = F(f_0,\mu_0) \tag{1.44}$$
in the $J_1$-topology on $D$.
Proof It suffices to prove convergence (1.44) on $D[0,T]$ for any $T>0$ such that $\mu_0\big(\{T\}\times(0,+\infty]\big)=0$ (the last condition ensures that $F(f_0,\mu_0)$ is continuous at $T$). Let $d_T$ be the standard Skorokhod metric on $D[0,T]$. Then
$$d_T\big(F(f_n,\mu_n),F(f_0,\mu_0)\big) \leq d_T\big(F(f_n,\mu_n),F(f_0,\mu_n)\big)+d_T\big(F(f_0,\mu_n),F(f_0,\mu_0)\big)$$
$$\leq \sup_{t\in[0,T]}|F(f_n,\mu_n)(t)-F(f_0,\mu_n)(t)|+d_T\big(F(f_0,\mu_n),F(f_0,\mu_0)\big),$$
having utilized the fact that $d_T$ is dominated by the uniform metric. It follows from (1.42) and the continuity of $f_0$ that $\lim_{n\to\infty}f_n=f_0$ uniformly on $[0,T]$. Therefore we are left with checking that
$$\lim_{n\to\infty}d_T\big(F(f_0,\mu_n),F(f_0,\mu_0)\big)=0. \tag{1.45}$$
$$d_T\big(F(f_0,\mu_n),F(f_0,\mu_0)\big) \leq \sup_{t\in[0,T]}|\lambda_n(t)-t|$$
$$+\sup_{t\in[0,T]}\Big|\sup_{\lambda_n(\tau_k^{(n)})\leq t}\big(f_0(\tau_k^{(n)})+y_k^{(n)}\big)-\sup_{\lambda_n(\bar\tau_i^{(n)})\leq t}\big(f_0(\bar\tau_i^{(n)})+\bar y_i^{(n)}\big)\Big|$$
$$+\sum_{i=1}^{p}\big(|f_0(\bar\tau_i^{(n)})-f_0(\bar\tau_i)|+|\bar y_i^{(n)}-\bar y_i|\big),$$
where, for $n\in\mathbb{N}$, $(\tau_k^{(n)},y_k^{(n)})$ are the points of $\mu_n$, i.e., $\mu_n=\sum_k\varepsilon_{(\tau_k^{(n)},y_k^{(n)})}$. The relation $\lim_{n\to\infty}\sup_{t\in[0,T]}|\lambda_n(t)-t|=0$ is easily checked. Using (1.46) we infer
$$\lim_{n\to\infty}\sum_{i=1}^{p}\big(|f_0(\bar\tau_i^{(n)})-f_0(\bar\tau_i)|+|\bar y_i^{(n)}-\bar y_i|\big)=0 \tag{1.47}$$
and
$$\Big|\sup_{\lambda_n(\tau_k^{(n)})\leq t}\big(f_0(\tau_k^{(n)})+y_k^{(n)}\big)-\sup_{\lambda_n(\bar\tau_i^{(n)})\leq t}\big(f_0(\bar\tau_i^{(n)})+\bar y_i^{(n)}\big)\Big|$$
$$= \sup_{\lambda_n(\tau_k^{(n)})\leq t}\big(f_0(\tau_k^{(n)})+y_k^{(n)}\big)-\sup_{\lambda_n(\bar\tau_i^{(n)})\leq t}\big(f_0(\bar\tau_i^{(n)})+\bar y_i^{(n)}\big). \tag{1.48}$$
.n/ .n/
Case I in which n .k / t and k 2 Œsj ; sjC1 for j 2 N. Since .sj ; sjC1 / \
fN1 ; : : : ; Np g ¤ ˛, there exists Nl 2 .sj1 ; sj /. In particular, Nl must satisfy
.n/ .n/ .n/ .n/
sup . f0 .Ni / C yN i /
.n/ .n/ .n/ .n/
f0 .k / C yk
.n/
n .Ni /t
f0 .k / C yk . f0 .Nl / C yN l / f0 .k / C f0 .Nl /
.n/ .n/ .n/ .n/ .n/ .n/
sup . f0 .Ni / C yN i /
.n/ .n/ .n/ .n/
f0 .k / C yk
.n/
n .Ni /t
f0 .k / C yk . f0 .N1 / C yN 1 /
.n/ .n/ .n/ .n/
Case III in which k 2 Œ0; s1 and n .N1 / > t. Noting that the set fi 2 f1; : : : ; pg W
.n/ .n/
sup . f0 .Ni / C yN i / D f0 .k / C yk f0 .0/
.n/ .n/ .n/ .n/ .n/ .n/
f0 .k / C yk
.n/
n .Ni /t
Sending $|\alpha|$ and $\epsilon$ to zero in (1.52) and (1.53), and recalling (1.47), we arrive at (1.45). The proof of Theorem 1.3.17 is complete. □
Lemma 1.3.18 $N^{(a,b)}$ satisfies with probability one all the assumptions imposed on $\mu_0$ in Theorem 1.3.17.
Proof Plainly, $N^{(a,b)}\big([0,\infty)\times(-\infty,0)\big)=0$ a.s. and $N^{(a,b)}\big(\{0\}\times(-\infty,+\infty]\big)=0$ a.s. Further, $N^{(a,b)}\big([0,T]\times((-\infty,-\delta]\cup[\delta,+\infty])\big)<\infty$ a.s. for all $\delta>0$ and all $T>0$ because $\mu_{a,b}\big((-\infty,-\delta]\cup[\delta,\infty)\big)=\mu_{a,b}\big([\delta,\infty)\big)<\infty$, and $N^{(a,b)}\big((r_1,r_2)\times(0,+\infty]\big)\geq 1$ a.s. whenever $0<r_1<r_2$ because $\mu_{a,b}\big((0,\infty)\big)=\infty$.
Fix any $T>0$ and $\delta>0$. In order to show that $N^{(a,b)}$ does not have clustered jumps a.s. it suffices to check this property for the restriction of $N^{(a,b)}$ to $[0,T]\times(\delta,+\infty]$. This is done on p. 223 in [237]. □
Proof of Theorem 1.3.14 According to Donsker's theorem, assumption (1.33) implies
$$n^{-1/2}S_{[n\cdot]} \Rightarrow vS_2(\cdot) \tag{1.54}$$
on $D$. It is a standard fact of the theory of point processes that condition (1.37) entails
$$\sum_{k\geq 0}1_{\{\eta_{k+1}>0\}}\varepsilon_{(k/n,\,\eta_{k+1}/n^{1/2})} \Rightarrow \widehat N^{(c,2)} \tag{1.55}$$
on $M_p$; see, for instance, Theorem 6.3 on p. 180 in [237]. Here, $\widehat N^{(c,2)}$ has the same distribution as $N^{(c,2)}$ but may depend on $S_2$.
The distribution of $\widehat N^{(c,2)}$ is completely determined by the distributions of the restrictions of $\widehat N^{(c,2)}$ to the sets $[0,s]\times(\delta,\infty]$. Thus, in order to prove that $S_2$ and $\widehat N^{(c,2)}$ are actually independent, it suffices to check that $\widehat N^{(c,2)}\big([0,s]\times(\delta,\infty]\big)$ and $S_2$ are independent for each fixed $s>0$ and each fixed $\delta>0$. Fix $\delta>0$ and $s>0$
and put $\theta_0^{\leq,n}:=0$ and $\theta_0^{>,n}:=0$, and then
$$\theta_k^{\leq,n} := \inf\{j>\theta_{k-1}^{\leq,n}: \eta_j\leq\sqrt{n}\,\delta\} \quad\text{and}\quad \theta_k^{>,n} := \inf\{j>\theta_{k-1}^{>,n}: \eta_j>\sqrt{n}\,\delta\}, \quad k\in\mathbb{N}.$$
Then $(\xi_k^{\leq,n})_{k\in\mathbb{N}}:=(\xi_{\theta_k^{\leq,n}})_{k\in\mathbb{N}}$ are i.i.d. with generic copy $\xi^{\leq,n}$ having the distribution $P\{\xi^{\leq,n}\in\cdot\}=P\{\xi\in\cdot\,|\,\eta\leq\sqrt{n}\,\delta\}$, while $(\xi_k^{>,n})_{k\in\mathbb{N}}:=(\xi_{\theta_k^{>,n}})_{k\in\mathbb{N}}$ are i.i.d. with generic copy $\xi^{>,n}$ having the distribution $P\{\xi^{>,n}\in\cdot\}=P\{\xi\in\cdot\,|\,\eta>\sqrt{n}\,\delta\}$. For any $\epsilon>0$,
$$P\{|\xi^{>,n}|>\sqrt{n}\,\epsilon\} \leq P\{|\xi|>\sqrt{n}\,\epsilon\}/P\{\eta>\sqrt{n}\,\delta\} \leq c_1\delta^{2}n\,P\{|\xi|>\sqrt{n}\,\epsilon\}$$
converges to zero in probability. Indeed, for any $r\in\mathbb{N}$ and all $\epsilon>0$,
$$P\Big\{n^{-1/2}\sum_{k=1}^{K^{>}_{[nT]}}|\xi_k^{>,n}|>\epsilon\Big\} \leq P\Big\{n^{-1/2}\sum_{k=1}^{r}|\xi_k^{>,n}|>\epsilon\Big\}+P\{K^{>}_{[nT]}>r\}.$$
on $D$. Observe further⁹ that $n^{-1}K_{[n\cdot]}\Rightarrow\Upsilon(\cdot)$ on $D$, where $\Upsilon(t)=t$ for $t\geq 0$, which implies
$$\Big(n^{-1/2}\sum_{j=1}^{K_{[n\cdot]}}\xi_j^{\leq,n},\ n^{-1/2}\sum_{j=1}^{[n\cdot]}\xi_j^{\leq,n}\Big) \Rightarrow \big(vS_2(\cdot),\ vS_2(\cdot)\big)$$
on $D\times D$. Since $K^{>}_{[ns]}$ is independent of $(\xi_k^{\leq,n})_{k\in\mathbb{N}}$, we conclude that $S_2$ and $\widehat N^{(c,2)}\big([0,s]\times(\delta,\infty]\big)$ are independent, as claimed.
Using the independence of $S_2$ and $\widehat N^{(c,2)}$, relations (1.54) and (1.55) can be combined into the joint convergence
$$\Big(n^{-1/2}S_{[n\cdot]},\ \sum_{k\geq 0}1_{\{\eta_{k+1}>0\}}\varepsilon_{(k/n,\,\eta_{k+1}/n^{1/2})}\Big) \Rightarrow \big(vS_2(\cdot),\ \widehat N^{(c,2)}\big)$$
⁹ The weak convergence of finite-dimensional distributions is immediate from $K_{[nt]}+K^{>}_{[nt]}=[nt]$ and the fact that $K^{>}_{[nt]}$ converges in distribution. This extends to the functional convergence because the limit is continuous and $K_{[nt]}$ is a.s. nondecreasing in $t$ (recall Pólya's extension of Dini's theorem: convergence of monotone functions to a continuous limit is locally uniform).
$f_0=vS_2$, $\mu_n=\sum_{k\geq 0}\varepsilon_{(k/n,\,\eta_{k+1}/n^{1/2})}$ and $\mu_0=\widehat N^{(c,2)}$. We already know that conditions (1.42) and (1.43) are fulfilled. Furthermore, by Lemma 1.3.18, $\widehat N^{(c,2)}$ satisfies with probability one all the assumptions imposed on $\mu_0$ in Theorem 1.3.17. Hence Theorem 1.3.17 is indeed applicable with our choice of $f_n$ and $\mu_n$, and (1.38) follows. The proof of Theorem 1.3.14 is complete. □
Proof of Proposition 1.3.13 (i) Fix any $T>0$. Since, for all $\epsilon>0$,
$$P\Big\{n^{-1/2}\max_{0\leq s\leq T}\eta_{[ns]+1}>\epsilon\Big\} = 1-\big(P\{\eta\leq\epsilon n^{1/2}\}\big)^{[nT]+1},$$
which implies
$$\sum_{k\geq 0}1_{\{\eta_{k+1}>0\}}\varepsilon_{(k/n,\,\eta_{k+1}/a_n)} \Rightarrow N^{(1,\alpha)}$$
on $M_p$ and thereupon
$$\Big(S_{[n\cdot]}/a_n,\ \sum_{k\geq 0}1_{\{\eta_{k+1}>0\}}\varepsilon_{(k/n,\,\eta_{k+1}/a_n)}\Big) \Rightarrow \big(\Xi(\cdot),\ N^{(1,\alpha)}\big).$$
By Lemma 1.3.18, $N^{(1,\alpha)}$ satisfies with probability one all the assumptions imposed on $\mu_0$ in Theorem 1.3.17. The proof of Proposition 1.3.13 is complete. □
$$\rho(x) := \sup\{n\in\mathbb{N}: T_n\leq x\}$$
with the usual conventions that $\sup\emptyset=0$ and $\inf\emptyset=+\infty$. Let us further denote by $\nu(x)$, $\bar N(x)$ and $\bar\rho(x)$ the corresponding quantities for the ordinary random walk $(S_n)_{n\geq 0}$, which is obtained in the special case $\eta=0$ a.s. after a time shift. For instance, $\nu(x) := \inf\{n\in\mathbb{N}: S_n>x\}$ for $x\in\mathbb{R}$. We shall write $\nu$ for $\nu(0)$, $\bar N$ for $\bar N(0)$ and $\bar\rho$ for $\bar\rho(0)$.
Our aim is to find criteria for the a.s. finiteness of $\tau(x)$, $N(x)$ and $\rho(x)$ and for the finiteness of their power and exponential moments. We first discuss the a.s. finiteness. As far as $N(x)$ and $\rho(x)$ are concerned no surprise occurs: the situation is analogous to that for ordinary random walks.
Theorem 1.4.1 The following assertions are equivalent:
(i) (T_n)_{n∈ℕ} is positively divergent.
(ii) N(x) < ∞ a.s. for some/all x ∈ ℝ.
(iii) ρ(x) < ∞ a.s. for some/all x ∈ ℝ.
The situation around τ(x) is different. Plainly, if lim sup_{n→∞} T_n = +∞ a.s., then τ(x) < ∞ a.s. for all x ∈ ℝ. On the other hand, one might expect in the opposite case lim_{n→∞} T_n = −∞ a.s. that P{τ(x) = ∞} > 0 for all x ≥ 0, for this holds true for ordinary random walks. Namely, if lim_{n→∞} S_n = −∞ a.s., then P{sup_{n≥1} S_n ≤ 0} = P{τ = ∞} > 0. The following result shows that this conclusion may fail for a PRW. It further provides a criterion for the a.s. finiteness of τ(x) formulated in terms of (ξ, η).
Theorem 1.4.2 Let (T_n)_{n∈ℕ} be positively divergent or oscillating. Then τ(x) < ∞ a.s. for all x ∈ ℝ. Let (T_n)_{n∈ℕ} be negatively divergent and x ∈ ℝ. Then τ(x) < ∞ a.s. if, and only if, P{ξ < 0, η ≤ x} = 0.
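The second part of Theorem 1.4.2 can be explored numerically. The sketch below is an illustration, not part of the original text: the specific joint law of (ξ, η) is a hypothetical choice made only so that P{ξ < 0, η ≤ x} = 0, and it checks that the first passage time τ(x) is finite on every simulated path even though the perturbed random walk is negatively divergent.

```python
import random

def tau(x, horizon=1000, rng=random):
    """First passage time tau(x) = inf{n : T_n > x} of the perturbed
    random walk T_n = S_{n-1} + eta_n.  Returns None if the level x is
    not crossed within the horizon."""
    s = 0.0  # running value of S_{n-1}
    for n in range(1, horizon + 1):
        # E xi = 0.6*(-2) + 0.4*1 = -0.8 < 0, so T_n -> -infinity a.s.
        xi = -2.0 if rng.random() < 0.6 else 1.0
        # Couple eta with xi so that the event {xi < 0, eta <= x} is null.
        eta = x + 1.0 if xi < 0 else 0.0
        if s + eta > x:          # T_n = S_{n-1} + eta_n exceeds x
            return n
        s += xi                  # update S_n
    return None

random.seed(1)
taus = [tau(x=0.0) for _ in range(10_000)]
print(all(t is not None for t in taus))  # tau(0) is finite on every path
```

Despite lim T_n = −∞, the coupling forces T_n above the level x as soon as a negative step occurs, in line with the criterion of the theorem.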
The following theorems are on the finiteness of exponential moments of τ(x), N(x), and ρ(x).

  e^a P{ξ = 0, η ≤ x} < 1  (1.57)

for any x ∈ ℝ, where a(x) ∈ (0, ∞] equals the supremum of all positive a satisfying (1.57). As a function of x, a(x) is nonincreasing with lower bound −log P{ξ = 0}.
(b) If ξ > 0 a.s., then a(x) = ∞ for all x ∈ ℝ, thus Ee^{aN(x)} < ∞ for any a > 0 and x ∈ ℝ.
(c) If P{ξ < 0} > 0, then the following assertions are equivalent:
Theorem 1.4.5 Let (T_n)_{n∈ℕ} be a positively divergent PRW, a > 0 and R := −log inf_{t≥0} Ee^{−tξ}.
(a) Assume that P{ξ ≥ 0} = 1. Let x ∈ ℝ and assume that P{η ≤ x} > 0. Then the following assertions are equivalent:
Theorem 1.4.7 Let (T_n)_{n∈ℕ} be a positively divergent PRW and p > 0. Then the following assertions are equivalent:
Proofs of Theorems 1.4.3, 1.4.5 and 1.4.7 can be found in [8]. They will not be given here because, while the proofs concerning τ(x) are rather technical, the proofs concerning ρ(x) rely on arguments very similar to those exploited in Section 1.3.2.
Proof of Theorem 1.4.1 If either N(x) or ρ(x) is a.s. finite for some x, then lim inf_{n→∞} T_n > −∞ a.s. Hence, by Theorem 1.2.1, (T_n)_{n∈ℕ} must be positively divergent. The converse assertion holds trivially. □
One half of the proof of Theorem 1.4.2 is settled by the following lemma.
Lemma 1.5.1 Let x ∈ ℝ, P{ξ < 0, η ≤ x} = 0 and p := P{η ≤ x} < 1. Then P{τ(x) > n} ≤ p^n for n ∈ ℕ. If p = 1, then lim sup_{n→∞} T_n = +∞ a.s.
Proof Let x ∈ ℝ and P{ξ < 0, η ≤ x} = 0. Then p = 1 entails ξ ≥ 0 a.s., thus lim_{n→∞} S_n = +∞ a.s. (recalling our standing assumption) and thus, by Theorem 1.2.1, lim sup_{n→∞} T_n = +∞ a.s.
Now assume that p < 1. Then σ := inf{n ∈ ℕ : η_n > x} has a geometric distribution, namely P{σ > n} = p^n for n ∈ ℕ. By assumption, ξ_k ≥ 0 a.s. for k = 1, …, n−1 on {σ = n}, whence T_n = ξ_1 + ⋯ + ξ_{n−1} + η_n ≥ η_n > x a.s. on {σ = n} and therefore

  P{τ(x) > n} ≤ P{σ > n} = p^n

for any n ∈ ℕ. □
Proof of Theorem 1.4.2 The first assertion is obvious. In view of Lemma 1.5.1, it remains to argue that, given a negatively divergent PRW (T_n), the a.s. finiteness of τ(x) for some x ∈ ℝ implies P{ξ < 0, η ≤ x} = 0.
Suppose, on the contrary, that P{ξ < 0, η ≤ x} > 0. Then we can fix ε > 0 such that P{ξ ≤ −ε, η ≤ x} > 0. By negative divergence, sup_{n≥1} T_n =: T < ∞ a.s., so that we can further pick y ∈ ℝ such that P{T ≤ y} > 0. Define m := inf{k ∈ ℕ₀ : kε ≥ y − x}. Then
Proof of Theorem 1.4.4 By Theorem 1.2.1, positive divergence of (T_n)_{n∈ℕ} entails lim_{n→∞} S_n = +∞ a.s.
We shall use results developed in Section 3.4. Consider the random process with immigration Y with generic response process X(t) := Σ_{k=1}^{σ̂(0)} 1{η_k ≤ t} and generic renewal increment ξ̂ := S_{σ̂(0)} > 0 having distribution P{ξ ∈ · | ξ > 0}. Then it is easily seen that N(x) = Y(x) for all x ∈ ℝ. Therefore, by Theorem 3.4.1 and Remark 3.4.2, Ee^{aN(x)} < ∞ if, and only if,

  ∫_{[0,∞)} ( E exp( a Σ_{k=1}^{σ̂(0)} 1{η_k ≤ x−y} ) − 1 ) dŪ(y) < ∞,  (1.66)

where Ū is the renewal function associated with ξ̂. It satisfies Erickson's inequality (formula (6.5)), which reads

  J₊(y) P{ξ > 0} ≤ Ū(y) ≤ 2 J₊(y) P{ξ > 0}.  (1.67)

Further,

  E exp( a Σ_{k=1}^{σ̂(0)} 1{η_k ≤ x} ) = ( E e^{a1{η≤x}} 1{ξ>0} ) / ( 1 − E e^{a1{η≤x}} 1{ξ=0} ) < ∞  (1.68)

and

  ∫_{[0,∞)} ( E exp( a Σ_{k=1}^{σ̂(0)} 1{η_k ≤ x−y} ) − 1 ) dŪ(y)
   = ∫_{[0,∞)} ( E e^{a1{η≤x−y}} − 1 ) / ( 1 − E e^{a1{η≤x−y}} 1{ξ=0} ) dŪ(y)
   = ∫_{[0,∞)} (e^a − 1) P{η ≤ x−y} / ( 1 − E e^{a1{η≤x−y}} 1{ξ=0} ) dŪ(y)
   ≤ ( (e^a − 1) / ( 1 − E e^{a1{η≤x}} 1{ξ=0} ) ) ∫_{[0,∞)} P{(x−η)⁺ ≥ y} dŪ(y)
   ≤ ( (e^a − 1) / ( 1 − E e^{a1{η≤x}} 1{ξ=0} ) ) 2 P{ξ > 0} E J₊((x−η)⁺) < ∞,

where (1.67) has been utilized for the penultimate inequality and (1.65) for the last.
Since, conversely, (1.57) follows directly from (1.66), we have thus proved the equivalence of (1.56) and (1.57). Checking the remaining assertions is easy and therefore omitted.
(c): (1.59)⇒(1.60). Since P{ξ < 0, η ≤ x} → P{ξ < 0} > 0 as x → ∞, we can choose x ∈ ℝ so large that P{ξ < 0, η ≤ x} > 0. Using that N(x) ≥ τ(x) − 1, we infer from (1.59) that Ee^{aτ(x)} < ∞. According to Theorem 1.4.3(b), this implies Ee^{aτ} < ∞. The latter is equivalent to (1.60) by Theorem 6.3.5.
for each Borel function h : ℝ^{n+1} → ℝ₊, where Ê denotes the expectation with respect to P̂.
Set σ₀ := 0 and, for n ∈ ℕ, let σ_n denote the nth strictly increasing ladder epoch of (S_k)_{k∈ℕ₀}, i.e., σ₁ := τ, and let U^> denote the renewal function of the corresponding ladder height sequence. Then, according to Theorem 3.4.3 (with X(t) = 1{η≤t}), it suffices to prove that

  r^>(0) := ∫_{[0,∞)} l(y)^{−1} dU^>(y) < ∞,  (1.71)

where l(x) := Ê ∏_{n=1}^{τ} e^{a1{T_n ≤ x}}, x ∈ ℝ.
For x ∈ ℝ, set

  β(x) := sup{ n : T_n ≤ x }.
1.5 Proofs for Section 1.4 35
Now

  Ee^{aβ(x)} = P{ min_{n≥1} T_n > x } + Σ_{n≥1} e^{an} P{ T_n ≤ x, min_{k≥n+1} T_k > x }
   ≤ 1 + Σ_{n≥1} e^{an} P{ T_n ≤ x },
where (1.69) has been utilized in the last step. Now let σ_{w,0} := 0 and σ_{w,n} := inf{ k > σ_{w,n−1} : S_k ≤ S_{σ_{w,n−1}} } for n ≥ 1, where inf ∅ = ∞. We now make use of the following duality (see, for instance, Theorem 2.3 on p. 224 in [18])

  Σ_{n≥0} P̂{ S_n ∈ ·, τ > n } = Σ_{n≥0} P̂{ S_{σ_{w,n}} ∈ ·, σ_{w,n} < ∞ }.  (1.74)

  Σ_{n≥0} Ê[ e^{S_{σ_{w,n}}} 1{σ_{w,n} < ∞} ∫_ℝ U^>( (z + S_{σ_{w,n}})⁻ ) dF(z) ]
   ≤ Σ_{n≥0} Ê[ e^{S_{σ_{w,n}}} 1{σ_{w,n} < ∞} ( ∫_ℝ U^>(z⁻) dF(z) + U^>( S_{σ_{w,n}}⁻ ) ) ],  (1.75)

where in the last step we have used the subadditivity of y ↦ y⁻, y ∈ ℝ, and of U^>(y), y ≥ 0 (see (6.3)). Here, ∫_ℝ U^>(z⁻) dF(z) = EU^>(η⁻) is finite due to (1.65) and the fact that

  U^>(y) ≤ 2y / ∫₀^y P{S_τ > x} dx ≤ 2y / ∫₀^y P{S₁ > x} dx = 2J₊(y), y > 0,  (1.76)

where the first inequality follows from the fact that S_{σ_{w,n}} ≤ 0 on {σ_{w,n} < ∞}. If, on the other hand, P̂{σ_{w,1} < ∞} = 1, then we can drop the indicators in (1.77) and get Ê e^{S_{σ_{w,n}}} 1{σ_{w,n} < ∞} = Ê e^{S_{σ_{w,n}}}. Since (S_{σ_{w,n}} − S_{σ_{w,n−1}})_{n∈ℕ} are i.i.d. random variables, we infer Ê e^{S_{σ_{w,n}}} = ( Ê e^{S_{σ_{w,1}}} )^n for each n ∈ ℕ. By the definition of σ_{w,1}, P̂{S_{σ_{w,1}} ≤ 0} = 1. Furthermore, P̂{S_{σ_{w,1}} < 0} > 0 because P{ξ < 0} > 0 by assumption. Consequently, Ê e^{S_{σ_{w,1}}} < 1. From these facts, we derive the convergence of the series in (1.77) as follows:

  Σ_{n≥0} Ê e^{S_{σ_{w,n}}} = Σ_{n≥0} ( Ê e^{S_{σ_{w,1}}} )^n = 1 / ( 1 − Ê e^{S_{σ_{w,1}}} ) < ∞.
Proof of Theorem 1.4.6 Assume first that ξ ≥ 0 a.s. and fix an arbitrary x ∈ ℝ. According to parts (a) and (b) of Theorem 1.4.4, whenever N(x) < ∞ a.s. it has some finite exponential moments. In particular, E(N(x))^p < ∞ for every p > 0. Therefore, from now on, we assume that P{ξ < 0} > 0.
(1.63)⇔(1.64) follows from Theorem 6.3.7.
(1.63)⇒(1.62). For any x ∈ ℝ, EJ₊(η⁻) < ∞ is equivalent to EJ₊((η−x)⁻) < ∞. Further (by the equivalence (1.63)⇔(1.64)) we know that finiteness of E(N(x))^p for the ordinary random walk for some x ≥ 0 implies its finiteness for all x ≥ 0. Thus, replacing η by η−x, it suffices to prove that E(N(0))^p < ∞ for the perturbed random walk whenever E(N(0))^p < ∞ for the ordinary one.
Case p ∈ (0, 1) Using the subadditivity of the function x ↦ x^p, x ≥ 0, we obtain, a.s.,

  N(0)^p ≤ ( Σ_{k≥1} 1{T_k ≤ 0, S_{k−1} ≤ 0} )^p + ( Σ_{k≥1} 1{T_k ≤ 0, S_{k−1} > 0} )^p
       ≤ N(0)^p + ( Σ_{k≥1} 1{0 < S_{k−1} ≤ −η_k} )^p,

where the leftmost N(0) refers to the perturbed random walk and the N(0) on the right to the ordinary one. Further,

  Σ_{k≥1} P{0 < S_{k−1} ≤ x} = E Σ_{k=0}^{∞} ( U^>(x − S_k) − U^>(−S_k) ) ≤ …,

having utilized the subadditivity of the function x ↦ U^>(x), x ≥ 0 (see (6.3)), for the penultimate step and (1.76) for the last. Now (1.78) follows from the last inequality and (1.65).
Case p ≥ 1 According to Theorem 6.3.7, (1.64) implies
Let σ₀ = 0 and σ_n = inf{k > σ_{n−1} : S_k > S_{σ_{n−1}}}. Retaining the notation of Section 3.4 (but replacing ξ by ξ′), let X_n(x) := Σ_{k=σ_{n−1}+1}^{σ_n} 1{T_k ≤ x} and ξ′_n := S_{σ_n} − S_{σ_{n−1}}, and observe that Y(x) = N(x). Since the so defined ξ′_n are a.s. positive, we can apply Theorem 3.4.4 to conclude that it is enough to show that, for every q ∈ [1, p],

  ∫_{[0,∞)} E ( Σ_{k=1}^{τ} 1{T_k ≤ y} )^q dU^>(y) < ∞,  (1.80)

where, as above, U^> is the renewal function of (S_{σ_n})_{n∈ℕ₀}. Fix any q ∈ [1, p]. For x ≥ 0, it holds that

  ( Σ_{k=1}^{τ} 1{T_k ≤ x} )^q = ( Σ_{k=1}^{τ} ( 1{S_{k−1} ≤ x − η_k, η_k ≤ x} + 1{S_{k−1} ≤ x − η_k, η_k > x} ) )^q
   ≤ 2^{q−1} ( ( Σ_{k=1}^{τ} 1{η_k ≤ x} )^q + ( Σ_{k=1}^{τ} 1{S_{k−1} ≤ x − η_k, η_k > x} )^q )
   =: 2^{q−1} ( I₁(x) + I₂(x) ),

and the contribution of the first summand is at most B_q E τ^q E U^>(η⁻). Here, E τ^q < ∞ is a consequence of (1.79), and EU^>(η⁻) < ∞ follows from (1.76) and (1.65).
Turning to the term involving I₂, notice that from the inequality

  (x₁ + ⋯ + x_m)^q ≤ m^{q−1} ( x₁^q + ⋯ + x_m^q ), x₁, …, x_m ≥ 0,

and the subadditivity of the function x ↦ U^>(x), x ≥ 0 (see (6.3)), it follows that

  ∫_{[0,∞)} I₂(y) dU^>(y) ≤ τ^{q−1} Σ_{k=1}^{τ} ∫_{[0,∞)} 1{S_{k−1} ≤ y − η_k, η_k > y} dU^>(y)
   = τ^{q−1} Σ_{k=1}^{τ} ( U^>(η_k) − U^>( (S_{k−1} + η_k)⁺ ) )
   ≤ τ^{q−1} Σ_{k=0}^{τ−1} U^>(−S_k)
   ≤ τ^{q−1} Σ_{k=0}^{τ−1} U^>( ξ₁⁻ + ⋯ + ξ_k⁻ )
   ≤ τ^{q−1} Σ_{k=0}^{τ−1} ( 1 + U^>(ξ₁⁻) + ⋯ + U^>(ξ_k⁻) )
   = τ^{q−1} ( τ + Σ_{k=1}^{τ−1} (τ − k) U^>(ξ_k⁻) )
   ≤ τ^q + τ^q Σ_{k=1}^{τ} U^>(ξ_k⁻).

By Hölder's inequality,

  E τ^q Σ_{k=1}^{τ} U^>(ξ_k⁻) ≤ ( E τ^{q+1} )^{q/(q+1)} ( E ( Σ_{k=1}^{τ} U^>(ξ_k⁻) )^{q+1} )^{1/(q+1)}.

The finiteness of the first factor is secured by (1.79). According to Theorem 5.2 on p. 24 in [119], the second factor is finite provided E τ^{q+1} < ∞ and E ( U^>(ξ⁻) )^{q+1} < ∞. The former follows from (1.79), the latter from (1.76) and (1.64). Thus we have proved that E ∫_{[0,∞)} I₂(y) dU^>(y) < ∞, hence (1.80).
(1.62)⇒(1.63). Assume that E(N(x))^p < ∞.
Case p ∈ (0, 1) We start by showing that, without loss of generality, we can assume that (ξ_k)_{k∈ℕ} and (η_k)_{k∈ℕ} are independent. Let (η′_n)_{n∈ℕ} be a sequence of i.i.d. copies of η and assume that this sequence is independent of the sequence ((ξ_n, η_n))_{n∈ℕ}. Define T′_n := S_{n−1} + η′_n, n ∈ ℕ, and F′_n := σ((ξ_k, η_k), η′_k : k = 1, …, n). Then

  P(T_n ≤ x | F′_{n−1}) = P(η_n ≤ x − S_{n−1} | F′_{n−1}) = F(x − S_{n−1}) a.s.

and

  P(T′_n ≤ x | F′_{n−1}) = P(η′_n ≤ x − S_{n−1} | F′_{n−1}) = F(x − S_{n−1}) a.s.,

that is, the sequences (1{T_n ≤ x})_{n∈ℕ} and (1{T′_n ≤ x})_{n∈ℕ} of nonnegative random variables are tangent. Hence, by Theorem 2 in [126],

  E ( Σ_{n≥1} 1{S_{n−1} + η′_n ≤ x} )^p ≤ c_p E ( Σ_{n≥1} 1{S_{n−1} + η_n ≤ x} )^p

for an appropriate constant c_p > 0. Since (ξ_k)_{k∈ℕ} and (η′_k)_{k∈ℕ} are independent, we may work under the additional assumption of independence between the random walk and the perturbing sequence. In the following, we do not introduce new notation to indicate this feature.
Let y ≤ x be such that P{η ≤ y} > 0 and let A := {N(x − y) > 0}. Observe that P(A) > 0 since we assume that P{ξ < 0} > 0. The following inequality holds a.s. on A:

  N(x)^p ≥ ( Σ_{k≥1} 1{S_{k−1} ≤ x − y} 1{η_k ≤ y} )^p
   = N(x − y)^p ( Σ_{k≥1} 1{S_{k−1} ≤ x − y} 1{η_k ≤ y} / N(x − y) )^p
   ≥ N(x − y)^{p−1} Σ_{k≥1} 1{S_{k−1} ≤ x − y} 1{η_k ≤ y},

where for the second inequality the concavity of t ↦ t^p, t ≥ 0, has been used. Taking expectations gives

  ∞ > E N(x)^p ≥ E ( 1_A N(x − y)^{p−1} Σ_{k≥1} 1{S_{k−1} ≤ x − y} 1{η_k ≤ y} ),

where at the last step the convex function inequality (Theorem 3.2 in [63]), applied to t ↦ t^p, has been utilized. An appeal to Lemma 6.3.3 completes the proof. □
Sometimes in the literature the term 'perturbed random walk' has been used to denote random sequences other than those defined in (1.1). See, for instance, [66, 73, 156, 189, 190, 265] and Section 6 in [119]. The last four references are concerned with nonlinear renewal theory, in which a very different class of perturbations is considered. In particular, the perturbations must be uniformly continuous in probability and satisfy some other conditions.
Theorem 1.3.1 and Proposition 1.3.4 were proved in [137] via a more complicated argument.
Theorem 1.3.5 seems to be new.
Theorem 1.3.6, which is a much strengthened version of Theorem 3 in [16], was proved in [137]. A similar result was mentioned in Example 2 of [111]; however, neither a precise formulation nor a proof was given in [111].
While the nonlattice case of Theorem 1.3.8 is a particular case of Theorem 5.2 in
[107], the lattice case was settled in [157]. Other interesting results concerning the
tail behavior of supn1 Tn can be found in [123, 224, 225, 241].
Inequality (1.17) in Lemma 1.3.9 was obtained in Lemma 1(a) of [3].
Lemma 1.3.12 is an extended version of Lemma 2.2 in [9].
Formula (1.22) was earlier obtained in [107].
Proposition 1.3.13 seems to be new. Its one-dimensional version was proved in Theorem 3 of [129] by using another approach, which required the assumption Eη² < ∞ instead of (1.34) in part (i).
Theorems 1.3.14 and 1.3.17 are borrowed from [155]. By using an argument
different from ours a result very similar to Theorem 1.3.14 was derived in [256].
Under the assumption that (ξ_k) and (η_k) are independent, functional limit theorems for max_{k≥0}( η_{k+1} 1{S_k ≤ t} ) as t → ∞ were obtained in Theorem 4 of [226] and Theorem 3.1 of [207]. The limit processes are time-changed extremal processes. Allowing (ξ_k) and (η_k) to be dependent, one-dimensional convergence of the aforementioned maximum was proved in [227].
The material of Sections 1.4 and 1.5 is taken from [8].
Chapter 2
Perpetuities
Let (M_k, Q_k)_{k∈ℕ} be independent copies of an ℝ²-valued random vector (M, Q) with arbitrary dependence of the components, and let X₀ be a random variable which is independent of (M_k, Q_k)_{k∈ℕ}. Put

  Π₀ := 1, Π_n := M₁ M₂ ⋯ M_n, n ∈ ℕ.

The sequence (X_n)_{n∈ℕ₀} defined by

  X_n = M_n X_{n−1} + Q_n, n ∈ ℕ,

satisfies

  X_n = Ψ_n(X_{n−1}) = Ψ_n ∘ ⋯ ∘ Ψ₁(X₀) = Π_n X₀ + Σ_{k=1}^{n} (Π_n/Π_k) Q_k, n ∈ ℕ,

where Ψ_n(t) := Q_n + M_n t for n ∈ ℕ; (X_n)_{n∈ℕ} is nothing else but the forward iterated function system. Closely related is the backward iterated function system

  Y_n := Ψ₁ ∘ ⋯ ∘ Ψ_n(0) = Σ_{k=1}^{n} Π_{k−1} Q_k, n ∈ ℕ.

In the case that X₀ = 0 a.s. it is clear that X_n has the same distribution as Y_n for each fixed n. The random discounted sum

  Y_∞ := Σ_{k≥1} Π_{k−1} Q_k,

obtained as the a.s. limit of Y_n under appropriate conditions (see Theorem 2.1.1 below), is called a perpetuity and is of interest in various fields of applied probability like insurance and finance, the study of shot-noise processes or, as will be seen in Chapter 4, of branching random walks.
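As a quick numerical illustration (not part of the original text), the sketch below iterates the backward recursion for i.i.d. draws of (M, Q) and checks the pathwise convergence of Y_n; the choice of M uniform on [0, 0.9] and Q standard normal is an arbitrary assumption satisfying E log|M| < 0.

```python
import random

def draw_mq(rng):
    # Hypothetical choice of (M, Q); any pair with E log|M| < 0 works here.
    return rng.uniform(0.0, 0.9), rng.gauss(0.0, 1.0)

def backward_iterates(n, rng):
    """Y_n = sum_{k<=n} Pi_{k-1} Q_k: partial sums of the perpetuity."""
    ys, pi, y = [], 1.0, 0.0
    for _ in range(n):
        m, q = draw_mq(rng)
        y += pi * q          # add the term Pi_{k-1} Q_k
        pi *= m              # update Pi_k
        ys.append(y)
    return ys

rng = random.Random(7)
ys = backward_iterates(200, rng)
# Pathwise convergence: since Pi_n -> 0 geometrically, late iterates freeze.
print(abs(ys[-1] - ys[-50]) < 1e-3)
```

The forward iterates X_n (with X₀ = 0) visit the same values only in distribution, not pathwise, which is why the backward representation is the convenient one for a.s. convergence.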
Throughout Chapter 2, for the particular case ξ = −log|M| we use the notation introduced in (1.3), so that A(x) = ∫₀ˣ P{−log|M| > y} dy and J(x) = x/A(x) for x > 0, J(0) = 1/P{|M| < 1}. Recall that J(x) is finite if, and only if, P{|M| < 1} > 0.
Goldie and Maller (Theorem 2.1 in [109]) gave the following complete characterization of the a.s. convergence of the series which defines Y_∞. We do not provide a proof, referring instead to the cited paper.
Theorem 2.1.1 Suppose that
Then
and
Moreover, if
and if at least one of the conditions in (2.2) fails to hold, then lim_{n→∞} |Y_n| = +∞ in probability.
2.1 Convergent Perpetuities 45
For m ∈ ℕ, set

  Y_∞^{(m)} := Q_{m+1} + Σ_{k≥m+2} M_{m+1} ⋯ M_{k−1} Q_k.

The random variable Y_∞^{(m)} is a copy of Y_∞ independent of (M_k, Q_k)_{1≤k≤m}. With these at hand and assuming that |Y_∞| < ∞ a.s., the equalities

  Y_∞ = Q₁ + M₁ Y_∞^{(1)} = Y_m + Π_m Y_∞^{(m)} a.s.  (2.6)

hold for any fixed m ∈ ℕ. Sometimes it is convenient to rewrite the first equality, in a weaker form, as the distributional equality

  Y_∞ =ᵈ Q + M Y_∞,  (2.7)

  Z =ᵈ Q + cZ
Since ∫_{(0,∞)} e^{−t} dX₁(t) is independent of ( e^{−T}, ∫_{(0,T]} e^{−t} dX(t) ) and has the same distribution as ∫_{(0,∞)} e^{−t} dX(t), the claim follows.
To set a link with results of Section 3 we note that, whenever X is a compound Poisson process with positive jumps having finite logarithmic moment, the distribution of ∫_{(0,∞)} e^{−t} dX(t) is the limit (stationary) distribution of the Poisson shot noise process ∫_{[0,t]} e^{−(t−y)} dX(y). Hence, the limit distribution of a Poisson shot noise process is the distribution of a perpetuity.
Exponential Functionals of Lévy Processes Let X := (X(t))_{t≥0} be a Lévy process. Whenever lim_{t→∞} X(t) = +∞ a.s., the a.s. finite random variable Z := ∫₀^∞ e^{−X(t)} dt is called the exponential functional of X. Using X₁ and T as introduced above: since ∫₀^∞ e^{−X₁(t)} dt is independent of ( e^{−X(T)}, ∫₀^T e^{−X(t)} dt ) and has the same distribution as ∫₀^∞ e^{−X(t)} dt, we conclude that Z = ∫₀^∞ e^{−X(t)} dt is a perpetuity generated by

  (M, Q) = ( e^{−X(T)}, ∫₀^T e^{−X(t)} dt ).  (2.9)
  ν(dt) = ( e^{−t/α} / (1 − e^{−t/α})^{α+1} ) 1_{(0,∞)}(t) dt

The term stems from the fact that the right-hand side defines the Mittag–Leffler function with parameter α, a generalization of the exponential function which corresponds to α = 1. Formula (2.12) entails

  E θ_α^n = n! / ( (Γ(1−α))^n Γ(1+nα) ), n ∈ ℕ,

which shows that Q = ∫₀^T e^{−X(t)} dt has the Mittag–Leffler distribution with parameter α. Further, for s > 0,

  E e^{−sX(T)} = ∫₀^∞ E e^{−sX(t)} e^{−t} dt = 1/(1 + Φ(s)) = Γ(1 + α(s−1)) / ( Γ(1−α) Γ(1+αs) )
   = ( Γ(α) Γ(1−α) )^{−1} ∫₀^1 x^{sα} x^{−α} (1−x)^{α−1} dx.

This proves that M = e^{−X(T)} has the same distribution as β_α^α, where P{β_α ∈ dx} = ( Γ(α) Γ(1−α) )^{−1} x^{−α} (1−x)^{α−1} 1_{(0,1)}(x) dx, i.e., β_α has a beta distribution with parameters 1−α and α.
Fixed Points of Inhomogeneous Smoothing Transforms With J deterministic or random, finite or infinite with positive probability, let M := (M^{(i)})_{1≤i≤J} be a collection of real-valued random variables. Also, for d ∈ ℕ, let Q be an ℝ^d-valued random vector arbitrarily dependent on M. The mapping T on the set of probability measures on ℝ^d that maps a distribution ν to the distribution of the random vector Σ_{i=1}^{J} M^{(i)} X_i + Q, where (X_i)_{i∈ℕ} are independent random vectors with distribution ν which are also independent of (M, Q), is called an inhomogeneous smoothing transform. The smoothing transform is called homogeneous if Q = 0 a.s. Let ν be a fixed point of T, i.e., ν = Tν, and Y a random vector with distribution ν. Then

  Y =ᵈ Σ_{i=1}^{J} M^{(i)} Y_i + Q,

where (Y_i)_{i∈ℕ} are independent copies of Y which are also independent of (M, Q). Obviously, the distribution of a perpetuity is a fixed point of an inhomogeneous smoothing transform with J = 1 a.s. and d = 1.
Fixed Points of Poisson Shot Noise Transforms The homogeneous smoothing transform T is called a Poisson shot noise transform if M^{(i)} = h(T_i) for a Borel function h ≥ 0 and the arrival times (T_i) of a Poisson process of positive intensity λ. Let Y be a random variable whose distribution is a fixed point of the Poisson shot noise transform, concentrated on [0, ∞) and nondegenerate at 0. Then

  Y =ᵈ Σ_{i≥1} h(T_i) Y_i  (2.13)

where (Y_i) are independent copies of Y which are also independent of (T_j) or, equivalently, in terms of the Laplace transform φ(s) := Ee^{−sY},

  φ(s) = exp( −λ ∫₀^∞ (1 − φ(h(y)s)) dy ), s ≥ 0.  (2.14)
Now we discuss the simplest situation in which the fixed points can be explicitly
identified.
Example 2.1.3 If h(y) = e^{−y}, then (nondegenerate at zero) fixed points of the shot noise transform exist if, and only if, λ ≤ 1. These are positive Linnik distributions μ_β with tails

  μ_β((x, ∞)) = Σ_{k≥0} (−β)^k x^{λk} / Γ(1 + λk), x ≥ 0.

For the proof, differentiate (2.14) (with h(y) = e^{−y}) to obtain a Bernoulli differential equation φ′(s) + λs^{−1}φ(s) − λs^{−1}φ²(s) = 0. Changing the variable z(s) = 1/φ(s), we arrive at z′(s) − λs^{−1}z(s) + λs^{−1} = 0, which has solutions z(s) = 1 + Cs^λ for C ∈ ℝ, whence φ(s) = (1 + Cs^λ)^{−1}. If C = 0, then φ(s) ≡ 1 = ∫_{[0,∞)} e^{−sx} ε₀(dx). If C < 0 or λ > 1, φ(s) fails to be completely monotone (by Bernstein's theorem it cannot then be a Laplace transform): indeed, in the first case φ takes negative values, whereas in the second case it is not convex.
Let Y be a random variable as in (2.13) with finite mean m > 0. We shall show that the size-biased distribution pertaining to the distribution of Y, i.e., μ(dx) := m^{−1} x P{Y ∈ dx}, is the distribution of a perpetuity which corresponds to independent M and Q, where Q has the same distribution as Y, and M has the distribution ρ defined below. While doing so we assume, for simplicity, that h is strictly decreasing and continuous on [0, ∞). Then the inverse function h^{−1} is well defined and decreasing, which implies that the equality ρ(A) := −λ ∫_A x d(h^{−1}(x)), where A is a Borel subset of [h(∞), h(0)], defines a measure. Passing to expectations in (2.13) we obtain

  m = m E Σ_{i≥1} h(T_i) = m λ ∫₀^∞ h(y) dy = −m λ ∫_{[h(∞),h(0)]} x d(h^{−1}(x)).

Note that differentiating under the integral sign in (2.14) is legal because the resulting integral is uniformly convergent. Since −m^{−1}φ′(s) is the Laplace–Stieltjes transform of μ, the last equality is equivalent to the distributional equality (2.7), in which the distribution of Y_∞ is μ, and M and Q are as stated above. Conversely, as shown in Lemma 2.2 in [144], whenever (2.7) holds with independent M and
  P{Q ∈ dx} = ( α^α / Γ(α) ) x^{α−1} e^{−αx} 1_{(0,∞)}(x) dx,
  P{Y_∞ ∈ dx} = ( α^{α+1} / Γ(α+1) ) x^α e^{−αx} 1_{(0,∞)}(x) dx.

(b) M has a uniform distribution on [q, 1] for some q ∈ [0, 1), Ee^{−sQ} = (b + qs)(b + s)^{−1}, s ≥ 0, i.e., the distribution of Q is a mixture of an exponential distribution and an atom with mass q at the origin; Y_∞ is Γ(2, b)-distributed, i.e., Ee^{−sY_∞} = b²(b + s)^{−2}, s ≥ 0.
(c) M has a Weibull distribution with parameter 1/2, i.e.,

  P{M ∈ dx} = ( e^{−√x} / (2√x) ) 1_{(0,∞)}(x) dx,

Ee^{−sQ} = (1 + b√s) e^{−b√s}, s ≥ 0, for some b > 0, and Ee^{−sY_∞} = e^{−b√s}, s ≥ 0, i.e., Y_∞ has a positive stable distribution with index 1/2.
(d) P{M ∈ dx} = ( x^{−1/2} − 1 ) 1_{(0,1)}(x) dx, Ee^{−sQ} = ( √(2s) / sinh √(2s) )², s ≥ 0,

  Ee^{−sY_∞} = 3( √(2s) cosh √(2s) − sinh √(2s) ) / sinh³ √(2s) = −( Ee^{−sQ} )′ / EQ, s ≥ 0.
The following example, which seems to be new, sets a link between perpetuities and number theory.
Random Lüroth Series According to Theorems 1 and 2 in [169], any irrational x ∈ (0, 1) has a unique representation in the form

  x = 1/a₁ − 1/( a₁(a₁+1) a₂ ) + ⋯ + (−1)^{n−1}/( a₁(a₁+1) ⋯ a_{n−1}(a_{n−1}+1) a_n ) + ⋯

for some positive integers (a_k). The right-hand side is called an alternating Lüroth series.
Let (ξ_k)_{k∈ℕ} be independent copies of a positive (not necessarily integer-valued) random variable ξ. Then the series

  1/ξ₁ − 1/( ξ₁(ξ₁+1) ξ₂ ) + ⋯ + (−1)^{n−1}/( ξ₁(ξ₁+1) ⋯ ξ_{n−1}(ξ_{n−1}+1) ξ_n ) + ⋯

may be called a random Lüroth series. Whenever the series converges a.s. (this happens, for instance, if ξ ≥ 1 a.s.), its sum is a perpetuity which corresponds to

  (M, Q) = ( −1/( ξ(ξ+1) ), 1/ξ ).
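As a sanity check (an illustrative sketch, not from the original text), the following compares a truncated random Lüroth series, evaluated directly from the displayed formula, with the backward perpetuity iteration for the pair (M, Q) = (−1/(ξ(ξ+1)), 1/ξ); the law of the digits ξ_k (uniform on {1, …, 5}) is an arbitrary assumption with ξ ≥ 1.

```python
import random

def luroth_partial(xs):
    """Direct evaluation of the first len(xs) terms of the alternating
    Lüroth series with digits xs = (xi_1, ..., xi_n)."""
    total = 0.0
    for n in range(1, len(xs) + 1):
        denom = xs[n - 1]                 # xi_n
        for x in xs[:n - 1]:
            denom *= x * (x + 1)          # xi_k (xi_k + 1), k < n
        total += (-1) ** (n - 1) / denom
    return total

def perpetuity_partial(xs):
    """Same quantity via Y_n = sum Pi_{k-1} Q_k with
    (M, Q) = (-1/(xi(xi+1)), 1/xi)."""
    total, pi = 0.0, 1.0
    for x in xs:
        total += pi / x                   # Pi_{k-1} * Q_k
        pi *= -1.0 / (x * (x + 1.0))      # update Pi_k
    return total

rng = random.Random(3)
digits = [rng.randint(1, 5) for _ in range(12)]
print(abs(luroth_partial(digits) - perpetuity_partial(digits)) < 1e-9)
```

Both routines compute the same alternating sum, one from the explicit series and one from the perpetuity recursion, confirming the identification of (M, Q).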
Theorem 2.1.2 given below states that the distribution of Y1 is pure provided that
PfM D 0g D 0.
Theorem 2.1.2 If PfM D 0g D 0 and jY1 j < 1 a.s., then the distribution of Y1 is
either degenerate, absolutely continuous (w.r.t. the Lebesgue measure), or singular
continuous.
Proof Suppose that (2.5) holds true, which particularly implies that P{Q = 0} < 1, for otherwise the distribution of Y_∞ is clearly degenerate. By Theorem 2.1.1 we thus also have lim_{n→∞} Π_n = 0 a.s. Assume further that the distribution of Y_∞ is nondegenerate and has atoms. Denote by p the maximal weight (probability) of atoms and by b₁, …, b_d the atoms with weight p. Notice that d ≤ 1/p. In view of (2.6) we have

  P{Y_∞ = b_i} = Σ_{a∈A} P{Y_m + Π_m a = b_i} P{Y_∞ = a}, i = 1, …, d  (2.15)

for each m ∈ ℕ, where A is the set of all atoms of the distribution of Y_∞. Since P{M = 0} = 0, we have Σ_{a∈A} P{Y_m + Π_m a = b_i} ≤ 1. Now use P{Y_∞ = a} ≤ P{Y_∞ = b_i} to conclude that the equalities (2.15) can only hold if the summation extends only over b_j, j = 1, …, d, and so

  Σ_{j=1}^{d} P{Y_m + Π_m b_j = b_i} = 1, i = 1, …, d  (2.16)

for each m ∈ ℕ. By letting m tend to infinity and using (Π_m, Y_m) → (0, Y_∞) a.s. in (2.16), we arrive at (2.17)
and therefore P{|Π_m| < δ/γ} = 0 because P{|Y_∞ − b₁| ∈ (0, γ]} = 1 − dp² > 0. But this contradicts Π_m → 0 a.s., and so d = 1, i.e., Y_∞ = b₁ a.s. by (2.17). Hence we have proved that if the distribution of Y_∞ is nondegenerate, it must be continuous.
It remains to verify that a continuously distributed Y_∞ is of pure type. Let g(t) be the characteristic function (ch.f.) of Y_∞. By Lebesgue's decomposition theorem, g(t) = α₁g₁(t) + α₂g₂(t), where α₁, α₂ ≥ 0, α₁ + α₂ = 1, and g₁(t) and g₂(t) are the ch.f.'s of the absolutely continuous and the singular continuous components of the distribution of Y_∞, respectively. If α₁ = 0, the distribution is singular continuous. Suppose α₁ > 0, so that g = g₁ must be verified. Since the distribution of Y_∞ satisfies (2.7), we infer in terms of its ch.f.
and thus
which yields P{Q + MX₁ ∈ B} = ∫ P{X₁ ∈ m^{−1}(B − q)} dP{M ≤ m, Q ≤ q} = 0. If
for all t ≠ 0. Hence g(t) = g₁(t) for all t ∈ ℝ, which means that the distribution of Y_∞ is absolutely continuous. □
The next examples demonstrate that the distribution of Y_∞ can indeed be singular continuous as well as absolutely continuous.
Example 2.1.5 (c-Decomposable Distributions) Consider the situation where M is a.s. equal to a constant c ∈ (0, 1), so that

  Y_∞ =ᵈ c Y_∞ + Q.

Taking, for instance, c = 1/n and

  φ(s) := Ee^{−sQ} = (1 − e^{−s}) / ( n(1 − e^{−s/n}) ), s > 0,

we conclude

  Ee^{−sY_∞} = lim_{k→∞} ∏_{i=0}^{k−1} φ(s n^{−i}) = lim_{k→∞} (1 − e^{−s}) / ( n^k (1 − e^{−s n^{−k}}) ) = (1 − e^{−s}) / s.

  sin t / t = ∏_{i≥1} cos(t 2^{−i}), t ∈ ℝ

variable Σ_{i=1}^{n} c^{i−1} Q_i. For each x_k construct an interval I_{x_k} of length 2 Σ_{i≥n+1} c^{i−1} = 2(1−c)^{−1} c^n with center x_k. Set O_n := ∪_{k=1}^{2^n} I_{x_k} and note that P{Y_∞ ∈ O_n} = 1 because Σ_{i≥n+1} c^{i−1} Q_i ∈ [ −(1−c)^{−1} c^n, (1−c)^{−1} c^n ] a.s. It remains to define A := ∩_{n≥1} O_n and observe that
  Ee^{itY_∞} = p Ee^{itQ} / ( 1 − (1−p) Ee^{itQ} ), t ∈ ℝ.  (2.19)

This follows from a comment on p. 45, as the random variable N defined there has a geometric distribution (starting at one) with parameter P{M = 0}.
In particular, if the distribution of Q is discrete, so is the distribution of Y_∞. For instance, if Q = 1 a.s., then the distribution of Y_∞ is geometric with parameter p; if the distribution of Q is geometric (starting at zero) with parameter r, then the distribution of Y_∞ is geometric (starting at zero) with parameter pr/(1 − (1−p)r).
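The Q = 1 case admits a direct check (an illustrative sketch, not from the original text): with P{M = 0} = p and P{M = 1} = 1 − p, the perpetuity Y_∞ = 1 + M₁ + M₁M₂ + ⋯ simply counts the factors up to and including the first M_k = 0, hence is geometric with parameter p.

```python
import random

def draw_y(p, rng):
    """One draw of Y = 1 + M1 + M1*M2 + ... with M in {0,1}, P{M=0} = p."""
    y = 1
    while rng.random() >= p:   # each factor M_k equals 1 with probability 1-p
        y += 1                 # the running product is still 1, add a term
    return y                   # stops at the first M_k = 0

rng = random.Random(11)
p = 0.3
sample = [draw_y(p, rng) for _ in range(200_000)]
mean = sum(sample) / len(sample)
print(abs(mean - 1 / p) < 0.05)  # geometric(p) on {1,2,...} has mean 1/p
```

The simulated mean matches 1/p, in agreement with (2.19) evaluated at Ee^{itQ} = e^{it}.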
On the one hand, the case P{M = 0} > 0 is more complicated than the complementary one, for the distribution of Y_∞ is not necessarily pure.
Example 2.1.7 Assume that M and Q are independent, P{M = 0} = p, P{M = 1} = 1 − p for p ∈ (0, 1), and Ee^{−sQ} = (1 + γs)/(1 + s) for some γ ∈ (0, 1), i.e., the distribution of Q is a mixture of an atom at zero and an exponential distribution with parameter 1. Then, using (2.19), we conclude that the distribution of Y_∞ is a mixture of an atom at zero with weight pγ/(1 − (1−p)γ) and an exponential distribution with parameter p/(1 − (1−p)γ).
On the other hand, it is simpler, for there is a simple criterion for the distribution of Y_∞ to be (absolutely) continuous.
Theorem 2.1.3 Let P{M = 0} > 0. Then the distribution of Y_∞ is (absolutely) continuous if, and only if, the conditional distribution P{Q ∈ · | M = 0} is (absolutely) continuous.
Proof We only treat the continuity and refer to Theorem 5.1 in [27] for the absolute continuity. If P{M = 0} = 1, then Y_∞ = Q₁, and there is nothing to prove. Therefore we assume that P{M = 0} ∈ (0, 1).
(⇐) For a Borel set C we have
which shows that the distribution of Y_∞ has a continuous component. The rest of the proof exploits Lebesgue's decomposition theorem and proceeds along the lines of the second part of the proof of Theorem 2.1.2. Still, we have to verify that Q + MX₁ has a continuous distribution whenever X₁ is independent of (M, Q) and has a continuous distribution, and P{Q ∈ · | M = 0} is a continuous distribution. The claim follows from the equality
In this section ultimate criteria for the finiteness of power, exponential, and logarithmic moments will be given. As far as power and logarithmic moments are concerned, the key observation, which goes back to [176], is that in the range of power and subpower tails the tail behavior of |Σ_{n≥1} Π_{n−1} Q_n| coincides with that of sup_{n≥1} |Π_{n−1} Q_n|. In particular, one may expect that Ef(|Σ_{n≥1} Π_{n−1} Q_n|) < ∞ if, and only if, Ef(sup_{n≥1} |Π_{n−1} Q_n|) < ∞ for positive nondecreasing functions f of at most power growth. As is seen from the subsequent presentation this is indeed true (compare Theorem 2.1.4 and Theorem 1.3.1; Theorem 2.1.5 and Theorem 1.3.5). In the range of exponential tails the equivalence discussed above does not hold any more, and one has to investigate the finiteness of E exp( a |Σ_{n∈ℕ} Π_{n−1} Q_n| ) for a > 0 directly, without resorting to the analysis of E exp( a sup_{n≥1} |Π_{n−1} Q_n| ).
Logarithmic Moments
Theorem 2.1.4 Let f : ℝ₊ → ℝ₊ be a measurable, locally bounded function regularly varying at ∞ of positive index. Suppose that (2.1) and (2.5) hold, and that lim_{n→∞} Π_n = 0 a.s. Then the following assertions are equivalent:

  Ef(log⁺|M|) J(log⁺|M|) < ∞ and Ef(log⁺|Q|) J(log⁺|Q|) < ∞;
  Ef(log⁺|Y_∞|) < ∞.

The proof of Theorem 2.1.4, which can be found in [6], will not be given here, for it does not contain essentially new ideas in comparison with the proof of Theorem 1.3.1.
Power Moments
Theorem 2.1.5 Suppose that (2.1) and (2.5) hold, and let p > 0. The following assertions are equivalent:
Remark 2.1.6 Further conditions equivalent to those in the previous theorem are given by

  E sup_{n≥1} | Σ_{k=1}^{n} Π_{k−1} Q_k |^p < ∞;  (2.24)

  E ( Σ_{n≥1} Π_{n−1}² Q_n² )^{p/2} < ∞.  (2.25)
  E Y_∞^n = E ( Q + M Y_∞ )^n,

whence

  E Y_∞^n = ( 1 − E M^n )^{−1} Σ_{k=0}^{n−1} C(n, k) E( M^k Q^{n−k} ) E Y_∞^k.  (2.26)
called the abscissa of convergence of the moment generating function of |Z|. Note that Ee^{r(Z)|Z|} may be finite or infinite.
Our next two results provide complete information on how r(Y_∞) relates to r(Q). For convenience we distinguish the cases P{|M| = 1} = 0 and P{|M| = 1} ∈ (0, 1). Recall that if conditions (2.1) and (2.5) hold, then the distribution of Y_∞ is nondegenerate if |Y_∞| < ∞ a.s.
Theorem 2.1.7 Suppose that (2.1) and (2.5) hold, that P{|M| = 1} = 0, and let s > 0. The following assertions are equivalent:
Theorem 2.1.8 Suppose that (2.1) and (2.5) hold, that P{|M| = 1} ∈ (0, 1), and let s > 0. The following assertions are equivalent:
In particular, if P{|M| ≤ 1} = 1 and P{|M| = 1} ∈ (0, 1), then r(Y_∞) = min( r(Q), r*(M, Q) ), where

  E Y_∞^n = n! / ( ν^n (1 − EM)(1 − EM²) ⋯ (1 − EM^n) ), n ∈ ℕ.  (2.31)

Put a_n := E Y_∞^n / n! and note that lim_{n→∞} a_{n+1}^{−1} a_n = ν P{M < 1}. Hence, by the Cauchy–Hadamard formula, r(Y_∞) = ν P{M < 1}.
If P{M < 1} = 1, this is in full accordance with Theorem 2.1.7 because Ee^{rQ} = ν(ν − r)^{−1} for r ∈ (0, ν), whence r(Q) = ν. Suppose P{M < 1} < 1. According to Theorem 2.1.8, r(Y_∞) is the positive solution s of the equation Ee^{sQ} P{M = 1} = 1 and is thus indeed equal to ν P{M < 1}.
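Formulas (2.26) and (2.31) can be cross-checked numerically. The sketch below is an illustration, not from the original text: it assumes M uniform on [0, 1] and Q exponential with ν = 1, independent, so that EM^k = 1/(k+1) and E(M^k Q^{n−k}) = (n−k)!/(k+1); the recursion (2.26) then reproduces the closed form (2.31), which here reduces to EY_∞^n = n!(n+1).

```python
from math import comb, factorial

def moments_via_recursion(nmax):
    """EY^n from (2.26) with M ~ U[0,1], Q ~ Exp(1), M and Q independent."""
    ey = [1.0]  # EY^0 = 1
    for n in range(1, nmax + 1):
        em_n = 1.0 / (n + 1)                       # EM^n for U[0,1]
        s = sum(comb(n, k) * factorial(n - k) / (k + 1) * ey[k]
                for k in range(n))                 # sum C(n,k) E(M^k Q^{n-k}) EY^k
        ey.append(s / (1.0 - em_n))
    return ey[1:]

def moments_closed_form(nmax):
    """EY^n = n! / ((1-EM)(1-EM^2)...(1-EM^n)) from (2.31) with nu = 1."""
    out = []
    for n in range(1, nmax + 1):
        denom = 1.0
        for k in range(1, n + 1):
            denom *= 1.0 - 1.0 / (k + 1)           # 1 - EM^k = k/(k+1)
        out.append(factorial(n) / denom)           # telescopes to n!(n+1)
    return out

rec, closed = moments_via_recursion(6), moments_closed_form(6)
print(all(abs(a - b) / b < 1e-9 for a, b in zip(rec, closed)))
```

The agreement of the two routines mirrors the derivation in the text: (2.31) is obtained by iterating the one-step recursion (2.26).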
Proof of Theorem 2.1.5 (2.20)⇔(2.21) follows from Theorem 1.3.5 after passing to logarithms. Observe that the condition P{log|Q| = −∞} < 1 is a consequence of (2.1).
Since |Y_∞| ≤ Ŷ_∞ := Σ_{n≥1} |Π_{n−1} Q_n|, it remains to prove the implications (2.20)⇒(2.23) and (2.22)⇒(2.20).
If p > 1, a similar inequality holds for ∥Y_∞∥_p, where ∥·∥_p denotes the usual L_p-norm. Namely, by Minkowski's inequality,

  ∥Y_∞∥_p ≤ Σ_{k≥1} ∥Π_{k−1} Q_k∥_p = Σ_{k≥1} ∥M∥_p^{k−1} ∥Q∥_p = ( 1 − ∥M∥_p )^{−1} ∥Q∥_p < ∞,

which, in the notation introduced right before formula (2.6), is nothing but condition (2.20) for the pair (Π₂, Y₂). We only remark concerning the implication (2.32)⇒(2.20) that in the case p ≥ 1, by Minkowski's inequality,
for all b, c > 0 and therefore (upon letting b tend to ∞ and picking c large enough)

  ∥Q∥_p ≤ ( ∥Q₁ + M₁Q₂∥_p + c ∥M∥_p ) / P{|Q| ≤ c}^{1/p} < ∞.
  Q_n^{(2)} := Q_{2n−1} + M_{2n−1} Q_{2n}, n ∈ ℕ,

and note that ((M_{2n−1}M_{2n}, Q_n^{(2)}))_{n∈ℕ} are independent copies of (Π₂, Y₂). Let Q̃_n^{(2)} be a conditional symmetrization of Q_n^{(2)} given M_{2n−1}M_{2n} such that the vectors ((M_{2n−1}M_{2n}, Q̃_n^{(2)}))_{n∈ℕ} are also i.i.d. More precisely, Q̃_n^{(2)} = Q_n^{(2)} − Q̂_n^{(2)}, where ((M_{2n−1}M_{2n}, Q_n^{(2)}, Q̂_n^{(2)}))_{n∈ℕ} are i.i.d. random vectors and Q_n^{(2)}, Q̂_n^{(2)} are conditionally i.i.d. given M_{2n−1}M_{2n}. By what has been pointed out above, the distribution of Q_n^{(2)}, and thus also of Q̃_n^{(2)}, is nondegenerate. Putting B_n := σ(M₁, …, M_n) for n ∈ ℕ, we now infer with the help of Lévy's symmetrization inequality (see Corollary 5 on p. 72 in [68])

  P( max_{1≤k≤n} |Π_{2k−2} Q̃_k^{(2)}| > x | B_{2n} ) ≤ 2 P( | Σ_{k=1}^{n} Π_{2k−2} Q̃_k^{(2)} | > x | B_{2n} )
   ≤ 4 P( | Σ_{k=1}^{n} Π_{2k−2} Q_k^{(2)} | > x/2 | B_{2n} )

for all x > 0 and thus (recalling that the distribution of Y_∞ is continuous in the present situation, as pointed out right after Theorem 2.1.2)

  P{ sup_{k≥1} |Π_{2k−2} Q̃_k^{(2)}| > x } ≤ 4 P{ |Y_∞| > x/2 }.  (2.33)

  E sup_{k≥1} |Π_{2k−2} Q̃_k^{(2)}|^p ≤ 8 E|Y_∞|^p < ∞.
Set

  S_n := −log|Π_{2n}| = − Σ_{k=1}^{n} log|M_{2k−1}M_{2k}| and η_n := log|Q̃_n^{(2)}|

for n ∈ ℕ. Then (S_n)_{n∈ℕ} forms an ordinary zero-delayed random walk and P{η_n = −∞} < 1 because the distribution of Q̃_n^{(2)} is nondegenerate. With this we see that
Since the pairs ((log|M_{2n−1}M_{2n}|, η_n))_{n∈ℕ} are i.i.d., an application of Theorem 1.3.5 yields Ee^{−pS₁} = E|M₁M₂|^p < ∞, which is the second condition in (2.32).
Left with the first half of (2.32), namely ∥Y₂∥_p < ∞, use (2.6) with m = 2, rendering |Y₂| ≤ |Y_∞| + |Π₂ Y_∞^{(2)}| and therefore
in the case p ≥ 1. The case 0 < p < 1 is handled similarly. The proof of Theorem 2.1.5 is complete. □
Proof for Remark 2.1.6 (2.23)$\Rightarrow$(2.24) and (2.23)$\Rightarrow$(2.25). Use $\sup_{n\ge1}\big|\sum_{k=1}^n \Pi_{k-1}Q_k\big| \le \sum_{n\ge1}|\Pi_{n-1}Q_n|$ and $\big(\sum_{n\ge1}\Pi_{n-1}^2Q_n^2\big)^{1/2} \le \sum_{n\ge1}|\Pi_{n-1}Q_n|$, respectively.
(2.24)$\Rightarrow$(2.22) and (2.25)$\Rightarrow$(2.21). Use $|Y_\infty| \le \sup_{n\ge1}\big|\sum_{k=1}^n \Pi_{k-1}Q_k\big|$ and $\sup_{n\ge1}|\Pi_{n-1}Q_n| \le \big(\sum_{n\ge1}\Pi_{n-1}^2Q_n^2\big)^{1/2}$, respectively. $\square$
Proof of Theorem 2.1.7 Set $\psi(t) := \mathbb{E}e^{tY_\infty}$, $\hat\psi(t) := \mathbb{E}e^{t|Y_\infty|}$, $\varphi(t) := \mathbb{E}e^{tQ}$ and $\hat\varphi(t) := \mathbb{E}e^{t|Q|}$. Note that $\psi(t) \le \hat\psi(s)$ for all $t\in[-s,s]$ and $s > 0$, and that $\max(\psi(t), \psi(-t)) \le \hat\psi(t) \le \psi(t) + \psi(-t)$ for all $t\ge0$. As a consequence of (2.7) we have $\psi(t) = \mathbb{E}\big[e^{tQ}\psi(Mt)\big]$ for all $t\in\mathbb{R}$. These facts will be used in several places hereafter.
(2.27)$\Rightarrow$(2.28). The almost sure finiteness of $Y_\infty$ follows from Proposition 2.1.1. We have to check that $r(Y_\infty) \ge r(Q)$. To this end, we fix an arbitrary $s\in(0, r(Q))$ and divide the subsequent proof into two steps.
Step 1 Assume first that $|M| \le \beta < 1$ a.s. for some $\beta > 0$. Since the function $\hat\varphi$ is convex and differentiable on $[0, \beta s]$, its derivative is nondecreasing on that interval. Therefore, for each $k\in\mathbb{N}\setminus\{1\}$, there exists $\theta_k\in[0, \beta^{k-1}s]$ such that
$$0 \le \hat\varphi(\beta^{k-1}s) - 1 = \hat\varphi'(\theta_k)\,\beta^{k-1}s \le \hat\varphi'(\beta s)\,\beta^{k-1}s.$$
Step 2 Consider now the general case. Since $\mathbb{P}\{|M| = 1\} = 0$, we can choose $\beta\in(0,1)$ such that $\gamma := \mathbb{E}e^{s|Q|}\mathbb{1}_{\{|M|>\beta\}} < 1$. Since $|M^*| \le \beta$ a.s., Step 1 of the proof provides the desired conclusion if we still verify that $\hat\varphi(s) < \infty$ implies $\mathbb{E}e^{s|Q^*|} < \infty$. This is checked as follows:
$$\mathbb{E}e^{s|Q^*|} \le \mathbb{E}e^{s(|Q_1|+\ldots+|Q_{T_1}|)} = \sum_{n\ge1}\mathbb{E}e^{s(|Q_1|+\ldots+|Q_n|)}\mathbb{1}_{\{T_1=n\}} = \sum_{n\ge1}\mathbb{E}\Big[e^{s|Q_n|}\mathbb{1}_{\{|M_n|\le\beta\}}\prod_{k=1}^{n-1}e^{s|Q_k|}\mathbb{1}_{\{|M_k|>\beta\}}\Big] = \sum_{n\ge1}\mathbb{E}\big[e^{s|Q|}\mathbb{1}_{\{|M|\le\beta\}}\big]\,\gamma^{n-1} \le \hat\varphi(s)\,(1-\gamma)^{-1} < \infty$$
and thus $\hat\varphi(s) \le \varphi(s) + \varphi(-s) < \infty$. This shows $r(Y_\infty) \ge r(Q)$. The proof of Theorem 2.1.7 is complete. $\square$
Proof of Theorem 2.1.8 (2.30)$\Rightarrow$(2.29). By the same argument as in the proof of the implication (2.28)$\Rightarrow$(2.27), we infer $\mathbb{E}|M|^p < \infty$ for all $p > 0$ and thereby $|M| \le 1$ a.s. Moreover, as $\hat\psi(s) < \infty$, inequality (2.36) holds here as well and gives $\hat\varphi(s) < \infty$. It thus remains to prove the last two inequalities in (2.29). While doing so we proceed in two steps.
which together with $\mathbb{E}e^{\pm sQ}\psi(\pm Ms)\mathbb{1}_{\{|M|<1\}} > 0$ (as $\mathbb{P}\{|M| < 1\} > 0$) implies $a_\pm < 1$. Thus, the last inequality in (2.29), which reads $(1-a_-)(1-a_+) > 0$ in view of $b_\pm = 0$, holds.
Step 2 Assuming now $\mathbb{P}\{M = 1\} > 0$, let $((M_k, Q_k))_{k\in\mathbb{N}}$ be defined as in (2.34) and (2.35). This implies
$$\infty > \mathbb{E}e^{\pm sQ}\mathbb{1}_{\{M=1\}} = a_\pm + b_\pm\sum_{n\ge0}a^n b$$
and thereupon $a_\pm\in[0,1)$. The right-hand side of the last expression equals $a_\pm + \dfrac{b_\pm b}{1-a}$, which gives the last inequality in (2.29).
(2.29)$\Rightarrow$(2.30). Let $((M_k, Q_k))_{k\in\mathbb{N}}$ be as defined in Step 2 of the present proof. Assuming (2.29) we thus have
$$\mathbb{E}e^{\pm sQ}\mathbb{1}_{\{M=1\}} = a_\pm + \frac{b_\pm b}{1-a} < \infty$$
and
$$\mathbb{E}e^{\pm sQ} = \mathbb{E}e^{\pm sQ}\mathbb{1}_{\{M=1\}} + \mathbb{E}e^{\pm sQ}\mathbb{1}_{\{M<1\}} < \infty.$$
Now let
$$\hat T_0 := 0, \qquad \hat T_k := \inf\{n > \hat T_{k-1} : M_n < 1\}, \quad k\in\mathbb{N},$$
and then define $((\hat M_k, \hat Q_k))_{k\in\mathbb{N}}$ in accordance with (2.34) and (2.35) for these stopping times. We claim that $\mathbb{E}e^{\pm s\hat Q} < \infty$ and thus $\mathbb{E}e^{s|\hat Q|} < \infty$. Indeed,
$$\mathbb{E}e^{\pm s\hat Q} = \sum_{n\ge1}\mathbb{E}e^{\pm s(Q_1 + M_1Q_2 + \ldots + M_1\cdots M_{n-1}Q_n)}\mathbb{1}_{\{M_1=\ldots=M_{n-1}=1,\ M_n<1\}}.$$
In this section we are interested in the case when conditions (2.1), (2.5), and $\mathbb{E}J(\log^+|Q|) = \infty$ hold. Furthermore, we assume that the right tail of the distribution of $\log|M|$ is lighter than that of $\log^+|Q|$. Since the second condition in (2.2) is violated, Theorem 2.1.1 tells us that $(Y_n)$ is a divergent perpetuity in the sense that $|Y_n| \stackrel{\mathbb{P}}{\longrightarrow} \infty$ as $n\to\infty$. Of course, $|X_n|$ diverges in probability too whenever $X_0 = 0$ a.s. Our purpose is to prove functional limit theorems for the Markov chains $(X_n)$ and for the divergent perpetuities $(Y_n)$ under the aforementioned assumptions.
We briefly recall the notation introduced in Section 1.3.3 (see also 'List of Notation'). For positive $a$ and $b$, $N^{(a,b)} := \sum_k\varepsilon_{(t_k^{(a,b)},\,j_k^{(a,b)})}$ denotes a Poisson random measure on $[0,\infty)\times(0,\infty]$ with mean measure $\mathrm{LEB}\times\mu_{a,b}$, where $\mu_{a,b}$ is a measure on $(0,\infty]$ defined by
$$\mu_{a,b}\big((x,\infty]\big) = ax^{-b}, \quad x > 0.$$
Theorem 2.2.1 treats the situation in which both the $M_k$'s and the $Q_k$'s affect the limit behavior of the processes in question (except in the less interesting case $\mu = 0$), whereas in the situation of Theorem 2.2.5 only the contribution of the $Q_k$'s persists in the limit.
Theorem 2.2.1 Assume that
$$\mathbb{E}\log|M| = \mu \in (-\infty, +\infty) \tag{2.37}$$
and that
Also,
$$\frac{\log^+\big|X_{[n\cdot]+1}\big|}{n} \Rightarrow \mu\Upsilon(\cdot) + \sup_{t_k^{(c,1)}\le\,\cdot}\big(-\mu t_k^{(c,1)} + j_k^{(c,1)}\big). \tag{2.40}$$
Suppose, for instance, that $M$ and $Q$ are positive a.s. Since in view of (1.40) the right-hand side of (1.39) is a.s. nonnegative, (1.39) in combination with the continuous mapping theorem entails
$$\frac{\max_{0\le k\le[n\cdot]}\big(\log\Pi_k + \log Q_{k+1}\big)^+}{n} = \frac{\log^+\max_{0\le k\le[n\cdot]}\big(\Pi_k Q_{k+1}\big)}{n} \Rightarrow \sup_{t_k^{(c,1)}\le\,\cdot}\big(\mu t_k^{(c,1)} + j_k^{(c,1)}\big).$$
for some $\alpha\in(0,1]$ and some $\ell$ slowly varying at $\infty$. Let $(b_n)$ be a positive sequence which satisfies $\lim_{n\to\infty}n\,\mathbb{P}\{\log|Q| > b_n\} = 1$. In the case $\alpha = 1$ assume additionally¹ that $\lim_{t\to\infty}\ell(t) = +\infty$.
If $\liminf_{n\to\infty}|\Pi_n| = 0$ a.s. and $\mathbb{E}\log^-|M| = \infty$, assume that
$$\lim_{t\to\infty}\frac{\mathbb{E}\big(\log^-|M| \wedge t\big)}{t\,\mathbb{P}\{\log|Q| > t\}} = 0. \tag{2.42}$$
¹ Among other things this implies $\mathbb{E}\log^+|Q| = \infty$.
Then
$$\frac{\log^+\big|Y_{[n\cdot]+1}\big|}{b_n} \Rightarrow \sup_{t_k^{(1,\alpha)}\le\,\cdot}j_k^{(1,\alpha)}, \quad n\to\infty, \tag{2.44}$$
and
$$\frac{\log^+\big|X_{[n\cdot]+1}\big|}{b_n} \Rightarrow \sup_{t_k^{(1,\alpha)}\le\,\cdot}j_k^{(1,\alpha)}, \quad n\to\infty. \tag{2.45}$$
Remark 2.2.6 The marginal distribution of the right-hand side of (2.44) is given by
$$\mathbb{P}\Big\{\sup_{t_k^{(1,\alpha)}\le u}j_k^{(1,\alpha)} \le x\Big\} = \exp\big(-ux^{-\alpha}\big), \quad x\ge0.$$
Denote by $M_p^+$ the set of Radon point measures $\nu$ on $[0,\infty)\times(0,\infty]$ which satisfy $\nu([0,T]\times(\delta,\infty]) < \infty$ for all $\delta > 0$ and all $T > 0$. The space $M_p^+$ is endowed with the vague topology. Denote by $M_p$ the set of $\nu\in M_p^+$ which satisfy (2.46). The signs $+$ and $-$ in the definition of $G_n$ are arbitrarily arranged, and $(c_n)$ is some sequence of positive numbers. The definition of $G_n$ in the case of an empty sum stems from the fact that we define $\big|\sum_{k:\tau_k\le t}\pm\exp\big(c_n(f(\tau_k)+y_k)\big)\big| := \exp(c_n f(0))$ if there is no $k$ such that $\tau_k\le t$.
Theorem 2.3.1 For $n\in\mathbb{N}$, let $f_n\in D$ and $\nu_n\in M_p^+$. Let $\tau_k^{(n)}, y_k^{(n)}$ be the points of $\nu_n$, i.e., $\nu_n = \sum_k\varepsilon_{(\tau_k^{(n)},\,y_k^{(n)})}$. Assume that $f_0$ is continuous with $f_0(0)\ge0$ and that
(A1) $\nu_0(\{0\}\times(0,\infty]) = 0$ and $\nu_0((r_1,r_2)\times(0,\infty])\ge1$ for all positive $r_1$ and $r_2$ such that $r_1 < r_2$;
(A2) $\nu_0 = \sum_k\varepsilon_{(\tau_k^{(0)},\,y_k^{(0)})}$ does not have clustered jumps, i.e., $\tau_k^{(0)}\ne\tau_j^{(0)}$ for $k\ne j$;
(A3) if not all the signs under the sum defining $G_n$ are the same, then
$$f_0(\tau_i^{(0)}) + y_i^{(0)} \ne f_0(\tau_j^{(0)}) + y_j^{(0)} \quad\text{for } i\ne j \tag{2.47}$$
and
$$F(f_0,\nu_0)(T) = \sup_{\tau_k^{(0)}\le T}\big(f_0(\tau_k^{(0)}) + y_k^{(0)}\big) > 0. \tag{2.48}$$
² Condition (A6) together with the second part of (A1) ensures that $\#\{k : \tau_k^{(n)}\le T\}\ge1$.
Then
$$\lim_{n\to\infty} G_n(f_n,\nu_n) = F(f_0,\nu_0) \tag{2.50}$$
on $D$ in the $J_1$-topology.
For the proof of Theorems 2.2.1, 2.2.5, and 2.3.1 we need three inequalities: inequality (2.51), valid for $x, y\ge0$;
$$\log^+|x| - \log^+|y| - 2\log2 \le \log^+|x+y| \le \log^+|x| + \log^+|y| + 2\log2 \tag{2.52}$$
valid for $x, y\in\mathbb{R}$; and the bound
$$F(f_n,\nu_n)(t) \le G_n(f_n,\nu_n)(t) \le c_n^{-1}\log^+\#\big\{k : \tau_k^{(n)}\le t\big\} + F(f_n,\nu_n)(t),$$
valid for all $t\in[0,T]$ whenever all the signs under the sum defining $G_n$ are the same. In the latter case, (2.50) is a trivial consequence of Theorem 1.3.17, which treats the convergence $\lim_{n\to\infty}F(f_n,\nu_n) = F(f_0,\nu_0)$ on $D$.
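Inequality (2.52) is elementary (it follows from $|x+y| \le 2\max(|x|,|y|)$ and the monotonicity of $\log^+$). As an illustration, not part of the proof, here is a small numerical check over a grid of signed values:

```python
import itertools
import math

def log_plus(x):
    """log^+(x) = log(max(x, 1)) for x >= 0."""
    return math.log(max(x, 1.0))

grid = [-100.0, -5.0, -1.0, -0.5, 0.0, 0.5, 1.0, 5.0, 100.0]
for x, y in itertools.product(grid, repeat=2):
    lhs = log_plus(abs(x)) - log_plus(abs(y)) - 2 * math.log(2)
    mid = log_plus(abs(x + y))
    rhs = log_plus(abs(x)) + log_plus(abs(y)) + 2 * math.log(2)
    # both sides of inequality (2.52)
    assert lhs <= mid <= rhs
print("inequality (2.52) holds on the grid")
```

Note the worst cases occur near cancellation, e.g. $x = 100$, $y = -100$, where the lower bound is what saves the two-sided estimate.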
In what follows we thus assume that not all the signs are the same. Let $\gamma = \{0 = s_0 < s_1 < \ldots < s_m = T\}$ be a partition of $[0,T]$ chosen, for a given $\epsilon > 0$, so that $\sup_{\tau_k^{(0)}\le T,\ y_k^{(0)}>\epsilon}\big(f_0(\tau_k^{(0)}) + y_k^{(0)}\big) > 0$. The latter is possible in view of (2.48).
Condition (A6) implies that $\nu_0([0,T]\times(\epsilon,\infty]) = \nu_n([0,T]\times(\epsilon,\infty]) = p$ for large enough $n$ and some $p\ge1$. Denote by $(\bar\tau_i,\bar y_i)_{1\le i\le p}$ an enumeration of the points of $\nu_0$ in $[0,T]\times(\epsilon,\infty]$ with $\bar\tau_1 < \bar\tau_2 < \ldots < \bar\tau_p$, and by $(\bar\tau_i^{(n)},\bar y_i^{(n)})_{1\le i\le p}$ the corresponding enumeration of the points of $\nu_n$. Then
$$\lim_{n\to\infty}\sum_{i=1}^p\big(|\bar\tau_i^{(n)}-\bar\tau_i| + |\bar y_i^{(n)}-\bar y_i|\big) = 0$$
and
$$\lim_{n\to\infty}\sum_{i=1}^p\big(|f_n(\bar\tau_i^{(n)})-f_0(\bar\tau_i)| + |\bar y_i^{(n)}-\bar y_i|\big) = 0 \tag{2.55}$$
because (A5) and the continuity of $f_0$ imply that $\lim_{n\to\infty}f_n = f_0$ uniformly on $[0,T]$.
Define $\lambda_n$ to be continuous and strictly increasing functions on $[0,T]$ with $\lambda_n(0) = 0$, $\lambda_n(T) = T$, $\lambda_n(\bar\tau_i^{(n)}) = \bar\tau_i$ for $i = 1,\ldots,p$, and let $\lambda_n$ be linearly interpolated between these points. Define also
$$W_n(t) := \sum_{\lambda_n(\tau_k^{(n)})\le t}\pm\exp\big(c_n(f_n(\tau_k^{(n)}) + y_k^{(n)})\big) - V_n(t).$$
$$\cdots + \sup_{t\in[0,T]}\Big|c_n^{-1}\log^+ V_n(t) - \sup_{\bar\tau_i\le t}\big(f_0(\bar\tau_i)+\bar y_i\big)\Big| + \sup_{t\in[0,T]}\Big|\sup_{\bar\tau_i\le t}\big(f_0(\bar\tau_i)+\bar y_i\big) - \sup_{\tau_k^{(0)}\le t}\big(f_0(\tau_k^{(0)})+y_k^{(0)}\big)\Big| \tag{2.56}$$
$$I_n(\gamma) \le 2\log2\,c_n^{-1} + \Big|c_n^{-1}\sup_{t\in[0,T]}\log^+ W_n(t)\Big| \le 2\log2\,c_n^{-1} + c_n^{-1}\log^+\!\!\!\sum_{\lambda_n(\tau_k^{(n)})\le T,\ \tau_k^{(n)}\ne\bar\tau_i^{(n)}}\!\!\!\exp\big(c_n(f_n(\tau_k^{(n)})+y_k^{(n)})\big) \le 2\log2\,c_n^{-1} + c_n^{-1}\log^+\Big(\#\big\{k : \lambda_n(\tau_k^{(n)})\le T,\ \tau_k^{(n)}\ne\bar\tau_i^{(n)}\big\}\!\!\sup_{\tau_k^{(n)}\le T,\ \tau_k^{(n)}\ne\bar\tau_i^{(n)}}\!\!\exp\big(c_n(f_n(\tau_k^{(n)})+y_k^{(n)})\big)\Big) \le 2\log2\,c_n^{-1} + c_n^{-1}\log^+\#\big\{k : \tau_k^{(n)}\le T\big\} + \sup_{\tau_k^{(n)}\le T,\ \tau_k^{(n)}\ne\bar\tau_i^{(n)}}\big(f_n(\tau_k^{(n)})+y_k^{(n)}\big)^+. \tag{2.57}$$
For the last inequality we have used (2.51) and the fact that $\lambda_n(\tau_k^{(n)}) \le T$ if, and only if, $\tau_k^{(n)} \le T$. The first term on the right-hand side of (2.57) converges to zero in view of (2.49). As for the second, we apply Theorem 1.3.17 to infer
$$\sup_{\tau_k^{(n)}\le T,\ \tau_k^{(n)}\ne\bar\tau_i^{(n)}}\big(f_n(\tau_k^{(n)})+y_k^{(n)}\big)^+ = \sup_{\tau_k^{(n)}\le T,\ y_k^{(n)}\le\epsilon}\big(f_n(\tau_k^{(n)})+y_k^{(n)}\big)^+ \to \sup_{\tau_k^{(0)}\le T,\ y_k^{(0)}\le\epsilon}\big(f_0(\tau_k^{(0)})+y_k^{(0)}\big)^+$$
distinct for large enough $n$. Denote by $a_{k,n} < \ldots < a_{1,n}$ their increasing rearrangement³ and put
$$B_n(t) := c_n^{-1}\log\Big|1 \pm \Big(\frac{a_{2,n}}{a_{1,n}}\Big)^{c_n} \pm \ldots \pm \Big(\frac{a_{k,n}}{a_{1,n}}\Big)^{c_n}\Big|.$$
Since $\lim_{n\to\infty}\Big(\pm\Big(\frac{a_{2,n}}{a_{1,n}}\Big)^{c_n} \pm \ldots \pm \Big(\frac{a_{k,n}}{a_{1,n}}\Big)^{c_n}\Big) = 0$, there is an $N_k$ such that
Summarizing, we have
$$\Big|\sup_{\bar\tau_i^{(n)}\le t}\big(f_n(\bar\tau_i^{(n)})+\bar y_i^{(n)}\big) - \sup_{\bar\tau_i\le t}\big(f_0(\bar\tau_i)+\bar y_i\big)\Big| + |B_n(t)| \le \sum_{i=1}^p\big(|f_n(\bar\tau_i^{(n)})-f_0(\bar\tau_i)| + |\bar y_i^{(n)}-\bar y_i|\big) + |B_n(t)|.$$
In view of (2.55) and (2.58) the right-hand side tends to zero uniformly in $t\in[0,T]$ as $n\to\infty$. Equivalently, for any $t\in[0,T]$ and any $t_n\to t$ as $n\to\infty$, $\lim_{n\to\infty}c_n^{-1}\log|V_n(t_n)| = \sup_{\bar\tau_i\le t}\big(f_0(\bar\tau_i)+\bar y_i\big)$. Recalling that we have picked $\epsilon$ such that
$$\sup_{\bar\tau_i\le T}\big(f_0(\bar\tau_i)+\bar y_i\big) = \sup_{\tau_k^{(0)}\le T,\ y_k^{(0)}>\epsilon}\big(f_0(\tau_k^{(0)})+y_k^{(0)}\big) > 0$$
³ Although the $a_{j,n}$'s depend on $t$ we suppress this dependence for the sake of clarity.
where $|\gamma| := \max_i(s_{i+1}-s_i)$ and $\omega_{f_0}(\epsilon) := \sup_{|u-v|<\epsilon,\ u,v\ge0}|f_0(u)-f_0(v)|$ is the modulus of continuity of $f_0$. Of course, the right-hand side of the last inequality tends to zero on sending $|\gamma|$ and $\epsilon$ to zero.
Collecting pieces together and letting in (2.56) $n\to\infty$ and then $|\gamma|$ and $\epsilon$ tend to zero, we arrive at the desired conclusion $\lim_{n\to\infty}d_T\big(G_n(f_n,\nu_n), F(f_0,\nu_0)\big) = 0$. $\square$
Proof of Theorem 2.2.1 We only treat the case $\mu < 0$, the case $\mu > 0$ being similar and the case $\mu = 0$ being much simpler. All unspecified limit relations are assumed to hold as $n\to\infty$.
Proof of (2.39) For $k\in\mathbb{N}_0$, set $S_k := \log|\Pi_k|$ and $\eta_{k+1} := \log|Q_{k+1}|$. As a consequence of the strong law of large numbers,
$$n^{-1}S_{[n\cdot]} \Rightarrow \mu\Upsilon(\cdot) \tag{2.59}$$
where $\Upsilon(t) = t$ for $t\ge0$ (actually, in (2.59) the a.s. convergence holds, see Theorem 4 in [96]). According to Theorem 6.3 on p. 180 in [237], condition (2.38) entails
$$\sum_{k\ge0}\mathbb{1}_{\{\eta_{k+1}>0\}}\varepsilon_{(n^{-1}k,\ n^{-1}\eta_{k+1})} \Rightarrow N^{(c,1)} \tag{2.60}$$
on $M_p$. Now relations (2.59) and (2.60) can be combined into the joint convergence⁴
$$\Big(n^{-1}S_{[n\cdot]},\ \sum_{k\ge0}\mathbb{1}_{\{\eta_{k+1}>0\}}\varepsilon_{(n^{-1}k,\ n^{-1}\eta_{k+1})}\Big) \Rightarrow \big(\mu\Upsilon(\cdot),\ N^{(c,1)}\big). \tag{2.61}$$
⁴ Condition (2.54) is only used in this part of the proof.
We intend to apply Theorem 2.3.1 to
$$G_n(f_n,\nu_n)(t) = n^{-1}\log^+\Big|\sum_{k=0}^{[nt]}\Pi_k Q_{k+1}\mathbb{1}_{\{|Q_{k+1}|>1\}}\Big| = n^{-1}\log^+\Big|\sum_{k=0}^{[nt]}\mathrm{sgn}(\Pi_k Q_{k+1})\,e^{\,n(n^{-1}S_{[n(k/n)]}\,+\,n^{-1}\eta_{k+1})}\,\mathbb{1}_{\{\eta_{k+1}>0\}}\Big|,$$
so that $f_n(\cdot) = n^{-1}S_{[n\cdot]}$, $f_0(\cdot) = \mu\Upsilon(\cdot)$, $\nu_n = \sum_{k\ge0}\mathbb{1}_{\{\eta_{k+1}>0\}}\varepsilon_{(n^{-1}k,\ n^{-1}\eta_{k+1})}$, $\nu_0 = N^{(c,1)}$, $c_n = n$, and the signs $\pm$ are defined by $\mathrm{sgn}(\Pi_k Q_{k+1})$.
Now we shall show that the so defined functions and measures satisfy with probability one all the conditions of Theorem 2.3.1. In view of (2.61), conditions (A5) and (A6) are fulfilled. Furthermore, by Lemma 1.3.18, $N^{(c,1)}$ satisfies with probability one conditions (2.46), (A1), and (A2). The (nonnegative) expression under the limit sign in (2.49) is dominated by $n^{-1}\log([nT])$, which converges to zero as $n\to\infty$. Hence (2.49) holds. While checking (2.47) our argument is similar to that given on p. 223 in [237]. We fix any $T > 0$, $\delta > 0$ and use the representation
$$N^{(c,1)}\big([0,T]\times(\delta,\infty] \cap \cdot\big) = \sum_{k=1}^{\theta}\varepsilon_{(U_k,V_k)}(\cdot)$$
where $(U_i)$ are i.i.d. with a uniform distribution on $[0,T]$, $(V_j)$ are i.i.d. with $\mathbb{P}\{V_1\le x\} = (1-\delta/x)\mathbb{1}_{(\delta,\infty)}(x)$, and $\theta$ has a Poisson distribution with parameter $Tc/\delta$, all the random variables being independent. It suffices to prove that
$$I := \mathbb{P}\{\theta\ge2,\ U_k + V_k = U_j + V_j \text{ for some } 1\le k < j\le\theta\} = 0.$$
Condition (2.48) holds because $\mathbb{P}\big\{\sup_{t_k^{(c,1)}\le T}\big(\mu t_k^{(c,1)} + j_k^{(c,1)}\big) \le 0\big\} = 0$ for each $T > 0$, by (1.40).
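The representation above lends itself directly to simulation. The following sketch is illustrative (the parameters $T$, $c$, $\delta$ and the sample sizes are assumptions for the example): it draws the restriction of $N^{(c,1)}$ to $[0,T]\times(\delta,\infty]$ as a Poisson number of uniform times paired with Pareto marks, and checks the mean point count against $T c/\delta$.

```python
import numpy as np

rng = np.random.default_rng(1)

def prm_restriction(T, c, delta, size):
    """Sample N^{(c,1)} restricted to [0,T] x (delta, inf]:
    a Poisson(T*c/delta) number of points; times uniform on [0,T];
    marks V with P{V <= x} = 1 - delta/x for x > delta (Pareto with index 1)."""
    counts = rng.poisson(T * c / delta, size=size)
    realizations = []
    for n in counts:
        U = rng.uniform(0.0, T, size=n)
        # inverse transform: delta / W with W uniform on (0, 1]
        V = delta / (1.0 - rng.random(n))
        realizations.append((U, V))
    return counts, realizations

T, c, delta = 10.0, 1.0, 0.1
counts, realizations = prm_restriction(T, c, delta, size=2000)
print(counts.mean())  # should hover near T*c/delta = 100
```

The Pareto marks come from inverting $\mathbb{P}\{V > x\} = \delta/x$, which is exactly the normalized restriction of $\mu_{c,1}$ to $(\delta,\infty]$.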
Thus, Theorem 2.3.1 is indeed applicable with our choice of $f_n$ and $\nu_n$, and we conclude that
$$n^{-1}\log^+\Big|\sum_{k=0}^{[n\cdot]}\Pi_k Q_{k+1}\mathbb{1}_{\{|Q_{k+1}|>1\}}\Big| \Rightarrow \sup_{t_k^{(c,1)}\le\,\cdot}\big(\mu t_k^{(c,1)} + j_k^{(c,1)}\big). \tag{2.62}$$
Since $\lim_{n\to\infty}|\Pi_n| = 0$ a.s. as a consequence of $\mu < 0$, $\sum_{k\ge0}|\Pi_k| < \infty$ a.s. by Theorem 2.1.1, whence
$$n^{-1}\log^+\Big|\sum_{k=0}^{[n\cdot]}\Pi_k Q_{k+1}\mathbb{1}_{\{|Q_{k+1}|\le1\}}\Big| \Rightarrow \Xi(\cdot), \tag{2.63}$$
where $\Xi(t) = 0$ for $t\ge0$. Using (2.52) with
$$x = \sum_{k=0}^{[n\cdot]}\Pi_k Q_{k+1}\mathbb{1}_{\{|Q_{k+1}|>1\}} \quad\text{and}\quad y = \sum_{k=0}^{[n\cdot]}\Pi_k Q_{k+1}\mathbb{1}_{\{|Q_{k+1}|\le1\}}$$
(so that $x + y = Y_{[n\cdot]+1}$) in combination with (2.62) and (2.63), we obtain (2.39) with the help of Slutsky's lemma.
Proof of (2.40) Recalling that $X_0 = 0$ a.s. by assumption, we use the representation
$$X_{[n\cdot]+1} = \Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]}\Pi_k^{-1}Q_{k+1}. \tag{2.64}$$
... for every $T > 0$, because $\lim_{t\to\infty}t\,\mathbb{P}\big\{|\log|M|| > t\big\} = 0$ as a consequence of $\mathbb{E}|\log|M|| < \infty$. This together with (2.59) proves
$$n^{-1}\log|\Pi_{[n\cdot]+1}| \Rightarrow \mu\Upsilon(\cdot). \tag{2.65}$$
$$\mathbb{P}\{\log|Q| > (1+\varepsilon)t\} - \mathbb{P}\big\{|\log|M|| > \varepsilon t\big\} \le \mathbb{P}\{\log|Q| + \log|M| > t\} \le \mathbb{P}\{\log|Q| > (1-\varepsilon)t\} + \mathbb{P}\big\{|\log|M|| > \varepsilon t\big\}. \tag{2.66}$$
$$\mathbb{P}\{\log|MQ| > t\} = \mathbb{P}\{\log|Q| + \log|M| > t\} \sim \mathbb{P}\{\log|Q| > t\} \sim ct^{-1}, \quad t\to\infty.$$
Arguing as in the proof of (2.39) we conclude that this limit relation in combination with (2.59) ensures that
$$\Big(n^{-1}\log|\Pi_{[n\cdot]}|,\ \sum_{k\ge0}\mathbb{1}_{\{\log|Q_{k+1}|>0\}}\varepsilon_{(n^{-1}k,\ n^{-1}\log|Q_{k+1}|)}\Big) \Rightarrow \big(\mu\Upsilon(\cdot),\ N^{(c,1)}\big).$$
Since the right-hand side is a.s. nonnegative (see (1.40)), we further have
$$n^{-1}\log^+\Big(e^{\mu[n\cdot]}\Big|\sum_{k=0}^{[n\cdot]}\Pi_k^{-1}Q_{k+1}\mathbb{1}_{\{|Q_{k+1}|>1\}}\Big|\Big) = \Big(n^{-1}\log\Big|\sum_{k=0}^{[n\cdot]}\Pi_k^{-1}Q_{k+1}\mathbb{1}_{\{|Q_{k+1}|>1\}}\Big| + \mu\frac{[n\cdot]}{n}\Big)^+ = \Big(n^{-1}\log^+\Big|\sum_{k=0}^{[n\cdot]}\Pi_k^{-1}Q_{k+1}\mathbb{1}_{\{|Q_{k+1}|>1\}}\Big| + \mu\frac{[n\cdot]}{n}\Big)^+ \Rightarrow \mu\Upsilon(\cdot) + \sup_{t_k^{(c,1)}\le\,\cdot}\big(-\mu t_k^{(c,1)} + j_k^{(c,1)}\big) \tag{2.67}$$
having utilized $(x-y)^+ = (x^+-y)^+$ for $x\in\mathbb{R}$ and $y\ge0$. Using (2.51) with
$$x = e^{-\mu[n\cdot]}\big|\Pi_{[n\cdot]+1}\big| \quad\text{and}\quad y = e^{\mu[n\cdot]}\Big|\sum_{k=0}^{[n\cdot]}\Pi_k^{-1}Q_{k+1}\mathbb{1}_{\{|Q_{k+1}|>1\}}\Big|$$
(so that $xy = \big|\Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]}\Pi_k^{-1}Q_{k+1}\mathbb{1}_{\{|Q_{k+1}|>1\}}\big|$) in combination with (2.65) and (2.67), we obtain
$$n^{-1}\log^+\Big|\Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]}\Pi_k^{-1}Q_{k+1}\mathbb{1}_{\{|Q_{k+1}|>1\}}\Big| \Rightarrow \mu\Upsilon(\cdot) + \sup_{t_k^{(c,1)}\le\,\cdot}\big(-\mu t_k^{(c,1)} + j_k^{(c,1)}\big), \tag{2.68}$$
whence
$$n^{-1}\log^+\Big(|\Pi_{[n\cdot]+1}|\,([n\cdot]+1)\max_{0\le k\le[n\cdot]}|\Pi_k|^{-1}\Big) \Rightarrow \Xi(\cdot).$$
This implies
$$n^{-1}\log^+\Big|\Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]}\Pi_k^{-1}Q_{k+1}\mathbb{1}_{\{|Q_{k+1}|\le1\}}\Big| \Rightarrow \Xi(\cdot) \tag{2.69}$$
because
$$\log^+\Big|\Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]}\Pi_k^{-1}Q_{k+1}\mathbb{1}_{\{|Q_{k+1}|\le1\}}\Big| \le \log^+\Big(|\Pi_{[n\cdot]+1}|\sum_{k=0}^{[n\cdot]}|\Pi_k|^{-1}\Big) \le \log^+\Big(|\Pi_{[n\cdot]+1}|\,([n\cdot]+1)\max_{0\le k\le[n\cdot]}|\Pi_k|^{-1}\Big).$$
Relation (2.40) now follows upon using (2.52) with
$$x = \Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]}\Pi_k^{-1}Q_{k+1}\mathbb{1}_{\{|Q_{k+1}|>1\}} \quad\text{and}\quad y = \Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]}\Pi_k^{-1}Q_{k+1}\mathbb{1}_{\{|Q_{k+1}|\le1\}}$$
(so that $x + y = \Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]}\Pi_k^{-1}Q_{k+1} = X_{[n\cdot]+1}$) in combination with (2.68) and (2.69). The proof of Theorem 2.2.1 is complete. $\square$
Proof of Theorem 2.2.5 The proof proceeds along the lines of that of Theorem 2.2.1 but is simpler, for the contribution of the $M_k$'s is negligible. Therefore we only provide details for fragments which differ principally from the corresponding ones in the proof of Theorem 2.2.1.
Observe that
$$\lim_{n\to\infty}\frac{n}{b_n} = 0.$$
Indeed, since $(b_n)$ is a regularly varying sequence of index $1/\alpha$, this is trivial when $\alpha\in(0,1)$. If $\alpha = 1$, this follows from the relation $n^{-1}b_n \sim \ell(b_n)$ as $n\to\infty$ and our assumption that $\lim_{t\to\infty}\ell(t) = \infty$.
Proof of (2.44) We recall the already used notation $S_k = \log|\Pi_k|$ and $\eta_{k+1} = \log|Q_{k+1}|$, $k\in\mathbb{N}_0$. According to Theorem 6.3 on p. 180 in [237], condition (2.41) entails
$$\sum_{k\ge0}\mathbb{1}_{\{\eta_{k+1}>0\}}\varepsilon_{(n^{-1}k,\ b_n^{-1}\eta_{k+1})} \Rightarrow N^{(1,\alpha)}. \tag{2.71}$$
If we can check that
$$\frac{S_{[n\cdot]}}{b_n} \Rightarrow \Xi(\cdot) \tag{2.72}$$
where $\Xi(t) = 0$ for $t\ge0$, then relations (2.71) and (2.72) can be combined into the joint convergence
$$\Big(b_n^{-1}S_{[n\cdot]},\ \sum_{k\ge0}\mathbb{1}_{\{\eta_{k+1}>0\}}\varepsilon_{(n^{-1}k,\ b_n^{-1}\eta_{k+1})}\Big) \Rightarrow \big(\Xi(\cdot),\ N^{(1,\alpha)}\big).$$
Using (2.73), (2.74), inequality (2.52), and Slutsky's lemma we arrive at (2.44).
It only remains to check (2.72). To this end, it suffices to prove the corresponding convergence separately for the walks
$$S_0^+ = S_0^- := 0,\quad S_n^+ := \log^+|M_1|+\ldots+\log^+|M_n|,\quad S_n^- := \log^-|M_1|+\ldots+\log^-|M_n|.$$
Since
$$\frac{n}{b_n}\,\mathbb{E}\big[\log^-|M| \wedge b_n\big] = n\,\mathbb{P}\{\log^-|M| > b_n\} + \frac{n}{b_n}\,\mathbb{E}\big[\log^-|M|\,\mathbb{1}_{\{\log^-|M|\le b_n\}}\big],$$
we infer $\lim_{n\to\infty}n\,\mathbb{P}\{\log^-|M| > b_n\} = 0$ and
$$\lim_{n\to\infty}\frac{n}{b_n}\,\mathbb{E}\big[\log^-|M|\,\mathbb{1}_{\{\log^-|M|\le b_n\}}\big] = 0. \tag{2.77}$$
Further,
$$\sum_{k=1}^n\mathbb{P}\{\log|M_k| > b_n\} = n\,\mathbb{P}\{\log|M| > b_n\},$$
whence
$$S_n^+/b_n \xrightarrow{\ \mathbb{P}\ } 0. \tag{2.78}$$
Use (2.52) with
$$x = \Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]}\Pi_k^{-1}Q_{k+1} \quad\text{and}\quad y = \Pi_{[n\cdot]+1}X_0.$$
The proof of the last limit relation follows the pattern of that of (2.40) but is simpler.
Referring to the proof of (2.40), the only thing that needs to be checked is that
$$\mathbb{P}\{\log|MQ| > t\} = \mathbb{P}\{\log|Q| + \log|M| > t\} \sim t^{-\alpha}\ell(t) \tag{2.79}$$
as $t\to\infty$. To this end, we shall use (2.66). As before, we only investigate the case $\liminf_{n\to\infty}|\Pi_n| = 0$ a.s.
Case $\mathbb{E}\log^-|M| < \infty$. We have $\lim_{t\to\infty}t\,\mathbb{P}\{\log^-|M| > \varepsilon t\} = 0$, whereas $\lim_{t\to\infty}t\,\mathbb{P}\{\log|Q| > t\} = \infty$ (recall that in the case $\alpha = 1$ we assume that $\lim_{t\to\infty}\ell(t) = \infty$). Therefore, (2.79) follows from (2.66). Since $\mathbb{E}\log^-|M| < \infty$ entails $\mathbb{E}\log^+|M| < \infty$, the same argument proves (2.79) for the right tail of $\log^+|M|$.
Case $\mathbb{E}\log^-|M| = \infty$ and $\mathbb{E}\log^+|M| < \infty$. It suffices to check (2.79), which is a consequence of (2.42) and the regular variation of $\mathbb{P}\{\log|Q| > x\}$.
Case $\mathbb{E}\log^-|M| = \mathbb{E}\log^+|M| = \infty$. We only have to prove that (2.79) holds in this case as well. If $\limsup_{n\to\infty}|\Pi_n| = \infty$ a.s., this follows from (2.43) and the regular variation of $\mathbb{P}\{\log|Q| > t\}$. Suppose that $\lim_{n\to\infty}|\Pi_n| = 0$ a.s., equivalently, $\lim_{n\to\infty}S_n = -\infty$ a.s. Then $\mathbb{E}J(\log^+|M|) < \infty$ according to Case (A3) of Remark 1.2.3, where $J(t) = t/\mathbb{E}(\log^-|M| \wedge t)$ for $t > 0$. In view of
$$1/J(t) = \int_0^1\mathbb{P}\{\log^-|M| > ty\}\,dy,$$
$J$ is nondecreasing, whence
$$Z_n \stackrel{d}{=} Z'_{I_n} + V_n, \quad n\in\mathbb{N}, \qquad Z_0 := c,$$
where $I_n$ is a random index with values in the set $\{0,1,\ldots,n\}$, the random variable $Z'_m$ is assumed to be independent of $(I_n, V_n)$ and distributed like $Z_m$ for each $m\in\mathbb{N}_0$, and $c$ is a constant. If
$$\big(n^{-a}Z'_n,\ n^{-a}I_n^a,\ n^{-a}V_n\big) \stackrel{d}{\longrightarrow} (Y_\infty, M, Q), \quad n\to\infty$$
for some $a > 0$, then, necessarily, $Y_\infty$ is independent of $(M, Q)$. Furthermore, the distribution of $Y_\infty$ satisfies (2.7), which can be seen on writing
$$\frac{Z_n}{n^a} \stackrel{d}{=} \frac{Z'_{I_n}}{I_n^a}\cdot\frac{I_n^a}{n^a} + \frac{V_n}{n^a}.$$
of the inhomogeneous transforms were commenced only recently. While the scalar inhomogeneous case was addressed in [11, 12, 14, 60, 62, 164], the articles [25, 58, 152, 208, 209] focused on various multidimensional generalizations.
Fixed Points of Shot Noise Transforms In [141] the shot noise transforms were introduced and the investigation of their fixed points was initiated. Further results can be found in [144, 145]. Example 2.1.3 is Proposition 8 in [134]. Example 2.1.4(a) belongs to the folklore, which means that its origin cannot be easily traced. Some of its extensions can be found in [77, 78]. While part (b) of Example 2.1.4 is taken from p. 72 in [144], part (c) is formula (2.5) in [76] and part (d) is Proposition 12 in [229].
Random Lüroth Series We learned the notion of Lüroth series, both deterministic and random, from [231]. Its authors investigated continuity properties of the distributions of convergent random Lüroth series which are more general than those discussed here. Since the authors of [231] never mentioned perpetuities, it seems that the connection between perpetuities and random Lüroth series is made here for the first time.
Theorem 2.1.1 was proved in [109], the articles [84, 255] being the important
earlier contributions. Various extensions of Theorem 2.1.1 to wider settings can be
found in [48, 51, 87].
Theorem 2.1.2 is Theorem 1.3 in [9]. Earlier, it was proved in [113] under the additional condition $\mathbb{E}\log|M|\in(-\infty,0)$. However, for the conclusion that the distribution of $Y_\infty$ is continuous if nondegenerate, the analytic argument of the last cited paper is quite different from ours. This latter conclusion may also be derived from Theorem 1 in [116], as has been done in Lemma 2.1 in [33]. If $M$ and $Q$ are independent, sufficient conditions for the absolute continuity of the distribution of $Y_\infty$ were given in [222]. The proof in [222] relies heavily upon studying the behavior of corresponding characteristic functions and uses moment assumptions as an indispensable ingredient. Without such assumptions it is not clear how absolute continuity of the distribution of $Y_\infty$ may be derived via an analytic approach.
Example 2.1.5(a) can be found as Example 4.3 in [257].
Example 2.1.5(b) is a special case of Example A2 in [194].
Example 2.1.5 (c,d) is taken from [175]. Our proof follows an argument given in Section 4 of [234]. Parts (c) and (d) of Example 2.1.5 treat a well-studied class of special cases when $\mathbb{P}\{Q = \pm1\} = 1/2$. A short survey can be found in [75]. One would expect the distribution of $Y_\infty$ to be absolutely continuous whenever $c\in(1/2,1)$. However, this is not true, as there are values of $c$ between $1/2$ and $1$ giving a singular distribution of $Y_\infty$, for example, $c = (\sqrt5-1)/2$; see [85, 86]. Meanwhile it has been proved in [249] that, on the other hand, the distribution of $Y_\infty$ is indeed absolutely continuous for almost all values of $c\in(1/2,1)$.
Theorem 2.1.3 is Theorem 5.1 in [27].
Theorems 2.1.4 and 2.1.5 coincide with Theorem 1.2 in [6] and Theorem 1.4 in [9], respectively. In connection with Theorem 2.1.5 see also Proposition 10.1 in [174] for the case that $M, Q\ge0$, in which (2.22) and (2.23) are clearly identical. For the case $p > 1$, the implication (2.20)$\Rightarrow$(2.22) was proved in Theorem 5.1 of [255]. The following problem, which is closely related to Theorem 2.1.5, has received and is still receiving enormous attention in the literature: which conditions imposed on the distributions of $M$ and $Q$ ensure that $\mathbb{P}\{|Y_\infty| > x\} \sim x^{-\alpha}\ell(x)$ as $x\to\infty$ for some $\alpha > 0$ and some $\ell$ slowly varying at $\infty$? This asymptotics can arise from a comparable right tail of $|Q|$, provided that the right tail of $|M|$ is lighter; see [111, 115] and, for a multidimensional case, [240]. A very similar situation is described in Theorem 1.3.6, which concerns the tails of suprema of perturbed random walks. A more remarkable fact was noticed in [176]: the right tail of $|Y_\infty|$ can exhibit a power asymptotics even in the case where $|M|$ and $|Q|$ are light-tailed. For the last case, the papers [107, 115, 178] are further important contributions concerning perpetuities (the book [59] provides a nice summary of results around the Kesten–Grincevičius–Goldie theorem), while Theorem 1.3.8 is a result on the tail behavior of suprema of perturbed random walks. Finally, we mention Theorem 3.1 in [83], which is a result on the tail behavior of $Y_\infty$ obtained under a subexponentiality assumption.
Formula (2.26) can be found in Theorem 5.1 of [255]. A more complicated
recursive formula relating moments of fractional orders is obtained in Lemma 2.1
of [213] under the assumption that M and Q are nonnegative and independent.
Theorems 2.1.7 and 2.1.8 are Theorems 1.6 and 1.7 in [9]. Extending these results and answering the question asked in [9], the recent paper [5] provides formulae for $\sup\{r\in\mathbb{R} : \mathbb{E}e^{rY_\infty} < \infty\}$ and $\inf\{r\in\mathbb{R} : \mathbb{E}e^{rY_\infty} < \infty\}$. Various results concerning exponential and 'Poisson-like' tails of $Y_\infty$ can be found in [5, 74, 108, 127, 128, 185, 206].
Theorems 2.2.1 and 2.2.5 are extended versions of Theorems 1.1 and 1.5 in [61]. The proofs given here are modified versions of those in [61]. Under the assumptions $M = e^{\mu}$ a.s. for some $\mu\in(-\infty,0)$ and $Q\ge0$ a.s., one-dimensional versions of our results can be found in parts (ii) and (iii) of Theorem 5 in [222].
As far as we know, [114] was the first paper in which a limit theorem for $Y_n$ was proved in the boundary case $\mathbb{E}\log M = 0$ under the assumption that $M\ge0$ a.s. Also, weak convergence of one-dimensional distributions of divergent perpetuities has been investigated in [26, 129, 222, 233] under various assumptions on $M$ and $Q$. To the best of our knowledge, (a) except in [61], functional limit theorems for divergent perpetuities have not been obtained so far; (b) [61, 222] are the only contributions to the divergent non-boundary case $\lim_{n\to\infty}\Pi_n = 0$ a.s. and $\mathbb{E}J(\log^+|Q|) = \infty$. Outside the area of limit theorems, we are only aware of two papers [174, 269] which investigate the latter case. The boundary case $\mathbb{E}\log|M| = 0$ has received more attention in the literature; see [21, 53–55, 57, 114, 129, 233].
Chapter 3
Random Processes with Immigration
3.1 Definition
$$S_0 = 0, \qquad S_k = \xi_1 + \ldots + \xi_k, \quad k\in\mathbb{N}.$$
where the last equality holds a.s. Following [148, 149], the process $Y := (Y(t))_{t\in\mathbb{R}}$ defined by
$$Y(t) := \sum_{k\ge0}X_{k+1}(t - S_k) = \sum_{k=0}^{\nu(t)-1}X_{k+1}(t - S_k), \quad t\in\mathbb{R} \tag{3.1}$$
will be called random process with immigration at the epochs of a renewal process or, for short, random process with immigration. Observe that $|Y(t)| < \infty$ a.s. for each $t$ because the number of summands in the sum defining $Y(t)$ is a.s. finite. The process $Y$ is called renewal shot noise process if $X = h$ for a deterministic function $h$, so that
$$Y(t) := \sum_{k\ge0}h(t - S_k) = \sum_{k=0}^{\nu(t)-1}h(t - S_k), \quad t\in\mathbb{R}. \tag{3.2}$$
$$Y(t) = \sum_{k\ge0}\sum_{i=1}^{\theta_k}Z_{i,k+1}(t - S_k)\mathbb{1}_{\{S_k\le t\}}, \quad t\ge0 \tag{3.3}$$
is the number of customers which are being served at time $t$. The latter random process with immigration will prove to be of great importance in Chapter 5.
Let $h : \mathbb{R}\to\mathbb{R}$ be a function satisfying $h(x) = 0$ for $x < 0$ and $h\in D(\mathbb{R})$.
(b) $g(x,y) := h(x)y$. The corresponding process $Y_3$ is defined by
$$Y_3(t) = \sum_{k=0}^{\nu(t)-1}\eta_{k+1}\,h(t - S_k), \quad t\ge0,$$
$$Y_4(t) = \sum_{k=1}^{\nu(t)-1}\eta_k, \quad t\ge0,$$
$$Y_5(t) := \sum_{k=0}^{\nu(t)-1}\big(\eta_{k+1}\wedge(t - S_k)\big) = \int_0^t Y_2(s)\,ds, \quad t\ge0.$$
In this section we treat the situation when $\mathbb{E}|X(t)|$ is finite and tends to $0$ quickly as $t\to\infty$, while $\mathbb{E}\xi < \infty$. Then $Y$ is the superposition of a regular stream of freshly started processes with quickly fading contributions of the processes that started earlier.
Suppose that $\mu = \mathbb{E}\xi < \infty$ and that the distribution of $\xi$ is nonlattice, i.e., it is not concentrated on any lattice $d\mathbb{Z}$, $d > 0$. Further, we stipulate hereafter that the basic probability space on which $(X_k)_{k\in\mathbb{N}}$ and $(\xi_k)_{k\in\mathbb{N}}$ are defined is rich enough to accommodate
• an independent copy $(\xi_{-k})_{k\in\mathbb{N}}$ of $(\xi_k)_{k\in\mathbb{N}}$;
• a random variable $\xi_0$ which is independent of $(\xi_k)_{k\in\mathbb{Z}\setminus\{0\}}$ and has distribution
$$\mathbb{P}\{\xi_0\in dx\} = \mu^{-1}\,\mathbb{E}\big(\xi\,\mathbb{1}_{\{\xi\in dx\}}\big)\mathbb{1}_{(0,\infty)}(x).$$
The shift-invariance alone implies that the intensity measure of $\sum_{k\in\mathbb{Z}}\varepsilon_{S_k}$ is a constant multiple of the Lebesgue measure, where the constant can be identified as $\mu^{-1}$ by the elementary renewal theorem (formula (6.4)). In conclusion,
$$\mathbb{E}\sum_{k\in\mathbb{Z}}\varepsilon_{S_k}(dx) = \mu^{-1}\,dx. \tag{3.7}$$
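Formula (3.7) can be illustrated numerically. For exponentially distributed $\xi$ the stationary delay distribution is again exponential (the memoryless case), so the expected number of points in $[0, x]$ should equal $x/\mu$. The simulation below is a hedged sketch, not from the book; the rate, horizon, and replication count are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

def stationary_renewal_counts(x, rate, reps):
    """Count renewal epochs in [0, x] for a stationary renewal process with
    Exp(rate) inter-arrivals; for exponential xi the delay is also Exp(rate)."""
    counts = np.empty(reps, dtype=int)
    for i in range(reps):
        s, n = rng.exponential(1.0 / rate), 0
        while s <= x:
            n += 1
            s += rng.exponential(1.0 / rate)
        counts[i] = n
    return counts

x, rate = 10.0, 2.0  # mu = E xi = 1/rate = 0.5
counts = stationary_renewal_counts(x, rate, reps=4000)
print(counts.mean(), x * rate)  # intensity mu^{-1}: mean count near x/mu = 20
```

For a nonlattice but non-exponential $\xi$ one would instead draw the delay from the integrated-tail distribution $\mu^{-1}\mathbb{P}\{\xi>y\}\,dy$; the exponential case merely makes the check self-contained.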
The series
$$\sum_{k\ge1}X_{k+1}(u + S_k)\mathbb{1}_{\{S_k\ge-u\}}$$
is a.s. finite because the number of nonzero summands is a.s. finite. Define
$$Y^*(u) := \sum_{k\in\mathbb{Z}}X_{k+1}(u + S_k) = \sum_{k\in\mathbb{Z}}X_{k+1}(u + S_k)\mathbb{1}_{\{S_k\ge-u\}},$$
the random variable $Y^*(u)$ being almost surely finite provided that the series $\sum_{k\ge0}X_{k+1}(u + S_k)\mathbb{1}_{\{S_k\ge-u\}}$ converges in probability. It is natural to call $(Y^*(u))_{u\in\mathbb{R}}$ the stationary random process with immigration.
Theorem 3.2.1 provides sufficient conditions for weak convergence of the finite-dimensional distributions of $(Y(t+u))_{u\in\mathbb{R}}$ as $t\to\infty$.
Theorem 3.2.1 Suppose that
• $X$ and $\xi$ are independent;
• $\mu = \mathbb{E}\xi < \infty$;
• the distribution of $\xi$ is nonlattice.
If the function $G(t) := \mathbb{E}[|X(t)|\wedge1]$ is directly Riemann integrable (dRi)¹ on $\mathbb{R}_+ = [0,\infty)$, then, for each $u\in\mathbb{R}$, the series $\sum_{k\ge0}X_{k+1}(u + S_k)\mathbb{1}_{\{S_k\ge-u\}}$ is absolutely convergent with probability one, and, for any $n\in\mathbb{N}$ and any finite $u_1 < u_2 < \ldots < u_n$,
$$\big(Y(t+u_1),\ldots,Y(t+u_n)\big) \stackrel{d}{\longrightarrow} \big(Y^*(u_1),\ldots,Y^*(u_n)\big), \quad t\to\infty. \tag{3.8}$$
¹ See Section 6.2.2 for the definition and properties of directly Riemann integrable functions.
whenever $\mu < \infty$ and the distribution of $\xi$ is nonlattice.
Theorem 3.2.2 Assume that the distribution of $\xi$ is nonlattice. Let $h : \mathbb{R}_+\to\mathbb{R}$ be a locally bounded, almost everywhere continuous, eventually nonincreasing and nonintegrable function.
(A1) Suppose $\sigma^2 := \mathrm{Var}\,\xi < \infty$ and
$$\int_0^\infty(h(y))^2\,dy < \infty. \tag{3.10}$$
Relation (3.11) also holds with $\mu^{-1}\int_0^t h(y)\,dy$ replaced by $\mathbb{E}Y(t)$.
For the rest of the theorem, assume that $h$ is eventually twice differentiable² and that $h''$ is eventually nonnegative.
(A2) Suppose $\mathbb{E}\xi^r < \infty$ for some $1 < r < 2$. If there exists $a > 0$ such that $h(y) > 0$ for $y\ge a$,
$$\int_a^\infty(h(y))^r\,dy < \infty, \tag{3.12}$$
and³ (3.13) holds, then $\hat Y$ is well defined as the a.s. limit in (3.9). Furthermore, (3.11) holds.
(A3) Suppose $\mathbb{P}\{\xi > x\} \sim x^{-\alpha}\ell(x)$ as $x\to\infty$ for some $1 < \alpha < 2$ and some $\ell$ slowly varying at $\infty$. If there exists an $a > 0$ such that $h(y) > 0$ for $y\ge a$,
$$\int_a^\infty(h(y))^\alpha\,\ell(1/h(y))\,dy < \infty, \tag{3.14}$$
and
$$\lim_{t\to\infty}t\,\ell(c(t))\,(c(t))^{-\alpha} = 1,$$
then $\hat Y$ exists as the limit in probability in (3.9), and (3.11) holds.
We first derive an equivalent condition for the direct Riemann integrability of $G(t) = \mathbb{E}(|X(t)|\wedge1)$ that is more suitable for applications.
With probability one, $X$ takes values in $D(\mathbb{R})$, and hence is continuous almost everywhere (a.e.). This carries over to $t\mapsto|X(t)|\wedge1$. From Lebesgue's dominated convergence theorem we conclude that $G$ is a.e. continuous. Since $G$ is also
² $h$ is called eventually twice differentiable if there exists a $t_0\ge0$ such that $h$ is twice differentiable on $(t_0,\infty)$.
³ If $h''$ is eventually monotone, then (3.13) and (3.15) are consequences of (3.12) and (3.14), respectively.
bounded, it must be locally Riemann integrable. From this we conclude that the direct Riemann integrability of $G$ is equivalent to
$$\sum_{k\ge0}\sup_{t\in[k,k+1)}\mathbb{E}[|X(t)|\wedge1] < \infty. \tag{3.16}$$
The direct Riemann integrability of $G$ entails $\lim_{t\to\infty}G(t) = 0$ (see Section 6.2.2 for the proof), whence $\lim_{t\to\infty}X(t) = 0$ in probability. We now give an example in which $X(t)$ does not converge to zero a.s., yet satisfies (3.16) (hence $G$ is dRi).
Example 3.2.2 Let $\eta$ be uniformly distributed on $[0,1]$ and set
$$X(t) := \sum_{k\ge1}\mathbb{1}_{\big\{k+\eta\frac{k^2}{k^2+1}\,\le\,t\,<\,k+\eta\big\}}, \quad t\ge0.$$
Then $X\big(n + \eta\frac{n^2}{n^2+1}\big) = 1$ a.s. for $n\in\mathbb{N}$ and thereupon $\limsup_{t\to\infty}X(t) = 1$ a.s. On the other hand, $\sup_{t\in[n,n+1)}\mathbb{E}[|X(t)|\wedge1] = \sup_{t\in[n,n+1)}\mathbb{E}[X(t)] = (n^2+1)^{-1}$ for $n\in\mathbb{N}$, and inequality (3.16) holds true.
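Under our reading of the display in Example 3.2.2, namely $X(t) = \sum_{k\ge1}\mathbb{1}\{k + \eta k^2/(k^2+1) \le t < k + \eta\}$ with $\eta$ uniform on $[0,1]$, the mean admits the closed form $\mathbb{E}X(n+s) = \min\big(s(n^2+1)/n^2,\,1\big) - s$ for $s\in[0,1)$, and its maximum over $s$ is $(n^2+1)^{-1}$. The following sketch (an assumption-laden illustration, not part of the book) checks that claim on a grid:

```python
def mean_X(n, s):
    """E X(n+s) = P{ s < eta <= s*(n^2+1)/n^2 } for eta ~ U[0,1] and 0 <= s < 1."""
    upper = min(s * (n * n + 1) / (n * n), 1.0)
    return max(upper - s, 0.0)

for n in (1, 2, 5):
    # include the maximizer s = n^2/(n^2+1) explicitly in the grid
    grid = [k / 10000.0 for k in range(10000)] + [n * n / (n * n + 1)]
    sup = max(mean_X(n, s) for s in grid)
    print(n, sup, 1.0 / (n * n + 1))
```

The supremum is attained at $s = n^2/(n^2+1)$, the very point at which the indicator window closes, which is why the dRi sum $\sum_n (n^2+1)^{-1}$ converges while the trajectories keep hitting $1$.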
Suppose that the first three assumptions of Theorem 3.2.1 hold.
Example 3.2.3 Let $\mathbb{P}\{X(t)\ge0\} = 1$ and $\mathbb{P}\{X(t)\in(0,1)\} = 0$ for each $t\ge0$. Suppose that, with probability one, $X$ gets absorbed into the unique absorbing state $0$. This means that the random variable $\zeta := \inf\{t : X(t) = 0\}$ is a.s. finite, and $X(t) = 0$ for $t\ge\zeta$. Then $\mathbb{E}\zeta < \infty$ is necessary and sufficient for (3.8) to hold.
Indeed, if $\mathbb{E}\zeta < \infty$, then from the equality
where $\zeta_k := \inf\{t : X_k(t) = 0\}$ and $\nu(u) := \inf\{k\in\mathbb{N}_0 : S_k > u\}$. If $\mathbb{E}\zeta = \infty$, then, given $(S_j)$, the series $\sum_{k\ge0}X_{k+1}(u + S_k)\mathbb{1}_{\{S_k\ge-u\}}$ does not converge a.s. by the three series theorem. Since the general term of the series is nonnegative, then, given $(S_j)$, this series diverges a.s. Hence, for each $u\in\mathbb{R}$, $\sum_{k\in\mathbb{Z}}X_{k+1}(u + S_k)\mathbb{1}_{\{S_k\ge-u\}} = \infty$ a.s., and (3.8) cannot hold.
Here are several specializations of the aforementioned process.
(a) In (3.3), let $(Z_{1,1}(t))_{t\ge0}$ be a subcritical or critical Bellman–Harris process (see Chapter IV in [20] for the definition and many properties) with a single ancestor. Then $Y$ is a subcritical or critical Bellman–Harris process with immigration at the epochs of a renewal process. Let $\tau$ and $N$ be independent, with $\tau$ being distributed according to the life length distribution and $N$ according to the offspring distribution of the process. Suppose that $\mathbb{P}\{\tau = 0\} = 0$, $\mathbb{P}\{N = 0\} < 1$ and $\mathbb{P}\{N = 1\} < 1$. Then $Z_{1,1}(t) + \ldots + Z_{\theta_0,1}(t)$ is a pattern of the process $X$ as discussed above, and we infer that $Y$ satisfies (3.8) if, and only if, $\mathbb{E}\zeta < \infty$. The latter is equivalent to
$$\int_0^\infty\Big(1 - \sum_{k\ge0}\big(\mathbb{P}\{Z_{1,1}(t) = 0\}\big)^k\,\mathbb{P}\{\theta = k\}\Big)dt < \infty$$
because
$$\lim_{n\to\infty}\Big|\sum_{k=0}^n\eta_{k+1}\exp(aS_k)\Big| = \infty$$
in probability, where $\eta_1, \eta_2, \ldots$ are i.i.d. copies of $\eta$. The latter implies that (3.8) cannot hold.
Denote by $N_p$ the set of Radon point measures on $(-\infty,\infty]$ with the topology of vague convergence, and let $\varepsilon_{x_0}$ denote the probability measure concentrated at the point $x_0\in\mathbb{R}$. Recall that, for $m_n, m\in N_p$, $\lim_{n\to\infty}m_n = m$ vaguely if, and only if,
$$\lim_{n\to\infty}\int f(x)\,m_n(dx) = \int f(x)\,m(dx)$$
for all nonnegative continuous functions $f$ with compact support.
First we prove three auxiliary results. Lemma 3.2.3 given next and the continuous mapping theorem are the key technical tools in the proof of Theorem 3.2.1.
Lemma 3.2.3 Assume that $\mu = \mathbb{E}\xi < \infty$ and that the distribution of $\xi$ is nonlattice. Then
$$\sum_{k\ge0}\varepsilon_{t-S_k} \Rightarrow \sum_{j\in\mathbb{Z}}\varepsilon_{S_j}, \quad t\to\infty$$
on $N_p$.
Proof Let $h : \mathbb{R}\to\mathbb{R}_+$ be a continuous function with a compact support. We have to prove that
$$\sum_{k\ge0}h(t-S_k) \stackrel{d}{\longrightarrow} \sum_{j\in\mathbb{Z}}h(S_j), \quad t\to\infty. \tag{3.18}$$
and then $\hat S_k := \hat S_0 + \hat S{}^{\circ}_k$, $k\in\mathbb{N}_0$. It is known (see p. 210 in [80]) that, for any fixed $\epsilon > 0$, there exist a.s. finite stopping times $\sigma_1 = \sigma_1(\epsilon)$ and $\sigma_2 = \sigma_2(\epsilon)$ such that $|S_{\sigma_2} - \hat S_{\sigma_1}| \le \epsilon$ a.s. Define the coupled random walk
$$\tilde S_k := \begin{cases}\hat S_k, & \text{if } k\le\sigma_1,\\ \hat S_{\sigma_1} + \sum_{j=\sigma_2+1}^{\sigma_2+k-\sigma_1}\xi_j, & \text{if } k\ge\sigma_1+1,\end{cases}$$
for $k\in\mathbb{N}_0$. Then $(\tilde S_k)_{k\in\mathbb{N}_0} \stackrel{d}{=} (\hat S_k)_{k\in\mathbb{N}_0} \stackrel{d}{=} (S_k)_{k\in\mathbb{N}_0}$. The construction of the coupled walk guarantees that
$$|S_{\sigma_2+k} - \tilde S_{\sigma_1+k}| \le \epsilon \quad\text{a.s.} \tag{3.19}$$
for $k\in\mathbb{N}_0$.
With the same $\epsilon$ as above, we set $h^{(\epsilon)}(x) := \sup_{|y-x|\le\epsilon}h(y)$ and $h_{(\epsilon)}(x) := \inf_{|y-x|\le\epsilon}h(y)$, $x\in\mathbb{R}$. The so defined functions are continuous functions with compact supports. Indeed, if $x$ is a discontinuity of $h^{(\epsilon)}$, then $x-\epsilon$ or $x+\epsilon$ is a discontinuity of $h$. Consequently, since $h$ is continuous, so is $h^{(\epsilon)}$ and, by the same argument, $h_{(\epsilon)}$. The claim about supports is obvious. Observe that the sum on the right-hand side of (3.18) is a.s. finite because the number of its nonzero terms is a.s. finite. The same is true if $h$ is replaced by $h^{(\epsilon)}$ or $h_{(\epsilon)}$.
Using now (3.19) we infer
\[
\begin{aligned}
\sum_{k\ge 0}h(t-S_k)&=\sum_{k=0}^{\sigma_2-1}h(t-S_k)+\sum_{k\ge 0}h\big(t-(S_{\sigma_2+k}-\tilde S_{\sigma_1+k})-\tilde S_{\sigma_1+k}\big)\\
&\le\sum_{k=0}^{\sigma_2-1}h(t-S_k)+\sum_{k\ge 0}h^{(\varepsilon)}(t-\tilde S_{\sigma_1+k})\\
&=\sum_{k=0}^{\sigma_2-1}h(t-S_k)-\sum_{k=0}^{\sigma_1-1}h^{(\varepsilon)}(t-\tilde S_k)+\sum_{k\ge 0}h^{(\varepsilon)}(t-\tilde S_k).
\end{aligned}
\]
The first two summands on the right-hand side tend to 0 a.s. as t → ∞ in view of lim_{t→∞} h(t) = lim_{t→∞} h^{(ε)}(t) = 0. With A^{(ε)} := inf{t : h^{(ε)}(t) ≠ 0} and g^{(ε)}(t) := h^{(ε)}(t + A^{(ε)}), t ∈ R, the third term satisfies
\[
\sum_{k\ge 0}h^{(\varepsilon)}(t-\tilde S_k)=\sum_{k\ge 0}g^{(\varepsilon)}(t-A^{(\varepsilon)}-\tilde S_k)\ \overset{d}{=}\ \sum_{k\ge 0}g^{(\varepsilon)}(t-A^{(\varepsilon)}-S_k),
\]
where the second distributional equality is a consequence of (3.6); the last equality is due to the fact that h^{(ε)}(S_k + A^{(ε)}) = 0 for negative integer k, and the last
distributional equality follows from the distributional shift invariance of Σ_{k∈Z} ε_{S_k} (see p. 90). Further, since h is continuous, we have h^{(ε)} ↓ h as ε ↓ 0. Therefore
\[
\lim_{\varepsilon\downarrow 0}\mathbb{E}\Big|\sum_{k\in\mathbb{Z}}h^{(\varepsilon)}(S_k)-\sum_{k\in\mathbb{Z}}h(S_k)\Big|=\lim_{\varepsilon\downarrow 0}\mu^{-1}\int\big(h^{(\varepsilon)}(x)-h(x)\big)\,dx=0
\]
for every continuity point x of the distribution function (d.f.) of Σ_{j∈Z} h(S_j). More precisely, let (ε_n)_{n∈N} be a sequence with ε_n ↓ 0 as n → ∞. Let x be a continuity point of the d.f. of Σ_{j∈Z} h(S_j) and let x − δ (δ > 0) be a continuity point of the d.f. of Σ_{j∈Z} h(S_j) and of the d.f.'s of Σ_{j∈Z} h^{(ε_n)}(S_j) (the set of such δ is dense in R). Then
\[
\begin{aligned}
\limsup_{t\to\infty}\,&\mathbb{P}\Big\{\sum_{k\ge 0}h(t-S_k)>x\Big\}\\
&\le\limsup_{t\to\infty}\,\mathbb{P}\Big\{\sum_{k=0}^{\sigma_2-1}h(t-S_k)-\sum_{k=0}^{\sigma_1-1}h^{(\varepsilon_n)}(t-\tilde S_k)>\delta\Big\}\\
&\quad+\limsup_{t\to\infty}\,\mathbb{P}\Big\{\sum_{k\ge 0}h^{(\varepsilon_n)}(t-\tilde S_k)>x-\delta\Big\}=\mathbb{P}\Big\{\sum_{k\in\mathbb{Z}}h^{(\varepsilon_n)}(S_k)>x-\delta\Big\}.
\end{aligned}
\]
As n → ∞, the last expression tends to P{Σ_{j∈Z} h(S_j) > x − δ}. Sending now δ ↓ 0 along an appropriate sequence, we arrive at the desired conclusion. Corresponding lower bounds can be obtained similarly, starting with
\[
\sum_{k\ge 0}h(t-S_k)\ \ge\ -\sum_{k=0}^{\sigma_1-1}h_{(\varepsilon)}(t-\tilde S_k)+\sum_{k\ge 0}h_{(\varepsilon)}(t-\tilde S_k).
\]
in (D(R), d).
Proof Without loss of generality we assume that t = 0. It suffices to prove that there exist λ_n ∈ Λ, n ∈ N, such that, for any −∞ < a < b < ∞,
\[
\lim_{n\to\infty}\max\Big\{\sup_{u\in[a,b]}|\lambda_n(u)-u|,\ \sup_{u\in[a,b]}|f_n(t_n+\lambda_n(u))-f(u)|\Big\}=0. \tag{3.20}
\]
Put γ_n(u) := λ_n(u) − t_n and note that γ_n ∈ Λ. Then (3.21) can be rewritten as
\[
\lim_{n\to\infty}\max\Big\{\sup_{u\in[a,b]}|\gamma_n(u)-u+t_n|,\ \sup_{u\in[a,b]}|f_n(t_n+\gamma_n(u))-f(u)|\Big\}=0.
\]
Note that (D(R)^Z, d_Z) is a complete and separable metric space. Now consider the metric space (N_p × D(R)^Z, ρ), where ρ is the sum of a metric for vague convergence on N_p and d_Z (i.e., convergence is defined componentwise). As the Cartesian product of complete and separable spaces, (N_p × D(R)^Z, ρ) is complete and separable.
For fixed c > 0, l ∈ N and (u_1, …, u_l) ∈ R^l, define the mapping φ_c^{(l)}: N_p × D(R)^Z → R^l by
\[
\varphi_c^{(l)}\big(m,(f_k(\cdot))_{k\in\mathbb{Z}}\big):=\Big(\sum_k f_k(t_k+u_j)\,\mathbb{1}_{\{|t_k|\le c\}}\Big)_{j=1,\dots,l},
\]
where (t_k) are the points of m.
Lemma 3.2.5 The mapping φ_c^{(l)} is continuous at all points (m, (f_k)_{k∈Z}) for which m({−c, 0, c}) = 0 and for which u_1, …, u_l are continuity points of f_k(t_k + ·) for all k ∈ Z.
Proof Let c > 0 and suppose that
\[
\big(m_n,(f_k^{(n)})_{k\in\mathbb{Z}}\big)\to\big(m,(f_k)_{k\in\mathbb{Z}}\big),\qquad n\to\infty. \tag{3.22}
\]
Write
\[
m(\cdot\cap[-c,0))=\sum_{k=1}^{r_-}\varepsilon_{t_{-k}}\qquad\text{and}\qquad m(\cdot\cap[0,c])=\sum_{k=0}^{r_+-1}\varepsilon_{t_k},
\]
where the empty sum is interpreted as 0. Theorem 3.13 in [235] further implies that there is convergence of the points of m_n in [−c, 0] to the points of m in [−c, 0], and analogously with [−c, 0] replaced by [0, c]. Since m has no point at 0, this implies that lim_{n→∞} t_k^{(n)} = t_k for k = −r_−, …, r_+ − 1. On the other hand, (3.22) entails lim_{n→∞} f_k^{(n)} = f_k in (D(R), d) for k = −r_−, …, r_+ − 1. Therefore, Lemma 3.2.4 ensures that
\[
f_k^{(n)}(t_k^{(n)}+\cdot)\to f_k(t_k+\cdot),\qquad n\to\infty. \tag{3.23}
\]
having utilized (3.7) for the last equality. Therefore Σ_{k≥0} |Z_k| < ∞ a.s. by the two-series theorem, which implies |Y(u)| < ∞ a.s.
Using Lemma 3.2.3 and recalling that the space N_p × D(R)^Z is separable, we infer
\[
\Big(\sum_{k\ge 0}\varepsilon_{t-S_k},\,(X_{k+1})_{k\in\mathbb{Z}}\Big)\ \Rightarrow\ \Big(\sum_{k\in\mathbb{Z}}\varepsilon_{S_k},\,(X_{k+1})_{k\in\mathbb{Z}}\Big),\qquad t\to\infty \tag{3.26}
\]
and that
\[
\sum_{i=1}^{l}\alpha_i Y_c(t,u_i)\ \overset{d}{\to}\ \sum_{i=1}^{l}\alpha_i Y_c(u_i),\qquad t\to\infty,
\]
\[
\sum_{i=1}^{l}\alpha_i Y_c(u_i)\ \overset{d}{\to}\ \sum_{i=1}^{l}\alpha_i Y(u_i),\qquad c\to\infty \tag{3.27}
\]
and
\[
\lim_{c\to\infty}\limsup_{t\to\infty}\,\mathbb{P}\Big\{\Big|\sum_{i=1}^{l}\alpha_i\sum_{k\ge 0}X_{k+1}(u_i+t-S_k)\,\mathbb{1}_{\{|t-S_k|>c\}}\Big|>\epsilon\Big\}=0 \tag{3.28}
\]
in particular, Σ_{k∈Z} |X_{k+1}(u + S_k)| < ∞ a.s. Hence, by the monotone (or dominated) convergence theorem,
\[
|Y_c(u)-Y(u)|\le\sum_{k\in\mathbb{Z}}|X_{k+1}(u+S_k)|\,\mathbb{1}_{\{|S_k|>c\}}\to 0
\]
as c → ∞ a.s.
Proof of (3.28) It suffices to prove that
\[
\lim_{c\to\infty}\limsup_{t\to\infty}\,\mathbb{P}\Big\{\Big|\sum_{k\ge 0}X_{k+1}(u+t-S_k)\,\mathbb{1}_{\{|t-S_k|>c\}}\Big|>\epsilon\Big\}=0.
\]
The probability in question is bounded from above by
\[
\begin{aligned}
&\mathbb{P}\Big\{\sum_{k\ge 0}|X_{k+1}(u+t-S_k)|\,\mathbb{1}_{\{|t-S_k|>c,\,|X_{k+1}(u+t-S_k)|\le 1\}}>\epsilon/2\Big\}\\
&\quad+\mathbb{P}\Big\{\sum_{k\ge 0}|X_{k+1}(u+t-S_k)|\,\mathbb{1}_{\{|t-S_k|>c,\,|X_{k+1}(u+t-S_k)|>1\}}>\epsilon/2\Big\}\\
&\le\sum_{k\ge 0}\mathbb{P}\big\{|t-S_k|>c,\,|X_{k+1}(u+t-S_k)|>1\big\}+2\epsilon^{-1}\sum_{k\ge 0}\mathbb{E}\big[G(u+t-S_k)\,\mathbb{1}_{\{|t-S_k|>c\}}\big].
\end{aligned}
\]
Since P{|t − S_k| > c, |X_{k+1}(u + t − S_k)| > 1} ≤ E[G(u + t − S_k) 1_{{|t−S_k|>c}}] for k ∈ N_0, the latter limit relation entails
\[
\lim_{c\to\infty}\limsup_{t\to\infty}\,\sum_{k\ge 0}\mathbb{P}\big\{|t-S_k|>c,\,|X_{k+1}(u+t-S_k)|>1\big\}=0,
\]
thereby finishing the proof of (3.28). The proof of Theorem 3.2.1 is complete. □
The proof of convergence in Theorem 3.2.2, which can be found in [146], will not be given here, for it is very similar to the proof of Lemma 3.2.3. We shall only show that the limit random variables in Theorem 3.2.2 are well defined.
Proposition 3.2.6 Assume that μ < ∞ and that the distribution of ξ is nonlattice. Let h: R_+ → R be locally bounded, eventually nonincreasing and nonintegrable. Under the assumptions of Theorem 3.2.2,
\[
Y_\delta=\lim_{t\to\infty}\Big(\sum_{k\ge 0}h(S_k)\,\mathbb{1}_{\{S_k\le t\}}-\mu^{-1}\int_0^t h(y)\,dy\Big)
\]
exists as the limit in L_2 in case (A1), as the a.s. limit in case (A2) and as the limit in probability in case (A3). In all three cases, it is a.s. finite.
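The kind of limit described in Proposition 3.2.6 is easy to observe numerically. The following sketch (names are ours) evaluates the centered sum for the deterministic walk S_k = k (so that μ = 1) and h(y) = 1/(1+y); in that case the centered sum converges to the Euler–Mascheroni constant:

```python
import math

def centered_shot_noise(h, H, S, t, mu):
    """sum_{k: S_k <= t} h(S_k) - mu^{-1} * (H(t) - H(0)),
    where H is an antiderivative of h."""
    return sum(h(x) for x in S if x <= t) - (H(t) - H(0)) / mu

h = lambda y: 1.0 / (1.0 + y)
H = lambda y: math.log(1.0 + y)
t = 200000
val = centered_shot_noise(h, H, range(t + 1), t, 1.0)  # ~ 0.5772 (Euler-Mascheroni)
```

For a genuinely random walk with Eξ = 1 the same quantity fluctuates around a random limit, which is what the proposition asserts.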
\[
\sum_{k=0}^{n}f(k)-\int_0^n f(y)\,dy=\sum_{k=0}^{n-1}\Big(f(k)-\int_k^{k+1}f(y)\,dy\Big)+f(n).
\]
Since f is nonincreasing, each summand in the sum is nonnegative. Hence, the sum is nondecreasing in n. On the other hand, it is bounded from above:
\[
\sum_{k=0}^{n-1}\Big(f(k)-\int_k^{k+1}f(y)\,dy\Big)\le\sum_{k=0}^{n-1}\big(f(k)-f(k+1)\big)\le f(0)<\infty.
\]
Consequently, the series Σ_{k≥0} (f(k) − ∫_k^{k+1} f(y) dy) converges. Recalling that lim_{n→∞} f(n) exists completes the proof. □
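The monotonicity-and-boundedness argument above can be checked numerically; a small sketch (assuming a nonincreasing f with known antiderivative F, names ours):

```python
import math

def partial_sums(f, F, n):
    """Partial sums of sum_{k>=0} (f(k) - int_k^{k+1} f(y) dy),
    with F an antiderivative of f; each term is nonnegative when f is nonincreasing."""
    out, acc = [], 0.0
    for k in range(n):
        acc += f(k) - (F(k + 1) - F(k))
        out.append(acc)
    return out

f = lambda y: math.exp(-y)
F = lambda y: -math.exp(-y)
s = partial_sums(f, F, 50)
```

For f(y) = e^{-y} the k-th term equals e^{-k-1}, so the partial sums increase to 1/(e − 1), staying below f(0) = 1, exactly as the proof predicts.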
Proof of Proposition 3.2.6 We only investigate cases (A1) and (A2), assuming additionally that h is nonincreasing on R_+ (rather than eventually nonincreasing) in case (A1), and that h is nonincreasing and twice differentiable on R_+ with h'' ≥ 0 in case (A2). A complete proof in full generality can be found in [146] and [147].
Define
\[
Y_t:=\sum_{k\ge 0}h(S_k)\,\mathbb{1}_{\{S_k\le t\}}-\mu^{-1}\int_0^t h(y)\,dy,\qquad t\ge 0.
\]
Since
\[
Y_t-Y_s=\sum_{k\ge 0}h(S_k)\,\mathbb{1}_{\{s<S_k\le t\}}-\mathbb{E}\sum_{k\ge 0}h(S_k)\,\mathbb{1}_{\{s<S_k\le t\}},
\]
3.2 Limit Theorems Without Scaling 105
we obtain
\[
\mathbb{E}(Y_t-Y_s)^2=\mathbb{E}\Big(\sum_{k\ge 0}h(t-S_k)\,\mathbb{1}_{\{S_k<t-s\}}-\mu^{-1}\int_s^t h(y)\,dy\Big)^2,
\]
where the last equality follows from (3.6) and (3.7). The first term on the right-hand side equals
\[
\begin{aligned}
&\mathbb{E}\sum_{k\ge 0}\big(h(t-S_k)\big)^2\mathbb{1}_{\{S_k<t-s\}}+2\,\mathbb{E}\sum_{0\le i<j}h(t-S_i)\,\mathbb{1}_{\{S_i<t-s\}}\,h(t-S_j)\,\mathbb{1}_{\{S_j<t-s\}}\\
&=\mu^{-1}\int_0^{t-s}\big(h(t-y)\big)^2dy+2\mu^{-1}\int_0^{t-s}h(t-y)\int_{(0,\,t-s-y)}h(t-y-x)\,d\tilde U(x)\,dy\\
&=\mu^{-1}\int_s^t\big(h(y)\big)^2dy+2\mu^{-1}\int_s^t h(y)\int_{(0,\,y-s)}h(y-x)\,d\tilde U(x)\,dy,
\end{aligned}
\]
where Ũ(x) := Σ_{n≥1} P{S_n ≤ x}, x ≥ 0. Note that Ũ(x) = U(x) − 1, where U(x) = Σ_{n≥0} P{S_n ≤ x} is the renewal function. Hence,
\[
\mathbb{E}(Y_t-Y_s)^2=\mu^{-1}\int_s^t\big(h(y)\big)^2dy+2\mu^{-1}\int_s^t h(y)\int_{(0,\,y-s)}h(y-x)\,d\big(\tilde U(x)-\mu^{-1}x\big)\,dy.
\]
Since h² is assumed to be integrable, lim_{s→∞} sup_{t>s} ∫_s^t (h(y))² dy = 0, and it remains to check that
\[
\lim_{s\to\infty}\sup_{t>s}\int_s^t h(y)\int_{(0,\,y-s)}h(y-x)\,d\big(\tilde U(x)-\mu^{-1}x\big)\,dy=0. \tag{3.29}
\]
Put H_{s,t}(x) := ∫_s^{t−x} h(x+y)h(y) dy for x ∈ [0, t−s) and H_{s,t}(x) := 0 for all other x. Note that H_{s,t}(x) is continuous and nonincreasing on R_+. Changing the order of integration followed by integration by parts gives
\[
\begin{aligned}
\int_s^t h(y)\int_{(0,\,y-s)}h(y-x)\,d\big(\tilde U(x)-\mu^{-1}x\big)\,dy
&=\int_{(0,\,t-s)}\int_s^{t-x}h(x+y)h(y)\,dy\,d\big(\tilde U(x)-\mu^{-1}x\big)\\
&=-\int_{(0,\,t-s)}\big(\tilde U(x)-\mu^{-1}x\big)\,dH_{s,t}(x)\\
&\le\sup_{x\ge 0}\big|\tilde U(x)-\mu^{-1}x\big|\,H_{s,t}(0)\\
&=\sup_{x\ge 0}\big|\tilde U(x)-\mu^{-1}x\big|\int_s^t\big(h(y)\big)^2dy.
\end{aligned}
\]
By Lorden's inequality (formula (6.6)), sup_{x≥0} |Ũ(x) − μ^{-1}x| ≤ μ^{-2} Var ξ < ∞, and (3.29) follows.
Proof for Case (A2) is divided into three steps.
Step 1. Prove that if U_n := Σ_{k=0}^n (h(S_k) − h(μk)) converges a.s. as n → ∞, then Σ_{k≥0} h(S_k) 1_{{S_k≤t}} − μ^{-1} ∫_0^t h(y) dy converges a.s. as t → ∞.
Step 2. Prove that if the series Σ_{j≥1} (ξ_j − μ) Σ_{k≥j} h'(μk) converges a.s., then U_n converges a.s. as n → ∞.
Step 3. Use the three-series theorem to check that, under the conditions stated, the series Σ_{j≥1} (ξ_j − μ) Σ_{k≥j} h'(μk) converges a.s.
Step 1. Assume that U_n converges a.s. Then, by Lemma 3.2.7, the sequence Σ_{k=0}^n h(S_k) − μ^{-1} ∫_0^{μn} h(y) dy converges a.s., too. Further, the quantity in question is bounded by
\[
\big|\mu(\nu(t)-1)-t\big|\,h\big(\mu(\nu(t)-1)\wedge t\big), \tag{3.31}
\]
where the inequality follows from the monotonicity of h. By Theorem 3.4.4 in [119], Eξ^r < ∞ implies that
\[
\nu(t)-\mu^{-1}t=o(t^{1/r}),\qquad t\to\infty\quad\text{a.s.}
\]
This relation implies that the first factor in (3.31) is o(t^{1/r}), whereas the second factor is o(t^{−1/r}) as t → ∞. The latter relation can be derived as follows. First, in view of (3.12) and the monotonicity of h, we have
\[
\mu(\nu(t)-1)\wedge t\ \sim\ t,\qquad t\to\infty\quad\text{a.s.}
\]
By Taylor's formula,
\[
h(S_k)-h(\mu k)=h'(\mu k)(S_k-\mu k)+2^{-1}h''(\theta_k)(S_k-\mu k)^2,
\]
where θ_k lies between S_k and μk.
Set
\[
I_n:=2^{-1}\sum_{k=1}^{n}h''(\theta_k)(S_k-\mu k)^2
\]
and write
\[
\begin{aligned}
U_n-h(S_0)+h(0)&=\sum_{k=1}^{n}h'(\mu k)(S_k-\mu k)+I_n\\
&=S_0\sum_{k=1}^{n}h'(\mu k)+\sum_{k=1}^{n}(\xi_k-\mu)\sum_{j=k}^{n}h'(\mu j)+I_n\\
&=S_0\sum_{k=1}^{n}h'(\mu k)+\sum_{k=1}^{n}(\xi_k-\mu)\sum_{j\ge k}h'(\mu j)-(S_n-\mu n)\sum_{k\ge n+1}h'(\mu k)+I_n. \tag{3.33}
\end{aligned}
\]
for all n. Using the first inequality in (3.34) and the fact that lim_{y→∞} h(y) = 0, one immediately infers that the first summand in (3.33) converges as n → ∞. The a.s. convergence of the second (principal) term is assumed to hold here. By the Marcinkiewicz–Zygmund law of large numbers (Theorem 2 on p. 125 in [68]),
\[
S_n-\mu n=o(n^{1/r}),\qquad n\to\infty\quad\text{a.s.} \tag{3.35}
\]
Therefore, in view of (3.32) and (3.34), the third term in (3.33) converges to zero a.s. Further, lim_{k→∞} k^{-1}θ_k = μ a.s. by the strong law of large numbers. Hence, in view of (3.13),
\[
h''(\theta_k)=O\big(\theta_k^{-2-1/r}\big)=O\big(k^{-2-1/r}\big),\qquad k\to\infty,
\]
whence
\[
h''(\theta_k)(S_k-\mu k)^2=o\big(k^{-(2-1/r)}\big),\qquad k\to\infty\quad\text{a.s.},
\]
which implies that I_n converges a.s. as n → ∞, for 2 − 1/r > 1. Hence the a.s. convergence of Σ_{k≥1} (ξ_k − μ) Σ_{j≥k} h'(μj) entails that of U_n.
Step 3. Set
\[
c_k:=\sum_{j\ge k}h'(\mu j)\qquad\text{and}\qquad\rho_k:=c_k(\xi_k-\mu),\qquad k\in\mathbb{N}.
\]
Condition (3.12) ensures that Σ_{k≥1} (h(μk))^r < ∞. In view of (3.34),
\[
\sum_{k\ge 1}\mathbb{E}|\rho_k|^r=\mathbb{E}|\xi-\mu|^r\sum_{k\ge 1}\Big(\sum_{j\ge k}|h'(\mu j)|\Big)^{r}\le\mu^{-r}\,\mathbb{E}|\xi-\mu|^r\sum_{k\ge 1}\big(h(\mu(k-1))\big)^r<\infty.
\]
Hence the series Σ_{k≥1} ρ_k converges a.s. by Corollary 3 on p. 117 in [68]. □
3.3 Limit Theorems with Scaling
In this section we focus on the case where the distribution of ξ is in the domain of attraction of a stable distribution of index α ≠ 1 and, if μ < ∞ (equivalently, α > 1), either EX(t) or Var X(t) is too large for convergence to stationarity. In this situation we investigate weak convergence of the finite-dimensional distributions of Y_t(u) := (a(t))^{-1}(Y(ut) − b(ut)) with suitable norming constants a(t) > 0 and shifts b(t) ∈ R. This convergence is mainly regulated by two factors: the tail behavior of ξ and the asymptotics of the finite-dimensional distributions of X(t) as t → ∞. The various combinations of these give rise to a broad spectrum of possible limit results.
Assuming that h(t) := EX(t) is finite for all t ≥ 0, we start with the decomposition
\[
Y(t)-b(t)=\Big(Y(t)-\sum_{k\ge 0}h(t-S_k)\,\mathbb{1}_{\{S_k\le t\}}\Big)+\Big(\sum_{k\ge 0}h(t-S_k)\,\mathbb{1}_{\{S_k\le t\}}-b(t)\Big) \tag{3.36}
\]
and observe that Y_t(u) may converge if at least one summand in (3.36), properly normalized, converges weakly.
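For orientation, a path value of Y(t) can be generated directly from its definition; a minimal sketch (names are ours) in which each renewal epoch launches an independent copy of the response process:

```python
import math

def immigration_value(t, sample_step, sample_response):
    """Y(t) = sum_{k >= 0, S_k <= t} X_{k+1}(t - S_k): each renewal epoch S_k
    starts a fresh copy of X, and Y(t) adds their current values."""
    total, S = 0.0, 0.0
    while S <= t:
        x = sample_response()        # freeze this copy's randomness
        total += x(t - S)
        S += sample_step()
    return total

# deterministic check: unit steps and the pattern X(s) = exp(-s)
y = immigration_value(2.5, lambda: 1.0, lambda: (lambda s: math.exp(-s)))
```

With unit steps the epochs S_k = 0, 1, 2 contribute, so Y(2.5) = e^{−2.5} + e^{−1.5} + e^{−0.5}; replacing the two lambdas with random generators produces one sample of the process studied in this section.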
The asymptotics of the first summand, properly normalized, is accessible via
martingale central limit theory or convergence results for triangular arrays. When
Eξ is finite, the normalizing constants and limit processes for the first summand are completely determined by properties of X; the influence of the distribution of ξ is small. This phenomenon can easily be understood: the randomness induced by the ξ_k is governed by the law of large numbers for ν(t) and is thus degenerate in the limit. When Eξ is infinite and P{ξ > t} is regularly varying with index larger than −1, ν(t), properly normalized, converges weakly to a nondegenerate distribution. Hence, unlike in the finite mean case, the randomness induced by ξ persists in the limit.
The asymptotic behavior of the second summand, properly normalized, is driven by the functional limit theorems or the strong approximation results for the first-passage time process (ν(t))_{t≥0} as well as by the behavior of the function h at infinity.
It turns out that there are situations in which one of the summands in (3.36) dominates (cases (Bi1) and (Bi2), i = 1, 2, 3, of Theorem 3.3.18; cases (C1) and (C2) of Theorem 3.3.19; the case when h ≡ 0), and those in which the contributions of the summands are comparable (cases (Bi3), i = 1, 2, 3, of Theorem 3.3.18 and case (C3) of Theorem 3.3.19). A nice feature of the former situation is that
possible dependence of X and ξ gets neutralized by normalization (provided that lim_{t→∞} a(t) = +∞) so that the limit results are only governed by the individual contributions of X and ξ. Suppose, for the time being, that the latter situation prevails, i.e., the two summands in (3.36) are of the same order, and that X and ξ are independent. From the discussion above it should be clear that whenever Eξ is finite, the two limit random processes corresponding to the summands in (3.36) are independent, whereas this is not the case otherwise. Still, we are able to show that the summands in (3.36) converge jointly. When X and ξ are dependent, proving such a joint convergence remains an open problem.
Throughout the section we assume that h(t) = EX(t) is finite for all t ≥ 0 and that the covariance r(s, t) := Cov(X(s), X(t)) satisfies
\[
\lim_{t\to\infty}\frac{r(ut,wt)}{r(t,t)}=C(u,w),\qquad u,w>0.
\]
The definition implies that r(t, t) is regularly varying at ∞, i.e., r(t, t) ~ t^β ℓ*(t) as t → ∞ for some ℓ* slowly varying at ∞ and some β ∈ R which is called the index of regular variation. In particular, C(a, a) = a^β for all a > 0 and, further,
\[
C(au,aw)=a^{\beta}C(u,w),\qquad u,w>0.
\]
⁴ The canonical definition of regular variation in R²_+ (see, for instance, [120]) requires nonnegativity of r. The definitions of slowly and regularly varying functions on R_+ can be found in Definitions 6.1.1 and 6.1.2 of Section 6.1.
when C(s, t) ≠ 0 for some s, t > 0, s ≠ t, and a centered Gaussian process with independent values and variance E(V_β(u))² = (1+β)^{-1} u^{1+β} otherwise.
Let S_2 := (S_2(t))_{t≥0} denote a standard Brownian motion and, for 1 < α < 2, let (S_α(t))_{t≥0} denote a spectrally negative α-stable Lévy process such that S_α(1) has the characteristic function
\[
z\mapsto\exp\big\{-|z|^{\alpha}\Gamma(1-\alpha)\big(\cos(\pi\alpha/2)+\mathrm{i}\sin(\pi\alpha/2)\,\mathrm{sgn}(z)\big)\big\},\qquad z\in\mathbb{R}. \tag{3.38}
\]
Also, we set I_{α,0}(u) := S_α(u) for u ≥ 0. The stochastic integral above is defined via integration by parts: if ρ > 0, then
\[
I_{\alpha,\rho}(u)=\rho\int_0^u S_\alpha(y)(u-y)^{\rho-1}\,dy,\qquad u>0.
\]
This definition is consistent with the usual definition of a stochastic integral with a deterministic integrand and the integrator being a semimartingale. We shall call I_{α,ρ} := (I_{α,ρ}(u))_{u≥0} a fractionally integrated α-stable Lévy process.
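The integration-by-parts formula for ρ > 0 is straightforward to evaluate numerically for a given path; a sketch using the midpoint rule (names are ours):

```python
def fractionally_integrated(path, u, rho, n=10000):
    """I(u) = rho * int_0^u path(y) * (u - y)**(rho - 1) dy, rho > 0,
    approximated by the midpoint rule."""
    h = u / n
    return rho * sum(path((j + 0.5) * h) * (u - (j + 0.5) * h) ** (rho - 1) * h
                     for j in range(n))
```

With the deterministic path S(y) = y this reduces to ρ∫_0^u y(u−y)^{ρ−1} dy = u^{ρ+1}/(ρ+1), which gives a quick correctness check before feeding in a simulated stable path.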
Definition 3.3.6 recalls the notion of an inverse subordinator.
Definition 3.3.6 For α ∈ (0, 1), let W_α := (W_α(t))_{t≥0} be an α-stable subordinator (a nondecreasing Lévy process) with the Laplace exponent −log E exp(−zW_α(t)) = Γ(1−α) t z^α, z ≥ 0. The inverse α-stable subordinator W_α^← := (W_α^←(t))_{t≥0} is defined by
\[
W_\alpha^{\leftarrow}(t):=\inf\{s\ge 0: W_\alpha(s)>t\}.
\]
The processes introduced in Definitions 3.3.7 and 3.3.8 arise as weak limits of the second and the first summand in (3.36), respectively, in the case when P{ξ > t} is regularly varying of index −α for α ∈ (0, 1). We shall check that these are well defined in Lemmas 3.3.25 and 3.3.27, respectively.
Definition 3.3.7 For ρ ∈ R, set
\[
J_{\alpha,\rho}(0):=0,\qquad J_{\alpha,\rho}(u):=\int_{[0,u]}(u-y)^{\rho}\,dW_\alpha^{\leftarrow}(y),\qquad u>0.
\]
Since the integrator W_α^← has nondecreasing paths, the integral exists as a pathwise Lebesgue–Stieltjes integral. We shall call J_{α,ρ} := (J_{α,ρ}(u))_{u≥0} a fractionally integrated inverse α-stable subordinator.
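Paths of W_α and W_α^← can be simulated. The sketch below (names are ours) draws subordinator increments via Kanter's representation of the positive stable law with Laplace transform exp(−z^α) and evaluates the inverse by first passage; the time step dt is a crude discretization parameter:

```python
import math, random

def stable_increment(alpha, dt, rng=random):
    """One increment of an alpha-stable subordinator over time dt,
    via Kanter's representation of the positive stable law."""
    u = rng.uniform(0.0, math.pi)
    e = rng.expovariate(1.0)
    s = (math.sin(alpha * u) / math.sin(u) ** (1.0 / alpha)
         * (math.sin((1.0 - alpha) * u) / e) ** ((1.0 - alpha) / alpha))
    return dt ** (1.0 / alpha) * s

def inverse_subordinator(alpha, times, dt=1e-3, rng=random):
    """W^{<-}(t) = inf{s : W(s) > t} along one simulated path of W,
    evaluated at the sorted epochs in `times`."""
    out, s, w = [], 0.0, 0.0
    for t in sorted(times):
        while w <= t:
            w += stable_increment(alpha, dt, rng)
            s += dt
        out.append(s)
    return out
```

By construction the returned values are positive and nondecreasing in t, mirroring the defining property of W_α^←.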
Definition 3.3.8 Let W_α^← be an inverse α-stable subordinator and C the limit function for a wide-sense regularly varying function (see Definition 3.3.2) in R²_+ of index β for some β ∈ R. We shall denote by Z_{α,β} := (Z_{α,β}(u))_{u>0} a process which, given W_α^←, is centered Gaussian with (conditional) covariance
\[
\mathbb{E}\big[Z_{\alpha,\beta}(u)Z_{\alpha,\beta}(w)\,\big|\,W_\alpha^{\leftarrow}\big]=\int_{[0,u]}C(u-y,\,w-y)\,dW_\alpha^{\leftarrow}(y),\qquad 0<u\le w.
\]
Theorem 3.3.9 (case Eξ < ∞) and Theorem 3.3.10 (case Eξ = ∞) deal with the asymptotics of the first summand in (3.36).
We shall write V_t(u) ⇒^{f.d.} V(u) as t → ∞ to denote weak convergence of finite-dimensional distributions, i.e., for any n ∈ N and any 0 < u_1 < u_2 < … < u_n < ∞,
\[
(V_t(u_1),\dots,V_t(u_n))\ \overset{d}{\to}\ (V(u_1),\dots,V(u_n)),\qquad t\to\infty.
\]
Then
\[
\frac{Y(ut)-\sum_{k\ge 0}h(ut-S_k)\,\mathbb{1}_{\{S_k\le ut\}}}{\sqrt{\mu^{-1}t\,v(t)}}\ \overset{f.d.}{\Rightarrow}\ V_\beta(u),\qquad t\to\infty. \tag{3.40}
\]
Then
\[
\sqrt{\frac{\mathbb{P}\{\xi>t\}}{v(t)}}\Big(Y(ut)-\sum_{k\ge 0}h(ut-S_k)\,\mathbb{1}_{\{S_k\le ut\}}\Big)\ \overset{f.d.}{\Rightarrow}\ Z_{\alpha,\beta}(u),\qquad t\to\infty.
\]
When h(t) = EX(t) is not identically zero, the centerings used in Theorems 3.3.9 and 3.3.10 are random, which is undesirable. Theorem 3.3.18 (case Eξ < ∞) and Theorem 3.3.19 (case Eξ = ∞), stated below in Section 3.3.4, give limit results with nonrandom centerings. These are obtained by combining Theorems 3.3.12 and 3.3.13, which are the results concerning weak convergence of the second summand in (3.36), with Theorems 3.3.9 and 3.3.10, respectively.
In this section we investigate the asymptotics of the second summand in (3.36) under the assumption that h is regularly varying at infinity,
\[
h(t)\sim t^{\rho}\,\hat\ell(t),\qquad t\to\infty, \tag{3.43}
\]
for some ρ ∈ R and some ℓ̂ slowly varying at ∞. Recall that ℓ̂(t) > 0 for all t ≥ 0 by the definition of slow variation (see Definition 6.1.1 in Section 6.1). Note further that the functions h with lim_{t→∞} h(t) = b ∈ (0, ∞) are covered by condition (3.43) with ρ = 0 and lim_{t→∞} ℓ̂(t) = b.
Before we formulate our next results we have to recall that the distribution of ξ belongs to the domain of attraction of a 2-stable (normal) distribution if, and only if, either σ² := Var ξ < ∞ or Var ξ = ∞ and
for some ℓ slowly varying at ∞. Further, the distribution of ξ belongs to the domain of attraction of an α-stable distribution, α ∈ (0, 2), if, and only if,
for some ℓ slowly varying at ∞. We shall not treat the case α = 1, for it is technically more complicated than the others and does not shed any new light on weak convergence of random processes with immigration. If μ = Eξ = ∞, then necessarily α ∈ (0, 1) (because we excluded the case α = 1), and if μ < ∞, then necessarily α ∈ (1, 2].
As before, let D[0, ∞) denote the space of right-continuous real-valued functions on [0, ∞) with finite limits from the left at each positive point. Recall that (ν(t))_{t∈R} is the first-passage time process defined by ν(t) = inf{k ∈ N_0 : S_k > t} for t ∈ R. It is well known that the following functional limit theorems hold:
\[
\frac{\nu(ut)-\mu^{-1}ut}{g(t)}\ \Rightarrow\ S_\alpha(u),\qquad t\to\infty \tag{3.44}
\]
and
\[
\frac{\nu(ut)}{g(t)}\ \Rightarrow\ W_\alpha^{\leftarrow}(u),\qquad t\to\infty. \tag{3.45}
\]
Recall that (Z(t))_{t≥0} is a renewal shot noise process. The relevance of the preceding paragraphs for the subsequent presentation stems from the fact that (3.44) and (3.45) are functional limit theorems for (Z(t))_{t≥0} which corresponds to h(t) ≡ 1.
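The first-passage time process is elementary to compute for a concrete walk, which is a convenient way to observe the law-of-large-numbers behavior ν(ut)/t → u/μ behind the scalings above (a sketch, names ours):

```python
def first_passage(steps, t):
    """nu(t) = inf{k in N_0 : S_k > t}, with S_0 = 0 and S_k the partial sums
    of `steps` (which must contain enough increments to exceed t)."""
    S, k = 0.0, 0
    while S <= t:
        S += steps[k]
        k += 1
    return k
```

For deterministic steps of size μ one gets ν(t) = ⌊t/μ⌋ + 1 for t ≥ 0, and ν(t) = 0 for t < 0 since S_0 = 0 already exceeds t.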
as t → ∞.
(B2) Suppose that σ² = ∞ and that
for some ℓ slowly varying at ∞. Let c(t) be a positive continuous function such that lim_{t→∞} tℓ(c(t))(c(t))^{−2} = 1. If condition (3.43) holds with ρ > −1/2, then
\[
\frac{Z(ut)-\mu^{-1}\int_0^{ut}h(y)\,dy}{\mu^{-3/2}c(t)h(t)}\ \overset{f.d.}{\Rightarrow}\ I_{2,\rho}(u),\qquad t\to\infty.
\]
(B3) Suppose that
for some 1 < α < 2 and some ℓ slowly varying at ∞. Let c(t) be a positive continuous function such that lim_{t→∞} tℓ(c(t))(c(t))^{−α} = 1. If condition (3.43) holds with ρ > −1/α, then
\[
\frac{Z(ut)-\mu^{-1}\int_0^{ut}h(y)\,dy}{\mu^{-1-1/\alpha}c(t)h(t)}\ \overset{f.d.}{\Rightarrow}\ \int_{[0,u]}(u-y)^{\rho}\,dS_\alpha(y)=I_{\alpha,\rho}(u),\qquad t\to\infty.
\]
Theorem 3.3.12 only contains limit theorems with regularly varying normalization. Now we treat the borderline situation when ρ in (3.43) equals −1/2 yet the function h² is nonintegrable (we shall see that this gives rise to a slowly varying normalization). This case bears some similarity to the case ρ > −1/2 (normalization is needed; the limit is Gaussian) and is very different from the case when h² is integrable. The principal new feature of the present situation is the necessity of a sublinear time scaling as opposed to the time scalings u + t and ut used for the other regimes. As might be expected of a transitional regime, there are additional technical complications. In particular, the techniques used for the other regimes (tools related to stationarity; the continuous mapping theorem along with the functional limit theorem for the first-passage time process (ν(t))) cannot be exploited here. Our main technical tool is a strong approximation theorem.
Now we introduce a limit process X* := (X*(u))_{u∈[0,1]} appearing in Theorem 3.3.14 below. Let S_2 = (S_2(u))_{u∈[0,1]} denote a Brownian motion independent of D := (D(u))_{u∈[0,1]}, a centered Gaussian process with independent values which satisfies E(D(u))² = u. Then we set
\[
\frac{Z(t+x(t,u))-\mu^{-1}\int_0^{t+x(t,u)}h(y)\,dy}{\sqrt{\sigma^2\mu^{-3}\int_0^t(h(y))^2\,dy}}\ \overset{f.d.}{\Rightarrow}\ (X^*(u))_{u\in[0,1]},
\]
where σ² = Var ξ, μ = Eξ, and x: R_+ × [0, 1] → R_+ is any function nondecreasing in the second coordinate that satisfies
\[
\lim_{t\to\infty}\frac{\int_0^{x(t,u)}(h(y))^2\,dy}{\int_0^t(h(y))^2\,dy}=u \tag{3.48}
\]
(which can be checked as in the 'moderate' case), and one may take x(t, u) = exp((log t)^u).
'Fast' ℓ̂ If
for some γ > 0, some δ ∈ (0, 1) and some L slowly varying at ∞, then
Theorem 3.3.18 Assume that h(t) = EX(t) is eventually monotone and not identically zero, and that
• in cases (Bi1) and (Bi3), i = 1, 2, 3, the assumptions of Theorem 3.3.9 hold;
• in cases (Bi2) and (Bi3), i = 1, 2, 3, h(t) ~ t^ρ ℓ̂(t) as t → ∞ for some ρ ∈ R and some ℓ̂ slowly varying at ∞;
• in cases (Bi2), i = 1, 2, 3, lim_{t→∞} ∫_0^t v(y) dy = ∞ and there exists a positive monotone function u such that v(t) ~ u(t), t → ∞, or v is directly Riemann integrable on [0, ∞);
• in cases (Bi3), i = 1, 2, 3, X is independent of ξ.
(B1) Let σ² = Var ξ < ∞.
(B11) If lim_{t→∞} t(h(t))²/∫_0^t v(y) dy = 0 (which is equivalent to lim_{t→∞} (h(t))²/v(t) = 0), then
\[
\frac{Y(ut)-\mu^{-1}\int_0^{ut}h(y)\,dy}{\sqrt{\mu^{-1}t\,v(t)}}\ \overset{f.d.}{\Rightarrow}\ V_\beta(u),\qquad t\to\infty. \tag{3.50}
\]
\[
\frac{Y(ut)-\mu^{-1}\int_0^{ut}h(y)\,dy}{\mu^{-3/2}c(t)h(t)}\ \overset{f.d.}{\Rightarrow}\ I_{2,\rho}(u),\qquad t\to\infty.
\]
(B3) Suppose that
for some α ∈ (1, 2) and some ℓ slowly varying at ∞, and let c(t) be a positive function with lim_{t→∞} tℓ(c(t))(c(t))^{−α} = 1.
(B31) If lim_{t→∞} (c(t)h(t))²/(tv(t)) = 0, then relation (3.50) holds.
(B32) If ρ > −1/α and lim_{t→∞} (c(t)h(t))²/∫_0^t v(y) dy = ∞, then
\[
\frac{Y(ut)-\mu^{-1}\int_0^{ut}h(y)\,dy}{\mu^{-(\alpha+1)/\alpha}c(t)h(t)}\ \overset{f.d.}{\Rightarrow}\ I_{\alpha,\rho}(u),\qquad t\to\infty
\]
for some α ∈ (0, 1) and some ℓ slowly varying at ∞, and that h is not identically zero.
(C1) If the assumptions of Theorem 3.3.10 hold (with the same α as above) and
\[
\lim_{t\to\infty}\frac{v(t)\,\mathbb{P}\{\xi>t\}}{(h(t))^2}=\infty, \tag{3.52}
\]
then
\[
\sqrt{\frac{\mathbb{P}\{\xi>t\}}{v(t)}}\,Y(ut)\ \overset{f.d.}{\Rightarrow}\ Z_{\alpha,\beta}(u),\qquad t\to\infty.
\]
\[
\lim_{t\to\infty}\frac{v(t)\,\mathbb{P}\{\xi>t\}}{(h(t))^2}=0. \tag{3.53}
\]
\[
\lim_{t\to\infty}\frac{v(t)\,\mathbb{P}\{\xi>t\}}{(h(t))^2}=b\in(0,\infty), \tag{3.54}
\]
then
\[
\sqrt{\frac{\mathbb{P}\{\xi>t\}}{v(t)}}\,Y(ut)\ \overset{f.d.}{\Rightarrow}\ Z_{\alpha,\beta}(u)+b^{-1/2}\int_{[0,u]}(u-y)^{(\beta-\alpha)/2}\,dW_\alpha^{\leftarrow}(y),\qquad t\to\infty.
\]
Here W_α^← under the integral sign is the same as in the definition of Z_{α,β} (Definition 3.3.8). In particular, the summands defining the limit process are dependent.
There is a simple situation where the weak convergence of finite-dimensional distributions obtained in Theorem 3.3.19 implies J_1-convergence on D[0, ∞).
Corollary 3.3.20 Let X(t) be almost surely nondecreasing with lim_{t→∞} X(t) ∈ (0, ∞] almost surely. Assume that the assumptions of part (C2) of Theorem 3.3.19 are in force. Then the limit relations of part (C2) of Theorem 3.3.19 hold in the sense of weak convergence in the J_1-topology on D[0, ∞).
As shown in the proof of Corollary 2.6 in [148], in the setting of Corollary 3.3.20 the limit process J_{α,ρ} is a.s. continuous. Now Corollary 3.3.20 follows from a modification of the aforementioned proof which uses Remark 2.1 in [267] instead of Theorem 3 in [42].
We close the section with two 'negative' results. According to Lemmas 3.3.26(b) and 3.3.28, the weak convergence of finite-dimensional distributions in Theorem 3.3.19 cannot be strengthened to weak convergence on D(0, ∞) whenever either J_{α,−α} or Z_{α,−α} arises in the limit. We arrive at the same conclusion when the limit process in Theorem 3.3.10 is a conditional white noise (equivalently, C(u, w) = 0 for u ≠ w) because no version of such a process belongs to D(0, ∞).
3.3.5 Applications
Unless the contrary is stated, the random variable η appearing in this section may be arbitrarily dependent on ξ, and (ξ_k, η_k), k ∈ N, denote i.i.d. copies of (ξ, η).
Theorem 3.3.21 given below is a specialization of Theorems 3.3.18 and 3.3.19 to Y(t) = Σ_{k≥0} 1_{{S_k ≤ t < S_k + η_{k+1}}}, t ≥ 0, the random process with immigration which corresponds to X(t) = 1_{{η > t}}. The result is stated explicitly because Theorem 5.1.3 in Section 5, which provides a collection of limit results for the number of empty boxes in the Bernoulli sieve, is just a reformulation of Theorem 3.3.21. On the other hand, Theorem 3.3.21 is interesting on its own because of numerous applications of the so defined Y (see 'Queues and branching processes' on p. 3 and Example 3.1.2(a)).
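The process specialized here counts, at time t, the immigrants that have arrived but whose lifetimes have not yet expired (an infinite-server-queue-type quantity). A direct sketch (names are ours):

```python
def active_count(arrival_life_pairs, t):
    """Y(t) = #{k : S_k <= t < S_k + eta_{k+1}}: arrivals by time t
    whose lifetimes are still running at t."""
    return sum(1 for s, eta in arrival_life_pairs if s <= t < s + eta)

# deterministic check: arrivals at 0, 1, 2, ..., each alive for 1.5 time units
pairs = [(k, 1.5) for k in range(10)]
```

With these deterministic pairs, at t = 5.2 only the arrivals at times 4 and 5 are still alive, so Y(5.2) = 2; feeding in simulated (S_k, η_{k+1}) pairs produces sample paths of the process in Theorem 3.3.21.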
Theorem 3.3.21 Suppose that
(D22) If
then
\[
\frac{\sum_{k\ge 0}\mathbb{1}_{\{S_k\le ut<S_k+\eta_{k+1}\}}-\mu^{-1}\int_0^{ut}\mathbb{P}\{\eta>y\}\,dy}{\mu^{-3/2}c(t)\,\mathbb{P}\{\eta>t\}}\ \overset{f.d.}{\Rightarrow}\ S_2(u);
\]
if
then, as t → ∞,
\[
\frac{\sum_{k\ge 0}\mathbb{1}_{\{S_k\le ut<S_k+\eta_{k+1}\}}-\mu^{-1}\int_0^{ut}\mathbb{P}\{\eta>y\}\,dy}{\mu^{-3/2}c(t)\,\mathbb{P}\{\eta>t\}}\ \overset{f.d.}{\Rightarrow}\ S_2(u)+b^{1/2}V_\beta(u).
\]
Proof Since h(t) = EX(t) = P{η > t} and v(t) = P{η > t}P{η ≤ t}, we infer
and this convergence is locally uniform in R²_+, as is the case for lim_{t→∞} P{η > (u∨w)t}/P{η > t} = (u∨w)^{−β} by Lemma 6.1.4(a). In particular, condition (3.37) holds with C(u, w) = (u∨w)^{−β}. Further, condition (3.39) holds because |1_{{η>t}} − P{η > t}| ≤ 1 a.s. Thus, all the standing assumptions of Theorem 3.3.18 hold for the particular case X(t) = 1_{{η>t}}.
Since lim_{t→∞} (h(t))²/v(t) = 0, part (D1) is a consequence of part (B11) of Theorem 3.3.18. Let now the assumptions of part (D2) be in force. The specialization of the condition lim_{t→∞} (c(t)h(t))²/(tv(t)) = 0 in part (B21) of Theorem 3.3.18 reads lim_{t→∞} t^{−1}(c(t))² P{η > t} = 0. Hence, part (D21) follows from part (B21) of Theorem 3.3.18. Analogously, part (D23) is an immediate consequence of part (B23) of Theorem 3.3.18. Further, use the regular variation of P{η > t} together with Lemma 6.1.4(c) to conclude that the condition lim_{t→∞} (c(t)h(t))²/∫_0^t v(y) dy = ∞ in part (B22) of Theorem 3.3.18 takes the form lim_{t→∞} t^{−1}(c(t))² P{η > t} = ∞. By Lemma 3.3.31, c(t) is regularly varying of index 1/2, which implies that β = 0. An appeal to part (B22) of Theorem 3.3.18 completes the proof of part (D22). A similar argument proves part (D3).
Passing to part (D4) we conclude that condition (3.53) reads lim_{t→∞} P{ξ > t}/P{η > t} = 0, which obviously holds when β ∈ [0, α) and holds by the assumption when β = α. Thus, part (D4) is a consequence of part (C2) of Theorem 3.3.19. □
Now we illustrate the main results of the chapter for some other particular instances of random processes with immigration. Here, our intention is to exhibit a variety of situations that can arise rather than to provide the most comprehensive treatment.
Example 3.3.1 Let X(t) = 1_{{η≤t}}. Since h(t) = P{η ≤ t} and v(t) = P{η ≤ t}P{η > t} ~ P{η > t}, we infer lim_{t→∞} t(h(t))²/∫_0^t v(y) dy = ∞. Further, if Eη < ∞, then v is dRi on [0, ∞) by parts (a) and (d) of Lemma 6.2.1 because it is nonnegative, bounded, a.e. continuous and dominated by the nonincreasing and integrable function P{η > t}. If Eη = ∞, i.e., lim_{t→∞} ∫_0^t v(y) dy = ∞, v is equivalent to the monotone function u(t) = P{η > t}. If σ² < ∞ then, according to part (B12) of Theorem 3.3.18,
\[
\frac{\sum_{k\ge 0}\mathbb{1}_{\{S_k+\eta_{k+1}\le ut\}}-\mu^{-1}\int_0^{ut}\mathbb{P}\{\eta\le y\}\,dy}{\sqrt{\sigma^2\mu^{-3}t}}\ \overset{f.d.}{\Rightarrow}\ S_2(u)
\]
where θ is a normally distributed random variable with mean zero and variance 1/2, independent of a Brownian motion S_2. The process Θ and ξ may be arbitrarily dependent. Put X(t) = (t+1)^{β/2} Θ(t) for β ∈ (−1, 0). Then EX(t) = 0 and f(u, w) = EX(u)X(w) = 2^{-1}(u+1)^{β/2}(w+1)^{β/2} e^{−|u−w|}, from which we conclude that f is fictitious regularly varying in R²_+ of index β. By stationarity, for each t > 0, Θ(t) has the same distribution as θ. Hence,
the limit process being a centered Gaussian process with independent values (white noise).
Example 3.3.4 Let X(t) = S_2((t+1)^{−α}), P{ξ > t} ~ t^{−α}, and assume that X and ξ are independent. Then f(u, w) = EX(u)X(w) is uniformly regularly varying of index −α in strips in R²_+ with limit function C(u, w) = (u∨w)^{−α}. Relation (3.42) follows from
for all y > 0. Thus, Theorem 3.3.10 (see also Remark 3.3.11), in which we take u(t) ≡ 1, applies and yields Σ_{k≥0} X_{k+1}(ut − S_k) 1_{{S_k≤ut}} ⇒^{f.d.} Z_{α,−α}(u).
Throughout the section, the phrase 'a process R is well defined' means that the random variable R(u) is a.s. finite for any fixed u > 0.
Processes V_β (See Definition 3.3.4)
Lemma 3.3.22 Under the assumptions of Theorem 3.3.9 the process V_β is well defined.
Proof If f(u, w) is fictitious regularly varying in R²_+, then V_β is a Gaussian process with independent values.
Suppose now that f(u, w) is uniformly regularly varying in strips in R²_+. Then relation (3.37) (with f replacing r) ensures continuity of the function u ↦ C(u, u+w) on (0, ∞) for each w > 0 (an accurate proof of a similar fact is given on pp. 2–3 in [266]). From the Cauchy–Schwarz inequality, we deduce that
whence
Consequently,
\[
\int_0^u C(u-y,\,w-y)\,dy<\infty,\qquad 0<u\le w,
\]
because β > −1. Since (u, w) ↦ C(u, w) is positive semidefinite, so is (u, w) ↦ ∫_0^u C(u−y, w−y) dy, 0 < u ≤ w. Hence the latter function is the covariance function of some Gaussian process, which proves the existence of V_β. □
u
Fractionally Integrated ˛-Stable Lévy Process I˛; (See Definition 3.3.5)
Lemma 3.3.23 Whenever > 1=˛, the process I˛; is well defined.
for u ≥ 0. Thus, in both cases the integrals defining I_{α,ρ} exist in the a.s. sense. □
Further, we provide a result on sample path properties of I_{α,ρ}.
Lemma 3.3.24
(a) If either ρ > 0 or α = 2, then I_{α,ρ} has a.s. continuous paths.
(b) If 1 < α < 2 and ρ ∈ (−1/α, 0), then every version I of I_{α,ρ} is unbounded on every interval of positive length, that is, there is an event Ω_0 of probability 1 such that sup_{a<t<b} |I(t)| = ∞ for all 0 ≤ a < b on Ω_0.
Proof for (a) Assume first that ρ > 0 and α ∈ (1, 2]. With ε > 0, write for any u ≥ 0 (deterministic or random)
\[
\rho^{-1}\big|I_{\alpha,\rho}(u+\varepsilon)-I_{\alpha,\rho}(u)\big|\le\int_0^u\big|(u+\varepsilon-y)^{\rho-1}-(u-y)^{\rho-1}\big|\,\big|S_\alpha(y)\big|\,dy+\int_u^{u+\varepsilon}(u+\varepsilon-y)^{\rho-1}\big|S_\alpha(y)\big|\,dy.
\]
Then
\[
\int_0^u\big(S_2(u,\omega)-S_2(y,\omega)\big)(u-y)^{\rho-1}\,dy=\int_0^u K(u,y)\,\varphi(y)\,dy.
\]
This implies that each of the two summands in (3.63) tends to 0 as t → u, where for the first summand one additionally needs the dominated convergence theorem. Starting with 0 < u < t < T and repeating the argument proves that I_{2,ρ} is a.s. continuous on (0, T). Since T > 0 was arbitrary, we infer that I_{2,ρ} is a.s. continuous on (0, ∞).
For the proof of (b), we refer the reader to Proposition 2.13(b) in [147]. □
Here are some other properties of I_{α,ρ}.
(P1) I_{α,ρ} is self-similar with Hurst index 1/α + ρ, i.e., for every c > 0,
\[
(I_{\alpha,\rho}(cu))_{u>0}\ \overset{f.d.}{=}\ \big(c^{1/\alpha+\rho}I_{\alpha,\rho}(u)\big)_{u>0},
\]
where =^{f.d.} denotes equality of finite-dimensional distributions. This follows from Theorem 3.3.12 and the fact that the functions g(t)h(t) are regularly varying of index 1/α + ρ (see Lemma 3.3.31; the definition of g can be found in the paragraph following formula (3.44)).
(P2) For fixed u > 0,
\[
I_{\alpha,\rho}(u)\ \overset{d}{=}\ \Big(\frac{u^{\alpha\rho+1}}{\alpha\rho+1}\Big)^{1/\alpha}S_\alpha(1),
\]
which shows that I2; .u/ has a normal distribution and I˛; .u/ for ˛ 2 .1; 2/
has a spectrally negative ˛-stable distribution.
Proof We only prove this for > 0. The proof for the case 2 .1=˛; 0/ can be
found in [147].
By self-similarity of I˛; it is sufficient to show that
Z 1 Z
S˛ .y/.1 y/ 1 dy D S˛ .1 y/dy D .˛ C 1/1=˛ S˛ .1/:
d
I˛; .1/ D
0 Œ0;1
R
The integral Œ0;1 S˛ .1 y/dy exists as a Riemann–Stieltjes integral and as such
can be approximated by
X
n
S˛ .1 k=n/..k=n/ ..k 1/=n/ /
kD1
X
n
D S˛ .1 k=n/ S˛ .1 .k C 1/=n/ .k=n/ DW In :
kD1
X
n
log E exp.izIn / D n1 log E exp.iz.k=n/ S˛ .1//; z 2 R:
kD1
Further, with $u$ and $v$ as above we see that $I_{\alpha,\rho}(v)=\rho\int_0^v S_\alpha(y)(v-y)^{\rho-1}\,\mathrm{d}y$ and
$$I_{\alpha,\rho}(u)-I_{\alpha,\rho}(v) = \rho\int_0^v S_\alpha(y)\big((u-y)^{\rho-1}-(v-y)^{\rho-1}\big)\,\mathrm{d}y + S_\alpha(v)(u-v)^\rho + \rho\int_0^{u-v}\big(S_\alpha(y+v)-S_\alpha(v)\big)(u-v-y)^{\rho-1}\,\mathrm{d}y.$$
These two random variables are dependent because, while $I_{\alpha,\rho}(v)$ and the last summand on the right-hand side are independent, $I_{\alpha,\rho}(v)$ and the sum of the first two terms on the right-hand side are strongly dependent. In the case $\alpha\in(1,2)$ there is a short alternative proof. If the increments were independent, the process $I_{\alpha,\rho}$, which is a.s. continuous by Lemma 3.3.24(a), would be Gaussian (see Theorem 5 on p. 189 in [93]). However, this is not the case. □
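In the Brownian case $\alpha=2$ the identity in (P2) is easy to probe numerically. The sketch below is an illustration, not part of the book's argument; it assumes the normalization in which $S_2$ is a standard Brownian motion (consistent with $\mathrm{Var}\,I_{2,1}(1)=1/3$), approximates $I_{2,1}(1)=\int_0^1 S_2(y)\,\mathrm{d}y$ by a Riemann sum, and compares the sample variance with $1/(\alpha\rho+1)=1/3$:

```python
import numpy as np

rng = np.random.default_rng(0)
npaths, nsteps = 10000, 400
dt = 1.0 / nsteps

# Standard Brownian paths B(k*dt) on [0, 1].
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(npaths, nsteps)), axis=1)

# Riemann sum for I_{2,1}(1) = int_0^1 B(y) dy  (alpha = 2, rho = 1).
I = B.sum(axis=1) * dt

var_I = float(np.var(I))
print(var_I)  # close to 1/(alpha*rho + 1) = 1/3
```

The same experiment with other $\rho>0$ reproduces the variance $1/(2\rho+1)$ predicted by (P2).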
Fractionally Integrated Inverse Stable Subordinators (See Definition 3.3.7)

Lemma 3.3.25 The process $J_{\alpha,\beta}$ is well defined for all $\beta\in\mathbb{R}$.

Proof When $\beta>0$, this follows trivially from $J_{\alpha,\beta}(u)\le u^\beta\,W_\alpha^{\leftarrow}(u)$ a.s. for $u\ge 0$. Recall that $W_\alpha$ is an $\alpha$-stable subordinator and $W_\alpha^{\leftarrow}$ its inverse. When $\beta<0$, the claim of the lemma is a consequence of
$$J_{\alpha,\beta}(u) = \int_{[0,\,u]}(u-y)^\beta\,\mathrm{d}W_\alpha^{\leftarrow}(y) = \int_{[0,\,W_\alpha(W_\alpha^{\leftarrow}(u)-)]}(u-y)^\beta\,\mathrm{d}W_\alpha^{\leftarrow}(y),$$
where the finiteness follows from $W_\alpha(W_\alpha^{\leftarrow}(u)-)<u$ a.s. for each fixed $u>0$. □
Integration by parts yields
$$J_{\alpha,\beta}(u) = \beta\int_0^u (u-y)^{\beta-1}\,W_\alpha^{\leftarrow}(y)\,\mathrm{d}y, \quad u>0,$$
when $\beta>0$, and
$$J_{\alpha,\beta}(u) = u^\beta\,W_\alpha^{\leftarrow}(u) + \beta\int_0^u (u-y)^{\beta-1}\big(W_\alpha^{\leftarrow}(u)-W_\alpha^{\leftarrow}(y)\big)\,\mathrm{d}y, \quad u>0,$$
when $-\alpha<\beta<0$. These representations show that $J_{\alpha,\beta}$ is nothing else but the Riemann–Liouville fractional integral (up to a multiplicative constant) of $W_\alpha^{\leftarrow}$ in the first case and the Marchaud fractional derivative of $W_\alpha^{\leftarrow}$ in the second (see p. 33 and p. 111 in [244]).
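As a quick standalone illustration of the Riemann–Liouville form (not taken from the book): for the deterministic test path $f(y)=y$ the fractional integral of order $\gamma$ satisfies $\frac{1}{\Gamma(\gamma)}\int_0^u (u-y)^{\gamma-1}f(y)\,\mathrm{d}y=\frac{\Gamma(2)}{\Gamma(2+\gamma)}u^{1+\gamma}$, which a plain midpoint rule reproduces despite the integrable singularity at $y=u$:

```python
import math

def rl_fractional_integral(f, u, gamma, n=200_000):
    """Midpoint-rule approximation of the Riemann-Liouville integral
    (1/Gamma(gamma)) * int_0^u (u-y)**(gamma-1) * f(y) dy, for 0 < gamma < 1."""
    h = u / n
    total = 0.0
    for k in range(n):
        y = (k + 0.5) * h
        total += (u - y) ** (gamma - 1.0) * f(y)
    return total * h / math.gamma(gamma)

# f(y) = y, order gamma = 0.5: exact value Gamma(2)/Gamma(2.5) * u**1.5.
approx = rl_fractional_integral(lambda y: y, u=1.0, gamma=0.5)
exact = math.gamma(2.0) / math.gamma(2.5)
print(approx, exact)
```

Replacing the test path by a simulated path of $W_\alpha^{\leftarrow}$ gives a numerical approximation of $J_{\alpha,\beta}$ for $\beta>0$.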
We proceed with sample path properties of $J_{\alpha,\beta}$.

Lemma 3.3.26
(a) Let $\alpha+\beta\in(0,1]$. Then $J_{\alpha,\beta}$ is a.s. (locally) Hölder continuous with arbitrary exponent $\gamma<\alpha+\beta$. Let $\alpha+\beta>1$. Then $J_{\alpha,\beta}$ is $\lfloor\alpha+\beta\rfloor$-times continuously differentiable on $[0,\infty)$ a.s.
(b) Let $\alpha+\beta\le 0$. Then every version $J$ of $J_{\alpha,\beta}$ is unbounded with positive probability on every interval of positive length, that is, there is an event $\Omega_0$ of positive probability such that $\sup_{a<t<b}J(t)=+\infty$ for all $0<a<b$ on $\Omega_0$.

Proof for (a) The proof of the Hölder continuity can be found in Theorem 2.7 of [143]. In the case $\alpha+\beta>1$ we use the statement for the case $\alpha+\beta\in(0,1]$ along with the equality
$$J_{\alpha,\beta}(u) = \beta\int_0^u J_{\alpha,\beta-1}(y)\,\mathrm{d}y, \quad u\ge 0,$$
which shows that $J_{\alpha,\beta}$ is a.s. continuously differentiable whenever $J_{\alpha,\beta-1}$ is a.s. continuous.
Proof for (b) We shall write $J$ for $J_{\alpha,\beta}$. Pick arbitrary positive $c<d$ and note that
$$\mathbb{P}\{[W_\alpha(c),W_\alpha(d)]\subseteq(a,b)\} = \mathbb{P}\{a<W_\alpha(c)<W_\alpha(d)<b\}>0$$
for some deterministic constant $r\in(0,1)$ and some $s:=s(\omega)\in[c,d]$. Fix any $\omega\in\Omega_0$. There exists $s_1:=s_1(\omega)$ such that
$$W_\alpha(s,\omega)-W_\alpha(y,\omega) \le (s-y)^{1/\alpha}(r/2)^{1/\beta}, \quad y\in[s_1,s],$$
whence
$$J(W_\alpha(s,\omega),\omega) = \int_0^s \big(W_\alpha(s,\omega)-W_\alpha(y,\omega)\big)^\beta\,\mathrm{d}y \ge \int_{s_1}^s \big(W_\alpha(s,\omega)-W_\alpha(y,\omega)\big)^\beta\,\mathrm{d}y \ge 2^{-1}r\int_{s_1}^s (s-y)^{\beta/\alpha}\,\mathrm{d}y = +\infty,$$
since $\beta/\alpha\le -1$. Since $u(\omega)\in[W_\alpha(c),W_\alpha(d)]$ for all $\omega\in\Omega_0$, we obtain (3.64). Note that the claim of part (b) then holds with $\Omega_0'=\Omega_0\cap\{[W_\alpha(c),W_\alpha(d)]\subseteq(a,b)\}$. The proof of Lemma 3.3.26 is complete. □
Here is a list of some other properties of $J_{\alpha,\beta}$.

(Q1) $J_{\alpha,\beta}$ is self-similar with Hurst index $\alpha+\beta$.

This follows from Theorem 3.3.13 and the fact that the functions $h(t)/\mathbb{P}\{\xi>t\}$ are regularly varying of index $\alpha+\beta$.
(Q2) Let $\beta\in\mathbb{R}$. The following distributional equality holds:
$$J_{\alpha,\beta}(1) \overset{\mathrm{d}}{=} \int_0^\infty e^{-cZ_\alpha(t)}\,\mathrm{d}t,$$
where $c:=(\alpha+\beta)/\alpha$, and $Z_\alpha:=(Z_\alpha(t))_{t\ge 0}$ is a drift-free killed subordinator with the unit killing rate and the Lévy measure
$$\nu_\alpha(\mathrm{d}t) = \frac{e^{-t/\alpha}}{(1-e^{-t/\alpha})^{\alpha+1}}\,\mathbb{1}_{(0,\infty)}(t)\,\mathrm{d}t.$$
In particular, while the distribution of $J_{\alpha,-\alpha}(u)$ is exponential with unit mean, the distribution of $J_{\alpha,\beta}(u)$ for $\alpha+\beta>0$ is uniquely determined by its moments
$$\mathbb{E}\big(J_{\alpha,\beta}(u)\big)^k = u^{k(\alpha+\beta)}\,\frac{k!}{(\Gamma(1-\alpha))^k}\prod_{j=1}^k \frac{\Gamma(\beta+1+(j-1)(\alpha+\beta))}{\Gamma(j(\alpha+\beta)+1)} \tag{3.65}$$
for all $r>0$. Then, according to Theorem 4.1 in [191], with $u$ fixed,
$$\big(u^{1/\alpha}-W_\alpha(ut)\big)^\alpha = u\,e^{-Z_\alpha(\tau(t))}, \quad 0\le t\le I,$$
for some killed subordinator $Z_\alpha:=(Z_\alpha(t))_{t\ge 0}=(Z_\alpha^{(u)}(t))_{t\ge 0}$, where
$$I := \int_0^\infty \exp(-Z_\alpha(t))\,\mathrm{d}t = u^{-1}\inf\{v\ge 0: W_\alpha(v)>u^{1/\alpha}\} = u^{-1}\,W_\alpha^{\leftarrow}(u^{1/\alpha}) \tag{3.66}$$
and $\tau(t):=\inf\{s\ge 0: \int_0^s \exp(-Z_\alpha(v))\,\mathrm{d}v\ge t\}$ for $0\le t\le I$ (except in one place, we suppress the dependence of $Z_\alpha$, $I$ and $\tau(t)$ on $u$ for notational simplicity). With this at hand,
$$J_{\alpha,\beta}(u^{1/\alpha}) = \int_0^\infty \big((u^{1/\alpha}-W_\alpha(t))^\alpha\big)^{\beta/\alpha}\,\mathbb{1}_{\{W_\alpha(t)\le u^{1/\alpha}\}}\,\mathrm{d}t = u^{\beta/\alpha}\int_0^{uI}\exp\big(-(\beta/\alpha)Z_\alpha(\tau(t/u))\big)\,\mathrm{d}t$$
$$= u^{1+\beta/\alpha}\int_0^{I}\exp\big(-(\beta/\alpha)Z_\alpha(\tau(t))\big)\,\mathrm{d}t = u^{1+\beta/\alpha}\int_0^\infty \exp\big(-(1+\beta/\alpha)Z_\alpha(t)\big)\,\mathrm{d}t.$$
for s 0.
We shall now check that $W_\alpha^{\leftarrow}(1)$ has the Mittag–Leffler distribution with parameter $\alpha$, i.e., the distribution that is uniquely determined by its moments
$$\mathbb{E}\big(W_\alpha^{\leftarrow}(1)\big)^n = \frac{n!}{(\Gamma(1-\alpha))^n\,\Gamma(1+n\alpha)}, \quad n\in\mathbb{N}$$
3.3 Limit Theorems with Scaling 135
(see (2.12) and the centered formula following it). Self-similarity of $W_\alpha$ with index $1/\alpha$ allows us to conclude that
$$\mathbb{P}\{W_\alpha^{\leftarrow}(1)\le t\} = \mathbb{P}\{W_\alpha(t)\ge 1\} = \mathbb{P}\{t^{1/\alpha}W_\alpha(1)\ge 1\} = \mathbb{P}\{(W_\alpha(1))^{-\alpha}\le t\}$$
for $t>0$, which shows that $W_\alpha^{\leftarrow}(1)\overset{\mathrm{d}}{=}(W_\alpha(1))^{-\alpha}$. Recall that the equality
$$\mathbb{E}X^{-\theta} = (\Gamma(\theta))^{-1}\int_0^\infty y^{\theta-1}\,\mathbb{E}e^{-yX}\,\mathrm{d}y$$
holds for positive random variables $X$ and $\theta>0$. Setting $X=W_\alpha(1)$ and $\theta=p\alpha$ for $p>0$ we obtain
$$\mathbb{E}\big(W_\alpha^{\leftarrow}(1)\big)^p = \mathbb{E}\big(W_\alpha(1)\big)^{-p\alpha} = \frac{\Gamma(1+p)}{(\Gamma(1-\alpha))^p\,\Gamma(1+p\alpha)}, \quad p>0.$$
Since, by self-similarity, $I\overset{\mathrm{d}}{=}W_\alpha^{\leftarrow}(1)$, it follows that
$$\mathbb{E}I^n = \frac{n!}{(\Gamma(1-\alpha))^n\,\Gamma(1+n\alpha)} = \frac{n!}{\Phi_\alpha(1)\cdots\Phi_\alpha(n)}, \quad n\in\mathbb{N},$$
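The identity $W_\alpha^{\leftarrow}(1)\overset{\mathrm{d}}{=}(W_\alpha(1))^{-\alpha}$ makes the Mittag–Leffler mean easy to check by simulation. The sketch below is an illustration, not from the book; it assumes the normalization $\mathbb{E}e^{-sW_\alpha(1)}=\exp(-\Gamma(1-\alpha)s^\alpha)$ used for the moment formula above, samples a standard positive stable variable $S$ (with $\mathbb{E}e^{-sS}=e^{-s^\alpha}$) by Kanter's representation, and uses $W_\alpha^{\leftarrow}(1)\overset{\mathrm{d}}{=}\Gamma(1-\alpha)^{-1}S^{-\alpha}$:

```python
import math
import random

def positive_stable(alpha, rng):
    """Kanter's representation of S with E exp(-s*S) = exp(-s**alpha), 0 < alpha < 1."""
    u = rng.random()
    while u == 0.0:  # guard against the boundary point of Uniform(0,1)
        u = rng.random()
    w = rng.expovariate(1.0)
    a = (math.sin(alpha * math.pi * u) ** (alpha / (1.0 - alpha))
         * math.sin((1.0 - alpha) * math.pi * u)
         / math.sin(math.pi * u) ** (1.0 / (1.0 - alpha)))
    return (a / w) ** ((1.0 - alpha) / alpha)

alpha = 0.5
rng = random.Random(42)
n = 200_000
g = math.gamma(1.0 - alpha)
# W_alpha(1) = g**(1/alpha) * S, hence W_alpha^{<-}(1) =d S**(-alpha) / g.
mean = sum(positive_stable(alpha, rng) ** (-alpha) for _ in range(n)) / (n * g)
exact = 1.0 / (math.gamma(1.0 - alpha) * math.gamma(1.0 + alpha))  # moment formula with n = 1
print(mean, exact)  # for alpha = 1/2 both are close to 2/pi
```

For $\alpha=1/2$ the exact first moment is $1/(\Gamma(1/2)\Gamma(3/2))=2/\pi\approx 0.6366$, and the Monte Carlo mean matches it to within sampling error.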
where $R$ has an exponential distribution with unit mean. Assume now that $\alpha+\beta>0$, which is equivalent to $c>0$. Since $cZ_\alpha$ is a killed subordinator with the Laplace exponent $\Phi_\alpha(cs)$, we obtain
$$\mathbb{E}\big(J_{\alpha,\beta}(1)\big)^k = \frac{k!}{\Phi_\alpha(c)\cdots\Phi_\alpha(ck)}, \quad k\in\mathbb{N},$$
by another application of formula (2.10). This proves (3.65) with $u=1$. For other $u>0$ formula (3.65) follows by self-similarity. From the inequality
$$u^{-(\alpha+\beta)}\,J_{\alpha,\beta}(u) \overset{\mathrm{d}}{=} \int_0^\infty e^{-cZ_\alpha(t)}\,\mathrm{d}t \le R,$$
and the fact that $\mathbb{E}e^{aR}<\infty$ for $a\in(0,1)$, we conclude that the distribution of $J_{\alpha,\beta}(u)$ has some finite exponential moments, which entails that it is uniquely determined by its moments. □
(Q3) For $0<v\le u$,
$$\mathbb{E}\big[J_{\alpha,\beta}(v)J_{\alpha,\beta}(u)\big] = \frac{\Gamma(1+\beta)}{\Gamma(\alpha)(\Gamma(1-\alpha))^2\,\Gamma(1+\alpha+\beta)}\int_0^v (v-y)^\beta(u-y)^\beta\,y^{\alpha-1}\big((v-y)^\alpha+(u-y)^\alpha\big)\,\mathrm{d}y.$$
In particular, for $0<v<u$,
$$0 = \mathbb{E}J_{\alpha,-\alpha}(v)\,\mathbb{E}\big(J_{\alpha,-\alpha}(u)-J_{\alpha,-\alpha}(v)\big) \ne \mathbb{E}\big[J_{\alpha,-\alpha}(v)\big(J_{\alpha,-\alpha}(u)-J_{\alpha,-\alpha}(v)\big)\big],$$
where the first equality follows from $\mathbb{E}J_{\alpha,-\alpha}(v)=1$ (see (Q2)). We have proved that the increments of $J_{\alpha,\beta}$ are not independent.
When $\alpha+\beta\ne 1$, the increments of $J_{\alpha,\beta}$ are not stationary because, by (Q2), $\mathbb{E}J_{\alpha,\beta}(u)$ is a function of $u^{\alpha+\beta}$ rather than $u$. When $\alpha+\beta=1$, one can show with the help of (Q3) that, with $0<v<u$, $\mathbb{E}\big(J_{\alpha,\beta}(u)-J_{\alpha,\beta}(v)\big)^2$ is not a function of $u-v$. □
Processes $Z_{\alpha,\beta}$ (See Definition 3.3.8)

Lemma 3.3.27 Let $\alpha\in(0,1)$, $\beta\in\mathbb{R}$, and let $C$ denote the limit function for $f(u,w)=\mathrm{Cov}(X(u),X(w))$, uniformly regularly varying in strips in $\mathbb{R}^2_+$ of index $\beta$. Then the process $Z_{\alpha,\beta}$ is well defined.

Proof Use Lemma 3.3.25 in combination with (3.61) to infer
$$\Pi(s,t) := \int_{[0,\,s]} C(s-y,\,t-y)\,\mathrm{d}W_\alpha^{\leftarrow}(y) < \infty, \quad 0<s\le t. \tag{3.69}$$
In order to prove that the process $Z_{\alpha,\beta}$ is well defined, we shall show that the function $\Pi(s,t)$ is positive semidefinite, i.e., for any $m\in\mathbb{N}$, any $\lambda_1,\dots,\lambda_m\in\mathbb{R}$ and any $0<u_1<\dots<u_m<\infty$,
$$\sum_{j=1}^m \lambda_j^2\,\Pi(u_j,u_j) + 2\sum_{1\le r<l\le m}\lambda_r\lambda_l\,\Pi(u_r,u_l)$$
$$= \sum_{i=1}^{m-1}\int_{(u_{i-1},\,u_i]}\Big(\sum_{k=i}^m \lambda_k^2\,C(u_k-y,\,u_k-y) + 2\sum_{i\le r<l\le m}\lambda_r\lambda_l\,C(u_r-y,\,u_l-y)\Big)\,\mathrm{d}W_\alpha^{\leftarrow}(y)$$
$$\quad + \int_{(u_{m-1},\,u_m]}\lambda_m^2\,C(u_m-y,\,u_m-y)\,\mathrm{d}W_\alpha^{\leftarrow}(y) \ge 0 \quad\text{a.s.},$$
where $u_0:=0$. Since the second term is nonnegative a.s., it suffices to prove that so is the first. The function $(u,w)\mapsto C(u,w)$, $0<u\le w$, is positive semidefinite as a limit of positive semidefinite functions. Hence, for each $1\le i\le m-1$ and $y\in(u_{i-1},u_i)$,
$$\sum_{k=i}^m \lambda_k^2\,C(u_k-y,\,u_k-y) + 2\sum_{i\le r<l\le m}\lambda_r\lambda_l\,C(u_r-y,\,u_l-y) \ge 0.$$
Thus, the process $Z_{\alpha,\beta}$ does exist as a conditionally Gaussian process with covariance function $\Pi(s,t)$, $0<s\le t$. □
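The positive-semidefiniteness argument can be mimicked numerically. The toy check below is an illustration with hypothetical inputs, not from the book: it uses $C(u,w)=\min(u,w)$ as a positive semidefinite limit function and an arbitrary nondecreasing pure-jump path in place of $W_\alpha^{\leftarrow}$; the resulting matrix $(\Pi(u_i,u_j))$ must then have nonnegative eigenvalues.

```python
import numpy as np

def Pi(s, t, jumps_y, jumps_dW, C):
    """Discrete analogue of Pi(s,t) = int_{[0,s]} C(s-y, t-y) dW(y), s <= t,
    for a nondecreasing pure-jump path W with jumps dW at locations y."""
    mask = jumps_y <= s
    return float(np.sum(C(s - jumps_y[mask], t - jumps_y[mask]) * jumps_dW[mask]))

C = np.minimum  # toy limit function C(u, w) = min(u, w): positive semidefinite
rng = np.random.default_rng(1)
jumps_y = np.sort(rng.uniform(0.0, 1.0, size=50))  # jump locations
jumps_dW = rng.exponential(0.1, size=50)           # nonnegative jump sizes

u = np.linspace(0.2, 1.0, 9)
M = np.array([[Pi(min(s, t), max(s, t), jumps_y, jumps_dW, C) for t in u] for s in u])
eigmin = float(np.linalg.eigvalsh(M).min())
print(eigmin >= -1e-10)  # True: the matrix (Pi(u_i, u_j)) is positive semidefinite
```

Each jump contributes a rank-limited positive semidefinite kernel, so the sum is positive semidefinite, exactly as in the proof of Lemma 3.3.27.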
Lemma 3.3.28 Let $\alpha\in(0,1)$ and $\beta\le-\alpha$. Any version of the process $Z_{\alpha,\beta}$ has paths in the Skorokhod space $D(0,\infty)$ with probability strictly less than 1. If, further, $C(u,w)=0$ for all $u\ne w$, $u,w>0$, then any version has paths in $D(0,\infty)$ with probability 0.

Proof Let $Z$ be a version of $Z_{\alpha,\beta}$. Pick arbitrary $0<c<d$. From the proof of Lemma 3.3.26(b) we know that
$$\mathbb{E}\big[(Z(u))^2\,\big|\,W_\alpha\big] = \int_{[0,\,u]}(u-y)^\beta\,\mathrm{d}W_\alpha^{\leftarrow}(y) = +\infty \tag{3.70}$$
for appropriate (random) $u\in[W_\alpha(c),W_\alpha(d)]$. Note that the process $W_\alpha^{\leftarrow}$ is measurable with respect to the $\sigma$-algebra generated by $W_\alpha$ and that, given $W_\alpha$, the process $Z$ is centered Gaussian. Hence, from Theorem 3.2 on p. 63 in [1] (applied to $(Z(t))_{t\in[W_\alpha(c),\,W_\alpha(d)]}$ and $(-Z(t))_{t\in[W_\alpha(c),\,W_\alpha(d)]}$, both conditionally given $W_\alpha$), we conclude that (3.71) is equivalent to
$$\mathbb{E}\Big[\sup_{t\in[W_\alpha(c),\,W_\alpha(d)]}(Z(t))^2\,\Big|\,W_\alpha\Big] < \infty \quad\text{a.s.},$$
which cannot hold due to (3.70). Hence $Z$ has paths in $D(0,\infty)$ with probability less than 1.
Finally, suppose that $C(u,w)=0$ for all $u\ne w$, $u,w>0$. Then, given $W_\alpha$, the Gaussian process $Z$ has uncorrelated, hence independent, values. For any fixed $t>0$ and any decreasing sequence $(h_n)_{n\in\mathbb{N}}$ with $\lim_{n\to\infty}h_n=0$ we infer
$$\mathbb{P}\big\{Z \text{ is right-continuous at } t\,\big|\,W_\alpha\big\} \le \mathbb{P}\Big\{\limsup_{n\to\infty} Z(t+h_n)=Z(t)\,\Big|\,W_\alpha\Big\} = 0 \quad\text{a.s.}, \tag{3.72}$$
which proves that $Z$ has paths in the Skorokhod space with probability 0. To justify (3.72) observe that, given $W_\alpha$, the distribution of $Z(t)$ is Gaussian, hence continuous, while $\limsup_{n\to\infty}Z(t+h_n)$ is equal to a constant (possibly $\pm\infty$) a.s. by the Kolmogorov zero–one law, which is applicable because $Z(t+h_1)$, $Z(t+h_2),\dots$ are (conditionally) independent. The proof of Lemma 3.3.28 is complete. □
$$(1+\beta)^{-1}\sum_{j=1}^m \alpha_j^2\,u_j^{1+\beta} + 2\sum_{1\le i<j\le m}\alpha_i\alpha_j\int_0^{u_i}C(u_i-y,\,u_j-y)\,\mathrm{d}y =: D(u_1,\dots,u_m). \tag{3.74}$$
Thus, in order to prove (3.73), one may use the martingale central limit theorem (Corollary 3.1 in [122]), whence it suffices to verify
$$\sum_{k\ge 0}\mathbb{E}_{\mathcal{F}_k}Z_{k+1,t}^2 \overset{\mathrm{P}}{\longrightarrow} D(u_1,\dots,u_m) \tag{3.75}$$
and
$$\sum_{k\ge 0}\mathbb{E}_{\mathcal{F}_k}\big[Z_{k+1,t}^2\,\mathbb{1}_{\{|Z_{k+1,t}|>y\}}\big] \overset{\mathrm{P}}{\longrightarrow} 0 \tag{3.76}$$
for all $y>0$. We shall use the elementary inequality
$$(a_1+\dots+a_m)^2\,\mathbb{1}_{\{|a_1+\dots+a_m|>y\}} \le (|a_1|+\dots+|a_m|)^2\,\mathbb{1}_{\{|a_1|+\dots+|a_m|>y\}} \le m^2(|a_1|\vee\dots\vee|a_m|)^2\,\mathbb{1}_{\{m(|a_1|\vee\dots\vee|a_m|)>y\}}$$
$$\le m^2\big(a_1^2\,\mathbb{1}_{\{|a_1|>y/m\}}+\dots+a_m^2\,\mathbb{1}_{\{|a_m|>y/m\}}\big). \tag{3.77}$$
In view of (3.77), relation (3.76) is a consequence of
$$\sum_{k\ge 0}\mathbb{1}_{\{S_k\le t\}}\,\mathbb{E}_{\mathcal{F}_k}\Big[\frac{(X_{k+1}(t-S_k))^2}{\mu^{-1}tv(t)}\,\mathbb{1}_{\{|X_{k+1}(t-S_k)|>y\sqrt{\mu^{-1}tv(t)}\}}\Big] \overset{\mathrm{P}}{\longrightarrow} 0 \tag{3.78}$$
for all $y>0$. We can take $t$ instead of $u_jt$ here because $v$ is regularly varying and $y>0$ is arbitrary.
for all $y>0$, where the definition of $v_y$ is given in (3.39). Recalling that $\mu<\infty$ and that $v$ is locally bounded, measurable, and regularly varying at infinity of index $\beta\in(-1,\infty)$, an application of Lemma 6.2.14 with $r_1=0$ and $r_2=1$ yields
$$\int_{[0,\,t]} v(t-x)\,\mathrm{d}U(x) \le \mathrm{const}\cdot tv(t).$$
Since, according to (3.39), $v_y(t)=o(v(t))$, (3.79) follows from the last centered formula in combination with Lemma 6.2.13(b).
Proof of (3.75) It can be checked that
$$\sum_{k\ge 0}\mathbb{E}_{\mathcal{F}_k}Z_{k+1,t}^2 = \frac{\sum_{j=1}^m \alpha_j^2\sum_{k\ge 0}\mathbb{1}_{\{S_k\le u_jt\}}\,v(u_jt-S_k)}{\mu^{-1}tv(t)} + \frac{2\sum_{1\le i<j\le m}\alpha_i\alpha_j\sum_{k\ge 0}\mathbb{1}_{\{S_k\le u_it\}}\,f(u_it-S_k,\,u_jt-S_k)}{\mu^{-1}tv(t)}$$
and
$$\frac{\sum_{k\ge 0}\mathbb{1}_{\{S_k\le u_it\}}\,f(u_it-S_k,\,u_jt-S_k)}{\mu^{-1}tv(t)} = \frac{\int_{[0,\,u_i]}f((u_i-y)t,\,(u_j-y)t)\,\mathrm{d}_y\nu(ty)}{\mu^{-1}tv(t)} \overset{\mathrm{P}}{\longrightarrow} \int_0^{u_i}C(u_i-y,\,u_j-y)\,\mathrm{d}y. \tag{3.81}$$
Also,
$$\lim_{t\to\infty}\frac{v((u_i-y)t)}{v(t)} = (u_i-y)^\beta$$
uniformly in $y\in[0,u_i-\varepsilon]$ by virtue of (3.37) (with $f$ replacing $r$). Two applications of Lemma 6.4.2(b) (with $R_t(y)=\nu(ty)/(\mu^{-1}t)$) yield
$$\int_{[0,\,u_i-\varepsilon]}\frac{v((u_i-y)t)}{v(t)}\,\frac{\mathrm{d}_y\nu(ty)}{\mu^{-1}t} \overset{\mathrm{P}}{\longrightarrow} \int_0^{u_i-\varepsilon}(u_i-y)^\beta\,\mathrm{d}y = \frac{u_i^{1+\beta}-\varepsilon^{1+\beta}}{1+\beta}$$
and
$$\int_{[0,\,u_i-\varepsilon]}\frac{f((u_i-y)t,\,(u_j-y)t)}{v(t)}\,\frac{\mathrm{d}_y\nu(ty)}{\mu^{-1}t} \overset{\mathrm{P}}{\longrightarrow} \int_0^{u_i-\varepsilon}C(u_i-y,\,u_j-y)\,\mathrm{d}y.$$
and
$$\lim_{\varepsilon\to 0+}\limsup_{t\to\infty}\,\mathbb{P}\bigg\{\frac{\big|\int_{(u_i-\varepsilon,\,u_i]}f(t(u_i-y),\,t(u_j-y))\,\mathrm{d}_y\nu(ty)\big|}{\mu^{-1}tv(t)}>\delta\bigg\} = 0 \tag{3.82}$$
and
$$\lim_{\varepsilon\to 0+}\limsup_{t\to\infty}\frac{\int_{(u_i-\varepsilon,\,u_i]}|f((u_i-y)t,\,(u_j-y)t)|\,\mathrm{d}_yU(ty)}{tv(t)} = 0, \tag{3.83}$$
where for the second integral we have changed the variable $s=u_jt$, invoked Lemma 6.2.14 with $r_1=(u_i-\varepsilon)u_j^{-1}$ and $r_2=u_iu_j^{-1}$, and then got back to the original variable $t$. These relations entail both (3.82) and (3.83). The proof of Theorem 3.3.9 is complete. □
Lemmas 3.3.29 and 3.3.30 are designed to facilitate the proofs of Theo-
rems 3.3.10 and 3.3.19.
Lemma 3.3.29 Suppose that condition (3.41) holds for some $\alpha\in(0,1)$ and that $f(u,w)=\mathrm{Cov}(X(u),X(w))$ is either uniformly regularly varying in strips in $\mathbb{R}^2_+$ or fictitious regularly varying in $\mathbb{R}^2_+$, in either of the cases of index $\beta$ for some $\beta\ge-\alpha$ and with limit function $C$. If
$$\lim_{\gamma\to 1-}\limsup_{t\to\infty}\frac{\mathbb{P}\{\xi>t\}}{v(t)}\int_{(\gamma z,\,z]}v(t(z-y))\,\mathrm{d}_yU(ty) = 0, \tag{3.84}$$
then
$$\gamma_1\sqrt{\frac{\mathbb{P}\{\xi>t\}}{v(t)}}\int_{[0,\,u_m]}\sum_{j=1}^m \lambda_j\,h(t(u_j-y))\,\mathbb{1}_{[0,u_j]}(y)\,\mathrm{d}_y\nu(ty)$$
$$+\;\gamma_2\,\frac{\mathbb{P}\{\xi>t\}}{v(t)}\int_{[0,\,u_m]}\Big(\sum_{j=1}^m \lambda_j^2\,v(t(u_j-y))\,\mathbb{1}_{[0,u_j]}(y) + 2\sum_{1\le r<l\le m}\lambda_r\lambda_l\,f(t(u_r-y),\,t(u_l-y))\,\mathbb{1}_{[0,u_r]}(y)\Big)\,\mathrm{d}_y\nu(ty)$$
$$\overset{\mathrm{d}}{\longrightarrow}\;\gamma_1 b^{1/2}\sum_{j=1}^m \lambda_j\int_{[0,\,u_j]}(u_j-y)^{(\beta-\alpha)/2}\,\mathrm{d}W_\alpha^{\leftarrow}(y)$$
$$+\;\gamma_2\Big(\sum_{j=1}^m \lambda_j^2\int_{[0,\,u_j]}(u_j-y)^\beta\,\mathrm{d}W_\alpha^{\leftarrow}(y) + 2\sum_{1\le r<l\le m}\lambda_r\lambda_l\int_{[0,\,u_r]}C(u_r-y,\,u_l-y)\,\mathrm{d}W_\alpha^{\leftarrow}(y)\Big) \tag{3.85}$$
for any $m\in\mathbb{N}$, any real $\lambda_1,\dots,\lambda_m$, any $0<u_1<\dots<u_m<\infty$, and any real $\gamma_1$ and $\gamma_2$, provided that whenever $\gamma_1>0$ condition (3.54) holds and
$$\lim_{\gamma\to 1-}\limsup_{t\to\infty}\sqrt{\frac{\mathbb{P}\{\xi>t\}}{v(t)}}\int_{(\gamma z,\,z]}h(t(z-y))\,\mathrm{d}_yU(ty) = 0. \tag{3.86}$$
$$\mathbb{E}\exp\Big(\mathrm{i}z\sum_{k\ge 0}Z_{k+1,t}\Big) \longrightarrow \mathbb{E}\exp\big(-Dz^2/2\big), \quad t\to\infty, \tag{3.90}$$
and
$$\mathbb{E}_{\mathcal{G}}\exp\Big(\mathrm{i}z\sum_{k\ge 0}Z_{k+1,t}\Big) - \mathbb{E}_{\mathcal{G}}\exp\Big(\mathrm{i}z\sum_{k\ge 0}\widehat{Z}_{k+1,t}\Big) \overset{\mathrm{P}}{\longrightarrow} 0, \quad t\to\infty, \tag{3.91}$$
where, given $\mathcal{G}$, $\widehat{Z}_{1,t},\widehat{Z}_{2,t},\dots$ are conditionally independent normal random variables with mean $0$ and variances $\mathbb{E}_{\mathcal{G}}Z_{k+1,t}^2$, i.e.,
$$\mathbb{E}_{\mathcal{G}}\exp\big(\mathrm{i}z\widehat{Z}_{k+1,t}\big) = \exp\big(-\mathbb{E}_{\mathcal{G}}\big(Z_{k+1,t}^2\big)\,z^2/2\big), \quad k\in\mathbb{N}_0.$$
Proof Apart from minor modifications, the following argument can be found in the proof of Theorem 4.12 in [167], in which weak convergence of the row sums in triangular arrays to a normal distribution is investigated. For any $\varepsilon>0$,
$$\sup_{k\ge 0}\mathbb{E}_{\mathcal{G}}Z_{k+1,t}^2 \le \varepsilon^2 + \sup_{k\ge 0}\mathbb{E}_{\mathcal{G}}\big[Z_{k+1,t}^2\,\mathbb{1}_{\{|Z_{k+1,t}|>\varepsilon\}}\big] \le \varepsilon^2 + \sum_{k\ge 0}\mathbb{E}_{\mathcal{G}}\big[Z_{k+1,t}^2\,\mathbb{1}_{\{|Z_{k+1,t}|>\varepsilon\}}\big].$$
The second summand on the right-hand side tends to $0$ in probability by assumption; since $\varepsilon>0$ is arbitrary,
$$\sup_{k\ge 0}\mathbb{E}_{\mathcal{G}}Z_{k+1,t}^2 \overset{\mathrm{P}}{\longrightarrow} 0. \tag{3.92}$$
In view of (3.87),
$$\mathbb{E}_{\mathcal{G}}\exp\Big(\mathrm{i}z\sum_{k\ge 0}\widehat{Z}_{k+1,t}\Big) = \exp\Big(-\Big(\sum_{k\ge 0}\mathbb{E}_{\mathcal{G}}Z_{k+1,t}^2\Big)z^2/2\Big) \overset{\mathrm{d}}{\longrightarrow} \exp\big(-Dz^2/2\big) \tag{3.93}$$
for each $z\in\mathbb{R}$. Next, we show that $\sum_{k\ge 0}Z_{k+1,t}$ has the same distributional limit as $\sum_{k\ge 0}\widehat{Z}_{k+1,t}$ as $t\to\infty$. To this end, for $z\in\mathbb{R}$, consider
$$\Big|\mathbb{E}_{\mathcal{G}}\exp\Big(\mathrm{i}z\sum_{k\ge 0}Z_{k+1,t}\Big)-\mathbb{E}_{\mathcal{G}}\exp\Big(\mathrm{i}z\sum_{k\ge 0}\widehat{Z}_{k+1,t}\Big)\Big| = \Big|\prod_{k\ge 0}\mathbb{E}_{\mathcal{G}}\exp\big(\mathrm{i}zZ_{k+1,t}\big)-\prod_{k\ge 0}\mathbb{E}_{\mathcal{G}}\exp\big(\mathrm{i}z\widehat{Z}_{k+1,t}\big)\Big|$$
$$\le \sum_{k\ge 0}\big|\mathbb{E}_{\mathcal{G}}\exp\big(\mathrm{i}zZ_{k+1,t}\big)-\mathbb{E}_{\mathcal{G}}\exp\big(\mathrm{i}z\widehat{Z}_{k+1,t}\big)\big|$$
$$\le \sum_{k\ge 0}\big|\mathbb{E}_{\mathcal{G}}\exp\big(\mathrm{i}zZ_{k+1,t}\big)-1+2^{-1}z^2\,\mathbb{E}_{\mathcal{G}}Z_{k+1,t}^2\big| + \sum_{k\ge 0}\big|\mathbb{E}_{\mathcal{G}}\exp\big(\mathrm{i}z\widehat{Z}_{k+1,t}\big)-1+2^{-1}z^2\,\mathbb{E}_{\mathcal{G}}\widehat{Z}_{k+1,t}^2\big|$$
$$\le z^2\sum_{k\ge 0}\mathbb{E}_{\mathcal{G}}\big[Z_{k+1,t}^2\big(1\wedge 6^{-1}|zZ_{k+1,t}|\big)\big] + z^2\sum_{k\ge 0}\mathbb{E}_{\mathcal{G}}\big[\widehat{Z}_{k+1,t}^2\big(1\wedge 6^{-1}|z\widehat{Z}_{k+1,t}|\big)\big],$$
where, to arrive at the last line, we have utilized $|\mathbb{E}_{\mathcal{G}}(\cdot)|\le\mathbb{E}_{\mathcal{G}}(|\cdot|)$ and the inequality
$$\big|e^{\mathrm{i}x}-1-\mathrm{i}x+2^{-1}x^2\big| \le x^2\big(1\wedge 6^{-1}|x|\big), \quad x\in\mathbb{R},$$
which can be found, for instance, in Lemma 4.14 of [167]. For any $\varepsilon\in(0,1)$ and $z\ne 0$,
$$\sum_{k\ge 0}\mathbb{E}_{\mathcal{G}}\big[Z_{k+1,t}^2\big(1\wedge 6^{-1}|zZ_{k+1,t}|\big)\big] \le \varepsilon\sum_{k\ge 0}\mathbb{E}_{\mathcal{G}}Z_{k+1,t}^2 + \sum_{k\ge 0}\mathbb{E}_{\mathcal{G}}\big[Z_{k+1,t}^2\,\mathbb{1}_{\{|Z_{k+1,t}|>6\varepsilon/|z|\}}\big].$$
Further,
$$\sum_{k\ge 0}\mathbb{E}_{\mathcal{G}}\big[\widehat{Z}_{k+1,t}^2\big(1\wedge 6^{-1}|z\widehat{Z}_{k+1,t}|\big)\big] \le 6^{-1}|z|\sum_{k\ge 0}\mathbb{E}_{\mathcal{G}}|\widehat{Z}_{k+1,t}|^3 = \frac{\sqrt{2}\,|z|}{3\sqrt{\pi}}\sum_{k\ge 0}\big(\mathbb{E}_{\mathcal{G}}Z_{k+1,t}^2\big)^{3/2} \le \frac{\sqrt{2}\,|z|}{3\sqrt{\pi}}\Big(\sup_{k\ge 0}\mathbb{E}_{\mathcal{G}}Z_{k+1,t}^2\Big)^{1/2}\sum_{k\ge 0}\mathbb{E}_{\mathcal{G}}Z_{k+1,t}^2.$$
Thus, we have already proved (3.91), which together with (3.93) implies (3.89). Relation (3.90) follows from (3.89) by Lebesgue's dominated convergence theorem. The proof of Lemma 3.3.30 is complete. □
In what follows, $\mathcal{F}$ denotes the $\sigma$-algebra generated by $(S_n)_{n\in\mathbb{N}_0}$.

Proof of Theorem 3.3.10 As in the proof of Theorem 3.3.9 we can and do assume that $X$ is centered. Put $r(t):=v(t)/\mathbb{P}\{\xi>t\}$. The process $Z_{\alpha,\beta}$ is well defined by Lemma 3.3.27. In view of the Cramér–Wold device (see p. 232) it suffices to check that
$$\frac{1}{\sqrt{r(t)}}\sum_{j=1}^m \lambda_j\,Y(u_jt) \overset{\mathrm{d}}{\longrightarrow} \sum_{j=1}^m \lambda_j\,Z_{\alpha,\beta}(u_j) \tag{3.94}$$
for any $m\in\mathbb{N}$, any real $\lambda_1,\dots,\lambda_m$ and any $0<u_1<\dots<u_m<\infty$, where
$$D_{\alpha,\beta}(u_1,\dots,u_m) := \sum_{j=1}^m \lambda_j^2\int_{[0,\,u_j]}(u_j-y)^\beta\,\mathrm{d}W_\alpha^{\leftarrow}(y) + 2\sum_{1\le i<j\le m}\lambda_i\lambda_j\int_{[0,\,u_i]}C(u_i-y,\,u_j-y)\,\mathrm{d}W_\alpha^{\leftarrow}(y). \tag{3.95}$$
Equivalently,
$$\mathbb{E}\exp\Big(\mathrm{i}z\sum_{j=1}^m \lambda_j\,Z_{\alpha,\beta}(u_j)\Big) = \mathbb{E}\exp\big(-D_{\alpha,\beta}(u_1,\dots,u_m)\,z^2/2\big), \quad z\in\mathbb{R}.$$
where $Z_{k+1,t} := (r(t))^{-1/2}\sum_{j=1}^m \lambda_j\,X_{k+1}(u_jt-S_k)\,\mathbb{1}_{\{S_k\le u_jt\}}$, and
$$\sum_{k\ge 0}\mathbb{E}_{\mathcal{F}}\big[Z_{k+1,t}^2\,\mathbb{1}_{\{|Z_{k+1,t}|>y\}}\big] \overset{\mathrm{P}}{\longrightarrow} 0 \tag{3.97}$$
for all $y>0$. Since
$$\sum_{k\ge 0}\mathbb{E}_{\mathcal{F}}Z_{k+1,t}^2 = \frac{1}{r(t)}\int_{[0,\,u_m]}\Big(\sum_{j=1}^m \lambda_j^2\,v((u_j-y)t)\,\mathbb{1}_{[0,u_j]}(y) + 2\sum_{1\le i<j\le m}\lambda_i\lambda_j\,f((u_i-y)t,\,(u_j-y)t)\,\mathbb{1}_{[0,u_i]}(y)\Big)\,\mathrm{d}_y\nu(ty),$$
we further conclude that (3.96) follows from Lemma 3.3.29 with $\gamma_1=0$ (observe that conditions (3.54) and (3.86) are then not needed). In view of (3.77), (3.97) is a consequence of
$$\frac{1}{r(t)}\sum_{k\ge 0}\mathbb{1}_{\{S_k\le t\}}\,\mathbb{E}_{\mathcal{F}}\big[(X_{k+1}(t-S_k))^2\,\mathbb{1}_{\{|X_{k+1}(t-S_k)|>y\sqrt{r(t)}\}}\big] \overset{\mathrm{P}}{\longrightarrow} 0 \tag{3.98}$$
for all $y>0$. To prove (3.98) we assume, without loss of generality, that the function $r$ is nondecreasing, for in the case $\beta=-\alpha$ it is asymptotically equivalent to a nondecreasing function $u(t)$ by assumption, while in the case $\beta>-\alpha$ the existence of such a function is guaranteed by Lemma 6.1.4(b) because $r$ is then regularly varying of positive index. Using this monotonicity and recalling that we are assuming that $h\equiv 0$, whence $\widehat{v}_y(t)=\mathbb{E}\big[(X(t))^2\,\mathbb{1}_{\{|X(t)|>y\sqrt{r(t)}\}}\big]$, we conclude that it is sufficient to check that
$$\sum_{k\ge 0}\mathbb{E}\Big[\mathbb{1}_{\{S_k\le t\}}\,\mathbb{E}_{\mathcal{F}}\big[(X_{k+1}(t-S_k))^2\,\mathbb{1}_{\{|X_{k+1}(t-S_k)|>y\sqrt{r(t-S_k)}\}}\big]\Big] = \int_{[0,\,t]}\widehat{v}_y(t-x)\,\mathrm{d}U(x) = o(r(t))$$
for all $y>0$, by Markov's inequality. In view of (3.42) the latter is an immediate consequence of Lemma 6.2.16(b) with $f_1(t)=\widehat{v}_y(t)$, $f(t)=v(t)$, $q(t)=u(t)$ and $\gamma=\beta$. The proof of Theorem 3.3.10 is complete. □
We shall need the following lemma.

Lemma 3.3.31 The functions $g(t)$ appearing in (3.44) are regularly varying of index $1/2$ in cases (B1) and (B2), and of index $1/\alpha$ in case (B3).

Proof In case (B1) this is trivial. In cases (B2) and (B3) the claim follows from Lemma 6.1.3. □
Proof of Theorem 3.3.12 We shall only give a proof for the case where $h$ is eventually nondecreasing, so that $\rho\ge 0$. The proof in the complementary, more complicated case where $h$ is eventually nonincreasing can be found in [146].
To treat cases (B1), (B2), and (B3) simultaneously, put
$$Z_t(u) := \frac{Z(ut)-\mu^{-1}\int_0^{ut}h(y)\,\mathrm{d}y}{g(t)h(t)}, \quad t>0,\ u\ge 0,$$
for $\rho>0$, and note that $I_{\alpha,\rho}$ is well defined by Lemma 3.3.23.
Then, to ensure the convergence $Z_t(u)\overset{\mathrm{f.d.}}{\Longrightarrow}I_{\alpha,\rho}(u)$ it suffices to check that, for any $u>0$,
$$\frac{\int_{[0,\,ut]}\big(h(ut-y)-h^*(ut-y)\big)\,\mathrm{d}\nu(y)}{g(t)h(t)} \overset{\mathrm{P}}{\longrightarrow} 0 \tag{3.100}$$
and
$$\lim_{t\to\infty}\frac{\int_0^{ut}\big(h(y)-h^*(y)\big)\,\mathrm{d}y}{g(t)h(t)} = 0. \tag{3.101}$$
Further, for $ut\ge a$,
$$\bigg|\frac{\int_0^{ut}\big(h(y)-\widehat{h}(y)\big)\,\mathrm{d}y}{g(t)h(t)}\bigg| \le \frac{\int_0^{ut}\big|h(y)-\widehat{h}(y)\big|\,\mathrm{d}y}{g(t)h(t)} = \frac{\int_{[0,\,a]}\big|h(y)-\widehat{h}(y)\big|\,\mathrm{d}y}{g(t)h(t)} \longrightarrow 0. \tag{3.103}$$
In combination with (3.103) the latter proves (3.101). To check (3.104), write
$$\int_0^t\big(\widehat{h}(y)-h^*(y)\big)\,\mathrm{d}y = \mathbb{E}\int_{(t-\xi)^+}^{t}\widehat{h}(y)\,\mathrm{d}y = \mathbb{P}\{\xi>t\}\int_0^t \widehat{h}(y)\,\mathrm{d}y + \mathbb{E}\Big[\mathbb{1}_{\{\xi\le t\}}\int_{t-\xi}^t \widehat{h}(y)\,\mathrm{d}y\Big].$$
The first term on the right-hand side tends to $0$ because the regular variation of $\widehat{h}$ entails that of the integral by Lemma 6.1.4(c). The second term can be estimated as follows:
$$\frac{\mathbb{E}\big[\mathbb{1}_{\{\xi\le t\}}\int_{t-\xi}^t \widehat{h}(y)\,\mathrm{d}y\big]}{\widehat{h}(t)} \le \frac{\mathbb{E}\big[\xi\,\widehat{h}(t)\,\mathbb{1}_{\{\xi\le t\}}\big]}{\widehat{h}(t)} \le \mathbb{E}\xi = \mu < \infty.$$
Since $Z_t(0)=I_{\alpha,\rho}(0)=0$ a.s. we can and do assume that $u_1>0$. Using the fact that $\nu(0)=1$ a.s. and integrating by parts, we have, for $t>0$ and $u>0$,
$$Z_t(u) = \int_{[0,\,u]}\frac{h^*(t(u-y))}{h^*(t)}\,\mathrm{d}_y\Big(\frac{\nu(yt)-\mu^{-1}yt}{g(t)}\Big)$$
$$= \frac{h^*(ut)}{g(t)h^*(t)} + \int_{(0,\,u]}\frac{h^*(t(u-y))}{h^*(t)}\,\mathrm{d}_y\Big(\frac{\nu(yt)-\mu^{-1}yt}{g(t)}\Big)$$
$$= \int_{(0,\,u]}\frac{\nu(yt)-\mu^{-1}yt}{g(t)}\,\mathrm{d}_y\Big(-\frac{h^*(t(u-y))}{h^*(t)}\Big)$$
$$= \int_{(0,\,u]}\frac{\nu(yt)-\mu^{-1}yt}{g(t)}\,\theta_t(\mathrm{d}y).$$
Let $\rho>0$. By the regular variation of $h^*$, the finite measures $\theta_t$ converge weakly on $[0,u]$ to a finite measure $\theta$ which is defined by $\theta((a,b])=(u-a)^\rho-(u-b)^\rho$. Clearly, the limiting measure is absolutely continuous with density $x\mapsto\rho(u-x)^{\rho-1}$ on $(0,u]$. This in combination with (3.44) enables us to conclude that
$$\int_{(0,\,u]}\frac{\nu(yt)-\mu^{-1}yt}{g(t)}\,\theta_t(\mathrm{d}y) \overset{\mathrm{d}}{\longrightarrow} \rho\int_0^u (u-y)^{\rho-1}S_\alpha(y)\,\mathrm{d}y = I_{\alpha,\rho}(u)$$
$$+\;\frac{1}{g(t)}\sum_{k\ge 0}\big(u-t^{-1}S_k\big)^\beta\,\mathbb{1}_{\{S_k\le\varepsilon ut\}}$$
in the $J_1$-topology on $D(0,\infty)$, where $r(u)=0$ for all $u>0$. Throughout the rest of the proof we use arbitrary positive and finite $a<b$. Observe that
$$|I_{\varepsilon,1}(u,t)| \le \sup_{(1-\varepsilon)u\le y\le u}\Big|\frac{h(ty)}{h(t)}-y^\beta\Big|\,\frac{\nu(\varepsilon ut)}{g(t)}$$
and thereupon
$$\sup_{a\le u\le b}|I_{\varepsilon,1}(u,t)| \le \sup_{(1-\varepsilon)a\le y\le b}\Big|\frac{h(ty)}{h(t)}-y^\beta\Big|\,\frac{\nu(\varepsilon bt)}{g(t)}.$$
We have $\nu(\varepsilon bt)/g(t)\overset{\mathrm{d}}{\longrightarrow}W_\alpha^{\leftarrow}(\varepsilon b)$ as a consequence of the functional limit theorem for $(\nu(t))_{t\ge 0}$ (see (3.45)). This, combined with the uniform convergence theorem for regularly varying functions (Lemma 6.1.4(a)), implies that the last expression converges to zero in probability, thereby proving the first relation in (3.106).
Turning to the second relation in (3.106), we observe that
$$I_{\varepsilon,2}(u,t) = \int_{[0,\,\varepsilon u]}(u-y)^\beta\,\mathrm{d}_y\big(\nu(ty)/g(t)\big).$$
Since $\nu(t\cdot)/g(t)\Rightarrow W_\alpha^{\leftarrow}(\cdot)$ in the $J_1$-topology on $D[0,\infty)$, and the limit $W_\alpha^{\leftarrow}$ is a.s. continuous, an application of Lemma 6.4.2(c) proves the second relation in (3.106).
An appeal to Lemma 6.4.1 reveals that the proof of the theorem is complete if we can show that for any fixed $u>0$
$$\lim_{\varepsilon\to 1-}\int_{[0,\,\varepsilon u]}(u-y)^\beta\,\mathrm{d}W_\alpha^{\leftarrow}(y) = J_{\alpha,\beta}(u) = \int_{[0,\,u]}(u-y)^\beta\,\mathrm{d}W_\alpha^{\leftarrow}(y) \quad\text{a.s.} \tag{3.107}$$
and
$$\lim_{\varepsilon\to 1-}\limsup_{t\to\infty}\,\mathbb{P}\Big\{\frac{1}{g(t)h(t)}\sum_{k\ge 0}h(ut-S_k)\,\mathbb{1}_{\{\varepsilon ut<S_k\le ut\}}>\delta\Big\} = 0. \tag{3.108}$$
Recall that
$$t^{-1}\big(t-S_{\nu(t)-1}\big) \overset{\mathrm{d}}{\longrightarrow} \theta_\alpha.$$
This entails
$$\lim_{\varepsilon\to 1-}\limsup_{t\to\infty}\,\mathbb{P}\{\nu(ut)-\nu(\varepsilon ut)>0\} = \lim_{\varepsilon\to 1-}\mathbb{P}\{\theta_\alpha<1-\varepsilon\} = 0,$$
where $t^{(u)}:=x(t,u)$, $u\in[0,1]$ (see (3.48) for the definition of $x(t,u)$).
Proof We first treat the principal part of the integral, namely, we check that
$$\lim_{t\to\infty}\frac{\int_{t^{(b)}}^t h(y)\,h(y+t^{(b)}-t^{(a)})\,\mathrm{d}y}{m(t)} = 1-b, \tag{3.109}$$
where the notation $m(t)=\int_0^t h^2(y)\,\mathrm{d}y$ has to be recalled. Since $h$ is regularly varying at $\infty$ of index $-1/2$ (see (3.47)), we conclude that $m$ is slowly varying at $\infty$ and
$$\lim_{t\to\infty}\frac{t(h(t))^2}{m(t)} = 0. \tag{3.110}$$
We shall frequently use that $\lim_{t\to\infty}t^{(b)}/t^{(a)}=\infty$, which is a consequence of the slow variation and monotonicity of $m$. By monotonicity of $h$,
$$m(t+t^{(b)}-t^{(a)}) - m(2t^{(b)}-t^{(a)}) \le \int_{t^{(b)}}^t h(y)\,h(y+t^{(b)}-t^{(a)})\,\mathrm{d}y \le m(t)-m(t^{(b)}),$$
which entails (3.109) in view of (3.48) and the slow variation of $m$. It remains to show that
$$\lim_{t\to\infty}\frac{\int_0^{t^{(b)}}h(y)\,h(y+t^{(b)}-t^{(a)})\,\mathrm{d}y}{m(t)} = 0 \tag{3.112}$$
and
$$\lim_{t\to\infty}\frac{\int_t^{t+t^{(a)}}h(y)\,h(y+t^{(b)}-t^{(a)})\,\mathrm{d}y}{m(t)} = 0. \tag{3.113}$$
As for (3.112), we have, using monotonicity of $h$ and (3.111),
$$\int_0^{t^{(b)}}h(y)\,h(y+t^{(b)}-t^{(a)})\,\mathrm{d}y \le h(t^{(b)}-t^{(a)})\int_0^{t^{(b)}}h(y)\,\mathrm{d}y = o\big(m(t^{(b)})\big) = o\big(m(t)\big).$$
$$\sum_{i=1}^2 \alpha_i\,\frac{\sum_{k\ge 0}h(t+t^{(u_i)}-S_k)\,\mathbb{1}_{\{S_k\le t+t^{(u_i)}\}} - \mu^{-1}\int_0^{t+t^{(u_i)}}h(y)\,\mathrm{d}y}{\sqrt{\sigma^2\mu^{-3}\,m(t)}} \overset{\mathrm{d}}{\longrightarrow} \alpha_1 X(u_1)+\alpha_2 X(u_2) \tag{3.114}$$
for any real $\alpha_1,\alpha_2$ and any $0\le u_1<u_2\le 1$. Observe that the random variable on the right-hand side of (3.114) has a normal distribution with mean zero and variance $\alpha_1^2+\alpha_2^2+2\alpha_1\alpha_2(1-u_2)$.
Integrating by parts we see that the numerator of the left-hand side of (3.114) equals
$$\sum_{i=1}^2 \alpha_i\int_{[0,\,t+t^{(u_i)}]}h(t+t^{(u_i)}-y)\,\mathrm{d}\big(\nu(y)-\mu^{-1}y\big)$$
$$= \sum_{i=1}^2 \alpha_i\Big(h(t+t^{(u_i)})\big(\nu(t+t^{(u_i)})-\mu^{-1}(t+t^{(u_i)})\big) + \int_{[0,\,t+t^{(u_i)}]}\big(\nu(t+t^{(u_i)}-y)-\mu^{-1}(t+t^{(u_i)}-y)\big)\,\mathrm{d}(-h(y))\Big),$$
where $\nu(t):=\#\{k\in\mathbb{N}_0: S_k\le t\}$ for $t\ge 0$. Since $(\nu(t)-\mu^{-1}t)/\sqrt{\sigma^2\mu^{-3}t}$ converges in distribution⁵ to the standard normal distribution, we infer
$$\sum_{i=1}^2 \alpha_i\,\frac{\sqrt{t}\,h(t+t^{(u_i)})}{\sqrt{m(t)}}\cdot\frac{\nu(t+t^{(u_i)})-\mu^{-1}(t+t^{(u_i)})}{\sqrt{t}} \overset{\mathrm{P}}{\longrightarrow} 0.$$
Reversing the time at the point $t+t^{(u_2)}$ by means of (3.6), we conclude that the numerator of the left-hand side of (3.115) has the same distribution as
$$\alpha_1\int_{[0,\,t+t^{(u_1)}]}\big(\nu(y+t^{(u_2)}-t^{(u_1)})-\nu(t^{(u_2)}-t^{(u_1)})-\mu^{-1}y\big)\,\mathrm{d}(-h(y))$$
$$+\;\alpha_2\int_{[0,\,t+t^{(u_2)}]}\big(\nu(y)-\mu^{-1}y\big)\,\mathrm{d}(-h(y)) =: \Theta_1+\Theta_2+R(t),$$
where
$$\Theta_1 := \int_{[0,\,t+t^{(u_1)}]}\big(\nu(y+t^{(u_2)}-t^{(u_1)})-\nu(t^{(u_2)}-t^{(u_1)})-\mu^{-1}y\big)\,\mathrm{d}_y\Big(-\sum_{k=1}^2 \alpha_k\,h(y+t^{(u_k)}-t^{(u_1)})\Big),$$
$$\Theta_2 := \alpha_2\int_{[0,\,t^{(u_2)}-t^{(u_1)}]}\big(\nu(y)-\mu^{-1}y\big)\,\mathrm{d}(-h(y))$$
and
$$R(t) := \alpha_2\big(\nu(t^{(u_2)}-t^{(u_1)})-\mu^{-1}(t^{(u_2)}-t^{(u_1)})\big)\big(h(t^{(u_2)}-t^{(u_1)})-h(t+t^{(u_2)})\big).$$

⁵ This follows from the distributional convergence of $(\nu^*(t)-\mu^{-1}t)/\sqrt{\sigma^2\mu^{-3}t}$ to the standard normal distribution (this is a consequence of part (B1) of (3.44)), the representation $\nu(t)=\nu^*(t-S_0)\mathbb{1}_{\{S_0\le t\}}$ and the distributional subadditivity of $\nu^*(t)$ (see (6.2)).
By the already mentioned central limit theorem for $\nu(t)$ and (3.110),
$$\frac{R(t)}{\sqrt{m(t)}} \overset{\mathrm{P}}{\longrightarrow} 0.$$
Hence, it remains to prove that
$$\frac{\Theta_1+\Theta_2}{\sqrt{\sigma^2\mu^{-3}\,m(t)}} \overset{\mathrm{d}}{\longrightarrow} \alpha_1 X(u_1)+\alpha_2 X(u_2). \tag{3.116}$$
Observe that the process $(\nu^{(1)}(s+r)-\nu^{(1)}(r))_{s\ge 0}$ is a copy of $(\nu(s))_{s\ge 0}$ and, furthermore, $(\nu(y))_{0\le y\le r}$ and $(\nu^{(1)}(y+r)-\nu^{(1)}(r))_{y\ge 0}$ are independent.
Let us check that, for $y\ge 0$, inequality (3.117) holds, where $c:=2\mathbb{E}\nu(S_0)+\mathbb{E}\nu(y_0)$ for $y_0$ large enough. Note that $c<\infty$ because $\mathbb{E}\xi^2<\infty$ entails both $\mathbb{E}\nu(S_0)<\infty$ and $\mathbb{E}\nu(y)\le\mu^{-1}y+\mathrm{const}$ for all $y\ge 0$ (Lorden's inequality, see (6.6) and (6.7)). Passing to the proof of (3.117) we obtain
$$\nu(y+r)-\nu(r) = \sum_{k\ge 0}\mathbb{1}_{\{S_{\nu(r)+k}-S_{\nu(r)}\le y-(S_{\nu(r)}-r)\}} \le \sum_{k\ge 0}\mathbb{1}_{\{S_{\nu(r)+k}-S_{\nu(r)}\le y\}},$$
where $\eta_1:=S_{\nu(r)}-r$. Note that $(\nu^{(1)}(t))_{t\ge 0}$ is a copy of $(\nu(t))_{t\ge 0}$ independent of both $\eta_1$ and $S_{0,1}$. The last two random variables are independent copies of $S_0$. Further, the inequality $\mathbb{E}S_0<\infty$ entails $\lim_{y\to\infty}\mathbb{E}\nu(y)\,\mathbb{P}\{S_0>y\}=0$ because $\mathbb{E}\nu(y)\sim\mu^{-1}y$ as $y\to\infty$ by the elementary renewal theorem (see (6.4)). With these at hand we have (3.117) for large enough $y_0$, having utilized twice the distributional subadditivity of $\nu^{(1)}(t)$ (see (6.2)) for the first term on the right-hand side.
Now (3.117) reveals that (3.116) is equivalent to
$$\frac{\Theta_1'+\Theta_2'}{\sqrt{\sigma^2\mu^{-3}\,m(t)}} \overset{\mathrm{d}}{\longrightarrow} \alpha_1 X(u_1)+\alpha_2 X(u_2),$$
where
$$\Theta_1' := \int_{[0,\,t+t^{(u_1)}]}\big(\nu^{(1)}(y+t^{(u_2)}-t^{(u_1)})-\nu^{(1)}(t^{(u_2)}-t^{(u_1)})-\mu^{-1}y\big)\,\mathrm{d}_y\Big(-\sum_{k=1}^2\alpha_k\,h(y+t^{(u_k)}-t^{(u_1)})\Big)$$
and
$$\Theta_2' := \alpha_2\int_{[0,\,t^{(u_2)}-t^{(u_1)}]}\big(\nu(y)-\mu^{-1}y\big)\,\mathrm{d}(-h(y))$$
are independent.
Step 3 (Reduction to Independent Gaussian Variables) Recall that $\nu^{(1)}(\cdot+t^{(u_2)}-t^{(u_1)})$ is a renewal process with stationary increments. Let $S_{2,0}$ and $S_{2,1}$ denote independent Brownian motions which approximate $\nu(\cdot)$ and $\nu^{(1)}(\cdot+t^{(u_2)}-t^{(u_1)})$ in the sense of Lemma 6.2.17. We claim that
$$K_2(t) := (m(t))^{-1/2}\int_{[0,\,t+t^{(u_1)}]}\Big|\nu^{(1)}(y+t^{(u_2)}-t^{(u_1)})-\nu^{(1)}(t^{(u_2)}-t^{(u_1)})-\mu^{-1}y-\sigma\mu^{-3/2}S_{2,1}(y)\Big|\,\mathrm{d}_y\Big(-\sum_{k=1}^2\alpha_k\,h(y+t^{(u_k)}-t^{(u_1)})\Big) \overset{\mathrm{P}}{\longrightarrow} 0 \tag{3.118}$$
and that
$$K_1(t) := (m(t))^{-1/2}\,\alpha_2\int_{[0,\,t^{(u_2)}-t^{(u_1)}]}\big|\nu(y)-\mu^{-1}y-\sigma\mu^{-3/2}S_{2,0}(y)\big|\,\mathrm{d}(-h(y)) \overset{\mathrm{P}}{\longrightarrow} 0. \tag{3.119}$$
With $t_0$ and $A$ as defined in Lemma 6.2.17, (3.118) follows from the inequality
$$K_2(t) \le K_2(t)\,\mathbb{1}_{\{t_0>t+t^{(u_1)}\}} + (m(t))^{-1/2}\int_{[0,\,t_0]}\Big|\nu^{(1)}(y+t^{(u_2)}-t^{(u_1)})-\nu^{(1)}(t^{(u_2)}-t^{(u_1)})-\mu^{-1}y-\sigma\mu^{-3/2}S_{2,1}(y)\Big|\,\mathrm{d}_y\Big(-\sum_{k=1}^2\alpha_k\,h(y+t^{(u_k)}-t^{(u_1)})\Big)$$
$$+\;A\,(m(t))^{-1/2}\int_{(t_0,\,t+t^{(u_1)}]}y^{1/r}\,\mathrm{d}_y\Big(-\sum_{k=1}^2\alpha_k\,h(y+t^{(u_k)}-t^{(u_1)})\Big)\,\mathbb{1}_{\{t_0\le t+t^{(u_1)}\}},$$
because the first two terms on the right-hand side trivially converge to zero in probability, whereas the third does so, for the integral $\int_{(t_0,\,\infty)}y^{1/r}\,\mathrm{d}(-h(y))$ converges (use integration by parts). Relation (3.119) can be checked along the same lines.
Formulae (3.118) and (3.119) demonstrate that we have reduced the original problem to showing that
$$\frac{\Theta_1''+\Theta_2''}{\sqrt{m(t)}} \overset{\mathrm{d}}{\longrightarrow} \alpha_1 X(u_1)+\alpha_2 X(u_2),$$
where
$$\Theta_1'' := \int_{[0,\,t+t^{(u_1)}]}S_{2,1}(y)\,\mathrm{d}_y\Big(-\sum_{k=1}^2\alpha_k\,h(y+t^{(u_k)}-t^{(u_1)})\Big)$$
and
$$\Theta_2'' := \alpha_2\int_{[0,\,t^{(u_2)}-t^{(u_1)}]}S_{2,0}(y)\,\mathrm{d}(-h(y)).$$
Since $\Theta_1''+\Theta_2''$ is the sum of independent centered Gaussian random variables, it remains to check that
$$\mathrm{Var}\,(\Theta_1''+\Theta_2'') = \mathrm{Var}\,\Theta_1''+\mathrm{Var}\,\Theta_2'' \sim \big(\alpha_1^2+\alpha_2^2+2\alpha_1\alpha_2(1-u_2)\big)\,m(t).$$
Writing the integral defining $\Theta_1''$ as the limit of integral sums we infer
$$\mathrm{Var}\,\Theta_1'' = \alpha_1^2\int_0^{t+t^{(u_1)}}\big(h(y)-h(t+t^{(u_1)})\big)^2\,\mathrm{d}y + \alpha_2^2\int_0^{t+t^{(u_1)}}\big(h(y+t^{(u_2)}-t^{(u_1)})-h(t+t^{(u_2)})\big)^2\,\mathrm{d}y$$
$$+\;2\alpha_1\alpha_2\int_0^{t+t^{(u_1)}}\big(h(y)-h(t+t^{(u_1)})\big)\big(h(y+t^{(u_2)}-t^{(u_1)})-h(t+t^{(u_2)})\big)\,\mathrm{d}y$$
$$= \alpha_1^2\int_0^{t+t^{(u_1)}}(h(y))^2\,\mathrm{d}y + \alpha_2^2\int_{t^{(u_2)}-t^{(u_1)}}^{t+t^{(u_2)}}(h(y))^2\,\mathrm{d}y + 2\alpha_1\alpha_2\int_0^{t+t^{(u_1)}}h(y)\,h(y+t^{(u_2)}-t^{(u_1)})\,\mathrm{d}y + o\Big(\int_0^t (h(y))^2\,\mathrm{d}y\Big).$$
The appearance of the $o$-term follows by (3.110) and (3.111). Arguing similarly we obtain
$$\mathrm{Var}\,\Theta_2'' = \alpha_2^2\int_0^{t^{(u_2)}-t^{(u_1)}}\big(h(y)-h(t^{(u_2)}-t^{(u_1)})\big)^2\,\mathrm{d}y = \alpha_2^2\int_0^{t^{(u_2)}-t^{(u_1)}}(h(y))^2\,\mathrm{d}y + o\Big(\int_0^t(h(y))^2\,\mathrm{d}y\Big).$$
For the proof of Theorem 3.3.18 we need two auxiliary results, Lemmas 3.3.33
and 3.3.34. Replacing the denominator in (3.40) by a function which grows faster
leads to weak convergence of finite-dimensional distributions to zero. However, this
result holds without the regular variation assumptions of Theorem 3.3.9.
where the last equality follows from the assumption on $s$. The proof of Lemma 3.3.33 is complete. □
Lemma 3.3.34 Assume that $h$ is eventually monotone and eventually nonnegative and that the distribution of $\xi$ belongs to the domain of attraction of an $\alpha$-stable distribution, $\alpha\in(1,2]$ (i.e., relation (3.44) holds). Then
$$\frac{\sum_{k\ge 0}h(ut-S_k)\,\mathbb{1}_{\{S_k\le ut\}} - \mu^{-1}\int_0^{ut}h(y)\,\mathrm{d}y}{r(t)} \overset{\mathrm{f.d.}}{\Longrightarrow} 0, \quad t\to\infty,$$
for any positive function $r(t)$ regularly varying at $\infty$ of positive index that further satisfies
$$\lim_{t\to\infty}\frac{r(t)}{c(t)h(t)} = \infty.$$
In particular,
$$\frac{\sum_{k\ge 0}h(ut-S_k)\,\mathbb{1}_{\{S_k\le ut\}} - \mu^{-1}\int_0^{ut}h(y)\,\mathrm{d}y}{\sqrt{\int_0^t v(y)\,\mathrm{d}y}} \overset{\mathrm{f.d.}}{\Longrightarrow} 0.$$
Summing the last relation and (3.121) finishes the proof for cases (Bi1).
⁶ Lemma 3.3.34 requires that $h$ be eventually monotone and eventually nonnegative. If $h$ is eventually nonpositive, we simply replace it with $-h$.
because both factors tend to zero. Invoking Lemma 3.3.33 again allows us to
conclude that (3.123) holds in this case, too. Summing (3.122) and (3.123) finishes
the proof for cases (Bi2).
Cases (Bi3) We only give a proof for case (B13), in which $\sigma^2<\infty$, the other cases being similar. Write
$$\frac{Y(ut)-\mu^{-1}\int_0^{ut}h(y)\,\mathrm{d}y}{\sqrt{t}\,h(t)} = \frac{Y(ut)-\sum_{k\ge 0}h(ut-S_k)\,\mathbb{1}_{\{S_k\le ut\}}}{\sqrt{t}\,h(t)} + \frac{\sum_{k\ge 0}h(ut-S_k)\,\mathbb{1}_{\{S_k\le ut\}}-\mu^{-1}\int_0^{ut}h(y)\,\mathrm{d}y}{\sqrt{t}\,h(t)} =: A_t(u)+B_t(u).$$
We have
$$A_t(u) \overset{\mathrm{f.d.}}{\Longrightarrow} c_1 V_\beta(u),$$
where $c_1:=\sqrt{b\mu^{-1}}$. From (3.122) we already know that
$$B_t(u) \overset{\mathrm{f.d.}}{\Longrightarrow} c_2\,I_{2,\rho}(u), \tag{3.124}$$
where $c_2:=\sqrt{\sigma^2\mu^{-3}}$. By the Cramér–Wold device (see p. 232) and Lévy's continuity theorem, it suffices to check that, for any $m\in\mathbb{N}$, any real numbers $\alpha_1,\dots,\alpha_m$, $\beta_1,\dots,\beta_m$, any $0<u_1<\dots<u_m<\infty$ and any $w,z\in\mathbb{R}$,
$$\lim_{t\to\infty}\mathbb{E}\exp\Big(\mathrm{i}w\sum_{j=1}^m\alpha_j A_t(u_j)+\mathrm{i}z\sum_{r=1}^m\beta_r B_t(u_r)\Big) \tag{3.125}$$
$$= \mathbb{E}\exp\Big(\mathrm{i}wc_1\sum_{j=1}^m\alpha_j V_\beta(u_j)\Big)\,\mathbb{E}\exp\Big(\mathrm{i}zc_2\sum_{r=1}^m\beta_r I_{2,\rho}(u_r)\Big)$$
$$= \exp\big(-D(u_1,\dots,u_m)\,c_1^2w^2/2\big)\,\mathbb{E}\exp\Big(\mathrm{i}zc_2\sum_{r=1}^m\beta_r I_{2,\rho}(u_r)\Big).$$
In view of (3.124),
$$\exp\Big(\mathrm{i}z\sum_{r=1}^m\beta_r B_t(u_r)\Big) \overset{\mathrm{d}}{\longrightarrow} \exp\Big(\mathrm{i}zc_2\sum_{r=1}^m\beta_r I_{2,\rho}(u_r)\Big).$$
Since $X$ and $\xi$ are assumed independent, relations (3.75) and (3.76) read
$$\sum_{k\ge 0}\mathbb{E}_{\mathcal{F}}Z_{k+1,t}^2 \overset{\mathrm{P}}{\longrightarrow} D(u_1,\dots,u_m)$$
and
$$\sum_{k\ge 0}\mathbb{E}_{\mathcal{F}}\big[Z_{k+1,t}^2\,\mathbb{1}_{\{|Z_{k+1,t}|>y\}}\big] \overset{\mathrm{P}}{\longrightarrow} 0$$
for all $y>0$, respectively. With these at hand and noting that
$$y(t) := \frac{\sqrt{\mu^{-1}tv(t)}}{\sqrt{t}\,h(t)} \longrightarrow c_1,$$
we infer
$$\mathbb{E}_{\mathcal{F}}\exp\Big(\mathrm{i}w\sum_{j=1}^m\alpha_j A_t(u_j)\Big) = \mathbb{E}_{\mathcal{F}}\exp\Big(\mathrm{i}w\,y(t)\sum_{k\ge 0}Z_{k+1,t}\Big) \overset{\mathrm{d}}{\longrightarrow} \exp\big(-D(u_1,\dots,u_m)\,c_1^2w^2/2\big)$$
by formula (3.89) of Lemma 3.3.30. Since the right-hand side of the last expression is nonrandom, Slutsky's lemma implies
$$\exp\Big(\mathrm{i}z\sum_{r=1}^m\beta_r B_t(u_r)\Big)\,\mathbb{E}_{\mathcal{F}}\exp\Big(\mathrm{i}w\sum_{j=1}^m\alpha_j A_t(u_j)\Big) \overset{\mathrm{d}}{\longrightarrow} \exp\Big(\mathrm{i}zc_2\sum_{r=1}^m\beta_r I_{2,\rho}(u_r)\Big)\exp\big(-D(u_1,\dots,u_m)\,c_1^2w^2/2\big).$$
Invoking the Cramér–Wold device (see p. 232), Markov's inequality, and the regular variation of the normalization factor, we conclude that it is enough to prove that
$$\sqrt{\frac{\mathbb{P}\{\xi>t\}}{v(t)}}\,\mathbb{E}\sum_{k\ge 0}|h(t-S_k)|\,\mathbb{1}_{\{S_k\le t\}} = \sqrt{\frac{\mathbb{P}\{\xi>t\}}{v(t)}}\int_{[0,\,t]}|h(t-x)|\,\mathrm{d}U(x) \longrightarrow 0. \tag{3.127}$$
This follows immediately from Lemma 6.2.16(b) with $f_1(t)=|h(t)|$, $f(t)=\sqrt{v(t)\,\mathbb{P}\{\xi>t\}}$, $\gamma=(\beta-\alpha)/2$ and $q(t)=\sqrt{u(t)}$ for $u(t)$ defined in Theorem 3.3.10. Note that $f_1=o(f)$ in view of (3.52). The proof for case (C1) is complete.
Case (C2) Using Theorem 3.3.13 we infer
$$\frac{\mathbb{P}\{\xi>t\}}{h(t)}\sum_{k\ge 0}h(ut-S_k)\,\mathbb{1}_{\{S_k\le ut\}} \overset{\mathrm{f.d.}}{\Longrightarrow} \int_{[0,\,u]}(u-y)^\rho\,\mathrm{d}W_\alpha^{\leftarrow}(y) = J_{\alpha,\rho}(u).$$
This immediately follows from Lemma 6.2.16(b) with $f(t)=(h(t))^2/\mathbb{P}\{\xi>t\}$, $f_1(t)=v(t)$, $\gamma=2\rho+\alpha$ and $q(t)=(w(t))^2$ for $w(t)$ defined in Theorem 3.3.19. Note that $f_1=o(f)$ in view of (3.53). The proof for case (C2) is complete.
Case (C3) Put
$$\bar{A}_t(u) := \sqrt{\frac{\mathbb{P}\{\xi>t\}}{v(t)}}\sum_{k\ge 0}\big(X_{k+1}(ut-S_k)-h(ut-S_k)\big)\,\mathbb{1}_{\{S_k\le ut\}},$$
$$\bar{B}_t(u) := \sqrt{\frac{\mathbb{P}\{\xi>t\}}{v(t)}}\sum_{k\ge 0}h(ut-S_k)\,\mathbb{1}_{\{S_k\le ut\}}$$
and
$$A_{\alpha,\beta}(u) := b^{1/2}\int_{[0,\,u]}(u-y)^{(\beta-\alpha)/2}\,\mathrm{d}W_\alpha^{\leftarrow}(y) = b^{1/2}\,J_{\alpha,(\beta-\alpha)/2}(u).$$
It suffices to prove that
$$\sum_{j=1}^m \lambda_j\big(\bar{A}_t(u_j)+\bar{B}_t(u_j)\big) \overset{\mathrm{d}}{\longrightarrow} \sum_{j=1}^m \lambda_j\big(Z_{\alpha,\beta}(u_j)+A_{\alpha,\beta}(u_j)\big)$$
for $k\in\mathbb{N}_0$ and $t>0$. Then $\sum_{j=1}^m\lambda_j\bar{A}_t(u_j)=\sum_{k\ge 0}\bar{Z}_{k+1,t}$ and
$$\sum_{k\ge 0}\mathbb{E}_{\mathcal{F}}\bar{Z}_{k+1,t}^2 = \frac{\mathbb{P}\{\xi>t\}}{v(t)}\int_{[0,\,u_m]}\Big(\sum_{j=1}^m\lambda_j^2\,v(t(u_j-y))\,\mathbb{1}_{[0,u_j]}(y) + 2\sum_{1\le r<l\le m}\lambda_r\lambda_l\,f(t(u_r-y),\,t(u_l-y))\,\mathbb{1}_{[0,u_r]}(y)\Big)\,\mathrm{d}_y\nu(ty)$$
for z 2 R.
Now we intend to show that, under the present assumptions, Lemma 3.3.29 is applicable. While relation (3.84) has already been checked in the proof of Theorem 3.3.10, relation (3.54) holds by assumption. Thus, we are left with proving that (3.86) holds which, under (3.54), is equivalent to
$$\lim_{\gamma\to 1-}\limsup_{t\to\infty}\frac{\mathbb{P}\{\xi>t\}}{h(t)}\int_{(\gamma z,\,z]}h(t(z-y))\,\mathrm{d}_yU(ty) = 0.$$
By Lemma 3.3.29,
$$\gamma_1\sum_{j=1}^m\lambda_j\,\bar{B}_t(u_j) + \gamma_2\sum_{k\ge 0}\mathbb{E}_{\mathcal{F}}\bar{Z}_{k+1,t}^2$$
$$= \gamma_1\sqrt{\frac{\mathbb{P}\{\xi>t\}}{v(t)}}\int_{[0,\,u_m]}\sum_{j=1}^m\lambda_j\,h(t(u_j-y))\,\mathbb{1}_{[0,u_j]}(y)\,\mathrm{d}_y\nu(ty)$$
$$+\;\gamma_2\,\frac{\mathbb{P}\{\xi>t\}}{v(t)}\int_{[0,\,u_m]}\Big(\sum_{j=1}^m\lambda_j^2\,v(t(u_j-y))\,\mathbb{1}_{[0,u_j]}(y) + 2\sum_{1\le r<l\le m}\lambda_r\lambda_l\,f(t(u_r-y),\,t(u_l-y))\,\mathbb{1}_{[0,u_r]}(y)\Big)\,\mathrm{d}_y\nu(ty)$$
$$\overset{\mathrm{d}}{\longrightarrow} \gamma_1\sum_{j=1}^m\lambda_j\,A_{\alpha,\beta}(u_j) + \gamma_2\,D_{\alpha,\beta}(u_1,\dots,u_m) \tag{3.129}$$
for any real $\gamma_1$ and $\gamma_2$, with $D_{\alpha,\beta}(u_1,\dots,u_m)$ defined in (3.95). Hence,
$$\exp\Big(\mathrm{i}z\sum_{j=1}^m\lambda_j\,\bar{B}_t(u_j) - \sum_{k\ge 0}\mathbb{E}_{\mathcal{F}}\bar{Z}_{k+1,t}^2\,z^2/2\Big) \overset{\mathrm{d}}{\longrightarrow} \exp\Big(\mathrm{i}z\sum_{j=1}^m\lambda_j\,A_{\alpha,\beta}(u_j) - D_{\alpha,\beta}(u_1,\dots,u_m)\,z^2/2\Big).$$
Hence the first summand on the right-hand side of (3.128) tends to zero in probability if we verify that
$$\sum_{k\ge 0}\mathbb{E}_{\mathcal{F}}\bar{Z}_{k+1,t}^2 \overset{\mathrm{d}}{\longrightarrow} D_{\alpha,\beta}(u_1,\dots,u_m) \tag{3.130}$$
and
$$\sum_{k\ge 0}\mathbb{E}_{\mathcal{F}}\big[\bar{Z}_{k+1,t}^2\,\mathbb{1}_{\{|\bar{Z}_{k+1,t}|>y\}}\big] \overset{\mathrm{P}}{\longrightarrow} 0 \tag{3.131}$$
for all $y>0$. Relation (3.130) follows from (3.129) with $\gamma_1=0$ and $\gamma_2=1$. In view of inequality (3.77), relation (3.131) is implied by (3.98), which has already been checked (in the proof of Theorem 3.3.10). This finishes the proof for case (C3). The proof of Theorem 3.3.19 is complete. □
In this section we get rid of the condition $X(t)=0$ for $t<0$. Thus, $(Y(t))_{t\in\mathbb{R}}$ is defined by
$$Y(t) := \sum_{k\ge 0}X_{k+1}(t-S_k), \quad t\in\mathbb{R}.$$
Also, we assume that $X$ has nondecreasing paths and that $\lim_{t\to-\infty}X(t)=0$ a.s.
The results concerning finiteness of power and exponential moments of $(Y(t))$ defined above, which we are going to derive hereafter, are actually a key to the analysis of the moments of $N(t)$, the number of visits to $(-\infty,t]$ of a PRW $(T_n)_{n\ge 1}$ (see Section 1.4). The link between $N(t)$ and $Y(t)$ is discussed next.

Example 3.4.1 If $X_n(t)=\mathbb{1}_{\{\eta_n\le t\}}$ for a real-valued random variable $\eta_n$, $n\in\mathbb{N}$, then $Y(t)$ equals the number of visits to $(-\infty,t]$ of the PRW $(S_{n-1}+\eta_n)_{n\in\mathbb{N}}$, thus $Y(t)=N(t)$.
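The identification $Y(t)=N(t)$ in Example 3.4.1 is easy to verify by direct computation. The sketch below is an illustration with arbitrarily chosen step and perturbation distributions (not prescribed by the book); it evaluates the shot-noise sum and the visit count on the same sample and confirms they agree:

```python
import random

rng = random.Random(7)
n = 1000  # truncation level for the series (S_k -> infinity, so late terms vanish)
xi = [rng.expovariate(1.0) for _ in range(n)]   # positive steps xi_k
eta = [rng.gauss(0.0, 1.0) for _ in range(n)]   # perturbations eta_n

# S_0 = 0, S_k = xi_1 + ... + xi_k
S = [0.0]
for x in xi:
    S.append(S[-1] + x)

t = 5.0
# Shot noise with X_n(s) = 1{eta_n <= s}: Y(t) = sum_k 1{eta_{k+1} <= t - S_k}
Y = sum(1 for k in range(n) if eta[k] <= t - S[k])
# Number of visits of the PRW T_n = S_{n-1} + eta_n to (-infinity, t]
N = sum(1 for k in range(1, n + 1) if S[k - 1] + eta[k - 1] <= t)
print(Y == N)  # True by Example 3.4.1
```

The truncation at $n$ terms is harmless here because $S_k\to\infty$ a.s., so only finitely many summands are nonzero.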
Our first moment result for shot-noise processes, assuming $\xi\ge 0$ a.s., provides two conditions which combined are necessary and sufficient for the finiteness of $\mathbb{E}e^{aY(t)}$ for fixed $a>0$ and $t\in\mathbb{R}$. As before, let $\tau(x)=\inf\{n\ge 1: S_n>x\}$, $\tau=\tau(0)$ and set $U(x):=\sum_{n\ge 0}\mathbb{P}\{S_n\le x\}$.
3.4 Moment Results 169
and
$$l(t) := \mathbb{E}\Big(\prod_{n=1}^{\tau}e^{aX_n(t-S_{n-1})}\Big) < \infty \tag{3.134}$$
for $t\in\mathbb{R}$. Furthermore, the conditions imply $r(t)<\infty$ and $l(t)<\infty$ for all $t\in\mathbb{R}$.
Turning to power moments, we consider the case $\xi\ge 0$ a.s. only.
170 3 Random Processes with Immigration
Theorem 3.4.4 Let ξ ≥ 0 a.s. Then, for any p ≥ 1 and t ∈ ℝ, the following assertions are equivalent:
The inequalities

    Σ_{n≥1} ( e^{aX_n(t−S_{n−1})} − 1 ) ≤ e^{aY(t)} − 1   and   e^{aY(t)} ≥ ∏_{n=1}^{ν} e^{aX_n(t−S_{n−1})}     (3.141)

hold whenever Y(t) < ∞. Taking expectations in the above inequalities gives the implications (3.132)⇒(3.133) and (3.132)⇒(3.134).
In turn, assume that (3.133) and (3.134) hold and define

    L(s) := ∏_{n=1}^{ν} e^{aX_n(s−S_{n−1})}

for all s ≤ t. This is possible because l(t) = EL(t) < ∞ in view of (3.134) and L is a.s. nondecreasing. Next define Y₀(·) = Y′₀(·) = 0 and
    Y_n(·) := Σ_{k=1}^{n} X_k(· − S_{k−1}),     Y′_n(·) := Σ_{k=ν+1}^{ν+n} X_k(· − (S_{k−1} − S_ν))

for n ∈ ℕ, so that Y_n(·) ↑ Y(·) as n → ∞. Note that each Y′_n(·) is a copy of Y_n(·) which is further independent of (L(·), S_ν). Now observe that
    Y_n(t) ≤ Y_ν(t) + Y′_n(t) 1_{{ν≤n, S_ν≤ε}} + Y′_n(t−ε) 1_{{ν≤n, S_ν>ε}}
          ≤ Y_ν(t) + Y′_n(t) 1_{{S_ν≤ε}} + Y′_n(t−ε) 1_{{S_ν>ε}},

whence, using L(t) = e^{aY_ν(t)} and the independence of Y′_n and (L(t), S_ν),

    Ee^{aY_n(t)} ≤ E( L(t) 1_{{S_ν≤ε}} ) Ee^{aY_n(t)} + E( L(t) 1_{{S_ν>ε}} ) Ee^{aY_n(t−ε)}.     (3.142)

Further,

    Ee^{aY_n(t)} ≤ E ∏_{k=1}^{n} e^{aX_k(t)} = ( Ee^{aX_1(t)} )^n < ∞,     (3.143)

where the finiteness follows from Ee^{aX(t)} < ∞ which, in its turn, is a consequence of (3.133). By solving (3.142) for Ee^{aY_n(t)} and letting n → ∞, we arrive at

    Ee^{aY(t)} ≤ (1 − β)^{−n} Ee^{aY(t−nε)} ∏_{k=0}^{n−1} EL(t − kε)
for any n ∈ ℕ. Hence Ee^{aY(t)} < ∞ as claimed if we verify Ee^{aY(t₀)} < ∞ for some t₀ < t.

To this end, pick t₀ such that r(t₀) < 1, which is possible because (3.133) in combination with the monotone convergence theorem entails lim_{t→−∞} r(t) = 0. Note also that r(t₀) < 1 implies Ee^{aX(t₀)} < ∞. Define

    b_n := Ee^{aY_n(t₀)}   and   c_n := E Σ_{k=1}^{n} ( e^{aX_k(t₀−S_{k−1})} − 1 ),

n ∈ ℕ. We obtain (under the usual convention that empty products are defined as 1)

    e^{aY_n(t₀)} − 1 = Σ_{k=1}^{n} ( e^{aX_k(t₀−S_{k−1})} − 1 ) ∏_{j=k+1}^{n} e^{aX_j(t₀−S_{j−1})}
                     ≤ Σ_{k=1}^{n} ( e^{aX_k(t₀−S_{k−1})} − 1 ) ∏_{j=k+1}^{n} e^{aX_j(t₀−S_{j−1}+S_k)}
                     ≤ Σ_{k=1}^{n} ( e^{aX_k(t₀−S_{k−1})} − 1 ) ∏_{j=k+1}^{k+n−1} e^{aX_j(t₀−S_{j−1}+S_k)}.
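The manipulation above rests on the elementary telescoping identity ∏_{k=1}^n a_k − 1 = Σ_{k=1}^n (a_k − 1) ∏_{j=k+1}^n a_j (empty products equal to 1), which can be sanity-checked numerically; the factor values below are arbitrary numbers ≥ 1, mimicking e^{aX} terms.

```python
import math
import random

random.seed(0)

def telescope(a):
    """Right-hand side of the telescoping identity for factors a_1, ..., a_n."""
    total = 0.0
    for k in range(len(a)):
        tail = math.prod(a[k + 1:])   # empty product convention: prod([]) == 1
        total += (a[k] - 1.0) * tail
    return total

a = [math.exp(random.uniform(0.0, 1.0)) for _ in range(8)]  # factors >= 1
lhs = math.prod(a) - 1.0
rhs = telescope(a)
print(lhs, rhs)
```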
    g(t) − 1 = E( Σ_{n≥1} ( e^{aX_n(t−S_{n−1})} − 1 ) ∏_{k≥n+1} e^{aX_k(t−S_{k−1})} )
             = E( Σ_{n≥1} ( e^{aX_n(t−S_{n−1})} − 1 ) E( ∏_{k≥n+1} e^{aX_k(t−S_{k−1})} | S_n ) )
             = Σ_{n≥1} E( ( e^{aX_n(t−S_{n−1})} − 1 ) g(t − S_n) )
             = Σ_{n≥0} E h_t(S_n) = E ∫_{[0,∞)} E h_t(y + M) dU^>(y)
             = ∫_{[0,∞)} ∫_{[0,∞)} E( e^{aX(t−y+z)} − 1 ) g(t − y + z) dP{M ≤ z} dU^>(y)
for any t ∈ ℝ and any u > 0. The distribution of M, being concentrated on [0,∞) (because P{ξ < 0} > 0 by assumption) and infinitely divisible (see Theorem 2 on p. 613 in [89]), has unbounded support, i.e., P{M > u} > 0 for any u > 0. Consequently, g(t + u) < ∞ for any u > 0 if g(t) < ∞. By monotonicity, we also have g(t + u) < ∞ for u < 0.
(3.136)⇒(3.137). Put

    L_n(s) := ∏_{k=ν_{n−1}+1}^{ν_n} exp( aX_k(s − (S_{k−1} − S_{ν_{n−1}})) )

for n ∈ ℕ and s ∈ ℝ; these are i.i.d. with L₁(s) = L(s) as defined in the proof of Theorem 3.4.1. If Ee^{aY(t)} < ∞, then

    e^{aY(t)} − 1 = Σ_{n≥1} ( L_n(t − S_{ν_{n−1}}) − 1 ) ∏_{k≥n+1} L_k(t − S_{ν_{k−1}})
                  ≥ Σ_{n≥1} ( L_n(t − S_{ν_{n−1}}) − 1 ).
Taking expectations on both sides of this inequality gives r^>(t) < ∞.

(3.138)⇒(3.135). If r^>(t) < ∞ for some t ∈ ℝ, then also l(t) < ∞ and, therefore, r^>(t₀) < 1 and l(t₀) − 1 < 1 for some t₀ ≤ t. Since

    e^{aY_{ν_n}(s)} ≤ ∏_{k=1}^{n} L_k(s),

we infer, setting b_n := Ee^{aY_{ν_n}(t₀)} and

    c_n := E Σ_{k=1}^{n} ( L_k(t₀ − S_{ν_{k−1}}) − 1 ),

that sup_{n≥1} c_n = r^>(t₀) < ∞. By a similar estimation as in the proof of Theorem 3.4.1 for nonnegative ξ we find b_n ≤ 1 + c_n b_{n−1}, and thus b_n ≤ (1 − r^>(t₀))^{−1} for all n ∈ ℕ. Hence Ee^{aY(t₀)} < ∞, for Y_{ν_n}(t₀) ↑ Y(t₀) as n → ∞. □
For the proof of Theorem 3.4.4 we need a lemma.

Lemma 3.5.1 Let 1 ≤ p = n + δ with n ∈ ℕ₀ and δ ∈ (0,1]. Then, for any x, y ≥ 0,

    (x + y)^p ≤ x^p + y^p + p 2^{p−1} ( x y^{p−1} + x^n y^δ ).     (3.144)

Proof For any 0 ≤ r ≤ 1, we have (1 + r)^p = 1 + p ∫₀^r (1 + t)^{p−1} dt. By the mean value theorem for integrals, for some θ ∈ (0, r),

    (1 + r)^p = 1 + p r (1 + θ)^{p−1} ≤ 1 + p 2^{p−1} r ≤ 1 + p 2^{p−1} r^δ,     (3.145)

where in the last step we have used that 0 ≤ r ≤ 1. Now let x, y ≥ 0. When x ≤ y, use the first estimate in (3.145) to get (x + y)^p ≤ y^p + p 2^{p−1} x y^{p−1}. When y ≤ x, use the second estimate in (3.145) to infer (x + y)^p ≤ x^p + p 2^{p−1} x^n y^δ. Thus, in any case, (3.144) holds. □
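The two cases in the proof combine into a bound of the form (x+y)^p ≤ x^p + y^p + p·2^{p−1}(x·y^{p−1} + x^n·y^δ) with p = n + δ, n ∈ ℕ₀, δ ∈ (0,1]; a brute-force numerical check over a small grid (the grid values are arbitrary):

```python
import math

def bound_holds(x, y, p):
    """Check (x+y)^p <= x^p + y^p + p*2^(p-1)*(x*y^(p-1) + x^n * y^delta)."""
    n = math.ceil(p) - 1       # write p = n + delta with delta in (0, 1]
    delta = p - n
    lhs = (x + y) ** p
    rhs = x ** p + y ** p + p * 2 ** (p - 1) * (x * y ** (p - 1) + x ** n * y ** delta)
    return lhs <= rhs + 1e-9   # tiny slack for floating-point roundoff

checks = [bound_holds(x, y, p)
          for x in (0.0, 0.3, 1.0, 2.7, 10.0)
          for y in (0.0, 0.3, 1.0, 2.7, 10.0)
          for p in (1.0, 1.5, 2.0, 2.5, 3.7)]
print(all(checks))
```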
Proof of Theorem 3.4.4 (3.139)⇒(3.140). Let E(Y(t))^p < ∞ and q ∈ [1, p]. Using the superadditivity of the function x ↦ x^q for x ≥ 0, we then infer

    ∞ > E(Y(t))^q ≥ E Σ_{k≥1} ( X_k(t − S_{k−1}) )^q = ∫_{[0,∞)} E( X(t − y) )^q dU(y).

In the induction step, we assume that the asserted implication holds for p = n and conclude that it then also holds for p = n + δ for all δ ∈ (0,1]. To this end, assume that p = n + δ for some n ∈ ℕ and δ ∈ (0,1] and that s_q(t) < ∞ for all q ∈ [1, p]. By the induction hypothesis, E(Y(t))^n < ∞. For k ∈ ℕ and t ∈ ℝ, define

    Y_k(t) := Σ_{j≥k+1} X_j( t − (S_{j−1} − S_k) ).

Observe that Y_k(t) = X_{k+1}(t) + Y_{k+1}(t − ξ_{k+1}) for all t ∈ ℝ. Using (3.144), we get

    (Y(t))^p ≤ Σ_{j≥1} ( X_j(t − S_{j−1}) )^p
               + p 2^{p−1} Σ_{j≥1} ( X_j(t − S_{j−1}) (Y_j(t − S_j))^{p−1} + ( X_j(t − S_{j−1}) )^n ( Y_j(t − S_j) )^δ ).

E(Y(t))^n < ∞ implies that E(Y(t))^q is finite for 0 < q ≤ n. Using this and the monotonicity of Y_j, we conclude

    E(Y(t))^p ≤ s_p(t) + p 2^{p−1} ( s_1(t) E(Y(t))^{p−1} + s_n(t) E(Y(t))^δ ) < ∞.  □
Random processes with immigration have been used to model various phenomena.
An incomplete list of possible areas of applications includes anomalous diffusion in physics [210], earthquake occurrences in geology [254], rainfall modeling in meteorology [242, 258], highway traffic engineering [159, 204], river flow and stream flow modeling in hydrology [193, 259], computer failure modeling [195] and network traffic in computer science [186, 212, 238, 239], insurance [181, 182], and finance [180, 245].
In the case where ξ has an exponential distribution, the process Y (or its stationary version) may be called a random process with immigration at the epochs of a Poisson process, or a random process with Poisson immigration. Weak convergence of random processes with Poisson immigration has received considerable attention. In some papers of a more applied nature, weak convergence of Y_t(u) = (a(t))^{−1}(Y(ut) − b(ut)) for X having a specific form is investigated. In the list to be given next, η denotes a random variable independent of ξ, and f a deterministic function which satisfies certain restrictions which are specified in the cited papers:
• X(t) = 1_{{η>t}} and X(t) = t ∧ η, functional convergence, see [238];
• X(t) = η f(t), stationary version of Y, functional convergence, see [180];
Theorem 3.3.13 is Theorem 2.4 in [143]. For β ∈ [−α, 0] this result, accompanied by convergence of moments, was earlier obtained in Theorem 2.9 of [146] under a minor additional assumption. Actually, whenever β > −α and h is eventually monotone, there is weak convergence in the Skorokhod space D(0,∞) endowed with the J₁-topology. Eventually nondecreasing and nonincreasing h are covered by Theorem 1.1 in [140] and Theorem 2.1 in [143], respectively. A perusal of the proof of Theorem 2.1 in [143] reveals that the result actually holds without the monotonicity assumption. We suspect that the same is true in the situation of Theorem 1.1 in [140]. Functional limit theorem (3.45) is a consequence of the well-known fact that S_{[ut]}, properly normalized, converges weakly in the J₁-topology on D[0,∞) to an α-stable subordinator (which has strictly increasing paths a.s.) and Corollary 13.6.4 in [261]. Note that there is certain confusion about convergence (3.45) in the literature (see [267] for more details).
Theorem 3.3.14 was proved in [142]. In different contexts the limit process X from Theorem 3.3.14 has arisen in [46, 49]. It is an open problem whether the result of Theorem 3.3.14 still holds under the sole assumption Eξ² < ∞ rather than Eξ^r < ∞ for some r > 2. We think that a proof, if it exists, should be technically involved. An even more complicated open problem is: what happens in the case where the distribution of ξ belongs to the domain of attraction of a stable distribution with finite mean?
Section 3.3.4 is based on [148].
Theorem 3.3.21 is obtained here as a specialization of Theorems 3.3.12
and 3.3.13. Originally, Theorem 3.3.21 was implicitly proved in [150] (see
Theorems 1.2 and 1.3 there) following earlier work in [138, 139]. Assuming that ξ and η are independent, a counterpart of part (C1) of Theorem 3.3.21 with a random centering (i.e., a result that follows from Theorem 3.3.9) was obtained in Proposition 3.2 of [212].
With the exception of Theorem 3.3.21, Section 3.3.5 follows the presentation in [148]. Functional limit theorems for Y corresponding to X(t) = 1_{{η≤t}}, similar in spirit to Theorem 3.3.21, were recently obtained in Theorem 3.2 of [7]. These provide a generalization of Example 3.3.1.
Section 3.3.6 is based on [140, 143, 146–148].
The results of Section 3.4 came from [8]. Inequality (3.144) is a variant of an
inequality we have learned from [118]. The proof given here is a slight modification
of the argument given in the cited reference.
Chapter 4
Application to Branching Random Walk
The purpose of this chapter is two-fold. First, we obtain a criterion for uniform
integrability of intrinsic martingales .Wn /n2N0 in the branching random walk as
a corollary to Theorem 2.1.1 that provides a criterion for the a.s. finiteness of
perpetuities. Second, we state a criterion for the existence of logarithmic moments
of a.s. limits of .Wn /n2N0 as a corollary to Theorems 1.3.1 and 2.1.4. While the
former gives a criterion for the existence of power-like moments for suprema
of perturbed random walks, the latter contains a criterion for the existence of
logarithmic moments of perpetuities. To implement the task, we shall exhibit an
interesting connection between these at first glance unrelated models which emerges
when studying the weighted random tree associated with the branching random walk
under the so-called size-biased measure.
For n 2 N0 , let Zn be the point process that defines the positions on R of the
individuals of the n-th generation, their total number given by Zn .R/.
Definition 4.1.1 The sequence (Z_n)_{n∈ℕ₀} is called a branching random walk (BRW).
Let 𝕍 := ∪_{n≥0} ℕⁿ be the infinite Ulam–Harris tree of all finite sequences v = v₁…v_n (shorthand for (v₁, …, v_n)), with root ∅ (ℕ⁰ := {∅}) and edges connecting each v ∈ 𝕍 with its successors vi, i ∈ ℕ. The length of v is denoted by |v|. Call v an individual and |v| its generation number. A BRW (Z_n)_{n∈ℕ₀} may now be represented as a random labeled subtree of 𝕍 with the same root. This subtree T is obtained recursively as follows. For any v ∈ T, let N(v) be the number of its successors (children) and Z(v) := Σ_{i=1}^{N(v)} ε_{X_i(v)} denote the point process describing the displacements of the children vi of v relative to their mother. By assumption, the Z(v) are independent copies of Z. The Galton–Watson tree associated with this model is now given by

    T := { v₁…v_n ∈ 𝕍 : v_i ≤ N(v₁…v_{i−1}) for all 1 ≤ i ≤ n },

and X_i(v) denotes the label attached to the edge (v, vi) ∈ T × T and describes the displacement of vi relative to v. Let us stipulate hereafter that Σ_{|v|=n} means summation over all vertices of T (not 𝕍) of length n. For v = v₁…v_n ∈ T, put S(v) := Σ_{i=1}^{n} X_{v_i}(v₁…v_{i−1}). Then S(v) gives the position of v on the real line (of course, S(∅) = 0), and Z_n = Σ_{|v|=n} ε_{S(v)} for all n ∈ ℕ₀.
Suppose there exists γ > 0 such that

    m(γ) := E ∫_ℝ e^{−γx} Z(dx) ∈ (0, ∞).     (4.1)

For n ∈ ℕ, define F_n := σ(Z(v) : |v| ≤ n − 1), and let F₀ be the trivial σ-algebra. For n ∈ ℕ₀, put

    W_n(γ) = W_n := (m(γ))^{−n} ∫_ℝ e^{−γx} Z_n(dx) = (m(γ))^{−n} Σ_{|v|=n} e^{−γS(v)} = Σ_{|v|=n} L(v)
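The normalization in the definition of W_n can be illustrated by a Monte Carlo sketch. The offspring law (Poisson with mean 2), the standard normal displacements, and γ = 1 below are arbitrary choices for which m(γ) = E[N]·Ee^{−γX} is available in closed form; since (W_n) is a mean-one martingale, the empirical mean of W₃ should be close to 1.

```python
import math
import random

random.seed(3)

GAMMA = 1.0
LAM = 2.0  # mean offspring number of the Poisson reproduction law
# m(gamma) = E sum_i e^{-gamma X_i} = E[N] * E[e^{-gamma X}] by Wald's identity,
# for N ~ Poisson(LAM) independent of i.i.d. displacements X ~ N(0, 1)
M_GAMMA = LAM * math.exp(GAMMA ** 2 / 2.0)

def poisson(lam):
    """Inverse-transform sampling of a Poisson random variable."""
    u, k, p = random.random(), 0, math.exp(-lam)
    c = p
    while u > c:
        k += 1
        p *= lam / k
        c += p
    return k

def w_n(n):
    """One realization of W_n = m(gamma)^(-n) * sum over |v| = n of e^(-gamma S(v))."""
    positions = [0.0]  # generation-0 position: the ancestor sits at the origin
    for _ in range(n):
        nxt = []
        for s in positions:
            for _ in range(poisson(LAM)):
                nxt.append(s + random.gauss(0.0, 1.0))
        positions = nxt
    return sum(math.exp(-GAMMA * s) for s in positions) / M_GAMMA ** n

runs = 20000
mean_w3 = sum(w_n(3) for _ in range(runs)) / runs
print(mean_w3)  # should be close to E W_3 = 1
```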
    P̂{M = 0} = 0   and   P̂{M = 1} < 1.     (4.3)

The chosen notation for the multiplicative random walk associated with the given BRW, as opposed to the notation in Section 2.1, is intentional. Also, we keep the definition of J(x) from there (see p. 44).
Theorem 4.2.1 The martingale (W_n)_{n∈ℕ₀} is uniformly (P-)integrable if, and only if, the following two conditions hold true:

    lim_{n→∞} Π_n = 0     P̂-a.s.     (4.4)

and

    E W₁ J(log⁺ W₁) = ∫_{(1,∞)} x J(log x) P{W₁ ∈ dx} < ∞.     (4.5)
There are three distinct cases in which conditions (4.4) and (4.5) hold simultaneously:

(A1) Ê log M ∈ (−∞, 0) and E W₁ log⁺ W₁ < ∞;
(A2) Ê log M = −∞ and E W₁ J(log⁺ W₁) < ∞;
(A3) Ê log⁺ M = Ê log⁻ M = +∞, E W₁ J(log⁺ W₁) < ∞, and

    Ê J(log⁺ M) = ∫_{(1,∞)} ( log x / ∫₀^{log x} P̂{−log M > y} dy ) P̂{M ∈ dx} < ∞.
Remark 4.2.2 Condition (4.4) together with E W₁ log⁺ W₁ < ∞, which is a well-known condition in the theory of branching processes, are always sufficient for the uniform integrability of (W_n). It is curious that if Ê log M is infinite, the condition E W₁ log⁺ W₁ < ∞ is no longer necessary.
Remark 4.2.3 Using Theorem 4.2.1 we shall demonstrate that Doob's condition is not necessary for the supremum of a martingale to be integrable.

Let (U_n) be a nonnegative martingale. It is known that Doob's condition sup_{n≥0} EU_n log⁺ U_n < ∞ ensures E sup_{n≥0} U_n < ∞ and thereupon the uniform integrability of (U_n). Note that there are uniformly integrable martingales with nonintegrable suprema. For instance, let (S_n)_{n∈ℕ₀} be an ordinary finite-mean random walk with positive jumps. Then (S_n/n, σ(S_n, S_{n+1}, …))_{n∈ℕ} forms a reversed martingale. By Proposition V-3-11 in [221], this martingale is uniformly integrable. However, Theorem 4.14 in [67] tells us that the supremum of this martingale is nonintegrable provided that ES₁ log⁺ S₁ = ∞.

For the martingale (W_n) things are better: (W_n) is uniformly integrable if, and only if, its supremum is integrable (see (4.10) in Lemma 4.3.3). By Theorem 4.2.1, if conditions (A2) and E W₁ log⁺ W₁ = ∞ hold (the latter means that Doob's condition is violated), then (W_n) is uniformly integrable, which implies that its supremum is integrable.
Restricting to the case (A1), the existence of moments of W was studied in quite a number of articles; see 'Bibliographic Comments'. The following result goes further by covering the cases (A2) and (A3) as well.

Theorem 4.2.4 If lim_{n→∞} Π_n = 0 P̂-a.s. and

    E W₁ f(log⁺ W₁) J(log⁺ W₁) < ∞,     (4.6)
We adopt the situation described in Section 4.1. Recall that Z denotes a generic copy of the point process describing the displacements of children relative to their mother in the considered population. In the sequel we shall need the associated modified BRW with a distinguished ray (Ξ_n)_{n∈ℕ₀}, called the spine.
Let Z* be a point process whose distribution has Radon–Nikodym derivative (m(γ))^{−1} Σ_{i=1}^{N} e^{−γX_i} with respect to the distribution of Z. The individual Ξ₀ = ∅ residing at the origin of the real line has children, the displacements of which relative to Ξ₀ are given by a copy Z*₀ of Z*. All the children of Ξ₀ form the first generation of the population, and among these the spinal successor Ξ₁ is picked with a probability proportional to e^{−γs} if s is the position of Ξ₁ relative to Ξ₀ (size-biased selection). Now, while Ξ₁ has children the displacements of which relative to Ξ₁ are given by another independent copy Z*₁ of Z*, all other individuals of the first generation produce and spread offspring according to independent copies of Z (i.e., in the same way as in the given BRW). All children of the individuals of the first generation form the second generation of the population, and among the children of Ξ₁ the next spinal individual Ξ₂ is picked with probability proportional to e^{−γs} if s is the position of Ξ₂ relative to Ξ₁. It produces and spreads offspring according to an independent copy Z*₂ of Z*, whereas all siblings of Ξ₂ do so according to independent copies of Z, and so on. Let Ẑ_n denote the point process describing the positions of all members of the n-th generation. We call (Ẑ_n)_{n∈ℕ₀} the modified BRW associated with the ordinary BRW (Z_n)_{n∈ℕ₀}. Both the BRW and its modified version may be viewed as a random weighted tree, with an additional distinguished ray (the spine) in the second case. On an appropriate measurable space (X, G) specified below, they can be realized as the same random element under two different probability measures P and P̂, respectively. Let
    X := { (t, s, ζ) : t ⊂ 𝕍, s ∈ F(t), ζ ∈ R }

be the space of weighted rooted subtrees of 𝕍 with the same root and a distinguished ray (spine), where R := { (∅, ζ₁, ζ₂, …) : ζ_k ∈ ℕ for all k ∈ ℕ } denotes the set of infinite rays and F(t) denotes the set of functions s : 𝕍 → ℝ ∪ {∞} assigning a position s(v) ∈ ℝ to v ∈ t and s(v) = ∞ to v ∉ t. Endow this space with G := σ(G_n : n ∈ ℕ₀), where G_n is the σ-algebra generated by the sets

where t′_n := {v ∈ t′ : |v| ≤ n}, t_n ranges over the subtrees of 𝕍 with max{|v| : v ∈ t_n} ≤ n, B over the Borel sets of ℝ^{t_n}, and ζ over R. The subscript |t_n means restriction to the coordinates in t_n while the subscript |n means restriction to all coordinates up to the n-th. Let further F_n ⊂ G_n denote the σ-algebra generated by the sets
Then under P̂ the identity map (T, S, Ξ) = (T, (S(v))_{v∈𝕍}, (Ξ_n)_{n∈ℕ₀}) represents the modified BRW with its spine, while (T, S) under P represents the original BRW (the way P picks a spine does not matter and thus remains unspecified). Finally, the random variable W_n : X → [0,∞) defined by

    W_n(t, s, ζ) := (m(γ))^{−n} Σ_{|v|=n} e^{−γ s(v)}

is F_n-measurable for each n ∈ ℕ₀ and satisfies W_n = Σ_{|v|=n} L(v). The relevance of these definitions with respect to the P-martingale ((W_n, F_n))_{n∈ℕ₀} to be studied hereafter is provided by the following lemma.
Lemma 4.3.1 For each n ∈ ℕ₀, W_n is the Radon–Nikodym derivative of P̂ with respect to P on F_n. Moreover, if W := lim sup_{n→∞} W_n, then
(1) (W_n) is a P-martingale and (1/W_n) is a P̂-supermartingale.
(2) EW = 1 if, and only if, P̂{W < ∞} = 1.
(3) EW = 0 if, and only if, P̂{W = ∞} = 1.
The link between the P-distribution and the P̂-distribution of W_n is provided by

Lemma 4.3.2 For each n ∈ ℕ₀, P̂(W_n ∈ ·) is a size-biasing of P(W_n ∈ ·), that is,

    E W_n f(W_n) = Ê f(W_n)

for each nonnegative Borel function f and, more generally,

    E W_n g(W₀, …, W_n) = Ê g(W₀, …, W_n)     (4.8)

for each nonnegative Borel function g on ℝ^{n+1}. Finally, if (W_n)_{n∈ℕ₀} is uniformly P-integrable, then also

    E W h(W₀, W₁, …) = Ê h(W₀, W₁, …)     (4.9)
and in mean with respect to P, which immediately implies that W is the P-density of P̂ on F_∞ := σ(F_n : n ∈ ℕ₀) and thereupon also (4.9). □
Also, we shall need another auxiliary result.
Lemma 4.3.3 Let (W_n)_{n∈ℕ} be uniformly integrable with a.s. limit W and put W* := sup_{n≥0} W_n. Then, for each a ∈ (0,1), there exists b = b(a) ∈ ℝ₊ such that (4.10) holds and

    P̂{W > t} ≤ P̂{W* > t} ≤ (b/a) E( W 1_{{W > at}} ) = (b/a) P̂{W > at}     (4.11)

for all t > 1, where the last equality follows from (4.9), the first inequality is a consequence of W ≤ W* P̂-a.s., and the second inequality is implied by (4.10) in combination with (4.8). □
Next we have to make the connection with perpetuities. For u ∈ T, let N(u) denote the set of children of u and, if |u| = k,

    W_n(u) := Σ_{v: uv∈T_{k+n}} L(uv)/L(u),     n ∈ ℕ₀.

Since all individuals off the spine reproduce and spread as in the unmodified BRW, we have that, under P as well as P̂, the (W_n(u))_{n∈ℕ₀} for u ∈ ∪_{n≥0} N(Ξ_n)\{Ξ_{n+1}}
and

    Q_n := Σ_{u∈N(Ξ_{n−1})} L(u)/L(Ξ_{n−1}) = Σ_{u∈N(Ξ_{n−1})} e^{−γ(S(u)−S(Ξ_{n−1}))}/m(γ).     (4.13)

Then it is easily checked that the (M_n, Q_n)_{n∈ℕ} are i.i.d. under P̂ with distribution given by

    P̂{(M, Q) ∈ A} = E( Σ_{i=1}^{N} (e^{−γX_i}/m(γ)) 1_A( e^{−γX_i}/m(γ), Σ_{j=1}^{N} e^{−γX_j}/m(γ) ) )
                   = E( Σ_{|u|=1} L(u) 1_A( L(u), Σ_{|v|=1} L(v) ) ).

In particular,

    P̂{Q ∈ dx} = x P{W₁ ∈ dx}     (4.14)

and

    P̂{Q = 0} = 0.     (4.15)
which is in accordance with the definition given in (4.2). As we see from (4.12),

    Π_n = M₁ ⋯ M_n = L(Ξ_n),     n ∈ ℕ₀.     (4.16)
4.5 Proofs for Section 4.2
Here is the lemma that provides the connection between (W_n)_{n∈ℕ₀} and the perpetuity generated by (M_n, Q_n)_{n∈ℕ}. Let A be the σ-algebra generated by (M_n, Q_n)_{n∈ℕ} and the family of displacements of the children of the Ξ_n relative to their mother, i.e., of {S(u) : u ∈ N(Ξ_n), n ∈ ℕ₀}. For n ∈ ℕ and k = 1, …, n, put also

    R_{n,k} := Σ_{u∈N(Ξ_{k−1})\{Ξ_k}} (L(u)/L(Ξ_{k−1})) ( W_{n−k}(u) − 1 )

and notice that Ê(R_{n,k} | A) = 0 because each W_{n−k}(u) is independent of A with mean one.
Lemma 4.4.1 With the previous notation the following identities hold true for each n ∈ ℕ₀:

    W_n = Σ_{k=1}^{n} Π_{k−1}(Q_k + R_{n,k}) − Σ_{k=1}^{n−1} Π_k     P̂-a.s.     (4.17)

and

    Ê(W_n | A) = Σ_{k=1}^{n} Π_{k−1} Q_k − Σ_{k=1}^{n−1} Π_k     P̂-a.s.     (4.18)
Proof Each v ∈ T_n has a most recent ancestor in (Ξ_k)_{k∈ℕ₀}. By using this and recalling (4.13) and (4.16), one can easily see that

    W_n = L(Ξ_n) + Σ_{k=1}^{n} Σ_{u∈N(Ξ_{k−1})\{Ξ_k}} L(u) W_{n−k}(u)
        = Π_n + Σ_{k=1}^{n} Π_{k−1} ( Q_k + R_{n,k} − L(Ξ_k)/L(Ξ_{k−1}) )
        = Π_n + Σ_{k=1}^{n} ( Π_{k−1}(Q_k + R_{n,k}) − Π_k ),

which obviously gives (4.17). But the second assertion is now immediate in view of Ê(Π_{k−1} R_{n,k} | A) = Π_{k−1} Ê(R_{n,k} | A) = 0 a.s. □
Proof of Theorem 4.2.1 Sufficiency. Suppose first that (4.4) and (4.5) hold true. Recalling (4.14), we infer Σ_{k≥1} Π_{k−1} Q_k < ∞ P̂-a.s. by Theorem 2.1.1. Since W_n is nonnegative with Ê(W_n | A) bounded by Σ_{k≥1} Π_{k−1} Q_k (see (4.18)), we conclude lim inf_{n→∞} W_n < ∞ P̂-a.s. As (1/W_n)_{n∈ℕ₀} constitutes a positive, and thus P̂-a.s. convergent, supermartingale by Lemma 4.3.1, we further infer W = lim inf_{n→∞} W_n and thereupon the desired P̂{W < ∞} = 1.
Necessity. Assume now that (W_n)_{n∈ℕ₀} is uniformly P-integrable, so that EW = 1 and thus P̂{W < ∞} = 1 by Lemma 4.3.1(2). Furthermore, P̂{W* < ∞} = 1 in view of (4.11). The inequality

    W_n ≥ L(Ξ_{n−1}) Σ_{v∈N(Ξ_{n−1})} L(v)/L(Ξ_{n−1}) = Π_{n−1} Q_n     P̂-a.s.,

in combination with P̂{M = 0} = 0, P̂{M = 1} < 1 (see (4.3)) and P̂{Q = 0} = 0 (see (4.15)), allows us to appeal to Theorem 2.1.1 to conclude the validity of (4.4) and (4.5). □
A similar argument can be used to deduce Theorem 4.2.4 from Theorems 1.3.1 and 2.1.4. We omit the details, which can be found in Theorem 1.4 of [6].
4.6 Bibliographic Comments

The martingale (W_n) defined in (4.1) has been extensively studied in the literature; the first results were obtained in [179] and [35].
Theorem 4.2.1 For the case (A1), this is due to Biggins [35] and Lyons [202], see
also [187]. In the present form, the result has been obtained in [6] following an
earlier work [135].
Theorem 4.2.4 is Theorem 1.4 of [6]. Under the x log x condition various moment
results for W, the a.s. limit of .Wn /, can be found in [10, 36, 43, 136, 158, 196, 200,
243]. A counterpart of Theorem 4.2.4 for concave unbounded f was obtained in
[230]. In particular, the cited result covers some slowly varying f .
There are basically two probabilistic approaches towards finding conditions for the existence of EΦ(W) for suitable functions Φ. One method, worked out in [135] and [158], hinges on getting first a moment-type result for perpetuities and then translating it into the framework of branching random walks. The second approach, first used in [13] for Galton–Watson processes and further elaborated in [10], relies on the observation that BRWs bear a certain double martingale structure which allows the repeated application of the convex function inequalities due to Burkholder, Davis and Gundy (see, for instance, Theorem 2 on p. 409 in [68]) for martingales. Both approaches have their merits and limitations. Roughly speaking, the double martingale argument requires as indispensable ingredients only that Φ be convex and at most of polynomial growth. On the other hand, it also comes with a number of tedious technicalities caused by the repeated application of the convex function inequalities. The basic tool of the first method is only Jensen's inequality for conditional expectations (see [6] for more details), but it relies heavily on the existence of a nonnegative concave function Ψ that is equivalent at ∞ to the function Φ(x)/x. This clearly imposes a strong restriction on the growth of Φ.
Section 4.3 The construction of the modified BRW is based on [38] and [202].
Lemma 4.3.1 is a combination of Proposition 12.1 and Theorem 12.1 in [38] and
Proposition 2 in [124].
Chapter 5
Application to the Bernoulli Sieve
The definition of the Bernoulli sieve, which is an infinite allocation scheme, can be found on p. 1. Assuming that the number of balls to be allocated equals n (in other words, using a sample of size n from a uniform distribution on [0,1]), denote by K_n the number of occupied boxes and by M_n the index of the last occupied box. Also, put L_n := M_n − K_n and note that L_n equals the number of empty boxes within the occupancy range (i.e., we only count the empty boxes with indices not exceeding M_n).

The purpose of this chapter is two-fold. First, we present all the results accumulated to date concerning weak convergence of the finite-dimensional distributions of (L_{[e^{ut}]})_{u>0} as t → ∞. Second, we demonstrate that some of these results (namely, those given in Theorem 5.1.3) can be derived from Theorem 3.3.21, which is a statement about weak convergence of finite-dimensional distributions of a particular random process with immigration. The connection is hidden, and we shall spend some time uncovering it.
5.1 Weak Convergence of the Number of Empty Boxes

Put μ := E|log W| and ν := E|log(1 − W)|.

Case I, in which μ, ν < ∞: L_n converges in distribution to some L with a mixed Poisson distribution (Theorem 5.1.1).

Case II, in which μ = ∞ and ν < ∞: L_n becomes asymptotically negligible (Theorem 5.1.2).

Case III, in which μ < ∞ and ν = ∞: There are several possible modes of weak convergence of (L_{[e^{ut}]})_{u>0}, properly normalized and centered (Theorem 5.1.3).

Case IV, in which μ = ν = ∞: The asymptotics of L_n is determined by the behavior of the ratio P{W ≤ t}/P{1 − W ≤ t} as t → 0+. When the distribution of W assigns much more mass to the neighborhood of 1 than to that of 0 (equivalently, the ratio goes to 0), the number of empty boxes becomes asymptotically large. In this situation the finite-dimensional distributions of (L_{[e^{ut}]})_{u>0}, properly normalized without centering, converge weakly under a condition of regular variation (Theorem 5.1.3). If the roles of 0 and 1 are interchanged, L_n converges to zero in probability (Theorem 5.1.2). When the tails are comparable, the finite-dimensional distributions of (L_{[e^{ut}]})_{u>0} converge weakly (Theorem 5.1.4).
Theorem 5.1.1 Suppose that the distribution of |log W| is nonlattice and that μ, ν < ∞. Then L_n →^d L as n → ∞, and the distribution of L is mixed Poisson.
We are aware of two cases in which the distribution of L can be explicitly identified. It is easily checked that the distribution of L₁ is geometric with parameter EW. Curiously, the same is true for all n ∈ ℕ provided that the distribution of W is symmetric about the midpoint 1/2.

Example 5.1.1 If W =^d 1 − W, then L_n is geometrically distributed with success probability 1/2 for all n ∈ ℕ.
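Example 5.1.1 can be probed by simulation: W uniform on (0,1) satisfies W =^d 1 − W, so the empirical distribution of L_n should be geometric with success probability 1/2 (hence P{L_n = 0} = 1/2 and EL_n = 1). A minimal sketch; the ball and sample counts are arbitrary.

```python
import random

random.seed(11)

def empty_boxes(n_balls):
    """One Bernoulli sieve in a fresh environment W_1, W_2, ...; returns L_n."""
    R = [1.0]                 # R_j = W_1 * ... * W_j, extended lazily
    occupied = set()
    for _ in range(n_balls):
        u = random.random()
        j = 1
        while True:
            if len(R) <= j:
                R.append(R[-1] * random.random())   # W_j ~ uniform(0, 1)
            if R[j] < u:      # ball u lands in box j = min{ j : R_j < u }
                occupied.add(j)
                break
            j += 1
    return max(occupied) - len(occupied)            # L_n = M_n - K_n

runs = 20000
Ls = [empty_boxes(50) for _ in range(runs)]
p0 = Ls.count(0) / runs    # should be close to 1/2
mean_L = sum(Ls) / runs    # geometric(1/2) on {0, 1, ...} has mean 1
print(p0, mean_L)
```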
Example 5.1.2 If W has a beta distribution with parameters θ > 0 and 1, i.e., P{W ∈ dx} = θ x^{θ−1} 1_{(0,1)}(x) dx, then L has a mixed Poisson distribution with random parameter θ|log(1 − W)|. In other words,

    E s^L = Γ(1 + θ) Γ(1 + θ − θs) / Γ(1 + 2θ − θs),     s ∈ [0, 1].     (5.1)
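Formula (5.1) can be cross-checked numerically: reading the mixed Poisson parameter as θ|log(1−W)|, one has E s^L = E(1−W)^{θ(1−s)} for W beta(θ, 1), which a crude midpoint-rule integration compares against the Gamma-function expression (the θ and s grids below are arbitrary).

```python
import math

def pgf_formula(theta, s):
    """Right-hand side of (5.1)."""
    return (math.gamma(1.0 + theta) * math.gamma(1.0 + theta - theta * s)
            / math.gamma(1.0 + 2.0 * theta - theta * s))

def pgf_integral(theta, s, steps=50000):
    """E (1 - W)^(theta*(1-s)) for W ~ beta(theta, 1), by the midpoint rule."""
    h = 1.0 / steps
    acc = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        acc += theta * x ** (theta - 1.0) * (1.0 - x) ** (theta * (1.0 - s))
    return acc * h

pairs = [(t, s) for t in (0.5, 1.0, 2.0) for s in (0.0, 0.3, 0.7, 1.0)]
errs = [abs(pgf_formula(t, s) - pgf_integral(t, s)) for t, s in pairs]
print(max(errs))
```

Note that for θ = 1 (uniform W) the formula gives P{L = 0} = Γ(2)²/Γ(3) = 1/2, consistent with the geometric(1/2) limit of Example 5.1.1.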
Then L_n →^P 0 as n → ∞.
In the next theorem we investigate the cases where the distribution of |log W| belongs to the domain of attraction of an α-stable distribution, α ∈ (0,1) ∪ (1,2]. In particular, we treat two situations: μ < ∞ and ν = ∞; μ = ν = ∞ and the left tail of 1 − W dominates the left tail of W.

where μ = E|log W| < ∞ and V_β is a centered Gaussian process with

    P{|log W| > t} ~ t^{−α} ℓ(t)   and   P{|log(1 − W)| > t} ~ t^{−β} ℓ̂(t),     t → ∞,

for some α ∈ (0,1), some β ∈ [0, α] and some ℓ and ℓ̂ slowly varying at ∞. If α = β, assume additionally that

Then

    P{|log W| > t} ~ c P{|log(1 − W)| > t} ~ t^{−α} ℓ(t),     t → ∞,

for some α ∈ (0,1), some c > 0 and some ℓ slowly varying at ∞. Then
    L_{[e^{ut}]} ⇒^{f.d.} Σ_k 1_{{ W_α^{(1/c,α)}(t_k) ≤ u < W_α^{(1/c,α)}(t_k) + j_k }} =: R_{α,c}(u),     t → ∞.

For fixed u > 0, the distribution of R_{α,c}(u) is geometric with success probability c(c + 1)^{−1}, i.e.,

    P{R_{α,c}(u) = k} = (c/(c+1)) (1/(c+1))^k,     k ∈ ℕ₀.
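As a sanity check, the stated probabilities indeed sum to 1 and give mean 1/c; the value of c below is an arbitrary illustrative choice (the tail beyond the truncation is astronomically small).

```python
c = 2.5
p = c / (c + 1.0)                                   # success probability
pmf = [p * (1.0 / (c + 1.0)) ** k for k in range(200)]
total = sum(pmf)                                    # should be (numerically) 1
mean = sum(k * q for k, q in enumerate(pmf))        # geometric mean (1-p)/p = 1/c
print(total, mean)
```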
    lim_{n→∞} E( (1 − W)^n ) / E( W^n ) = c ∈ (0, ∞),

which is implied by
5.2 Poissonization and De-Poissonization

Instead of the scheme with n balls we shall work with a Poissonized version of the Bernoulli sieve in which the successive allocation times of the balls (the points U_k) over the boxes (the intervals (R_j, R_{j−1}]) are given by the sequence (T_k)_{k∈ℕ}. More precisely, the point U_k hits some box at time T_k. Thus, the random number π(t) of balls will be allocated over the boxes within [0, t]. Denote by π_j(t) the number of balls which fall into the j-th box within [0, t]. It is clear that, given the sequence R, first, for each j, the process (π_j(t))_{t≥0} is a Poisson process with intensity P_j = R_{j−1} − R_j, and second, for different j's, these processes are independent. It is this latter property which demonstrates the advantage of the Poissonized scheme over the original one.

Put M(t) := M_{π(t)}, K(t) := K_{π(t)}, and L(t) := L_{π(t)}. For instance, L(t) is then the number of empty boxes within the occupancy range obtained by throwing π(t) balls. The Bernoulli sieve can be interpreted as an infinite allocation scheme in the random environment (W_k) which is given by i.i.d. random variables. The first two results of the present section reveal that one can investigate the asymptotics of a relatively simple functional which is determined by the environment (W_j)_{j∈ℕ} alone rather than that of L(t).
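The convenience of the Poissonized scheme can be seen in a toy simulation: conditionally on the environment, the box counts at time t are independent Poisson variables with means t·P_j. Below only the means are checked against t·P_j; the truncated environment fragment and all numerical values are arbitrary choices (balls falling beyond the listed boxes are simply ignored).

```python
import math
import random

random.seed(5)

W = [0.5, 0.6, 0.4, 0.7]        # a fixed environment fragment
R = [1.0]
for w in W:
    R.append(R[-1] * w)         # R_j = W_1 * ... * W_j
P = [R[j - 1] - R[j] for j in range(1, len(R))]   # box probabilities P_j

t = 40.0
runs = 5000

def poisson(lam):
    """Inverse-transform sampling of a Poisson random variable."""
    u, k, p = random.random(), 0, math.exp(-lam)
    c = p
    while u > c:
        k += 1
        p *= lam / k
        c += p
    return k

counts = [[0] * len(P) for _ in range(runs)]
for r in range(runs):
    n_balls = poisson(t)        # pi(t): Poisson number of balls by time t
    for _ in range(n_balls):
        u = random.random()
        for j in range(1, len(R)):
            if R[j] < u <= R[j - 1]:
                counts[r][j - 1] += 1
                break           # balls outside the first boxes are ignored

means = [sum(c[j] for c in counts) / runs for j in range(len(P))]
expected = [t * p for p in P]
print(means, expected)
```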
Let (Ŝ_n)_{n∈ℕ₀} be the zero-delayed ordinary random walk defined by

    Ŝ₀ := 0,     Ŝ_n := |log W₁| + … + |log W_n|,     n ∈ ℕ,

and put η̂_n := |log(1 − W_n)| for n ∈ ℕ.
for any function a(t) satisfying lim_{t→∞} a(t) = ∞. In other words, the finite-dimensional distributions of the process ( L(e^{ut}) − Σ_{k≥0} 1_{{Ŝ_k ≤ ut < Ŝ_k + η̂_{k+1}}} )_{u>0} are tight.
Lemma 5.2.3 given below allows us to implement a de-Poissonization, i.e., a reverse transition from the scheme with a Poisson number of balls to the original scheme with a deterministic number of balls.

Lemma 5.2.3 With no assumptions on the expectation of |log W|,

    L(e^{ut}) − L_{[e^{ut}]} ⇒^{f.d.} 0,     t → ∞.

Put ν̂(t) := inf{k ∈ ℕ : Ŝ_k > t} for t ≥ 0 and denote by

    Û(t) := E ν̂(t) = Σ_{k≥0} P{Ŝ_k ≤ t},     t ≥ 0,

the corresponding renewal function.
Set R.t/ WD E..et //. Using Markov’s inequality and the fact that the renewal
b is nondecreasing we obtain
function U.t/
n ˇ o
ˇ
P b .R.t// b
.t/ 1f0<R.t/t g > " ˇ R.t/
ˇ
ˇ
"1 E b .R.t// b
.t/ 1f0<R.t/t g ˇ R.t/
D "1 Ub t C R.t/ t U.t/
b 1f0<R.t/t g
b C / U.t/
"1 U.t b
for all > 0 and " > 0. This in combination with (6.10) yields
n ˇ o
ˇ
lim P b .t/ 1f0<R.t/t g > " ˇ R.t/ D 0 almost surely:
.R.t// b
t!1
Consequently
P
b
.R.t// b
.t/ 1f0<R.t/t g ! 0
for any > 0 and " > 0. With this at hand, recalling (5.5) and using the absolute
continuity of the distribution of E we conclude that
n o
lim sup P b .t/ 1fR.t/t> g > " PfE > g
.R.t// b
t!1
and thereupon
n o
lim lim sup P b .R.t// b
.t/ 1fR.t/t> g > " D 0:
!1 t!1
Step 2 We are looking for a good approximation for K(e^t), the number of boxes discovered by the Poisson process within [0, e^t]. More precisely, we shall prove that

    K(e^t) − Σ_{k≥0} ( 1 − exp(−e^{t−Ŝ_k}(1 − W_{k+1})) ) →^P 0,

where π_k(e^t) is the number of balls (in the Poissonized scheme) landing in the k-th box within [0, e^t]. In view of

    E Σ_{k≥0} ( exp(−e^{t−Ŝ_k}(1 − W_{k+1})) − exp(−2e^{t−Ŝ_k}(1 − W_{k+1})) )
        = ∫_{[0,∞)} ( φ(e^{t−y}) − φ(2e^{t−y}) ) dÛ(y),

where φ(y) := E e^{−y(1−W)}. By Lemma 6.2.2, the function g₀(y) = φ(e^y) − φ(2e^y) is dRi on ℝ. Applying now the key renewal theorem for distributions with infinite mean (Proposition 6.2.4) justifies relation (5.8).
Step 3 We intend to prove the relation

    Z₁(t) := Σ_{k≥0} ( 1 − exp(−e^{t−Ŝ_k}(1 − W_{k+1})) ) 1_{{Ŝ_k > t}} →^P 0.
    Z₂(t) := Σ_{k≥0} exp(−e^{t−Ŝ_k}(1 − W_{k+1})) 1_{{Ŝ_k + η̂_{k+1} > t}} 1_{{Ŝ_k ≤ t}} →^P 0.

To this end, consider

    Σ_{k≥0} ( 1 − exp(−e^{t−Ŝ_k}(1 − W_{k+1})) ) 1_{{Ŝ_k ≤ t < Ŝ_k + η̂_{k+1}}}

and show that lim_{t→∞} E Z_{2i}(t) = 0, i = 1, 2. Indeed, according to Lemma 6.2.2, the functions g₂(y) = E( exp(−e^y(1 − W)) 1_{{1−W>e^{−y}}} ) and g₃(y) = E( (1 − exp(−e^y(1 − W))) 1_{{1−W≤e^{−y}}} ) are dRi on [0,∞). Hence, by the key renewal theorem (Proposition 6.2.4),

    E Z₂₁(t) = ∫_{[0,t]} g₂(t − y) dÛ(y) → 0   and   E Z₂₂(t) = ∫_{[0,t]} g₃(t − y) dÛ(y) → 0.
    L(e^t) − Σ_{k≥0} 1_{{Ŝ_k ≤ t < Ŝ_k + η̂_{k+1}}} = ( M(e^t) − Σ_{k≥0} 1_{{Ŝ_k ≤ t}} ) − ( K(e^t) − Σ_{k≥0} 1_{{Ŝ_k + η̂_{k+1} ≤ t}} ),

and combining the conclusions of the four steps finishes the proof of Lemma 5.2.1. □
Proof of Lemma 5.2.2 If the distribution of |log W| is nonlattice, the proof of Lemma 5.2.2 exploits the same formulae as the proof of Lemma 5.2.1. However, while implementing Step 1 one has to use Blackwell's theorem (formula (6.8)) for the finite-mean case rather than for the infinite-mean case. Also, while implementing Steps 2 through 4 one has to use Lemma 6.2.8 rather than Proposition 6.2.4.

If the distribution of |log W| is l-lattice for some l > 0, an additional argument is only needed for Step 1 of the proof of Lemma 5.2.1.

Step 1 Fix any δ > 0 and pick m ∈ ℕ such that δ ≤ ml. With this and ε > 0, we use the inequality

    P{ ( |ν̂(R(t)) − ν̂(t)| / a(t) ) 1_{{0<R(t)−t≤δ}} > ε | R(t) } ≤ ( Û(t + ml) − Û(t) ) / ( ε a(t) ),

which yields

    ( |ν̂(R(t)) − ν̂(t)| / a(t) ) 1_{{0<R(t)−t≤δ}} →^P 0.

To implement Steps 2 through 4 one may use Lemma 6.2.8 and argue as in the nonlattice case. □
Proof of Lemma 5.2.3 It suffices to check that

    K(t) − K_{[t]} →^P 0   and   M(t) − M_{[t]} →^P 0     (5.9)

and use the Cramér–Wold device (see p. 232). In view of the inequality P{M(t) ≠ M_{[t]}} ≤ P{K(t) ≠ K_{[t]}}, which can be easily checked, only the first relation in (5.9) needs a proof.

We first show that

    K(t + x√t) − K(t − x√t) →^P 0.     (5.10)

We have

    E( K(t + x√t) − K(t − x√t) ) = ∫_{[0,∞)} ( φ((t − x√t)e^{−y}) − φ((t + x√t)e^{−y}) ) dÛ(y)
for large enough $t$, where $\varphi(y)=Ee^{-y(1-W)}$. Since the function $y\mapsto-\varphi'(y)$ is nonincreasing, we infer
\[
\varphi((t-x\sqrt t)e^{-y})-\varphi((t+x\sqrt t)e^{-y})
\le-\varphi'\big((t-x\sqrt t)e^{-y}\big)\,2x\sqrt t\,e^{-y}.
\]
According to Lemma 6.2.2, the function $g_4(y)=-\varphi'(e^y)e^y$ is dRi on $\mathbb R$. This and Lemma 6.2.8 together imply that
\[
\int_{[0,\infty)}-\varphi'\big((t-x\sqrt t)e^{-y}\big)(t-x\sqrt t)e^{-y}\,{\rm d}\hat U(y)=O(1).
\]
Hence, $\lim_{t\to\infty}E\big(K(t+x\sqrt t)-K(t-x\sqrt t)\big)=0$ for any $x>0$, which entails (5.10).
The process $(K(s))_{s\ge0}$ is a.s. nondecreasing. This implies that, for any $\varepsilon>0$,
\[
P\{|K_{[t]}-K(t)|>2\varepsilon\}
\le P\{K(t+x\sqrt t)-K(t-x\sqrt t)>2\varepsilon\}+P\{|T_{[t]}-t|>x\sqrt t\}.
\]
Recalling (5.10) and using the central limit theorem for $T_{[t]}$ yield
\[
\limsup_{t\to\infty}P\{|K_{[t]}-K(t)|>2\varepsilon\}\le P\{|\mathcal N(0,1)|>x\},
\]
where $\mathcal N(0,1)$ denotes a random variable with the standard normal distribution. Sending $x$ to $\infty$ establishes the first relation in (5.9). The proof of Lemma 5.2.3 is complete. $\Box$
the number of zero decrements of the Markov chain before the absorption. Assuming that $s_{i,i-1}>0$ for all $M+1\le i\le n$, the absorption at state $M$ is certain, and $Z_n$ is a.s. finite.
Neglecting zero decrements of $I$ along with renumbering of indices leads to a decreasing Markov chain $J:=(J_k(n))_{k\in\mathbb N_0}$ with $J_0(n)=n$ and transition probabilities
\[
\tilde s_{i,j}=\frac{s_{i,j}}{1-s_{i,i}},\qquad i>j\ge M,
\]
so that
\[
Z_n\ \overset{d}{=}\ \pi\Big(\sum_{k\ge 0}\rho_{J_k(n)}1_{\{J_k(n)>M\}}\Big), \tag{5.11}
\]
where $(\rho_j)_{M+1\le j\le n}$ are independent random variables which are independent of $J$, $\rho_j$ has an exponential distribution with mean $s_{j,j}/(1-s_{j,j})$, and $(\pi(t))_{t\ge0}$ is a unit-rate Poisson process which is independent of everything else. Since $Z_n$ converges in distribution, the sequence in the parentheses must converge, too. The proof of Lemma 5.3.1 is complete. $\Box$
5.4 Proofs for Section 5.1 203
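The mixed Poisson structure exploited above is easy to probe numerically. As a hedged illustration (the exponential mixing law below is chosen purely for convenience and does not come from the text): if $Z=\pi(\Lambda)$ with $(\pi(t))_{t\ge0}$ a unit-rate Poisson process and $\Lambda$ an independent standard exponential random variable, then $P\{Z=k\}=2^{-(k+1)}$, $k\in\mathbb N_0$, a geometric law.

```python
import random

random.seed(7)

# Z = pi(Lambda) with Lambda ~ Exp(1): a mixed Poisson random variable.
# Integrating out Lambda gives P{Z = k} = 2^{-(k+1)}.
def sample_mixed_poisson():
    lam = random.expovariate(1.0)           # random Poisson parameter
    # count arrivals of a unit-rate Poisson process on [0, lam]
    t, k = random.expovariate(1.0), 0
    while t <= lam:
        t += random.expovariate(1.0)
        k += 1
    return k

n = 40000
draws = [sample_mixed_poisson() for _ in range(n)]
for k in range(3):
    freq = sum(d == k for d in draws) / n
    print(k, round(freq, 3), 2.0 ** (-(k + 1)))
```

The empirical frequencies should track the geometric probabilities $1/2$, $1/4$, $1/8$ up to Monte Carlo error.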
Now we present one more construction of the Bernoulli sieve which highlights
the connection with nonincreasing Markov chains. The Bernoulli sieve can be
realized as a random allocation scheme in which n ‘balls’ are allocated over an
infinite array of ‘boxes’ indexed 1, 2, … according to the following rule. At the first
round each of n balls is dropped in box 1 with probability W1 . At the second round
each of the remaining balls is dropped in box 2 with probability W2 , and so on. The
procedure proceeds until all n balls get allocated. Let Ik .n/ denote the number of
remaining balls (out of $n$) after the $k$th round. Then $I:=(I_k(n))_{k\in\mathbb N_0}$ is an instance of the nonincreasing Markov chains described above with $M=0$ and
\[
s_{i,j}=\binom{i}{j}\,EW^j(1-W)^{i-j},\qquad j\le i. \tag{5.12}
\]
Furthermore, the number of empty boxes $L_n$ satisfies the recursion
\[
L_0=0,\qquad L_n\ \overset{d}{=}\ L_{I_n(1)}+1_{\{I_n(1)=n\}},\qquad n\in\mathbb N, \tag{5.13}
\]
where on the right-hand side $I_n(1)$ is assumed independent of $(L_n)_{n\in\mathbb N}$.
The fact that the distribution of L is mixed Poisson follows from Lemma 5.3.1
because Ln is the number of zero decrements before absorption of the nonincreasing
Markov chain $I$. The proof of Theorem 5.1.1 is complete. $\Box$
Proof for Example 5.1.1 The argument is based on recurrence (5.13) for the marginal distributions of the $L_n$. The symmetry $W\overset{d}{=}1-W$ yields $EW^k=E(1-W)^k$ for $k\in\mathbb N$ and thereupon $P\{I_n(1)=0\}=P\{I_n(1)=n\}$. Assuming now that $P\{L_n=i\}=2^{-i-1}$ for all $i<k$ (the induction hypothesis), we have
\[
P\{L_n=k\}=\sum_{j=1}^{n-1}P\{I_n(1)=j\}P\{L_j=k\}+P\{I_n(1)=n\}P\{L_n=k-1\}
\]
\[
=2^{-k-1}\big(1-2P\{I_n(1)=0\}\big)+P\{I_n(1)=0\}\,2^{-k}=2^{-k-1}.
\]
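The geometric law just obtained can be checked by simulation. A minimal sketch, assuming — purely for illustration — that $W$ is uniform on $[0,1]$, a particular case of the symmetry $W\overset{d}{=}1-W$:

```python
import random

random.seed(1)

def empty_boxes(n):
    """Run the Bernoulli sieve on n balls with W uniform on [0,1];
    return the number of empty boxes below the last occupied box."""
    counts = []
    remaining = n
    while remaining > 0:
        w = random.random()                              # frequency parameter of this box
        c = sum(random.random() < w for _ in range(remaining))
        counts.append(c)
        remaining -= c
    # the loop can only terminate on a box that received at least one ball,
    # so the empty boxes are exactly the zero entries of `counts`
    return counts.count(0)

trials, n = 2000, 500
draws = [empty_boxes(n) for _ in range(trials)]
print(sum(d == 0 for d in draws) / trials)   # should be near 1/2
print(sum(draws) / trials)                   # geometric(1/2) mean is 1
```

The empirical frequency of $\{L_n=0\}$ and the empirical mean should be close to $1/2$ and $1$, respectively, in agreement with $P\{L_n=k\}=2^{-k-1}$.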
differences $E_{n,n}-E_{n-1,n}$, $E_{n-1,n}-E_{n-2,n}$, …, $E_{1,n}-E_{0,n}$ are independent exponential random variables with expectations $1, 1/2,\ldots,1/n$. Since the Poisson process has independent increments, $M_1,\ldots,M_n$ are independent. Since the Poisson process has stationary increments we infer
\[
M_j\ \overset{d}{=}\ \#\{k\in\mathbb N: \hat S_k<E_{n-j+1,n}-E_{n-j,n}\}
\]
and further
\[
Es^{M_j}=Ee^{-\theta(1-s)(E_{n-j+1,n}-E_{n-j,n})}
=\frac{j}{j+\theta(1-s)}
=\sum_{k\ge0}s^k\Big(\frac{\theta}{j+\theta}\Big)^k\frac{j}{j+\theta}.
\]
Thus, $M_j$ has a geometric distribution with success probability $j/(\theta+j)$. Counting the number of empty gaps $(\hat S_k,\hat S_{k+1})$ which fit in $(E_{n-j,n},E_{n-j+1,n})$ we see that this is $M_n$ for $j=n$ and $(M_j-1)^+$ for $j=2,\ldots,n$, whence
\[
Es^{L_n}=\frac{n}{n+\theta(1-s)}\prod_{j=1}^{n-1}\frac{j(j+2\theta-\theta s)}{(j+\theta)(j+\theta-\theta s)},
\]
and (5.1) follows by sending $n\to\infty$ and evaluating the infinite product in terms of the gamma function (see Example 1 on p. 239 in [262]). The generating function of the stated mixed Poisson distribution equals
\[
Ee^{-\theta|\log(1-W)|(1-s)}=E(1-W)^{\theta(1-s)}
=\frac{\Gamma(1+\theta)\Gamma(1+\theta-\theta s)}{\Gamma(1+2\theta-\theta s)}
\]
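The convergence of $Es^{L_n}$ to the gamma-function limit above is easy to sanity-check numerically; a sketch (the values $\theta=2$ and $s=1/2$ below are arbitrary illustrative choices):

```python
import math

theta, s, n = 2.0, 0.5, 100000

# finite-n generating function E s^{L_n}
val = n / (n + theta * (1 - s))
for j in range(1, n):
    val *= j * (j + 2 * theta - theta * s) / ((j + theta) * (j + theta - theta * s))

# limit: Gamma(1+theta) Gamma(1+theta-theta s) / Gamma(1+2 theta-theta s)
limit = (math.gamma(1 + theta) * math.gamma(1 + theta - theta * s)
         / math.gamma(1 + 2 * theta - theta * s))
print(val, limit)
```

For $\theta=2$, $s=1/2$ the limit equals $\Gamma(3)\Gamma(2)/\Gamma(4)=1/3$, and the finite product agrees to roughly $O(1/n)$. For $\theta=1$ the product telescopes to $1/(2-s)$, the generating function of the geometric$(1/2)$ law of Example 5.1.1.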
and use the equality (consult the definition of $N(1/c,\alpha)$ in ‘List of notation’ for the notation used)
\[
E\,e^{-zR^{(\delta)}_{\alpha,c}(u)}
=E\exp\Big({-\int_{[0,\infty)}\int_{(\delta,\infty)}\big(1-e^{-z1_{\{W_\alpha(s)\le u<W_\alpha(s)+y\}}}\big)\,{\rm d}s\,\nu_{1/c,\alpha}({\rm d}y)}\Big).
\]
Thus, the distribution of $R_{\alpha,c}(u)$ is mixed Poisson with the parameter
\[
c\int_{[0,u]}(u-y)^{-\alpha}\,{\rm d}W_\alpha(y)=J_{\alpha,\alpha}(u).
\]
If $\mu=\infty$ and $\nu<\infty$, then $t\mapsto P\{\hat\eta_1>t\}$ is dRi on $\mathbb R_+$ by Lemma 6.2.1(a), and an application of Proposition 6.2.4 yields
\[
E\sum_{k\ge0}1_{\{\hat S_k\le t<\hat S_k+\hat\eta_{k+1}\}}
=\int_{[0,t]}P\{\hat\eta_1>t-y\}\,{\rm d}\hat U(y)\to 0.
\]
If $\mu=\infty$ and $\nu=\infty$, the convergence $\lim_{t\to\infty}\int_{[0,t]}P\{\hat\eta_1>t-y\}\,{\rm d}\hat U(y)=0$ follows by Lemma 6.2.15 (with $\xi=|\log W|$, $f(t)=P\{|\log(1-W)|>t\}$ and $c=0$). The proof of Theorem 5.1.2 is complete. $\Box$
for appropriate functions $a(t)>0$ and $b(t)\in\mathbb R$ and appropriate limit processes $\Theta$. For instance, if ${\rm Var}(\log W)<\infty$, then one can take $b(t)=\mu^{-1}\int_0^tP\{|\log(1-W)|>y\}\,{\rm d}y$ and $\Theta(u)=V_\beta(u)$. Since in all cases (D1)–(D4) of Theorem 3.3.21 $\lim_{t\to\infty}a(t)=\infty$, Lemmas 5.2.1 (in case (D4)), 5.2.2 (in cases (D1)–(D3)), and 5.2.3 enable us to conclude that the last centered formula holds with $L_{[e^{ut}]}$ replacing $\sum_{k\ge0}1_{\{\hat S_k\le ut<\hat S_k+\hat\eta_{k+1}\}}$. The proof of Theorem 5.1.3 is complete. $\Box$
5.5 Bibliographic Comments 207
The study of the Bernoulli sieve was initiated in [97]. Since then, several papers [7,
99–101, 103–105, 138, 139, 150, 220] have appeared which analyzed various
asymptotic properties of the Bernoulli sieve. While the articles [97, 100, 104] give a
fairly complete account of one-dimensional convergence of the number of occupied
boxes Kn , functional limit theorems for Kn were recently obtained in [7]. Further,
we note that one-dimensional convergence of the index of the last occupied box Mn
and the number of empty boxes Ln was investigated in [104]. See also [138, 139] for
further results on Ln .
Recall that the Bernoulli sieve is the infinite allocation scheme with random frequencies. Infinite allocation schemes with nonrandom frequencies have also
attracted some attention in the literature. Starting from Karlin’s fundamental paper
[171] which followed earlier work in [22, 72], several aspects of the scheme were
investigated in [23, 45, 79, 82, 90, 130, 211, 215, 216]. Surveys on the infinite
allocation schemes can be found in [24, 98]. It should be emphasized that the infinite
allocation schemes differ radically from the classical allocation scheme with finitely
many positive frequencies (detailed treatment of the latter scheme can be found in
monograph [184]).
Distributional convergence in Theorem 5.1.1 was proved in Theorem 3.3 of [105]
by using the argument outlined in the proof above. An alternative proof can be found
in Theorem 2.2(a) of [104]. Actually, the distributional convergence is accompanied
with convergence of moments of all positive orders. While the convergence of
expectations was established in [105], convergence of higher moments was later
derived in [220].
Examples 5.1.1 and 5.1.2 are Proposition 7.1 in [101] and Proposition 5.2 in
[104], respectively.
Theorems 5.1.4 and 5.1.3 are taken from [150]; they strengthen Theorem 1.1 in [139] and Theorem 1.2 in [138], respectively, which only deal with one-dimensional convergence.
Section 5.2 follows the presentation in [150]. Although the papers [104, 138, 139]
offer several approaches to the de-Poissonization of the number of empty boxes, the
result of Lemma 5.2.3 is the strongest one among those known to the author, even when only one-dimensional distributions are concerned.
Section 5.3 is based on [104, 139]. In particular, Lemma 5.3.1 is Proposition 1.3
in [139]. We refer to [138, 139] for some other results which hold for general
nonincreasing Markov chains.
Chapter 6
Appendix
Rt
(d) Let D 1 and a > 0. Then t 7! a g.y/dy is a slowly varying function and
tg.t/
lim R t D 0:
t!1 g.y/dy
0
Part (a) of Lemma 6.1.4 is Theorem 1.5.2 in [44]; part (b) is a consequence of
Theorem 1.5.3 in [44]; parts (c) and (d) are two versions of Karamata’s theorem
(Proposition 1.5.8 and Proposition 1.5.9a in [44], respectively).
This and the next section are concerned with ordinary random walks. In the present
section we mainly treat random walks with nonnegative jumps, Proposition 6.2.6
being the only exception. In the next section random walks with two-sided jumps
are in focus.
Further, let $(S_n)_{n\in\mathbb N_0}$ be the zero-delayed ordinary random walk defined by $S_0=0$ and $S_n=\xi_1+\ldots+\xi_n$, $n\in\mathbb N$.
For $x\in\mathbb R$, set $\nu(x)=\inf\{k\in\mathbb N_0: S_k>x\}$. Plainly, $\nu(x)=0$ for $x<0$. Since $\lim_{n\to\infty}S_n=+\infty$ a.s., we have $\nu(x)<\infty$ a.s. for each $x\ge0$. Put further
\[
U(x)=E\nu(x)=\sum_{n\ge0}P\{S_n\le x\},\qquad x\in\mathbb R.
\]
Proof Fix any $\gamma>0$. Since $P\{\xi=0\}<1$ we have $Ee^{-\gamma\xi}<1$ and further
\[
U(x)=\sum_{n\ge0}P\{S_n\le x\}\le e^{\gamma x}\sum_{n\ge0}Ee^{-\gamma S_n}=e^{\gamma x}\big(1-Ee^{-\gamma\xi}\big)^{-1}<\infty
\]
by Markov's inequality. $\Box$
Distributional Subadditivity of $\nu(x)$
\[
\nu(t+s)-\nu(t)=
\begin{cases}
\inf\{k\in\mathbb N: S_{\nu(t)}-t+\xi_{1+\nu(t)}+\ldots+\xi_{k+\nu(t)}>s\}, & \text{if }S_{\nu(t)}-t\le s,\\
0, & \text{if }S_{\nu(t)}-t>s.
\end{cases}
\]
The second term on the right-hand side has the same distribution as $\nu(s)$ and is independent of $\nu(t)$ because the sequence $(S_{k+\nu(t)}-S_{\nu(t)})_{k\in\mathbb N}$ has the same distribution as $(S_k)_{k\in\mathbb N}$ and is independent of $\nu(t)$ (this is a consequence of the fact that $\nu(t)$ is a stopping time w.r.t. the filtration generated by $(\xi_i)$).
Integrating (6.2) over $[0,\infty)$ immediately gives (6.3) for $t,s\ge0$. If $t,s<0$, then both sides of (6.3) equal zero. Finally, if $t<0$ and $s\ge0$, (6.3) reads $U(t+s)\le U(s)$. This obviously holds, for $U$ is nondecreasing on $\mathbb R$. $\Box$
Elementary Renewal Theorem
\[
\lim_{x\to\infty}\frac{U(x)}{x}=\mu^{-1}
\]
if $\mu=E\xi<\infty$, whereas the limit equals zero if $\mu=\infty$.
Erickson's Inequality
\[
\frac{x}{\int_0^xP\{\xi>y\}\,{\rm d}y}\le U(x)\le\frac{2x}{\int_0^xP\{\xi>y\}\,{\rm d}y},\qquad x>0. \tag{6.5}
\]
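Erickson's inequality lends itself to a quick Monte Carlo check in the infinite-mean case. A sketch under the illustrative assumption $P\{\xi>y\}=y^{-1/2}$ for $y\ge1$ (so that $\mu=\infty$ and $\int_0^xP\{\xi>y\}\,{\rm d}y=2\sqrt x-1$ for $x\ge1$); the distribution and the level $x=100$ are arbitrary choices, not taken from the text:

```python
import random

random.seed(3)

def nu(x):
    """nu(x) = inf{k : S_k > x} for the walk with P{xi > y} = y^(-1/2), y >= 1."""
    s, k = 0.0, 0
    while s <= x:
        u = 1.0 - random.random()        # uniform on (0, 1], avoids division by zero
        s += u ** (-2.0)                 # xi = U^(-2) has the stated Pareto tail
        k += 1
    return k

x, reps = 100.0, 20000
u_hat = sum(nu(x) for _ in range(reps)) / reps   # estimates U(x) = E nu(x)

m = 2.0 * x ** 0.5 - 1.0                 # m(x) = int_0^x P{xi > y} dy
print(x / m, u_hat, 2 * x / m)           # lower bound, estimate, upper bound
```

The estimate should land comfortably between the two Erickson bounds $x/m(x)$ and $2x/m(x)$.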
Lorden's Inequality
\[
U(x)\le\mu^{-1}x+\mu^{-2}E\xi^2,\qquad x\ge0. \tag{6.6}
\]
In the $\delta$-lattice case,
\[
U(\delta n)\le\mu^{-1}\delta n+\mu^{-1}\delta+\mu^{-2}E\xi^2,\qquad n\in\mathbb N. \tag{6.7}
\]
Blackwell's Theorem For any $h>0$,
\[
\lim_{t\to\infty}\big(U(t+h)-U(t)\big)=h\mu^{-1} \tag{6.8}
\]
if $\mu=E\xi\in(0,\infty)$, whereas the limit equals zero if $\mu=\infty$.
where the finiteness is ensured by part (a) of the definition above. As for the second, observe that for any $t\in\mathbb R$ there exists $n\in\mathbb Z$ such that $t\in[n-1,n)$. It remains to note that $\lim_{n\to\pm\infty}\sup_{n-1\le y<n}f(y)=0$ and that $f(t)\le\sup_{n-1\le y<n}f(y)$.
Every nonnegative dRi function is improperly Riemann integrable (see Remark 3.10.2 in [236]). The other implication does not necessarily hold, because an improperly Riemann integrable function may be unbounded in the vicinity of $\pm\infty$, whereas a dRi function is globally bounded. A concrete example is given by $g(t)=\sum_{n\ge1}n^{1/2}1_{[n,\,n+n^{-2})}(t)$, which is obviously unbounded yet improperly Riemann integrable. The function $g^\ast(t)=\sum_{n\ge1}1_{[n,\,n+n^{-2})}(t)$ is bounded and improperly Riemann integrable, but it is not dRi, for $\sum_{n\ge1}\sup_{n\le y<n+1}g^\ast(y)=\infty$.
For the last estimate we have used the change of variable and the condition $\eta\in(0,1]$ a.s. Further, with $z\in(0,1]$ fixed, the function $y\mapsto y^{-1}(1-e^{-yz})e^{-yz}$ is nonincreasing on $[0,\infty)$. Hence the function $y\mapsto e^{-y}g_0(y)$ is nonincreasing, too. By the same reasoning, with $z\in(0,1]$ fixed, the functions $y\mapsto y^{-1}(1-e^{-yz})$ and $y\mapsto1_{[z,\infty)}(y^{-1})$ are nonincreasing on $(0,\infty)$, hence, so are their product and the function $y\mapsto e^{-y}g_3(y)$. The monotonicity of $g_4$ is obvious.
In view of
\[
\int_{-\infty}^0g_1(y)\,{\rm d}y=\int_0^1u^{-1}E\big(1-e^{-u\eta}\big)\,{\rm d}u\le\int_0^1E\eta\,{\rm d}u=E\eta\in(0,1],
\]
6.2 Renewal Theory 215
for any $h>0$. Thus, repeating literally the proof of the key renewal theorem given on p. 241 in [236] leads to the desired conclusion. $\Box$
Corollary 6.2.5 Suppose that $E\xi=\infty$. Then $t-S_{\nu(t)-1}\overset{P}{\to}+\infty$ as $t\to\infty$.
Proof For $x>0$,
\[
P\{t-S_{\nu(t)-1}<x\}=\int_{(t-x,\,t]}P\{\xi>t-y\}\,{\rm d}U(y)=\int_{[0,t]}f_x(t-y)\,{\rm d}U(y),
\]
where $f_x(z):=P\{\xi>z\}1_{[0,x)}(z)$ is dRi on $[0,\infty)$ as a nonincreasing Lebesgue-integrable function (see Lemma 6.2.1(a)). Thus, the last integral converges to zero as $t\to\infty$, which proves the claim. $\Box$
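The divergence of the undershoot asserted in Corollary 6.2.5 can be watched at work numerically. A sketch, again under the illustrative infinite-mean assumption $P\{\xi>y\}=y^{-1/2}$, $y\ge1$ (the levels $t$ and the threshold $5$ are arbitrary):

```python
import random

random.seed(5)

def undershoot(t):
    """t - S_{nu(t)-1}: distance from the last renewal point below t up to t."""
    s = 0.0
    while True:
        u = 1.0 - random.random()          # uniform on (0, 1]
        step = u ** (-2.0)                 # xi with P{xi > y} = y^(-1/2), y >= 1
        if s + step > t:
            return t - s
        s += step

reps = 5000
fracs = {}
for t in (100.0, 10000.0):
    fracs[t] = sum(undershoot(t) < 5.0 for _ in range(reps)) / reps
    print(t, fracs[t])   # P{t - S_{nu(t)-1} < 5} shrinks as t grows
```

The fraction of small undershoots at $t=10000$ should be markedly smaller than at $t=100$, in line with $t-S_{\nu(t)-1}\overset{P}{\to}\infty$.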
In Proposition 6.2.3 we did not discuss the lattice case, for it is subsumed in a
more general result, a version of the key renewal theorem for the lattice distributions
concentrated on the whole line. Even though the result is widely used in the
literature, we are not aware of any reference which would give a proof.
Proposition 6.2.6 Assume that $\xi$ has a $\delta$-lattice distribution concentrated on $\mathbb R$ and $\mu=E\xi\in(0,\infty)$, and let $f:\mathbb R\to[0,\infty)$ satisfy $\sum_{j\in\mathbb Z}f(x+\delta j)<\infty$ for some $x\in\mathbb R$. Then
\[
\lim_{n\to\infty}E\sum_{k\ge0}f(x+\delta n-S_k)=\mu^{-1}\delta\sum_{j\in\mathbb Z}f(x+\delta j).
\]
Suppose first that $\xi\ge0$ a.s. Set $u(\delta n):=\sum_{k\ge0}P\{S_k=\delta n\}$. In view of (6.9), for any $\varepsilon\in(0,\mu^{-1}\delta)$ there exists a $j_0\in\mathbb N$ such that
\[
\mu^{-1}\delta-\varepsilon\le u(\delta j)\le\mu^{-1}\delta+\varepsilon,\qquad j\ge j_0.
\]
Hence
\[
E\sum_{k\ge0}f(x+\delta n-S_k)
=\sum_{j=0}^{j_0}f\big(x+\delta(n-j)\big)u(\delta j)+\sum_{j\ge j_0+1}f\big(x+\delta(n-j)\big)u(\delta j)
\]
\[
\le\sum_{j=0}^{j_0}f\big(x+\delta(n-j)\big)u(\delta j)+\big(\mu^{-1}\delta+\varepsilon\big)\sum_{j\le n-j_0-1}f(x+\delta j). \tag{6.11}
\]
The assumption $\sum_{j\in\mathbb Z}f(x+\delta j)<\infty$ ensures $\lim_{n\to\infty}f(x+\delta n)=0$, whence
\[
\limsup_{n\to\infty}E\sum_{k\ge0}f(x+\delta n-S_k)\le\mu^{-1}\delta\sum_{j\in\mathbb Z}f(x+\delta j)
\]
on letting in (6.11) first $n\to\infty$ and then $\varepsilon$ to zero. The converse inequality for the lower limit follows analogously.
The general case when $\xi$ takes values of both signs will now be handled by reducing it to the case $\xi>0$ a.s. via a stopping time argument. We use the representation
\[
E\sum_{k\ge0}f(x+\delta n-S_k)
=E\sum_{j\ge0}\sum_{i=\sigma_j}^{\sigma_{j+1}-1}f(x+\delta n-S_i)
=E\sum_{k\ge0}\hat f(x+\delta n-S_{\sigma_k}),
\]
where $(\sigma_k)_{k\in\mathbb N_0}$ are the successive strictly increasing ladder epochs for $(S_n)$ (see ‘List of Notation’ for the precise definition), and $\hat f(x):=E\sum_{j=0}^{\sigma-1}f(x-S_j)$, $x\in\mathbb R$ (we write $\sigma$ for $\sigma_1$). The sequence $(S_{\sigma_k})_{k\in\mathbb N_0}$ is an ordinary random walk with positive jumps having the same distribution as $S_\sigma$. Observe that $ES_\sigma=E\sigma\,E\xi$ by Wald's identity, and that the distribution of $S_\sigma$ is $\delta$-lattice. Since
\[
\sum_{j\in\mathbb Z}\hat f(x+\delta j)
=\sum_{j\in\mathbb Z}E\sum_{k=0}^{\sigma-1}\sum_{i\le0}f\big(x+\delta(j-i)\big)1_{\{S_k=\delta i\}}
\]
\[
=E\sum_{k=0}^{\sigma-1}\sum_{i\le0}1_{\{S_k=\delta i\}}\sum_{j\in\mathbb Z}f\big(x+\delta(j-i)\big)
=E\sigma\sum_{j\in\mathbb Z}f(x+\delta j)<\infty,
\]
Proposition 6.2.7 Suppose that the distribution of $\xi$ is nonlattice and $\mu=E\xi<\infty$. Then, as $t\to\infty$,
\[
\big(t-S_{\nu(t)-1},\ S_{\nu(t)}-t\big)\ \overset{d}{\longrightarrow}\ (\theta,\eta), \tag{6.12}
\]
where the distribution of $(\theta,\eta)$ is given by
\[
P\{\theta>u,\,\eta>v\}=\mu^{-1}\int_{u+v}^\infty P\{\xi>y\}\,{\rm d}y,\qquad u,v\ge0.
\]
In particular,
\[
P\{\theta\le u\}=P\{\eta\le u\}=\mu^{-1}\int_0^uP\{\xi>y\}\,{\rm d}y,\qquad u\ge0.
\]
Furthermore, $(\theta,\eta)$ has the same distribution as $(UV,(1-U)V)$ where $U$ and $V$ are independent random variables, $U$ has a uniform distribution on $[0,1]$, and the distribution of $V$ is given by $P\{V\in{\rm d}x\}=\mu^{-1}xP\{\xi\in{\rm d}x\}$, $x>0$.
Proof Note that (6.12) is equivalent to
\[
\lim_{t\to\infty}P\{t-S_{\nu(t)-1}\le u,\ S_{\nu(t)}-t>v\}=\mu^{-1}\int_{u+v}^\infty P\{\xi>y\}\,{\rm d}y \tag{6.13}
\]
for all nonnegative $u$ and $v$ because the limit distribution is continuous on $[0,\infty)\times[0,\infty)$.
For fixed $u,v\ge0$, the function $f_{u,v}(t):=P\{\xi>u+v+t\}$, $t\ge0$, is nonincreasing and Lebesgue integrable on $\mathbb R_+$. The latter is a consequence of
\[
\int_0^\infty f_{u,v}(t)\,{\rm d}t=\int_{u+v}^\infty P\{\xi>t\}\,{\rm d}t\le\int_0^\infty P\{\xi>t\}\,{\rm d}t=\mu<\infty.
\]
218 6 Appendix
as $t\to\infty$, which proves (6.13). The formulae for the marginal distributions follow by setting $u=0$ and $v=0$, respectively, in the formula for the joint distribution. Finally, the last assertion is a consequence of
\[
P\{UV>u,\,(1-U)V>v\}
=\mu^{-1}\int_{(u+v,\infty)}(x-u-v)\,P\{\xi\in{\rm d}x\}
=\mu^{-1}\int_{u+v}^\infty P\{\xi>y\}\,{\rm d}y. \qquad\Box
\]
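For exponentially distributed $\xi$ the limit law of Proposition 6.2.7 is explicit: with $P\{\xi>y\}=e^{-y}$ one gets $P\{\theta>u,\,\eta>v\}=e^{-(u+v)}$, so $\theta$ and $\eta$ are independent standard exponential random variables. A short simulation sketch (the level $t=50$ is an arbitrary choice):

```python
import random

random.seed(11)

def under_over(t):
    """Return (undershoot, overshoot) of a unit-rate exponential walk at level t."""
    s = 0.0
    while True:
        step = random.expovariate(1.0)
        if s + step > t:
            return t - s, s + step - t
        s += step

reps, t = 20000, 50.0
pairs = [under_over(t) for _ in range(reps)]
mu_u = sum(p[0] for p in pairs) / reps
mu_o = sum(p[1] for p in pairs) / reps
cov = sum((p[0] - mu_u) * (p[1] - mu_o) for p in pairs) / reps
print(mu_u, mu_o, cov)   # both means near 1, covariance near 0
```

Both empirical means should be close to $1$ and the empirical covariance close to $0$, reflecting independence of the limiting undershoot and overshoot in the exponential case.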
integrals when $t$ approaches $\infty$ along subsequences other than $(nl)$ in the $l$-lattice case. Lemma 6.2.8 given below fills this gap.
Lemma 6.2.8 If $f:\mathbb R\to\mathbb R_+$ is dRi on $\mathbb R_+$, then
\[
\limsup_{t\to\infty}\int_{[0,t]}f(t-y)\,{\rm d}U(y)<\infty.
\]
Proof Fix any $l>0$. Partitioning $\mathbb R_+$ into the intervals $[(n-1)l,nl)$, $n\in\mathbb N$, we obtain
\[
\int_{[0,t]}f(t-y)\,{\rm d}U(y)
\le\sum_{n\ge1}\sup_{(n-1)l\le s<nl}f(s)\,\big(U(t-(n-1)l)-U(t-nl)\big)
\le U(l)\sum_{n\ge1}\sup_{(n-1)l\le s<nl}f(s),
\]
having utilized the subadditivity of $U$ on $\mathbb R$ (see (6.3)) for the last inequality. It remains to remark that the series on the right-hand side converges because $f$ is dRi. $\Box$
Given below is a counterpart of the key renewal theorem in the case when $f$ is nonintegrable.
Lemma 6.2.9 Let $0\le r_1<r_2\le1$. Suppose that $f:\mathbb R_+\to\mathbb R_+$ is either nondecreasing and $\lim_{t\to\infty}\big(f(t)/\int_0^tf(y)\,{\rm d}y\big)=0$, or nonincreasing and, if $r_2=1$, locally integrable. If $\mu=E\xi\in(0,\infty)$ and $\lim_{t\to\infty}\int_{(1-r_2)t}^{(1-r_1)t}f(y)\,{\rm d}y=\infty$, then
\[
\int_{[r_1t,\,r_2t]}f(t-y)\,{\rm d}U(y)\ \sim\ \mu^{-1}\int_{(1-r_2)t}^{(1-r_1)t}f(y)\,{\rm d}y,\qquad t\to\infty.
\]
Proof If the distribution of $\xi$ is nonlattice or 1-lattice, the proof runs the same path as that of Theorem 4 in [248] which investigates the case $r_1=0$, $r_2=1$. If the distribution of $\xi$ is $l$-lattice, the distribution of $l^{-1}\xi$ is 1-lattice, and the assertion follows by an obvious rescaling. $\Box$
Remark 6.2.10 When $f$ is nondecreasing, the condition
\[
\lim_{t\to\infty}\frac{f(t)}{\int_0^tf(y)\,{\rm d}y}=0
\]
cannot be dispensed with. For instance, for $f(t)=e^t$ the left-hand side of the asymptotic relation of Lemma 6.2.9 is not asymptotically equivalent to the right-hand side, whereas $\mu^{-1}\int_0^tf(y)\,{\rm d}y\sim\mu^{-1}e^t$ as $t\to\infty$.
Remark 6.2.11 We now give an example which demonstrates that the result of Lemma 6.2.9 may fail to hold for ill-behaved $f$. Let, for instance, $f(t)=1_{\mathbb Q_+^c}(t)$, where $\mathbb Q_+^c$ is the set of positive irrational numbers. Then $\int_0^tf(y)\,{\rm d}y=t$. Now suppose the distribution of $\xi$ is concentrated at rational points in $(0,\infty)$. Note that, choosing these points properly, the distribution of $\xi$ can be made lattice as well as nonlattice. The points of increase of the renewal function $U(y)$ are rational points only. Hence $\int_{[0,t]}f(t-y)\,{\rm d}U(y)=0$ for rational $t$.
\[
\limsup_{t\to\infty}\frac{f_1(t)+f_2(t)}{\int_0^t\big(f_1(y)-f_2(y)\big)\,{\rm d}y}<\infty
\]
by the subadditivity of $U$ (see (6.3)). As for $I_1(t)$, we obtain, using again the subadditivity of $U$,
\[
I_1(t)=f_1(t)-f_2(t)+\sum_{j=0}^{[t]-1}\int_{(j,\,j+1]}\big(f_1(t-y)-f_2(t-y)\big)\,{\rm d}U(y)
\]
\[
\le f_1(t)-f_2(t)+\sum_{j=0}^{[t]-1}\big(f_1(t-j)-f_2(t-j-1)\big)\big(U(j+1)-U(j)\big)
\]
\[
\le f_1(t)+U(1)\sum_{j=0}^{[t]-1}\big(f_1(t-j)-f_2(t-j-1)\big)
\le f_1(t)+U(1)\sum_{j=0}^{[t]-1}\big(f_1([t]+1-j)-f_2([t]-1-j)\big)
\]
\[
=U(1)\int_2^{[t]}\big(f_1(y)-f_2(y)\big)\,{\rm d}y+O\big(f_1(t)+f_2(t)\big).
\]
Proof (a) For any $\delta\in(0,1)$ there exists a $t_0>0$ such that
\[
1-\delta\le f(t)/g(t)\le1+\delta \tag{6.14}
\]
for all $t\ge t_0$.
Case $r_2<1$. We have, for $t\ge(1-r_2)^{-1}t_0$,
\[
(1-\delta)\int_{[r_1t,\,r_2t]}g(t-y)\,{\rm d}U(y)
\le\int_{[r_1t,\,r_2t]}f(t-y)\,{\rm d}U(y)
\le(1+\delta)\int_{[r_1t,\,r_2t]}g(t-y)\,{\rm d}U(y).
\]
Dividing both sides by $\int_{[r_1t,\,r_2t]}g(t-y)\,{\rm d}U(y)$ and sending $t\to\infty$ and then $\delta\to0{+}$ gives the result.
Case $r_2=1$. Since $g$ is monotone, it is locally integrable. Further, $\lim_{t\to\infty}\int_0^tg(y)\,{\rm d}y=\infty$ and $\lim_{t\to\infty}\big(g(t)/\int_0^tg(y)\,{\rm d}y\big)=0$. Hence Lemma 6.2.9 applies and yields
\[
\lim_{t\to\infty}\int_{[r_1t,\,t]}g(t-y)\,{\rm d}U(y)=\infty.
\]
yields
\[
\limsup_{t\to\infty}\frac{\int_{[r_1t,\,t]}f(t-y)\,{\rm d}U(y)}{\int_{[r_1t,\,t]}g(t-y)\,{\rm d}U(y)}\le1+\delta.
\]
where the first equivalence follows from Lemma 6.2.13(a) and the second is a consequence of Lemma 6.2.9 (observe that, with $h=f$ or $h=g$, the relations $\lim_{t\to\infty}\big(h(t)/\int_0^th(y)\,{\rm d}y\big)=0$ and $\lim_{t\to\infty}\int_{(1-r_2)t}^{(1-r_1)t}h(y)\,{\rm d}y=\infty$ hold by Lemma 6.1.4(c) because $h$ is regularly varying of index $\beta>-1$). Finally, using Lemma 6.1.4(c) we obtain
\[
\mu^{-1}\int_{(1-r_2)t}^{(1-r_1)t}g(y)\,{\rm d}y
\sim\frac{tg(t)}{\mu(1+\beta)}\big((1-r_1)^{1+\beta}-(1-r_2)^{1+\beta}\big)
\sim\frac{tf(t)}{\mu(1+\beta)}\big((1-r_1)^{1+\beta}-(1-r_2)^{1+\beta}\big).
\]
Proof Denote by $V(t):=t-S_{\nu(t)-1}$ the undershoot of $(S_n)_{n\in\mathbb N_0}$ at $t$ and put $h(t):=f(t)/P\{\xi>t\}$ for $t\ge0$. Under the sole assumption $E\xi=\infty$ we have $V(t)\overset{P}{\to}\infty$ by Corollary 6.2.5, whence $h(V(t))\overset{P}{\to}c$. If $c<\infty$, the function $h$ is bounded and $\lim_{t\to\infty}Eh(V(t))=c$ by the dominated convergence theorem. If $c=\infty$, we obtain $\lim_{t\to\infty}Eh(V(t))=c=\infty$ by Fatou's lemma. In view of the representation
\[
\int_{[0,t]}f(t-y)\,{\rm d}U(y)=Eh(V(t)),\qquad t\ge0,
\]
in particular,
\[
\lim_{t\to\infty}\frac{P\{\xi>t\}}{f(t)}\int_{[0,t]}f(t-y)\,{\rm d}U(y)=\frac{\Gamma(1+\rho)}{\Gamma(1-\alpha)\Gamma(1+\alpha+\rho)};
\]
(b) $\int_{[0,t]}f_1(t-x)\,{\rm d}U(x)=o\big(f(t)/P\{\xi>t\}\big)$ as $t\to\infty$ for any positive locally bounded function $f_1$ such that $f_1(t)=o(f(t))$ as $t\to\infty$.
Proof (a) With $r(t):=f(t)/P\{\xi>t\}$ for $t\ge0$, the expression under the double limit in (6.15) equals $Er(V(t))1_{\{V(t)\le(1-\gamma)t\}}/r(t)$, where, as before, $V(t)=t-S_{\nu(t)-1}$ is the undershoot of $(S_n)_{n\in\mathbb N_0}$ at $t$.
Case 1, in which $\rho>-\alpha$, or $\rho=-\alpha$ and $\lim_{t\to\infty}r(t)=\infty$. If $\rho>-\alpha$, then, by Lemma 6.1.4(b), there exists a nondecreasing function $q$ such that $r(t)\sim q(t)$ as $t\to\infty$. If $\rho=-\alpha$, such a function $q$ exists by assumption. Now fix $\varepsilon>0$ and let $t_0>0$ be such that $(1-\varepsilon)q(t)\le r(t)\le(1+\varepsilon)q(t)$ for all $t\ge t_0$. Then
When $\gamma\uparrow1$, the last integral goes to zero, which proves (6.15).
Case 2, in which $\rho=-\alpha$ and $r$ is bounded. Then $Er(V(t))1_{\{V(t)\le(1-\gamma)t\}}\le{\rm const}\,P\{V(t)\le(1-\gamma)t\}$ for $t\ge0$. The rest of the proof is the same as in the previous case.
Turning to the second assertion of part (a), we observe that
\[
\frac{P\{\xi>t\}}{f(t)}\int_{[0,t]}f(t-y)\,U({\rm d}y)
=P\{\xi>t\}\,U(t)\int_{[0,1]}\frac{f(t(1-y))}{f(t)}\,U_t({\rm d}y)
\]
for $t\ge t_0$. According to part (a), the first term on the right-hand side grows like ${\rm const}\,f(t)/P\{\xi>t\}$. By Blackwell's theorem in the infinite mean case (see (6.10)), $\lim_{t\to\infty}\big(U(t)-U(t-t_0)\big)=0$. Dividing the inequality above by $f(t)/P\{\xi>t\}$ and sending first $t\to\infty$ and then $\delta\to0{+}$ finishes the proof. $\Box$
Set $\varrho(t)=\#\{k\in\mathbb N_0: S_k\le t\}$ for $t\ge0$, where the sequence $(S_k)$ is as defined in (3.4). Although the process $(\varrho(t))_{t\ge0}$ is known as the stationary renewal process, it is a process with stationary increments rather than a stationary process.
Lemma 6.2.17 Suppose that $E\xi^r<\infty$ for some $r>2$. Then there exists a Brownian motion $S_2$ such that, for some random, almost surely finite $t_0>0$ and deterministic $A>0$,
\[
|\varrho(t)-\mu^{-1}t-\mu^{-3/2}S_2(t)|\le At^{1/r},\qquad t\ge t_0.
\]
Proof We have
\[
\sup_{0\le u\le t}|S_{[u]}-\mu u-S_2(u)|=O(t^{1/r})\quad\text{a.s.}
\]
and thereupon
\[
\sup_{0\le u\le t}|\varrho(u)-\mu^{-1}u-\mu^{-3/2}S_2(u)|=O(t^{1/r})\quad\text{a.s.}
\]
by Theorem 3.1 in [71]. This proves the lemma with a possibly random $A$. As noted in Remark 3.1 of the cited paper, the Blumenthal 0–1 law ensures that the constant $A$ can be taken deterministic. $\Box$
In this section we discuss ordinary random walks $(S_n)_{n\in\mathbb N_0}$ with two-sided jumps, Proposition 6.3.4 being the only exception. Before we formulate the first result the following notation has to be recalled: $J(x)=x/\int_0^xP\{\xi>y\}\,{\rm d}y$, $x>0$;
Theorem 6.3.1 Let $(S_n)_{n\in\mathbb N_0}$ be negatively divergent. For a function $f$ as defined in Theorem 1.3.1, the following assertions are equivalent:
C /J .b
1 > Ef .b C / D Ef . sup Sn /J . sup Sn /
0n 1 0n 1
C
2Ef . C / R 2Ef . C /J . C / < 1:
C
0 PfS > ygdy
We have used Erickson's inequality (6.5) for the first inequality and $P\{S_1>y\}\le P\{\sup_{n\ge1}S_n>y\}$, $y>0$, for the second.
(6.19)$\Rightarrow$(6.16). According to the formula given on p. 1236 in [3],
\[
Ef\Big(\sup_{0\le n<\infty}S_n\Big)=(1-p)\sum_{n\ge0}p^nEf(V_n),
\]
where $p:=P\{\sigma_w<\infty\}$ and $(V_n)_{n\in\mathbb N_0}$ is a zero-delayed ordinary random walk with increments having distribution $P\{S_{\sigma_w}\in\cdot\,|\,\sigma_w<\infty\}$. It suffices to show that $Ef(V_1)<\infty$ (which is equivalent to (6.19)) entails $Ef(V_n)=O(n^\delta)$ as $n\to\infty$ for some $\delta>0$.
The condition $Ef(V_1)<\infty$ ensures that $EV_1^\beta<\infty$ for some $\beta\in(0,1]$. Set $f_\beta(x)=f(x^{1/\beta})$ and observe that $f_\beta$ still possesses all the properties of $f$ stated in the paragraph preceding the proof of (6.18)$\Rightarrow$(6.19). By the subadditivity of $x\mapsto x^\beta$ on $\mathbb R_+$,
\[
Ef(V_n)=Ef_\beta(V_n^\beta)\le Ef_\beta\Big(\sum_{k=1}^n(V_k-V_{k-1})^\beta\Big)
\le c_1\bigg(Ef_\beta\Big|\sum_{k=1}^n\big((V_k-V_{k-1})^\beta-EV_1^\beta\big)\Big|+f_\beta\big(nEV_1^\beta\big)\bigg)
\]
for some $c_1>0$. Since $\big(\sum_{k=1}^{m\wedge n}((V_k-V_{k-1})^\beta-EV_1^\beta)\big)_{m\in\mathbb N_0}$ is a martingale w.r.t. the natural filtration, we can use the Burkholder–Davis–Gundy inequality (Theorem 2 on p. 409 in [68]) to infer
\[
Ef_\beta\Big|\sum_{k=1}^n\big((V_k-V_{k-1})^\beta-EV_1^\beta\big)\Big|
\le Ef_\beta\Big(\sup_{m\le n}\Big|\sum_{k=1}^m\big((V_k-V_{k-1})^\beta-EV_1^\beta\big)\Big|\Big)
\]
\[
\le c_2\Big(f_\beta\big(nE|V_1^\beta-EV_1^\beta|\big)+Ef_\beta\Big(\sup_{m\le n}\big|(V_m-V_{m-1})^\beta-EV_1^\beta\big|\Big)\Big)
\le c_2\Big(f_\beta\big(nE|V_1^\beta-EV_1^\beta|\big)+nf_\beta\big(E|V_1^\beta-EV_1^\beta|\big)\Big)
\]
for some $c_2$ which does not depend on $n$. Thus, we have shown that $Ef(V_n)$ exhibits at most power growth. $\Box$
6.3 Ordinary Random Walks 229
Formula (6.20) is needed for the proof of Theorem 3.4.3. Recall that
\[
\sigma_w=\inf\{k\in\mathbb N: S_k\ge0\},
\]
and denote by $(\sigma_n)_{n\in\mathbb N_0}$ the successive weak ascending ladder epochs, $\sigma_0=0$. Then
\[
\sum_{n\ge0}Eh(S_n)
=E\sum_{n\ge0}\sum_{k=\sigma_n}^{\sigma_{n+1}-1}h(S_k)
=\sum_{n\ge0}\int_{[0,\infty)}E\big(h(x)+h(x+\xi_{\sigma_n+1})+\ldots+h(x+\xi_{\sigma_n+1}+\ldots+\xi_{\sigma_{n+1}-1})\big)\,{\rm d}P\{S_{\sigma_n}\le x\}
\]
\[
=\sum_{n\ge0}\int_{[0,\infty)}E\sum_{j\ge0}h(x+S_j)1_{\{\sigma_w>j\}}\,{\rm d}P\{S_{\sigma_n}\le x\}
=\int_{[0,\infty)}E\sum_{j\ge0}h(x+S_j)1_{\{\sigma_w>j\}}\,{\rm d}\Big(\sum_{n\ge0}P\{S_{\sigma_n}\le x\}\Big),
\]
where, by duality, $E\sum_{j\ge0}h(x+S_j)1_{\{\sigma_w>j\}}=\big(P\{\inf_{k\ge0}S_k=0\}\big)^{-1}Eh\big(x+\inf_{k\ge0}S_k\big)$. $\Box$
Lemma 6.3.3 is used in the proof of Theorem 1.4.6. Recall that $N(x)=\sum_{n\ge0}1_{\{S_n\le x\}}$ for $x\in\mathbb R$ is the number of visits of $(S_n)$ to $(-\infty,x]$.
Lemma 6.3.3 Let $p>0$ and let $I\subseteq\mathbb R$ be an open interval such that $E\big(\sum_{n\ge0}1_{\{S_n\in I\}}\big)^p\in(0,\infty)$. Then $E\big(\sum_{n\ge0}1_{\{S_n\in J\}}\big)^p<\infty$ for any bounded interval $J\subseteq\mathbb R$. In particular, $E(N(x))^p<\infty$ for some $x\in\mathbb R$ entails $E(N(y))^p<\infty$ for every $y\in\mathbb R$.
Proof Let $I=(a,b)$ be such that $E\big(\sum_{n\ge0}1_{\{S_n\in I\}}\big)^p\in(0,\infty)$. We assume w.l.o.g. that $-\infty<a<b<\infty$. We first show that
\[
E\Big(\sum_{n\ge0}1_{\{|S_n|<\varepsilon\}}\Big)^p<\infty\quad\text{for some }\varepsilon>0. \tag{6.22}
\]
Pick $\varepsilon>0$ so small that $I_\varepsilon:=(a+\varepsilon,b-\varepsilon)$ satisfies $E\big(\sum_{n\ge0}1_{\{S_n\in I_\varepsilon\}}\big)^p>0$. Then $P\{S_n\in I_\varepsilon\}>0$ for some $n\in\mathbb N$. In particular, $P\{\tau(I_\varepsilon)<\infty\}>0$, where $\tau(I_\varepsilon)=\inf\{n\in\mathbb N_0: S_n\in I_\varepsilon\}$. Using the strong Markov property at $\tau(I_\varepsilon)$, we get
\[
\infty>E\Big(\sum_{n\ge0}1_{\{S_n\in I\}}\Big)^p
\ge E\,1_{\{\tau(I_\varepsilon)<\infty\}}\Big(\sum_{n\ge\tau(I_\varepsilon)}1_{\{|S_n-S_{\tau(I_\varepsilon)}|<\varepsilon\}}\Big)^p
=P\{\tau(I_\varepsilon)<\infty\}\,E\Big(\sum_{n\ge0}1_{\{|S_n|<\varepsilon\}}\Big)^p.
\]
Splitting a bounded interval $J$ into subintervals $J_1,\ldots,J_m$ of length at most $\varepsilon$ and using the elementary inequality $(a_1+\ldots+a_m)^p\le(m^{p-1}\vee1)(a_1^p+\ldots+a_m^p)$, we infer
\[
E\Big(\sum_{n\ge0}1_{\{S_n\in J\}}\Big)^p\le(m^{p-1}\vee1)\sum_{k=1}^mE\Big(\sum_{n\ge0}1_{\{S_n\in J_k\}}\Big)^p.
\]
Therefore, it suffices to prove the result under the additional assumption that the length of $J$ is at most $\varepsilon$. Using the strong Markov property at $\tau(J):=\inf\{n\in\mathbb N_0: S_n\in J\}$ gives
\[
E\Big(\sum_{n\ge0}1_{\{S_n\in J\}}\Big)^p
\le E\,1_{\{\tau(J)<\infty\}}\Big(\sum_{n\ge\tau(J)}1_{\{|S_n-S_{\tau(J)}|<\varepsilon\}}\Big)^p
=P\{\tau(J)<\infty\}\,E\Big(\sum_{n\ge0}1_{\{|S_n|<\varepsilon\}}\Big)^p<\infty.
\]
This proves the first assertion of the lemma. Concerning the second, assume that $E(N(x))^p<\infty$ for some $x\in\mathbb R$. Then, for any $y>x$,
\[
E(N(y))^p\le(2^{p-1}\vee1)\Big(E(N(x))^p+E\Big(\sum_{n\ge0}1_{\{x<S_n\le y\}}\Big)^p\Big)<\infty,
\]
where the last term is finite by the first part of the lemma. $\Box$
Given below are counterparts for $(S_n)$ of the results obtained in Section 1.4 for the perturbed random walks. Recall that $\nu(x)$ and $\rho(x)$ denote the first-passage time of $(S_n)$ into $(x,\infty)$ and the last-exit time of $(S_n)$ from $(-\infty,x]$, respectively.
Proposition 6.3.4 Assume that $P\{\xi\ge0\}=1$ and let $\beta:=P\{\xi=0\}\in[0,1)$. Then for $a>0$ the following conditions are equivalent:
Theorem 6.3.6 Let $(S_n)_{n\in\mathbb N_0}$ be positively divergent with $P\{\xi<0\}>0$ and $a>0$. The following assertions are equivalent:
\[
\sum_{n\ge0}e^{an}P\{S_n\le x\}<\infty\quad\text{for some/all }x\ge0;
\]
where $\gamma_0$ is the unique positive number such that $Ee^{-\gamma_0\xi}=\inf_{t\ge0}Ee^{-t\xi}$.
Theorem 6.3.7 Let $(S_n)_{n\in\mathbb N_0}$ be positively divergent with $P\{\xi<0\}>0$ and $p>0$. The following assertions are equivalent:
Here we collect several results of various flavors that are frequently used throughout
the book.
Cramér–Wold Device The relation
\[
(X_t(u_1),\ldots,X_t(u_n))\ \overset{d}{\longrightarrow}\ (X(u_1),\ldots,X(u_n)),\qquad t\to\infty
\]
is equivalent to
\[
\sum_{k=1}^n\alpha_kX_t(u_k)\ \overset{d}{\longrightarrow}\ \sum_{k=1}^n\alpha_kX(u_k),\qquad t\to\infty \tag{6.23}
\]
for any real $\alpha_1,\ldots,\alpha_n$. If $X_t(u)\ge0$ a.s., it suffices that (6.23) holds for any nonnegative $\alpha_1,\ldots,\alpha_n$.
For the proof, see Theorem 29.4 on p. 397 in [41].
Lemma 6.4.1 Let $(S,d)$ be an arbitrary metric space. Suppose that $(Z_{k,n},Y_n)$ are random elements on $S\times S$. If $Z_{k,n}\Rightarrow Z_k$ on $(S,d)$ as $n\to\infty$, $Z_k\Rightarrow Z$ on $(S,d)$ as
6.4 Miscellaneous Results 233
k ! 1 and
(b) Assume that $f_t\in D[a,b]$ and that the random process $(R_t(y))_{a\le y\le b}$ is a.s. nondecreasing for each $t>0$. Assume further that $\lim_{t\to\infty}f_t(y)=f(y)$ uniformly in $y\in[a,b]$ and that $R_t\Rightarrow R$ as $t\to\infty$ in the $J_1$-topology on $D[a,b]$, the paths of $(R(y))_{a\le y\le b}$ being almost surely continuous. Then
\[
\int_{[a,b]}f_t(y)\,{\rm d}R_t(y)\ \overset{d}{\longrightarrow}\ \int_{[a,b]}f(y)\,{\rm d}R(y),\qquad t\to\infty.
\]
(c) Assume that the processes $R_t$ are a.s. right-continuous and nondecreasing for each $t\ge0$ and that $R_t\Rightarrow R$ as $t\to\infty$ locally uniformly on $D[0,\infty)$. Then, for any $\varepsilon\in(0,1)$ and any $\gamma\in\mathbb R$,
\[
\int_{[0,\,\varepsilon u]}(u-y)^{\gamma}\,{\rm d}R_t(y)\ \Rightarrow\ \int_{[0,\,\varepsilon u]}(u-y)^{\gamma}\,{\rm d}R(y),\qquad t\to\infty
\]
with the deterministic result: if $\lim_{t\to\infty}x_t=x$ in the $M_1$-topology on $D[a,b]$, then
\[
\lim_{t\to\infty}\int_{[a,b]}x_t(y)\,\mu_t({\rm d}y)=\int_{[a,b]}x(y)\,\mu({\rm d}y)
\]
and
\[
\sup_{u\in[a,b]}\Big|\int_0^{\varepsilon u}(u-y)^{\gamma-1}f_t(y)\,{\rm d}y-\int_0^{\varepsilon u}(u-y)^{\gamma-1}f(y)\,{\rm d}y\Big|
\le\sup_{u\in[a,b]}\int_0^{\varepsilon u}(u-y)^{\gamma-1}|f_t(y)-f(y)|\,{\rm d}y
\]
\[
\le\sup_{u\in[0,b]}|f_t(u)-f(u)|\,\sup_{u\in[a,b]}\int_0^{\varepsilon u}(u-y)^{\gamma-1}\,{\rm d}y\ \to\ 0
\]
as $t\to\infty$. $\Box$
Lemma 6.4.3 Let $(U_k)_{k\in\mathbb N}$ be independent random variables with a uniform distribution on $[0,1]$ and $T_1,T_2,\ldots$ the arrival times of a Poisson process with unit intensity. Then $\sum_{k=1}^n\varepsilon_{nU_k}$ converges vaguely as $n\to\infty$ to $\sum_{j\ge1}\varepsilon_{T_j}$.
Proof It suffices to prove the convergence of Laplace functionals
\[
\lim_{n\to\infty}E\exp\Big({-\sum_{k=1}^nf(nU_k)}\Big)=\exp\Big({-\int_0^\infty\big(1-\exp(-f(x))\big)\,{\rm d}x}\Big).
\]
The expression under the limit equals $\big(1-n^{-1}\int_0^n(1-\exp(-f(x)))\,{\rm d}x\big)^n$ for $n>\sup\{x: f(x)>0\}$ and converges to $\exp\big({-\int_0^\infty(1-\exp(-f(x)))\,{\rm d}x}\big)$ as $n\to\infty$. $\Box$
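The algebra at the end of the proof can be verified numerically. The test function $f=1_{[0,2]}$ below is a hypothetical stand-in (the lemma works with continuous $f$ of compact support, but the computation of the Laplace functional is the same): $E\exp(-\sum_{k\le n}f(nU_k))=\big(1-n^{-1}\cdot2(1-e^{-1})\big)^n\to\exp(-2(1-e^{-1}))$.

```python
import math
import random

random.seed(2)

n, trials = 500, 10000
c = 2.0 * (1.0 - math.exp(-1.0))    # integral of 1 - exp(-f) for f = 1_[0,2]

# Monte Carlo: exp(-sum_k f(n U_k)) = exp(-#{k : n U_k <= 2})
acc = 0.0
for _ in range(trials):
    hits = sum(random.random() * n <= 2.0 for _ in range(n))
    acc += math.exp(-hits)
mc = acc / trials

exact_n = (1.0 - c / n) ** n        # exact value of the Laplace functional
limit = math.exp(-c)                # Poisson limit
print(mc, exact_n, limit)
```

All three printed values should agree up to Monte Carlo error and an $O(1/n)$ discrepancy between $(1-c/n)^n$ and $e^{-c}$.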
For the renewal theory our favorite source is [236]. The books [18, 119, 251] are
also highly recommended. Erickson’s inequality (6.5) and Lorden’s inequality (6.6)
were originally proved in [88] and [201], respectively. Elegant alternative proofs
of these results can be found on pp. 153–154 in [68] and [65], respectively. The
elementary renewal theorem, Blackwell’s theorem, and the key renewal theorem
(Proposition 6.2.3) are classical results. A reader-friendly proof of Proposition 6.2.3
can be found on pp. 241–242 in [236]. The result of Proposition 6.2.4 was mentioned
on p. 959 in [139]. The key renewal theorem for the nonlattice distributions
concentrated on R was proved in Theorem 4.2 of [19]. Proposition 6.2.6 which can
be found in [157] is a counterpart of this result for lattice distributions. An idea of the
second part of our proof was borrowed from [19]. Proposition 6.2.7 is well known.
A fragment of this proposition which states that $(\theta,\eta)$ has the same distribution as $(UV,(1-U)V)$ came to our attention from [264]. Lemmas 6.2.2 and 6.2.8 are taken
from [150]. Lemmas 6.2.9, 6.2.13, 6.2.14, and 6.2.16(b) are borrowed from [148].
A version of Lemma 6.2.14 in the case where f is nondecreasing, r1 D 0, r2 D 1,
ˇ ¤ 0, and the distribution of is nonlattice was earlier obtained in Theorem 2.1 of
[217]. Lemma 6.2.12 is Lemma A.6 in [147]. Lemma 6.2.15 is Lemma 5.1 in [146].
Lemma 6.2.16(a) is a slight extension of Lemma 5.2 in [146]. Lemma 6.2.17 was
stated in [142].
The equivalence (6.16)$\Leftrightarrow$(6.18)$\Leftrightarrow$(6.19) of Theorem 6.3.1 is a result which is well known under additional restrictions on $\xi$ and/or $f$; see Theorem 1 in [162] for the case $E\xi\in(-\infty,0)$ and $f$ an (increasing) power function, Theorem 3 in [3] for the case $E\xi\in(-\infty,0)$ and regularly varying $f$, and Proposition 4.1 in [177] for the case $\lim_{n\to\infty}S_n=-\infty$ a.s. and $f$ again a power function. The equivalence of (6.17)
to all the other conditions of the theorem was first observed in Lemma 3.5 of [6]. The
present proof of Theorem 6.3.1 uses techniques pertaining to the theory of ordinary
random walks that were developed by the previous writers and techniques related
to perturbed random walks that were discussed in Section 1. We learned the proof
of equality (6.21) from Lemma 2 in [173]. Lemma 6.3.3 is Lemma 5.2 in [8]. The
implication that $E(N(x))^p<\infty$ for some $x\ge0$ entails $E(N(x))^p<\infty$ for all $x\ge0$ has earlier been proved on pp. 27–28 in [177] via an argument different from
ours. Proposition 6.3.4 is a version of Theorem 1 in [30]. A shorter proof can be
found in [151]. Theorems 6.3.5 and 6.3.6 were proved in [151]. Theorem 6.3.7 is a
combination of Theorem 2.1 in [177] and results obtained on pp. 27–28 of the same
paper.
Lemma 6.4.2(a,b) was originally proved in a slightly different form in Lemma
A.5 of [140]. Part (c) of this lemma was obtained in [143]. It seems that Lemma 6.4.2
does not follow from results obtained in [188] and Chapter VI, Section 6c in [160]
which are classical references concerning the convergence of stochastic integrals.
Bibliography
1. R. J. Adler, An introduction to continuity, extrema, and related topics for general Gaussian
processes. Institute of Mathematical Statistics, 1990.
2. A. Agresti, Bounds on the extinction time distribution of a branching process. Adv. Appl.
Probab. 6 (1974), 322–335.
3. G. Alsmeyer, On generalized renewal measures and certain first passage times. Ann. Probab.
20 (1992), 1229–1247.
4. G. Alsmeyer, J. D. Biggins and M. Meiners, The functional equation of the smoothing
transform. Ann. Probab. 40 (2012), 2069–2105.
5. G. Alsmeyer and P. Dyszewski, Thin tails of fixed points of the nonhomogeneous smoothing
transform. Preprint (2015) available at http://arxiv.org/abs/1510.06451
6. G. Alsmeyer and A. Iksanov, A log-type moment result for perpetuities and its application to
martingales in supercritical branching random walks. Electron. J. Probab. 14 (2009), 289–
313.
7. G. Alsmeyer, A. Iksanov and A. Marynych, Functional limit theorems for the number of
occupied boxes in the Bernoulli sieve. Stoch. Proc. Appl., to appear (2017).
8. G. Alsmeyer, A. Iksanov and M. Meiners, Power and exponential moments of the number of
visits and related quantities for perturbed random walks. J. Theoret. Probab. 28 (2015), 1–40.
9. G. Alsmeyer, A. Iksanov and U. Rösler, On distributional properties of perpetuities. J.
Theoret. Probab. 22 (2009), 666–682.
10. G. Alsmeyer and D. Kuhlbusch, Double martingale structure and existence of $\phi$-moments for weighted branching processes. Münster J. Math. 3 (2010), 163–211.
11. G. Alsmeyer and M. Meiners, Fixed points of inhomogeneous smoothing transforms. J.
Difference Equ. Appl. 18 (2012), 1287–1304.
12. G. Alsmeyer and M. Meiners, Fixed points of the smoothing transform: two-sided solutions.
Probab. Theory Relat. Fields. 155 (2013), 165–199.
13. G. Alsmeyer and U. Rösler, On the existence of $\phi$-moments of the limit of a normalized supercritical Galton–Watson process. J. Theoret. Probab. 17 (2004), 905–928.
14. G. Alsmeyer and U. Rösler, A stochastic fixed point equation related to weighted branching
with deterministic weights. Electron. J. Probab. 11 (2005), 27–56.
15. G. Alsmeyer and M. Slavtchova-Bojkova, Limit theorems for subcritical age-dependent
branching processes with two types of immigration. Stoch. Models. 21 (2005), 133–147.
16. V. F. Araman and P. W. Glynn, Tail asymptotics for the maximum of perturbed random walk.
Ann. Appl. Probab. 16 (2006), 1411–1431.
17. R. Arratia, A. D. Barbour and S. Tavaré, Logarithmic combinatorial structures: a probabilistic
approach. European Mathematical Society, 2003.
18. S. Asmussen, Applied probability and queues. 2nd Edition, Springer-Verlag, 2003.
19. K. B. Athreya, D. McDonald and P. Ney, Limit theorems for semi-Markov processes and
renewal theory for Markov chains. Ann. Probab. 6 (1978), 788–797.
20. K. B. Athreya and P. E. Ney, Branching processes. Springer-Verlag, 1972.
21. M. Babillot, Ph. Bougerol and L. Elie, The random difference equation X_n = A_n X_{n−1} + B_n
in the critical case. Ann. Probab. 25 (1997), 478–493.
22. R. R. Bahadur, On the number of distinct values in a large sample from an infinite discrete
distribution. Proc. Nat. Inst. Sci. India. 26A (1960), 66–75.
23. A. D. Barbour, Univariate approximations in the infinite occupancy scheme. Alea, Lat. Am.
J. Probab. Math. Stat. 6 (2009), 415–433.
24. A. D. Barbour and A. V. Gnedin, Small counts in the infinite occupancy scheme. Electron. J.
Probab. 14 (2009), 365–384.
25. F. Bassetti and D. Matthes, Multi-dimensional smoothing transformations: existence, regular-
ity and stability of fixed points. Stoch. Proc. Appl. 124 (2014), 154–198.
26. R. Basu and A. Roitershtein, Divergent perpetuities modulated by regime switches. Stoch.
Models. 29 (2013), 129–148.
27. A. D. Behme, Distributional properties of solutions of dV_t = V_{t−} dU_t + dL_t with Lévy noise.
Adv. Appl. Probab. 43 (2011), 688–711.
28. A. Behme, Exponential functionals of Lévy processes with jumps. Alea, Lat. Am. J. Probab.
Math. Stat. 12 (2015), 375–397.
29. A. Behme and A. Lindner, On exponential functionals of Lévy processes. J. Theoret. Probab.
28 (2015), 681–720.
30. Ju. K. Beljaev and V. M. Maksimov, Analytical properties of a generating function for the
number of renewals. Theor. Probab. Appl. 8 (1963), 108–112.
31. J. Bertoin, Random fragmentation and coagulation processes. Cambridge University Press,
2006.
32. J. Bertoin and I. Kortchemski, Self-similar scaling limits of Markov chains on the positive
integers. Ann. Appl. Probab. 26 (2016), 2556–2595.
33. J. Bertoin, A. Lindner and R. Maller, On continuity properties of the law of integrals of Lévy
processes. Séminaire de Probabilités XLI, Lecture Notes in Mathematics 1934 (2008), 137–
159.
34. J. Bertoin and M. Yor, Exponential functionals of Lévy processes. Probab. Surv. 2 (2005),
191–212.
35. J. D. Biggins, Martingale convergence in the branching random walk. J. Appl. Probab. 14
(1977), 25–37.
36. J. D. Biggins, Growth rates in the branching random walk. Z. Wahrscheinlichkeitstheorie
Verw. Geb. 48 (1979), 17–34.
37. J. D. Biggins and A. E. Kyprianou, Seneta-Heyde norming in the branching random walk.
Ann. Probab. 25 (1997), 337–360.
38. J. D. Biggins and A. E. Kyprianou, Measure change in multitype branching. Adv. Appl.
Probab. 36 (2004), 544–581.
39. J. D. Biggins and A. E. Kyprianou, The smoothing transform: the boundary case. Electron. J.
Probab. 10 (2005), 609–631.
40. P. Billingsley, Convergence of probability measures. Wiley, 1968.
41. P. Billingsley, Probability and measure. John Wiley & Sons, 1986.
42. N. H. Bingham, Limit theorems for occupation times of Markov processes. Z. Wahrschein-
lichkeitstheorie Verw. Geb. 17 (1971), 1–22.
43. N. H. Bingham, R. A. Doney, Asymptotic properties of supercritical branching processes II:
Crump-Mode and Jirina processes. Adv. Appl. Probab. 7 (1975), 66–82.
44. N. H. Bingham, C. M. Goldie and J. L. Teugels, Regular variation. Cambridge University
Press, 1989.
45. L. V. Bogachev, A. V. Gnedin and Yu. V. Yakubovich, On the variance of the number of
occupied boxes. Adv. Appl. Math. 40 (2008), 401–432.
Bibliography 239
46. L. V. Bogachev and Z. Su, Gaussian fluctuations of Young diagrams under the Plancherel
measure. Proc. R. Soc. A. 463 (2007), 1069–1080.
47. A. A. Borovkov, Asymptotic Methods in Queuing Theory. Wiley, 1984.
48. P. Bougerol and N. Picard, Strict stationarity of generalized autoregressive processes. Ann.
Probab. 20 (1992), 1714–1730.
49. P. Bourgade, Mesoscopic fluctuations of the zeta zeros. Probab. Theory Relat. Fields. 148
(2010), 479–500.
50. O. Boxma, O. Kella and D. Perry, On some tractable growth-collapse processes with renewal
collapse epochs. J. Appl. Probab. 48A (2011), 217–234.
51. A. Brandt, The stochastic equation Y_{n+1} = A_n Y_n + B_n with stationary coefficients. Adv.
Appl. Probab. 18 (1986), 211–220.
52. L. Breiman, On some limit theorems similar to the arc-sin law. Theory Probab. Appl. 10
(1965), 323–331.
53. S. Brofferio, How a centred random walk on the affine group goes to infinity. Ann. Inst. H.
Poincaré Probab. Statist. 39 (2003), 371–384.
54. S. Brofferio and D. Buraczewski, On unbounded invariant measures of stochastic dynamical
systems. Ann. Probab. 43 (2015), 1456–1492.
55. S. Brofferio, D. Buraczewski and E. Damek, On the invariant measure of the random
difference equation X_n = A_n X_{n−1} + B_n in the critical case. Ann. Inst. H. Poincaré Probab.
Statist. 48 (2012), 377–395.
56. H. Brozius, Convergence in mean of some characteristics of the convex hull. Adv. Appl.
Probab. 21 (1989), 526–542.
57. D. Buraczewski, On invariant measures of stochastic recursions in a critical case. Ann. Appl.
Probab. 17 (2007), 1245–1272.
58. D. Buraczewski, E. Damek, S. Mentemeier and M. Mirek, Heavy tailed solutions of
multivariate smoothing transforms. Stoch. Proc. Appl. 123 (2013), 1947–1986.
59. D. Buraczewski, E. Damek and T. Mikosch, Stochastic models with power-law tails: the
equation X = AX + B. Springer, 2016.
60. D. Buraczewski, E. Damek and J. Zienkiewicz, Precise tail asymptotics of fixed points of the
smoothing transform with general weights. Bernoulli. 21 (2015), 489–504.
61. D. Buraczewski and A. Iksanov, Functional limit theorems for divergent perpetuities in the
contractive case. Electron. Commun. Probab. 20 (2015), article 10, 1–14.
62. D. Buraczewski and K. Kolesko, Linear stochastic equations in the critical case. J. Difference
Equ. Appl. 20 (2014), 188–209.
63. D. L. Burkholder, B. J. Davis and R. F. Gundy, Integral inequalities for convex functions of
operators on martingales. In: Proceedings of the Sixth Berkeley Symposium on Mathematical
Statistics and Probability (Univ. California, Berkeley, CA, 1970/1971), vol. II: Probability
Theory, pp. 223–240. University of California Press, 1972.
64. A. Caliebe and U. Rösler, Fixed points with finite variance of a smoothing transformation.
Stoch. Proc. Appl. 107 (2003), 105–129.
65. H. Carlsson and O. Nerman, An alternative proof of Lorden’s renewal inequality. Adv. Appl.
Probab. 18 (1986), 1015–1016.
66. L.-C. Chen and R. Sun, A monotonicity result for the range of a perturbed random walk. J.
Theoret. Probab. 27 (2014), 997–1010.
67. Y. S. Chow, H. Robbins and D. Siegmund, Great expectations: the theory of optimal stopping.
Houghton Mifflin Company, 1971.
68. Y. S. Chow and H. Teicher, Probability theory: independence, interchangeability, martin-
gales. Springer, 1988.
69. E. Çinlar, Introduction to stochastic processes. Prentice-Hall, 1975.
70. D. Cline and G. Samorodnitsky, Subexponentiality of the product of independent random
variables. Stoch. Proc. Appl. 49 (1994), 75–98.
71. M. Csörgő, L. Horváth and J. Steinebach, Invariance principles for renewal processes. Ann.
Probab. 15 (1987), 1441–1460.
72. D. A. Darling, Some limit theorems associated with multinomial trials. Proc. Fifth Berkeley
Symp. on Math. Statist. and Probab. 2 (1967), 345–350.
73. B. Davis, Weak limits of perturbed random walks and the equation Y_t = B_t + α sup{Y_s : s ≤ t} + β inf{Y_s : s ≤ t}. Ann. Probab. 24 (1996), 2007–2023.
74. D. Denisov and B. Zwart, On a theorem of Breiman and a class of random difference
equations. J. Appl. Probab. 44 (2007), 1031–1046.
75. P. Diaconis and D. Freedman, Iterated random functions. SIAM Review. 41 (1999), 45–76.
76. C. Donati-Martin, R. Ghomrasni and M. Yor, Affine random equations and the stable (1/2)
distribution. Studia Scientiarum Mathematicarum Hungarica. 36 (2000), 387–405.
77. D. Dufresne, On the stochastic equation L(X) = L(B(X + C)) and a property of gamma
distributions. Bernoulli. 2 (1996), 287–291.
78. D. Dufresne, Algebraic properties of beta and gamma distributions and applications. Adv.
Appl. Math. 20 (1998), 285–299.
79. O. Durieu and Y. Wang, From infinite urn schemes to decompositions of self-similar Gaussian
processes. Electron. J. Probab. 21 (2016), paper no. 43, 23 pp.
80. R. Durrett, Probability: theory and examples. 4th Edition, Cambridge University Press, 2010.
81. R. Durrett and T. Liggett, Fixed points of the smoothing transformation. Z. Wahrschein-
lichkeitstheorie Verw. Geb. 64 (1983), 275–301.
82. M. Dutko, Central limit theorems for infinite urn models. Ann. Probab. 17 (1989), 1255–1263.
83. P. Dyszewski, Iterated random functions and slowly varying tails. Stoch. Proc. Appl. 126
(2016), 392–413.
84. P. Embrechts and C. M. Goldie, Perpetuities and random equations. In Asymptotic Statistics:
Proceedings of the Fifth Prague Symposium (P. Mandl and M. Hus̆ková, eds.), 75–86.
Physica, 1994.
85. P. Erdős, On a family of symmetric Bernoulli convolutions. Amer. J. Math. 61 (1939), 974–
976.
86. P. Erdős, On the smoothness properties of Bernoulli convolutions. Amer. J. Math. 62 (1940),
180–186.
87. T. Erhardsson, Conditions for convergence of random coefficient AR(1) processes and
perpetuities in higher dimensions. Bernoulli. 20 (2014), 990–1005.
88. K. B. Erickson, The strong law of large numbers when the mean is undefined. Trans. Amer.
Math. Soc. 185 (1973), 371–381.
89. W. Feller, An introduction to probability theory and its applications. Vol II, 2nd Edition.
Wiley, 1971.
90. Sh. K. Formanov and A. Asimov, A limit theorem for the separable statistic in a random
assignment scheme. J. Sov. Math. 38 (1987), 2405–2411.
91. F. Freund and M. Möhle, On the number of allelic types for samples taken from exchangeable
coalescents with mutation. Adv. Appl. Probab. 41 (2009), 1082–1101.
92. B. Fristedt, Uniform local behavior of stable subordinators. Ann. Probab. 7 (1979), 1003–
1013.
93. I. I. Gikhman and A. V. Skorokhod, The theory of stochastic processes I. Springer, 2004.
94. L. Giraitis and D. Surgailis, On shot noise processes with long range dependence. In
Probability Theory and Mathematical Statistics, Vol. I (Vilnius, 1989), 401–408. Mokslas,
1990.
95. L. Giraitis and D. Surgailis, On shot noise processes attracted to fractional Lévy motion. In
Stable Processes and Related Topics (Ithaca, NY, 1990). Progress in Probability 25, 261–273.
Birkhäuser, 1991.
96. P. W. Glynn and W. Whitt, Ordinary CLT and WLLN versions of L = λW. Math. Oper. Res.
13 (1988), 674–692.
97. A. V. Gnedin, The Bernoulli sieve. Bernoulli 10 (2004), 79–96.
98. A. Gnedin, A. Hansen and J. Pitman, Notes on the occupancy problem with infinitely many
boxes: general asymptotics and power laws. Probab. Surv. 4 (2007), 146–171.
99. A. Gnedin and A. Iksanov, Regenerative compositions in the case of slow variation: A renewal
theory approach. Electron. J. Probab. 17 (2012), paper no. 77, 19 pp.
100. A. Gnedin, A. Iksanov and A. Marynych, Limit theorems for the number of occupied boxes in
the Bernoulli sieve. Theory Stochastic Process. 16(32) (2010), 44–57.
101. A. Gnedin, A. Iksanov, and A. Marynych, The Bernoulli sieve: an overview. In Proceedings
of the 21st International Meeting on Probabilistic, Combinatorial, and Asymptotic Methods
in the Analysis of Algorithms (AofA’10), Discrete Math. Theor. Comput. Sci. AM (2010),
329–341.
102. A. Gnedin, A. Iksanov and A. Marynych, On Λ-coalescents with dust component. J. Appl.
Probab. 48 (2011), 1133–1151.
103. A. Gnedin, A. Iksanov and A. Marynych, A generalization of the Erdős-Turán law for the
order of random permutation. Combin. Probab. Comput. 21 (2012), 715–733.
104. A. Gnedin, A. Iksanov, P. Negadailov and U. Rösler, The Bernoulli sieve revisited. Ann. Appl.
Probab. 19 (2009), 1634–1655.
105. A. Gnedin, A. Iksanov and U. Roesler, Small parts in the Bernoulli sieve. In Proceedings of
the Fifth Colloquium on Mathematics and Computer Science, Discrete Math. Theor. Comput.
Sci. Proc. AI (2008), 235–242.
106. A. Gnedin, J. Pitman and M. Yor, Asymptotic laws for compositions derived from transformed
subordinators. Ann. Probab. 34 (2006), 468–492.
107. C. M. Goldie, Implicit renewal theory and tails of solutions of random equations. Ann. Appl.
Probab. 1 (1991), 126–166.
108. C. M. Goldie and R. Grübel, Perpetuities with thin tails. Adv. Appl. Probab. 28 (1996), 463–
480.
109. C. M. Goldie and R. A. Maller, Stability of perpetuities. Ann. Probab. 28 (2000), 1195–1218.
110. M. I. Gomes, L. de Haan and D. Pestana, Joint exceedances of the ARCH process. J. Appl.
Probab. 41 (2004), 919–926.
111. D. R. Grey, Regular variation in the tail behaviour of solutions of random difference
equations. Ann. Appl. Probab. 4 (1994), 169–183.
112. D. R. Grey and Lu Zhunwei, The fractional linear probability generating function in the
random environment branching process. J. Appl. Probab. 31 (1994), 38–47.
113. A. K. Grincevičius, On the continuity of the distribution of a sum of dependent variables
connected with independent walks on lines. Theory Probab. Appl. 19 (1974), 163–168.
114. A. K. Grincevičius, Limit theorems for products of random linear transformations on the line.
Lithuanian Math. J. 15 (1975), 568–579.
115. A. K. Grincevičius, One limit distribution for a random walk on the line. Lithuanian Math. J.
15 (1975), 580–589.
116. A. K. Grincevičius, Products of random affine transformations. Lithuanian Math. J. 20 (1980),
279–282.
117. A. K. Grincevičius, A random difference equation. Lithuanian Math. J. 21 (1981), 302–306.
118. A. Gut, On the moments and limit distributions of some first passage times. Ann. Probab. 2
(1974), 277–308.
119. A. Gut, Stopped random walks. Limit theorems and applications. 2nd Edition, Springer, 2009.
120. L. de Haan and S. I. Resnick, Derivatives of regularly varying functions in R^d and domains
of attraction of stable distributions. Stoch. Proc. Appl. 8 (1979), 349–355.
121. B. Haas and G. Miermont, Self-similar scaling limits of non-increasing Markov chains.
Bernoulli. 17 (2011), 1217–1247.
122. P. Hall and C. C. Heyde, Martingale limit theory and its applications. Academic Press, 1980.
123. X. Hao, Q. Tang and L. Wei, On the maximum exceedance of a sequence of random variables
over a renewal threshold. J. Appl. Probab. 46 (2009), 559–570.
124. S. C. Harris and M. I. Roberts, Measure changes with extinction. Stat. Probab. Letters. 79
(2009), 1129–1133.
125. L. Heinrich and V. Schmidt, Normal convergence of multidimensional shot noise and rates of
this convergence. Adv. Appl. Probab. 17 (1985), 709–730.
126. P. Hitczenko, Comparison of moments for tangent sequences of random variables. Probab.
Theory Relat. Fields. 78 (1988), 223–230.
127. P. Hitczenko, On tails of perpetuities. J. Appl. Probab. 47 (2010), 1191–1194.
128. P. Hitczenko and J. Wesołowski, Perpetuities with thin tails revisited. Ann. Appl. Probab. 19
(2009), 2080–2101. Erratum: Ann. Appl. Probab. 20 (2010), 1177.
129. P. Hitczenko and J. Wesołowski, Renorming divergent perpetuities. Bernoulli. 17 (2011),
880–894.
130. H. K. Hwang and S. Janson, Local limit theorems for finite and infinite urn models. Ann.
Probab. 36 (2008), 992–1022.
131. H. K. Hwang and T. H. Tsai, Quickselect and the Dickman function. Combin. Probab.
Comput. 11 (2002), 353–371.
132. D. L. Iglehart, Weak convergence of compound stochastic process. I. Stoch. Proc. Appl. 1
(1973), 11–31. Corrigendum, ibid. 1 (1973), 185–186.
133. D. L. Iglehart and D. P. Kennedy, Weak convergence of the average of flag processes. J. Appl.
Probab. 7 (1970), 747–753.
134. O. M. Iksanov, On positive distributions of the class L of self-decomposable laws. Theor.
Probab. Math. Statist. 64 (2002), 51–61.
135. A. M. Iksanov, Elementary fixed points of the BRW smoothing transforms with infinite number
of summands. Stoch. Proc. Appl. 114 (2004), 27–50.
136. A. M. Iksanov, On the rate of convergence of a regular martingale related to the branching
random walk. Ukrainian Math. J. 58 (2006), 368–387.
137. A. Iksanov, On the supremum of perturbed random walk. Bulletin of Kiev University. 1
(2007), 161–164 (in Ukrainian).
138. A. Iksanov, On the number of empty boxes in the Bernoulli sieve II. Stoch. Proc. Appl. 122
(2012), 2701–2729.
139. A. Iksanov, On the number of empty boxes in the Bernoulli sieve I. Stochastics. 85 (2013),
946–959.
140. A. Iksanov, Functional limit theorems for renewal shot noise processes with increasing
response functions. Stoch. Proc. Appl. 123 (2013), 1987–2010.
141. A. M. Iksanov and Z. J. Jurek, On fixed points of Poisson shot noise transforms. Adv. Appl.
Probab. 34 (2002), 798–825.
142. A. Iksanov, Z. Kabluchko and A. Marynych, Weak convergence of renewal shot noise
processes in the case of slowly varying normalization. Stat. Probab. Letters. 114 (2016), 67–
77.
143. A. Iksanov, Z. Kabluchko, A. Marynych and G. Shevchenko, Fractionally integrated inverse
stable subordinators. Stoch. Proc. Appl. 127 (2017), 80–106.
144. A. M. Iksanov and C. S. Kim, On a Pitman-Yor problem. Stat. Probab. Letters. 68 (2004),
61–72.
145. A. M. Iksanov and C. S. Kim, New explicit examples of Poisson shot noise transforms. Austr.
New Zealand J. Statist. 46 (2004), 313–321.
146. A. Iksanov, A. Marynych and M. Meiners, Limit theorems for renewal shot noise processes
with eventually decreasing response functions. Stoch. Proc. Appl. 124 (2014), 2132–2170.
147. A. Iksanov, A. Marynych and M. Meiners, Limit theorems for renewal shot noise processes with
decreasing response functions. (2013). Extended preprint version of [146] available at
http://arxiv.org/abs/arXiv:1212.1583v2
148. A. Iksanov, A. Marynych and M. Meiners, Asymptotics of random processes with immigration
I: Scaling limits. Bernoulli. 23, to appear (2017).
149. A. Iksanov, A. Marynych and M. Meiners, Asymptotics of random processes with immigration
II: Convergence to stationarity. Bernoulli. 23, to appear (2017).
150. A. M. Iksanov, A. V. Marynych and V. A. Vatutin, Weak convergence of finite-dimensional
distributions of the number of empty boxes in the Bernoulli sieve. Theory Probab. Appl. 59
(2015), 87–113.
151. A. Iksanov and M. Meiners, Exponential moments of first passage times and related quantities
for random walks. Electron. Commun. Probab. 15 (2010), 365–375.
152. A. Iksanov and M. Meiners, Fixed points of multivariate smoothing transforms with scalar
weights. Alea, Lat. Am. J. Probab. Math. Stat. 12 (2015), 69–114.
153. A. Iksanov and M. Möhle, On the number of jumps of random walks with a barrier. Adv.
Appl. Probab. 40 (2008), 206–228.
154. O. Iksanov and P. Negadailov, On the supremum of a martingale associated with a branching
random walk. Theor. Probab. Math. Statist. 74 (2007), 49–57.
155. A. Iksanov and A. Pilipenko, On the maximum of a perturbed random walk. Stat. Probab.
Letters. 92 (2014), 168–172.
156. A. Iksanov and A. Pilipenko, A functional limit theorem for locally perturbed random walks.
Probab. Math. Statist. 36 (2016), 353–368.
157. A. Iksanov and S. Polotskiy, Tail behavior of suprema of perturbed random walks. Theory
Stochastic Process. 21(36) (2016), 12–16.
158. A. M. Iksanov and U. Rösler, Some moment results about the limit of a martingale related
to the supercritical branching random walk and perpetuities. Ukrainian Math. J. 58 (2006),
505–528.
159. R. Iwankiewicz, Response of linear vibratory systems driven by renewal point processes.
Probab. Eng. Mech. 5 (1990), 111–121.
160. J. Jacod and A. N. Shiryaev, Limit theorems for stochastic processes. 2nd Edition, Springer,
2003.
161. P. Jagers, Age-dependent branching processes allowing immigration. Theory Probab. Appl.
13 (1968), 225–236.
162. S. Janson, Moments for first-passage and last-exit times, the minimum, and related quantities
for random walks with positive drift. Adv. Appl. Probab. 18 (1986), 865–879.
163. W. Jedidi, J. Almhana, V. Choulakian and R. McGorman, General shot noise processes
and functional convergence to stable processes. In Stochastic Differential Equations and
Processes. Springer Proc. Math. 7, 151–178, Springer, 2012.
164. P. R. Jelenković and M. Olvera-Cravioto, Implicit renewal theorem for trees with general
weights. Stoch. Proc. Appl. 122 (2012), 3209–3238.
165. Z. J. Jurek, Selfdecomposability, perpetuity laws and stopping times. Probab. Math Statist. 19
(1999), 413–419.
166. Z. J. Jurek and W. Vervaat, An integral representation for selfdecomposable Banach space
valued random variables. Z. Wahrscheinlichkeitstheorie Verw. Geb. 62 (1983), 247–262.
167. O. Kallenberg, Foundations of modern probability. Springer, 1997.
168. R. Kalpathy and H. Mahmoud, Perpetuities in fair leader election algorithms. Adv. Appl.
Probab. 46 (2014), 203–216.
169. S. Kalpazidou, A. Knopfmacher and J. Knopfmacher, Lüroth-type alternating series repre-
sentations for real numbers. Acta Arith. 55 (1990), 311–322.
170. R. Kapica and J. Morawiec, Refinement equations and distributional fixed points. Appl. Math.
Comput. 218 (2012), 7741–7746.
171. S. Karlin, Central limit theorems for certain infinite urn schemes. J. Math. Mech. 17 (1967),
373–401.
172. S. Karlin and H. M. Taylor, A first course in stochastic processes, 2nd Edition. Academic
Press, 1975.
173. R. Keener, A note on the variance of a stopping time. Ann. Statist. 15 (1987), 1709–1712.
174. H. G. Kellerer, Ergodic behaviour of affine recursions III: positive recurrence and null
recurrence. Technical report, Math. Inst. Univ. München, Theresienstrasse 39, D-8000
München, Germany. Available at http://www.mathematik.uni-muenchen.de/~kellerer/
175. R. Kershner and A. Wintner, On symmetric Bernoulli convolutions. Amer. J. Math. 57 (1935),
541–548.
176. H. Kesten, Random difference equations and renewal theory for products of random matrices.
Acta Math. 131 (1973), 207–248.
177. H. Kesten and R. A. Maller, Two renewal theorems for general random walks tending to
infinity. Probab. Theory Relat. Fields. 106 (1996), 1–38.
178. P. Kevei, A note on the Kesten-Grincevičius-Goldie theorem. Electron. Commun. Probab. 21
(2016), paper no. 51, 12 pp.
179. J. F. C. Kingman, The first birth problem for an age-dependent branching process. Ann.
Probab. 3 (1975), 790–801.
180. C. Klüppelberg and C. Kühn, Fractional Brownian motion as a weak limit of Poisson shot
noise processes, with applications to finance. Stoch. Proc. Appl. 113 (2004), 333–351.
181. C. Klüppelberg and T. Mikosch, Explosive Poisson shot noise processes with applications to
risk reserves. Bernoulli. 1 (1995), 125–147.
182. C. Klüppelberg and T. Mikosch, Delay in claim settlement and ruin probability approxima-
tions. Scand. Actuar. J. 2 (1995), 154–168.
183. C. Klüppelberg, T. Mikosch and A. Schärf, Regular variation in the mean and stable limits
for Poisson shot noise. Bernoulli 9 (2003), 467–496.
184. V. F. Kolchin, B. A. Sevastyanov and V. P. Chistyakov, Random allocations. V.H.Winston &
Sons, 1978.
185. B. Kołodziejek, Logarithmic tails of sums of products of positive random variables bounded
by one. Ann. Appl. Probab., to appear (2017).
186. T. Konstantopoulos and S.-J. Lin, Macroscopic models for long-range dependent network
traffic. Queueing Systems Theory Appl. 28 (1998), 215–243.
187. D. Kuhlbusch, Moment conditions for weighted branching processes. PhD thesis, Universität
Münster, 2004.
188. T. G. Kurtz and P. Protter, Weak limit theorems for stochastic integrals and stochastic
differential equations. Ann. Probab. 19 (1991), 1035–1070.
189. T. L. Lai and D. Siegmund, A nonlinear renewal theory with applications to sequential
analysis. I. Ann. Statist. 5 (1977), 946–954.
190. T. L. Lai and D. Siegmund, A nonlinear renewal theory with applications to sequential
analysis. II. Ann. Statist. 7 (1979), 60–76.
191. J. Lamperti, Semi-stable Markov processes. Z. Wahrscheinlichkeitstheorie Verw. Geb. 22
(1972), 205–225.
192. J. A. Lane, The central limit theorem for the Poisson shot-noise process. J. Appl. Probab. 21
(1984), 287–301.
193. A. J. Lawrance and N. T. Kottegoda, Stochastic modelling of riverflow time series. J. Roy.
Statist. Soc. Ser. A. 140 (1977), 1–47.
194. G. Letac, A contraction principle for certain Markov chains and its applications. Random
matrices and their applications (Brunswick, Maine, 1984), 263–273, Contemp. Math. 50,
Amer. Math. Soc., 1986.
195. P. A. W. Lewis, A branching Poisson process model for the analysis of computer failure
patterns. J. Roy. Statist. Soc. Ser. B. 26 (1964), 398–456.
196. X. Liang and Q. Liu, Weighted moments for Mandelbrot’s martingales. Electron. Commun.
Probab. 20 (2015), paper no. 85, 12 pp.
197. T. Lindvall, Weak convergence of probability measures and random functions in the function
space D[0, ∞). J. Appl. Probab. 10 (1973), 109–121.
198. T. Lindvall, Lectures on the coupling method. Wiley, 1992.
199. Q. Liu, Fixed points of a generalized smoothing transformation and applications to the
branching random walk. Adv. Appl. Probab. 30 (1998), 85–112.
200. Q. Liu, On generalized multiplicative cascades. Stoch. Proc. Appl. 86 (2000), 263–286.
201. G. Lorden, On excess over the boundary. Ann. Math. Stat. 41 (1970), 520–527.
202. R. Lyons, A simple path to Biggins’ martingale convergence for branching random walk.
Classical and modern branching processes, IMA Volumes in Mathematics and its Applica-
tions. 84, 217–221, Springer, 1997.
203. H. M. Mahmoud, Distributional analysis of swaps in Quick Select. Theoret. Comput. Sci. 411
(2010), 1763–1769.
204. A. H. Marcus, Some exact distributions in traffic noise theory. Adv. Appl. Probab. 7 (1975),
593–606.
205. A. V. Marynych, A note on convergence to stationarity of random processes with immigration.
Theory Stochastic Process. 20(36) (2015), 84–100.
206. K. Maulik and B. Zwart, Tail asymptotics for exponential functionals of Lévy processes. Stoch.
Proc. Appl. 116 (2006), 156–177.
207. M. M. Meerschaert and S. A. Stoev, Extremal limit theorems for observations separated by
random power law waiting times. J. Stat. Planning and Inference. 139 (2009), 2175–2188.
208. M. Meiners and S. Mentemeier, Solutions to complex smoothing equations. Probab. Theory
Relat. Fields., to appear (2017).
209. S. Mentemeier, The fixed points of the multivariate smoothing transform. Probab. Theory
Relat. Fields. 164 (2016), 401–458.
210. R. Metzler and J. Klafter, The random walk’s guide to anomalous diffusion: a fractional
dynamics approach. Phys. Reports. 339 (2000), 1–77.
211. V. G. Mikhailov, The central limit theorem for a scheme of independent allocation of particles
by cells. Proc. Steklov Inst. Math. 157 (1983), 147–163.
212. T. Mikosch and S. Resnick, Activity rates with very heavy tails. Stoch. Proc. Appl. 116 (2006),
131–155.
213. T. Mikosch, G. Samorodnitsky and L. Tafakori, Fractional moments of solutions to stochastic
recurrence equations. J. Appl. Probab. 50 (2013), 969–982.
214. D. R. Miller, Limit theorems for path-functionals of regenerative processes. Stoch. Proc. Appl.
2 (1974), 141–161.
215. Sh. A. Mirakhmedov, Randomized decomposable statistics in a generalized allocation
scheme over a countable set of cells. Diskret. Mat. 1 (1989), 46–62 (in Russian).
216. Sh. A. Mirakhmedov, Randomized decomposable statistics in a scheme of independent
allocation of particles into cells. Diskret. Mat. 2 (1990), 97–111 (in Russian).
217. N. R. Mohan, Teugels’ renewal theorem and stable laws. Ann. Probab. 4 (1976), 863–868.
218. M. Möhle, On the number of segregating sites for populations with large family sizes. Adv.
Appl. Probab. 38 (2006), 750–767.
219. P. Mörters and Yu. Peres, Brownian motion. Cambridge University Press, 2010.
220. P. Negadailov, Limit theorems for random recurrences and renewal-type processes. PhD
thesis, University of Utrecht, the Netherlands. Available at
http://igitur-archive.library.uu.nl/dissertations/2010-0823-200228/negadailov.pdf
221. J. Neveu, Discrete-parameter martingales. North-Holland, 1975.
222. A. G. Pakes, Some properties of a random linear difference equation. Austral. J. Statist. 25
(1983), 345–357.
223. A. G. Pakes and N. Kaplan, On the subcritical Bellman-Harris process with immigration. J.
Appl. Probab. 11 (1974), 652–668.
224. Z. Palmowski and B. Zwart, Tail asymptotics of the supremum of a regenerative process. J.
Appl. Probab. 44 (2007), 349–365.
225. Z. Palmowski and B. Zwart, On perturbed random walks. J. Appl. Probab. 47 (2010), 1203–
1204.
226. E. I. Pancheva and P. K. Jordanova, Functional transfer theorems for maxima of iid random
variables. Comptes Rendus de l’Académie Bulgare des Sciences. 57 (2004), 9–14.
227. E. Pancheva, I. K. Mitov and K. V. Mitov, Limit theorems for extremal processes generated by
a point process with correlated time and space components. Stat. Probab. Letters. 79 (2009),
390–395.
228. J. C. Pardo, V. Rivero and K. van Schaik, On the density of exponential functionals of Lévy
processes. Bernoulli. 19 (2013), 1938–1964.
229. J. Pitman and M. Yor, Infinitely divisible laws associated with hyperbolic functions. Canad. J.
Math. 55 (2003), 292–330.
230. S. V. Polotskiy, On moments of some convergent random series and limits of martingales
related to a branching random walk. Bulletin of Kiev University. 2 (2009), 135–140 (in
Ukrainian).
231. M. Pratsiovytyi and Yu. Khvorostina, Topological and metric properties of distributions of
random variables represented by the alternating Lüroth series with independent elements.
Random operators and stochastic equations. 21 (2013), 385–401.
232. W. E. Pruitt, General one-sided laws of the iterated logarithm. Ann. Probab. 9 (1981), 1–48.
233. S. T. Rachev and G. Samorodnitsky, Limit laws for a stochastic process and random recursion
arising in probabilistic modelling. Adv. Appl. Probab. 27 (1995), 185–202.
234. J. I. Reich, Some results on distributions arising from coin tossing. Ann. Probab. 10 (1982),
780–786.
235. S. Resnick, Extreme values, regular variation, and point processes. Springer-Verlag, 1987.
236. S. I. Resnick, Adventures in stochastic processes. 3rd printing, Birkhäuser, 2002.
237. S. I. Resnick, Heavy-tail phenomena. Probabilistic and statistical modeling. Springer, 2007.
238. S. Resnick and H. Rootzén, Self-similar communication models and very heavy tails. Ann.
Appl. Probab. 10 (2000), 753–778.
239. S. Resnick and E. van den Berg, Weak convergence of high-speed network traffic models. J.
Appl. Probab. 37 (2000), 575–597.
240. S. I. Resnick and E. Willekens, Moving averages with random coefficients and random
coefficient autoregressive models. Commun. Statist. Stoch. Models. 7 (1991), 511–525.
241. C. Y. Robert, Asymptotic probabilities of an exceedance over renewal thresholds with an
application to risk theory. J. Appl. Probab. 42 (2005), 153–162.
242. I. Rodriguez-Iturbe, D. R. Cox and V. Isham, Some models for rainfall based on stochastic
point processes. Proc. R. Soc. Lond. A. 410 (1987), 269–288.
243. U. Rösler, V. A. Topchii and V. A. Vatutin, Convergence conditions for the weighted
branching process. Discrete Mathematics and Applications. 10 (2000), 5–21.
244. St. G. Samko, A. A. Kilbas and O. I. Marichev, Fractional integrals and derivatives: theory
and applications. Gordon and Breach, 1993.
245. G. Samorodnitsky, A class of shot noise models for financial applications. Athens conference
on applied probability and time series analysis, Athens, Greece, March 22–26, 1995. Vol. I:
Applied probability. In honor of J. M. Gani. Lect. Notes Stat., Springer-Verlag. 114 (1996),
332–353.
246. V. Schmidt, On finiteness and continuity of shot noise processes. Optimization. 16 (1985),
921–933.
247. W. Schottky, Spontaneous current fluctuations in electron streams. Ann. Phys. 57 (1918),
541–567.
248. M. S. Sgibnev, Renewal theorem in the case of an infinite variance. Sib. Math. J. 22 (1982),
787–796.
249. B. Solomyak, On the random series Σ ±λ^n (an Erdős problem). Ann. Math. 142 (1995),
611–625.
250. L. Takács, On secondary stochastic processes generated by recurrent processes. Acta Math.
Acad. Sci. Hungar. 7 (1956), 17–29.
251. H. Thorisson, Coupling, stationarity, and regeneration. Springer, 2000.
252. G. Toscani, Wealth redistribution in conservative linear kinetic models. EPL (Europhysics
Letters). 88 (2009), 10007.
253. K. Urbanik, Functionals on transient stochastic processes with independent increments.
Studia Math. 103 (1992), 299–315.
254. D. Vere-Jones, Stochastic models for earthquake occurrence. J. Roy. Statist. Soc. Ser. B. 32
(1970), 1–62.
255. W. Vervaat, On a stochastic difference equation and a representation of nonnegative infinitely
divisible random variables. Adv. Appl. Probab. 11 (1979), 750–783.
256. Y. Wang, Convergence to the maximum process of a fractional Brownian motion with shot
noise. Stat. Probab. Letters. 90 (2014), 33–41.
257. T. Watanabe, Absolute continuity of some semi-selfdecomposable distributions and self-
similar measures. Probab. Theory Relat. Fields. 117 (2000), 387–405.
258. E. Waymire and V. K. Gupta, The mathematical structure of rainfall representations: 1. A
review of the stochastic rainfall models. Water Resour. Res. 17 (1981), 1261–1272.
259. G. Weiss, Shot noise models for the generation of synthetic streamflow data. Water Resour.
Res. 13 (1977), 101–108.
260. M. Westcott, On the existence of a generalized shot-noise process. Studies in probability and
statistics (papers in honour of Edwin J. G. Pitman), 73–88. North-Holland, 1976.
Index
Bernoulli sieve, 1, 2, 191, 203, 207
    number of empty boxes, 2, 122, 191, 195, 203
    weak convergence, 192–194
Blackwell theorem, 199, 212, 215, 225
Breiman theorem, 7, 16, 17
Brownian motion, 20, 21, 111, 123, 124, 126, 127, 226
direct Riemann integrability, 91, 93, 198, 199, 212, 213
    sufficient conditions, 213
distribution
    lattice, 7, 8, 17, 19, 200
    Mittag–Leffler, 48, 134, 135
    nonlattice, 7, 8, 90, 92, 192, 199, 214
    positive Linnik, 50
elementary renewal theorem, 91, 157, 211
Erickson inequality, 13, 33, 36, 211
exponential functional of a Lévy process, 46, 58, 84, 134
    Lamperti representation, 133
fixed point
    of Poisson shot noise transform, 49, 85
    of smoothing transform, 49, 84
fractionally integrated
    inverse stable subordinator, 112, 116, 121, 123, 131, 136, 152, 165, 193, 206
        Hölder continuity, 132
        unboundedness, 132
    stable Lévy process, 112, 116, 120, 123, 127, 147, 162, 177
        continuity, 128
        unboundedness, 128
intrinsic martingale in the branching random walk, 180, 188
    logarithmic moment, 179, 182, 188
    supremum, 185
    uniform integrability, 179, 181
key renewal theorem
    for distributions with infinite mean, 198, 199, 215
    lattice case, 18, 215
    nonlattice case, 214
    version for nonintegrable functions, 219, 220, 223
Lüroth series, 51, 85
Lorden inequality, 106, 156, 212
nonincreasing Markov chain, 202
ordinary random walk, 1, 4, 18, 19, 210, 226
    first-passage time, 29
        distributional subadditivity, 148, 155, 157, 211
        exponential moment, 231
        power moment, 232
    last-exit time, 29
        exponential moment, 232
        power moment, 232
    number of visits, 29, 230
        exponential moment, 231
        power moment, 232
    overshoot, 19, 90, 217
    supremum, 229
        power moment, 227
    undershoot, 90, 217
perpetuity, 1, 43, 179, 185
    almost sure finiteness, 44
    continuity properties, 52
        absolute continuity, 54, 56
        discreteness, 55
        mixtures, 56
        singular continuity, 54
    exponential moment, 58
    logarithmic moment, 57, 179
    power moment, 57
    related Markov chain, 43, 45
    tail behavior, 57
    weak convergence, 66, 67
perturbed random walk, 1, 3
    first-passage time, 29
        almost sure finiteness, 29
        exponential moment, 30
    last-exit time, 29
        almost sure finiteness, 29
        exponential moment, 31
        power moments, 31
    number of visits, 29
        almost sure finiteness, 29
        exponential moments, 30
        power moments, 31
    supremum, 6
        exponential moment, 6
        power moment, 6, 179, 227
    tail behavior, 7
    weak convergence, 20, 21
Poisson random measure, 20, 21, 25, 75, 193, 235
Poissonization, 195
random process
    birth and death, 95
    conditionally Gaussian, 112, 114, 121, 122, 125, 127, 136, 137, 145, 164, 165, 167
    extremal, 20
    Gaussian, 111, 113, 119, 120, 122, 125–127, 138, 161, 162, 193, 207
    inverse stable subordinator, 112, 115, 125, 126, 134, 206
    semi-stable Markov, 133
    shot noise
        Poisson, 46, 88, 175
        renewal, 87, 92
    stable Lévy, 21, 111, 130
    stable subordinator, 112, 131, 193
    stationary, 194
    stationary Ornstein–Uhlenbeck, 126
    stationary renewal, 90, 96, 203
    strong approximation, 226
    with immigration, 33, 87, 175
        examples, 88
        exponential moment, 169
        power moment, 170
        stationary, 91
        weak convergence, 91, 92, 113–115, 117, 119, 120
regular variation, 6, 15, 20, 118, 147, 148, 209
    in R^2_+, 110
        fictitious, 110, 127, 142
        limit function, 112, 127, 136, 142
        uniform in strips, 111, 127, 136, 142
        wide-sense, 110–112
renewal function, 13, 33, 34, 37, 38, 196, 210
    subadditivity, 36, 37, 39, 211
Skorokhod space, 20, 87, 137
    J_1-topology, 20, 23, 28, 70, 115, 121, 126, 151, 233
    M_1-topology, 115, 233