
Probability and Its Applications

Alexander Iksanov

Renewal Theory for Perturbed Random Walks and Similar Processes
Probability and Its Applications

Series editors
Steffen Dereich
Davar Khoshnevisan
Andreas Kyprianou
Sidney I. Resnick

More information about this series at http://www.springer.com/series/4893


Alexander Iksanov
Faculty of Computer Science
and Cybernetics
Taras Shevchenko National
University of Kyiv
Kyiv, Ukraine

ISSN 2297-0371 ISSN 2297-0398 (electronic)


Probability and Its Applications
ISBN 978-3-319-49111-0 ISBN 978-3-319-49113-4 (eBook)
DOI 10.1007/978-3-319-49113-4

Library of Congress Control Number: 2016961210

Mathematics Subject Classification (2010): 60-02, 60G, 60K

© Springer International Publishing AG 2016


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, express or implied, with respect to the material contained herein or for any
errors or omissions that may have been made.

Printed on acid-free paper

This book is published under the trade name Birkhäuser, www.birkhauser-science.com


The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To my family
Preface

The present book offers a detailed treatment of perturbed random walks, perpetuities, and random processes with immigration. These objects are of major importance in modern probability theory, both theoretical and applied, and they have been used to model various phenomena in areas ranging from most of the natural sciences to insurance and finance. Recent years have seen an explosion of activity around perturbed random walks, perpetuities, and random processes with immigration. Over the last decade, several nice results have been proved, and some efficient techniques and methods have been worked out. This book results from the author's growing conviction that the time has come to present in book format the main developments accumulated in the area to date. Accordingly, the first purpose of this book is to provide a thorough discussion of the state of the art in the area, with special emphasis on the methods employed. Although most of the results are given in final form as ultimate criteria, a number of questions remain open; some of these are stated in the text.
Formally, the main objects are related because each of them is a derived process of i.i.d. pairs $(X_1, \xi_1)$, $(X_2, \xi_2), \ldots$. Here $\xi_1, \xi_2, \ldots$ are real-valued random variables, whereas $X_1, X_2, \ldots$ are real-valued random variables in the case of perturbed random walks and perpetuities (with nonnegative entries), and $X_1, X_2, \ldots$ are $D[0,\infty)$-valued random processes in the case of random processes with immigration. As far as perturbed random walks $(T_n)_{n \in \mathbb{N}}$ defined by

$$T_n := \xi_1 + \ldots + \xi_{n-1} + X_n, \qquad n \in \mathbb{N},$$

are concerned, the main motivation behind our interest is to what extent classical results (some of which are given in Section 6.3) for ordinary random walks $(\xi_1 + \ldots + \xi_n)_{n \in \mathbb{N}}$ must be adjusted in the presence of a perturbing sequence. A similar motivation is also our driving force in studying weak convergence of random processes with immigration $(X(t))_{t \ge 0}$ defined by

$$X(t) := \sum_{k \ge 0} X_{k+1}(t - \xi_1 - \ldots - \xi_k)\, 1_{\{\xi_1 + \ldots + \xi_k \le t\}}, \qquad t \ge 0.$$


If $X_k(t) \equiv 1$ and $\xi_k \ge 0$ for all $k \in \mathbb{N}$, then $X(t)$ is nothing else but the first time the ordinary random walk exits the interval $(-\infty, t]$. This is a classical object of renewal theory, and it is well known that $(\sum_{k \ge 0} 1_{\{\xi_1 + \ldots + \xi_k \le ut\}})_{u \ge 0}$ satisfies a functional limit theorem as $t \to \infty$. If $X_k(t)$ is not identically one, the asymptotic behavior of $X(t)$ is affected both by the first-passage time process above and by the fluctuations of the "perturbing" sequence $(X_k)_{k \in \mathbb{N}}$. From this point of view, the subject matter of the book is a generalization of renewal theory, and the second purpose of the book is to work out the theoretical grounds of this generalization. Actually, the connections between the main objects extend far beyond the formal definition. The third purpose of the book is to exhibit these links in full. As a warm-up, we now give two examples in which perturbed random walks are linked to perpetuities and random processes with immigration, respectively.
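The special case just described is easy to probe numerically. The following minimal sketch (exponential $\xi$'s are an arbitrary illustrative choice, not taken from the book) evaluates $X(t)$ and, with $X_k(t) \equiv 1$, recovers the first-passage count of the underlying walk:

```python
import random

random.seed(1)

def immigration_process(t, xis, responses):
    # X(t) = sum_{k>=0} X_{k+1}(t - xi_1 - ... - xi_k) on {xi_1+...+xi_k <= t};
    # the early break is justified because the xi's here are nonnegative.
    total, s = 0.0, 0.0  # s runs through S_k = xi_1 + ... + xi_k, with S_0 = 0
    for k, x in enumerate(responses):
        if s > t:
            break
        total += x(t - s)
        s += xis[k]
    return total

xis = [random.expovariate(1.0) for _ in range(200)]
ones = [(lambda u: 1.0)] * 200  # X_k(t) identically 1
# counts the indices k >= 0 with S_k <= 5, i.e. the first exit index from (-inf, 5]
print(immigration_process(5.0, xis, ones))
```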
(a) To avoid introducing additional notation at this point, we only discuss perpetuities with nonnegative entries that are almost surely convergent series of the form $Y_\infty := \sum_{n \ge 1} \exp(T_n)$. It turns out that whenever the tail of the distribution of $Y_\infty$ is sufficiently heavy, the asymptotic behavior of $P\{Y_\infty > x\}$ as $x \to \infty$ is completely determined by that of $P\{\sup_{n \ge 1} T_n > \log x\}$. In particular, if the power or logarithmic moments of $\sup_{n \ge 1} \exp(T_n)$ are finite, so are those of $Y_\infty$; see Sections 1.3.1 and 2.1.4. A similar relation also exists between the finite-$n$-perpetuities $\sum_{k=1}^{n} \exp(T_k)$ and the maxima $\max_{1 \le k \le n} \exp(T_k)$, though this time with respect to weak convergence; see Section 2.2.
(b) The number of visits to $(-\infty, t]$ of the perturbed random walk is a certain random process with immigration evaluated at the point $t$. The moment results for general random processes with immigration derived in Section 3.4 are a key tool in the analysis of the moments of the numbers of visits (see Section 1.4 for the latter).
As has already been mentioned, the random processes treated here allow for numerous applications. The fourth purpose of the book is to add two lesser-known examples to the list of possible applications. In Chapter 4 we show that a criterion for the finiteness of perpetuities can be used to prove an ultimate version of Biggins' martingale convergence theorem, which is concerned with the intrinsic martingales in supercritical branching random walks. For the proof, we describe and exploit an interesting connection between these at first glance unrelated models, which emerges when studying the weighted random tree associated with the branching random walk under the so-called size-biased measure. In Chapter 5 we investigate weak convergence of the number of empty boxes in the Bernoulli sieve, which is a random allocation scheme generated by a multiplicative random walk and a uniform sample on $[0,1]$. We demonstrate that the problem amounts to studying weak convergence of a particular random process with immigration, which is actually an operator defined on a particular perturbed random walk. We emphasize that the connection between the Bernoulli sieve and certain random processes with immigration remains veiled unless we consider the Bernoulli sieve as an occupancy scheme in a random environment and analyze the functionals in question conditionally on the environment.

I close this preface with thanks and acknowledgments. I thank my family for all their love and for creating a nice working atmosphere, both at home and at the dacha, where most of the research and writing of this monograph was done. The other portion
of the research underlying the present document was mostly undertaken during my
frequent visits to Münster under the generous support of the University of Münster
and DFG SFB 878 “Geometry, Groups and Actions.” I thank Gerold Alsmeyer for
making these visits to Münster possible and always being ready to help. Matthias
Meiners helped in arranging my visits to Münster, too, which is highly appreciated. I
thank Oleg Zakusylo, my former supervisor, for all-round support at various stages
of my scientific career. I thank my colleagues and friends (in alphabetical order)
Gerold Alsmeyer, Darek Buraczewski, Sasha Gnedin, Zakhar Kabluchko, Sasha
Marynych, Matthias Meiners, Andrey Pilipenko, Uwe Rösler, Zhora Shevchenko,
and Vladimir Vatutin, in collaboration with whom many of the results presented
in this book were originally obtained. Sasha Marynych scrutinized the entire book
and found many typos, inconsistencies, and other blunders of mine. Apart from
this, I owe special thanks to Sasha for his ability to be helpful almost at any time.
I am grateful to Darek Buraczewski, Matthias Meiners, Andrey Pilipenko, and Igor Samoilenko, who read some chapters of the book, gave me useful advice concerning the presentation, and detected several errors.

Kyiv, Ukraine
Alexander Iksanov


Contents

1 Perturbed Random Walks  1
  1.1 Definition and Relation to Other Models  1
  1.2 Global Behavior  4
  1.3 Supremum of the Perturbed Random Walk  6
    1.3.1 Distributional Properties  6
    1.3.2 Proofs for Section 1.3.1  8
    1.3.3 Weak Convergence  20
    1.3.4 Proofs for Section 1.3.3  22
  1.4 First-Passage Time and Related Quantities for the Perturbed Random Walk  29
  1.5 Proofs for Section 1.4  32
  1.6 Bibliographic Comments  41
2 Perpetuities  43
  2.1 Convergent Perpetuities  44
    2.1.1 Criterion of Finiteness  44
    2.1.2 Examples of Perpetuities  45
    2.1.3 Continuity of Perpetuities  52
    2.1.4 Moments of Perpetuities  57
    2.1.5 Proofs for Section 2.1.4  59
  2.2 Weak Convergence of Divergent Perpetuities  65
  2.3 Proofs for Section 2.2  68
  2.4 Bibliographic Comments  83
3 Random Processes with Immigration  87
  3.1 Definition  87
  3.2 Limit Theorems Without Scaling  89
    3.2.1 Stationary Random Processes with Immigration  90
    3.2.2 Weak Convergence  91
    3.2.3 Applications of Theorem 3.2.1  93
    3.2.4 Proofs for Section 3.2.2  96
  3.3 Limit Theorems with Scaling  109
    3.3.1 Our Approach  109
    3.3.2 Weak Convergence of the First Summand in (3.36)  113
    3.3.3 Weak Convergence of the Second Summand in (3.36)  114
    3.3.4 Scaling Limits of Random Processes with Immigration  119
    3.3.5 Applications  122
    3.3.6 Properties of the Limit Processes  127
    3.3.7 Proofs for Sections 3.3.2 and 3.3.3  138
    3.3.8 Proofs for Section 3.3.4  159
  3.4 Moment Results  168
  3.5 Proofs for Section 3.4  170
  3.6 Bibliographic Comments  175
4 Application to Branching Random Walk  179
  4.1 Definition of Branching Random Walk  179
  4.2 Criterion for Uniform Integrability of $W_n$ and Moment Result  181
  4.3 Size-Biasing and Modified Branching Random Walk  183
  4.4 Connection with Perpetuities  185
  4.5 Proofs for Section 4.2  187
  4.6 Bibliographic Comments  188
5 Application to the Bernoulli Sieve  191
  5.1 Weak Convergence of the Number of Empty Boxes  191
  5.2 Poissonization and De-Poissonization  195
  5.3 Nonincreasing Markov Chains and Random Recurrences  202
  5.4 Proofs for Section 5.1  203
  5.5 Bibliographic Comments  207
6 Appendix  209
  6.1 Regular Variation  209
  6.2 Renewal Theory  210
    6.2.1 Basic Facts  210
    6.2.2 Direct Riemann Integrability and the Key Renewal Theorem  212
    6.2.3 Relatives of the Key Renewal Theorem  218
    6.2.4 Strong Approximation of the Stationary Renewal Process  226
  6.3 Ordinary Random Walks  226
  6.4 Miscellaneous Results  232
  6.5 Bibliographic Comments  235
Bibliography  237
Index  249
List of Notation

$\mathbb{Z}$ – the set of integers
$\mathbb{N}$ – the set of positive integers; $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$
$\mathbb{R}$ – the set of real numbers; $\mathbb{R}_+ = [0, \infty)$; $\mathbb{R}^2 = \mathbb{R} \times \mathbb{R}$; $\mathbb{R}^2_+ = (0, \infty) \times (0, \infty)$
$x \wedge y = \min(x, y)$; $x \vee y = \max(x, y)$
$\varepsilon_x$ – probability measure concentrated at $x$
i.i.d. – independent and identically distributed
dRi – directly Riemann integrable
$\Gamma(x) = \int_0^\infty y^{x-1} e^{-y}\,dy$, $x > 0$ – Euler's gamma function
$D(I)$ – the Skorokhod space of real-valued right-continuous functions which are defined on the interval $I$ (typical examples are $I = \mathbb{R}$, $I = [0, \infty)$, $I = (0, \infty)$, and $I = [a, b]$ for finite $a < b$) and have finite limits from the left
$D = D[0, \infty)$
‡ $\in D$ is defined by ‡$(t) = t$ for $t \ge 0$
„ $\in D$ is defined by „$(t) = 0$ for $t \ge 0$
$M_p$ – the set of locally finite point measures on $[0, \infty) \times (-\infty, \infty]$
$M_p^+$ – the set of locally finite point measures on $[0, \infty) \times (0, \infty]$
$M_p^*$ – the set of $\nu \in M_p^+$ which satisfy $\nu([0, T] \times (0, \infty]) < \infty$ for all $T > 0$
$N_p$ – the set of locally finite point measures on $(-\infty, \infty]$

$(\xi_k, \eta_k)$ – i.i.d. $\mathbb{R}^2$-valued random vectors
$(S_n)_{n \in \mathbb{N}_0}$ – zero-delayed ordinary random walk with increments $\xi_k$
For $x \in \mathbb{R}$, $\tau(x) = \inf\{k \in \mathbb{N} : S_k > x\}$, $\tau = \tau(0)$;
$\tau_w(x) = \inf\{k \in \mathbb{N} : S_k \ge x\}$, $\tau_w = \tau_w(0)$;
$\sigma(x) = \inf\{k \in \mathbb{N} : S_k < x\}$, $\sigma = \sigma(0)$;
$\sigma_w(x) = \inf\{k \in \mathbb{N} : S_k \le x\}$, $\sigma_w = \sigma_w(0)$;
$N(x) = \#\{n \in \mathbb{N}_0 : S_n \le x\}$;
$\rho(x) = \sup\{n \in \mathbb{N}_0 : S_n \le x\}$, with the usual conventions that $\sup \emptyset = 0$ and $\inf \emptyset = \infty$
$(\tau_n)_{n \in \mathbb{N}_0}$ – the sequence of strictly increasing ladder epochs of $(S_n)_{n \in \mathbb{N}_0}$ defined by $\tau_0 = 0$, $\tau_1 = \tau$, and $\tau_n = \inf\{k > \tau_{n-1} : S_k > S_{\tau_{n-1}}\}$ for $n \ge 2$
$(T_n)_{n \in \mathbb{N}}$ – perturbed random walk defined by $T_n = S_{n-1} + \eta_n$

For $x \in \mathbb{R}$, $\tau^*(x) = \inf\{n \in \mathbb{N} : T_n > x\}$;
$N^*(x) = \#\{n \in \mathbb{N} : T_n \le x\}$;
$\rho^*(x) = \sup\{n \in \mathbb{N} : T_n \le x\}$
$N^{(a,b)} = \sum_k \varepsilon_{(t_k^{(a,b)},\, j_k^{(a,b)})}$ for $a, b > 0$ – a Poisson random measure on $[0, \infty) \times (0, \infty]$ with intensity measure $\mathrm{LEB} \times \mu_{a,b}$, where $\varepsilon_{(t,x)}$ is the probability measure concentrated at $(t, x) \in [0, \infty) \times (0, \infty]$, $\mathrm{LEB}$ is the Lebesgue measure on $[0, \infty)$, and $\mu_{a,b}$ is the measure on $(0, \infty]$ defined by $\mu_{a,b}((x, \infty]) = a x^{-b}$ for $x > 0$
For $\beta > 1$, $V_\beta$ is a Gaussian process introduced in Definition 3.3.4
$S_2$ – a standard Brownian motion
$S_\alpha$ for $1 < \alpha < 2$ – a spectrally negative $\alpha$-stable Lévy process with the characteristic function (3.38)
For $\alpha \in (1, 2]$ and $\rho > -1/\alpha$, $\rho \ne 0$: $I_{\alpha,\rho}(0) := 0$, $I_{\alpha,\rho}(u) = \int_{[0,u]} (u-y)^\rho\, dS_\alpha(y)$ for $u > 0$; $I_{\alpha,0}(u) := S_\alpha(u)$ for $u \ge 0$
$W_\alpha$ for $\alpha \in (0, 1)$ – an $\alpha$-stable subordinator with $-\log E \exp(-z W_\alpha(t)) = \Gamma(1-\alpha) t z^\alpha$ for $z \ge 0$
$W_\alpha^{\leftarrow}$ for $\alpha \in (0, 1)$ – an inverse $\alpha$-stable subordinator
For $\alpha \in (0, 1)$ and $\rho \in \mathbb{R}$, $J_{\alpha,\rho}(0) = 0$, $J_{\alpha,\rho}(u) = \int_{[0,u]} (u-y)^\rho\, dW_\alpha^{\leftarrow}(y)$ for $u > 0$
For $\alpha \in (0, 1)$ and $\beta \in \mathbb{R}$, $Z_{\alpha,\beta}$ is a process introduced in Definition 3.3.8

$\stackrel{d}{=}$ – equality of one-dimensional distributions
$X \stackrel{d}{\le} Y$ means that $P\{X > z\} \le P\{Y > z\}$ for all $z \in \mathbb{R}$
$\stackrel{d}{\to}$ – convergence in distribution of random variables or random vectors
$V_t(u) \stackrel{\mathrm{f.d.}}{\Longrightarrow} V(u)$ as $t \to \infty$ – weak convergence of finite-dimensional distributions, i.e., for any $n \in \mathbb{N}$ and any $0 < u_1 < u_2 < \ldots < u_n < \infty$,

$$(V_t(u_1), \ldots, V_t(u_n)) \stackrel{d}{\to} (V(u_1), \ldots, V(u_n)), \qquad t \to \infty.$$

$\stackrel{P}{\to}$ – convergence in probability
$\Rightarrow$ – convergence in distribution in a function space
$f_1(t) \sim f_2(t)$ as $t \to A$ means that $\lim_{t \to A} (f_1(t)/f_2(t)) = 1$
We stipulate hereafter that all unspecified limit relations hold as $t \to \infty$ or $n \to \infty$; which of the alternatives prevails should be clear from the context.
Chapter 1
Perturbed Random Walks

1.1 Definition and Relation to Other Models

Let $(\xi_k, \eta_k)_{k \in \mathbb{N}}$ be a sequence of i.i.d. two-dimensional random vectors with generic copy $(\xi, \eta)$. No condition is imposed on the dependence structure between $\xi$ and $\eta$. Let $(S_n)_{n \in \mathbb{N}_0}$ be the zero-delayed ordinary random walk with increments $\xi_n$ for $n \in \mathbb{N}$, i.e., $S_0 = 0$ and $S_n = \xi_1 + \ldots + \xi_n$, $n \in \mathbb{N}$. Then define its perturbed variant $(T_n)_{n \in \mathbb{N}}$, which we call a perturbed random walk (PRW), by

$$T_n := S_{n-1} + \eta_n, \qquad n \in \mathbb{N}. \tag{1.1}$$

Recently the PRW has become a very popular object of research; a number of references to recent publications will be given in Section 1.6. Functionals of PRWs appear in several areas of applied probability, as demonstrated by the following examples.
Perpetuities Provided that $\sum_{n \ge 1} e^{T_n}$ is a.s. convergent, this sum is called a perpetuity due to its interpretation as a sum of discounted payment streams in insurance and finance. Perpetuities have received an enormous amount of attention, which by now has led to a more or less complete theory; a part of it is presented in Chapter 2.
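For a quick numerical impression one can truncate the series; a minimal sketch under an arbitrary illustrative choice of laws (uniform $\xi$ with negative mean, uniform $\eta$), not taken from the book:

```python
import math
import random

random.seed(2)

def perpetuity(num_terms):
    # Partial sum of sum_{n>=1} exp(T_n) with T_n = S_{n-1} + eta_n.
    # Here E xi = -0.5 < 0 and eta is bounded, so T_n -> -inf a.s.
    # and the truncated sum stabilizes quickly.
    s, total = 0.0, 0.0
    for _ in range(num_terms):
        eta = random.uniform(0.0, 1.0)   # eta_n
        total += math.exp(s + eta)       # exp(T_n), with s = S_{n-1}
        s += random.uniform(-1.5, 0.5)   # xi_n
    return total

print(perpetuity(500))
```

Since $T_n \approx -n/2$ for large $n$ in this toy setting, terms beyond a few hundred are invisible at machine precision, so the truncation error is negligible.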
The Bernoulli Sieve Let $R := (R_k)_{k \in \mathbb{N}_0}$ be a multiplicative random walk defined by

$$R_0 := 1, \qquad R_k := \prod_{i=1}^{k} W_i, \quad k \in \mathbb{N},$$

where $(W_k)_{k \in \mathbb{N}}$ are independent copies of a random variable $W$ taking values in the open interval $(0, 1)$. Also, let $(U_j)_{j \in \mathbb{N}}$ be independent random variables which are independent of $R$ and have the uniform distribution on $[0, 1]$. A random allocation scheme in which 'balls' $U_1$, $U_2$, etc. are allocated over an infinite array of 'boxes' $(R_k, R_{k-1}]$, $k \in \mathbb{N}$, is called a Bernoulli sieve.

Since a particular ball falls into the box $(R_k, R_{k-1}]$ with random probability

$$P_k := R_{k-1} - R_k = W_1 W_2 \cdot \ldots \cdot W_{k-1}(1 - W_k), \tag{1.2}$$

the Bernoulli sieve is also the classical infinite allocation scheme with the random frequencies $(P_k)_{k \in \mathbb{N}}$. In this setting it is assumed that, given the random frequencies $(P_k)$, the balls are allocated over an infinite collection of the boxes $(R_1, R_0], (R_2, R_1], \ldots$ independently, with probability $P_j$ of hitting box $j$. Assuming that the number of balls equals $n$, denote by $K_n$ the number of nonempty boxes and by $L_n$ the number of empty boxes within the occupancy range.
From the very definition it is clear that the Bernoulli sieve is connected with $(\widehat{T}_k)_{k \in \mathbb{N}}$, the PRW generated by the couples $(|\log W_k|, |\log(1 - W_k)|)_{k \in \mathbb{N}}$. For instance, the logarithmic size of the largest box in the Bernoulli sieve equals $\log \sup_{k \ge 1}(W_1 W_2 \cdot \ldots \cdot W_{k-1}(1 - W_k)) = \sup_{k \ge 1}(-\widehat{T}_k)$, the supremum of the PRW $(-\widehat{T}_k)$. There is a deeper relation between the Bernoulli sieve and the PRW $(\widehat{T}_k)$. In particular, it was proved in [100] that the weak convergence of $K_n$, properly normalized and centered, is completely determined by the weak convergence of

$$\widehat{N}(x) := \#\{k \in \mathbb{N} : P_k \ge e^{-x}\} = \#\{k \in \mathbb{N} : W_1 \cdot \ldots \cdot W_{k-1}(1 - W_k) \ge e^{-x}\}, \qquad x > 0,$$

again properly normalized and centered. Notice that $\widehat{N}(x)$ is the number of visits to $(-\infty, x]$ by $(\widehat{T}_k)$. Whenever $E|\log(1 - W)| = \infty$, there is a similar correspondence between $L_n$ and

$$\#\{k \in \mathbb{N} : W_1 \cdot \ldots \cdot W_{k-1}(1 - W_k) < e^{-x}\} - \#\{k \in \mathbb{N} : W_1 \cdot \ldots \cdot W_{k-1} < e^{-x}\}.$$

This will be discussed in depth in Chapter 5.
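A toy simulation of the sieve shows how $K_n$ and $L_n$ arise; taking $W$ uniform on $(0,1)$ is an arbitrary illustrative choice, not from the book:

```python
import random

random.seed(3)

def bernoulli_sieve(n_balls, n_boxes=60):
    # boxes are (R_k, R_{k-1}] with R_k = W_1 * ... * W_k; with 60 boxes
    # the leftover mass R_60 is numerically negligible here
    r = [1.0]
    for _ in range(n_boxes):
        r.append(r[-1] * random.uniform(0.0, 1.0))
    counts = [0] * n_boxes
    for _ in range(n_balls):
        u = random.random()  # a 'ball'
        for k in range(n_boxes):
            if r[k + 1] < u <= r[k]:
                counts[k] += 1
                break
    occupied = [k for k, c in enumerate(counts) if c > 0]
    k_n = len(occupied)  # number of nonempty boxes
    # empty boxes within the occupancy range (up to the last occupied box)
    l_n = sum(1 for k in range(max(occupied) + 1) if counts[k] == 0)
    return k_n, l_n

print(bernoulli_sieve(100))
```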


The GEM Distribution and the Poisson–Dirichlet Distribution Random discrete probability distributions with frequencies $(P_k) = (\exp(-\widehat{T}_k))$ given by (1.2) are called residual allocation or stick-breaking models; see p. 119 in [17] and p. 89 in [31]. In the most popular and analytically best tractable case, when $W$ has a beta distribution with parameters $\theta > 0$ and 1, i.e., $P\{W \in dx\} = \theta x^{\theta-1} 1_{(0,1)}(x)\,dx$, $(P_k)$ follows the GEM (Griffiths–Engen–McCloskey) distribution with parameter $\theta$. Rearranging the components of $(P_k)$ in nonincreasing order gives us a vector having the Poisson–Dirichlet distribution with parameter $\theta$. The Poisson–Dirichlet distribution and the GEM distribution (with parameter $\theta$) are important objects in the theory of random combinatorial structures. To illustrate this point we only mention that the former (the latter) is the distributional limit of the sequence of large (ordered) cycles in the so-called $\theta$-biased permutations (Corollary 5.11 and p. 107 in [17], respectively).
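The stick-breaking mechanism is straightforward to simulate; a sketch using the elementary fact that a Beta$(\theta, 1)$ variable can be sampled as $U^{1/\theta}$ for $U$ uniform on $(0, 1)$:

```python
import random

random.seed(4)

def gem_frequencies(theta, n=40):
    # P_k = R_{k-1} * (1 - W_k) with W_k ~ Beta(theta, 1) and R_k = W_1 ... W_k
    remaining, freqs = 1.0, []  # remaining plays the role of R_{k-1}
    for _ in range(n):
        w = random.random() ** (1.0 / theta)  # Beta(theta, 1) sample
        freqs.append(remaining * (1.0 - w))
        remaining *= w
    return freqs

p = gem_frequencies(theta=1.0)
# sorting p in nonincreasing order gives (a truncation of) a Poisson-Dirichlet vector
print(sum(p), sorted(p, reverse=True)[0])
```

The frequencies sum to $1 - R_{40}$, which is numerically indistinguishable from 1 here since $R_{40}$ is astronomically small.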

Processes with Regenerative Increments Let $V := (V(t))_{t \ge 0}$ be a càdlàg process starting at zero and drifting to $-\infty$ a.s. Suppose there exists an ordinary random walk $(\gamma_n)_{n \in \mathbb{N}}$ such that the segments (also called cycles)

$$(V(t))_{0 \le t < \gamma_1}, \quad (V(\gamma_1 + t) - V(\gamma_1))_{0 \le t < \gamma_2 - \gamma_1}, \ \ldots$$

are i.i.d. Then $V$ may be called a process with regenerative increments. For a simple example, take a Lévy process $V$ with negative mean and $\gamma_n = n$. For $n \in \mathbb{N}$, put

$$\xi_n := V(\gamma_n) - V(\gamma_{n-1}) \quad\text{and}\quad \eta_n := \sup_{\gamma_{n-1} \le t < \gamma_n} V(t) - V(\gamma_{n-1}).$$

Then $(\xi_k, \eta_k)_{k \in \mathbb{N}}$ are i.i.d., and

$$\sup_{t \ge 0} V(t) = \sup_{n \ge 1}(\xi_1 + \ldots + \xi_{n-1} + \eta_n),$$

i.e., the supremum of a process with regenerative increments can be represented as the supremum of an appropriate PRW.
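The displayed identity can be verified on a discretized toy example; the construction below (unit-length cycles sampled on a grid, uniform increments with negative drift) is an arbitrary illustration, not the book's Lévy example:

```python
import random

random.seed(5)

m, n_cycles = 50, 30        # grid points per cycle, number of cycles
path = [0.0]                # V on the grid t = 0, 1/m, 2/m, ...
for _ in range(n_cycles * m):
    path.append(path[-1] + random.uniform(-1.0, 0.8) / m)  # negative drift

# xi_n = V(n) - V(n-1); eta_n = sup over the nth cycle minus V(n-1)
xi = [path[n * m] - path[(n - 1) * m] for n in range(1, n_cycles + 1)]
eta = [max(path[(n - 1) * m : n * m]) - path[(n - 1) * m]
       for n in range(1, n_cycles + 1)]

prw_sup = max(sum(xi[:n - 1]) + eta[n - 1] for n in range(1, n_cycles + 1))
print(max(path[:-1]), prw_sup)  # the two suprema agree up to float rounding
```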
The supremum of the PRW is a relatively simple functional that has received
considerable attention in the literature. The corresponding results concerning
finiteness, existence of moments, and tail behavior will be presented in Section 1.3.
Queues and Branching Processes Suppose that $\xi$ and $\eta$ are both positive and define

$$Y(t) := \sum_{k \ge 0} 1_{\{S_k + \eta_{k+1} \le t\}} \quad\text{and}\quad Y^*(t) := \sum_{k \ge 0} 1_{\{S_k \le t < S_k + \eta_{k+1}\}}, \qquad t \ge 0.$$

In a $GI/G/\infty$ queueing system, where customers arrive at times $S_0 = 0 < S_1 < S_2 < \ldots$ and are immediately served by one of infinitely many idle servers, the service time of the $k$th customer being $\eta_{k+1}$, $Y(t)$ gives the number of customers served up to and including time $t \ge 0$, whereas $Y^*(t)$ gives the number of busy servers at time $t$. Another interpretation of $Y^*(t)$ emerges in the context of a degenerate pure-immigration Bellman–Harris branching process in which each individual is sterile, immigration occurs at the epochs $S_1$, $S_2$, etc., and the lifetimes of the ancestor and the subsequent immigrants are $\eta_1, \eta_2, \ldots$. Then $Y^*(t)$ gives the number of particles alive at time $t \ge 0$. The process $(Y^*(t))_{t \ge 0}$ was also used to model the number of active sessions in a computer network [186, 212]. Also, let us note that $Y(t)$ is the number of visits to the interval $[0, t]$ of the perturbed random walk $(S_n + \eta_{n+1})$ (without the assumption that $\xi$ and $\eta$ are positive, $Y(t)$ will be investigated in depth in Section 1.4), and $Y^*(t)$ is the difference between the number of visits to $[0, t]$ of the ordinary random walk $(S_n)$ and of $(S_n + \eta_{n+1})$. Some weak convergence results for $Y(t)$ and $Y^*(t)$ can be found in Example 3.3.1 and Theorem 3.3.21, respectively, as specializations of more general theorems obtained in Chapter 3.
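Both counts are easy to evaluate along one realization; a sketch with exponential interarrival and service times (arbitrary illustrative rates, not from the book):

```python
import random

random.seed(6)

def served_and_busy(t, pairs):
    # Y(t)  = #{k >= 0 : S_k + eta_{k+1} <= t}      (customers already served)
    # Y*(t) = #{k >= 0 : S_k <= t < S_k + eta_{k+1}} (busy servers)
    served = busy = 0
    s = 0.0  # S_k
    for xi, eta in pairs:
        if s > t:        # eta > 0, so later k contribute to neither count
            break
        if s + eta <= t:
            served += 1
        else:
            busy += 1
        s += xi
    return served, busy

pairs = [(random.expovariate(1.0), random.expovariate(0.5)) for _ in range(500)]
y, y_star = served_and_busy(20.0, pairs)
print(y, y_star)  # Y(t) + Y*(t) = number of visits of (S_n) to [0, t]
```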

Alternating Renewal Process Let $(\eta_k^{(1)}, \eta_k^{(2)})_{k \in \mathbb{N}}$ be a sequence of i.i.d. copies of a $[0, \infty) \times [0, \infty)$-valued random vector $(\eta^{(1)}, \eta^{(2)})$. The sequence $\eta_1^{(1)}$, $\eta_1^{(1)} + \eta_1^{(2)}$, $\eta_1^{(1)} + \eta_1^{(2)} + \eta_2^{(1)}, \ldots$ is sometimes called an alternating renewal process. Related to this sequence is a perturbed random walk $(T_k)_{k \in \mathbb{N}}$ with $\xi = \eta^{(1)} + \eta^{(2)}$ and $\eta = \eta^{(1)}$, which is especially simple, for it forms a nondecreasing sequence. This is in contrast to general perturbed random walks, which do not normally possess such a property.
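The monotonicity is immediate to check numerically, since $T_{k+1} - T_k = \eta_k^{(2)} + \eta_{k+1}^{(1)} \ge 0$; a sketch with exponential $\eta^{(1)}, \eta^{(2)}$ (arbitrary illustrative rates):

```python
import random

random.seed(7)

pairs = [(random.expovariate(1.0), random.expovariate(2.0)) for _ in range(100)]

t_vals, s = [], 0.0
for eta1, eta2 in pairs:
    t_vals.append(s + eta1)  # T_k = S_{k-1} + eta_k with eta_k = eta1_k
    s += eta1 + eta2         # xi_k = eta1_k + eta2_k

print(all(a <= b for a, b in zip(t_vals, t_vals[1:])))  # prints True
```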

1.2 Global Behavior

We assume throughout the section the following

Standing Assumption: $P\{\xi = 0\} < 1$ and $P\{\eta = 0\} < 1$.

It is well known that a nontrivial zero-delayed ordinary random walk $(S_n)_{n \in \mathbb{N}_0}$ (i.e., a random walk starting at the origin with increment distribution not degenerate at 0) exhibits one of the following three regimes:

1) drift to $+\infty$ (positive divergence): $\lim_{n \to \infty} S_n = \infty$ a.s.;
2) drift to $-\infty$ (negative divergence): $\lim_{n \to \infty} S_n = -\infty$ a.s.;
3) oscillation: $\liminf_{n \to \infty} S_n = -\infty$ and $\limsup_{n \to \infty} S_n = \infty$ a.s.

PRWs exhibit the same trichotomy. In order to state the result precisely some further notation is needed. As usual, let¹ $\xi^+ = \xi \vee 0$ and $\xi^- = (-\xi) \vee 0 = -(\xi \wedge 0)$. Then, for $x > 0$, define

$$A_\pm(x) := \int_0^x P\{\pm\xi > y\}\,dy = E(\xi^\pm \wedge x) \quad\text{and}\quad J_\pm(x) := \frac{x}{A_\pm(x)} \tag{1.3}$$

whenever the denominators are nonzero. Notice that $J_\pm(x)$ for $x > 0$ is well defined if, and only if, $P\{\xi^\pm > 0\} > 0$. In this case, we set $J_\pm(0) := 1/P\{\xi^\pm > 0\}$. The following theorem, though not stated explicitly there, can be read off from Theorem 2.1 in [109].
Theorem 1.2.1 Any PRW $(T_n)_{n \in \mathbb{N}}$ satisfying the standing assumption is either positively divergent, negatively divergent, or oscillating. Positive divergence takes place if, and only if,

$$\lim_{n \to \infty} S_n = \infty \ \text{a.s.} \quad\text{and}\quad E J_+(\eta^-) < \infty, \tag{1.4}$$
¹We use $x \vee y$ or $\max(x, y)$, and $x \wedge y$ or $\min(x, y)$, interchangeably, depending on typographical convenience.

while negative divergence takes place if, and only if,

$$\lim_{n \to \infty} S_n = -\infty \ \text{a.s.} \quad\text{and}\quad E J_-(\eta^+) < \infty. \tag{1.5}$$

Oscillation occurs in the remaining cases, thus if, and only if, either

$$-\infty = \liminf_{n \to \infty} S_n < \limsup_{n \to \infty} S_n = \infty \ \text{a.s.},$$

or

$$\lim_{n \to \infty} S_n = \infty \ \text{a.s.} \quad\text{and}\quad E J_+(\eta^-) = \infty,$$

or

$$\lim_{n \to \infty} S_n = -\infty \ \text{a.s.} \quad\text{and}\quad E J_-(\eta^+) = \infty.$$

Remark 1.2.2 As a consequence of Theorem 1.2.1, it should be observed that a PRW $(T_n)$ may oscillate even if the corresponding ordinary random walk $(S_n)$ drifts to $\pm\infty$.

Remark 1.2.3 There are three distinct cases in which conditions (1.5) hold:

(A1) $E\xi \in (-\infty, 0)$ and $E\eta^+ < \infty$;
(A2) $E\xi = -\infty$ and $E J_-(\eta^+) < \infty$;
(A3) $E\xi^+ = E\xi^- = \infty$, $E J_-(\xi^+) < \infty$ and $E J_-(\eta^+) < \infty$.

For further reference we state explicitly that $(T_n)$ is negatively divergent whenever $\lim_{n \to \infty} S_n = -\infty$ a.s. and $E\eta^+ < \infty$. This is a consequence of $J_-(x) = O(x)$ as $x \to \infty$.
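The contrast between case (A1) and the oscillating regime can be made visible by simulation; a sketch with illustrative laws not from the book. Here $\xi$ is uniform on $(-1.5, 0.5)$, so $E\xi = -1/2$ and $S_n \to -\infty$; a light-tailed $\eta$ gives case (A1), while a Pareto $\eta$ with $E\eta^+ = \infty$ (hence $E J_-(\eta^+) = \infty$, since $J_-$ grows linearly here) makes the PRW oscillate:

```python
import random

random.seed(8)

def prw_path(n, eta_sampler):
    s, path = 0.0, []
    for _ in range(n):
        path.append(s + eta_sampler())   # T_k = S_{k-1} + eta_k
        s += random.uniform(-1.5, 0.5)   # xi_k, E xi = -0.5
    return path

light = prw_path(10_000, lambda: random.expovariate(1.0))   # E eta^+ < inf
heavy = prw_path(10_000, lambda: random.random() ** -2.0)   # P{eta > x} = x**-0.5

# negatively divergent path vs. a path repeatedly pushed above high levels
print(max(light[-100:]), max(heavy))
```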
Proof of Theorem 1.2.1 By the equivalence (2.1)⇔(2.3) of Theorem 2.1 in [109], conditions (1.5) are necessary and sufficient² for $\lim_{n \to \infty} T_n = -\infty$ a.s., and thus, by symmetry, (1.4) is equivalent to $\lim_{n \to \infty} T_n = \infty$ a.s. By the implication (2.2)⇒(2.3) of Theorem 2.1 in [109], $\limsup_{n \to \infty} T_n < \infty$ a.s. entails $\lim_{n \to \infty} T_n = -\infty$ a.s. This proves the remaining assertions. □

²To give a better feeling for the result, consider the simplest situation, when $E\xi \in (-\infty, 0)$ and $E\eta^+ < \infty$. Then, by the strong law of large numbers, $S_n$ drifts to $-\infty$ at a linear rate. On the other hand, $\lim_{n \to \infty} n^{-1}\eta_n^+ = 0$ a.s. by the Borel–Cantelli lemma, which shows that $\eta_n^+$ grows at most sublinearly. Combining the pieces shows $\lim_{n \to \infty}(S_{n-1} + \eta_n) = -\infty$ a.s.

1.3 Supremum of the Perturbed Random Walk

1.3.1 Distributional Properties

In this section we investigate various distributional properties of $T := \sup_{n \ge 1} T_n$, including a.s. finiteness, finiteness of power and exponential moments, and tail behavior. The moment results will be of great use in Sections 2.1.4 and 4.2.

First of all, it is clear that $T$ is a.s. finite if, and only if, $\lim_{n \to \infty} T_n = -\infty$ a.s. According to Theorem 1.2.1 the latter is equivalent to (1.5). Now we give a criterion for the finiteness of power-like moments.
Theorem 1.3.1 Let $(S_n)_{n \in \mathbb{N}_0}$ be a negatively divergent ordinary random walk and $f : \mathbb{R}_+ \to \mathbb{R}_+$ a measurable, locally bounded function regularly varying at $\infty$ of positive index (see Definition 6.1.2). Then the following assertions are equivalent:

$$E f(T^+) < \infty; \tag{1.6}$$

$$E f(\xi^+) J_-(\xi^+) < \infty \quad\text{and}\quad E f(\eta^+) J_-(\eta^+) < \infty \tag{1.7}$$

(see (1.3) for the definition of $J_-$);

$$E\Big[f\big((\sup_{1 \le n \le \sigma} T_n)^+\big)\, J_-\big((\sup_{1 \le n \le \sigma} T_n)^+\big)\Big] < \infty, \tag{1.8}$$

where $\sigma := \inf\{k \in \mathbb{N} : S_k < 0\}$.


Remark 1.3.2 Functions $f$ of interest in Theorem 1.3.1 are, for instance, $f(x) = x^\alpha \log_k x$ or $f(x) = x^\alpha \exp(\beta \log^\gamma x)$ for $\alpha > 0$, $\beta \ge 0$, $\gamma \in [0, 1)$, $k \in \mathbb{N}$ and large enough $x$, where $\log_k$ denotes the $k$-fold iteration of the logarithm.

Remark 1.3.3 When $\eta = 0$ a.s., the equivalence of (1.6), (1.7), and (1.8) follows from Theorem 6.3.1 given in the Appendix. The cited theorem states that the condition $E f(S_{\tau_w}) 1_{\{\tau_w < \infty\}} < \infty$, where $\tau_w := \inf\{k \in \mathbb{N} : S_k \ge 0\}$, is equivalent to the other conditions of the theorem. The inequality $E f(T_{\tau_w + 1}^+) 1_{\{\tau_w < \infty\}} < \infty$ is treated separately, in Proposition 1.3.4 below, because it is not equivalent to (1.6), (1.7), and (1.8).
Proposition 1.3.4 Let $(S_n)_{n\in\mathbb{N}_0}$ be negatively divergent. For a function $f$ as defined in Theorem 1.3.1 the following assertions are equivalent:

$$\mathbb{E} f(T^+_{\sigma_w+1})\mathbf{1}_{\{\sigma_w<\infty\}} < \infty; \qquad(1.9)$$

$$\mathbb{E} f(\xi^+)J(\xi^+) < \infty \quad\text{and}\quad \mathbb{E} f(\eta^+) < \infty. \qquad(1.10)$$
We proceed with a criterion for the finiteness of exponential moments.


Theorem 1.3.5 Let $a > 0$ and³ $\mathbb{P}\{\eta = -\infty\}\in[0,1)$. The following assertions are equivalent:

$$\mathbb{E} e^{aT^*} < \infty; \qquad(1.11)$$

$$\mathbb{E} e^{a\xi} < 1 \quad\text{and}\quad \mathbb{E} e^{a\eta} < \infty. \qquad(1.12)$$
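The sufficiency half of the theorem comes with the quantitative bound $\mathbb{E}e^{aT^*}\le \mathbb{E}e^{a\eta}\,(1-\mathbb{E}e^{a\xi})^{-1}$ derived in Section 1.3.2. A small Monte Carlo sketch (with illustrative uniform distributions of our own choosing) is consistent with it:

```python
import math, random

a = 1.0
rng = random.Random(7)

def T_star(n_steps):
    """Truncated supremum of T_n = S_{n-1} + eta_n."""
    s, best = 0.0, float("-inf")
    for _ in range(n_steps):
        eta = rng.uniform(0.0, 1.0)     # E e^{a eta} = e - 1 < infinity
        best = max(best, s + eta)
        s += rng.uniform(-2.0, 0.5)     # E e^{a xi} < 1, negative drift
    return best

# exact moments: E e^{a xi} = (e^{0.5} - e^{-2})/2.5 for xi ~ U(-2, 0.5)
m_xi = (math.exp(0.5) - math.exp(-2.0)) / 2.5
m_eta = math.e - 1.0                    # E e^{a eta} for eta ~ U(0, 1)
bound = m_eta / (1.0 - m_xi)
est = sum(math.exp(a * T_star(500)) for _ in range(4000)) / 4000
print(est, bound)                       # est should lie below the bound
```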

The subsection closes with two results on the tail behavior.


Theorem 1.3.6 Suppose that there exist positive $a$ and $\varepsilon$ such that

$$\mathbb{E} e^{a\xi} < 1 \quad\text{and}\quad \mathbb{E} e^{(a+\varepsilon)\xi} < \infty.$$

Then the following conditions are equivalent:

$$\mathbb{P}\{\eta > x\} \sim e^{-ax}\ell(e^x), \quad x\to\infty, \qquad(1.13)$$

where $\ell$ is slowly varying at $+\infty$ (see Definition 6.1.1), and

$$\mathbb{P}\{T^* > x\} \sim (1-\mathbb{E} e^{a\xi})^{-1} e^{-ax}\ell(e^x), \quad x\to\infty. \qquad(1.14)$$

Remark 1.3.7 The known Breiman⁴ theorem states that if $U$ and $V$ are nonnegative independent random variables such that $\mathbb{P}\{U > x\}$ is regularly varying at $+\infty$ of index $-\alpha$, $\alpha\ge 0$, and $\mathbb{E} V^{\alpha+\varepsilon} < \infty$ for some $\varepsilon > 0$, then

$$\mathbb{P}\{UV > x\} \sim \mathbb{E} V^\alpha\, \mathbb{P}\{U > x\}. \qquad(1.15)$$

It is known that in some cases, for instance, if $\mathbb{P}\{U > x\} \sim \mathrm{const}\, x^{-\alpha}$, relation (1.15) holds under the sole assumption $\mathbb{E} V^\alpha < \infty$ (see Lemma 2.1 in [110]). Thus, if $\ell$ in (1.13) is equivalent to a constant, the equivalence (1.13) $\Leftrightarrow$ (1.14) holds whenever $\mathbb{E} e^{a\xi} < 1$, irrespective of the condition $\mathbb{E} e^{(a+\varepsilon)\xi} < \infty$.
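Breiman's theorem is easy to see numerically. In the sketch below (a hypothetical example of our own), $U$ is Pareto with $\mathbb{P}\{U>x\}=x^{-\alpha}$, $x\ge 1$, and $V$ is lognormal, so $\mathbb{E}V^{\alpha+\varepsilon}<\infty$ and (1.15) applies:

```python
import math, random

rng = random.Random(3)
alpha = 2.0
n = 400_000
# U Pareto: P{U > x} = x^{-alpha}, x >= 1; V = e^Z with Z ~ N(0, 0.25)
samples = [((1.0 - rng.random()) ** (-1.0 / alpha),
            math.exp(rng.gauss(0.0, 0.5))) for _ in range(n)]
x = 20.0
lhs = sum(1 for u, v in samples if u * v > x) / n   # P{UV > x}, empirical
ev_alpha = math.exp(alpha ** 2 * 0.25 / 2)          # E V^alpha = e^{alpha^2 sigma^2 / 2}
rhs = ev_alpha * x ** (-alpha)                      # E V^alpha * P{U > x}
print(lhs, rhs)
```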
Recall that a distribution is called nonlattice if it is not concentrated on any lattice $\delta\mathbb{Z}$, $\delta > 0$. A distribution is called $\delta$-lattice if it is concentrated on the lattice $\delta\mathbb{Z}$ and not concentrated on any lattice $\delta_1\mathbb{Z}$ for $\delta_1 > \delta$.
Theorem 1.3.8 Suppose that there exists positive $a$ such that

$$\mathbb{E} e^{a\xi} = 1, \quad \mathbb{E} e^{a\xi}\xi^+ < \infty \quad\text{and}\quad \mathbb{E} e^{a\eta} < \infty. \qquad(1.16)$$

³ A strange assumption $\mathbb{P}\{\eta = -\infty\}\in[0,1)$ which is made here and in Lemma 1.3.12 is of principal importance for the proof of Theorem 2.1.5.
⁴ Actually, Breiman (Proposition 3 in [52]) only proved the result for $\alpha\in(0,1)$. The whole range $\alpha > 0$ was later covered by Corollary 3.6 (iii) in [70].
If the distribution of $\xi$ is nonlattice, then

$$\lim_{x\to\infty} e^{ax}\,\mathbb{P}\{T^* > x\} = C,$$

where $C := \dfrac{\mathbb{E}\big(e^{a\eta_1} - e^{a(\xi_1+T^{*\prime})}\big)\mathbf{1}_{\{\xi_1+T^{*\prime}\le\eta_1\}}}{a\,\mathbb{E}\xi e^{a\xi}} \in (0,\infty)$ and $T^{*\prime} := \sup_{n\ge 2}(T_n - \xi_1)$.

If the distribution of $\xi$ is $\delta$-lattice, then, for each $x\in\mathbb{R}$,

$$\lim_{k\to\infty} e^{(\delta k+x)a}\,\mathbb{P}\{T^* > \delta k + x\} = C(x)$$

for some positive $\delta$-periodic function $C(x)$.
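A simulation sketch of the nonlattice Cramér-type asymptotics (with distributions we picked so that the first condition of (1.16) holds with $a=1$; the truncation of $T^*$ and the finite sample are additional approximations):

```python
import math, random

rng = random.Random(11)

def T_star():
    # xi ~ N(-0.5, 1): E e^{xi} = 1 (Cramer condition with a = 1)
    # eta ~ Exp(rate 2): E e^{eta} = 2 < infinity
    s, best = 0.0, float("-inf")
    while s > -20.0:                  # once far below 0, later terms cannot matter
        best = max(best, s + rng.expovariate(2.0))
        s += rng.gauss(-0.5, 1.0)
    return best

n = 60_000
sups = [T_star() for _ in range(n)]
a = 1.0
ratios = [math.exp(a * x) * sum(1 for t in sups if t > x) / n
          for x in (2.0, 3.0, 4.0)]
print(ratios)   # roughly constant in x, approximating the constant C
```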

1.3.2 Proofs for Section 1.3.1

Lemma 1.3.9 given next collects some relevant properties of the functions $f$ introduced in Theorem 1.3.1.

Lemma 1.3.9 Let $f$ be a function as defined in Theorem 1.3.1. Then there exists a differentiable function $h$, nondecreasing on $\mathbb{R}^+$, which further satisfies $h(0) = 0$,

$$h(x+y) \le c\,(h(x)+h(y)), \qquad(1.17)$$

$$h(x+y)J(x+y) \le 2c\,(h(x)J(x)+h(y)J(y)) \qquad(1.18)$$

for all $x, y\ge 0$ and a constant $c > 0$, and $f(x)\sim h(x)$ as $x\to\infty$.

Proof By Theorem 1.8.2 in [44], there exists a differentiable function $g$ which is nondecreasing on $[c,\infty)$ with $g'(c) > 0$ for some $c > 0$ and satisfies $f(x)\sim g(x)$ as $x\to\infty$. The function $h(x) := g(c+x) - g(c)$ is differentiable and nondecreasing on $\mathbb{R}^+$ with $h(0) = 0$, $h'(0) > 0$ and $f(x)\sim h(x)$ as $x\to\infty$.

Now we check (1.17). Using the monotonicity of $h$ gives

$$\frac{h(x+y)}{h(x)+h(y)} \le \frac{h(2x)}{h(x)}$$

for $0\le y\le x$. While, as $x\to 0$, the right-hand side tends to $2$ because $h'(0) > 0$, as $x\to\infty$ it tends to $2^\alpha$, where $\alpha$ is the index of regular variation of $h$. Hence (1.17) holds whenever $0\le y\le x$. Exchanging $x$ and $y$ completes the proof of (1.17).

Inequality (1.18) is an immediate consequence of (1.17) and $J(2x)\le 2J(x)$ for $x\ge 0$. $\Box$
Lemma 1.3.10 and Lemma 1.3.11 are preparatory results for the proof of Theorem 1.3.1. We recall the notation $T^* = \sup_{n\ge 1} T_n$.

Lemma 1.3.10 If $\xi\le 0$ a.s., then for any $b > 0$ such that

$$a := \mathbb{P}\{T^*\le b\} > 0,$$

the function $V(x) := 1 + \sum_{n\ge 1}\mathbb{P}\{\max_{1\le k\le n} T_k\le b,\ S_n > -x\}$ satisfies

$$V(x)\ge a\,J(x) \qquad(1.19)$$

for each $x > 0$.

Proof For $x > 0$, put $S_0^{(x)} := 0$ and $S_n^{(x)} := \sum_{k=1}^n (\xi_k\vee(-x))$ for $n\in\mathbb{N}$. Let

$$\mathcal{T}_x := \inf\Big\{n\ge 1: S_n\le -x\ \text{or}\ \max_{1\le k\le n} T_k > b\Big\}.$$

Then

$$\mathbb{E}\mathcal{T}_x = \sum_{n\ge 1}\mathbb{P}\{\mathcal{T}_x\ge n\} = V(x)$$

and Wald's identity provides us with

$$-\mathbb{E} S^{(x)}_{\mathcal{T}_x} = \mathbb{E}(\xi^-\wedge x)\,\mathbb{E}\mathcal{T}_x = A(x)V(x), \quad x > 0, \qquad(1.20)$$

where $A(x) := \mathbb{E}\min(\xi^-, x) = x/J(x)$ (see (1.3)). Putting $B := \{T^*\le b\}$, we also have

$$x\mathbf{1}_B \le \big((-S_{\mathcal{T}_x})\wedge x\big)\mathbf{1}_B \le (-S_{\mathcal{T}_x})\wedge x \le -S^{(x)}_{\mathcal{T}_x}.$$

Consequently,

$$-\mathbb{E} S^{(x)}_{\mathcal{T}_x}\ge ax,$$

which in combination with (1.20) implies (1.19). $\Box$
Lemma 1.3.11 Suppose $\xi\le 0$ a.s. Let $f$ be the function defined in Theorem 1.3.1. Then

$$\mathbb{E} f\big((T^*)^+\big) < \infty \ \Rightarrow\ \mathbb{E} f(\eta^+)J(\eta^+) < \infty.$$

Proof We first note that the moment assumption and $\lim_{x\to\infty} f(x) = \infty$ together ensure $T^* < \infty$ a.s. Therefore, there exists a $b > 0$ such that $a = \mathbb{P}\{T^*\le b\} > 0$. In view of Lemma 1.3.9, in the following we can and do assume that $f$ is differentiable with $f'(x)\ge 0$ on $\mathbb{R}^+$.

Now fix any $c > b$ and infer for $x\ge b$ (with $V$ as in the previous lemma)

$$\mathbb{P}\{T^* > x\} = \mathbb{P}\{\eta_1 > x\} + \sum_{n\ge 1}\mathbb{P}\Big\{\max_{1\le k\le n} T_k\le x,\ T_{n+1} > x\Big\}$$
$$\ge \mathbb{P}\{\eta_1 > c+x\} + \sum_{n\ge 1}\mathbb{P}\Big\{\max_{1\le k\le n} T_k\le b,\ T_{n+1} > x,\ \eta_{n+1} > c+x\Big\}$$
$$\ge \int_{(c+x,\infty)}\Big(1 + \sum_{n\ge 1}\mathbb{P}\Big\{\max_{1\le k\le n} T_k\le b,\ S_n > x-y\Big\}\Big)\,d\mathbb{P}\{\eta\le y\}$$
$$= \mathbb{E} V(\eta-x)\mathbf{1}_{\{\eta>c+x\}} \ge a\,\mathbb{E} J(\eta-x)\mathbf{1}_{\{\eta>c+x\}},$$

the last inequality following from Lemma 1.3.10. With this at hand, we further obtain

$$\infty > \mathbb{E} f\big((T^*)^+\big) \ge \int_b^\infty f'(x)\,\mathbb{P}\{T^* > x\}\,dx \ge a\int_b^\infty f'(x)\,\mathbb{E} J(\eta-x)\mathbf{1}_{\{\eta>c+x\}}\,dx$$
$$= a\,\mathbb{E}\Big(\mathbf{1}_{\{\eta>b+c\}}\int_b^{\eta-c} f'(x)J(\eta-x)\,dx\Big) \ge a\,\mathbb{E}\Big(\mathbf{1}_{\{\eta>2c\}}\int_b^{\eta/2} f'(x)J(\eta-x)\,dx\Big)$$
$$\ge a\,\mathbb{E}\big(\mathbf{1}_{\{\eta>2c\}}(f(\eta/2)-f(b))J(\eta/2)\big) \ge 2^{-1}a\,\mathbb{E}\big(\mathbf{1}_{\{\eta>2c\}}(f(\eta/2)-f(b))J(\eta)\big),$$

having utilized $J(x/2)\ge J(x)/2$ for the last inequality. Recalling that $T^* < \infty$ a.s. ensures $\mathbb{E} J(\eta^+) < \infty$ by Theorem 1.2.1, we infer $\mathbb{E} f(\eta^+/2)J(\eta^+) < \infty$. The proof of Lemma 1.3.11 is complete because $f$ varies regularly. $\Box$
The proofs of the implication (1.6) $\Rightarrow$ (1.7) in Theorem 1.3.1 and of the implication (1.11) $\Rightarrow$ (1.12) in Theorem 1.3.5, as well as the proof of Theorem 1.3.8, are (partially) based on the following lemma.

Lemma 1.3.12 Let $\mathbb{P}\{\eta=-\infty\}\in[0,1)$. The following inequalities hold:

$$\mathbb{P}\{T^*>x\}\ge\mathbb{P}\{\eta>x\} \qquad(1.21)$$

for all $x\in\mathbb{R}$ and

$$\mathbb{P}\{T^*>x\}\ge\mathbb{P}\{\eta>y\}\,\mathbb{P}\Big\{\sup_{n\ge0}S_n>x-y\Big\} \qquad(1.22)$$

for all $x,y\in\mathbb{R}$. Furthermore, if $\Phi:[0,\infty)\to[0,\infty)$ is any nondecreasing, differentiable function, then

$$\mathbb{E}\Phi(\eta^+)\le\mathbb{E}\Phi\big((T^*)^+\big) \qquad(1.23)$$

and

$$\mathbb{E}\Phi\Big(\sup_{n\ge0}S_n\Big)\le\Phi(0)+c\,\mathbb{E}\Phi\big(c+(T^*)^+\big) \qquad(1.24)$$

for a constant $c\in(1,\infty)$ that does not depend on $\Phi$.

Proof Inequality (1.21), which is a consequence of $\{\eta_1>x\}\subseteq\{T^*>x\}$, immediately implies (1.23).

For any fixed $x,y\in\mathbb{R}$, put $\tau := \inf\{k\ge0: S_k>x-y\}$ with the usual convention $\inf\varnothing=\infty$. Note that $\{\sup_{n\ge0}S_n>x-y\} = \{\tau<\infty\}$ and $\{T^*>x\}\supseteq\{\tau<\infty,\ \eta_{\tau+1}>y\}$. Inequality (1.22) now follows from

$$\mathbb{P}\{T^*>x\}\ge\mathbb{P}\{\tau<\infty,\ \eta_{\tau+1}>y\} = \sum_{n\ge0}\mathbb{P}\{\tau=n,\ \eta_{n+1}>y\}$$
$$= \mathbb{P}\{\eta>y\}\sum_{n\ge0}\mathbb{P}\{\tau=n\} = \mathbb{P}\{\eta>y\}\,\mathbb{P}\{\tau<\infty\} = \mathbb{P}\{\eta>y\}\,\mathbb{P}\Big\{\sup_{n\ge0}S_n>x-y\Big\}.$$

In order to obtain (1.24), fix any $c>1$ such that $\mathbb{P}\{\eta>-c\}\ge1/c$. Then (1.22) with $y=-c$ provides us with

$$\mathbb{P}\{T^*+c>x\}\ge\mathbb{P}\Big\{\sup_{n\ge0}S_n>x\Big\}\Big/c$$

for $x\in\mathbb{R}$, which in combination with

$$\mathbb{E}\Phi\Big(\sup_{n\ge0}S_n\Big)-\Phi(0) = \int_0^\infty\Phi'(x)\,\mathbb{P}\Big\{\sup_{n\ge0}S_n>x\Big\}\,dx$$

finally gives (1.24). $\Box$
Proof of Theorem 1.3.1 By Lemma 1.3.9, the regularly varying $f$ can be assumed differentiable and nondecreasing on $\mathbb{R}^+$.

(1.6) $\Rightarrow$ (1.7). Use (1.24) with $\Phi = f$ to infer $\mathbb{E} f(\sup_{n\ge0} S_n) < \infty$. Now the implication (6.16) $\Rightarrow$ (6.18) of Theorem 6.3.1 entails $\mathbb{E} f(\xi^+)J(\xi^+) < \infty$. Further, we have

$$\mathbb{E} f\Big(\big(\sup_{n\ge1}(\xi_1\wedge0+\ldots+\xi_{n-1}\wedge0+\eta_n)\big)^+\Big) \le \mathbb{E} f\big((T^*)^+\big) < \infty,$$

and the finiteness of the left-hand side entails $\mathbb{E} f(\eta^+)J(\eta^+) < \infty$ by Lemma 1.3.11 because $\xi_k\wedge0\le0$ a.s.

(1.7) $\Rightarrow$ (1.8). By Lemma 1.3.9, we can assume that $f$ is nondecreasing and satisfies (1.18). Since $1/J(t) = \int_0^1\mathbb{P}\{\xi^- > xt\}\,dx$, we conclude that $J$ is nondecreasing with $\lim_{t\to\infty} J(t) = \infty$. Hence we can assume that $f\cdot J$ is nondecreasing. Since

$$\Big(\sup_{1\le k\le\sigma}(S_{k-1}+\eta_k)\Big)^+ \le \sup_{0\le k\le\sigma-1} S_k + \sup_{1\le k\le\sigma}\eta_k^+ \quad\text{a.s.},$$

we infer

$$f\Big(\big(\sup_{1\le k\le\sigma}(S_{k-1}+\eta_k)\big)^+\Big)\,J\Big(\big(\sup_{1\le k\le\sigma}(S_{k-1}+\eta_k)\big)^+\Big)$$
$$\le c\Big(f\big(\sup_{0\le k\le\sigma-1}S_k\big)J\big(\sup_{0\le k\le\sigma-1}S_k\big) + f\big(\sup_{1\le k\le\sigma}\eta_k^+\big)J\big(\sup_{1\le k\le\sigma}\eta_k^+\big)\Big) \quad\text{a.s.}$$

for some $c>0$, in view of (1.18). By the implication (6.18) $\Rightarrow$ (6.17) of Theorem 6.3.1, $\mathbb{E} f(\xi^+)J(\xi^+)<\infty$ entails $\mathbb{E} f(\sup_{0\le k\le\sigma-1}S_k)J(\sup_{0\le k\le\sigma-1}S_k)<\infty$. Further,

$$f\big(\sup_{1\le k\le\sigma}\eta_k^+\big)J\big(\sup_{1\le k\le\sigma}\eta_k^+\big) = \sup_{1\le k\le\sigma} f(\eta_k^+)J(\eta_k^+) \le \sum_{k=1}^{\sigma} f(\eta_k^+)J(\eta_k^+)\quad\text{a.s.}$$

Observe that $\sigma$ is a stopping time w.r.t. the filtration $(\mathcal{F}_n)_{n\in\mathbb{N}_0}$, where $\mathcal{F}_0 := \{\varnothing,\Omega\}$ and, for $n\in\mathbb{N}$, $\mathcal{F}_n$ is the $\sigma$-algebra generated by $(\xi_k,\eta_k)_{1\le k\le n}$. Hence

$$\mathbb{E} f\big(\sup_{1\le k\le\sigma}\eta_k^+\big)J\big(\sup_{1\le k\le\sigma}\eta_k^+\big) \le \mathbb{E}\sum_{k=1}^{\sigma} f(\eta_k^+)J(\eta_k^+) = \mathbb{E}\sigma\,\mathbb{E} f(\eta^+)J(\eta^+) < \infty$$

by Wald's identity.

(1.8) $\Rightarrow$ (1.6). Without loss of generality (see Lemma 1.3.9), we can assume that $f$ is nondecreasing and differentiable with $f(0)=0$. Define the sequence $(\sigma_n)_{n\in\mathbb{N}_0}$ of ladder epochs associated with $\sigma$, given by $\sigma_0 := 0$, $\sigma_1 := \sigma$ and

$$\sigma_n := \inf\{k > \sigma_{n-1}: S_k < S_{\sigma_{n-1}}\}$$

for $n\ge2$. Put further $\hat\xi_n := S_{\sigma_n}-S_{\sigma_{n-1}}$, $\hat S_n := \sum_{j=1}^n\hat\xi_j = S_{\sigma_n}$ and

$$\hat\eta_n := \sup\big(\eta_{\sigma_{n-1}+1},\ \xi_{\sigma_{n-1}+1}+\eta_{\sigma_{n-1}+2},\ \ldots,\ \xi_{\sigma_{n-1}+1}+\ldots+\xi_{\sigma_n-1}+\eta_{\sigma_n}\big)$$

for $n\in\mathbb{N}$, and $\hat S_0 := 0$. The random vectors $(\hat\xi_n,\hat\eta_n)_{n\in\mathbb{N}}$ are independent copies of $(\hat\xi,\hat\eta) := (S_\sigma,\ \sup_{1\le k\le\sigma} T_k)$. Moreover, $T^* = \sup_{n\ge1} T_n = \sup_{n\ge1}(\hat S_{n-1}+\hat\eta_n)$. Using this representation we obtain, for fixed $y>0$,

$$\mathbb{E} f\big((T^*)^+\big) = \mathbb{E} f\Big(\big(\sup_{n\ge1}(\hat S_{n-1}+\hat\eta_n)\big)^+\Big) \le \int_0^\infty f'(x)\sum_{n\ge1}\mathbb{P}\{\hat S_{n-1}+\hat\eta_n > x\}\,dx$$
$$= \int_0^\infty f'(x)\sum_{n\ge1}\mathbb{P}\{\hat S_{n-1}+\hat\eta_n > x,\ \hat\eta_n > x+y\}\,dx + \int_0^\infty f'(x)\sum_{n\ge1}\mathbb{P}\{\hat S_{n-1}+\hat\eta_n > x,\ x < \hat\eta_n\le x+y\}\,dx$$
$$=: I_1+I_2.$$

Since $\mathbb{E} f(\hat\eta^+)J(\hat\eta^+)<\infty$ (trivially) entails $\mathbb{E} f(\hat\eta^+)<\infty$, and the renewal function $\hat U(x) := \sum_{n\ge1}\mathbb{P}\{-\hat S_{n-1}\le x\}$ is finite for all $x\ge0$ (see (6.1)), the second integral is easily estimated as

$$I_2 \le \Big(\sum_{n\ge1}\mathbb{P}\{-\hat S_{n-1}\le y\}\Big)\int_0^\infty f'(x)\,\mathbb{P}\{\hat\eta > x\}\,dx = \hat U(y)\,\mathbb{E} f(\hat\eta^+) < \infty.$$

Left with an estimation of $I_1$, we obtain

$$I_1 = \int_0^\infty f'(x)\int_{(x+y,\infty)}\sum_{n\ge1}\mathbb{P}\{-\hat S_{n-1} < z-x\}\,d\mathbb{P}\{\hat\eta\le z\}\,dx = \int_0^\infty f'(x)\,\mathbb{E}\hat U(\hat\eta-x)\mathbf{1}_{\{\hat\eta>x+y\}}\,dx$$
$$\le \mathbb{E}\hat U(\hat\eta^+)\int_0^{\hat\eta^+} f'(x)\,dx = \mathbb{E}\hat U(\hat\eta^+)f(\hat\eta^+) \le \mathbb{E}\frac{2\hat\eta^+}{\int_0^{\hat\eta^+}\mathbb{P}\{-S_\sigma > z\}\,dz}\,f(\hat\eta^+)$$
$$\le \mathbb{E}\frac{2\hat\eta^+}{\int_0^{\hat\eta^+}\mathbb{P}\{-S_1 > z\}\,dz}\,f(\hat\eta^+) = 2\,\mathbb{E} f(\hat\eta^+)J(\hat\eta^+) < \infty,$$

having utilized Erickson's inequality (formula (6.5)) for the penultimate inequality and the easy observation that $\{-S_1 > z\}\subseteq\{-S_\sigma > z\}$ for $z>0$ for the last inequality. The proof of Theorem 1.3.1 is complete. $\Box$
Proof of Proposition 1.3.4 (1.10) $\Rightarrow$ (1.9). By Lemma 1.3.9, we can and do assume that $f$ satisfies (1.17). According to the implication (6.18) $\Rightarrow$ (6.19) of Theorem 6.3.1, the condition $\mathbb{E}f(\xi^+)J(\xi^+)<\infty$ entails $\mathbb{E}f(S_{\sigma_w})\mathbf{1}_{\{\sigma_w<\infty\}}<\infty$. Further, $\mathbb{E}f(\eta^+_{\sigma_w+1})\mathbf{1}_{\{\sigma_w<\infty\}} = \mathbb{E}f(\eta^+)\,\mathbb{P}\{\sigma_w<\infty\}$, whence

$$\mathbb{E}f(T^+_{\sigma_w+1})\mathbf{1}_{\{\sigma_w<\infty\}} \le \mathbb{E}f(S_{\sigma_w}+\eta^+_{\sigma_w+1})\mathbf{1}_{\{\sigma_w<\infty\}} \le c\,\mathbb{E}f(S_{\sigma_w})\mathbf{1}_{\{\sigma_w<\infty\}} + c\,\mathbb{E}f(\eta^+_{\sigma_w+1})\mathbf{1}_{\{\sigma_w<\infty\}}<\infty$$

in view of (1.17).

(1.9) $\Rightarrow$ (1.10). Pick $\gamma\in\mathbb{R}$ such that $\mathbb{P}\{\eta>\gamma\}>0$. Then

$$\infty>\mathbb{E}f(T^+_{\sigma_w+1})\mathbf{1}_{\{\sigma_w<\infty\}} = \sum_{k\ge1}\mathbb{E}f\big((S_k+\eta_{k+1})^+\big)\mathbf{1}_{\{S_1<0,\ldots,S_{k-1}<0,\ S_k\ge0\}}$$
$$= \int_{\mathbb{R}}\mathbb{E}f\big((S_{\sigma_w}+x)^+\big)\mathbf{1}_{\{\sigma_w<\infty\}}\,d\mathbb{P}\{\eta\le x\} \ge \int_{(\gamma,\infty)}\mathbb{E}f\big((S_{\sigma_w}+x)^+\big)\mathbf{1}_{\{\sigma_w<\infty\}}\,d\mathbb{P}\{\eta\le x\}$$
$$\ge \big(\mathbb{E}f(\eta^+)\mathbf{1}_{\{\eta>\gamma\}}\,\mathbb{P}\{\sigma_w<\infty\}\big)\vee\big(\mathbb{E}f\big((S_{\sigma_w}+\gamma)^+\big)\mathbf{1}_{\{\sigma_w<\infty\}}\,\mathbb{P}\{\eta>\gamma\}\big),$$

which clearly implies (1.10). The proof of Proposition 1.3.4 is complete. $\Box$
Proof of Theorem 1.3.5 (1.12) $\Rightarrow$ (1.11). First of all, observe that conditions (1.12) imply $\lim_{n\to\infty}S_n=-\infty$ a.s. and $\mathbb{E}\eta^+<\infty$, thereby ensuring $\lim_{n\to\infty}T_n = \lim_{n\to\infty}(S_{n-1}+\eta_n\mathbf{1}_{\{\eta_n>-\infty\}}) = -\infty$ a.s. by Theorem 1.2.1, which is equivalent to $\sup_{n\ge1}T_n<\infty$ a.s. Further,

$$e^{aT^*}\le\sum_{n\ge1}e^{aT_n} = \sum_{n\ge1}e^{aS_{n-1}}e^{a\eta_n}\quad\text{a.s.}$$

Passing to expectations we infer

$$\mathbb{E}e^{aT^*}\le\sum_{n\ge1}\mathbb{E}e^{aS_{n-1}}\,\mathbb{E}e^{a\eta_n} = \sum_{n\ge1}\mathbb{E}e^{a\eta}\big(\mathbb{E}e^{a\xi}\big)^{n-1} = \mathbb{E}e^{a\eta}\big(1-\mathbb{E}e^{a\xi}\big)^{-1}<\infty,$$

as desired, where the independence of $S_{n-1}$ and $\eta_n$ for each $n$ has been used.

(1.11) $\Rightarrow$ (1.12). Using formulae (1.23) and (1.24) of Lemma 1.3.12 with $\Phi(x) = e^{ax}$, we conclude that $\mathbb{E}e^{a\eta}<\infty$, which is the second part of (1.12), and also $\mathbb{E}\exp(a\sup_{n\ge0}S_n)<\infty$. Left with proving that the last inequality entails⁵ $\mathbb{E}e^{a\xi}<1$, which is the first part of (1.12), we note that

$$W = \max\big(1,\ e^{aS_1}W'\big)\ge e^{aS_1}W'\quad\text{a.s.},\qquad(1.25)$$

where $W := \exp(a\sup_{n\ge0}S_n)$ and $W' := \exp(a\sup_{n\ge0}(S_{n+1}-S_1))$. Since $W'$ is a copy of $W$, the latter implies $\mathbb{E}W\ge\mathbb{E}e^{a\xi}\,\mathbb{E}W$, whence $\mathbb{E}e^{a\xi}\le1$. In order to conclude strict inequality, observe that $\mathbb{E}e^{a\xi}=1$ would give $\mathbb{E}(W-e^{aS_1}W')=0$ and thereupon $e^{aS_1}W' = W\ge1$ a.s. in view of (1.25). But since $\lim_{n\to\infty}S_n=-\infty$ a.s. entails $\mathbb{P}\{W=1\} = \mathbb{P}\{\sup_{n\ge1}S_n\le0\} = \mathbb{P}\{\rho=\infty\}>0$, where $\rho = \inf\{k\in\mathbb{N}: S_k>0\}$, the independence of $W'$ and $S_1$ would further imply

$$\mathbb{P}\{e^{aS_1}W'<1\}\ge\mathbb{P}\{e^{aS_1}<1,\ W'=1\} = \mathbb{P}\{W=1\}\,\mathbb{P}\{S_1<0\}>0,$$

which is a contradiction. Therefore $\mathbb{E}e^{a\xi}<1$. The proof of Theorem 1.3.5 is complete. $\Box$
Proof of Theorem 1.3.6 Denote by $F(x) := \mathbb{P}\{e^\eta\le x\}$ the distribution function of $e^\eta$. For $n\in\mathbb{N}$, let $\mathcal{F}_n$ be the $\sigma$-algebra generated by $(\xi_k,\eta_k)_{1\le k\le n}$.

(1.13) $\Rightarrow$ (1.14). Relation (1.13) is equivalent to the fact that $1-F$ is regularly varying at $+\infty$ of index $-a$. We have to prove that

$$\mathbb{P}\{e^{T^*}>x\} = \mathbb{P}\Big\{\sup_{n\ge0}e^{S_n+\eta_{n+1}}>x\Big\} \sim (1-\mathbb{E}e^{a\xi})^{-1}(1-F(x)),\quad x\to\infty.$$

We start by noting that the assumptions entail $\mathbb{E}\xi\in[-\infty,0)$ and $\mathbb{E}\eta^+<\infty$. Hence $T^*$ is a.s. finite by Theorem 1.2.1. Write

$$\frac{\mathbb{P}\{e^{T^*}>x\}}{1-F(x)} = 1 + \sum_{n\ge1}\mathbb{E}\mathbf{1}_{\{\max_{0\le k\le n-1}e^{S_k+\eta_{k+1}}\le x\}}\frac{\mathbb{P}\{e^{S_n+\eta_{n+1}}>x\,|\,\mathcal{F}_n\}}{1-F(x)} = 1 + \sum_{n\ge1}\mathbb{E}\mathbf{1}_{\{\max_{0\le k\le n-1}e^{S_k+\eta_{k+1}}\le x\}}\frac{1-F(xe^{-S_n})}{1-F(x)}$$

and apply Fatou's lemma twice to obtain

$$\liminf_{x\to\infty}\frac{\mathbb{P}\{e^{T^*}>x\}}{1-F(x)} \ge 1 + \sum_{n\ge1}\mathbb{E}e^{aS_n} = \frac{1}{1-\mathbb{E}e^{a\xi}}.$$

On the other hand,

$$\frac{\mathbb{P}\{e^{T^*}>x\}}{1-F(x)} \le 1 + \sum_{n\ge1}\mathbb{E}\frac{1-F(xe^{-S_n})}{1-F(x)}.$$

By Breiman's theorem (see Remark 1.3.7),

$$\lim_{x\to\infty}\mathbb{E}\frac{1-F(xe^{-S_n})}{1-F(x)} = \mathbb{E}e^{aS_n}.$$

Hence, according to Lebesgue's dominated convergence theorem, the relation

$$\limsup_{x\to\infty}\frac{\mathbb{P}\{e^{T^*}>x\}}{1-F(x)} \le 1 + \sum_{n\ge1}\mathbb{E}e^{aS_n} = \frac{1}{1-\mathbb{E}e^{a\xi}}$$

follows once we can find a sequence $(u_n)_{n\in\mathbb{N}}$ such that $\mathbb{E}\dfrac{1-F(xe^{-S_n})}{1-F(x)}\le u_n$ for each $n\in\mathbb{N}$ and all $x$ large enough, and $\sum_{n\ge1}u_n<\infty$.

Pick $\delta\in(0,\min(a,\varepsilon))$ that satisfies $\mathbb{E}e^{(a+\delta)\xi}<1$. Since the function $x\mapsto\mathbb{E}e^{x\xi}$ is convex on $(0,a+\varepsilon)$, we also have $\mathbb{E}e^{(a-\delta)\xi}<1$. For this $\delta$ and any positive $A_1$ there exists a positive $x_1$ such that

$$x^{a+\delta}(1-F(x))\ge1/A_1$$

whenever $x\ge x_1$. Further, Potter's bound (Theorem 1.5.6 (iii) in [44]) tells us that for any positive $A_2$ there exists a positive $x_2$ such that

$$\frac{1-F(ux)}{1-F(x)}\le A_2\max(u^{-a+\delta},u^{-a-\delta})$$

whenever $x\ge x_2$ and $ux\ge x_2$. Put $x_0 := \max(x_1,x_2,1)$. Since, for $x\ge x_0$,

$$\mathbb{E}\frac{1-F(xe^{-S_n})}{1-F(x)}\mathbf{1}_{\{e^{S_n}>x/x_0\}} \le \frac{x_0^{a+\delta}\,\mathbb{E}e^{(a+\delta)S_n}}{x^{a+\delta}(1-F(x))} \le A_1x_0^{a+\delta}\,\mathbb{E}e^{(a+\delta)S_n}\quad\text{and}$$

$$\mathbb{E}\frac{1-F(xe^{-S_n})}{1-F(x)}\mathbf{1}_{\{e^{S_n}\le x/x_0\}} \le A_2\,\mathbb{E}\max\big(e^{(a-\delta)S_n},e^{(a+\delta)S_n}\big) \le A_2\big(\mathbb{E}e^{(a-\delta)S_n}+\mathbb{E}e^{(a+\delta)S_n}\big),$$

the sequence $(u_n)$ defined by

$$u_n := \mathrm{const}\,\big(\max(\mathbb{E}e^{(a-\delta)\xi},\,\mathbb{E}e^{(a+\delta)\xi})\big)^n,\quad n\in\mathbb{N},$$

serves our needs.

(1.14) $\Rightarrow$ (1.13). The random variable $T^*$ satisfies the following equality

$$T^* = \max\big(\eta_1,\ \xi_1+\sup(\eta_2,\ \xi_2+\eta_3,\ \xi_2+\xi_3+\eta_4,\ldots)\big) = \max(\eta_1,\ \xi_1+T^{*\prime})\quad\text{a.s.},\qquad(1.26)$$

where $T^{*\prime} = \sup_{n\ge2}(T_n-\xi_1)$ is independent of $(\xi_1,\eta_1)$ and has the same distribution as $T^*$. On the one hand,

$$\mathbb{P}\{e^{T^*}>x\} \le 1-F(x) + \mathbb{P}\{e^{\xi_1+T^{*\prime}}>x\},$$

whence

$$\liminf_{x\to\infty}\frac{1-F(x)}{\mathbb{P}\{e^{T^*}>x\}} \ge 1-\lim_{x\to\infty}\frac{\mathbb{P}\{e^{\xi_1+T^{*\prime}}>x\}}{\mathbb{P}\{e^{T^*}>x\}} = 1-\mathbb{E}e^{a\xi},\qquad(1.27)$$

having utilized Breiman's theorem for the last equality. On the other hand,

$$\mathbb{P}\{e^{T^*}>x\} = 1-F(x) + \mathbb{E}\mathbf{1}_{\{e^{\eta_1}\le x\}}\mathbb{P}\{e^{T^{*\prime}}>xe^{-\xi_1}\,|\,\mathcal{F}_1\},$$

whence

$$\limsup_{x\to\infty}\frac{1-F(x)}{\mathbb{P}\{e^{T^*}>x\}} \le 1-\mathbb{E}e^{a\xi}\qquad(1.28)$$

by Fatou's lemma. A combination of (1.27) and (1.28) yields

$$1-F(x) \sim \mathbb{P}\{e^{T^*}>x\}\,(1-\mathbb{E}e^{a\xi}) \sim x^{-a}\ell(x),$$

which is equivalent to (1.13). The proof of Theorem 1.3.6 is complete. $\Box$

⁵ Actually, $\mathbb{E}\exp(a\sup_{n\ge0}S_n)<\infty$ if, and only if, $\mathbb{E}e^{a\xi}<1$. To prove the implication $\Leftarrow$ just use the inequality $\mathbb{E}\exp(a\sup_{n\ge0}S_n)\le\mathbb{E}\sum_{n\ge0}e^{aS_n} = (1-\mathbb{E}e^{a\xi})^{-1}$.
Proof of Theorem 1.3.8 For the nonlattice case, see Theorem 5.2 in [107] and its proof.

Assume that the distribution of $\xi$ is $\delta$-lattice. We shall use the random variables which appear in representation (1.26). Set, for $x\in\mathbb{R}$,

$$P(x) := e^{ax}\,\mathbb{P}\{T^*>x\}$$

and

$$Q(x) := e^{ax}\big(\mathbb{P}\{T^*>x\} - \mathbb{P}\{\xi_1+T^{*\prime}>x\}\big).$$

Since

$$e^{ax}\,\mathbb{P}\{\xi_1+T^{*\prime}>x\} = \int_{\mathbb{R}}P(x-t)\,d\mathbb{P}\{\xi'\le t\},\quad x\in\mathbb{R},$$

where $\xi'$ is a random variable with distribution $\mathbb{P}\{\xi'\in dx\} = e^{ax}\,\mathbb{P}\{\xi\in dx\}$, we conclude that $P$ is a (locally bounded) solution to the renewal equation

$$P(x) = \int_{\mathbb{R}}P(x-t)\,d\mathbb{P}\{\xi'\le t\} + Q(x),\quad x\in\mathbb{R}.\qquad(1.29)$$

It is well known that

$$P(x) = \mathbb{E}\sum_{j\ge0}Q(x-S'_j),\quad x\in\mathbb{R},$$

where $(S'_k)_{k\in\mathbb{N}_0}$ is a zero-delayed ordinary random walk with jumps having the distribution of $\xi'$. Observe that $\mathbb{E}e^{b\xi}\xi^-<\infty$ for all $b>0$. In particular, $\mathbb{E}e^{a\xi}\xi^-<\infty$, which in combination with the second condition in (1.16) ensures $\mathbb{E}\xi e^{a\xi}\in\mathbb{R}$. The convexity of $m(x) := \mathbb{E}e^{x\xi}$ on $[0,a]$ together with $m(0)=m(a)=1$ implies that $m$ is increasing in the left neighborhood of $a$, whence the left derivative $m'(a)$ is positive. Since $\mathbb{E}\xi' = \mathbb{E}\xi e^{a\xi} = m'(a)$, we have proved that $\mathbb{E}\xi'\in(0,\infty)$. Further,

$$0\le e^{-ax}\sum_{j\in\mathbb{Z}}Q(x+\delta j) = \sum_{j\in\mathbb{Z}}e^{a\delta j}\big(\mathbb{P}\{\max(\eta_1,\xi_1+T^{*\prime})>x+\delta j\} - \mathbb{P}\{\xi_1+T^{*\prime}>x+\delta j\}\big)$$
$$= \sum_{j\in\mathbb{Z}}e^{a\delta j}\big(\mathbb{P}\{\eta_1>x+\delta j,\ \xi_1+T^{*\prime}<\eta_1\} - \mathbb{P}\{\xi_1+T^{*\prime}>x+\delta j,\ \xi_1+T^{*\prime}<\eta_1\}\big)$$
$$\le \sum_{j\in\mathbb{Z}}e^{a\delta j}\,\mathbb{P}\{\eta_1>x+\delta j\}.$$

The assumption $\mathbb{E}e^{a\eta}<\infty$ guarantees that the last series converges for each $x\in\mathbb{R}$. Thus, we have checked that the series $\sum_{j\in\mathbb{Z}}Q(x+\delta j)$ converges for each $x\in\mathbb{R}$. By the key renewal theorem for the lattice case (Proposition 6.2.6),

$$\lim_{n\to\infty}P(x+\delta n) = \frac{\delta}{\mathbb{E}\xi e^{a\xi}}\sum_{j\in\mathbb{Z}}Q(x+\delta j) =: C(x).\qquad(1.30)$$

It remains to show that $C(x)>0$. To this end, pick $y\in\mathbb{R}$ such that $p := \mathbb{P}\{\eta>y\}>0$. For any fixed $x>0$, there exists $i\in\mathbb{Z}$ such that $x-y\in[\delta i,\delta(i+1))$. With the help of (1.22) we obtain, for large enough $n$,

$$P(x+\delta n) = e^{a(x+\delta n)}\,\mathbb{P}\{T^*>x+\delta n\} \ge p\,e^{ay}e^{a(x-y+\delta n)}\,\mathbb{P}\Big\{\sup_{k\ge0}S_k>x-y+\delta n\Big\}$$
$$\ge p\,e^{a(y-\delta)}e^{a\delta(i+n+1)}\,\mathbb{P}\Big\{\sup_{k\ge0}S_k>\delta(n+i+1)\Big\}.$$

Therefore, it suffices to prove that

$$\liminf_{n\to\infty}e^{a\delta n}\,\mathbb{P}\Big\{\sup_{k\ge0}S_k>\delta n\Big\}>0.\qquad(1.31)$$

For $x\ge0$, set $\tau(x) := \inf\{k\in\mathbb{N}: S_k>x\}$, with the usual convention that $\inf\varnothing=\infty$, and $\tau := \tau(0)$. Define a new probability measure⁶ $\mathbb{P}_a$ by

$$\mathbb{E}_a h(S_0,\ldots,S_k) = \mathbb{E}e^{aS_k}h(S_0,\ldots,S_k),\quad k\in\mathbb{N},\qquad(1.32)$$

for each Borel function $h:\mathbb{R}^{k+1}\to[0,\infty)$, where $\mathbb{E}_a$ is the corresponding expectation. Since the $\mathbb{P}$-distribution of $\xi'$ is the same as the $\mathbb{P}_a$-distribution of $S_1$, we have $\mathbb{E}_aS_1 = \mathbb{E}\xi'\in(0,\infty)$. Therefore, $(S_n)_{n\in\mathbb{N}_0}$, under $\mathbb{P}_a$, is an ordinary random walk with positive drift, whence $\mathbb{E}_a\tau(x)<\infty$ for each $x\ge0$ and thereupon $\mathbb{E}_aS_\tau = \mathbb{E}_aS_1\,\mathbb{E}_a\tau\in(0,\infty)$. Further, for each $x>0$,

$$e^{ax}\,\mathbb{P}\Big\{\sup_{k\ge0}S_k>x\Big\} = e^{ax}\,\mathbb{P}\{\tau(x)<\infty\} = e^{ax}\,\mathbb{E}_a e^{-aS_{\tau(x)}}\mathbf{1}_{\{\tau(x)<\infty\}} = \mathbb{E}_a e^{-a(S_{\tau(x)}-x)},$$

having utilized (1.32) for the second equality. Since $S_1$, under $\mathbb{P}_a$, has a $\delta$-lattice distribution, an application of Theorem 10.3(ii) on p. 104 in [119] allows us to conclude that $S_{\tau(\delta n)}-\delta n$ converges in $\mathbb{P}_a$-distribution as $n\to\infty$ to a random variable $Y$ with $\mathbb{P}_a\{Y=\delta k\} = \delta(\mathbb{E}_aS_\tau)^{-1}\,\mathbb{P}_a\{S_\tau>\delta k\}$, $k\in\mathbb{N}_0$. This immediately implies that

$$\lim_{n\to\infty}e^{a\delta n}\,\mathbb{P}\Big\{\sup_{k\ge0}S_k>\delta n\Big\} = \lim_{n\to\infty}\mathbb{E}_a e^{-a(S_{\tau(\delta n)}-\delta n)} = \mathbb{E}_a e^{-aY}>0,$$

a result that is stronger than (1.31). The proof of Theorem 1.3.8 is complete. $\Box$

⁶ This is indeed a probability measure because, in view of the first condition in (1.16), $(e^{aS_n})_{n\in\mathbb{N}_0}$ is a nonnegative martingale with respect to the natural filtration.

1.3.3 Weak Convergence


For positive $a$ and $b$, let $N^{(a,b)} := \sum_k\varepsilon_{(t_k^{(a,b)},\,j_k^{(a,b)})}$ be a Poisson random measure on $[0,\infty)\times(0,\infty]$ with intensity measure $\mathrm{LEB}\times\mu_{a,b}$, where $\varepsilon_{(t,x)}$ is the probability measure concentrated at $(t,x)\in[0,\infty)\times(0,\infty]$, $\mathrm{LEB}$ is the Lebesgue measure on $[0,\infty)$, and $\mu_{a,b}$ is the measure on $(0,\infty]$ defined by

$$\mu_{a,b}\big((x,\infty]\big) = ax^{-b},\quad x>0.$$

Denote by $D = D[0,\infty)$ the Skorokhod space of real-valued right-continuous functions which are defined on $[0,\infty)$ and have finite limits from the left at each positive point. Assuming that

$$\mathbb{E}\xi = 0\quad\text{and}\quad v^2 := \mathrm{Var}\,\xi < \infty\qquad(1.33)$$

we investigate weak convergence of $\max_{0\le k\le[n\cdot]}(S_k+\eta_{k+1})$, properly normalized, on $D$ equipped with the $J_1$-topology. It is easily seen that whenever $\max_{0\le k\le n}S_k$ dominates $\max_{1\le k\le n+1}\eta_k$, the limit distribution of $a_n\max_{0\le k\le[n\cdot]}(S_k+\eta_{k+1})$ coincides with the limit distribution of $a_n\max_{0\le k\le[n\cdot]}S_k$, which is the distribution of $\sup_{0\le t\le\cdot}B(t)$, where $B=(B(t))_{t\ge0}$ is a Brownian motion. If, on the other hand, $\max_{1\le k\le n+1}\eta_k$ dominates $\max_{0\le k\le n}S_k$, then the limit distribution coincides with the limit distribution of $a_n\max_{1\le k\le[n\cdot]+1}\eta_k$, which is the distribution of an extremal process, under a regular variation assumption.
Proposition 1.3.13
(i) Suppose that (1.33) holds and that

$$\lim_{t\to\infty}t^2\,\mathbb{P}\{\eta>t\}=0.\qquad(1.34)$$

Then

$$n^{-1/2}\max_{0\le k\le[n\cdot]}(S_k+\eta_{k+1})\Rightarrow v\sup_{0\le s\le\cdot}B(s),\quad n\to\infty,\qquad(1.35)$$

where $B$ is a Brownian motion.
(ii) Suppose that (1.33) holds and that $\lim_{t\to\infty}t^2\,\mathbb{P}\{\eta>t\}=\infty$ and $\mathbb{P}\{\eta>t\}$ is regularly varying at $\infty$ of index $-\alpha$, $\alpha\in(0,2]$. Let $a(t)$ be a positive function which satisfies $\lim_{t\to\infty}t\,\mathbb{P}\{\eta>a(t)\}=1$. Then

$$\max_{0\le k\le[n\cdot]}(S_k+\eta_{k+1})/a(n)\Rightarrow\sup_{t_k^{(1,\alpha)}\le\cdot}j_k^{(1,\alpha)},\quad n\to\infty.\qquad(1.36)$$
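For case (i), a quick numerical check (with bounded perturbations of our own choosing, so that (1.34) holds trivially) shows that the perturbation is uniformly negligible at scale $n^{1/2}$; the sketch uses the deterministic sandwich $|\max_k T_{k+1} - \max_k S_k| \le \max(\max_k\eta_k,\,\max_k|\xi_k|)$:

```python
import random

rng = random.Random(5)
n = 100_000
s, max_s, max_t = 0.0, 0.0, float("-inf")
for k in range(n):
    eta = rng.random()                            # 0 <= eta < 1: (1.34) trivially
    max_t = max(max_t, s + eta)                   # T_{k+1} = S_k + eta_{k+1}
    s += 1.0 if rng.random() < 0.5 else -1.0      # xi = +-1: E xi = 0, Var xi = 1
    max_s = max(max_s, s)                         # max over S_0, ..., S_n
diff = (max_t - max_s) / n ** 0.5
print(diff)   # deterministically bounded by n^{-1/2}, hence negligible
```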

If in addition to (1.33) the condition

$$\mathbb{P}\{\eta>t\}\sim ct^{-2},\quad t\to\infty,\qquad(1.37)$$

holds for some $c>0$, then the contributions of $\max_{0\le k\le n}S_k$ and $\max_{1\le k\le n+1}\eta_k$ to the asymptotic behavior of $\max_{0\le k\le n}(S_k+\eta_{k+1})$ are comparable. This situation, which is more interesting than the other two, is treated in Theorem 1.3.14 given below.
Theorem 1.3.14 Suppose that (1.33) and (1.37) hold. Then

$$n^{-1/2}\max_{0\le k\le[n\cdot]}(S_k+\eta_{k+1})\Rightarrow\sup_{t_k^{(c,2)}\le\cdot}\big(vB(t_k^{(c,2)})+j_k^{(c,2)}\big),\quad n\to\infty,\qquad(1.38)$$

where $B$ is a Brownian motion independent of $N^{(c,2)}$.

We stress that even though $\xi$ and $\eta$ are allowed to be arbitrarily dependent, the processes $B$ and $N^{(c,2)}$ arising in the limit are independent. For this to hold, it is essential that $B$ is a.s. continuous. Suppose now that the distribution of $\xi$ belongs to the domain of attraction of a stable distribution other than normal. In this case a counterpart of (1.38) should hold if $n^{-1/2}$ is replaced by another appropriate normalization, and $B$ is replaced by a stable Lévy process. Furthermore, it is likely that the limit stable Lévy process and the limit extremal process should be dependent, at least in some cases when $\xi$ and $\eta$ are dependent.
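In the balanced regime (1.37), both contributions really are of order $n^{1/2}$. The sketch below (illustrative distributions of our own: standard normal $\xi$ and Pareto $\eta$ with $\mathbb{P}\{\eta>t\}=t^{-2}$, $t\ge1$, so $c=1$) checks the deterministic sandwich $\max_{k<n}S_k+1\le\max_{k<n}(S_k+\eta_{k+1})\le\max_{k<n}S_k+\max_k\eta_k$:

```python
import random

rng = random.Random(9)
n = 200_000
s, max_s, max_eta, max_t = 0.0, 0.0, 0.0, float("-inf")
for k in range(n):
    eta = 1.0 / (1.0 - rng.random()) ** 0.5   # P{eta > t} = t^{-2}, t >= 1
    max_t = max(max_t, s + eta)               # T_{k+1} = S_k + eta_{k+1}
    max_eta = max(max_eta, eta)
    max_s = max(max_s, s)                     # max over S_0, ..., S_{n-1}
    s += rng.gauss(0.0, 1.0)                  # E xi = 0, v = 1
scale = n ** 0.5
# all three quantities are of order sqrt(n)
print(max_s / scale, max_eta / scale, max_t / scale)
```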
Here is another result of the same flavor as Theorem 1.3.14. A proof will not be given, for it mimics⁷ that of Theorem 1.3.14.

Theorem 1.3.15 Suppose that $\mathbb{E}\xi=\mu\in(-\infty,+\infty)$ and $\mathbb{P}\{\eta>t\}\sim ct^{-1}$ as $t\to\infty$. Then

$$n^{-1}\max_{0\le k\le[n\cdot]}(S_k+\eta_{k+1})\Rightarrow\sup_{t_k^{(c,1)}\le\cdot}\big(\mu t_k^{(c,1)}+j_k^{(c,1)}\big),\quad n\to\infty.\qquad(1.39)$$

Remark 1.3.16 The marginal distribution of the right-hand side of (1.39) can be explicitly computed and is given by

$$\mathbb{P}\Big\{\sup_{t_k^{(c,1)}\le u}\big(\mu t_k^{(c,1)}+j_k^{(c,1)}\big)\le x\Big\} = \begin{cases}\big(\frac{x-\mu u}{x}\big)^{c/\mu}, & x\ge\mu u, & \text{if }\mu>0,\\ \big(\frac{x}{x+|\mu|u}\big)^{c/|\mu|}, & x\ge0, & \text{if }\mu<0,\\ \exp(-cu/x), & x\ge0, & \text{if }\mu=0.\end{cases}\qquad(1.40)$$

We only provide details for the case $\mu<0$ (see Remark 2.2.6 for the case $\mu=0$). For $x\ge0$, the probability on the left-hand side of (1.40) equals

$$\mathbb{P}\big\{N^{(c,1)}\{(t,y): t\le u,\ \mu t+y>x\}=0\big\} = \exp\big(-\mathbb{E}N^{(c,1)}\{(t,y): t\le u,\ \mu t+y>x\}\big)$$

because $N^{(c,1)}\{(t,y): t\le u,\ \mu t+y>x\}$ is a Poisson random variable. It remains to note that

$$\mathbb{E}N^{(c,1)}\{(t,y): t\le u,\ \mu t+y>x\} = \int_0^u\int_{[0,\infty)}\mathbf{1}_{\{\mu t+y>x\}}\,\mu_{c,1}(dy)\,dt = c\int_0^u(x+|\mu|t)^{-1}\,dt = (c/|\mu|)\big(\log(x+|\mu|u)-\log x\big).$$

Using an analogous argument we can obtain a (rather implicit) formula for the marginal distribution of the right-hand side of (1.38):

$$\mathbb{P}\Big\{\sup_{t_k^{(c,2)}\le u}\big(vB(t_k^{(c,2)})+j_k^{(c,2)}\big)\le x\Big\} = \mathbb{E}\exp\Big(-c\int_0^u\frac{\mathbf{1}_{\{vB(t)<x\}}}{(x-vB(t))^2}\,dt\Big).$$

It does not seem possible to simplify the last expression. A similar formula for finite-dimensional distributions can be found in Proposition 1 of [256].

⁷ The only principal difference is that one should use $S_{[n\cdot]}/n\Rightarrow\mu\Upsilon(\cdot)$ on $D$, where $\Upsilon(t)=t$ for $t\ge0$, rather than Donsker's theorem in the form (1.54).
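The case $\mu<0$ of (1.40) can also be verified by simulating the Poisson random measure directly. Only the points with second coordinate $j>x$ matter, and there are Poisson($uc/x$) many of them; the parameters below are of our own choosing, and for them the right-hand side of (1.40) equals $1/2$:

```python
import math, random

rng = random.Random(2)
c, mu, u, x = 1.0, -1.0, 1.0, 1.0      # case mu < 0 of (1.40)

def sup_exceeds_x():
    # Points of N^{(c,1)} on [0,u] with j > x: their number is Poisson(u*c/x)
    # and, given the number, the points are i.i.d. with t ~ U[0,u] and
    # P{j > z} = x/z for z >= x (restriction of mu_{c,1} to (x, infinity]).
    k, lam = 0, u * c / x
    acc = -math.log(1.0 - rng.random())         # count Exp(1) arrivals up to lam
    while acc < lam:
        k += 1
        acc += -math.log(1.0 - rng.random())
    for _ in range(k):
        t = rng.uniform(0.0, u)
        j = x / (1.0 - rng.random())
        if mu * t + j > x:
            return True
    return False

n = 200_000
p_hat = sum(not sup_exceeds_x() for _ in range(n)) / n
p_exact = (x / (x + abs(mu) * u)) ** (c / abs(mu))   # = 1/2 here
print(p_hat, p_exact)
```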

1.3.4 Proofs for Section 1.3.3

Let $C := C[0,\infty)$ be the set of continuous functions defined on $[0,\infty)$. Denote by $M_p$ the set of Radon point measures $\nu$ on $[0,\infty)\times(-\infty,\infty]$ which satisfy

$$\nu\big([0,T]\times((-\infty,-\delta]\cup[\delta,\infty])\big)<\infty\qquad(1.41)$$

for all $\delta>0$ and all $T>0$. The set $M_p$ is endowed with the vague topology. Define the mapping $F$ from $D\times M_p$ to $D$ by

$$F(f,\nu)(t) := \begin{cases}\sup_{k:\,\tau_k\le t}\big(f(\tau_k)+y_k\big), & \text{if }\tau_k\le t\text{ for some }k,\\ f(0), & \text{otherwise},\end{cases}$$

where $\nu = \sum_k\varepsilon_{(\tau_k,y_k)}$. Assumption (1.41) ensures that $F(f,\nu)\in D$. If (1.41) does not hold, $F(f,\nu)$ may lose right-continuity.
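For a finite point measure, the mapping $F$ is straightforward to implement; a small sketch with a toy example of our own:

```python
def F(f, points, t):
    """F(f, nu)(t) = sup_{k: tau_k <= t} (f(tau_k) + y_k), or f(0) if no
    point of nu = sum_k eps_{(tau_k, y_k)} lies in [0, t] x R."""
    vals = [f(tau) + y for tau, y in points if tau <= t]
    return max(vals) if vals else f(0.0)

f = lambda t: -t                                  # a continuous path
points = [(0.5, 2.0), (1.5, 1.0), (3.0, 4.0)]     # (tau_k, y_k)
print([F(f, points, t) for t in (0.2, 0.6, 2.0, 3.5)])
```

Note how the value stays at $f(0)$ until the first point time, exactly as in the "otherwise" branch of the definition.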
Theorem 1.3.17 For $n\in\mathbb{N}$, let $f_n\in D$ and $\nu_n\in M_p$. Assume that $f_0\in C$ and
• $\nu_0([0,\infty)\times(-\infty,0)) = 0$ and $\nu_0(\{0\}\times(-\infty,+\infty]) = 0$,
• $\nu_0((r_1,r_2)\times(0,\infty])\ge1$ for all positive $r_1$ and $r_2$ such that $r_1<r_2$,
• $\nu_0 = \sum_k\varepsilon_{(\tau_k^{(0)},\,y_k^{(0)})}$ does not have clustered jumps, i.e., $\tau_k^{(0)}\ne\tau_j^{(0)}$ for $k\ne j$.

If

$$\lim_{n\to\infty}f_n = f_0\qquad(1.42)$$

in the $J_1$-topology on $D$ and

$$\lim_{n\to\infty}\nu_n\big|_{[0,\infty)\times(0,\infty]} = \nu_0\qquad(1.43)$$

on $M_p$, then

$$\lim_{n\to\infty}F(f_n,\nu_n) = F(f_0,\nu_0)\qquad(1.44)$$

in the $J_1$-topology on $D$.
Proof It suffices to prove convergence (1.44) on $D[0,T]$ for any $T>0$ such that $\nu_0(\{T\}\times(0,\infty]) = 0$ (the last condition ensures that $F(f_0,\nu_0)$ is continuous at $T$). Let $d_T$ be the standard Skorokhod metric on $D[0,T]$. Then

$$d_T(F(f_n,\nu_n),F(f_0,\nu_0)) \le d_T(F(f_n,\nu_n),F(f_0,\nu_n)) + d_T(F(f_0,\nu_n),F(f_0,\nu_0))$$
$$\le \sup_{t\in[0,T]}|F(f_n,\nu_n)(t)-F(f_0,\nu_n)(t)| + d_T(F(f_0,\nu_n),F(f_0,\nu_0))$$
$$\le \sup_{t\in[0,T]}|f_n(t)-f_0(t)| + d_T(F(f_0,\nu_n),F(f_0,\nu_0)),$$

having utilized the fact that $d_T$ is dominated by the uniform metric. It follows from (1.42) and the continuity of $f_0$ that $\lim_{n\to\infty}f_n = f_0$ uniformly on $[0,T]$. Therefore we are left with checking that

$$\lim_{n\to\infty}d_T(F(f_0,\nu_n),F(f_0,\nu_0)) = 0.\qquad(1.45)$$

Let $A = \{0=s_0<s_1<\cdots<s_m=T\}$ be a partition of $[0,T]$ such that

$$\nu_0(\{s_k\}\times(0,\infty]) = 0,\quad k=1,\ldots,m.$$

Pick now $\gamma>0$ so small that

$$\nu_0\big((s_k,s_{k+1})\times(\gamma,\infty]\big)\ge1,\quad k=0,\ldots,m-1.$$

Condition (1.43) implies that $\nu_0([0,T]\times(\gamma,\infty]) = \nu_n([0,T]\times(\gamma,\infty]) = p$ for large enough $n$ and some $p\ge1$. Denote by $(\bar\tau_i,\bar y_i)_{1\le i\le p}$ an enumeration of the points of $\nu_0$ in $[0,T]\times(\gamma,\infty]$ with $\bar\tau_1\le\bar\tau_2\le\ldots\le\bar\tau_p$ and by $(\bar\tau_i^{(n)},\bar y_i^{(n)})_{1\le i\le p}$ the analogous enumeration of the points of $\nu_n$ in $[0,T]\times(\gamma,\infty]$. Then

$$\lim_{n\to\infty}\sum_{i=1}^p\big(|\bar\tau_i^{(n)}-\bar\tau_i| + |\bar y_i^{(n)}-\bar y_i|\big) = 0.\qquad(1.46)$$

Define $\lambda_n$ to be continuous and strictly increasing functions on $[0,T]$ with $\lambda_n(0)=0$, $\lambda_n(T)=T$, $\lambda_n(\bar\tau_i^{(n)}) = \bar\tau_i$ for $i=1,\ldots,p$, and let $\lambda_n$ be linearly interpolated elsewhere on $[0,T]$. Further, write

$$d_T(F(f_0,\nu_n),F(f_0,\nu_0)) \le \sup_{t\in[0,T]}|\lambda_n(t)-t|$$
$$+ \sup_{t\in[0,T]}\Big|\sup_{\lambda_n(\tau_k^{(n)})\le t}\big(f_0(\tau_k^{(n)})+y_k^{(n)}\big) - \sup_{\bar\tau_i=\lambda_n(\bar\tau_i^{(n)})\le t}\big(f_0(\bar\tau_i^{(n)})+\bar y_i^{(n)}\big)\Big|$$
$$+ \sup_{t\in[0,T]}\Big|\sup_{\tau_k^{(0)}\le t}\big(f_0(\tau_k^{(0)})+y_k^{(0)}\big) - \sup_{\bar\tau_i\le t}\big(f_0(\bar\tau_i)+\bar y_i\big)\Big|$$
$$+ \sum_{i=1}^p\big(|f_0(\bar\tau_i^{(n)})-f_0(\bar\tau_i)| + |\bar y_i^{(n)}-\bar y_i|\big),$$

where, for $n\in\mathbb{N}$, $(\tau_k^{(n)},y_k^{(n)})$ are the points of $\nu_n$, i.e., $\nu_n = \sum_k\varepsilon_{(\tau_k^{(n)},\,y_k^{(n)})}$. The relation $\lim_{n\to\infty}\sup_{t\in[0,T]}|\lambda_n(t)-t| = 0$ is easily checked. Using (1.46) we infer

$$\lim_{n\to\infty}\sum_{i=1}^p\big(|f_0(\bar\tau_i^{(n)})-f_0(\bar\tau_i)| + |\bar y_i^{(n)}-\bar y_i|\big) = 0\qquad(1.47)$$

because $f_0$ is continuous. To proceed, put $|A| := \max_i(s_{i+1}-s_i)$ and let

$$\omega_{f_0}(\varepsilon) := \sup_{|u-v|<\varepsilon,\ u,v\ge0}|f_0(u)-f_0(v)|$$

denote the modulus of continuity of $f_0$. We have, for $t\in[0,T]$,⁸

$$\Big|\sup_{\lambda_n(\tau_k^{(n)})\le t}\big(f_0(\tau_k^{(n)})+y_k^{(n)}\big) - \sup_{\lambda_n(\bar\tau_i^{(n)})\le t}\big(f_0(\bar\tau_i^{(n)})+\bar y_i^{(n)}\big)\Big| = \sup_{\lambda_n(\tau_k^{(n)})\le t}\big(f_0(\tau_k^{(n)})+y_k^{(n)}\big) - \sup_{\lambda_n(\bar\tau_i^{(n)})\le t}\big(f_0(\bar\tau_i^{(n)})+\bar y_i^{(n)}\big).\qquad(1.48)$$

Pick now $\tau_k^{(n)}\notin\{\bar\tau_i^{(n)},\ i=1,\ldots,p\}$ and consider three cases.

Case I, in which $\lambda_n(\tau_k^{(n)})\le t$ and $\tau_k^{(n)}\in[s_j,s_{j+1}]$ for $j\in\mathbb{N}$. Since $(s_{j-1},s_j)\cap\{\bar\tau_1,\ldots,\bar\tau_p\}\ne\varnothing$, there exists $\bar\tau_l^{(n)}\in(s_{j-1},s_j)$. In particular, $\bar\tau_l^{(n)}$ must satisfy $\lambda_n(\bar\tau_l^{(n)})\le\lambda_n(\tau_k^{(n)})\le t$ and $|\bar\tau_l^{(n)}-\tau_k^{(n)}|\le2|A|$. Further,

$$f_0(\tau_k^{(n)})+y_k^{(n)} - \sup_{\lambda_n(\bar\tau_i^{(n)})\le t}\big(f_0(\bar\tau_i^{(n)})+\bar y_i^{(n)}\big) \le f_0(\tau_k^{(n)})+y_k^{(n)} - \big(f_0(\bar\tau_l^{(n)})+\bar y_l^{(n)}\big)$$
$$\le f_0(\tau_k^{(n)})-f_0(\bar\tau_l^{(n)})+\gamma \le \omega_{f_0}(2|A|)+\gamma\qquad(1.49)$$

because $\bar y_l^{(n)}>0$ and all $y_k^{(n)}$ other than $\bar y_i^{(n)}$, $i=1,\ldots,p$, are smaller than or equal to $\gamma$.

Case II, in which $\tau_k^{(n)}\in[0,s_1]$ and $\lambda_n(\bar\tau_1^{(n)})\le t$. We have

$$f_0(\tau_k^{(n)})+y_k^{(n)} - \sup_{\lambda_n(\bar\tau_i^{(n)})\le t}\big(f_0(\bar\tau_i^{(n)})+\bar y_i^{(n)}\big) \le f_0(\tau_k^{(n)})+y_k^{(n)} - \big(f_0(\bar\tau_1^{(n)})+\bar y_1^{(n)}\big) \le \omega_{f_0}(|A|)+\gamma.\qquad(1.50)$$

Case III, in which $\tau_k^{(n)}\in[0,s_1]$ and $\lambda_n(\bar\tau_1^{(n)})>t$. Noting that the set $\{i\in\{1,\ldots,p\}: \lambda_n(\bar\tau_i^{(n)})\le t\}$ is empty and recalling our convention stated in the footnote, we infer

$$f_0(\tau_k^{(n)})+y_k^{(n)} - \sup_{\lambda_n(\bar\tau_i^{(n)})\le t}\big(f_0(\bar\tau_i^{(n)})+\bar y_i^{(n)}\big) = f_0(\tau_k^{(n)})+y_k^{(n)} - f_0(0) \le \omega_{f_0}(|A|)+\gamma.\qquad(1.51)$$

Now it follows from (1.48)–(1.51) that

$$\Big|\sup_{\lambda_n(\tau_k^{(n)})\le t}\big(f_0(\tau_k^{(n)})+y_k^{(n)}\big) - \sup_{\lambda_n(\bar\tau_i^{(n)})\le t}\big(f_0(\bar\tau_i^{(n)})+\bar y_i^{(n)}\big)\Big| \le \omega_{f_0}(2|A|)+\gamma.\qquad(1.52)$$

Arguing similarly we infer

$$\Big|\sup_{\tau_k^{(0)}\le t}\big(f_0(\tau_k^{(0)})+y_k^{(0)}\big) - \sup_{\bar\tau_i\le t}\big(f_0(\bar\tau_i)+\bar y_i\big)\Big| \le \omega_{f_0}(2|A|)+\gamma.\qquad(1.53)$$

Sending in (1.52) and (1.53) $|A|$ and $\gamma$ to zero and recalling (1.47), we arrive at (1.45). The proof of Theorem 1.3.17 is complete. $\Box$

⁸ We recall that $\sup_{\lambda_n(\tau_k^{(n)})\le t}(f_0(\tau_k^{(n)})+y_k^{(n)}) = f_0(0)$ and $\sup_{\lambda_n(\bar\tau_i^{(n)})\le t}(f_0(\bar\tau_i^{(n)})+\bar y_i^{(n)}) = f_0(0)$ if the supremum is taken over the empty set.
Lemma 1.3.18 $N^{(a,b)}$ satisfies with probability one all the assumptions imposed on $\nu_0$ in Theorem 1.3.17.

Proof Plainly, $N^{(a,b)}([0,\infty)\times(-\infty,0)) = 0$ a.s. and $N^{(a,b)}(\{0\}\times(-\infty,+\infty]) = 0$ a.s. Further, $N^{(a,b)}([0,T]\times((-\infty,-\delta]\cup[\delta,\infty])) < \infty$ a.s. for all $\delta>0$ and all $T>0$ because $\mu_{a,b}((-\infty,-\delta]\cup[\delta,\infty]) = \mu_{a,b}([\delta,\infty]) < \infty$, and $N^{(a,b)}((r_1,r_2)\times(0,\infty])\ge1$ a.s. whenever $0<r_1<r_2$ because $\mu_{a,b}((0,\infty]) = \infty$.

Fix any $T>0$ and $\delta>0$. In order to show that $N^{(a,b)}$ does not have clustered jumps a.s., it suffices to check this property for the restriction of $N^{(a,b)}$ to $[0,T]\times(\delta,\infty]$. This is done on p. 223 in [237]. $\Box$
Proof of Theorem 1.3.14 According to Donsker's theorem, assumption (1.33) implies

$$n^{-1/2}S_{[n\cdot]}\Rightarrow vB(\cdot)\qquad(1.54)$$

on $D$. It is a standard fact of the theory of point processes that condition (1.37) entails

$$\sum_{k\ge0}\mathbf{1}_{\{\eta_{k+1}>0\}}\varepsilon_{(k/n,\,\eta_{k+1}/n^{1/2})}\Rightarrow\hat N^{(c,2)}\qquad(1.55)$$

on $M_p$; see, for instance, Theorem 6.3 on p. 180 in [237]. Here, $\hat N^{(c,2)}$ has the same distribution as $N^{(c,2)}$ but may depend on $B$.

The distribution of $\hat N^{(c,2)}$ is completely determined by the distributions of the restrictions of $\hat N^{(c,2)}$ to the sets $[0,s]\times(\delta,\infty]$. Thus, in order to prove that $B$ and $\hat N^{(c,2)}$ are actually independent, it suffices to check that $\hat N^{(c,2)}([0,s]\times(\delta,\infty])$ and $B$ are independent for each fixed $s>0$ and each fixed $\delta>0$. Fix $\delta>0$ and $s>0$ and put $\theta_0^{\le,n} := 0$ and $\theta_0^{>,n} := 0$, and then

$$\theta_k^{\le,n} := \inf\{j>\theta_{k-1}^{\le,n}: \eta_j\le\sqrt n\,\delta\}\quad\text{and}\quad\theta_k^{>,n} := \inf\{j>\theta_{k-1}^{>,n}: \eta_j>\sqrt n\,\delta\}$$

for $k\in\mathbb{N}$. Further, we set

$$K_n^{\le} := \#\{k\in\mathbb{N}: \theta_k^{\le,n}\le n\}\quad\text{and}\quad K_n^{>} := \#\{k\in\mathbb{N}: \theta_k^{>,n}\le n\}.$$

Then $(\xi_{\theta_k^{\le,n}})_{k\in\mathbb{N}}$ are i.i.d. with generic copy $\xi^{\le,n}$ having the distribution $\mathbb{P}\{\xi^{\le,n}\in\cdot\} = \mathbb{P}\{\xi\in\cdot\,|\,\eta\le\sqrt n\,\delta\}$, while $(\xi_{\theta_k^{>,n}})_{k\in\mathbb{N}}$ are i.i.d. with generic copy $\xi^{>,n}$ having the distribution $\mathbb{P}\{\xi^{>,n}\in\cdot\} = \mathbb{P}\{\xi\in\cdot\,|\,\eta>\sqrt n\,\delta\}$. For any $\varepsilon>0$,

$$\mathbb{P}\{|\xi^{>,n}|>\sqrt n\,\varepsilon\} \le \mathbb{P}\{|\xi|>\sqrt n\,\varepsilon\}/\mathbb{P}\{\eta>\sqrt n\,\delta\} \le c^{-1}\delta^2\,n\,\mathbb{P}\{|\xi|>\sqrt n\,\varepsilon\},$$

which proves that $\lim_{n\to\infty}n^{-1/2}\xi^{>,n} = 0$ in probability. Since

$$K^{>}_{[nT]} = \sum_{k\ge0}\varepsilon_{(k/n,\,\eta_{k+1}/n^{1/2})}\big([0,T]\times(\delta,\infty]\big),$$

where $T>0$ is arbitrary, converges to $\hat N^{(c,2)}([0,T]\times(\delta,\infty])$ in distribution, the right-hand side of

$$\sup_{t\in[0,T]}n^{-1/2}\Big|\sum_{i=1}^{[nt]}\xi_i - \sum_{j=1}^{K^{\le}_{[nt]}}\xi^{\le,n}_j\Big| = n^{-1/2}\sup_{t\in[0,T]}\Big|\sum_{k=1}^{K^{>}_{[nt]}}\xi_{\theta_k^{>,n}}\Big| \le n^{-1/2}\sum_{k=1}^{K^{>}_{[nT]}}|\xi_{\theta_k^{>,n}}|$$

(where $\xi^{\le,n}_j := \xi_{\theta_j^{\le,n}}$) converges to zero in probability. Indeed, for any $r\in\mathbb{N}$ and all $\varepsilon>0$,

$$\mathbb{P}\Big\{n^{-1/2}\sum_{k=1}^{K^{>}_{[nT]}}|\xi_{\theta_k^{>,n}}|>\varepsilon\Big\} \le \mathbb{P}\Big\{n^{-1/2}\sum_{k=1}^{r}|\xi_{\theta_k^{>,n}}|>\varepsilon\Big\} + \mathbb{P}\{K^{>}_{[nT]}>r\}.$$

Sending first $n\to\infty$ and then $r\to\infty$ proves the claim. Therefore,

$$n^{-1/2}\sum_{j=1}^{K^{\le}_{[n\cdot]}}\xi^{\le,n}_j\Rightarrow vB(\cdot)$$

on $D$. Observe further⁹ that $n^{-1}K^{\le}_{[n\cdot]}\Rightarrow\Upsilon(\cdot)$ on $D$, where $\Upsilon(t)=t$ for $t\ge0$, which implies

$$\Big(n^{-1/2}\sum_{j=1}^{K^{\le}_{[n\cdot]}}\xi^{\le,n}_j,\ n^{-1/2}\sum_{j=1}^{[n\cdot]}\xi^{\le,n}_j\Big)\Rightarrow\big(vB(\cdot),\,vB(\cdot)\big)$$

on $D\times D$. Since $K^{>}_{[ns]}$ is independent of $(\xi^{\le,n}_k)_{k\in\mathbb{N}}$, we conclude that $B$ and $\hat N^{(c,2)}([0,s]\times(\delta,\infty])$ are independent, as claimed.

Using the independence of $B$ and $\hat N^{(c,2)}$, relations (1.54) and (1.55) can be combined into the joint convergence

$$\Big(n^{-1/2}S_{[n\cdot]},\ \sum_{k\ge0}\mathbf{1}_{\{\eta_{k+1}>0\}}\varepsilon_{(k/n,\,\eta_{k+1}/n^{1/2})}\Big)\Rightarrow\big(vB(\cdot),\,\hat N^{(c,2)}\big)$$

on $D\times M_p$ (endowed with the product topology). By the Skorokhod representation theorem there are versions which converge a.s. Retaining the original notation for these versions, we want to apply Theorem 1.3.17 with $f_n(\cdot) = n^{-1/2}S_{[n\cdot]}$, $f_0 = vB$, $\nu_n = \sum_{k\ge0}\varepsilon_{(k/n,\,\eta_{k+1}/n^{1/2})}$ and $\nu_0 = \hat N^{(c,2)}$. We already know that conditions (1.42) and (1.43) are fulfilled. Furthermore, by Lemma 1.3.18, $\hat N^{(c,2)}$ satisfies with probability one all the assumptions imposed on $\nu_0$ in Theorem 1.3.17. Hence Theorem 1.3.17 is indeed applicable with our choice of $f_n$ and $\nu_n$, and (1.38) follows. The proof of Theorem 1.3.14 is complete. $\Box$

⁹ The weak convergence of finite-dimensional distributions is immediate from $K^{\le}_{[nt]}+K^{>}_{[nt]} = [nt]$ and the fact that $K^{>}_{[nt]}$ converges in distribution. This extends to the functional convergence because the limit is continuous and $K^{\le}_{[nt]}$ is a.s. nondecreasing in $t$ (recall Pólya's extension of Dini's theorem: convergence of monotone functions to a continuous limit is locally uniform).

Proof of Proposition 1.3.13 (i) Fix any $T>0$. Since, for all $\varepsilon>0$,
\[
P\Big\{n^{-1/2}\max_{0\le s\le T}\eta_{[ns]+1}>\varepsilon\Big\} = 1-\big(P\{\eta\le\varepsilon n^{1/2}\}\big)^{[nT]+1} \le ([nT]+1)\,P\{\eta>\varepsilon n^{1/2}\}\to 0
\]
as $n\to\infty$ in view of (1.34), we infer
\[
n^{-1/2}\max_{0\le s\le T}\eta_{[ns]+1}\ \overset{P}{\to}\ 0,
\]
which implies
\[
n^{-1/2}\eta_{[n\cdot]+1}\Rightarrow\Xi(\cdot)
\]
on $D$, where $\Xi(t)=0$ for $t\ge 0$. Hence, in view of (1.54),
\[
n^{-1/2}\big(S_{[n\cdot]}+\eta_{[n\cdot]+1}\big)\Rightarrow vS_2(\cdot)
\]
by Slutsky's lemma. Relation (1.35) now follows by the continuous mapping theorem because the supremum functional is continuous in the $J_1$-topology.
(ii) Since $\lim_{n\to\infty}n^{-1/2}a(n)=\infty$ (see Lemma 6.1.3), we have
\[
S_{[n\cdot]}/a(n)\Rightarrow\Xi(\cdot)
\]
in view of (1.54). Further, Proposition 7.2 in [237] tells us that
\[
\sum_{k\ge 0}1_{\{\eta_{k+1}>0\}}\varepsilon_{(k/n,\,\eta_{k+1}/a_n)}\Rightarrow N^{(1,\alpha)}
\]
on $M_p$ and thereupon
\[
\Big(S_{[n\cdot]}/a_n,\;\sum_{k\ge 0}1_{\{\eta_{k+1}>0\}}\varepsilon_{(k/n,\,\eta_{k+1}/a_n)}\Big)\Rightarrow\big(\Xi(\cdot),\,N^{(1,\alpha)}\big)
\]
on $D\times M_p$ equipped with the product topology. Arguing as in the proof of Theorem 1.3.14, we obtain (1.36) by an application of Theorem 1.3.17 with $f_n(\cdot)=S_{[n\cdot]}/a_n$, $f_0=\Xi$, $\nu_n=\sum_{k\ge 0}\varepsilon_{(k/n,\,\eta_{k+1}/a(n))}$ and $\nu_0=N^{(1,\alpha)}$. Recall that, by Lemma 1.3.18, $N^{(1,\alpha)}$ satisfies with probability one all the assumptions imposed on $\nu_0$ in Theorem 1.3.17. The proof of Proposition 1.3.13 is complete. □

1.4 First-Passage Time and Related Quantities for the Perturbed Random Walk

For $x\in\mathbb R$, define the first-passage time into $(x,\infty)$
\[
\tau^*(x):=\inf\{n\in\mathbb N: T_n>x\},
\]
the number of visits to $(-\infty,x]$
\[
N^*(x):=\#\{n\in\mathbb N: T_n\le x\},
\]
and the associated last-exit time
\[
\rho^*(x):=\sup\{n\in\mathbb N: T_n\le x\}
\]
with the usual conventions that $\sup\varnothing=0$ and $\inf\varnothing=\infty$. Let us further denote by $\tau(x)$, $N(x)$ and $\rho(x)$ the corresponding quantities for the ordinary random walk $(S_n)_{n\ge 0}$, which is obtained in the special case $\eta=0$ a.s. after a time shift. For instance, $\tau(x):=\inf\{n\in\mathbb N: S_n>x\}$ for $x\in\mathbb R$. We shall write $\tau$ for $\tau(0)$, $N$ for $N(0)$ and $\rho$ for $\rho(0)$.
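In computational terms the three path functionals above are straightforward to evaluate on a finite stretch of the sequence. The following Python sketch is ours and purely illustrative (the finite horizon truncates the $\inf$/$\sup$, which run over all $n$ in the definitions); it encodes the conventions $\sup\varnothing=0$ and $\inf\varnothing=\infty$:

```python
import math

def passage_quantities(T, x):
    """Given a finite initial stretch T[0], T[1], ... of a sequence (T_n)_{n>=1},
    return (tau, N, rho): the first-passage time into (x, infinity), the number
    of visits to (-infinity, x], and the last-exit time, with inf(empty) =
    infinity and sup(empty) = 0.  The values are exact provided the path stays
    above x beyond the truncation point (e.g. under positive divergence)."""
    tau = math.inf
    N = 0
    rho = 0
    for n, t in enumerate(T, start=1):   # indices n = 1, 2, ...
        if t > x and tau == math.inf:
            tau = n                      # first n with T_n > x
        if t <= x:
            N += 1                       # one more visit to (-infinity, x]
            rho = n                      # current last index with T_n <= x
    return tau, N, rho

# A handmade path T_1, ..., T_5 with level x = 0:
print(passage_quantities([-1.0, 2.0, -3.0, 4.0, 5.0], 0.0))  # -> (2, 2, 3)
```

For the sample path, $T_2 = 2$ is the first value above $0$, while $T_1$ and $T_3$ are the only values at or below $0$, so $\tau^*(0)=2$, $N^*(0)=2$, $\rho^*(0)=3$.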
Our aim is to find criteria for the a.s. finiteness of $\tau^*(x)$, $N^*(x)$ and $\rho^*(x)$ and for the finiteness of their power and exponential moments. We first discuss the a.s. finiteness. As far as $N^*(x)$ and $\rho^*(x)$ are concerned, no surprise occurs: the situation is analogous to that for ordinary random walks.
Theorem 1.4.1 The following assertions are equivalent:
(i) $(T_n)_{n\in\mathbb N}$ is positively divergent.
(ii) $N^*(x)<\infty$ a.s. for some/all $x\in\mathbb R$.
(iii) $\rho^*(x)<\infty$ a.s. for some/all $x\in\mathbb R$.
The situation around $\tau^*(x)$ is different. Plainly, if $\limsup_{n\to\infty}T_n=+\infty$ a.s., then $\tau^*(x)<\infty$ a.s. for all $x\in\mathbb R$. On the other hand, one might expect in the opposite case $\lim_{n\to\infty}T_n=-\infty$ a.s. that $P\{\tau^*(x)=\infty\}>0$ for all $x\ge 0$, for this holds true for ordinary random walks. Namely, if $\lim_{n\to\infty}S_n=-\infty$ a.s., then $P\{\sup_{n\ge 1}S_n\le 0\}=P\{\tau=\infty\}>0$. The following result shows that this conclusion may fail for a PRW. It further provides a criterion for the a.s. finiteness of $\tau^*(x)$ formulated in terms of $(\xi,\eta)$.

Theorem 1.4.2 Let $(T_n)_{n\in\mathbb N}$ be positively divergent or oscillating. Then $\tau^*(x)<\infty$ a.s. for all $x\in\mathbb R$. Let $(T_n)_{n\in\mathbb N}$ be negatively divergent and $x\in\mathbb R$. Then $\tau^*(x)<\infty$ a.s. if, and only if, $P\{\xi<0,\,\eta\le x\}=0$.
The following theorems are on the finiteness of exponential moments of $\tau^*(x)$, $N^*(x)$, and $\rho^*(x)$.

Theorem 1.4.3 Let $a>0$ and $x\in\mathbb R$.
(a) If $P\{\xi<0,\,\eta\le x\}=0$, then $E\exp(a\tau^*(x))<\infty$ if, and only if,
\[
e^a P\{\xi=0,\,\eta\le x\}<1.
\]
(b) If $P\{\xi<0,\,\eta\le x\}>0$, then
\[
E\exp(a\tau^*(x))<\infty;
\]
\[
E\exp(a\tau^*(y))<\infty\ \text{for all } y\in\mathbb R;
\]
\[
E\exp(a\tau)<\infty;
\]
\[
-\log\inf_{t\ge 0}Ee^{-t\xi}\ge a
\]
are equivalent assertions.


Theorem 1.4.4 Let $(T_n)_{n\in\mathbb N}$ be a positively divergent PRW.
(a) If $\xi\ge 0$ a.s., then the assertions
\[
E\exp(aN^*(x))<\infty, \tag{1.56}
\]
\[
e^a P\{\xi=0,\,\eta\le x\}+P\{\xi=0,\,\eta>x\}<1 \tag{1.57}
\]
are equivalent for each $a>0$ and $x\in\mathbb R$. As a consequence,
\[
\big\{a>0: Ee^{aN^*(x)}<\infty\big\}=(0,a(x)) \tag{1.58}
\]
for any $x\in\mathbb R$, where $a(x)\in(0,\infty]$ equals the supremum of all positive $a$ satisfying (1.57). As a function of $x$, $a(x)$ is nonincreasing with lower bound $-\log P\{\xi=0\}$.
(b) If $\xi>0$ a.s., then $a(x)=\infty$ for all $x\in\mathbb R$, thus $Ee^{aN^*(x)}<\infty$ for any $a>0$ and $x\in\mathbb R$.
(c) If $P\{\xi<0\}>0$, then the following assertions are equivalent:
\[
E\exp(aN^*(x))<\infty\ \text{for some/all } x\in\mathbb R; \tag{1.59}
\]
\[
E\exp(aN(x))<\infty\ \text{for some/all } x\in\mathbb R; \tag{1.60}
\]
\[
-\log\inf_{t\ge 0}Ee^{-t\xi}\ge a. \tag{1.61}
\]

Theorem 1.4.5 Let $(T_n)_{n\in\mathbb N}$ be a positively divergent PRW, $a>0$ and $R:=-\log\inf_{t\ge 0}Ee^{-t\xi}$.
(a) Assume that $P\{\xi\ge 0\}=1$. Let $x\in\mathbb R$ and assume that $P\{\eta\le x\}>0$. Then the following assertions are equivalent:
\[
E\exp(a\tau^*(x))<\infty;
\]
\[
\sum_{n\ge 1}e^{an}P\{T_n\le y\}<\infty\ \text{for some/all } y\ge x;
\]
\[
a<-\log P\{\xi=0\}\ \text{and}\ Ee^{\gamma\eta^-}<\infty,
\]
where $\gamma$ is the unique positive number satisfying $Ee^{-\gamma\xi}=e^{-a}$.
(b) If $P\{\xi<0\}>0$, then the following assertions are equivalent:
\[
E\exp(a\tau^*(x))<\infty\ \text{for some/all } x\in\mathbb R;
\]
\[
\sum_{n\ge 1}e^{an}P\{T_n\le x\}<\infty\ \text{for some/all } x\in\mathbb R;
\]
\[
a<R\ \text{and}\ Ee^{\gamma\eta^-}<\infty, \quad\text{or}\quad a=R,\ E\xi e^{-\gamma\xi}>0\ \text{and}\ Ee^{\gamma\eta^-}<\infty,
\]
where $\gamma$ is the minimal positive number satisfying $Ee^{-\gamma\xi}=e^{-a}$.

Given next are criteria for the finiteness of power moments of $N^*(x)$ and $\rho^*(x)$. As for $\tau^*(x)$, results are not yet complete, and we refrain from discussing them here.
Theorem 1.4.6 Let $(T_n)_{n\in\mathbb N}$ be a positively divergent PRW and $p>0$. The following conditions are equivalent:
\[
E(N^*(x))^p<\infty\ \text{for some/all } x\in\mathbb R; \tag{1.62}
\]
\[
E(N(x))^p<\infty\ \text{for some/all } x\ge 0; \tag{1.63}
\]
\[
E\big(J_+(\xi^-)\big)^{p+1}<\infty. \tag{1.64}
\]

Theorem 1.4.7 Let $(T_n)_{n\in\mathbb N}$ be a positively divergent PRW and $p>0$. Then the following assertions are equivalent:
\[
E(\rho^*(x))^p<\infty\ \text{for some/all } x\in\mathbb R;
\]
\[
E(\rho(y))^p<\infty\ \text{for some/all } y\ge 0 \ \text{and}\ E\big(J_+(\eta^-)\big)^{p+1}<\infty;
\]
\[
E\big(J_+(\xi^-)\big)^{p+1}<\infty\ \text{and}\ E\big(J_+(\eta^-)\big)^{p+1}<\infty.
\]

Proofs of Theorems 1.4.3, 1.4.5 and 1.4.7 can be found in [8]. They are not given here because, while the proofs concerning $\tau^*(x)$ are rather technical, the proofs concerning $\rho^*(x)$ rely on arguments very similar to those exploited in Section 1.3.2.

1.5 Proofs for Section 1.4

Proof of Theorem 1.4.1 If either $N^*(x)$ or $\rho^*(x)$ is a.s. finite for some $x$, then $\liminf_{n\to\infty}T_n>-\infty$ a.s. Hence, by Theorem 1.2.1, $(T_n)_{n\in\mathbb N}$ must be positively divergent. The converse assertion holds trivially. □
One half of the proof of Theorem 1.4.2 is settled by the following lemma.
Lemma 1.5.1 Let $x\in\mathbb R$, $P\{\xi<0,\,\eta\le x\}=0$ and $p:=P\{\eta\le x\}<1$. Then $P\{\tau^*(x)>n\}\le p^n$ for $n\in\mathbb N$. If $p=1$, then $\limsup_{n\to\infty}T_n=+\infty$ a.s.
Proof Let $x\in\mathbb R$ and $P\{\xi<0,\,\eta\le x\}=0$. Then $p=1$ entails $\xi\ge 0$ a.s., thus $\lim_{n\to\infty}S_n=+\infty$ a.s. (recalling our standing assumption) and thus, by Theorem 1.2.1, $\limsup_{n\to\infty}T_n=+\infty$ a.s.
Now assume that $p<1$. Then $\sigma:=\inf\{n\in\mathbb N: \eta_n>x\}$ has a geometric distribution, namely $P\{\sigma>n\}=p^n$ for $n\in\mathbb N$. By assumption, $\xi_k\ge 0$ a.s. for $k=1,\dots,n-1$ on $\{\sigma=n\}$, whence $T_n=\xi_1+\dots+\xi_{n-1}+\eta_n\ge\eta_n>x$ a.s. on $\{\sigma=n\}$ and therefore
\[
P\{\tau^*(x)>n\}=P\{T_k\le x\ \text{for}\ k=1,\dots,n\}\le P\{\sigma>n\}=p^n
\]
for any $n\in\mathbb N$. □
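The geometric bound in Lemma 1.5.1 is easy to probe by simulation, and it also illustrates the surprise mentioned before Theorem 1.4.2. In the sketch below (ours; the particular coupled law of $(\xi,\eta)$ is a toy choice, not from the text) we take $(\xi,\eta)=(1,-10)$ with probability $0.3$ and $(-1,5)$ with probability $0.7$. Then $(T_n)$ is negatively divergent, yet $P\{\xi<0,\,\eta\le 0\}=0$ and $p=P\{\eta\le 0\}=0.3<1$, so $\tau^*(0)<\infty$ a.s. with $P\{\tau^*(0)>n\}\le 0.3^n$:

```python
import random

def first_passage(rng, x=0.0, cap=200):
    """Simulate tau*(x) for the toy PRW T_n = S_{n-1} + eta_n with the pair
    (xi, eta) = (1, -10) w.p. 0.3 and (-1, 5) w.p. 0.7 (our example)."""
    S = 0.0
    for n in range(1, cap + 1):
        xi, eta = (1.0, -10.0) if rng.random() < 0.3 else (-1.0, 5.0)
        if S + eta > x:        # T_n = S_{n-1} + eta_n exceeds the level x
            return n
        S += xi                # otherwise advance the walk: S_n = S_{n-1} + xi_n
    return cap + 1             # not observed within the horizon

rng = random.Random(1)
taus = [first_passage(rng) for _ in range(2000)]

# Empirical tail of tau*(0) against the geometric bound 0.3**n:
for n in (1, 2, 3):
    frac = sum(t > n for t in taus) / len(taus)
    print(n, frac, 0.3 ** n)
```

Avoiding an up-crossing forces the draw $(1,-10)$ at every step, which keeps $S_n$ nonnegative; this is exactly the mechanism of the proof, and here the bound $p^n$ is attained.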
Proof of Theorem 1.4.2 The first assertion is obvious. In view of Lemma 1.5.1, it remains to argue that, given a negatively divergent PRW $(T_n)$, the a.s. finiteness of $\tau^*(x)$ for some $x\in\mathbb R$ implies $P\{\xi<0,\,\eta\le x\}=0$.
Suppose, on the contrary, that $P\{\xi<0,\,\eta\le x\}>0$. Then we can fix $\varepsilon>0$ such that $P\{\xi\le-\varepsilon,\,\eta\le x\}>0$. By negative divergence, $\sup_{n\ge 1}T_n=T<\infty$ a.s., so that we can further pick $y\in\mathbb R$ such that $P\{T\le y\}>0$. Define $m:=\inf\{k\in\mathbb N_0: k\varepsilon\ge y-x\}$. Then
\[
P\{\tau^*(x)=\infty\}=P\{T\le x\} \ge P\Big\{\max_{1\le k\le m}\xi_k\le-\varepsilon,\ \max_{1\le k\le m}\eta_k\le x,\ \sup_{j>m}(T_j-S_m)\le y\Big\} = \big(P\{\xi\le-\varepsilon,\,\eta\le x\}\big)^m\,P\{T\le y\}>0
\]
yields the desired contradiction. □

Proof of Theorem 1.4.4 By Theorem 1.2.1, positive divergence of $(T_n)_{n\in\mathbb N}$ entails
\[
EJ_+(\eta^-)<\infty. \tag{1.65}
\]
(a) and (b): Fix any $a>0$ and $x\in\mathbb R$. For $y\ge 0$, define
\[
\hat\sigma(y):=\inf\{n\ge 1: \xi_n>y\}.
\]
We shall use results developed in Section 3.4. Consider the random process with immigration $Y$ with generic response process $X(t):=\sum_{k=1}^{\hat\sigma(0)}1_{\{\eta_k\le t\}}$ and generic renewal increment $\bar\xi:=S_{\hat\sigma(0)}>0$ having distribution $P\{\xi\in\cdot\,|\,\xi>0\}$. Then it is easily seen that $N^*(x)=Y(x)$ for all $x\in\mathbb R$. Therefore, by Theorem 3.4.1 and Remark 3.4.2, $Ee^{aN^*(x)}<\infty$ if, and only if,
\[
\int_{[0,\infty)}\bigg(E\exp\Big(a\sum_{k=1}^{\hat\sigma(0)}1_{\{\eta_k\le x-y\}}\Big)-1\bigg)\,d\bar U(y)<\infty, \tag{1.66}
\]
where $\bar U$ is the renewal function associated with $\bar\xi$. It satisfies Erickson's inequality (formula (6.5)), which reads
\[
J_+(y)\,P\{\xi>0\}\le\bar U(y)\le 2J_+(y)\,P\{\xi>0\} \tag{1.67}
\]
for all $y>0$. Since
\[
Ee^{a1_{\{\eta\le x\}}}1_{\{\xi=0\}}=e^aP\{\xi=0,\,\eta\le x\}+P\{\xi=0,\,\eta>x\},
\]
we see that (1.57) is equivalent to
\[
E\exp\Big(a\sum_{k=1}^{\hat\sigma(0)}1_{\{\eta_k\le x\}}\Big)=\frac{Ee^{a1_{\{\eta\le x\}}}1_{\{\xi>0\}}}{1-Ee^{a1_{\{\eta\le x\}}}1_{\{\xi=0\}}}<\infty. \tag{1.68}
\]

Validity of (1.68) further implies (1.66) because
\[
\begin{aligned}
\int_{[0,\infty)}\bigg(E\exp\Big(a\sum_{k=1}^{\hat\sigma(0)}1_{\{\eta_k\le x-y\}}\Big)-1\bigg)d\bar U(y)
&=\int_{[0,\infty)}\frac{Ee^{a1_{\{\eta\le x-y\}}}-1}{1-Ee^{a1_{\{\eta\le x-y\}}}1_{\{\xi=0\}}}\,d\bar U(y)\\
&=\int_{[0,\infty)}\frac{(e^a-1)\,P\{\eta\le x-y\}}{1-Ee^{a1_{\{\eta\le x-y\}}}1_{\{\xi=0\}}}\,d\bar U(y)\\
&\le\frac{e^a-1}{1-Ee^{a1_{\{\eta\le x\}}}1_{\{\xi=0\}}}\int_{[0,\infty)}P\{(\eta-x)^-\ge y\}\,d\bar U(y)\\
&\le\frac{2(e^a-1)P\{\xi>0\}}{1-Ee^{a1_{\{\eta\le x\}}}1_{\{\xi=0\}}}\,EJ_+\big((\eta-x)^-\big)<\infty,
\end{aligned}
\]
where (1.67) has been utilized for the penultimate inequality and (1.65) for the last. Since, conversely, (1.57) follows directly from (1.66), we have thus proved the equivalence of (1.56) and (1.57). Checking the remaining assertions is easy and therefore omitted.
(c): (1.59)⇒(1.60). Since $P\{\xi<0,\,\eta\le x\}\to P\{\xi<0\}>0$ as $x\to\infty$, we can choose $x\in\mathbb R$ so large that $P\{\xi<0,\,\eta\le x\}>0$. Using that $N^*(x)\ge\tau^*(x)-1$, we infer from (1.59) that $Ee^{a\tau^*(x)}<\infty$. According to Theorem 1.4.3(b), this implies $Ee^{a\tau}<\infty$. The latter is equivalent to (1.60) by Theorem 6.3.5.
(1.60)⇒(1.61). If $Ee^{aN(x)}<\infty$ for some $x\in\mathbb R$, then, by monotonicity, $Ee^{aN(x)}<\infty$ for some $x\ge 0$. Now the implication follows by Theorem 6.3.5.
(1.61)⇒(1.59). In view of (1.61) there exists a minimal $\gamma>0$ such that $Ee^{-\gamma\xi}=e^{-a}$. This can be used to define a new probability measure $P_\gamma$ by
\[
E_\gamma h(S_0,\dots,S_n)=e^{an}Ee^{-\gamma S_n}h(S_0,\dots,S_n),\quad n\in\mathbb N \tag{1.69}
\]
for each Borel function $h:\mathbb R^{n+1}\to\mathbb R_+$, where $E_\gamma$ denotes the expectation with respect to $P_\gamma$.
Set $\sigma_0:=0$ and, for $n\in\mathbb N$, let $\sigma_n$ denote the $n$th strictly increasing ladder epoch of $(S_k)_{k\in\mathbb N_0}$, i.e., $\sigma_1:=\tau$ and
\[
\sigma_n:=\inf\{k>\sigma_{n-1}: S_k>S_{\sigma_{n-1}}\}
\]
for $n\ge 2$. Further, denote by
\[
U^>(x):=\sum_{n\ge 0}P\{S_{\sigma_n}\le x\},\quad x\ge 0, \tag{1.70}
\]
the renewal function of the corresponding ladder height sequence. Then, according to Theorem 3.4.3 (with $X(t)=1_{\{\eta\le t\}}$), it suffices to prove that
\[
r^>(0):=\int_{[0,\infty)}\big(l(-y)-1\big)\,dU^>(y)<\infty, \tag{1.71}
\]
where $l(x):=E\prod_{n=1}^{\tau}e^{a1_{\{T_n\le x\}}}$, $x\in\mathbb R$.
For $x\in\mathbb R$, set
\[
\beta(x):=\sup\{n\le\tau: T_n\le x\}
\]
if $\min_{1\le n\le\tau}T_n\le x$, and let $\beta(x):=0$ otherwise. Then $l(x)\le Ee^{a\beta(x)}$. Therefore, (1.71) follows from
\[
\int_{[0,\infty)}\big(E\exp(a\beta(-y))-1\big)\,dU^>(y)<\infty. \tag{1.72}
\]
Now
\[
Ee^{a\beta(x)}=P\Big\{\min_{1\le n\le\tau}T_n>x\Big\}+\sum_{n\ge 1}e^{an}P\Big\{\tau\ge n,\ T_n\le x,\ \min_{n+1\le k\le\tau}T_k>x\Big\} \le 1+\sum_{n\ge 1}e^{an}P\{\tau\ge n,\ T_n\le x\}.
\]
Consequently, with $F(y):=P\{\eta\le y\}$, $y\in\mathbb R$, denoting the distribution function of $\eta$,
\[
Ee^{a\beta(x)}-1 \le \sum_{n\ge 1}e^{an}P\{\tau\ge n,\ T_n\le x\} = \sum_{n\ge 1}e^{an}EF(x-S_{n-1})1_{\{\tau\ge n\}} = e^a\sum_{n\ge 0}E_\gamma e^{\gamma S_n}F(x-S_n)1_{\{\tau>n\}} \tag{1.73}
\]

where (1.69) has been utilized in the last step. Now let $\sigma_{w,0}:=0$ and $\sigma_{w,n}:=\inf\{k>\sigma_{w,n-1}: S_k\le S_{\sigma_{w,n-1}}\}$ for $n\ge 1$, where $\inf\varnothing=\infty$. We now make use of the following duality (see, for instance, Theorem 2.3 on p. 224 in [18]):
\[
\sum_{n\ge 0}P_\gamma\{S_n\in\cdot,\ \tau>n\}=\sum_{n\ge 0}P_\gamma\{S_{\sigma_{w,n}}\in\cdot,\ \sigma_{w,n}<\infty\}. \tag{1.74}
\]
Using this in (1.73) gives
\[
Ee^{a\beta(x)}-1 \le e^a\sum_{n\ge 0}E_\gamma e^{\gamma S_{\sigma_{w,n}}}F(x-S_{\sigma_{w,n}})1_{\{\sigma_{w,n}<\infty\}}.
\]
Integrating with $x$ replaced by $-y$ w.r.t. $dU^>(y)$ gives
\[
\begin{aligned}
e^{-a}\int_{[0,\infty)}\big(Ee^{a\beta(-y)}-1\big)\,dU^>(y)
&\le \int_{[0,\infty)}\sum_{n\ge 0}E_\gamma e^{\gamma S_{\sigma_{w,n}}}F(-y-S_{\sigma_{w,n}})1_{\{\sigma_{w,n}<\infty\}}\,dU^>(y)\\
&= \sum_{n\ge 0}E_\gamma\bigg(e^{\gamma S_{\sigma_{w,n}}}1_{\{\sigma_{w,n}<\infty\}}\int_{\mathbb R}U^>\big((z+S_{\sigma_{w,n}})^-\big)\,dF(z)\bigg)\\
&\le \sum_{n\ge 0}E_\gamma\bigg(e^{\gamma S_{\sigma_{w,n}}}1_{\{\sigma_{w,n}<\infty\}}\Big(\int_{\mathbb R}U^>(z^-)\,dF(z)+U^>(-S_{\sigma_{w,n}})\Big)\bigg) \tag{1.75}
\end{aligned}
\]

where in the last step we have used the subadditivity of $y\mapsto y^-$, $y\in\mathbb R$, and of $U^>(y)$, $y\ge 0$ (see (6.3)). Here, $\int_{\mathbb R}U^>(z^-)\,dF(z)=EU^>(\eta^-)$ is finite due to (1.65) and the fact that
\[
U^>(y)\le\frac{2y}{\int_0^y P\{S_\tau>x\}\,dx}\le\frac{2y}{\int_0^y P\{S_1^+>x\}\,dx}=2J_+(y),\quad y>0, \tag{1.76}
\]
which is a consequence of Erickson's inequality (6.5) and $S_\tau\ge S_1^+$ a.s. Further, again by the subadditivity of $U^>(y)$, we have $U^>(y)=O(y)$ as $y\to\infty$. In view of this, in order to prove the finiteness of the series in (1.75), it suffices to show that
\[
\sum_{n\ge 0}E_\gamma e^{\gamma' S_{\sigma_{w,n}}}1_{\{\sigma_{w,n}<\infty\}}<\infty \tag{1.77}
\]
for some $0<\gamma'<\gamma$.


Since $P_\gamma\{\sigma_{w,n}<\infty\}=P_\gamma\{\sigma_{w,1}<\infty,\,\sigma_{w,2}-\sigma_{w,1}<\infty,\dots,\sigma_{w,n}-\sigma_{w,n-1}<\infty\}=\big(P_\gamma\{\sigma_{w,1}<\infty\}\big)^n$, we conclude that, if $P_\gamma\{\sigma_{w,1}<\infty\}<1$, then
\[
\sum_{n\ge 0}E_\gamma e^{\gamma' S_{\sigma_{w,n}}}1_{\{\sigma_{w,n}<\infty\}} \le \sum_{n\ge 0}P_\gamma\{\sigma_{w,n}<\infty\} = \sum_{n\ge 0}\big(P_\gamma\{\sigma_{w,1}<\infty\}\big)^n = \frac{1}{P_\gamma\{\sigma_{w,1}=\infty\}}<\infty,
\]
where the first inequality follows from the fact that $S_{\sigma_{w,n}}\le 0$ on $\{\sigma_{w,n}<\infty\}$. If, on the other hand, $P_\gamma\{\sigma_{w,1}<\infty\}=1$, then we can drop the indicators in (1.77) and get $E_\gamma e^{\gamma' S_{\sigma_{w,n}}}1_{\{\sigma_{w,n}<\infty\}}=E_\gamma e^{\gamma' S_{\sigma_{w,n}}}$. Since $(S_{\sigma_{w,n}}-S_{\sigma_{w,n-1}})_{n\in\mathbb N}$ are i.i.d. random variables, we infer $E_\gamma e^{\gamma' S_{\sigma_{w,n}}}=\big(E_\gamma e^{\gamma' S_{\sigma_{w,1}}}\big)^n$ for each $n\in\mathbb N$. By the definition of $\sigma_{w,1}$, $P_\gamma\{S_{\sigma_{w,1}}\le 0\}=1$. Furthermore, $P_\gamma\{S_{\sigma_{w,1}}<0\}>0$ because $P\{\xi<0\}>0$ by assumption. Consequently, $E_\gamma e^{\gamma' S_{\sigma_{w,1}}}<1$. From these facts, we derive the convergence of the series in (1.77) as follows:
\[
\sum_{n\ge 0}E_\gamma e^{\gamma' S_{\sigma_{w,n}}} = \sum_{n\ge 0}\big(E_\gamma e^{\gamma' S_{\sigma_{w,1}}}\big)^n = \frac{1}{1-E_\gamma e^{\gamma' S_{\sigma_{w,1}}}}<\infty.
\]
The proof of Theorem 1.4.4 is complete. □

Proof of Theorem 1.4.6 Assume first that $\xi\ge 0$ a.s. and fix an arbitrary $x\in\mathbb R$. According to parts (a) and (b) of Theorem 1.4.4, whenever $N^*(x)<\infty$ a.s. it has some finite exponential moments. In particular, $E(N^*(x))^p<\infty$ for every $p>0$. Therefore, from now on, we assume that $P\{\xi<0\}>0$.
(1.63)⇔(1.64) follows from Theorem 6.3.7.
(1.63)⇒(1.62). For any $x\in\mathbb R$, $EJ_+(\eta^-)<\infty$ is equivalent to $EJ_+((\eta-x)^-)<\infty$. Further (by the equivalence (1.63)⇔(1.64)), we know that $E(N(x))^p<\infty$ for some $x\ge 0$ implies $E(N(x))^p<\infty$ for all $x\ge 0$. Thus, replacing $\eta$ by $\eta-x$, it suffices to prove that $E(N^*(0))^p<\infty$ if $E(N(0))^p<\infty$.
Case $p\in(0,1)$. Using the subadditivity of the function $x\mapsto x^p$, $x\ge 0$, we obtain
\[
\big(N^*(0)\big)^p \le \Big(\sum_{k\ge 1}1_{\{T_k\le 0,\,S_{k-1}\le 0\}}\Big)^p + \Big(\sum_{k\ge 1}1_{\{T_k\le 0,\,S_{k-1}>0\}}\Big)^p \le \big(N(0)\big)^p + \sum_{k\ge 1}1_{\{0<S_{k-1}\le\eta_k^-\}}\quad\text{a.s.}
\]
Since $E(N(0))^p<\infty$ by assumption, it remains to check that
\[
\sum_{k\ge 1}P\{0<S_{k-1}\le\eta_k^-\}<\infty. \tag{1.78}
\]

By Theorem 1.2.1, $\lim_{n\to\infty}T_n=+\infty$ a.s. implies $\lim_{n\to\infty}S_n=+\infty$ a.s. The latter ensures $E\tau<\infty$. Let $U^>$ be the renewal function of the (strict) ladder height sequence (see (1.70)). For $x\ge 0$ we infer
\[
\sum_{k\ge 1}P\{0<S_{k-1}\le x\} = \int_{[0,\infty)}E\Big(\sum_{k=0}^{\tau-1}1_{\{-y<S_k\le x-y\}}\Big)dU^>(y) = E\sum_{k=0}^{\tau-1}\big(U^>(x-S_k)-U^>(-S_k)\big) \le E\tau\,U^>(x)\le 2E\tau\,J_+(x),
\]
having utilized the subadditivity of the function $x\mapsto U^>(x)$, $x\ge 0$ (see (6.3)), for the penultimate step and (1.76) for the last. Now (1.78) follows from the last inequality and (1.65).
Case $p\ge 1$. According to Theorem 6.3.7, (1.64) implies
\[
E\tau^{p+1}<\infty. \tag{1.79}
\]
Let $\sigma_0=0$ and $\sigma_n=\inf\{k>\sigma_{n-1}: S_k>S_{\sigma_{n-1}}\}$. Retaining the notation of Section 3.4 (but replacing $\xi$ with $\xi'$), let $X_n(x):=\sum_{k=\sigma_{n-1}+1}^{\sigma_n}1_{\{T_k\le x\}}$ and $\xi_n':=S_{\sigma_n}-S_{\sigma_{n-1}}$, and observe that $Y(x)=N^*(x)$. Since the so defined $\xi_n'$ are a.s. positive, we can apply Theorem 3.4.4 to conclude that it is enough to show that, for every $q\in[1,p]$,
\[
\int_{[0,\infty)}E\Big(\sum_{k=1}^{\tau}1_{\{T_k\le-y\}}\Big)^q\,dU^>(y)<\infty, \tag{1.80}
\]

where, as above, $U^>$ is the renewal function of $(S_{\sigma_n})_{n\in\mathbb N_0}$. Fix any $q\in[1,p]$. For $x\ge 0$, it holds that
\[
\begin{aligned}
\Big(\sum_{k=1}^{\tau}1_{\{T_k\le-x\}}\Big)^q
&\le \Big(\sum_{k=1}^{\tau}\big(1_{\{S_{k-1}\le\eta_k^--x,\ \eta_k^->x\}}+1_{\{S_{k-1}\le\eta_k^--x,\ \eta_k^-\le x\}}\big)\Big)^q\\
&\le 2^{q-1}\bigg(\Big(\sum_{k=1}^{\tau}1_{\{\eta_k^->x\}}\Big)^q+\Big(\sum_{k=1}^{\tau}1_{\{S_{k-1}\le\eta_k^--x,\ \eta_k^-\le x\}}\Big)^q\bigg) =: 2^{q-1}\big(I_1(x)+I_2(x)\big).
\end{aligned}
\]

By Theorem 5.2 on p. 24 in [119], there exists a positive constant $B_q$ such that $EI_1(x)\le B_qE\tau^q\,P\{\eta^-> x\}$ and thereupon
\[
E\int_{[0,\infty)}I_1(y)\,dU^>(y) \le B_qE\tau^q\int_{[0,\infty)}P\{\eta^-\ge y\}\,dU^>(y) \le B_qE\tau^q\,EU^>(\eta^-).
\]
Here, $E\tau^q<\infty$ is a consequence of (1.79), and $EU^>(\eta^-)<\infty$ follows from (1.76) and (1.65).
Turning to the term involving $I_2$, notice that from the inequality
\[
(x_1+\dots+x_m)^q\le m^{q-1}(x_1^q+\dots+x_m^q),\quad x_1,\dots,x_m\ge 0,
\]
and the subadditivity of the function $x\mapsto U^>(x)$, $x\ge 0$ (see (6.3)), it follows that
\[
\begin{aligned}
\int_{[0,\infty)}I_2(y)\,dU^>(y)
&\le \tau^{q-1}\sum_{k=1}^{\tau}\int_{[0,\infty)}1_{\{S_{k-1}\le\eta_k^--y,\ \eta_k^-\le y\}}\,dU^>(y)
= \tau^{q-1}\sum_{k=1}^{\tau}\big(U^>(\eta_k^--S_{k-1})-U^>(\eta_k^-)\big)\\
&\le \tau^{q-1}\sum_{k=0}^{\tau-1}U^>(-S_k)
\le \tau^{q-1}\sum_{k=0}^{\tau-1}U^>(\xi_1^-+\dots+\xi_k^-)
\le \tau^{q-1}\Big(1+\sum_{k=1}^{\tau-1}\big(U^>(\xi_1^-)+\dots+U^>(\xi_k^-)\big)\Big)\\
&= \tau^{q-1}\Big(1+\sum_{k=1}^{\tau-1}(\tau-k)\,U^>(\xi_k^-)\Big)
\le \tau^{q-1}+\tau^q\sum_{k=1}^{\tau}U^>(\xi_k^-).
\end{aligned}
\]
By Hölder's inequality,
\[
E\tau^q\sum_{k=1}^{\tau}U^>(\xi_k^-) \le \big(E\tau^{q+1}\big)^{q/(q+1)}\bigg(E\Big(\sum_{k=1}^{\tau}U^>(\xi_k^-)\Big)^{q+1}\bigg)^{1/(q+1)}.
\]
The finiteness of the first factor is secured by (1.79). According to Theorem 5.2 on p. 24 in [119], the second factor is finite provided $E\tau^{q+1}<\infty$ and $E\big(U^>(\xi^-)\big)^{q+1}<\infty$. The former follows from (1.79), the latter from (1.76) and (1.64). Thus we have proved that $E\int_{[0,\infty)}I_2(y)\,dU^>(y)<\infty$, hence (1.80).
(1.62)⇒(1.63). Assume that $E(N^*(x))^p<\infty$.
Case $p\in(0,1)$. We start by showing that, without loss of generality, we can assume that $\xi$ and $\eta$ are independent. Let $(\eta_n')_{n\in\mathbb N}$ be a sequence of i.i.d. copies of $\eta$ and assume that this sequence is independent of the sequence $((\xi_n,\eta_n))_{n\in\mathbb N}$. Define $T_n':=S_{n-1}+\eta_n'$, $n\in\mathbb N$, and $\mathcal F_n':=\sigma\big((\xi_k,\eta_k),\eta_k': k=1,\dots,n\big)$. Then
\[
P(T_n\le x\,|\,\mathcal F_{n-1}')=P(\eta_n\le x-S_{n-1}\,|\,\mathcal F_{n-1}')=F(x-S_{n-1})\quad\text{a.s.},
\]
where $F(t)=P\{\eta\le t\}$, $t\in\mathbb R$, and, analogously,
\[
P(T_n'\le x\,|\,\mathcal F_{n-1}')=P(\eta_n'\le x-S_{n-1}\,|\,\mathcal F_{n-1}')=F(x-S_{n-1})\quad\text{a.s.},
\]
that is, the sequences $(1_{\{T_n\le x\}})_{n\in\mathbb N}$ and $(1_{\{T_n'\le x\}})_{n\in\mathbb N}$ of nonnegative random variables are tangent. Hence, by Theorem 2 in [126],
\[
E\Big(\sum_{n\ge 1}1_{\{S_{n-1}+\eta_n'\le x\}}\Big)^p \le c_p\,E\Big(\sum_{n\ge 1}1_{\{S_{n-1}+\eta_n\le x\}}\Big)^p
\]
for an appropriate constant $c_p>0$. Since $(\xi_k)_{k\in\mathbb N}$ and $(\eta_k')_{k\in\mathbb N}$ are independent, we may work under the additional assumption of independence between the random walk and the perturbing sequence. In what follows, we do not introduce new notation to indicate this.
Let $y\le x$ be such that $P\{\eta\le y\}>0$ and let $A:=\{N(x-y)>0\}$. Observe that $P(A)>0$ since we assume that $P\{\xi<0\}>0$. The following inequality holds a.s. on $A$:
\[
\begin{aligned}
\big(N^*(x)\big)^p &\ge \Big(\sum_{k\ge 1}1_{\{S_{k-1}\le x-y,\ \eta_k\le y\}}\Big)^p = \big(N(x-y)\big)^p\Big(\sum_{k\ge 1}1_{\{S_{k-1}\le x-y\}}1_{\{\eta_k\le y\}}\big/N(x-y)\Big)^p\\
&\ge \big(N(x-y)\big)^{p-1}\sum_{k\ge 1}1_{\{S_{k-1}\le x-y\}}1_{\{\eta_k\le y\}},
\end{aligned}
\]
where for the second inequality the concavity of $t\mapsto t^p$, $t\ge 0$, has been used. Taking expectations gives
\[
\infty>E\big(N^*(x)\big)^p \ge E\Big(1_A\big(N(x-y)\big)^{p-1}\sum_{k\ge 1}1_{\{S_{k-1}\le x-y\}}1_{\{\eta_k\le y\}}\Big) = P\{\eta\le y\}\,E\big(N(x-y)\big)^p.
\]
An appeal to Lemma 6.3.3 completes the proof of this case.


Case $p\ge 1$. It holds that
\[
\infty>E\big(N^*(x)\big)^p \ge E\Big(\sum_{k\ge 1}1_{\{S_{k-1}\le x-y,\ \eta_k\le y\}}\Big)^p \ge \mathrm{const}\cdot E\big(N(x-y)\big)^p\,\big(P\{\eta\le y\}\big)^p,
\]
where at the last step the convex function inequality (Theorem 3.2 in [63]), applied to $t\mapsto t^p$, has been utilized. An appeal to Lemma 6.3.3 completes the proof. □

1.6 Bibliographic Comments

Sometimes in the literature the term 'perturbed random walk' has been used to denote random sequences other than those defined in (1.1). See, for instance, [66, 73, 156, 189, 190, 265] and Section 6 in [119]. The last four references are concerned with nonlinear renewal theory, in which a very different class of perturbations is considered. In particular, the perturbations there must be uniformly continuous in probability and satisfy some other conditions.
Theorem 1.3.1 and Proposition 1.3.4 were proved in [137] via a more compli-
cated argument.
Theorem 1.3.5 seems to be new.
Theorem 1.3.6, which is a much strengthened version of Theorem 3 in [16], was proved in [137]. A similar result was mentioned in Example 2 of [111]. However, neither a precise formulation nor a proof is given in [111].
While the nonlattice case of Theorem 1.3.8 is a particular case of Theorem 5.2 in [107], the lattice case was settled in [157]. Other interesting results concerning the tail behavior of $\sup_{n\ge 1}T_n$ can be found in [123, 224, 225, 241].
Inequality (1.17) in Lemma 1.3.9 was obtained in Lemma 1(a) of [3].
Lemma 1.3.12 is an extended version of Lemma 2.2 in [9].
Formula (1.22) was earlier obtained in [107].
Proposition 1.3.13 seems to be new. Its one-dimensional version was proved in Theorem 3 of [129] by using another approach, which required the assumption $E\eta^2<\infty$ instead of (1.34) in part (i).
Theorems 1.3.14 and 1.3.17 are borrowed from [155]. By using an argument different from ours, a result very similar to Theorem 1.3.14 was derived in [256]. Under the assumption that $\xi$ and $\eta$ are independent, functional limit theorems for $\max_{k\ge 0}\big(\eta_{k+1}1_{\{S_k\le t\}}\big)$ as $t\to\infty$ were obtained in Theorem 4 of [226] and Theorem 3.1 of [207]. The limit processes are time-changed extremal processes. Allowing $\xi$ and $\eta$ to be dependent, one-dimensional convergence of the aforementioned maximum was proved in [227].
The material of Sections 1.4 and 1.5 is taken from [8].
Chapter 2
Perpetuities

Let $(M_k,Q_k)_{k\in\mathbb N}$ be independent copies of an $\mathbb R^2$-valued random vector $(M,Q)$ with arbitrary dependence of the components, and let $X_0$ be a random variable which is independent of $(M_k,Q_k)_{k\in\mathbb N}$. Put
\[
\Pi_0:=1,\qquad \Pi_n:=M_1M_2\cdot\ldots\cdot M_n,\quad n\in\mathbb N.
\]
The sequence $(X_n)_{n\in\mathbb N_0}$ defined by
\[
X_n=M_nX_{n-1}+Q_n,\quad n\in\mathbb N,
\]
is a homogeneous Markov chain. In view of the representation
\[
X_n=\Psi_n(X_{n-1})=\Psi_n\circ\ldots\circ\Psi_1(X_0)=\Pi_nX_0+\sum_{k=1}^n(\Pi_n/\Pi_k)Q_k,\quad n\in\mathbb N,
\]
where $\Psi_n(t):=Q_n+M_nt$ for $n\in\mathbb N$, $(X_n)_{n\in\mathbb N}$ is nothing else but the forward iterated function system. Closely related is the backward iterated function system
\[
Y_n:=\Psi_1\circ\ldots\circ\Psi_n(0)=\sum_{k=1}^n\Pi_{k-1}Q_k,\quad n\in\mathbb N.
\]
In the case that $X_0=0$ a.s. it is clear that $X_n$ has the same distribution as $Y_n$ for each fixed $n$. The random discounted sum
\[
Y_\infty:=\sum_{k\ge 1}\Pi_{k-1}Q_k,
\]
obtained as the a.s. limit of $Y_n$ under appropriate conditions (see Theorem 2.1.1 below), is called a perpetuity and is of interest in various fields of applied probability

like insurance and finance, the study of shot-noise processes or, as will be seen in
Chapter 4, of branching random walks.
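Before turning to precise convergence criteria, here is a quick numerical illustration (ours; the lognormal/normal choice of $(M,Q)$ is an arbitrary test case) of the contraction mechanism: when $E\log|M|<0$ the backward iterates $Y_n$ stabilize pathwise, since $\Pi_n\to 0$ a.s. at an exponential rate, while the forward iterates $X_n$ keep fluctuating and converge in distribution only.

```python
import math, random

rng = random.Random(7)

def step(rng):
    """One i.i.d. pair (M_k, Q_k); toy choice (ours):
    log M ~ Normal(-0.5, 0.5), so E log M = -0.5 < 0, and Q ~ Normal(1, 1)."""
    return math.exp(rng.gauss(-0.5, 0.5)), rng.gauss(1.0, 1.0)

pairs = [step(rng) for _ in range(200)]

# Backward iterates Y_n = sum_{k<=n} Pi_{k-1} Q_k along one path:
Y, prod, traj = 0.0, 1.0, []
for M, Q in pairs:
    Y += prod * Q      # add Pi_{k-1} Q_k
    prod *= M          # update Pi_k
    traj.append(Y)

# Pathwise stabilization: the tail increment |Y_200 - Y_100| is tiny,
# because Pi_100 is already of order exp(-50) on this path.
print(abs(traj[-1] - traj[99]))
```

Reusing the same `pairs` in forward order ($X_n = M_nX_{n-1}+Q_n$) would instead produce a sequence that keeps moving for all $n$, matching the closing remark of Section 2.1.1.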

2.1 Convergent Perpetuities

2.1.1 Criterion of Finiteness

Throughout Chapter 2, for the particular case $\xi=-\log|M|$, we use the notation introduced in (1.3), so that $A(x)=\int_0^x P\{-\log|M|>y\}\,dy$ and $J(x)=x/A(x)$ for $x>0$, and $J(0)=1/P\{|M|<1\}$. Recall that $J(x)$ is finite if, and only if, $P\{|M|<1\}>0$.
Goldie and Maller (Theorem 2.1 in [109]) gave the following complete characterization of the a.s. convergence of the series which defines $Y_\infty$. We do not provide a proof, referring instead to the cited paper.
Theorem 2.1.1 Suppose that
\[
P\{M=0\}=0 \quad\text{and}\quad P\{Q=0\}<1. \tag{2.1}
\]
Then
\[
\lim_{n\to\infty}\Pi_n=0\ \text{a.s.} \quad\text{and}\quad EJ(\log^+|Q|)<\infty; \tag{2.2}
\]
\[
\sum_{n\ge 1}|\Pi_{n-1}Q_n|<\infty\ \text{a.s.}; \tag{2.3}
\]
\[
P\{|M|=1\}<1 \quad\text{and}\quad \sup_{n\in\mathbb N}|\Pi_{n-1}Q_n|<\infty\ \text{a.s.} \tag{2.4}
\]
are equivalent conditions which imply
\[
\lim_{n\to\infty}Y_n=Y_\infty\ \text{a.s.} \quad\text{and}\quad |Y_\infty|<\infty\ \text{a.s.}
\]
Moreover, if
\[
P\{Q+Mr=r\}<1 \quad\text{for all } r\in\mathbb R, \tag{2.5}
\]
and if at least one of the conditions in (2.2) fails to hold, then $\lim_{n\to\infty}|Y_n|=\infty$ in probability.

For $m\in\mathbb N$, set
\[
Y_\infty^{(m)}:=Q_{m+1}+\sum_{k\ge m+2}M_{m+1}\cdot\ldots\cdot M_{k-1}Q_k.
\]
The random variable $Y_\infty^{(m)}$ is a copy of $Y_\infty$ independent of $(M_k,Q_k)_{1\le k\le m}$. With these at hand and assuming that $|Y_\infty|<\infty$ a.s., the equalities
\[
Y_\infty=Q_1+M_1Y_\infty^{(1)}=Y_m+\Pi_mY_\infty^{(m)}\quad\text{a.s.} \tag{2.6}
\]
hold for any fixed $m\in\mathbb N$. Sometimes it is convenient to rewrite the first equality, in a weaker form, as the distributional equality
\[
Y_\infty\overset{d}{=}Q+MY_\infty, \tag{2.7}
\]
where, on the right-hand side, $Y_\infty$ is assumed independent of $(M,Q)$. It is known (see Theorem 1.5 in [255] or Theorem 3.1 in [109]) that the distribution of $Y_\infty$ forms the only possible solution to (2.7) (considered as a distributional equation with the distribution of $Y_\infty$ being unknown), unless $Q+Mr=r$ a.s. for some $r\in\mathbb R$. Under the latter degeneracy condition, the solutions to (2.7) are either all distributions on $\mathbb R$, or those symmetric around $r$, or $\varepsilon_r$.
Let us find out what happens in the 'trivial cases' when one of the conditions (2.1) does not hold.
(a) If $P\{M=0\}>0$, then $N:=\inf\{k\in\mathbb N: M_k=0\}<\infty$ a.s., and the perpetuity trivially converges, the limit being the a.s. finite random variable $\sum_{k=1}^{N}\Pi_{k-1}Q_k$. Hence, in this case no condition on the distribution of $Q$ is needed to ensure the finiteness of $Y_\infty$.
(b) If $P\{Q=0\}=1$, then $\sum_{k\ge 1}\Pi_{k-1}Q_k=0$ a.s.
To close the section, we note that the distribution of $Y_\infty$ is a stationary distribution of the Markov chain $(X_n)$ whenever the latter is positive recurrent. Even though $Y_\infty$ is the a.s. limit of the backward system $(Y_n)$, the forward sequence $(X_n)$ converges to $Y_\infty$ in distribution only.

2.1.2 Examples of Perpetuities

Decomposable Distributions For fixed $c\in(0,1)$, the distribution of a random variable $Z$ is called $c$-decomposable if it satisfies
\[
Z\overset{d}{=}Q+cZ,
\]
where on the right-hand side the random variable $Q$ is independent of $Z$. The distribution is called selfdecomposable if it is $c$-decomposable for every $c\in(0,1)$. According to a classical characterization of the selfdecomposable distributions obtained in Theorem 3.2 of [166], the distribution of a random variable $Y$ is selfdecomposable if, and only if, there exists a unique in distribution Lévy process $X:=(X(t))_{t\ge 0}$ with $E\log^+|X(1)|<\infty$ such that $Y$ has the same distribution as $\int_{(0,\infty)}e^{-t}\,dX(t)$. There is a huge difference between $c$-decomposable and selfdecomposable distributions. The latter are always infinitely divisible and absolutely continuous w.r.t. the Lebesgue measure. The former may be rather ill-behaved, for instance, continuous singular w.r.t. the Lebesgue measure.
Plainly, any $c$-decomposable distribution is the distribution of a perpetuity which corresponds to $M=c$ a.s. Similarly, a random variable with a selfdecomposable distribution admits infinitely many perpetuity representations (with $M=c$ a.s.) obtained as $c$ runs over the interval $(0,1)$. We shall now show that there is a wider collection of perpetuity representations, with
\[
(M,Q)=\Big(e^{-T},\ \int_{(0,T]}e^{-s}\,dX(s)\Big),
\]
where $T$ is either independent of $X$, or a stopping time w.r.t. the filtration generated by $X$. It is a consequence of the (strong) Markov property of $X$ that $X_1:=(X(T+t)-X(T))_{t\ge 0}$ is a copy of $X$ independent of $(X(t))_{0\le t\le T}$. Assuming for simplicity that $X$ is a subordinator, i.e., a Lévy process with nondecreasing paths, in which case the integral $\int_{(0,\infty)}e^{-s}\,dX(s)$ exists as a pathwise Lebesgue–Stieltjes integral, we can write
\[
\int_{(0,\infty)}e^{-t}\,dX(t)=\int_{(0,T]}e^{-t}\,dX(t)+\int_{(T,\infty)}e^{-t}\,dX(t) =\int_{(0,T]}e^{-t}\,dX(t)+\int_{(0,\infty)}e^{-(T+t)}\,d\big(X(T+t)-X(T)\big) =\int_{(0,T]}e^{-t}\,dX(t)+e^{-T}\int_{(0,\infty)}e^{-t}\,dX_1(t).
\]
Since $\int_{(0,\infty)}e^{-t}\,dX_1(t)$ is independent of $\big(e^{-T},\int_{(0,T]}e^{-t}\,dX(t)\big)$ and has the same distribution as $\int_{(0,\infty)}e^{-t}\,dX(t)$, the claim follows.
To set a link with the results of Section 3, we note that, whenever $X$ is a compound Poisson process with positive jumps having finite logarithmic moment, the distribution of $\int_{(0,\infty)}e^{-t}\,dX(t)$ is the limit (stationary) distribution of the Poisson shot noise process $\int_{[0,t]}e^{-(t-y)}\,dX(y)$. Hence, the limit distribution of a Poisson shot noise process is the distribution of a perpetuity.
Exponential Functionals of Lévy Processes Let $X:=(X(t))_{t\ge 0}$ be a Lévy process. Whenever $\lim_{t\to\infty}X(t)=+\infty$ a.s., the a.s. finite random variable $Z:=\int_0^\infty e^{-X(t)}\,dt$ is called the exponential functional of $X$. Using $X_1$ and $T$ as introduced above, we can write
\[
Z=\int_0^\infty e^{-X(t)}\,dt=\int_0^Te^{-X(t)}\,dt+\int_T^\infty e^{-X(t)}\,dt =\int_0^Te^{-X(t)}\,dt+e^{-X(T)}\int_0^\infty e^{-(X(T+t)-X(T))}\,dt =\int_0^Te^{-X(t)}\,dt+e^{-X(T)}\int_0^\infty e^{-X_1(t)}\,dt. \tag{2.8}
\]
Since $\int_0^\infty e^{-X_1(t)}\,dt$ is independent of $\big(e^{-X(T)},\int_0^Te^{-X(t)}\,dt\big)$ and has the same distribution as $\int_0^\infty e^{-X(t)}\,dt$, we conclude that $Z=\int_0^\infty e^{-X(t)}\,dt$ is a perpetuity generated by
\[
(M,Q)=\Big(e^{-X(T)},\ \int_0^Te^{-X(t)}\,dt\Big). \tag{2.9}
\]

Let now $X$ be a non-killed subordinator with Laplace exponent $\Phi(s):=-\log Ee^{-sX(1)}$, $s\ge 0$. In this case the moments of positive integer orders of $Z$ admit a simple representation (see Theorem 2 in [34]):
\[
E\Big(\int_0^Te^{-X(t)}\,dt\Big)^n=\frac{n!}{(c+\Phi(1))\cdot\ldots\cdot(c+\Phi(n))},\quad n\in\mathbb N, \tag{2.10}
\]
where $T$ denotes a random variable independent of $X$ and having an exponential distribution with parameter $c\ge 0$ (if $c=0$, $T$ is interpreted as $+\infty$).
We claim that the $M$ and $Q$ defined in (2.9), with $T$ independent of $X$ and having an exponential distribution with parameter $c>0$, are dependent. To check this, we use the formula (see Proposition 2.4 in [153])
\[
EQ^aM^b=\frac{a}{c+\Phi(a+b)}\,EQ^{a-1}M^b
\]
for $a>0$ and $b\ge 0$, which is an extension of (2.10). Specializing this to $a=b=1$ gives $EQM=(c+\Phi(2))^{-1}EM$, which proves the dependence because $EQ=(c+\Phi(1))^{-1}\ne(c+\Phi(2))^{-1}$ in view of (2.10).
Given next is an observation that will be relevant in Example 2.1.8.
Example 2.1.1 Let $X$ be a compound Poisson process defined by $X(t)=\sum_{i=1}^{\nu(t)}\zeta_i$, $t\ge 0$, where $(\zeta_i)_{i\in\mathbb N}$ are independent copies of a nonnegative random variable $\zeta$ with $P\{\zeta=0\}\in[0,1)$ which are independent of a Poisson process $(\nu(t))_{t\ge 0}$ with intensity $\lambda>0$. Using (2.8) with $T$ being the first arrival time in $(\nu(t))$, we conclude that $\int_0^\infty e^{-X(t)}\,dt$ is a perpetuity generated by independent $M:=e^{-X(T)}=e^{-\zeta_1}$ and $Q:=\int_0^Te^{-X(t)}\,dt=T$. Observe that $Q$ has an exponential distribution with parameter $\lambda$ and $P\{M=1\}=P\{\zeta=0\}\in[0,1)$. Since
\[
\Phi(s)=-\log Ee^{-sX(1)}=\lambda(1-Ee^{-s\zeta})=\lambda(1-EM^s),\quad s\ge 0,
\]
an appeal to (2.10) gives
\[
E\Big(\int_0^\infty e^{-X(t)}\,dt\Big)^n=\frac{n!}{\Phi(1)\cdot\ldots\cdot\Phi(n)}=\frac{n!}{\lambda^n(1-EM)\cdot\ldots\cdot(1-EM^n)} \tag{2.11}
\]
for $n\in\mathbb N$.
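Formula (2.11) is easy to stress-test by simulation. In the sketch below (ours) we take $\lambda=1$ and $\zeta$ exponential with mean $1$, so $EM^k=Ee^{-k\zeta}=1/(k+1)$, the product $\lambda^n(1-EM)\cdots(1-EM^n)=\prod_{k=1}^nk/(k+1)=1/(n+1)$ telescopes, and (2.11) specializes to $E\big(\int_0^\infty e^{-X(t)}\,dt\big)^n=(n+1)!$; the Monte Carlo mean should therefore be close to $2$:

```python
import math, random

rng = random.Random(3)

def exp_functional(rng, lam=1.0, tol=1e-12):
    """Z = int_0^infty exp(-X(t)) dt for a compound Poisson subordinator with
    Exp(1) jumps arriving at rate lam.  Between jumps X is flat, so Z is a sum
    of exponential waiting times weighted by the current value of exp(-X)."""
    Z, level, weight = 0.0, 0.0, 1.0   # level = X at current time, weight = exp(-level)
    while weight > tol:
        gap = rng.expovariate(lam)     # waiting time to the next jump
        Z += weight * gap
        level += rng.expovariate(1.0)  # Exp(1) jump of X
        weight = math.exp(-level)
    return Z

sample = [exp_functional(rng) for _ in range(20000)]
mean = sum(sample) / len(sample)
print(mean)   # should be close to E Z = 2! = 2
```

The second moment of the sample should likewise be close to $3!=6$, in agreement with the telescoped form of (2.11).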
As far as we know, the example discussed below is the only example of a perpetuity corresponding to dependent $M$ and $Q$ in which one can find the marginal distributions of $M$ and $Q$ explicitly.
Example 2.1.2 Let $X$ be a drift-free non-killed subordinator with the Lévy measure
\[
\nu(dt)=\frac{e^{-t/\alpha}}{(1-e^{-t/\alpha})^{\alpha+1}}\,1_{(0,\infty)}(t)\,dt
\]
for some $\alpha\in(0,1)$. Equivalently,
\[
\Phi(s)=\int_{[0,\infty)}(1-e^{-st})\,\nu(dt)=\frac{\Gamma(1-\alpha)\Gamma(1+\alpha s)}{\Gamma(1+\alpha(s-1))}-1,\quad s\ge 0,
\]
where $\Gamma(\cdot)$ is the gamma function. Let $T$ be a random variable independent of $X$ and having an exponential distribution with parameter 1. We already know that the exponential functional $\int_0^\infty e^{-X(t)}\,dt$ is a perpetuity with dependent $M$ and $Q$ given in (2.9).
We shall say that a random variable $\theta_\alpha$ has the Mittag–Leffler distribution with parameter $\alpha\in(0,1)$ if
\[
Ee^{s\Gamma(1-\alpha)\theta_\alpha}=\sum_{n\ge 0}\frac{s^n}{\Gamma(1+n\alpha)},\quad s\ge 0. \tag{2.12}
\]
The term stems from the fact that the right-hand side defines the Mittag–Leffler function with parameter $\alpha$, a generalization of the exponential function which corresponds to $\alpha=1$. Formula (2.12) entails
\[
E\theta_\alpha^n=\frac{n!}{(\Gamma(1-\alpha))^n\,\Gamma(1+n\alpha)},\quad n\in\mathbb N.
\]
Using (2.10) we infer
\[
E\Big(\int_0^Te^{-X(t)}\,dt\Big)^n=\frac{n!}{(\Gamma(1-\alpha))^n\,\Gamma(1+n\alpha)},\quad n\in\mathbb N,
\]
which shows that $Q=\int_0^Te^{-X(t)}\,dt$ has the Mittag–Leffler distribution with parameter $\alpha$. Further, for $s>0$,
\[
Ee^{-sX(T)}=\int_0^\infty Ee^{-sX(t)}e^{-t}\,dt=\frac{1}{1+\Phi(s)}=\frac{\Gamma(1+\alpha(s-1))}{\Gamma(1-\alpha)\Gamma(1+\alpha s)} =\frac{1}{\Gamma(\alpha)\Gamma(1-\alpha)}\int_0^1x^{s\alpha}\,x^{-\alpha}(1-x)^{\alpha-1}\,dx.
\]
This proves that $M=e^{-X(T)}$ has the same distribution as $\beta_\alpha^\alpha$, where $P\{\beta_\alpha\in dx\}=\frac{1}{\Gamma(\alpha)\Gamma(1-\alpha)}x^{-\alpha}(1-x)^{\alpha-1}1_{(0,1)}(x)\,dx$, i.e., $\beta_\alpha$ has a beta distribution with parameters $1-\alpha$ and $\alpha$.
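The identification of the moments of $Q$ above rests on the telescoping product $\prod_{k=1}^n(1+\Phi(k))=(\Gamma(1-\alpha))^n\,\Gamma(1+n\alpha)$, which holds because $\Gamma(1+\alpha(k-1))$ in the denominator of $1+\Phi(k)$ cancels against the numerator of the previous factor and $\Gamma(1)=1$. A short numerical check of this identity (ours; the test values of $\alpha$ are arbitrary):

```python
import math

def phi(s, a):
    """Laplace exponent from the drift-free subordinator of Example 2.1.2
    (alpha = a in (0,1))."""
    return (math.gamma(1 - a) * math.gamma(1 + a * s)
            / math.gamma(1 + a * (s - 1)) - 1.0)

def moment_via_2_10(n, a):
    """n-th moment of Q computed directly via (2.10) with c = 1."""
    prod = 1.0
    for k in range(1, n + 1):
        prod *= 1.0 + phi(k, a)
    return math.factorial(n) / prod

def mittag_leffler_moment(n, a):
    """n!/((Gamma(1-a))^n Gamma(1+na)), the Mittag-Leffler moments."""
    return math.factorial(n) / (math.gamma(1 - a) ** n * math.gamma(1 + n * a))

for n in (1, 2, 5):
    assert abs(moment_via_2_10(n, 0.4) - mittag_leffler_moment(n, 0.4)) < 1e-9
print("telescoping identity verified")
```

The agreement up to floating-point error confirms that $Q$ carries exactly the Mittag–Leffler moment sequence.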
Fixed Points of Inhomogeneous Smoothing Transforms With $J$ deterministic or random, finite or infinite with positive probability, let $M:=(M^{(i)})_{1\le i\le J}$ be a collection of real-valued random variables. Also, for $d\in\mathbb N$, let $Q$ be an $\mathbb R^d$-valued random vector arbitrarily dependent on $M$. The mapping $T$ on the set of probability measures on $\mathbb R^d$ that maps a distribution $\mu$ to the distribution of the random vector $\sum_{i=1}^JM^{(i)}X_i+Q$, where $(X_i)_{i\in\mathbb N}$ are independent random vectors with distribution $\mu$ which are also independent of $(M,Q)$, is called an inhomogeneous smoothing transform. The smoothing transform is called homogeneous if $Q=0$ a.s. Let $\mu$ be a fixed point of $T$, i.e., $\mu=T\mu$, and $Y$ a random vector with distribution $\mu$. Then
\[
Y\overset{d}{=}\sum_{i=1}^JM^{(i)}Y_i+Q,
\]
where $(Y_i)_{i\in\mathbb N}$ are independent copies of $Y$ which are also independent of $(M,Q)$. Obviously, the distribution of a perpetuity is a fixed point of an inhomogeneous smoothing transform with $J=1$ a.s. and $d=1$.
Fixed Points of Poisson Shot Noise Transforms The homogeneous smoothing transform $T$ is called a Poisson shot noise transform if $M^{(i)}=h(T_i)$ for a Borel function $h\ge 0$ and the arrival times $(T_i)$ of a Poisson process of positive intensity $\lambda$. Let $Y$ be a random variable whose distribution is a fixed point of the Poisson shot noise transform, concentrated on $[0,\infty)$ and nondegenerate at 0. Then
\[
Y\overset{d}{=}\sum_{i\ge 1}h(T_i)Y_i, \tag{2.13}
\]
where $(Y_i)$ are independent copies of $Y$ which are also independent of $(T_j)$, or, equivalently,
\[
\varphi(s)=\exp\Big(-\lambda\int_0^\infty\big(1-\varphi(h(y)s)\big)\,dy\Big),\quad s\ge 0, \tag{2.14}
\]
where $\varphi(s):=Ee^{-sY}$, $s\ge 0$.



Now we discuss the simplest situation in which the fixed points can be explicitly
identified.
Example 2.1.3 If $h(y)=e^{-y}$, then the (nondegenerate at zero) fixed points of the shot noise transforms exist if, and only if, $\lambda\le 1$. These are positive Linnik distributions $\mu_\beta$ with tails
\[ \mu_\beta\big((x,\infty)\big) = \sum_{k\ge 0} (-\beta)^{-k} x^{\lambda k}/\Gamma(1+\lambda k),\quad x\ge 0 \]
for each $\beta>0$, and Laplace–Stieltjes transforms
\[ \int_{[0,\infty)} \exp(-sx)\,\mu_\beta(dx) = (1+\beta s^{\lambda})^{-1},\quad s\ge 0. \]
For the proof, differentiate (2.14) (with $h(y)=e^{-y}$) to obtain a Bernoulli differential equation $\varphi'(s)+\lambda s^{-1}\varphi(s)-\lambda s^{-1}\varphi^2(s)=0$. Changing the variable $z(s)=1/\varphi(s)$ we arrive at $z'(s)-\lambda s^{-1}z(s)+\lambda s^{-1}=0$ which has solutions $z(s)=1+Cs^{\lambda}$ for $C\in\mathbb{R}$, whence $\varphi(s)=(1+Cs^{\lambda})^{-1}$. If $C=0$, then $\varphi(s)=1=\int_{[0,\infty)}e^{-sx}\varepsilon_0(dx)$. If $C<0$ or $\lambda>1$, $\varphi(s)$ fails to be completely monotone (by Bernstein's theorem it cannot then be a Laplace transform). Indeed, in the first case $\varphi$ takes negative values, whereas in the second case it is not convex.
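Equation (2.14) with $h(y)=e^{-y}$ can also be checked against the claimed Linnik fixed point numerically. The following sketch (the parameter choices $\lambda=0.7$, $\beta=2$ are merely illustrative) evaluates the integral by Simpson's rule:

```python
import math

def phi(s, lam, beta):
    """Laplace-Stieltjes transform of the positive Linnik law: (1 + beta*s^lam)^(-1)."""
    return 1.0 / (1.0 + beta * s ** lam)

def shot_noise_rhs(s, lam, beta, ymax=60.0, n=20000):
    """Right-hand side of (2.14) with h(y) = exp(-y) and Poisson intensity lam,
    evaluated by Simpson's rule (the integrand decays like exp(-lam*y))."""
    h = ymax / n
    f = lambda y: 1.0 - phi(math.exp(-y) * s, lam, beta)
    total = f(0.0) + f(ymax)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(i * h)
    return math.exp(-lam * (h / 3.0) * total)

lam, beta = 0.7, 2.0
for s in (0.5, 1.0, 3.0):
    assert abs(phi(s, lam, beta) - shot_noise_rhs(s, lam, beta)) < 1e-6
```

The agreement reflects the substitution $u=e^{-y}s$, under which the integral in (2.14) equals $\lambda^{-1}\log(1+\beta s^{\lambda})$.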
Let $Y$ be a random variable as in (2.13) with finite mean $m>0$. We shall show that the size-biased distribution $\nu^{\ast}$ pertaining to the distribution of $Y$, i.e., $\nu^{\ast}(dx):=m^{-1}x\,\mathrm{P}\{Y\in dx\}$, is the distribution of a perpetuity which corresponds to independent $M$ and $Q$ where $Q$ has the same distribution as $Y$, and $M$ has the distribution $\rho$ defined below. While doing so we assume, for simplicity, that $h$ is strictly decreasing and continuous on $[0,\infty)$. Then the inverse function $h^{-1}$ is well defined and decreasing which implies that the equality $\rho(A):=-\lambda\int_A x\,d(h^{-1}(x))$, where $A$ is a Borel subset of $[h(\infty),h(0)]$, defines a measure. Passing to the expectation in (2.13) we obtain
\[ m = m\,\mathrm{E}\sum_{i\ge 1}h(T_i) = m\lambda\int_0^\infty h(y)\,dy = -m\lambda\int_{[h(\infty),h(0)]} x\,d(h^{-1}(x)) \]
which shows that $\rho$ is a probability measure. Differentiating (2.14) yields
\[ -m^{-1}\varphi'(s) = \varphi(s)\,\lambda\int_0^\infty \big(-m^{-1}\varphi'(h(y)s)\big)h(y)\,dy = \varphi(s)\int \big(-m^{-1}\varphi'(sx)\big)\,\rho(dx). \]
Note that differentiating under the integral sign in (2.14) is legal because the resulting integral is uniformly convergent. Since $-m^{-1}\varphi'(s)$ is the Laplace–Stieltjes transform of $\nu^{\ast}$, the last equality is equivalent to distributional equality (2.7), in which the distribution of $Y_\infty$ is $\nu^{\ast}$, and $M$ and $Q$ are as stated above. Conversely, as shown in Lemma 2.2 in [144], whenever (2.7) holds with independent $M$ and $Q$, $\mathrm{P}\{M=0\}=0$ and the distribution of $Y_\infty$ being the size-biased distribution pertaining to the distribution of $Q$, the distribution of $Q$ has to be a fixed point of a Poisson shot noise transform. Given below are four examples of this kind.
Example 2.1.4
(a) $M$ has a beta distribution with parameters $1$ and $\alpha>0$, i.e.,
\[ \mathrm{P}\{M\in dx\} = \alpha(1-x)^{\alpha-1}\mathbb{1}_{(0,1)}(x)\,dx, \]
$Q$ has a $\Gamma(\alpha,\alpha)$-distribution, i.e.,
\[ \mathrm{P}\{Q\in dx\} = \frac{\alpha^{\alpha}}{\Gamma(\alpha)}x^{\alpha-1}e^{-\alpha x}\mathbb{1}_{(0,\infty)}(x)\,dx \]
where $\Gamma(\cdot)$ is the gamma function, $Y_\infty$ is $\Gamma(\alpha+1,\alpha)$-distributed, i.e.,
\[ \mathrm{P}\{Y_\infty\in dx\} = \frac{\alpha^{\alpha+1}}{\Gamma(\alpha+1)}x^{\alpha}e^{-\alpha x}\mathbb{1}_{(0,\infty)}(x)\,dx. \]
(b) $M$ has a uniform distribution on $[q,1]$ for some $q\in[0,1)$, $\mathrm{E}e^{-sQ}=(b+qs)(b+s)^{-1}$, $s\ge 0$, i.e., the distribution of $Q$ is a mixture of the exponential distribution and an atom with mass $q$ at the origin, $Y_\infty$ is $\Gamma(2,b)$-distributed, i.e., $\mathrm{E}e^{-sY_\infty}=b^2(b+s)^{-2}$, $s\ge 0$.
(c) $M$ has a Weibull distribution with parameter $1/2$, i.e.,
\[ \mathrm{P}\{M\in dx\} = \frac{e^{-\sqrt{x}}}{2\sqrt{x}}\mathbb{1}_{(0,\infty)}(x)\,dx, \]
$\mathrm{E}e^{-sQ}=(1+b\sqrt{s})e^{-b\sqrt{s}}$, $s\ge 0$ for some $b>0$, $\mathrm{E}e^{-sY_\infty}=e^{-b\sqrt{s}}$, $s\ge 0$, i.e., $Y_\infty$ has a positive stable distribution with index $1/2$.
(d) $\mathrm{P}\{M\in dx\}=(x^{-1/2}-1)\mathbb{1}_{(0,1)}(x)\,dx$, $\mathrm{E}e^{-sQ}=\Big(\dfrac{\sqrt{2s}}{\sinh\sqrt{2s}}\Big)^{2}$, $s\ge 0$,
\[ \mathrm{E}e^{-sY_\infty} = \frac{3\big(\sqrt{2s}\cosh\sqrt{2s}-\sinh\sqrt{2s}\big)}{\sinh^3\sqrt{2s}} = -\frac{(\mathrm{E}e^{-sQ})'}{\mathrm{E}Q},\quad s\ge 0. \]
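The fixed-point property in case (a) can be probed by simulation. The sketch below (the sampling recipes are standard and not taken from the text) draws $MY+Q$ and compares its first two moments with those of $\Gamma(\alpha+1,\alpha)$ for $\alpha=1$:

```python
import random

random.seed(7)
alpha, N = 1.0, 400_000

def fixed_point_draw():
    """One draw of M*Y + Q with M ~ Beta(1, alpha), Q ~ Gamma(alpha, alpha),
    Y ~ Gamma(alpha+1, alpha), all independent; by Example 2.1.4(a) the result
    should again be Gamma(alpha+1, alpha)-distributed."""
    M = 1.0 - random.random() ** (1.0 / alpha)        # Beta(1, alpha) via inverse CDF
    Q = random.gammavariate(alpha, 1.0 / alpha)       # shape alpha, rate alpha
    Y = random.gammavariate(alpha + 1.0, 1.0 / alpha)
    return M * Y + Q

draws = [fixed_point_draw() for _ in range(N)]
m1 = sum(draws) / N
m2 = sum(x * x for x in draws) / N
# Gamma(alpha+1, alpha) has mean (alpha+1)/alpha and second moment (alpha+1)(alpha+2)/alpha^2
assert abs(m1 - (alpha + 1) / alpha) < 0.05
assert abs(m2 - (alpha + 1) * (alpha + 2) / alpha ** 2) < 0.2
```

This only checks two moments, not the full distribution, but both can also be verified analytically from the independence of $M$, $Q$ and $Y$.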

The following example, which seems to be new, sets a link between perpetuities and number theory.
Random Lüroth Series According to Theorems 1 and 2 in [169], any irrational $x\in(0,1)$ has a unique representation in the form
\[ x = \frac{1}{a_1} - \frac{1}{a_1(a_1+1)a_2} + \ldots + \frac{(-1)^{n-1}}{a_1(a_1+1)\cdot\ldots\cdot a_{n-1}(a_{n-1}+1)a_n} + \ldots \]
for some positive integers $(a_k)$. The right-hand side is called alternating Lüroth series.
Let $(\xi_k)_{k\in\mathbb{N}}$ be independent copies of a positive (not necessarily integer-valued) random variable $\xi$. Then the series
\[ \frac{1}{\xi_1} - \frac{1}{\xi_1(\xi_1+1)\xi_2} + \ldots + \frac{(-1)^{n-1}}{\xi_1(\xi_1+1)\cdot\ldots\cdot \xi_{n-1}(\xi_{n-1}+1)\xi_n} + \ldots \]
may be called random Lüroth series. Whenever the series converges a.s. (this happens, for instance, if $\xi\ge 1$ a.s.), its sum is a perpetuity which corresponds to
\[ (M,Q) = \Big(-\frac{1}{\xi(\xi+1)},\ \frac{1}{\xi}\Big). \]
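The identification $(M,Q)=(-1/(\xi(\xi+1)),1/\xi)$ is a path-by-path identity, which a short script can confirm (the uniform choice of $\xi$ on $\{1,2,3\}$ is only illustrative):

```python
import random

random.seed(1)

def luroth_partial_sum(xs):
    """Partial sum of the alternating Lueroth series for the draws xs,
    following the series term by term."""
    s, prod, sign = 0.0, 1.0, 1.0
    for x in xs:
        s += sign * prod / x
        prod /= x * (x + 1.0)
        sign = -sign            # the alternating sign of the series
    return s

def perpetuity_partial_sum(xs):
    """Same quantity computed as the perpetuity sum Q1 + M1 Q2 + M1 M2 Q3 + ...
    with (M, Q) = (-1/(xi(xi+1)), 1/xi); the sign is carried by the M-product."""
    s, pi = 0.0, 1.0
    for x in xs:
        s += pi * (1.0 / x)
        pi *= -1.0 / (x * (x + 1.0))
    return s

xs = [random.randint(1, 3) for _ in range(30)]
assert abs(luroth_partial_sum(xs) - perpetuity_partial_sum(xs)) < 1e-12
```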

2.1.3 Continuity of Perpetuities

Theorem 2.1.2 given below states that the distribution of $Y_\infty$ is pure provided that $\mathrm{P}\{M=0\}=0$.
Theorem 2.1.2 If $\mathrm{P}\{M=0\}=0$ and $|Y_\infty|<\infty$ a.s., then the distribution of $Y_\infty$ is either degenerate, absolutely continuous (w.r.t. the Lebesgue measure), or singular continuous.
Proof Suppose that (2.5) holds true which particularly implies that $\mathrm{P}\{Q=0\}<1$, for otherwise the distribution of $Y_\infty$ is clearly degenerate. By Theorem 2.1.1 we thus also have that $\lim_{n\to\infty}\Pi_n=0$ a.s. Assume further that the distribution of $Y_\infty$ is nondegenerate and has atoms. Denote by $p$ the maximal weight (probability) of atoms and by $b_1,\ldots,b_d$ the atoms with weight $p$. Notice that $d\le 1/p$. In view of (2.6) we have
\[ \mathrm{P}\{Y_\infty=b_i\} = \sum_{a\in A}\mathrm{P}\{Y_m+\Pi_m a=b_i\}\,\mathrm{P}\{Y_\infty=a\},\quad i=1,\ldots,d \quad (2.15) \]
for each $m\in\mathbb{N}$ where $A$ is the set of all atoms of the distribution of $Y_\infty$. Since $\mathrm{P}\{M=0\}=0$, we have $\sum_{a\in A}\mathrm{P}\{Y_m+\Pi_m a=b_i\}\le 1$. Now use $\mathrm{P}\{Y_\infty=a\}\le \mathrm{P}\{Y_\infty=b_i\}$ to conclude that equalities (2.15) can only hold if the summation extends only over $b_j$, $j=1,\ldots,d$, and so
\[ \sum_{j=1}^{d}\mathrm{P}\{Y_m+\Pi_m b_j=b_i\}=1,\quad i=1,\ldots,d \quad (2.16) \]
for each $m\in\mathbb{N}$. By letting $m$ tend to infinity and using $(\Pi_m,Y_m)\to(0,Y_\infty)$ a.s. in (2.16), we arrive at
\[ \mathrm{P}\{Y_\infty=b_i\}=d^{-1},\quad i=1,\ldots,d. \quad (2.17) \]
Suppose $d\ge 2$ and let $U$ and $V$ be independent copies of $Y_\infty$ which are also independent of $(M_n,Q_n)_{n\in\mathbb{N}}$. Put $Y_\infty^{(s)}:=U-V$, a symmetrization of $Y_\infty$. Its distribution has support $\Delta:=\{b_i-b_j: i,j=1,\ldots,d\}$. Since $Y_m+\Pi_m U\overset{d}{=}Y_m+\Pi_m V\overset{d}{=}Y_\infty$ for each $m\in\mathbb{N}$, we see that the distribution of
\[ D_m := \big(Y_m+\Pi_m U\big)-\big(Y_m+\Pi_m V\big)=\Pi_m Y_\infty^{(s)} \]
has a support $\Delta_m$ contained in $\Delta$. Put $\gamma:=\min(\Delta\cap(0,\infty))$ and $\delta:=\max\Delta$. Using the independence of $\Pi_m$ and $Y_\infty^{(s)}$ in combination with $\mathrm{P}\{M=0\}=0$, we now infer
\[ 0=\mathrm{P}\{|D_m|\in(0,\gamma)\}=\mathrm{P}\{|\Pi_m Y_\infty^{(s)}|\in(0,\gamma)\}\ \ge\ \mathrm{P}\{|\Pi_m|<\gamma/\delta\}\,\mathrm{P}\{|Y_\infty^{(s)}|\in(0,\delta]\} \]
and therefore $\mathrm{P}\{|\Pi_m|<\gamma/\delta\}=0$ because $\mathrm{P}\{|Y_\infty^{(s)}|\in(0,\delta]\}=1-dp^2>0$. But this contradicts $\Pi_m\to 0$ a.s. and so $d=1$, i.e., $Y_\infty=b_1$ a.s. by (2.17). Hence we have proved that if the distribution of $Y_\infty$ is nondegenerate, it must be continuous.
It remains to verify that a continuously distributed $Y_\infty$ is of pure type. Let $g(t)$ be the characteristic function (ch.f.) of $Y_\infty$. By Lebesgue's decomposition theorem $g(t)=\alpha_1 g_1(t)+\alpha_2 g_2(t)$ where $\alpha_1,\alpha_2\ge 0$, $\alpha_1+\alpha_2=1$, and $g_1(t)$ and $g_2(t)$ are the ch.f.'s of the absolutely continuous and the continuously singular components of the distribution of $Y_\infty$, respectively. If $\alpha_1=0$, the distribution is singular continuous. Suppose $\alpha_1>0$ so that $g=g_1$ must be verified. Since the distribution of $Y_\infty$ satisfies (2.7), we infer in terms of its ch.f.
\[ g(t) = \mathrm{E}e^{itQ}g(Mt),\quad t\in\mathbb{R} \quad (2.18) \]
and thus
\[ \alpha_1 g_1(t)+\alpha_2 g_2(t) = \alpha_1\mathrm{E}e^{itQ}g_1(Mt)+\alpha_2\mathrm{E}e^{itQ}g_2(Mt). \]
Since $\mathrm{P}\{M=0\}=0$ and $g_1(t)$ is the ch.f. of an absolutely continuous distribution, so is $t\mapsto \mathrm{E}e^{itQ}g_1(Mt)$. Indeed, let $X_1$ be a random variable with the ch.f. $g_1$ independent of $(M,Q)$. For a Borel set $B$ with $\mathrm{LEB}(B)=0$ we also have $\mathrm{LEB}(m^{-1}(B-q))=0$ for any $m\ne 0$ and $q\in\mathbb{R}$. Hence $\mathrm{P}\{X_1\in m^{-1}(B-q)\}=0$ which yields $\mathrm{P}\{Q+MX_1\in B\}=\int \mathrm{P}\{X_1\in m^{-1}(B-q)\}\,d\mathrm{P}\{M\le m, Q\le q\}=0$. If
\[ \alpha_2\mathrm{E}e^{itQ}g_2(Mt) = \alpha_2\big(\alpha_3 g_3(t)+\alpha_4 g_4(t)\big) \]
where $\alpha_3,\alpha_4\ge 0$, $\alpha_3+\alpha_4=1$, and $g_3(t)$ and $g_4(t)$ are the ch.f.'s of the absolutely continuous and the continuously singular components, respectively, then the uniqueness of the Lebesgue decomposition entails $\alpha_1 g_1(t)=\alpha_1\mathrm{E}e^{itQ}g_1(Mt)+\alpha_2\alpha_3 g_3(t)$ and thus upon setting $t=0$ that $\alpha_2\alpha_3=0$. Consequently, $g_1(t)=\mathrm{E}e^{itQ}g_1(Mt)$ which means that $g_1$ is also a solution to functional equation (2.18). By considering the bounded continuous function $g-g_1$ and utilizing $g(0)-g_1(0)=0$ in combination with $\Pi_n\to 0$ a.s. (the latter follows from Theorem 2.1.1 because the distribution of $Y_\infty$ is continuous), we infer upon iterating (2.18) for $g-g_1$ and an appeal to the dominated convergence theorem that
\[ |g(t)-g_1(t)| \le \lim_{n\to\infty}\mathrm{E}\big|g(\Pi_n t)-g_1(\Pi_n t)\big| = 0 \]
for all $t\ne 0$. Hence $g(t)=g_1(t)$ for all $t\in\mathbb{R}$ which means that the distribution of $Y_\infty$ is absolutely continuous. □
The next examples demonstrate that the distribution of $Y_\infty$ can indeed be continuously singular as well as absolutely continuous.
Example 2.1.5 (c-Decomposable Distributions) Consider the situation where $M$ is a.s. equal to a constant $c\in(0,1)$, so that
\[ Y_\infty \overset{d}{=} cY_\infty + Q. \]
(a) If $c=1/2$ and $Q$ has a Poisson distribution, then the distribution of $Y_\infty$ is singularly continuous.
(b) If $c=1/n$ for some fixed positive integer $n$ and $\mathrm{P}\{Q=k/n\}=1/n$, $k=0,1,\ldots,n-1$, then the distribution of $Y_\infty$ is uniform on $[0,1]$ and thus absolutely continuous.
(c) If $c=1/2$ and $\mathrm{P}\{Q=\pm 1\}=1/2$, then the distribution of $Y_\infty$ is uniform on $[-2,2]$ and thus absolutely continuous.
(d) If $c\in(0,1/2)$ and $\mathrm{P}\{Q=\pm 1\}=1/2$, then the distribution of $Y_\infty$ is singularly continuous.
Proof (a) Plainly, $g(t):=\mathrm{E}\exp(itY_\infty)=\exp\big(\lambda\sum_{k\ge 0}(e^{it2^{-k}}-1)\big)$, $t\in\mathbb{R}$ for some $\lambda>0$ and
\[ |g(t)| = \exp\Big(-\lambda\sum_{k\ge 0}\big(1-\cos(t2^{-k})\big)\Big),\quad t\in\mathbb{R}. \]
Since the distribution of $Q$ is nondegenerate, so is the distribution of $Y_\infty$. In view of Theorem 2.1.2 and the Riemann–Lebesgue lemma it thus suffices to show that $|g(t)|$ does not converge to zero, as $t\to\infty$. Set $t_n:=2^{n+1}\pi$, $n\in\mathbb{N}$. Then $|g(t_n)|=\exp\big(-\lambda\sum_{i\ge 0}(1-\cos(2^{-i}\pi))\big)>0$, which does not depend on $n$ and thereby proves the claim.
(b) This follows by a direct computation. Since
\[ \varphi(s) := \mathrm{E}e^{-sQ} = \frac{1-e^{-s}}{n(1-e^{-s/n})},\quad s>0, \]
we conclude
\[ \mathrm{E}e^{-sY_\infty} = \lim_{k\to\infty}\prod_{i=0}^{k-1}\varphi(sn^{-i}) = \lim_{k\to\infty}\frac{1-e^{-s}}{n^k(1-e^{-sn^{-k}})} = \frac{1-e^{-s}}{s}. \]
This is the Laplace–Stieltjes transform of the uniform distribution on $[0,1]$.
(c) This is equivalent to Euler's formula
\[ \frac{\sin t}{t} = \prod_{i\ge 1}\cos(t2^{-i}),\quad t\in\mathbb{R} \]
which follows by repeated application of the double-angle formula $\sin t = 2\sin(t/2)\cos(t/2)$.
(d) In view of Theorem 2.1.2 it suffices to show that $\mathrm{P}\{Y_\infty\in A\}=1$ for some Borel set $A$ with $\mathrm{LEB}(A)=0$. Denote by $(x_k)_{k=1,\ldots,2^n}$ the possible values of the random variable $\sum_{i=1}^{n}c^{i-1}Q_i$. For each $x_k$ construct an interval $I_{x_k}$ of length $2\sum_{i\ge n+1}c^{i-1}=2(1-c)^{-1}c^n$ with center $x_k$. Set $O_n:=\cup_{k=1}^{2^n}I_{x_k}$ and note that $\mathrm{P}\{Y_\infty\in O_n\}=1$ because $\sum_{i\ge n+1}c^{i-1}Q_i\in[-(1-c)^{-1}c^n,(1-c)^{-1}c^n]$ a.s. It remains to define $A:=\cap_{n\ge 1}O_n$ and observe that
\[ \mathrm{LEB}(A) \le \mathrm{LEB}(O_n) \le 2(1-c)^{-1}(2c)^n \]
for each $n\in\mathbb{N}$. Hence $\mathrm{LEB}(A)=0$ in view of $c\in(0,1/2)$. □
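Case (c) can be illustrated by simulation; the truncation depth and sample size below are arbitrary choices:

```python
import random

random.seed(42)
N, depth = 100_000, 40

def perpetuity_draw():
    """Approximate Y = sum_{k>=0} 2^{-k} Q_{k+1} with Q = +/-1 equiprobable;
    truncating at `depth` terms changes the value by at most 2^{-depth+1}."""
    return sum(2.0 ** (-k) * random.choice((-1.0, 1.0)) for k in range(depth))

ys = [perpetuity_draw() for _ in range(N)]
mean = sum(ys) / N
second = sum(y * y for y in ys) / N
frac_mid = sum(1 for y in ys if abs(y) <= 1.0) / N
# Uniform[-2, 2]: mean 0, second moment 4/3, and half the mass lies in [-1, 1]
assert abs(mean) < 0.03
assert abs(second - 4.0 / 3.0) < 0.04
assert abs(frac_mid - 0.5) < 0.015
```

With $c\in(0,1/2)$ instead of $c=1/2$ the same simulation would concentrate on the Cantor-like set $A$ of the proof of (d).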
It is clear that the distribution of $Y_\infty$ is absolutely continuous whenever $M$ and $Q$ are independent and at least one of these has an absolutely continuous distribution. Either part of Example 2.1.4 provides an explicit example of this kind.
Example 2.1.6, given next, shows that the condition $\mathrm{P}\{M=0\}=0$ in Theorem 2.1.2 is indispensable.
Example 2.1.6 If $M$ and $Q$ are independent, $\mathrm{P}\{M=0\}=p$, $\mathrm{P}\{M=1\}=1-p$ for $p\in(0,1)$, the distribution of $Y_\infty=\sum_{k=1}^{N}Q_k$ is compound geometric with characteristic function
\[ \mathrm{E}e^{itY_\infty} = \frac{p\,\mathrm{E}e^{itQ}}{1-(1-p)\mathrm{E}e^{itQ}},\quad t\in\mathbb{R}. \quad (2.19) \]
This follows from a comment on p. 45, as the random variable $N$ defined there has a geometric distribution (starting at one) with parameter $\mathrm{P}\{M=0\}$.
In particular, if the distribution of $Q$ is discrete, so is the distribution of $Y_\infty$. For instance, if $Q=1$ a.s., then the distribution of $Y_\infty$ is geometric with parameter $p$; if the distribution of $Q$ is geometric (starting at zero) with parameter $r$, then the distribution of $Y_\infty$ is geometric (starting at zero) with parameter $pr/(1-(1-p)r)$.
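A minimal simulation of the compound geometric case with $Q=1$ a.s. (the parameter $p=0.4$ is an arbitrary choice):

```python
import random

random.seed(0)
p, N = 0.4, 200_000

def perpetuity_draw():
    """Y = Q1 + M1 Q2 + M1 M2 Q3 + ... with Q = 1 a.s. and P{M=0}=p, P{M=1}=1-p;
    the sum terminates at the first M equal to zero."""
    y = 0
    while True:
        y += 1                      # add the next Q_k = 1 (prefactor still 1)
        if random.random() < p:     # M_k = 0: all further terms vanish
            return y

ys = [perpetuity_draw() for _ in range(N)]
# Y should be geometric starting at one with parameter p: mean 1/p, P{Y=1}=p
assert abs(sum(ys) / N - 1 / p) < 0.05
assert abs(sum(1 for y in ys if y == 1) / N - p) < 0.01
```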
On the one hand, the case when $\mathrm{P}\{M=0\}>0$ is more complicated than the complementary one, for the distribution of $Y_\infty$ is not necessarily pure.
Example 2.1.7 Assume that $M$ and $Q$ are independent, $\mathrm{P}\{M=0\}=p$, $\mathrm{P}\{M=1\}=1-p$ for $p\in(0,1)$ and $\mathrm{E}e^{-sQ}=(1+\gamma s)/(1+s)$ for some $\gamma\in(0,1)$, i.e., the distribution of $Q$ is a mixture of the atom at zero and an exponential distribution with parameter $1$. Then, using (2.19) we conclude that the distribution of $Y_\infty$ is a mixture of the atom at zero with weight $p\gamma/(1-(1-p)\gamma)$ and an exponential distribution with parameter $p/(1-(1-p)\gamma)$.
On the other hand, it is simpler, for there is a simple criterion for the distribution of $Y_\infty$ to be (absolutely) continuous.
Theorem 2.1.3 Let $\mathrm{P}\{M=0\}>0$. Then the distribution of $Y_\infty$ is (absolutely) continuous if, and only if, the conditional distribution $\mathrm{P}\{Q\in\cdot\,|M=0\}$ is (absolutely) continuous.
Proof We only treat the continuity and refer to Theorem 5.1 in [27] for the absolute continuity. If $\mathrm{P}\{M=0\}=1$, then $Y_\infty=Q_1$, and there is nothing to prove. Therefore we assume that $\mathrm{P}\{M=0\}\in(0,1)$.
⇐. For a Borel set $C$ we have
\[ \mathrm{P}\{Y_\infty\in C\}=\mathrm{P}\{Q\in C|M=0\}\mathrm{P}\{M=0\}+\mathrm{P}\{Y_\infty\in C|M\ne 0\}\mathrm{P}\{M\ne 0\} \]
which shows that the distribution of $Y_\infty$ has a continuous component. The rest of the proof exploits Lebesgue's decomposition theorem and proceeds along the lines of the second part of the proof of Theorem 2.1.2. Still, we have to verify that $Q+MX_1$ has a continuous distribution whenever $X_1$ is independent of $(M,Q)$ and has a continuous distribution, and $\mathrm{P}\{Q\in\cdot\,|M=0\}$ is a continuous distribution. The claim follows from the equality
\[ \mathrm{P}\{Q+MX_1=x\}=\mathrm{P}\{Q=x,M=0\}+\int_{\{m\ne 0\}}\mathrm{P}\{X_1=m^{-1}(x-q)\}\,d\mathrm{P}\{M\le m,Q\le q\}=0 \]
which holds for any $x\in\mathbb{R}$.
⇒. We have $\mathrm{P}\{Y_\infty=x\}=\mathrm{P}\{MY_\infty+Q=x\}\ge \mathrm{P}\{Q=x|M=0\}\mathrm{P}\{M=0\}$ for any $x\in\mathbb{R}$. Therefore if the distribution of $Y_\infty$ is continuous, we infer $\mathrm{P}\{Q=x|M=0\}=0$ for any $x\in\mathbb{R}$. □

2.1.4 Moments of Perpetuities

In this section ultimate criteria for the finiteness of power, exponential, and logarithmic moments will be given. As far as power and logarithmic moments are concerned, the key observation which goes back to [176] is that in the range of power and subpower tails the tail behavior of $|\sum_{n\ge 1}\Pi_{n-1}Q_n|$ coincides with that of $\sup_{n\ge 1}|\Pi_{n-1}Q_n|$. In particular, one may expect that $\mathrm{E}f\big(|\sum_{n\ge 1}\Pi_{n-1}Q_n|\big)<\infty$ if, and only if, $\mathrm{E}f\big(\sup_{n\ge 1}|\Pi_{n-1}Q_n|\big)<\infty$ for positive nondecreasing functions $f$ of at most power growth. As is seen from the subsequent presentation this is indeed true (compare Theorem 2.1.4 and Theorem 1.3.1; Theorem 2.1.5 and Theorem 1.3.5). In the range of exponential tails the equivalence discussed above does not hold any more, and one has to investigate the finiteness of $\mathrm{E}\exp\big(a|\sum_{n\in\mathbb{N}}\Pi_{n-1}Q_n|\big)$ for $a>0$ directly, without resorting to the analysis of $\mathrm{E}\exp\big(a\sup_{n\ge 1}|\Pi_{n-1}Q_n|\big)$.
Logarithmic Moments
Theorem 2.1.4 Let $f:\mathbb{R}^{+}\to\mathbb{R}^{+}$ be a measurable, locally bounded function regularly varying at $\infty$ of positive index. Suppose that (2.1) and (2.5) hold, and that $\lim_{n\to\infty}\Pi_n=0$ a.s. Then the following assertions are equivalent:
\[ \mathrm{E}f(\log^{+}|M|)J(\log^{+}|M|)<\infty \ \text{ and } \ \mathrm{E}f(\log^{+}|Q|)J(\log^{+}|Q|)<\infty; \]
\[ \mathrm{E}f(\log^{+}|Y_\infty|)<\infty. \]
The proof of Theorem 2.1.4, which can be found in [6], will not be given here, for it does not contain essential new ideas in comparison with the proof of Theorem 1.3.1.
Power Moments
Theorem 2.1.5 Suppose that (2.1) and (2.5) hold, and let $p>0$. The following assertions are equivalent:
\[ \mathrm{E}|M|^{p}<1 \ \text{ and } \ \mathrm{E}|Q|^{p}<\infty; \quad (2.20) \]
\[ \mathrm{E}\sup_{n\ge 1}|\Pi_{n-1}Q_n|^{p}<\infty; \quad (2.21) \]
\[ \mathrm{E}\Big|\sum_{n\ge 1}\Pi_{n-1}Q_n\Big|^{p}<\infty; \quad (2.22) \]
\[ \mathrm{E}\Big(\sum_{n\ge 1}|\Pi_{n-1}Q_n|\Big)^{p}<\infty. \quad (2.23) \]

Remark 2.1.6 Further conditions equivalent to those in the previous theorem are given by
\[ \mathrm{E}\sup_{n\ge 1}\Big|\sum_{k=1}^{n}\Pi_{k-1}Q_k\Big|^{p}<\infty; \quad (2.24) \]
\[ \mathrm{E}\Big(\sum_{n\ge 1}\Pi_{n-1}^{2}Q_n^{2}\Big)^{p/2}<\infty. \quad (2.25) \]
See Section 2.1.5 for the proof.


Whenever $\mathrm{E}Y_\infty^{n}$, $n\in\mathbb{N}$, is finite, passing to the expectations in (2.7) gives
\[ \mathrm{E}Y_\infty^{n} = \mathrm{E}(Q+MY_\infty)^{n} \]
whence
\[ \mathrm{E}Y_\infty^{n} = (1-\mathrm{E}M^{n})^{-1}\sum_{k=0}^{n-1}\binom{n}{k}\,\mathrm{E}M^{k}Q^{n-k}\,\mathrm{E}Y_\infty^{k}. \quad (2.26) \]
Thus, the value of $\mathrm{E}Y_\infty^{n}$ can be recovered recursively provided that we know the values $\mathrm{E}M^{i}Q^{j}$ for $i,j\in\mathbb{N}_0$, $i+j\le n$. According to (2.10), formula (2.26) significantly simplifies in the case that $Y_\infty$ is the exponential functional of a subordinator.
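The recursion (2.26) is easy to run numerically. The sketch below additionally assumes that $M$ and $Q$ are independent (so that $\mathrm{E}M^kQ^j$ factorizes), with $M$ uniform on $(0,1)$ and $Q$ exponential of rate $1$; this pair is an illustrative choice:

```python
from math import comb, factorial

def perpetuity_moments(n_max, em, emq):
    """Moments E[Y^n] from recursion (2.26); em(k) = E M^k, emq(k, j) = E[M^k Q^j]."""
    ey = [1.0]                              # E[Y^0] = 1
    for n in range(1, n_max + 1):
        s = sum(comb(n, k) * emq(k, n - k) * ey[k] for k in range(n))
        ey.append(s / (1.0 - em(n)))
    return ey

# Independent M ~ Uniform(0,1), Q ~ Exp(1): E M^k = 1/(k+1), E Q^j = j!
em = lambda k: 1.0 / (k + 1)
emq = lambda k, j: em(k) * factorial(j)

ey = perpetuity_moments(5, em, emq)
# For this pair the closed form E Y^n = n!(n+1) holds (cf. Example 2.1.8 with mu = 1)
for n in range(1, 6):
    assert abs(ey[n] - factorial(n) * (n + 1)) < 1e-9
```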
Exponential Moments Given any real-valued random variable $Z$, we define
\[ r(Z) := \sup\{r>0: \mathrm{E}e^{r|Z|}<\infty\}, \]
called the abscissa of convergence of the moment generating function of $|Z|$. Note that $\mathrm{E}e^{r(Z)|Z|}$ may be finite or infinite.
Our next two results provide complete information on how $r(Y_\infty)$ relates to $r(Q)$. For convenience we distinguish the cases where $\mathrm{P}\{|M|=1\}=0$ and $\mathrm{P}\{|M|=1\}\in(0,1)$. Recall that if conditions (2.1) and (2.5) hold then the distribution of $Y_\infty$ is nondegenerate if $|Y_\infty|<\infty$ a.s.
Theorem 2.1.7 Suppose that (2.1) and (2.5) hold, that $\mathrm{P}\{|M|=1\}=0$, and let $s>0$. The following assertions are equivalent:
\[ \mathrm{P}\{|M|<1\}=1 \ \text{ and } \ \mathrm{E}e^{s|Q|}<\infty; \quad (2.27) \]
\[ \mathrm{E}e^{s|Y_\infty|}<\infty. \quad (2.28) \]
In particular, if $\mathrm{P}\{|M|<1\}=1$, then $r(Y_\infty)=r(Q)$.

Theorem 2.1.8 Suppose that (2.1) and (2.5) hold, that $\mathrm{P}\{|M|=1\}\in(0,1)$, and let $s>0$. The following assertions are equivalent:
\[ \mathrm{P}\{|M|\le 1\}=1,\ \mathrm{E}e^{s|Q|}<\infty,\ a_{\pm}\in[0,1)\ \text{ and } \ b_{-}b_{+}<(1-a_{-})(1-a_{+}) \quad (2.29) \]
where $a_{\pm}=a_{\pm}(s):=\mathrm{E}e^{\pm sQ}\mathbb{1}_{\{M=1\}}$ and $b_{\pm}=b_{\pm}(s):=\mathrm{E}e^{\pm sQ}\mathbb{1}_{\{M=-1\}}$;
\[ \mathrm{E}e^{s|Y_\infty|}<\infty. \quad (2.30) \]
In particular, if $\mathrm{P}\{|M|\le 1\}=1$ and $\mathrm{P}\{|M|=1\}\in(0,1)$, then $r(Y_\infty)=\min\big(r(Q),\,r^{\ast}(M,Q)\big)$ where
\[ r^{\ast}(M,Q) := \sup\{r>0: b_{-}(r)b_{+}(r)<(1-a_{-}(r))(1-a_{+}(r))\}. \]
Here is an example illustrating the last two theorems.
Example 2.1.8 Let $Q$ be an exponential random variable with parameter $\mu>0$ and $M$ be independent of $Q$ with $\mathrm{P}\{0<M\le 1\}=1$ and $\mathrm{E}M<1$. According to Example 2.1.1,
\[ \mathrm{E}Y_\infty^{n} = \frac{n!}{\mu^{n}(1-\mathrm{E}M)(1-\mathrm{E}M^{2})\cdot\ldots\cdot(1-\mathrm{E}M^{n})},\quad n\in\mathbb{N}. \quad (2.31) \]
Put $a_n:=\mathrm{E}Y_\infty^{n}/n!$ and note that $\lim_{n\to\infty}a_{n+1}^{-1}a_n=\mu\,\mathrm{P}\{M<1\}$. Hence, by the Cauchy–Hadamard formula,
\[ r(Y_\infty) = \big(\limsup_{n\to\infty}a_n^{1/n}\big)^{-1} = \lim_{n\to\infty}a_{n+1}^{-1}a_n = \mu\,\mathrm{P}\{M<1\}. \]
If $\mathrm{P}\{M<1\}=1$, this is in full accordance with Theorem 2.1.7 because $\mathrm{E}e^{rQ}=\mu(\mu-r)^{-1}$ for $r\in(0,\mu)$ whence $r(Q)=\mu$. Suppose $\mathrm{P}\{M<1\}<1$. According to Theorem 2.1.8, $r(Y_\infty)$ is the positive solution to the equation $\mu(\mu-s)^{-1}\mathrm{P}\{M=1\}=1$ and thus indeed equal to $\mu\,\mathrm{P}\{M<1\}$.
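The convergence of the ratio $a_{n+1}^{-1}a_n$ can be seen numerically; the two-point distribution of $M$ below is an illustrative choice:

```python
# Take M with P{M = 1} = 0.3 and P{M = 1/2} = 0.7 and mu = 2,
# so that E M^k = 0.3 + 0.7 * 2**(-k).
mu, p1 = 2.0, 0.3
em = lambda k: p1 + (1.0 - p1) * 0.5 ** k

def a(n):
    """a_n = E Y^n / n! = 1 / (mu^n * prod_{k=1}^n (1 - E M^k)), from (2.31)."""
    prod = 1.0
    for k in range(1, n + 1):
        prod *= mu * (1.0 - em(k))
    return 1.0 / prod

# The ratio a_n / a_{n+1} should converge to mu * P{M < 1} = 1.4
ratios = [a(n) / a(n + 1) for n in (5, 10, 40)]
assert abs(ratios[0] - mu * (1.0 - p1)) < 0.05
assert abs(ratios[-1] - mu * (1.0 - p1)) < 1e-9
```

Indeed $a_n/a_{n+1}=\mu(1-\mathrm{E}M^{n+1})$, which converges geometrically fast here.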

2.1.5 Proofs for Section 2.1.4

Proof of Theorem 2.1.5 (2.20)⇔(2.21) follows from Theorem 1.3.5 after passing to the logarithms. Observe that the condition $\mathrm{P}\{\log|Q|=-\infty\}<1$ is a consequence of (2.1).
Since $|Y_\infty|\le \hat Y_\infty := \sum_{n\ge 1}|\Pi_{n-1}Q_n|$ it remains to prove the implications (2.20)⇒(2.23) and (2.22)⇒(2.20).
(2.20)⇒(2.23). If $0<p\le 1$, just use the subadditivity of $x\mapsto x^{p}$ on $[0,\infty)$ in combination with the independence of $\Pi_{k-1}$ and $Q_k$ for each $k$ to infer
\[ \mathrm{E}\hat Y_\infty^{p} \le \sum_{k\ge 1}\mathrm{E}|\Pi_{k-1}|^{p}\,\mathrm{E}|Q_k|^{p} = \sum_{k\ge 1}(\mathrm{E}|M|^{p})^{k-1}\mathrm{E}|Q|^{p} = (1-\mathrm{E}|M|^{p})^{-1}\mathrm{E}|Q|^{p}<\infty. \]
If $p>1$, a similar inequality holds for $\|\hat Y_\infty\|_p$ where $\|\cdot\|_p$ denotes the usual $L_p$-norm. Namely, by Minkowski's inequality,
\[ \|\hat Y_\infty\|_p \le \sum_{k\ge 1}\|\Pi_{k-1}Q_k\|_p = \sum_{k\ge 1}\|M\|_p^{k-1}\|Q\|_p = (1-\|M\|_p)^{-1}\|Q\|_p<\infty. \]
(2.22)⇒(2.20). Let us start by pointing out that (2.20) is equivalent to
\[ \mathrm{E}|Q_1+M_1Q_2|^{p}<\infty \ \text{ and } \ \mathrm{E}|M_1M_2|^{p}<1 \quad (2.32) \]
which, in the notation introduced right before formula (2.6), is nothing but condition (2.20) for the pair $(\Pi_2,Y_2)$. We only remark concerning the implication (2.32)⇒(2.20) that in the case $p\ge 1$, by Minkowski's inequality,
\[ \|Q_1\mathbb{1}_{\{|Q_1|\le b,\,|Q_2|\le c\}}\|_p \le \|(Q_1+M_1Q_2)\mathbb{1}_{\{|Q_1|\le b,\,|Q_2|\le c\}}\|_p + \|M_1\mathbb{1}_{\{|Q_1|\le b\}}\|_p\,\|Q_2\mathbb{1}_{\{|Q_2|\le c\}}\|_p \]
for all $b,c>0$ and therefore (upon letting $b$ tend to $\infty$ and picking $c$ large enough)
\[ \|Q\|_p \le \frac{\|Q_1+M_1Q_2\|_p + c\,\|M\|_p}{\mathrm{P}\{|Q|\le c\}^{1/p}} <\infty. \]
If $0<p<1$ a similar argument using the subadditivity of $0\le x\mapsto x^{p}$ yields the conclusion. Next, we note that the conditional distribution of $Q_1+M_1Q_2$ given $\Pi_2$ cannot be degenerate, for otherwise either $Q+cM=c$ or $(M_1,Q_1)=(1,c)$ a.s. for some $c\in\mathbb{R}$ by Proposition 1 in [117]. But both alternatives are here impossible, the first by our assumption (2.5), the second by $|Y_\infty|<\infty$ a.s. Let us also mention that $|Y_\infty|<\infty$ a.s. in combination with (2.5) ensures $\lim_{n\to\infty}\Pi_n=0$ a.s. by Theorem 2.1.1.
Put
\[ Q_n^{(2)} := Q_{2n-1}+M_{2n-1}Q_{2n},\quad n\in\mathbb{N} \]
and note that $\big((M_{2n-1}M_{2n},Q_n^{(2)})\big)_{n\in\mathbb{N}}$ are independent copies of $(\Pi_2,Y_2)$. Let $Q_n^{*(2)}$ be a conditional symmetrization of $Q_n^{(2)}$ given $M_{2n-1}M_{2n}$ such that the vectors $\big((M_{2n-1}M_{2n},Q_n^{*(2)})\big)_{n\in\mathbb{N}}$ are also i.i.d. More precisely, $Q_n^{*(2)}=Q_n^{(2)}-\widehat Q_n^{(2)}$ where $\big((M_{2n-1}M_{2n},Q_n^{(2)},\widehat Q_n^{(2)})\big)_{n\in\mathbb{N}}$ are i.i.d. random vectors and $Q_n^{(2)},\widehat Q_n^{(2)}$ are conditionally i.i.d. given $M_{2n-1}M_{2n}$. By what has been pointed out above, the distribution of $Q_n^{(2)}$, and thus also of $Q_n^{*(2)}$, is nondegenerate. Putting $\mathcal{B}_n:=\sigma(M_1,\ldots,M_n)$ for $n\in\mathbb{N}$, we now infer with the help of Lévy's symmetrization inequality (see Corollary 5 on p. 72 in [68])
\[ \mathrm{P}\Big(\max_{1\le k\le n}|\Pi_{2k-2}Q_k^{*(2)}|>x\,\Big|\,\mathcal{B}_{2n}\Big) \le 2\,\mathrm{P}\Big(\Big|\sum_{k=1}^{n}\Pi_{2k-2}Q_k^{*(2)}\Big|>x\,\Big|\,\mathcal{B}_{2n}\Big) \le 4\,\mathrm{P}\Big(\Big|\sum_{k=1}^{n}\Pi_{2k-2}Q_k^{(2)}\Big|>x/2\,\Big|\,\mathcal{B}_{2n}\Big) = 4\,\mathrm{P}\big(|Y_{2n}|>x/2\,\big|\,\mathcal{B}_{2n}\big)\quad\text{a.s.} \]
for all $x>0$ and thus (recalling that the distribution of $Y_\infty$ is continuous in the present situation as pointed out right after Theorem 2.1.2)
\[ \mathrm{P}\Big\{\sup_{k\ge 1}|\Pi_{2k-2}Q_k^{*(2)}|>x\Big\} \le 4\,\mathrm{P}\{|Y_\infty|>x/2\}. \quad (2.33) \]
As a consequence of this in combination with $\mathrm{E}|Y_\infty|^{p}<\infty$ we conclude
\[ \mathrm{E}\sup_{k\ge 1}|\Pi_{2k-2}Q_k^{*(2)}|^{p} \le 2^{p+2}\,\mathrm{E}|Y_\infty|^{p}<\infty. \]
Now put $S_0:=0$ and
\[ S_n := \log|\Pi_{2n}| = \sum_{k=1}^{n}\log|M_{2k-1}M_{2k}| \quad\text{and}\quad \eta_n := \log|Q_n^{*(2)}| \]
for $n\in\mathbb{N}$. Then $(S_n)_{n\in\mathbb{N}}$ forms an ordinary zero-delayed random walk and $\mathrm{P}\{\eta_n=-\infty\}<1$ because the distribution of $Q_n^{*(2)}$ is nondegenerate. With this we see that
\[ \mathrm{E}\sup_{k\ge 1}|\Pi_{2k-2}Q_k^{*(2)}|^{p} = \mathrm{E}\exp\Big(p\sup_{n\ge 0}(S_n+\eta_{n+1})\Big)<\infty. \]
Since the pairs $\big((\log|M_{2n-1}M_{2n}|,\eta_n)\big)_{n\in\mathbb{N}}$ are i.i.d., an application of Theorem 1.3.5 yields $\mathrm{E}e^{pS_1}=\mathrm{E}|M_1M_2|^{p}<1$ which is the second condition in (2.32).

Left with the first half of (2.32), namely $\|Y_2\|_p<\infty$, use (2.6) with $m=2$, rendering $|Y_2|\le |Y_\infty|+|\Pi_2Y_\infty^{(2)}|$ and therefore
\[ \|Y_2\|_p \le \|Y_\infty\|_p\big(1+\|\Pi_2\|_p\big)<\infty \]
in the case $p\ge 1$. The case $0<p<1$ is handled similarly. The proof of Theorem 2.1.5 is complete. □
Proof for Remark 2.1.6 (2.23)⇒(2.24) and (2.23)⇒(2.25). Use $\sup_{n\ge 1}\big|\sum_{k=1}^{n}\Pi_{k-1}Q_k\big|\le \hat Y_\infty=\sum_{n\ge 1}|\Pi_{n-1}Q_n|$ and $\big(\sum_{n\ge 1}\Pi_{n-1}^{2}Q_n^{2}\big)^{1/2}\le \hat Y_\infty$, respectively.
(2.24)⇒(2.22) and (2.25)⇒(2.21). Use $|Y_\infty|\le \sup_{n\ge 1}\big|\sum_{k=1}^{n}\Pi_{k-1}Q_k\big|$ and $\sup_{n\ge 1}|\Pi_{n-1}Q_n|\le \big(\sum_{n\ge 1}\Pi_{n-1}^{2}Q_n^{2}\big)^{1/2}$, respectively. □

Proof of Theorem 2.1.7 Set $\psi(t):=\mathrm{E}e^{tY_\infty}$, $\hat\psi(t):=\mathrm{E}e^{t|Y_\infty|}$, $\varphi(t):=\mathrm{E}e^{tQ}$ and $\hat\varphi(t):=\mathrm{E}e^{t|Q|}$. Note that $\psi(t)\le\hat\psi(s)$ for all $t\in[-s,s]$ and $s>0$, and that $\max(\psi(t),\psi(-t))\le\hat\psi(t)\le\psi(t)+\psi(-t)$ for all $t\ge 0$. As a consequence of (2.7) we have $\psi(t)=\mathrm{E}e^{tQ}\psi(Mt)$ for all $t\in\mathbb{R}$. These facts will be used in several places hereafter.
(2.27)⇒(2.28). The almost sure finiteness of $Y_\infty$ follows from Proposition 2.1.1. We have to check that $r(Y_\infty)\ge r(Q)$. To this end, we fix an arbitrary $s\in(0,r(Q))$ and divide the subsequent proof into two steps.
Step 1 Assume first that $|M|\le\beta<1$ a.s. for some $\beta>0$. Since the function $\hat\varphi$ is convex and differentiable on $[0,\beta s]$, its derivative is nondecreasing on that interval. Therefore, for each $k\in\mathbb{N}\setminus\{1\}$, there exists $\theta_k\in[0,\beta^{k-1}s]$ such that
\[ 0 \le \hat\varphi(\beta^{k-1}s)-1 = \hat\varphi'(\theta_k)\beta^{k-1}s \le \hat\varphi'(\beta s)\beta^{k-1}s. \]
With this at hand $r(Y_\infty)\ge r(Q)$ follows from
\[ \hat\psi(s) \le \mathrm{E}\exp\Big(s\sum_{k\ge 1}|\Pi_{k-1}Q_k|\Big) \le \mathrm{E}\exp\Big(s\sum_{k\ge 1}\beta^{k-1}|Q_k|\Big) = \prod_{k\ge 1}\hat\varphi(\beta^{k-1}s) \le \hat\varphi(s)\exp\Big(\sum_{k\ge 2}\big(\hat\varphi(\beta^{k-1}s)-1\big)\Big) \le \hat\varphi(s)\exp\big(\hat\varphi'(\beta s)\beta s(1-\beta)^{-1}\big)<\infty. \]

Step 2 Consider now the general case. Since $\mathrm{P}\{|M|=1\}=0$, we can choose $\beta\in(0,1)$ such that
\[ \mathrm{P}\{|M|>\beta\}<1 \quad\text{and}\quad \gamma := \mathrm{E}e^{s|Q|}\mathbb{1}_{\{|M|>\beta\}}<1. \]
Define the a.s. finite stopping times
\[ T_0:=0,\quad T_k:=\inf\{n>T_{k-1}: |M_n|\le\beta\},\quad k\in\mathbb{N}. \]
We have $Y_\infty=Q_1^{*}+\sum_{k\ge 1}M_1^{*}\cdot\ldots\cdot M_k^{*}Q_{k+1}^{*}$ where, for $k\in\mathbb{N}$,
\[ M_k^{*} := M_{T_{k-1}+1}\cdot\ldots\cdot M_{T_k} \quad (2.34) \]
and
\[ Q_k^{*} := Q_{T_{k-1}+1}+M_{T_{k-1}+1}Q_{T_{k-1}+2}+\ldots+M_{T_{k-1}+1}\cdot\ldots\cdot M_{T_k-1}Q_{T_k}, \quad (2.35) \]
so that $(M_k^{*},Q_k^{*})$ are independent copies of
\[ (M^{*},Q^{*}) := \Big(\Pi_{T_1},\ Q_1+\sum_{k=1}^{T_1-1}\Pi_kQ_{k+1}\Big). \]
Since $|M^{*}|\le\beta$ a.s., Step 1 of the proof provides the desired conclusion if we still verify that $\hat\varphi(s)<\infty$ implies $\mathrm{E}e^{s|Q^{*}|}<\infty$. This is checked as follows:
\[ \mathrm{E}e^{s|Q^{*}|} \le \mathrm{E}e^{s(|Q_1|+\ldots+|Q_{T_1}|)} = \sum_{n\ge 1}\mathrm{E}e^{s(|Q_1|+\ldots+|Q_n|)}\mathbb{1}_{\{T_1=n\}} = \sum_{n\ge 1}\mathrm{E}\Big[e^{s|Q_n|}\mathbb{1}_{\{|M_n|\le\beta\}}\prod_{k=1}^{n-1}e^{s|Q_k|}\mathbb{1}_{\{|M_k|>\beta\}}\Big] = \sum_{n\ge 1}\mathrm{E}e^{s|Q|}\mathbb{1}_{\{|M|\le\beta\}}\,\gamma^{n-1} \le \hat\varphi(s)(1-\gamma)^{-1}<\infty. \]
(2.28)⇒(2.27). If $\mathrm{E}e^{s|Y_\infty|}<\infty$, we have $\mathrm{E}|Y_\infty|^{p}<\infty$ and therefore, by Theorem 2.1.5, $\mathrm{E}|M|^{p}<1$ for all $p>0$. The latter in combination with $\mathrm{P}\{|M|=1\}=0$ implies $|M|<1$ a.s. Finally, if $\hat\psi(s)<\infty$ and $c:=\min_{|t|\le s}\psi(t)$ (clearly $>0$), then
\[ \infty > \psi(t) = \mathrm{E}e^{tQ}\psi(Mt) \ge c\,\varphi(t),\quad t\in\{-s,s\}, \quad (2.36) \]
and thus $\hat\varphi(s)\le\varphi(s)+\varphi(-s)<\infty$. This shows $r(Y_\infty)\le r(Q)$. The proof of Theorem 2.1.7 is complete. □
Proof of Theorem 2.1.8 (2.30)⇒(2.29). By the same argument as in the proof of the implication (2.28)⇒(2.27), we infer $\mathrm{E}|M|^{p}<1$ for all $p>0$ and thereby $|M|\le 1$ a.s. Moreover, as $\hat\psi(s)<\infty$, inequality (2.36) holds here as well and gives $\hat\varphi(s)<\infty$. It thus remains to prove the last two inequalities in (2.29). While doing so we proceed in two steps.
Step 1 Suppose first that $\mathrm{P}\{M=-1\}=0$ in which case $b_{\pm}=0$. We have
\[ \psi(s) = \mathrm{E}e^{sQ}\psi(Ms)\mathbb{1}_{\{|M|<1\}} + \psi(s)\,\mathrm{E}e^{sQ}\mathbb{1}_{\{M=1\}} \quad\text{and}\quad \psi(-s) = \mathrm{E}e^{-sQ}\psi(-Ms)\mathbb{1}_{\{|M|<1\}} + \psi(-s)\,\mathrm{E}e^{-sQ}\mathbb{1}_{\{M=1\}} \]
which together with $\mathrm{E}e^{\pm sQ}\psi(\pm Ms)\mathbb{1}_{\{|M|<1\}}>0$ (as $\mathrm{P}\{|M|<1\}>0$) implies
\[ \mathrm{E}e^{\pm sQ}\mathbb{1}_{\{M=1\}} = a_{\pm} < 1. \]
Thus, the last inequality in (2.29), which reads $(1-a_{-})(1-a_{+})>0$ in view of $b_{\pm}=0$, holds.
Step 2 Assuming now $\mathrm{P}\{M=-1\}>0$, let $\big((M_k^{*},Q_k^{*})\big)_{k\in\mathbb{N}}$ be defined as in (2.34) and (2.35), but with
\[ T_0:=0,\quad T_k:=\inf\{n>T_{k-1}: M_{T_{k-1}+1}\cdot\ldots\cdot M_n>-1\},\quad k\in\mathbb{N}. \]
Then $\mathrm{P}\{M^{*}=-1\}=0$, and we infer from Step 1 that $\mathrm{E}e^{\pm sQ^{*}}\mathbb{1}_{\{M^{*}=1\}}<1$. But
\[ e^{\pm sQ_1^{*}}\mathbb{1}_{\{M_1^{*}=1\}} = e^{\pm sQ_1}\mathbb{1}_{\{M_1=1\}} + e^{\pm s(Q_1-Q_2)}\mathbb{1}_{\{M_1=M_2=-1\}} + \sum_{n\ge 3}e^{\pm s(Q_1-Q_2-\ldots-Q_n)}\mathbb{1}_{\{M_1=-1,\,M_2=\ldots=M_{n-1}=1,\,M_n=-1\}} \]
implies
\[ 1 > \mathrm{E}e^{\pm sQ^{*}}\mathbb{1}_{\{M^{*}=1\}} = a_{\pm} + b_{\pm}\sum_{n\ge 0}a_{\mp}^{n}b_{\mp} \]
and thereupon $a_{\pm}\in[0,1)$. The right-hand side of the last expression equals $a_{\pm}+\dfrac{b_{\pm}b_{\mp}}{1-a_{\mp}}$ which gives the last inequality in (2.29).
(2.29)⇒(2.30). Let $\big((M_k^{*},Q_k^{*})\big)_{k\in\mathbb{N}}$ be as defined in Step 2 of the present proof. Assuming (2.29) we thus have
\[ a_{\pm}^{*} := \mathrm{E}e^{\pm sQ^{*}}\mathbb{1}_{\{M^{*}=1\}} = a_{\pm} + \frac{b_{\pm}b_{\mp}}{1-a_{\mp}} < 1. \]
Using this and
\[ e^{\pm sQ_1^{*}} = e^{\pm sQ_1}\mathbb{1}_{\{M_1>-1\}} + e^{\pm s(Q_1-Q_2)}\mathbb{1}_{\{M_1=-1,\,M_2>-1\}} + \sum_{n\ge 3}e^{\pm s(Q_1-Q_2-\ldots-Q_n)}\mathbb{1}_{\{M_1=-1,\,M_2=\ldots=M_{n-1}=1,\,M_n>-1\}} \]
we further obtain that
\[ \mathrm{E}e^{\pm sQ^{*}} = \mathrm{E}e^{\pm sQ}\mathbb{1}_{\{M>-1\}} + \frac{b_{\pm}}{1-a_{\mp}}\,\mathrm{E}e^{\mp sQ}\mathbb{1}_{\{M>-1\}} < \infty. \]
Now let
\[ \widehat T_0:=0,\quad \widehat T_k:=\inf\{n>\widehat T_{k-1}: M_n^{*}<1\},\quad k\in\mathbb{N} \]
and then $\big((\widehat M_k,\widehat Q_k)\big)_{k\in\mathbb{N}}$ in accordance with (2.34) and (2.35) for these stopping times. We claim that $\mathrm{E}e^{\pm s\widehat Q}<\infty$ and thus $\mathrm{E}e^{s|\widehat Q|}<\infty$. Indeed,
\[ \mathrm{E}e^{\pm s\widehat Q} = \sum_{n\ge 1}\mathrm{E}e^{\pm s(Q_1^{*}+M_1^{*}Q_2^{*}+\ldots+M_1^{*}\cdot\ldots\cdot M_{n-1}^{*}Q_n^{*})}\mathbb{1}_{\{M_1^{*}=\ldots=M_{n-1}^{*}=1,\,M_n^{*}<1\}} = \sum_{n\ge 1}\mathrm{E}e^{\pm sQ^{*}}\mathbb{1}_{\{M^{*}<1\}}\,(a_{\pm}^{*})^{n-1} = \frac{\mathrm{E}e^{\pm sQ^{*}}\mathbb{1}_{\{M^{*}<1\}}}{1-a_{\pm}^{*}} < \infty. \]
So we have $\mathrm{P}\{|\widehat M|=1\}=0$ and $\mathrm{E}e^{s|\widehat Q|}<\infty$ and may thus invoke Theorem 2.1.7 to finally conclude $\mathrm{E}e^{s|Y_\infty|}<\infty$ because $Y_\infty$ is also the perpetuity generated by $(\widehat M,\widehat Q)$. The proof of Theorem 2.1.8 is complete. □

2.2 Weak Convergence of Divergent Perpetuities

In this section we are interested in the case when conditions (2.1), (2.5), and $\mathrm{E}J(\log^{+}|Q|)=\infty$ hold. Furthermore, we assume that the right tail of the distribution of $\log|M|$ is lighter than that of $\log^{+}|Q|$. Since the second condition in (2.2) is violated, Theorem 2.1.1 tells us that $(Y_n)$ is a divergent perpetuity in the sense that $|Y_n|\overset{\mathrm{P}}{\to}\infty$ as $n\to\infty$. Of course, $|X_n|$ diverges in probability too whenever $X_0=0$ a.s. Our purpose is to prove functional limit theorems for the Markov chains $(X_n)$ and for the divergent perpetuities $(Y_n)$ under the aforementioned assumptions.
We briefly recall the notation introduced in Section 1.3.3 (see also 'List of Notation'). For positive $a$ and $b$, $N^{(a,b)}:=\sum_k\varepsilon_{(t_k^{(a,b)},\,j_k^{(a,b)})}$ denotes a Poisson random measure on $[0,\infty)\times(0,\infty]$ with mean measure $\mathrm{LEB}\times\mu_{a,b}$ where $\mu_{a,b}$ is a measure on $(0,\infty]$ defined by
\[ \mu_{a,b}\big((x,\infty]\big) = ax^{-b},\quad x>0. \]
We write $\Rightarrow$ to denote weak convergence in the Skorokhod space $D=D[0,\infty)$ endowed with the $J_1$-topology, and we set $\sup\emptyset=0$.
Theorem 2.2.1 treats the situation in which both $M_k$'s and $Q_k$'s affect the limit behavior of the processes in question (except in the less interesting case $\mu=0$), whereas in the situation of Theorem 2.2.5 only the contribution of the $Q_k$'s persists in the limit.
Theorem 2.2.1 Assume that
\[ \mathrm{E}\log|M| = \mu \in (-\infty,+\infty) \quad (2.37) \]
and that
\[ \lim_{t\to\infty} t\,\mathrm{P}\{\log|Q|>t\} = c \quad (2.38) \]
for some $c>0$. Then, as $n\to\infty$,
\[ \frac{\log^{+}|Y_{[n\cdot]+1}|}{n} \Rightarrow \sup_{t_k^{(c,1)}\le\,\cdot}\big(\mu t_k^{(c,1)}+j_k^{(c,1)}\big). \quad (2.39) \]
Also,
\[ \frac{\log^{+}|X_{[n\cdot]+1}|}{n} \Rightarrow \mu\Upsilon(\cdot) + \sup_{t_k^{(c,1)}\le\,\cdot}\big(-\mu t_k^{(c,1)}+j_k^{(c,1)}\big) \quad (2.40) \]
provided that $X_0=0$ a.s. in the cases $\mu\ne 0$, where $\Upsilon(t)=t$ for $t\ge 0$.
Remark 2.2.2 According to Theorem 2.2.1, $(Y_n)$ is a divergent perpetuity under (2.37) and (2.38). Let us check this directly invoking Theorem 2.1.1. Since (2.37) and (2.38) clearly imply $\mathrm{E}J(\log^{+}|Q|)=\infty$ we have to show that conditions (2.1) and (2.5) of Theorem 2.1.1 hold. The conditions $\mathrm{P}\{M=0\}=0$ and $\mathrm{P}\{Q=0\}<1$ follow from (2.37) and (2.38), respectively. Suppose $Q+Mr=r$ a.s. for some $r\in\mathbb{R}$. In view of $\mathrm{P}\{Q=0\}<1$ we have $r\ne 0$ and then $|Q|=|r||1-M|\le|r|(1+|M|)$ a.s. Since $\mathrm{E}\log(1+|M|)<\infty$ by (2.37) we must have $\mathrm{E}\log^{+}|Q|<\infty$ which contradicts (2.38). Hence (2.5) holds.
Remark 2.2.3 The marginal distribution of the right-hand side of (2.39) is given by (1.40). The marginal distribution of the right-hand side of (2.40) is the same. Indeed, while this is obvious in the case $\mu=0$ just because the right-hand sides of (2.39) and (2.40) coincide, in the cases $\mu\ne 0$ this is a consequence of $X_n\overset{d}{=}Y_n$ which holds in view of the assumption $X_0=0$ a.s.
Remark 2.2.4 Comparing formulae (1.39) of Theorem 1.3.15 and (2.39) of Theorem 2.2.1 reveals that, under (2.37) and (2.38), $\log^{+}|Y_{[n\cdot]+1}|$ exhibits the same asymptotic behavior as $\max_{0\le k\le[n\cdot]}(\log|\Pi_k|+\log|Q_{k+1}|)$ (the maximum of the perturbed random walk generated by $(\log|M|,\log|Q|)$). When $MQ>0$ a.s., much more can be said: (2.39) is an almost immediate consequence of (1.39). Assume, for instance, that $M$ and $Q$ are positive a.s. Since in view of (1.40) the right-hand side of (1.39) is a.s. nonnegative, (1.39) in combination with the continuous mapping theorem entails
\[ \frac{\big(\max_{0\le k\le[n\cdot]}(\log\Pi_k+\log Q_{k+1})\big)^{+}}{n} = \frac{\log^{+}\big(\max_{0\le k\le[n\cdot]}(\Pi_kQ_{k+1})\big)}{n} \Rightarrow \sup_{t_k^{(c,1)}\le\,\cdot}\big(\mu t_k^{(c,1)}+j_k^{(c,1)}\big). \]
Using this limit relation together with the inequality
\[ \log^{+}\Big(\max_{0\le k\le n}(\Pi_kQ_{k+1})\Big) \le \log^{+}\Big(\sum_{k=0}^{n}\Pi_kQ_{k+1}\Big) \le \log(n+1)+\log^{+}\Big(\max_{0\le k\le n}(\Pi_kQ_{k+1})\Big) \]
we arrive at (2.39). We do not know whether (2.39) can be directly deduced from (1.39) in the situation when $M$ and $Q$ take values of both signs. Our proof of (2.39), which is essentially independent of (1.39), rests implicitly on the following simple analytic fact. For $p\in\mathbb{N}$, let $x_1,\ldots,x_p$ be distinct positive numbers and $\lim_{n\to\infty}c_n=\infty$. Then
\[ \lim_{n\to\infty}\Big|\sum_{k=1}^{p}\pm x_k^{c_n}\Big|^{1/c_n} = \max_{1\le k\le p}x_k \]
where the signs $\pm$ are arbitrarily arranged.


Theorem 2.2.5 Suppose that $\mathrm{P}\{M=0\}=0$ and that
\[ \mathrm{P}\{\log|Q|>t\} \sim t^{-\alpha}\ell(t),\quad t\to\infty \quad (2.41) \]
for some $\alpha\in(0,1]$ and some $\ell$ slowly varying at $\infty$. Let $(b_n)$ be a positive sequence which satisfies $\lim_{n\to\infty}n\,\mathrm{P}\{\log|Q|>b_n\}=1$. In the case $\alpha=1$ assume additionally that $\lim_{t\to\infty}\ell(t)=+\infty$ (among other things this implies $\mathrm{E}\log^{+}|Q|=\infty$).
If $\liminf_{n\to\infty}|\Pi_n|=0$ a.s. and $\mathrm{E}\log^{-}|M|=\infty$ assume that
\[ \lim_{t\to\infty}\frac{\mathrm{E}\big(\log^{-}|M|\wedge t\big)}{t\,\mathrm{P}\{\log|Q|>t\}} = 0. \quad (2.42) \]
If $\limsup_{n\to\infty}|\Pi_n|=\infty$ a.s. and $\mathrm{E}\log^{+}|M|=\infty$ assume that
\[ \lim_{t\to\infty}\frac{\mathrm{E}\big(\log^{+}|M|\wedge t\big)}{t\,\mathrm{P}\{\log|Q|>t\}} = 0. \quad (2.43) \]
Then
\[ \frac{\log^{+}|Y_{[n\cdot]+1}|}{b_n} \Rightarrow \sup_{t_k^{(1,\alpha)}\le\,\cdot}j_k^{(1,\alpha)},\quad n\to\infty, \quad (2.44) \]
and
\[ \frac{\log^{+}|X_{[n\cdot]+1}|}{b_n} \Rightarrow \sup_{t_k^{(1,\alpha)}\le\,\cdot}j_k^{(1,\alpha)},\quad n\to\infty. \quad (2.45) \]

Remark 2.2.6 The marginal distribution of the right-hand side of (2.44) is given by
\[ \mathrm{P}\Big\{\sup_{t_k^{(1,\alpha)}\le u}j_k^{(1,\alpha)}\le x\Big\} = \exp(-ux^{-\alpha}),\quad x\ge 0 \]
for $u>0$. Indeed, for $c>0$, $u>0$ and $x\ge 0$,
\[ \mathrm{P}\Big\{\sup_{t_k^{(c,\alpha)}\le u}j_k^{(c,\alpha)}\le x\Big\} = \mathrm{P}\big\{N^{(c,\alpha)}\big(\{(t,y): t\le u,\ y>x\}\big)=0\big\} = \exp\Big(-\mathrm{E}N^{(c,\alpha)}\big(\{(t,y): t\le u,\ y>x\}\big)\Big) = \exp(-ucx^{-\alpha}). \]
2.3 Proofs for Section 2.2

Denote by $M_p^{+}$ the set of Radon point measures $\nu$ on $[0,\infty)\times(0,\infty]$ which satisfy
\[ \nu\big([0,T]\times[\delta,\infty]\big) < \infty \quad (2.46) \]
for all $\delta>0$ and all $T>0$. The space $M_p^{+}$ is endowed with the vague topology. Denote by $M_p$ the set of $\nu\in M_p^{+}$ which satisfy
\[ \nu\big([0,T]\times(0,\infty]\big) < \infty \]
for all $T>0$. Define the mapping $F$ from $D\times M_p^{+}$ to $D$ by
\[ F(f,\nu)(t) := \begin{cases} \sup_{k:\tau_k\le t}\big(f(\tau_k)+y_k\big), & \text{if } \tau_k\le t \text{ for some } k,\\ f(0), & \text{otherwise} \end{cases} \]
where $\nu=\sum_k\varepsilon_{(\tau_k,y_k)}$. Note that $F$ is almost the same as in Section 1.3.4, the only difference being that its domain there is larger. Also, for each $n\in\mathbb{N}$, we define the mapping $G_n$ from $D\times M_p$ to $D$ by
\[ G_n(f,\nu)(t) := \begin{cases} c_n^{-1}\log^{+}\big|\sum_{k:\tau_k\le t}\pm\exp\big(c_n(f(\tau_k)+y_k)\big)\big|, & \text{if } \tau_k\le t \text{ for some } k,\\ f(0)^{+}, & \text{otherwise} \end{cases} \]
where the signs $+$ and $-$ are arbitrarily arranged, and $(c_n)$ is some sequence of positive numbers. The definition of $G_n$ in the case of empty sum stems from the fact that we define $\big|\sum_{k:\tau_k\le t}\pm\exp(c_n(f(\tau_k)+y_k))\big| := \exp(c_nf(0))$ if there is no $k$ such that $\tau_k\le t$.
C
Theorem 2.3.1 For $n\in\mathbb{N}$, let $f_n\in D$ and $\mu_n\in M_p$. Let $\big(t_k^{(n)}, y_k^{(n)}\big)$ be the points of $\mu_n$, i.e., $\mu_n = \sum_k \varepsilon_{(t_k^{(n)},\,y_k^{(n)})}$. Assume that $f_0$ is continuous with $f_0(0)\le 0$ and

(A1) $\mu_0\big(\{0\}\times(0,\infty]\big) = 0$ and $\mu_0\big((r_1,r_2)\times(0,\infty]\big)\ge 1$ for all positive $r_1$ and $r_2$ such that $r_1<r_2$;
(A2) $\mu_0 = \sum_k \varepsilon_{(t_k^{(0)},\,y_k^{(0)})}$ does not have clustered jumps, i.e., $t_k^{(0)}\ne t_j^{(0)}$ for $k\ne j$;
(A3) if not all the signs under the sum defining $G_n$ are the same, then

$$f_0\big(t_i^{(0)}\big) + y_i^{(0)} \ne f_0\big(t_j^{(0)}\big) + y_j^{(0)} \quad\text{for } i\ne j \tag{2.47}$$

and

$$F(f_0,\mu_0)(T) = \sup_{t_k^{(0)}\le T}\big(f_0\big(t_k^{(0)}\big) + y_k^{(0)}\big) > 0 \tag{2.48}$$

for each $T>0$ such that $\mu_0\big(\{T\}\times(0,\infty]\big) = 0$;
(A4) $\lim_{n\to\infty} c_n = \infty$ and²

$$\lim_{n\to\infty} c_n^{-1}\log \#\big\{k: t_k^{(n)}\le T\big\} = 0 \tag{2.49}$$

for each $T>0$ such that $\mu_0\big(\{T\}\times(0,\infty]\big) = 0$;
(A5) $\lim_{n\to\infty} f_n = f_0$ on $D$ in the $J_1$-topology;
(A6) $\lim_{n\to\infty} \mu_n = \mu_0$ on $M_p^+$.

² Condition (A6) together with the second part of (A1) ensures that $\#\{k: t_k^{(n)}\le T\}\ge 1$.

Then

$$\lim_{n\to\infty} G_n(f_n,\mu_n) = F(f_0,\mu_0) \tag{2.50}$$

on $D$ in the $J_1$-topology.
For the proof of Theorems 2.2.1, 2.2.5, and 2.3.1 we need three inequalities:

$$\log^+ x - \log^- y \le \log^+(xy) \le \log^+ x + \log^+ y \tag{2.51}$$

for $x,y\ge 0$;

$$\log^+(|x|) - \log^+(|y|) - 2\log 2 \le \log^+(|x+y|) \le \log^+(|x|) + \log^+(|y|) + 2\log 2 \tag{2.52}$$

for $x,y\in\mathbb{R}$;

$$\big|\log^+|x| - \log^+|y|\big| \le \log^+(|x-y|) + 2\log 2 \tag{2.53}$$

for $x,y\in\mathbb{R}$. Inequality (2.51) is a consequence of the subadditivity of $x\mapsto x^+$. The right-hand inequality in (2.52) follows from

$$\log^+(|x|) \le \log(1+|x|) \le \log^+(|x|) + \log 2, \qquad x\in\mathbb{R}$$

and the subadditivity of $x\mapsto \log(1+|x|)$, namely,

$$\log^+(|x+y|) \le \log(1+|x+y|) \le \log(1+|x|) + \log(1+|y|) \le \log^+(|x|) + \log^+(|y|) + 2\log 2, \qquad x,y\in\mathbb{R}.$$

Using the already proved right-hand inequality with $x=u+v$ and $y=-v$ yields

$$\log^+(|u|) \le \log^+(|u+v|) + \log^+(|v|) + 2\log 2,$$

the left-hand inequality in (2.52). Inequality (2.53) is just another representation of (2.52).
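As a quick sanity check (our own illustration, not part of the book), the three inequalities can be verified numerically on random inputs, with $\log^+ z = \max(\log z, 0)$ and $\log^- z = \max(-\log z, 0)$:

```python
import math
import random

random.seed(1)
logp = lambda z: max(math.log(z), 0.0) if z > 0 else 0.0        # log^+ (log^+ 0 := 0)
logm = lambda z: max(-math.log(z), 0.0) if z > 0 else math.inf  # log^-
TOL = 1e-12

for _ in range(10000):
    x, y = random.expovariate(0.5), random.expovariate(0.5)     # x, y >= 0
    assert logp(x) - logm(y) <= logp(x * y) + TOL               # (2.51), left
    assert logp(x * y) <= logp(x) + logp(y) + TOL               # (2.51), right
    a, b = random.gauss(0, 5), random.gauss(0, 5)               # a, b real
    assert logp(abs(a)) - logp(abs(b)) - 2 * math.log(2) <= logp(abs(a + b)) + TOL  # (2.52), left
    assert logp(abs(a + b)) <= logp(abs(a)) + logp(abs(b)) + 2 * math.log(2) + TOL  # (2.52), right
    assert abs(logp(abs(a)) - logp(abs(b))) <= logp(abs(a - b)) + 2 * math.log(2) + TOL  # (2.53)

print("inequalities (2.51)-(2.53) hold on the random sample")
```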
Proof of Theorem 2.3.1 It suffices to prove convergence (2.50) on $D[0,T]$ for any $T>0$ such that $\mu_0(\{T\}\times(0,\infty]) = 0$ because the last condition ensures that $F(f_0,\mu_0)$ is continuous at $T$.

If all the signs under the sum defining $G_n$ are the same, then

$$F(f_n,\mu_n)(t) \le G_n(f_n,\mu_n)(t) \le c_n^{-1}\log^+\#\big\{k: t_k^{(n)}\le t\big\} + F(f_n,\mu_n)(t)$$

for all $t\in[0,T]$. In this case, (2.50) is a trivial consequence of Theorem 1.3.17 which treats the convergence $\lim_{n\to\infty} F(f_n,\mu_n) = F(f_0,\mu_0)$ on $D$.
In what follows we thus assume that not all the signs are the same. Let $\gamma = \{0 = s_0 < s_1 < \ldots < s_m = T\}$ be a partition of $[0,T]$ such that

$$\mu_0\big(\{s_k\}\times(0,\infty]\big) = 0, \qquad k = 1,\ldots,m.$$

Pick now $\varepsilon>0$ so small that

$$\mu_0\big((s_k,s_{k+1})\times(\varepsilon,\infty]\big) \ge 1, \qquad k = 0,\ldots,m-1 \tag{2.54}$$

and that $\sup_{t_k^{(0)}\le T,\ y_k^{(0)}>\varepsilon}\big(f_0(t_k^{(0)}) + y_k^{(0)}\big) > 0$. The latter is possible in view of (2.48).

Condition (A6) implies that $\mu_0\big([0,T]\times(\varepsilon,\infty]\big) = \mu_n\big([0,T]\times(\varepsilon,\infty]\big) = p$ for large enough $n$ and some $p\ge 1$. Denote by $(\bar t_i, \bar y_i)_{1\le i\le p}$ an enumeration of the points of $\mu_0$ in $[0,T]\times(\varepsilon,\infty]$ with $\bar t_1 < \bar t_2 < \ldots < \bar t_p$ and by $(\bar t_i^{(n)}, \bar y_i^{(n)})_{1\le i\le p}$ the analogous enumeration of the points of $\mu_n$ in $[0,T]\times(\varepsilon,\infty]$. Then

$$\lim_{n\to\infty} \sum_{i=1}^p \big(|\bar t_i^{(n)} - \bar t_i| + |\bar y_i^{(n)} - \bar y_i|\big) = 0$$

and more importantly

$$\lim_{n\to\infty} \sum_{i=1}^p \big(|f_n(\bar t_i^{(n)}) - f_0(\bar t_i)| + |\bar y_i^{(n)} - \bar y_i|\big) = 0 \tag{2.55}$$

because (A5) and the continuity of $f_0$ imply that $\lim_{n\to\infty} f_n = f_0$ uniformly on $[0,T]$.

Define $\lambda_n$ to be continuous and strictly increasing functions on $[0,T]$ with $\lambda_n(0) = 0$, $\lambda_n(T) = T$, $\lambda_n(\bar t_i^{(n)}) = \bar t_i$ for $i = 1,\ldots,p$, and let $\lambda_n$ be linearly interpolated elsewhere on $[0,T]$. For $n\in\mathbb{N}$ and $t\in[0,T]$, set

$$V_n(t) := \sum_{\bar t_i = \lambda_n(\bar t_i^{(n)})\le t} \pm\exp\big(c_n(f_n(\bar t_i^{(n)}) + \bar y_i^{(n)})\big)$$

and

$$W_n(t) := \sum_{\lambda_n(t_k^{(n)})\le t} \pm\exp\big(c_n(f_n(t_k^{(n)}) + y_k^{(n)})\big) - V_n(t).$$

With this at hand we have

$$\begin{aligned}
d_T\big(G_n(f_n,\mu_n), F(f_0,\mu_0)\big) &\le \sup_{t\in[0,T]} |\lambda_n(t) - t| \\
&\quad + c_n^{-1}\sup_{t\in[0,T]} \big|\log^+|W_n(t) + V_n(t)| - \log^+|V_n(t)|\big| \\
&\quad + \sup_{t\in[0,T]} \Big|c_n^{-1}\log^+|V_n(t)| - \sup_{\bar t_i\le t}\big(f_0(\bar t_i) + \bar y_i\big)\Big| \\
&\quad + \sup_{t\in[0,T]} \Big|\sup_{\bar t_i\le t}\big(f_0(\bar t_i) + \bar y_i\big) - \sup_{t_k^{(0)}\le t}\big(f_0(t_k^{(0)}) + y_k^{(0)}\big)\Big|
\end{aligned} \tag{2.56}$$

where $d_T$ is the standard Skorokhod metric on $D[0,T]$.

We treat the terms on the right-hand side of (2.56) separately.
1st Term The relation $\lim_{n\to\infty}\sup_{t\in[0,T]}|\lambda_n(t) - t| = 0$ is easily checked.

2nd Term We denote the second term by $I_n(\varepsilon)$ and use inequality (2.53) which yields

$$\begin{aligned}
I_n(\varepsilon) &\le 2\log 2\, c_n^{-1} + c_n^{-1}\sup_{t\in[0,T]}\log^+|W_n(t)| \\
&\le c_n^{-1}\log^+\Big(\sum_{\lambda_n(t_k^{(n)})\le T,\ t_k^{(n)}\ne \bar t_i^{(n)}} \exp\big(c_n(f_n(t_k^{(n)}) + y_k^{(n)})\big)\Big) + 2\log 2\, c_n^{-1} \\
&\le c_n^{-1}\log^+\Big(\#\big\{k: \lambda_n(t_k^{(n)})\le T,\ t_k^{(n)}\ne \bar t_i^{(n)}\big\}\ \sup_{t_k^{(n)}\le T,\ t_k^{(n)}\ne \bar t_i^{(n)}}\exp\big(c_n(f_n(t_k^{(n)}) + y_k^{(n)})\big)\Big) + 2\log 2\, c_n^{-1} \\
&\le c_n^{-1}\log^+\#\big\{k: t_k^{(n)}\le T\big\} + \sup_{t_k^{(n)}\le T,\ t_k^{(n)}\ne \bar t_i^{(n)}}\big(f_n(t_k^{(n)}) + y_k^{(n)}\big)^+ + 2\log 2\, c_n^{-1}.
\end{aligned} \tag{2.57}$$

For the last inequality we have used (2.51) and the fact that $\lambda_n(t_k^{(n)})\le T$ if, and only if, $t_k^{(n)}\le T$. The first term on the right-hand side of (2.57) converges to zero in view of (2.49). As for the second, we apply Theorem 1.3.17 to infer

$$\sup_{t_k^{(n)}\le T,\ t_k^{(n)}\ne \bar t_i^{(n)}}\big(f_n(t_k^{(n)}) + y_k^{(n)}\big)^+ = \sup_{t_k^{(n)}\le T,\ y_k^{(n)}\le\varepsilon}\big(f_n(t_k^{(n)}) + y_k^{(n)}\big)^+ \;\to\; \sup_{t_k^{(0)}\le T,\ y_k^{(0)}\le\varepsilon}\big(f_0(t_k^{(0)}) + y_k^{(0)}\big)^+$$

as $n\to\infty$. The latter goes to zero as $\varepsilon\to 0$ because $f_0^+(0) = 0$ by assumption. Thus, we have proved that $\lim_{\varepsilon\to 0}\limsup_{n\to\infty} I_n(\varepsilon) = 0$.
3rd Term Set $A_n(t) := \big|c_n^{-1}\log|V_n(t)| - \sup_{\bar t_i\le t}(f_0(\bar t_i) + \bar y_i)\big|$, $t\in[0,T]$.

If $t\in[0,\bar t_1)$, then $A_n(t) = |f_n(0) - f_0(0)| \to 0$ as $n\to\infty$ by the definition of the mappings $F$ and $G_n$. Let now $t\in[\bar t_k, \bar t_{k+1})$, $k = 1,\ldots,p-1$ or $t\in[\bar t_p, T]$. Since all $\exp(f_0(\bar t_1)+\bar y_1), \ldots, \exp(f_0(\bar t_k)+\bar y_k)$ are distinct by (2.47) and

$$\lim_{n\to\infty} \exp\big(f_n(\bar t_j^{(n)}) + \bar y_j^{(n)}\big) = \exp\big(f_0(\bar t_j) + \bar y_j\big), \qquad j = 1,\ldots,k$$

by (2.55), we conclude that $\exp(f_n(\bar t_1^{(n)})+\bar y_1^{(n)}), \ldots, \exp(f_n(\bar t_k^{(n)})+\bar y_k^{(n)})$ are all distinct for large enough $n$. Denote by $a_{k,n} < \ldots < a_{1,n}$ their increasing rearrangement³ and put

$$B_n(t) := c_n^{-1}\log\Big|1 \pm \Big(\frac{a_{2,n}}{a_{1,n}}\Big)^{c_n} \pm \ldots \pm \Big(\frac{a_{k,n}}{a_{1,n}}\Big)^{c_n}\Big|.$$

³ Although the $a_{j,n}$'s depend on $t$ we suppress this dependence for the sake of clarity.

Since $\lim_{n\to\infty}\big(\pm(a_{2,n}/a_{1,n})^{c_n} \pm \ldots \pm (a_{k,n}/a_{1,n})^{c_n}\big) = 0$, there is an $N_k$ such that

$$|B_n(t)| \le c_n^{-1} \qquad\text{for } n\ge N_k.$$

Summarizing we have

$$\sup_{t\in[0,T]} |B_n(t)| \le c_n^{-1} \qquad\text{for all } n\ge \max(N_1,\ldots,N_p). \tag{2.58}$$

With these at hand we can proceed as follows

$$\begin{aligned}
A_n(t) &= \Big|\sup_{\bar t_i\le t}\big(f_n(\bar t_i^{(n)}) + \bar y_i^{(n)}\big) + B_n(t) - \sup_{\bar t_i\le t}\big(f_0(\bar t_i) + \bar y_i\big)\Big| \\
&\le \Big|\sup_{\bar t_i\le t}\big(f_n(\bar t_i^{(n)}) + \bar y_i^{(n)}\big) - \sup_{\bar t_i\le t}\big(f_0(\bar t_i) + \bar y_i\big)\Big| + |B_n(t)| \\
&\le \sum_{i=1}^p \big(\big|f_n(\bar t_i^{(n)}) - f_0(\bar t_i)\big| + \big|\bar y_i^{(n)} - \bar y_i\big|\big) + |B_n(t)|.
\end{aligned}$$

In view of (2.55) and (2.58) the right-hand side tends to zero uniformly in $t\in[0,T]$ as $n\to\infty$. Equivalently, for any $t\in[0,T]$ and any $t_n\to t$ as $n\to\infty$, $\lim_{n\to\infty} c_n^{-1}\log|V_n(t_n)| = \sup_{\bar t_i\le t}(f_0(\bar t_i) + \bar y_i)$. Recalling that we have picked $\varepsilon$ such that

$$\sup_{\bar t_i\le T}\big(f_0(\bar t_i) + \bar y_i\big) = \sup_{t_k^{(0)}\le T,\ y_k^{(0)}>\varepsilon}\big(f_0(t_k^{(0)}) + y_k^{(0)}\big) > 0$$

we infer $\lim_{n\to\infty} c_n^{-1}\log^+|V_n(t_n)| = \sup_{\bar t_i\le t}(f_0(\bar t_i) + \bar y_i)$. This is equivalent to

$$\lim_{n\to\infty}\sup_{t\in[0,T]}\Big|c_n^{-1}\log^+|V_n(t)| - \sup_{\bar t_i\le t}\big(f_0(\bar t_i) + \bar y_i\big)\Big| = 0.$$

4th Term In the proof of Theorem 1.3.17 it is shown that⁴

$$\sup_{t\in[0,T]}\Big|\sup_{\bar t_i\le t}\big(f_0(\bar t_i) + \bar y_i\big) - \sup_{t_k^{(0)}\le t}\big(f_0(t_k^{(0)}) + y_k^{(0)}\big)\Big| \le \omega_{f_0}(2|\gamma|) + \varepsilon$$

where $|\gamma| := \max_i (s_{i+1}-s_i)$ and $\omega_{f_0}(\varepsilon) := \sup_{|u-v|<\varepsilon,\ u,v\ge 0}|f_0(u) - f_0(v)|$ is the modulus of continuity of $f_0$. Of course, the right-hand side of the last inequality tends to zero on sending $|\gamma|$ and $\varepsilon$ to zero.

Collecting pieces together and letting in (2.56) $n\to\infty$ and then $|\gamma|$ and $\varepsilon$ tend to zero we arrive at the desired conclusion $\lim_{n\to\infty} d_T\big(G_n(f_n,\mu_n), F(f_0,\mu_0)\big) = 0$. ⊔⊓
Proof of Theorem 2.2.1 We only treat the case $m < 0$, the case $m > 0$ being similar, the case $m = 0$ being much simpler. All unspecified limit relations are assumed to hold as $n\to\infty$.

Proof of (2.39) For $k\in\mathbb{N}_0$, set $S_k := \log|\Pi_k|$ and $\eta_{k+1} := \log|Q_{k+1}|$. As a consequence of the strong law of large numbers,

$$n^{-1}S_{[n\cdot]} \;\Rightarrow\; m\Upsilon(\cdot) \tag{2.59}$$

where $\Upsilon(t) = t$ for $t\ge 0$ (actually, in (2.59) the a.s. convergence holds, see Theorem 4 in [96]). According to Theorem 6.3 on p. 180 in [237], condition (2.38) entails

$$\sum_{k\ge 0} 1_{\{\eta_{k+1}>0\}}\,\varepsilon_{(n^{-1}k,\ n^{-1}\eta_{k+1})} \;\Rightarrow\; N^{(c,1)} \tag{2.60}$$

on $M_p$. Now relations (2.59) and (2.60) can be combined into the joint convergence

$$\Big(n^{-1}S_{[n\cdot]},\ \sum_{k\ge 0} 1_{\{\eta_{k+1}>0\}}\,\varepsilon_{(n^{-1}k,\ n^{-1}\eta_{k+1})}\Big) \;\Rightarrow\; \big(m\Upsilon(\cdot),\ N^{(c,1)}\big) \tag{2.61}$$

on $D\times M_p$. By the Skorokhod representation theorem there are versions which converge a.s. Retaining the original notation for these versions we want to apply

⁴ Condition (2.54) is only used in this part of the proof.
Theorem 2.3.1 to

$$G_n(f_n,\mu_n)(t) = n^{-1}\log^+\Big|\sum_{k=0}^{[nt]} \Pi_k Q_{k+1} 1_{\{|Q_{k+1}|>1\}}\Big| = n^{-1}\log^+\Big|\sum_{k=0}^{[nt]} \mathrm{sgn}(\Pi_k Q_{k+1})\, e^{\,n(n^{-1}S_{[n(k/n)]} + n^{-1}\eta_{k+1})}\, 1_{\{\eta_{k+1}>0\}}\Big|,$$

so that $f_n(\cdot) = n^{-1}S_{[n\cdot]}$, $f_0(\cdot) = m\Upsilon(\cdot)$, $\mu_n = \sum_{k\ge 0} 1_{\{\eta_{k+1}>0\}}\varepsilon_{(n^{-1}k,\ n^{-1}\eta_{k+1})}$, $\mu_0 = N^{(c,1)}$, $c_n = n$ and the signs $\pm$ are defined by $\mathrm{sgn}(\Pi_k Q_{k+1})$.

Now we shall show that the so defined functions and measures satisfy with probability one all the conditions of Theorem 2.3.1. In view of (2.61), conditions (A5) and (A6) are fulfilled. Furthermore, by Lemma 1.3.18, $N^{(c,1)}$ satisfies with probability one conditions (2.46), (A1), and (A2). The (nonnegative) expression under the limit sign in (2.49) is dominated by $n^{-1}\log([nT])$ which converges to zero as $n\to\infty$. Hence (2.49) holds. While checking (2.47) our argument is similar to that given on p. 223 in [237]. We fix any $T>0$, $\delta>0$ and use the representation

$$N^{(c,1)}\big([0,T]\times(\delta,\infty]\cap\cdot\big) = \sum_{k=1}^{\tau} \varepsilon_{(U_k,V_k)}(\cdot)$$

where $(U_i)$ are i.i.d. with a uniform distribution on $[0,T]$, $(V_j)$ are i.i.d. with $P\{V_1\le x\} = (1-\delta/x)1_{(\delta,\infty)}(x)$, and $\tau$ has a Poisson distribution with parameter $Tc/\delta$, all the random variables being independent. It suffices to prove that

$$I := P\{\tau\ge 2,\ mU_k + V_k = mU_j + V_j\ \text{for some } 1\le k<j\le\tau\} = 0.$$

This is a consequence of the fact that $mU_1 + V_1$ has a continuous distribution which implies $P\{mU_1 + V_1 = mU_2 + V_2\} = 0$. Indeed,

$$I = \sum_{n\ge 2} P\{mU_k + V_k = mU_j + V_j\ \text{for some } 1\le k<j\le n\}\,P\{\tau = n\} \le \sum_{n\ge 2}\binom{n}{2}\, P\{mU_1 + V_1 = mU_2 + V_2\}\,P\{\tau = n\} = 0.$$

Condition (2.48) holds because $P\big\{\sup_{t_k^{(c,1)}\le T}\big(m t_k^{(c,1)} + j_k^{(c,1)}\big)\le 0\big\} = 0$ for each $T>0$, by (1.40).
Thus, Theorem 2.3.1 is indeed applicable with our choice of $f_n$ and $\mu_n$, and we conclude that

$$n^{-1}\log^+\Big|\sum_{k=0}^{[n\cdot]} \Pi_k Q_{k+1} 1_{\{|Q_{k+1}|>1\}}\Big| \;\Rightarrow\; \sup_{t_k^{(c,1)}\le\,\cdot}\big(m t_k^{(c,1)} + j_k^{(c,1)}\big). \tag{2.62}$$

Further, for each $T>0$,

$$\sup_{0\le t\le T}\log^+\Big|\sum_{k=0}^{[nt]} \Pi_k Q_{k+1} 1_{\{|Q_{k+1}|\le 1\}}\Big| \le \sup_{0\le t\le T}\log^+\Big(\sum_{k=0}^{[nt]} |\Pi_k|\Big) \le \log^+\Big(\sum_{k\ge 0} |\Pi_k|\Big).$$

Since $\lim_{n\to\infty}|\Pi_n| = 0$ a.s. as a consequence of $m<0$, $\sum_{k\ge 0}|\Pi_k| < \infty$ a.s. by Theorem 2.1.1, whence

$$n^{-1}\log^+\Big|\sum_{k=0}^{[n\cdot]} \Pi_k Q_{k+1} 1_{\{|Q_{k+1}|\le 1\}}\Big| \;\Rightarrow\; \Theta(\cdot) \tag{2.63}$$

where $\Theta(t) = 0$ for $t\ge 0$. Using (2.52) with

$$x = \sum_{k=0}^{[n\cdot]} \Pi_k Q_{k+1} 1_{\{|Q_{k+1}|>1\}} \qquad\text{and}\qquad y = \sum_{k=0}^{[n\cdot]} \Pi_k Q_{k+1} 1_{\{|Q_{k+1}|\le 1\}}$$

(so that $x+y = Y_{[n\cdot]+1}$) in combination with (2.62) and (2.63) we obtain (2.39) with the help of Slutsky's lemma.

Proof of (2.40) Recalling that $X_0 = 0$ a.s. by assumption we use a representation

$$X_{[n\cdot]+1} = \Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^* \tag{2.64}$$

where $\Pi_k^* := \Pi_k^{-1}$, $k\in\mathbb{N}_0$ and $Q_k^* := Q_k/M_k$ (with generic copy $Q^*$), $k\in\mathbb{N}$. Observe that

$$n^{-1}\sup_{0\le t\le T}|S_{[nt]+1} - S_{[nt]}| = n^{-1}\max_{1\le k\le [nT]+1}\big|\log|M_k|\big| \;\overset{P}{\to}\; 0$$

for every $T>0$ because $\lim_{t\to\infty} tP\big\{\big|\log|M|\big| > t\big\} = 0$ as a consequence of $E\big|\log|M|\big| < \infty$. This together with (2.59) proves $n^{-1}\log|\Pi_{[n\cdot]+1}| \Rightarrow m\Upsilon(\cdot)$.
Hence $n^{-1}\log\big(e^{-m[n\cdot]}|\Pi_{[n\cdot]+1}|\big) \Rightarrow \Theta(\cdot)$ and thereupon

$$n^{-1}\log^+\big(e^{-m[n\cdot]}|\Pi_{[n\cdot]+1}|\big) \Rightarrow \Theta(\cdot) \qquad\text{and}\qquad n^{-1}\log^-\big(e^{-m[n\cdot]}|\Pi_{[n\cdot]+1}|\big) \Rightarrow \Theta(\cdot) \tag{2.65}$$

by the continuous mapping theorem (use continuity of $x\mapsto x^+$ and $x\mapsto x^-$). Further, write, for $\varepsilon\in(0,1)$ and $t>0$,

$$P\{\log|Q| > (1+\varepsilon)t\} - P\{\log|M| > \varepsilon t\} \le P\{\log|Q| - \log|M| > t\} \le P\{\log|Q| > (1-\varepsilon)t\} + P\{\log^-|M| > \varepsilon t\}. \tag{2.66}$$

Multiplying the inequality by $t$, sending $t\to\infty$ and then $\varepsilon\to 0$ yields

$$P\{\log|Q^*| > t\} = P\{\log|Q| - \log|M| > t\} \sim P\{\log|Q| > t\} \sim ct^{-1}, \qquad t\to\infty.$$

Arguing as in the proof of (2.39) we conclude that this limit relation in combination with (2.59) ensures that

$$\Big(n^{-1}\log|\Pi_{[n\cdot]}^*|,\ \sum_{k\ge 0} 1_{\{\log|Q_{k+1}^*|>0\}}\,\varepsilon_{(n^{-1}k,\ n^{-1}\log|Q_{k+1}^*|)}\Big) \;\Rightarrow\; \big({-m}\Upsilon(\cdot),\ N^{(c,1)}\big)$$

on $D\times M_p$. By the Skorokhod representation theorem there are versions which converge a.s. Retaining the original notation for these versions we want to apply Theorem 2.3.1 with $\mu_n = \sum_{k\ge 0} 1_{\{\log|Q_{k+1}^*|>0\}}\varepsilon_{(n^{-1}k,\ n^{-1}\log|Q_{k+1}^*|)}$, $\mu_0 = N^{(c,1)}$, $f_n(\cdot) = n^{-1}\log|\Pi_{[n\cdot]}^*|$, $f_0(\cdot) = -m\Upsilon(\cdot)$, $c_n = n$ and the signs $\pm$ defined by $\mathrm{sgn}(\Pi_k^* Q_{k+1}^*)$. As has already been checked in the proof of (2.39) the so defined functions and measures satisfy with probability one all the conditions of Theorem 2.3.1 (to check (2.48) use (1.40)). Hence

$$n^{-1}\log^+\Big|\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^* 1_{\{|Q_{k+1}^*|>1\}}\Big| \;\Rightarrow\; \sup_{t_k^{(c,1)}\le\,\cdot}\big({-m}\,t_k^{(c,1)} + j_k^{(c,1)}\big)$$

by Theorem 2.3.1. This entails

$$n^{-1}\log^+\Big|\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^* 1_{\{|Q_{k+1}^*|>1\}}\Big| + m n^{-1}[n\cdot] \;\Rightarrow\; m\Upsilon(\cdot) + \sup_{t_k^{(c,1)}\le\,\cdot}\big({-m}\,t_k^{(c,1)} + j_k^{(c,1)}\big).$$
Since the right-hand side is a.s. nonnegative (see (1.40)), we further have

$$\begin{aligned}
n^{-1}\log^+\Big(e^{m[n\cdot]}\Big|\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^* 1_{\{|Q_{k+1}^*|>1\}}\Big|\Big) &= \Big(n^{-1}\log\Big|\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^* 1_{\{|Q_{k+1}^*|>1\}}\Big| + m n^{-1}[n\cdot]\Big)^+ \\
&= \Big(n^{-1}\log^+\Big|\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^* 1_{\{|Q_{k+1}^*|>1\}}\Big| + m n^{-1}[n\cdot]\Big)^+ \\
&\Rightarrow\; m\Upsilon(\cdot) + \sup_{t_k^{(c,1)}\le\,\cdot}\big({-m}\,t_k^{(c,1)} + j_k^{(c,1)}\big)
\end{aligned} \tag{2.67}$$

having utilized $(x-y)^+ = (x^+-y)^+$ for $x\in\mathbb{R}$ and $y\ge 0$. Using (2.51) with

$$x = e^{-m[n\cdot]}|\Pi_{[n\cdot]+1}| \qquad\text{and}\qquad y = e^{m[n\cdot]}\Big|\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^* 1_{\{|Q_{k+1}^*|>1\}}\Big|$$

(so that $xy = \big|\Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^* 1_{\{|Q_{k+1}^*|>1\}}\big|$) in combination with (2.65) and (2.67) we obtain

$$n^{-1}\log^+\Big|\Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^* 1_{\{|Q_{k+1}^*|>1\}}\Big| \;\Rightarrow\; m\Upsilon(\cdot) + \sup_{t_k^{(c,1)}\le\,\cdot}\big({-m}\,t_k^{(c,1)} + j_k^{(c,1)}\big) \tag{2.68}$$

with the help of Slutsky's lemma.

Relation (2.59) entails $n^{-1}\log\big(\max_{0\le k\le[n\cdot]}|\Pi_k^*|\big) \Rightarrow -m\Upsilon(\cdot)$ by the continuous mapping theorem. Therefore

$$n^{-1}\Big(\log|\Pi_{[n\cdot]+1}| + \log([n\cdot]+1) + \log\big(\max_{0\le k\le[n\cdot]}|\Pi_k^*|\big)\Big) \;\Rightarrow\; \Theta(\cdot)$$

whence

$$n^{-1}\log^+\Big(|\Pi_{[n\cdot]+1}|\,([n\cdot]+1)\max_{0\le k\le[n\cdot]}|\Pi_k^*|\Big) \;\Rightarrow\; \Theta(\cdot).$$
This implies

$$n^{-1}\log^+\Big|\Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^* 1_{\{|Q_{k+1}^*|\le 1\}}\Big| \;\Rightarrow\; \Theta(\cdot) \tag{2.69}$$

because

$$\log^+\Big|\Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^* 1_{\{|Q_{k+1}^*|\le 1\}}\Big| \le \log^+\Big(|\Pi_{[n\cdot]+1}|\sum_{k=0}^{[n\cdot]} |\Pi_k^*|\Big) \le \log^+\Big(|\Pi_{[n\cdot]+1}|\,([n\cdot]+1)\max_{0\le k\le[n\cdot]}|\Pi_k^*|\Big).$$

To finish the proof of (2.40) it remains to use (2.52) with

$$x = \Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^* 1_{\{|Q_{k+1}^*|>1\}} \qquad\text{and}\qquad y = \Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^* 1_{\{|Q_{k+1}^*|\le 1\}}$$

(so that $x+y = \Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^* = X_{[n\cdot]+1}$) in combination with (2.68) and (2.69). The proof of Theorem 2.2.1 is complete. ⊔⊓
Proof of Theorem 2.2.5 The proof proceeds along the lines of that of Theorem 2.2.1 but is simpler, for the contribution of the $M_k$'s is negligible. Therefore we only provide details for fragments which differ principally from the corresponding ones in the proof of Theorem 2.2.1.

Observe that

$$\lim_{n\to\infty} n^{-1}b_n = +\infty. \tag{2.70}$$

Indeed, since $(b_n)$ is a regularly varying sequence of index $1/\alpha$, this is trivial when $\alpha\in(0,1)$. If $\alpha = 1$, this follows from the relation $n^{-1}b_n \sim \ell(b_n)$ as $n\to\infty$ and our assumption that $\lim_{t\to\infty}\ell(t) = \infty$.

Proof of (2.44) We recall the already used notation $S_k = \log|\Pi_k|$ and $\eta_{k+1} = \log|Q_{k+1}|$, $k\in\mathbb{N}_0$. According to Theorem 6.3 on p. 180 in [237], condition (2.41) entails

$$\sum_{k\ge 0} 1_{\{\eta_{k+1}>0\}}\,\varepsilon_{(n^{-1}k,\ b_n^{-1}\eta_{k+1})} \;\Rightarrow\; N^{(1,\alpha)} \tag{2.71}$$

on $M_p$. If we can prove that

$$b_n^{-1}S_{[n\cdot]} \;\Rightarrow\; \Theta(\cdot) \tag{2.72}$$

where $\Theta(t) = 0$ for $t\ge 0$, then relations (2.71) and (2.72) can be combined into the joint convergence

$$\Big(b_n^{-1}S_{[n\cdot]},\ \sum_{k\ge 0} 1_{\{\eta_{k+1}>0\}}\,\varepsilon_{(n^{-1}k,\ b_n^{-1}\eta_{k+1})}\Big) \;\Rightarrow\; \big(\Theta(\cdot),\ N^{(1,\alpha)}\big)$$

on $D\times M_p$. By the Skorokhod representation theorem there are versions which converge a.s. Retaining the original notation for these versions we apply Theorem 2.3.1 with $f_n(\cdot) = b_n^{-1}S_{[n\cdot]}$, $f_0 = \Theta$, $\mu_n = \sum_{k\ge 0} 1_{\{\eta_{k+1}>0\}}\varepsilon_{(n^{-1}k,\ b_n^{-1}\eta_{k+1})}$, $\mu_0 = N^{(1,\alpha)}$, $c_n = b_n$ and the signs $\pm$ defined by $\mathrm{sgn}(\Pi_k Q_{k+1})$, which gives

$$\frac{\log^+\big|\sum_{k=0}^{[n\cdot]} \Pi_k Q_{k+1} 1_{\{|Q_{k+1}|>1\}}\big|}{b_n} \;\Rightarrow\; \sup_{t_k^{(1,\alpha)}\le\,\cdot} j_k^{(1,\alpha)}. \tag{2.73}$$
Further, for each $T>0$,

$$\sup_{0\le t\le T}\log^+\Big|\sum_{k=0}^{[nt]} \Pi_k Q_{k+1} 1_{\{|Q_{k+1}|\le 1\}}\Big| \le \sup_{0\le t\le T}\log^+\Big(\sum_{k=0}^{[nt]} |\Pi_k|\Big) \le \log([nT]+1) + \log^+\Big(\max_{0\le k\le[nT]}|\Pi_k|\Big).$$

In view of (2.72) we infer

$$\frac{\log^+\big(\max_{0\le k\le[nT]}|\Pi_k|\big)}{b_n} = \frac{\big(\max_{0\le k\le[nT]} S_k\big)^+}{b_n} \;\overset{P}{\to}\; 0$$

by the continuous mapping theorem, whence

$$\frac{\log^+\big|\sum_{k=0}^{[n\cdot]} \Pi_k Q_{k+1} 1_{\{|Q_{k+1}|\le 1\}}\big|}{b_n} \;\Rightarrow\; \Theta(\cdot). \tag{2.74}$$

Using (2.73), (2.74), inequality (2.52), and Slutsky's lemma we arrive at (2.44).

It only remains to check (2.72). To this end, it suffices to prove that

$$\frac{\sup_{0\le t\le T}|S_{[nt]}|}{b_n} = \frac{\max_{0\le k\le[nT]}|S_k|}{b_n} \;\overset{P}{\to}\; 0 \tag{2.75}$$

for every $T>0$. Set

$$S_0^+ = S_0^- := 0,\quad S_n^+ := \log^+|M_1| + \ldots + \log^+|M_n|,\quad S_n^- := \log^-|M_1| + \ldots + \log^-|M_n|$$
for $n\in\mathbb{N}$. Since $(b_n)$ is a regularly varying sequence and

$$\max_{0\le k\le[nT]}|S_k| \le \max_{0\le k\le[nT]} S_k^+ + \max_{0\le k\le[nT]} S_k^- = S_{[nT]}^+ + S_{[nT]}^-,$$

(2.75) follows if we prove that $\lim_{n\to\infty}(S_n^\pm/b_n) = 0$ in probability. We only investigate the case $\liminf_{n\to\infty}|\Pi_n| = 0$ a.s., for the assumptions concerning $M$ in the complementary case $\lim_{n\to\infty}|\Pi_n| = \infty$ a.s. are symmetric to those for the case $\lim_{n\to\infty}|\Pi_n| = 0$ a.s.

Case $E\log^-|M| < \infty$ Then necessarily $E\log^+|M| < \infty$, for otherwise $\lim_{n\to\infty}|\Pi_n| = \infty$ a.s. Therefore we have $\lim_{n\to\infty} n^{-1}S_n^\pm = E\log^\pm|M|$ by the strong law of large numbers. Invoking (2.70) proves (2.75).

Case $E\log^-|M| = \infty$ Condition (2.42) entails

$$\lim_{n\to\infty} \frac{n}{b_n} E\big(\log^-|M|\wedge b_n\big) = 0.$$

Since

$$\frac{n}{b_n} E\big(\log^-|M|\wedge b_n\big) = nP\{\log^-|M| > b_n\} + \frac{n}{b_n} E\big(\log^-|M|\, 1_{\{\log^-|M|\le b_n\}}\big),$$

we infer

$$\lim_{n\to\infty} nP\{\log^-|M| > b_n\} = 0 \tag{2.76}$$

and

$$\lim_{n\to\infty} \frac{n}{b_n} E\big(\log^-|M|\, 1_{\{\log^-|M|\le b_n\}}\big) = 0. \tag{2.77}$$

Using (2.77) together with Markov's inequality proves

$$\lim_{n\to\infty} \frac{\sum_{k=1}^n \log^-|M_k|\, 1_{\{\log^-|M_k|\le b_n\}}}{b_n} = 0 \qquad\text{in probability.}$$

Now $\lim_{n\to\infty}(S_n^-/b_n) = 0$ in probability is a consequence of

$$P\Big\{b_n^{-1}\sum_{k=1}^n \log^-|M_k| \ne b_n^{-1}\sum_{k=1}^n \log^-|M_k|\, 1_{\{\log^-|M_k|\le b_n\}}\Big\} \le \sum_{k=1}^n P\{\log^-|M_k| > b_n\} = nP\{\log^-|M| > b_n\}$$

in combination with (2.76).
Left with proving that

$$S_n^+/b_n \;\overset{P}{\to}\; 0 \tag{2.78}$$

we suppose immediately that $E\log^+|M| = \infty$, for otherwise (2.78) is a consequence of the strong law of large numbers. If $\limsup_{n\to\infty}|\Pi_n| = \infty$, then repeating the argument above but using (2.43) instead of (2.42) we obtain (2.78). In the case $\lim_{n\to\infty}|\Pi_n| = 0$, we can invoke Lemma 8.1 in [232] to conclude that $\lim_{n\to\infty} S_n^+/S_n^- = 0$ a.s. which together with $\lim_{n\to\infty}(S_n^-/b_n) = 0$ in probability implies (2.78). The proof of (2.72) is complete. Hence so is that of (2.44).

Proof of (2.45) Since we do not assume that $X_0 = 0$ a.s., an analogue of (2.64) reads

$$X_{[n\cdot]+1} = \Pi_{[n\cdot]+1} X_0 + \Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^*.$$

An application of inequality (2.52) with

$$x = \Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^* \qquad\text{and}\qquad y = \Pi_{[n\cdot]+1} X_0$$

followed by the use of $b_n^{-1}\log|\Pi_{[n\cdot]+1}| \Rightarrow \Theta(\cdot)$, which is a consequence of (2.75), shows that (2.45) is equivalent to

$$\frac{\log^+\big|\Pi_{[n\cdot]+1}\sum_{k=0}^{[n\cdot]} \Pi_k^* Q_{k+1}^*\big|}{b_n} \;\Rightarrow\; \sup_{t_k^{(1,\alpha)}\le\,\cdot} j_k^{(1,\alpha)}.$$

The proof of the last limit relation follows the pattern of that of (2.40) but is simpler. Referring to the proof of (2.40) the only thing that needs to be checked is that

$$P\{\log|Q^*| > t\} = P\{\log|Q| - \log|M| > t\} \sim t^{-\alpha}\ell(t)$$

as $t\to\infty$. To this end, we shall use (2.66). As before, we only investigate the case $\liminf_{n\to\infty}|\Pi_n| = 0$ a.s.

Case $E\log^-|M| < \infty$ We have $\lim_{t\to\infty} tP\{\log^-|M| > \varepsilon t\} = 0$ whereas $\lim_{t\to\infty} tP\{\log|Q| > t\} = \infty$ (recall that in the case $\alpha = 1$ we assume that $\lim_{t\to\infty}\ell(t) = \infty$). Therefore,

$$\lim_{t\to\infty} \frac{P\{\log^-|M| > \varepsilon t\}}{P\{\log|Q| > t\}} = 0. \tag{2.79}$$
Since $E\log^-|M| < \infty$ entails $E\log^+|M| < \infty$, the same argument proves (2.79) for the right tail of $\log^+|M|$.

Case $E\log^-|M| = \infty$ and $E\log^+|M| < \infty$ It suffices to check (2.79) which is a consequence of (2.42) and the regular variation of $P\{\log|Q| > t\}$.

Case $E\log^-|M| = E\log^+|M| = \infty$ We only have to prove that

$$\lim_{t\to\infty} \frac{P\{\log^+|M| > \varepsilon t\}}{P\{\log|Q| > t\}} = 0.$$

If $\limsup_{n\to\infty}|\Pi_n| = \infty$ a.s., this follows from (2.43) and the regular variation of $P\{\log|Q| > t\}$. Suppose that $\lim_{n\to\infty}|\Pi_n| = 0$ a.s., equivalently, $\lim_{n\to\infty} S_n = -\infty$ a.s. Then $EJ^-(\log^+|M|) < \infty$ according to Case (A3) of Remark 1.2.3 with $\eta\equiv 0$, where $J^-(t) = t/E\big(\log^-|M|\wedge t\big)$ for $t>0$. In view of

$$1/J^-(t) = \int_0^1 P\{\log^-|M| > ty\}\,dy,$$

$J^-$ is nondecreasing, whence

$$\lim_{t\to\infty} \frac{tP\{\log^+|M| > t\}}{E\big(\log^-|M|\wedge t\big)} = 0,$$

and the desired relation follows by an application of (2.42). ⊔⊓

2.4 Bibliographic Comments

Perpetuities are ubiquitous in many areas of applied mathematics. A number of


examples supporting this claim can be found in [255]. Additionally, we mention the frequent appearance of perpetuities in the studies of branching processes in random environment [2, 112], in computer science [131, 168, 203, 263], and in the analysis
of discrete random (combinatorial) structures (references can be found below among
the comments concerning ‘Exponential functionals’). The list of less known areas
of applications includes generalized refinement equations [170], kinetic models
for wealth distribution [252], and growth-collapse processes with renewal collapse
times [50].
Here is a quite common model giving rise to perpetuities. Consider a linear recurrence relation of the form

$$Z_n \overset{d}{=} Z'_{I_n} + V_n, \qquad n\in\mathbb{N}, \qquad Z_0 := c$$

where $I_n$ is a random index with values in the set $\{0,1,\ldots,n\}$, the random variable $Z'_m$ is assumed to be independent of $(I_n, V_n)$ and distributed like $Z_m$ for each $m\in\mathbb{N}_0$, and $c$ is a constant. If

$$\big(n^{-a}Z'_n,\ n^{-a}I_n^a,\ n^{-a}V_n\big) \;\overset{d}{\to}\; (Y_\infty, M, Q), \qquad n\to\infty$$

for some $a>0$ then, necessarily, $Y_\infty$ is independent of $(M,Q)$. Furthermore, the distribution of $Y_\infty$ satisfies (2.7) which can be seen on writing

$$\frac{Z_n}{n^a} \overset{d}{=} \frac{Z'_{I_n}}{I_n^a}\cdot\frac{I_n^a}{n^a} + \frac{V_n}{n^a}.$$
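A hedged toy instance of this scheme (our own example, not from the book): take $I_n$ uniform on $\{0,\ldots,n-1\}$, $V_n = n$ and $a = 1$, so that $(M,Q) = (U,1)$ with $U$ uniform on $(0,1)$. The limit then satisfies $Y_\infty \overset{d}{=} UY_\infty + 1$, i.e., $Y_\infty = \sum_{k\ge 0} U_1\cdots U_k$ (a Dickman-type perpetuity), whose mean is $2$ since $EY_\infty = EY_\infty/2 + 1$:

```python
import random

random.seed(3)

def perpetuity_sample(depth=60):
    # Y = 1 + U1 + U1*U2 + ... with i.i.d. uniforms; truncating after `depth`
    # terms leaves an error bounded via the tiny product of `depth` uniforms.
    y, prod = 0.0, 1.0
    for _ in range(depth):
        y += prod
        prod *= random.random()
    return y

n = 40000
mean = sum(perpetuity_sample() for _ in range(n)) / n
print(mean)   # should be close to E Y = 2
```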

Decomposable Distributions A perpetuity representation of the selfdecomposable


distributions (with nondegenerate M) was obtained in Theorem 1 of [165].
Exponential Functionals of Lévy Processes The integrals $\int_0^\infty f(X(t))\,dt$ were investigated in [253] for transient Lévy processes $X$ and a wide class of functions $f$. Of course, the choice $f(x) = e^{\pm x}$ gives the exponential functional. Thus, it seems that [253] is the first paper in which exponential functionals of Lévy processes were partly analyzed. In particular, in the case $T = \infty$ formula (2.10) was given in Example 3.4 of [253]. The exponential functionals of Lévy processes have been
Example 3.4 of [253]. The exponential functionals of Lévy processes have been
and still are a very popular object of research. Out of a long list of works dealing
with them we only mention a collection of articles [268], a survey [34] and several
more recent papers [27, 29, 228]. The last references are intended to demonstrate
that this research area is alive and well.
The exponential functionals of subordinators arise naturally as scaling limits of
various random sequences defined on random discrete structures. In many cases,
quantities of interest are the absorption times of certain nonincreasing Markov
chains on the nonnegative integers which satisfy a regular variation assumption.
While Theorem 2 in [121] treats the general nondecreasing Markov chains (The-
orem 3 in [32] provides an extension of this result to the first hitting times of
some nonmonotone Markov chains), particular nonincreasing Markov chains were
investigated in the following contexts: the absorption times of the random walks
with a barrier (Theorem 6 in [220]); the number of collisions in the coalescents with multiple collisions a.k.a. $\Lambda$-coalescents (Theorem 7.1 in [153] and Section 5.4 in [102]); the number of blocks in random regenerative compositions (Corollary 5.2 in [106]). Further examples (which are not the absorption times) include the number of jumps of random walks with a barrier (Theorem 1.3 in [153]); the total branch length of the $\Lambda$-coalescent tree (Theorem 3.1 in [218]); the number of types of a sample taken from the coalescent with simultaneous multiple collisions and mutation (Theorem 1.2 in [91]), etc.
Fixed Points of Inhomogeneous Smoothing Transforms There is a large body
of papers dealing with fixed points of smoothing transforms. A small sample for
the homogeneous case includes [4, 39, 64, 81, 135, 199]. Studies of the fixed points
2.4 Bibliographic Comments 85

of the inhomogeneous transforms were commenced only recently. While the scalar inhomogeneous case was addressed in [11, 12, 14, 60, 62, 164], the articles [25, 58, 152, 208, 209] focused on various multidimensional generalizations.
Fixed Points of Shot Noise Transforms In [141] the shot noise transforms were
introduced and investigation of their fixed points was initiated. Further results can
be found in [144, 145]. Example 2.1.3 is Proposition 8 in [134]. Example 2.1.4(a)
belongs to the folklore which means that its origin cannot be easily traced. Some
of its extensions can be found in [77, 78]. While part (b) of Example 2.1.4 is taken
from p. 72 in [144], part (c) is formula (2.5) in [76] and part (d) is Proposition 12 in
[229].
Random Lüroth Series We learned the notion of Lüroth series, both deterministic
and random, from [231]. Its authors investigated continuity properties of the
distributions of convergent random Lüroth series which are more general than those
discussed here. Since the authors of [231] never mentioned perpetuities, it seems
the connection between perpetuities and random Lüroth series is noticed here for
the first time.
Theorem 2.1.1 was proved in [109], the articles [84, 255] being the important
earlier contributions. Various extensions of Theorem 2.1.1 to wider settings can be
found in [48, 51, 87].
Theorem 2.1.2 is Theorem 1.3 in [9]. Earlier, it was proved in [113] under the additional condition $E\log|M| \in (-\infty, 0)$. However, for the conclusion that the distribution of $Y_\infty$ is continuous if nondegenerate, the analytic argument of the last cited paper is quite different from ours. This latter conclusion may also be derived from Theorem 1 in [116], as has been done in Lemma 2.1 in [33]. If $M$ and $Q$ are independent, sufficient conditions for the absolute continuity of the distribution of $Y_\infty$ were given in [222]. The proof in [222] relies heavily upon studying the behavior of corresponding characteristic functions and used moment assumptions as an indispensable ingredient. Without such assumptions it is not clear how absolute continuity of the distribution of $Y_\infty$ may be derived via an analytic approach.
Example 2.1.5(a) can be found as Example 4.3 in [257].
Example 2.1.5(b) is a special case of Example A2 in [194].
Example 2.1.5 (c,d) is taken from [175]. Our proof follows an argument given in Section 4 of [234]. Parts (c) and (d) of Example 2.1.5 treat a well-studied class of special cases when $P\{Q = \pm 1\} = 1/2$. A short survey can be found in [75]. One would expect the distribution of $Y_\infty$ to be absolutely continuous whenever $c\in(1/2,1)$. However, this is not true as there are values of $c$ between $1/2$ and $1$ giving a singular distribution of $Y_\infty$, for example, if $c = (\sqrt{5}-1)/2$, see [85, 86]. Meanwhile it has been proved in [249] that, on the other hand, the distribution of $Y_\infty$ is indeed absolutely continuous for almost all values of $c\in(1/2,1)$.
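The random series in question is easy to sample directly. A hedged sketch (parameters are our own choices): with $P\{Q=\pm 1\}=1/2$ and $M = c$, the perpetuity is $Y_\infty = \sum_{k\ge 0} c^k Q_{k+1}$, so $EY_\infty = 0$ and $\mathrm{Var}\,Y_\infty = \sum_{k\ge 0} c^{2k} = 1/(1-c^2)$ — moments a simulation reproduces regardless of whether the law is singular or absolutely continuous.

```python
import random

random.seed(5)

def bernoulli_convolution(c, depth=60):
    # Y = sum_{k>=0} c^k Q_{k+1} with P{Q = +1} = P{Q = -1} = 1/2,
    # truncated at `depth` terms (geometric truncation error)
    return sum(c ** k * random.choice((-1.0, 1.0)) for k in range(depth))

c, n = 0.6, 20000
ys = [bernoulli_convolution(c) for _ in range(n)]
mean = sum(ys) / n
second = sum(v * v for v in ys) / n
print(mean, second)   # approx 0 and 1/(1 - c*c) = 1.5625
```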
Theorem 2.1.3 is Theorem 5.1 in [27].
Theorems 2.1.4 and 2.1.5 coincide with Theorem 1.2 in [6] and Theorem 1.4 in [9], respectively. In connection with Theorem 2.1.5 see also Proposition 10.1 in [174] for the case that $M, Q \ge 0$ in which (2.22) and (2.23) are clearly identical. For the case $p > 1$, the implication (2.20) $\Rightarrow$ (2.22) was proved in Theorem 5.1 of
[255]. The following problem that is closely related to Theorem 2.1.5 has received and is still receiving enormous attention in the literature: which conditions imposed on the distributions of $M$ and $Q$ ensure that $P\{|Y_\infty| > x\} \sim x^{-\alpha}\ell(x)$ as $x\to\infty$ for some $\alpha>0$ and some $\ell$ slowly varying at $\infty$? This asymptotics can arise from a comparable right tail of $|Q|$ provided that the right tail of $|M|$ is lighter, see [111, 115] and, for a multidimensional case, [240]. A very similar situation is described in Theorem 1.3.6 which concerns the tails of suprema of perturbed random walks. A more remarkable fact was noticed in [176]: the right tail of $|Y_\infty|$ can exhibit a power asymptotics even in the case where $|M|$ and $|Q|$ are light-tailed. For the last case, papers [107, 115, 178] are further important contributions concerning perpetuities (the book [59] provides a nice summary of results around the Kesten–Grincevičius–Goldie theorem), while Theorem 1.3.8 is a result on the tail behavior of suprema of perturbed random walks. Finally, we mention Theorem 3.1 in [83] which is a result on tail behavior of $Y_\infty$ obtained under a subexponentiality assumption.
Formula (2.26) can be found in Theorem 5.1 of [255]. A more complicated recursive formula relating moments of fractional orders is obtained in Lemma 2.1 of [213] under the assumption that $M$ and $Q$ are nonnegative and independent.

Theorems 2.1.7 and 2.1.8 are Theorems 1.6 and 1.7 in [9]. Extending these results and answering the question asked in [9], the recent paper [5] provides formulae for $\sup\{r: Ee^{rY_\infty} < \infty\}$ and $\inf\{r: Ee^{rY_\infty} < \infty\}$. Various results concerning exponential and 'Poisson-like' tails of $Y_\infty$ can be found in [5, 74, 108, 127, 128, 185, 206].
Theorems 2.2.1 and 2.2.5 are extended versions of Theorems 1.1 and 1.5 in [61]. The proofs given here are modified versions of those in [61]. Under the assumptions $M = e^{\mu}$ a.s. for some $\mu\in(-1,0)$ and $Q\ge 0$ a.s., one-dimensional versions of our results can be found in parts (ii) and (iii) of Theorem 5 in [222].

As far as we know, [114] was the first paper in which a limit theorem for $Y_n$ was proved in the boundary case $E\log M = 0$ under the assumption that $M\ge 0$ a.s. Also, weak convergence of one-dimensional distributions of divergent perpetuities has been investigated in [26, 129, 222, 233] under various assumptions on $M$ and $Q$. To the best of our knowledge, (a) except in [61], functional limit theorems for divergent perpetuities have not been obtained so far; (b) [61, 222] are the only contributions to the divergent non-boundary case $\lim_{n\to\infty}\Pi_n = 0$ a.s. and $EJ^-(\log^+|Q|) = \infty$. Outside the area of limit theorems we are only aware of two papers [174, 269] which investigate the latter case. The boundary case $E\log|M| = 0$ has received more attention in the literature, see [21, 53–55, 57, 114, 129, 233].
Chapter 3
Random Processes with Immigration

3.1 Definition

Denote by $D(\mathbb{R})$ the Skorokhod space of real-valued right-continuous functions which are defined on $\mathbb{R}$ and have finite limits from the left at each point of the domain. Let $X := (X(t))_{t\in\mathbb{R}}$ be a random process with paths in $D(\mathbb{R})$ satisfying $X(t) = 0$ for all $t<0$ and let $\xi$ be a positive random variable. Arbitrary dependence between $X$ and $\xi$ is allowed.

Further, let $(X_1,\xi_1), (X_2,\xi_2), \ldots$ be i.i.d. copies of the pair $(X,\xi)$ and denote, as before, by $(S_k)_{k\in\mathbb{N}_0}$ the zero-delayed random walk with increments $\xi_j$, that is,

$$S_0 = 0, \qquad S_k = \xi_1 + \ldots + \xi_k, \qquad k\in\mathbb{N}.$$

We write $(\nu(t))_{t\in\mathbb{R}}$ for the corresponding first-passage time process, i.e.,

$$\nu(t) := \inf\{k\in\mathbb{N}_0: S_k > t\} = \#\{k\in\mathbb{N}_0: S_k \le t\}, \qquad t\in\mathbb{R}$$

where the last equality holds a.s. Following [148, 149], the process $Y := (Y(t))_{t\in\mathbb{R}}$ defined by

$$Y(t) := \sum_{k\ge 0} X_{k+1}(t - S_k) = \sum_{k=0}^{\nu(t)-1} X_{k+1}(t - S_k), \qquad t\in\mathbb{R} \tag{3.1}$$

will be called random process with immigration at the epochs of a renewal process or, for short, random process with immigration. Observe that $|Y(t)| < \infty$ a.s. for each $t$ because the number of summands in the sum defining $Y(t)$ is a.s. finite. The process $Y$ is called renewal shot noise process if $X = h$ for a deterministic function

h, so that

$$Y(t) := \sum_{k\ge 0} h(t - S_k) = \sum_{k=0}^{\nu(t)-1} h(t - S_k), \qquad t\in\mathbb{R}. \tag{3.2}$$

If the distribution of $\xi$ is exponential, the renewal shot noise process is called a Poisson shot noise process because $(\nu(t)-1)_{t\ge 0}$ is then a Poisson process.
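A minimal simulation check of this claim (our own sketch; the rate and time horizon are arbitrary choices): for exponential $\xi$ with rate $\lambda$, the count $\nu(t)-1$ of renewal epochs in $(0,t]$ should have mean $\lambda t$.

```python
import random

random.seed(11)

def nu_minus_one(t, lam):
    # nu(t) = #{k >= 0 : S_k <= t} with S_0 = 0, so nu(t) - 1 counts the
    # renewal epochs falling in (0, t]
    s, k = 0.0, 0
    while True:
        s += random.expovariate(lam)
        if s > t:
            return k
        k += 1

lam, t, n = 1.0, 50.0, 3000
mean = sum(nu_minus_one(t, lam) for _ in range(n)) / n
print(mean)   # close to lam * t = 50
```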
Our interpretation of $Y$ as defined in (3.1) is as follows. At time $S_0 = 0$ the immigrant 1 starts running a random process $X_1$. For $k\in\mathbb{N}$, at time $S_k$ the immigrant $k+1$ starts running a random process $X_{k+1}$, $Y(t)$ being the sum of all processes run by the immigrants up to and including time $t$.
We advocate using the term 'random process with immigration' for two reasons. First, we believe that it is more informative than the more familiar term 'renewal shot noise process with random response functions $X_k$'; in particular, the random process $Y$ defined by (3.1) has little in common with the originally defined Poisson shot noise processes [247], which were intended to model the current induced by a stream of electrons arriving at the anode of a vacuum tube. Second, the new term was inspired by the fact that if $X$ is a continuous-time branching process, then $Y$ is a branching process with (single) immigration (see Example 3.1.1 for more details).
We now give several explicit examples of random processes with immigration.
Example 3.1.1 Let $((Z_{i,j}(t))_{t\ge0})_{i,j\in\mathbb{N}}$ be independent copies of a branching process $(Z(t))_{t\ge0}$ and $(\eta_k)_{k\in\mathbb{N}_0}$ independent copies of a nonnegative integer-valued random variable $\eta$, these two collections being independent. Assume further that the triples $\big(((Z_{i,j}(t))_{t\ge0})_{i\in\mathbb{N}},\,\eta_{j-1},\,\xi_j\big)_{j\in\mathbb{N}}$ are i.i.d. Then $Y=(Y(t))_{t\ge0}$ defined by
$$Y(t) = \sum_{k\ge0}\sum_{i=1}^{\eta_k} Z_{i,k+1}(t-S_k)\mathbb{1}_{\{S_k\le t\}},\qquad t\ge0 \tag{3.3}$$
is known in the literature as a branching process with immigration. Plainly, $Y$ is a random process with immigration which corresponds to
$$X_k(t) = Z_{1,k}(t)+\ldots+Z_{\eta_{k-1},k}(t),\qquad k\in\mathbb{N}.$$
If $\eta=1$ a.s. we obtain the aforementioned branching process with single immigration.
Example 3.1.2 Let $\eta$ be a random variable arbitrarily dependent on $\xi$, and let $g:\mathbb{R}^2\to\mathbb{R}$ be a function that satisfies $g(x,y)=0$ for $x<0$ and $g(\cdot,y)\in D(\mathbb{R})$ for each $y\in\mathbb{R}$. Setting $X(t)=g(t,\eta)$ gives a quite general class of random processes with immigration. Rather popular choices of the function $g$ include:
(a) $g(x,y) := \mathbb{1}_{[0,x]}(y)$ and $g(x,y) := \mathbb{1}_{(x,\infty)}(y)$. Let $\eta>0$ a.s. The resulting processes $Y$ can be interpreted in a number of ways, see p. 3. For instance, as
far as the $GI/G/\infty$ queueing system is concerned, $Y_1$ defined by
$$Y_1(t) := \sum_{k\ge0}\mathbb{1}_{\{S_k+\eta_{k+1}\le t\}},\qquad t\ge0$$
is the number of customers served up to and including time $t$, whereas $Y_2$ defined by
$$Y_2(t) := \sum_{k\ge0}\mathbb{1}_{\{S_k\le t<S_k+\eta_{k+1}\}},\qquad t\ge0$$
is the number of customers which are being served at time $t$. The latter random process with immigration will prove to be of great importance in Chapter 5.
Let $h:\mathbb{R}\to\mathbb{R}$ be a function satisfying $h(x)=0$ for $x<0$ and $h\in D(\mathbb{R})$.
(b) $g(x,y):=h(x)y$. The corresponding process $Y_3$ defined by
$$Y_3(t)=\sum_{k=0}^{\nu(t)-1}\eta_{k+1}h(t-S_k),\qquad t\ge0$$
is sometimes called multiplicative renewal shot noise.
(c) $g(x,y):=h(x)$. This gives rise to the classical renewal shot noise, see (3.2).
(d) $g(x,y):=y$. In this case $Y_4$ defined by
$$Y_4(t)=\sum_{k=1}^{\nu(t)}\eta_k,\qquad t\ge0$$
is called a continuous-time random walk.
(e) $g(x,y):=h(x\wedge y)$. Note that $x\wedge y=\int_0^x\mathbb{1}_{(z,\infty)}(y)\,dz$. Hence, if $g(x,y)=x\wedge y$ and $\eta>0$ a.s., the corresponding process $Y_5$ satisfies
$$Y_5(t):=\sum_{k=0}^{\nu(t)-1}\big(\eta_{k+1}\wedge(t-S_k)\big)=\int_0^t Y_2(s)\,ds,\qquad t\ge0.$$
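The pathwise identity $Y_5(t)=\int_0^t Y_2(s)\,ds$ from item (e) is easy to verify numerically. The following sketch compares the two sides on one simulated path; the $\mathrm{Exp}(1)$ law for $\xi$ and the law of $\eta$ are illustrative choices, not prescribed by the text.

```python
import random

random.seed(7)
t_max = 30.0
# One realization of the renewal epochs S_k <= t_max and marks eta_{k+1}.
S, s = [], 0.0
while s <= t_max:
    S.append(s)
    s += random.expovariate(1.0)            # xi ~ Exp(1), an illustrative choice
eta = [1.0 + random.random() for _ in S]    # eta > 0 a.s., also illustrative

def Y2(u):
    """Number of 'customers' in service at time u, cf. Example 3.1.2(a)."""
    return sum(1 for Sk, ek in zip(S, eta) if Sk <= u < Sk + ek)

t = 25.0
lhs = sum(min(ek, t - Sk) for Sk, ek in zip(S, eta) if Sk <= t)   # Y_5(t)
rhs = sum(Y2(i * 1e-3) * 1e-3 for i in range(int(t * 1000)))      # Riemann sum of Y_2
print(lhs, rhs)   # the two numbers agree up to discretization error
```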

In the two subsequent sections we shall provide results on weak convergence of the random processes with immigration so defined.

3.2 Limit Theorems Without Scaling

In this section we treat the situation when $\mathbb{E}|X(t)|$ is finite and tends to $0$ quickly as $t\to\infty$, while $\mathbb{E}\xi<\infty$. Then $Y$ is the superposition of a regular stream of freshly started processes with quickly fading contributions of the processes that
started early. As $t$ becomes large, these competing effects balance on a distributional level and $Y$ approaches stationarity. Under these assumptions the joint distribution of $(X,\xi)$ should affect the asymptotic behavior of $Y$; however, we refrain from investigating this effect and assume instead that $X$ and $\xi$ are independent.
Before we formulate our results, some preliminary work has to be done.

3.2.1 Stationary Random Processes with Immigration

Suppose that $\mu=\mathbb{E}\xi<\infty$, and that the distribution of $\xi$ is nonlattice, i.e., it is not concentrated on any lattice $d\mathbb{Z}$, $d>0$. Further, we stipulate hereafter that the basic probability space on which $(X_k)_{k\in\mathbb{N}}$ and $(\xi_k)_{k\in\mathbb{N}}$ are defined is rich enough to accommodate
• an independent copy $(\xi_{-k})_{k\in\mathbb{N}}$ of $(\xi_k)_{k\in\mathbb{N}}$;
• a random variable $\xi_0$ which is independent of $(\xi_k)_{k\in\mathbb{Z}\setminus\{0\}}$ and has the size-biased distribution
$$\mathbb{P}\{\xi_0\in dx\}=\mu^{-1}\,\mathbb{E}\big(\xi\mathbb{1}_{\{\xi\in dx\}}\big)\mathbb{1}_{(0,\infty)}(x);$$
• a random variable $U$ which is independent of $(\xi_k)_{k\in\mathbb{Z}}$ and has a uniform distribution on $[0,1]$;
• a family $(X_k)_{k\in\mathbb{Z}}$ of i.i.d. random elements of $D(\mathbb{R})$ that is independent of $(\xi_k)_{k\in\mathbb{Z}}$ and $U$.
Set
$$S_{-k}:=-(\xi_{-1}+\ldots+\xi_{-k}),\quad k\in\mathbb{N},\qquad S_0^*:=U\xi_0,\qquad S_{-1}^*:=-(1-U)\xi_0$$
and
$$S_k^*:=S_0^*+S_k,\quad k\in\mathbb{N},\qquad S_{-k}^*:=S_{-1}^*+S_{-k+1},\quad k\in\mathbb{N}\setminus\{1\}. \tag{3.4}$$

Recall (see Proposition 6.2.7) that the distribution of $(S_0^*,-S_{-1}^*)$ coincides with the joint limit distribution of the overshoot $S_{\nu(t)}-t$ and the undershoot $t-S_{\nu(t)-1}$ as $t\to\infty$, and that
$$\mathbb{P}\{S_0^*\in dx\}=\mathbb{P}\{-S_{-1}^*\in dx\}=\mu^{-1}\,\mathbb{P}\{\xi>x\}\mathbb{1}_{(0,\infty)}(x)\,dx. \tag{3.5}$$
It is easily seen that the point process $\sum_{k\in\mathbb{Z}}\varepsilon_{S_k^*}$ is distributionally invariant under reflection at the origin, i.e., $\sum_{k\in\mathbb{Z}}\varepsilon_{S_k^*}$ has the same distribution as $\sum_{k\in\mathbb{Z}}\varepsilon_{-S_k^*}$. A deeper result (see Theorem 4.1 in Chapter 8 of [251]) states that the point process $\sum_{k\in\mathbb{Z}}\varepsilon_{S_k^*}$ is shift-invariant, i.e., $\sum_{k\in\mathbb{Z}}\varepsilon_{S_k^*}$ has the same distribution as $\sum_{k\in\mathbb{Z}}\varepsilon_{S_k^*+t}$ for every $t\in\mathbb{R}$. As a consequence,
$$\big(\#\{k\in\mathbb{N}_0: t-s\le S_k^*\le t\}\big)_{s\in[0,t]}\overset{\mathrm{f.d.}}{=}\big(\#\{k\in\mathbb{N}_0: S_k^*\le s\}\big)_{s\in[0,t]}. \tag{3.6}$$
The shift-invariance alone implies that the intensity measure of $\sum_{k\in\mathbb{Z}}\varepsilon_{S_k^*}$ is a constant multiple of the Lebesgue measure, where the constant can be identified as $\mu^{-1}$ by the elementary renewal theorem (formula (6.4)). In conclusion,
$$\mathbb{E}\Big(\sum_{k\in\mathbb{Z}}\varepsilon_{S_k^*}\Big)(dx)=\mu^{-1}\,dx. \tag{3.7}$$
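The two-sided stationary construction is easy to simulate. The sketch below (with $\xi\sim\Gamma(2,1)$ as an illustrative choice, so $\mu=2$ and the size-biased $\xi_0$ is $\Gamma(3,1)$) builds the points $S_k^*$, $k\ge0$, from (3.4) and checks empirically that the mean number of points per unit length is $\mu^{-1}$, as the intensity formula (3.7) asserts.

```python
import random

random.seed(3)
mu = 2.0                      # E xi for xi ~ Gamma(2,1) (illustrative choice)
L, reps, counts = 10.0, 4000, 0

for _ in range(reps):
    # size-biased copy xi_0 of Gamma(2,1) is Gamma(3,1); U uniform on [0, 1]
    xi0 = sum(random.expovariate(1.0) for _ in range(3))
    s = random.random() * xi0            # S_0^* = U * xi_0, first point in (0, inf)
    n = 0
    while s <= L:                        # points S_k^* = S_0^* + S_k in [0, L]
        n += 1
        s += random.expovariate(1.0) + random.expovariate(1.0)   # xi ~ Gamma(2,1)
    counts += n

print(counts / reps / L)   # empirical intensity, close to 1/mu = 0.5
```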

Fix any $u\in\mathbb{R}$. Since $\lim_{k\to\infty}S_{-k}^*=-\infty$ a.s., the sum
$$\sum_{k\le-1}X_{k+1}(u+S_k^*)\mathbb{1}_{\{S_k^*\ge-u\}}$$
is a.s. finite because the number of nonzero summands is a.s. finite. Define
$$Y^*(u):=\sum_{k\in\mathbb{Z}}X_{k+1}(u+S_k^*)=\sum_{k\in\mathbb{Z}}X_{k+1}(u+S_k^*)\mathbb{1}_{\{S_k^*\ge-u\}},$$
the random variable $Y^*(u)$ being almost surely finite provided that the series $\sum_{k\ge0}X_{k+1}(u+S_k^*)\mathbb{1}_{\{S_k^*\ge-u\}}$ converges in probability. It is natural to call $(Y^*(u))_{u\in\mathbb{R}}$ the stationary random process with immigration.

3.2.2 Weak Convergence

Theorem 3.2.1 provides sufficient conditions for weak convergence of the finite-dimensional distributions of $(Y(t+u))_{u\in\mathbb{R}}$ as $t\to\infty$.
Theorem 3.2.1 Suppose that
• $X$ and $\xi$ are independent;
• $\mu=\mathbb{E}\xi<\infty$;
• the distribution of $\xi$ is nonlattice.
If the function $G(t):=\mathbb{E}[|X(t)|\wedge1]$ is directly Riemann integrable (dRi)¹ on $\mathbb{R}_+=[0,\infty)$, then, for each $u\in\mathbb{R}$, the series $\sum_{k\ge0}X_{k+1}(u+S_k^*)\mathbb{1}_{\{S_k^*\ge-u\}}$ is absolutely convergent with probability one, and, for any $n\in\mathbb{N}$ and any finite $u_1<u_2<\ldots<u_n$,
$$\big(Y(t+u_1),\ldots,Y(t+u_n)\big)\overset{d}{\to}\big(Y^*(u_1),\ldots,Y^*(u_n)\big),\qquad t\to\infty. \tag{3.8}$$

¹ See Section 6.2.2 for the definition and properties of directly Riemann integrable functions.
Here is an example in which $Y(t)$ fails to converge in distribution as $t\to\infty$. In particular, this shows that Lebesgue integrability of $G$, even when taken together with the condition $\lim_{t\to\infty}G(t)=0$, is not enough to ensure that (3.8) holds.
Example 3.2.1 Let $X(t)=h(t):=(1\wedge1/t^2)\mathbb{1}_{\mathbb{Q}}(t)$, $t\ge0$, where $\mathbb{Q}$ denotes the set of rational numbers. Observe that $G(t)=\mathbb{E}(|X(t)|\wedge1)=h(t)$ is Lebesgue integrable but not Riemann integrable. Let the distribution of $\xi$ be such that $\mathbb{P}\{\xi\in\mathbb{Q}\cap(0,1]\}=1$ and $\mathbb{P}\{\xi=r\}>0$ for all $r\in\mathbb{Q}\cap(0,1]$. Then the distribution of $\xi$ is nonlattice. Since the $\xi_k$ take rational values a.s., all $S_k$ take rational values on a set of probability $1$. This entails $Y(t)=0$ for $t\in\mathbb{R}\setminus\mathbb{Q}$. On the other hand, in the given situation, $Y(t)$ does not converge to $0$ in distribution when $t$ approaches $+\infty$ along a sequence of rational numbers. In fact, for $t\in\mathbb{Q}$, $Y(t)=\hat Y(t)$ a.s., where $\hat Y(t)=\sum_{k\ge0}f(t-S_k)\mathbb{1}_{\{S_k\le t\}}$ with $f(t)=1\wedge1/t^2$ for $t\ge0$. The so defined $f$ is dRi as a nonincreasing Lebesgue integrable function, see Lemma 6.2.1(a). Therefore, from Theorem 3.2.1 we conclude that
$$Y(t)=\hat Y(t)\overset{d}{\to}\sum_{k\ge0}f(S_k^*),\qquad t\to\infty,\ t\in\mathbb{Q}.$$
The latter random variable is positive a.s.


In the situation when $Y(t)$ does not converge, yet $Y(t)-a(t)$ does converge for appropriate $a(t)$, our results are still incomplete. At present we can only treat one-dimensional convergence of renewal shot noise, i.e., the random process with immigration that corresponds to a deterministic (nonintegrable) $X$. Observe that the random variable $Y^*(0)=\sum_{k\ge0}h(S_k^*)$ is then not well defined, which motivates us to set
$$Y_\diamond:=\lim_{t\to\infty}\Big(\sum_{k\ge0}h(S_k^*)\mathbb{1}_{\{S_k^*\le t\}}-\mu^{-1}\int_0^t h(y)\,dy\Big) \tag{3.9}$$
whenever $\mu<\infty$ and the distribution of $\xi$ is nonlattice.
Theorem 3.2.2 Assume that the distribution of $\xi$ is nonlattice. Let $h:\mathbb{R}_+\to\mathbb{R}$ be a locally bounded, almost everywhere continuous, eventually nonincreasing and nonintegrable function.
(A1) Suppose $\sigma^2:=\mathrm{Var}\,\xi<\infty$ and
$$\int_0^\infty(h(y))^2\,dy<\infty. \tag{3.10}$$
Then $Y_\diamond$ exists as the limit in $L^2$ in (3.9), and
$$Y(t)-\mu^{-1}\int_0^t h(y)\,dy\overset{d}{\to}Y_\diamond,\qquad t\to\infty. \tag{3.11}$$
Relation (3.11) also holds with $\mu^{-1}\int_0^t h(y)\,dy$ replaced by $\mathbb{E}Y(t)$.
For the rest of the theorem, assume that $h$ is eventually twice differentiable² and that $h''$ is eventually nonnegative.
(A2) Suppose $\mathbb{E}\xi^r<\infty$ for some $1<r<2$. If there exists an $a>0$ such that $h(y)>0$ for $y\ge a$,
$$\int_a^\infty(h(y))^r\,dy<\infty, \tag{3.12}$$
and³
$$h''(t)=O\big(t^{-2-1/r}\big),\qquad t\to\infty, \tag{3.13}$$
then $Y_\diamond$ is well defined as the a.s. limit in (3.9). Furthermore, (3.11) holds.
(A3) Suppose $\mathbb{P}\{\xi>x\}\sim x^{-\alpha}\ell(x)$ as $x\to\infty$ for some $1<\alpha<2$ and some $\ell$ slowly varying at $\infty$. If there exists an $a>0$ such that $h(y)>0$ for $y\ge a$,
$$\int_a^\infty(h(y))^\alpha\,\ell(1/h(y))\,dy<\infty, \tag{3.14}$$
and
$$h''(t)=O\big(t^{-2}(c(t))^{-1}\big),\qquad t\to\infty, \tag{3.15}$$
where $c(t)$ is a positive function satisfying
$$\lim_{t\to\infty}t\,\ell(c(t))\,(c(t))^{-\alpha}=1,$$
then $Y_\diamond$ exists as the limit in probability in (3.9), and (3.11) holds.

3.2.3 Applications of Theorem 3.2.1

We first derive an equivalent condition for the direct Riemann integrability of $G(t)=\mathbb{E}(|X(t)|\wedge1)$ that is more suitable for applications.
With probability one, $X$ takes values in $D(\mathbb{R})$, and hence is continuous almost everywhere (a.e.). This carries over to $t\mapsto|X(t)|\wedge1$. From Lebesgue's dominated convergence theorem we conclude that $G$ is a.e. continuous. Since $G$ is also
² $h$ is called eventually twice differentiable if there exists a $t_0\ge0$ such that $h$ is twice differentiable on $(t_0,\infty)$.
³ If $h''$ is eventually monotone, then (3.13) and (3.15) are consequences of (3.12) and (3.14), respectively.
bounded, it must be locally Riemann integrable. From this we conclude that the direct Riemann integrability of $G$ is equivalent to
$$\sum_{k\ge0}\sup_{t\in[k,k+1)}\mathbb{E}[|X(t)|\wedge1]<\infty. \tag{3.16}$$
The direct Riemann integrability of $G$ entails $\lim_{t\to\infty}G(t)=0$ (see Section 6.2.2 for the proof), whence $\lim_{t\to\infty}X(t)=0$ in probability. We now give an example in which $X(t)$ does not converge to zero a.s., yet satisfies (3.16) (hence $G$ is dRi).
Example 3.2.2 Let $\theta$ be uniformly distributed on $[0,1]$ and set
$$X(t):=\sum_{k\ge1}\mathbb{1}_{\{k+\frac{k^2\theta}{k^2+1}\le t<k+\theta\}},\qquad t\ge0.$$
Then $X\big(n+n^2(n^2+1)^{-1}\theta\big)=1$ a.s. for $n\in\mathbb{N}$ and thereupon $\limsup_{t\to\infty}X(t)=1$ a.s. On the other hand,
$$\sup_{t\in[n,n+1)}\mathbb{E}[|X(t)|\wedge1]=\sup_{t\in[n,n+1)}\mathbb{E}[X(t)]=(n^2+1)^{-1}$$
for $n\in\mathbb{N}$, and inequality (3.16) holds true.
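A quick Monte Carlo check of the computation above: for fixed $n$, the probability $\mathbb{P}\{X(t)=1\}$ is maximized over $t\in[n,n+1)$ at $t=n+n^2/(n^2+1)$, where it equals $(n^2+1)^{-1}$. The sketch below estimates this probability by sampling $\theta$; the value $n=3$ is an arbitrary illustrative choice.

```python
import random

random.seed(11)
n = 3
t = n + n**2 / (n**2 + 1)          # the maximizing time in [n, n+1)
trials, hits = 200_000, 0
for _ in range(trials):
    theta = random.random()        # theta uniform on [0, 1]
    # X(t) = 1  iff  n + n^2*theta/(n^2+1) <= t < n + theta
    if n + n**2 * theta / (n**2 + 1) <= t < n + theta:
        hits += 1
print(hits / trials)               # close to 1/(n^2+1) = 0.1
```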
In the examples that follow, suppose that the three bulleted assumptions of Theorem 3.2.1 hold.
Example 3.2.3 Let $\mathbb{P}\{X(t)\ge0\}=1$ and $\mathbb{P}\{X(t)\in(0,1)\}=0$ for each $t\ge0$. Suppose that, with probability one, $X$ gets absorbed in the unique absorbing state $0$. This means that the random variable $\rho:=\inf\{t: X(t)=0\}$ is a.s. finite, and $X(t)=0$ for $t\ge\rho$. Then $\mathbb{E}\rho<\infty$ is necessary and sufficient for (3.8) to hold.
Indeed, if $\mathbb{E}\rho<\infty$, then from the equality
$$\mathbb{E}(|X(t)|\wedge1)=\mathbb{P}\{|X(t)|\ge1\}=\mathbb{P}\{\rho>t\}$$
we deduce that the function $G(t)=\mathbb{E}(|X(t)|\wedge1)$ is dRi on $\mathbb{R}_+$ as a nonincreasing Lebesgue integrable function (see Lemma 6.2.1(a)). Therefore, (3.8) follows from Theorem 3.2.1.
Suppose now that $\mathbb{E}\rho=\infty$. By the strong law of large numbers, for any $\varepsilon>0$, there exists an a.s. finite random variable $M$ such that $S_k^*\le(\mu+\varepsilon)k$ for $k\ge M$. Therefore, for any $u\in\mathbb{R}$,
$$\sum_{k\ge0}\mathbb{P}\big\{X_{k+1}(u+S_k^*)\mathbb{1}_{\{u+S_k^*\ge0\}}\ge1\,\big|\,(S_j^*)_{j\in\mathbb{N}_0}\big\}=\sum_{k\ge0}\mathbb{1}_{\{u+S_k^*\ge0\}}\mathbb{P}\big\{\rho_{k+1}>u+S_k^*\,\big|\,(S_j^*)\big\}\\ \ge\sum_{k\ge M\vee\tau^*(u)}\mathbb{P}\big\{\rho-u>(\mu+\varepsilon)k\,\big|\,(S_j^*)\big\}=\infty\quad\text{a.s.,}$$
where $\rho_k:=\inf\{t: X_k(t)=0\}$ and $\tau^*(u):=\inf\{k\in\mathbb{N}_0: S_k^*\ge-u\}$. Given $(S_j^*)$, the series $\sum_{k\ge0}X_{k+1}(u+S_k^*)\mathbb{1}_{\{S_k^*\ge-u\}}$ does not converge a.s. by the three-series
theorem. Since the general term of the series is nonnegative, it follows that, given $(S_j^*)$, the series diverges a.s. Hence, for each $u\in\mathbb{R}$, $\sum_{k\in\mathbb{Z}}X_{k+1}(u+S_k^*)\mathbb{1}_{\{S_k^*\ge-u\}}=\infty$ a.s., and (3.8) cannot hold.
Here are several specializations of the aforementioned process.
(a) In (3.3), let $(Z_{1,1}(t))_{t\ge0}$ be a subcritical or critical Bellman–Harris process (see Chapter IV in [20] for the definition and many properties) with a single ancestor. Then $Y$ is a subcritical or critical Bellman–Harris process with immigration at the epochs of a renewal process. Let $\zeta$ and $N$ be independent with $\zeta$ being distributed according to the life length distribution and $N$ according to the offspring distribution of the process. Suppose that $\mathbb{P}\{\zeta=0\}=0$, $\mathbb{P}\{N=0\}<1$ and $\mathbb{P}\{N=1\}<1$. Then $Z_{1,1}(t)+\ldots+Z_{\eta_0,1}(t)$ is an instance of the process $X$ discussed above, and we infer that $Y$ satisfies (3.8) if, and only if, $\mathbb{E}\rho<\infty$. The latter is equivalent to
$$\int_0^\infty\Big(1-\sum_{k\ge0}\big(\mathbb{P}\{Z_{1,1}(t)=0\}\big)^k\,\mathbb{P}\{\eta=k\}\Big)dt<\infty$$
because
$$\mathbb{P}\{\rho\le t\}=\mathbb{P}\{Z_{1,1}(t)+\ldots+Z_{\eta_0,1}(t)=0\}=\sum_{k\ge0}\big(\mathbb{P}\{Z_{1,1}(t)=0\}\big)^k\,\mathbb{P}\{\eta=k\},$$
where $\eta$ and $\eta_0$ follow the immigration distribution.
(b) As in Example 3.1.2(a), suppose that $X(t)=\mathbb{1}_{\{\eta>t\}}$ for a positive random variable $\eta$. In this case $\rho=\eta$. Hence the corresponding $Y$ satisfies (3.8) if, and only if, $\mathbb{E}\eta<\infty$.
(c) Let $X$ be a birth and death process with $X(0)=i\in\mathbb{N}$ a.s. Suppose that $X$ is eventually absorbed at $0$ with probability one. Then the corresponding $Y$ satisfies (3.8) if, and only if, $\mathbb{E}\rho<\infty$. A criterion for the finiteness of $\mathbb{E}\rho$ expressed in terms of infinitesimal intensities is given in Theorem 7.1 on p. 149 of [172].
Example 3.2.4 As in Example 3.1.2(b), let $X(t)=h(t)\eta$ where $\eta$ is a random variable independent of $\xi$.
(a) Suppose that $\mathbb{P}\{\eta=b\}=1$ for some $b\in\mathbb{R}$, and that the function $t\mapsto|h(t)|\wedge1$ is dRi. Then relation (3.8) holds.
(b) Suppose $h(t)=e^{-at}$, $a>0$. If $\mathbb{E}\log^+|\eta|<\infty$, then the nonincreasing function $G(t)=\mathbb{E}\big(|\eta|e^{-at}\wedge1\big)$ is Lebesgue integrable on $\mathbb{R}_+$, hence dRi by Lemma 6.2.1(a). Thus Theorem 3.2.1 implies (3.8). If $\mathbb{E}\log^+|\eta|=\infty$, then, by Theorem 2.1.1,
$$\lim_{n\to\infty}\Big|\sum_{k=0}^n\eta_{k+1}\exp(-aS_k)\Big|=\infty$$
in probability, where $\eta_1,\eta_2,\ldots$ are i.i.d. copies of $\eta$. The latter implies that (3.8) cannot hold.
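The convergent half of the dichotomy in (b) is easy to observe numerically. Below is a hedged sketch (the $\mathrm{Exp}(1)$ law for $\xi$ and the standard normal law for $\eta$ are illustrative choices): for a light-tailed $\eta$, so that $\mathbb{E}\log^+|\eta|<\infty$, the partial sums of $\sum_k\eta_{k+1}e^{-aS_k}$ stabilize quickly, since the terms are exponentially damped.

```python
import math
import random

random.seed(5)
a = 1.0

def partial_sums(n, sample_eta):
    """Partial sums of sum_k eta_{k+1} * exp(-a * S_k) with xi ~ Exp(1)."""
    s, S, out = 0.0, 0.0, []
    for _ in range(n):
        s += sample_eta() * math.exp(-a * S)
        S += random.expovariate(1.0)
        out.append(s)
    return out

# Light-tailed eta: E log^+ |eta| < infinity, so the series converges a.s.
ps = partial_sums(500, lambda: random.gauss(0.0, 1.0))
tail = abs(ps[-1] - ps[249])
print(tail)   # negligible: terms beyond index ~250 carry a factor exp(-S_250)
```

Replacing `sample_eta` by a distribution with $\mathbb{E}\log^+|\eta|=\infty$ (e.g. one with an extremely heavy tail) produces occasional gigantic terms at arbitrarily late indices, in line with the divergence statement above.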

3.2.4 Proofs for Section 3.2.2

Denote by $N_p$ the set of Radon point measures on $\mathbb{R}$, equipped with the topology of vague convergence, and let $\varepsilon_{x_0}$ denote the probability measure concentrated at the point $x_0\in\mathbb{R}$. Recall that, for $m_n,m\in N_p$, $\lim_{n\to\infty}m_n=m$ vaguely if, and only if,
$$\lim_{n\to\infty}\int f(x)\,m_n(dx)=\int f(x)\,m(dx)$$
for any continuous function $f:\mathbb{R}\to\mathbb{R}_+$ with compact support. According to Proposition 3.17 in [235], there is a metric $\varrho$ on $N_p$ which makes $(N_p,\varrho)$ a complete separable metric space, while convergence in this metric is equivalent to vague convergence. Further, for later use, recall that any $m\in N_p$ has a representation of the form $m=\sum_{k\in\mathbb{Z}}\varepsilon_{t_k}$ for $t_k\in\mathbb{R}$. Moreover, this representation is unique subject to the constraints $t_k\le t_{k+1}$ for all $k\in\mathbb{Z}$ and $t_{-1}<0\le t_0$. The $t_k$ are given by
$$t_k=\begin{cases}\inf\{t\ge0: m([0,t])\ge k+1\} & \text{if }k\ge0,\\ -\inf\{t\ge0: m([-t,0))\ge-k\} & \text{if }k<0.\end{cases} \tag{3.17}$$
First we prove three auxiliary results. Lemma 3.2.3 given next and the continuous mapping theorem are the key technical tools in the proof of Theorem 3.2.1.
Lemma 3.2.3 Assume that $\mu=\mathbb{E}\xi<\infty$ and that the distribution of $\xi$ is nonlattice. Then
$$\sum_{k\ge0}\varepsilon_{t-S_k}\Rightarrow\sum_{j\in\mathbb{Z}}\varepsilon_{S_j^*},\qquad t\to\infty$$
on $N_p$.
Proof Let $h:\mathbb{R}\to\mathbb{R}_+$ be a continuous function with compact support. We have to prove that
$$\sum_{k\ge0}h(t-S_k)\overset{d}{\to}\sum_{j\in\mathbb{Z}}h(S_j^*),\qquad t\to\infty. \tag{3.18}$$
We start by reviewing a classical coupling. Let $(\hat\xi_k)_{k\in\mathbb{N}}$ be an independent copy of the sequence $(\xi_k)_{k\in\mathbb{N}}$, and let $\hat S_0^*$ denote a random variable that is independent of all previously introduced random variables and has the same distribution as $S_0^*$ (see (3.5)). Put
$$\hat S_0:=0,\qquad \hat S_k:=\hat\xi_1+\ldots+\hat\xi_k,\quad k\in\mathbb{N},$$
and then $\hat S_k^*:=\hat S_0^*+\hat S_k$, $k\in\mathbb{N}_0$. It is known (see p. 210 in [80]) that, for any fixed $\varepsilon>0$, there exist a.s. finite stopping times $\sigma_1=\sigma_1(\varepsilon)$ and $\sigma_2=\sigma_2(\varepsilon)$ such that $|S_{\sigma_2}-\hat S_{\sigma_1}^*|\le\varepsilon$ a.s. Define the coupled random walk
$$\tilde S_k:=\begin{cases}\hat S_k^*, & \text{if }k\le\sigma_1,\\ \hat S_{\sigma_1}^*+\sum_{j=\sigma_2+1}^{\sigma_2+k-\sigma_1}\xi_j, & \text{if }k\ge\sigma_1+1,\end{cases}$$
for $k\in\mathbb{N}_0$. Then $(\tilde S_k)_{k\in\mathbb{N}_0}\overset{d}{=}(\hat S_k^*)_{k\in\mathbb{N}_0}$. The construction of the sequence $(\tilde S_k)_{k\in\mathbb{N}_0}$ guarantees that
$$\tilde S_{\sigma_1+k}-\varepsilon\le S_{\sigma_2+k}\le\tilde S_{\sigma_1+k}+\varepsilon \tag{3.19}$$
for $k\in\mathbb{N}_0$.
With the same $\varepsilon$ as above, we set $h^{(\varepsilon)}(x):=\sup_{|y-x|\le\varepsilon}h(y)$ and $h_{(\varepsilon)}(x):=\inf_{|y-x|\le\varepsilon}h(y)$, $x\in\mathbb{R}$. The so defined functions are continuous with compact supports. Indeed, if $x$ were a discontinuity of $h^{(\varepsilon)}$, then $x-\varepsilon$ or $x+\varepsilon$ would be a discontinuity of $h$. Consequently, since $h$ is continuous, so is $h^{(\varepsilon)}$ and, by the same argument, $h_{(\varepsilon)}$. The claim about the supports is obvious. Observe that the sum on the right-hand side of (3.18) is a.s. finite because the number of its nonzero terms is a.s. finite. The same is true if $h$ is replaced by $h^{(\varepsilon)}$ or $h_{(\varepsilon)}$.
Using now (3.19) we infer
$$\begin{aligned}\sum_{k\ge0}h(t-S_k)&=\sum_{k=0}^{\sigma_2-1}h(t-S_k)+\sum_{k\ge0}h\big(t-(S_{\sigma_2+k}-\tilde S_{\sigma_1+k})-\tilde S_{\sigma_1+k}\big)\\&\le\sum_{k=0}^{\sigma_2-1}h(t-S_k)+\sum_{k\ge0}h^{(\varepsilon)}(t-\tilde S_{\sigma_1+k})\\&=\sum_{k=0}^{\sigma_2-1}h(t-S_k)-\sum_{k=0}^{\sigma_1-1}h^{(\varepsilon)}(t-\tilde S_k)+\sum_{k\ge0}h^{(\varepsilon)}(t-\tilde S_k).\end{aligned}$$
The first two summands on the right-hand side tend to $0$ a.s. as $t\to\infty$ in view of $\lim_{t\to\infty}h(t)=\lim_{t\to\infty}h^{(\varepsilon)}(t)=0$. With $A^{(\varepsilon)}:=\inf\{t: h^{(\varepsilon)}(t)\ne0\}$ and $g^{(\varepsilon)}(t):=h^{(\varepsilon)}(t+A^{(\varepsilon)})$, $t\in\mathbb{R}$, the third term satisfies
$$\sum_{k\ge0}h^{(\varepsilon)}(t-\tilde S_k)=\sum_{k\ge0}g^{(\varepsilon)}(t-A^{(\varepsilon)}-\tilde S_k)\overset{d}{=}\sum_{k\ge0}g^{(\varepsilon)}(t-A^{(\varepsilon)}-\hat S_k^*)\\ \overset{d}{=}\sum_{k\ge0}h^{(\varepsilon)}(S_k^*+A^{(\varepsilon)})=\sum_{k\in\mathbb{Z}}h^{(\varepsilon)}(S_k^*+A^{(\varepsilon)})\overset{d}{=}\sum_{k\in\mathbb{Z}}h^{(\varepsilon)}(S_k^*),$$
where the second distributional equality is a consequence of (3.6), the next equality is due to the fact that $h^{(\varepsilon)}(S_k^*+A^{(\varepsilon)})=0$ for negative integer $k$, and the last
distributional equality follows from the distributional shift invariance of $\sum_{k\in\mathbb{Z}}\varepsilon_{S_k^*}$ (see p. 90). Further, since $h$ is continuous, we have $h^{(\varepsilon)}\downarrow h$ as $\varepsilon\downarrow0$. Therefore
$$\lim_{\varepsilon\downarrow0}\mathbb{E}\Big|\sum_{k\in\mathbb{Z}}h^{(\varepsilon)}(S_k^*)-\sum_{k\in\mathbb{Z}}h(S_k^*)\Big|=\lim_{\varepsilon\downarrow0}\mu^{-1}\int_{\mathbb{R}}\big(h^{(\varepsilon)}(x)-h(x)\big)\,dx=0$$
by the monotone convergence theorem. The convergence in mean together with the monotonicity of $\sum_{k\in\mathbb{Z}}h^{(\varepsilon)}(S_k^*)$ in $\varepsilon$ ensures that $\sum_{k\in\mathbb{Z}}h^{(\varepsilon)}(S_k^*)$ converges to $\sum_{k\in\mathbb{Z}}h(S_k^*)$ a.s. as $\varepsilon\downarrow0$. We conclude that
$$\limsup_{t\to\infty}\mathbb{P}\Big\{\sum_{k\ge0}h(t-S_k)>x\Big\}\le\mathbb{P}\Big\{\sum_{j\in\mathbb{Z}}h(S_j^*)>x\Big\}$$
for every continuity point $x$ of the distribution function (d.f.) of $\sum_{j\in\mathbb{Z}}h(S_j^*)$. More precisely, let $(\varepsilon_n)_{n\in\mathbb{N}}$ be a sequence with $\varepsilon_n\downarrow0$ as $n\to\infty$. Let $x$ be a continuity point of the d.f. of $\sum_{j\in\mathbb{Z}}h(S_j^*)$ and let $x-\delta$ ($\delta>0$) be a continuity point of the d.f. of $\sum_{j\in\mathbb{Z}}h(S_j^*)$ and of the d.f.'s of all $\sum_{j\in\mathbb{Z}}h^{(\varepsilon_n)}(S_j^*)$ (the set of such $\delta$ is dense in $\mathbb{R}$). Then
$$\begin{aligned}\limsup_{t\to\infty}\mathbb{P}\Big\{\sum_{k\ge0}h(t-S_k)>x\Big\}&\le\limsup_{t\to\infty}\mathbb{P}\Big\{\sum_{k=0}^{\sigma_2-1}h(t-S_k)-\sum_{k=0}^{\sigma_1-1}h^{(\varepsilon_n)}(t-\tilde S_k)>\delta\Big\}\\&\quad+\limsup_{t\to\infty}\mathbb{P}\Big\{\sum_{k\ge0}h^{(\varepsilon_n)}(t-\tilde S_k)>x-\delta\Big\}=\mathbb{P}\Big\{\sum_{k\in\mathbb{Z}}h^{(\varepsilon_n)}(S_k^*)>x-\delta\Big\}.\end{aligned}$$
As $n\to\infty$, the last expression tends to $\mathbb{P}\big\{\sum_{j\in\mathbb{Z}}h(S_j^*)>x-\delta\big\}$. Sending now $\delta\downarrow0$ along an appropriate sequence, we arrive at the desired conclusion. Corresponding lower bounds can be obtained similarly, starting with
$$\sum_{k\ge0}h(t-S_k)\ge-\sum_{k=0}^{\sigma_1-1}h_{(\varepsilon)}(t-\tilde S_k)+\sum_{k\ge0}h_{(\varepsilon)}(t-\tilde S_k).$$
The proof of Lemma 3.2.3 is complete. □
As explained in Section 2.2 of [149], there exists a metric $d$ such that $(D(\mathbb{R}),d)$ is a complete separable metric space. Furthermore, $\lim_{n\to\infty}f_n=f$ in $(D(\mathbb{R}),d)$ if, and only if, there exist
$$\lambda_n\in\Lambda:=\{\lambda:\ \lambda\ \text{is a strictly increasing continuous function on}\ \mathbb{R}\ \text{with}\ \lambda(\pm\infty)=\pm\infty\}$$
such that, for any finite $a$ and $b$, $a<b$,
$$\lim_{n\to\infty}\max\Big\{\sup_{u\in[a,b]}|\lambda_n(u)-u|,\ \sup_{u\in[a,b]}|f_n(\lambda_n(u))-f(u)|\Big\}=0.$$
Lemma 3.2.4 Suppose that $\lim_{n\to\infty}t_n=t$ in $\mathbb{R}$ and $\lim_{n\to\infty}f_n=f$ in $(D(\mathbb{R}),d)$. Then
$$f_n(t_n+\cdot)\to f(t+\cdot),\qquad n\to\infty$$
in $(D(\mathbb{R}),d)$.
Proof Without loss of generality we assume that $t=0$. It suffices to prove that there exist $\lambda_n\in\Lambda$, $n\in\mathbb{N}$, such that, for any $-\infty<a<b<\infty$,
$$\lim_{n\to\infty}\max\Big\{\sup_{u\in[a,b]}|\lambda_n(u)-u|,\ \sup_{u\in[a,b]}|f_n(t_n+\lambda_n(u))-f(u)|\Big\}=0. \tag{3.20}$$
By assumption, $\lim_{n\to\infty}f_n=f$ in $(D(\mathbb{R}),d)$. Hence, there are $\gamma_n\in\Lambda$, $n\in\mathbb{N}$, such that, for any $-\infty<a<b<\infty$,
$$\lim_{n\to\infty}\max\Big\{\sup_{u\in[a,b]}|\gamma_n(u)-u|,\ \sup_{u\in[a,b]}|f_n(\gamma_n(u))-f(u)|\Big\}=0. \tag{3.21}$$
Put $\lambda_n(u):=\gamma_n(u)-t_n$ and note that $\lambda_n\in\Lambda$. Then (3.21) can be rewritten as
$$\lim_{n\to\infty}\max\Big\{\sup_{u\in[a,b]}|\lambda_n(u)-u+t_n|,\ \sup_{u\in[a,b]}|f_n(t_n+\lambda_n(u))-f(u)|\Big\}=0,$$
which is equivalent to (3.20), for $\lim_{n\to\infty}t_n=0$. □
Denote by $D(\mathbb{R})^{\mathbb{Z}}$ the Cartesian product of countably many copies of $D(\mathbb{R})$ endowed with the topology of componentwise convergence via the metric
$$d_{\mathbb{Z}}\big((f_k)_{k\in\mathbb{Z}},(g_k)_{k\in\mathbb{Z}}\big):=\sum_{k\in\mathbb{Z}}2^{-|k|}\big(d(f_k,g_k)\wedge1\big).$$
Note that $(D(\mathbb{R})^{\mathbb{Z}},d_{\mathbb{Z}})$ is a complete and separable metric space. Now consider the metric space $(N_p\times D(\mathbb{R})^{\mathbb{Z}},\gamma)$ where $\gamma(\cdot,\cdot):=d_{\mathbb{Z}}(\cdot,\cdot)+\varrho(\cdot,\cdot)$ (i.e., convergence is defined componentwise). As the Cartesian product of complete and separable spaces, $(N_p\times D(\mathbb{R})^{\mathbb{Z}},\gamma)$ is complete and separable.
For fixed $c>0$, $l\in\mathbb{N}$ and $(u_1,\ldots,u_l)\in\mathbb{R}^l$, define the mapping $\psi_c^{(l)}:N_p\times D(\mathbb{R})^{\mathbb{Z}}\to\mathbb{R}^l$ by
$$\psi_c^{(l)}\big(m,(f_k(\cdot))_{k\in\mathbb{Z}}\big):=\Big(\sum_k f_k(t_k+u_j)\mathbb{1}_{\{|t_k|\le c\}}\Big)_{j=1,\ldots,l}$$
with the $t_k$ given by (3.17).
Lemma 3.2.5 The mapping $\psi_c^{(l)}$ is continuous at all points $(m,(f_k)_{k\in\mathbb{Z}})$ for which $m(\{-c,0,c\})=0$ and for which $u_1,\ldots,u_l$ are continuity points of $f_k(t_k+\cdot)$ for all $k\in\mathbb{Z}$.
Proof Let $c>0$ and suppose that
$$\big(m_n,(f_k^{(n)})_{k\in\mathbb{Z}}\big)\to\big(m,(f_k)_{k\in\mathbb{Z}}\big),\qquad n\to\infty \tag{3.22}$$
in $(N_p\times D(\mathbb{R})^{\mathbb{Z}},\gamma)$, where $m(\{-c,0,c\})=0$. Then, in particular, $\lim_{n\to\infty}m_n=m$ vaguely. Since $m(\{-c,0,c\})=0$, we can apply Theorem 3.13 in [235], which says that $m_n([-c,0))=m([-c,0))=:r_-$ and $m_n([0,c])=m([0,c])=:r_+$ for all sufficiently large $n$. For these $n$, with the definition of $t_k^{(n)}$ and $t_k$ according to (3.17), we have
$$m_n(\cdot\cap[-c,0))=\sum_{k=-r_-}^{-1}\varepsilon_{t_k^{(n)}},\qquad m_n(\cdot\cap[0,c])=\sum_{k=0}^{r_+-1}\varepsilon_{t_k^{(n)}},$$
$$m(\cdot\cap[-c,0))=\sum_{k=-r_-}^{-1}\varepsilon_{t_k},\qquad\text{and}\qquad m(\cdot\cap[0,c])=\sum_{k=0}^{r_+-1}\varepsilon_{t_k},$$
where the empty sum is interpreted as $0$. Theorem 3.13 in [235] further implies that there is convergence of the points of $m_n$ in $[-c,0]$ to the points of $m$ in $[-c,0]$, and analogously with $[-c,0]$ replaced by $[0,c]$. Since $m$ has no point at $0$, this implies that $\lim_{n\to\infty}t_k^{(n)}=t_k$ for $k=-r_-,\ldots,r_+-1$. On the other hand, (3.22) entails $\lim_{n\to\infty}f_k^{(n)}=f_k$ in $(D(\mathbb{R}),d)$ for $k=-r_-,\ldots,r_+-1$. Therefore, Lemma 3.2.4 ensures that
$$f_k^{(n)}(t_k^{(n)}+\cdot)\to f_k(t_k+\cdot),\qquad n\to\infty \tag{3.23}$$
in $(D(\mathbb{R}),d)$ for $k=-r_-,\ldots,r_+-1$.
Now assume that $u_1,\ldots,u_l$ are continuity points of $f_k(t_k+\cdot)$ for all $k\in\mathbb{Z}$. We show that then
$$\psi_c^{(l)}\big(m_n,(f_k^{(n)})_{k\in\mathbb{Z}}\big)\to\psi_c^{(l)}\big(m,(f_k)_{k\in\mathbb{Z}}\big),\qquad n\to\infty. \tag{3.24}$$
Indeed, in the given situation, (3.23) implies that
$$\big(f_k^{(n)}(t_k^{(n)}+u_1),\ldots,f_k^{(n)}(t_k^{(n)}+u_l)\big)\to\big(f_k(t_k+u_1),\ldots,f_k(t_k+u_l)\big),\qquad n\to\infty,$$
for $k=-r_-,\ldots,r_+-1$. Summation of these relations over $k=-r_-,\ldots,r_+-1$ yields (3.24). □
Now we are ready to prove Theorem 3.2.1.
Proof of Theorem 3.2.1 We start by showing that the Lebesgue integrability of $G(t)=\mathbb{E}[|X(t)|\wedge1]$ ensures $|Y^*(u)|<\infty$ a.s. for each $u\in\mathbb{R}$. To this end, fix $u\in\mathbb{R}$ and set $Z_k:=X_{k+1}(u+S_k^*)\mathbb{1}_{\{S_k^*\ge-u\}}$, $k\in\mathbb{Z}$. We infer
$$\sum_{k\in\mathbb{Z}}\mathbb{E}\big(|Z_k|\wedge1\big)=\sum_{k\in\mathbb{Z}}\mathbb{E}\big(|X_{k+1}(u+S_k^*)|\wedge1\big)\mathbb{1}_{\{S_k^*\ge-u\}}=\sum_{k\in\mathbb{Z}}\mathbb{E}\big(G(u+S_k^*)\mathbb{1}_{\{S_k^*\ge-u\}}\big)=\mu^{-1}\int_0^\infty G(x)\,dx<\infty \tag{3.25}$$
having utilized (3.7) for the last equality. Therefore $\sum_{k\in\mathbb{Z}}|Z_k|<\infty$ a.s. by the two-series theorem, which implies $|Y^*(u)|<\infty$ a.s.
Using Lemma 3.2.3 and recalling that the space $N_p\times D(\mathbb{R})^{\mathbb{Z}}$ is separable, we infer
$$\Big(\sum_{k\ge0}\varepsilon_{t-S_k},\,(X_{k+1})_{k\in\mathbb{Z}}\Big)\Rightarrow\Big(\sum_{k\in\mathbb{Z}}\varepsilon_{S_k^*},\,(X_{k+1})_{k\in\mathbb{Z}}\Big),\qquad t\to\infty \tag{3.26}$$
on $N_p\times D(\mathbb{R})^{\mathbb{Z}}$ by Theorem 3.2 in [40]. Fix $l\in\mathbb{N}$ and real numbers $\alpha_1,\ldots,\alpha_l$ and $u_1<\ldots<u_l$. For $k\in\mathbb{Z}$, the number of jumps of $X_{k+1}$ is at most countable a.s. Since the distribution of $S_k^*$ is absolutely continuous, and $S_k^*$ and $X_{k+1}$ are independent, we infer
$$\mathbb{P}\{S_k^*+u\ \text{is a jump of}\ X_{k+1}\}=0$$
for any $u\in\mathbb{R}$. Also, $\sum_{j\in\mathbb{Z}}\varepsilon_{S_j^*}(\{-c,0,c\})=0$ a.s. for every $c>0$ is a consequence of the aforementioned absolute continuity. Hence, according to Lemma 3.2.5, for every $c>0$, the mapping $\psi_c^{(l)}$ is a.s. continuous at $\big(\sum_{k\in\mathbb{Z}}\varepsilon_{S_k^*},(X_{k+1})_{k\in\mathbb{Z}}\big)$. Now apply the continuous mapping theorem to (3.26) twice (first using the map $\psi_c^{(l)}$ and then the map $(x_1,\ldots,x_l)\mapsto\alpha_1x_1+\ldots+\alpha_lx_l$) to obtain that
$$\big(Y_c(t,u_i)\big)_{i=1,\ldots,l}:=\Big(\sum_{k\ge0}X_{k+1}(u_i+t-S_k)\mathbb{1}_{\{|t-S_k|\le c\}}\Big)_{i=1,\ldots,l}\overset{d}{=}\psi_c^{(l)}\Big(\sum_{k\ge0}\varepsilon_{t-S_k},(X_{k+1})_{k\in\mathbb{Z}}\Big)\\ \overset{d}{\to}\psi_c^{(l)}\Big(\sum_{k\in\mathbb{Z}}\varepsilon_{S_k^*},(X_{k+1})_{k\in\mathbb{Z}}\Big)=\Big(\sum_{k\in\mathbb{Z}}X_{k+1}(u_i+S_k^*)\mathbb{1}_{\{|S_k^*|\le c\}}\Big)_{i=1,\ldots,l}=:\big(Y_c^*(u_i)\big)_{i=1,\ldots,l}$$
and that
$$\sum_{i=1}^l\alpha_iY_c(t,u_i)\overset{d}{\to}\sum_{i=1}^l\alpha_iY_c^*(u_i),\qquad t\to\infty.$$
The proof of (3.8) is complete if we verify
$$\sum_{i=1}^l\alpha_iY_c^*(u_i)\overset{d}{\to}\sum_{i=1}^l\alpha_iY^*(u_i),\qquad c\to\infty \tag{3.27}$$
and
$$\lim_{c\to\infty}\limsup_{t\to\infty}\mathbb{P}\Big\{\Big|\sum_{i=1}^l\alpha_i\sum_{k\ge0}X_{k+1}(u_i+t-S_k)\mathbb{1}_{\{|t-S_k|>c\}}\Big|>\varepsilon\Big\}=0 \tag{3.28}$$
for all $\varepsilon>0$.
Proof of (3.27) We claim that the stronger statement $\lim_{c\to\infty}Y_c^*(u)=Y^*(u)$ a.s. for all $u\in\mathbb{R}$ holds. Indeed, as we have shown in (3.25),
$$\sum_{k\in\mathbb{Z}}\mathbb{E}\big(|X_{k+1}(u+S_k^*)|\wedge1\big)\mathbb{1}_{\{S_k^*\ge-u\}}<\infty;$$
in particular, $\sum_{k\in\mathbb{Z}}|X_{k+1}(u+S_k^*)|\mathbb{1}_{\{S_k^*\ge-u\}}<\infty$ a.s. Hence, by the monotone (or dominated) convergence theorem,
$$|Y_c^*(u)-Y^*(u)|\le\sum_{k\in\mathbb{Z}}|X_{k+1}(u+S_k^*)|\mathbb{1}_{\{|S_k^*|>c\}}\to0$$
as $c\to\infty$ a.s.
Proof of (3.28) It suffices to prove
$$\lim_{c\to\infty}\limsup_{t\to\infty}\mathbb{P}\Big\{\Big|\sum_{k\ge0}X_{k+1}(u+t-S_k)\mathbb{1}_{\{|t-S_k|>c\}}\Big|>\varepsilon\Big\}=0$$
for every $u\in\mathbb{R}$. Write
$$\mathbb{P}\Big\{\Big|\sum_{k\ge0}X_{k+1}(u+t-S_k)\mathbb{1}_{\{|t-S_k|>c\}}\Big|>\varepsilon\Big\}\le\mathbb{P}\Big\{\sum_{k\ge0}|X_{k+1}(u+t-S_k)|\mathbb{1}_{\{|t-S_k|>c\}}>\varepsilon\Big\}$$
$$\begin{aligned}&\le\mathbb{P}\Big\{\sum_{k\ge0}|X_{k+1}(u+t-S_k)|\mathbb{1}_{\{|t-S_k|>c,\,|X_{k+1}(u+t-S_k)|\le1\}}>\varepsilon/2\Big\}\\&\quad+\mathbb{P}\Big\{\sum_{k\ge0}|X_{k+1}(u+t-S_k)|\mathbb{1}_{\{|t-S_k|>c,\,|X_{k+1}(u+t-S_k)|>1\}}>\varepsilon/2\Big\}\\&\le\sum_{k\ge0}\mathbb{P}\big\{|t-S_k|>c,\ |X_{k+1}(u+t-S_k)|>1\big\}+2\varepsilon^{-1}\sum_{k\ge0}\mathbb{E}\big(G(u+t-S_k)\mathbb{1}_{\{|t-S_k|>c\}}\big),\end{aligned}$$
having utilized Markov's inequality for the last line.
Without loss of generality we can assume that $G(t)=0$ for $t<0$. Now we intend to show that, for $c>|u|$, the function $f(t):=G(u+t)\mathbb{1}_{(c,\infty)}(|t|)=G(u+t)\mathbb{1}_{(c,\infty)}(t)$ is dRi on $\mathbb{R}_+$. Since $G$ is dRi on $\mathbb{R}_+$, it is locally bounded and a.e. continuous on $\mathbb{R}_+$. Hence $f$ possesses the same properties. Furthermore, $\sum_{n\ge1}\sup_{(n-1)h_0\le y<nh_0}f(y)<\infty$ for $h_0=1$ if $u$ is an integer, and $h_0=|u|/([|u|]+1)$ otherwise, because the corresponding sum is finite with $G$ replacing $f$, by the definition of direct Riemann integrability (see Section 6.2.2). We conclude that $f$ is dRi by Lemma 6.2.1(d), and an application of the key renewal theorem (Proposition 6.2.3) yields
$$\lim_{c\to\infty}\lim_{t\to\infty}\sum_{k\ge0}\mathbb{E}\big(G(u+t-S_k)\mathbb{1}_{\{|t-S_k|>c\}}\big)=\mu^{-1}\lim_{c\to\infty}\int_c^\infty G(u+x)\,dx=0.$$
Since $\mathbb{P}\{|t-S_k|>c,\ |X_{k+1}(u+t-S_k)|>1\}\le\mathbb{E}\big(G(u+t-S_k)\mathbb{1}_{\{|t-S_k|>c\}}\big)$ for $k\in\mathbb{N}_0$, the latter limit relation entails
$$\lim_{c\to\infty}\limsup_{t\to\infty}\sum_{k\ge0}\mathbb{P}\big\{|t-S_k|>c,\ |X_{k+1}(u+t-S_k)|>1\big\}=0,$$
thereby finishing the proof of (3.28). The proof of Theorem 3.2.1 is complete. □
The proof of convergence in Theorem 3.2.2, which can be found in [146], will not be given here, for it is very similar to the proof of Lemma 3.2.3. We shall only show that the limit random variables in Theorem 3.2.2 are well defined.
Proposition 3.2.6 Assume that $\mu<\infty$ and that the distribution of $\xi$ is nonlattice. Let $h:\mathbb{R}_+\to\mathbb{R}$ be locally bounded, eventually nonincreasing and nonintegrable. Under the assumptions of Theorem 3.2.2,
$$Y_\diamond=\lim_{t\to\infty}\Big(\sum_{k\ge0}h(S_k^*)\mathbb{1}_{\{S_k^*\le t\}}-\mu^{-1}\int_0^t h(y)\,dy\Big)$$
exists as the limit in $L^2$ in case (A1), as the a.s. limit in case (A2), and as the limit in probability in case (A3). In all three cases, it is a.s. finite.
For the proof of Proposition 3.2.6 we need an elementary auxiliary result.
Lemma 3.2.7 Let $f:\mathbb{R}_+\to\mathbb{R}_+$ be a nonincreasing function. Then, for every $\lambda>0$,
$$\int_0^n f(\lambda y)\,dy=\sum_{k=0}^n f(\lambda k)+\delta_n(\lambda),\qquad n\in\mathbb{N},$$
where $\delta_n(\lambda)$ converges as $n\to\infty$ to some $\delta(\lambda)\le0$.
Proof We assume w.l.o.g. that $\lambda=1$. For each $n\in\mathbb{N}$,
$$\sum_{k=0}^n f(k)-\int_0^n f(y)\,dy=\sum_{k=0}^{n-1}\Big(f(k)-\int_k^{k+1}f(y)\,dy\Big)+f(n).$$
Since $f$ is nonincreasing, each summand in the sum is nonnegative. Hence, the sum is nondecreasing in $n$. On the other hand, it is bounded from above by
$$\sum_{k=0}^{n-1}\Big(f(k)-\int_k^{k+1}f(y)\,dy\Big)\le\sum_{k=0}^{n-1}\big(f(k)-f(k+1)\big)\le f(0)<\infty.$$
Consequently, the series $\sum_{k\ge0}\big(f(k)-\int_k^{k+1}f(y)\,dy\big)$ converges. Recalling that $\lim_{n\to\infty}f(n)$ exists completes the proof. □
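Lemma 3.2.7 is easy to check numerically. With the illustrative choice $f(y)=1/(1+y)$ and $\lambda=1$ (so that $\delta_n(1)=\ln(n+1)-\sum_{k=0}^n(1+k)^{-1}$, whose limit is $-\gamma$ with $\gamma$ the Euler–Mascheroni constant), the sketch below shows $\delta_n(1)$ negative and converging from below:

```python
import math

def delta(n):
    """delta_n(1) = int_0^n f(y) dy - sum_{k=0}^n f(k) for f(y) = 1/(1+y)."""
    integral = math.log(1 + n)
    riemann = sum(1.0 / (1 + k) for k in range(n + 1))
    return integral - riemann

vals = [delta(n) for n in (10, 100, 1000, 10000)]
print(vals)   # negative, increasing toward -0.5772... = -euler_gamma
```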
Proof of Proposition 3.2.6 We only investigate cases (A1) and (A2), assuming additionally that $h$ is nonincreasing on $\mathbb{R}_+$ (rather than eventually nonincreasing) in case (A1), and that $h$ is nonincreasing and twice differentiable on $\mathbb{R}_+$ with $h''\ge0$ in case (A2). A complete proof in full generality can be found in [146] and [147].
Define
$$Y_t:=\sum_{k\ge0}h(S_k^*)\mathbb{1}_{\{S_k^*\le t\}}-\mu^{-1}\int_0^t h(y)\,dy,\qquad t\ge0.$$
Our aim is to show that $Y_t$ converges as $t\to\infty$ in the asserted sense.
Proof for Case (A1) It suffices to check that
$$\lim_{s\to\infty}\sup_{t>s}\mathbb{E}(Y_t-Y_s)^2=0.$$
Since
$$Y_t-Y_s=\sum_{k\ge0}h(S_k^*)\mathbb{1}_{\{s<S_k^*\le t\}}-\mathbb{E}\Big(\sum_{k\ge0}h(S_k^*)\mathbb{1}_{\{s<S_k^*\le t\}}\Big)$$
for $t>s$, we conclude that
$$\mathbb{E}(Y_t-Y_s)^2=\mathbb{E}\Big(\sum_{k\ge0}h(S_k^*)\mathbb{1}_{\{s<S_k^*\le t\}}\Big)^2-\Big(\mathbb{E}\sum_{k\ge0}h(S_k^*)\mathbb{1}_{\{s<S_k^*\le t\}}\Big)^2\\=\mathbb{E}\Big(\sum_{k\ge0}h(t-S_k^*)\mathbb{1}_{\{S_k^*<t-s\}}\Big)^2-\Big(\mu^{-1}\int_s^t h(y)\,dy\Big)^2,$$
where the last equality follows from (3.6) and (3.7). The first term on the right-hand side equals
$$\mathbb{E}\sum_{k\ge0}\big(h(t-S_k^*)\big)^2\mathbb{1}_{\{S_k^*<t-s\}}+2\,\mathbb{E}\sum_{0\le i<j}h(t-S_i^*)\mathbb{1}_{\{S_i^*<t-s\}}\,h(t-S_j^*)\mathbb{1}_{\{S_j^*<t-s\}}\\=\mu^{-1}\int_0^{t-s}\big(h(t-y)\big)^2dy+2\mu^{-1}\int_0^{t-s}h(t-y)\int_{(0,\,t-s-y)}h(t-y-x)\,d\hat U(x)\,dy\\=\mu^{-1}\int_s^t\big(h(y)\big)^2dy+2\mu^{-1}\int_s^t h(y)\int_{(0,\,y-s)}h(y-x)\,d\hat U(x)\,dy,$$
where $\hat U(x):=\sum_{n\ge1}\mathbb{P}\{S_n\le x\}$, $x\ge0$. Note that $\hat U(x)=U(x)-1$ where $U(x)=\sum_{n\ge0}\mathbb{P}\{S_n\le x\}$ is the renewal function. Hence,
$$\mathbb{E}(Y_t-Y_s)^2=\mu^{-1}\int_s^t\big(h(y)\big)^2dy+2\mu^{-1}\int_s^t h(y)\int_{(0,\,y-s)}h(y-x)\,d\big(\hat U(x)-\mu^{-1}x\big)\,dy.$$

Since $h^2$ is assumed to be integrable, $\lim_{s\to\infty}\sup_{t>s}\int_s^t(h(y))^2dy=0$, and it remains to check that
$$\lim_{s\to\infty}\sup_{t>s}\int_s^t h(y)\int_{(0,\,y-s)}h(y-x)\,d\big(\hat U(x)-\mu^{-1}x\big)\,dy=0. \tag{3.29}$$
Put $H_{s,t}(x):=\int_s^{t-x}h(x+y)h(y)\,dy$ for $x\in[0,t-s)$ and $H_{s,t}(x):=0$ for all other $x$. Note that $H_{s,t}(x)$ is continuous and nonincreasing on $\mathbb{R}_+$. Changing the order of integration followed by integration by parts gives
$$\int_s^t h(y)\int_{(0,\,y-s)}h(y-x)\,d\big(\hat U(x)-\mu^{-1}x\big)\,dy=\int_{(0,\,t-s)}\int_s^{t-x}h(x+y)h(y)\,dy\,d\big(\hat U(x)-\mu^{-1}x\big)$$
$$\le-\int_{(0,\,t-s)}\big(\hat U(x)-\mu^{-1}x\big)\,dH_{s,t}(x)\le\sup_{x\ge0}\big|\hat U(x)-\mu^{-1}x\big|\,H_{s,t}(0)=\sup_{x\ge0}\big|\hat U(x)-\mu^{-1}x\big|\int_s^t\big(h(y)\big)^2dy.$$
By Lorden's inequality (formula (6.6)), $\sup_{x\ge0}\big|\hat U(x)-\mu^{-1}x\big|\le\mu^{-2}\,\mathbb{E}\xi^2<\infty$, and (3.29) follows.
Proof for Case (A2) is divided into three steps.
Step 1 Prove that if $U_n:=\sum_{k=0}^n\big(h(S_k^*)-h(\mu k)\big)$ converges a.s. as $n\to\infty$, then $\sum_{k\ge0}h(S_k^*)\mathbb{1}_{\{S_k^*\le t\}}-\mu^{-1}\int_0^t h(y)\,dy$ converges a.s. as $t\to\infty$.
Step 2 Prove that if the series $\sum_{j\ge1}(\xi_j-\mu)\sum_{k\ge j}h'(\mu k)$ converges a.s., then $U_n$ converges a.s. as $n\to\infty$.
Step 3 Use the three-series theorem to check that, under the conditions stated, the series $\sum_{j\ge1}(\xi_j-\mu)\sum_{k\ge j}h'(\mu k)$ converges a.s.
Step 1 Assume that $U_n$ converges a.s. Then, by Lemma 3.2.7, the sequence $\sum_{k=0}^n h(S_k^*)-\mu^{-1}\int_0^{\mu n}h(y)\,dy$ converges a.s., too. Set
$$\nu^*(t):=\#\{k\in\mathbb{N}_0: S_k^*\le t\},\qquad t\ge0.$$
We then have that $\sum_{k=0}^{\nu^*(t)-1}h(S_k^*)-\mu^{-1}\int_0^{\mu(\nu^*(t)-1)}h(y)\,dy$ converges a.s. as $t\to\infty$ because $\lim_{t\to\infty}\nu^*(t)=\infty$ a.s. To complete this step, it remains to prove that
$$\lim_{t\to\infty}\Big|\int_0^{\mu(\nu^*(t)-1)}h(y)\,dy-\int_0^t h(y)\,dy\Big|=0\quad\text{a.s.} \tag{3.30}$$
To this end, write
$$\Big|\int_0^{\mu(\nu^*(t)-1)}h(y)\,dy-\int_0^t h(y)\,dy\Big|=\int_{\mu(\nu^*(t)-1)\wedge t}^{\mu(\nu^*(t)-1)\vee t}h(y)\,dy\le\big|\mu(\nu^*(t)-1)-t\big|\;h\big(\mu(\nu^*(t)-1)\wedge t\big), \tag{3.31}$$
where the inequality follows from the monotonicity of $h$. By Theorem 3.4.4 in [119], $\mathbb{E}\xi^r<\infty$ implies that
$$\nu(t)-\mu^{-1}t=o(t^{1/r}),\qquad t\to\infty\quad\text{a.s.},$$
where it should be recalled that
$$\nu(t)=\inf\{k\in\mathbb{N}: S_k-S_0>t\}=\#\{k\in\mathbb{N}_0: S_k\le t\}.$$
Since
$$\nu^*(t)=\nu(t-S_0^*)\mathbb{1}_{\{S_0^*\le t\}}\quad\text{a.s.}$$
and $S_0^*$ is a.s. finite, we infer
$$\nu^*(t)-\mu^{-1}t=o(t^{1/r}),\qquad t\to\infty\quad\text{a.s.}$$
This relation implies that the first factor in (3.31) is $o(t^{1/r})$, whereas the second factor is $o(t^{-1/r})$ as $t\to\infty$. The latter relation can be derived as follows. First, in view of (3.12) and the monotonicity of $h$, we have
$$h(t)=o(t^{-1/r}),\qquad t\to\infty. \tag{3.32}$$
Second, by the strong law of large numbers for $\nu^*(t)$, we have
$$\mu\big(\nu^*(t)-1\big)\wedge t\sim t,\qquad t\to\infty\quad\text{a.s.}$$
Altogether, (3.30) has been proved.


Step 2 For each $k\in\mathbb{N}$, by Taylor's formula, there exists a $\theta_k$ between $S_k^*$ and $\mu k$ such that
$$h(S_k^*)-h(\mu k)=h'(\mu k)\big(S_k^*-\mu k\big)+2^{-1}h''(\theta_k)\big(S_k^*-\mu k\big)^2.$$
Set
$$I_n:=2^{-1}\sum_{k=1}^n h''(\theta_k)\big(S_k^*-\mu k\big)^2$$
and write
$$\begin{aligned}U_n-h(S_0^*)+h(0)&=\sum_{k=1}^n h'(\mu k)\big(S_k^*-\mu k\big)+I_n\\&=S_0^*\sum_{k=1}^n h'(\mu k)+\sum_{k=1}^n(\xi_k-\mu)\sum_{j=k}^n h'(\mu j)+I_n\\&=S_0^*\sum_{k=1}^n h'(\mu k)+\sum_{k=1}^n(\xi_k-\mu)\sum_{j\ge k}h'(\mu j)-(S_n-\mu n)\sum_{k\ge n+1}h'(\mu k)+I_n.\end{aligned} \tag{3.33}$$
108 3 Random Processes with Immigration

Since $-h'$ is nonincreasing and nonnegative, we have

$$\sum_{k\ge n+1}\bigl(-h'(\mu k)\bigr)\le\int_n^\infty\bigl(-h'(\mu y)\bigr)\,dy=\mu^{-1}h(\mu n)\le\sum_{k\ge n}\bigl(-h'(\mu k)\bigr)\qquad(3.34)$$

for all $n$. Using the first inequality in (3.34) and the fact that $\lim_{y\to\infty}h(y)=0$, one immediately infers that the first summand in (3.33) converges as $n\to\infty$. The a.s. convergence of the second (principal) term is assumed to hold here. By the Marcinkiewicz–Zygmund law of large numbers (Theorem 2 on p. 125 in [68]),

$$S_n-\mu n=o(n^{1/r}),\qquad n\to\infty\quad\text{a.s.}\qquad(3.35)$$

Therefore, in view of (3.32) and (3.34), the third term in (3.33) converges to zero a.s. Further, $\lim_{k\to\infty}k^{-1}\theta_k=\mu$ a.s. by the strong law of large numbers. Hence, in view of (3.13),

$$h''(\theta_k)=O\bigl(\theta_k^{-2-1/r}\bigr)=O\bigl(k^{-2-1/r}\bigr),\qquad k\to\infty.$$

From (3.35) we infer

$$h''(\theta_k)(S_k-\mu k)^2=o\bigl(k^{-(2-1/r)}\bigr),\qquad k\to\infty\quad\text{a.s.},$$

which implies that $I_n$ converges a.s. as $n\to\infty$, for $2-1/r>1$. Hence the a.s. convergence of $\sum_{k\ge1}(\xi_k-\mu)\sum_{j\ge k}h'(\mu j)$ entails that of $U_n$.
Step 3. Set

$$c_k := \sum_{j\ge k}h'(\mu j)\quad\text{and}\quad\rho_k := c_k(\xi_k-\mu),\qquad k\in\mathbb N.$$

Condition (3.12) ensures that $\sum_{k\ge1}(h(\mu k))^r<\infty$. In view of (3.34),

$$\sum_{k\ge1}\mathrm E|\rho_k|^r=\mathrm E|\xi-\mu|^r\sum_{k\ge1}\Bigl(\sum_{j\ge k}\bigl(-h'(\mu j)\bigr)\Bigr)^r\le\mu^{-r}\,\mathrm E|\xi-\mu|^r\sum_{k\ge1}\bigl(h(\mu(k-1))\bigr)^r<\infty.$$

Hence the series $\sum_{k\ge1}\rho_k$ converges a.s. by Corollary 3 on p. 117 in [68]. □

3.3 Limit Theorems with Scaling

3.3.1 Our Approach

In this section, we focus on the case where the distribution of $\xi$ is in the domain of attraction of a stable distribution of index $\alpha\ne1$ and, if $\mu<\infty$ (equivalently, $\alpha>1$), either $\mathrm EX(t)$ or $\mathrm{Var}\,X(t)$ is too large for convergence to stationarity. In this situation, we investigate weak convergence of the finite-dimensional distributions of $Y_t(u) := (a(t))^{-1}(Y(ut)-b(ut))$ with suitable norming constants $a(t)>0$ and shifts $b(t)\in\mathbb R$. This convergence is mainly regulated by two factors: the tail behavior of $\xi$ and the asymptotics of the finite-dimensional distributions of $X(t)$ as $t\to\infty$. The various combinations of these give rise to a broad spectrum of possible limit results.

Assuming that $h(t) := \mathrm EX(t)$ is finite for all $t\ge0$, we start with the decomposition

$$Y(t)-b(t)=\Bigl(Y(t)-\sum_{k\ge0}h(t-S_k)\mathbf 1_{\{S_k\le t\}}\Bigr)+\Bigl(\sum_{k\ge0}h(t-S_k)\mathbf 1_{\{S_k\le t\}}-b(t)\Bigr)\qquad(3.36)$$

and observe that $Y_t(u)$ may converge if at least one summand in (3.36), properly normalized, converges weakly.

The asymptotics of the first summand, properly normalized, is accessible via martingale central limit theory or convergence results for triangular arrays. When $\mathrm E\xi$ is finite, the normalizing constants and limit processes for the first summand are completely determined by properties of $X$; the influence of the distribution of $\xi$ is small. This phenomenon can easily be understood: the randomness induced by the $S_k$ is governed by the law of large numbers for $\nu(t)$ and is thus degenerate in the limit. When $\mathrm E\xi$ is infinite and $\mathrm P\{\xi>t\}$ is regularly varying with index larger than $-1$, $\nu(t)$, properly normalized, converges weakly to a nondegenerate distribution. Hence, unlike in the finite-mean case, the randomness induced by $\xi$ persists in the limit.

The asymptotic behavior of the second summand, properly normalized, is driven by functional limit theorems or strong approximation results for the first-passage time process $(\nu(t))_{t\ge0}$, as well as by the behavior of the function $h$ at infinity.

It turns out that there are situations in which one of the summands in (3.36) dominates (cases (Bi1) and (Bi2), $i=1,2,3$, of Theorem 3.3.18; cases (C1) and (C2) of Theorem 3.3.19; the case when $h\equiv0$), and those in which the contributions of the summands are comparable (cases (Bi3), $i=1,2,3$, of Theorem 3.3.18 and case (C3) of Theorem 3.3.19). A nice feature of the former situation is that possible dependence of $X$ and $\xi$ gets neutralized by normalization (provided that $\lim_{t\to\infty}a(t)=+\infty$), so that the limit results are only governed by individual contributions of $X$ and $\xi$. Suppose, for the time being, that the latter situation prevails, i.e., the two summands in (3.36) are of the same order, and that $X$ and $\xi$ are independent. From the discussion above it should be clear that whenever $\mathrm E\xi$ is finite, the two limit random processes corresponding to the summands in (3.36) are independent, whereas this is not the case otherwise. Still, we are able to show that the summands in (3.36) converge jointly. When $X$ and $\xi$ are dependent, proving such a joint convergence remains an open problem.
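The decomposition (3.36) is easy to probe numerically. Below is a minimal Monte Carlo sketch (an illustration, not part of the original development) for a random process with immigration $Y(t)=\sum_{k\ge0}X_{k+1}(t-S_k)\mathbf 1_{\{S_k\le t\}}$ with the illustrative choices $X(t)=\eta e^{-t}$, $\xi$ standard exponential, and $\eta$ exponential with mean $2$; all distributional choices are assumptions of the sketch. It checks that $Y(t)$ fluctuates around the centering $b(t)=\mu^{-1}\int_0^t h(y)\,dy$ with $h(y)=\mathrm EX(y)=2e^{-y}$:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_Y(t, n_max=10_000):
    """One value of Y(t) = sum_k X_{k+1}(t - S_k) 1{S_k <= t} for the
    illustrative choice X(s) = eta * exp(-s), xi ~ Exp(1), eta ~ Exp(mean 2)."""
    xi = rng.exponential(1.0, n_max)                 # steps of the random walk
    S = np.concatenate(([0.0], np.cumsum(xi)[:-1]))  # S_0 = 0, S_1, S_2, ...
    eta = rng.exponential(2.0, n_max)                # marks eta_{k+1}
    keep = S <= t
    return float(np.sum(eta[keep] * np.exp(-(t - S[keep]))))

t, mu = 50.0, 1.0                       # mu = E(xi) for Exp(1) steps
b_t = 2.0 * (1.0 - np.exp(-t)) / mu     # mu^{-1} int_0^t h(y) dy with h(y) = 2e^{-y}
samples = np.array([simulate_Y(t) for _ in range(2000)])
print(samples.mean(), b_t)              # the empirical mean hovers near b_t
```

With these choices the first summand in (3.36) is `samples - b_t` up to the asymptotically negligible term $h(t)$, so the printed empirical mean should be close to $b(t)\approx2$.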
Throughout the section we assume that $h(t)=\mathrm EX(t)$ is finite for all $t\ge0$ and that the covariance

$$f(s,t) := \mathrm{Cov}(X(s),X(t))=\mathrm EX(s)X(t)-\mathrm EX(s)\,\mathrm EX(t)$$

is finite for all $s,t\ge0$. The variance of $X$ will be denoted by $v$, i.e., $v(t) := f(t,t)=\mathrm{Var}\,X(t)$. In what follows we assume that $h,v\in D[0,\infty)$. For instance, local uniform integrability of $X^2$ is sufficient for this to be true since the paths of $X$ belong to $D[0,\infty)$. The assumption $h,v\in D[0,\infty)$ implies that $h$ and $v$ are a.e. continuous and locally bounded. Consequently, $\int_0^t h(y)\,dy$ and $\int_0^t v(y)\,dy$ are well defined as Riemann integrals.
Regular Variation in $\mathbb R_+^2$

Definition 3.3.1 A function $r:[0,\infty)\times[0,\infty)\to\mathbb R$ is regularly varying⁴ in $\mathbb R_+^2 := (0,\infty)\times(0,\infty)$ if there exists a function $C:\mathbb R_+^2\to(0,\infty)$, called the limit function, such that

$$\lim_{t\to\infty}\frac{r(ut,wt)}{r(t,t)}=C(u,w),\qquad u,w>0.$$

The definition implies that $r(t,t)$ is regularly varying at $\infty$, i.e., $r(t,t)\sim t^\beta\ell(t)$ as $t\to\infty$ for some $\ell$ slowly varying at $\infty$ and some $\beta\in\mathbb R$ which is called the index of regular variation. In particular, $C(a,a)=a^\beta$ for all $a>0$ and further

$$C(au,aw)=C(a,a)C(u,w)=a^\beta C(u,w)$$

for all $a,u,w>0$.


Definition 3.3.2 A function $r:[0,\infty)\times[0,\infty)\to\mathbb R$ will be called fictitious regularly varying of index $\beta$ in $\mathbb R_+^2$ if

$$\lim_{t\to\infty}\frac{r(ut,wt)}{r(t,t)}=C(u,w),\qquad u,w>0,$$

where $C(u,u) := u^\beta$ for $u>0$ and $C(u,w) := 0$ for $u,w>0$, $u\ne w$. A function $r$ will be called wide-sense regularly varying of index $\beta$ in $\mathbb R_+^2$ if it is either regularly varying or fictitious regularly varying of index $\beta$ in $\mathbb R_+^2$.

⁴ The canonical definition of regular variation in $\mathbb R_+^2$ (see, for instance, [120]) requires nonnegativity of $r$. The definitions of slowly and regularly varying functions on $\mathbb R_+$ can be found in Definitions 6.1.1 and 6.1.2 of Section 6.1.

The function $C$ corresponding to a fictitious regularly varying function will also be called a limit function.

Definition 3.3.3 A function $r:[0,\infty)\times[0,\infty)\to\mathbb R$ is uniformly regularly varying of index $\beta$ in strips in $\mathbb R_+^2$ if it is regularly varying of index $\beta$ in $\mathbb R_+^2$ and

$$\lim_{t\to\infty}\sup_{a\le u\le b}\Bigl|\frac{r\bigl(ut,(u+w)t\bigr)}{r(t,t)}-C(u,u+w)\Bigr|=0\qquad(3.37)$$

for every $w>0$ and all $0<a<b<\infty$.


Limit Processes for $Y_t(u)$ The processes introduced in Definition 3.3.4 arise as weak limits of the first summand in (3.36) in the case $\mathrm E\xi<\infty$. We shall check that these are well defined in Lemma 3.3.22.

Definition 3.3.4 Let $C$ be the limit function for a wide-sense regularly varying function (see Definition 3.3.2) in $\mathbb R_+^2$ of index $\beta$ for some $\beta\in(-1,\infty)$. We shall denote by $V_\beta := (V_\beta(u))_{u>0}$ a centered Gaussian process with covariance function

$$\mathrm EV_\beta(u)V_\beta(w)=\int_0^u C(u-y,w-y)\,dy,\qquad 0<u\le w,$$

when $C(s,t)\ne0$ for some $s,t>0$, $s\ne t$, and a centered Gaussian process with independent values and variance $\mathrm E(V_\beta(u))^2=(1+\beta)^{-1}u^{1+\beta}$, otherwise.

Let $S_2 := (S_2(t))_{t\ge0}$ denote a standard Brownian motion and, for $1<\alpha<2$, let $(S_\alpha(t))_{t\ge0}$ denote a spectrally negative $\alpha$-stable Lévy process such that $S_\alpha(1)$ has the characteristic function

$$z\mapsto\exp\bigl\{-|z|^\alpha\Gamma(1-\alpha)\bigl(\cos(\pi\alpha/2)+\mathrm i\sin(\pi\alpha/2)\,\mathrm{sgn}(z)\bigr)\bigr\},\qquad z\in\mathbb R\qquad(3.38)$$

with $\Gamma(\cdot)$ denoting the gamma function.


The processes introduced in Definition 3.3.5 arise as weak limits of the second summand in (3.36) in the case $\mathrm E\xi<\infty$. We shall check that these are well defined in Lemma 3.3.23.

Definition 3.3.5 For $\alpha\in(1,2]$ and $\rho>-1/\alpha$, $\rho\ne0$, set

$$I_{\alpha,\rho}(0) := 0,\qquad I_{\alpha,\rho}(u) := \int_{[0,u]}(u-y)^\rho\,\mathrm dS_\alpha(y),\qquad u>0.$$

Also, we set $I_{\alpha,0}(u) := S_\alpha(u)$ for $u\ge0$. The stochastic integral above is defined via integration by parts: if $\rho>0$, then

$$I_{\alpha,\rho}(u)=\rho\int_0^u S_\alpha(y)(u-y)^{\rho-1}\,\mathrm dy,\qquad u>0,$$

whereas if $\rho\in(-1/\alpha,0)$, then

$$I_{\alpha,\rho}(u)=u^\rho S_\alpha(u)+|\rho|\int_0^u\bigl(S_\alpha(u)-S_\alpha(y)\bigr)(u-y)^{\rho-1}\,\mathrm dy,\qquad u>0.$$

This definition is consistent with the usual definition of a stochastic integral with a deterministic integrand and the integrator being a semimartingale. We shall call $I_{\alpha,\rho} := (I_{\alpha,\rho}(u))_{u\ge0}$ a fractionally integrated $\alpha$-stable Lévy process.
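The integration-by-parts formulas in Definition 3.3.5 lend themselves to direct numerical evaluation. The sketch below (an illustration, not part of the text) approximates $I_{2,\rho}(u)$ for the Brownian case $\alpha=2$ and $\rho=1$ on a grid; for $\rho=1$ one has $I_{2,1}(u)=\int_0^u S_2(y)\,dy$, a centered Gaussian variable with variance $u^3/3$, which the simulation reproduces. The grid size is an ad hoc choice:

```python
import numpy as np

rng = np.random.default_rng(0)

def fractionally_integrated_bm(u=1.0, rho=1.0, n=20_000):
    """Approximate I_{2,rho}(u) = rho * int_0^u S_2(y) (u - y)^(rho - 1) dy,
    the integration-by-parts form of Definition 3.3.5 for rho > 0."""
    dy = u / n
    y = np.arange(n) * dy
    # Brownian path on the grid: S_2(0) = 0, then cumulative N(0, dy) increments
    s2 = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dy), n - 1))))
    return rho * np.sum(s2 * (u - y) ** (rho - 1)) * dy

# For rho = 1, I_{2,1}(1) = int_0^1 S_2(y) dy is centered Gaussian, variance 1/3.
vals = np.array([fractionally_integrated_bm() for _ in range(400)])
print(vals.mean(), vals.var())
```

The printed sample mean should be near $0$ and the sample variance near $1/3$, up to Monte Carlo and discretization error.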
Definition 3.3.6 recalls the notion of an inverse subordinator.

Definition 3.3.6 For $\alpha\in(0,1)$, let $W_\alpha := (W_\alpha(t))_{t\ge0}$ be an $\alpha$-stable subordinator (nondecreasing Lévy process) with the Laplace exponent $-\log\mathrm E\exp(-zW_\alpha(t))=\Gamma(1-\alpha)tz^\alpha$, $z\ge0$. The inverse $\alpha$-stable subordinator $W_\alpha^{\leftarrow} := (W_\alpha^{\leftarrow}(t))_{t\ge0}$ is defined by

$$W_\alpha^{\leftarrow}(t) := \inf\{s\ge0: W_\alpha(s)>t\},\qquad t\ge0.$$

The processes introduced in Definitions 3.3.7 and 3.3.8 arise as weak limits of the second and the first summand in (3.36), respectively, in the case when $\mathrm P\{\xi>t\}$ is regularly varying of index $-\alpha$ for $\alpha\in(0,1)$. We shall check that these are well defined in Lemmas 3.3.25 and 3.3.27, respectively.

Definition 3.3.7 For $\rho\in\mathbb R$, set

$$J_{\alpha,\rho}(0) := 0,\qquad J_{\alpha,\rho}(u) := \int_{[0,u]}(u-y)^\rho\,\mathrm dW_\alpha^{\leftarrow}(y),\qquad u>0.$$

Since the integrator $W_\alpha^{\leftarrow}$ has nondecreasing paths, the integral exists as a pathwise Lebesgue–Stieltjes integral. We shall call $J_{\alpha,\rho} := (J_{\alpha,\rho}(u))_{u\ge0}$ a fractionally integrated inverse $\alpha$-stable subordinator.

Definition 3.3.8 Let $W_\alpha^{\leftarrow}$ be an inverse $\alpha$-stable subordinator and $C$ the limit function for a wide-sense regularly varying function (see Definition 3.3.2) in $\mathbb R_+^2$ of index $\beta$ for some $\beta\in\mathbb R$. We shall denote by $Z_{\alpha,\beta} := (Z_{\alpha,\beta}(u))_{u>0}$ a process which, given $W_\alpha^{\leftarrow}$, is centered Gaussian with (conditional) covariance

$$\mathrm E\bigl(Z_{\alpha,\beta}(u)Z_{\alpha,\beta}(w)\,\big|\,W_\alpha^{\leftarrow}\bigr)=\int_{[0,u]}C(u-y,w-y)\,\mathrm dW_\alpha^{\leftarrow}(y),\qquad 0<u\le w,$$

when $C(s,t)\ne0$ for some $s,t>0$, $s\ne t$, and a process which, given $W_\alpha^{\leftarrow}$, is centered Gaussian with independent values and (conditional) variance $\mathrm E\bigl((Z_{\alpha,\beta}(u))^2\,\big|\,W_\alpha^{\leftarrow}\bigr)=\int_{[0,u]}(u-y)^\beta\,\mathrm dW_\alpha^{\leftarrow}(y)$, otherwise.
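A path of $W_\alpha^{\leftarrow}$ from Definition 3.3.6 can be simulated by generating increments of the subordinator $W_\alpha$ and inverting its nondecreasing path. The sketch below (an illustration, not from the text) uses the Kanter (Chambers–Mallows–Stuck) representation of one-sided stable laws; the grid size and the horizon used for the inversion are ad hoc choices of the illustration:

```python
import numpy as np
from math import gamma, pi

rng = np.random.default_rng(7)

def stable_increments(alpha, dt, n):
    """One-sided alpha-stable increments (Kanter / Chambers-Mallows-Stuck):
    each has Laplace transform exp(-Gamma(1 - alpha) * dt * z**alpha)."""
    U = rng.uniform(0.0, pi, n)
    E = rng.exponential(1.0, n)
    S = (np.sin(alpha * U) / np.sin(U) ** (1.0 / alpha)
         * (np.sin((1.0 - alpha) * U) / E) ** ((1.0 - alpha) / alpha))
    return (gamma(1.0 - alpha) * dt) ** (1.0 / alpha) * S

def inverse_subordinator(alpha, t_max, dt=1e-3, horizon=50.0):
    """W_alpha^{<-}(t) = inf{s : W_alpha(s) > t}, evaluated on a grid of t."""
    n = int(horizon / dt)
    W = np.cumsum(stable_increments(alpha, dt, n))   # nondecreasing path
    s_grid = dt * np.arange(1, n + 1)
    t = np.linspace(0.0, t_max, 200)
    idx = np.searchsorted(W, t, side='right')        # first index with W > t
    return t, s_grid[np.minimum(idx, n - 1)]

t, Winv = inverse_subordinator(0.5, 1.0)
print(Winv[-1])   # one realization of W_alpha^{<-}(1)
```

By construction the returned path is nonnegative and nondecreasing, as a first-passage time process must be.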

3.3.2 Weak Convergence of the First Summand in (3.36)

Theorem 3.3.9 (case $\mathrm E\xi<\infty$) and Theorem 3.3.10 (case $\mathrm E\xi=\infty$) deal with the asymptotics of the first summand in (3.36).

We shall write $V_t(u)\overset{\mathrm{f.d.}}{\Rightarrow}V(u)$ as $t\to\infty$ to denote weak convergence of finite-dimensional distributions, i.e., for any $n\in\mathbb N$ and any $0<u_1<u_2<\ldots<u_n<\infty$,

$$\bigl(V_t(u_1),\ldots,V_t(u_n)\bigr)\overset{\mathrm d}{\to}\bigl(V(u_1),\ldots,V(u_n)\bigr),\qquad t\to\infty.$$

Theorem 3.3.9 Assume that
• $\mu=\mathrm E\xi\in(0,\infty)$;
• $f(u,w)=\mathrm{Cov}(X(u),X(w))$ is either uniformly regularly varying in strips in $\mathbb R_+^2$ or fictitious regularly varying in $\mathbb R_+^2$, in either of the cases, of index $\beta$ for some $\beta\in(-1,\infty)$ and with limit function $C$; when $\beta=0$, there exists a positive monotone function $u$ satisfying $v(t)=\mathrm{Var}\,X(t)\sim u(t)$ as $t\to\infty$;
• for all $y>0$,

$$v_y(t) := \mathrm E(X(t)-h(t))^2\mathbf 1_{\{|X(t)-h(t)|>y\sqrt{tv(t)}\}}=o(v(t)),\qquad t\to\infty.\qquad(3.39)$$

Then

$$\frac{Y(ut)-\sum_{k\ge0}h(ut-S_k)\mathbf 1_{\{S_k\le ut\}}}{\sqrt{\mu^{-1}tv(t)}}\overset{\mathrm{f.d.}}{\Rightarrow}V_\beta(u),\qquad t\to\infty\qquad(3.40)$$

where $V_\beta$ is a centered Gaussian process as introduced in Definition 3.3.4.


Theorem 3.3.10 Assume that
• $X$ is independent of $\xi$;
• for some $\alpha\in(0,1)$ and some $\ell$ slowly varying at $\infty$,

$$\mathrm P\{\xi>t\}\sim t^{-\alpha}\ell(t),\qquad t\to\infty;\qquad(3.41)$$

• $f(u,w)=\mathrm{Cov}(X(u),X(w))$ is either uniformly regularly varying in strips in $\mathbb R_+^2$ or fictitious regularly varying in $\mathbb R_+^2$, in either of the cases, of index $\beta$ for some $\beta\in[-\alpha,\infty)$ and with limit function $C$; when $\beta=-\alpha$, there exists a positive nondecreasing function $u$ with $\lim_{t\to\infty}\frac{v(t)}{\mathrm P\{\xi>t\}u(t)}=1$;
• for all $y>0$,

$$\hat v_y(t) := \mathrm E(X(t)-h(t))^2\mathbf 1_{\{|X(t)-h(t)|>y\sqrt{v(t)/\mathrm P\{\xi>t\}}\}}=o(v(t))\qquad(3.42)$$

as $t\to\infty$.

Then

$$\sqrt{\frac{\mathrm P\{\xi>t\}}{v(t)}}\Bigl(Y(ut)-\sum_{k\ge0}h(ut-S_k)\mathbf 1_{\{S_k\le ut\}}\Bigr)\overset{\mathrm{f.d.}}{\Rightarrow}Z_{\alpha,\beta}(u),\qquad t\to\infty$$

where $Z_{\alpha,\beta}$ is a conditionally Gaussian process as introduced in Definition 3.3.8.


Remark 3.3.11 There is an interesting special case of Theorem 3.3.10 in which the finite-dimensional distributions of $Y$ converge weakly, without normalization and centering. Namely, if $h(t)\equiv0$, $\lim_{t\to\infty}v(t)/\mathrm P\{\xi>t\}=c$ for some $c>0$, and the assumptions of Theorem 3.3.10 hold (note that $\beta=-\alpha$ and one may take $u(t)\equiv c$), then

$$Y(ut)\overset{\mathrm{f.d.}}{\Rightarrow}\sqrt c\,Z_{\alpha,-\alpha}(u),\qquad t\to\infty.$$

When $h(t)=\mathrm EX(t)$ is not identically zero, the centerings used in Theorems 3.3.9 and 3.3.10 are random, which is undesirable. Theorem 3.3.18 (case $\mathrm E\xi<\infty$) and Theorem 3.3.19 (case $\mathrm E\xi=\infty$), stated below in Section 3.3.4, give limit results with nonrandom centerings. These are obtained by combining Theorems 3.3.12 and 3.3.13, which are the results concerning weak convergence of the second summand in (3.36), with Theorems 3.3.9 and 3.3.10, respectively.

3.3.3 Weak Convergence of the Second Summand in (3.36)

In this section we investigate the asymptotics of the second summand in (3.36) under the assumption that $h$ is regularly varying at infinity:

$$h(t)\sim t^\rho\hat\ell(t),\qquad t\to\infty\qquad(3.43)$$

for some $\rho\in\mathbb R$ and some $\hat\ell$ slowly varying at $\infty$. Recall that $\hat\ell(t)>0$ for all $t\ge0$ by the definition of slow variation (see Definition 6.1.1 in Section 6.1). Note further that the functions $h$ with $\lim_{t\to\infty}h(t)=b\in(0,\infty)$ are covered by condition (3.43) with $\rho=0$ and $\lim_{t\to\infty}\hat\ell(t)=b$.

Before we formulate our next results, we have to recall that the distribution of $\xi$ belongs to the domain of attraction of a 2-stable (normal) distribution if, and only if, either $\sigma^2 := \mathrm{Var}\,\xi<\infty$ or $\mathrm{Var}\,\xi=\infty$ and

$$\mathrm E\xi^2\mathbf 1_{\{\xi\le t\}}\sim\ell(t),\qquad t\to\infty$$

for some $\ell$ slowly varying at $\infty$. Further, the distribution of $\xi$ belongs to the domain of attraction of an $\alpha$-stable distribution, $\alpha\in(0,2)$, if, and only if,

$$\mathrm P\{\xi>t\}\sim t^{-\alpha}\ell(t),\qquad t\to\infty$$

for some $\ell$ slowly varying at $\infty$. We shall not treat the case $\alpha=1$, for it is technically more complicated than the others and does not shed any new light on weak convergence of random processes with immigration. If $\mu=\mathrm E\xi=\infty$, then necessarily $\alpha\in(0,1)$ (because we excluded the case $\alpha=1$), and if $\mu<\infty$, then necessarily $\alpha\in(1,2]$.
As before, let $D[0,\infty)$ denote the space of right-continuous real-valued functions on $[0,\infty)$ with finite limits from the left at each positive point. Recall that $(\nu(t))_{t\in\mathbb R}$ is the first-passage time process defined by $\nu(t)=\inf\{k\in\mathbb N_0: S_k>t\}$ for $t\in\mathbb R$. It is well known that the following functional limit theorems hold:

$$\frac{\nu(ut)-\mu^{-1}ut}{g(t)}\Rightarrow S_\alpha(u),\qquad t\to\infty\qquad(3.44)$$

where
• when $\sigma^2<\infty$ (case (B1) of Theorem 3.3.12), $\alpha=2$, $g(t)=\sqrt{\sigma^2\mu^{-3}t}$, and the convergence takes place in the $J_1$-topology on $D[0,\infty)$;
• when $\sigma^2=\infty$ and $\mathrm E\xi^2\mathbf 1_{\{\xi\le t\}}\sim\ell(t)$ as $t\to\infty$ for some $\ell$ slowly varying at $\infty$ (case (B2) of Theorem 3.3.12), $\alpha=2$, $g(t)=\mu^{-3/2}c(t)$ with $c(t)$ being a positive continuous function satisfying $\lim_{t\to\infty}t\ell(c(t))(c(t))^{-2}=1$, and the convergence takes place in the $J_1$-topology on $D[0,\infty)$;
• when $\mathrm P\{\xi>t\}\sim t^{-\alpha}\ell(t)$ for some $1<\alpha<2$ and some $\ell$ slowly varying at $\infty$ (case (B3)), $g(t)=\mu^{-1-1/\alpha}c(t)$ where $c(t)$ is a positive continuous function with $\lim_{t\to\infty}t\ell(c(t))(c(t))^{-\alpha}=1$, and the convergence takes place in the $M_1$-topology on $D[0,\infty)$.

We refer to [40] and [197] for extensive information concerning the $J_1$-convergence on $D[0,\infty)$. The book [261] is an excellent source on the $M_1$-convergence.

There is also an analogue of (3.44) in the case when $\mathrm P\{\xi>t\}\sim t^{-\alpha}\ell(t)$ as $t\to\infty$ for some $\alpha\in(0,1)$ and some $\ell$ slowly varying at $\infty$. The functional convergence

$$\frac{\nu(ut)}{g(t)}\Rightarrow W_\alpha^{\leftarrow}(u),\qquad t\to\infty\qquad(3.45)$$

holds in the $J_1$-topology on $D[0,\infty)$ where $W_\alpha^{\leftarrow}$ is an inverse $\alpha$-stable subordinator (see Definition 3.3.6) and $g(t)=1/\mathrm P\{\xi>t\}$.

Set

$$Z(t) := \sum_{k\ge0}h(t-S_k)\mathbf 1_{\{S_k\le t\}},\qquad t\ge0.$$

Recall that $(Z(t))_{t\ge0}$ is a renewal shot noise process. The relevance of the preceding paragraphs for the subsequent presentation stems from the fact that (3.44) and (3.45) are functional limit theorems for $(Z(t))_{t\ge0}$ which corresponds to $h(t)\equiv1$.
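For a concrete feeling of the renewal shot noise process, the following sketch (illustrative choices of $\xi$ and $h$ throughout, not taken from the text) evaluates one path value of $Z(t)=\sum_{k\ge0}h(t-S_k)\mathbf 1_{\{S_k\le t\}}$ for $h(y)=(y+1)^{-1/2}$, a function that is regularly varying with $\rho=-1/2$, and compares it with the centering $\mu^{-1}\int_0^t h(y)\,dy$ appearing in the theorems of this section:

```python
import numpy as np

rng = np.random.default_rng(3)

def shot_noise(t, h, n_max=100_000):
    """One path value of Z(t) = sum_k h(t - S_k) 1{S_k <= t}, with S_0 = 0
    and illustrative Exp(mean 0.5) steps, so that mu = 0.5."""
    S = np.concatenate(([0.0], np.cumsum(rng.exponential(0.5, n_max))))
    S = S[S <= t]
    return float(h(t - S).sum())

h = lambda y: 1.0 / np.sqrt(y + 1.0)     # regularly varying with rho = -1/2
mu, t = 0.5, 2000.0
z = shot_noise(t, h)
centering = (2.0 * np.sqrt(t + 1.0) - 2.0) / mu   # mu^{-1} int_0^t h(y) dy
print(z, centering)   # Z(t) fluctuates around the centering
```

Here $\int_0^t(y+1)^{-1/2}\,dy=2\sqrt{t+1}-2$, so the centering is available in closed form; the typical size of the fluctuation $Z(t)-\mu^{-1}\int_0^t h$ is of the order $\sqrt{\sigma^2\mu^{-3}\int_0^t(h(y))^2\,dy}$, in line with the transitional regime discussed below.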

Theorem 3.3.12 Let $h:\mathbb R_+\to\mathbb R$ be locally bounded, measurable, and eventually monotone.

(B1) Suppose that $\sigma^2=\mathrm{Var}\,\xi<\infty$. If (3.43) holds for some $\rho>-1/2$, then

$$\frac{Z(ut)-\mu^{-1}\int_0^{ut}h(y)\,dy}{\sqrt{\sigma^2\mu^{-3}t}\,h(t)}\overset{\mathrm{f.d.}}{\Rightarrow}\int_{[0,u]}(u-y)^\rho\,\mathrm dS_2(y)=I_{2,\rho}(u)\qquad(3.46)$$

as $t\to\infty$.

(B2) Suppose that $\sigma^2=\infty$ and that

$$\mathrm E\xi^2\mathbf 1_{\{\xi\le t\}}\sim\ell(t),\qquad t\to\infty$$

for some $\ell$ slowly varying at $\infty$. Let $c(t)$ be a positive continuous function such that $\lim_{t\to\infty}t\ell(c(t))(c(t))^{-2}=1$. If condition (3.43) holds with $\rho>-1/2$, then

$$\frac{Z(ut)-\mu^{-1}\int_0^{ut}h(y)\,dy}{\mu^{-3/2}c(t)h(t)}\overset{\mathrm{f.d.}}{\Rightarrow}I_{2,\rho}(u),\qquad t\to\infty.$$

(B3) Suppose that

$$\mathrm P\{\xi>t\}\sim t^{-\alpha}\ell(t),\qquad t\to\infty$$

for some $1<\alpha<2$ and some $\ell$ slowly varying at $\infty$. Let $c(t)$ be a positive continuous function such that $\lim_{t\to\infty}t\ell(c(t))(c(t))^{-\alpha}=1$. If condition (3.43) holds with $\rho>-1/\alpha$, then

$$\frac{Z(ut)-\mu^{-1}\int_0^{ut}h(y)\,dy}{\mu^{-1-1/\alpha}c(t)h(t)}\overset{\mathrm{f.d.}}{\Rightarrow}\int_{[0,u]}(u-y)^\rho\,\mathrm dS_\alpha(y)=I_{\alpha,\rho}(u),\qquad t\to\infty.$$

Our next result is concerned with the case of infinite $\mu$. Here the assumptions on the response function $h$ are less restrictive.

Theorem 3.3.13 Let $h:\mathbb R_+\to\mathbb R$ be locally bounded and measurable. Suppose that $\mathrm P\{\xi>t\}\sim t^{-\alpha}\ell(t)$ as $t\to\infty$ for some $0<\alpha<1$ and some $\ell$ slowly varying at $\infty$, and that $h$ satisfies (3.43) for some $\rho\in\mathbb R$. Then

$$\frac{\mathrm P\{\xi>t\}}{h(t)}Z(ut)\overset{\mathrm{f.d.}}{\Rightarrow}\int_{[0,u]}(u-y)^\rho\,\mathrm dW_\alpha^{\leftarrow}(y)=J_{\alpha,\rho}(u),\qquad t\to\infty.$$

Theorem 3.3.12 only contains limit theorems with regularly varying normalization. Now we treat the borderline situation when $\rho$ in (3.43) equals $-1/2$ yet the function $h^2$ is nonintegrable (we shall see that this gives rise to a slowly varying normalization). This case bears some similarity with the case $\rho>-1/2$ (normalization is needed; the limit is Gaussian) and is very different from the case when $h^2$ is integrable. The principal new feature of the present situation is the necessity of sublinear time scaling as opposed to the time scalings $u+t$ and $ut$ used for the other regimes. As might be expected of a transitional regime, there are additional technical complications. In particular, the techniques (tools related to stationarity; the continuous mapping theorem along with the functional limit theorem for the first-passage time process $(\nu(t))$) used for the other regimes cannot be exploited here. Our main technical tool is a strong approximation theorem.
Now we introduce a limit process $\mathcal X := (\mathcal X(u))_{u\in[0,1]}$ appearing in Theorem 3.3.14 below. Let $S_2=(S_2(u))_{u\in[0,1]}$ denote a Brownian motion independent of $D := (D(u))_{u\in[0,1]}$, a centered Gaussian process with independent values which satisfies $\mathrm E(D(u))^2=u$. Then we set

$$\mathcal X(u)=S_2(1-u)+D(u),\qquad u\in[0,1].$$

The presence of $D$ makes the paths of $\mathcal X$ highly irregular. In particular, no version of $\mathcal X$ lives in the Skorokhod space $D[0,1]$ of right-continuous functions with finite limits from the left. The covariance structure of $\mathcal X$ is very similar to that of $S_2$: for any $u,v\in[0,1]$,

$$\mathrm{cov}(\mathcal X(u),\mathcal X(v))=\begin{cases}(1-u)\wedge(1-v), & \text{if } u\ne v,\\ 1, & \text{if } u=v,\end{cases}$$

whereas $\mathrm{cov}(S_2(1-u),S_2(1-v))=(1-u)\wedge(1-v)$. Among others, this shows that neither $\mathcal X$ nor $\mathcal X(1-\cdot)$ is a self-similar process.

Theorem 3.3.14 Suppose that $\mathrm E\xi^r<\infty$ for some $r>2$ and that $h:\mathbb R_+\to\mathbb R$ is a right-continuous, locally bounded, and eventually nonincreasing function. If

$$h(t)\sim t^{-1/2}\hat\ell(t),\qquad t\to\infty\qquad(3.47)$$

for some $\hat\ell$ slowly varying at $\infty$ such that $\int_0^\infty(h(y))^2\,dy=\infty$, then, as $t\to\infty$,

$$\Biggl(\frac{Z(t+x(t,u))-\mu^{-1}\int_0^{t+x(t,u)}h(y)\,dy}{\sqrt{\sigma^2\mu^{-3}\int_0^t(h(y))^2\,dy}}\Biggr)_{u\in[0,1]}\overset{\mathrm{f.d.}}{\Rightarrow}(\mathcal X(u))_{u\in[0,1]}$$

where $\sigma^2=\mathrm{Var}\,\xi$, $\mu=\mathrm E\xi$, and $x:\mathbb R_+\times[0,1]\to\mathbb R_+$ is any function nondecreasing in the second coordinate that satisfies

$$\lim_{t\to\infty}\frac{\int_0^{x(t,u)}(h(y))^2\,dy}{\int_0^t(h(y))^2\,dy}=u\qquad(3.48)$$

for each $u\in[0,1]$.



Remark 3.3.15 To facilitate comparison of Theorem 3.3.14 and part (B1) of Theorem 3.3.12, observe that, under (3.43) with $\rho>-1/2$,

$$\sqrt{\int_0^t(h(y))^2\,dy}\sim(2\rho+1)^{-1/2}\sqrt t\,h(t),\qquad t\to\infty$$

by Lemma 6.1.4(c), and therefore the normalization in (3.46) can be replaced (up to a multiplicative constant) by $\bigl(\int_0^t(h(y))^2\,dy\bigr)^{1/2}$.

Remark 3.3.16 Set $m(t) := \int_0^t(h(y))^2\,dy$, $t>0$, and observe that, under (3.47), $m$ is a slowly varying function (see Lemma 6.1.4(d)) diverging to $+\infty$. Since $m$ is nondecreasing and continuous, the generalized inverse function $m^{\leftarrow}$ is increasing. Putting $x(t,u)=m^{\leftarrow}(um(t))$ gives us a function nondecreasing in $u$ that satisfies (3.48).
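Remark 3.3.16's recipe can be carried out numerically. A sketch (an illustration under the assumed choice $h(y)=(y+1)^{-1/2}$, for which $m(t)=\log(1+t)$ is available in closed form) computes $x(t,u)=m^{\leftarrow}(um(t))$ by bisection and verifies (3.48):

```python
import numpy as np

# Illustrative h with rho = -1/2 in (3.47): h(y) = (y + 1)^{-1/2}, so that
# m(t) = int_0^t h(y)^2 dy = log(1 + t) is slowly varying and diverges.
m = lambda t: np.log1p(t)

def m_inverse(v, hi):
    """Generalized inverse m^{<-}(v) = inf{t >= 0 : m(t) >= v} by bisection."""
    lo = 0.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if m(mid) >= v:
            hi = mid
        else:
            lo = mid
    return hi

def x(t, u):
    """Time scaling x(t, u) = m^{<-}(u * m(t)) from Remark 3.3.16."""
    return m_inverse(u * m(t), hi=t)

t = 1.0e6
for u in (0.25, 0.5, 1.0):
    print(u, x(t, u), m(x(t, u)) / m(t))   # the last ratio reproduces u, cf. (3.48)
```

For this $m$ the inverse is explicit, $m^{\leftarrow}(v)=e^v-1$, so here $x(t,u)=(1+t)^u-1$ — an instance of the 'moderate' time scaling of Remark 3.3.17 below with $\gamma=1$.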
Remark 3.3.17 Here we point out three types of possible time scalings which correspond to 'moderate', 'slow', and 'fast' slowly varying $\hat\ell$ in (3.47).

'Moderate' $\hat\ell$ If

$$\hat\ell(t)=(\log t)^{(\gamma-1)/2}L(\log t)\qquad(3.49)$$

for some $\gamma>0$ and some $L$ slowly varying at $\infty$, then

$$m(t)=\int_0^t(h(y))^2\,dy=\int_0^{\log t}(h(e^y))^2e^y\,dy\sim\gamma^{-1}(\log t)^\gamma(L(\log t))^2,\qquad t\to\infty$$

by Lemma 6.1.4(c) because $(h(e^t))^2e^t\sim t^{\gamma-1}(L(t))^2$ as $t\to\infty$. Hence, we may take $x(t,u)=t^{u^{1/\gamma}}$.

'Slow' $\hat\ell$ If

$$\hat\ell(t)=(\log t)^{-1/2}(\log\log t)^{(\gamma-1)/2}L(\log\log t)$$

for some $\gamma>0$ and some $L$ slowly varying at $\infty$, then

$$m(t)\sim\gamma^{-1}(\log\log t)^\gamma(L(\log\log t))^2,\qquad t\to\infty$$

(which can be checked as in the 'moderate' case), and one may take $x(t,u)=\exp\bigl((\log t)^{u^{1/\gamma}}\bigr)$.

'Fast' $\hat\ell$ If

$$\hat\ell(t)=\exp\bigl((\gamma/2)(\log t)^\delta\bigr)(\log t)^{(\delta-1)/2}L\bigl(\exp((\log t)^\delta)\bigr)$$

for some $\gamma>0$, some $\delta\in(0,1)$, and some $L$ slowly varying at $\infty$, then

$$m(t)\sim(\gamma\delta)^{-1}\exp\bigl(\gamma(\log t)^\delta\bigr)\bigl(L(\exp((\log t)^\delta))\bigr)^2,\qquad t\to\infty,$$

and one may take $x(t,u)=t\,u^{(\gamma\delta)^{-1}(\log t)^{1-\delta}}$.

3.3.4 Scaling Limits of Random Processes with Immigration

Theorem 3.3.18 Assume that $h(t)=\mathrm EX(t)$ is eventually monotone and not identically zero, and that
• in cases (Bi1) and (Bi3), $i=1,2,3$, the assumptions of Theorem 3.3.9 hold;
• in cases (Bi2) and (Bi3), $i=1,2,3$, $h(t)\sim t^\rho\hat\ell(t)$ as $t\to\infty$ for some $\rho\in\mathbb R$ and some $\hat\ell$ slowly varying at $\infty$;
• in cases (Bi2), $i=1,2,3$, $\lim_{t\to\infty}\int_0^t v(y)\,dy=\infty$ and there exists a positive monotone function $u$ such that $v(t)\sim u(t)$, $t\to\infty$, or $v$ is directly Riemann integrable on $[0,\infty)$;
• in cases (Bi3), $i=1,2,3$, $X$ is independent of $\xi$.

(B1) Let $\sigma^2=\mathrm{Var}\,\xi<\infty$.

(B11) If $\lim_{t\to\infty}(t(h(t))^2)/\int_0^t v(y)\,dy=0$ (which is equivalent to $\lim_{t\to\infty}(h(t))^2/v(t)=0$), then

$$\frac{Y(ut)-\mu^{-1}\int_0^{ut}h(y)\,dy}{\sqrt{\mu^{-1}tv(t)}}\overset{\mathrm{f.d.}}{\Rightarrow}V_\beta(u),\qquad t\to\infty\qquad(3.50)$$

where $V_\beta$ is as in Definition 3.3.4.

(B12) If $\rho>-1/2$ and $\lim_{t\to\infty}(t(h(t))^2)/\int_0^t v(y)\,dy=\infty$, then

$$\frac{Y(ut)-\mu^{-1}\int_0^{ut}h(y)\,dy}{\sqrt{\sigma^2\mu^{-3}t}\,h(t)}\overset{\mathrm{f.d.}}{\Rightarrow}I_{2,\rho}(u),\qquad t\to\infty$$

where $I_{2,\rho}$ is as in Definition 3.3.5.

(B13) If $v(t)\sim b(h(t))^2$ for some $b>0$, then, as $t\to\infty$,

$$\frac{Y(ut)-\mu^{-1}\int_0^{ut}h(y)\,dy}{\sqrt t\,h(t)}\overset{\mathrm{f.d.}}{\Rightarrow}\sqrt{\frac{\sigma^2}{\mu^3}}\,I_{2,\rho}(u)+\sqrt{\frac b\mu}\,V_\beta(u)\qquad(3.51)$$

where the processes $I_{2,\rho}$ and $V_\beta$ are independent.

(B2) Suppose that $\sigma^2=\infty$ and that

$$\mathrm E\xi^2\mathbf 1_{\{\xi\le t\}}\sim\ell(t),\qquad t\to\infty$$

for some $\ell$ slowly varying at $\infty$. Let $c(t)$ be a positive function satisfying $\lim_{t\to\infty}t\ell(c(t))(c(t))^{-2}=1$.

(B21) If $\lim_{t\to\infty}(c(t)h(t))^2/(tv(t))=0$, then relation (3.50) holds.

(B22) If $\rho>-1/2$ and $\lim_{t\to\infty}(c(t)h(t))^2/\int_0^t v(y)\,dy=\infty$, then

$$\frac{Y(ut)-\mu^{-1}\int_0^{ut}h(y)\,dy}{\mu^{-3/2}c(t)h(t)}\overset{\mathrm{f.d.}}{\Rightarrow}I_{2,\rho}(u),\qquad t\to\infty.$$

(B23) If $v(t)\sim bt^{-1}(c(t)h(t))^2$ for some $b>0$, then, as $t\to\infty$,

$$\frac{Y(ut)-\mu^{-1}\int_0^{ut}h(y)\,dy}{c(t)h(t)}\overset{\mathrm{f.d.}}{\Rightarrow}\mu^{-3/2}I_{2,\rho}(u)+\Bigl(\frac b\mu\Bigr)^{1/2}V_\beta(u)$$

where the processes $I_{2,\rho}$ and $V_\beta$ are independent.

(B3) Suppose that

$$\mathrm P\{\xi>t\}\sim t^{-\alpha}\ell(t),\qquad t\to\infty$$

for some $\alpha\in(1,2)$ and some $\ell$ slowly varying at $\infty$, and let $c(t)$ be a positive function with $\lim_{t\to\infty}t\ell(c(t))(c(t))^{-\alpha}=1$.

(B31) If $\lim_{t\to\infty}(c(t)h(t))^2/(tv(t))=0$, then relation (3.50) holds.

(B32) If $\rho>-1/\alpha$ and $\lim_{t\to\infty}(c(t)h(t))^2/\int_0^t v(y)\,dy=\infty$, then

$$\frac{Y(ut)-\mu^{-1}\int_0^{ut}h(y)\,dy}{\mu^{-(\alpha+1)/\alpha}c(t)h(t)}\overset{\mathrm{f.d.}}{\Rightarrow}I_{\alpha,\rho}(u),\qquad t\to\infty$$

where $I_{\alpha,\rho}$ is as in Definition 3.3.5.

(B33) If $v(t)\sim bt^{-1}(c(t)h(t))^2$ for some $b>0$, then, as $t\to\infty$,

$$\frac{Y(ut)-\mu^{-1}\int_0^{ut}h(y)\,dy}{c(t)h(t)}\overset{\mathrm{f.d.}}{\Rightarrow}\mu^{-(\alpha+1)/\alpha}I_{\alpha,\rho}(u)+\Bigl(\frac b\mu\Bigr)^{1/2}V_\beta(u)$$

where the processes $I_{\alpha,\rho}$ and $V_\beta$ are independent.


Theorem 3.3.19 Suppose that

$$\mathrm P\{\xi>t\}\sim t^{-\alpha}\ell(t),\qquad t\to\infty$$

for some $\alpha\in(0,1)$ and some $\ell$ slowly varying at $\infty$, and that $h$ is not identically zero.

(C1) If the assumptions of Theorem 3.3.10 hold (with the same $\alpha$ as above) and

$$\lim_{t\to\infty}\frac{v(t)\mathrm P\{\xi>t\}}{(h(t))^2}=\infty,\qquad(3.52)$$

then

$$\sqrt{\frac{\mathrm P\{\xi>t\}}{v(t)}}\,Y(ut)\overset{\mathrm{f.d.}}{\Rightarrow}Z_{\alpha,\beta}(u),\qquad t\to\infty$$

where $Z_{\alpha,\beta}$ is as in Definition 3.3.8.

(C2) Assume that $h(t)\sim t^\rho\hat\ell(t)$ as $t\to\infty$ for some $\rho\ge-\alpha$ and some $\hat\ell$ slowly varying at $\infty$, and that

$$\lim_{t\to\infty}\frac{v(t)\mathrm P\{\xi>t\}}{(h(t))^2}=0.\qquad(3.53)$$

In the case $\rho=-\alpha$ assume additionally that there exists a nondecreasing function $w$ such that $\lim_{t\to\infty}w(t)=\infty$ and $\lim_{t\to\infty}(w(t)\mathrm P\{\xi>t\})/h(t)=1$. Then

$$\frac{\mathrm P\{\xi>t\}}{h(t)}Y(ut)\overset{\mathrm{f.d.}}{\Rightarrow}\int_{[0,u]}(u-y)^\rho\,\mathrm dW_\alpha^{\leftarrow}(y)=J_{\alpha,\rho}(u),\qquad t\to\infty.$$

(C3) If the assumptions of Theorem 3.3.10 hold and

$$\lim_{t\to\infty}\frac{v(t)\mathrm P\{\xi>t\}}{(h(t))^2}=b\in(0,\infty),\qquad(3.54)$$

then

$$\sqrt{\frac{\mathrm P\{\xi>t\}}{v(t)}}\,Y(ut)\overset{\mathrm{f.d.}}{\Rightarrow}Z_{\alpha,\beta}(u)+b^{-1/2}\int_{[0,u]}(u-y)^{(\beta-\alpha)/2}\,\mathrm dW_\alpha^{\leftarrow}(y),\qquad t\to\infty.$$

Here $W_\alpha^{\leftarrow}$ under the integral sign is the same as in the definition of $Z_{\alpha,\beta}$ (Definition 3.3.8). In particular, the summands defining the limit process are dependent.

There is a simple situation where weak convergence of the finite-dimensional distributions obtained in Theorem 3.3.19 implies the $J_1$-convergence on $D[0,\infty)$.

Corollary 3.3.20 Let $X(t)$ be almost surely nondecreasing with $\lim_{t\to\infty}X(t)\in(0,\infty]$ almost surely. Assume that the assumptions of part (C2) of Theorem 3.3.19 are in force. Then the limit relations of part (C2) of Theorem 3.3.19 hold in the sense of weak convergence in the $J_1$-topology on $D[0,\infty)$.

As shown in the proof of Corollary 2.6 in [148], in Corollary 3.3.20 the limit process $J_{\alpha,\rho}$ is a.s. continuous. Now Corollary 3.3.20 follows from a modification of the aforementioned proof which uses Remark 2.1 in [267] instead of Theorem 3 in [42].

We close the section with two 'negative' results. According to Lemmas 3.3.26(b) and 3.3.28, weak convergence of the finite-dimensional distributions in Theorem 3.3.19 cannot be strengthened to weak convergence on $D(0,\infty)$ whenever either $J_{\alpha,-\alpha}$ or $Z_{\alpha,-\alpha}$ arises in the limit. We arrive at the same conclusion when the limit process in Theorem 3.3.10 is a conditional white noise (equivalently, $C(u,w)=0$ for $u\ne w$), because no version of such a process belongs to $D(0,\infty)$.

3.3.5 Applications

Unless the contrary is stated, the random variable $\eta$ appearing in this section may be arbitrarily dependent on $\xi$, and $(\xi_k,\eta_k)$, $k\in\mathbb N$, denote i.i.d. copies of $(\xi,\eta)$.

Theorem 3.3.21 given below is a specialization of Theorems 3.3.18 and 3.3.19 to $Y(t)=\sum_{k\ge0}\mathbf 1_{\{S_k\le t<S_k+\eta_{k+1}\}}$, $t\ge0$, the random process with immigration which corresponds to $X(t)=\mathbf 1_{\{\eta>t\}}$. The result is stated explicitly because Theorem 5.1.3 in Section 5, which provides a collection of limit results for the number of empty boxes in the Bernoulli sieve, is just a reformulation of Theorem 3.3.21. On the other hand, Theorem 3.3.21 is interesting on its own because of numerous applications of the so defined $Y$ (see 'Queues and branching processes' on p. 3 and Example 3.1.2(a)).

Theorem 3.3.21 Suppose that

$$\mathrm P\{\eta>t\}\sim t^{-\beta}\hat\ell(t),\qquad t\to\infty\qquad(3.55)$$

for some $\beta\in[0,1)$ and some $\hat\ell$ slowly varying at $\infty$.

(D1) If $\sigma^2=\mathrm{Var}\,\xi<\infty$, then

$$\frac{\sum_{k\ge0}\mathbf 1_{\{S_k\le ut<S_k+\eta_{k+1}\}}-\mu^{-1}\int_0^{ut}\mathrm P\{\eta>y\}\,dy}{\sqrt{\mu^{-1}\int_0^t\mathrm P\{\eta>y\}\,dy}}\overset{\mathrm{f.d.}}{\Rightarrow}V_\beta(u),\qquad t\to\infty\qquad(3.56)$$

where $\mu=\mathrm E\xi<\infty$ and $V_\beta=(V_\beta(u))_{u\ge0}$ is a centered Gaussian process with

$$\mathrm EV_\beta(u)V_\beta(s)=u^{1-\beta}-(u-s)^{1-\beta},\qquad 0\le s\le u.$$

(D2) Suppose that $\sigma^2=\infty$ and $\mathrm E\xi^2\mathbf 1_{\{\xi\le t\}}\sim\ell(t)$ as $t\to\infty$ for some $\ell$ slowly varying at $\infty$. Let $c(t)$ be a positive function satisfying $\lim_{t\to\infty}t\ell(c(t))(c(t))^{-2}=1$.

(D21) If

$$\lim_{t\to\infty}t^{-1}(c(t))^2\mathrm P\{\eta>t\}=0,\qquad(3.57)$$

then relation (3.56) holds true.

(D22) If

$$\lim_{t\to\infty}t^{-1}(c(t))^2\mathrm P\{\eta>t\}=\infty,\qquad(3.58)$$

then

$$\frac{\sum_{k\ge0}\mathbf 1_{\{S_k\le ut<S_k+\eta_{k+1}\}}-\mu^{-1}\int_0^{ut}\mathrm P\{\eta>y\}\,dy}{\mu^{-3/2}c(t)\mathrm P\{\eta>t\}}\overset{\mathrm{f.d.}}{\Rightarrow}S_2(u),\qquad t\to\infty$$

where $S_2$ is a Brownian motion.

(D23) If

$$\lim_{t\to\infty}t^{-1}(c(t))^2\mathrm P\{\eta>t\}=1/b\in(0,\infty),\qquad(3.59)$$

then, as $t\to\infty$,

$$\frac{\sum_{k\ge0}\mathbf 1_{\{S_k\le ut<S_k+\eta_{k+1}\}}-\mu^{-1}\int_0^{ut}\mathrm P\{\eta>y\}\,dy}{\mu^{-3/2}c(t)\mathrm P\{\eta>t\}}\overset{\mathrm{f.d.}}{\Rightarrow}S_2(u)+\mu b^{1/2}V_\beta(u)$$

where $S_2$ and $V_\beta$ are independent.


(D3) Suppose that $\mathrm P\{\xi>t\}\sim t^{-\alpha}\ell(t)$ as $t\to\infty$ for some $\alpha\in(1,2)$ and some $\ell$ slowly varying at $\infty$. Let $c(t)>0$ satisfy $\lim_{t\to\infty}t\ell(c(t))(c(t))^{-\alpha}=1$.

(D31) Condition (3.57) with the present $c(t)$ entails (3.56).

(D32) Suppose that $\beta\in[0,2/\alpha-1]$. If $\beta=2/\alpha-1$, assume additionally that (3.58) holds. Then

$$\frac{\sum_{k\ge0}\mathbf 1_{\{S_k\le ut<S_k+\eta_{k+1}\}}-\mu^{-1}\int_0^{ut}\mathrm P\{\eta>y\}\,dy}{\mu^{-1-1/\alpha}c(t)\mathrm P\{\eta>t\}}\overset{\mathrm{f.d.}}{\Rightarrow}I_{\alpha,-\beta}(u),\qquad t\to\infty$$

where $I_{\alpha,-\beta}$ is as in Definition 3.3.5.

(D33) Condition (3.59) with the present $c(t)$ entails

$$\frac{\sum_{k\ge0}\mathbf 1_{\{S_k\le ut<S_k+\eta_{k+1}\}}-\mu^{-1}\int_0^{ut}\mathrm P\{\eta>y\}\,dy}{\mu^{-1-1/\alpha}c(t)\mathrm P\{\eta>t\}}\overset{\mathrm{f.d.}}{\Rightarrow}I_{\alpha,-\beta}(u)+cV_\beta(u)$$

as $t\to\infty$, where $I_{\alpha,-\beta}$ and $V_\beta$ are independent, and $c := \mu^{1/2+1/\alpha}b^{1/2}$.

(D4) Suppose that $\mathrm P\{\xi>t\}\sim t^{-\alpha}\ell(t)$ as $t\to\infty$ for some $\alpha\in(0,1)$ and some $\ell$ slowly varying at $\infty$. Let $\beta\in[0,\alpha]$. If $\alpha=\beta$, assume additionally that $\lim_{t\to\infty}\mathrm P\{\xi>t\}/\mathrm P\{\eta>t\}=0$ and that there exists a nondecreasing function $u(t)$ satisfying $\lim_{t\to\infty}(u(t)\mathrm P\{\xi>t\})/\mathrm P\{\eta>t\}=1$. Then, as $t\to\infty$,

$$\frac{\mathrm P\{\xi>t\}}{\mathrm P\{\eta>t\}}\sum_{k\ge0}\mathbf 1_{\{S_k\le ut<S_k+\eta_{k+1}\}}\overset{\mathrm{f.d.}}{\Rightarrow}\int_{[0,u]}(u-y)^{-\beta}\,\mathrm dW_\alpha^{\leftarrow}(y)=J_{\alpha,-\beta}(u).$$

Proof Since $h(t)=\mathrm EX(t)=\mathrm P\{\eta>t\}$ and $v(t)=\mathrm P\{\eta>t\}\mathrm P\{\eta\le t\}$, we infer

$$\frac{f(ut,wt)}{v(t)}=\frac{\mathrm P\{\eta>(u\vee w)t\}\mathrm P\{\eta\le(u\wedge w)t\}}{\mathrm P\{\eta>t\}\mathrm P\{\eta\le t\}}\to(u\vee w)^{-\beta},\qquad u,w>0,$$

and this convergence is locally uniform in $\mathbb R_+^2$, as it is the case for $\lim_{t\to\infty}\mathrm P\{\eta>(u\vee w)t\}/\mathrm P\{\eta>t\}=(u\vee w)^{-\beta}$ by Lemma 6.1.4(a). In particular, condition (3.37) holds with $C(u,w)=(u\vee w)^{-\beta}$. Further, condition (3.39) holds because $|\mathbf 1_{\{\eta>t\}}-\mathrm P\{\eta>t\}|\le1$ a.s. Thus, all the standing assumptions of Theorem 3.3.18 hold for the particular case $X(t)=\mathbf 1_{\{\eta>t\}}$.

Since $\lim_{t\to\infty}(h(t))^2/v(t)=0$, part (D1) is a consequence of part (B11) of Theorem 3.3.18. Let now the assumptions of part (D2) be in force. The specialization of the condition $\lim_{t\to\infty}(c(t)h(t))^2/(tv(t))=0$ in part (B21) of Theorem 3.3.18 reads $\lim_{t\to\infty}t^{-1}(c(t))^2\mathrm P\{\eta>t\}=0$. Hence, part (D21) follows from part (B21) of Theorem 3.3.18. Analogously, part (D23) is an immediate consequence of part (B23) of Theorem 3.3.18. Further, use the regular variation of $\mathrm P\{\eta>t\}$ together with Lemma 6.1.4(c) to conclude that the condition $\lim_{t\to\infty}(c(t)h(t))^2/\int_0^t v(y)\,dy=\infty$ in part (B22) of Theorem 3.3.18 takes the form $\lim_{t\to\infty}t^{-1}(c(t))^2\mathrm P\{\eta>t\}=\infty$. By Lemma 3.3.31, $c(t)$ is regularly varying of index $1/2$, which implies that $\beta=0$. An appeal to part (B22) of Theorem 3.3.18 completes the proof of part (D22). A similar argument proves part (D3).

Passing to part (D4), we conclude that condition (3.53) reads $\lim_{t\to\infty}\mathrm P\{\xi>t\}/\mathrm P\{\eta>t\}=0$, which obviously holds when $\beta\in[0,\alpha)$ and holds by the assumption when $\beta=\alpha$. Thus, part (D4) is a consequence of part (C2) of Theorem 3.3.19. □
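The process $Y(t)=\sum_{k\ge0}\mathbf 1_{\{S_k\le t<S_k+\eta_{k+1}\}}$ of Theorem 3.3.21 counts, in queueing terms, the customers still in service at time $t$. A Monte Carlo sketch (exponential $\xi$ and Pareto-type $\eta$ are assumptions of the illustration, not choices made in the text) checks that its mean is close to the centering $\mu^{-1}\int_0^t\mathrm P\{\eta>y\}\,dy$:

```python
import numpy as np

rng = np.random.default_rng(5)

def busy_count(t, n_max=10_000):
    """Y(t) = sum_k 1{S_k <= t < S_k + eta_{k+1}} for xi ~ Exp(1) and
    eta = 1 + Pareto, so that P{eta > y} = y^{-0.8} for y >= 1 (beta = 0.8)."""
    S = np.concatenate(([0.0], np.cumsum(rng.exponential(1.0, n_max))))
    eta = rng.pareto(0.8, n_max + 1) + 1.0
    return int(np.sum((S <= t) & (t < S + eta)))

t = 200.0
samples = [busy_count(t) for _ in range(500)]
# centering: mu = 1 and int_0^t P{eta > y} dy = 1 + (t**0.2 - 1) / 0.2
centering = 1.0 + (t ** 0.2 - 1.0) / 0.2
print(np.mean(samples), centering)
```

With these choices the centering is roughly $10.4$ at $t=200$, and the printed empirical mean should be near it; note that NumPy's `pareto` draws from the Lomax law, whence the shift by $1$.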
Now we illustrate the main results of the chapter for some other particular instances of random processes with immigration. Here, our intention is to exhibit a variety of situations that can arise rather than to provide the most comprehensive treatment.

Example 3.3.1 Let $X(t)=\mathbf 1_{\{\eta\le t\}}$. Since $h(t)=\mathrm P\{\eta\le t\}$ and $v(t)=\mathrm P\{\eta\le t\}\mathrm P\{\eta>t\}\sim\mathrm P\{\eta>t\}$, we infer $\lim_{t\to\infty}t(h(t))^2/\int_0^t v(y)\,dy=\infty$. Further, if $\mathrm E\eta<\infty$, then $v$ is dRi on $[0,\infty)$ by parts (a) and (d) of Lemma 6.2.1 because it is nonnegative, bounded, a.e. continuous, and dominated by the nonincreasing and integrable function $\mathrm P\{\eta>t\}$. If $\mathrm E\eta=\infty$, i.e., $\lim_{t\to\infty}\int_0^t v(y)\,dy=\infty$, $v$ is equivalent to the monotone function $u(t)=\mathrm P\{\eta>t\}$. If $\sigma^2<\infty$ then, according to part (B12) of Theorem 3.3.18,

$$\frac{\sum_{k\ge0}\mathbf 1_{\{S_k+\eta_{k+1}\le ut\}}-\mu^{-1}\int_0^{ut}\mathrm P\{\eta\le y\}\,dy}{\sqrt{\sigma^2\mu^{-3}t}}\overset{\mathrm{f.d.}}{\Rightarrow}S_2(u)$$

where $S_2$ is a Brownian motion, because $h$ is regularly varying at $\infty$ of index $\rho=0$. If $\mathrm P\{\xi>t\}$ is regularly varying at $\infty$ of index $-\alpha$, $\alpha\in(0,1)$, then, by Corollary 3.3.20,

$$\mathrm P\{\xi>t\}\sum_{k\ge0}\mathbf 1_{\{S_k+\eta_{k+1}\le ut\}}\Rightarrow W_\alpha^{\leftarrow}(u)$$

in the $J_1$-topology on $D[0,\infty)$, where $W_\alpha^{\leftarrow}$ is an inverse $\alpha$-stable subordinator.


Example 3.3.2 Let $X(t)=\eta g(t)$ with $\mathrm{Var}\,\eta<\infty$ and let $g:\mathbb R_+\to\mathbb R$ be regularly varying at $\infty$ of index $\beta/2$ for some $\beta>-1$. Then $h(t)=g(t)\mathrm E\eta$ and $v(t)=(g(t))^2\mathrm{Var}\,\eta$. While $f(u,w)=g(u)g(w)\mathrm{Var}\,\eta$ is clearly regularly varying in $\mathbb R_+^2$ of index $\beta$ with limit function $C(u,w)=(uw)^{\beta/2}$, (3.37) holds by virtue of Lemma 6.1.4(a). Further, observe that $\lim_{t\to\infty}\sqrt{tv(t)}/|g(t)|=\infty$ implies

$$\mathrm E(X(t)-h(t))^2\mathbf 1_{\{|X(t)-h(t)|>y\sqrt{tv(t)}\}}=(g(t))^2\mathrm E(\eta-\mathrm E\eta)^2\mathbf 1_{\{|\eta-\mathrm E\eta|>y\sqrt{tv(t)}/|g(t)|\}}=o(v(t))$$

and thereupon (3.39). Also, as a consequence of $\lim_{t\to\infty}\sqrt{v(t)/\mathrm P\{\xi>t\}}/|g(t)|=\infty$, which holds whatever the distribution of $\xi$ is, we have

$$\mathrm E(X(t)-h(t))^2\mathbf 1_{\{|X(t)-h(t)|>y\sqrt{v(t)/\mathrm P\{\xi>t\}}\}}=(g(t))^2\mathrm E(\eta-\mathrm E\eta)^2\mathbf 1_{\{|\eta-\mathrm E\eta|>y\sqrt{v(t)/\mathrm P\{\xi>t\}}/|g(t)|\}}=o(v(t))$$

which means that condition (3.42) holds.

If $\mathrm E\eta=0$ and $\mu=\mathrm E\xi\in(0,\infty)$, then, according to Theorem 3.3.9,

$$\frac{\sum_{k\ge0}\eta_{k+1}g(ut-S_k)\mathbf 1_{\{S_k\le ut\}}}{\sqrt{\mu^{-1}t\,\mathrm E\eta^2}\,g(t)}\overset{\mathrm{f.d.}}{\Rightarrow}V_\beta(u)$$

where $V_\beta$ is a centered Gaussian process with covariance

$$\mathrm EV_\beta(u)V_\beta(w)=\int_0^u(u-y)^{\beta/2}(w-y)^{\beta/2}\,dy,\qquad 0<u\le w.$$

Furthermore, the limit process can be represented as a stochastic integral

$$V_\beta(u)=\int_{[0,u]}(u-y)^{\beta/2}\,\mathrm dS_2(y),\qquad u>0.$$

Throughout the rest of this example we assume that $\eta$ is independent of $\xi$.

If $\mathrm E\eta=0$ and $\mathrm P\{\xi>t\}$ is regularly varying at $\infty$ of index $-\alpha$, $\alpha\in(0,1)$, and $\beta>-\alpha$, then, according to Theorem 3.3.10,

$$\frac{\sqrt{\mathrm P\{\xi>t\}}}{g(t)}\sum_{k\ge0}\eta_{k+1}g(ut-S_k)\mathbf 1_{\{S_k\le ut\}}\overset{\mathrm{f.d.}}{\Rightarrow}\sqrt{\mathrm E\eta^2}\,Z_{\alpha,\beta}(u).$$

Furthermore, the limit process can be represented as a stochastic integral

$$Z_{\alpha,\beta}(u)=\int_{[0,u]}(u-y)^{\beta/2}\,\mathrm dS_2(W_\alpha^{\leftarrow}(y)),\qquad u>0$$
126 3 Random Processes with Immigration

where S2 is a Brownian motion independent of W˛ which can be seen by


calculating the conditional covariance of the last integral.
If E ¤ 0,  2 < 1 and g is eventually monotone, then, according to part (B13)
of Theorem 3.3.18,
P R ut
k0 kC1 g.ut  Sk /1fSk utg 
1 E 0 g.y/dy
p
E tg.t/
f:d:  2
1=2 Var 
1=2
) I 2; ˇ=2 .u/ C Vˇ .u/:

3 .E/2

If E ¤ 0, Pf > tg is regularly varying at 1 of index ˛, ˛ 2 .0; 1/, and


ˇ > 2˛, then, since limt!1 .v.t/Pf > tg/=.h.t//2 D 0, an application of part
(C2) of Theorem 3.3.19 gives

Pf > tg X f:d:


kC1 g.ut  Sk /1fSk utg ) .E/J˛; ˇ=2 .u/:
g.t/ k0

If further   0 a.s. and g is nondecreasing (which implies ˇ  0), then, according


to Corollary 3.3.20, the limit relation takes place in the J1 -topology on DŒ0; 1/.
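A hedged numerical aside (not from the original text): the stochastic-integral representation of $V_\beta$ can be checked by approximating $\int_{[0,u]}(u-y)^{\beta/2}\,\mathrm dS_2(y)$ with a Riemann sum against Brownian increments; the sample variance should then match $\mathbb EV_\beta(u)^2=\int_0^u(u-y)^{\beta}\,\mathrm dy=u^{\beta+1}/(\beta+1)$.

```python
import numpy as np

rng = np.random.default_rng(11)
beta, u = 1.0, 2.0
n_grid, n_rep = 500, 20_000
dt = u / n_grid
t_left = np.arange(n_grid) * dt     # left endpoints of the grid cells

# Approximate V_beta(u) = int_[0,u] (u - y)^{beta/2} dS_2(y) by a
# Riemann sum against Brownian increments dS_2 ~ N(0, dt).
dB = rng.normal(0.0, np.sqrt(dt), size=(n_rep, n_grid))
V = ((u - t_left) ** (beta / 2) * dB).sum(axis=1)

# The covariance formula gives Var V_beta(u) = u^{beta+1}/(beta+1) = 2 here.
print(V.mean(), V.var())  # ~0 and ~2
```

The discretization bias of the left-endpoint sum is of order $1/n_{\mathrm{grid}}$, negligible at this resolution.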
Example 3.3.3 Let $\Theta:=(\Theta(t))_{t\ge0}$ be a stationary Ornstein–Uhlenbeck process defined by
\[
\Theta(t)=e^{-t}\theta+\int_{[0,\,t]}e^{-(t-y)}\,\mathrm dS_2(y),\quad t\ge0
\]
where $\theta$ is a normally distributed random variable with mean zero and variance $1/2$ independent of a Brownian motion $S_2$. The process $\Theta$ and the random variable $\xi$ may be arbitrarily dependent. Put $X(t)=(t+1)^{\beta/2}\Theta(t)$ for $\beta\in(-1,0)$. Then $\mathbb EX(t)=0$ and $f(u,w)=\mathbb EX(u)X(w)=2^{-1}(u+1)^{\beta/2}(w+1)^{\beta/2}e^{-|u-w|}$ from which we conclude that $f$ is fictitious regularly varying in $\mathbb R_+^2$ of index $\beta$. By stationarity, for each $t>0$, $\Theta(t)$ has the same distribution as $\theta$. Hence,
\[
\mathbb E\big[X(t)^2\mathbb{1}_{\{|X(t)|>y\sqrt{tv(t)}\}}\big]=(t+1)^{\beta}\,\mathbb E\big[\theta^2\mathbb{1}_{\{|\theta|>y\sqrt{tv(t)}\,(t+1)^{-\beta/2}\}}\big]=o(t^{\beta}),
\]
i.e., condition (3.39) holds. If $\mu<\infty$, an application of Theorem 3.3.9 yields
\[
\frac{\sum_{k\ge0}X_{k+1}(ut-S_k)\mathbb{1}_{\{S_k\le ut\}}}{\sqrt{(2\mu)^{-1}t^{\beta+1}}}\ \stackrel{\mathrm{f.d.}}{\Longrightarrow}\ V_\beta(u),
\]
the limit process being a centered Gaussian process with independent values (white noise).
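As an illustrative aside (not in the original text), the stationary structure underlying Example 3.3.3 is easy to probe numerically; the sketch below uses the exact Gaussian transition of the Ornstein–Uhlenbeck process and checks $\operatorname{Var}\Theta(t)=1/2$ and $\mathbb E\Theta(0)\Theta(t)=2^{-1}e^{-t}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n_rep, dt, n_steps = 50_000, 0.01, 300   # simulate up to t = 3

# Exact discretization of the stationary OU process of Example 3.3.3:
#   Theta(t+dt) = e^{-dt} Theta(t) + N(0, (1 - e^{-2 dt})/2),
# started from Theta(0) = theta ~ N(0, 1/2).
theta0 = rng.normal(0.0, np.sqrt(0.5), size=n_rep)
a, s = np.exp(-dt), np.sqrt((1.0 - np.exp(-2 * dt)) / 2.0)
theta_t = theta0.copy()
for _ in range(n_steps):
    theta_t = a * theta_t + s * rng.normal(size=n_rep)

# Stationarity: Var Theta(t) = 1/2 for every t, and the covariance
# E Theta(0) Theta(t) = e^{-t}/2 with t = 3 here.
print(theta_t.var(), (theta0 * theta_t).mean())  # ~0.5 and ~e^{-3}/2
```

This matches the covariance $f(u,w)=2^{-1}(u+1)^{\beta/2}(w+1)^{\beta/2}e^{-|u-w|}$ stated in the example once the deterministic factor $(t+1)^{\beta/2}$ is reinstated.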

Example 3.3.4 Let $X(t)=S_2((t+1)^{-\alpha})$, $\mathbb P\{\xi>t\}\sim t^{-\alpha}$ and assume that $X$ and $\xi$ are independent. Then $f(u,w)=\mathbb EX(u)X(w)$ is uniformly regularly varying of index $-\alpha$ in strips in $\mathbb R_+^2$ with limit function $C(u,w)=(u\vee w)^{-\alpha}$. Relation (3.42) follows from
\[
\mathbb E\big[(X(t))^2\mathbb{1}_{\{|X(t)|>y\}}\big]=(t+1)^{-\alpha}\,\mathbb E\big[(S_2(1))^2\mathbb{1}_{\{|S_2(1)|>y(t+1)^{\alpha/2}\}}\big]=o(t^{-\alpha})
\]
for all $y>0$. Thus, Theorem 3.3.10 (see also Remark 3.3.11), in which we take $u(t)\equiv1$, applies and yields $\sum_{k\ge0}X_{k+1}(ut-S_k)\mathbb{1}_{\{S_k\le ut\}}\stackrel{\mathrm{f.d.}}{\Longrightarrow}Z_{\alpha,-\alpha}(u)$.

3.3.6 Properties of the Limit Processes

Throughout the section the phrase "a process $R$ is well defined" means that the random variable $R(u)$ is a.s. finite for any fixed $u>0$.
Processes $V_\beta$ (See Definition 3.3.4)
Lemma 3.3.22 Under the assumptions of Theorem 3.3.9 the process $V_\beta$ is well defined.
Proof If $f(u,w)$ is fictitious regularly varying in $\mathbb R_+^2$, then $V_\beta$ is a Gaussian process with independent values.
Suppose now that $f(u,w)$ is uniformly regularly varying in strips in $\mathbb R_+^2$. Then relation (3.37) (with $f$ replacing $r$) ensures continuity of the function $u\mapsto C(u,u+w)$ on $(0,\infty)$ for each $w>0$ (an accurate proof of a similar fact is given on pp. 2–3 in [266]). From the Cauchy–Schwarz inequality, we deduce that
\[
|f(u,w)|\le2^{-1}\big(v(u)+v(w)\big),\quad u,w\ge0 \tag{3.60}
\]
whence
\[
C(u-y,w-y)\le2^{-1}\big((u-y)^{\beta}+(w-y)^{\beta}\big). \tag{3.61}
\]
Consequently,
\[
\int_0^uC(u-y,w-y)\,\mathrm dy<\infty,\quad 0<u\le w
\]
because $\beta>-1$. Since $(u,w)\mapsto C(u,w)$ is positive semidefinite, so is $(u,w)\mapsto\int_0^uC(u-y,w-y)\,\mathrm dy$, $0<u\le w$. Hence the latter function is the covariance function of some Gaussian process which proves the existence of $V_\beta$. $\square$
Fractionally Integrated $\alpha$-Stable Lévy Process $I_{\alpha,\rho}$ (See Definition 3.3.5)
Lemma 3.3.23 Whenever $\rho>-1/\alpha$, the process $I_{\alpha,\rho}$ is well defined.
Proof When $\rho>0$, we have
\[
|I_{\alpha,\rho}(u)|\le\rho\int_0^u|S_\alpha(u-y)|\,y^{\rho-1}\,\mathrm dy\le u^{\rho}\sup_{0\le y\le u}|S_\alpha(y)|
\]
for $u\ge0$. When $\rho\in(-1/\alpha,0)$ we use stationarity of increments of $S_\alpha$ along with self-similarity with index $1/\alpha$ to obtain
\[
\mathbb E\int_0^u|S_\alpha(u)-S_\alpha(y)|(u-y)^{\rho-1}\,\mathrm dy=\int_0^u\mathbb E|S_\alpha(u-y)|(u-y)^{\rho-1}\,\mathrm dy=\mathbb E|S_\alpha(1)|\int_0^u(u-y)^{1/\alpha+\rho-1}\,\mathrm dy=(1/\alpha+\rho)^{-1}\mathbb E|S_\alpha(1)|\,u^{1/\alpha+\rho}
\]
for $u\ge0$. Thus, in both cases the integrals defining $I_{\alpha,\rho}$ exist in the a.s. sense. $\square$
Further, we provide a result on sample path properties of $I_{\alpha,\rho}$.
Lemma 3.3.24
(a) If either $\rho>0$ or $\alpha=2$, then $I_{\alpha,\rho}$ has a.s. continuous paths.
(b) If $1<\alpha<2$ and $\rho\in(-1/\alpha,0)$, then every version $I$ of $I_{\alpha,\rho}$ is unbounded on every interval of positive length, that is, there is an event $\Omega_0$ of probability 1 such that $\sup_{a<t<b}|I(t)|=\infty$ for all $0\le a<b$ on $\Omega_0$.
Proof for (a). Assume first that $\rho>0$ and $\alpha\in(1,2]$. With $\varepsilon>0$ write for any $u\ge0$ (deterministic or random)
\[
\rho^{-1}\big|I_{\alpha,\rho}(u+\varepsilon)-I_{\alpha,\rho}(u)\big|\le\int_0^u\big|(u+\varepsilon-y)^{\rho-1}-(u-y)^{\rho-1}\big|\,\big|S_\alpha(y)\big|\,\mathrm dy+\int_u^{u+\varepsilon}(u+\varepsilon-y)^{\rho-1}\big|S_\alpha(y)\big|\,\mathrm dy.
\]
The first integral converges to zero a.s. as $\varepsilon\to0+$ by monotone convergence. The second integral does not exceed $\sup_{y\in[u,\,u+\varepsilon]}|S_\alpha(y)|\int_0^{\varepsilon}y^{\rho-1}\,\mathrm dy$ which converges to $|S_\alpha(u+)|\cdot0=0$ a.s. as $\varepsilon\to0+$. The case $\varepsilon<0$ can be investigated similarly.
It remains to treat the case $\alpha=2$ and $\rho\in(-1/2,0)$. We first observe that $u\mapsto u^{\rho}S_2(u)$ is a.s. continuous on $(0,\infty)$. Further, from Theorem 1.14 in [219] (Lévy's modulus of continuity), we conclude that for every $T>0$, there exists some measurable set $\Omega_0=\Omega_0(T)\subset\Omega$ with $\mathbb P(\Omega_0)=1$ such that, for all $\gamma\in(0,1/2)$,
\[
\lim_{h\to0+}\frac{\sup_{u\in[0,T]}|S_2(u+h,\omega)-S_2(u,\omega)|}{h^{\gamma}}=0,\quad\omega\in\Omega_0. \tag{3.62}
\]
Fix $T>0$, $\gamma\in(-\rho,1/2)$ and $\omega\in\Omega_0(T)$ and set
\[
\phi(y):=y^{\gamma+\rho-1}\quad\text{and}\quad K(u,y):=y^{-\gamma}\big(S_2(u,\omega)-S_2(u-y,\omega)\big)\mathbb{1}_{(0,\,u]}(y).
\]
Then
\[
\int_0^u\big(S_2(u,\omega)-S_2(y,\omega)\big)(u-y)^{\rho-1}\,\mathrm dy=\int_0^uK(u,y)\phi(y)\,\mathrm dy.
\]
With $0<t<u<T$ write
\[
\Big|\int_0^uK(u,y)\phi(y)\,\mathrm dy-\int_0^tK(t,y)\phi(y)\,\mathrm dy\Big|\le\int_0^t|K(u,y)-K(t,y)|\phi(y)\,\mathrm dy+\sup_{y\in[0,T]}|K(u,y)|\int_t^u\phi(y)\,\mathrm dy. \tag{3.63}
\]
Since $\omega$ is such that (3.62) holds, one can deduce that
\[
\sup_{0\le y\le u\le T}|K(u,y)|<\infty.
\]
This implies that each of the two summands in (3.63) tends to 0 as $t\to u$ where for the first summand one additionally needs the dominated convergence theorem. Starting with $0<u<t<T$ and repeating the argument proves that $I_{2,\rho}$ is a.s. continuous on $(0,T)$. Since $T>0$ was arbitrary, we infer that $I_{2,\rho}$ is a.s. continuous on $(0,\infty)$.
For the proof of (b), we refer the reader to Proposition 2.13 (b) in [147]. $\square$
u
Here are some other properties of $I_{\alpha,\rho}$.
(P1) $I_{\alpha,\rho}$ is self-similar with Hurst index $1/\alpha+\rho$, i.e., for every $c>0$,
\[
\big(I_{\alpha,\rho}(cu)\big)_{u>0}\stackrel{\mathrm{f.d.}}{=}\big(c^{1/\alpha+\rho}I_{\alpha,\rho}(u)\big)_{u>0}
\]
where $\stackrel{\mathrm{f.d.}}{=}$ denotes equality of finite-dimensional distributions.
This follows from Theorem 3.3.12 and the fact that the functions $g(t)h(t)$ are regularly varying of index $1/\alpha+\rho$ (see Lemma 3.3.31; the definition of $g$ can be found in the paragraph following formula (3.44)).
(P2) For fixed $u>0$,
\[
I_{\alpha,\rho}(u)\stackrel{d}{=}\Big(\frac{u^{\alpha\rho+1}}{\alpha\rho+1}\Big)^{1/\alpha}S_\alpha(1)
\]
which shows that $I_{2,\rho}(u)$ has a normal distribution and $I_{\alpha,\rho}(u)$ for $\alpha\in(1,2)$ has a spectrally negative $\alpha$-stable distribution.
Proof We only prove this for $\rho>0$. The proof for the case $\rho\in(-1/\alpha,0)$ can be found in [147].
By self-similarity of $I_{\alpha,\rho}$ it is sufficient to show that
\[
I_{\alpha,\rho}(1)=\rho\int_0^1S_\alpha(y)(1-y)^{\rho-1}\,\mathrm dy=\int_{[0,1]}S_\alpha(1-y)\,\mathrm d(y^{\rho})\stackrel{d}{=}(\alpha\rho+1)^{-1/\alpha}S_\alpha(1).
\]
The integral $\int_{[0,1]}S_\alpha(1-y)\,\mathrm d(y^{\rho})$ exists as a Riemann–Stieltjes integral and as such can be approximated by
\[
\sum_{k=1}^nS_\alpha(1-k/n)\big((k/n)^{\rho}-((k-1)/n)^{\rho}\big)=\sum_{k=1}^n\big(S_\alpha(1-k/n)-S_\alpha(1-(k+1)/n)\big)(k/n)^{\rho}=:I_n.
\]
Since $S_\alpha$ has independent and stationary increments, we conclude that
\[
\log\mathbb E\exp(\mathrm izI_n)=n^{-1}\sum_{k=1}^n\log\mathbb E\exp\big(\mathrm iz(k/n)^{\rho}S_\alpha(1)\big),\quad z\in\mathbb R.
\]
Letting $n\to\infty$ and using Lévy's continuity theorem for characteristic functions we arrive at
\[
\log\mathbb E\exp\big(\mathrm izI_{\alpha,\rho}(1)\big)=\int_0^1\log\mathbb E\exp\big(\mathrm izy^{\rho}S_\alpha(1)\big)\,\mathrm dy,\quad z\in\mathbb R
\]
which proves the stated distributional equality. $\square$
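(Illustrative aside, with parameters chosen for convenience.) For $\alpha=2$ property (P2) predicts $I_{2,\rho}(1)\sim N\big(0,1/(2\rho+1)\big)$, and the approximating sums $I_n$ from the proof make this easy to check by simulation:

```python
import numpy as np

rng = np.random.default_rng(3)
rho, n, n_rep = 0.5, 1000, 10_000

# Brownian path S_2 on the grid j/n, j = 0..n (the alpha = 2 case of (P2)).
S = np.zeros((n_rep, n + 1))
S[:, 1:] = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), size=(n_rep, n)), axis=1)

# Approximating sums from the proof of (P2):
#   I_n = sum_{k=1}^n S_2(1 - k/n) ((k/n)^rho - ((k-1)/n)^rho).
k = np.arange(1, n + 1)
w = (k / n) ** rho - ((k - 1) / n) ** rho
I_n = (S[:, n - k] * w).sum(axis=1)

# (P2) with u = 1, alpha = 2 predicts I_{2,rho}(1) ~ N(0, 1/(2 rho + 1)).
print(I_n.mean(), I_n.var())  # near 0 and 0.5
```

The sample variance agrees with $(2\rho+1)^{-1}=1/2$ to Monte Carlo accuracy, in line with the characteristic-function computation above.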
(P3) Whenever $\rho>-1/\alpha$ and $\rho\ne0$ the increments of $I_{\alpha,\rho}$ are neither independent nor stationary.
Proof We only give a sketch of the proof for the case $\rho>0$. If the increments were stationary the characteristic function of $I_{\alpha,\rho}(u)-I_{\alpha,\rho}(v)$ for $0<v<u$ would be a function of $u-v$. This is however not the case as is seen from the formula
\[
\log\mathbb E\exp\big(\mathrm iz(I_{\alpha,\rho}(u)-I_{\alpha,\rho}(v))\big)=\int_0^u\log\mathbb E\exp\Big(\mathrm iz\big((u-y)^{\rho}-(v-y)^{\rho}\mathbb{1}_{[0,\,v]}(y)\big)S_\alpha(1)\Big)\,\mathrm dy,\quad z\in\mathbb R.
\]
Further, with $u$ and $v$ as above we see that $I_{\alpha,\rho}(v)=\rho\int_0^vS_\alpha(y)(v-y)^{\rho-1}\,\mathrm dy$ and
\[
I_{\alpha,\rho}(u)-I_{\alpha,\rho}(v)=\rho\int_0^vS_\alpha(y)\big((u-y)^{\rho-1}-(v-y)^{\rho-1}\big)\,\mathrm dy+S_\alpha(v)(u-v)^{\rho}+\rho\int_0^{u-v}\big(S_\alpha(y+v)-S_\alpha(v)\big)(u-v-y)^{\rho-1}\,\mathrm dy
\]
are dependent because while $I_{\alpha,\rho}(v)$ and the last summand on the right-hand side are independent, $I_{\alpha,\rho}(v)$ and the sum of the first two terms on the right-hand side are strongly dependent. In the case $\alpha\in(1,2)$ there is a short alternative proof. If the increments were independent the process $I_{\alpha,\rho}$, which is a.s. continuous by Lemma 3.3.24(a), would be Gaussian (see Theorem 5 on p. 189 in [93]). However this is not the case. $\square$
Fractionally Integrated Inverse Stable Subordinators (See Definition 3.3.7)
Lemma 3.3.25 The process $J_{\alpha,\rho}$ is well defined for all $\rho\in\mathbb R$.
Proof When $\rho>0$, this follows trivially from $J_{\alpha,\rho}(u)\le u^{\rho}W_\alpha^{\leftarrow}(u)$ a.s. for $u\ge0$. Recall that $W_\alpha$ is an $\alpha$-stable subordinator and $W_\alpha^{\leftarrow}$ its inverse. When $\rho<0$, the claim of the lemma is a consequence of
\[
J_{\alpha,\rho}(u)=\int_{[0,\,u]}(u-y)^{\rho}\,\mathrm dW_\alpha^{\leftarrow}(y)=\int_{[0,\,W_\alpha(W_\alpha^{\leftarrow}(u)-)]}(u-y)^{\rho}\,\mathrm dW_\alpha^{\leftarrow}(y)\le\big(u-W_\alpha(W_\alpha^{\leftarrow}(u)-)\big)^{\rho}\,W_\alpha^{\leftarrow}(u)<\infty
\]
where the finiteness follows from $W_\alpha(W_\alpha^{\leftarrow}(u)-)<u$ a.s. for each fixed $u>0$. $\square$
Integration by parts yields
\[
J_{\alpha,\rho}(u)=\rho\int_0^u(u-y)^{\rho-1}W_\alpha^{\leftarrow}(y)\,\mathrm dy,\quad u>0
\]
when $\rho>0$ and
\[
J_{\alpha,\rho}(u)=u^{\rho}W_\alpha^{\leftarrow}(u)+|\rho|\int_0^u\big(W_\alpha^{\leftarrow}(u)-W_\alpha^{\leftarrow}(u-y)\big)y^{\rho-1}\,\mathrm dy=|\rho|\int_0^{\infty}\big(W_\alpha^{\leftarrow}(u)-W_\alpha^{\leftarrow}(u-y)\big)y^{\rho-1}\,\mathrm dy,\quad u>0
\]
when $-\alpha<\rho<0$. These representations show that $J_{\alpha,\rho}$ is nothing else but the Riemann–Liouville fractional integral (up to a multiplicative constant) of $W_\alpha^{\leftarrow}$ in the first case and the Marchaud fractional derivative of $W_\alpha^{\leftarrow}$ in the second (see p. 33 and p. 111 in [244]).
We proceed with sample path properties of $J_{\alpha,\rho}$.
Lemma 3.3.26
(a) Let $\alpha+\rho\in(0,1]$. Then $J_{\alpha,\rho}$ is a.s. (locally) Hölder continuous with arbitrary exponent $\gamma<\alpha+\rho$. Let $\alpha+\rho>1$. Then $J_{\alpha,\rho}$ is $\lfloor\alpha+\rho\rfloor$-times continuously differentiable on $[0,\infty)$ a.s.
(b) Let $\alpha+\rho\le0$. Then every version $J$ of $J_{\alpha,\rho}$ is unbounded with positive probability on every interval of positive length, that is, there is an event $\Omega_0$ of positive probability such that $\sup_{a<t<b}J(t)=\infty$ for all $0<a<b$ on $\Omega_0$.
Proof for (a) The proof of Hölder continuity can be found in Theorem 2.7 of [143]. In the case $\alpha+\rho>1$ we use the statement for the case $\alpha+\rho\in(0,1]$ along with the equality
\[
J_{\alpha,\rho}(u)=\rho\int_0^uJ_{\alpha,\rho-1}(y)\,\mathrm dy,\quad u\ge0
\]
which shows that $J_{\alpha,\rho}$ is a.s. continuously differentiable whenever $J_{\alpha,\rho-1}$ is a.s. continuous.
Proof for (b) We shall write $J_{\alpha,\rho}$ for $J$. Pick arbitrary positive $c<d$ and note that
\[
\mathbb P\big\{[W_\alpha(c),W_\alpha(d)]\subset(a,b)\big\}=\mathbb P\{a<W_\alpha(c)<W_\alpha(d)<b\}>0.
\]
We shall now prove that
\[
\sup_{u\in[W_\alpha(c),\,W_\alpha(d)]}J_{\alpha,\rho}(u)=\infty\quad\text{a.s.} \tag{3.64}
\]
thereby showing that $\sup_{a<u<b}J_{\alpha,\rho}(u)=\infty$ with positive probability.
According to Theorem 2 in [92], there exists an event $\Omega_0'$ with $\mathbb P(\Omega_0')=1$ such that for any $\omega\in\Omega_0'$
\[
\limsup_{y\uparrow s}\frac{W_\alpha(s,\omega)-W_\alpha(y,\omega)}{(s-y)^{1/\alpha}}\le r
\]
for some deterministic constant $r\in(0,\infty)$ and some $s:=s(\omega)\in[c,d]$. Fix any $\omega\in\Omega_0'$. There exists $s_1:=s_1(\omega)$ such that
\[
\big(W_\alpha(s,\omega)-W_\alpha(y,\omega)\big)^{\rho}\ge(s-y)^{\rho/\alpha}\,r^{\rho}/2
\]
whenever $y\in(s_1,s)$. Set $u:=u(\omega)=W_\alpha(s,\omega)$ and write
\[
J_{\alpha,\rho}(u)=\int_{[0,\,u(\omega)]}(u(\omega)-y)^{\rho}\,\mathrm dW_\alpha^{\leftarrow}(y,\omega)=\int_{[0,\,W_\alpha(s,\omega)]}(W_\alpha(s,\omega)-y)^{\rho}\,\mathrm dW_\alpha^{\leftarrow}(y,\omega)
\]
\[
=\int_0^s\big(W_\alpha(s,\omega)-W_\alpha(y,\omega)\big)^{\rho}\,\mathrm dy\ge\int_{s_1}^s\big(W_\alpha(s,\omega)-W_\alpha(y,\omega)\big)^{\rho}\,\mathrm dy\ge2^{-1}r^{\rho}\int_{s_1}^s(s-y)^{\rho/\alpha}\,\mathrm dy=\infty.
\]
Since $u(\omega)\in[W_\alpha(c),W_\alpha(d)]$ for all $\omega\in\Omega_0'$, we obtain (3.64). Note that the claim of part (b) then holds with $\Omega_0=\Omega_0'\cap\{[W_\alpha(c),W_\alpha(d)]\subset(a,b)\}$. The proof of Lemma 3.3.26 is complete. $\square$
Here is the list of some other properties of $J_{\alpha,\rho}$.
(Q1) $J_{\alpha,\rho}$ is self-similar with Hurst index $\alpha+\rho$.
This follows from Theorem 3.3.13 and the fact that the functions $h(t)/\mathbb P\{\xi>t\}$ are regularly varying of index $\alpha+\rho$.
(Q2) Let $\rho\in\mathbb R$. The following distributional equality holds
\[
J_{\alpha,\rho}(1)\stackrel{d}{=}\int_0^{\infty}e^{-cZ_\alpha(t)}\,\mathrm dt
\]
where $c:=(\alpha+\rho)/\alpha$, and $Z_\alpha:=\big(Z_\alpha(t)\big)_{t\ge0}$ is a drift-free killed subordinator with the unit killing rate and the Lévy measure
\[
\nu_\alpha(\mathrm dt)=\frac{e^{-t/\alpha}}{(1-e^{-t/\alpha})^{\alpha+1}}\,\mathbb{1}_{(0,\infty)}(t)\,\mathrm dt.
\]
In particular, while the distribution of $J_{\alpha,-\alpha}(u)$ is exponential with unit mean, the distribution of $J_{\alpha,\rho}(u)$ for $\alpha+\rho>0$ is uniquely determined by its moments
\[
\mathbb E\big(J_{\alpha,\rho}(u)\big)^k=u^{k(\alpha+\rho)}\,\frac{k!}{(\Gamma(1-\alpha))^k}\prod_{j=1}^k\frac{\Gamma(\rho+1+(j-1)(\alpha+\rho))}{\Gamma(j(\alpha+\rho)+1)} \tag{3.65}
\]
for $k\in\mathbb N$ where $\Gamma(\cdot)$ is the gamma function.


Proof Consider a family of processes $V_\alpha^{(u)}(t):=\big((u^{1/\alpha}-W_\alpha(t))^{\alpha}\big)_{0\le t<W_\alpha^{\leftarrow}(u^{1/\alpha})}$ indexed by the initial value $u>0$. This family forms a semi-stable Markov process of index 1, i.e.,
\[
\mathbb P\{rV_\alpha^{(u)}(t/r)\in\cdot\}=\mathbb P\{V_\alpha^{(ru)}(t)\in\cdot\}
\]
for all $r>0$. Then, according to Theorem 4.1 in [191], with $u$ fixed
\[
(u^{1/\alpha}-W_\alpha(t))^{\alpha}=u\exp\big(-Z_\alpha(\tau(t/u))\big)\quad\text{for }0\le t\le uI\quad\text{a.s.}
\]
for some killed subordinator $Z_\alpha:=(Z_\alpha^{(u)}(t))_{t\ge0}=(Z_\alpha(t))_{t\ge0}$ where
\[
I:=\int_0^{\infty}\exp(-Z_\alpha(t))\,\mathrm dt=u^{-1}\inf\{v:W_\alpha(v)>u^{1/\alpha}\}=u^{-1}W_\alpha^{\leftarrow}(u^{1/\alpha}) \tag{3.66}
\]
and $\tau(t):=\inf\{s:\int_0^s\exp(-Z_\alpha(v))\,\mathrm dv\ge t\}$ for $0\le t\le I$ (except in one place, we suppress the dependence of $Z_\alpha$, $I$ and $\tau(t)$ on $u$ for notational simplicity). With this at hand
\[
J_{\alpha,\rho}(u^{1/\alpha})=\int_0^{\infty}\big((u^{1/\alpha}-W_\alpha(t))^{\alpha}\big)^{\rho/\alpha}\mathbb{1}_{\{W_\alpha(t)\le u^{1/\alpha}\}}\,\mathrm dt=u^{\rho/\alpha}\int_0^{uI}\exp\big(-(\rho/\alpha)Z_\alpha(\tau(t/u))\big)\,\mathrm dt
\]
\[
=u^{1+\rho/\alpha}\int_0^{I}\exp\big(-(\rho/\alpha)Z_\alpha(\tau(t))\big)\,\mathrm dt=u^{1+\rho/\alpha}\int_0^{\infty}\exp\big(-(1+\rho/\alpha)Z_\alpha(t)\big)\,\mathrm dt.
\]
Replacing $u$ with $u^{\alpha}$ we infer
\[
J_{\alpha,\rho}(u)=u^{\alpha+\rho}\int_0^{\infty}\exp\big(-cZ_\alpha^{(u^{\alpha})}(t)\big)\,\mathrm dt\quad\text{a.s.} \tag{3.67}
\]
The latter integral is known as an exponential functional of a subordinator. We have already encountered these objects on p. 47. In order to prove that the killed subordinator $Z_\alpha$ does indeed have unit killing rate and the Lévy measure $\nu_\alpha$ it suffices to show that the Laplace exponent of $Z_\alpha$ equals
\[
\Phi_\alpha(s):=-\log\mathbb Ee^{-sZ_\alpha(1)}=1+\int_{[0,\infty)}(1-e^{-st})\,\nu_\alpha(\mathrm dt)=\frac{\Gamma(1-\alpha)\Gamma(1+\alpha s)}{\Gamma(1+\alpha(s-1))} \tag{3.68}
\]
for $s\ge0$.
We shall now check that $W_\alpha^{\leftarrow}(1)$ has the Mittag–Leffler distribution with parameter $\alpha$, i.e., the distribution that is uniquely determined by its moments
\[
\mathbb E\big(W_\alpha^{\leftarrow}(1)\big)^n=\frac{n!}{(\Gamma(1-\alpha))^n\,\Gamma(1+n\alpha)},\quad n\in\mathbb N
\]
(see (2.12) and the centered formula following it). Self-similarity of $W_\alpha$ with index $1/\alpha$ allows us to conclude that
\[
\mathbb P\{W_\alpha^{\leftarrow}(1)\le t\}=\mathbb P\{W_\alpha(t)\ge1\}=\mathbb P\{t^{1/\alpha}W_\alpha(1)\ge1\}=\mathbb P\{(W_\alpha(1))^{-\alpha}\le t\}
\]
for $t>0$ which shows that $W_\alpha^{\leftarrow}(1)\stackrel{d}{=}(W_\alpha(1))^{-\alpha}$. Recall that the equality
\[
\mathbb EX^{-\lambda}=(\Gamma(\lambda))^{-1}\int_0^{\infty}y^{\lambda-1}\,\mathbb Ee^{-yX}\,\mathrm dy
\]
holds for positive random variables $X$ and $\lambda>0$. Setting $X=W_\alpha(1)$ and $\lambda=p\alpha$ for $p>0$ we obtain
\[
\mathbb E\big(W_\alpha^{\leftarrow}(1)\big)^{p}=\mathbb E\big(W_\alpha(1)\big)^{-p\alpha}=\frac{\Gamma(1+p)}{(\Gamma(1-\alpha))^{p}\,\Gamma(1+p\alpha)},\quad p>0
\]
which is more than enough to justify the claim.
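(Illustrative aside, relying on a classical realization not stated in the text.) For $\alpha=1/2$ the stable subordinator with Laplace exponent $\Gamma(1-\alpha)s^{\alpha}=\sqrt{\pi}\sqrt s$ can be realized as the first-passage process of a Brownian motion $B$ across the levels $ct$ with $c=\sqrt{\pi/2}$, so that $W_{1/2}^{\leftarrow}(1)\stackrel{d}{=}c^{-1}\max_{[0,1]}B$. The Mittag–Leffler moment formula then predicts that both the first and the second moment equal $2/\pi$, which the sketch below checks by simulation.

```python
import math
import numpy as np

rng = np.random.default_rng(5)
n, n_rep = 1000, 10_000

# Realize W_{1/2}(t) = inf{s : B(s) > c t}, c = sqrt(pi/2); its inverse
# at time 1 is then max_{[0,1]} B / c (the maximum includes B(0) = 0).
B = np.cumsum(rng.normal(0.0, math.sqrt(1.0 / n), size=(n_rep, n)), axis=1)
W_inv = np.maximum(B.max(axis=1), 0.0) / math.sqrt(math.pi / 2)

# Mittag-Leffler moments n!/(Gamma(1-alpha)^n Gamma(1+n*alpha)):
# for alpha = 1/2 both E W_inv(1) and E W_inv(1)^2 equal 2/pi ~ 0.6366.
print(W_inv.mean(), (W_inv ** 2).mean())
```

The small downward bias comes from evaluating the Brownian maximum on a grid; it vanishes as the grid is refined.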


Using (3.66) along with self-similarity of $W_\alpha^{\leftarrow}$ we conclude that $I$ has the same Mittag–Leffler distribution. It follows that the moments of $I$ can be written as
\[
\mathbb EI^n=\frac{n!}{(\Gamma(1-\alpha))^n\,\Gamma(1+n\alpha)}=\frac{n!}{\Phi_\alpha(1)\cdot\ldots\cdot\Phi_\alpha(n)},\quad n\in\mathbb N
\]
which, by (2.10), implies that the Laplace exponent $\Phi_\alpha$ satisfies (3.68).
If $\rho=-\alpha$, which is equivalent to $c=0$, we infer with the help of (3.67)
\[
J_{\alpha,-\alpha}(u)\stackrel{d}{=}\int_0^{\infty}e^{-cZ_\alpha(t)}\,\mathrm dt=\sup\{t\ge0:Z_\alpha(t)<\infty\}=:R
\]
where $R$ has an exponential distribution with unit mean. Assume now that $\rho+\alpha>0$ which is equivalent to $c>0$. Since $cZ_\alpha$ is a killed subordinator with the Laplace exponent $\Phi_\alpha(cs)$ we obtain
\[
\mathbb E\big(J_{\alpha,\rho}(1)\big)^k=\frac{k!}{\Phi_\alpha(c)\cdot\ldots\cdot\Phi_\alpha(ck)},\quad k\in\mathbb N
\]
by another application of formula (2.10). This proves (3.65) with $u=1$. For other $u>0$ formula (3.65) follows by self-similarity. From the inequality
\[
u^{-(\alpha+\rho)}J_{\alpha,\rho}(u)\stackrel{d}{=}\int_0^{\infty}e^{-cZ_\alpha(t)}\,\mathrm dt\le R,
\]
and the fact that $\mathbb Ee^{aR}<\infty$ for $a\in(0,1)$, we conclude that the distribution of $J_{\alpha,\rho}(u)$ has some finite exponential moments which entails that it is uniquely determined by its moments. $\square$

(Q3) Let $\alpha+\rho\ge0$. Then
\[
\mathbb EJ_{\alpha,\rho}(v)J_{\alpha,\rho}(u)=\frac{\Gamma(1+\rho)}{\Gamma(\alpha)(\Gamma(1-\alpha))^2\,\Gamma(1+\alpha+\rho)}\int_0^v(v-y)^{\rho}(u-y)^{\rho}\,y^{\alpha-1}\big((v-y)^{\alpha}+(u-y)^{\alpha}\big)\,\mathrm dy
\]
for $0<v\le u<\infty$.
This is Lemma 2.15 in [147].
(Q4) Let $\alpha+\rho\ge0$. The increments of $J_{\alpha,\rho}$ are neither independent nor stationary.
Proof When $\alpha+\rho>0$, the process $J_{\alpha,\rho}$ is a.s. continuous by Lemma 3.3.26(a). Thus, if the increments of $J_{\alpha,\rho}$ were independent, it would be Gaussian by Theorem 5 on p. 189 in [93], which is not the case. Let now $\rho=-\alpha$. Then
\[
\mathbb EJ_{\alpha,-\alpha}(v)J_{\alpha,-\alpha}(u)=1+\frac{1}{\Gamma(\alpha)\Gamma(1-\alpha)}\int_0^{v/u}(1-y)^{-\alpha}y^{\alpha-1}\,\mathrm dy
\]
for $0<v<u<\infty$ by (Q3). This entails
\[
0=\mathbb EJ_{\alpha,-\alpha}(v)\,\mathbb E\big(J_{\alpha,-\alpha}(u)-J_{\alpha,-\alpha}(v)\big)\ne\mathbb E\big[J_{\alpha,-\alpha}(v)\big(J_{\alpha,-\alpha}(u)-J_{\alpha,-\alpha}(v)\big)\big]
\]
where the first equality follows from $\mathbb EJ_{\alpha,-\alpha}(v)=1$ (see (Q2)). We have proved that the increments of $J_{\alpha,\rho}$ are not independent whenever $\alpha+\rho\ge0$.
When $\alpha+\rho\ne1$, the increments of $J_{\alpha,\rho}$ are not stationary because, by (Q2), $\mathbb EJ_{\alpha,\rho}(u)$ is a function of $u^{\alpha+\rho}$ rather than $u$. When $\alpha+\rho=1$, one can show with the help of (Q3) that, with $0<v<u$, $\mathbb E\big(J_{\alpha,\rho}(u)-J_{\alpha,\rho}(v)\big)^2$ is not a function of $u-v$. $\square$
Processes $Z_{\alpha,\beta}$ (See Definition 3.3.8)
Lemma 3.3.27 Let $\alpha\in(0,1)$, $\beta\in\mathbb R$ and let $C$ denote the limit function for $f(u,w)=\operatorname{Cov}(X(u),X(w))$, uniformly regularly varying in strips in $\mathbb R_+^2$ of index $\beta$. Then the process $Z_{\alpha,\beta}$ is well defined.
Proof Use Lemma 3.3.25 in combination with (3.61) to infer
\[
\Pi(s,t):=\int_{[0,\,s]}C(s-y,t-y)\,\mathrm dW_\alpha^{\leftarrow}(y)<\infty,\quad 0<s<t. \tag{3.69}
\]
In order to prove that the process $Z_{\alpha,\beta}$ is well defined, we shall show that the function $\Pi(s,t)$ is positive semidefinite, i.e., for any $m\in\mathbb N$, any $\lambda_1,\ldots,\lambda_m\in\mathbb R$ and any $0<u_1<\ldots<u_m<\infty$
\[
\sum_{j=1}^m\lambda_j^2\Pi(u_j,u_j)+2\sum_{1\le r<l\le m}\lambda_r\lambda_l\Pi(u_r,u_l)=\sum_{i=1}^{m-1}\int_{(u_{i-1},\,u_i]}\Big(\sum_{k=i}^m\lambda_k^2C(u_k-y,u_k-y)+2\sum_{i\le r<l\le m}\lambda_r\lambda_lC(u_r-y,u_l-y)\Big)\,\mathrm dW_\alpha^{\leftarrow}(y)
\]
\[
+\lambda_m^2\int_{(u_{m-1},\,u_m]}C(u_m-y,u_m-y)\,\mathrm dW_\alpha^{\leftarrow}(y)\ \ge\ 0\quad\text{a.s.}
\]
where $u_0:=0$. Since the second term is nonnegative a.s., it suffices to prove that so is the first. The function $(u,w)\mapsto C(u,w)$, $0<u\le w$, is positive semidefinite as a limit of positive semidefinite functions. Hence, for each $1\le i\le m-1$ and $y\in(u_{i-1},u_i]$,
\[
\sum_{k=i}^m\lambda_k^2C(u_k-y,u_k-y)+2\sum_{i\le r<l\le m}\lambda_r\lambda_lC(u_r-y,u_l-y)\ge0.
\]
Thus, the process $Z_{\alpha,\beta}$ does exist as a conditionally Gaussian process with covariance function $\Pi(s,t)$, $0<s\le t$. $\square$
Lemma 3.3.28 Let $\alpha\in(0,1)$ and $\beta\le-\alpha$. Any version of the process $Z_{\alpha,\beta}$ has paths in the Skorokhod space $D(0,\infty)$ with probability strictly less than 1. If further $C(u,w)=0$ for all $u\ne w$, $u,w>0$, then any version has paths in $D(0,\infty)$ with probability 0.
Proof Let $Z$ be a version of $Z_{\alpha,\beta}$. Pick arbitrary $0<c<d$. From the proof of Lemma 3.3.26(b) we know that
\[
\mathbb E\big[(Z(u))^2\,\big|\,W_\alpha^{\leftarrow}\big]=\int_{[0,\,u]}(u-y)^{\beta}\,\mathrm dW_\alpha^{\leftarrow}(y)=+\infty
\]
for a random $u\in[W_\alpha(c),W_\alpha(d)]$ whence
\[
\mathbb E\Big[\sup_{t\in[W_\alpha(c),\,W_\alpha(d)]}(Z(t))^2\,\Big|\,W_\alpha^{\leftarrow}\Big]=+\infty\quad\text{a.s.} \tag{3.70}
\]
Now observe that if $Z$ has paths in $D(0,\infty)$ a.s., then
\[
\mathbb P\Big\{\sup_{t\in[W_\alpha(c),\,W_\alpha(d)]}|Z(t)|<\infty\,\Big|\,W_\alpha^{\leftarrow}\Big\}=1. \tag{3.71}
\]
Note that the process $W_\alpha$ is measurable with respect to the $\sigma$-algebra generated by $W_\alpha^{\leftarrow}$ and that, given $W_\alpha^{\leftarrow}$, the process $Z$ is centered Gaussian. Hence, from Theorem 3.2 on p. 63 in [1] (applied to $(Z(t))_{t\in[W_\alpha(c),W_\alpha(d)]}$ and $(-Z(t))_{t\in[W_\alpha(c),W_\alpha(d)]}$, both conditionally given $W_\alpha^{\leftarrow}$), we conclude that (3.71) is equivalent to
\[
\mathbb E\Big[\sup_{t\in[W_\alpha(c),\,W_\alpha(d)]}(Z(t))^2\,\Big|\,W_\alpha^{\leftarrow}\Big]<\infty\quad\text{a.s.}
\]
which cannot hold due to (3.70). Hence $Z$ has paths in $D(0,\infty)$ with probability less than 1.
Finally, suppose that $C(u,w)=0$ for all $u\ne w$, $u,w>0$. Then, given $W_\alpha^{\leftarrow}$, the Gaussian process $Z$ has uncorrelated, hence independent, values. For any fixed $t>0$ and any decreasing sequence $(h_n)_{n\in\mathbb N}$ with $\lim_{n\to\infty}h_n=0$ we infer
\[
\mathbb P\big\{Z\text{ is right-continuous at }t\,\big|\,W_\alpha^{\leftarrow}\big\}\le\mathbb P\Big\{\limsup_{n\to\infty}Z(t+h_n)=Z(t)\,\Big|\,W_\alpha^{\leftarrow}\Big\}=0\quad\text{a.s.} \tag{3.72}
\]
which proves that $Z$ has paths in the Skorokhod space with probability 0. To justify (3.72) observe that, given $W_\alpha^{\leftarrow}$, the distribution of $Z(t)$ is Gaussian, hence continuous, while $\limsup_{n\to\infty}Z(t+h_n)$ is equal to a constant (possibly $\pm\infty$) a.s. by the Kolmogorov zero–one law which is applicable because $Z(t+h_1),Z(t+h_2),\ldots$ are (conditionally) independent. The proof of Lemma 3.3.28 is complete. $\square$

3.3.7 Proofs for Sections 3.3.2 and 3.3.3

For a $\sigma$-algebra $\mathcal G$ we shall write $\mathbb E_{\mathcal G}(\cdot)$ for $\mathbb E(\cdot|\mathcal G)$. Also, we recall the notation: for $t\ge0$, $\nu(t)=\inf\{k\in\mathbb N_0:S_k>t\}$ and $U(t)=\mathbb E\nu(t)=\sum_{k\ge0}\mathbb P\{S_k\le t\}$. In what follows all unspecified limit relations are assumed to hold as $t\to\infty$.
Proof of Theorem 3.3.9 We only investigate the case where $C(u,w)>0$ for some $u,w>0$, $u\ne w$. Modifications needed in the case where $C(u,w)=0$ for all $u,w>0$, $u\ne w$ should be clear from the subsequent presentation.
Recall from Lemma 3.3.22 and its proof that the limit process $V_\beta$ is well defined and that the function $u\mapsto C(u,u+w)$ is continuous on $(0,\infty)$ for each $w>0$. Without loss of generality we can and do assume that $X$ is centered, for it is the case for $X(t)-h(t)$. According to the Cramér–Wold device (see p. 232) it suffices to prove that
\[
\frac{\sum_{j=1}^m\alpha_j\sum_{k\ge0}X_{k+1}(u_jt-S_k)\mathbb{1}_{\{S_k\le u_jt\}}}{\sqrt{\mu^{-1}tv(t)}}\ \stackrel{d}{\longrightarrow}\ \sum_{j=1}^m\alpha_jV_\beta(u_j) \tag{3.73}
\]
for any $m\in\mathbb N$, any $\alpha_1,\ldots,\alpha_m\in\mathbb R$ and any $0<u_1<\ldots<u_m<\infty$. Note that the random variable $\sum_{j=1}^m\alpha_jV_\beta(u_j)$ has a normal distribution with mean $0$ and variance
\[
(1+\beta)^{-1}\sum_{j=1}^m\alpha_j^2u_j^{1+\beta}+2\sum_{1\le i<j\le m}\alpha_i\alpha_j\int_0^{u_i}C(u_i-y,u_j-y)\,\mathrm dy=:D(u_1,\ldots,u_m). \tag{3.74}
\]

Define the $\sigma$-algebras $\mathcal F_0:=\{\varnothing,\Omega\}$ and $\mathcal F_k:=\sigma\big((X_1,\xi_1),\ldots,(X_k,\xi_k)\big)$, $k\in\mathbb N$ and observe that
\[
\mathbb E_{\mathcal F_k}\Big(\sum_{j=1}^m\alpha_j\mathbb{1}_{\{S_k\le u_jt\}}X_{k+1}(u_jt-S_k)\Big)=0.
\]
Thus, in order to prove (3.73), one may use the martingale central limit theorem (Corollary 3.1 in [122]) whence it suffices to verify
\[
\sum_{k\ge0}\mathbb E_{\mathcal F_k}Z_{k+1,t}^2\ \stackrel{\mathbb P}{\longrightarrow}\ D(u_1,\ldots,u_m) \tag{3.75}
\]
and
\[
\sum_{k\ge0}\mathbb E_{\mathcal F_k}\big[Z_{k+1,t}^2\mathbb{1}_{\{|Z_{k+1,t}|>y\}}\big]\ \stackrel{\mathbb P}{\longrightarrow}\ 0 \tag{3.76}
\]
for all $y>0$ where
\[
Z_{k+1,t}:=\frac{\sum_{j=1}^m\alpha_j\mathbb{1}_{\{S_k\le u_jt\}}X_{k+1}(u_jt-S_k)}{\sqrt{\mu^{-1}tv(t)}},\quad k\in\mathbb N_0,\ t>0.
\]

Proof of (3.76) In view of the inequality
\[
(a_1+\ldots+a_m)^2\mathbb{1}_{\{|a_1+\ldots+a_m|>y\}}\le(|a_1|+\ldots+|a_m|)^2\mathbb{1}_{\{|a_1|+\ldots+|a_m|>y\}}\le m^2(|a_1|\vee\ldots\vee|a_m|)^2\mathbb{1}_{\{m(|a_1|\vee\ldots\vee|a_m|)>y\}}\le m^2\big(a_1^2\mathbb{1}_{\{|a_1|>y/m\}}+\ldots+a_m^2\mathbb{1}_{\{|a_m|>y/m\}}\big) \tag{3.77}
\]
which holds for $a_1,\ldots,a_m\in\mathbb R$, it is sufficient to show that
\[
\sum_{k\ge0}\mathbb{1}_{\{S_k\le t\}}\mathbb E_{\mathcal F_k}\Big[\frac{(X_{k+1}(t-S_k))^2}{\mu^{-1}tv(t)}\mathbb{1}_{\{|X_{k+1}(t-S_k)|>y\sqrt{\mu^{-1}tv(t)}\}}\Big]\ \stackrel{\mathbb P}{\longrightarrow}\ 0 \tag{3.78}
\]
for all $y>0$. We can take $t$ instead of $u_jt$ here because $v$ is regularly varying and $y>0$ is arbitrary.
Without loss of generality we assume that the function $t\mapsto tv(t)$ is increasing, for otherwise we could have worked with $(\beta+1)\int_0^tv(y)\,\mathrm dy$ (see Lemma 6.1.4(c)). By Markov's inequality and the aforementioned monotonicity, relation (3.78) follows if we can prove that
\[
\lim_{t\to\infty}\frac{1}{tv(t)}\int_{[0,\,t]}v_y(t-x)\,\mathrm dU(x)=0 \tag{3.79}
\]
for all $y>0$, where the definition of $v_y$ is given in (3.39). Recalling that $\mu<\infty$ and that $v$ is locally bounded, measurable, and regularly varying at infinity of index $\beta\in(-1,\infty)$, an application of Lemma 6.2.14 with $r_1=0$ and $r_2=1$ yields
\[
\int_{[0,\,t]}v(t-x)\,\mathrm dU(x)\le\mathrm{const}\cdot tv(t).
\]
Since, according to (3.39), $v_y(t)=o(v(t))$, (3.79) follows from the last centered formula in combination with Lemma 6.2.13(b).
Proof of (3.75) It can be checked that
\[
\sum_{k\ge0}\mathbb E_{\mathcal F_k}Z_{k+1,t}^2=\frac{\sum_{j=1}^m\alpha_j^2\sum_{k\ge0}\mathbb{1}_{\{S_k\le u_jt\}}v(u_jt-S_k)}{\mu^{-1}tv(t)}+\frac{2\sum_{1\le i<j\le m}\alpha_i\alpha_j\sum_{k\ge0}\mathbb{1}_{\{S_k\le u_it\}}f(u_it-S_k,\,u_jt-S_k)}{\mu^{-1}tv(t)}.
\]
We shall prove that
\[
\frac{\sum_{k\ge0}\mathbb{1}_{\{S_k\le u_it\}}v(u_it-S_k)}{\mu^{-1}tv(t)}=\frac{\int_{[0,\,u_i]}v((u_i-y)t)\,\mathrm d_y\nu(ty)}{\mu^{-1}tv(t)}\ \stackrel{\mathbb P}{\longrightarrow}\ \frac{u_i^{1+\beta}}{1+\beta} \tag{3.80}
\]
and
\[
\frac{\sum_{k\ge0}\mathbb{1}_{\{S_k\le u_it\}}f(u_it-S_k,\,u_jt-S_k)}{\mu^{-1}tv(t)}=\frac{\int_{[0,\,u_i]}f((u_i-y)t,(u_j-y)t)\,\mathrm d_y\nu(ty)}{\mu^{-1}tv(t)}\ \stackrel{\mathbb P}{\longrightarrow}\ \int_0^{u_i}C(u_i-y,u_j-y)\,\mathrm dy \tag{3.81}
\]
for all $1\le i<j\le m$.


Fix any $u_i<u_j$ and pick $\varepsilon\in(0,u_i)$. By the functional strong law of large numbers (Theorem 4 in [96])
\[
\lim_{t\to\infty}\sup_{y\in[0,\,u_i]}\Big|\frac{\nu(ty)}{\mu^{-1}t}-y\Big|=0\quad\text{a.s.}
\]
Also,
\[
\lim_{t\to\infty}\frac{v((u_i-y)t)}{v(t)}=(u_i-y)^{\beta}
\]
uniformly in $y\in[0,u_i-\varepsilon]$ by Lemma 6.1.4(a), and
\[
\lim_{t\to\infty}\frac{f((u_i-y)t,(u_j-y)t)}{v(t)}=C(u_i-y,u_j-y)
\]
uniformly in $y\in[0,u_i-\varepsilon]$ by virtue of (3.37) (with $f$ replacing $r$). Two applications of Lemma 6.4.2(b) (with $R_t(y)=\nu(ty)/(\mu^{-1}t)$) yield
\[
\int_{[0,\,u_i-\varepsilon]}\frac{v((u_i-y)t)}{v(t)}\,\mathrm d_y\Big(\frac{\nu(ty)}{\mu^{-1}t}\Big)\ \stackrel{\mathbb P}{\longrightarrow}\ \int_0^{u_i-\varepsilon}(u_i-y)^{\beta}\,\mathrm dy=\frac{u_i^{1+\beta}-\varepsilon^{1+\beta}}{1+\beta}
\]
and
\[
\int_{[0,\,u_i-\varepsilon]}\frac{f((u_i-y)t,(u_j-y)t)}{v(t)}\,\mathrm d_y\Big(\frac{\nu(ty)}{\mu^{-1}t}\Big)\ \stackrel{\mathbb P}{\longrightarrow}\ \int_0^{u_i-\varepsilon}C(u_i-y,u_j-y)\,\mathrm dy.
\]
Observe that since $\nu(y)$ is a.s. nondecreasing, so is $R_t(y)$.
As $\varepsilon\to0+$, the right-hand sides of the last two relations converge to $(1+\beta)^{-1}u_i^{1+\beta}$ and $\int_0^{u_i}C(u_i-y,u_j-y)\,\mathrm dy$, respectively. Therefore, for (3.80) and (3.81) to hold it is sufficient (see Lemma 6.4.1) that
\[
\lim_{\varepsilon\to0+}\limsup_{t\to\infty}\mathbb P\Big\{\frac{\int_{(u_i-\varepsilon,\,u_i]}v(t(u_i-y))\,\mathrm d_y\nu(ty)}{tv(t)}>\delta\Big\}=0
\]
and
\[
\lim_{\varepsilon\to0+}\limsup_{t\to\infty}\mathbb P\Big\{\frac{\big|\int_{(u_i-\varepsilon,\,u_i]}f(t(u_i-y),t(u_j-y))\,\mathrm d_y\nu(ty)\big|}{tv(t)}>\delta\Big\}=0
\]
for all $\delta>0$. By Markov's inequality it thus suffices to check that
\[
\lim_{\varepsilon\to0+}\limsup_{t\to\infty}\frac{\int_{(u_i-\varepsilon,\,u_i]}v((u_i-y)t)\,\mathrm d_yU(ty)}{tv(t)}=0 \tag{3.82}
\]
and
\[
\lim_{\varepsilon\to0+}\limsup_{t\to\infty}\frac{\int_{(u_i-\varepsilon,\,u_i]}|f((u_i-y)t,(u_j-y)t)|\,\mathrm d_yU(ty)}{tv(t)}=0, \tag{3.83}
\]
respectively. Changing the variable $s=u_it$ and recalling that $v$ is regularly varying of index $\beta\in(-1,\infty)$ we apply Lemma 6.2.14 with $r_1=1-\varepsilon u_i^{-1}$ and $r_2=1$ to infer
\[
\int_{((u_i-\varepsilon)t,\,u_it]}v(u_it-y)\,\mathrm dU(y)=\int_{((1-\varepsilon u_i^{-1})s,\,s]}v(s-y)\,\mathrm dU(y)\sim\Big(\frac{\varepsilon}{u_i}\Big)^{1+\beta}\frac{sv(s)}{\mu(1+\beta)}\sim\frac{\varepsilon^{1+\beta}tv(t)}{\mu(1+\beta)}.
\]
Using (3.60) we further obtain
\[
\int_{((u_i-\varepsilon)t,\,u_it]}|f(u_it-y,u_jt-y)|\,\mathrm dU(y)\le2^{-1}\int_{((u_i-\varepsilon)t,\,u_it]}v(u_it-y)\,\mathrm dU(y)+2^{-1}\int_{((u_i-\varepsilon)t,\,u_it]}v(u_jt-y)\,\mathrm dU(y)
\]
\[
\sim\big(2\mu(1+\beta)\big)^{-1}\big(\varepsilon^{1+\beta}+(u_j-u_i+\varepsilon)^{1+\beta}-(u_j-u_i)^{1+\beta}\big)tv(t)
\]
where for the second integral we have changed the variable $s=u_jt$, invoked Lemma 6.2.14 with $r_1=(u_i-\varepsilon)u_j^{-1}$ and $r_2=u_iu_j^{-1}$ and then got back to the original variable $t$. These relations entail both (3.82) and (3.83). The proof of Theorem 3.3.9 is complete. $\square$
Lemmas 3.3.29 and 3.3.30 are designed to facilitate the proofs of Theorems 3.3.10 and 3.3.19.
Lemma 3.3.29 Suppose that condition (3.41) holds for some $\alpha\in(0,1)$ and that $f(u,w)=\operatorname{Cov}(X(u),X(w))$ is either uniformly regularly varying in strips in $\mathbb R_+^2$ or fictitious regularly varying in $\mathbb R_+^2$, in either of the cases, of index $\beta$ for some $\beta\ge-\alpha$ and with limit function $C$. If
\[
\lim_{\gamma\to1-}\limsup_{t\to\infty}\frac{\mathbb P\{\xi>t\}}{v(t)}\int_{(\gamma z,\,z]}v(t(z-y))\,\mathrm d_yU(ty)=0 \tag{3.84}
\]
for all $z>0$, then
\[
\theta_1\sqrt{\frac{\mathbb P\{\xi>t\}}{v(t)}}\int_{[0,\,u_m]}\sum_{j=1}^m\lambda_jh(t(u_j-y))\mathbb{1}_{[0,\,u_j]}(y)\,\mathrm d_y\nu(ty)+\theta_2\frac{\mathbb P\{\xi>t\}}{v(t)}\int_{[0,\,u_m]}\Big(\sum_{j=1}^m\lambda_j^2v(t(u_j-y))\mathbb{1}_{[0,\,u_j]}(y)+2\sum_{1\le r<l\le m}\lambda_r\lambda_lf(t(u_r-y),t(u_l-y))\mathbb{1}_{[0,\,u_r]}(y)\Big)\,\mathrm d_y\nu(ty)
\]
\[
\stackrel{d}{\longrightarrow}\ \theta_1b^{1/2}\sum_{j=1}^m\lambda_j\int_{[0,\,u_j]}(u_j-y)^{(\beta-\alpha)/2}\,\mathrm dW_\alpha^{\leftarrow}(y)+\theta_2\Big(\sum_{j=1}^m\lambda_j^2\int_{[0,\,u_j]}(u_j-y)^{\beta}\,\mathrm dW_\alpha^{\leftarrow}(y)+2\sum_{1\le r<l\le m}\lambda_r\lambda_l\int_{[0,\,u_r]}C(u_r-y,u_l-y)\,\mathrm dW_\alpha^{\leftarrow}(y)\Big) \tag{3.85}
\]
for any $m\in\mathbb N$, any real $\lambda_1,\ldots,\lambda_m$, any $0<u_1<\ldots<u_m<\infty$, and any real $\theta_1$ and $\theta_2$ provided that whenever $\theta_1>0$ condition (3.54) holds and
\[
\lim_{\gamma\to1-}\limsup_{t\to\infty}\sqrt{\frac{\mathbb P\{\xi>t\}}{v(t)}}\int_{(\gamma z,\,z]}h(t(z-y))\,\mathrm d_yU(ty)=0 \tag{3.86}
\]
for all $z>0$.
We refer to [148] for the proof of Lemma 3.3.29.
Lemma 3.3.30 Let $(Z_{k,t})_{k\in\mathbb N,\,t>0}$ be a family of random variables defined on some probability space $(\Omega,\mathcal R,\mathbb P)$ and let $\mathcal G$ be a sub-$\sigma$-algebra of $\mathcal R$. Assume that, given $\mathcal G$, $(Z_{k,t})_{k\in\mathbb N}$ are independent for each $t>0$. If
\[
\sum_{k\ge0}\mathbb E_{\mathcal G}Z_{k+1,t}^2\ \stackrel{d}{\longrightarrow}\ D,\quad t\to\infty \tag{3.87}
\]
for a random variable $D$ and
\[
\sum_{k\ge0}\mathbb E_{\mathcal G}\big[Z_{k+1,t}^2\mathbb{1}_{\{|Z_{k+1,t}|>y\}}\big]\ \stackrel{\mathbb P}{\longrightarrow}\ 0,\quad t\to\infty \tag{3.88}
\]
for all $y>0$, then, for each $z\in\mathbb R$,
\[
\mathbb E_{\mathcal G}\exp\Big(\mathrm iz\sum_{k\ge0}Z_{k+1,t}\Big)\ \stackrel{d}{\longrightarrow}\ \exp(-Dz^2/2),\quad t\to\infty, \tag{3.89}
\]
\[
\mathbb E\exp\Big(\mathrm iz\sum_{k\ge0}Z_{k+1,t}\Big)\ \longrightarrow\ \mathbb E\exp\big(-Dz^2/2\big),\quad t\to\infty \tag{3.90}
\]
and
\[
\mathbb E_{\mathcal G}\exp\Big(\mathrm iz\sum_{k\ge0}Z_{k+1,t}\Big)-\mathbb E_{\mathcal G}\exp\Big(\mathrm iz\sum_{k\ge0}\widehat Z_{k+1,t}\Big)\ \stackrel{\mathbb P}{\longrightarrow}\ 0,\quad t\to\infty \tag{3.91}
\]
where, given $\mathcal G$, $\widehat Z_{1,t},\widehat Z_{2,t},\ldots$ are conditionally independent normal random variables with mean 0 and variance $\mathbb E_{\mathcal G}Z_{k+1,t}^2$, i.e.,
\[
\mathbb E_{\mathcal G}\exp(\mathrm iz\widehat Z_{k+1,t})=\exp\big(-\mathbb E_{\mathcal G}(Z_{k+1,t}^2)z^2/2\big),\quad k\in\mathbb N_0.
\]
Proof Apart from minor modifications, the following argument can be found in the proof of Theorem 4.12 in [167] in which weak convergence of the row sums in triangular arrays to a normal distribution is investigated. For any $\varepsilon>0$,
\[
\sup_{k\ge0}\mathbb E_{\mathcal G}Z_{k+1,t}^2\le\varepsilon^2+\sup_{k\ge0}\mathbb E_{\mathcal G}\big[Z_{k+1,t}^2\mathbb{1}_{\{|Z_{k+1,t}|>\varepsilon\}}\big]\le\varepsilon^2+\sum_{k\ge0}\mathbb E_{\mathcal G}\big[Z_{k+1,t}^2\mathbb{1}_{\{|Z_{k+1,t}|>\varepsilon\}}\big].
\]
Using (3.88) and letting first $t\to\infty$ and then $\varepsilon\to0+$ we infer
\[
\sup_{k\ge0}\mathbb E_{\mathcal G}Z_{k+1,t}^2\ \stackrel{\mathbb P}{\longrightarrow}\ 0. \tag{3.92}
\]
In view of (3.87)
\[
\mathbb E_{\mathcal G}\exp\Big(\mathrm iz\sum_{k\ge0}\widehat Z_{k+1,t}\Big)=\exp\Big(-\sum_{k\ge0}\big(\mathbb E_{\mathcal G}Z_{k+1,t}^2\big)z^2/2\Big)\ \stackrel{d}{\longrightarrow}\ \exp(-Dz^2/2) \tag{3.93}
\]
for each $z\in\mathbb R$. Next, we show that $\sum_{k\ge0}Z_{k+1,t}$ has the same distributional limit as $\sum_{k\ge0}\widehat Z_{k+1,t}$ as $t\to\infty$. To this end, for $z\in\mathbb R$, consider
\[
\Big|\mathbb E_{\mathcal G}\exp\Big(\mathrm iz\sum_{k\ge0}Z_{k+1,t}\Big)-\mathbb E_{\mathcal G}\exp\Big(\mathrm iz\sum_{k\ge0}\widehat Z_{k+1,t}\Big)\Big|=\Big|\prod_{k\ge0}\mathbb E_{\mathcal G}\exp(\mathrm izZ_{k+1,t})-\prod_{k\ge0}\mathbb E_{\mathcal G}\exp(\mathrm iz\widehat Z_{k+1,t})\Big|
\]
\[
\le\sum_{k\ge0}\big|\mathbb E_{\mathcal G}\exp(\mathrm izZ_{k+1,t})-\mathbb E_{\mathcal G}\exp(\mathrm iz\widehat Z_{k+1,t})\big|
\]
\[
\le\sum_{k\ge0}\big|\mathbb E_{\mathcal G}\exp(\mathrm izZ_{k+1,t})-1+2^{-1}z^2\mathbb E_{\mathcal G}Z_{k+1,t}^2\big|+\sum_{k\ge0}\big|\mathbb E_{\mathcal G}\exp(\mathrm iz\widehat Z_{k+1,t})-1+2^{-1}z^2\mathbb E_{\mathcal G}\widehat Z_{k+1,t}^2\big|
\]
\[
\le z^2\sum_{k\ge0}\mathbb E_{\mathcal G}\big[Z_{k+1,t}^2\big(1\wedge6^{-1}|zZ_{k+1,t}|\big)\big]+z^2\sum_{k\ge0}\mathbb E_{\mathcal G}\big[\widehat Z_{k+1,t}^2\big(1\wedge6^{-1}|z\widehat Z_{k+1,t}|\big)\big]
\]

where, to arrive at the last line, we have utilized $|\mathbb E_{\mathcal G}(\cdot)|\le\mathbb E_{\mathcal G}(|\cdot|)$ and the inequality
\[
|e^{\mathrm iz}-1-\mathrm iz+z^2/2|\le z^2\wedge6^{-1}|z|^3,\quad z\in\mathbb R
\]
which can be found, for instance, in Lemma 4.14 of [167]. For any $\varepsilon\in(0,1)$ and $z\ne0$
\[
\sum_{k\ge0}\mathbb E_{\mathcal G}\big[Z_{k+1,t}^2\big(1\wedge6^{-1}|zZ_{k+1,t}|\big)\big]\le\varepsilon\sum_{k\ge0}\mathbb E_{\mathcal G}Z_{k+1,t}^2+\sum_{k\ge0}\mathbb E_{\mathcal G}\big[Z_{k+1,t}^2\mathbb{1}_{\{|Z_{k+1,t}|>6\varepsilon/|z|\}}\big].
\]
Recalling (3.88) and letting first $t\to\infty$ and then $\varepsilon\to0+$ give
\[
\sum_{k\ge0}\mathbb E_{\mathcal G}\big[Z_{k+1,t}^2\big(1\wedge6^{-1}|zZ_{k+1,t}|\big)\big]\ \stackrel{\mathbb P}{\longrightarrow}\ 0.
\]
Further,
\[
\sum_{k\ge0}\mathbb E_{\mathcal G}\big[\widehat Z_{k+1,t}^2\big(1\wedge6^{-1}|z\widehat Z_{k+1,t}|\big)\big]\le6^{-1}|z|\sum_{k\ge0}\mathbb E_{\mathcal G}|\widehat Z_{k+1,t}|^3=\frac{\sqrt2\,|z|}{3\sqrt\pi}\sum_{k\ge0}\big(\mathbb E_{\mathcal G}Z_{k+1,t}^2\big)^{3/2}\le\frac{\sqrt2\,|z|}{3\sqrt\pi}\Big(\sup_{k\ge0}\mathbb E_{\mathcal G}Z_{k+1,t}^2\Big)^{1/2}\sum_{k\ge0}\mathbb E_{\mathcal G}Z_{k+1,t}^2.
\]
Here, (3.87) and (3.92) yield
\[
\sum_{k\ge0}\mathbb E_{\mathcal G}\big[\widehat Z_{k+1,t}^2\big(1\wedge6^{-1}|z\widehat Z_{k+1,t}|\big)\big]\ \stackrel{\mathbb P}{\longrightarrow}\ 0.
\]
Thus, we have already proved (3.91) which together with (3.93) implies (3.89). Relation (3.90) follows from (3.89) by Lebesgue's dominated convergence theorem. The proof of Lemma 3.3.30 is complete. $\square$
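As an illustrative aside (not part of the original text), conclusion (3.90) of Lemma 3.3.30 can be probed with a toy triangular array: conditionally on $\mathcal G=\sigma(\Theta)$ with $\Theta$ uniform on $(0,1)$, take $\lceil\Theta t\rceil$ i.i.d. variables $Z_{k+1,t}=\pm t^{-1/2}$, so that $\sum_k\mathbb E_{\mathcal G}Z_{k+1,t}^2\to D=\Theta$ and the conditional Lindeberg condition (3.88) holds trivially.

```python
import numpy as np

rng = np.random.default_rng(13)
t, n_rep = 2000, 20_000

# Toy triangular array: given Theta, the row consists of ceil(Theta*t)
# i.i.d. Rademacher signs scaled by 1/sqrt(t), so the conditional
# quadratic variation sum is ceil(Theta*t)/t -> D = Theta.
theta = rng.uniform(size=n_rep)
n_terms = np.ceil(theta * t).astype(int)
# Row sum: a sum of n_terms Rademacher signs, scaled by 1/sqrt(t).
S = (2.0 * rng.binomial(n_terms, 0.5) - n_terms) / np.sqrt(t)

# (3.90) at z = 1: E exp(i sum Z) -> E exp(-D/2) = int_0^1 e^{-s/2} ds.
lhs = np.cos(S).mean()          # the characteristic function is real here
rhs = 2.0 * (1.0 - np.exp(-0.5))
print(lhs, rhs)  # both close to 0.787
```

The limit is a variance mixture of normals rather than a single Gaussian, which is exactly the point of the conditional formulation of the lemma.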
In what follows, $\mathcal F$ denotes the $\sigma$-algebra generated by $(S_n)_{n\in\mathbb N_0}$.
Proof of Theorem 3.3.10 As in the proof of Theorem 3.3.9 we can and do assume that $X$ is centered. Put $r(t):=v(t)/\mathbb P\{\xi>t\}$. The process $Z_{\alpha,\beta}$ is well defined by Lemma 3.3.27. In view of the Cramér–Wold device (see p. 232) it suffices to check that
\[
\frac{1}{\sqrt{r(t)}}\sum_{j=1}^m\lambda_jY(u_jt)\ \stackrel{d}{\longrightarrow}\ \sum_{j=1}^m\lambda_jZ_{\alpha,\beta}(u_j) \tag{3.94}
\]
for any $m\in\mathbb N$, any $\lambda_1,\ldots,\lambda_m\in\mathbb R$ and any $0<u_1<\ldots<u_m<\infty$. Since $C(y,y)=y^{\beta}$, then, given $W_\alpha^{\leftarrow}$, the random variable $\sum_{j=1}^m\lambda_jZ_{\alpha,\beta}(u_j)$ is centered normal with variance
\[
D_{\alpha,\beta}(u_1,\ldots,u_m):=\sum_{j=1}^m\lambda_j^2\int_{[0,\,u_j]}(u_j-y)^{\beta}\,\mathrm dW_\alpha^{\leftarrow}(y)+2\sum_{1\le i<j\le m}\lambda_i\lambda_j\int_{[0,\,u_i]}C(u_i-y,u_j-y)\,\mathrm dW_\alpha^{\leftarrow}(y). \tag{3.95}
\]
Equivalently,
\[
\mathbb E\exp\Big(\mathrm iz\sum_{j=1}^m\lambda_jZ_{\alpha,\beta}(u_j)\Big)=\mathbb E\exp\big(-D_{\alpha,\beta}(u_1,\ldots,u_m)z^2/2\big),\quad z\in\mathbb R.
\]

Hence, according to Lemma 3.3.30, (3.94) is a consequence of
\[
\sum_{k\ge0}\mathbb E_{\mathcal F}Z_{k+1,t}^2\ \stackrel{d}{\longrightarrow}\ D_{\alpha,\beta}(u_1,\ldots,u_m) \tag{3.96}
\]
where $Z_{k+1,t}:=(r(t))^{-1/2}\sum_{j=1}^m\lambda_jX_{k+1}(u_jt-S_k)\mathbb{1}_{\{S_k\le u_jt\}}$ and
\[
\sum_{k\ge0}\mathbb E_{\mathcal F}\big[Z_{k+1,t}^2\mathbb{1}_{\{|Z_{k+1,t}|>y\}}\big]\ \stackrel{\mathbb P}{\longrightarrow}\ 0 \tag{3.97}
\]
for all $y>0$. Since $r(t)$ is regularly varying at $\infty$ of index $\beta+\alpha$ we have
\[
\limsup_{t\to\infty}\frac{1}{r(t)}\int_{(\gamma z,\,z]}v(t(z-y))\,\mathrm d_yU(ty)\le\lim_{t\to\infty}\frac{r(tz)}{r(t)}\limsup_{t\to\infty}\frac{1}{r(tz)}\int_{(\gamma tz,\,tz]}v(tz-y)\,\mathrm dU(y)=z^{\beta+\alpha}\limsup_{t\to\infty}\frac{1}{r(t)}\int_{(\gamma t,\,t]}v(t-y)\,\mathrm dU(y)
\]
for all $z>0$. Hence, relation (3.84) is an immediate consequence of Lemma 6.2.16(a). Using the representation
\[
\sum_{k\ge0}\mathbb E_{\mathcal F}Z_{k+1,t}^2=\frac{1}{r(t)}\int_{[0,\,u_m]}\Big(\sum_{j=1}^m\lambda_j^2v((u_j-y)t)\mathbb{1}_{[0,\,u_j]}(y)+2\sum_{1\le i<j\le m}\lambda_i\lambda_jf((u_i-y)t,(u_j-y)t)\mathbb{1}_{[0,\,u_i]}(y)\Big)\,\mathrm d_y\nu(ty)
\]
we further conclude that (3.96) follows from Lemma 3.3.29 with $\theta_1=0$ (observe that conditions (3.54) and (3.86) are then not needed). In view of (3.77), (3.97) is a consequence of
\[
\frac{1}{r(t)}\sum_{k\ge0}\mathbb{1}_{\{S_k\le t\}}\mathbb E_{\mathcal F}\big[(X_{k+1}(t-S_k))^2\mathbb{1}_{\{|X_{k+1}(t-S_k)|>y\sqrt{r(t)}\}}\big]\ \stackrel{\mathbb P}{\longrightarrow}\ 0 \tag{3.98}
\]
for all $y>0$. To prove (3.98) we assume, without loss of generality, that the function $r$ is nondecreasing, for in the case $\beta=-\alpha$ it is asymptotically equivalent to a nondecreasing function $u(t)$ by assumption, while in the case $\beta>-\alpha$ the existence of such a function is guaranteed by Lemma 6.1.4(b) because $r$ is then regularly varying of positive index. Using this monotonicity and recalling that we are assuming that $h\equiv0$ whence $\widehat v_y(t)=\mathbb E\big[(X(t))^2\mathbb{1}_{\{|X(t)|>y\sqrt{r(t)}\}}\big]$, we conclude that it is sufficient to check that
\[
\mathbb E\sum_{k\ge0}\mathbb{1}_{\{S_k\le t\}}\mathbb E_{\mathcal F}\big[(X_{k+1}(t-S_k))^2\mathbb{1}_{\{|X_{k+1}(t-S_k)|>y\sqrt{r(t-S_k)}\}}\big]=\int_{[0,\,t]}\widehat v_y(t-x)\,\mathrm dU(x)=o(r(t))
\]
for all $y>0$, by Markov's inequality. In view of (3.42) the latter is an immediate consequence of Lemma 6.2.16(b) with $f_1(t)=\widehat v_y(t)$, $f(t)=v(t)$, $q(t)=u(t)$ and $\rho=\beta$. The proof of Theorem 3.3.10 is complete. $\square$
We shall need the following lemma.
Lemma 3.3.31 The functions g.t/ appearing in (3.44) are regularly varying of
index 1=2 in cases (B1) and (B2), and of index 1=˛ in case (B3).
Proof In case (B1) this is trivial. In cases (B2) and (B3) the claim follows from
Lemma 6.1.3. t
u
Proof of Theorem 3.3.12 We shall only give a proof for the case where $h$ is eventually nondecreasing, so that $\rho\ge0$. The proof in the complementary, more complicated case where $h$ is eventually nonincreasing can be found in [146].
To treat cases (B1), (B2), and (B3) simultaneously, put
R ut
Z.ut/  0 h.y/dy
Zt .u/ WD ; t > 0; u  0
g.t/h.t/

with the same g as in (3.44). Further, we recall the notation I˛; 0 D S˛ ,


Z Z u
I˛; .u/ D .u  y/ dS˛ .y/ D .u  y/ 1 S˛ .y/dy; u0
Œ0; u 0

for > 0 and note that I˛; is well defined by Lemma 3.3.23.

We start by showing that w.l.o.g. $h$ can be replaced by a nondecreasing (everywhere rather than eventually) and continuous on $\mathbb{R}_+$ function $h^*$ that satisfies $h^*(0)=0$ and $h^*(t)\sim h(t)$. We thus need to construct such a function $h^*$ and prove that
$$Z_t^*(u):=\frac{\int_{[0,ut]}h^*(ut-y)\,d\nu(y)-\mu^{-1}\int_0^{ut}h^*(y)\,dy}{g(t)h^*(t)}\overset{f.d.}{\Rightarrow}I_{\alpha,\rho}(u).\tag{3.99}$$

Then, to ensure the convergence $Z_t(u)\overset{f.d.}{\Rightarrow}I_{\alpha,\rho}(u)$ it suffices to check that, for any $u>0$,
$$\frac{\int_{[0,ut]}\big(h(ut-y)-h^*(ut-y)\big)\,d\nu(y)}{g(t)h(t)}\xrightarrow{P}0\tag{3.100}$$
and
$$\lim_{t\to\infty}\frac{\int_0^{ut}\big(h(y)-h^*(y)\big)\,dy}{g(t)h(t)}=0.\tag{3.101}$$

We construct $h^*$ in two steps.


Step 1 We first prove the intuitively clear fact that the behavior of $h$ near zero does not influence the asymptotics of $Z_t$. In particular, if, given $a>0$, we replace $h$ by any locally bounded function $\hat h$ such that $\hat h(t)=h(t)$ for $t\ge a$, the asymptotics of $Z_t$ will not change. Indeed,
$$\Big|\int_{[0,u]}\big(h(t(u-y))-\hat h(t(u-y))\big)\,d_y\nu(ty)\Big|=\Big|\int_{(u-a/t,\,u]}\big(h(t(u-y))-\hat h(t(u-y))\big)\,d_y\nu(ty)\Big|$$
$$\le\sup_{y\in[0,a]}\big|h(y)-\hat h(y)\big|\,\big(\nu(ut)-\nu(ut-a)\big)\overset{d}{\le}\sup_{y\in[0,a]}\big|h(y)-\hat h(y)\big|\,\nu(a)$$

by the distributional subadditivity of $\nu$ (see (6.2)). The local boundedness of $h$ and $\hat h$ ensures the finiteness of the last supremum. Recalling that $\rho\ge0$ and using Lemma 3.3.31 we conclude that $g(t)h(t)$ is regularly varying of positive index, whence $\lim_{t\to\infty}g(t)h(t)=\infty$. This entails
$$\frac{\int_{[0,u]}\big(h(t(u-y))-\hat h(t(u-y))\big)\,d_y\nu(ty)}{g(t)h(t)}\xrightarrow{P}0.\tag{3.102}$$

Further, for $ut\ge a$,
$$\Big|\frac{\int_0^{ut}\big(h(y)-\hat h(y)\big)\,dy}{g(t)h(t)}\Big|\le\frac{\int_0^{ut}\big|h(y)-\hat h(y)\big|\,dy}{g(t)h(t)}=\frac{\int_{[0,a]}\big|h(y)-\hat h(y)\big|\,dy}{g(t)h(t)}\to0.\tag{3.103}$$
Choosing $a$ large enough we can make $\hat h$ nondecreasing on $\mathbb{R}_+$. Besides that, we shall take $\hat h$ such that $\hat h(0)=0$.
Step 2 We shall now construct $h^*$ from $\hat h$. Set
$$h^*(t):=\mathbb{E}\,\hat h\big((t-\eta)^+\big)=e^{-t}\Big(\hat h(0)+\int_0^t \hat h(y)e^{y}\,dy\Big),\qquad t\ge0,$$
where $\eta$ is a random variable with the standard exponential distribution. It is clear that $\hat h(t)\ge h^*(t)$ for $t\ge0$ and that $h^*$ is continuous and nondecreasing on $\mathbb{R}_+$ with $h^*(0)=\hat h(0)=0$. Furthermore, $h^*(t)\sim\hat h(t)\sim h(t)$ by dominated convergence.
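As a quick plausibility check of the smoothing identity $\mathbb{E}\,\hat h((t-\eta)^+)=e^{-t}\big(\hat h(0)+\int_0^t\hat h(y)e^{y}\,dy\big)$ (a sketch, not part of the proof; the test function $\hat h(y)=\sqrt{y}$ is an arbitrary choice), one can compare the closed form with a Monte Carlo estimate over standard exponential draws of $\eta$:

```python
import math
import random

def h_hat(y):
    # arbitrary nondecreasing test function with h_hat(0) = 0
    return math.sqrt(y)

def h_star_closed(t, n=200000):
    # closed form e^{-t}(h_hat(0) + int_0^t h_hat(y) e^y dy), trapezoidal rule;
    # the factor e^{-t} is folded into the integrand for numerical stability
    ys = [t * k / n for k in range(n + 1)]
    vals = [h_hat(y) * math.exp(y - t) for y in ys]
    return (t / n) * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def h_star_mc(t, n=200000, seed=1):
    # Monte Carlo estimate of E h_hat((t - eta)^+), eta standard exponential
    rng = random.Random(seed)
    return sum(h_hat(max(t - rng.expovariate(1.0), 0.0)) for _ in range(n)) / n

t = 2.0
print(abs(h_star_closed(t) - h_star_mc(t)))  # small, of the order of the MC error
```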
Now we intend to show that
$$\int_0^t\big(\hat h(y)-h^*(y)\big)\,dy\sim\hat h(t)\tag{3.104}$$
which immediately implies
$$\lim_{t\to\infty}\frac{\int_0^{ut}\big(\hat h(y)-h^*(y)\big)\,dy}{g(t)h(t)}=0.$$

In combination with (3.103) the latter proves (3.101). To check (3.104), write
$$\int_0^t\big(\hat h(y)-h^*(y)\big)\,dy=\mathbb{E}\int_{(t-\eta)^+}^{t}\hat h(y)\,dy=\mathbb{P}\{\eta>t\}\int_0^t\hat h(y)\,dy+\mathbb{E}\,1_{\{\eta\le t\}}\int_{t-\eta}^{t}\hat h(y)\,dy.$$

The first term on the right-hand side tends to $0$ because the regular variation of $\hat h$ entails that of the integral by Lemma 6.1.4(c). The second term can be estimated as follows:
$$\frac{\mathbb{E}\,\hat h(t-\eta)\eta 1_{\{\eta\le t\}}}{\hat h(t)}\le\frac{\mathbb{E}\,1_{\{\eta\le t\}}\int_{t-\eta}^{t}\hat h(y)\,dy}{\hat h(t)}\le\mathbb{E}\eta=1.$$

The left-hand side converges to $\mathbb{E}\eta=1$ by dominated convergence, and (3.104) follows.
Using (3.104) and recalling that in all cases $\lim_{t\to\infty}g(t)=\infty$, we conclude from Lemma 6.2.12 (with $f_1=\hat h$ and $f_2=h^*$) that
$$\Big|\frac{\int_{[0,ut]}\big(\hat h(ut-y)-h^*(ut-y)\big)\,d\nu(y)}{g(t)h(t)}\Big|=\frac{\int_{[0,ut]}\big(\hat h(ut-y)-h^*(ut-y)\big)\,d\nu(y)}{g(t)h(t)}\to0$$
in $L^1$. This together with (3.102) gives (3.100). It remains to prove (3.99).
By the Cramér–Wold device (see p. 232), in order to show finite-dimensional
convergence of .Zt .u//, it suffices to prove that for any n 2 N, any 1 ; : : : ; n 2 R
and 0  u1 < : : : < un < 1 we have
X
n
d X
n
k Zt .uk / ! k I˛; .uk /: (3.105)
kD1 kD1

Since Zt .0/ D I˛; .0/ D 0 a.s. we can and do assume that u1 > 0. Using the fact
that .0/ D 1 a.s. and integrating by parts, we have, for t > 0 and u > 0
Z  
 h .t.u  y// .yt/  yt
Zt .u/ D dy
Œ0; u h .t/ g.t/
 Z   
h .ut/ h .t.u  y// .yt/  yt
D C dy
g.t/h .t/ .0; u h .t/ g.t/
Z   
.yt/  yt h .t.u  y//
D dy 
.0; u g.t/ h .t/
Z
.yt/  yt 
D t .dy/
.0; u g.t/

where t is the finite measure on Œ0; u defined by

h .t.u  a//  h .t.u  b//


t .a; b WD ; 0  a < b  u:
h .t/

Let > 0. By the regular variation of h , the finite measures t converge weakly
on Œ0; u to a finite measure   on Œ0; u which is defined by   .a; b D .u  a/ 
.u  b/ . Clearly, the limiting measure is absolutely continuous with density x 7!
.u  x/ 1 on .0; u. This in combination with (3.44) enables us to conclude that
Z Z
.yt/  yt  d
u
t .dy/ ! .u  y/ 1 S˛ .y/ dy D I˛; .u/
.0;u g.t/ 0

by Lemma 6.4.2(a). Suppose now that D 0. By the slow variation of h ,


the finite measures t converge weakly on Œ0; u to "u (the probability measure
concentrated at u). Since S˛ is a.s. continuous at u we infer with the help of (3.44)
and Lemma 6.4.2(a).
Z
.yt/  yt  d
t .dy/ ! S˛ .u/ D I˛; 0 .u/:
.0;u g.t/

With a little additional effort these (one-dimensional) arguments can be extended to


prove (3.105). The proof of Theorem 3.3.12 is complete. t
u
Proof of Theorem 3.3.13 We only treat the more complicated case when  0.
Recall the notation g.t/ D 1=Pf > tg. First we fix an arbitrary " 2 .0; 1/ and
prove that
X Z
1
I" .u; t/ WD h.ut  Sk /1fSk "utg ) .u  y/ dW˛ .y/
g.t/h.t/ k0 Œ0; "u

in the J1 -topology on D.0; 1/. Write


 
1 X h.ut  Sk /
I" .u; t/ D  .u  t1 Sk / 1fSk "utg
g.t/ k0 h.t/

1 X
C .u  t1 Sk / 1fSk "utg
g.t/ k0

D I"; 1 .u; t/ C I"; 2 .u; t/:

We shall show that


Z
I"; 1 .u; t/ ) r.u/ and I"; 2 .u; t/ ) .u  y/ dW˛ .y/ (3.106)
Œ0; "u

in the J1 -topology on D.0; 1/ where r.u/ D 0 for all u > 0. Throughout the rest of
the proof we use arbitrary positive and finite a < b. Observe that
ˇ ˇ
ˇ h.ty/ ˇ
ˇ ."ut/
jI"; 1 .u; t/j  sup ˇ ˇ y ˇ
.1"/uyu h.t/ g.t/

and thereupon
ˇ ˇ
ˇ h.ty/ ˇ
ˇ ."bt/
sup jI"; 1 .u; t/j  sup ˇ ˇ y ˇ :
aub .1"/ayb h.t/ g.t/

We have $\nu(\varepsilon bt)/g(t)\xrightarrow{d}W_\alpha(\varepsilon b)$ as a consequence of the functional limit theorem for $(\nu(t))_{t\ge0}$ (see (3.45)). This, combined with the uniform convergence theorem for regularly varying functions (Lemma 6.1.4(a)), implies that the last expression converges to zero in probability, thereby proving the first relation in (3.106).
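The uniform convergence theorem invoked here states that for $h$ regularly varying at $\infty$ of index $\rho$, $\sup_{y\in[a,b]}|h(ty)/h(t)-y^{\rho}|\to0$ for any $0<a<b<\infty$. A minimal numerical illustration (a sketch; the function $h(t)=t^{1/2}\log t$, with $\rho=1/2$, is an arbitrary choice):

```python
import math

def h(t):
    # regularly varying of index 1/2 with slowly varying factor log t
    return math.sqrt(t) * math.log(t)

def sup_deviation(t, a=0.5, b=2.0, grid=1000):
    # sup over y in [a, b] of |h(ty)/h(t) - y^{1/2}| on a uniform grid
    ys = [a + (b - a) * k / grid for k in range(grid + 1)]
    return max(abs(h(t * y) / h(t) - math.sqrt(y)) for y in ys)

for t in (1e3, 1e6, 1e9):
    print(t, sup_deviation(t))  # decreases toward 0 as t grows
```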
Turning to the second relation in (3.106) we observe that
Z
I"; 2 .u; t/ D .u  y/ dy ..ty/=g.t// :
Œ0; "u

Since .ty/=g.t/ ) W˛ .y/ in the J1 -topology on DŒ0; 1/, and the limit W˛ is a.s.
continuous, an application of Lemma 6.4.2(c) proves the second relation in (3.106).
An appeal to Lemma 6.4.1 reveals that the proof of the theorem is complete if
we can show that for any fixed u > 0
Z Z
lim .uy/ dW˛ .y/ D J˛; .u/ D .uy/ dW˛ .y/ a.s. (3.107)
"!1 Œ0; "u Œ0; u

and
 X 
1
lim lim sup P h.ut  Sk /1f"ut<Sk utg > ı D 0 (3.108)
"!1 t!1 g.t/h.t/ k0

for all ı > 0.


To check (3.107), write for fixed u > 0
Z Z
0 .u  y/ dW˛ .y/  .u  y/ dW˛ .y/
Œ0; u Œ0; "u
Z
D .u  y/ 1."u; u .y/dW˛ .y/:
Œ0; u

R theorem, the right-hand side converges to 0 a.s. as


By the dominated convergence
" ! 1 because J˛; .u/ D Œ0; u .u  y/ dW˛ .y/ < 1 a.s. by Lemma 3.3.25.
The probability on the left-hand side of (3.108) is bounded from above by

$$\mathbb{P}\{\nu(ut)-\nu(\varepsilon ut)>0\}=\mathbb{P}\{ut-S_{\nu(ut)-1}<(1-\varepsilon)ut\}.$$

By a well-known Dynkin–Lamperti result (see Theorem 8.6.3 in [44]),
$$t^{-1}\big(t-S_{\nu(t)-1}\big)\xrightarrow{d}\theta_\alpha,$$
where $\theta_\alpha$ has a beta distribution with parameters $1-\alpha$ and $\alpha$, i.e.,
$$\mathbb{P}\{\theta_\alpha\in dx\}=\pi^{-1}\sin(\pi\alpha)\,x^{-\alpha}(1-x)^{\alpha-1}1_{(0,1)}(x)\,dx.$$
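The Dynkin–Lamperti limit is easy to see in a short simulation (a sketch; the Pareto step distribution with tail index $\alpha=1/2$ is an arbitrary choice, in which case $\theta_{1/2}$ follows the arcsine law with mean $1/2$):

```python
import random

def undershoot_fraction(t, alpha=0.5, rng=None):
    # random walk with steps (1-U)^{-1/alpha}, U uniform on [0,1),
    # so that P{step > x} = x^{-alpha}; returns t^{-1}(t - S_{nu(t)-1})
    s = 0.0
    while True:
        step = (1.0 - rng.random()) ** (-1.0 / alpha)
        if s + step > t:
            return (t - s) / t
        s += step

rng = random.Random(7)
t = 1e6
samples = [undershoot_fraction(t, rng=rng) for _ in range(4000)]
mean = sum(samples) / len(samples)
print(mean)  # close to E[theta_{1/2}] = 1/2 for the arcsine law
```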



This entails
$$\lim_{\varepsilon\to1}\limsup_{t\to\infty}\mathbb{P}\{\nu(ut)-\nu(\varepsilon ut)>0\}=\lim_{\varepsilon\to1}\mathbb{P}\{\theta_\alpha<1-\varepsilon\}=0,$$

thereby proving (3.108). The proof of Theorem 3.3.13 is complete. t


u
For the proof of Theorem 3.3.14 we need an auxiliary result.
Lemma 3.3.32 Let $h$ be a nonincreasing function which satisfies all the assumptions of Theorem 3.3.14. Then, for any $0\le a<b\le1$,
$$\lim_{t\to\infty}\frac{\int_0^{t+t^{(a)}}h(y)h\big(y+t^{(b)}-t^{(a)}\big)\,dy}{\int_0^t h^2(y)\,dy}=1-b$$
where $t^{(u)}:=x(t,u)$, $u\in[0,1]$ (see (3.48) for the definition of $x(t,u)$).
Proof We first treat the principal part of the integral, namely, we check that
$$\lim_{t\to\infty}\frac{\int_{t^{(b)}}^{t}h(y)h\big(y+t^{(b)}-t^{(a)}\big)\,dy}{m(t)}=1-b\tag{3.109}$$
where the notation $m(t)=\int_0^t h^2(y)\,dy$ has to be recalled. Since $h$ is regularly varying at $\infty$ of index $-1/2$ (see (3.47)), we conclude that $m$ is slowly varying at $\infty$ and
$$\lim_{t\to\infty}\frac{t(h(t))^2}{m(t)}=0\tag{3.110}$$
by Lemma 6.1.4(d) with $g=h^2$. As a consequence of Lemma 6.1.4(c) and (3.110) we also have
$$\lim_{t\to\infty}\frac{h(t)\int_0^t h(y)\,dy}{m(t)}=0.\tag{3.111}$$
We shall frequently use that $\lim_{t\to\infty}t^{(b)}/t^{(a)}=1$, which is a consequence of the slow variation and monotonicity of $m$. By monotonicity of $h$,
$$m\big(t+t^{(b)}-t^{(a)}\big)-m\big(2t^{(b)}-t^{(a)}\big)\le\int_{t^{(b)}}^{t}h(y)h\big(y+t^{(b)}-t^{(a)}\big)\,dy\le m(t)-m\big(t^{(b)}\big)$$
which entails (3.109) in view of (3.48) and the slow variation of $m$. It remains to show that
$$\lim_{t\to\infty}\frac{\int_0^{t^{(b)}}h(y)h\big(y+t^{(b)}-t^{(a)}\big)\,dy}{m(t)}=0\tag{3.112}$$

and
$$\lim_{t\to\infty}\frac{\int_t^{t+t^{(a)}}h(y)h\big(y+t^{(b)}-t^{(a)}\big)\,dy}{m(t)}=0.\tag{3.113}$$
As for (3.112), we have, using monotonicity of $h$ and (3.111),
$$\int_0^{t^{(b)}}h(y)h\big(y+t^{(b)}-t^{(a)}\big)\,dy\le h\big(t^{(b)}-t^{(a)}\big)\int_0^{t^{(b)}}h(y)\,dy=o\big(m(t^{(b)})\big)=o(m(t))$$

which proves (3.112). Turning to (3.113), we argue similarly to obtain
$$\int_t^{t+t^{(a)}}h(y)h\big(y+t^{(b)}-t^{(a)}\big)\,dy\le t^{(a)}(h(t))^2=O\big(t(h(t))^2\big)=o(m(t)).$$

The proof of Lemma 3.3.32 is complete. □
Proof of Theorem 3.3.14 We shall write $t^{(u)}$ for $x(t,u)$. As before, all unspecified limits are assumed to hold as $t\to\infty$.
To avoid distracting technicalities we shall assume that $h$ is nonincreasing (everywhere rather than eventually) and continuous. Also, we shall work with the random walk $(S_k^*)$ (see (3.4) for the definition) instead of $(S_k)$. The last simplification is that we only establish weak convergence of two-dimensional distributions. The proof in full generality can be found in [142].
We divide the proof into three steps.
Step 1 (Getting Rid of Negligible Terms) The Cramér–Wold device (see p. 232)
allows us to work with linear combinations of vector components rather than with
vectors themselves, i.e., it suffices to check that

2
P R tCt.ui /
X k0 h.t C t.ui /  Sk /1fSk tCt.ui / g 
1 0 h.y/dy
˛i p
iD1  2
3 m.t/
d
! ˛1 X .u1 / C ˛2 X .u2 / (3.114)

for any real ˛1 ; ˛2 and any 0  u1 < u2  1. Observe that the random variable on
the right-hand side of (3.114) has a normal distribution with mean zero and variance
˛12 C ˛22 C 2˛1 ˛2 .1  u2 /.
Integrating by parts we see that the numerator of the left-hand side of (3.114)
equals
2
X Z
˛i h.t C t.ui /  y/d.  .y/ 
1 y/
iD1 Œ0; tCt.ui / 

2
X 
 
D ˛i h.t C t.ui / /   .t C t.ui / / 
1 .t C t.ui / /
iD1

Z 
 .ui /  .ui / 1
C . .t C t /   ..t C t  y// 
y/d.h.y//
Œ0; tCt.ui / 
p
where   .t/ WD #fk 2 N0 W Sk  tg for t  0. Since .  .t/ 
1 t/=  2
3 t
converges in distribution5 to the standard normal distribution, we infer

2 p
X th.t C t.ui / /   .t C t.ui / / 
1 .t C t.ui / / P
˛i p p ! 0
iD1 m.t/ t

which shows that (3.114) is equivalent to


2
R  .ui /
X Œ0; tCt.ui /  . .t C t /    ..t C t.ui /  y// 
1 y/d.h.y//
˛i p
iD1  2
3 m.t/
d
! ˛1 X .u1 / C ˛2 X .u2 /: (3.115)

Reversing the time at the point t C t.u2 / by means of (3.6), we conclude that the
numerator of the left-hand side of (3.115) has the same distribution as
Z
˛1 .  .y C t.u2 /  t.u1 / /    .t.u2 /  t.u1 / / 
1 y/d.h.y//
Œ0; tCt.u1 / 
Z
C ˛2 .  .y/ 
1 y/d.h.y// DW 1 C 2 C R.t/
Œ0; tCt.u2 / 

where
Z
1 WD .  .y C t.u2 /  t.u1 / /    .t.u2 /  t.u1 / /
Œ0; tCt.u1 / 
 X2 

1 y/dy  ˛k h.y C t.uk /  t.u1 / / ;
kD1
Z
2 WD ˛2 .  .y/ 
1 y/d.h.y//
Œ0; t.u2 / t.u1 / 

and
 
R.t/ WD ˛2   .t.u2 /  t.u1 / / 
1 .t.u2 /  t.u1 / /
 
 h.t.u2 /  t.u1 / /  h.t C t.u2 / /

p
5
This follows from the distributional convergence of . .t/ 
1 t/=  2
3 t to the standard
normal distribution (this is a consequence of part (B1) of (3.44)), the representation   .t/ D
 .t  S0 /1fS0 tg and the distributional subadditivity of  .t/ (see (6.2)).

By the already mentioned central limit theorem for   .t/ and (3.110)

R.t/ P
p ! 0:
m.t/

This implies that (3.115) is equivalent to

1 C 2 d
p ! ˛1 X .u1 / C ˛2 X .u2 /: (3.116)
 2
3 m.t/

Step 2 (Reduction to Independent Random Variables) The random variables


1 and 2 are dependent. Now we intend to show that instead of 1 C 2 we can
work with the sum of two independent random variables. To this end, we want
to replace the process .  .y C r/    .r//y0 which depends on .  .y//0yr
where r WD t.u2 /  t.u1 / , by its copy independent of .  .y//0yr (recall that the
aforementioned processes appear in the definitions of 2 and 1 , respectively). For
this, it suffices to replace the overshoot of .Sk /k2N0 at the point r by a copy of the
random variable S0 which is independent of everything else while keeping all other
increments unchanged.
To implement this task, let S0;1 denote an independent copy of S0 which is also
independent of .k /k2N . Define
.1/
Sk WD r C S .r/Ck  S .r/ C S0;1 ; k 2 N0 ;
.1/
 .1/ .s/ WD inffk 2 N0 W Sk > sg; sr

and

 .1/ .s/ WD inffk 2 N0 W S .r/Ck  S .r/ > sg; s  0:

Observe that the process . .1/ .s C r//s0 is a copy of .  .s//s0 and furthermore
.  .y//0yr and . .1/ .y C r/   .1/ .r//y0 are independent.
Let us check that, for y  0,

Ej  .y C r/    .r/   .1/ .y C r/j  c < 1 (3.117)

where c WD 2E.S0 / C E.y0 / for y0 large enough. Note that c < 1 because
E 2 < 1 entails both ES0 < 1 and E.y/ 
1 y C const for all y  0 (Lorden’s
inequality, see (6.6) and (6.7)). Passing to the proof of (3.117) we obtain
X
  .y C r/    .r/   .1/ .y C r/ D 1fS S y.S r/g
 .r/Ck  .r/  .r/
k0
X
 1fS S yS0;1 g
 .r/Ck   .r/
k0

D  .1/ .y  1 /1f1 yg   .1/ .y  S0;1 /1fS0;1 yg



where 1 WD S .r/  r. Note that . .1/ .t//t0 is a copy of ..t//t0 independent
of both 1 and S0;1 . The last two random variables are independent copies of S0 .
Further, the inequality ES0 < 1 entails limy!1 E.y/PfS0 > yg D 0 because
E.y/ 
1 y as y ! 1 by the elementary renewal theorem (see (6.4)). With
these at hand we have

I D Ej .1/ .y  1 /   .1/ .y  S0;1 /j1f1 y; S0;1 yg


C E .1/ .y  S0;1 /1f1 >y;S0;1 yg C E .1/ .y  1 /1f1 y; S0;1 >yg
 E .1/ .j1  S0;1 j/ C 2E.y/PfS0 > yg
 2E.S0 / C E.y0 /

for large enough y0 , having utilized twice the distributional subadditivity of  .1/ .t/
(see (6.2)) for the first term on the right-hand side.
Now (3.117) reveals that (3.116) is equivalent to

0 C 20 d
p 1 ! ˛1 X .u1 / C ˛2 X .u2 /
2 3

m.t/

where
Z  2
X 
10 WD . .1/
.y C t .u2 /
t .u1 / 1
/ 
y/dy  ˛k h.y C t .uk /
t .u1 /
/
Œ0; tCt.u1 /  kD1

and
Z
20 WD ˛2 .  .y/ 
1 y/d.h.y//
Œ0; t.u2 / t.u1 / 

are independent.
Step 3 (Reduction to Independent Gaussian Variables) Recall that  .1/ . C
t.u2 /  t.u1 / / is a renewal process with stationary increments. Let S2;0 and S2;1 denote
independent Brownian motions which approximate   ./ and  .1/ . C t.u2 /  t.u1 / /
in the sense of Lemma 6.2.17. We claim that
Z
K2 .t/ WD .m.t//1=2 j .1/ .y C t.u2 /  t.u1 / / 
1 y
Œ0; tCt.u1 / 
 X2 
P
 
3=2 S2;1 .y/jdy  ˛k h.y C t.uk /  t.u1 / / ! 0 (3.118)
kD1

and that
Z
K1 .t/ WD .m.t//1=2 ˛2 j  .y/ 
1 y
Œ0; t.u2 / t.u1 / 

P
 
3=2 S2;0 .y/jd.h.y// ! 0: (3.119)

With t0 and A as defined in Lemma 6.2.17, (3.118) follows from the inequality
Z
K2 .t/  K2 .t/1ft0 >tCt.u1 / g C .m.t//1=2 j .1/ .y C t.u2 /  t.u1 / /
Œ0; t0 
 2
X 

1 y  
3=2 S2;1 .y/jdy  ˛k h.y C t.uk /  t.u1 / /
kD1
Z  2
X 
1=r .uk / .u1 /
CA y dy  ˛k h.y C t t / 1ft0 tCt.u1 / g
.t0 ; tCt.u1 /  kD1

because the first two terms on the right-hand side Rtrivially converge to zero in
probability, whereas the third does so, for the integral .t0 ; 1/ y1=r d.h.y// converges
(use integration by parts). Relation (3.119) can be checked along the same lines.
Formulae (3.118) and (3.119) demonstrate that we reduced the original problem
to showing that

100 C 200 d
p ! ˛1 X .u1 / C ˛2 X .u2 /
m.t/

where
Z  X2 
100 WD S2;1 .y/d  ˛k h.y C t.uk /  t.u1 / /
Œ0; tCt.u1 /  kD1

and
Z
200 WD ˛2 S2;0 .y/d.h.y//:
Œ0; t.u2 / t.u1 / 

Since 100 C 200 is the sum of independent centered Gaussian random variables it
remains to check that
 
Var . 100 C 200 / D Var 100 C Var 200  ˛12 C ˛22 C 2˛1 ˛2 .1  u2 / m.t/:

Writing the integral defining 100 as the limit of integral sums we infer
Z tCt.u1 /   2
Var 100 D ˛12 h.y/  h t C t.u1 / dy
0
Z tCt.u1 /     2
C ˛22 h y C t.u2 /  t.u1 /  h t C t.u2 / C 2˛1 ˛2
0
Z tCt.u1 /       
 h.y/  h t C t.u1 / h y C t.u2 /  t.u1 /  h t C t.u2 / dy
0
Z tCt.u1 / Z tCt.u2 /
D ˛12 .h.y//2 dy C ˛22 .h.y//2 dy
0 t.u2 / t.u1 /
Z tCt.u1 / Z 
  t
C 2˛1 ˛2 h.y/h y C t.u2 /  t.u1 / dy C o .h.y//2 dy :
0 0

The appearance of the o-term follows by (3.110) and (3.111). Arguing similarly we
obtain
Z t.u2 / t.u1 /   2
Var 200 D ˛22 h.y/  h t.u2 /  t.u1 / dy
0
Z t.u2 / t.u1 / Z t 
D ˛22 2
.h.y// dy C o 2
.h.y// dy :
0 0

Using these calculations yields


  2
Var 100 C 200 X m.t C t.uk / /
D ˛k2
m.t/ kD1
m.t/
R tCt.u1 /  
0 h.y/h y C t.u2 /  t.u1 / dy
C 2˛1 ˛2 C o.1/:
m.t/

The coefficients in front of ˛12 and ˛22 converge to one as t ! 1. An appeal to


Lemma 3.3.32 enables us to conclude that the coefficient in front of 2˛1 ˛2 converges
to 1  u2 as t ! 1. The proof of Theorem 3.3.14 is complete. t
u

3.3.8 Proofs for Section 3.3.4

For the proof of Theorem 3.3.18 we need two auxiliary results, Lemmas 3.3.33
and 3.3.34. Replacing the denominator in (3.40) by a function which grows faster
leads to weak convergence of finite-dimensional distributions to zero. However, this
result holds without the regular variation assumptions of Theorem 3.3.9.

Lemma 3.3.33 Assume that
• $\mu=\mathbb{E}\xi<\infty$;
• either
$$\lim_{t\to\infty}\int_0^t v(y)\,dy=\infty\qquad\text{and}\qquad\lim_{t\to\infty}\Big(v(t)\Big/\int_0^t v(y)\,dy\Big)=0$$
and there exists a monotone function $u$ such that $v(t)\sim u(t)$ as $t\to\infty$, or $v$ is directly Riemann integrable on $[0,\infty)$.
Then
$$\frac{Y(ut)-\sum_{k\ge0}h(ut-S_k)1_{\{S_k\le ut\}}}{s(t)}\overset{f.d.}{\Rightarrow}0,\qquad t\to\infty\tag{3.120}$$
for any positive function $s(t)$ regularly varying at $\infty$ which satisfies
$$\lim_{t\to\infty}\Big((s(t))^2\Big/\int_0^t v(y)\,dy\Big)=\infty.$$

Proof By Chebyshev’s inequality and the Cramér–Wold device (see p. 232), it suffices to prove that
$$\lim_{t\to\infty}(s(t))^{-2}\,\mathbb{E}\Big(Y(t)-\sum_{k\ge0}h(t-S_k)1_{\{S_k\le t\}}\Big)^2=0.$$
The expectation above equals $\int_{[0,t]}v(t-y)\,dU(y)$. If $v$ is dRi, the latter integral is bounded (this is clear from the key renewal theorem (Proposition 6.2.3) when the distribution of $\xi$ is nonlattice, while in the lattice case this follows from Lemma 6.2.8). If $v$ is nonintegrable and $u$ is a monotone function such that $v(t)\sim u(t)$, Lemma 6.2.13(a) with $r_1=0$ and $r_2=1$ yields
u.t/, Lemma 6.2.13(a) with r1 D 0 and r2 D 1 yields
Z Z
v.t  y/ dU.y/  u.t  y/ dU.y/:
Œ0; t Œ0; t

Modifying $u$ if needed in the right vicinity of zero, we can assume that $u$ is monotone and locally integrable. Since $u\sim v$, we have $\lim_{t\to\infty}\big(u(t)/\int_0^t u(y)\,dy\big)=0$ because the corresponding relation holds for $v$, and an application of Lemma 6.2.9 to $f=u$ with $r_1=0$ and $r_2=1$ gives
$$\int_{[0,t]}u(t-y)\,dU(y)\sim\mu^{-1}\int_0^t u(y)\,dy$$

and again using $u\sim v$ we obtain
$$\int_0^t u(y)\,dy\sim\int_0^t v(y)\,dy=o\big(s(t)^2\big)$$

where the last equality follows from the assumption on s. The proof of
Lemma 3.3.33 is complete. u
t
Lemma 3.3.34 Assume that $h$ is eventually monotone and eventually nonnegative and that the distribution of $\xi$ belongs to the domain of attraction of an $\alpha$-stable distribution, $\alpha\in(1,2]$ (i.e., relation (3.44) holds). Then
$$\frac{\sum_{k\ge0}h(ut-S_k)1_{\{S_k\le ut\}}-\mu^{-1}\int_0^{ut}h(y)\,dy}{r(t)}\overset{f.d.}{\Rightarrow}0,\qquad t\to\infty$$
for any positive function $r(t)$ regularly varying at $\infty$ of positive index that further satisfies
$$\lim_{t\to\infty}\frac{r(t)}{c(t)h(t)}=\infty$$
where $c$ is the same as in (3.44).


The proof of Lemma 3.3.34 can be found in [148].
Proof of Theorem 3.3.18 We shall use the function $g$ as defined in (3.44). For instance, $g(t)=\sigma\sqrt{\mu^{-3}t}$ in case (B1).
Cases (Bi1) According to Theorem 3.3.9, (3.40) holds, which is equivalent to
$$\frac{Y(ut)-\sum_{k\ge0}h(ut-S_k)1_{\{S_k\le ut\}}}{\sqrt{\int_0^t v(y)\,dy}}\overset{f.d.}{\Rightarrow}\sqrt{\mu^{-1}(1+\beta)}\,V_\beta(u)\tag{3.121}$$
by Lemma 6.1.4(c) because $v$ is regularly varying at $\infty$ of index $\beta\in(-1,\infty)$.


Since $\big(\int_0^t v(y)\,dy\big)^{1/2}$ is regularly varying at $\infty$ of positive index $2^{-1}(1+\beta)$ and
$$\lim_{t\to\infty}\frac{\sqrt{\int_0^t v(y)\,dy}}{g(t)|h(t)|}=+\infty,$$
Lemma 3.3.34$^6$ (with $r(t)=\big(\int_0^t v(y)\,dy\big)^{1/2}$) applies and yields
$$\frac{\sum_{k\ge0}h(ut-S_k)1_{\{S_k\le ut\}}-\mu^{-1}\int_0^{ut}h(y)\,dy}{\sqrt{\int_0^t v(y)\,dy}}\overset{f.d.}{\Rightarrow}0.$$

Summing the last relation and (3.121) finishes the proof for cases (Bi1).
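The relation $\int_0^t v(y)\,dy\sim t\,v(t)/(1+\beta)$ for $v$ regularly varying of index $\beta>-1$ (Karamata's theorem, used here via Lemma 6.1.4(c)) can be illustrated numerically; in this sketch the function $v(y)=\sqrt{y}\,\log(1+y)$, with $\beta=1/2$, is an arbitrary choice:

```python
import math

def v(y):
    # regularly varying at infinity of index beta = 1/2
    return math.sqrt(y) * math.log(1.0 + y)

def integral(t, n=200000):
    # trapezoidal approximation of int_0^t v(y) dy
    ys = [t * k / n for k in range(n + 1)]
    vals = [v(y) for y in ys]
    return (t / n) * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

for t in (1e2, 1e4, 1e6):
    print(t, t * v(t) / integral(t))  # slowly approaches 1 + beta = 1.5 from above
```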

$^6$ Lemma 3.3.34 requires that $h$ be eventually monotone and eventually nonnegative. If $h$ is eventually nonpositive, we simply replace it with $-h$.

Cases (Bi2) and (Bi3) Using Theorem 3.3.12 we infer


P R ut
k0 h.ut  Sk /1fSk utg 
1 0 h.y/dy f:d:
) I˛; .u/: (3.122)
g.t/h.t/

Cases (Bi2) By Lemma 3.3.31 g.t/ is regularly varying at 1 of index 1=˛.


Hence g.t/h.t/ is regularly varying of positive index. If v is dRi, an application
of Lemma 3.3.33 (with s.t/ D g.t/h.t/) yields
P
Y.ut/  k0 h.ut  Sk /1fSk utg f:d:
) 0: (3.123)
g.t/h.t/
Rt Rt
If limt!1 0 v.y/dy D 1, then the assumption limt!1 ..g.t/h.t//2 = 0 v.y/dy/ D
Rt
1 implies that limt!1 .v.t/= 0 v.y/dy/ D 0. To see this, we can assume
without loss of generality that v is monotone. If v is nonincreasing, then the
claimed convergence follows immediately. Hence, consider R t the case where v is
nondecreasing. Since .g.t/h.t//2 is regularly varying and 0 v.y/dy  v.t=2/t=2,
we conclude that there exists an a > 0 such that limt!1 .ta =v.t// D 1. Let a
denote the infimum of these a. Then, there exists " > 0 such that ta C" =v.t/ ! 1
whereas ta C"1 =v.t/ ! 0. Consequently,

v.t/ v.t/ 2v.t/ v.t/ .t=2/a C"1


Rt  Rt  D 2a C" a C" ! 0
0 v.y/dy t=2 v.y/dy tv.t=2/ t v.t=2/

because both factors tend to zero. Invoking Lemma 3.3.33 again allows us to
conclude that (3.123) holds in this case, too. Summing (3.122) and (3.123) finishes
the proof for cases (Bi2).
Cases (Bi3) We only give a proof for case (B13) in which  2 < 1, the other cases
being similar. Write
R ut P
Y.ut/ 
1 0 h.y/dy Y.ut/  k0 h.ut  Sk /1fSk utg
p D p
th.t/ th.t/
P R
1 ut
k0 h.ut  Sk /1fSk utg 
0 h.y/dy
C p
th.t/
DW At .u/ C Bt .u/:

According to Theorem 3.3.9, (3.121) holds which is equivalent to

f:d:
At .u/ ) c1 Vˇ .u/

where $c_1:=\sqrt{b\mu^{-1}}$. From (3.122) we already know that
1 . From (3.122) we already know that

f:d:
Bt .u/ ) c2 I2; .u/ (3.124)
where $c_2:=\sigma\sqrt{\mu^{-3}}$. By the Cramér–Wold device (see p. 232) and Lévy’s
3 . By the Cramér–Wold device (see p. 232) and Lévy’s
continuity theorem, it suffices to check that, for any m 2 N, any real numbers
˛1 ; : : : ; ˛m , ˇ1 ; : : : ; ˇm , any 0 < u1 < : : : ; um < 1 and any w; z 2 R,
 X m X
m 
lim E exp iw ˛j At .uj / C iz ˇr Bt .ur / (3.125)
t!1
jD1 rD1
 X
m   X
m 
D E exp iwc1 ˛j Vˇ .uj / E exp izc2 ˇr I2; .ur /
jD1 rD1
 X 
  m
D exp  D.u1 ; : : : ; um /c21 w2 =2 E exp izc2 ˇr I2; .ur /
rD1

with D.u1 ; : : : ; um / defined in (3.74).


The idea behind the subsequent proof is that while $B_t$ is $\mathcal{F}$-measurable, the finite-dimensional distributions of $A_t$ converge weakly conditionally on $\mathcal{F}$. To make this precise, we write
 X m X
m 
EF exp iw ˛j At .uj / C iz ˇr Bt .ur /
jD1 rD1
 X m   X m 
D exp iz ˇr Bt .ur / EF exp iw ˛j At .uj / :
rD1 jD1

In view of (3.124)
 X m   X
m 
d
exp iz ˇr Bt .ur / ! exp izc2 ˇr I2; .ur / :
rD1 rD1

Since X and  are assumed independent, relations (3.75) and (3.76) read
X P
2
EF ZkC1;t ! D.u1 ; : : : ; um /
k0

and
X P
2
EF ZkC1;t 1fjZkC1;t j>yg ! 0
k0

for all $y>0$, respectively. With these at hand and noting that
$$y(t):=\frac{\sqrt{\mu^{-1}tv(t)}}{\sqrt{t}\,h(t)}\to c_1,$$

we infer
 X m   X 
EF exp iw ˛j At .uj / D EF exp iwy.t/ ZkC1;t
jD1 k0

d
! exp.D.u1 ; : : : ; um /c21 w2 =2/

by formula (3.89) of Lemma 3.3.30. Since the right-hand side of the last expression
is nonrandom, Slutsky’s lemma implies
 X m   X m 
exp iz ˇr Bt .ur / EF exp iw ˛j At .uj /
rD1 jD1
 X
m 
d
! exp izc2 ˇr I2; .ur / exp.D.u1 ; : : : ; um /c21 w2 =2/:
rD1

An application of the Lebesgue dominated convergence theorem finishes the proof


of (3.125). The proof of Theorem 3.3.18 is complete. t
u
Proof of Theorem 3.3.19 Case (C1). According to Theorem 3.3.10
s  
Pf > tg X f:d:
Y.ut/  h.ut  Sk /1fSk utg ) Z˛; ˇ .u/: (3.126)
v.t/ k0

Thus, it remains to show that


s
Pf > tg X f:d:
h.ut  Sk /1fSk utg ) 0:
v.t/ k0

Invoking the Cramér–Wold device (see p. 232), Markov’s inequality, and the regular
variation of the normalization factor, we conclude that it is enough to prove that
s
Pf > tg X
E jh.t  Sk /j1fSk tg
v.t/ k0
s Z
Pf > tg
D jh.t  x/j dU.x/ ! 0: (3.127)
v.t/ Œ0; t

This
p follows immediately from Lemma 6.2.16(b) p with f1 .t/ D jh.t/j, f .t/ D
v.t/Pf > tg, D .ˇ  ˛/=2 and q.t/ D u.t/ for u.t/ defined in Theo-
rem 3.3.10. Note that f1 D o. f / in view of (3.52). The proof for case (C1) is
complete.
Case (C2) Using Theorem 3.3.13 we infer
Z
Pf > tg X f:d:
h.ut  Sk /1fSk utg ) .u  y/ dW˛ .y/ D J˛; .u/:
h.t/ k0 Œ0; u

Thus, we are left with showing that


 X 
Pf > tg f:d:
Y.ut/  h.ut  Sk /1fSk utg ) 0:
h.t/ k0

Appealing to Markov’s inequality and the Cramér–Wold device we conclude that it


suffices to prove
   X 2
Pf > tg 2
E Y.ut/  h.ut  Sk /1fSk utg
h.t/ k0
 2 Z
Pf > tg
D v.t  y/ dU.y/ ! 0:
h.t/ Œ0; t

This immediately follows from Lemma 6.2.16(b) with f .t/ D .h.t//2 =Pf > tg,
f1 .t/ D v.t/, D 2 C ˛ and q.t/ D .w.t//2 for w.t/ defined in Theorem 3.3.19.
Note that f1 D o. f / in view of (3.53). The proof for case (C2) is complete.
Case (C3) Put
s
Pf > tg X  
AN t .u/ WD XkC1 .ut  Sk /  h.ut  Sk / 1fSk utg ;
v.t/ k0
s
Pf > tg X
BN t .u/ WD h.ut  Sk /1fSk utg
v.t/ k0

and
Z
1=2
A˛;ˇ .u/ WD b .u  y/.ˇ˛/=2 dW˛ .y/ D b1=2 J˛; .ˇ˛/=2 .u/:
Œ0; u

We shall prove that

X
m
d X
m
j .AN t .uj / C BN t .uj // ! j .Z˛; ˇ .uj / C A˛; ˇ .uj //
jD1 jD1

for any m 2 N, any 1 ; : : : ; m 2 R and any 0 < u1 < : : : < um < 1.


Set
s
Pf > tg X
m
ZN kC1;t WD j .XkC1 .uj t  Sk /  h.uj t  Sk //1fSk uj tg
v.t/ jD1

Pm P
for k 2 N0 and t > 0. Then jD1 j AN t .uj / D k0 ZN kC1;t and

X Z X
m
Pf > tg
EF ZN kC1;t
2
D j2 v.t.uj  y//1Œ0;uj  .y/
k0
v.t/ Œ0; um  jD1

X 
C2 r l f .t.ur  y/; t.ul  y//1Œ0;ur  .y/ dy .ty/:
1r<lm

With this at hand, we write


 X 
m
 
EF exp iz j AN t .uj / C BN t .uj /
jD1
 X m   X 
D exp iz j BN t .uj / EF exp iz ZN kC1;t
jD1 k0
 X m   X 
D exp iz N
j Bt .uj / EF exp iz N
ZkC1;t
jD1 k0
 X 
 exp  N 2 2
EF ZkC1;t z =2
k0
 X m X 
C exp iz N
j Bt .uj /  N 2 2
EF ZkC1;t z =2 (3.128)
jD1 k0

for z 2 R.
Now we intend to show that under the present assumptions Lemma 3.3.29
is applicable. While relation (3.84) has already been checked in the proof of
Theorem 3.3.10, relation (3.54) holds by assumption. Thus, we are left with proving
that (3.86) holds which, under (3.54), is equivalent to
Z
Pf > tg
lim lim sup h.t.z  y// dy U.ty/ D 0
!1 t!1 h.t/ . z; z

for all z > 0. This follows p


immediately from Lemma 6.2.16(a) with f .t/ D h.t/,
D .ˇ  ˛/=2 and q.t/ D u.t/ for u.t/ defined in Theorem 3.3.10.

By formula (3.85) of Lemma 3.3.29

X
m X
1 j BN t .uj / C 2 EF ZN kC1;t
2

jD1 k0
s Z
Pf > tg X
m
D 1 j h.t.uj  y//1Œ0;uj  .y/dy .ty/
v.t/ Œ0;um  jD1

Z X
m
Pf > tg
C 2 j2 v.t.uj  y//1Œ0;uj  .y/
v.t/ Œ0;um  jD1

X 
C 2 r l f .t.ur  y/; t.ul  y//1Œ0;ur  .y/ dy .ty/
1r<lm

d X
m
! 1 j A˛; ˇ .uj / C 2 D˛;ˇ .u1 ; : : : ; um / (3.129)
jD1

for any real 1 and 2 with D˛; ˇ .u1 ; : : : ; um / defined in (3.95). Hence,
 X m X 
exp iz N
j Bt .uj /  N 2 2
EF ZkC1;t z =2
jD1 k0
 X m 
d
! exp iz j A˛;ˇ .uj /  D˛; ˇ .u1 ; : : : ; um /z2 =2
jD1

for each z 2 R, and thereupon


 X m X 
lim E exp iz N
j Bt .uj /  N 2 2
EF ZkC1;t z =2
t!1
jD1 k0
 X m 
D E exp iz j A˛; ˇ .uj /  D˛; ˇ .u1 ; : : : ; um /z2 =2
jD1
 X m 
D E exp iz j .A˛;ˇ .uj / C Z˛;ˇ .uj //
jD1

by Lebesgue’s dominated convergence theorem, the second equality following from the fact that, given $W_\alpha$, the conditional distribution of the linear combination of the $Z_{\alpha,\beta}(u_j)$ under consideration is centered normal with variance $D_{\alpha,\beta}(u_1,\dots,u_m)$.

According to formula (3.91) of Lemma 3.3.30


 X   X 
P
EF exp iz ZN kC1;t  exp  EF ZN kC1;t
2
z2 =2 ! 0:
k0 k0

Hence the first summand on the right-hand side of (3.128) tends to zero in
probability if we verify that
X d
EF ZN kC1;t
2
! D˛;ˇ .u1 ; : : : ; um / (3.130)
k0

and
X P
EF ZN kC1;t
2
1fjZN kC1;t j>yg ! 0 (3.131)
k0

for all y > 0. Relation (3.130) follows from (3.129) with 1 D 0 and 2 D 1.
In view of inequality (3.77) relation (3.131) is implied by (3.98) which has already
been checked (in the proof of Theorem 3.3.10). This finishes the proof for case (C3).
The proof of Theorem 3.3.19 is complete. t
u

3.4 Moment Results

In this section we get rid of the condition $X(t)=0$ for $t<0$. Thus $(Y(t))_{t\in\mathbb{R}}$ is defined by
$$Y(t):=\sum_{k\ge0}X_{k+1}(t-S_k),\qquad t\in\mathbb{R}.$$

Also, we assume that X has nondecreasing paths and that limt!1 X.t/ D 0 a.s.
The results concerning finiteness of power and exponential moments of .Y.t//
defined above we are going to derive hereafter are actually a key in the analysis
of the moments of N.t/ the number of visits to .1; t of a PRW .Tn /n1 (see
Section 1.4). The link between N.t/ and Y.t/ is discussed next.
Example 3.4.1 If Xn .t/ D 1fn tg for a real-valued random variable n , n 2 N,
then Y.t/ equals the number of visits to .1; t of the PRW .Sn1 C n /n2N , thus
Y.t/ D N.t/.
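The identity of Example 3.4.1 can be checked pathwise in a short simulation (a sketch; the exponential step law and standard normal perturbations are arbitrary choices):

```python
import random

rng = random.Random(42)
n = 500
xi = [rng.expovariate(1.0) for _ in range(n)]   # nonnegative steps
eta = [rng.gauss(0.0, 1.0) for _ in range(n)]   # perturbations
S = [0.0]
for x in xi:
    S.append(S[-1] + x)                         # S_0 = 0, S_k partial sums

t = 10.0
# shot-noise form: Y(t) = sum_k X_{k+1}(t - S_k) with X_n(s) = 1{eta_n <= s}
Y = sum(1 for k in range(n) if eta[k] <= t - S[k])
# perturbed-random-walk form: N(t) = #{n : S_{n-1} + eta_n <= t}
N = sum(1 for k in range(n) if S[k] + eta[k] <= t)
print(Y == N)  # True: the two counts coincide pathwise
```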
Our first moment result for shot-noise processes, assuming $\xi\ge0$ a.s., provides two conditions which combined are necessary and sufficient for the finiteness of $\mathbb{E}e^{aY(t)}$ for fixed $a>0$ and $t\in\mathbb{R}$. As before, let $\nu(x)=\inf\{n\ge1:S_n>x\}$, $\nu=\nu(0)$ and set $U(x):=\sum_{n\ge0}\mathbb{P}\{S_n\le x\}$.

Theorem 3.4.1 Let $\xi\ge0$ a.s. Then, for any $a>0$ and $t\in\mathbb{R}$,
$$\mathbb{E}e^{aY(t)}<\infty\tag{3.132}$$
holds if, and only if,
$$r(t):=\int_{[0,\infty)}\big(\mathbb{E}e^{aX(t-y)}-1\big)\,dU(y)<\infty\tag{3.133}$$
and
$$l(t):=\mathbb{E}\Big(\prod_{n=1}^{\nu}e^{aX_n(t-S_{n-1})}\Big)<\infty.\tag{3.134}$$
Moreover, (3.133) alone implies $\mathbb{E}e^{aY(t_0)}<\infty$ for some $t_0\le t$.


Remark 3.4.2 It can be extracted from the proof given next that we may replace $\nu$ in (3.134) by any other $(S_n)_{n\in\mathbb{N}_0}$-stopping time $N$. Note also that, unlike the case when $\mathbb{P}\{\xi<0\}>0$ to be discussed later, $\nu$ coincides with $\hat\nu:=\inf\{n\ge1:\xi_n>0\}$ and thus has a geometric distribution with parameter $\mathbb{P}\{\xi>0\}$. Finally, (3.134) is a trivial consequence of (3.133) if $\xi>0$ a.s.
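The geometric distribution of $\nu=\inf\{n\ge1:\xi_n>0\}$ noted in the remark is easy to confirm by simulation (a sketch; the step law with $\mathbb{P}\{\xi>0\}=0.3$ and $\xi=0$ otherwise, consistent with $\xi\ge0$, is an arbitrary choice):

```python
import random

rng = random.Random(0)
p = 0.3  # P{xi > 0}

def first_positive_index():
    # nu = inf{n >= 1 : xi_n > 0}; each step is positive with probability p
    n = 1
    while rng.random() >= p:  # event {xi_n = 0}, probability 1 - p
        n += 1
    return n

samples = [first_positive_index() for _ in range(100000)]
mean = sum(samples) / len(samples)
print(mean)  # close to 1/p = 10/3 for a geometric law on {1, 2, ...}
```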
We shall now carry over the previous result to the case when $(S_n)_{n\in\mathbb{N}_0}$ is a positively divergent random walk taking negative values with positive probability. Let $\sigma_0=0$ and $\sigma_n=\inf\{k>\sigma_{n-1}:S_k>S_{\sigma_{n-1}}\}$ for $n\in\mathbb{N}$. The function $U^{>}(x):=\sum_{n\ge0}\mathbb{P}\{S_{\sigma_n}\le x\}$, $x\ge0$, is the renewal function of the associated ordinary random walk of strictly ascending ladder heights.
Theorem 3.4.3 Let $(S_n)_{n\in\mathbb{N}_0}$ be positively divergent and $\mathbb{P}\{\xi<0\}>0$. Then the following assertions are equivalent for any $a>0$:
$$\mathbb{E}e^{aY(t)}<\infty\ \text{ for some }t\in\mathbb{R};\tag{3.135}$$
$$\mathbb{E}e^{aY(t)}<\infty\ \text{ for all }t\in\mathbb{R};\tag{3.136}$$
$$r^{>}(t)<\infty\ \text{ for all }t\in\mathbb{R};\tag{3.137}$$
$$r^{>}(t)<\infty\ \text{ for some }t\in\mathbb{R};\tag{3.138}$$
where $l(t)$ is defined as in (3.134) and
$$r^{>}(t):=\int_{[0,\infty)}\big(l(t-y)-1\big)\,dU^{>}(y)$$
for $t\in\mathbb{R}$. Furthermore, the conditions imply $r(t)<\infty$ and $l(t)<\infty$ for all $t\in\mathbb{R}$.
Turning to power moments, we consider the case $\xi\ge0$ a.s. only.
Theorem 3.4.4 Let $\xi\ge0$ a.s. Then for any $p\ge1$ and $t\in\mathbb{R}$, the following assertions are equivalent:
$$\mathbb{E}(Y(t))^p<\infty;\tag{3.139}$$
$$s_q(t):=\int_{[0,\infty)}\mathbb{E}(X(t-y))^q\,dU(y)<\infty\ \text{ for all }q\in[1,p].\tag{3.140}$$
3.5 Proofs for Section 3.4

Proof of Theorem 3.4.1 Observe that


$$e^{aY(t)}-1=\sum_{n\ge1}\big(e^{aX_n(t-S_{n-1})}-1\big)\prod_{k\ge n+1}e^{aX_k(t-S_{k-1})}\ge\sum_{n\ge1}\big(e^{aX_n(t-S_{n-1})}-1\big)\tag{3.141}$$
and
$$e^{aY(t)}\ge\prod_{n=1}^{\nu}e^{aX_n(t-S_{n-1})}$$

hold whenever Y.t/ < 1. Taking expectations in the above inequalities gives the
implications (3.132))(3.133) and (3.132))(3.134).
In turn, assume that (3.133) and (3.134) hold and define

Y
L.s/ WD eaXn .sSn1 /
nD1

for s 2 R. Pick " > 0 so small that

EL.s/1fS "g  ˇ WD EL.t/1fS "g < 1

for all s  t. This is possible because l.t/ D EL.t/ < 1 in view of (3.134) and L is
a.s. nondecreasing. Next define Y0 ./ D Y00 ./ D 0 and

X
n  Cn
X
Yn ./ WD Xk .  Sk1 /; Yn0 ./ WD Xk .  .Sk1  S //
kD1 kD C1

for n 2 N. Plainly, Yn ./ " Y./ and similarly


X
Yn0 ./ " Y 0 ./ WD Xn .  .Sn1  S //
n C1
3.5 Proofs for Section 3.4 171

as n ! 1. Note that each Yn0 ./ is a copy of Yn ./ and further independent of
.L./; S /. Now observe that
0
Yn .t/  Y .t/ C Yn .t/1f n; S "g
0
C Yn .t  "/1f n; S >"g
 Y .t/ C Yn0 .t/1fS "g C Yn0 .t  "/1fS >"g

and therefore, using the stated independence properties,


0 0

EeaYn .t/  E L.t/1fS "g eaYn .t/ C L.t/1fS >"g eaYn .t"/

 ˇ EeaYn .t/ C EL.t/ EeaYn .t"/ (3.142)

for any n 2 N. Now notice that

Y
n
 n
EeaYn .t/  E eaXk .t/ D EeaX1 .t/ < 1 (3.143)
kD1

where the finiteness follows from EeaX.t/ < 1 which, in its turn, is a consequence
of (3.133). By solving (3.142) for EeaYn .t/ and letting n ! 1, we arrive at

EeaY.t/  .1  ˇ/1 EL.t/ EeaY.t"/

and then upon successively repeating this argument at

Y
n1
EeaY.t/  .1  ˇ/n EeaY.tn"/ EL.t  k"/
kD0

for any n 2 N. Hence EeaY.t/ < 1 as claimed if we verify EeaY.t0 / < 1 for some
t0 < t.
To this end, pick t0 such that r.t0 / < 1 which is possible because (3.133) in
combination with the monotone convergence theorem entails limt!1 r.t/ D 0. Note
also that r.t0 / < 1 implies EeaX.t0 / < 1. Define

X
n  
aYn .t0 / aXk .t0 Sk1 /
bn WD Ee and cn WD E e 1
kD1

for n 2 N0 , in particular, b0 D 1 and c0 D 0. The bn is finite by the same argument


as in (3.143). Moreover, supn1 cn D r.t0 / < 1. With this notation and for any
172 3 Random Processes with Immigration

n 2 N, we obtain (under the usual convention that empty products are defined as 1)
$$e^{aY_n(t_0)}-1=\sum_{k=1}^{n}\bigl(e^{aX_k(t_0-S_{k-1})}-1\bigr)\prod_{j=k+1}^{n}e^{aX_j(t_0-S_{j-1})}\ \le\ \sum_{k=1}^{n}\bigl(e^{aX_k(t_0-S_{k-1})}-1\bigr)\prod_{j=k+1}^{n}e^{aX_j(t_0-S_{j-1}+S_k)}\ \le\ \sum_{k=1}^{n}\bigl(e^{aX_k(t_0-S_{k-1})}-1\bigr)\prod_{j=k+1}^{k+n-1}e^{aX_j(t_0-S_{j-1}+S_k)}.$$

For fixed $k,n\in\mathbb N$, the random variable $\prod_{j=k+1}^{k+n-1}e^{aX_j(t_0-S_{j-1}+S_k)}$ is independent of $e^{aX_k(t_0-S_{k-1})}$ and has the same distribution as $e^{aY_{n-1}(t_0)}$. Taking expectations, we get

$$b_n-1\ \le\ c_n b_{n-1}\ \le\ r(t_0)\,b_{n-1}$$

for $n\in\mathbb N$ and thereupon $b_n\le(1-r(t_0))^{-1}$ for all $n\in\mathbb N$. Finally, letting $n\to\infty$, we conclude $Ee^{aY(t_0)}<\infty$.

The previous argument has only used (3.133) and thus also proves the last assertion of the theorem. $\square$
Proof of Theorem 3.4.3 The last assertion follows from (3.141) and $l(t)-1\le r^>(t)$.

(3.135)$\Rightarrow$(3.136). Put $g(t):=Ee^{aY(t)}$ and $h_t(y):=E\bigl(e^{aX(t-y)}-1\bigr)g(t-y-\xi)$ for $t,y\in\mathbb R$. Let $M$ be a random variable with the same distribution as $-\inf_{n\ge0}S_n$. Use now the first line of (3.141) to infer, via conditioning and with the help of (6.20),

$$\begin{aligned}
g(t)-1&=E\Bigl(\sum_{n\ge1}\bigl(e^{aX_n(t-S_{n-1})}-1\bigr)\prod_{k\ge n+1}e^{aX_k(t-S_{k-1})}\Bigr)\\
&=\sum_{n\ge1}E\Bigl(\bigl(e^{aX_n(t-S_{n-1})}-1\bigr)\,E\Bigl(\prod_{k\ge n+1}e^{aX_k(t-S_{k-1})}\Bigm|S_n\Bigr)\Bigr)\\
&=\sum_{n\ge1}E\bigl[\bigl(e^{aX_n(t-S_{n-1})}-1\bigr)g(t-S_n)\bigr]\\
&=\sum_{n\ge0}Eh_t(S_n)=E\int_{[0,\infty)}h_t(y-M)\,dU^>(y)\\
&=\int_{[0,\infty)}\int_{[0,\infty)}E\bigl(e^{aX(t-y+z)}-1\bigr)g(t-y-\xi+z)\,dP\{M\le z\}\,dU^>(y)\\
&\ge\int_{[0,\infty)}E\bigl(e^{aX(t+z)}-1\bigr)\,dP\{M\le z\}\ \ge\ E\bigl(e^{aX(t+u)}-1\bigr)P\{M>u\}
\end{aligned}$$

for any $t\in\mathbb R$ and any $u>0$. The distribution of $M$, being concentrated on $[0,\infty)$ (because $P\{\xi<0\}>0$ by assumption) and infinitely divisible (see Theorem 2 on p. 613 in [89]), has unbounded support, i.e., $P\{M>u\}>0$ for any $u>0$. Consequently, $g(t+u)<\infty$ for any $u>0$ if $g(t)<\infty$. By monotonicity, we also have $g(t+u)<\infty$ for $u<0$.
(3.136)$\Rightarrow$(3.137). Put

$$L_n(s):=\exp\Bigl(a\sum_{k=\sigma_{n-1}+1}^{\sigma_n}X_k\bigl(s-(S_{k-1}-S_{\sigma_{n-1}})\bigr)\Bigr)$$

for $n\in\mathbb N$ and $s\in\mathbb R$; these are i.i.d. with $L_1(s)=L(s)$ as defined in the proof of Theorem 3.4.1. If $Ee^{aY(t)}<\infty$, then

$$e^{aY(t)}-1=\sum_{n\ge1}\bigl(L_n(t-S_{\sigma_{n-1}})-1\bigr)\prod_{k\ge n+1}L_k(t-S_{\sigma_{k-1}})\ \ge\ \sum_{n\ge1}\bigl(L_n(t-S_{\sigma_{n-1}})-1\bigr).$$

Taking expectations on both sides of this inequality gives $r^>(t)<\infty$.

(3.138)$\Rightarrow$(3.135). If $r^>(t)<\infty$ for some $t\in\mathbb R$, then also $l(t)<\infty$ and, therefore, $r^>(t_0)<1$ and $l(t_0)-1<1$ for some $t_0\le t$. Since

$$e^{aY_{\sigma_n}(s)}\ \le\ \prod_{k=1}^{n}L_k(s),$$

we infer

$$b_n:=Ee^{aY_{\sigma_n}(t_0)}\ \le\ \bigl(EL(t_0)\bigr)^n=l(t_0)^n<\infty$$

for any $n\in\mathbb N$. Putting

$$c_n:=E\sum_{k=1}^{n}\bigl(L_k(t_0-S_{\sigma_{k-1}})-1\bigr),$$

we have $\sup_{n\ge1}c_n=r^>(t_0)<1$ and thus find, by an estimation similar to that in the proof of Theorem 3.4.1 for nonnegative $\xi$, that $b_n\le1+c_nb_{n-1}$ and thus $b_n\le(1-r^>(t_0))^{-1}$ for all $n\in\mathbb N$. Hence, $Ee^{aY(t_0)}<\infty$, for $Y_{\sigma_n}(t_0)\uparrow Y(t_0)$ as $n\to\infty$. $\square$
For the proof of Theorem 3.4.4 we need a lemma.

Lemma 3.5.1 Let $1\le p=n+\delta$ with $n\in\mathbb N_0$ and $\delta\in(0,1]$. Then, for any $x,y\ge0$,

$$(x+y)^p\ \le\ x^p+y^p+p2^{p-1}\bigl(xy^{p-1}+x^ny^\delta\bigr). \qquad (3.144)$$

Proof For any $0\le r\le1$, we have $(1+r)^p=1+p\int_0^r(1+t)^{p-1}\,dt$. By the mean value theorem for integrals, for some $\theta\in(0,r)$,

$$(1+r)^p=1+pr(1+\theta)^{p-1}\ \le\ 1+p2^{p-1}r\ \le\ 1+p2^{p-1}r^\delta, \qquad (3.145)$$

where in the last step we have used that $0\le r\le1$. Now let $x,y\ge0$. When $x\le y$, use the first estimate in (3.145) with $r=x/y$ to get $(x+y)^p\le y^p+p2^{p-1}xy^{p-1}$. When $y\le x$, use the second estimate in (3.145) with $r=y/x$ to infer $(x+y)^p\le x^p+p2^{p-1}x^ny^\delta$. Thus, in any case, (3.144) holds. $\square$
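Inequality (3.144) is elementary but easy to misremember. The following spot check evaluates both sides on a grid of nonnegative points for one arbitrary choice of exponent; the lemma of course guarantees the inequality for all $x,y\ge0$.

```python
# Numerical spot check of inequality (3.144) for p = n + delta = 2.5
# (n = 2, delta = 0.5) on a grid of nonnegative x, y.
n, delta = 2, 0.5
p = n + delta

def rhs(x, y):
    return x**p + y**p + p * 2**(p - 1) * (x * y**(p - 1) + x**n * y**delta)

grid = [i * 0.25 for i in range(41)]          # x, y in {0, 0.25, ..., 10}
worst = max((x + y)**p - rhs(x, y) for x in grid for y in grid)
print(worst <= 0.0)  # the inequality holds at every grid point
```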
Proof of Theorem 3.4.4 (3.139)$\Rightarrow$(3.140). Let $E(Y(t))^p<\infty$ and $q\in[1,p]$. Using the superadditivity of the function $x\mapsto x^q$ for $x\ge0$, we then infer

$$\infty>E(Y(t))^q\ \ge\ \sum_{k\ge1}E\bigl(X_k(t-S_{k-1})\bigr)^q=\int_{[0,\infty)}E(X(t-y))^q\,dU(y),$$

which is the desired conclusion.

(3.140)$\Rightarrow$(3.139). To prove this implication, we write $p=n+\delta$ with $n\in\mathbb N_0$, $\delta\in(0,1]$, and use induction on $n$. When $n=0$, then necessarily $\delta=1$, i.e., $p=1$. Then there is nothing to verify, for

$$EY(t)=\int_{[0,\infty)}EX(t-y)\,dU(y)=s_1(t)<\infty.$$

In the induction step, we assume that the asserted implication holds for $p=n$ and conclude that it then also holds for $p=n+\delta$ for all $\delta\in(0,1]$. To this end, assume that $p=n+\delta$ for some $n\in\mathbb N$ and $\delta\in(0,1]$ and that $s_q(t)<\infty$ for all $q\in[1,p]$. By the induction hypothesis, $E(Y(t))^n<\infty$. For $k\in\mathbb N$ and $t\in\mathbb R$, define

$$Y_k(t):=\sum_{j\ge k+1}X_j\bigl(t-(S_{j-1}-S_k)\bigr).$$

Then $Y_k(\cdot)$ is a copy of $Y_0(\cdot):=Y(\cdot)$ which is also independent of

$$\mathcal F_k:=\sigma\bigl(((X_j(t))_{t\in\mathbb R},\xi_j):j=1,\dots,k\bigr).$$

Observe that $Y_k(t)=X_{k+1}(t)+Y_{k+1}(t-\xi_{k+1})$ for all $t\in\mathbb R$. Using (3.144), we get

$$(Y(t))^p=\bigl(X_1(t)+Y_1(t-\xi_1)\bigr)^p\ \le\ (X_1(t))^p+(Y_1(t-\xi_1))^p+p2^{p-1}\Bigl(X_1(t)\bigl(Y_1(t-\xi_1)\bigr)^{p-1}+(X_1(t))^n\bigl(Y_1(t-\xi_1)\bigr)^\delta\Bigr).$$
Iterating this inequality and using

$$Y_k(t-S_k)=\sum_{j\ge k+1}X_j(t-S_{j-1})\ \to\ 0,\qquad k\to\infty\quad\text{a.s.,}$$

we obtain the following upper bound for $(Y(t))^p$:

$$(Y(t))^p\ \le\ \sum_{j\ge1}\bigl(X_j(t-S_{j-1})\bigr)^p+p2^{p-1}\sum_{j\ge1}\Bigl(X_j(t-S_{j-1})\bigl(Y_j(t-S_j)\bigr)^{p-1}+\bigl(X_j(t-S_{j-1})\bigr)^n\bigl(Y_j(t-S_j)\bigr)^\delta\Bigr).$$

$E(Y(t))^n<\infty$ implies that $E(Y(t))^q$ is finite for $0<q\le n$. Using this and the monotonicity of $Y_j$, we conclude

$$E(Y(t))^p\ \le\ s_p(t)+p2^{p-1}\bigl(s_1(t)E(Y(t))^{p-1}+s_n(t)E(Y(t))^\delta\bigr)<\infty.\qquad\square$$

3.6 Bibliographic Comments

Random processes with immigration have been used to model various phenomena. An incomplete list of possible areas of application includes anomalous diffusion in physics [210], earthquake occurrences in geology [254], rainfall modeling in meteorology [242, 258], highway traffic engineering [159, 204], river flow and streamflow modeling in hydrology [193, 259], computer failure modeling [195], network traffic in computer science [186, 212, 238, 239], insurance [181, 182], and finance [180, 245].
In the case where $\xi$ has an exponential distribution, the process $Y$ (or its stationary version) may be called a random process with immigration at the epochs of a Poisson process, or a random process with Poisson immigration. Weak convergence of random processes with Poisson immigration has received considerable attention. In some papers of a more applied nature, weak convergence of $Y_t(u)=(a(t))^{-1}(Y(u+t)-b(u+t))$ for $X$ of a specific form is investigated. In the list to be given next, $\eta$ denotes a random variable independent of $\xi$, and $f$ a deterministic function which satisfies certain restrictions specified in the cited papers:

• $X(t)=\mathbb 1_{\{\eta>t\}}$ and $X(t)=t\wedge\eta$, functional convergence, see [238];
• $X(t)=\eta f(t)$, stationary version of $Y$, functional convergence, see [180];
• $X(t)=f(t\wedge\eta)$, convergence of finite-dimensional distributions, see [186]; functional convergence, see [239];
• $X(t)=\eta_1/\eta_2\,f(t\eta_2)$, stationary version, convergence of finite-dimensional distributions, see [94, 95].

The articles [125, 163, 181, 183, 192] are of a more theoretical nature and study weak convergence of $Y_t(u)$ for general (not explicitly specified) $X$. The work [163] contains further pointers to relevant literature which could have extended our list of particular cases given above.
In the case where the distribution of $\xi$ is exponential, the variables $Y_t(u)$ have infinitely divisible distributions with characteristic functions of a rather simple form. Furthermore, the convergence, as $t\to\infty$, of these characteristic functions to the characteristic function of a limiting infinitely divisible distribution follows from the general theory. Also, in this context Poisson random measures arise naturally, and working with them considerably simplifies the analysis. In the case where the distribution of $\xi$ is not exponential, the aforementioned approaches are not applicable. We are aware of several papers in which weak convergence of the processes $Y$, properly normalized, centered, and rescaled, is investigated in the case where $\xi$ has a distribution other than exponential. In [132] weak convergence on $D[0,1]$ of

$$\frac{1}{\sqrt n}\Bigl(\sum_{k\ge0}X_{k+1}(un-S_k)\mathbb 1_{\{S_k\le un\}}-\frac{1}{E\xi}\int_0^{un}E(X(y))\,dy\Bigr)$$

to a Gaussian process is proved under rather restrictive assumptions (in particular, concerning the existence of moments of order four). See also Theorem 1 on p. 103 of [47] for a similar result with $X(t)=\mathbb 1_{\{\eta>t\}}$ in a more general setting. With the same $X$, weak convergence of the finite-dimensional distributions of $(Y_t(u))$ as $t\to\infty$ is settled in [212] under the assumption that $\xi$ and $\eta$ are independent and some moment-type conditions. Weak convergence of $Y_t(1)$ has been much investigated, especially in the case where $X$ is a branching process (see, for instance, [15, 161, 223]). Until the end of this paragraph let $X$ be as in Example 3.1.2, i.e., $X(t)=g(t,\eta)$ for a random variable $\eta$ independent of $\xi$ and measurable $g:\mathbb R^2\to\mathbb R$ satisfying $g(t,x)=0$ for $t<0$. Weak convergence of the one-dimensional distributions of $Y$ is analyzed in Section 6 of [214]. Our Example 3.2.1 shows that Theorem 6.1 in [214] does not hold in the stated generality. Functional limit theorems for $\int_0^tY(s)\,ds$ are obtained in [133] (in [133] the process $Y$ is called a flag process). Observe that $\int_0^tY(s)\,ds$ is a random process with immigration which corresponds to the response process $t\mapsto\int_0^tg(s,\eta)\,ds$.
Let us also note that various distributional aspects, other than distributional convergence, of particular random processes with immigration have been investigated. For $X$ as in Example 3.1.2 see, for instance, [246] and [250]. In [246], the sequence $(S_i)$ is more general than an ordinary random walk with positive steps.
Let $(X_k)_{k\in\mathbb Z}$ be i.i.d. stochastic processes and $(S_j)_{j\in\mathbb Z}$ the points of a point process, the two sequences being independent. One may wonder which conditions ensure the a.s. conditional or absolute convergence of the series $\sum_{k\in\mathbb Z}X_{k+1}(t-S_k)$ for fixed $t\in\mathbb R$. Conditioning on $(S_j)$ gives an infinite sum of independent random variables whose a.s. conditional or absolute convergence is amenable to the three-series theorem. This is the underlying idea behind the necessary and sufficient conditions obtained in [260] for the a.s. convergence of the aforementioned series. It would be interesting to find a practicable criterion for the a.s. convergence of such series in which $(S_j)$ is an ordinary random walk and $(X_j,S_j-S_{j-1})_{j\in\mathbb Z}$ are independent.
Theorem 3.2.1 is part of Theorem 2.2 in [149]. The other part of the cited result treats weak convergence in the Skorokhod space $D(\mathbb R)$. The recent article [205] removes the restrictive assumption that $X$ and $\xi$ are independent, irrespective of whether weak convergence of finite-dimensional distributions or weak convergence on $D(\mathbb R)$ is concerned.
Theorem 3.2.2 is a consequence of Theorem 2.4 in [146]. We think that Theorem 3.2.2 can be strengthened to weak convergence of the finite-dimensional distributions of $(Y(u+t))_{u\in\mathbb R}$, properly centered. Proving or disproving a functional convergence in this setting seems to be an interesting open problem.
Example 3.2.3(a) In Theorem 1 of [223] the same criterion is derived for the convergence of one-dimensional distributions via an analytic argument. Under the condition $E\eta<\infty$, which entails $E\xi<\infty$, weak convergence of one-dimensional distributions of a subcritical process with immigration was proved in Theorem 3 of [161].
Example 3.2.4(a) Weak convergence of one-dimensional distributions was proved in Theorem 2.1 of [146] under the assumption that the function $t\mapsto|f(t)|$ is dRi, not assuming, however, that $f\in D(\mathbb R)$. Note that if $f\in D(\mathbb R)$, then $f$ is bounded on compact intervals, and the function $t\mapsto|f(t)|\wedge1$ is dRi if, and only if, so is $t\mapsto|f(t)|$.
Theorems 3.3.9 and 3.3.10 are taken from [148]. It would be interesting to prove 'functional versions' of these results. As far as weak convergence of finite-dimensional distributions in Theorem 3.3.10 is concerned, we think that it holds for all $\beta\in\mathbb R$ rather than only for $\beta\le\alpha$, and that the technical assumption on the existence of a monotone $u$ is not needed.
Theorem 3.3.12 is Theorem 2.9 in [146] in the case where $h$ is eventually nonincreasing (hence $\rho\in(-1/\alpha,0]$) and a corollary to Theorem 1.1 in [140] in the case where $h$ is eventually nondecreasing (hence $\rho\ge0$). Actually, in the last cited theorem weak convergence in the $J_1$-topology (cases (B1) and (B2)) and the $M_1$-topology (case (B3)) on $D[0,\infty)$ was proved. According to Lemma 3.3.24(b), any version of $I_{\alpha,\rho}$ for $\alpha\in(1,2)$ and $\rho\in(-1/\alpha,0)$ does not belong to $D(0,\infty)$, which excludes the possibility that a classical functional limit theorem holds with $I_{\alpha,\rho}$ being the limit process. It is an interesting open problem whether there is weak convergence on $D(0,\infty)$ when $h$ is eventually nonincreasing in case (B3) with $\rho=0$ and in cases (B1) and (B2). The functional limit theorems (3.44), which are an indispensable ingredient in the proof of Theorem 3.3.12, are well known; see, for instance, Theorems 5.3.1 and 5.3.2 in [119] or Section 7.3.1 in [261].
Theorem 3.3.13 is Theorem 2.4 in [143]. For $\rho\in[-\alpha,0]$ this result, accompanied by convergence of moments, was earlier obtained in Theorem 2.9 of [146] under a minor additional assumption. Actually, whenever $\rho>-\alpha$ and $h$ is eventually monotone, there is weak convergence in the Skorokhod space $D(0,\infty)$ endowed with the $J_1$-topology. Eventually nondecreasing and nonincreasing $h$ are covered by Theorem 1.1 in [140] and Theorem 2.1 in [143], respectively. A perusal of the proof of Theorem 2.1 in [143] reveals that the result actually holds without the monotonicity assumption. We suspect that the same is true in the situation of Theorem 1.1 in [140]. Functional limit theorem (3.45) is a consequence of the well-known fact that $S_{[ut]}$, properly normalized, converges weakly in the $J_1$-topology on $D[0,\infty)$ to an $\alpha$-stable subordinator (which has strictly increasing paths a.s.) and of Corollary 13.6.4 in [261]. Note that there is certain confusion about convergence (3.45) in the literature (see [267] for more details).

Theorem 3.3.14 was proved in [142]. In different contexts the limit process $X$ from Theorem 3.3.14 has arisen in [46, 49]. It is an open problem whether the result of Theorem 3.3.14 still holds under the sole assumption $E\xi^2<\infty$ rather than $E\xi^r<\infty$ for some $r>2$. We think that a proof, if one exists, should be technically involved. An even more complicated open problem is: what happens in the case where the distribution of $\xi$ belongs to the domain of attraction of a stable distribution with finite mean?
Section 3.3.4 is based on [148].
Theorem 3.3.21 is obtained here as a specialization of Theorems 3.3.12 and 3.3.13. Originally, Theorem 3.3.21 was implicitly proved in [150] (see Theorems 1.2 and 1.3 there), following earlier work in [138, 139]. Assuming that $\xi$ and $\eta$ are independent, a counterpart of part (C1) of Theorem 3.3.21 with a random centering (i.e., a result that follows from Theorem 3.3.9) was obtained in Proposition 3.2 of [212].

With the exception of Theorem 3.3.21, Section 3.3.5 follows the presentation in [148]. Functional limit theorems for $Y$ corresponding to $X(t)=\mathbb 1_{\{\eta\le t\}}$, similar in spirit to Theorem 3.3.21, were recently obtained in Theorem 3.2 of [7]. These provide a generalization of Example 3.3.1.
Section 3.3.6 is based on [140, 143, 146–148].
The results of Section 3.4 came from [8]. Inequality (3.144) is a variant of an
inequality we have learned from [118]. The proof given here is a slight modification
of the argument given in the cited reference.
Chapter 4
Application to Branching Random Walk

The purpose of this chapter is twofold. First, we obtain a criterion for uniform integrability of the intrinsic martingales $(W_n)_{n\in\mathbb N_0}$ in the branching random walk as a corollary to Theorem 2.1.1, which provides a criterion for the a.s. finiteness of perpetuities. Second, we state a criterion for the existence of logarithmic moments of the a.s. limits of $(W_n)_{n\in\mathbb N_0}$ as a corollary to Theorems 1.3.1 and 2.1.4. While the former gives a criterion for the existence of power-like moments for suprema of perturbed random walks, the latter contains a criterion for the existence of logarithmic moments of perpetuities. To accomplish this, we shall exhibit an interesting connection between these at first glance unrelated models, which emerges when studying the weighted random tree associated with the branching random walk under the so-called size-biased measure.

4.1 Definition of Branching Random Walk

The evolution of a branching process can be conveniently described in terms of the evolution of a certain population. Armed with this idea, consider a population starting from one ancestor located at the origin and evolving like a Galton–Watson process, but with the generalization that individuals may have infinitely many children. All individuals reside in points on the real line, and the displacements of children relative to their mother are described by a point process $Z=\sum_{i=1}^{N}\varepsilon_{X_i}$ on $\mathbb R$. Here $\varepsilon_x$ is the probability measure concentrated at $x$. Thus $N=Z(\mathbb R)$ gives the total number of offspring of the considered mother and $X_i$ the displacement of the $i$-th child. The displacement processes of all population members are supposed to be independent copies of $Z$. We further assume $Z(\{-\infty\})=0$ a.s. and $EN>1$ (supercriticality), including the possibility $P\{N=\infty\}>0$. If $P\{N<\infty\}=1$, then the population size process forms an ordinary Galton–Watson process. Supercriticality ensures survival of the population with positive probability.
© Springer International Publishing AG 2016
A. Iksanov, Renewal Theory for Perturbed Random Walks and Similar Processes, Probability and Its Applications, DOI 10.1007/978-3-319-49113-4_4

For $n\in\mathbb N_0$, let $Z_n$ be the point process that describes the positions on $\mathbb R$ of the individuals of the $n$-th generation, their total number being $Z_n(\mathbb R)$.

Definition 4.1.1 The sequence $(Z_n)_{n\in\mathbb N_0}$ is called a branching random walk (BRW).

Let $V:=\bigcup_{n\ge0}\mathbb N^n$ be the infinite Ulam–Harris tree of all finite sequences $v=v_1\ldots v_n$ (shorthand for $(v_1,\dots,v_n)$), with root $\varnothing$ ($\mathbb N^0:=\{\varnothing\}$) and edges connecting each $v\in V$ with its successors $vi$, $i\in\mathbb N$. The length of $v$ is denoted by $|v|$. Call $v$ an individual and $|v|$ its generation number. A BRW $(Z_n)_{n\in\mathbb N_0}$ may now be represented as a random labeled subtree of $V$ with the same root. This subtree $T$ is obtained recursively as follows. For any $v\in T$, let $N(v)$ be the number of its successors (children) and $Z(v):=\sum_{i=1}^{N(v)}\varepsilon_{X_i(v)}$ denote the point process describing the displacements of the children $vi$ of $v$ relative to their mother. By assumption, the $Z(v)$ are independent copies of $Z$. The Galton–Watson tree associated with this model is now given by

$$T:=\{\varnothing\}\cup\bigl\{v\in V\setminus\{\varnothing\}: v_i\le N(v_1\ldots v_{i-1})\ \text{for}\ i=1,\dots,|v|\bigr\},$$

and $X_i(v)$ denotes the label attached to the edge $(v,vi)\in T\times T$ and describes the displacement of $vi$ relative to $v$. Let us stipulate hereafter that $\sum_{|v|=n}$ means summation over all vertices of $T$ (not $V$) of length $n$. For $v=v_1\ldots v_n\in T$, put $S(v):=\sum_{i=1}^{n}X_{v_i}(v_1\ldots v_{i-1})$. Then $S(v)$ gives the position of $v$ on the real line (of course, $S(\varnothing)=0$), and $Z_n=\sum_{|v|=n}\varepsilon_{S(v)}$ for all $n\in\mathbb N_0$.
Suppose there exists $\gamma>0$ such that

$$m(\gamma):=E\int_{\mathbb R}e^{-\gamma x}Z(dx)\in(0,\infty). \qquad (4.1)$$

For $n\in\mathbb N$, define $\mathcal F_n:=\sigma(Z(v):|v|\le n-1)$, and let $\mathcal F_0$ be the trivial $\sigma$-algebra. For $n\in\mathbb N_0$, put

$$W_n(\gamma)=W_n:=(m(\gamma))^{-n}\int_{\mathbb R}e^{-\gamma x}Z_n(dx)=(m(\gamma))^{-n}\sum_{|v|=n}e^{-\gamma S(v)}=\sum_{|v|=n}L(v),$$

where $L(v):=e^{-\gamma S(v)}/(m(\gamma))^{|v|}$.

The sequence $(W_n,\mathcal F_n)_{n\in\mathbb N_0}$ forms a nonnegative martingale with mean one and is thus a.s. convergent with limit variable $W$, say, satisfying $EW\le1$. If $(W_n)$ is uniformly integrable, then $W_n$ converges to $W$ a.s. and in mean. The latter ensures $EW=1$, which in particular implies that $P\{W>0\}>0$. Theorem 4.2.1 below provides a necessary and sufficient condition for the uniform integrability of $(W_n)$, under no additional assumptions on the BRW beyond the indispensable (4.1). In order to formulate it, we first need to introduce a multiplicative random walk associated with our model. This will in fact be done on a suitable measurable space under a second probability measure $\widehat P$ related to $P$; for details see Section 4.3. Let
$M$ be a random variable with distribution defined by

$$\widehat P\{M\in B\}:=E\Bigl[\sum_{|v|=1}L(v)\varepsilon_{L(v)}(B)\Bigr] \qquad (4.2)$$

for any Borel subset $B$ of $\mathbb R^+$. Notice that the right-hand side of (4.2) does indeed define a probability distribution because $E\sum_{|v|=1}L(v)=EW_1=1$. More generally, we have (see, for instance, Lemma 4.1 in [37])

$$\widehat P\{\Pi_n\in B\}=E\Bigl[\sum_{|v|=n}L(v)\varepsilon_{L(v)}(B)\Bigr]$$

for each $n\in\mathbb N$, whenever $(M_k)_{k\in\mathbb N}$ is a family of independent copies of $M$ and $\Pi_n:=\prod_{k=1}^{n}M_k$. It is important to note that

$$\widehat P\{M=0\}=0\quad\text{and}\quad\widehat P\{M=1\}<1. \qquad (4.3)$$

The first assertion follows since, by (4.2), $\widehat P\{M>0\}=EW_1=1$. As for the second, observe that $\widehat P\{M=1\}=1$ implies $E\sum_{|v|=1}L(v)\mathbb 1_{\{L(v)\ne1\}}=0$ which, in combination with $EW_1=1$, entails that the point process $Z$ consists of only one point $u$ with $L(u)=1$. This contradicts the assumed supercriticality of the BRW.
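The mean-one martingale property of $W_1$ and the size-biasing in (4.2) can be made concrete in a minimal toy BRW: binary splitting with standard normal displacements and $\gamma=1$, all illustrative assumptions chosen for this sketch only. The sketch estimates $EW_1$ (which must equal 1) and the size-biased mean $\widehat EM=E\sum_{|v|=1}L(v)^2$, for which this toy model admits the closed form $e/2$.

```python
import math
import random

random.seed(3)

GAMMA = 1.0
M_GAMMA = 2.0 * math.exp(GAMMA**2 / 2.0)   # m(gamma) = E sum e^{-gamma X_i} for N = 2, X_i ~ N(0,1)

def first_generation_weights():
    # weights L(v) = e^{-gamma X_v} / m(gamma) of the two children of the root
    return [math.exp(-GAMMA * random.gauss(0.0, 1.0)) / M_GAMMA for _ in range(2)]

n_paths = 40000
w1_mean = 0.0      # Monte Carlo estimate of E W_1 (should be 1: mean-one martingale)
m_mean = 0.0       # estimate of E_hat M = E sum_{|v|=1} L(v)^2, cf. (4.2)
for _ in range(n_paths):
    ws = first_generation_weights()
    w1_mean += sum(ws) / n_paths
    m_mean += sum(w * w for w in ws) / n_paths

# closed form for this toy model: E sum L(v)^2 = 2 E e^{-2X} / m(gamma)^2 = e/2
print(round(w1_mean, 2), round(m_mean, 2))
```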

4.2 Criterion for Uniform Integrability of $W_n$ and Moment Result

The chosen notation for the multiplicative random walk associated with the given BRW, as opposed to the notation in Section 2.1, is intentional. Also, we keep the definition of $J(x)$ from there (see p. 44).

Theorem 4.2.1 The martingale $(W_n)_{n\in\mathbb N_0}$ is uniformly ($P$-)integrable if, and only if, the following two conditions hold true:

$$\lim_{n\to\infty}\Pi_n=0\quad\widehat P\text{-a.s.} \qquad (4.4)$$

and

$$EW_1J(\log^+W_1)=\int_{(1,\infty)}xJ(\log x)\,P\{W_1\in dx\}<\infty. \qquad (4.5)$$

There are three distinct cases in which conditions (4.4) and (4.5) hold simultaneously:

(A1) $\widehat E\log M\in(-\infty,0)$ and $EW_1\log^+W_1<\infty$;
(A2) $\widehat E\log M=-\infty$ and $EW_1J(\log^+W_1)<\infty$;
(A3) $\widehat E\log^+M=\widehat E\log^-M=+\infty$, $EW_1J(\log^+W_1)<\infty$, and

$$\widehat EJ(\log^+M)=\int_{(1,\infty)}\frac{\log x}{\int_0^{\log x}\widehat P\{-\log M>y\}\,dy}\,\widehat P\{M\in dx\}<\infty.$$

Remark 4.2.2 Condition (4.4) together with $EW_1\log^+W_1<\infty$, which is a well-known condition in the theory of branching processes, is always sufficient for the uniform integrability of $(W_n)$. It is curious that if $\widehat E\log M$ is infinite, the condition $EW_1\log^+W_1<\infty$ is no longer necessary.
Remark 4.2.3 Using Theorem 4.2.1 we shall demonstrate that Doob's condition is not necessary for the supremum of a martingale to be integrable.

Let $(U_n)$ be a nonnegative martingale. It is known that Doob's condition $\sup_{n\ge0}EU_n\log^+U_n<\infty$ ensures $E\sup_{n\ge0}U_n<\infty$ and thereupon the uniform integrability of $(U_n)$. Note that there are uniformly integrable martingales with nonintegrable suprema. For instance, let $(S_n)_{n\in\mathbb N_0}$ be an ordinary finite-mean random walk with positive jumps. Then $(S_n/n,\sigma(S_n,S_{n+1},\dots))_{n\in\mathbb N}$ forms a reversed martingale. By Proposition V-3-11 in [221], this martingale is uniformly integrable. However, Theorem 4.14 in [67] tells us that the supremum of this martingale is nonintegrable provided that $ES_1\log^+S_1=\infty$.

For the martingales $(W_n)$ things are better: $(W_n)$ is uniformly integrable if, and only if, its supremum is integrable (see (4.10) in Lemma 4.3.3). By Theorem 4.2.1, if the conditions of (A2) hold with $EW_1\log^+W_1=\infty$ (the latter means that Doob's condition is violated), then $(W_n)$ is uniformly integrable, which implies that its supremum is integrable.
Restricting to the case (A1), the existence of moments of $W$ was studied in quite a number of articles, see 'Bibliographic Comments'. The following result goes further by covering the cases (A2) and (A3) as well.

Theorem 4.2.4 If $\lim_{n\to\infty}\Pi_n=0$ $\widehat P$-a.s. and

$$EW_1f(\log^+W_1)J(\log^+W_1)<\infty, \qquad (4.6)$$

then $(W_n)_{n\in\mathbb N_0}$ is uniformly integrable and

$$EWf(\log^+W)<\infty. \qquad (4.7)$$

Conversely, if (4.7) holds and $P\{W_1=1\}<1$, then (4.6) holds.


An interesting aspect of this theorem is that it provides conditions for the existence of $\Phi$-moments of $W$ for $\Phi$ slightly beyond $L^1$ without assuming the $(L\log L)$-condition to ensure uniform integrability.

4.3 Size-Biasing and Modified Branching Random Walk

We adopt the situation described in Section 4.1. Recall that $Z$ denotes a generic copy of the point process describing the displacements of children relative to their mother in the considered population. In the sequel we shall need the associated modified BRW with a distinguished ray $(\Xi_n)_{n\in\mathbb N_0}$, called the spine.

Let $Z^*$ be a point process whose distribution has Radon–Nikodym derivative $(m(\gamma))^{-1}\sum_{i=1}^{N}e^{-\gamma X_i}$ with respect to the distribution of $Z$. The individual $\Xi_0=\varnothing$ residing at the origin of the real line has children, the displacements of which relative to $\Xi_0$ are given by a copy $Z_0^*$ of $Z^*$. All the children of $\Xi_0$ form the first generation of the population, and among these the spinal successor $\Xi_1$ is picked with a probability proportional to $e^{-\gamma s}$ if $s$ is the position of $\Xi_1$ relative to $\Xi_0$ (size-biased selection). Now, while $\Xi_1$ has children the displacements of which relative to $\Xi_1$ are given by another independent copy $Z_1^*$ of $Z^*$, all other individuals of the first generation produce and spread offspring according to independent copies of $Z$ (i.e., in the same way as in the given BRW). All children of the individuals of the first generation form the second generation of the population, and among the children of $\Xi_1$ the next spinal individual $\Xi_2$ is picked with probability proportional to $e^{-\gamma s}$ if $s$ is the position of $\Xi_2$ relative to $\Xi_1$. It produces and spreads offspring according to an independent copy $Z_2^*$ of $Z^*$, whereas all siblings of $\Xi_2$ do so according to independent copies of $Z$, and so on. Let $\widehat Z_n$ denote the point process describing the positions of all members of the $n$-th generation. We call $(\widehat Z_n)_{n\in\mathbb N_0}$ the modified BRW associated with the ordinary BRW $(Z_n)_{n\in\mathbb N_0}$. Both the BRW and its modified version may be viewed as a random weighted tree, with an additional distinguished ray (the spine) in the second case. On an appropriate measurable space $(\mathbb X,\mathcal G)$ specified below, they can be realized as the same random element under two different probability measures $P$ and $\widehat P$, respectively. Let

$$\mathbb X:=\{(t,s,\iota): t\subseteq V,\ s\in F(t),\ \iota\in\mathcal R\}$$

be the space of weighted rooted subtrees of $V$ with the same root and a distinguished ray (spine), where $\mathcal R:=\{(\varnothing,\iota_1,\iota_2,\dots):\iota_k\in\mathbb N\ \text{for all}\ k\in\mathbb N\}$ denotes the set of infinite rays and $F(t)$ denotes the set of functions $s:V\to\mathbb R\cup\{-\infty\}$ assigning position $s(v)\in\mathbb R$ to $v\in t$ and $s(v)=-\infty$ to $v\notin t$. Endow this space with $\mathcal G:=\sigma(\mathcal G_n:n\in\mathbb N_0)$, where $\mathcal G_n$ is the $\sigma$-algebra generated by the sets

$$\{(t',s',\iota')\in\mathbb X: t_n'=t_n,\ s'_{|t_n}\in B\ \text{and}\ \iota'_{|n}=\iota_{|n}\},\qquad(t,s,\iota)\in\mathbb X,$$

where $t_n':=\{v\in t':|v|\le n\}$, $t_n$ ranges over the subtrees of $V$ with $\max\{|v|:v\in t_n\}\le n$, $B$ over the Borel sets of $\mathbb R^{t_n}$, and $\iota$ over $\mathcal R$. The subscript $|t_n$ means restriction to the coordinates in $t_n$, while the subscript $|n$ means restriction to all coordinates up to the $n$-th. Let further $\mathcal F_n\subseteq\mathcal G_n$ denote the $\sigma$-algebra generated by the sets

$$\{(t',s',\iota')\in\mathbb X: t_n'=t_n\ \text{and}\ s'_{|t_n}\in B\}.$$

Then under $\widehat P$ the identity map $(T,S,\Xi)=(T,(S(v))_{v\in V},(\Xi_n)_{n\in\mathbb N_0})$ represents the modified BRW with its spine, while $(T,S)$ under $P$ represents the original BRW (the way $P$ picks a spine does not matter and thus remains unspecified). Finally, the random variable $W_n:\mathbb X\to[0,\infty)$ defined by

$$W_n(t,s,\iota):=(m(\gamma))^{-n}\sum_{|v|=n}e^{-\gamma s(v)}$$

is $\mathcal F_n$-measurable for each $n\in\mathbb N_0$ and satisfies $W_n=\sum_{|v|=n}L(v)$. The relevance of these definitions for the $P$-martingale $(W_n,\mathcal F_n)_{n\in\mathbb N_0}$ to be studied hereafter is provided by the following lemma.
Lemma 4.3.1 For each $n\in\mathbb N_0$, $W_n$ is the Radon–Nikodym derivative of $\widehat P$ with respect to $P$ on $\mathcal F_n$. Moreover, if $W:=\limsup_{n\to\infty}W_n$, then
(1) $(W_n)$ is a $P$-martingale and $(1/W_n)$ is a $\widehat P$-supermartingale.
(2) $EW=1$ if, and only if, $\widehat P\{W<\infty\}=1$.
(3) $EW=0$ if, and only if, $\widehat P\{W=\infty\}=1$.
The link between the $P$-distribution and the $\widehat P$-distribution of $W_n$ is provided by

Lemma 4.3.2 For each $n\in\mathbb N_0$, $\widehat P(W_n\in\cdot)$ is a size-biasing of $P(W_n\in\cdot)$, that is,

$$EW_nf(W_n)=\widehat Ef(W_n)$$

for each nonnegative Borel function $f$ on $\mathbb R$. More generally,

$$EW_ng(W_0,\dots,W_n)=\widehat Eg(W_0,\dots,W_n) \qquad (4.8)$$

for each nonnegative Borel function $g$ on $\mathbb R^{n+1}$. Finally, if $(W_n)_{n\in\mathbb N_0}$ is uniformly $P$-integrable, then also

$$EWh(W_0,W_1,\dots)=\widehat Eh(W_0,W_1,\dots) \qquad (4.9)$$

holds true for each nonnegative Borel function $h$ on $\mathbb R^\infty$.

Proof Equality (4.8) is an immediate consequence of Lemma 4.3.1 when noting that $(W_0,\dots,W_n)$ is $\mathcal F_n$-measurable. In the uniformly integrable case, $W_n\to W$ a.s. and in mean with respect to $P$, which immediately implies that $W$ is the $P$-density of $\widehat P$ on $\mathcal F_\infty:=\sigma(\mathcal F_n:n\in\mathbb N_0)$ and thereupon also (4.9). $\square$
u
Also, we shall need another auxiliary result.

Lemma 4.3.3 Let $(W_n)_{n\in\mathbb N_0}$ be uniformly integrable with a.s. limit $W$ and put $W^*:=\sup_{n\ge0}W_n$. Then, for each $a\in(0,1)$, there exists $b=b(a)\in\mathbb R^+$ such that

$$P\{W>t\}\ \le\ P\{W^*>t\}\ \le\ b\,P\{W>at\} \qquad (4.10)$$

and

$$\widehat P\{W>t\}\ \le\ \widehat P\{W^*>t\}\ \le\ (b/a)\,\widehat P\{W>at\} \qquad (4.11)$$

for all $t>1$.

Proof Inequality (4.10), which can be found in Lemma 2 of [36] in the case of a.s. finite branching, was obtained without this restriction in Lemma 1 of [154] by a different argument.

We infer for the nontrivial part of (4.11)

$$\widehat P\{W^*>t\}=EW\mathbb 1_{\{W^*>t\}}\ \le\ EW^*\mathbb 1_{\{W^*>t\}}=\int_0^\infty P\{W^*>x\vee t\}\,dx\ \le\ \int_0^\infty b\,P\{W>a(x\vee t)\}\,dx=(b/a)\,EW\mathbb 1_{\{W/a>t\}}=(b/a)\,\widehat P\{W>at\}$$

for all $t>1$, where the first and the last equalities follow from (4.8) and (4.9), respectively, the first inequality is a consequence of $W\le W^*$ $P$-a.s., and the second inequality is implied by (4.10). $\square$

4.4 Connection with Perpetuities

Next we have to make the connection with perpetuities. For $u\in T$, let $\mathcal N(u)$ denote the set of children of $u$ and, if $|u|=k$,

$$W_n(u):=\sum_{v:\,uv\in T_{k+n}}\frac{L(uv)}{L(u)},\qquad n\in\mathbb N_0.$$

Since all individuals off the spine reproduce and spread as in the unmodified BRW, we have that, under $P$ as well as $\widehat P$, the $(W_n(u))_{n\in\mathbb N_0}$ for $u\in\bigcup_{n\ge0}\mathcal N(\Xi_n)\setminus\{\Xi_{n+1}\}$ are independent copies of $(W_n)_{n\in\mathbb N_0}$ under $P$. For $n\in\mathbb N$, define further

$$M_n:=\frac{L(\Xi_n)}{L(\Xi_{n-1})}=\frac{e^{-\gamma(S(\Xi_n)-S(\Xi_{n-1}))}}{m(\gamma)} \qquad (4.12)$$

and

$$Q_n:=\sum_{u\in\mathcal N(\Xi_{n-1})}\frac{L(u)}{L(\Xi_{n-1})}=\sum_{u\in\mathcal N(\Xi_{n-1})}\frac{e^{-\gamma(S(u)-S(\Xi_{n-1}))}}{m(\gamma)}. \qquad (4.13)$$

Then it is easily checked that the .Mn ; Qn /n2N are i.i.d. under b
P with distribution
given by
!!
XN
e X i e X i X e X j
N
b
Pf.M; Q/ 2 Ag D E 1A ;
iD1
m. / m. / jD1 m. /
!!
X X
DE L.u/1A L.u/; L.v/
jujD1 jvjD1

for any Borel


P set A where
P .M; Q/ denotes a generic copy of .Mn ; Qn / and our
convention jujDn D u2Tn should be recalled from Section 4.1. In particular,
!!
X X
b
PfQ 2 Bg D E L.u/1B L.u/ D EW1 1B .W1 /
jujD1 jujD1

for any measurable B, i.e.,

b
PfQ 2 dxg D x PfW1 2 dxg: (4.14)

Notice that this implies

b
PfQ D 0g D 0: (4.15)
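Relation (4.14) says that, under $\widehat{P}$, the variable $Q$ is a size-biased version of $W_1$. As a quick numerical illustration (not part of the text's argument), the sketch below checks that $E[W_1 f(W_1)] = \widehat{E}[f(Q)]$ for a test function $f$, taking $W_1\sim\mathrm{Exp}(1)$ — an illustrative choice with mean one, whose size-biased law is Gamma(2,1), i.e. a sum of two independent Exp(1) variables:

```python
import random

random.seed(7)

# Monte Carlo check of the size biasing in (4.14): if E W1 = 1, then
# P^{hat}{Q in dx} = x P{W1 in dx}.  W1 ~ Exp(1) is an illustrative choice;
# its size-biased law is then Gamma(2,1).
n = 200_000
w = [random.expovariate(1.0) for _ in range(n)]

def f(x):          # an arbitrary bounded test function
    return 1.0 / (1.0 + x)

# E[W1 f(W1)], computed under P
lhs = sum(wi * f(wi) for wi in w) / n
# E[f(Q)] with Q ~ Gamma(2,1), i.e. computed under P^{hat}
q = [random.expovariate(1.0) + random.expovariate(1.0) for _ in range(n)]
rhs = sum(f(qi) for qi in q) / n

print(round(lhs, 3), round(rhs, 3))
assert abs(lhs - rhs) < 0.01
```

The two estimates agree up to Monte Carlo error, as the size-biasing identity predicts.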

As for the distribution of $M$, we have

$$\widehat{P}\{M\in B\} = E\left(\sum_{|u|=1} L(u)\,1_B(L(u))\right)$$

which is in accordance with the definition given in (4.2). As we see from (4.12),

$$\Pi_n = M_1\cdot\ldots\cdot M_n = L(\xi_n), \quad n\in\mathbb{N}_0. \tag{4.16}$$

Here is the lemma that provides the connection between $(W_n)_{n\in\mathbb{N}_0}$ and the perpetuity generated by $(M_n,Q_n)_{n\in\mathbb{N}}$. Let $\mathcal{A}$ be the $\sigma$-algebra generated by $(M_n,Q_n)_{n\in\mathbb{N}}$ and the family of displacements of the children of the $\xi_n$ relative to their mother, i.e. of $\{S(u) : u\in\mathcal{N}(\xi_n),\ n\in\mathbb{N}_0\}$. For $n\in\mathbb{N}$ and $k=1,\dots,n$, put also

$$R_{n,k} := \sum_{u\in\mathcal{N}(\xi_{k-1})\setminus\{\xi_k\}} \frac{L(u)}{L(\xi_{k-1})}\left(W_{n-k}(u)-1\right)$$

and notice that $\widehat{E}(R_{n,k}\mid\mathcal{A}) = 0$ because each $W_{n-k}(u)$ is independent of $\mathcal{A}$ with mean one.

Lemma 4.4.1 With the previous notation the following identities hold true for each $n\in\mathbb{N}_0$:

$$W_n = \sum_{k=1}^n \Pi_{k-1}\left(Q_k + R_{n,k}\right) - \sum_{k=1}^{n-1}\Pi_k \quad \widehat{P}\text{-a.s.} \tag{4.17}$$

and

$$\widehat{E}(W_n\mid\mathcal{A}) = \sum_{k=1}^n \Pi_{k-1}Q_k - \sum_{k=1}^{n-1}\Pi_k \quad \widehat{P}\text{-a.s.} \tag{4.18}$$

Proof Each $v\in T_n$ has a most recent ancestor in $(\xi_k)_{k\in\mathbb{N}_0}$. By using this and recalling (4.13) and (4.16), one can easily see that

$$W_n = L(\xi_n) + \sum_{k=1}^n\ \sum_{u\in\mathcal{N}(\xi_{k-1})\setminus\{\xi_k\}} L(u)\,W_{n-k}(u) = \Pi_n + \sum_{k=1}^n \Pi_{k-1}\left(Q_k - \frac{L(\xi_k)}{L(\xi_{k-1})} + R_{n,k}\right) = \Pi_n + \sum_{k=1}^n\left(\Pi_{k-1}(Q_k+R_{n,k})-\Pi_k\right),$$

which obviously gives (4.17). The second assertion is now immediate in view of $\widehat{E}(\Pi_{k-1}R_{n,k}\mid\mathcal{A}) = \Pi_{k-1}\widehat{E}(R_{n,k}\mid\mathcal{A}) = 0$ a.s. □
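The martingale structure underlying Lemma 4.4.1 can be illustrated by a toy simulation. The sketch below uses modeling choices that are entirely ours (binary branching, i.i.d. standard normal displacements, $\gamma=1/2$, so that $m(\gamma)=2e^{\gamma^2/2}$ for this model) and checks by Monte Carlo that $EW_n=1$:

```python
import math
import random

random.seed(1)

# A toy branching random walk (a sketch only): binary splitting, i.i.d.
# N(0,1) displacements, gamma = 0.5.  For this model
# m(gamma) = E sum_{|u|=1} e^{-gamma X} = 2 exp(gamma^2 / 2).
GAMMA = 0.5
M_GAMMA = 2.0 * math.exp(GAMMA * GAMMA / 2.0)

def w_n(n):
    """One realization of W_n = sum_{|u|=n} e^{-gamma S(u)} / m(gamma)^n."""
    positions = [0.0]
    for _ in range(n):
        # each individual gets two children with independent displacements
        positions = [s + random.gauss(0.0, 1.0)
                     for s in positions for _ in range(2)]
    return sum(math.exp(-GAMMA * s) for s in positions) / M_GAMMA ** n

reps = 20_000
mean_w3 = sum(w_n(3) for _ in range(reps)) / reps
print(round(mean_w3, 2))   # the martingale property forces E W_n = 1
assert abs(mean_w3 - 1.0) < 0.05
```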

4.5 Proofs for Section 4.2

Proof of Theorem 4.2.1 Sufficiency. Suppose first that (4.4) and (4.5) hold true. Recalling (4.14) we infer $\sum_{k\ge1}\Pi_{k-1}Q_k < \infty$ $\widehat{P}$-a.s. by Theorem 2.1.1. Since $W_n$ is nonnegative and $P$-a.s. convergent to $W$, the uniform $P$-integrability follows if we can show $EW=1$ or, equivalently (by Lemma 4.3.1), $\widehat{P}\{W<\infty\}=1$. To this end note that, by (4.18) and Fatou's lemma,

$$\widehat{E}\Big(\liminf_{n\to\infty} W_n \,\Big|\, \mathcal{A}\Big) \le \sum_{k\ge1}\Pi_{k-1}Q_k < \infty \quad \widehat{P}\text{-a.s.}$$

and thus $\liminf_{n\to\infty} W_n < \infty$ $\widehat{P}$-a.s. As $(1/W_n)_{n\in\mathbb{N}_0}$ constitutes a positive and thus $\widehat{P}$-a.s. convergent supermartingale by Lemma 4.3.1, we further infer $W = \liminf_{n\to\infty} W_n$ and thereupon the desired $\widehat{P}\{W<\infty\}=1$.

Necessity. Assume now that $(W_n)_{n\in\mathbb{N}_0}$ is uniformly $P$-integrable, so that $EW=1$ and thus $\widehat{P}\{W<\infty\}=1$ by Lemma 4.3.1(2). Furthermore, $\widehat{P}\{W^*<\infty\}=1$ in view of (4.11). The inequality

$$W_n \ge L(\xi_{n-1})\sum_{v\in\mathcal{N}(\xi_{n-1})}\frac{L(v)}{L(\xi_{n-1})} = \Pi_{n-1}Q_n \quad \widehat{P}\text{-a.s.}$$

then shows that

$$\sup_{n\ge1}\Pi_{n-1}Q_n \le W^* < \infty \quad \widehat{P}\text{-a.s.},$$

which in combination with $\widehat{P}\{M=0\}=0$, $\widehat{P}\{M=1\}<1$ (see (4.3)) and $\widehat{P}\{Q=0\}=0$ (see (4.15)) allows us to appeal to Theorem 2.1.1 to conclude the validity of (4.4) and (4.5). □
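The perpetuity $\sum_{k\ge1}\Pi_{k-1}Q_k$ appearing in the proof can be simulated directly. Below is a minimal sketch with the illustrative choice $M\sim\mathrm{Uniform}(0,1)$ and $Q\equiv1$ (our assumptions, under which $E\log M<0$ and the series converges a.s. with expectation $EQ/(1-EM)=2$):

```python
import random

random.seed(3)

# A sketch of the perpetuity  sum_{k>=1} M_1 ... M_{k-1} Q_k  with i.i.d.
# factors; here M ~ Uniform(0,1) and Q = 1 (illustrative choices).
def perpetuity(tol=1e-12):
    total, prod = 0.0, 1.0
    while prod > tol:              # truncate once the factor is negligible
        total += prod * 1.0        # Q_k = 1
        prod *= random.random()    # multiply in the next M_k
    return total

reps = 100_000
mean = sum(perpetuity() for _ in range(reps)) / reps
print(round(mean, 2))              # expectation E Q / (1 - E M) = 2
assert abs(mean - 2.0) < 0.05
```

The truncation at `tol` introduces a bias of order `tol`, which is far below the Monte Carlo error here.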
A similar argument can be used to deduce Theorem 4.2.4 from Theorems 1.3.1 and 2.1.4. We omit the details, which can be found in Theorem 1.4 of [6].

4.6 Bibliographic Comments

The martingale $(W_n)$ defined in (4.1) has been extensively studied in the literature; the first results were obtained in [179] and [35].

Theorem 4.2.1 For the case (A1), this is due to Biggins [35] and Lyons [202]; see also [187]. In the present form, the result has been obtained in [6] following the earlier work [135].

Theorem 4.2.4 is Theorem 1.4 of [6]. Under the $x\log x$ condition various moment results for $W$, the a.s. limit of $(W_n)$, can be found in [10, 36, 43, 136, 158, 196, 200, 243]. A counterpart of Theorem 4.2.4 for concave unbounded $f$ was obtained in [230]. In particular, the cited result covers some slowly varying $f$.

There are basically two probabilistic approaches towards finding conditions for the existence of $E\Phi(W)$ for suitable functions $\Phi$. One method, worked out in [135] and [158], hinges on first obtaining a moment-type result for perpetuities and then translating it into the framework of branching random walks. The second approach, first used in [13] for Galton–Watson processes and further elaborated in [10], relies on the observation that BRWs bear a certain double martingale structure which allows the repeated application of the convex function inequalities due to Burkholder, Davis and Gundy (see, for instance, Theorem 2 on p. 409 in [68]) for martingales. Both approaches have their merits and limitations. Roughly speaking, the double martingale argument requires as indispensable ingredients only that $\Phi$ be convex and at most of polynomial growth. On the other hand, it also comes with a number of tedious technicalities caused by the repeated application of the convex function inequalities. The basic tool of the first method is only Jensen's inequality for conditional expectations (see [6] for more details), but it relies heavily on the existence of a nonnegative concave function $\Psi$ that is equivalent at $\infty$ to the function $\Phi(x)/x$. This clearly imposes a strong restriction on the growth of $\Phi$.
Section 4.3 The construction of the modified BRW is based on [38] and [202].
Lemma 4.3.1 is a combination of Proposition 12.1 and Theorem 12.1 in [38] and
Proposition 2 in [124].
Chapter 5
Application to the Bernoulli Sieve

The definition of the Bernoulli sieve, which is an infinite allocation scheme, can be found on p. 1. Assuming that the number of balls to be allocated equals $n$ (in other words, using a sample of size $n$ from a uniform distribution on $[0,1]$), denote by $K_n$ the number of occupied boxes and by $M_n$ the index of the last occupied box. Also, put $L_n := M_n - K_n$ and note that $L_n$ equals the number of empty boxes within the occupancy range (i.e., we only count the empty boxes with indices not exceeding $M_n$).

The purpose of this chapter is two-fold. First, we present all the results accumulated to date concerning weak convergence of the finite-dimensional distributions of $(L_{[e^{ut}]})_{u>0}$ as $t\to\infty$. Second, we demonstrate that some of these results (namely, those given in Theorem 5.1.3) can be derived from Theorem 3.3.21, a statement about weak convergence of the finite-dimensional distributions of a particular random process with immigration. The connection is hidden, and we shall spend some time uncovering it.
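Before turning to the results, here is a minimal simulation of the sieve itself, computing $K_n$, $M_n$ and $L_n$ for one sample; the choice $W_k\sim\mathrm{Uniform}(0,1)$ is purely illustrative:

```python
import random

random.seed(11)

# A minimal simulation of the Bernoulli sieve (W_k ~ Uniform(0,1) is our
# illustrative choice).  Boxes are the intervals (R_j, R_{j-1}] with
# R_j = W_1 ... W_j; a uniform ball U lands in box j iff R_j < U <= R_{j-1}.
def sieve_counts(n):
    balls = sorted(random.random() for _ in range(n))
    occupied = set()
    r_prev, j = 1.0, 0
    while balls:
        j += 1
        r = r_prev * random.random()      # R_j = R_{j-1} * W_j
        while balls and balls[-1] > r:    # balls in (R_j, R_{j-1}] hit box j
            balls.pop()
            occupied.add(j)
        r_prev = r
    K = len(occupied)                      # number of occupied boxes
    M = max(occupied)                      # index of the last occupied box
    L = M - K                              # empty boxes within the range
    return K, M, L

K, M, L = sieve_counts(1000)
print(K, M, L)
assert K >= 1 and M >= K and L == M - K
```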

5.1 Weak Convergence of the Number of Empty Boxes

We shall use the following notation for the expectations

$$\mu := E|\log W| \quad\text{and}\quad \nu := E|\log(1-W)|,$$

which may be finite or infinite.

Depending on the behavior of the distribution of $W$ near the endpoints 0 and 1, the number of empty boxes can exhibit quite a wide range of different asymptotics. Classifying them leads to considering four cases. We find it useful to precede the detailed exposition by a survey of known results.

© Springer International Publishing AG 2016
A. Iksanov, Renewal Theory for Perturbed Random Walks and Similar Processes, Probability and Its Applications, DOI 10.1007/978-3-319-49113-4_5

Case I, in which $\mu<\infty$ and $\nu<\infty$: $L_n$ converges in distribution to some $L$ with a mixed Poisson distribution (Theorem 5.1.1).

Case II, in which $\mu=\infty$ and $\nu<\infty$: $L_n$ becomes asymptotically negligible (Theorem 5.1.2).

Case III, in which $\mu<\infty$ and $\nu=\infty$: there are several possible modes of weak convergence of $(L_{[e^{ut}]})_{u>0}$, properly normalized and centered (Theorem 5.1.3).

Case IV, in which $\mu=\nu=\infty$: the asymptotics of $L_n$ is determined by the behavior of the ratio $P\{W\le t\}/P\{1-W\le t\}$ as $t\to0+$. When the distribution of $W$ assigns much more mass to the neighborhood of 1 than to that of 0 (equivalently, the ratio goes to 0), the number of empty boxes becomes asymptotically large. In this situation the finite-dimensional distributions of $(L_{[e^{ut}]})_{u>0}$, properly normalized without centering, converge weakly under a condition of regular variation (Theorem 5.1.3). If the roles of 0 and 1 are interchanged, $L_n$ converges to zero in probability (Theorem 5.1.2). When the tails are comparable, the finite-dimensional distributions of $(L_{[e^{ut}]})_{u>0}$ converge weakly (Theorem 5.1.4).
Theorem 5.1.1 Suppose that the distribution of $|\log W|$ is nonlattice and that $\mu,\nu<\infty$. Then $L_n\overset{d}{\to}L$ as $n\to\infty$, and the distribution of $L$ is mixed Poisson.

We are aware of two cases in which the distribution of $L$ can be identified explicitly. It is easily checked that the distribution of $L_1$ is geometric with parameter $EW$. Curiously, the same is true for all $n\in\mathbb{N}$ provided that the distribution of $W$ is symmetric about the midpoint $1/2$.

Example 5.1.1 If $W\overset{d}{=}1-W$, then $L_n$ is geometrically distributed with success probability $1/2$ for all $n\in\mathbb{N}$.

Example 5.1.2 If $W$ has a beta distribution with parameters $\theta>0$ and 1, i.e., $P\{W\in dx\}=\theta x^{\theta-1}1_{(0,1)}(x)\,dx$, then $L$ has a mixed Poisson distribution with random parameter $\theta|\log(1-W)|$. In other words,

$$E s^L = \frac{\Gamma(1+\theta)\,\Gamma(1+\theta-\theta s)}{\Gamma(1+2\theta-\theta s)}, \quad s\in[0,1]. \tag{5.1}$$

See Section 5.4 for the proofs.
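The gamma-function identity behind (5.1) can be checked numerically. The sketch below evaluates $E(1-W)^{\theta(1-s)}$ for $W\sim\mathrm{Beta}(\theta,1)$ by simple midpoint quadrature and compares it with the ratio of gamma functions:

```python
from math import gamma

# Numerical check of the identity behind (5.1): for W ~ Beta(theta, 1),
#   E (1-W)^{theta(1-s)}
#     = Gamma(1+theta) Gamma(1+theta-theta*s) / Gamma(1+2*theta-theta*s).
def pgf_by_quadrature(theta, s, n=100_000):
    a = theta * (1.0 - s)
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n                  # midpoint of the i-th cell of [0,1]
        total += (1.0 - x) ** a * theta * x ** (theta - 1.0)
    return total / n

def pgf_by_gamma(theta, s):
    return (gamma(1 + theta) * gamma(1 + theta - theta * s)
            / gamma(1 + 2 * theta - theta * s))

for theta in (1.0, 2.0):                   # theta >= 1 keeps the density bounded
    for s in (0.0, 0.5, 0.9):
        q, g = pgf_by_quadrature(theta, s), pgf_by_gamma(theta, s)
        assert abs(q - g) < 1e-3, (theta, s, q, g)
print("identity behind (5.1) verified numerically")
```

For instance, with $\theta=1$ and $s=0$ both sides equal $1/2$.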


Theorem 5.1.2 Suppose that either

$\mu=\infty$ and $\nu<\infty$, or

$\mu=\nu=\infty$ and $\lim_{t\to0+} P\{W\le t\}/P\{1-W\le t\} = \infty$.

Then $L_n\overset{P}{\to}0$ as $n\to\infty$.

In the next theorem we investigate the cases where the distribution of $\log W$ belongs to the domain of attraction of an $\alpha$-stable distribution, $\alpha\in(0,1)\cup(1,2]$. In particular, we treat two situations: $\mu<\infty$ and $\nu=\infty$; and $\mu=\nu=\infty$ with the left tail of $1-W$ dominating the left tail of $W$.

Theorem 5.1.3 All the assertions of Theorem 3.3.21, in which we set $\xi=|\log W|$ and $\eta=|\log(1-W)|$, hold with $L_{[e^{ut}]}$ replacing $\sum_{k\ge0}1_{\{S_k\le ut<S_k+\eta_{k+1}\}}$.

For instance, the counterpart of case (D1) is: if $\mathrm{Var}(\log W)<\infty$ and $P\{|\log(1-W)|>t\}\sim t^{-\beta}\hat\ell(t)$ as $t\to\infty$ for some $\beta\in[0,1)$ and some $\hat\ell$ slowly varying at $\infty$, then

$$\frac{L_{[e^{ut}]} - \mu^{-1}\int_0^{ut}P\{|\log(1-W)|>y\}\,dy}{\sqrt{\mu^{-1}\int_0^{t}P\{|\log(1-W)|>y\}\,dy}} \overset{f.d.}{\Longrightarrow} V_\beta(u), \quad t\to\infty,$$

where $\mu=E|\log W|<\infty$ and $V_\beta$ is a centered Gaussian process with

$$E\,V_\beta(t)V_\beta(s) = t^{1-\beta}-(t-s)^{1-\beta}, \quad 0\le s\le t.$$

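As a sanity check (not a proof), one can verify numerically that this covariance is positive semidefinite on a finite grid; for $\beta=0$ it reduces to the Brownian-motion covariance $s\wedge t$:

```python
from math import sqrt

# Check positive semidefiniteness of
#   E V_beta(t) V_beta(s) = t^{1-beta} - (t-s)^{1-beta},  0 <= s <= t,
# on a small grid, via a hand-rolled Cholesky factorization.
def cov(s, t, beta):
    s, t = min(s, t), max(s, t)
    return t ** (1 - beta) - (t - s) ** (1 - beta)

def cholesky_ok(mat, eps=1e-9):
    n = len(mat)
    ch = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            acc = mat[i][j] - sum(ch[i][k] * ch[j][k] for k in range(j))
            if i == j:
                if acc < -eps:
                    return False            # negative pivot: not PSD
                ch[i][j] = sqrt(max(acc, 0.0))
            else:
                ch[i][j] = acc / ch[j][j] if ch[j][j] > eps else 0.0
    return True

grid = [0.25 * k for k in range(1, 9)]      # t = 0.25, 0.5, ..., 2.0
for beta in (0.0, 0.3, 0.7):
    mat = [[cov(s, t, beta) for s in grid] for t in grid]
    assert cholesky_ok(mat), beta
print("covariance matrices pass the Cholesky check")
```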
The counterpart of case (D4) in Theorem 3.3.21 is: suppose that

$$P\{|\log W|>t\}\sim t^{-\alpha}\ell(t) \quad\text{and}\quad P\{|\log(1-W)|>t\}\sim t^{-\beta}\hat\ell(t), \quad t\to\infty,$$

for some $\alpha\in(0,1)$, some $\beta\in[0,\alpha]$ and some $\ell$ and $\hat\ell$ slowly varying at $\infty$. If $\alpha=\beta$, assume additionally that

$$\lim_{t\to\infty}\frac{P\{|\log W|>t\}}{P\{|\log(1-W)|>t\}} = 0$$

and that there exists a nondecreasing function $u(t)$ satisfying

$$\lim_{t\to\infty}\frac{u(t)\,P\{|\log W|>t\}}{P\{|\log(1-W)|>t\}} = 1.$$

Then

$$\frac{P\{|\log W|>t\}}{P\{|\log(1-W)|>t\}}\,L_{[e^{ut}]} \overset{f.d.}{\Longrightarrow} J_{\alpha,\beta}(u), \quad t\to\infty,$$

where $J_{\alpha,\beta}$ is as in Definition 3.3.7.


Further, we discuss the situation when $\mu=\nu=\infty$ and the tails of $|\log W|$ and $|\log(1-W)|$ are comparable. For $\alpha\in(0,1)$ and $c>0$, let $N^{(1/c,\alpha)}$ be a Poisson random measure which is independent of $W_\alpha$, an inverse $\alpha$-stable subordinator (see 'List of notation' for precise definitions).

Theorem 5.1.4 Suppose that

$$P\{|\log W|>t\} \sim c\,P\{|\log(1-W)|>t\} \sim t^{-\alpha}\ell(t), \quad t\to\infty,$$

for some $\alpha\in(0,1)$, some $c>0$ and some $\ell$ slowly varying at $\infty$. Then

$$L_{[e^{ut}]} \overset{f.d.}{\Longrightarrow} \sum_k 1_{\{W_\alpha(t_k^{(1/c,\alpha)})\le u<W_\alpha(t_k^{(1/c,\alpha)})+j_k^{(1/c,\alpha)}\}} =: R_{\alpha,c}(u), \quad t\to\infty.$$

For fixed $u>0$, the distribution of $R_{\alpha,c}(u)$ is geometric with success probability $c(c+1)^{-1}$, i.e.,

$$P\{R_{\alpha,c}(u)=k\} = \frac{c}{c+1}\Big(\frac{1}{c+1}\Big)^k, \quad k\in\mathbb{N}_0.$$

Remark 5.1.5 Weak convergence of the finite-dimensional distributions stated in Theorem 5.1.4 immediately implies the strict stationarity of the process $(R_{\alpha,c}(e^t))_{t\in\mathbb{R}}$.

Remark 5.1.6 If $\mu=\infty$ and

$$\lim_{n\to\infty}\frac{E(1-W)^n}{E\,W^n} = c\in(0,\infty),$$

which is implied by

$$\lim_{t\to\infty}\frac{P\{|\log W|>t\}}{P\{|\log(1-W)|>t\}} = c, \tag{5.2}$$

then, as proved in Theorem 1.1 of [138], there is convergence of the one-dimensional distributions in Theorem 5.1.4. However, there are no reasons to expect that the conditions $E|\log W|=\infty$ and (5.2) alone are sufficient for weak convergence of some finite-dimensional distributions related to $(L_n)$.
We deem it natural to close this section with a comment on the link between the Bernoulli sieve, an infinite allocation scheme with random frequencies (in a random environment), and the classical infinite allocation scheme obtained by conditioning on the frequencies. In any allocation scheme with random frequencies $(P_k)$ the variability of the allocation of balls is regulated both by the randomness of the frequencies and by the randomness of the allocation given the frequencies (sampling variability). In [100] the notion of a strong environment was introduced. With respect to some functional $V_n$ the environment is called strong if the randomness of $(P_k)$ dominates the sampling variability in the sense that $V_n$ and $E(V_n\mid(P_k)_{k\in\mathbb{N}})$, normalized by the same constants, have the same limit distributions. From this definition it follows that whenever the environment is strong with respect to $V_n$, the asymptotics of $V_n$ in an infinite allocation scheme in a random environment is essentially different from the asymptotics of $V_n$ in the classical infinite allocation scheme obtained by conditioning on the environment. In the cited paper [100] it was shown that the Bernoulli sieve exhibits a strong environment with respect to $K_n$, the number of occupied boxes, whenever the distribution of $W$ is nondegenerate. Results of the present chapter (see, in particular, Lemmas 5.2.1 and 5.2.2 below) indicate that this is also the case for $L_n$, the number of empty boxes, at least for those distributions of $W$ covered by our theorems. Finally, we stress that we are not aware of any works investigating the asymptotics of the number of empty boxes in the classical infinite allocation scheme.

5.2 Poissonization and De-Poissonization

In the context of problems related to random allocations, Poissonization is a rather efficient tool. Let $(T_k)_{k\in\mathbb{N}}$ be the arrival times in a Poisson process $(\pi(t))_{t\ge0}$ of unit intensity which are independent of the random variables $(U_k)$ and of the multiplicative random walk $R$ (see p. 1). In particular, we have

$$\pi(t) := \#\{k\in\mathbb{N} : T_k\le t\}, \quad t\ge0.$$

Instead of the scheme with $n$ balls we shall work with a Poissonized version of the Bernoulli sieve in which the successive allocation times of the balls (the points $U_k$) over the boxes (the intervals $(R_j, R_{j-1}]$) are given by the sequence $(T_k)_{k\in\mathbb{N}}$. More precisely, the point $U_k$ hits some box at time $T_k$. Thus, the random number $\pi(t)$ of balls is allocated over the boxes within $[0,t]$. Denote by $\pi_j(t)$ the number of balls which fall into the $j$th box within $[0,t]$. It is clear that, given the sequence $R$, first, for each $j$, the process $(\pi_j(t))_{t\ge0}$ is a Poisson process with intensity $P_j = R_{j-1}-R_j$ and, second, for different $j$'s, these processes are independent. It is this latter property which demonstrates the advantage of the Poissonized scheme over the original one.
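The conditional independence just described is easy to see in a simulation. The sketch below freezes a small environment (the numbers are our illustrative choice), throws a Poisson number of balls, and checks that the empirical mean count in each box is close to $t\,P_j$:

```python
import math
import random

random.seed(5)

# Poissonized scheme with a frozen environment: the count in box j over
# [0, t] is Poisson with mean t * P_j, independently across boxes.
def poisson(lam):
    # Knuth's multiplication method; adequate for moderate lam
    limit, k, p = math.exp(-lam), 0, random.random()
    while p >= limit:
        k += 1
        p *= random.random()
    return k

t = 20.0
W = [0.4, 0.6, 0.3]                           # frozen environment (our choice)
R = [1.0]
for w in W:
    R.append(R[-1] * w)                       # multiplicative random walk R_j
P = [R[j] - R[j + 1] for j in range(len(W))]  # frequencies P_j = R_{j-1} - R_j

reps = 5_000
sums = [0] * len(P)
for _ in range(reps):
    for _ in range(poisson(t)):               # pi(t) balls thrown within [0, t]
        u = random.random()
        for j in range(len(P)):
            if R[j + 1] < u <= R[j]:          # u falls into box j+1
                sums[j] += 1
                break

means = [s / reps for s in sums]
print([round(m, 2) for m in means])
for m, pj in zip(means, P):
    assert abs(m - t * pj) < 0.5
```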
Put $M(t) := M_{\pi(t)}$, $K(t) := K_{\pi(t)}$, and $L(t) := L_{\pi(t)}$. For instance, $L(t)$ is then the number of empty boxes within the occupancy range obtained by throwing $\pi(t)$ balls. The Bernoulli sieve can be interpreted as the infinite allocation scheme in the random environment $(W_k)$ which is given by i.i.d. random variables. The first two results of the present section reveal that, instead of the asymptotics of $L(t)$, one can investigate the asymptotics of a relatively simple functional which is determined by the environment $(W_j)_{j\in\mathbb{N}}$ alone.

Let $(\hat S_n)_{n\in\mathbb{N}_0}$ be the zero-delayed ordinary random walk defined by

$$\hat S_0 := 0, \quad \hat S_n := |\log W_1| + \ldots + |\log W_n|, \quad n\in\mathbb{N},$$

and put $\hat\eta_n := |\log(1-W_n)|$ for $n\in\mathbb{N}$.

Lemma 5.2.1 If $E|\log W| = \infty$, then

$$L(e^{ut}) - \sum_{k\ge0} 1_{\{\hat S_k\le ut<\hat S_k+\hat\eta_{k+1}\}} \overset{f.d.}{\Longrightarrow} 0, \quad t\to\infty. \tag{5.3}$$

Lemma 5.2.2 If $E|\log W| < \infty$, then

$$\frac{L(e^{ut}) - \sum_{k\ge0} 1_{\{\hat S_k\le ut<\hat S_k+\hat\eta_{k+1}\}}}{a(t)} \overset{f.d.}{\Longrightarrow} 0, \quad t\to\infty, \tag{5.4}$$

for any function $a(t)$ satisfying $\lim_{t\to\infty}a(t)=\infty$. In other words, the finite-dimensional distributions of the process $\big(L(e^{ut}) - \sum_{k\ge0}1_{\{\hat S_k\le ut<\hat S_k+\hat\eta_{k+1}\}}\big)_{u\ge0}$ are tight.

Lemma 5.2.3 given below allows us to implement a de-Poissonization, i.e., a reverse transition from the scheme with a Poisson number of balls to the original scheme with a deterministic number of balls.

Lemma 5.2.3 With no assumptions on the expectation of $|\log W|$,

$$L(e^{ut}) - L_{[e^{ut}]} \overset{f.d.}{\Longrightarrow} 0, \quad t\to\infty.$$

Put $\hat\nu(t) := \inf\{k\in\mathbb{N} : \hat S_k>t\}$ for $t\ge0$ and denote by

$$\hat U(t) := E\hat\nu(t) = \sum_{k\ge0} P\{\hat S_k\le t\}, \quad t\ge0,$$

the renewal function.


Proof of Lemma 5.2.1 To prove Lemma 5.2.1, it suffices to show that the left-hand side of relation (5.3) with $u=1$ converges to zero in probability and to use the Cramér–Wold device (see p. 232).

We divide the proof into several steps. As before, all unspecified limit relations are assumed to hold as $t\to\infty$.

Step 1 We intend to show that the maximal index of the boxes discovered by the Poisson process within $[0,e^t]$ satisfies $M(e^t)-\hat\nu(t)\overset{P}{\to}0$.

To this end, put $E(n) := -\log\min(U_1,\dots,U_n)$ and note that $M(e^t) = \hat\nu(E(\pi(e^t)))$. As $n\to\infty$, $E(n)-\log n$ converges in distribution to a random variable $E^*$ having the Gumbel distribution. Since the sequence $(E(n))$ is independent of the process $(\pi(t))$, $E(\pi(e^t))-\log\pi(e^t)$ converges in distribution to $E^*$, too. By the weak law of large numbers for Poisson processes, $\log\pi(e^t)-t\overset{P}{\to}0$. Hence

$$\lim_{t\to\infty}\big(E(\pi(e^t))-t\big) = E^* \quad\text{in distribution.} \tag{5.5}$$

Set $R(t) := E(\pi(e^t))$. Using Markov's inequality and the fact that the renewal function $\hat U(t)$ is nondecreasing, we obtain

$$P\big\{\big(\hat\nu(R(t))-\hat\nu(t)\big)1_{\{0<R(t)-t\le\gamma\}}>\varepsilon\,\big|\,R(t)\big\} \le \varepsilon^{-1}E\big(\big(\hat\nu(R(t))-\hat\nu(t)\big)1_{\{0<R(t)-t\le\gamma\}}\,\big|\,R(t)\big) = \varepsilon^{-1}\big(\hat U\big(t+(R(t)-t)\big)-\hat U(t)\big)1_{\{0<R(t)-t\le\gamma\}} \le \varepsilon^{-1}\big(\hat U(t+\gamma)-\hat U(t)\big)$$

for all $\gamma>0$ and $\varepsilon>0$. This in combination with (6.10) yields

$$\lim_{t\to\infty}P\big\{\big(\hat\nu(R(t))-\hat\nu(t)\big)1_{\{0<R(t)-t\le\gamma\}}>\varepsilon\,\big|\,R(t)\big\} = 0 \quad\text{almost surely.}$$

Consequently,

$$\big(\hat\nu(R(t))-\hat\nu(t)\big)1_{\{0<R(t)-t\le\gamma\}} \overset{P}{\to} 0$$

by the Lebesgue dominated convergence theorem.

It is clear that

$$P\big\{\big(\hat\nu(R(t))-\hat\nu(t)\big)1_{\{R(t)-t>\gamma\}}>\varepsilon\big\} \le P\{R(t)-t>\gamma\}$$

for any $\gamma>0$ and $\varepsilon>0$. With this at hand, recalling (5.5) and using the absolute continuity of the distribution of $E^*$, we conclude that

$$\limsup_{t\to\infty}P\big\{\big(\hat\nu(R(t))-\hat\nu(t)\big)1_{\{R(t)-t>\gamma\}}>\varepsilon\big\} \le P\{E^*>\gamma\}$$

and thereupon

$$\lim_{\gamma\to\infty}\limsup_{t\to\infty}P\big\{\big(\hat\nu(R(t))-\hat\nu(t)\big)1_{\{R(t)-t>\gamma\}}>\varepsilon\big\} = 0.$$

The previously obtained estimates lead to an important relation:

$$\big(\hat\nu(R(t))-\hat\nu(t)\big)1_{\{R(t)-t>0\}} \overset{P}{\to} 0.$$

Arguing similarly we arrive at

$$\big(\hat\nu(R(t))-\hat\nu(t)\big)1_{\{R(t)-t\le0\}} \overset{P}{\to} 0,$$

which completes Step 1.



Step 2 We are looking for a good approximation of $K(e^t)$, the number of boxes discovered by the Poisson process within $[0,e^t]$. More precisely, we shall prove that

$$K(e^t) - \sum_{k\ge0}\Big(1-\exp\big(-e^{t-\hat S_k}(1-W_{k+1})\big)\Big) \overset{P}{\to} 0.$$

We start with the representation

$$K(e^t) = \sum_{k\ge1}1_{\{\pi_k(e^t)\ge1\}}, \tag{5.6}$$

where $\pi_k(e^t)$ is the number of balls (in the Poissonized scheme) landing in the $k$th box within $[0,e^t]$. In view of

$$E\big(K(e^t)\mid(W_j)\big) = \sum_{k\ge0}\Big(1-\exp\big(-e^{t-\hat S_k}(1-W_{k+1})\big)\Big), \tag{5.7}$$

to establish the desired approximation it is sufficient to prove that

$$\lim_{t\to\infty}E\,\mathrm{Var}\big(K(e^t)\mid(W_j)\big) = 0. \tag{5.8}$$

Given $(W_j)$, the indicators in (5.6) are independent. Hence

$$E\,\mathrm{Var}\big(K(e^t)\mid(W_j)\big) = E\sum_{k\ge0}\Big(\exp\big(-e^{t-\hat S_k}(1-W_{k+1})\big)-\exp\big(-2e^{t-\hat S_k}(1-W_{k+1})\big)\Big) = \int_{[0,\infty)}\big(\varphi(e^{t-y})-\varphi(2e^{t-y})\big)\,d\hat U(y),$$

where $\varphi(y) := E\,e^{-y(1-W)}$. By Lemma 6.2.2, the function $g_0(y)=\varphi(e^y)-\varphi(2e^y)$ is dRi on $\mathbb{R}$. Applying now the key renewal theorem for distributions with infinite mean (Proposition 6.2.4) justifies relation (5.8).

Step 3 We intend to prove the relation

$$Z_1(t) := \sum_{k\ge0}\Big(1-\exp\big(-e^{t-\hat S_k}(1-W_{k+1})\big)\Big)1_{\{\hat S_k>t\}} \overset{P}{\to} 0.$$

According to Lemma 6.2.2, the function $g_1(y) = E\big(1-\exp(-e^{y}(1-W))\big)$ is dRi on $(-\infty,0]$. Hence,

$$E\,Z_1(t) = \int_{[t,\infty)} g_1(t-y)\,d\hat U(y) \to 0$$

by the key renewal theorem (Proposition 6.2.4).


Step 4 We are going to check the relation

$$Z_2(t) := \sum_{k\ge0}\Big(\exp\big(-e^{t-\hat S_k}(1-W_{k+1})\big) - 1_{\{\hat S_k+\hat\eta_{k+1}>t\}}\Big)1_{\{\hat S_k\le t\}} \overset{P}{\to} 0.$$

To this end, write $Z_2(t)$ as a difference of two nonnegative random functions:

$$Z_2(t) = \sum_{k\ge0}\exp\big(-e^{t-\hat S_k}(1-W_{k+1})\big)1_{\{\hat S_k+\hat\eta_{k+1}\le t\}} - \sum_{k\ge0}\Big(1-\exp\big(-e^{t-\hat S_k}(1-W_{k+1})\big)\Big)1_{\{\hat S_k\le t<\hat S_k+\hat\eta_{k+1}\}} =: Z_{21}(t) - Z_{22}(t),$$

and show that $\lim_{t\to\infty}E\,Z_{2i}(t)=0$, $i=1,2$. Indeed, according to Lemma 6.2.2, the functions $g_2(y)=E\exp(-e^y(1-W))1_{\{1-W>e^{-y}\}}$ and $g_3(y)=E\big(1-\exp(-e^y(1-W))\big)1_{\{1-W\le e^{-y}\}}$ are dRi on $[0,\infty)$. Hence, by the key renewal theorem (Proposition 6.2.4),

$$E\,Z_{21}(t) = \int_{[0,t]}g_2(t-y)\,d\hat U(y) \to 0 \quad\text{and}\quad E\,Z_{22}(t) = \int_{[0,t]}g_3(t-y)\,d\hat U(y) \to 0,$$

which is the desired result.

Noting that

$$L(e^t) - \sum_{k\ge0}1_{\{\hat S_k\le t<\hat S_k+\hat\eta_{k+1}\}} = \Big(M(e^t)-\sum_{k\ge0}1_{\{\hat S_k\le t\}}\Big) - \Big(K(e^t)-\sum_{k\ge0}\big(1-\exp(-e^{t-\hat S_k}(1-W_{k+1}))\big)\Big) - Z_1(t) + Z_2(t)$$

and combining the conclusions of the four steps finishes the proof of Lemma 5.2.1. □
Proof of Lemma 5.2.2 If the distribution of $|\log W|$ is nonlattice, the proof of Lemma 5.2.2 exploits the same formulae as the proof of Lemma 5.2.1. However, while implementing Step 1 one has to use Blackwell's theorem (formula (6.8)) for the finite-mean case rather than for the infinite-mean case. Also, while implementing Steps 2 through 4 one has to use Lemma 6.2.8 rather than Proposition 6.2.4.

If the distribution of $|\log W|$ is $l$-lattice for some $l>0$, an additional argument is only needed for Step 1 of the proof of Lemma 5.2.1.

Step 1 Fix any $\gamma>0$ and pick $m\in\mathbb{N}$ such that $\gamma\le ml$. With this and $\varepsilon>0$, we use the inequality

$$P\Big\{\frac{\hat\nu(R(t))-\hat\nu(t)}{a(t)}\,1_{\{0<R(t)-t\le\gamma\}}>\varepsilon\,\Big|\,R(t)\Big\} \le \frac{\hat U(t+ml)-\hat U(t)}{\varepsilon\,a(t)}$$

in combination with the relation

$$\lim_{t\to\infty}\big(\hat U(t+ml)-\hat U(t)\big) = \frac{ml}{E|\log W|}$$

(which is equivalent to (6.9)) and the Lebesgue bounded convergence theorem to infer

$$\frac{\hat\nu(R(t))-\hat\nu(t)}{a(t)}\,1_{\{0<R(t)-t\le\gamma\}} \overset{P}{\to} 0.$$

To implement Steps 2 through 4 one may use Lemma 6.2.8 and argue as in the nonlattice case. □
Proof of Lemma 5.2.3 It suffices to check that

$$K(t)-K_{[t]} \overset{P}{\to} 0 \quad\text{and}\quad M(t)-M_{[t]} \overset{P}{\to} 0 \tag{5.9}$$

and to use the Cramér–Wold device (see p. 232). In view of the inequality $P\{M(t)\ne M_{[t]}\}\le P\{K(t)\ne K_{[t]}\}$, which can be easily checked, only the first relation in (5.9) needs a proof.

We first show that

$$K(t+x\sqrt t)-K(t-x\sqrt t) \overset{P}{\to} 0 \tag{5.10}$$

for any $x>0$. By (5.7), we have

$$E\big(K(t+x\sqrt t)-K(t-x\sqrt t)\big) = \int_{[0,\infty)}\Big(\varphi\big((t-x\sqrt t)e^{-y}\big)-\varphi\big((t+x\sqrt t)e^{-y}\big)\Big)\,d\hat U(y)$$

for large enough $t$, where $\varphi(y)=E\,e^{-y(1-W)}$. Since the function $y\mapsto-\varphi'(y)$ is nonincreasing, we infer

$$\varphi\big((t-x\sqrt t)e^{-y}\big)-\varphi\big((t+x\sqrt t)e^{-y}\big) \le -\varphi'\big((t-x\sqrt t)e^{-y}\big)\cdot 2x\sqrt t\,e^{-y}$$

by the mean value theorem for differentiable functions, and therefore

$$E\big(K(t+x\sqrt t)-K(t-x\sqrt t)\big) \le \frac{2x\sqrt t}{t-x\sqrt t}\int_{[0,\infty)}\big(-\varphi'\big((t-x\sqrt t)e^{-y}\big)\big)(t-x\sqrt t)e^{-y}\,d\hat U(y).$$

According to Lemma 6.2.2, the function $g_4(y)=-\varphi'(e^y)e^y$ is dRi on $\mathbb{R}$. This and Lemma 6.2.8 together imply that

$$\int_{[0,\infty)}\big(-\varphi'\big((t-x\sqrt t)e^{-y}\big)\big)(t-x\sqrt t)e^{-y}\,d\hat U(y) = O(1).$$

Hence, $\lim_{t\to\infty}E\big(K(t+x\sqrt t)-K(t-x\sqrt t)\big)=0$ for any $x>0$, which entails (5.10).

The process $(K(s))_{s\ge0}$ is a.s. nondecreasing. This implies that

$$|K_{[t]}-K(t)| = |K(T_{[t]})-K(t)|1_{\{t-x\sqrt t\le T_{[t]}\le t+x\sqrt t\}} + |K(T_{[t]})-K(t)|1_{\{|T_{[t]}-t|>x\sqrt t\}} \le K(t+x\sqrt t)-K(t-x\sqrt t) + |K(T_{[t]})-K(t)|1_{\{|T_{[t]}-t|>x\sqrt t\}}$$

for any $x>0$. Hence,

$$P\{|K_{[t]}-K(t)|>2\varepsilon\} \le P\{K(t+x\sqrt t)-K(t-x\sqrt t)>\varepsilon\} + P\big\{|K(T_{[t]})-K(t)|1_{\{|T_{[t]}-t|>x\sqrt t\}}>\varepsilon\big\} \le P\{K(t+x\sqrt t)-K(t-x\sqrt t)>\varepsilon\} + P\{|T_{[t]}-t|>x\sqrt t\}$$

for any $\varepsilon>0$. Recalling (5.10) and using the central limit theorem for $T_{[t]}$ yield

$$\limsup_{t\to\infty}P\{|K_{[t]}-K(t)|>2\varepsilon\} \le P\{|\mathcal{N}(0,1)|>x\},$$

where $\mathcal{N}(0,1)$ denotes a random variable with the standard normal distribution. Sending $x\to\infty$ establishes the first relation in (5.9). The proof of Lemma 5.2.3 is complete. □

5.3 Nonincreasing Markov Chains and Random Recurrences


 
With $M\in\mathbb{N}_0$ given and any integer $n\ge M$, let $I := (I_k(n))_{k\in\mathbb{N}_0}$ be a nonincreasing Markov chain with $I_0(n)=n$, state space $\mathbb{N}_0$ and transition probabilities $s_{i,j} := P\{I_k(n)=j\mid I_{k-1}(n)=i\}$ for $i\ge M+1$ and either $M<j\le i$ or $M=j<i$; $P\{I_k(n)=j\mid I_{k-1}(n)=i\}=0$ for $i<j$; and $P\{I_k(n)=M\mid I_{k-1}(n)=M\}=1$. Denote by

$$Z_n := \#\{k\in\mathbb{N}_0 : I_k(n)-I_{k+1}(n)=0,\ I_k(n)>M\}$$

the number of zero decrements of the Markov chain before the absorption. Assuming that $s_{i,i-1}>0$ for all $M+1\le i\le n$, the absorption at state $M$ is certain, and $Z_n$ is a.s. finite.

Neglecting the zero decrements of $I$ along with a renumbering of indices leads to a decreasing Markov chain $J := (J_k(n))_{k\in\mathbb{N}_0}$ with $J_0(n)=n$ and transition probabilities

$$\tilde s_{i,j} = \frac{s_{i,j}}{1-s_{i,i}}, \quad i>j\ge M$$

(the other transition probabilities are the same as for $I$).

Lemma 5.3.1 If $Z_n\overset{d}{\to}Z$ as $n\to\infty$, where the random variable $Z$ has a proper distribution, then this distribution is mixed Poisson.
Proof Let $(R_j)_{M+1\le j\le n}$ be independent random variables such that $R_j$ has a geometric distribution with success probability $1-s_{j,j}$. Assuming that the $R_j$'s are independent of the sequence of states visited by $J$, we may identify $R_j$ with the time that the chain $I$ spends in the state $j$, provided this state is visited. With this at hand, $Z_n$ can be conveniently represented as

$$Z_n \overset{d}{=} \sum_{k\ge0}R_{J_k(n)}1_{\{J_k(n)>M\}}. \tag{5.11}$$

Plainly, a unit-rate Poisson process stopped at an independent exponentially distributed random time with mean $1/\lambda$ has a geometric distribution with success probability $\lambda/(\lambda+1)$. Conditioning in (5.11) on the chain and using the latter observation along with the independent-increments property of Poisson processes lead to the representation

$$Z_n \overset{d}{=} \pi^*\Big(\sum_{k\ge0}\theta_{J_k(n)}1_{\{J_k(n)>M\}}\Big),$$

where $(\theta_j)_{M+1\le j\le n}$ are independent random variables which are independent of $J$, $\theta_j$ has an exponential distribution with mean $s_{j,j}/(1-s_{j,j})$, and $(\pi^*(t))_{t\ge0}$ is a unit-rate Poisson process which is independent of everything else. Since $Z_n$ converges in distribution, the sequence in the parentheses must converge, too. The proof of Lemma 5.3.1 is complete. □
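The observation about stopped Poisson processes used in the proof is easy to confirm by simulation: a unit-rate Poisson process run for an independent exponential time with mean $1/\lambda$ produces a geometric count with success probability $\lambda/(\lambda+1)$, hence with mean $1/\lambda$. A sketch:

```python
import random

random.seed(2)

# A unit-rate Poisson process stopped at an independent Exp time with mean
# 1/lam: the resulting count is geometric with success lam/(lam+1).
def stopped_poisson(lam):
    tau = random.expovariate(lam)      # exponential stopping time, mean 1/lam
    n, t = 0, random.expovariate(1.0)
    while t <= tau:                    # count unit-rate arrivals in [0, tau]
        n += 1
        t += random.expovariate(1.0)
    return n

lam = 2.0
reps = 100_000
mean = sum(stopped_poisson(lam) for _ in range(reps)) / reps
# geometric on N_0 with success p = lam/(lam+1) has mean (1-p)/p = 1/lam
print(round(mean, 2))
assert abs(mean - 1.0 / lam) < 0.02
```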
Now we present one more construction of the Bernoulli sieve which highlights the connection with nonincreasing Markov chains. The Bernoulli sieve can be realized as a random allocation scheme in which $n$ 'balls' are allocated over an infinite array of 'boxes' indexed $1,2,\dots$ according to the following rule. At the first round each of the $n$ balls is dropped into box 1 with probability $W_1$. At the second round each of the remaining balls is dropped into box 2 with probability $W_2$, and so on. The procedure continues until all $n$ balls get allocated. Let $I_k^*(n)$ denote the number of remaining balls (out of $n$) after the $k$th round. Then $I^* := (I_k^*(n))_{k\in\mathbb{N}_0}$ is an instance of the nonincreasing Markov chains described above with $M=0$ and

$$s_{i,j} = \binom{i}{j}\,E\,W^{i-j}(1-W)^{j}, \quad j\le i. \tag{5.12}$$

It is plain that $L_n$ is the number of zero decrements of $I^*$ before the absorption. Furthermore, the Markov property leads to the following distributional recurrence:

$$L_0 = 0, \quad L_n \overset{d}{=} L_{I_1^*(n)} + 1_{\{I_1^*(n)=n\}}, \quad n\in\mathbb{N}, \tag{5.13}$$

where on the right-hand side $I_1^*(n)$ is assumed independent of $(L_n)_{n\in\mathbb{N}}$.
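For $W\sim\mathrm{Uniform}(0,1)$ — an illustrative choice which satisfies $W\overset{d}{=}1-W$ — the transition probabilities (5.12) reduce to $s_{i,j}=1/(i+1)$ (a Beta integral), so the recurrence (5.13) can be iterated in exact arithmetic. This gives an independent numerical confirmation of Example 5.1.1:

```python
from fractions import Fraction
from math import comb

# Exact iteration of the recurrence (5.13) for W ~ Uniform(0,1):
# s_{i,j} = C(i,j) * B(i-j+1, j+1) = 1/(i+1), so after one round the number
# of remaining balls is uniform on {0, 1, ..., i}.  The result must be the
# geometric(1/2) law of L_n from Example 5.1.1.
def s_unif(i, j):
    # Beta integral: the moment in (5.12) equals 1 / (C(i,j) * (i+1)) here
    return comb(i, j) * Fraction(1, comb(i, j) * (i + 1))

N, KMAX = 12, 12
p = [[Fraction(0)] * (KMAX + 1) for _ in range(N + 1)]  # p[n][k] = P{L_n = k}
p[0][0] = Fraction(1)
for n in range(1, N + 1):
    s = s_unif(n, 0)                       # = 1/(n+1), the same for every j
    for k in range(KMAX + 1):
        below = sum(p[j][k] for j in range(n))          # I_1*(n) = j < n
        prev = p[n][k - 1] if k > 0 else Fraction(0)    # I_1*(n) = n adds one
        p[n][k] = s * (below + prev)

for k in range(6):
    assert p[N][k] == Fraction(1, 2 ** (k + 1))
print("P{L_12 = k} = 2^{-(k+1)} for k = 0..5, exactly")
```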

5.4 Proofs for Section 5.1

Proof of Theorem 5.1.1 As far as the convergence in distribution is concerned, we only give a sketch of the proof. Consider the 'inflated' version of the Bernoulli sieve with balls $nU_1,\dots,nU_n$ and boxes $(nR_k, nR_{k-1}]$ for $k\in\mathbb{N}$. The number of empty boxes within the occupancy range is still $L_n$. By Lemma 3.2.3, the point processes $\sum_{k\ge0}\varepsilon_{\log n-\hat S_k}$ converge vaguely as $n\to\infty$ to a stationary renewal point process $\sum_{j\in\mathbb{Z}}\varepsilon_{\hat S_j}$, where $(\hat S_j)_{j\in\mathbb{Z}}$ is as defined in (3.4) with $|\log W|$ replacing $\xi$. This entails the vague convergence of $\sum_k\varepsilon_{n\exp(-\hat S_k)} = \sum_k\varepsilon_{nR_k}$ to $\sum_{j\in\mathbb{Z}}\varepsilon_{\exp(\hat S_j)}$. Further, by Lemma 6.4.3 the point processes $\sum_{k=1}^n\varepsilon_{nU_k}$ converge vaguely as $n\to\infty$ to $\sum_{j\ge1}\varepsilon_{T_j}$, where $T_1,T_2,\dots$ are the arrival times of a Poisson process with unit intensity. Think of the intervals between consecutive points of $\sum_{j\in\mathbb{Z}}\varepsilon_{\exp(\hat S_j)}$ as a series of boxes, and take the points of $\sum_{j\ge1}\varepsilon_{T_j}$ in the role of balls. Denote by $L$ the number of empty boxes belonging to the interval $(T_1,\infty)$. The convergence $L_n\overset{d}{\to}L$ can be read off from the convergence of certain point processes. For details we refer to the proof of Theorem 3.3 in [105].

The fact that the distribution of $L$ is mixed Poisson follows from Lemma 5.3.1, because $L_n$ is the number of zero decrements before the absorption of the nonincreasing Markov chain $I^*$. The proof of Theorem 5.1.1 is complete. □
Proof for Example 5.1.1 The argument is based on the recurrence (5.13) for the marginal distributions of the $L_n$. The symmetry $W\overset{d}{=}1-W$ yields $EW^k=E(1-W)^k$ for $k\in\mathbb{N}$ and thereupon

$$P\{I_1^*(n)=n\} = EW^n = P\{I_1^*(n)=0\}, \quad n\in\mathbb{N}, \tag{5.14}$$

in view of (5.12). Recalling that $P\{L_1=k\}=2^{-k-1}$ for $k\in\mathbb{N}_0$, we shall show by induction on $n$ that $P\{L_n=k\}=2^{-k-1}$ for all $k\in\mathbb{N}_0$. Using (5.13) and (5.14) we obtain

$$P\{L_n=0\} = P\big\{L_{I_1^*(n)}+1_{\{I_1^*(n)=n\}}=0\big\} = P\{L_{I_1^*(n)}=0,\ I_1^*(n)\ne n\} = P\{I_1^*(n)=0\} + \sum_{k=1}^{n-1}P\{L_k=0\}P\{I_1^*(n)=k\} = P\{I_1^*(n)=0\} + 2^{-1}\big(1-2P\{I_1^*(n)=0\}\big) = 2^{-1}$$

by the induction hypothesis. Assuming now that $P\{L_n=i\}=2^{-i-1}$ for all $i<k$, we have

$$P\{L_n=k\} = \sum_{j=1}^{n-1}P\{I_1^*(n)=j\}P\{L_j=k\} + P\{I_1^*(n)=n\}P\{L_n=k-1\} = 2^{-k-1}\big(1-2P\{I_1^*(n)=0\}\big) + P\{I_1^*(n)=0\}\,2^{-k} = 2^{-k-1},$$

which completes the proof. □
According to its definition, the Bernoulli sieve is a multiplicative scheme. In the proof for Example 5.1.2 it is more convenient to work with its additive counterpart obtained by the logarithmic transformation $x\mapsto-\log x$. Under this transformation the uniform sample $(U_1,\dots,U_n)$ becomes the sample $(E_1,\dots,E_n) := (|\log U_1|,\dots,|\log U_n|)$ from an exponential distribution with unit mean, and the multiplicative random walk $R$ turns into the zero-delayed ordinary random walk $|\log R| = (\hat S_k)_{k\in\mathbb{N}_0}$ with jumps $|\log W_k|$. In this setting the balls are identified with the points $E_1,\dots,E_n$, and the boxes are identified with the intervals $[\hat S_{k-1},\hat S_k)$ for $k\in\mathbb{N}$. In what follows we shall use the order statistics $E_{1,n}\le E_{2,n}\le\ldots\le E_{n,n}$ of $(E_1,\dots,E_n)$. The multiplicative and additive schemes are equivalent because the events $\{E_j\in[\hat S_{k-1},\hat S_k)\}$ and $\{U_j\in(R_k,R_{k-1}]\}$ coincide.
Proof for Example 5.1.2 Since $W$ has the beta distribution with parameters $\theta$ and 1, $\hat S_1,\hat S_2,\dots$ are the arrival times in a Poisson process of intensity $\theta$. For $j=1,\dots,n$, set $M_j := \#\{k\in\mathbb{N} : \hat S_k\in(E_{n-j,n},E_{n-j+1,n})\}$ (with $E_{0,n}:=0$). Recall that the differences $E_{n,n}-E_{n-1,n},\ E_{n-1,n}-E_{n-2,n},\ \dots,\ E_{1,n}-E_{0,n}$ are independent exponential random variables with expectations $1, 1/2,\dots,1/n$. Since the Poisson process has independent increments, $M_1,\dots,M_n$ are independent. Since the Poisson process has stationary increments, we infer

$$M_j \overset{d}{=} \#\{k\in\mathbb{N} : \hat S_k < E_{n-j+1,n}-E_{n-j,n}\}$$

and further

$$E\,s^{M_j} = E\,e^{-\theta(1-s)(E_{n-j+1,n}-E_{n-j,n})} = \frac{j}{j+\theta(1-s)} = \sum_{k\ge0}s^k\,\frac{j}{j+\theta}\Big(\frac{\theta}{j+\theta}\Big)^k.$$

Thus, $M_j$ has a geometric distribution with success probability $j/(\theta+j)$. Counting the number of empty boxes which fit into $(E_{n-j,n},E_{n-j+1,n})$, we see that this number is $M_n$ for $j=n$ and $(M_j-1)^+$ for $j=1,\dots,n-1$, whence

$$L_n = (M_1-1)^+ + \ldots + (M_{n-1}-1)^+ + M_n.$$

In terms of generating functions this is equivalent to

$$E\,s^{L_n} = \frac{n}{n+\theta-\theta s}\prod_{j=1}^{n-1}\frac{j\,(j+2\theta-\theta s)}{(j+\theta)(j+\theta-\theta s)},$$

and (5.1) follows by sending $n\to\infty$ and evaluating the infinite product in terms of the gamma function (see Example 1 on p. 239 in [262]). The generating function of the stated mixed Poisson distribution equals

$$E\,e^{-\theta|\log(1-W)|(1-s)} = E(1-W)^{\theta(1-s)} = \frac{\Gamma(1+\theta)\,\Gamma(1+\theta-\theta s)}{\Gamma(1+2\theta-\theta s)},$$

which is the same as the right-hand side of (5.1). □
Proof of Theorem 5.1.4 The proof of distributional convergence can be found in
[150]. To determine the distribution of R˛; c .u/, we fix ı > 0, put
X
R.ı/
˛; c .u/ WD 1fW .1=c; ˛/ .1=c; ˛/ .1=c; ˛/ .1=c; ˛/
˛ .tk /u<W˛ .tk /Cjk ; jk >ıg
k

and use the equality (consult the definition of N .1=c; ˛/ in ‘List of notation’ for the
used notation)
 Z Z 
.ı/  
E ezR˛; c .u/ D E exp  1  ez1fW˛ .s/u<W˛ .s/Cyg ds
1=c; ˛ .dy/
Œ0; 1/ .ı; 1
206 5 Application to the Bernoulli Sieve

for z  0 along with simple manipulations to obtain


 Z 
.ı/  ˛
E ezR˛;c .u/ D E exp  c1 .u  y/ _ ı dW˛ .y/ .1  ez / ; z0
Œ0; u

where $W_\alpha$ is an inverse $\alpha$-stable subordinator (see Definition 3.3.6). Passing to the limit as $\delta\to 0+$ and using the continuity theorem for Laplace transforms, we see that

$$\mathbb{E}\,e^{-zR_{\alpha,c}(u)} = \mathbb{E}\exp\Big(-c^{-1}\int_{[0,u]}(u-y)^{-\alpha}\,\mathrm{d}W_\alpha(y)\,\big(1-e^{-z}\big)\Big),\qquad z\ge 0.$$

Thus, the distribution of $R_{\alpha,c}(u)$ is mixed Poisson with the parameter

$$c^{-1}\int_{[0,u]}(u-y)^{-\alpha}\,\mathrm{d}W_\alpha(y) = c^{-1}J_{\alpha,\alpha}(u)$$

(see Definition 3.3.7). According to Property Q2 on p. 133, the distribution of $J_{\alpha,\alpha}(u)$ is exponential with unit mean. Therefore, the distribution of $R_{\alpha,c}(u)$ is geometric with success probability $c(c+1)^{-1}$, as asserted. The proof of Theorem 5.1.4 is complete. □
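The last step of the proof, that a Poisson distribution mixed over an exponentially distributed parameter is geometric, admits a quick numerical sanity check. The sketch below assumes only the stated distributions: the Poisson parameter is $E/c$ with $E\sim\mathrm{Exp}(1)$, and the claimed law is geometric on $\{0,1,2,\dots\}$ with success probability $c/(c+1)$; the integration routine is our own illustration.

```python
import math

def mixed_poisson_pmf(k, c, n=200_000, upper=60.0):
    """P{R = k} for a Poisson variable with random parameter E/c, E ~ Exp(1).

    Computes the mixture integral ∫_0^∞ e^{-x/c}(x/c)^k/k! · e^{-x} dx
    by the composite trapezoid rule on [0, upper]."""
    h = upper / n
    s = 0.0
    for i in range(n + 1):
        x = i * h
        if x == 0.0:
            f = 1.0 if k == 0 else 0.0
        else:
            f = math.exp(-x / c - x + k * math.log(x / c) - math.lgamma(k + 1))
        s += f * (0.5 if i in (0, n) else 1.0)
    return s * h

def geometric_pmf(k, c):
    p = c / (c + 1.0)            # claimed success probability c/(c+1)
    return p * (1.0 - p) ** k

for k in range(6):
    assert abs(mixed_poisson_pmf(k, 2.0) - geometric_pmf(k, 2.0)) < 1e-5
```

The agreement reflects the exact computation $\int_0^\infty e^{-x/c}(x/c)^k/k!\,e^{-x}\,\mathrm{d}x = \frac{c}{c+1}\big(\frac{1}{c+1}\big)^k$.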
Proof of Theorem 5.1.2 Since $\mu=\infty$ by assumption, Lemmas 5.2.1 and 5.2.3 in combination with Markov's inequality ensure that it suffices to prove

$$\lim_{t\to\infty}\mathbb{E}\sum_{k\ge 0}1_{\{\hat S_k\le t<\hat S_k+\hat\eta_{k+1}\}} = 0.$$

If $\mu=\infty$ and $\nu<\infty$, then $t\mapsto\mathbb{P}\{\eta_1>t\}$ is dRi on $\mathbb{R}_+$ by Lemma 6.2.1 (a), and an application of Lemma 6.2.4 yields

$$\mathbb{E}\sum_{k\ge 0}1_{\{\hat S_k\le t<\hat S_k+\hat\eta_{k+1}\}} = \int_{[0,t]}\mathbb{P}\{\eta_1>t-y\}\,\mathrm{d}\hat U(y)\to 0.$$

If the second group of assumptions is in force, equivalently,

$$\lim_{t\to\infty}\big(\mathbb{P}\{|\log(1-W)|>t\}/\mathbb{P}\{|\log W|>t\}\big)=0,$$

the convergence $\lim_{t\to\infty}\int_{[0,t]}\mathbb{P}\{\eta_1>t-y\}\,\mathrm{d}\hat U(y)=0$ follows by Lemma 6.2.15 (with $\xi=|\log W|$, $f(t)=\mathbb{P}\{|\log(1-W)|>t\}$ and $c=0$). The proof of Theorem 5.1.2 is complete. □

Proof of Theorem 5.1.3 By Theorem 3.3.21,

$$\frac{\sum_{k\ge 0}1_{\{\hat S_k\le ut<\hat S_k+\hat\eta_{k+1}\}}-b(ut)}{a(t)}\ \overset{\mathrm{f.d.}}{\Longrightarrow}\ \Theta(u)$$

for appropriate functions $a(t)>0$ and $b(t)\in\mathbb{R}$ and appropriate limit processes $\Theta$. For instance, if ${\rm Var}(\log W)<\infty$, then $a(t)=\big(\mu^{-1}\int_0^t \mathbb{P}\{|\log(1-W)|>y\}\,\mathrm{d}y\big)^{1/2}$, $b(t)=\mu^{-1}\int_0^t \mathbb{P}\{|\log(1-W)|>y\}\,\mathrm{d}y$ and $\Theta(u)=V_\beta(u)$. Since in all cases (D1)–(D4) of Theorem 3.3.21 $\lim_{t\to\infty}a(t)=\infty$, Lemmas 5.2.1 (in case (D4)), 5.2.2 (in cases (D1)–(D3)), and 5.2.3 enable us to conclude that the last centered formula holds with $L_{\lfloor e^{ut}\rfloor}$ replacing $\sum_{k\ge 0}1_{\{\hat S_k\le ut<\hat S_k+\hat\eta_{k+1}\}}$. The proof of Theorem 5.1.3 is complete. □

5.5 Bibliographic Comments

The study of the Bernoulli sieve was initiated in [97]. Since then, several papers [7,
99–101, 103–105, 138, 139, 150, 220] have appeared which analyzed various
asymptotic properties of the Bernoulli sieve. While the articles [97, 100, 104] give a
fairly complete account of one-dimensional convergence of the number of occupied
boxes Kn , functional limit theorems for Kn were recently obtained in [7]. Further,
we note that one-dimensional convergence of the index of the last occupied box Mn
and the number of empty boxes Ln was investigated in [104]. See also [138, 139] for
further results on Ln .
Recall that the Bernoulli sieve is an infinite allocation scheme with random frequencies. Infinite allocation schemes with nonrandom frequencies have also
attracted some attention in the literature. Starting from Karlin’s fundamental paper
[171] which followed earlier work in [22, 72], several aspects of the scheme were
investigated in [23, 45, 79, 82, 90, 130, 211, 215, 216]. Surveys on the infinite
allocation schemes can be found in [24, 98]. It should be emphasized that the infinite
allocation schemes differ radically from the classical allocation scheme with finitely
many positive frequencies (detailed treatment of the latter scheme can be found in
monograph [184]).
Distributional convergence in Theorem 5.1.1 was proved in Theorem 3.3 of [105]
by using the argument outlined in the proof above. An alternative proof can be found
in Theorem 2.2(a) of [104]. Actually, the distributional convergence is accompanied
with convergence of moments of all positive orders. While the convergence of
expectations was established in [105], convergence of higher moments was later
derived in [220].
Examples 5.1.1 and 5.1.2 are Proposition 7.1 in [101] and Proposition 5.2 in
[104], respectively.

Theorems 5.1.4 and 5.1.3 are taken from [150]. Theorem 5.1.3 strengthens Theorem 1.1 in [139] and Theorem 1.2 in [138], which only deal with one-dimensional convergence.
Section 5.2 follows the presentation in [150]. Although the papers [104, 138, 139] offer several approaches to the de-Poissonization of the number of empty boxes, the result of Lemma 5.2.3 is the strongest one among those known to the author, even when only one-dimensional distributions are concerned.
Section 5.3 is based on [104, 139]. In particular, Lemma 5.3.1 is Proposition 1.3
in [139]. We refer to [138, 139] for some other results which hold for general
nonincreasing Markov chains.
Chapter 6
Appendix

6.1 Regular Variation

Definition 6.1.1 A positive measurable function $\ell$, defined on some neighborhood of $\infty$, is called slowly varying at $\infty$ if $\lim_{t\to\infty}(\ell(ut)/\ell(t))=1$ for all $u>0$.

Definition 6.1.2 A positive measurable function $f$, defined on some neighborhood of $\infty$, is called regularly varying at $\infty$ of index $\alpha\in\mathbb{R}$ if $\lim_{t\to\infty}(f(ut)/f(t))=u^{\alpha}$ for all $u>0$.

Lemma 6.1.3 Let $a(t)$ be a positive function satisfying $\lim_{t\to\infty}t\ell(a(t))(a(t))^{-\alpha}=1$ for some $\alpha>0$ and some $\ell$ slowly varying at $\infty$ with $\lim_{t\to\infty}\ell(t)=\infty$. Then $a(t)$ is regularly varying at $\infty$ of index $1/\alpha$ and $\lim_{t\to\infty}t^{-1/\alpha}a(t)=\infty$.

Proof The function $a(t)$ is an asymptotic inverse of $t^{\alpha}/\ell(t)$. Hence, according to Proposition 1.5.15 in [44], $a(t)\sim t^{1/\alpha}(\ell^{\#}(t))^{1/\alpha}$ where $\ell^{\#}(t)$ is the de Bruijn conjugate of $1/\ell(t^{1/\alpha})$. The de Bruijn conjugate is slowly varying and hence $a$ is regularly varying of index $1/\alpha$. This implies $\lim_{t\to\infty}\ell(a(t))=\infty$, whence $\lim_{t\to\infty}t^{-1/\alpha}a(t)=\infty$ because $\ell(a(t))\sim(a(t))^{\alpha}/t$ as $t\to\infty$. □
Lemma 6.1.4 Let $g$ be regularly varying at $\infty$ of index $\rho$ and locally bounded outside zero.

(a) Then

$$\lim_{t\to\infty}\ \sup_{a\le s\le b}\Big|\frac{g(st)}{g(t)}-s^{\rho}\Big|=0$$

for all $0<a<b<\infty$.

(b) Suppose $\rho\ne 0$. Then there exists a monotone function $u$ such that $g(t)\sim u(t)$ as $t\to\infty$.

(c) Let $\rho>-1$ and $a>0$. Then $\int_a^t g(y)\,\mathrm{d}y\sim(\rho+1)^{-1}tg(t)$ as $t\to\infty$.


(d) Let $\rho=-1$ and $a>0$. Then $t\mapsto\int_a^t g(y)\,\mathrm{d}y$ is a slowly varying function and

$$\lim_{t\to\infty}\frac{tg(t)}{\int_a^t g(y)\,\mathrm{d}y}=0.$$

Part (a) of Lemma 6.1.4 is Theorem 1.5.2 in [44]; part (b) is a consequence of Theorem 1.5.3 in [44]; parts (c) and (d) are two versions of Karamata's theorem (Proposition 1.5.8 and Proposition 1.5.9a in [44], respectively).
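Part (c) of Lemma 6.1.4 is easy to illustrate numerically in the pure power case $g(y)=y^{\rho}$, where $\int_a^t y^{\rho}\,\mathrm{d}y \sim t^{1+\rho}/(\rho+1)$; the sketch below (our own illustration, with a trapezoid-rule integral) checks that the ratio $\int_a^t g(y)\,\mathrm{d}y\,/\,(t\,g(t))$ approaches $1/(\rho+1)$ for large $t$.

```python
def karamata_ratio(rho, t, a=1.0, n=200_000):
    """Trapezoid approximation of ∫_a^t y^rho dy, divided by t·g(t) with g(y) = y^rho."""
    h = (t - a) / n
    integral = sum(((a + i * h) ** rho) * (0.5 if i in (0, n) else 1.0)
                   for i in range(n + 1)) * h
    return integral / (t * t ** rho)

# Karamata's theorem: the ratio tends to 1/(rho + 1) as t → ∞.
assert abs(karamata_ratio(0.5, 1e6) - 1 / 1.5) < 1e-6
assert abs(karamata_ratio(-0.5, 1e6) - 1 / 0.5) < 0.01
```

The slower accuracy for $\rho=-1/2$ reflects the slower decay of the boundary contribution near $a$, not a failure of the asymptotics.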

6.2 Renewal Theory

This and the next section are concerned with ordinary random walks. In the present
section we mainly treat random walks with nonnegative jumps, Proposition 6.2.6
being the only exception. In the next section random walks with two-sided jumps
are in focus.

6.2.1 Basic Facts

Let $(\xi_k)_{k\in\mathbb{N}}$ be a sequence of independent copies of a nonnegative random variable $\xi$. Throughout the section we make the

Standing Assumption: $\mathbb{P}\{\xi=0\}<1$.

Further, let $(S_n)_{n\in\mathbb{N}_0}$ be the zero-delayed ordinary random walk defined by $S_0=0$ and $S_n=\xi_1+\dots+\xi_n$, $n\in\mathbb{N}$.

For $x\in\mathbb{R}$, set $\nu(x)=\inf\{k\in\mathbb{N}_0: S_k>x\}$. Plainly, $\nu(x)=0$ for $x<0$. Since $\lim_{n\to\infty}S_n=+\infty$ a.s., we have $\nu(x)<\infty$ a.s. for each $x\ge 0$. Put further

$$U(x)=\mathbb{E}\nu(x)=\sum_{n\ge 0}\mathbb{P}\{S_n\le x\},\qquad x\in\mathbb{R}.$$

The function $U$ is called the renewal function. It is clear that $U$ is nondecreasing on $\mathbb{R}$ with $U(x)=0$ for $x<0$.
Here is a collection of standard results that are frequently used throughout the
book.
Finiteness of U For $x\in\mathbb{R}$,

$$U(x)<\infty. \quad (6.1)$$

Proof Fix any $\gamma>0$. Since $\mathbb{P}\{\xi=0\}<1$ we have $\mathbb{E}e^{-\gamma\xi}<1$ and further

$$U(x)=\sum_{n\ge 0}\mathbb{P}\{S_n\le x\}\le e^{\gamma x}\sum_{n\ge 0}\mathbb{E}e^{-\gamma S_n}=e^{\gamma x}\big(1-\mathbb{E}e^{-\gamma\xi}\big)^{-1}<\infty$$

by Markov's inequality. □
Distributional Subadditivity of $\nu(x)$

$$\mathbb{P}\{\nu(t+s)>x\}\le\mathbb{P}\{\nu(t)+\bar\nu(s)>x\},\qquad x\ge 0, \quad (6.2)$$

where, with $t,s\ge 0$ fixed, $\bar\nu(s)$ has the same distribution as $\nu(s)$ and is independent of $\nu(t)$. Hence, the renewal function $U$ is subadditive on $\mathbb{R}$, i.e.,

$$U(t+s)\le U(t)+U(s),\qquad t,s\in\mathbb{R}. \quad (6.3)$$

Proof We start by observing that

$$\nu(t+s)-\nu(t)=\begin{cases}\inf\{k\in\mathbb{N}: S_{\nu(t)}-t+\xi_{\nu(t)+1}+\dots+\xi_{\nu(t)+k}>s\}, & \text{if } S_{\nu(t)}-t\le s;\\ 0, & \text{if } S_{\nu(t)}-t>s.\end{cases}$$

Since $S_{\nu(t)}-t>0$ a.s. we infer

$$\nu(t+s)\le\nu(t)+\inf\{k\in\mathbb{N}: S_{k+\nu(t)}-S_{\nu(t)}>s\}\quad\text{a.s.}$$

The second term on the right-hand side has the same distribution as $\nu(s)$ and is independent of $\nu(t)$ because the sequence $(S_{k+\nu(t)}-S_{\nu(t)})_{k\in\mathbb{N}}$ has the same distribution as $(S_k)_{k\in\mathbb{N}}$ and is independent of $\nu(t)$ (this is a consequence of the fact that $\nu(t)$ is a stopping time w.r.t. the filtration generated by $(\xi_i)$).

Integrating (6.2) over $[0,\infty)$ immediately gives (6.3) for $t,s\ge 0$. If $t,s<0$, then both sides of (6.3) equal zero. Finally, if $t<0$ and $s\ge 0$, (6.3) reads $U(t+s)\le U(s)$. This obviously holds for $U$ is nondecreasing on $\mathbb{R}$. □
u
Elementary Renewal Theorem

$$\lim_{x\to\infty}x^{-1}U(x)=\mu^{-1} \quad (6.4)$$

if $\mu=\mathbb{E}\xi<\infty$, whereas the limit equals zero if $\mu=\infty$.

Erickson's Inequality

$$\frac{x}{\int_0^x \mathbb{P}\{\xi>y\}\,\mathrm{d}y}\le U(x)\le\frac{2x}{\int_0^x \mathbb{P}\{\xi>y\}\,\mathrm{d}y},\qquad x>0. \quad (6.5)$$

Recall that, for $\delta>0$, a distribution is called $\delta$-lattice if it is concentrated on $\delta\mathbb{Z}$ and not concentrated on $\delta_1\mathbb{Z}$ for any $\delta_1>\delta$. A distribution is called lattice if it is $\delta$-lattice for some $\delta>0$ and nonlattice if it is not $\delta$-lattice for any $\delta>0$. Note that, unlike in some other areas of probability theory, being lattice in renewal theory means being concentrated on a centered arithmetic progression. For instance, the distribution concentrated at the points $1$ and $\sqrt{2}$ is nonlattice in renewal theory.
Lorden's Inequality Let $\mathbb{E}\xi^2<\infty$. Suppose that the distribution of $\xi$ is nonlattice. Then

$$U(x)\le\mu^{-1}x+\mu^{-2}\mathbb{E}\xi^2,\qquad x\ge 0. \quad (6.6)$$

Suppose that the distribution of $\xi$ is $\delta$-lattice for $\delta>0$. Then

$$U(\delta n)\le\mu^{-1}\delta n+\mu^{-1}\delta+\mu^{-2}\mathbb{E}\xi^2,\qquad n\in\mathbb{N}. \quad (6.7)$$

Blackwell's Theorem Suppose that the distribution of $\xi$ is nonlattice. Then

$$\lim_{x\to\infty}\big(U(x+y)-U(x)\big)=\mu^{-1}y \quad (6.8)$$

for each $y>0$ if $\mu=\mathbb{E}\xi\in(0,\infty)$, whereas the limit equals zero if $\mu=\infty$.

Suppose that the distribution of $\xi$ is $\delta$-lattice for $\delta>0$. Then

$$\lim_{n\to\infty}\sum_{k\ge 0}\mathbb{P}\{S_k=\delta n\}=\mu^{-1}\delta \quad (6.9)$$

if $\mu=\mathbb{E}\xi\in(0,\infty)$, whereas the limit equals zero if $\mu=\infty$.
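For a concrete lattice walk, the lattice versions (6.7) and (6.9), as well as Erickson's inequality (6.5), can be checked directly, since $u(\delta n)=\sum_{k\ge 0}\mathbb{P}\{S_k=\delta n\}$ satisfies the renewal recursion $u(\delta n)=\sum_j p_j\,u(\delta(n-j))$ for $n\ge 1$. The following sketch is our own illustration with $\delta=1$ and $\xi\in\{1,2\}$ equiprobable.

```python
# ξ takes the values 1 and 2 with probability 1/2 each: a 1-lattice distribution
# with μ = 3/2 and Eξ² = 5/2.
p = {1: 0.5, 2: 0.5}
mu = sum(j * q for j, q in p.items())
m2 = sum(j * j * q for j, q in p.items())

N = 400
u = [0.0] * (N + 1)                      # u[n] = Σ_{k≥0} P{S_k = n}
u[0] = 1.0                               # S_0 = 0
for n in range(1, N + 1):                # renewal recursion u(n) = Σ_j p_j u(n-j)
    u[n] = sum(q * u[n - j] for j, q in p.items() if j <= n)
U = [sum(u[: n + 1]) for n in range(N + 1)]   # U(n) = Σ_{m=0}^{n} u(m)

# Blackwell's theorem (6.9): u(n) → δ/μ = 2/3 (here at a geometric rate).
assert abs(u[N] - 1 / mu) < 1e-9
# Lorden's inequality (6.7): U(n) ≤ n/μ + δ/μ + Eξ²/μ².
assert all(U[n] <= n / mu + 1 / mu + m2 / mu ** 2 + 1e-12 for n in range(N + 1))
# Erickson's inequality (6.5), lower bound, at integers n ≥ 2 (∫_0^n P{ξ>y}dy = 3/2).
assert all(U[n] >= n / 1.5 for n in range(2, N + 1))
```

For this example one can solve the recursion explicitly: $u(n)=\tfrac23+\tfrac13(-\tfrac12)^n$, which makes all three bounds transparent.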

6.2.2 Direct Riemann Integrability and the Key Renewal Theorem

Direct Riemann Integrability A function $f:\mathbb{R}\to\mathbb{R}_+$ is called directly Riemann integrable (dRi, in short) on $\mathbb{R}$, if

(a) $\sum_{n\in\mathbb{Z}}\sup_{(n-1)h\le y<nh}f(y)<\infty$ for each $h>0$ and
(b) $\lim_{h\to 0+}h\sum_{n\in\mathbb{Z}}\big(\sup_{(n-1)h\le y<nh}f(y)-\inf_{(n-1)h\le y<nh}f(y)\big)=0$.

A function $f:\mathbb{R}\to\mathbb{R}$ is called dRi on $\mathbb{R}$ if $f^{+}$ and $f^{-}$ are dRi. A function $f:\mathbb{R}\to\mathbb{R}_+$ is dRi on $[0,\infty)$ or $(-\infty,0]$ if (a) and (b) hold with the sums taken over $\mathbb{N}$ or $-\mathbb{N}_0$, respectively.
Let $f:\mathbb{R}\to\mathbb{R}_+$ be dRi. Then $f$ is bounded on $\mathbb{R}$ and vanishes at $\pm\infty$. The first claim follows from

$$\sup_{x\in\mathbb{R}}f(x)\le\sup_{n\in\mathbb{Z}}\ \sup_{n-1\le y<n}f(y)\le\sum_{n\in\mathbb{Z}}\ \sup_{n-1\le y<n}f(y)<\infty$$

where the finiteness is ensured by part (a) of the definition above. As for the second, observe that for any $t\in\mathbb{R}$ there exists $n\in\mathbb{Z}$ such that $t\in[n-1,n)$. It remains to note that $\lim_{n\to\pm\infty}\sup_{n-1\le y<n}f(y)=0$ and that $f(t)\le\sup_{n-1\le y<n}f(y)$.

Every nonnegative dRi function is improperly Riemann integrable (see Remark 3.10.2 in [236]). The other implication does not necessarily hold, just because an improperly Riemann integrable function may be unbounded in the vicinity of $\pm\infty$, whereas a dRi function is globally bounded. A concrete example is given by $g(t)=\sum_{n\ge 1}n^{1/2}1_{[n,\,n+n^{-2})}(t)$, which is an obviously unbounded improperly Riemann integrable function. The function $g^{*}(t)=\sum_{n\ge 1}1_{[n,\,n+n^{-2})}(t)$ is a bounded improperly Riemann integrable function, yet not dRi on $[0,\infty)$ because

$$\sum_{n\ge 2}\ \sup_{n-1\le y<n}g^{*}(y)=\sum_{n\ge 1}1=\infty.$$
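The contrast between improper Riemann integrability and condition (a) for $g^{*}$ is easy to see numerically: the total length of the intervals $[n,n+n^{-2})$ is summable, while the $h=1$ upper sums grow linearly. The sketch below is our own illustration of these two facts.

```python
import math

def g_star(y):
    """g*(y) = Σ_{n≥1} 1_{[n, n+n^{-2})}(y)."""
    n = math.floor(y)
    return 1.0 if n >= 1 and y < n + n ** -2 else 0.0

N = 10_000
# Improper integral over [0, N): total interval length Σ n^{-2}, bounded by π²/6.
integral = sum(n ** -2 for n in range(1, N))
# Condition (a) with h = 1: each cell [m-1, m), m ≥ 2, contains the left end of
# an interval of g*, so the cell supremum equals 1 and the sum grows like N.
upper_sum = sum(max(g_star(m - 1), g_star(m - 0.5)) for m in range(2, N))

assert integral < math.pi ** 2 / 6
assert upper_sum == N - 2
```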

Lemma 6.2.1 The following conditions are sufficient for $f:\mathbb{R}\to\mathbb{R}_+$ to be dRi on $\mathbb{R}_+$.

(a) $f$ is nonincreasing on $\mathbb{R}_+$ and Lebesgue integrable on $\mathbb{R}_+$.
(b) $f$ is Lebesgue integrable on $\mathbb{R}_+$ and $e^{-at}f(t)$ is nonincreasing on $\mathbb{R}_+$ for some $a>0$.
(c) $f(t):=\mathbb{E}g(t-\eta)1_{\{\eta\le t\}}$ where $g:\mathbb{R}_+\to\mathbb{R}_+$ is dRi on $\mathbb{R}_+$, and $\eta$ is a nonnegative random variable.
(d) $f$ is locally bounded and a.e. continuous on $\mathbb{R}_+$ (equivalently, locally Riemann integrable on $\mathbb{R}_+$); $f(x)\le g(x)$, $x\ge 0$, for some function $g$ which is dRi on $\mathbb{R}_+$ or, more generally, $\sum_{n\ge 1}\sup_{(n-1)h_0\le y<nh_0}f(y)<\infty$ for some $h_0>0$.
(e) $f$ is continuous and compactly supported.

If $f$ is Lebesgue integrable on $\mathbb{R}$ and $e^{-at}f(t)$ is nonincreasing on $\mathbb{R}$ for some $a>0$, then $f$ is dRi on $\mathbb{R}$.

Part (a) is well known; see, for instance, Remark 3.10.3 in [236]. Part (b) and the last statement follow from the proof of Corollary 2.17 in [81]. Part (c) is Proposition 2.16(d) on p. 297 in [69]. Part (d) with $h_0=1$ is Remark 3.10.4 in [236]. The case $h_0>0$ only requires trivial modifications. Part (e) is a particular case of Remark 3.10.1 in [236].
Lemma 6.2.2 Let $\eta$ be a random variable taking values in $(0,1]$. Then the functions $g_0(y):=\mathbb{E}\exp(-\eta e^{y})-\mathbb{E}\exp(-2\eta e^{y})$ and $g_4(y):=e^{y}\,\mathbb{E}\eta\exp(-\eta e^{y})$ are dRi on $\mathbb{R}$, the function $g_1(y):=\mathbb{E}\big(1-\exp(-\eta e^{y})\big)$ is dRi on $(-\infty,0]$, and the functions $g_2(y):=\mathbb{E}\exp(-\eta e^{y})1_{\{\eta>e^{-y}\}}$ and $g_3(y):=\mathbb{E}\big(1-\exp(-\eta e^{y})\big)1_{\{\eta\le e^{-y}\}}$ are dRi on $\mathbb{R}_+$.
Proof Since the functions $g_i$, $i=0,4$, and $g_3$ are nonnegative, it suffices to check that these are Lebesgue integrable on $\mathbb{R}$ and $\mathbb{R}_+$, respectively, and that the functions $e^{-y}g_i(y)$, $i=0,3,4$, are nonincreasing; see Lemma 6.2.1. The first property follows from the equalities

$$\int_{\mathbb{R}}g_0(y)\,\mathrm{d}y=\int_0^{\infty}y^{-1}\big(\mathbb{E}e^{-\eta y}-\mathbb{E}e^{-2\eta y}\big)\,\mathrm{d}y=\mathbb{E}\int_0^{\infty}y^{-1}\big(e^{-\eta y}-e^{-2\eta y}\big)\,\mathrm{d}y=\log 2,$$

$$\int_{\mathbb{R}}g_4(y)\,\mathrm{d}y=\mathbb{E}\int_0^{\infty}\eta e^{-\eta y}\,\mathrm{d}y=1$$

and the inequality

$$\int_0^{\infty}g_3(y)\,\mathrm{d}y=\mathbb{E}\int_1^{\infty}y^{-1}\big(1-\exp(-\eta y)\big)1_{\{\eta\le y^{-1}\}}\,\mathrm{d}y=\mathbb{E}\int_{\eta}^{1}y^{-1}\big(1-e^{-y}\big)\,\mathrm{d}y\le\int_0^{1}y^{-1}\big(1-e^{-y}\big)\,\mathrm{d}y<\infty.$$

For the last estimate we have used the change of variable and the condition $\eta\in(0,1]$ a.s. Further, with $z\in(0,1]$ fixed, the function $y\mapsto y^{-1}\big(1-e^{-yz}\big)e^{-yz}$ is nonincreasing on $(0,\infty)$. Hence the function $y\mapsto e^{-y}g_0(y)$ is nonincreasing, too. By the same reasoning, with $z\in(0,1]$ fixed, the functions $y\mapsto y^{-1}(1-e^{-yz})$ and $y\mapsto 1_{[z,\infty)}(y^{-1})$ are nonincreasing on $(0,\infty)$; hence, so are their product and the function $y\mapsto e^{-y}g_3(y)$. The monotonicity of $e^{-y}g_4(y)$ is obvious.

In view of

$$\int_{-\infty}^{0}g_1(y)\,\mathrm{d}y=\int_0^{1}y^{-1}\mathbb{E}\big(1-e^{-\eta y}\big)\,\mathrm{d}y\le\mathbb{E}\int_0^{1}\eta\,\mathrm{d}y\in(0,1],$$

$g_1$ is Lebesgue integrable on $(-\infty,0]$. Since it is also nonnegative and nondecreasing, applying Lemma 6.2.1 (a) to $g_1(-y)$ gives the desired result.

The positive function $h(y):=\exp(-e^{y})$ is dRi on $\mathbb{R}_+$. Since $g_2$ is the convolution of $h$ and the distribution function of $|\log\eta|$, it must be dRi on $\mathbb{R}_+$ by part (c) of Lemma 6.2.1. □
The following two results are known as key renewal theorems. These are of major importance in many problems of applied probability.

Proposition 6.2.3 Let $f:\mathbb{R}_+\to\mathbb{R}_+$ be dRi and assume that $\xi$ has a nonlattice distribution. Then, as $t\to\infty$,

$$\int_{[0,t]}f(t-y)\,\mathrm{d}U(y)=\mathbb{E}\sum_{k\ge 0}f(t-S_k)1_{\{S_k\le t\}}\to\mu^{-1}\int_0^{\infty}f(y)\,\mathrm{d}y$$

if $\mu=\mathbb{E}\xi<\infty$, whereas the limit equals zero if $\mu=\infty$.

Proposition 6.2.4 Suppose that $\mathbb{E}\xi=\infty$. If $f:\mathbb{R}\to\mathbb{R}_+$ is dRi on $\mathbb{R}_+$, then

$$\lim_{t\to\infty}\int_{[0,t]}f(t-y)\,\mathrm{d}U(y)=0,$$

and if $f$ is dRi on $(-\infty,0]$, then

$$\lim_{t\to\infty}\int_{[t,\infty)}f(t-y)\,\mathrm{d}U(y)=0.$$

Proof Using the monotonicity of $U$ and appealing to Blackwell's theorem (see (6.8) and (6.9)) we conclude that in both lattice and nonlattice cases

$$\lim_{t\to\infty}\big(U(t+h)-U(t)\big)=0 \quad (6.10)$$

for any $h>0$. Thus, repeating literally the proof of the key renewal theorem given on p. 241 in [236] leads to the desired conclusion. □
P
Corollary 6.2.5 Suppose that $\mathbb{E}\xi=\infty$. Then $t-S_{\nu(t)-1}\overset{\mathrm{P}}{\to}+\infty$ as $t\to\infty$.

Proof For $x>0$,

$$\mathbb{P}\{t-S_{\nu(t)-1}<x\}=\int_{(t-x,\,t]}\mathbb{P}\{\xi>t-y\}\,\mathrm{d}U(y)=\int_{[0,t]}f_x(t-y)\,\mathrm{d}U(y)$$

where $f_x(z):=\mathbb{P}\{\xi>z\}1_{[0,x)}(z)$ is dRi on $[0,\infty)$ as a nonincreasing Lebesgue integrable function (see Lemma 6.2.1 (a)). Thus, the last integral converges to zero as $t\to\infty$, which proves the claim. □
In Proposition 6.2.3 we did not discuss the lattice case, for it is subsumed in a more general result, a version of the key renewal theorem for lattice distributions concentrated on the whole line. Even though the result is widely used in the literature, we are not aware of any reference which would give a proof.

Proposition 6.2.6 Assume that $\xi$ has a $\delta$-lattice distribution concentrated on $\mathbb{R}$ and $\mu=\mathbb{E}\xi\in(0,\infty)$. Let $f:\mathbb{R}\to\mathbb{R}$ be a function that satisfies $\sum_{j\in\mathbb{Z}}|f(x+\delta j)|<\infty$ for some $x\in\mathbb{R}$. Then

$$\lim_{n\to\infty}\mathbb{E}\sum_{k\ge 0}f(x+\delta n-S_k)=\mu^{-1}\delta\sum_{j\in\mathbb{Z}}f(x+\delta j).$$

Proof By considering $f^{+}$ and $f^{-}$ separately, without loss of generality $f$ may be assumed nonnegative.

Suppose first that $\xi\ge 0$ a.s. Set $u(\delta n):=\sum_{k\ge 0}\mathbb{P}\{S_k=\delta n\}$. In view of (6.9), for any $\varepsilon\in(0,\mu^{-1}\delta)$ there exists $j_0\in\mathbb{N}$ such that

$$\mu^{-1}\delta-\varepsilon\le u(\delta j)\le\mu^{-1}\delta+\varepsilon$$

whenever $j\ge j_0+1$. Using this we obtain

$$\mathbb{E}\sum_{k\ge 0}f(x+\delta n-S_k)=\sum_{j=0}^{j_0}f\big(x+\delta(n-j)\big)u(\delta j)+\sum_{j\ge j_0+1}f\big(x+\delta(n-j)\big)u(\delta j)$$
$$\le\sum_{j=0}^{j_0}f\big(x+\delta(n-j)\big)u(\delta j)+\big(\mu^{-1}\delta+\varepsilon\big)\sum_{j=-\infty}^{n-j_0-1}f\big(x+\delta j\big). \quad (6.11)$$

The assumption $\sum_{j\in\mathbb{Z}}f(x+\delta j)<\infty$ ensures $\lim_{n\to\infty}f(x+\delta n)=0$, whence

$$\limsup_{n\to\infty}\mathbb{E}\sum_{k\ge 0}f(x+\delta n-S_k)\le\mu^{-1}\delta\sum_{j\in\mathbb{Z}}f(x+\delta j)$$

on letting in (6.11) first $n\to\infty$ and then $\varepsilon$ to zero. The converse inequality for the lower limit follows analogously.
The general case when $\xi$ takes values of both signs will now be handled by reducing it to the case $\xi>0$ a.s. via a stopping time argument. We use the representation

$$\mathbb{E}\sum_{k\ge 0}f(x+\delta n-S_k)=\mathbb{E}\sum_{j\ge 0}\sum_{i=\tau_j}^{\tau_{j+1}-1}f(x+\delta n-S_i)=\mathbb{E}\sum_{k\ge 0}f^{*}(x+\delta n-S_{\tau_k})$$

where $(\tau_k)_{k\in\mathbb{N}_0}$ are the successive strictly increasing ladder epochs for $(S_n)$ (see 'List of Notation' for the precise definition), and $f^{*}(x):=\mathbb{E}\sum_{j=0}^{\tau-1}f(x-S_j)$, $x\in\mathbb{R}$ (we write $\tau$ for $\tau_1$). The sequence $(S_{\tau_k})_{k\in\mathbb{N}_0}$ is an ordinary random walk with positive jumps having the same distribution as $S_\tau$. Observe that $\mathbb{E}S_\tau=\mu\mathbb{E}\tau$ by Wald's identity, and that the distribution of $S_\tau$ is $\delta$-lattice. Since

$$\sum_{j\in\mathbb{Z}}f^{*}(x+\delta j)=\sum_{j\in\mathbb{Z}}\mathbb{E}\sum_{k=0}^{\tau-1}\sum_{i\le 0}f\big(x+\delta(j-i)\big)1_{\{S_k=\delta i\}}=\mathbb{E}\sum_{k=0}^{\tau-1}\sum_{i\le 0}1_{\{S_k=\delta i\}}\sum_{j\in\mathbb{Z}}f\big(x+\delta(j-i)\big)=\mathbb{E}\tau\sum_{j\in\mathbb{Z}}f(x+\delta j)<\infty,$$

an application of the already proved result in the case $\xi\ge 0$ a.s. yields

$$\lim_{n\to\infty}\mathbb{E}\sum_{k\ge 0}f^{*}(x+\delta n-S_{\tau_k})=\frac{\delta}{\mathbb{E}S_\tau}\sum_{j\in\mathbb{Z}}f^{*}(x+\delta j)=\mu^{-1}\delta\sum_{j\in\mathbb{Z}}f(x+\delta j).$$

The proof of Proposition 6.2.6 is complete. □
The following application of the key renewal theorem (Proposition 6.2.3) concerns the joint limit distribution of the undershoot $t-S_{\nu(t)-1}$ and the overshoot $S_{\nu(t)}-t$ as $t\to\infty$.

Proposition 6.2.7 Suppose that the distribution of the nonnegative $\xi$ is nonlattice with $\mu=\mathbb{E}\xi<\infty$. Then, as $t\to\infty$,

$$\big(t-S_{\nu(t)-1},\ S_{\nu(t)}-t\big)\ \overset{d}{\to}\ (\zeta,\vartheta) \quad (6.12)$$

where the distribution of $(\zeta,\vartheta)$ is given by

$$\mathbb{P}\{\zeta>u,\ \vartheta>v\}=\mu^{-1}\int_{u+v}^{\infty}\mathbb{P}\{\xi>y\}\,\mathrm{d}y,\qquad u,v\ge 0.$$

In particular,

$$\mathbb{P}\{\zeta\le u\}=\mathbb{P}\{\vartheta\le u\}=\mu^{-1}\int_0^{u}\mathbb{P}\{\xi>y\}\,\mathrm{d}y,\qquad u\ge 0.$$

Furthermore, $(\zeta,\vartheta)$ has the same distribution as $\big(UV,(1-U)V\big)$ where $U$ and $V$ are independent random variables, $U$ has a uniform distribution on $[0,1]$, and the distribution of $V$ is given by $\mathbb{P}\{V\in\mathrm{d}x\}=\mu^{-1}x\,\mathbb{P}\{\xi\in\mathrm{d}x\}$, $x>0$.
Proof Note that (6.12) is equivalent to

$$\lim_{t\to\infty}\mathbb{P}\{t-S_{\nu(t)-1}\ge u,\ S_{\nu(t)}-t>v\}=\mu^{-1}\int_{u+v}^{\infty}\mathbb{P}\{\xi>y\}\,\mathrm{d}y \quad (6.13)$$

for all nonnegative $u$ and $v$ because the limit distribution is continuous on $[0,\infty)\times[0,\infty)$.

For fixed $u,v\ge 0$, the function $f_{u,v}(t):=\mathbb{P}\{\xi>u+v+t\}$, $t\ge 0$, is nonincreasing and Lebesgue integrable on $\mathbb{R}_+$. The latter is a consequence of

$$\int_0^{\infty}f_{u,v}(t)\,\mathrm{d}t=\int_{u+v}^{\infty}\mathbb{P}\{\xi>t\}\,\mathrm{d}t\le\int_0^{\infty}\mathbb{P}\{\xi>t\}\,\mathrm{d}t=\mu<\infty.$$

Thus, $f_{u,v}$ is dRi by Lemma 6.2.1 (a). For $u\in[0,t]$ and $v\ge 0$, we have

$$\mathbb{P}\{t-S_{\nu(t)-1}\ge u,\ S_{\nu(t)}-t>v\}=\sum_{k\ge 1}\mathbb{P}\{S_{\nu(t)-1}\le t-u,\ S_{\nu(t)}>t+v,\ \nu(t)=k\}$$
$$=\sum_{k\ge 1}\mathbb{P}\{S_{k-1}\le t-u,\ S_{k-1}+\xi_k>t+v\}=\int_{[0,\,t-u]}\mathbb{P}\{\xi>t+v-y\}\,\mathrm{d}U(y)=\int_{[0,\,t-u]}f_{u,v}(t-u-y)\,\mathrm{d}U(y).$$

By Proposition 6.2.3, the last integral tends to

$$\mu^{-1}\int_0^{\infty}f_{u,v}(y)\,\mathrm{d}y=\mu^{-1}\int_{u+v}^{\infty}\mathbb{P}\{\xi>y\}\,\mathrm{d}y$$

as $t\to\infty$, which proves (6.13). The formulae for the marginal distributions follow by setting $u=0$ and $v=0$, respectively, in the formula for the joint distribution. Finally, the last assertion is a consequence of

$$\mathbb{P}\{UV>u,\ (1-U)V>v\}=\mathbb{P}\{u/V<U<1-v/V,\ V>u+v\}=\mathbb{E}1_{\{V>u+v\}}\big(1-(u+v)/V\big)$$
$$=\mu^{-1}\int_{(u+v,\infty)}\big(1-x^{-1}(u+v)\big)x\,\mathbb{P}\{\xi\in\mathrm{d}x\}=\mu^{-1}\big(\mathbb{E}\xi 1_{\{\xi>u+v\}}-(u+v)\mathbb{E}1_{\{\xi>u+v\}}\big)$$
$$=\mu^{-1}\big(\mathbb{E}\xi-\mathbb{E}(\xi\wedge(u+v))\big)=\mu^{-1}\int_{u+v}^{\infty}\mathbb{P}\{\xi>y\}\,\mathrm{d}y. \qquad\square$$
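The chain of equalities for $\mathbb{P}\{UV>u,(1-U)V>v\}$ can be verified numerically in a concrete case. For $\xi\sim\mathrm{Exp}(1)$ (so $\mu=1$ and $V$ has density $xe^{-x}$), both sides reduce to $e^{-(u+v)}$; the sketch below is our own illustration under these assumptions.

```python
import math

def joint_survival_uv(u, v, n=200_000, upper=60.0):
    """P{UV > u, (1-U)V > v} = E[(1 - (u+v)/V)^+] for V with density x e^{-x}.

    Computed by the trapezoid rule on [0, upper]; the integrand vanishes
    for x ≤ u + v, which builds in the positive part."""
    c, h = u + v, upper / n
    s = 0.0
    for i in range(n + 1):
        x = i * h
        if x > c:
            s += (1 - c / x) * x * math.exp(-x) * (0.5 if i in (0, n) else 1.0)
    return s * h

# For ξ ~ Exp(1): μ^{-1}∫_{u+v}^∞ P{ξ > y} dy = e^{-(u+v)}.
for u, v in [(0.3, 0.7), (1.0, 0.5), (0.0, 2.0)]:
    assert abs(joint_survival_uv(u, v) - math.exp(-(u + v))) < 1e-4
```

The exact computation behind the check is $\int_{u+v}^{\infty}(x-(u+v))e^{-x}\,\mathrm{d}x = e^{-(u+v)}$, in agreement with the memorylessness of the exponential overshoot.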

6.2.3 Relatives of the Key Renewal Theorem

Throughout Section 6.2.3 we assume that $\xi\ge 0$ a.s.

Suppose that $\mathbb{E}\xi<\infty$. Let $f$ be nonnegative and dRi. The key renewal theorem (Propositions 6.2.3 and 6.2.6) states that the limit $\lim_{t\to\infty}\int f(t-y)\,\mathrm{d}U(y)$ exists and is finite whenever the distribution of $\xi$ is nonlattice, whereas the limit only exists along a subsequence $t=nl$, $n\in\mathbb{N}$, when the distribution of $\xi$ is $l$-lattice. In particular, the key renewal theorem says nothing about what happens with the integrals when $t$ approaches $\infty$ along subsequences other than $nl$ in the $l$-lattice case. Lemma 6.2.8 given below fills this gap.
Lemma 6.2.8 If $f:\mathbb{R}\to\mathbb{R}_+$ is dRi on $\mathbb{R}_+$, then

$$\limsup_{t\to\infty}\int_{[0,t]}f(t-y)\,\mathrm{d}U(y)<\infty.$$

If $f$ is dRi on $(-\infty,0]$, then

$$\limsup_{t\to\infty}\int_{[t,\infty)}f(t-y)\,\mathrm{d}U(y)<\infty.$$

Proof If the distribution of $\xi$ is nonlattice, the (even stronger) assertion follows from the key renewal theorem (Proposition 6.2.3). Suppose that the distribution of $\xi$ is $l$-lattice, $l>0$. We only treat the case of direct Riemann integrability on $\mathbb{R}_+$. Since

$$f(t)\le\sum_{n\ge 1}\ \sup_{(n-1)l\le s<nl}f(s)\,1_{[(n-1)l,\,nl)}(t),\qquad t\ge 0,$$

we obtain

$$\int_{[0,t]}f(t-y)\,\mathrm{d}U(y)\le\sum_{n\ge 1}\ \sup_{(n-1)l\le s<nl}f(s)\,\big(U(t-(n-1)l)-U(t-nl)\big)\le U(l)\sum_{n\ge 1}\ \sup_{(n-1)l\le s<nl}f(s),$$

having utilized the subadditivity of $U$ on $\mathbb{R}$ (see (6.3)) for the last inequality. It remains to remark that the series on the right-hand side converges because $f$ is dRi. □
Given below is a counterpart of the key renewal theorem for the case when $f$ is nonintegrable.

Lemma 6.2.9 Let $0\le r_1<r_2\le 1$. Suppose that $f:\mathbb{R}_+\to\mathbb{R}_+$ is either nondecreasing with $\lim_{t\to\infty}\big(f(t)/\int_0^t f(y)\,\mathrm{d}y\big)=0$, or nonincreasing and, if $r_2=1$, locally integrable. If $\mu=\mathbb{E}\xi\in(0,\infty)$ and $\lim_{t\to\infty}\int_{(1-r_2)t}^{(1-r_1)t}f(y)\,\mathrm{d}y=\infty$, then

$$\int_{[r_1t,\,r_2t]}f(t-y)\,\mathrm{d}U(y)\ \sim\ \mu^{-1}\int_{(1-r_2)t}^{(1-r_1)t}f(y)\,\mathrm{d}y,\qquad t\to\infty.$$

Proof If the distribution of $\xi$ is nonlattice or 1-lattice, the proof runs along the same path as that of Theorem 4 in [248] which investigates the case $r_1=0$, $r_2=1$. If the distribution of $\xi$ is $l$-lattice, the distribution of $l^{-1}\xi$ is 1-lattice. Hence, putting $f_l(t):=f(lt)$ we obtain

$$\int_{[r_1t,\,r_2t]}f(t-y)\,\mathrm{d}U(y)=\int_{[r_1l^{-1}t,\,r_2l^{-1}t]}f_l(l^{-1}t-y)\,\mathrm{d}\Big(\sum_{n\ge 0}\mathbb{P}\{l^{-1}S_n\le y\}\Big)$$
$$\sim\ \mu^{-1}l\int_{(1-r_2)l^{-1}t}^{(1-r_1)l^{-1}t}f_l(y)\,\mathrm{d}y=\mu^{-1}\int_{(1-r_2)t}^{(1-r_1)t}f(y)\,\mathrm{d}y. \qquad\square$$
Remark 6.2.10 When $f$ is nondecreasing, the condition

$$\lim_{t\to\infty}\Big(f(t)\Big/\int_0^t f(y)\,\mathrm{d}y\Big)=0$$

cannot be omitted. Indeed, taking $f(t)=e^{t}$ we infer

$$\int_{[0,t]}f(t-y)\,\mathrm{d}U(y)\sim\big(1-\mathbb{E}e^{-\xi}\big)^{-1}e^{t},$$

whereas $\mu^{-1}\int_0^t f(y)\,\mathrm{d}y\sim\mu^{-1}e^{t}$ as $t\to\infty$.

Remark 6.2.11 We now give an example which demonstrates that the result of Lemma 6.2.9 may fail to hold for ill-behaved $f$. Let, for instance, $f(t)=1_{\mathbb{Q}^{c}_{+}}(t)$ where $\mathbb{Q}^{c}_{+}$ is the set of positive irrational numbers. Then $\int_0^t f(y)\,\mathrm{d}y=t$. Now suppose the distribution of $\xi$ is concentrated at rational points of $(0,\infty)$. Note that, choosing these points properly, the distribution of $\xi$ can be made lattice as well as nonlattice. The points of increase of the renewal function $U(y)$ are rational points only. Hence $\int_{[0,t]}f(t-y)\,\mathrm{d}U(y)=0$ for rational $t$.

Here is a weak version of Lemma 6.2.9 for functions $f$ of bounded variation.

Lemma 6.2.12 Let $f_1,f_2:\mathbb{R}_+\to\mathbb{R}_+$ satisfying $f_1(t)\ge f_2(t)$ for $t\ge 0$ and $\int_0^t\big(f_1(y)-f_2(y)\big)\,\mathrm{d}y>0$ for large enough $t>0$ be either nondecreasing functions with

$$\limsup_{t\to\infty}\frac{f_1(t)+f_2(t)}{\int_0^t\big(f_1(y)-f_2(y)\big)\,\mathrm{d}y}<\infty$$

or nonincreasing functions. Then

$$\limsup_{t\to\infty}\frac{\int_{[0,t]}\big(f_1(t-y)-f_2(t-y)\big)\,\mathrm{d}U(y)}{\int_0^t\big(f_1(y)-f_2(y)\big)\,\mathrm{d}y}<\infty.$$

Proof We shall use the decomposition

$$\int_{[0,t]}\big(f_1(t-y)-f_2(t-y)\big)\,\mathrm{d}U(y)=\int_{[0,[t]]}+\int_{([t],t]}=:I_1(t)+I_2(t).$$

For $I_2(t)$ we have

$$I_2(t)\le\int_{([t],t]}f_1(t-y)\,\mathrm{d}U(y)\le f_1(t-[t])\big(U(t)-U([t])\big)\le f_1(1)U(1)$$

by the subadditivity of $U$ (see (6.3)). As for $I_1(t)$, we obtain, using again the subadditivity of $U$,

$$I_1(t)=f_1(t)-f_2(t)+\sum_{j=0}^{[t]-1}\int_{(j,\,j+1]}\big(f_1(t-y)-f_2(t-y)\big)\,\mathrm{d}U(y)$$
$$\le f_1(t)-f_2(t)+\sum_{j=0}^{[t]-1}\big(f_1(t-j)-f_2(t-j-1)\big)\big(U(j+1)-U(j)\big)$$
$$\le f_1(t)+U(1)\sum_{j=0}^{[t]-1}\big(f_1(t-j)-f_2(t-j-1)\big)$$
$$\le f_1(t)+U(1)\sum_{j=0}^{[t]-1}\big(f_1([t]+1-j)-f_2([t]-1-j)\big)$$
$$=U(1)\int_{2}^{[t]}\big(f_1(y)-f_2(y)\big)\,\mathrm{d}y+O\big(f_1(t)+f_2(t)\big).$$

The proof of Lemma 6.2.12 is complete. □
Lemma 6.2.13 Let $\mathbb{E}\xi<\infty$ and $f:\mathbb{R}_+\to\mathbb{R}_+$ be a locally bounded and measurable function.

(a) Let $0\le r_1<r_2\le 1$. If there exists a monotone function $g:\mathbb{R}_+\to\mathbb{R}_+$ such that $f(t)\sim g(t)$ as $t\to\infty$, then

$$\int_{[r_1t,\,r_2t]}f(t-y)\,\mathrm{d}U(y)\ \sim\ \int_{[r_1t,\,r_2t]}g(t-y)\,\mathrm{d}U(y),\qquad t\to\infty,$$

provided that, when $r_2=1$,

$$\lim_{t\to\infty}\int_0^t f(y)\,\mathrm{d}y=\infty\quad\text{and}\quad\lim_{t\to\infty}\Big(f(t)\Big/\int_0^t f(y)\,\mathrm{d}y\Big)=0.$$

(b) If there exists a locally bounded and measurable function $g:\mathbb{R}_+\to\mathbb{R}_+$ such that $f(t)=o(g(t))$ as $t\to\infty$ and $\lim_{t\to\infty}\int_{[0,t]}g(t-y)\,\mathrm{d}U(y)=+\infty$, then

$$\int_{[0,t]}f(t-y)\,\mathrm{d}U(y)=o\Big(\int_{[0,t]}g(t-y)\,\mathrm{d}U(y)\Big),\qquad t\to\infty.$$

Proof (a) For any $\delta\in(0,1)$ there exists $t_0>0$ such that

$$1-\delta\le f(t)/g(t)\le 1+\delta \quad (6.14)$$

for all $t\ge t_0$.

Case $r_2<1$. We have, for $t\ge(1-r_2)^{-1}t_0$,

$$(1-\delta)\int_{[r_1t,\,r_2t]}g(t-y)\,\mathrm{d}U(y)\le\int_{[r_1t,\,r_2t]}f(t-y)\,\mathrm{d}U(y)\le(1+\delta)\int_{[r_1t,\,r_2t]}g(t-y)\,\mathrm{d}U(y).$$

Dividing both sides by $\int_{[r_1t,\,r_2t]}g(t-y)\,\mathrm{d}U(y)$ and sending $t\to\infty$ and then $\delta\to 0+$ gives the result.

Case $r_2=1$. Since $g$ is monotone, it is locally integrable. Further, $\lim_{t\to\infty}\int_0^t g(y)\,\mathrm{d}y=\infty$ and $\lim_{t\to\infty}\big(g(t)/\int_0^t g(y)\,\mathrm{d}y\big)=0$. Hence Lemma 6.2.9 applies and yields

$$\lim_{t\to\infty}\int_{[r_1t,\,t]}g(t-y)\,\mathrm{d}U(y)=\infty.$$

In view of (6.14) we have

$$\int_{[r_1t,\,t]}f(t-y)\,\mathrm{d}U(y)\le(1+\delta)\int_{[r_1t,\,t-t_0]}g(t-y)\,\mathrm{d}U(y)+\int_{(t-t_0,\,t]}f(t-y)\,\mathrm{d}U(y)$$
$$\le(1+\delta)\int_{[r_1t,\,t]}g(t-y)\,\mathrm{d}U(y)+U(t_0)\sup_{0\le y\le t_0}f(y)$$

for $t\ge(1-r_1)^{-1}t_0$, the last inequality following from the subadditivity of $U$ (see (6.3)). Dividing both sides by $\int_{[r_1t,\,t]}g(t-y)\,\mathrm{d}U(y)$ and sending $t\to\infty$ yields

$$\limsup_{t\to\infty}\frac{\int_{[r_1t,\,t]}f(t-y)\,\mathrm{d}U(y)}{\int_{[r_1t,\,t]}g(t-y)\,\mathrm{d}U(y)}\le 1+\delta.$$

The converse inequality for the lower limit follows analogously.

(b) For any $\delta\in(0,1)$ there exists $t_0>0$ such that $f(t)/g(t)\le\delta$ for all $t\ge t_0$. The rest of the proof is the same as for the case $r_2=1$ of part (a). □
When $f$ is regularly varying, a specialization of Lemma 6.2.9 reads as follows.

Lemma 6.2.14 Let $\mu=\mathbb{E}\xi\in(0,\infty)$ and $f:\mathbb{R}_+\to\mathbb{R}_+$ be locally bounded, measurable, and regularly varying at $+\infty$ of index $\beta\in(-1,\infty)$. If $\beta=0$, assume further that there exists a monotone function $g$ such that $f(t)\sim g(t)$ as $t\to\infty$. Then, for $0\le r_1<r_2\le 1$,

$$\int_{[r_1t,\,r_2t]}f(t-y)\,\mathrm{d}U(y)\ \sim\ \frac{tf(t)}{\mu(1+\beta)}\big((1-r_1)^{1+\beta}-(1-r_2)^{1+\beta}\big),\qquad t\to\infty.$$

Proof If $\beta\ne 0$, Lemma 6.1.4 (b) ensures the existence of a positive monotone function $g$ such that $f(t)\sim g(t)$ as $t\to\infty$. If $\beta=0$, such a function exists by assumption. Modifying $g$ if needed in the right vicinity of zero we can assume that $g$ is monotone and locally integrable. Therefore,

$$\int_{[r_1t,\,r_2t]}f(t-y)\,\mathrm{d}U(y)\ \sim\ \int_{[r_1t,\,r_2t]}g(t-y)\,\mathrm{d}U(y)\ \sim\ \mu^{-1}\int_{(1-r_2)t}^{(1-r_1)t}g(y)\,\mathrm{d}y$$

where the first equivalence follows from Lemma 6.2.13 (a) and the second is a consequence of Lemma 6.2.9 (observe that, with $h=f$ or $h=g$, the relations $\lim_{t\to\infty}\big(h(t)/\int_0^t h(y)\,\mathrm{d}y\big)=0$ and $\lim_{t\to\infty}\int_{(1-r_2)t}^{(1-r_1)t}h(y)\,\mathrm{d}y=\infty$ hold by Lemma 6.1.4 (c) because $h$ is regularly varying of index $\beta>-1$). Finally, using Lemma 6.1.4 (c) we obtain

$$\mu^{-1}\int_{(1-r_2)t}^{(1-r_1)t}g(y)\,\mathrm{d}y\ \sim\ \frac{tg(t)}{\mu(1+\beta)}\big((1-r_1)^{1+\beta}-(1-r_2)^{1+\beta}\big)\ \sim\ \frac{tf(t)}{\mu(1+\beta)}\big((1-r_1)^{1+\beta}-(1-r_2)^{1+\beta}\big).$$

The proof of Lemma 6.2.14 is complete. □
Lemma 6.2.15 Suppose that $\mathbb{E}\xi=\infty$ and let $f:\mathbb{R}_+\to\mathbb{R}_+$ be a measurable and locally bounded function such that $\lim_{t\to\infty}\big(f(t)/\mathbb{P}\{\xi>t\}\big)=c\in[0,\infty]$. Then

$$\lim_{t\to\infty}\int_{[0,t]}f(t-y)\,\mathrm{d}U(y)=c.$$

Proof Denote by $V(t):=t-S_{\nu(t)-1}$ the undershoot of $(S_n)_{n\in\mathbb{N}_0}$ at $t$ and put $h(t):=f(t)/\mathbb{P}\{\xi>t\}$ for $t\ge 0$. Under the sole assumption $\mathbb{E}\xi=\infty$ we have $V(t)\overset{\mathrm{P}}{\to}\infty$ by Corollary 6.2.5, whence $h(V(t))\overset{\mathrm{P}}{\to}c$. If $c<\infty$, the function $h$ is bounded and $\lim_{t\to\infty}\mathbb{E}h(V(t))=c$ by the dominated convergence theorem. If $c=\infty$, we obtain $\lim_{t\to\infty}\mathbb{E}h(V(t))=c=\infty$ by Fatou's lemma. In view of the representation

$$\int_{[0,t]}f(t-y)\,\mathrm{d}U(y)=\mathbb{E}h(V(t)),\qquad t\ge 0,$$

the proof of Lemma 6.2.15 is complete. □
Lemma 6.2.16 Suppose that $\mathbb{P}\{\xi>t\}$ is regularly varying of index $-\alpha$ for some $\alpha\in(0,1)$. Let $f:\mathbb{R}_+\to\mathbb{R}$ be a locally bounded and measurable function which varies regularly at $\infty$ of index $\rho$ for some $\rho\ge-\alpha$. If $\rho=-\alpha$, assume additionally that there exists a positive nondecreasing function $q$ such that $\lim_{t\to\infty}\frac{f(t)}{\mathbb{P}\{\xi>t\}q(t)}=1$. Then

(a)

$$\lim_{\gamma\to 1-}\ \limsup_{t\to\infty}\ \frac{\mathbb{P}\{\xi>t\}}{f(t)}\int_{[\gamma t,\,t]}f(t-y)\,\mathrm{d}U(y)=0; \quad (6.15)$$

in particular,

$$\lim_{t\to\infty}\frac{\mathbb{P}\{\xi>t\}}{f(t)}\int_{[0,t]}f(t-y)\,\mathrm{d}U(y)=\frac{\Gamma(1+\rho)}{\Gamma(1-\alpha)\Gamma(1+\alpha+\rho)};$$

(b) $\int_{[0,t]}f_1(t-x)\,\mathrm{d}U(x)=o\big(f(t)/\mathbb{P}\{\xi>t\}\big)$ as $t\to\infty$ for any positive locally bounded function $f_1$ such that $f_1(t)=o(f(t))$ as $t\to\infty$.
Proof (a) With $r(t):=f(t)/\mathbb{P}\{\xi>t\}$ for $t\ge 0$, the expression under the double limit in (6.15) equals $\mathbb{E}r(V(t))1_{\{V(t)\le(1-\gamma)t\}}/r(t)$ where, as before, $V(t)=t-S_{\nu(t)-1}$ is the undershoot of $(S_n)_{n\in\mathbb{N}_0}$ at $t$.

Case 1 in which $\rho>-\alpha$, or $\rho=-\alpha$ and $\lim_{t\to\infty}r(t)=\infty$. If $\rho>-\alpha$, then, by Lemma 6.1.4 (b), there exists a nondecreasing function $q$ such that $r(t)\sim q(t)$ as $t\to\infty$. If $\rho=-\alpha$, such a function $q$ exists by assumption. Now fix $\varepsilon>0$ and let $t_0>0$ be such that $(1-\varepsilon)q(t)\le r(t)\le(1+\varepsilon)q(t)$ for all $t\ge t_0$. Then

$$\frac{\mathbb{E}r(V(t))1_{\{V(t)\le t_0\}}}{r(t)}\le\frac{\sup_{0\le y\le t_0}r(y)}{r(t)}\to 0,\qquad t\to\infty,$$

by the local boundedness of $r$. Further, for $t$ such that $(1-\gamma)t>t_0$,

$$\frac{\mathbb{E}r(V(t))1_{\{t_0<V(t)\le(1-\gamma)t\}}}{r(t)}\le\frac{1+\varepsilon}{1-\varepsilon}\,\frac{\mathbb{E}q(V(t))1_{\{V(t)\le(1-\gamma)t\}}}{q(t)}\le\frac{1+\varepsilon}{1-\varepsilon}\,\mathbb{P}\{V(t)\le(1-\gamma)t\}.$$

By a well-known Dynkin–Lamperti result (see, for instance, Theorem 8.6.3 in [44]),

$$\lim_{t\to\infty}\mathbb{P}\{V(t)\le(1-\gamma)t\}=\frac{1}{\Gamma(\alpha)\Gamma(1-\alpha)}\int_0^{1-\gamma}y^{-\alpha}(1-y)^{\alpha-1}\,\mathrm{d}y.$$

When $\gamma\to 1-$, the last integral goes to zero, which proves (6.15).

Case 2 in which $\rho=-\alpha$ and $r$ is bounded. Then $\mathbb{E}r(V(t))1_{\{V(t)\le(1-\gamma)t\}}\le{\rm const}\cdot\mathbb{P}\{V(t)\le(1-\gamma)t\}$ for $t\ge 0$. The rest of the proof is the same as in the previous case.

Turning to the second assertion of part (a), we observe that

$$\frac{\mathbb{P}\{\xi>t\}}{f(t)}\int_{[0,\gamma t]}f(t-y)\,U(\mathrm{d}y)=\mathbb{P}\{\xi>t\}U(t)\int_{[0,\gamma]}\frac{f(t(1-y))}{f(t)}\,U_t(\mathrm{d}y)$$

where $U_t([0,x])=U(tx)/U(t)$, $0\le x\le 1$. Formula (8.6.4) on p. 361 in [44] says that $\lim_{t\to\infty}\mathbb{P}\{\xi>t\}U(t)=(\Gamma(1-\alpha)\Gamma(1+\alpha))^{-1}$. Hence, the measures $U_t(\mathrm{d}x)$ converge vaguely to $\alpha x^{\alpha-1}\,\mathrm{d}x$ as $t\to\infty$. This in combination with Lemma 6.1.4 (a) yields

$$\lim_{\gamma\to 1-}\lim_{t\to\infty}\mathbb{P}\{\xi>t\}U(t)\int_{[0,\gamma]}\frac{f(t(1-y))}{f(t)}\,U_t(\mathrm{d}y)=\frac{1}{\Gamma(1-\alpha)\Gamma(1+\alpha)}\lim_{\gamma\to 1-}\int_0^{\gamma}(1-y)^{\rho}\,\alpha y^{\alpha-1}\,\mathrm{d}y=\frac{\Gamma(1+\rho)}{\Gamma(1+\alpha+\rho)\Gamma(1-\alpha)}.$$

An appeal to (6.15) finishes the proof of part (a).

(b) For any $\delta>0$ there exists $t_0>0$ such that $f_1(t)/f(t)\le\delta$ for all $t\ge t_0$. Hence

$$\int_{[0,t]}f_1(t-y)\,\mathrm{d}U(y)\le\delta\int_{[0,t]}f(t-y)\,\mathrm{d}U(y)+\big(U(t)-U(t-t_0)\big)\sup_{0\le y\le t_0}f_1(y)$$

for $t\ge t_0$. According to part (a), the first term on the right-hand side grows like ${\rm const}\cdot f(t)/\mathbb{P}\{\xi>t\}$. By Blackwell's theorem in the infinite mean case (see (6.10)), $\lim_{t\to\infty}\big(U(t)-U(t-t_0)\big)=0$. Dividing the inequality above by $f(t)/\mathbb{P}\{\xi>t\}$ and sending first $t\to\infty$ and then $\delta\to 0+$ finishes the proof. □

6.2.4 Strong Approximation of the Stationary Renewal Process

Set $\nu^{*}(t)=\#\{k\in\mathbb{N}_0: S^{*}_k\le t\}$ for $t\ge 0$ where the sequence $(S^{*}_k)$ is as defined in (3.4). Although the process $(\nu^{*}(t))_{t\ge 0}$ is known as the stationary renewal process, it is a process with stationary increments rather than a stationary process.

Lemma 6.2.17 Suppose that $\mathbb{E}\xi^{r}<\infty$ for some $r>2$. Then there exists a Brownian motion $S_2$ such that, for some random, almost surely finite $t_0>0$ and deterministic $A>0$,

$$|\nu^{*}(t)-\mu^{-1}t-\sigma\mu^{-3/2}S_2(t)|\le At^{1/r}$$

for all $t\ge t_0$, where $\sigma^{2}={\rm Var}\,\xi$ and $\mu=\mathbb{E}\xi$.

Proof According to formula (3.13) in [71], there exists a Brownian motion $S_2$ such that

$$\sup_{0\le u\le t}|S_{[u]}-\mu u-\sigma S_2(u)|=O(t^{1/r})\quad\text{a.s.}$$

This obviously implies

$$\sup_{0\le u\le t}|S^{*}_{[u]}-\mu u-\sigma S_2(u)|=O(t^{1/r})\quad\text{a.s.}$$

and thereupon

$$\sup_{0\le u\le t}|\nu^{*}(u)-\mu^{-1}u-\sigma\mu^{-3/2}S_2(u)|=O(t^{1/r})\quad\text{a.s.}$$

by Theorem 3.1 in [71]. This proves the lemma with a possibly random $A$. As noted in Remark 3.1 of the cited paper, the Blumenthal 0–1 law ensures that the constant $A$ can be taken deterministic. □

6.3 Ordinary Random Walks

In this section we discuss ordinary random walks $(S_n)_{n\in\mathbb{N}_0}$ with two-sided jumps, Proposition 6.3.4 being the only exception. Before we formulate the first result, the following notation has to be recalled: $J(x)=x\big/\int_0^x\mathbb{P}\{\xi^{-}>y\}\,\mathrm{d}y$, $x>0$;

$$\tau=\inf\{k\in\mathbb{N}: S_k<0\}\quad\text{and}\quad\tau_w=\inf\{k\in\mathbb{N}: S_k\ge 0\}.$$

Theorem 6.3.1 Let $(S_n)_{n\in\mathbb{N}_0}$ be negatively divergent. For a function $f$ as defined in Theorem 1.3.1, the following assertions are equivalent:

$$\mathbb{E}f\big(\sup_{n\ge 0}S_n\big)<\infty; \quad (6.16)$$
$$\mathbb{E}f\big(\sup_{0\le n\le\tau-1}S_n\big)J\big(\sup_{0\le n\le\tau-1}S_n\big)<\infty; \quad (6.17)$$
$$\mathbb{E}f(\xi^{+})J(\xi^{+})<\infty; \quad (6.18)$$
$$\mathbb{E}f(S_{\tau_w})1_{\{\tau_w<\infty\}}<\infty. \quad (6.19)$$

Proof (6.16)⇒(6.17). We retain the notation of the proof of (1.8)⇒(1.6) in Theorem 1.3.1 with the exception that we use
$$\hat\eta_n:=\sup\big(0,\ \xi_{\sigma_{n-1}+1},\ \xi_{\sigma_{n-1}+1}+\xi_{\sigma_{n-1}+2},\ \ldots,\ \xi_{\sigma_{n-1}+1}+\ldots+\xi_{\sigma_n-1}\big)$$
for $n\in\mathbb{N}$ rather than $\hat\eta_n$ as given there. Then $(\hat\xi_n,\hat\eta_n)_{n\in\mathbb{N}}$ are independent copies of $(S_\sigma,\,\sup_{0\le k\le\sigma-1}S_k)$ and $\sup_{n\ge0}S_n=\sup_{n\ge1}(\hat S_{n-1}+\hat\eta_n)$ a.s., where $\hat S_n:=\hat\xi_1+\ldots+\hat\xi_n$. Thus we have represented $\sup_{n\ge0}S_n$ as the supremum of a perturbed random walk to which Lemma 1.3.11 applies. This allows us to conclude that $Ef\big(\sup_{n\ge1}(\hat S_{n-1}+\hat\eta_n)\big)=Ef(\sup_{n\ge0}S_n)<\infty$ entails
$$\infty>Ef(\hat\eta^+)J(\hat\eta^+)=Ef\Big(\sup_{0\le n\le\sigma-1}S_n\Big)J\Big(\sup_{0\le n\le\sigma-1}S_n\Big)$$
because $\hat\xi_1<0$ a.s.
(6.17)⇒(6.18). While on the event $\{\sigma=1\}=\{S_1<0\}$ we have $\sup_{0\le n\le\sigma-1}S_n=0=S_1^+$, on the event $\{\sigma>1\}=\{S_1\ge0\}$ we have $\sup_{0\le n\le\sigma-1}S_n\ge S_1=S_1^+$. Hence $\sup_{0\le n\le\sigma-1}S_n\ge S_1^+$ a.s., and the implication follows because, recalling that $J$ is nondecreasing (see the proof of (1.7)⇒(1.8) in Theorem 1.3.1) and using Lemma 1.3.9, we can assume that $f\cdot J$ is nondecreasing.
Throughout the rest of the proof, without loss of generality (see Lemma 1.3.9), we shall assume that $f$ is nondecreasing and differentiable and that $f(0)=0$ and $f(x+y)\le c(f(x)+f(y))$ for all $x,y\ge0$ and some $c>0$. In particular, $f(x)\le 2cf(x/2)$ for all $x\ge0$.
(6.18)⇒(6.19). Using formula (3.7a) on p. 399 in [89], with $\xi$ replacing the generic step there, we infer
$$P\{S_{\sigma_w}\ge t,\ \sigma_w<\infty\}=EU^<(\xi-t)1_{\{\xi\ge t\}}$$
for $t>0$ where, with $\sigma_0=0$, $\sigma_1=\sigma$ and $\sigma_n=\inf\{k>\sigma_{n-1}: S_k<S_{\sigma_{n-1}}\}$ for $n\ge2$, $U^<(y):=\sum_{n\ge0}P\{-S_{\sigma_n}\le y\}$ is a renewal function. Hence
$$Ef(S_{\sigma_w})1_{\{\sigma_w<\infty\}}=\int_0^\infty f'(t)P\{S_{\sigma_w}\ge t,\ \sigma_w<\infty\}\,dt=E\int_0^{\xi^+}f'(t)U^<(\xi-t)\,dt\le Ef(\xi^+)U^<(\xi^+)$$
$$\le 2\,E\bigg(f(\xi^+)\frac{\xi^+}{\int_0^{\xi^+}P\{-S_\sigma>y\}\,dy}\bigg)\le 2\,Ef(\xi^+)J(\xi^+)<\infty.$$
We have used Erickson's inequality (6.5) for the first inequality and $P\{-S_1>y\}\le P\{-S_\sigma>y\}$, $y>0$ for the second.
(6.19)⇒(6.16). According to the formula given on p. 1236 in [3],
$$Ef\Big(\sup_{n\ge0}S_n\Big)=(1-\gamma)\sum_{n\ge0}\gamma^n\,Ef(V_n)$$
where $\gamma:=P\{\sigma_w<\infty\}$ and $(V_n)_{n\in\mathbb{N}_0}$ is a zero-delayed ordinary random walk with increments having distribution $P\{S_{\sigma_w}\in\cdot\,|\,\sigma_w<\infty\}$. It suffices to show that $Ef(V_1)<\infty$ (which is equivalent to (6.19)) entails $Ef(V_n)=O(n^\delta)$ as $n\to\infty$ for some $\delta>0$.
The condition $Ef(V_1)<\infty$ ensures that $EV_1^\beta<\infty$ for some $\beta\in(0,1)$. Set $f_\beta(x)=f(x^{1/\beta})$ and observe that $f_\beta$ still possesses all the properties of $f$ stated in the paragraph preceding the proof of (6.18)⇒(6.19). By the subadditivity of $x\mapsto x^\beta$ on $\mathbb{R}^+$,
$$Ef(V_n)\le Ef_\beta\Big(\sum_{k=1}^n\big((V_k-V_{k-1})^\beta-EV_1^\beta\big)+nEV_1^\beta\Big)\le c_1\bigg(Ef_\beta\Big(\Big|\sum_{k=1}^n\big((V_k-V_{k-1})^\beta-EV_1^\beta\big)\Big|\Big)+f_\beta\big(nEV_1^\beta\big)\bigg)$$
for some $c_1>0$. Since $\big(\sum_{k=1}^{m\wedge n}\big((V_k-V_{k-1})^\beta-EV_1^\beta\big)\big)_{m\in\mathbb{N}_0}$ is a martingale w.r.t. the natural filtration we can use the Burkholder-Davis-Gundy inequality (Theorem 2 on p. 409 in [68]) to infer
$$Ef_\beta\Big(\Big|\sum_{k=1}^n\big((V_k-V_{k-1})^\beta-EV_1^\beta\big)\Big|\Big)\le Ef_\beta\Big(\sup_{m\le n}\Big|\sum_{k=1}^m\big((V_k-V_{k-1})^\beta-EV_1^\beta\big)\Big|\Big)$$
$$\le c_2\Big(f_\beta\big(nE|V_1^\beta-EV_1^\beta|\big)+Ef_\beta\Big(\sup_{m\le n}\big|(V_m-V_{m-1})^\beta-EV_1^\beta\big|\Big)\Big)\le c_2\Big(f_\beta\big(nE|V_1^\beta-EV_1^\beta|\big)+nEf_\beta\big(|V_1^\beta-EV_1^\beta|\big)\Big)$$
for some $c_2$ which does not depend on $n$. Thus, we have shown that $Ef(V_n)$ exhibits at most power growth. □
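Theorem 6.3.1 concerns moments of $\sup_{n\ge0}S_n$ for a negatively divergent walk. A quick sanity check is possible for the two-point walk $P\{\xi=1\}=p=0.3$, $P\{\xi=-1\}=0.7$ (an assumed toy example): by the classical gambler's-ruin formula this skip-free walk satisfies $P\{\sup_{n\ge0}S_n\ge m\}=(p/(1-p))^m$, so the supremum has a geometric tail and (6.16) holds for every $f$ of power growth. A Monte Carlo sketch:

```python
import random

random.seed(2)
p = 0.3                      # P{xi = +1}; P{xi = -1} = 0.7, so the walk drifts down
q = p / (1 - p)              # gambler's ruin: P{sup_{n>=0} S_n >= m} = q**m
n_paths = 20_000
hits = 0
for _ in range(n_paths):
    s, running_max = 0, 0
    while s > -25:           # once this low, further records have probability ~ q**25
        s += 1 if random.random() < p else -1
        running_max = max(running_max, s)
    hits += (running_max >= 1)
est = hits / n_paths         # estimates P{sup S_n >= 1} = q = 3/7
print(round(est, 4), round(q, 4))
```

The truncation at level $-25$ and the tolerance are illustrative choices; the estimate should agree with $q$ up to Monte Carlo error.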
Formula (6.20) is needed for the proof of Theorem 3.4.3. Recall that
$$\sigma_w=\inf\{k\in\mathbb{N}: S_k\le0\},$$
$\tau_0=0$, $\tau_1=\tau=\inf\{k\in\mathbb{N}: S_k>0\}$ and $\tau_n=\inf\{k>\tau_{n-1}: S_k>S_{\tau_{n-1}}\}$ for $n\ge2$.
Lemma 6.3.2 Let $(S_n)_{n\in\mathbb{N}_0}$ be positively divergent and $h:\mathbb{R}\to\mathbb{R}^+$ a measurable function. Then
$$\sum_{n\ge0}Eh(S_n)=E\tau\int_{[0,\infty)}Eh\Big(x+\inf_{n\ge0}S_n\Big)\,d\Big(\sum_{n\ge0}P\{S_{\tau_n}\le x\}\Big).\qquad(6.20)$$
Proof We first prove that
$$\sum_{j\ge0}Ef(S_j)1_{\{\tau>j\}}=E\tau\,Ef\Big(\inf_{n\ge0}S_n\Big).\qquad(6.21)$$
To this end, we set $J:=\sup\{n\in\mathbb{N}_0: S_n=\inf_{k\ge0}S_k\}$ and observe that
$$\{J=j\}=\{S_k\ge S_j,\ 0\le k\le j,\ S_n>S_j,\ n>j\},\quad j\in\mathbb{N}_0.$$
Formula (6.21) follows by summing the following equalities over $j\in\mathbb{N}_0$:
$$Ef\Big(\inf_{k\ge0}S_k\Big)1_{\{J=j\}}=Ef(S_j)1_{\{S_k\ge S_j,\,0\le k\le j\}}\,P\Big\{\inf_{n\ge1}S_n>0\Big\}=Ef(S_j)1_{\{S_k\le0,\,0\le k\le j\}}\,P\{\sigma_w=\infty\}=Ef(S_j)1_{\{\tau>j\}}\big/E\tau$$
(see Theorem 2 on p. 146 in [68] for $P\{\sigma_w=\infty\}=1/E\tau$).
Since $(S_k-S_{\tau_n})_{\tau_n\le k\le\tau_{n+1}-1}$ is independent of $S_{\tau_n}$ and has the same distribution as $(S_j)_{0\le j\le\tau-1}$ we obtain by using (6.21) with $f(\cdot)=h(x+\cdot)$
$$\sum_{n\ge0}Eh(S_n)=E\sum_{n\ge0}\sum_{k=\tau_n}^{\tau_{n+1}-1}h(S_k)$$
$$=\sum_{n\ge0}\int_{[0,\infty)}E\big(h(x)+h(x+\xi_{\tau_n+1})+\ldots+h(x+\xi_{\tau_n+1}+\ldots+\xi_{\tau_{n+1}-1})\big)\,dP\{S_{\tau_n}\le x\}$$
$$=\sum_{n\ge0}\int_{[0,\infty)}E\sum_{j\ge0}h(x+S_j)1_{\{\tau>j\}}\,dP\{S_{\tau_n}\le x\}=\sum_{n\ge0}\int_{[0,\infty)}E\tau\,Eh\Big(x+\inf_{k\ge0}S_k\Big)\,dP\{S_{\tau_n}\le x\}$$
$$=E\tau\int_{[0,\infty)}Eh\Big(x+\inf_{k\ge0}S_k\Big)\,d\Big(\sum_{n\ge0}P\{S_{\tau_n}\le x\}\Big).$$
□
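Identity (6.21) is easy to test by simulation. For the assumed skip-free walk $P\{\xi=1\}=p=0.7$, $P\{\xi=-1\}=0.3$ and $f(x)=e^x$ the right-hand side is explicit: $E\tau=1/(2p-1)$ (Wald's identity, since $S_\tau=1$), and the infimum is geometric, $P\{\inf_{n\ge0}S_n=-m\}=r^m(1-r)$ with $r=(1-p)/p$. The left-hand side $\sum_{j\ge0}Ef(S_j)1_{\{\tau>j\}}=E\sum_{j=0}^{\tau-1}e^{S_j}$ is estimated by Monte Carlo:

```python
import math
import random

random.seed(3)
p = 0.7                       # P{xi = +1}; the walk drifts to +infinity
r = (1 - p) / p               # P{the walk ever reaches -1}

n_paths = 200_000
acc = 0.0
for _ in range(n_paths):
    s = 0
    while s <= 0:             # the indices j = 0, ..., tau - 1 are exactly those
        acc += math.exp(s)    # visited before the walk first becomes positive
        s += 1 if random.random() < p else -1
lhs = acc / n_paths

E_tau = 1 / (2 * p - 1)                   # Wald: 1 = E S_tau = E tau * E xi
E_f_inf = (1 - r) / (1 - r / math.e)      # E exp(inf S_n) for the geometric infimum
rhs = E_tau * E_f_inf
print(round(lhs, 3), round(rhs, 3))
```

All concrete choices (the two-point law, $f$, sample size) are assumptions made only for this check.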

Lemma 6.3.3 is used in the proof of Theorem 1.4.6. Recall that $N(x)=\sum_{n\ge0}1_{\{S_n\le x\}}$ for $x\in\mathbb{R}$ is the number of visits of $(S_n)$ to $(-\infty,x]$.
Lemma 6.3.3 Let $p>0$ and $I\subset\mathbb{R}$ an open interval such that $E\big(\sum_{n\ge0}1_{\{S_n\in I\}}\big)^p\in(0,\infty)$. Then $E\big(\sum_{n\ge0}1_{\{S_n\in J\}}\big)^p<\infty$ for any bounded interval $J\subset\mathbb{R}$. In particular, $E(N(x))^p<\infty$ for some $x\in\mathbb{R}$ entails $E(N(y))^p<\infty$ for every $y\in\mathbb{R}$.
Proof Let $I=(a,b)$ be such that $E\big(\sum_{n\ge0}1_{\{S_n\in I\}}\big)^p\in(0,\infty)$. We assume w.l.o.g. that $-\infty<a<b<\infty$. We first show that
$$E\Big(\sum_{n\ge0}1_{\{|S_n|<\varepsilon\}}\Big)^p<\infty\quad\text{for some }\varepsilon>0.\qquad(6.22)$$
Pick $\varepsilon>0$ so small that $I_\varepsilon:=(a+\varepsilon,b-\varepsilon)$ satisfies $E\big(\sum_{n\ge0}1_{\{S_n\in I_\varepsilon\}}\big)^p>0$. Then $P\{S_n\in I_\varepsilon\}>0$ for some $n\in\mathbb{N}$. In particular, $P\{\tau(I_\varepsilon)<\infty\}>0$ where $\tau(I_\varepsilon)=\inf\{n\in\mathbb{N}_0: S_n\in I_\varepsilon\}$. Using the strong Markov property at $\tau(I_\varepsilon)$, we get
$$\infty>E\Big(\sum_{n\ge0}1_{\{S_n\in I\}}\Big)^p\ge E\bigg(1_{\{\tau(I_\varepsilon)<\infty\}}\Big(\sum_{n\ge\tau(I_\varepsilon)}1_{\{|S_n-S_{\tau(I_\varepsilon)}|<\varepsilon\}}\Big)^p\bigg)=P\{\tau(I_\varepsilon)<\infty\}\,E\Big(\sum_{n\ge0}1_{\{|S_n|<\varepsilon\}}\Big)^p.$$
Hence, (6.22) holds. Now let $J$ be a nonempty bounded interval $\subset\mathbb{R}$, and $J_1,\ldots,J_m$ open intervals of length at most $\varepsilon$ such that $J\subset J_1\cup\ldots\cup J_m$. Using the inequality $(x_1+\ldots+x_m)^p\le(m^{p-1}\vee1)(x_1^p+\ldots+x_m^p)$, $x_j\ge0$ for $j=1,\ldots,m$, leads to
$$E\Big(\sum_{n\ge0}1_{\{S_n\in J\}}\Big)^p\le E\Big(\sum_{k=1}^m\sum_{n\ge0}1_{\{S_n\in J_k\}}\Big)^p\le(m^{p-1}\vee1)\sum_{k=1}^mE\Big(\sum_{n\ge0}1_{\{S_n\in J_k\}}\Big)^p.$$
Therefore, it suffices to prove the result under the additional assumption that the length of $J$ is at most $\varepsilon$. Using the strong Markov property at $\tau(J):=\inf\{n\in\mathbb{N}_0: S_n\in J\}$ gives
$$E\Big(\sum_{n\ge0}1_{\{S_n\in J\}}\Big)^p\le E\bigg(1_{\{\tau(J)<\infty\}}\Big(\sum_{n\ge\tau(J)}1_{\{|S_n-S_{\tau(J)}|<\varepsilon\}}\Big)^p\bigg)=P\{\tau(J)<\infty\}\,E\Big(\sum_{n\ge0}1_{\{|S_n|<\varepsilon\}}\Big)^p<\infty.$$
This proves the first assertion of the lemma. Concerning the second, assume that $E(N(x))^p<\infty$ for some $x\in\mathbb{R}$. Then, for any $y>x$,
$$E(N(y))^p\le(2^{p-1}\vee1)\bigg(E(N(x))^p+E\Big(\sum_{n\ge0}1_{\{x<S_n\le y\}}\Big)^p\bigg)<\infty$$
where the last term is finite by the first part of the lemma. □
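For a concrete positively divergent walk the quantity $N(x)$ of Lemma 6.3.3 can be computed two ways and compared. With $P\{\xi=1\}=0.7$, $P\{\xi=-1\}=0.3$ (assumed for illustration), $EN(0)=\sum_{n\ge0}P\{S_n\le0\}$ is a rapidly convergent binomial series, while a Monte Carlo run counts the visits directly; returns below $0$ after the walk reaches level $30$ have probability of order $(3/7)^{30}$ and are ignored.

```python
import math
import random

p = 0.7                                  # P{xi = +1}; positively divergent walk

# Exact: E N(0) = sum_n P{S_n <= 0} = sum_n P{Bin(n, p) <= n/2};
# the tail beyond n = 400 is smaller than 1e-7 and is dropped
exact = 0.0
for n in range(401):
    exact += sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
                 for k in range(n // 2 + 1))

# Monte Carlo count of visits to (-infinity, 0]
random.seed(4)
n_paths = 50_000
total = 0
for _ in range(n_paths):
    s = 0
    while s < 30:                        # truncation; see the lead-in
        if s <= 0:
            total += 1
        s += 1 if random.random() < p else -1
mc = total / n_paths
print(round(exact, 3), round(mc, 3))
```

Both numbers estimate the same finite moment, in line with the lemma's assertion that finiteness does not depend on the interval chosen.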
Given below are counterparts for $(S_n)$ of the results obtained in Section 1.4 for the perturbed random walks. Recall that $\tau(x)$ and $\rho(x)$ denote the first-passage time of $(S_n)$ into $(x,\infty)$ and the last-exit time of $(S_n)$ from $(-\infty,x]$, respectively.
Proposition 6.3.4 Assume that $P\{\xi\ge0\}=1$ and let $\beta:=P\{\xi=0\}\in[0,1)$. Then for $a>0$ the following conditions are equivalent:
$$Ee^{a\tau(x)}<\infty\quad\text{for some/all}\ x\ge0;$$
$$a<-\log\beta,$$
where $-\log\beta:=\infty$ if $\beta=0$. In particular, $E(\tau(x))^p<\infty$ for all $x\ge0$ and any $p>0$. The same results hold for $N(x)$ and $\rho(x)$.
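Proposition 6.3.4 becomes completely explicit in the assumed toy case $P\{\xi=1\}=1-\beta$, $P\{\xi=0\}=\beta$: then $\tau(0)$ is geometric, $P\{\tau(0)=n\}=\beta^{n-1}(1-\beta)$, so $Ee^{a\tau(0)}=(1-\beta)e^a/(1-\beta e^a)$ precisely when $a<-\log\beta$. The sketch checks the convergent case against the closed form and exhibits divergence beyond the threshold:

```python
import math

beta = 0.5                     # P{xi = 0}; the threshold is -log(beta) = log 2
a = 0.3                        # below the threshold: E exp(a * tau(0)) is finite
series = sum(math.exp(a * n) * beta**(n - 1) * (1 - beta) for n in range(1, 500))
closed = (1 - beta) * math.exp(a) / (1 - beta * math.exp(a))

# Above the threshold the summands e^{an} beta^{n-1} (1-beta) grow without bound;
# compare their logarithms to avoid floating-point overflow
bad_a = 0.8
log_term_1000 = bad_a * 1000 + 999 * math.log(beta) + math.log(1 - beta)
print(round(series, 6), round(closed, 6), log_term_1000 > 0)
```

The truncation at $500$ terms is harmless because the summands decay geometrically at rate $\beta e^a<1$.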
Theorem 6.3.5 Let $(S_n)_{n\in\mathbb{N}_0}$ be positively divergent with $P\{\xi<0\}>0$ and $a>0$. The following assertions are equivalent:
$$\sum_{n\ge1}n^{-1}e^{an}P\{S_n\le x\}<\infty\quad\text{for some/all}\ x\ge0;$$
$$Ee^{a\rho(x)}<\infty\quad\text{for some/all}\ x\ge0;$$
$$Ee^{aN(x)}<\infty\quad\text{for some/all}\ x\ge0;$$
$$a\le-\log\inf_{t\ge0}Ee^{-t\xi}.$$

Theorem 6.3.6 Let $(S_n)_{n\in\mathbb{N}_0}$ be positively divergent with $P\{\xi<0\}>0$ and $a>0$. The following assertions are equivalent:
$$\sum_{n\ge0}e^{an}P\{S_n\le x\}<\infty\quad\text{for some/all}\ x\ge0;$$
$$Ee^{a\tau(x)}<\infty\quad\text{for some/all}\ x\ge0;$$
$$a<-\log\inf_{t\ge0}Ee^{-t\xi}\quad\text{or}\quad a=-\log\inf_{t\ge0}Ee^{-t\xi}\ \text{and}\ E\xi e^{-\gamma_0\xi}>0,$$
where $\gamma_0$ is the unique positive number such that $Ee^{-\gamma_0\xi}=\inf_{t\ge0}Ee^{-t\xi}$.
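The critical exponent $-\log\inf_{t\ge0}Ee^{-t\xi}$ appearing in Theorems 6.3.5 and 6.3.6 is easy to compute numerically. For the assumed two-point law $P\{\xi=2\}=0.8$, $P\{\xi=-1\}=0.2$ one has $\varphi(t)=Ee^{-t\xi}=0.8e^{-2t}+0.2e^{t}$, minimized where $e^{3t}=8$, i.e. at $t=\log2$ with $\varphi(\log2)=0.6$; a crude grid search recovers these values:

```python
import math

def phi(t):
    # phi(t) = E exp(-t * xi) for P{xi = 2} = 0.8, P{xi = -1} = 0.2
    return 0.8 * math.exp(-2 * t) + 0.2 * math.exp(t)

grid = [i / 10_000 for i in range(30_001)]     # t in [0, 3] in steps of 1e-4
t_star = min(grid, key=phi)
threshold = -math.log(phi(t_star))             # the critical exponent
print(round(t_star, 4), round(phi(t_star), 6), round(threshold, 6))
```

The particular law and the grid are assumptions for illustration; any convex-minimization routine would do equally well.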

Theorem 6.3.7 Let $(S_n)_{n\in\mathbb{N}_0}$ be positively divergent with $P\{\xi<0\}>0$ and $p>0$. The following assertions are equivalent:
$$E(\tau(x))^{1+p}<\infty\quad\text{for some/all}\ x\ge0;$$
$$E(N(x))^p<\infty\quad\text{for some/all}\ x\ge0;$$
$$E(\rho(x))^p<\infty\quad\text{for some/all}\ x\ge0;$$
$$E\big(J_+(\xi^-)\big)^{1+p}<\infty.$$

6.4 Miscellaneous Results

Here we collect several results of various flavors that are frequently used throughout
the book.
Cramér–Wold Device The relation
$$\big(X_t(u_1),\ldots,X_t(u_n)\big)\ \overset{d}{\longrightarrow}\ \big(X(u_1),\ldots,X(u_n)\big),\quad t\to\infty$$
is equivalent to
$$\sum_{k=1}^n\alpha_kX_t(u_k)\ \overset{d}{\longrightarrow}\ \sum_{k=1}^n\alpha_kX(u_k),\quad t\to\infty\qquad(6.23)$$
for any real $\alpha_1,\ldots,\alpha_n$. If $X_t(u)\ge0$ a.s., it suffices that (6.23) holds for any nonnegative $\alpha_1,\ldots,\alpha_n$.
For the proof, see Theorem 29.4 on p. 397 in [41].
Lemma 6.4.1 Let $(S,d)$ be an arbitrary metric space. Suppose that $(Z_{k,n},Y_n)$ are random elements on $S\times S$. If $Z_{k,n}\Rightarrow Z_k$ on $(S,d)$ as $n\to\infty$, $Z_k\Rightarrow Z$ on $(S,d)$ as $k\to\infty$ and
$$\lim_{k\to\infty}\limsup_{n\to\infty}P\{d(Z_{k,n},Y_n)>\varepsilon\}=0$$
for all $\varepsilon>0$, then $Y_n\Rightarrow Z$ on $(S,d)$ as $n\to\infty$.
Lemma 6.4.1 is Theorem 4.2 in [40].
Lemma 6.4.2 Let $0\le a<b<\infty$.
(a) Assume that $R_t\Rightarrow R$ as $t\to\infty$ in the $J_1$- or $M_1$-topology on $D[a,b]$. Further, assume that $\mu_t$ for $t\ge0$ are finite measures such that $\mu_t$ converge weakly to $\mu$ as $t\to\infty$ where $\mu$ is a finite measure on $[a,b]$ which is continuous w.r.t. the Lebesgue measure. Then
$$\int_{[a,b]}R_t(y)\,\mu_t(dy)\ \overset{d}{\longrightarrow}\ \int_{[a,b]}R(y)\,\mu(dy),\quad t\to\infty.$$
If $\mu=\varepsilon_c$ (the probability measure concentrated at $c\in[a,b]$) and $R$ is a.s. continuous at $c$, then
$$\int_{[a,b]}R_t(y)\,\mu_t(dy)\ \overset{d}{\longrightarrow}\ R(c),\quad t\to\infty.$$
(b) Assume that $f_t\in D[a,b]$ and that the random process $(R_t(y))_{a\le y\le b}$ is a.s. nondecreasing for each $t>0$. Assume further that $\lim_{t\to\infty}f_t(y)=f(y)$ uniformly in $y\in[a,b]$ and that $R_t\Rightarrow R$ as $t\to\infty$ in the $J_1$-topology on $D[a,b]$, the paths of $(R(y))_{a\le y\le b}$ being almost surely continuous. Then
$$\int_{[a,b]}f_t(y)\,dR_t(y)\ \overset{d}{\longrightarrow}\ \int_{[a,b]}f(y)\,dR(y),\quad t\to\infty.$$
(c) Assume that the processes $R_t$ are a.s. right-continuous and nondecreasing for each $t\ge0$ and that $R_t\Rightarrow R$ as $t\to\infty$ locally uniformly on $D[0,\infty)$. Then, for any $\varepsilon\in(0,1)$ and any $\gamma\in\mathbb{R}$,
$$\int_{[0,\varepsilon u]}(u-y)^\gamma\,dR_t(y)\ \Rightarrow\ \int_{[0,\varepsilon u]}(u-y)^\gamma\,dR(y),\quad t\to\infty$$
uniformly on $D[a,b]$.


Proof for Parts (a) and (b). Since locally uniform convergence entails convergence in the $J_1$-topology which, in its turn, entails convergence in the $M_1$-topology, parts (a) and (b) of the lemma follow from the Skorokhod representation theorem along with the deterministic result: if $\lim_{t\to\infty}x_t=x$ in the $M_1$-topology on $D[a,b]$, then
$$\lim_{t\to\infty}\int_{[a,b]}x_t(y)\,\mu_t(dy)=\int_{[a,b]}x(y)\,\mu(dy)$$
provided that $\mu$ is continuous, and
$$\lim_{t\to\infty}\int_{[a,b]}x_t(y)\,\mu_t(dy)=x(c)$$
provided that $\mu=\varepsilon_c$.
Since $x\in D[a,b]$ the set $D_x$ of its discontinuities is at most countable. By Lemma 12.5.1 in [261], convergence in the $M_1$-topology implies local uniform convergence at all continuity points of the limit. Hence
$$E:=\{y:\ \text{there exists}\ y_t\ \text{such that}\ \lim_{t\to\infty}y_t=y,\ \text{but}\ \lim_{t\to\infty}x_t(y_t)\ne x(y)\}\subset D_x$$
and if $\mu$ is continuous, we conclude that $\mu(E)=0$. If $x$ is continuous at $c$ and $\mu=\varepsilon_c$, then $c\notin E$ whence $\mu(E)=0$. Now both (deterministic) limit relations follow from Lemma 2.1 in [56].
Proof for Part (c). In view of the Skorokhod representation theorem it suffices to prove the following: if (deterministic) functions $f_t$ are right-continuous and nondecreasing for each $t\ge0$ and $\lim_{t\to\infty}f_t=f$ locally uniformly on $[0,\infty)$, then
$$\lim_{t\to\infty}\int_{[0,\varepsilon u]}(u-y)^\gamma\,df_t(y)=\int_{[0,\varepsilon u]}(u-y)^\gamma\,df(y)$$
uniformly on $[a,b]$.
Integrating by parts, we obtain
$$\int_{[0,\varepsilon u]}(u-y)^\gamma\,df_t(y)=(1-\varepsilon)^\gamma u^\gamma f_t(\varepsilon u)-u^\gamma f_t(0)+\gamma\int_0^{\varepsilon u}(u-y)^{\gamma-1}f_t(y)\,dy$$
for $t\ge0$. The claim follows from the relations
$$\sup_{u\in[a,b]}|u^\gamma f_t(\varepsilon u)-u^\gamma f(\varepsilon u)|\le(a^\gamma\vee b^\gamma)\sup_{u\in[0,b]}|f_t(u)-f(u)|\to0;$$
$$\sup_{u\in[a,b]}|u^\gamma f_t(0)-u^\gamma f(0)|\le(a^\gamma\vee b^\gamma)|f_t(0)-f(0)|\to0$$
and
$$\sup_{u\in[a,b]}\bigg|\int_0^{\varepsilon u}(u-y)^{\gamma-1}f_t(y)\,dy-\int_0^{\varepsilon u}(u-y)^{\gamma-1}f(y)\,dy\bigg|\le\sup_{u\in[a,b]}\int_0^{\varepsilon u}(u-y)^{\gamma-1}|f_t(y)-f(y)|\,dy$$
$$\le\sup_{u\in[0,b]}|f_t(u)-f(u)|\,\sup_{u\in[a,b]}\int_0^{\varepsilon u}(u-y)^{\gamma-1}\,dy=\sup_{u\in[0,b]}|f_t(u)-f(u)|\,(a^\gamma\vee b^\gamma)\,|\gamma|^{-1}|1-(1-\varepsilon)^\gamma|\to0$$
as $t\to\infty$. □
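The integration-by-parts formula in the proof of part (c) can be verified numerically for a smooth nondecreasing integrator (the concrete choices $f(y)=y^2$, $\gamma=3/2$, $u=2$, $\varepsilon=1/2$ are arbitrary assumed values): both sides of $\int_{[0,\varepsilon u]}(u-y)^\gamma\,df(y)=(1-\varepsilon)^\gamma u^\gamma f(\varepsilon u)-u^\gamma f(0)+\gamma\int_0^{\varepsilon u}(u-y)^{\gamma-1}f(y)\,dy$ are computed by a midpoint rule:

```python
g, u, eps = 1.5, 2.0, 0.5      # gamma, u, epsilon; arbitrary test values

def f(y):                      # smooth, nondecreasing on [0, eps * u]
    return y * y

def fprime(y):                 # for smooth f, df(y) = f'(y) dy
    return 2 * y

N = 100_000
h = eps * u / N
mid = [(k + 0.5) * h for k in range(N)]
lhs = h * sum((u - y) ** g * fprime(y) for y in mid)
rhs = ((1 - eps) * u) ** g * f(eps * u) - u ** g * f(0.0) \
      + g * h * sum((u - y) ** (g - 1) * f(y) for y in mid)
print(round(lhs, 8), round(rhs, 8))
```

The midpoint rule has $O(h^2)$ error on each side, so the two values agree far beyond the tolerance below.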
Lemma 6.4.3 Let $(U_k)_{k\in\mathbb{N}}$ be independent random variables with a uniform distribution on $[0,1]$ and $T_1,T_2,\ldots$ the arrival times of a Poisson process with unit intensity. Then $\sum_{k=1}^n\varepsilon_{nU_k}$ converges vaguely as $n\to\infty$ to $\sum_{j\ge1}\varepsilon_{T_j}$.
Proof It suffices to prove the convergence of Laplace functionals
$$\lim_{n\to\infty}E\exp\Big(-\sum_{k=1}^nf(nU_k)\Big)=\exp\Big(-\int_0^\infty\big(1-\exp(-f(x))\big)\,dx\Big)$$
for nonnegative continuous functions $f$ with compact supports.
The expectation on the left-hand side is
$$\Big(n^{-1}\int_0^n\exp(-f(x))\,dx\Big)^n=\Big(1-n^{-1}\int_0^n\big(1-\exp(-f(x))\big)\,dx\Big)^n.$$
The last expression equals $\big(1-n^{-1}\int_0^\infty(1-\exp(-f(x)))\,dx\big)^n$ for $n>\sup\{x: f(x)>0\}$ and converges to $\exp\big(-\int_0^\infty(1-\exp(-f(x)))\,dx\big)$ as $n\to\infty$. □
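The proof of Lemma 6.4.3 is explicit enough to check numerically. The smallest point of $\sum_{k=1}^n\varepsilon_{nU_k}$ satisfies $P\{\min_k nU_k>t\}=(1-t/n)^n$, which must approach $P\{T_1>t\}=e^{-t}$; likewise, for $f(x)=(1-x)^+$ the Laplace functional equals $(1-c/n)^n$ with $c=\int_0^\infty(1-e^{-f(x)})\,dx=e^{-1}$. (The particular $t$, $n$ and $f$ are assumed for illustration.)

```python
import math

t, n = 1.7, 1_000_000

# first point of the binomial point process vs. the first Poisson arrival
min_tail = (1 - t / n) ** n            # P{ min_k n * U_k > t }
poisson_tail = math.exp(-t)            # P{ T_1 > t }

# Laplace functional for f(x) = max(1 - x, 0): the integral from the proof
c = math.exp(-1)                       # c = int_0^1 (1 - exp(-(1 - x))) dx
laplace_n = (1 - c / n) ** n
laplace_limit = math.exp(-c)
print(min_tail, poisson_tail, laplace_n, laplace_limit)
```

Both discrepancies are of order $1/n$, in agreement with the elementary limit $(1-c/n)^n\to e^{-c}$.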

6.5 Bibliographic Comments

For the renewal theory our favorite source is [236]. The books [18, 119, 251] are
also highly recommended. Erickson’s inequality (6.5) and Lorden’s inequality (6.6)
were originally proved in [88] and [201], respectively. Elegant alternative proofs
of these results can be found on pp. 153–154 in [68] and [65], respectively. The
elementary renewal theorem, Blackwell’s theorem, and the key renewal theorem
(Proposition 6.2.3) are classical results. A reader-friendly proof of Proposition 6.2.3
can be found on pp. 241–242 in [236]. The result of Proposition 6.2.4 was mentioned

on p. 959 in [139]. The key renewal theorem for the nonlattice distributions
concentrated on R was proved in Theorem 4.2 of [19]. Proposition 6.2.6 which can
be found in [157] is a counterpart of this result for lattice distributions. An idea of the
second part of our proof was borrowed from [19]. Proposition 6.2.7 is well known.
A fragment of this proposition, which states that a certain pair of random variables has the same distribution as
$(UV,(1-U)V)$, came to our attention from [264]. Lemmas 6.2.2 and 6.2.8 are taken
from [150]. Lemmas 6.2.9, 6.2.13, 6.2.14, and 6.2.16(b) are borrowed from [148].
A version of Lemma 6.2.14 in the case where $f$ is nondecreasing, $r_1=0$, $r_2=\infty$,
$\beta\ne0$, and the distribution of $\xi$ is nonlattice was earlier obtained in Theorem 2.1 of
[217]. Lemma 6.2.12 is Lemma A.6 in [147]. Lemma 6.2.15 is Lemma 5.1 in [146].
Lemma 6.2.16(a) is a slight extension of Lemma 5.2 in [146]. Lemma 6.2.17 was
stated in [142].
The equivalence (6.16)⇔(6.18)⇔(6.19) of Theorem 6.3.1 is a result which is
well known under additional restrictions on $\xi$ and/or $f$, see Theorem 1 in [162] for
the case $E\xi\in(-\infty,0)$ and $f$ an (increasing) power function, Theorem 3 in [3] for
the case $E\xi\in(-\infty,0)$ and regularly varying $f$, and Proposition 4.1 in [177] for the
case $\lim_{n\to\infty}S_n=-\infty$ a.s. and $f$ again a power function. The equivalence of (6.17)
to all the other conditions of the theorem was first observed in Lemma 3.5 of [6]. The
present proof of Theorem 6.3.1 uses techniques pertaining to the theory of ordinary
random walks that were developed by the previous writers and techniques related
to perturbed random walks that were discussed in Section 1. We learned the proof
of equality (6.21) from Lemma 2 in [173]. Lemma 6.3.3 is Lemma 5.2 in [8]. The
implication that $E(N(x))^p<\infty$ for some $x\ge0$ entails $E(N(x))^p<\infty$ for all
$x\ge0$ has earlier been proved on pp. 27–28 in [177] via an argument different from
ours. Proposition 6.3.4 is a version of Theorem 1 in [30]. A shorter proof can be
found in [151]. Theorems 6.3.5 and 6.3.6 were proved in [151]. Theorem 6.3.7 is a
combination of Theorem 2.1 in [177] and results obtained on pp. 27–28 of the same
paper.
Lemma 6.4.2(a,b) was originally proved in a slightly different form in Lemma
A.5 of [140]. Part (c) of this lemma was obtained in [143]. It seems that Lemma 6.4.2
does not follow from results obtained in [188] and Chapter VI, Section 6c in [160]
which are classical references concerning the convergence of stochastic integrals.
Bibliography

1. R. J. Adler, An introduction to continuity, extrema, and related topics for general Gaussian
processes. Institute of Mathematical Statistics, 1990.
2. A. Agresti, Bounds on the extinction time distribution of a branching process. Adv. Appl.
Probab. 6 (1974), 322–335.
3. G. Alsmeyer, On generalized renewal measures and certain first passage times. Ann. Probab.
20 (1992), 1229–1247.
4. G. Alsmeyer, J. D. Biggins and M. Meiners, The functional equation of the smoothing
transform. Ann. Probab. 40 (2012), 2069–2105.
5. G. Alsmeyer and P. Dyszewski, Thin tails of fixed points of the nonhomogeneous smoothing
transform. Preprint (2015) available at http://arxiv.org/abs/1510.06451
6. G. Alsmeyer and A. Iksanov, A log-type moment result for perpetuities and its application to
martingales in supercritical branching random walks. Electron. J. Probab. 14 (2009), 289–
313.
7. G. Alsmeyer, A. Iksanov and A. Marynych, Functional limit theorems for the number of
occupied boxes in the Bernoulli sieve. Stoch. Proc. Appl., to appear (2017).
8. G. Alsmeyer, A. Iksanov and M. Meiners, Power and exponential moments of the number of
visits and related quantities for perturbed random walks. J. Theoret. Probab. 28 (2015), 1–40.
9. G. Alsmeyer, A. Iksanov and U. Rösler, On distributional properties of perpetuities. J.
Theoret. Probab. 22 (2009), 666–682.
10. G. Alsmeyer and D. Kuhlbusch, Double martingale structure and existence of $\varphi$-moments for
weighted branching processes. Münster J. Math. 3 (2010), 163–211.
11. G. Alsmeyer and M. Meiners, Fixed points of inhomogeneous smoothing transforms. J.
Difference Equ. Appl. 18 (2012), 1287–1304.
12. G. Alsmeyer and M. Meiners, Fixed points of the smoothing transform: two-sided solutions.
Probab. Theory Relat. Fields. 155 (2013), 165–199.
13. G. Alsmeyer and U. Rösler, On the existence of $\varphi$-moments of the limit of a normalized
supercritical Galton-Watson process. J. Theoret. Probab. 17 (2004), 905–928.
14. G. Alsmeyer and U. Rösler, A stochastic fixed point equation related to weighted branching
with deterministic weights. Electron. J. Probab. 11 (2005), 27–56.
15. G. Alsmeyer and M. Slavtchova-Bojkova, Limit theorems for subcritical age-dependent
branching processes with two types of immigration. Stoch. Models. 21 (2005), 133–147.
16. V. F. Araman and P. W. Glynn, Tail asymptotics for the maximum of perturbed random walk.
Ann. Appl. Probab. 16 (2006), 1411–1431.
17. R. Arratia, A. D. Barbour and S. Tavaré, Logarithmic combinatorial structures: a probabilistic
approach. European Mathematical Society, 2003.


18. S. Asmussen, Applied probability and queues. 2nd Edition, Springer-Verlag, 2003.
19. K. B. Athreya, D. McDonald and P. Ney, Limit theorems for semi-Markov processes and
renewal theory for Markov chains. Ann. Probab. 6 (1978), 788–797.
20. K. B. Athreya and P. E. Ney, Branching processes. Springer-Verlag, 1972.
21. M. Babillot, Ph. Bougerol and L. Elie, The random difference equation $X_n=A_nX_{n-1}+B_n$
in the critical case. Ann. Probab. 25 (1997), 478–493.
22. R. R. Bahadur, On the number of distinct values in a large sample from an infinite discrete
distribution. Proc. Nat. Inst. Sci. India. 26A (1960), 66–75.
23. A. D. Barbour, Univariate approximations in the infinite occupancy scheme. Alea, Lat. Am.
J. Probab. Math. Stat. 6 (2009), 415–433.
24. A. D. Barbour and A. V. Gnedin, Small counts in the infinite occupancy scheme. Electron. J.
Probab. 14 (2009), 365–384.
25. F. Bassetti and D. Matthes, Multi-dimensional smoothing transformations: existence, regular-
ity and stability of fixed points. Stoch. Proc. Appl. 124 (2014), 154–198.
26. R. Basu and A. Roitershtein, Divergent perpetuities modulated by regime switches. Stoch.
Models. 29 (2013), 129–148.
27. A. D. Behme, Distributional properties of solutions of $dV_t=V_{t-}\,dU_t+dL_t$ with Lévy noise.
Adv. Appl. Probab. 43 (2011), 688–711.
28. A. Behme, Exponential functionals of Lévy processes with jumps. Alea, Lat. Am. J. Probab.
Math. Stat. 12 (2015), 375–397.
29. A. Behme and A. Lindner, On exponential functionals of Lévy processes. J. Theoret. Probab.
28 (2015), 681–720.
30. Ju. K. Beljaev and V. M. Maksimov, Analytical properties of a generating function for the
number of renewals. Theor. Probab. Appl. 8 (1963), 108–112.
31. J. Bertoin, Random fragmentation and coagulation processes. Cambridge University Press,
2006.
32. J. Bertoin and I. Kortchemski, Self-similar scaling limits of Markov chains on the positive
integers. Ann. Appl. Probab. 26 (2016), 2556–2595.
33. J. Bertoin, A. Lindner and R. Maller, On continuity properties of the law of integrals of Lévy
processes. Séminaire de Probabilités XLI, Lecture Notes in Mathematics 1934 (2008), 137–
159.
34. J. Bertoin and M. Yor, Exponential functionals of Lévy processes. Probab. Surv. 2 (2005),
191–212.
35. J. D. Biggins, Martingale convergence in the branching random walk. J. Appl. Probab. 14
(1977), 25–37.
36. J. D. Biggins, Growth rates in the branching random walk. Z. Wahrscheinlichkeitstheorie
Verw. Geb. 48 (1979), 17–34.
37. J. D. Biggins and A. E. Kyprianou, Seneta-Heyde norming in the branching random walk.
Ann. Probab. 25 (1997), 337–360.
38. J. D. Biggins and A. E. Kyprianou, Measure change in multitype branching. Adv. Appl.
Probab. 36 (2004), 544–581.
39. J. D. Biggins and A. E. Kyprianou, The smoothing transform: the boundary case. Electron. J.
Probab. 10 (2005), 609–631.
40. P. Billingsley, Convergence of probability measures. Wiley, 1968.
41. P. Billingsley, Probability and measure. John Wiley & Sons, 1986.
42. N. H. Bingham, Limit theorems for occupation times of Markov processes. Z. Wahrschein-
lichkeitstheorie Verw. Geb. 17 (1971), 1–22.
43. N. H. Bingham, R. A. Doney, Asymptotic properties of supercritical branching processes II:
Crump-Mode and Jirina processes. Adv. Appl. Probab. 7 (1975), 66–82.
44. N. H. Bingham, C. M. Goldie and J. L. Teugels, Regular variation. Cambridge University
Press, 1989.
45. L. V. Bogachev, A. V. Gnedin and Yu. V. Yakubovich, On the variance of the number of
occupied boxes. Adv. Appl. Math. 40 (2008), 401–432.

46. L. V. Bogachev and Z. Su, Gaussian fluctuations of Young diagrams under the Plancherel
measure. Proc. R. Soc. A. 463 (2007), 1069–1080.
47. A. A. Borovkov, Asymptotic Methods in Queuing Theory. Wiley, 1984.
48. P. Bougerol and N. Picard, Strict stationarity of generalized autoregressive processes. Ann.
Probab. 20 (1992), 1714–1730.
49. P. Bourgade, Mesoscopic fluctuations of the zeta zeros. Probab. Theory Relat. Fields. 148
(2010), 479–500.
50. O. Boxma, O. Kella and D. Perry, On some tractable growth-collapse processes with renewal
collapse epochs. J. Appl. Probab. 48A (2011), 217–234.
51. A. Brandt, The stochastic equation $Y_{n+1}=A_nY_n+B_n$ with stationary coefficients. Adv.
Appl. Probab. 18 (1986), 211–220.
52. L. Breiman, On some limit theorems similar to the arc-sin law. Theory Probab. Appl. 10
(1965), 323–331.
53. S. Brofferio, How a centred random walk on the affine group goes to infinity. Ann. Inst. H.
Poincaré Probab. Statist. 39 (2003), 371–384.
54. S. Brofferio and D. Buraczewski, On unbounded invariant measures of stochastic dynamical
systems. Ann. Probab. 43 (2015), 1456–1492.
55. S. Brofferio, D. Buraczewski and E. Damek, On the invariant measure of the random
difference equation Xn D An Xn1 C Bn in the critical case. Ann. Inst. H. Poincaré Probab.
Statist. 48 (2012), 377–395.
56. H. Brozius, Convergence in mean of some characteristics of the convex hull. Adv. Appl.
Probab. 21 (1989), 526–542.
57. D. Buraczewski, On invariant measures of stochastic recursions in a critical case. Ann. Appl.
Probab. 17 (2007), 1245–1272.
58. D. Buraczewski, E. Damek, S. Mentemeier and M. Mirek, Heavy tailed solutions of
multivariate smoothing transforms. Stoch. Proc. Appl. 123 (2013), 1947–1986.
59. D. Buraczewski, E. Damek and T. Mikosch, Stochastic models with power-law tails: the
equation X D AX C B. Springer, 2016.
60. D. Buraczewski, E. Damek and J. Zienkiewicz, Precise tail asymptotics of fixed points of the
smoothing transform with general weights. Bernoulli. 21 (2015), 489–504.
61. D. Buraczewski and A. Iksanov, Functional limit theorems for divergent perpetuities in the
contractive case. Electron. Commun. Probab. 20 (2015), article 10, 1–14.
62. D. Buraczewski and K. Kolesko, Linear stochastic equations in the critical case. J. Difference
Equ. Appl. 20 (2014), 188–209.
63. D. L. Burkholder, B. J. Davis and R. F. Gundy, Integral inequalities for convex functions of
operators on martingales. In: Proceedings of the Sixth Berkeley Symposium on Mathematical
Statistics and Probability (Univ. California, Berkeley, CA, 1970/1971), vol. II: Probability
Theory, pp. 223–240. University of California Press, 1972.
64. A. Caliebe and U. Rösler, Fixed points with finite variance of a smoothing transformation.
Stoch. Proc. Appl. 107 (2003), 105–129.
65. H. Carlsson and O. Nerman, An alternative proof of Lorden’s renewal inequality. Adv. Appl.
Probab. 18 (1986), 1015–1016.
66. L.-C. Chen and R. Sun, A monotonicity result for the range of a perturbed random walk. J.
Theoret. Probab. 27 (2014), 997–1010.
67. Y. S. Chow, H. Robbins and D. Siegmund, Great expectations: the theory of optimal stopping.
Houghton Mifflin Company, 1971.
68. Y. S. Chow and H. Teicher, Probability theory: independence, interchangeability, martin-
gales. Springer, 1988.
69. E. Çinlar, Introduction to stochastic processes. Prentice-Hall, 1975.
70. D. Cline and G. Samorodnitsky, Subexponentiality of the product of independent random
variables. Stoch. Proc. Appl. 49 (1994), 75–98.
71. M. Csörgő, L. Horváth and J. Steinebach, Invariance principles for renewal processes. Ann.
Probab. 15 (1987), 1441–1460.

72. D. A. Darling, Some limit theorems associated with multinomial trials. Proc. Fifth Berkeley
Symp. on Math. Statist. and Probab. 2 (1967), 345–350.
73. B. Davis, Weak limits of perturbed random walks and the equation $Y_t=B_t+\alpha\sup\{Y_s: s\le t\}+\beta\inf\{Y_s: s\le t\}$. Ann. Probab. 24 (1996), 2007–2023.
74. D. Denisov and B. Zwart, On a theorem of Breiman and a class of random difference
equations. J. Appl. Probab. 44 (2007), 1031–1046.
75. P. Diaconis and D. Freedman, Iterated random functions. SIAM Review. 41 (1999), 45–76.
76. C. Donati-Martin, R. Ghomrasni and M. Yor, Affine random equations and the stable $(\tfrac12)$
distribution. Studia Scientiarum Mathematicarum Hungarica. 36 (2000), 387–405.
77. D. Dufresne, On the stochastic equation $\mathcal{L}(X)=\mathcal{L}(B(X+C))$ and a property of gamma
distributions. Bernoulli. 2 (1996), 287–291.
78. D. Dufresne, Algebraic properties of beta and gamma distributions and applications. Adv.
Appl. Math. 20 (1998), 285–299.
79. O. Durieu and Y. Wang, From infinite urn schemes to decompositions of self-similar Gaussian
processes. Electron. J. Probab. 21 (2016), paper no. 43, 23 pp.
80. R. Durrett, Probability: theory and examples. 4th Edition, Cambridge University Press, 2010.
81. R. Durrett and T. Liggett, Fixed points of the smoothing transformation. Z. Wahrschein-
lichkeitstheorie Verw. Geb. 64 (1983), 275–301.
82. M. Dutko, Central limit theorems for infinite urn models. Ann. Probab. 17 (1989), 1255–1263.
83. P. Dyszewski, Iterated random functions and slowly varying tails. Stoch. Proc. Appl. 126
(2016), 392–413.
84. P. Embrechts and C. M. Goldie, Perpetuities and random equations. In Asymptotic Statistics:
Proceedings of the Fifth Prague Symposium (P. Mandl and M. Hus̆ková, eds.), 75–86.
Physica, 1994.
85. P. Erdős, On a family of symmetric Bernoulli convolutions. Amer. J. Math. 61 (1939), 974–
976.
86. P. Erdős, On the smoothness properties of Bernoulli convolutions. Amer. J. Math. 62 (1940),
180–186.
87. T. Erhardsson, Conditions for convergence of random coefficient AR(1) processes and
perpetuities in higher dimensions. Bernoulli. 20 (2014), 990–1005.
88. K. B. Erickson, The strong law of large numbers when the mean is undefined. Trans. Amer.
Math. Soc. 185 (1973), 371–381.
89. W. Feller, An introduction to probability theory and its applications. Vol II, 2nd Edition.
Wiley, 1971.
90. Sh. K. Formanov and A. Asimov, A limit theorem for the separable statistic in a random
assignment scheme. J. Sov. Math. 38 (1987), 2405–2411.
91. F. Freund and M. Möhle, On the number of allelic types for samples taken from exchangeable
coalescents with mutation. Adv. Appl. Probab. 41 (2009), 1082–1101.
92. B. Fristedt, Uniform local behavior of stable subordinators. Ann. Probab. 7 (1979), 1003–
1013.
93. I. I. Gikhman and A. V. Skorokhod, The theory of stochastic processes I. Springer, 2004.
94. L. Giraitis and D. Surgailis, On shot noise processes with long range dependence. In
Probability Theory and Mathematical Statistics, Vol. I (Vilnius, 1989), 401–408. Mokslas,
1990.
95. L. Giraitis and D. Surgailis, On shot noise processes attracted to fractional Lévy motion. In
Stable Processes and Related Topics (Ithaca, NY, 1990). Progress in Probability 25, 261–273.
Birkhäuser, 1991.
96. P. W. Glynn and W. Whitt, Ordinary CLT and WLLN versions of L D W. Math. Oper. Res.
13 (1988), 674–692.
97. A. V. Gnedin, The Bernoulli sieve. Bernoulli 10 (2004), 79–96.
98. A. Gnedin, A. Hansen and J. Pitman, Notes on the occupancy problem with infinitely many
boxes: general asymptotics and power laws. Probab. Surv. 4 (2007), 146–171.
99. A. Gnedin and A. Iksanov, Regenerative compositions in the case of slow variation: A renewal
theory approach. Electron. J. Probab. 17 (2012), paper no. 77, 19 pp.

100. A. Gnedin, A. Iksanov and A. Marynych, Limit theorems for the number of occupied boxes in
the Bernoulli sieve. Theory Stochastic Process. 16(32) (2010), 44–57.
101. A. Gnedin, A. Iksanov, and A. Marynych, The Bernoulli sieve: an overview. In Proceedings
of the 21st International Meeting on Probabilistic, Combinatorial, and Asymptotic Methods
in the Analysis of Algorithms (AofA’10), Discrete Math. Theor. Comput. Sci. AM (2010),
329–341.
102. A. Gnedin, A. Iksanov and A. Marynych, On $\Lambda$-coalescents with dust component. J. Appl.
Probab. 48 (2011), 1133–1151.
103. A. Gnedin, A. Iksanov and A. Marynych, A generalization of the Erdős-Turán law for the
order of random permutation. Combin. Probab. Comput. 21 (2012), 715–733.
104. A. Gnedin, A. Iksanov, P. Negadailov and U. Rösler, The Bernoulli sieve revisited. Ann. Appl.
Probab. 19 (2009), 1634–1655.
105. A. Gnedin, A. Iksanov and U. Roesler, Small parts in the Bernoulli sieve. In Proceedings of
the Fifth Colloquium on Mathematics and Computer Science, Discrete Math. Theor. Comput.
Sci. Proc. AI (2008), 235–242.
106. A. Gnedin, J. Pitman and M. Yor, Asymptotic laws for compositions derived from transformed
subordinators. Ann. Probab. 34 (2006), 468–492.
107. C. M. Goldie, Implicit renewal theory and tails of solutions of random equations. Ann. Appl.
Probab. 1 (1991), 126–166.
108. C. M. Goldie and R. Grübel, Perpetuities with thin tails. Adv. Appl. Probab. 28 (1996), 463–
480.
109. C. M. Goldie and R. A. Maller, Stability of perpetuities. Ann. Probab. 28 (2000), 1195–1218.
110. M. I. Gomes, L. de Haan and D. Pestana, Joint exceedances of the ARCH process. J. Appl.
Probab. 41 (2004), 919–926.
111. D. R. Grey, Regular variation in the tail behaviour of solutions of random difference
equations. Ann. Appl. Probab. 4 (1994), 169–183.
112. D. R. Grey and Lu Zhunwei, The fractional linear probability generating function in the
random environment branching process. J. Appl. Probab. 31 (1994), 38–47.
113. A. K. Grincevičius, On the continuity of the distribution of a sum of dependent variables
connected with independent walks on lines. Theory Probab. Appl. 19 (1974), 163–168.
114. A. K. Grincevičius, Limit theorems for products of random linear transformations on the line.
Lithuanian Math. J. 15 (1975), 568–579.
115. A. K. Grincevičius, One limit distribution for a random walk on the line. Lithuanian Math. J.
15 (1975), 580–589.
116. A. K. Grincevičius, Products of random affine transformations. Lithuanian Math. J. 20 (1980),
279–282.
117. A. K. Grincevičius, A random difference equation. Lithuanian Math. J. 21 (1981), 302–306.
118. A. Gut, On the moments and limit distributions of some first passage times. Ann. Probab. 2
(1974), 277–308.
119. A. Gut, Stopped random walks. Limit theorems and applications. 2nd Edition, Springer, 2009.
120. L. de Haan and S. I. Resnick, Derivatives of regularly varying functions in Rd and domains
of attraction of stable distributions. Stoch. Proc. Appl. 8 (1979), 349–355.
121. B. Haas and G. Miermont, Self-similar scaling limits of non-increasing Markov chains.
Bernoulli. 17 (2011) 1217–1247.
122. P. Hall and C. C. Heyde, Martingale limit theory and its applications. Academic Press, 1980.
123. X. Hao, Q. Tang and L. Wei, On the maximum exceedance of a sequence of random variables
over a renewal threshold. J. Appl. Probab. 46 (2009), 559–570.
124. S. C. Harris and M. I. Roberts, Measure changes with extinction. Stat. Probab. Letters. 79
(2009), 1129–1133.
125. L. Heinrich and V. Schmidt, Normal convergence of multidimensional shot noise and rates of
this convergence. Adv. Appl. Probab. 17 (1985), 709–730.
126. P. Hitczenko, Comparison of moments for tangent sequences of random variables. Probab.
Theory Relat. Fields. 78 (1988), 223–230.
127. P. Hitczenko, On tails of perpetuities. J. Appl. Probab. 47 (2010), 1191–1194.

128. P. Hitczenko and J. Wesołowski, Perpetuities with thin tails revisited. Ann. Appl. Probab. 19
(2009), 2080–2101. Erratum: Ann. Appl. Probab. 20 (2010), 1177.
129. P. Hitczenko and J. Wesołowski, Renorming divergent perpetuities. Bernoulli. 17 (2011),
880–894.
130. H. K. Hwang and S. Janson, Local limit theorems for finite and infinite urn models. Ann.
Probab. 36 (2008), 992–1022.
131. H. K. Hwang and T. H. Tsai, Quickselect and the Dickman function. Combin. Probab.
Comput. 11 (2002), 353–371.
132. D. L. Iglehart, Weak convergence of compound stochastic process. I. Stoch. Proc. Appl. 1
(1973), 11–31. Corrigendum, ibid. 1 (1973), 185–186.
133. D. L. Iglehart and D. P. Kennedy, Weak convergence of the average of flag processes. J. Appl.
Probab. 7 (1970), 747–753.
134. O. M. Iksanov, On positive distributions of the class L of self-decomposable laws. Theor.
Probab. Math. Statist. 64 (2002), 51–61.
135. A. M. Iksanov, Elementary fixed points of the BRW smoothing transforms with infinite number
of summands. Stoch. Proc. Appl. 114 (2004), 27–50.
136. A. M. Iksanov, On the rate of convergence of a regular martingale related to the branching
random walk. Ukrainian Math. J. 58 (2006), 368–387.
137. A. Iksanov, On the supremum of perturbed random walk. Bulletin of Kiev University. 1
(2007), 161–164 (in Ukrainian).
138. A. Iksanov, On the number of empty boxes in the Bernoulli sieve II. Stoch. Proc. Appl. 122
(2012), 2701–2729.
139. A. Iksanov, On the number of empty boxes in the Bernoulli sieve I. Stochastics. 85 (2013),
946–959.
140. A. Iksanov, Functional limit theorems for renewal shot noise processes with increasing
response functions. Stoch. Proc. Appl. 123 (2013), 1987–2010.
141. A. M. Iksanov and Z. J. Jurek, On fixed points of Poisson shot noise transforms. Adv. Appl.
Probab. 34 (2002), 798–825.
142. A. Iksanov, Z. Kabluchko and A. Marynych, Weak convergence of renewal shot noise
processes in the case of slowly varying normalization. Stat. Probab. Letters. 114 (2016), 67–
77.
143. A. Iksanov, Z. Kabluchko, A. Marynych and G. Shevchenko, Fractionally integrated inverse
stable subordinators. Stoch. Proc. Appl. 127 (2017), 80–106.
144. A. M. Iksanov and C. S. Kim, On a Pitman-Yor problem. Stat. Probab. Letters. 68 (2004),
61–72.
145. A. M. Iksanov and C. S. Kim, New explicit examples of Poisson shot noise transforms. Austr.
New Zealand J. Statist. 46 (2004), 313–321.
146. A. Iksanov, A. Marynych and M. Meiners, Limit theorems for renewal shot noise processes
with eventually decreasing response functions. Stoch. Proc. Appl. 124 (2014), 2132–2170.
147. A. Iksanov, A. Marynych and M. Meiners, Limit theorems for renewal shot noise processes
with decreasing response functions. (2013). Extended preprint version of [146] available at
http://arxiv.org/abs/arXiv:1212.1583v2
148. A. Iksanov, A. Marynych and M. Meiners, Asymptotics of random processes with immigration
I: Scaling limits. Bernoulli. 23, to appear (2017).
149. A. Iksanov, A. Marynych and M. Meiners, Asymptotics of random processes with immigration
II: Convergence to stationarity. Bernoulli. 23, to appear (2017).
150. A. M. Iksanov, A. V. Marynych and V. A. Vatutin, Weak convergence of finite-dimensional
distributions of the number of empty boxes in the Bernoulli sieve. Theory Probab. Appl. 59
(2015), 87–113.
151. A. Iksanov and M. Meiners, Exponential moments of first passage times and related quantities
for random walks. Electron. Commun. Probab. 15 (2010), 365–375.
152. A. Iksanov and M. Meiners, Fixed points of multivariate smoothing transforms with scalar
weights. Alea, Lat. Am. J. Probab. Math. Stat. 12 (2015), 69–114.
153. A. Iksanov and M. Möhle, On the number of jumps of random walks with a barrier. Adv.
Appl. Probab. 40 (2008), 206–228.
154. O. Iksanov and P. Negadailov, On the supremum of a martingale associated with a branching
random walk. Theor. Probab. Math. Statist. 74 (2007), 49–57.
155. A. Iksanov and A. Pilipenko, On the maximum of a perturbed random walk. Stat. Probab.
Letters. 92 (2014), 168–172.
156. A. Iksanov and A. Pilipenko, A functional limit theorem for locally perturbed random walks.
Probab. Math. Statist. 36 (2016), 353–368.
157. A. Iksanov and S. Polotskiy, Tail behavior of suprema of perturbed random walks. Theory
Stochastic Process. 21(36) (2016), 12–16.
158. A. M. Iksanov and U. Rösler, Some moment results about the limit of a martingale related
to the supercritical branching random walk and perpetuities. Ukrainian Math. J. 58 (2006),
505–528.
159. R. Iwankiewicz, Response of linear vibratory systems driven by renewal point processes.
Probab. Eng. Mech. 5 (1990), 111–121.
160. J. Jacod and A. N. Shiryaev, Limit theorems for stochastic processes. 2nd Edition, Springer,
2003.
161. P. Jagers, Age-dependent branching processes allowing immigration. Theory Probab. Appl.
13 (1968), 225–236.
162. S. Janson, Moments for first-passage and last-exit times, the minimum, and related quantities
for random walks with positive drift. Adv. Appl. Probab. 18 (1986), 865–879.
163. W. Jedidi, J. Almhana, V. Choulakian and R. McGorman, General shot noise processes
and functional convergence to stable processes. In Stochastic Differential Equations and
Processes. Springer Proc. Math. 7, 151–178, Springer, 2012.
164. P. R. Jelenković and M. Olvera-Cravioto, Implicit renewal theorem for trees with general
weights. Stoch. Proc. Appl. 122 (2012), 3209–3238.
165. Z. J. Jurek, Selfdecomposability, perpetuity laws and stopping times. Probab. Math. Statist. 19
(1999), 413–419.
166. Z. J. Jurek and W. Vervaat, An integral representation for selfdecomposable Banach space
valued random variables. Z. Wahrscheinlichkeitstheorie Verw. Geb. 62 (1983), 247–262.
167. O. Kallenberg, Foundations of modern probability. Springer, 1997.
168. R. Kalpathy and H. Mahmoud, Perpetuities in fair leader election algorithms. Adv. Appl.
Probab. 46 (2014), 203–216.
169. S. Kalpazidou, A. Knopfmacher and J. Knopfmacher, Lüroth-type alternating series repre-
sentations for real numbers. Acta Arith. 55 (1990), 311–322.
170. R. Kapica and J. Morawiec, Refinement equations and distributional fixed points. Appl. Math.
Comput. 218 (2012), 7741–7746.
171. S. Karlin, Central limit theorems for certain infinite urn schemes. J. Math. Mech. 17 (1967),
373–401.
172. S. Karlin and H. M. Taylor, A first course in stochastic processes, 2nd Edition. Academic
Press, 1975.
173. R. Keener, A note on the variance of a stopping time. Ann. Statist. 15 (1987), 1709–1712.
174. H. G. Kellerer, Ergodic behaviour of affine recursions III: positive recurrence and null
recurrence. Technical report, Math. Inst. Univ. München, Theresienstrasse 39, D-8000
München, Germany. Available at http://www.mathematik.uni-muenchen.de/~kellerer/
175. R. Kershner and A. Wintner, On symmetric Bernoulli convolutions. Amer. J. Math. 57 (1935),
541–548.
176. H. Kesten, Random difference equations and renewal theory for products of random matrices.
Acta Math. 131 (1973), 207–248.
177. H. Kesten and R. A. Maller, Two renewal theorems for general random walks tending to
infinity. Probab. Theory Relat. Fields. 106 (1996), 1–38.
178. P. Kevei, A note on the Kesten-Grincevičius-Goldie theorem. Electron. Commun. Probab. 21
(2016), paper no. 51, 12 pp.
179. J. F. C. Kingman, The first birth problem for an age-dependent branching process. Ann.
Probab. 3 (1975), 790–801.
180. C. Klüppelberg and C. Kühn, Fractional Brownian motion as a weak limit of Poisson shot
noise processes-with applications to finance. Stoch. Proc. Appl. 113 (2004), 333–351.
181. C. Klüppelberg and T. Mikosch, Explosive Poisson shot noise processes with applications to
risk reserves. Bernoulli. 1 (1995), 125–147.
182. C. Klüppelberg and T. Mikosch, Delay in claim settlement and ruin probability approxima-
tions. Scand. Actuar. J. 2 (1995), 154–168.
183. C. Klüppelberg, T. Mikosch and A. Schärf, Regular variation in the mean and stable limits
for Poisson shot noise. Bernoulli 9 (2003), 467–496.
184. V. F. Kolchin, B. A. Sevastyanov and V. P. Chistyakov, Random allocations. V. H. Winston &
Sons, 1978.
185. B. Kołodziejek, Logarithmic tails of sums of products of positive random variables bounded
by one. Ann. Appl. Probab., to appear (2017).
186. T. Konstantopoulos and S.-J. Lin, Macroscopic models for long-range dependent network
traffic. Queueing Systems Theory Appl. 28 (1998), 215–243.
187. D. Kuhlbusch, Moment conditions for weighted branching processes. PhD thesis, Universität
Münster, 2004.
188. T. G. Kurtz and P. Protter, Weak limit theorems for stochastic integrals and stochastic
differential equations. Ann. Probab. 19 (1991), 1035–1070.
189. T. L. Lai and D. Siegmund, A nonlinear renewal theory with applications to sequential
analysis. I. Ann. Statist. 5 (1977), 946–954.
190. T. L. Lai and D. Siegmund, A nonlinear renewal theory with applications to sequential
analysis. II. Ann. Statist. 7 (1979), 60–76.
191. J. Lamperti, Semi-stable Markov processes. Z. Wahrscheinlichkeitstheorie Verw. Geb. 22
(1972), 205–225.
192. J. A. Lane, The central limit theorem for the Poisson shot-noise process. J. Appl. Probab. 21
(1984), 287–301.
193. A. J. Lawrance and N. T. Kottegoda, Stochastic modelling of riverflow time series. J. Roy.
Statist. Soc. Ser. A. 140 (1977), 1–47.
194. G. Letac, A contraction principle for certain Markov chains and its applications. Random
matrices and their applications (Brunswick, Maine, 1984), 263–273, Contemp. Math. 50,
Amer. Math. Soc., 1986.
195. P. A. W. Lewis, A branching Poisson process model for the analysis of computer failure
patterns. J. Roy. Statist. Soc. Ser. B. 26 (1964), 398–456.
196. X. Liang and Q. Liu, Weighted moments for Mandelbrot’s martingales. Electron. Commun.
Probab. 20 (2015), paper no. 85, 12 pp.
197. T. Lindvall, Weak convergence of probability measures and random functions in the function
space D[0, ∞). J. Appl. Probab. 10 (1973), 109–121.
198. T. Lindvall, Lectures on the coupling method. Wiley, 1992.
199. Q. Liu, Fixed points of a generalized smoothing transformation and applications to the
branching random walk. Adv. Appl. Probab. 30 (1998), 85–112.
200. Q. Liu, On generalized multiplicative cascades. Stoch. Proc. Appl. 86 (2000), 263–286.
201. G. Lorden, On excess over the boundary. Ann. Math. Stat. 41 (1970), 520–527.
202. R. Lyons, A simple path to Biggins’ martingale convergence for branching random walk.
Classical and modern branching processes, IMA Volumes in Mathematics and its Applica-
tions. 84, 217–221, Springer, 1997.
203. H. M. Mahmoud, Distributional analysis of swaps in Quick Select. Theoret. Comput. Sci. 411
(2010), 1763–1769.
204. A. H. Marcus, Some exact distributions in traffic noise theory. Adv. Appl. Probab. 7 (1975),
593–606.
205. A. V. Marynych, A note on convergence to stationarity of random processes with immigration.
Theory Stochastic Process. 20(36) (2015), 84–100.
206. K. Maulik and B. Zwart, Tail asymptotics for exponential functionals of Lévy processes. Stoch.
Proc. Appl. 116 (2006), 156–177.
207. M. M. Meerschaert and S. A. Stoev, Extremal limit theorems for observations separated by
random power law waiting times. J. Stat. Planning and Inference. 139 (2009), 2175–2188.
208. M. Meiners and S. Mentemeier, Solutions to complex smoothing equations. Probab. Theory
Relat. Fields., to appear (2017).
209. S. Mentemeier, The fixed points of the multivariate smoothing transform. Probab. Theory
Relat. Fields. 164 (2016), 401–458.
210. R. Metzler and J. Klafter, The random walk’s guide to anomalous diffusion: a fractional
dynamics approach. Phys. Reports. 339 (2000), 1–77.
211. V. G. Mikhailov, The central limit theorem for a scheme of independent allocation of particles
by cells. Proc. Steklov Inst. Math. 157 (1983), 147–163.
212. T. Mikosch and S. Resnick, Activity rates with very heavy tails. Stoch. Proc. Appl. 116 (2006),
131–155.
213. T. Mikosch, G. Samorodnitsky and L. Tafakori, Fractional moments of solutions to stochastic
recurrence equations. J. Appl. Probab. 50 (2013), 969–982.
214. D. R. Miller, Limit theorems for path-functionals of regenerative processes. Stoch. Proc. Appl.
2 (1974), 141–161.
215. Sh. A. Mirakhmedov, Randomized decomposable statistics in a generalized allocation
scheme over a countable set of cells. Diskret. Mat. 1 (1989), 46–62 (in Russian).
216. Sh. A. Mirakhmedov, Randomized decomposable statistics in a scheme of independent
allocation of particles into cells. Diskret. Mat. 2 (1990), 97–111 (in Russian).
217. N. R. Mohan, Teugels’ renewal theorem and stable laws. Ann. Probab. 4 (1976), 863–868.
218. M. Möhle, On the number of segregating sites for populations with large family sizes. Adv.
Appl. Probab. 38 (2006), 750–767.
219. P. Mörters and Yu. Peres, Brownian motion. Cambridge University Press, 2010.
220. P. Negadailov, Limit theorems for random recurrences and renewal-type processes. PhD
thesis, University of Utrecht, the Netherlands. Available at http://igitur-archive.library.uu.nl/dissertations/2010-0823-200228/negadailov.pdf
221. J. Neveu, Discrete-parameter martingales. North-Holland, 1975.
222. A. G. Pakes, Some properties of a random linear difference equation. Austral. J. Statist. 25
(1983), 345–357.
223. A. G. Pakes and N. Kaplan, On the subcritical Bellman-Harris process with immigration. J.
Appl. Probab. 11 (1974), 652–668.
224. Z. Palmowski and B. Zwart, Tail asymptotics of the supremum of a regenerative process. J.
Appl. Probab. 44 (2007), 349–365.
225. Z. Palmowski and B. Zwart, On perturbed random walks. J. Appl. Probab. 47 (2010), 1203–
1204.
226. E. I. Pancheva and P. K. Jordanova, Functional transfer theorems for maxima of iid random
variables. Comptes Rendus de l’Académie Bulgare des Sciences. 57 (2004), 9–14.
227. E. Pancheva, I. K. Mitov and K. V. Mitov, Limit theorems for extremal processes generated by
a point process with correlated time and space components. Stat. Probab. Letters. 79 (2009),
390–395.
228. J. C. Pardo, V. Rivero and K. van Schaik, On the density of exponential functionals of Lévy
processes. Bernoulli. 19 (2013), 1938–1964.
229. J. Pitman and M. Yor, Infinitely divisible laws associated with hyperbolic functions. Canad. J.
Math. 55 (2003), 292–330.
230. S. V. Polotskiy, On moments of some convergent random series and limits of martingales
related to a branching random walk. Bulletin of Kiev University. 2 (2009), 135–140 (in
Ukrainian).
231. M. Pratsiovytyi and Yu. Khvorostina, Topological and metric properties of distributions of
random variables represented by the alternating Lüroth series with independent elements.
Random operators and stochastic equations. 21 (2013), 385–401.
232. W. E. Pruitt, General one-sided laws of the iterated logarithm. Ann. Probab. 9 (1981), 1–48.
233. S. T. Rachev and G. Samorodnitsky, Limit laws for a stochastic process and random recursion
arising in probabilistic modelling. Adv. Appl. Probab. 27 (1995), 185–202.
234. J. I. Reich, Some results on distributions arising from coin tossing. Ann. Probab. 10 (1982),
780–786.
235. S. Resnick, Extreme values, regular variation, and point processes. Springer-Verlag, 1987.
236. S. I. Resnick, Adventures in stochastic processes. 3rd printing, Birkhäuser, 2002.
237. S. I. Resnick, Heavy-tail phenomena. Probabilistic and statistical modeling. Springer, 2007.
238. S. Resnick and H. Rootzén, Self-similar communication models and very heavy tails. Ann.
Appl. Probab. 10 (2000), 753–778.
239. S. Resnick and E. van den Berg, Weak convergence of high-speed network traffic models. J.
Appl. Probab. 37 (2000), 575–597.
240. S. I. Resnick and E. Willekens, Moving averages with random coefficients and random
coefficient autoregressive models. Commun. Statist. Stoch. Models. 7 (1991), 511–525.
241. C. Y. Robert, Asymptotic probabilities of an exceedance over renewal thresholds with an
application to risk theory. J. Appl. Probab. 42 (2005), 153–162.
242. I. Rodriguez-Iturbe, D. R. Cox and V. Isham, Some models for rainfall based on stochastic
point processes. Proc. R. Soc. Lond. A. 410 (1987), 269–288.
243. U. Rösler, V. A. Topchii and V. A. Vatutin, Convergence conditions for the weighted
branching process. Discrete Mathematics and Applications. 10 (2000), 5–21.
244. St. G. Samko, A. A. Kilbas and O. I. Marichev, Fractional integrals and derivatives: theory
and applications. Gordon and Breach, 1993.
245. G. Samorodnitsky, A class of shot noise models for financial applications. Athens conference
on applied probability and time series analysis, Athens, Greece, March 22–26, 1995. Vol. I:
Applied probability. In honor of J. M. Gani. Lect. Notes Stat., Springer-Verlag. 114 (1996),
332–353.
246. V. Schmidt, On finiteness and continuity of shot noise processes. Optimization. 16 (1985),
921–933.
247. W. Schottky, Spontaneous current fluctuations in electron streams. Ann. Phys. 57 (1918),
541–567.
248. M. S. Sgibnev, Renewal theorem in the case of an infinite variance. Sib. Math. J. 22 (1982),
787–796.
249. B. Solomyak, On the random series ∑ ±λ^n (an Erdős problem). Ann. Math. 142 (1995),
611–625.
250. L. Takács, On secondary stochastic processes generated by recurrent processes. Acta Math.
Acad. Sci. Hungar. 7 (1956), 17–29.
251. H. Thorisson, Coupling, stationarity, and regeneration. Springer, 2000.
252. G. Toscani, Wealth redistribution in conservative linear kinetic models. EPL (Europhysics
Letters). 88 (2009), 10007.
253. K. Urbanik, Functionals on transient stochastic processes with independent increments.
Studia Math. 103 (1992), 299–315.
254. D. Vere-Jones, Stochastic models for earthquake occurrence. J. Roy. Statist. Soc. Ser. B. 32
(1970), 1–62.
255. W. Vervaat, On a stochastic difference equation and a representation of nonnegative infinitely
divisible random variables. Adv. Appl. Probab. 11 (1979), 750–783.
256. Y. Wang, Convergence to the maximum process of a fractional Brownian motion with shot
noise. Stat. Probab. Letters. 90 (2014), 33–41.
257. T. Watanabe, Absolute continuity of some semi-selfdecomposable distributions and self-
similar measures. Probab. Theory Relat. Fields. 117 (2000), 387–405.
258. E. Waymire and V. K. Gupta, The mathematical structure of rainfall representations: 1. A
review of the stochastic rainfall models. Water Resour. Res. 17 (1981), 1261–1272.
259. G. Weiss, Shot noise models for the generation of synthetic streamflow data. Water Resour.
Res. 13 (1977), 101–108.
260. M. Westcott, On the existence of a generalized shot-noise process. Studies in probability and
statistics (papers in honour of Edwin J. G. Pitman), 73–88. North-Holland, 1976.
261. W. Whitt, Stochastic-process limits: an introduction to stochastic-process limits and their
application to queues. Springer, 2002.
262. E. T. Whittaker and G. N. Watson, A course of modern analysis. 4th Edition reprinted,
Cambridge University Press, 1950.
263. S. Wild, M. E. Nebel and H. Mahmoud, Analysis of Quickselect under Yaroslavskiy’s dual-
pivoting algorithm. Algorithmica. 74 (2016), 485–506.
264. B. B. Winter, Joint simulation of backward and forward recurrence times in a renewal
process. J. Appl. Probab. 26 (1989), 404–407.
265. M. Woodroofe, Nonlinear renewal theory in sequential analysis. SIAM, 1982.
266. A. L. Yakimiv, Probabilistic applications of Tauberian theorems. VSP, 2005.
267. M. Yamazato, On a J1-convergence theorem for stochastic processes on D[0, ∞) having
monotone sample paths and its applications. RIMS Kôkyûroku. 1620 (1999), 109–118.
268. M. Yor, Exponential functionals of Brownian motion and related processes. Springer, 2001.
269. A. Zeevi and P. W. Glynn, Recurrence properties of autoregressive processes with super-
heavy-tailed innovations. J. Appl. Probab. 41 (2004), 639–653.
Index

Bernoulli sieve, 1, 2, 191, 203, 207
  number of empty boxes, 2, 122, 191, 195, 203
  weak convergence, 192–194
Blackwell theorem, 199, 212, 215, 225
Breiman theorem, 7, 16, 17
Brownian motion, 20, 21, 111, 123, 124, 126, 127, 226
direct Riemann integrability, 91, 93, 198, 199, 212, 213
  sufficient conditions, 213
distribution
  lattice, 7, 8, 17, 19, 200
  Mittag–Leffler, 48, 134, 135
  nonlattice, 7, 8, 90, 92, 192, 199, 214
  positive Linnik, 50
elementary renewal theorem, 91, 157, 211
Erickson inequality, 13, 33, 36, 211
exponential functional of a Lévy process, 46, 58, 84, 134
fixed point
  of Poisson shot noise transform, 49, 85
  of smoothing transform, 49, 84
fractionally integrated
  inverse stable subordinator, 112, 116, 121, 123, 131, 136, 152, 165, 193, 206
    Hölder continuity, 132
    unboundedness, 132
  stable Lévy process, 112, 116, 120, 123, 127, 147, 162, 177
    continuity, 128
    unboundedness, 128
intrinsic martingale in the branching random walk, 180, 188
  logarithmic moment, 179, 182, 188
  supremum, 185
  uniform integrability, 179, 181
key renewal theorem
  for distributions with infinite mean, 198, 199, 215
  lattice case, 18, 215
  nonlattice case, 214
  version for nonintegrable functions, 219, 220, 223
Lüroth series, 51, 85
Lamperti representation, 133
Lorden inequality, 106, 156, 212
nonincreasing Markov chain, 202
ordinary random walk, 1, 4, 18, 19, 210, 226
  first-passage time, 29
    distributional subadditivity, 148, 155, 157, 211
    exponential moment, 231
    power moment, 232
  last-exit time, 29
    exponential moment, 232
    power moment, 232
  number of visits, 29, 230
    exponential moment, 231
    power moment, 232
  overshoot, 19, 90, 217
  supremum, 229
    power moment, 227
  undershoot, 90, 217
perpetuity, 1, 43, 179, 185
  almost sure finiteness, 44
  continuity properties, 52
    absolute continuity, 54, 56
    discreteness, 55
    mixtures, 56
    singular continuity, 54
  exponential moment, 58
  logarithmic moment, 57, 179
  power moment, 57
  related Markov chain, 43, 45
  tail behavior, 57
  weak convergence, 66, 67
perturbed random walk, 1, 3
  first-passage time, 29
    almost sure finiteness, 29
    exponential moment, 30
  last-exit time, 29
    almost sure finiteness, 29
    exponential moment, 31
    power moments, 31
  number of visits, 29
    almost sure finiteness, 29
    exponential moments, 30
    power moments, 31
  supremum, 6
    exponential moment, 6
    power moment, 6, 179, 227
    tail behavior, 7
  weak convergence, 20, 21
Poisson random measure, 20, 21, 25, 75, 193, 235
Poissonization, 195
random process
  birth and death, 95
  conditionally Gaussian, 112, 114, 121, 122, 125, 127, 136, 137, 145, 164, 165, 167
  extremal, 20
  Gaussian, 111, 113, 119, 120, 122, 125–127, 138, 161, 162, 193, 207
  inverse stable subordinator, 112, 115, 125, 126, 134, 206
  semi-stable Markov, 133
  shot noise
    Poisson, 46, 88, 175
    renewal, 87, 92
    stable Lévy, 21, 111, 130
  stable subordinator, 112, 131, 193
  stationary, 194
  stationary Ornstein–Uhlenbeck, 126
  stationary renewal, 90, 96, 203
  strong approximation, 226
  with immigration, 33, 87, 175
    examples, 88
    exponential moment, 169
    power moment, 170
    stationary, 91
    weak convergence, 91, 92, 113–115, 117, 119, 120
regular variation, 6, 15, 20, 118, 147, 148, 209
  in R^2_+, 110
  fictitious, 110, 127, 142
  limit function, 112, 127, 136, 142
  uniform in strips, 111, 127, 136, 142
  wide-sense, 110–112
renewal function, 13, 33, 34, 37, 38, 196, 210
  subadditivity, 36, 37, 39, 211
Skorokhod space, 20, 87, 137
  J1-topology, 20, 23, 28, 70, 115, 121, 126, 151, 233
  M1-topology, 115, 233

© Springer International Publishing AG 2016. A. Iksanov, Renewal Theory for Perturbed Random Walks and Similar Processes, Probability and Its Applications, DOI 10.1007/978-3-319-49113-4