
Ban Zheng∗, François Roueff†, Frédéric Abergel‡

January 23, 2013

arXiv:1301.5007v1 [q-fin.CP] 18 Jan 2013

Abstract

We introduce a multivariate Hawkes process with constraints on its conditional density. It is a multivariate point process with conditional intensity similar to that of a multivariate Hawkes process, but certain events are forbidden with respect to boundary conditions on a multidimensional constraint variable, whose evolution is driven by the point process. We study this process in the special case where the fertility function is exponential, so that the process is entirely described by an underlying Markov chain, which includes the constraint variable. Some conditions on the parameters are established to ensure the ergodicity of the chain. Moreover, scaling limits are derived for the integrated point process. This study is primarily motivated by the stochastic modelling of a limit order book for high frequency financial data analysis.

1 Introduction

Over the past two decades, high-frequency price dynamics in limit order books have received much attention. On the one hand, extensive statistical studies of limit order book dynamics and of the information content of the limit order book can be found in [9, 26, 39, 19, 30, 10, 25, 14, 41]; on the other hand, many researchers have proposed different models of limit order book dynamics, including equilibrium models, agent-based models and Markov models, see [36, 38, 17, 15, 1, 16]. More recently, point processes have been introduced in financial applications. Early works focus on modelling inter-event durations [23, 24] and the effect of duration on price impact and trade sign autocorrelation [20]. Dufour and Engle [22] extend these models to allow for the censoring of quotes after a trade by intervening trades. Bauwens and Hautsch [8] provide a comprehensive introduction to the application of point processes to financial time series. Hawkes processes belong to the class of self-exciting point processes, in which the intensity is driven by a weighted function of the time distance to previous points of the process. Hawkes processes originate from the seismology literature, where they are used to model the occurrence of earthquakes, see [28, 27, 35, 18, 13]. To the best of our knowledge, the first application of Hawkes processes to financial time series was introduced by [11]. Since then, a growing literature has applied Hawkes processes to the modelling of high frequency financial data, see [7, 29, 12, 31, 5, 2, 34, 21, 6]. Although nonparametric estimation of the intensity functions has been investigated by Reynaud-Bouret and Schbath [37] and Al Dayri et al. [4] in dimension d = 1, the majority of publications related to Hawkes processes work in the setting of exponential kernel functions for computational reasons. Embrechts et al. [21] apply marked multivariate Hawkes processes to model daily financial data. Muni Toke and Pomponio [34] introduce Hawkes processes in the modelling of trade-throughs. Bacry et al. use marked Hawkes processes to reproduce empirically observed microstructure noise and stylized facts such as the Epps effect and lead-lag in [5], and derive and characterize the exact macroscopic diffusion limit of this model in [6].

∗ Natixis, Equity Markets. E-mail: ban.zheng@natixis.com / ban.zheng@melix.net. The authors would like to thank the members of the Natixis quantitative research team for fruitful discussions.
† Télécom ParisTech (CNRS LTCI, Institut Mines-Télécom). E-mail: francois.roueff@telecom-paristech.fr
‡ BNP Paribas Chair of Quantitative Finance, École Centrale Paris, MAS Laboratory. E-mail: frederic.abergel@ecp.fr

In this contribution we wish to model the dynamics of the limit prices in the limit order book. We remark that the limits evolve according to specific constraints. For instance, the behavior of the best bid/best ask (buy/sell) prices is constrained by a minimal bid-ask spread: as soon as the minimal bid-ask spread is attained, the best bid price cannot increase until the best ask price goes up. Therefore, in this work, we propose to add a constraint variable to the classical multivariate Hawkes process to model the dynamics of the limit prices. We restrict ourselves to the Markov setting (with exponential fertility functions) and then investigate the ergodicity properties and scaling limits of the constrained multivariate Hawkes process. In contrast, the Markov models proposed in [17, 16, 1] use simple Poisson processes and do not take into account the cross-interaction between different events. Apart from its application to a limit order book, the constrained multivariate Hawkes process can be extended to many domains where the intensities of the process may depend on some constraint variables. The paper is structured as follows.
In Section 2 we introduce the constrained multivariate Hawkes processes and present the main assumptions and notation. Section 3 gives the main results on the geometrical ergodicity and the large scale behavior. The application to a limit order book is presented in Section 4. We have gathered the detailed proofs in Section 5. Some useful technical results are postponed to Appendix A.

2 Main assumptions and notation

2.1 Multivariate Hawkes process with constraints

Let p, q be two positive integers. We introduce a constrained multivariate Hawkes process defined by a couple (S, N), where {S(t), t ≥ 0} is a constraint process taking its values in Z_+^q = {1, 2, . . .}^q and N is a p-dimensional multivariate point process. We shall describe the dynamics of this model by the conditional intensity of N as defined in [13] and an evolution equation for S. We assume that S is right-continuous with left limits and denote by S(t−) its left limit at time t. The conditional intensity of the multivariate point process N at time t shall depend on the constraint variable S(t−). For notational compactness we will see N as a marked point process having arrivals in R with marks in {1, . . . , p}. We will denote by N_j, j = 1, . . . , p, the p point processes corresponding to each mark j, namely

N = Σ_{j=1}^{p} N_j ⊗ δ_j .

In other words, each thinned unmarked process N_j is given from N by the formula N_j(g) = N(g ⊗ 𝟙_{{j}}), for all non-negative measurable functions g defined on R. Here we use the notation μ(g) for the integral of g with respect to a measure μ. The constraint variable S(t−) acts on the point process by constraining the set of possible marks for the first event arriving


after time t in N. The constraint is added in the usual expression of the conditional intensity of a multivariate Hawkes process. More precisely, for all times t ≥ 0 and any mark i = 1, . . . , p, the conditional intensity of N_i given N|_(−∞,t) and S(t−) is set as

μ(t, i) = 0   if S(t−) ∈ A_i ,
μ(t, i) = μ_0(i) + Σ_{j=1}^{p} ∫_{u<t} φ_{i,j}(t − u) N_j(du)   otherwise ,        (2.1)

where μ_0(i) is the immigrant rate of mark i, φ_{i,j} is the fertility rate for producing a mark-i event from a mark-j event, and A_1, . . . , A_p are products of finite subsets of Z_+, A_i = A_i(1) × · · · × A_i(q). In turn, an arrival in N increases the value of the constraint by a value depending only on the mark. More precisely, the constraint process S satisfies the evolution equation, for all t > u ≥ 0,

S(t) = S(u) + N(𝟙_(u,t] ⊗ J) ,        (2.2)

where J is defined on {1, . . . , p} with values in Z^q. In other words, at each arrival in N with mark equal to i, S jumps by an increment given by J(i). Note that equations (2.1) and (2.2) are valid for t ≥ 0 and t > u ≥ 0, respectively, so that the distribution of the process {(S(t), N_j((0, t])), t ≥ 0, j = 1, . . . , p} is defined conditionally on the initial conditions given by S(0−) and N restricted to (−∞, 0). To summarize, this distribution is parameterized by the constraint sets A_1, . . . , A_p, the immigrant rates μ_0(i), i = 1, . . . , p, the cross-fertility rate functions φ_{i,j} : [0, ∞) → [0, ∞), i, j = 1, . . . , p, and the jump increments J(i) ∈ Z^q for i ∈ {1, . . . , p}. Meanwhile, we define an unconstrained multivariate Hawkes process N′ with the same fertility rates and immigrant intensities (but without constraints), that is, with conditional intensity on [0, ∞) satisfying, for i = 1, . . . , p,

μ′(t, i) = μ_0(i) + Σ_{j=1}^{p} ∫_{u∈(−∞,t)} φ_{i,j}(t − u) N′_j(du) ,   t ≥ 0 ,

where N′_j = N′(· ⊗ 𝟙_{{j}}) for j = 1, . . . , p. The average fertility matrix is defined by ℵ = [α_{i,j}]_{i,j=1,...,p} with

α_{i,j} = ∫ φ_{i,j} ,   1 ≤ i, j ≤ p .        (2.3)
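For intuition, the constrained dynamics (2.1)–(2.2) can be simulated by thinning: propose candidate events at a dominating rate and accept them with a probability proportional to the true conditional intensity, which the constraint sets to zero for forbidden marks. The sketch below is not part of the paper; it assumes the exponential fertility functions of Section 2.3 (so that the self-excitation decays at rate β between events), a scalar constraint (q = 1), and illustrative parameter values; all names are our own.

```python
import math
import random

def simulate(mu0, alpha, beta, A, J, s0, horizon, seed=1):
    """Thinning sketch of the constrained Hawkes process (2.1)-(2.2) with
    phi_{i,j}(u) = alpha[i][j] * beta * exp(-beta * u) and q = 1.
    A[i] is the set of constraint values at which mark i is forbidden."""
    rng = random.Random(seed)
    p = len(mu0)
    lam = [0.0] * p                 # self-excitation part of lambda(t, i)
    S, t, events = s0, 0.0, []
    while True:
        lam_bar = sum(mu0[i] + lam[i] for i in range(p))   # dominating rate
        w = rng.expovariate(lam_bar)
        lam = [x * math.exp(-beta * w) for x in lam]       # decay to proposal time
        t += w
        if t > horizon:
            return events
        # true intensities: zero for marks forbidden by the constraint S
        rates = [0.0 if S in A[i] else mu0[i] + lam[i] for i in range(p)]
        total = sum(rates)
        if rng.random() * lam_bar < total:                 # accept the proposal
            u, mark = rng.random() * total, p - 1
            for i in range(p):                             # draw the mark
                u -= rates[i]
                if u < 0.0:
                    mark = i
                    break
            lam = [lam[i] + beta * alpha[i][mark] for i in range(p)]
            S += J[mark]                                   # constraint update (2.2)
            events.append((t, mark, S))
```

With one mark raising the constraint (J = +1) and one lowering it (J = −1), and the lowering mark forbidden at S = 1, the constraint plays the role of a bid-ask spread that never drops below one tick.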

The following assumption ensures the existence of a stationary version of the (unconstrained) multivariate Hawkes process N′, see [18, Example 8.3(c)].

Assumption 1. The spectral radius of ℵ is strictly less than 1.

Under Assumption 1, we will use the following (well defined) vector

u = (Id_p − ℵ^T)^{−1} 1_p = Σ_{k≥0} (ℵ^T)^k 1_p ,        (2.4)

where 1_p denotes the p-dimensional all-ones vector, 1_p = [1 · · · 1]^T.
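Assumption 1 is exactly what makes the Neumann series in (2.4) converge: when the spectral radius of ℵ is below one, Σ_{k≥0} (ℵ^T)^k 1_p is finite and equals (Id_p − ℵ^T)^{−1} 1_p. A quick numerical sanity check (our own illustrative 2 × 2 fertility matrix, pure-Python linear algebra):

```python
def mat_vec(M, v):
    # matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def transpose(M):
    return [list(row) for row in zip(*M)]

# Illustrative average fertility matrix; its eigenvalues are 0.5 and 0.2,
# so the spectral radius is < 1 and Assumption 1 holds.
aleph = [[0.3, 0.2],
         [0.1, 0.4]]
p = 2
ones = [1.0] * p

# Truncated Neumann series u = sum_{k>=0} (aleph^T)^k 1_p from (2.4);
# the truncation error decays geometrically under Assumption 1.
u, term = [0.0] * p, ones[:]
for _ in range(200):
    u = [a + b for a, b in zip(u, term)]
    term = mat_vec(transpose(aleph), term)

# Then (Id_p - aleph^T) u = 1_p, i.e. u = (Id_p - aleph^T)^{-1} 1_p.
residual = [x - y for x, y in zip(u, mat_vec(transpose(aleph), u))]
print([round(r, 10) for r in residual])  # -> [1.0, 1.0]
```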


Here and in the following, Id_p denotes the p × p identity matrix and A^T denotes the transpose of a matrix A. For a real valued function w defined on {1, . . . , p}, we shall use the notation →w = [w(1), . . . , w(p)]^T. If w is vector-valued, we use the same notation to obtain a matrix; for instance, →J is a p × q matrix. In the following, we use bold face symbols only when q may be larger than one. In the case where q is set to 1, we shall write J and S_n in unbold face since they are scalar valued and, similarly, we write A_i in unbold face for the constraint sets.

2.2 A very special case

A very special case is obtained by setting φ_{i,j} ≡ 0 and q = 1. When we study the ergodicity of the process in this case, we only need to study the stationarity of the constraint variable since the intensities are constant. Suppose moreover that J only takes values +1 and −1 and that the constraint sets A_i are all equal to {1}. Let I_+, I_− be the partition of {1, . . . , p} corresponding to the indices where J takes values +1 and −1, respectively. Then {S(t), t ≥ 0} is a birth-death process on Z_+ with constant birth rate μ_+ = Σ_{i∈I_+} μ_0(i) and constant death rate μ_− = Σ_{i∈I_−} μ_0(i). By [3, Corollary 2.5], (S(t))_{t≥0} is ergodic if and only if

μ_+ − μ_− = →J^T μ_0 = Σ_{i=1}^{p} J(i) μ_0(i) < 0 .        (2.5)

2.3 The Markov assumption

We shall consider a particular shape of the fertility functions φ_{i,j} which implies a Markov property for the model. From now on, we suppose that there exists β > 0 such that

φ_{i,j}(u) = α_{i,j} β e^{−βu} ,   1 ≤ i, j ≤ p .

The parameter β is the reciprocal of a time: the larger β is, the shorter the dependence persists along the time between successive events. This setting allows us to obtain very precise results on the ergodicity of (S, N) and is also of interest in applications since the model is parameterized by a restricted set of well understood parameters. We denote a new process defined in the state space X = Z_+^q × R_+^p by X(t) = (S(t), λ(t)), where λ(t) = [λ(t, 1), . . . , λ(t, p)]^T with, for all i ∈ {1, . . . , p},

λ(t, i) = Σ_{j=1}^{p} ∫_{(−∞,t)} φ_{i,j}(t − u) N_j(du) .

In the following, we investigate the joint ergodicity of the constraint variable and of the point process.

Proposition 2.1. The process {X(t), t ≥ 0} is a Markov process.

This property directly follows from the exponential form of the fertility functions φ_{i,j} (see [32] in the unconstrained case). More importantly, we shall describe the dynamics of the process {X(t), t ≥ 0} through a discretely sampled version based on the description of the process N using its hazard rate. Let μ_0^0 denote an arbitrary positive constant (say μ_0^0 = 1). The continuous time process {X(t), t ≥ 0} is sampled at increasing discrete times (T_n)_{n≥1} defined by the jump instants of λ(t) and by the arrivals of an independent homogeneous PPP n^0 with intensity μ_0^0. Each jump of λ(t) corresponds to an arrival in the point process N; we denote its mark by I_n, where n is the positive integer such that the corresponding jump instant t equals T_n. For the other sample times T_n, which thus correspond to the arrivals of n^0, we set I_n = 0. Hence the marks of the embedded chain take values in the set {0, 1, . . . , p}. The sample time instants generated by n^0 are artificially added to avoid the periodicity of the embedded chain, see the end of the proof of Proposition 5.3. We define ∆_1 = T_1 and, for each n ≥ 2, ∆_n = T_n − T_{n−1}, and we write S_n = S(T_n), λ_n = λ(T_n) and X_n = X(T_n) = (S_n, λ_n).

Given (S_0, λ_0) and T_0 = 0, we can generate the sequence (∆_n, I_n, X_n), n = 0, 1, . . ., iteratively as follows. Given X_0, . . . , X_n, ∆_1, . . . , ∆_n and I_1, . . . , I_n, the conditional distribution of ∆^0_{n+1}, ∆^1_{n+1}, . . . , ∆^p_{n+1} is that of positive independent variables whose marginal distributions are determined by hazard rates Hr^0_{n+1}, Hr^1_{n+1}, . . . , Hr^p_{n+1} defined for each i and t ≥ 0 by

Hr^i_{n+1}(t) = μ_0^0   if i = 0 ,
Hr^i_{n+1}(t) = (μ_0(i) + λ_n(i) e^{−βt}) 𝟙_{A_i^c}(S_n)   if i > 0 .        (2.6)

Then, almost surely,

∆_{n+1} = min(∆^0_{n+1}, ∆^1_{n+1}, · · · , ∆^p_{n+1})   and   I_{n+1} = arg min_{i∈{0,··· ,p}} ∆^i_{n+1} ,

that is, I_{n+1} is the event type at time T_{n+1}. The constraint variable at time T_{n+1} is given by

S_{n+1} = S(T_{n+1}) = S_n + J_o(I_{n+1}) ,        (2.7)

where J_o is the extension of J to {0, 1, . . . , p} defined by J_o(i) = J(i) if i ≥ 1 and J_o(0) = 0_q, and the self-excitation intensities at time T_{n+1} are given by

λ_{n+1}(i) = λ(T_{n+1}, i) = λ_n(i) e^{−β∆_{n+1}} + β α_{i,I_{n+1}} ,        (2.8)

with the convention α_{i,0} ≡ 0. Between the sample times, the continuous time process is interpolated as S(t) = S_n and λ(t, i) = λ_n(i) e^{−β(t−T_n)} for all T_n ≤ t < T_{n+1}.        (2.9)

Observe that S(t) is constant between two consecutive jump instants and that S does not change at arrivals corresponding to the PPP n^0. In the following, we shall exclusively rely on this discrete description to study the ergodicity of {X(t), t ≥ 0}. This description implies Proposition 2.1.
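The discrete description above is directly implementable: each ∆^i_{n+1} has a hazard rate of the form μ + λ e^{−βt}, which is the superposition of a homogeneous rate μ and a decaying rate λ e^{−βt} of total mass λ/β, so both parts can be sampled in closed form. The sketch below is our own code (illustrative parameters, q = 1); it performs one transition of the embedded chain (∆_{n+1}, I_{n+1}, S_{n+1}, λ_{n+1}).

```python
import math
import random

def sample_delta(mu, lam, beta, rng):
    """First event time under the hazard rate mu + lam * exp(-beta * t):
    minimum of an Exp(mu) time and the first point of the decaying part,
    which exists with probability 1 - exp(-lam / beta)."""
    t_const = rng.expovariate(mu) if mu > 0 else math.inf
    e = rng.expovariate(1.0)
    if lam > 0 and e < lam / beta:
        t_decay = -math.log(1.0 - beta * e / lam) / beta
    else:
        t_decay = math.inf
    return min(t_const, t_decay)

def step(S, lam, mu0, mu00, alpha, beta, A, J, rng):
    """One transition of the embedded chain of Section 2.3 (q = 1)."""
    p = len(mu0)
    deltas = [rng.expovariate(mu00)]        # i = 0: auxiliary PPP n^0
    for i in range(p):
        if S in A[i]:
            deltas.append(math.inf)         # forbidden mark: hazard is zero
        else:
            deltas.append(sample_delta(mu0[i], lam[i], beta, rng))
    delta = min(deltas)
    mark = deltas.index(delta)              # argmin defines the event type
    lam = [x * math.exp(-beta * delta) for x in lam]
    if mark > 0:                            # mark 0 leaves S and the jumps out
        lam = [lam[i] + beta * alpha[i][mark - 1] for i in range(p)]
        S += J[mark - 1]
    return delta, mark, S, lam

# Illustrative run: spread-like constraint, the lowering mark forbidden at S = 1.
rng = random.Random(0)
S, lam = 2, [0.0, 0.0]
for _ in range(500):
    delta, mark, S, lam = step(S, lam, [0.5, 0.5], 1.0,
                               [[0.2, 0.1], [0.1, 0.2]], 1.0,
                               [set(), {1}], [1, -1], rng)
    assert S >= 1
```

Sampling each ∆^i separately and taking the argmin mirrors the construction via independent hazards; ties have probability zero.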

For any probability distribution ν on X = Z_+^q × R_+^p, we denote by P_ν the probability corresponding to the initial distribution ν at time t = 0. Notation E_ν corresponds to the associated expectation. If ν is a Dirac distribution at a point x ∈ X, we will simply write P_x and E_x.

From the description above, X_n = (S_n, λ_n), Y_n = (I_n, S_n, λ_n) and Z_n = (∆_n, I_n, S_n, λ_n), defined for n = 0, 1, . . ., are Markov chains respectively valued in X = Z_+^q × R_+^p, Y = {0, 1, . . . , p} × Z_+^q × R_+^p and Z = R_+ × {0, 1, . . . , p} × Z_+^q × R_+^p. We shall denote by Q, Q̃ and Q̄ the transition kernels of these Markov chains. Since the conditional distribution of (I_1, X_1) given (I_0, X_0) does not depend on I_0, and the conditional distribution of (∆_1, I_1, S_1, λ_1) given (∆_0, I_0, S_0, λ_0) does not depend on (∆_0, I_0), we have

Q̃(y, A) = P_x((I_1, X_1) ∈ A)   for all y = (i, x) ∈ Y and A ∈ B(Y) ,
Q̄(z, A) = P_x((∆_1, I_1, X_1) ∈ A)   for all z = (δ, i, x) ∈ Z and A ∈ B(Z) .        (2.10)

Remark 1. Observe that, from the description above, the whole path {Z_n, n ≥ 1} only depends on the initial condition set on X_0.

Because the ergodicity of Q will be proved using an induction on the number of constraints q, we need to introduce further Markov chains and transition kernels. To initiate the induction, we define the Markov chain {(Ǐ_n, λ̌_n), n ≥ 0} valued in {0, 1, . . . , p} × R_+^p, whose transition kernel, denoted by Q̌, is defined as Q̃ but with all the A_i's replaced by the empty set and without the constraint variable. This chain corresponds to the case q = 0, since it is associated to a classical (unconstrained) multivariate Hawkes point process. We will call this chain the unconstrained Hawkes embedded chain.

Remark 2. The conditional distribution of (Ǐ_1, λ̌_1) given (Ǐ_0, λ̌_0) does not depend on Ǐ_0. Hence we will use the notation Ě_ℓ or P̌_ℓ to underline this fact; namely,

Q̌(y, {j} × A) = P̌_ℓ(Ǐ_1 = j, λ̌_1 ∈ A)

for any y = (i, ℓ) ∈ {0, 1, . . . , p} × R_+^p, j ∈ {0, 1, . . . , p} and Borel subset A ⊂ R_+^p.

Furthermore, for any J ⊆ {1, . . . , q}, we denote by Q̃^(−J) the transition kernel defined on {0, 1, . . . , p} × Z_+^{q−#J} × R_+^p as the transition kernel Q̃ but without the constraint variables S_j, j ∈ J, and their corresponding constraint sets A_1(j), . . . , A_p(j). Similarly, we denote by Q̃^(+J) the transition kernel defined on {0, 1, . . . , p} × Z_+^{#J} × R_+^p as the transition kernel Q̃ but keeping only the constraint variables S_j, j ∈ J, and their corresponding constraint sets A_1(j), . . . , A_p(j). In particular, we have Q̃^(−∅) = Q̃^(+{1,...,q}) = Q̃ and Q̃^(−{1,...,q}) = Q̃^(+∅) = Q̌.

3 Main results

We now present the main results of this work. All the results rely on the Markov assumption introduced in Section 2.3. First, in Section 3.1, we show that the kernels Q and Q̃ defined in Section 2.3 are ψ-irreducible and aperiodic. We provide a partial drift condition in Section 3.2 and then prove that Q̌ is V-geometrically ergodic, where V is unbounded off petite sets, in Section 3.3. Then we study the ergodicity in the case q = 1 in Section 3.4; the general case is presented in Section 3.5. Finally, we determine the scaling limit of the point process in physical time in Section 3.6.

3.1 Irreducibility

Because of the constraint sets A_i and the function J, the path of the process S cannot evolve arbitrarily. The following definitions will be useful.

Definition 1. Let m be a positive integer and s ∈ Z_+^q. The set of admissible paths A^m(s) is defined as the set of (j_1, · · · , j_m) ∈ {0, 1, . . . , p}^m such that

s ∈ A_{j_1}^c ,   s + J_o(j_1) ∈ A_{j_2}^c ,   · · · ,   s + Σ_{n=1}^{m−1} J_o(j_n) ∈ A_{j_m}^c .        (3.1)

An admissible path (j_1, · · · , j_m) ∈ A^m(s) implies that, for any i = 0, 1, . . . , p and ℓ ∈ (0, ∞)^p, given Y_0 = (i, s, ℓ), the conditional probability to have I_1 = j_1, · · · , I_m = j_m (and thus, by (2.7), S_k = s + Σ_{n=1}^{k} J_o(j_n) for all k = 1, . . . , m) is positive.

The main assumption of this section is the following one. It will be used to establish the existence of small sets for Q̃, see Proposition 5.1.

Assumption 2. The fertility matrix ℵ defined in (2.3) is invertible. Moreover, if q ≥ 1, there exists s_o ∈ Z_+^q such that the following assertion holds: for all s ∈ Z_+^q, there exist an integer m ≥ p + 1 and an admissible path (j_1, · · · , j_m) ∈ A^m(s) such that

s + Σ_{n=1}^{m} J_o(j_n) = s_o   and   {j_{m−p+1}, · · · , j_m} = {1, . . . , p}

(possibly in a different order).

The assumption on the invertibility of ℵ will be useful for proving the ψ-irreducibility of the embedded Markov chain. The second part of the assumption says that there exists an admissible path (see Definition 1) whose last p steps contain the p different marks; it only depends on the constraint sets A_i and the function J. We will check this condition for the application of the constrained Hawkes process to a limit order book in Section 4.

We can now state the main result of this section. We refer to [33] for the definitions of small sets, petite sets and ψ-irreducibility.

Theorem 3.1. Let Q and Q̃ be the transition kernels defined in Section 2.3 on the spaces Z_+^q × R_+^p and {0, 1, . . . , p} × Z_+^q × R_+^p, respectively, with p ≥ 1 and q ≥ 0. Suppose that Assumptions 1 and 2 hold. Then the kernels Q and Q̃ are aperiodic and ψ-irreducible. Moreover, for all K ≥ 1 and M > 0, the sets {1, . . . , K}^q × (0, M]^p and {0, 1, . . . , p} × {1, . . . , K}^q × (0, M]^p are petite sets for Q and Q̃, respectively.

The proof is postponed to Section 5.

3.2 Partial drift condition

We now derive a partial drift condition that will be useful to obtain a "complete" drift condition on the process {X(t), t ≥ 0}. The drift condition is partial in the sense that it does not control S_n; it only says that, independently of the process S_n, λ_n should not have large excursions away from a compact set.

Proposition 3.2. Let Q be the kernel defined in Section 2.3 on the space Z_+^q × R_+^p with p ≥ 1 and q ≥ 0. Suppose that Assumption 1 holds. Then there exist γ > 0, θ ∈ (0, 1), M > 0 and b > 0 such that, for all s ∈ Z_+^q and ℓ ∈ R_+^p, we have

[Q(𝟙_{Z_+^q} ⊗ V_{1,γ})](s, ℓ) ≤ θ V_{1,γ}(ℓ) + b 𝟙_{(0,M]^p}(ℓ) ,        (3.2)

where

V_{1,γ}(ℓ) = e^{γ u^T ℓ} ,        (3.3)

with u defined as in (2.4).

The proof is postponed to Section 5.

3.3 The case q = 0

If q = 0, the process N is a standard (unconstrained) multivariate Hawkes process. In this case {λ̌_n, n ≥ 0} is a Markov chain (the constraint variable S_n vanishes) and the "partial" drift condition of Proposition 3.2 becomes a "complete" drift condition for this chain, since the drift function V_{1,γ} goes to infinity as soon as one of the components of λ̌_n goes to infinity, which makes it unbounded off petite sets. Note also that in this case Assumption 2 boils down to assuming that the fertility matrix ℵ defined in (2.3) is invertible.

Theorem 3.3. Let Q̌ be the transition kernel defined in Section 2.3 on the space {0, 1, . . . , p} × R_+^p. Suppose that ℵ is invertible and that Assumption 1 holds. Then there exists γ > 0 such that Q̌ is (𝟙_{{0,...,p}} ⊗ V_{1,γ})-geometrically ergodic, where V_{1,γ} is defined in (3.3).

Indeed, the sublevel sets of V_{1,γ} are small sets by Theorem 3.1 applied with q = 0, so that, as a consequence of Proposition 3.2 and [33, Chapter 15] (and also [33, Theorem 9.1.8] to get the Harris recurrence), {λ̌_n, n ≥ 0} is V_{1,γ}-geometrically ergodic. This conclusion also applies to the extended chain {(Ǐ_n, λ̌_n), n ≥ 0} with V_{1,γ} replaced by 𝟙_{{0,...,p}} ⊗ V_{1,γ}, because the conditional distribution of (Ǐ_1, λ̌_1) given (Ǐ_0, λ̌_0) does not depend on Ǐ_0.

Having proved the ergodicity of Q̌, we can establish the following result on its stationary distribution.

Proposition 3.4. Under the same assumptions as Theorem 3.3, denote by π̌ the stationary distribution of Q̌. Let w : {1, . . . , p} → R and let w_o be the extension of w to {0, 1, . . . , p} obtained by setting w_o(0) = 0. Then we have

π̌(w_o ⊗ 𝟙_{R_+^p}) = →w^T (Id_p − ℵ)^{−1} μ_0 / (μ_0^0 + 1_p^T (Id_p − ℵ)^{−1} μ_0) .        (3.4)

The proof is postponed to Section 5.

3.4 The case q = 1

In the case q = 1, we need a drift function that applies to X_n = (S_n, λ_n) and is unbounded off petite sets; that is, the drift function must diverge as at least one component of X_n goes to infinity. We introduce the following assumption.

Assumption 3. The following inequality holds:

→J^T (Id_p − ℵ)^{−1} μ_0 < 0 .        (3.5)

Note that Condition (2.5) corresponds to Assumption 3 in the special case ℵ = 0. We shall further denote

s^∗ = 1 + max_{j=1,...,p} max A_j .        (3.6)
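Both the sign condition of Assumption 3 and the stationary mean of the unconstrained chain only involve solving a p-dimensional linear system. The following is our own numerical sketch with illustrative parameters (for p = 2 the system can be solved by Cramer's rule):

```python
def solve2(M, b):
    # Cramer's rule for a 2x2 linear system M x = b
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - b[1] * M[0][1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

# Illustrative parameters: mark 1 raises the constraint (J = +1),
# mark 2 lowers it (J = -1); aleph has spectral radius < 1.
aleph = [[0.3, 0.2], [0.1, 0.4]]
mu0 = [0.4, 0.7]
J = [1, -1]
mu00 = 1.0                       # intensity of the auxiliary PPP n^0

I_minus_aleph = [[1.0 - aleph[0][0], -aleph[0][1]],
                 [-aleph[1][0], 1.0 - aleph[1][1]]]
x = solve2(I_minus_aleph, mu0)   # (Id_p - aleph)^{-1} mu_0

# Assumption 3: J^T (Id_p - aleph)^{-1} mu_0 < 0
drift = sum(J[i] * x[i] for i in range(2))
print(round(drift, 6))           # -> -0.375, so Assumption 3 holds here

# Stationary mean of w_o(I_n) under the unconstrained chain, here with
# w = J: a negative average constraint increment per embedded-chain step.
pi_w = drift / (mu00 + sum(x))
print(pi_w < 0)                  # -> True
```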

Theorem 3.5. Let q = 1 and p ≥ 1, and let {(S_n, λ_n), n ≥ 0} be the Markov chain on the space Z_+ × R_+^p with transition kernel Q defined in Section 2.3. Suppose that Assumptions 1, 2 and 3 hold. Then there exists γ_0^∗ > 0 such that, for all γ_0 ∈ (0, γ_0^∗] and all γ_1 > 0 small enough, Q is (V_{0,γ_0} ⊗ V_{1,γ_1})-geometrically ergodic, where V_{1,γ_1} is defined by (3.3) applied to the intensity λ_n and

V_{0,γ}(s) = e^{γs} .        (3.7)

The proof of Theorem 3.5 is omitted as it is a particular case of Theorem 3.7, which is proved in Section 5.

Remark 3. Assumptions 1 and 2 appear to be very mild. Assumption 1 is related to the stability of the underlying unconstrained Hawkes process. Assumption 2 is used to obtain small sets for the chains with kernels Q and Q̃. A natural question is to ask whether Assumption 3 is sharp. The following theorem partially answers this question. Note that, by definition of ν in the proof of Proposition 5.1, Assumption 2 implies that ψ({s_o} × (0, ∞)^p) > 0; thus, if s_o in Assumption 2 satisfies s_o ≥ s^∗, where s^∗ is defined by (3.6), then case (i) below does not happen.

Theorem 3.6. Let q = 1 and p ≥ 1. Let Q be the kernel defined in Section 2.3 and suppose that Assumptions 1 and 2 hold. Then Q is ψ-irreducible. Moreover, if

→J^T (Id_p − ℵ)^{−1} μ_0 > 0 ,        (3.8)

the two following assertions hold.

(i) If ψ({s^∗, s^∗ + 1, . . . } × (0, ∞)^p) > 0, then Q is transient.

(ii) Otherwise, ψ({s^∗, s^∗ + 1, . . . } × (0, ∞)^p) = 0 and Q is (𝟙_{{1,...,s^∗−1}} ⊗ V_{1,γ_1})-geometrically ergodic for all γ_1 > 0 small enough.

The proof is postponed to Section 5.

3.5 General case

To obtain a drift function in the general case q ≥ 1, we shall use the function V_{0,γ_0} defined by (3.7), applied multiplicatively to each component of S_n, and the function V_{1,γ_1} defined by (3.3), applied to the intensity λ_n. We shall see that the proof of the ergodicity for q ≥ 1 relies on an induction on q. For q = 1 the induction applies under the simple Assumption 3, because it allows us to compute some moment under the stationary distribution of the unconstrained chain (with kernel Q̌). In general the induction is more involved and we need some additional notation. For any set J ⊆ {1, . . . , q}, we define J^J : {1, . . . , p} → Z by

J^J(i) = Σ_{j∈J} J_j(i) .        (3.9)

We further denote by J_o^J the extension of J^J to {0, 1, . . . , p} defined by J_o^J(0) = 0 and, for J ⊆ {1, . . . , q}, the kernel Q̃^(+J) is defined as in Section 2.3.

. In practice. + + ∗ ∗ for all γ1 > 0 small enough.. ⊗(#J ) (+J ) (+J ) ˜ Q is (½{0.γ (δ) = eγδ .5. By Corollary 3.7]). . there exists γ0 > 0 such that for all γ0 ∈ (0. p} × Zq × Rp . see the details in Section 4. Indeed. 2. . Let q. there exists γ0 > 0 such that for all γ0 ∈ (0.. γ1 > 0 and V0.1.γ1 )-geometrically ergodic. . .11) end ∗ ∗ Then. Thus the ′ stationary distribution π (+J ) is well deﬁned.. (3.γ2 ⊗ ½{0. we actually need a functional CLT for the kernel Q deﬁned in Section 2..γ1 )geometrically ergodic for some γ0 .. see [5] for the non-constrained case. J } do / Check that π (+J ˜ ′ T (Idp − ℵ)−1 µ0 < 0 . . Q is (V2. q ˜ Suppose that the Assumptions of Theorem 3. 1.p} ⊗ V0. p ≥ 1 and Q be the kernel deﬁned in Section 2.γ1 )-geometrically ergodic. Observe that Condition (3.7 hold. Hence we must ﬁrst check that Q is geometrically ergodic. γ0 ]. The proof is postponed to the end of Section 5.4. ... γ0 ].γ0 ⊗ V1. Theorem 17.γ1 deﬁned in (3.γ0 and V1. Corollary 3.3). Remark 4.p} ⊗ V0. end end ∗ ∗ ˜ Then.. and #J ′ < k. We denote by π distribution.γ0 ⊗ V1. . . (3.3.. Theorem 17. .. We provide the ergodicity property of Q in the following corollary. we obtain a functional central limit theorem ˜ (FCLT) in discrete time for the chain Q.2]. Let us deﬁne V2.. . . one may check this assumption ˜ using Monte Carlo simulations..p} ⊗ V0. since the law of large number holds in this case (see [33.8. Q q is (½{0. It is interesting to investigate the microscopic behavior of the mid-price (mean of the best bid price and the best ask price) at large scales. there exists γ0 > 0 such that for all γ0 ∈ (0. 3.7 hold.11) with J ′ = ∅. 3. for all γ1 > 0 small enough. for all γ1 > 0 small enough.for k = 1.γ0 ⊗V1. γ0 ] and all q γ2 > 0 small enough.. . Then. p}. . Condition (3. .6 Scaling limit In the application of this model to the limit order book.4. q} such that #J = k do Check that → −J J for J ′ ⊂ J with J ′ ∈ {∅. q do for J ⊂ {1. Applying the results of [33. 
an increment of the best bid price or the best ask price at time Tk is speciﬁed by the mark Ik ∈ {0.10) ) J Jo ⊗ ½Zq ×Rp < 0 . To obtain a result in (physical) continuous time.10) is equivalent to (3. + + (3.γ1 )-geometrically ergodic.. Suppose that the Assumptions of Theorem 3.γ0 ˜ its stationary ⊗ V1.11) makes sense at this step.3 on the space R+ × {0. since #J = ′ ˜ k.6) and (3.. Remark 5. . . then Q is (½{0.γ : R+ → R+ by V2.p} ⊗V0. we have already checked that Q(+J ) is geometrically ergodic.12) 10 .

Let now g : Z → R be such that, for all γ_0, γ_1, γ_2 > 0,

sup_{z∈Z} |g(z)| / [V_{2,γ_2} ⊗ 𝟙_{{0,...,p}} ⊗ V_{0,γ_0}^{⊗q} ⊗ V_{1,γ_1}](z) < ∞ .        (3.13)

By Corollary 3.8, Q̄ admits a stationary distribution π̄ and we have π̄(|g|) < ∞. Let ḡ = g − π̄(g). By Corollary 3.8 and [33, Chapter 17], there exists R > 0 such that the Poisson equation ĝ − Q̄(ĝ) = ḡ admits a solution ĝ satisfying the bound |ĝ| ≤ R ([V_{2,γ_2} ⊗ 𝟙_{{0,...,p}} ⊗ V_{0,γ_0}^{⊗q} ⊗ V_{1,γ_1}] + 1), given by

ĝ(z) = Σ_{k=0}^{∞} Q̄^k(z, ḡ) .

It also follows that π̄(|ĝ|^2) < ∞. Define the (nonnegative) constant

σ_ḡ^2 := π̄( ĝ^2 − {Q̄ĝ}^2 ) .        (3.14)

Let us denote the linearly interpolated partial sums of ḡ(Z_n) by

s_n(t, ḡ) = Σ_{⌊nt⌋}(ḡ) + (nt − ⌊nt⌋) ( Σ_{⌊nt⌋+1}(ḡ) − Σ_{⌊nt⌋}(ḡ) ) ,

where Σ_k(h) = Σ_{j=1}^{k} h(Z_j). If σ_ḡ^2 is strictly positive, then, for any initial distribution, as n → ∞,

(n σ_ḡ^2)^{−1/2} s_n(t, ḡ) →d B_t ,        (3.15)

where →d denotes the weak convergence in C([0, 1]) (the space of continuous functions defined on [0, 1]) and B denotes a standard Brownian motion on [0, 1]. Note that σ_ḡ^2 may not be strictly positive. An interesting example is given by g = 𝟙_{R_+} ⊗ J_o ⊗ 𝟙_{Z_+^q} ⊗ 𝟙_{R_+^p}. In the case q = 1, we have Σ_n(g) = S_n − S_0; using Corollary 3.8, we then get that Σ_n(ḡ) = O_{P^∗}(1), which implies σ_ḡ^2 = 0.

We now derive the main result of this section, which determines the scaling limit of the constrained Hawkes process. We need some additional notation before stating the result. We shall use the notation Ē for the expectation under the stationary distribution π̄. Let w : {1, . . . , p} → R. Define

Ē(w) = Ē[w(I_1)] / Ē[∆_1] .        (3.16)

Further define

v(w) = σ_ḡ^2 ,        (3.17)

with ḡ = g − π̄(g) and g : Z → R defined by g(z) = w(i) − Ē(w) δ for all z = (δ, i, s, ℓ).

Theorem 3.9. Under the assumptions of Theorem 3.7, for any initial condition, we have, provided that v(w) > 0,

T^{−1/2} ( N(𝟙_{[0,tT]} ⊗ w) − t T Ē(w) ) →d (v(w)/Ē[∆_1])^{1/2} B_t ,        (3.18)

where →d is defined as in (3.15).

The proof is postponed to Section 5.

Remark 6. As shown above with the example g = 𝟙_{R_+} ⊗ J_o ⊗ 𝟙_{Z_+^q} ⊗ 𝟙_{R_+^p} in the case q = 1, the asymptotic variance v(w) may vanish.

3 allows us to use a ﬁnite set of parameters for this model. The Markov assumption of Section 2. namely β.4 Application to a limit order book (LOB) Let us apply our results in the Markov setting to a simple limit order book where only the two best limits are considered (Best Bid and Best Ask).76 BestBid BestAsk 15. i = 1 or 4 Ai = (4. This case corresponds to p = 4.PA BestBid BestAsk 15. FTE.72 15.71 0 15. The evolution is displayed in physical time (left) and in event time (right). In this example.73 Price 15. • Event 3 : Best Bid price moves upward one tick. we need to introduce a constraint.75 10 20 30 40 50 GMT Time (s) MidPrice Event time Figure 1: Limit order book of Orange : Evolution of Best Bid and Best Ask prices over a short time window of April 1. for sake of simplicity.76 BestBid/BestAsk dynamics. i = 1 or 4 i = 2 or 3 (4.74 15. i = 2 or 3 The spread variable S evolves according to (2.75 15. 2011. events 2 and 3 do not happen whenever S = 1.72 15.2) • Event 4 : Best Bid price moves downward one tick.PA 15. However.1) {1} . Hence we propose the constrained multivariate Hawkes process described in Section 2. FTE. Namely deﬁning the spread as S = BestAsk − BestBid (in ticks).2) with a scalar J deﬁned by J(i) = 1. • Event 2 : Best Ask price moves downward one tick. since the Best Bid has to remain at least one tick below the Best Ask. we focus on the following four best limit events : • Event 1 : Best Ask price moves upward one tick. ℵ and µ0 . In order to forecast future events based on past observations of the LOB. Figure 1 presents a snapshot of the price evolution in physical time and event time. BestBid/BestAsk dynamics. 12 .71 25850 25900 25950 26000 15.73 15. q = 1 and the constrained sets Ai are deﬁned by ∅.74 Price 15. Hawkes processes seem to be a sensible approach to model the time evolution of these events.1 to model this data. −1 .

Assumptions 1 and 3 then amount to assumptions on the parameters µ0 and ℵ. Assumption 2 says that ℵ has to be invertible and requires an additional property which only depends on the constrained sets Ai and J; hence we only need to check that this additional property holds. Indeed, for any K ≥ 1, set m = |K − 2| + p and so = 2. Then, given s ∈ {1, ..., K}, we exhibit an admissible path (j1, ..., jm) ∈ Am(s) as follows: for 1 ≤ i ≤ |s − 2|, we set ji = 2 if s ≥ 2 and ji = 1 in the case of s = 1, so that s + Σ_{k=1}^{|s−2|} Jo(jk) = 2 (= so); we set ji = 0 for |s − 2| < i ≤ m − 4; and we set jm−3 = 1, jm−2 = 4, jm−1 = 2 and jm = 3. Since {jm−3, ..., jm} = {1, ..., 4} and s + Σ_{k=1}^m J(jk) = 2, we easily check that (3.1) holds for this choice of (j1, ..., jm). Hence Assumption 2 holds, and our ergodicity result, Theorem 3.5, applies provided that ℵ is invertible and has spectral radius smaller than 1.

In the framework of our model, we focus on the scaling limit of the mid-price, defined as the middle price between the Best Bid and Best Ask prices. This value is often considered as the continuous time price of the asset. The mid price satisfies the following equation

P(t) = P(u) + N(½(u,t] ⊗ w), (4.3)

where w(i) takes values 1/2, −1/2, 1/2 and −1/2 for i = 1, 2, 3, 4, respectively. (Note that similar equations hold for the Best Bid and Best Ask prices with different functions w.) Applying our scaling limit theorem, we get that the scaling limit of the mid-price in physical time is given by

T^{−1/2} (P(tT) − P(0) − t T E(w)) →d (v(w)/E[∆1])^{1/2} Bt,

where E(w) and v(w) are respectively defined in (3.16) and (3.17), provided that v(w) > 0. Although it does not seem easy to check that v(w) > 0 (except perhaps by numerical means), we do expect this to be true; otherwise the asymptotic variance would vanish in the large scale behavior (see Remark 6). To conclude, the best-bid, best-ask and mid prices behave as co-integrated random walks, while the spread behaves as a stationary variable. The ergodicity of the underlying chain allows one to perform meaningful statistical analysis of the data. This will be done in a forthcoming paper.
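The stability requirement above (ℵ invertible with spectral radius smaller than 1) is easy to check numerically. The sketch below, with purely illustrative parameter values, verifies it and computes the mean intensity vector (Idp − ℵ)^{−1} µ0 of the corresponding unconstrained Hawkes process, whose total 1p^T(Idp − ℵ)^{−1}µ0 is the long-run event rate.

```python
# Numerical check of the stability condition: the spectral radius of the
# branching matrix ℵ (here `aleph`, illustrative values, not fitted ones)
# must be < 1; the mean intensity vector is then (Id - ℵ)^{-1} µ0.

import numpy as np

aleph = np.array([[0.2, 0.1, 0.1, 0.0],
                  [0.1, 0.2, 0.0, 0.1],
                  [0.1, 0.0, 0.2, 0.1],
                  [0.0, 0.1, 0.1, 0.2]])
mu0 = np.array([0.5, 0.5, 0.5, 0.5])        # baseline intensities

rho = max(abs(np.linalg.eigvals(aleph)))    # spectral radius of ℵ
assert rho < 1, "the chain would not be ergodic"

rate = np.linalg.solve(np.eye(4) - aleph, mu0)  # mean intensity vector
total_rate = rate.sum()                          # 1_p^T (Id - ℵ)^{-1} µ0
```

For these values each row of ℵ sums to 0.4, so ρ(ℵ) = 0.4 and each component of the mean rate is 0.5/0.6 = 5/6.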

5 Postponed proofs

In the following we shall denote

µ̄0 = max_j µ0(j), µ̲0 = min_j µ0(j), (5.1)
ᾱ = max_{i,j} αi,j, α̲ = min_{i,j} αi,j, (5.2)
J̄ = max_{i=1,...,p} Σ_{j=1}^q |→J(i, j)|. (5.3)

5.1 Proof of Theorem 3.1

To establish the existence of small sets, we need two preliminary results. We denote the j-th column vector of ℵ by ℵj, so that ℵ = [ℵ1 · · · ℵp].

Lemma 5.1. Let x = (s, ℓ) ∈ X, j ∈ A1(s) with j ≠ 0 and g : X → R+. Then

Ex[g(λ1) ½{I1=j}] ≥ µ̲0 ∫0^{+∞} g(e^{−tβ} ℓ + βℵj) e^{−2(µ̄0+ℓ̄)t} dt, (5.4)

where ℓ̄ = max_i ℓ(i).

Proof. Using (2.9), we get that Ex[g(λ1)½{I1=j}] = Ex[g(e^{−∆1^j β} ℓ + βℵj) ½{∆1^j < U1^j}], where U1^j = min_{i≠j} ∆1^i. Observe that U1^j is stochastically larger than Exp(µ̄0 + ℓ̄), where Exp(z) denotes the exponential distribution with mean 1/z. Since ∆1^j and U1^j are independent, we thus have

Ex[g(λ1)½{I1=j}] ≥ Ex[g(e^{−∆1^j β} ℓ + βℵj) e^{−(µ̄0+ℓ̄)∆1^j}].

Moreover, using that the hazard rate of ∆1^j is bounded between µ̲0 and µ̄0 + ℓ̄, we see that the density of ∆1^j is bounded from below by µ̲0 e^{−(µ̄0+ℓ̄)t} on t ∈ R+. Hence we get (5.4).

Lemma 5.2. Let m be an integer ≥ 1, M ∈ R+ and g : X → R+. Let x = (s, ℓ) ∈ X with ℓ ∈ (0, M]p and (j1, ..., jm) ∈ Am(s) with ji ≠ 0 for i = 1, ..., m. Then

Ex[g(λm) ½{I1=j1,...,Im=jm}] ≥ µ̲0^m ∫_{R+^m} g(e^{−Σ_{i=1}^m ti β} ℓ + e^{−Σ_{i=2}^m ti β} βℵ_{j1} + · · · + βℵ_{jm}) e^{−Cm,M Σ_{i=1}^m ti} dtm · · · dt1, (5.5)

where Cm,M = 2(µ̄0 + M + (m − 1)βᾱ).

Proof. We proceed by induction on m. The case m = 1 is given by Lemma 5.1. To get the result for any m ≥ 2, observe that, with the Markov property, the distribution of {(Ik, Xk), k ≥ 2} in the event {X0 = x} given (I1, X1) does not depend on I1, so that

Ex[g(λm)½{I1=j1,...,Im=jm}] = Ex[½{I1=j1} E[g(λm)½{I2=j2,...,Im=jm} | (I1, X1)]], (5.6)

with E[g(λm)½{I2=j2,...,Im=jm} | (I1, X1)] = E_{X1}[g(λ_{m−1}) ½{I1=j2,...,I_{m−1}=jm}]. Since ℓ ∈ (0, M]p implies λ1 ∈ (0, M + βᾱ]p, the right-hand side of the last display can be bounded from below by applying the induction hypothesis with (j1, ..., j_{m−1}) replaced by (j2, ..., jm), ℓ = λ1 and the constant C_{m−1,M+βᾱ} = 2(µ̄0 + M + βᾱ + (m − 2)βᾱ) = Cm,M.

Plugging this bound in (5.6), we get

Ex[g(λm)½{I1=j1,...,Im=jm}] ≥ µ̲0^{m−1} Ex[½{I1=j1} ∫_{R+^{m−1}} g[. . .] e^{−Cm,M Σ_{i=1}^{m−1} ti} dt_{m−1} · · · dt1], (5.7)

where the omitted argument of g reads [. . .] = e^{−Σ_{i=1}^{m−1} ti β} λ1 + e^{−Σ_{i=2}^{m−1} ti β} βℵ_{j2} + · · · + βℵ_{jm}. Using again Lemma 5.1, we have

Ex[½{I1=j1} g[. . .]] ≥ µ̲0 ∫0^{+∞} g[. . .′] e^{−2(µ̄0+ℓ̄)t} dt,

where the omitted argument in the right-hand side reads

[. . .′] = e^{−Σ_{i=1}^{m−1} ti β} (e^{−tβ} ℓ + βℵ_{j1}) + e^{−Σ_{i=2}^{m−1} ti β} βℵ_{j2} + · · · + βℵ_{jm} = e^{−Σ_{i=1}^{m−1} ti β − tβ} ℓ + e^{−Σ_{i=1}^{m−1} ti β} βℵ_{j1} + · · · + βℵ_{jm}.

Hence we get the result by applying the Tonelli theorem in (5.7).

Proposition 5.3. Let Q be the transition kernel defined in Section 2.3 on the space Zq+ × Rp+ with p, q ≥ 1. Suppose that Assumption 2 holds. Then there exists a probability measure ν on X such that, for all K ≥ 1 and M > 0, there exist ǫ > 0 and a positive integer m such that {1, ..., K}q × (0, M]p is an (m, ǫ, ν) and (m + 1, ǫ, ν)-small set for Q.

Before providing the proof of this result, let us state the following corollary, which follows by observing that the conditional distribution of (I1, S1, λ1) given (I0, S0, λ0) does not depend on I0.

Corollary 5.4. Let Q̃ be the transition kernel defined in Section 2.3 on the space {0, ..., p} × Zq+ × Rp+ with p, q ≥ 1. Suppose that Assumption 2 holds. Then there exists a probability measure ν̃ such that, for all K ≥ 1 and M > 0, there exist ǫ > 0 and a positive integer m such that {0, ..., p} × {1, ..., K}q × (0, M]p is an (m, ǫ, ν̃) and (m + 1, ǫ, ν̃)-small set for Q̃.

Proof of Proposition 5.3. Let K ≥ 1 and M > 0 and denote C = {1, ..., K}q × (0, M]p. By Assumption 2, there exist an integer m > 0 and an admissible path (j1, ..., jm) such that s + Σ_{n=1}^{m−p} J(jn) = so and {j_{m−p+1}, ..., jm} = {1, ..., p}, where so ∈ Zq does not depend on s. Hence, for all subsets A ∈ B(Zq) and B ∈ B(Rp), we may write

Qm(x, A × B) = Ex[½A(Sm) ½B(λm)] ≥ Ex[½A(Sm) ½B(λm) Π_{i=1}^m ½{Ii=ji}],

where, using that Sm = so on the event {Ii = ji, i = 1, ..., m} and the Markov property,

Ex[½A(Sm) ½B(λm) Π_{i=1}^m ½{Ii=ji}] = ½A(so) Ex[Π_{i=1}^{m−p} ½{Ii=ji} E_{X_{m−p}}[½B(λp) Π_{i=1}^p ½{Ii=j_{m−p+i}}]]. (5.8)

Since the path is admissible, the last p-steps path (j_{m−p+1}, ..., jm) belongs to Ap(S_{m−p}) with nonzero marks. Applying Lemma 5.2 with g = 1 (and bounding the resulting constant by Cm,M), we further have that, under {X0 = x},

Ex[Π_{i=1}^{m−p} ½{Ii=ji}] ≥ µ̲0^{m−p} ∫_{R+^{m−p}} e^{−Cm,M Σ ti} dt = (µ̲0/Cm,M)^{m−p}. (5.9)

By (2.2), under {X0 = x, Ii = ji, i = 1, ..., m − p}, we have λ_{m−p} ∈ (0, M′]p, where M′ = M + ᾱβ(m − p). Moreover, given Yi, i = 0, ..., m − p, the conditional distribution of Yj for j > m − p only depends on X_{m−p}, by the Markov property. Hence, applying Lemma 5.2 with Cm,M′ = 2(µ̄0 + M′ + (m − 1)ᾱβ), we get that, under {X0 = x, Ii = ji, i = 1, ..., m − p},

E_{X_{m−p}}[½B(λm) Π_{i=m−p+1}^m ½{Ii=ji}] ≥ µ̲0^p ∫_{R+^p} ½B(e^{−Σ_{i=1}^p ti β} ℓ + e^{−Σ_{i=2}^p ti β} βℵ_{j_{m−p+1}} + · · · + βℵ_{jm}) e^{−Cm,M′ Σ_{i=1}^p ti} dtp · · · dt1, (5.10)

where ℓ = λ_{m−p}. Setting u = e^{−β Σ_{i=1}^p ti} and ϑ = [ϑ(1), ..., ϑ(p − 1)]^T with ϑ(j) = e^{−Σ_{i=j+1}^p ti β} for j = 1, ..., p − 1, we get

E_{X_{m−p}}[½B(λm) Π_{i=m−p+1}^m ½{Ii=ji}] ≥ (µ̲0^p / β^{p−1}) ∫_D ½B(uℓ + βℵϑ) u^{Cm,M′/β} dϑ du, (5.11)

where D = {(u, ϑ) : 0 < u < ϑ(1) < · · · < ϑ(p − 1) < 1}. Applying Lemma A.1 with Γ = βℵ and γ = β, we get that there exist ǫ1 > 0 and a probability measure ν1 such that

E_{X_{m−p}}[½B(λm) Π_{i=m−p+1}^m ½{Ii=ji}] ≥ ǫ1 ν1(B). (5.12)

It is important to note that ν1 only depends on β and ℵ, and thus neither on M nor on K. Using Inequalities (5.9) and (5.12) in (5.8), we get that

Qm(x, A × B) ≥ ǫ1 ν1(B) ½A(so) Ex[Π_{i=1}^{m−p} ½{Ii=ji}] ≥ ǫ ν(A × B),

for some positive constant ǫ not depending on x, where ν is the measure on X defined by ν(A × B) = ½A(so) ν1(B). In other words, C is an (m, ǫ, ν)-small set (see [33]). To obtain that C is also an (m + 1, ǫ, ν)-small set (by possibly decreasing ǫ), we simply observe that we can carry out the same proof with m replaced by m + 1 and the sequence of marks (j1, ..., jm) replaced by (0, j1, ..., jm).

Proof of Theorem 3.1. In Proposition 5.3, the measure ν does not depend on the set C, which can be chosen to contain any arbitrary point x in the state space X. Hence the chain is φ-irreducible with φ = ν. Moreover, the set C is (m, ǫ, ν)-small and (m + 1, ǫ, ν)-small; hence the chain is aperiodic, see [33, Section 5.4.3]. These arguments hold for the kernel Q̃ as well, and thus we obtain Theorem 3.1.

5.2 Proof of Proposition 3.3

Recall the definition of u in (2.6). We shall further denote

1 ≤ u̲ = min_i u(i) ≤ ū = max_i u(i) < ∞. (5.13)

The inequality u̲ ≥ 1 follows from the fact that ℵ^k has non-negative entries for all k ≥ 1. Observe that, given the initial condition x = (s, ℓ) ∈ Zq+ × Rp+, the hazard rate of ∆1 = min(∆1^i, i = 0, 1, ..., p) reads

Hr(u) = Σ_{i=0}^p Hri(u) = µ0 + 1p^T µ0 + 1p^T ℓ e^{−βu}, u ≥ 0,

and hence the corresponding probability density function is given by u ↦ Hr(u) e^{−Ir(u)} on R+, where Iri(t) = ∫0^t Hri(u) du and

Ir(t) = Σ_{i=0}^p Iri(t) = µ0 t + 1p^T µ0 t + (1p^T ℓ / β)(1 − e^{−βt}). (5.15)

Moreover, given ∆1, the conditional probability that the first arrival's mark is i is given by Px(I1 = i | ∆1) = Hri(∆1)/Hr(∆1), for any i ∈ {0, 1, ..., p}. Using (2.9), we have λ1 = ℓ e^{−β∆1} + βℵ_{I1} ½{I1≠0}. Hence, by conditioning on ∆1, using that ℵ^T u = u − 1p (see (2.4)) and setting u(0) = 1, we get

[Q(½Zq ⊗ V1,γ)](x) = Ex[exp(γ u^T ℓ e^{−β∆1} + γβ u^T ℵ_{I1} ½{I1≠0})] = Ex[exp(γ u^T ℓ e^{−β∆1}) Σ_{i=0}^p Px(I1 = i | ∆1) e^{γβ(u(i)−1)}]. (5.14)

Using the hazard rate of ∆1 under Px, we thus get

[Q(½Zq ⊗ V1,γ)](x) = ∫0^∞ e^{γ u^T ℓ e^{−βu} − Ir(u)} Σ_{i=0}^p e^{γβ(u(i)−1)} Hri(u) du.

Using that e^{γβ(u(0)−1)} ≤ e^{γβ(u(i)−1)} for all i = 1, ..., p and the particular form of Hr0 and Hri, we easily obtain that

Σ_{i=0}^p e^{γβ(u(i)−1)} Hri(u) ≤ µ0 + Σ_{i=1}^p e^{γβ(u(i)−1)} (µ0(i) + ℓ(i) e^{−uβ}).

Consequently, we have

[Q(½Zq ⊗ V1,γ)](x)/V1,γ(ℓ) ≤ (µ0 + Σ_{i=1}^p e^{γβ(u(i)−1)} µ0(i)) (1/V1,γ(ℓ)) ∫0^∞ e^{γ u^T ℓ e^{−βu} − Ir(u)} du + Σ_{i=1}^p e^{γβ(u(i)−1)} ℓ(i) (1/V1,γ(ℓ)) ∫0^∞ e^{γ u^T ℓ e^{−βu} − Ir(u) − uβ} du. (5.16)

Applying Lemma A.2 with a = γ u^T ℓ + 1p^T ℓ/β and β′ = µ0 + 1p^T µ0, we have

(1/V1,γ(ℓ)) ∫0^∞ e^{γ u^T ℓ e^{−βu} − Ir(u)} du = ∫0^∞ exp((γ u^T ℓ + 1p^T ℓ/β)(e^{−βu} − 1) − (µ0 + 1p^T µ0)u) du → 0 as 1p^T ℓ → ∞. (5.17)

Similarly, applying Lemma A.2 with β′ = β + µ0 + 1p^T µ0, we have

(1/V1,γ(ℓ)) ∫0^∞ e^{γ u^T ℓ e^{−βu} − Ir(u) − uβ} du ∼ 1/(βγ u^T ℓ + 1p^T ℓ) as 1p^T ℓ → ∞. (5.18)

Inserting (5.17) and (5.18) in (5.16), we get

lim sup_{1p^Tℓ→∞} [Q(½Zq ⊗ V1,γ)](x)/V1,γ(ℓ) ≤ sup_{ℓ∈(0,∞)^p} Σ_{i=1}^p e^{γβ(u(i)−1)} ℓ(i) / (βγ u^T ℓ + 1p^T ℓ) = θ(γ).

Now observe that, by (5.13), for all ℓ ∈ (0, ∞)^p we have 0 < u̲ ≤ u^T ℓ / 1p^T ℓ ≤ ū < ∞. Using a Taylor expansion of the exponential function at the origin, we have

Σ_{i=1}^p e^{γβ(u(i)−1)} ℓ(i) ≤ 1p^T ℓ + γβ(u^T ℓ − 1p^T ℓ) + C γ² 1p^T ℓ,

and thus

θ(γ) ≤ 1 − βγ/(1 + βγ ū) + C′ γ² = 1 − βγ + O(γ²),

where C, C′ do not depend on ℓ. Hence, taking γ1 > 0 small enough, we have θ(γ) < 1 for all γ ∈ (0, γ1]. Choosing such a γ and setting θ = θ(γ), we get (3.2) for ℓ out of (0, M]p provided that M is large enough. Since V1,γ is bounded on (0, M]p, this achieves the proof.

5.3 Proof of Corollary 3.4

We use the definitions and notation of Section 2.3 but for the unconstrained chain. By [5, Theorem (1)], we have

lim_{T→∞} (1/T) N′(½(0,T] ⊗ w) = w^T (Idp − ℵ)^{−1} µ0 a.s. (5.19)

On the other hand, by the law of large numbers for Harris recurrent chains (see [33, Theorem 17.1.7]), we have

lim_{n→∞} (1/n) Σ_{k=1}^n wo(Ǐk) = π̌(wo ⊗ ½Rp) a.s. [P̌∗], (5.20)

where [P̌∗] means that the result holds for any initial distribution on (Ǐ0, λ̌0). Note that we used that wo(0) = 0, so that only the marks in {1, ..., p} are counted in the sum of the left-hand side of (5.20). Observe also that, for any n ≥ 1,

ň0([0, Ťn]) + N′(½[0,Ťn] ⊗ ½{1,...,p}) = n,

where Ťn is defined as Tn in Section 2.3 but for the unconstrained chain (recall that ň0 is an independent homogeneous PPP which corresponds to the events with marks equal to 0). Since (Ťn) is an increasing sequence, we get that it diverges a.s., and thus, applying (5.19) with w = ½{1,...,p},

lim_{n→∞} (1/Ťn) N′(½[0,Ťn] ⊗ ½{1,...,p}) = 1p^T (Idp − ℵ)^{−1} µ0 a.s. [P̌∗]. (5.21)

The last two displays yield lim_{n→∞} n/Ťn = µ0 + 1p^T (Idp − ℵ)^{−1} µ0 a.s. [P̌∗], and, combining this with (5.20), we obtain lim_{n→∞} (1/n) N′(½[0,Ťn] ⊗ w) = π̌(wo ⊗ ½Rp) a.s. [P̌∗], which gives the result.

5.4 Proof of Theorem 3.6

The case (i) follows from Proposition 3.5 and [33], which give that Q is transient. We next consider the case (ii), so that ψ({s2} × (0, ∞)p) > 0 for some s2 ≥ s∗. By (3.8) there exists i ∈ {1, ..., p} such that J(i) > 0. For any m ≥ 1, the constant sequence (i, ..., i) of length m belongs to Am(s2) (see Definition 1). It follows that ψ({s2 + mJ(i)} × (0, ∞)p) > 0 for all m ≥ 1. We shall prove that, for all s′ ≥ 1 and M > 0,

lim inf_{s→∞} inf_{ℓ∈(0,M]p} P(s,ℓ)(inf_{n≥1} Sn > s′) > 0. (5.22)

Take s′ such that ψ({s′} × (0, ∞)p) > 0, define s = s2 + mJ(i) and take m and M > 0 large enough so that ψ({s} × (0, M)p) > 0 and P(s,ℓ)(inf_{n≥1} Sn > s′) > 0 for all ℓ ∈ (0, M]p. We thus have that the probability that the chain {Xk, k ≥ 1} avoids {s′} × (0, M]p, starting from anywhere in {s} × (0, M]p, is positive. Since these two sets have positive ψ-measure, we get by [33] that Q is transient.
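The long-run event rate identified in the displays above instantiates, in the univariate case, as µ0/(1 − α) for branching ratio α. A quick Monte Carlo sketch (our own illustrative code, standard Ogata-style thinning with the exponential kernel's O(1) intensity update) is consistent with this law of large numbers.

```python
# Univariate Hawkes process with intensity mu + sum alpha*beta*exp(-beta(t-ti)),
# branching ratio alpha < 1, simulated by thinning; the empirical event rate
# over [0, T] should approach mu / (1 - alpha).

import math
import random

def simulate_hawkes_exp(mu, alpha, beta, T, rng):
    t, lam_excess, events = 0.0, 0.0, []   # lam_excess = intensity above mu
    while t < T:
        lam_bar = mu + lam_excess          # bounds the (decreasing) intensity
        w = rng.expovariate(lam_bar)
        lam_excess *= math.exp(-beta * w)  # decay over the waiting time
        t += w
        if t >= T:
            break
        if rng.random() * lam_bar <= mu + lam_excess:
            events.append(t)               # accepted point
            lam_excess += alpha * beta     # jump of the exponential fertility
        # a rejected candidate simply lowers the bound at the new time
    return events
```

Between jumps the intensity only decreases, so the current intensity is a valid thinning bound until the next accepted event; this is what makes the exponential-kernel case so convenient.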

Hence it only remains to show that (5.22) holds. Set M̌n = Σ_{k=1}^n J̌o(Ǐk) and U = inf_{n≥1} M̌n. (Recall that U only depends on the unconstrained chain (Ǐn, λ̌n), n ≥ 0.) By Condition (3.8), we have lim_{n→∞} M̌n = ∞ a.s. [P̌∗]; hence U = inf_n M̌n is valued in R a.s. [P̌∗] (it equals −∞ with probability 0). Now, under the event {S0 = s}, we have Sn = s + M̌n as long as Sk does not hit the constraint set ∪_{i=1}^p Ai for k = 1, ..., n. Hence, for all s ≥ s∗, the event {inf_n Sn ≥ s∗} coincides with the event {U + s ≥ s∗}, and under this latter event we have inf_n Sn = s + U. It follows that, for all s ≥ s∗ and all ℓ ∈ Rp+,

P(s,ℓ)(inf_{n≥1} Sn > s′) ≥ Pℓ(U > (s′ ∨ s∗) − s). (5.23)

For all s1 > 0, define ws1(ℓ) := Pℓ(U > −s1); observe that ws1 is a function valued in [0, 1]. For k ≥ 1, set Uk = inf_{n≥k} M̌n, so that U = M̌1 ∧ · · · ∧ M̌k ∧ U_{k+1}. Since M̌1, ..., M̌k ≥ −kJ̄, we have, for all s1 > kJ̄,

ws1(ℓ) = Pℓ(U > −s1) = Pℓ(U_{k+1} > −s1) = [Q̌k(ws1)](ℓ).

By geometric ergodicity of the unconstrained chain, Q̌ admits a stationary distribution π̌ and there are constants C > 0 and θ ∈ (0, 1) such that, for all ℓ, ℓo ∈ Rp+ and k ≥ 1,

[Q̌k(ws1)](ℓ) ≥ [Q̌k(ws1)](ℓo) − C θk (V1,γ(ℓ) + V1,γ(ℓo)).

Finally, fix some ℓo ∈ Rp+ and M > 0. Choose k large enough so that c = C θk (V1,γ(M 1p) + V1,γ(ℓo)) < 1/2. Then we may choose s1 large enough, with s1 > kJ̄, so that [Q̌k(ws1)](ℓo) = Pℓo(U_{k+1} > −s1) ≥ Pℓo(U > −s1) ≥ 1/2. The last four displays yield that, for all ℓ ∈ (0, M]p and all s1 large enough,

Pℓ(U > −s1) ≥ 1/2 − c > 0.

This and (5.23) yield (5.22), which concludes the proof.

5.5 Proof of Theorem 3.7: induction on q

We have already shown that the chains Q and Q̃ are ψ-irreducible and aperiodic, and exhibited petite sets. To obtain a drift condition in the constrained case q ≥ 1, we shall reason by induction on the number of constraints q. We shall rely on the transition kernels Q̃(−J) introduced in Section 2.3 for subsets J in {1, ..., q}. For such a given set J, we shall further denote by E(−J),x[ · ] the expectation for the chain with transition kernel Q̃(−J) started at {X0 = x}. (The fact that this expectation only depends on x is explained in Remark 1.)

Proposition 5.5. Let p, q ≥ 1 and define the transition kernels Q on the space Zq+ × Rp+ as in Section 2.3. Suppose that ℵ is invertible and that Assumption 1 holds. For any non-empty J ⊆ {1, ..., q}, let {(Ǐn, Šn, λ̌n), n ≥ 0} be a Markov chain on the space {0, 1, ..., p} × Z+^{q−#J} × Rp+ with transition kernel Q̃(−J) defined in Section 2.3, and suppose that it satisfies the following condition, in which w(i) = Σ_{j=1}^q →J(i, j).

(C) For all γ1 > 0 small enough, there exists γ0∗ > 0 such that, for all γ0 ∈ (0, γ0∗],

lim_{m→∞} sup_{(s,ℓ)∈Z+^{q−#J}×Rp+} E(−J),(s,ℓ)[e^{γ0 Σ_{k=1}^m w(Ǐk)} V1,γ1(λ̌m − ℓ)] = 0. (5.24)

Then the following assertion holds.

(D) For all γ1 > 0 small enough, there exists γ0∗ > 0 such that, for all γ0 ∈ (0, γ0∗], there exist m ≥ 1, K > 0, b ≥ 0 and θ ∈ (0, 1) such that

Qm(V0,γ0^{⊗q} ⊗ V1,γ1)(x) ≤ θ (V0,γ0^{⊗q} ⊗ V1,γ1)(x) + b, (5.25)

for all x = (s, ℓ) ∈ X, where V1,γ and V0,γ are defined in (3.3) and (3.6), respectively.

Proof. Take x = (s, ℓ) ∈ X. By (2.2) and (2.3), we have, in the event {S0 = s},

V0,γ0^{⊗q}(Sm) = V0,γ0^{⊗q}(s) e^{γ0 Σ_{k=1}^m Σ_{j=1}^q Jj(Ik)} = V0,γ0^{⊗q}(s) e^{γ0 Σ_{k=1}^m w(Ik)} ≤ e^{γ0 J̄ m} V0,γ0^{⊗q}(s). (5.26)

It follows that

Qm(V0,γ0^{⊗q} ⊗ V1,γ1)(x) = V0,γ0^{⊗q}(s) Ex[e^{γ0 Σ_{k=1}^m w(Ik)} V1,γ1(λm)]. (5.27)

We first consider x such that V0,γ0^{⊗q}(s) ≤ K. By the drift condition of Proposition 3.3 (which holds for some γ > 0 and thus, by the Jensen inequality, for any γ1 ∈ (0, γ]), there exist b1 > 0 and θ1 ∈ (0, 1) such that, for all γ1 > 0 small enough and all m ≥ 1,

Qm(½Zq ⊗ V1,γ1)(x) ≤ θ1^m V1,γ1(ℓ) + b1 (1 + θ1 + · · · + θ1^{m−1}) ≤ θ1^m V1,γ1(ℓ) + b1/(1 − θ1).

Hence, by (5.26) and (5.27),

Qm(V0,γ0^{⊗q} ⊗ V1,γ1)(x) ≤ e^{γ0 J̄ m} V0,γ0^{⊗q}(s) [θ1^m V1,γ1(ℓ) + b1/(1 − θ1)] ≤ (e^{γ0 J̄} θ1)^m (V0,γ0^{⊗q} ⊗ V1,γ1)(x) + b1 e^{γ0 J̄ m} V0,γ0^{⊗q}(s)/(1 − θ1).

For a given θ1 < 1, we can choose γ0 > 0 small enough so that e^{γ0 J̄} θ1 < 1; since V0,γ0^{⊗q}(s) ≤ K, the last term is bounded by a constant b. This yields (5.25) for all x = (s, ℓ) ∈ X such that V0,γ0^{⊗q}(s) ≤ K. To conclude the proof, it remains to treat the case V0,γ0^{⊗q}(s) > K, for which we rely on Condition (C).
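The geometric-series step in the proof above is purely arithmetic and easy to check numerically. Here is a scalar sketch (illustrative constants of our choosing) of the iteration "if QV ≤ θ1 V + b1, then Q^m V ≤ θ1^m V + b1/(1 − θ1)":

```python
# Iterating a one-step drift bound v <- theta1*v + b1 and comparing with the
# closed form theta1^m v0 + b1 (1 - theta1^m)/(1 - theta1) <= theta1^m v0 + b1/(1 - theta1).

theta1, b1, v0 = 0.7, 2.0, 50.0
v = v0
for m in range(1, 21):
    v = theta1 * v + b1                        # one application of the drift bound
    closed = theta1 ** m * v0 + b1 * (1 - theta1 ** m) / (1 - theta1)
    assert abs(v - closed) < 1e-9              # geometric-series closed form
    assert v <= theta1 ** m * v0 + b1 / (1 - theta1) + 1e-12
```

The limit value b1/(1 − θ1) is exactly the additive constant absorbed into b in the proof.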

For this, Condition (C) of Proposition 5.5 needs to be simplified. The idea of the proof relies on the fact that, if V0,γ0^{⊗q}(S0) is large enough, then it will take some time for all the components j in a set J (with cardinal #J at least 1) of the multivariate spread to reach their constraint set A(j) = ∪_{i=1,...,p} Ai(j). Up to this time, the process defined by {Yn^{(−J)} = (In, Sn^{(−J)}, λn), n = 0, 1, ..., m}, where Sn^{(−J)} is obtained by removing the j-th component of Sn for all j ∈ J, behaves as a Markov chain with kernel Q̃(−J) (defined as Q̃ but without the j-th constraints with j ∈ J). More precisely, for a positive integer m, we denote

s∗(j) = 1 + mJ̄ + max(A(j)), (5.28)

and we let J denote the set of all indices j ∈ {1, ..., q} such that s(j) ≥ s∗(j). We can then define a Markov chain {(Ǐn, Šn, λ̌n), n ≥ 0} with transition kernel Q̃(−J), which starts at the same state as (I0, S0^{(−J)}, λ0) and such that, in the event {S0(j) ≥ s∗(j), j ∈ J}, by construction, the sequence {(Ǐn, Šn, λ̌n), n = 0, 1, ..., m} coincides with {(In, Sn^{(−J)}, λn), n = 0, 1, ..., m}. (5.29) For γ0 as in Condition (C) (we can choose it independently of J since there is a finite number of such subsets), we take K > 0 such that log K ≥ pγ0 max_j s∗(j), so that V0,γ0^{⊗q}(s) > K implies max(s) ≥ max_j s∗(j); in particular the set J is then not empty. A first step in this direction is given by the following lemma.

Lemma 5.6. Let {(In, Sn, λn), n ≥ 0} be the Markov chain on the space {0, 1, ..., p} × Zq+ × Rp+ with transition kernel Q̃ defined in Section 2.3. Let w : {0, 1, ..., p} → R. Suppose that the following assertions hold.

(E-1) For all γ′ > 0 small enough, Q̃ is (½{0,...,p} ⊗ V0,γ′^{⊗q} ⊗ V1,γ′)-geometrically ergodic.

(E-2) The stationary distribution π̃ of Q̃ satisfies π̃(w ⊗ ½Zq×Rp) < 0.

Then, for any γ∗ > 0, we may choose γ, γ1 ∈ (0, γ∗] and positive constants ρ and γ0∗ such that, for all γ0 ∈ (0, γ0∗],

sup_{x∈X} sup_{m≥1} Ex[e^{γ0 Σ_{k=1}^m {w(Ik)+ρ}} (V0,γ^{⊗q} ⊗ V1,γ1)(Xm)] / (V0,γ^{⊗q} ⊗ V1,γ1)(x) < ∞. (5.31)

Proof. Let w̄ = max_{0≤i≤p} w(i). Using Assumption (E-1), we have, for all j ≥ 1 and x = (s, ℓ) ∈ X,

Ex[e^{γ0 Σ_{k=1}^j w(Ik)} (V0,γ^{⊗q} ⊗ V1,γ1)(Xj)] ≤ e^{γ0 w̄ j} Ex[(V0,γ^{⊗q} ⊗ V1,γ1)(Xj)] ≤ C e^{γ0 w̄ j} (V0,γ^{⊗q} ⊗ V1,γ1)(x), (5.30)

where the constant C only depends on γ and γ1.

By dominated convergence, as a → 0, π̃(a^{−1}(exp(a w ⊗ ½Zq×Rp) − 1)) → π̃(w ⊗ ½Zq×Rp), which is negative by Condition (E-2). Hence there exists a > 0 such that

b := π̃(exp(a w ⊗ ½Zq×Rp)) < 1. (5.34)

For all (i, s, ℓ) ∈ {0, ..., p} × Zq+ × Rp+ and for all l and γ0 satisfying lγ0 = a, we set

w∗(i, s, ℓ) = e^{lγ0 w(i)} V0,γ^{⊗q}(s) V1,γ1(ℓ).

Applying [33, Theorem 15.0.1], Q̃ has a unique invariant probability measure π̃ and there exist R > 0 and κ > 0 such that, for all i ∈ {0, ..., p}, s ∈ Zq+, ℓ ∈ Rp+ and all positive integers m,

‖Q̃m((i, s, ℓ), ·) − π̃‖_{γ,γ1} ≤ R e^{−κm} V0,γ^{⊗q}(s) V1,γ1(ℓ), (5.35)

which sets the values of R and κ, where for any signed measure ξ on {0, ..., p} × Zq+ × Rp+ we set ‖ξ‖_{γ,γ1} = sup{ξ(g) : ‖g/(½{0,...,p} ⊗ V0,γ^{⊗q} ⊗ V1,γ1)‖_∞ ≤ 1}. Since e^{lγ0 w(i)} ≤ e^{lγ0 w̄} = e^{a w̄} for all i ∈ {0, ..., p}, (5.35) yields

Q̃l((i, s, ℓ), w∗) ≤ π̃(w∗) + R e^{−κl} e^{a w̄} V0,γ^{⊗q}(s) V1,γ1(ℓ). (5.36)

Moreover, by dominated convergence, we may decrease γ∗ so that γ, γ1 ∈ (0, γ∗] implies π̃(w∗) ≤ b^{1/2} for all l and γ0 satisfying lγ0 = a. Then

Q̃l((i, s, ℓ), w∗) ≤ (b^{1/2} + R e^{−κl} e^{a w̄}) V0,γ^{⊗q}(s) V1,γ1(ℓ). (5.37)

Let now m, l ≥ 1 and write m = nl + j with j ∈ {0, 1, ..., l}. Using (5.30), the Hölder inequality and (5.30) again, we have

Ex[e^{γ0 Σ_{k=1}^m w(Ik)} (V0,γ^{⊗q} ⊗ V1,γ1)(Xm)] ≤ C e^{γ0 w̄ j} Ex[e^{γ0 Σ_{k=1}^{nl} w(Ik)} (V0,γ^{⊗q} ⊗ V1,γ1)(Xnl)] ≤ C² e^{2γ0 w̄ l} Π_{i=1}^l (Ex[e^{lγ0 Σ_{k=0}^{n−1} w(Ikl+i)} (V0,γ^{⊗q} ⊗ V1,γ1)(X_{(n−1)l+i})])^{1/l}. (5.33)

We now denote by L some integer such that b^{1/2} + R e^{−κl} e^{a w̄} ≤ b^{1/4} for all l ≥ L. Then, provided that lγ0 = a and l ≥ L, iterating (5.37) by successive conditioning, we get, for all n ≥ 1 and all 1 ≤ i ≤ l,

Ex[e^{lγ0 Σ_{k=0}^{n−1} w(Ikl+i)} (V0,γ^{⊗q} ⊗ V1,γ1)(X_{(n−1)l+i})] ≤ b^{(n−1)/4} Ex[e^{lγ0 w(Ii)} (V0,γ^{⊗q} ⊗ V1,γ1)(Xi)] ≤ C e^{a w̄ i} b^{(n−1)/4} (V0,γ^{⊗q} ⊗ V1,γ1)(x),

where, in the last inequality, we used e^{lγ0 w(Ii)} ≤ e^{a w̄} and Condition (E-1), and thus C is some constant not depending on n. Inserting this in (5.33), we get, for m = nl + j with j ∈ {0, 1, ..., l} and lγ0 = a with l ≥ L,

Ex[e^{γ0 Σ_{k=1}^m w(Ik)} (V0,γ^{⊗q} ⊗ V1,γ1)(Xm)] ≤ C³ e^{3 a w̄} b^{−1/2} b^{m/(4l)} (V0,γ^{⊗q} ⊗ V1,γ1)(x) = C³ e^{3 a w̄} b^{−1/2} b^{γ0 m/(4a)} (V0,γ^{⊗q} ⊗ V1,γ1)(x),

which implies (5.31) by taking ρ > 0 such that e^ρ < b^{−1/(4a)}. Note that we have shown this bound for γ0 = a/l with l ≥ L an integer; it is easy to show that this can be extended to all γ0 ∈ (0, a/L] by choosing ρ and γ0∗ adequately.

For q = 0, that is V0,γ ≡ 1, Condition (C) is implied by the geometric ergodicity of the unconstrained chain Q̌ (see Section 2.3) and a simple moment condition on the stationary distribution π̌ of this chain, see (E-2). This provides the case q = 0, which initiates the induction. Hence an immediate consequence of Lemma 5.6 is that, if q = 1 in Proposition 5.5, then we only need to prove Condition (C) for the chain with kernel Q̃(−{1}), which corresponds to q = 0. However, for q ≥ 2, Lemma 5.6 is no longer sufficient to prove Condition (C) of Proposition 5.5, because of the presence of V0,γ^{⊗q} in the denominator of (5.31). Again we shall rely on an inductive reasoning to obtain the following result.

Lemma 5.7. Let p, q ≥ 1 and define the transition kernel Q̃ on the space {0, ..., p} × Zq+ × Rp+ as in Section 2.3. For any J ⊆ {1, ..., q}, further define the transition kernel Q̃(−J) on the space {0, ..., p} × Z+^{q−#J} × Rp+ as in Section 2.3. Let w : {0, ..., p} → R. Suppose that, for any subset J in {1, ..., q}, the transition kernel Q̃(−J) satisfies Assumptions (E-1) and (E-2). Then, for all γ1 > 0 small enough, there are positive constants ρ0 and γ0∗ such that, for all γ0 ∈ (0, γ0∗],

sup_{(s,ℓ)∈X} sup_{m≥1} Ex[e^{γ0 Σ_{k=1}^m {w(Ik)+ρ0}} V1,γ1(λm − ℓ)] < ∞. (5.39)

Proof. We prove this lemma by induction on q. Next we show the result in the case q ≥ 1 using the induction hypothesis that the result holds for lower values of q.

Define, for any j = 1, 2, ..., q, s∗(j) = 1 + max(A(j)), set s∗ = max_{j=1,...,q} s∗(j), and let A = {1, ..., s∗(1)} × · · · × {1, ..., s∗(q)}. We denote by τ the first hitting time of the set A,

τ = inf{k ≥ 0 : Sk ∈ A}, (5.40)

which is a stopping time with respect to the filtration Fk = σ(Yn, 0 ≤ n ≤ k), k ≥ 0, and we set τm = τ ∧ m. We define by Jm the set of indices j such that Sk(j) stays away from A(j) for all k = 0, ..., τm; this is a random set, measurable with respect to F_{τm}, and non-empty by definition of τm. Observe that, up to n = τm, the chain Xn (with transition kernel Q̃) coincides with the chain with transition kernel Q̃(−Jm) that starts at the same state. Using the induction hypothesis and applying Lemma 5.6, we get that, for all γ1 > 0 small enough, there are positive constants γ′ and ρ′ such that, for all γ0 ∈ (0, γ′], all non-empty subsets J of {1, ..., q}, all x ∈ X and all l ≥ 1,

E(−J),x[e^{γ0 Σ_{k=1}^l w(Ǐk)} V1,γ1(λ̌l)] ≤ C2 e^{−ρ′γ0 l} V1,γ1(ℓ),

for some constant C2 > 0 independent of l and x, where {(Ǐn, Šn, λ̌n), n ≥ 0} here denotes the chain with transition kernel Q̃(−J) which starts at the same state as (I0, S0^{(−J)}, λ0). On the other hand, if x = (s, ℓ) ∈ X is such that s ∈ A, then, applying Lemma 5.6 (if γ1 > 0 is small enough, we can choose γ, ρ as required), there is some constant C0 > 0 such that, for all γ0 ∈ (0, γ∗] and all m ≥ 1,

Ex[e^{γ0 Σ_{k=1}^m w(Ik)} V1,γ1(λm)] ≤ C0 e^{qs∗γ} e^{−ργ0 m} V1,γ1(ℓ).

Setting C1 = C0 e^{qs∗γ} and using that, by definition of τm, in the event {τm < m} we have S_{τm} ∈ A, we thus obtain

E[e^{γ0 Σ_{k=τm+1}^m w(Ik)} V1,γ1(λm) | F_{τm}] ≤ C1 e^{−ργ0(m−τm)} V1,γ1(λ_{τm}) a.s.

Moreover, for all integers l = 1, ..., m and all non-empty J ⊆ {1, ..., q},

Ex[e^{γ0 Σ_{k=1}^l w(Ik)} e^{−ργ0(m−l)} V1,γ1(λl) ½{τm=l}∩{Jm=J}] ≤ C3 e^{−ργ0(m−l) − ρ′γ0 l} V1,γ1(ℓ).

Inserting this bound in the previous display and summing over all l = 0, ..., m, we get, for all x = (s, ℓ) ∈ X such that s ∉ A,

Ex[e^{γ0 Σ_{k=1}^{τm} w(Ik)} e^{−ργ0(m−τm)} V1,γ1(λ_{τm})] ≤ Σ_{l=0}^m C3 e^{−ργ0(m−l) − ρ′γ0 l} V1,γ1(ℓ) ≤ C3 m e^{−(ρ∧ρ′)γ0 m} V1,γ1(ℓ), (5.41)

where C3 > 0 is independent of m and x.

Hence, for all γ0 ∈ (0, γ′ ∧ γ∗] and all x = (s, ℓ) ∈ X such that s ∉ A,

Ex[e^{γ0 Σ_{k=1}^m w(Ik)} V1,γ1(λm)] ≤ Ex[e^{γ0 Σ_{k=1}^{τm} w(Ik)} E[e^{γ0 Σ_{k=τm+1}^m w(Ik)} V1,γ1(λm) | F_{τm}]] ≤ C1 Ex[e^{γ0 Σ_{k=1}^{τm} w(Ik)} e^{−ργ0(m−τm)} V1,γ1(λ_{τm})] ≤ C4 m e^{−(ρ∧ρ′)γ0 m} V1,γ1(ℓ),

where C4 is some positive constant independent of m and x, and where we used (5.41) in the last inequality. Taking ρ0 ∈ (0, ρ ∧ ρ′), we get (5.39), which concludes the proof.

Theorem 5.8. Let p, q ≥ 1 and suppose that Assumptions 1 and 2 hold. Define

w(i) = Σ_{j=1}^q →J(i, j) for i = 1, ..., p, and w(0) = 0.

Let {(In, Sn, λn), n ≥ 0} be the Markov chain on the space {0, ..., p} × Zq+ × Rp+ with transition kernel Q̃ defined in Section 2.3. Suppose that, for any subset J in {1, ..., q}, the transition kernel Q̃(−J) satisfies the two following assertions.

(B-1) For all γ′ > 0 small enough, Q̃(−J) is (½{0,...,p} ⊗ V0,γ′^{⊗(q−#J)} ⊗ V1,γ′)-geometrically ergodic.

(B-2) The stationary distribution π̃(−J) of Q̃(−J) satisfies π̃(−J)(w ⊗ ½Zq×Rp) < 0. (5.42)

Then, for all γ1 > 0 small enough, there exists γ0∗ > 0 such that, for all γ0 ∈ (0, γ0∗], the transition kernel Q̃ is (½{0,...,p} ⊗ V0,γ0^{⊗q} ⊗ V1,γ1)-geometrically ergodic.

In practice, Theorem 5.8 should be applied by induction on q, so that only conditions of the form (B-2) have to be checked out.

Proof of Theorem 5.8. Observe that the sublevel sets {x ∈ Zq+ × Rp+ : V0,γ0^{⊗q} ⊗ V1,γ1(x) ≤ r}, r > 0, are included in sets of the form {0, ..., p} × {1, ..., K}q × (0, M]p, K ≥ 1, M > 0, which are small sets by Corollary 5.4 and hence petite. By Conditions (B-1) and (B-2), we may apply Lemma 5.7 to show that Condition (C) of Proposition 5.5 holds. Hence, applying this proposition, we get the drift condition (5.25). The fact that Q̃ is (½{0,...,p} ⊗ V0,γ0^{⊗q} ⊗ V1,γ1)-geometrically ergodic then follows as in the proof of Proposition 3.3, see [33, Chapter 15].

Proof of Theorem 3.7. This result is obtained by induction on q using Theorem 5.8. The case q = 0 is treated in Proposition 3.3, which initiates the induction, and the induction step consists in checking Conditions (B-1) and (B-2) for the kernels Q̃(−J), which correspond to lower values of q.

For all $z=(\delta,s,\ell)\in C$, under the initial condition $\tilde{Z}_0=z$, we have $\lambda_{m-1}\in(0,M+(m-1)\alpha\beta]$. It follows that the density of $\Delta_m$ is bounded from below by $c_0\,\mathrm{e}^{-(m-1)c_1t}$ over $t\in\mathbb{R}_+$, for some positive constants $c_0,c_1$ (see the proof of Lemma 5.1). We get that, for all Borel sets $A\subset\mathbb{R}_+$ and $B\subset Y$,
$$\tilde{Q}^{m+1}(z,A\times B)\ \ge\ c_0\int_A\mathrm{e}^{-c_1t}\,\mathrm{d}t\ \tilde{Q}^{m}(y,B)\ \ge\ \epsilon\,(c_0/c_1)\,\tilde{\nu}(A\times B)\;,$$
where $\tilde{\nu}$ is the probability measure defined by $\tilde{\nu}(A\times B)=\int_A c_1\mathrm{e}^{-c_1t}\,\mathrm{d}t\ \nu(B)$, and where we used that $\{0,\dots,K\}^q\times(0,M]^p$ is an $(m,\epsilon,\nu)$-small set. Hence $C$ is an $(m+1,\epsilon\,c_0/c_1,\tilde{\nu})$-small set for $\tilde{Q}$.

Next we show that the drift condition obtained in Proposition 5.5 for $Q$ extends to $\tilde{Q}$: there exist $\theta\in(0,1)$, $b>0$ and $m\ge1$ such that, for all $\gamma_0\in(0,\gamma_0^*]$ and all $\gamma_1>0$ small enough,
$$Q^{m}\big(\mathbb{1}_{\{0,\dots,p\}}\otimes V_{0,\gamma_0}\otimes V_{1,\gamma_1}\big)(x)\ \le\ \theta\,\big(V_{0,\gamma_0}\otimes V_{1,\gamma_1}\big)(x)+b\;.$$
By the Cauchy–Schwarz inequality and using that, for all $x\in X$, $\Delta_m$ is stochastically smaller than an exponential distribution with mean $(\mathbb{1}_p^T\mu_0)^{-1}$ (see the proof of Lemma 5.1), we have
$$\tilde{Q}^{m}\big(V_{2,\gamma_2}\otimes V_{0,\gamma_0/2}\otimes V_{1,\gamma_1/2}\big)(\delta,x)\ \le\ \mathbb{E}\big[V_{2,2\gamma_2}(\Delta_m)\big]^{1/2}\,Q^{m}\big(\mathbb{1}_{\{0,\dots,p\}}\otimes V_{0,\gamma_0}\otimes V_{1,\gamma_1}\big)(x)^{1/2}\ \le\ \left(\int_0^{+\infty}\mathrm{e}^{2\gamma_2r}\,\mathbb{1}_p^T\mu_0\,\mathrm{e}^{-\mathbb{1}_p^T\mu_0\,r}\,\mathrm{d}r\right)^{1/2}\big(\theta\,(V_{0,\gamma_0}\otimes V_{1,\gamma_1})(x)+b\big)^{1/2}\;.$$
Provided that $\gamma_2<\mathbb{1}_p^T\mu_0/2$, the integral is finite and, for $\gamma_2>0$ small enough, we thus get
$$\tilde{Q}^{m}\big(V_{2,\gamma_2}\otimes V_{0,\gamma_0/2}\otimes V_{1,\gamma_1/2}\big)(\delta,x)\ \le\ \big(1-2\gamma_2/\mathbb{1}_p^T\mu_0\big)^{-1/2}\Big(\theta^{1/2}\,\big(V_{0,\gamma_0/2}\otimes V_{1,\gamma_1/2}\big)(x)+b^{1/2}\Big)\ \le\ \theta'\,\big(V_{2,\gamma_2}\otimes V_{0,\gamma_0/2}\otimes V_{1,\gamma_1/2}\big)(\delta,x)+b'\;,$$
for some $\theta'\in(0,1)$ and $b'>0$. Hence we obtain a drift condition with an unbounded off-petite-set function $V_{2,\gamma_2}\otimes V_{0,\gamma_0/2}\otimes V_{1,\gamma_1/2}$. Similarly, we conclude that $\tilde{Q}$ is $\psi$-irreducible and aperiodic and that all bounded subsets of $Z$ are petite sets. The geometric ergodicity follows (see [33, Chapter 15]).

As explained in Section 3, the assumptions of Theorem 3.7 imply those of Theorem 5.8 and thus of Proposition 5.5 (see the proof of Theorem 3.3).

5.7 Proof of Theorem 3.9

Denote the number of arrivals on $(0,u]$ by $N_u=N\big(\mathbb{1}_{(0,u]}\otimes\mathbb{1}_{\{0,\dots,p\}}\big)$ and the time interval between the last arrival before $u$ and $u$ by
$$\Gamma_u=u-\sum_{k=1}^{N_u}\Delta_k\;.$$
We need the following lemmas to complete the proof of this theorem.
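Under the exponential fertility assumption, the intensity observed at event times is precisely the Markov chain whose stability these drift conditions control. The following is a minimal, unconstrained, univariate sketch (not the paper's constrained multivariate process; the parameters `mu`, `alpha`, `beta` and the thinning scheme are illustrative assumptions of this sketch) showing the corresponding stability condition $\alpha<\beta$ at work:

```python
import numpy as np

def simulate_exp_hawkes(mu, alpha, beta, horizon, seed=0):
    """Exact thinning (Ogata-style) simulation of a univariate Hawkes process
    with intensity lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Between events the intensity decays monotonically towards mu, so its
    current value is a valid dominating rate for the rejection step."""
    rng = np.random.default_rng(seed)
    t, lam = 0.0, mu          # lam = intensity value at current time t
    events = []
    while True:
        lam_bar = lam                          # dominating (constant) rate
        w = rng.exponential(1.0 / lam_bar)     # candidate waiting time
        t += w
        if t >= horizon:
            return np.array(events)
        lam = mu + (lam - mu) * np.exp(-beta * w)  # decayed intensity at t
        if rng.uniform() * lam_bar <= lam:     # accept with prob lam/lam_bar
            events.append(t)
            lam += alpha                       # self-excitation jump

# Stable regime: branching ratio alpha/beta = 0.5 < 1, so the long-run
# event rate should hover around mu / (1 - alpha/beta) = 2.
events = simulate_exp_hawkes(mu=1.0, alpha=0.5, beta=1.0, horizon=2000.0, seed=1)
empirical_rate = events.size / 2000.0
```

With $\alpha\ge\beta$ the branching ratio reaches one and the same simulation explodes, which is the unconstrained analogue of the parameter restrictions appearing in the ergodicity conditions above.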

Lemma 5.9. We have
$$\lim_{u\to\infty}\frac{N_u}{u}=\frac{1}{\mathbb{E}(\Delta_1)}\;,\quad\text{a.s. }[\mathbb{P}^*]\;.$$

Proof. By definition of $N_u$, we have
$$\sum_{k=1}^{N_u}\Delta_k\ \le\ u\ \le\ \sum_{k=1}^{N_u+1}\Delta_k\;.$$
Since $Q$ is positive Harris, we get, using [33, Chapter 17],
$$\lim_{n\to\infty}\frac1n\sum_{k=1}^{n}\Delta_k=\mathbb{E}(\Delta_1)\;,\quad\text{a.s. }[\mathbb{P}^*]\;.$$
We deduce that
$$\lim_{u\to+\infty}\frac{u}{N_u}=\lim_{u\to+\infty}\frac{1}{N_u}\sum_{k=1}^{N_u}\Delta_k=\lim_{u\to+\infty}\frac{1}{N_u}\sum_{k=1}^{N_u+1}\Delta_k=\mathbb{E}(\Delta_1)\;,\quad\text{a.s. }[\mathbb{P}^*]\;.$$
Hence the result.

Lemma 5.10. For all $x\in X$, we have
$$\sup_{\tau\in[0,1]}\Gamma^x_{\tau T}=o_{\mathbb{P}^*}\big(\sqrt{T}\big)\;.$$

Proof. Let us define
$$n_T=\Big\lceil\frac{T}{\mathbb{E}(\Delta_1)}\Big\rceil\;. \qquad(5.43)$$
By definition of $\Gamma_u$, we have, for any $\tau\in[0,1]$, $\Gamma_{\tau T}\le\max_{k=1,\dots,N_T+1}\Delta_k$. Hence, for all $\epsilon>0$, $T\in\mathbb{R}_+$ and $x\in X$,
$$\mathbb{P}_x\Big(\sup_{\tau\in[0,1]}\Gamma_{\tau T}\ge\epsilon\sqrt{T}\Big)\ \le\ \mathbb{P}_x\Big(\max_{k=1,\dots,2n_T}\Delta_k\ge\epsilon\sqrt{T}\Big)+\mathbb{P}_x\big(N_T+1>2n_T\big)\;.$$
By Lemma 5.9, we have, for all $x\in X$, $\lim_{T\to\infty}\mathbb{P}_x(N_T+1>2n_T)=0$. Moreover, for all $k\ge1$, $\Delta_k$ is stochastically smaller than $E_k$, where $\{E_i\}_{i\ge1}$ are the inter-arrivals of a Poisson point process with intensity $\mathbb{1}_p^T\mu_0$. We thus get
$$\lim_{T\to\infty}\mathbb{P}_x\Big(\max_{k=1,\dots,2n_T}\Delta_k\ge\epsilon\sqrt{T}\Big)\ \le\ \lim_{T\to\infty}\mathbb{P}\Big(\max_{k=1,\dots,2n_T}E_k\ge\epsilon\sqrt{T}\Big)=0\;,$$
which concludes the proof.

We now provide a proof of Theorem 3.9 using Lemmas 5.9 and 5.10.

Proof of Theorem 3.9. Observe that we have, for all $t\in[0,1]$,
$$N^x\big(\mathbb{1}_{(0,tT]}\otimes w\big)-tT\,\mathbb{E}(w)=\sum_{k=1}^{N^x_{tT}}\big(w(I_k)-\mathbb{E}(w)\Delta_k\big)-\mathbb{E}(w)\,\Gamma^x_{tT}\;.$$
Moreover, defining $n_T$ as in (5.43), we have
$$\sum_{k=1}^{N^x_{tT}}\big(w(I_k)-\mathbb{E}(w)\Delta_k\big)=s_{n_T}\!\Big(\frac{N^x_{tT}}{n_T}\Big)\;.$$
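Lemma 5.9 is a renewal-type strong law and Lemma 5.10 controls the overshoot $\Gamma_u$. A quick numerical illustration under the simplifying assumption of i.i.d. exponential gaps (a special case chosen for this sketch; the lemmas themselves only require the ergodic theorem for the chain, not independence):

```python
import numpy as np

rng = np.random.default_rng(0)
mean_gap = 0.5                 # illustrative E(Delta_1)
T = 5000.0

# Renewal process: arrival times are cumulative sums of i.i.d. positive gaps.
gaps = rng.exponential(mean_gap, size=int(3 * T / mean_gap))
arrivals = np.cumsum(gaps)

N_T = int(np.searchsorted(arrivals, T))   # N_u with u = T
lln_ratio = N_T / T                       # Lemma 5.9: approaches 1/E(Delta_1) = 2

# Overshoot Gamma_T = T - (last arrival before T); Lemma 5.10 says it is
# negligible at the sqrt(T) scale.
gamma_T = T - arrivals[N_T - 1]
```

The same computation with the chain's actual (dependent) durations would only require replacing the i.i.d. gaps by a simulated trajectory of $\{\Delta_k\}$.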

where we used that $\pi(g)=0$ and that $N_{tT}$ is an integer. By Lemma 5.9, we have $N^x_{tT}/n_T\to t$ as $T\to\infty$, a.s. $[\mathbb{P}^*]$, for all $t\in[0,1]$; by the Dini theorem, this convergence holds uniformly over $t\in[0,1]$. Applying [40, Theorem 13.2], we get, for any initial distribution,
$$\big(n_T\,\sigma_g^2\big)^{-1/2}\,s_{n_T}\!\Big(\frac{N^x_{tT}}{n_T}\Big)\ \overset{d}{\longrightarrow}\ B_t\;,$$
where $B$ is a standard Brownian motion. By definition of $\mathbb{E}(w)$, $\sigma_g^2=v(w)$ and the fact that $n_T/T\to1/\mathbb{E}[\Delta_1]$, we get, using Lemma 5.10,
$$N^x\big(\mathbb{1}_{(0,tT]}\otimes w\big)-tT\,\mathbb{E}(w)=s_{n_T}\!\Big(\frac{N^x_{tT}}{n_T}\Big)+o_{\mathbb{P}^*}\big(\sqrt{T}\big)\;. \qquad(5.44)$$
Now, this convergence and the one in (3.15) imply (3.18), which achieves the proof.

6 Conclusion

In this paper we have introduced and studied constrained multivariate Hawkes processes. The constraints are expressed using a multidimensional constraint variable, whose evolution is driven by the point process. Under the Markov setting (exponential fertility functions), we have proven that the underlying Markov chain is $V$-geometrically ergodic under some conditions on the parameters. A converse result leading to the transience of the chain in the case where a univariate constraint variable is used ($q=1$) illustrates the sharpness of our conditions. Moreover, we used a functional central limit theorem applying to the chain to derive the scaling limit of the integrated point process in physical time. Finally, we have briefly explained how the constrained multivariate Hawkes process can be applied to model the dynamics of a limit order book.

We only presented the case of the order book of one asset with only two limits. This is clearly not a restriction: multi-asset limit order books can be considered, yielding multivariate boundary conditions (hence $q\ge2$), and the scaling limit of the mid-price can be deduced from our findings. We believe that the applicability of the constrained multivariate Hawkes process extends well beyond financial applications, as it could model the motion of an object on a discrete net with some boundary conditions. In a forthcoming paper, we will use this model to provide an empirical study from real data and discuss the potential application of this model to the dynamics of limit order books.

7 Acknowledgements

This research is supported by NATIXIS quantitative research department.

A Additional technical lemmas

The following result is used in the proofs of Section 5.

Lemma A.1. Let $p\ge1$, $\gamma>0$ and let $\Gamma$ be a $p\times p$ invertible matrix. Define
$$D=\big\{(u,\vartheta)\in(0,1]\times(0,1]^p\,:\ 0<u<\vartheta(1)<\cdots<\vartheta(p)\big\}\;.$$
Then there exists a probability measure $\nu$ such that, for all $M>0$, there exists $\epsilon>0$ such that, for all $g:\mathbb{R}^p\to\mathbb{R}_+$ and all $\ell\in(0,M]^p$,
$$\iint_{D}g(u\ell+\Gamma\vartheta)\,u^{\gamma}\,\mathrm{d}u\,\mathrm{d}\vartheta\ \ge\ \epsilon\int g\,\mathrm{d}\nu\;.$$

Proof. Setting $\omega=u\ell+\Gamma\vartheta$, we have
$$\iint_{D}g(u\ell+\Gamma\vartheta)\,u^{\gamma}\,\mathrm{d}u\,\mathrm{d}\vartheta=\frac{1}{\det\Gamma}\iint g(\omega)\,u^{\gamma}\,\mathbb{1}_D\big((u,\Gamma^{-1}\omega-u\Gamma^{-1}\ell)\big)\,\mathrm{d}u\,\mathrm{d}\omega\;.$$
We choose any $0<\eta<1/(p+1)$ so that the open set
$$B_\eta=\big\{\vartheta\in(0,1)^p\,:\ \eta<\vartheta(1),\ \eta+\vartheta(1)<\vartheta(2),\ \dots,\ \eta+\vartheta(p-1)<\vartheta(p),\ \vartheta(p)<1-\eta\big\}$$
is not empty. Denoting $\Gamma B_\eta=\{\Gamma\vartheta\,:\,\vartheta\in B_\eta\}$ and setting
$$\eta'=\eta\,\min\Big(1,\ \Big(\max_{i=1,\dots,p}\ \sup_{\ell\in(0,M]^p}\big[\Gamma^{-1}\ell\big](i)\Big)^{-1}\Big)\;,$$
we have that, for all $u\in(0,\eta']$ and $\omega\in\Gamma B_\eta$, $(u,\Gamma^{-1}\omega-u\Gamma^{-1}\ell)\in D$. We obtain that
$$\iint_{D}g(u\ell+\Gamma\vartheta)\,u^{\gamma}\,\mathrm{d}u\,\mathrm{d}\vartheta\ \ge\ \frac{1}{\det\Gamma}\int_{\Gamma B_\eta}g(\omega)\int_0^{\eta'}u^{\gamma}\,\mathrm{d}u\,\mathrm{d}\omega=\frac{\mu_{\mathrm{Leb}}(\Gamma B_\eta)\,(\eta')^{\gamma+1}}{(\gamma+1)\,\det\Gamma}\int g\,\mathrm{d}\nu\;,$$
where $\mu_{\mathrm{Leb}}$ is the Lebesgue measure and $\nu$ is the uniform probability measure on $\Gamma B_\eta$. Hence the claimed bound holds with $\epsilon=\mu_{\mathrm{Leb}}(\Gamma B_\eta)(\eta')^{\gamma+1}/((\gamma+1)\det\Gamma)$.

The following result is used in the proof of Proposition 3.2.

Lemma A.2. Let $\beta>0$ and $\beta'>0$. Then, as $a\to\infty$,
$$\int_0^{\infty}\mathrm{e}^{a(\mathrm{e}^{-\beta t}-1)-\beta' t}\,\mathrm{d}t\ \longrightarrow\ 0\;. \qquad(\mathrm{A.1})$$
If moreover $\beta'>\beta$, then we have the following asymptotic equivalence as $a\to\infty$:
$$\int_0^{\infty}\mathrm{e}^{a(\mathrm{e}^{-\beta t}-1)-\beta' t}\,\mathrm{d}t\ \sim\ \frac{1}{a\beta}\;. \qquad(\mathrm{A.2})$$

Proof. Setting $\vartheta=\mathrm{e}^{-\beta t}$, we get
$$\int_0^{\infty}\mathrm{e}^{a(\mathrm{e}^{-\beta t}-1)}\big(\mathrm{e}^{-\beta t}\big)^{\beta'/\beta}\,\mathrm{d}t=\frac{1}{\beta}\int_0^1\mathrm{e}^{a(\vartheta-1)}\,\vartheta^{\beta'/\beta-1}\,\mathrm{d}\vartheta\;.$$
Letting $a\to\infty$, we get (A.1) by dominated convergence. Now we set $\omega=-a(\vartheta-1)$ and obtain that
$$\frac{1}{\beta}\int_0^1\mathrm{e}^{a(\vartheta-1)}\,\vartheta^{\beta'/\beta-1}\,\mathrm{d}\vartheta=\frac{1}{a\beta}\int_0^a\mathrm{e}^{-\omega}\Big(1-\frac{\omega}{a}\Big)^{\beta'/\beta-1}\,\mathrm{d}\omega\;.$$
Letting $a\to\infty$, by dominated convergence, we obtain (A.2).

References

[1] F. Abergel and A. Jedidi. A mathematical approach to order book modeling. SSRN eLibrary, 2011.

[2] Y. Aït-Sahalia, J. Cacho-Diaz, and R. J. A. Laeven. Modeling financial contagion using mutually exciting jump processes. Working Paper 15850, National Bureau of Economic Research, March 2010.

[3] S. Asmussen. Applied Probability and Queues, volume 51 of Applications of Mathematics (New York). Springer-Verlag, New York, second edition, 2003.

[4] E. Bacry, K. Dayri, and J. F. Muzy. Non-parametric kernel estimation for symmetric Hawkes processes. Application to high frequency financial data. arXiv.org, 2012.

[5] E. Bacry, S. Delattre, M. Hoffmann, and J. F. Muzy. Modelling microstructure noise with mutually exciting point processes. Submitted to Quantitative Finance, 2011.

[6] E. Bacry, S. Delattre, M. Hoffmann, and J. F. Muzy. Scaling limits for Hawkes processes and application to financial statistics. ArXiv e-prints, 2012.

[7] L. Bauwens and N. Hautsch. Dynamic latent factor models for intensity processes. CORE Discussion Papers 2003103, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE), 2003.

[8] L. Bauwens and N. Hautsch. Modelling financial high frequency data using point processes. 2009.

[9] B. Biais, P. Hillion, and C. Spatt. An empirical analysis of the limit order book and the order flow in the Paris Bourse. The Journal of Finance, 50:1655–1689, 1995.

[10] J.-P. Bouchaud, J. Doyne Farmer, and F. Lillo. How markets slowly digest changes in supply and demand. September 2008.

[11] C. Bowsher. Modelling security market events in continuous time: Intensity based, multivariate point process models. Nuffield College Economics Discussion Papers, November 2002.

[12] C. Bowsher. Modelling security market events in continuous time: Intensity based, multivariate point process models. Journal of Econometrics, 141:876–912, 2007.

[13] P. Brémaud and L. Massoulié. Stability of nonlinear Hawkes processes. Ann. Probab., 24:1563–1588, 1996.

[14] A. Chakraborti, I. Muni Toke, M. Patriarca, and F. Abergel. Econophysics review: I. Empirical facts. Quantitative Finance, 11:991–1012, 2011.

[15] A. Chakraborti, I. Muni Toke, M. Patriarca, and F. Abergel. Econophysics review: II. Agent-based models. Quantitative Finance, 11:1013–1041, 2011.

[16] R. Cont and A. De Larrard. Price dynamics in a Markovian limit order book market. Social Science Research Network Working Paper Series, January 2011.

[17] R. Cont, S. Stoikov, and R. Talreja. A stochastic model for order book dynamics. Oper. Res., 58:549–563, 2010.

[18] D. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes. Vol. I: Elementary Theory and Methods. Probability and its Applications (New York). Springer-Verlag, New York, second edition, 2003.

[19] J. Doyne Farmer, L. Gillemot, F. Lillo, S. Mike, and A. Sen. What really causes large price changes? Technical report, April 2004.

[20] A. Dufour and R. F. Engle. Time and the price impact of a trade. The Journal of Finance, 55:2467–2498, 2000.

[21] P. Embrechts, T. Liniger, and L. Lin. Multivariate Hawkes processes: an application to financial data. Journal of Applied Probability, 48A:367–378, 2011.

[22] R. F. Engle and A. Lunde. Trades and quotes: A bivariate point process. Journal of Financial Econometrics, 1:159–188, 2003.

[23] R. F. Engle and J. Russell. Forecasting the frequency of changes in quoted foreign exchange prices with the autoregressive conditional duration model. Journal of Empirical Finance, 4(2-3):187–212, 1997.

[24] R. F. Engle and J. R. Russell. Autoregressive conditional duration: A new model for irregularly spaced transaction data. Econometrica, 66(5):1127–1162, 1998.

[25] T. Fletcher, Z. Hussain, and J. Shawe-Taylor. Multiple kernel learning on the limit order book. 2010.

[26] C. Gourieroux, J. Jasiak, and G. Le Fol. Intra-day market activity. Journal of Financial Markets, 2(3):193–226, August 1999.

[27] A. G. Hawkes. Point spectra of some mutually exciting point processes. Journal of the Royal Statistical Society. Series B (Methodological), 33(3):438–443, 1971.

[28] A. G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83–90, 1971.

[29] P. Hewlett. Clustering of order arrivals, price impact and trade path optimisation. Workshop on Financial Modeling, 2006.

[30] B. Hollifield, R. A. Miller, and P. Sandås. Empirical analysis of limit order markets. The Review of Economic Studies, 71(4):1027–1063, October 2004.

[31] J. Large. Measuring the resiliency of an electronic limit order book. Journal of Financial Markets, 10(1):1–25, 2007.

[32] T. Liniger. Multivariate Hawkes Processes. PhD thesis, ETH Zürich, 2009.

[33] S. Meyn and R. Tweedie. Markov Chains and Stochastic Stability. Cambridge University Press, Cambridge, second edition, 2009. With a prologue by Peter W. Glynn.

[34] I. Muni Toke and F. Pomponio. Modelling trades-through in a limit order book using Hawkes processes. 2012.

[35] Y. Ogata. Statistical models for earthquake occurrences and residual analysis for point processes. Journal of the American Statistical Association, 83:9–27, 1988.

[36] C. Parlour. Price dynamics in limit order markets. Review of Financial Studies, 11(4):789–816, 1998.

[37] P. Reynaud-Bouret and S. Schbath. Adaptive estimation for Hawkes processes; application to genome analysis. Ann. Statist., 38:2781–2822, 2010.

[38] I. Roşu. A dynamic model of the limit order book. Review of Financial Studies, 22(11):4601–4641, 2009.

[39] E. Smith, J. Doyne Farmer, L. Gillemot, and S. Krishnamurthy. Statistical theory of the continuous double auction. Quantitative Finance, 3:481–514, 2003.

[40] W. Whitt. Stochastic-Process Limits. An introduction to stochastic-process limits and their application to queues. Springer Series in Operations Research. Springer-Verlag, New York, 2002.

[41] B. Zheng, E. Moulines, and F. Abergel. Price jump prediction in limit order book. 2012.
