(4.28) follows at once. Note that the denominator of (4.28) is the Laplace functional of the Palm distribution Q(N,·) and could be estimated using the techniques presented in Section 4.2.

4.6 Strong Approximation of Poisson Processes

In this section, we use the concept of strong approximation to develop approximations of Poisson processes by Wiener processes. We begin with some background. Suppose that E = [0,1] and that the X_i are independent and identically uniformly distributed. Introducing the uniform empirical distribution functions F_n(x) = (1/n) Σ_{i=1}^n 1(X_i ≤ x) and the uniform empirical error processes E_n(x) = √n [F_n(x) − F(x)], we have

{E_n(x) : 0 ≤ x ≤ 1} →d {B(x) : 0 ≤ x ≤ 1},    (4.29)

where B is a Brownian bridge. That is, B has the representation B(x) = W(x) − xW(1), where W is a standard Wiener process, and hence has covariance function

R(x,y) = x ∧ y − xy.    (4.30)

An important tool for analysis of limit properties such as (4.29) is strong approximation: a statement of convergence in distribution is replaced by a statement of almost sure convergence accompanied by a rate of convergence (this necessitates a sequence of Brownian bridges B_n). Provided that the rate of convergence is sufficiently rapid, results on path properties of the Brownian bridge (for example, concerning modulus of continuity and oscillation behavior) carry over to the E_n. To pursue these ramifications (among them the invariance principle) would lead us too far afield at this point; rather, we refer the reader to Csörgő and Révész (1981), especially Chapter 5, and the references there. The following result of Komlós et al. (1975, 1976) contains the best possible rate of convergence.

Theorem 4.24. There exists a probability space on which are defined i.i.d. uniform random variables X_i and Brownian bridges B_n such that almost surely

sup_{0≤x≤1} |E_n(x) − B_n(x)| = O((log n)/√n).

We omit the proof; this is a deep and difficult result.
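As a concrete check on this background material, the following sketch (our own illustration, not part of the text) estimates the covariance of the uniform empirical error process E_n at two fixed points by Monte Carlo and compares it with the Brownian-bridge covariance R(x,y) = x ∧ y − xy of (4.30).

```python
# Monte Carlo check that E_n(x) = sqrt(n)[F_n(x) - x] has approximately the
# Brownian-bridge covariance R(x, y) = min(x, y) - x*y of (4.30).
# Illustrative sketch only; sample sizes and seeds are our own choices.
import random
import math

def empirical_error(sample, x):
    """E_n(x) = sqrt(n) * (fraction of sample <= x  minus  x)."""
    n = len(sample)
    return math.sqrt(n) * (sum(u <= x for u in sample) / n - x)

def mc_covariance(x, y, n=100, reps=1500, seed=0):
    """Monte Carlo estimate of Cov(E_n(x), E_n(y)) for uniform samples."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(reps):
        sample = [rng.random() for _ in range(n)]
        pairs.append((empirical_error(sample, x), empirical_error(sample, y)))
    mx = sum(a for a, _ in pairs) / reps
    my = sum(b for _, b in pairs) / reps
    return sum((a - mx) * (b - my) for a, b in pairs) / reps

x, y = 0.3, 0.7
print(mc_covariance(x, y))      # Monte Carlo estimate
print(min(x, y) - x * y)        # limiting covariance R(0.3, 0.7) = 0.09
```

Because Cov(1(X_i ≤ x), 1(X_i ≤ y)) = x ∧ y − xy exactly, the agreement holds in expectation for every n, not just asymptotically; only Monte Carlo error remains.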
If Brownian bridges can be used to approximate empirical processes, it seems plausible that partial sum processes for i.i.d. random variables can be approximated by Wiener processes, and indeed this is so (see Komlós et al., 1975, 1976). But then conditional uniformity of Poisson processes should allow one to construct strong approximations of them. In the next result we do so.

Theorem 4.25. There exist a homogeneous Poisson process N and a Wiener process W such that almost surely as t → ∞,

sup_{0≤x≤1} |√t [N(tx)/t − x] − (1/√t) W(tx)| = O((log t)/√t). □

b) The error processes √n[μ̂ − μ], as signed random measures on E, converge in distribution to a Gaussian random measure with covariance function

R(f,g) = μ(fg);    (6.3)

c) Let K be a compact subset of C⁺(E) having finite metric entropy with respect to the uniform metric. Then the error processes {√n[μ̂(f) − μ(f)] : f ∈ K}, as random elements of C(K), converge in distribution to a Gaussian process with covariance function R given in b);

226    6. Poisson Processes on General Spaces

d) Almost surely the set {√n[μ̂ − μ]/√(2 log log n) : n ≥ 3} is relatively compact in the function space C(E)* with limit set {ν : ν(f)² ≤ μ(f²) for all f}. □

For Poisson processes we define by substitution the empirical Laplace functionals L̂(f) = e^{−μ̂(1−e^{−f})}, with μ̂ given by (6.1). These have the advantage of being tailored to the Poisson model, but lack robustness: if the Poisson assumption were suspect, the general empirical Laplace functionals of Section 4.2 would be preferred.

We now summarize properties of the empirical Laplace functionals. Under P_μ, let L(f) = e^{−μ(1−e^{−f})} be the Laplace functional of the N_i.

Proposition 6.3.
a) For each equicontinuous subset K of C⁺(E), sup_{f∈K} |L̂(f) − L(f)| → 0 almost surely;

b) If K is a compact subset of C⁺(E) with finite metric entropy, then the error processes {√n[L̂(f) − L(f)] : f ∈ K} converge in distribution, as random elements of C(K), to a Gaussian process with covariance function

R(f,g) = μ((1 − e^{−f})(1 − e^{−g})) e^{−μ(1−e^{−f})} e^{−μ(1−e^{−g})}.    (6.4)

Proof: a) Equicontinuity of K implies that as well of the set K' = {1 − e^{−f} : f ∈ K}, which is uniformly bounded. Since sup_K |L̂(f) − L(f)| ≤ sup_{K'} |μ̂(g) − μ(g)|, a) follows from (6.2).

b) For fixed f ∈ K,

√n [L̂(f) − L(f)] = √n [e^{−μ̂(1−e^{−f})} − e^{−μ(1−e^{−f})}] = −√n [μ̂(1−e^{−f}) − μ(1−e^{−f})] e^{−z*},

where z* lies between μ̂(1−e^{−f}) and μ(1−e^{−f}). In view of a), therefore,

sup_K |√n[L̂(f) − L(f)] + √n[μ̂(1−e^{−f}) − μ(1−e^{−f})] e^{−μ(1−e^{−f})}|

converges to zero in P_μ-probability, and the same reasoning used to show that a continuous image of a compact set is compact confirms that K' has finite metric entropy. Consequently, b) ensues from Proposition 6.2c), with (6.4) obtained from (6.3) by straightforward computation. □

The general empirical Laplace functionals of Section 4.2, given by L*(f) = (1/n) Σ_{i=1}^n e^{−N_i(f)}, have limit covariance function

R*(f,g) = L(f+g) − L(f)L(g).

6.1 Estimation

In particular, √n[L*(f) − L(f)] has asymptotic variance

R*(f,f) = L(2f) − L(f)²,

while the asymptotic variance of √n[L̂(f) − L(f)] is

R(f,f) = μ((1 − e^{−f})²) e^{−2μ(1−e^{−f})},

so that the asymptotic efficiency of L*(f) relative to L̂(f) is

e(f) = R*(f,f)/R(f,f) = [e^{μ((1−e^{−f})²)} − 1]/μ((1 − e^{−f})²),

which always exceeds 1. Therefore, L̂ is preferred to L*. (There is no contradiction with Example 4.12, where the specialized Laplace functional differs.)

Based on the relationship z(A) = e^{−μ(A)} we define empirical zero-probability functionals ẑ(A) = e^{−μ̂(A)}. While nonparametric within the Poisson model, these differ nevertheless from the general empirical zero-probability functionals of Section 4.3. So far we have viewed the estimators μ̂ as indexed by functions rather than sets, but obviously this will not do for analysis of zero-probability functionals. Results from Chapter 4 concerning asymptotics of set-indexed empirical processes do not apply directly, although we shall discuss presently how they can be made to apply. Before doing so, however, we record elementary results.

Proposition 6.4. For each μ let z(A) = e^{−μ(A)} = P_μ{N(A) = 0}. Then

a) For every A, ẑ(A) → z(A) almost surely;

b) For A_1, …, A_k ∈ E,

√n [(ẑ(A_1), …, ẑ(A_k)) − (z(A_1), …, z(A_k))] →d N(0, R),

where R(i,j) = μ(A_i ∩ A_j) e^{−[μ(A_i) + μ(A_j)]}. □

Independence of √n[ẑ(A) − z(A)] and √n[ẑ(B) − z(B)] for disjoint A and B carries over in the form of R. Moreover, Proposition 6.4 implies that for any sequence of error processes {√n[ẑ(A) − z(A)] : A ∈ C} converging in distribution, the limit covariance function must be a restriction of the function R(A,B) = μ(A ∩ B) e^{−[μ(A) + μ(B)]}.

We next explore a more general version of Proposition 6.4b) based on strong approximation (see Section 4.6); the starting point is the strong approximation for homogeneous, unit rate Poisson processes on [0,1] given in Theorem 4.26: asymptotic behavior of the processes {(1/√n)[N^n(x) − nx] : 0 ≤ x ≤ 1} is that of the Gaussian process

G(x) = B(x) + xZ,    (6.5)

where B is a Brownian bridge, Z has distribution N(0,1), and B and Z are independent (see Theorem 4.14).

We now discuss applications to empirical zero-probability functionals, for which Theorem 4.26 must be extended to more general Poisson processes. First, if N^n has rate λn rather than n, then Theorem 4.26 applies to yield

{(1/√n)[N^n(x) − λnx] : 0 ≤ x ≤ 1} →d {√λ G(x) : 0 ≤ x ≤ 1},

where G is as in (6.5). To obtain a result for zero-probability functionals we put z(x) = P_λ{N([0,x]) = 0} = e^{−λx} and ẑ(x) = e^{−μ̂([0,x])}.

Proposition 6.5. Under P_λ,

{√n [ẑ(x) − z(x)] : 0 ≤ x ≤ 1} →d {√λ e^{−λx} G(x) : 0 ≤ x ≤ 1}. □

To generalize further we employ a device appearing in Kallenberg (1983) that can be viewed as a generalized quantile transformation.
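The estimators appearing in Propositions 6.4 and 6.5 are easy to compute. The sketch below (our own illustration; the homogeneous setup E = [0,1], μ = 2·Lebesgue, A = [0, 0.5] is an assumption, not the text's) compares the Poisson-specific estimator ẑ(A) = e^{−μ̂(A)} with the model-free empirical frequency of {N_i(A) = 0}; both converge to z(A) = e^{−μ(A)}, as Proposition 6.4a) asserts.

```python
# Illustrative sketch: empirical zero-probability functionals for n i.i.d.
# Poisson processes on [0,1] with mean measure mu = lam * Lebesgue (assumed).
import random
import math

def poisson(mean, rng):
    # Knuth's multiplication method; adequate for the small means used here
    l, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def counts_in_A(lam, a, rng):
    """Number of points of a rate-lam Poisson process on [0,1] in [0,a]."""
    total = poisson(lam, rng)                            # N(E) ~ Poisson(lam)
    return sum(rng.random() <= a for _ in range(total))  # conditional uniformity

def zero_prob_estimators(lam=2.0, a=0.5, n=5000, seed=1):
    rng = random.Random(seed)
    counts = [counts_in_A(lam, a, rng) for _ in range(n)]
    mu_hat_A = sum(counts) / n                 # empirical mean measure of A
    z_hat = math.exp(-mu_hat_A)                # Poisson-specific estimator
    z_freq = sum(c == 0 for c in counts) / n   # general empirical estimator
    return z_hat, z_freq

z_hat, z_freq = zero_prob_estimators()
print(z_hat, z_freq)   # both close to z(A) = exp(-2.0 * 0.5) = e^{-1}
```

The Poisson-specific version uses every point of every realization, while the frequency version uses only the indicator of emptiness; this is the same efficiency contrast as between L̂ and L* above.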
Suppose that μ is a diffuse measure on E, assumed without loss of generality to satisfy μ(E) = 1. Then there exists an invertible mapping f_μ (not unique) of (a measure 1 subset of) [0,1] into E such that μ = λ f_μ^{−1}, where now λ denotes Lebesgue measure. Hence a Poisson process N^n on E with mean measure nμ admits the representation

N^n = Σ_{i=1}^{S_n} ε_{f_μ(X_i)},    (6.6)

where the X_i are independent random variables uniformly distributed on [0,1], and S_n = Σ_{i=1}^n Y_i, with the Y_i i.i.d. random variables independent of the X_i, each having a Poisson distribution with mean 1. We then obtain the following result.

Proposition 6.6. Let N^n be given by (6.6), where μ satisfies μ = λ f_μ^{−1}, and let (A_x)_{0≤x≤1} be an increasing family of subsets of E. Then with a(x) = μ(A_x),

{√n [ẑ(A_x) − z(A_x)] : 0 ≤ x ≤ 1} →d {e^{−a(x)} G(a(x)) : 0 ≤ x ≤ 1}

under P_μ, where G is the Gaussian process given by (6.5).

Proof: We may assume that f_μ^{−1}(A_x) = [0, a(x)]. Let Ñ^n = Σ_{i=1}^{S_n} ε_{X_i}, so that N^n = Ñ^n f_μ^{−1} (see Section 1.5). In particular μ̂(A_x) = (1/n) Ñ^n(a(x)), and therefore by Theorem 4.26,

√n [ẑ(A_x) − z(A_x)] = √n [exp{−Ñ^n(a(x))/n} − e^{−a(x)}] ≅ e^{−a(x)} [Ñ^n(a(x)) − na(x)]/√n →d e^{−a(x)} G(a(x)). □

In some instances an unknown mean measure is stipulated to be dominated by a known measure μ0, but not further specified. For example, a Poisson process N on R_+ may be known on physical grounds to admit an intensity function α = dE[N_t]/dt, but α itself may be unknown. By sieve methods analogous to those in Theorem 5.18 — but with histogram sieves instead — one can estimate by maximum likelihood the unknown derivative α = dμ/dμ0, even on a general space E. Suppose that under each P_α, N_1, N_2, … are i.i.d. copies of a Poisson process N on E with mean measure μ(dx) = α(x) μ0(dx), where μ0 ∈ M_d is known; the statistical model is P = {P_α : α ∈ L¹₊(μ0)}. With N^n = Σ_{i=1}^n N_i, which remains a sufficient statistic for μ given the data N_1, …, N_n, and with P the probability under which the N_i are i.i.d. Poisson processes with mean measure μ0, Proposition 6.10 (in Section 2) yields the log-likelihood functions

L_n(α) = n ∫_E (1 − α) dμ0 + ∫_E (log α) dN^n.    (6.7)

As is true for the multiplicative intensity model of Section 5.3, of which this model is a special case when E is a compact interval in R_+, L_n is unbounded above as a function of α, necessitating an indirect approach to maximum likelihood estimation. Our approach utilizes histogram sieves (see Grenander, 1981). Let {A_mj : m ≥ 1, 1 ≤ j ≤ l_m} be a null array of partitions of E with μ0(A_mj) > 0 for each m and j. For each m let I_m be the histogram sieve of functions α ∈ L¹₊(μ0) constant over each set A_mj. Then by straightforward computations the maximum likelihood estimator α̂(n,m) relative to sample size n and I_m satisfies

α̂(n,m)(x) = μ̂(A_mj)/μ0(A_mj), x ∈ A_mj,    (6.8)

where μ̂ is given by (6.1).

The appropriate sense for estimators α̂ = α̂(n, m_n), defined momentarily, to converge to α is with respect to the norm on L¹(μ0), which we denote by ‖·‖₁. For suitable choice of m_n convergence takes place almost surely.

Theorem 6.7. If m = m_n is chosen in such a way that, with δ_n = l_{m_n},

Σ_n δ_n⁴/n² < ∞,    (6.9)

then for α̂ = α̂(n, m_n), ‖α̂ − α‖₁ → 0 almost surely with respect to P_α.

Proof: Let n and m both be variable; then with μ0(α; A_mj) = ∫_{A_mj} α dμ0,

‖α̂(n,m) − α‖₁ = Σ_{j=1}^{l_m} ∫_{A_mj} |μ̂(A_mj)/μ0(A_mj) − α(x)| μ0(dx)
≤ Σ_j ∫_{A_mj} |μ̂(A_mj)/μ0(A_mj) − μ0(α; A_mj)/μ0(A_mj)| μ0(dx) + Σ_j ∫_{A_mj} |μ0(α; A_mj)/μ0(A_mj) − α(x)| μ0(dx)
= Σ_j |μ̂(A_mj) − E_α[N(A_mj)]| + Σ_j ∫_{A_mj} |μ0(α; A_mj)/μ0(A_mj) − α(x)| μ0(dx).

That the second, nonrandom term converges to zero provided that m → ∞ is analytical and can be shown by a variety of techniques (see, e.g., Grenander, 1981, pp. 419–420). We deal with the first term by methods analogous to those in Theorems 4.15 and 4.18. For ε > 0,

P_α{ Σ_j |μ̂(A_mj) − E_α[N(A_mj)]| > ε } ≤ (1/ε⁴) E_α[ ( Σ_j |μ̂(A_mj) − E_α[N(A_mj)]| )⁴ ].
With n and m fixed the random variables μ̂(A_mj) − E_α[N(A_mj)] are independent in j, and by brute-force expansion of the fourth power of the sum, followed by repeated application of the Cauchy–Schwarz inequality, we obtain

P_α{ Σ_j |μ̂(A_mj) − E_α[N(A_mj)]| > ε } = O(δ_m⁴/(ε⁴ n²));

the dominant term comes from the O(δ_m⁴) summands with all indices distinct. Thus (6.9) suffices to give

Σ_n P_α{ Σ_j |μ̂(A_mj) − E_α[N(A_mj)]| > ε } < ∞

for every ε > 0, which completes the proof. □

In general, little can be said about estimation of the mean measure of a Poisson process in the Model 3.2 context of observation of a single realization over a noncompact set E; one can do arbitrarily poorly if the mean measure is particularly ill-chosen. However, good results obtain for stationary Poisson processes. Suppose that E = R^d, let μ0 denote Lebesgue measure, and for each ν > 0 let N be a stationary Poisson process with intensity ν under the probability P_ν. It is natural to estimate ν with estimators ν̂ = N(B)/μ0(B), where B ↑ E, but if μ0(B) → ∞ in a completely arbitrary manner, consistency and asymptotic normality can fail. For B_n = ∪_{i=1}^n A_i, where the A_i are disjoint sets with μ0(A_i) the same for all i, the independent and stationary increments properties of N yield the following result.

Proposition 6.8. Let A_1, A_2, … be disjoint subsets of E with 0 < a = μ0(A_i) < ∞ independently of i, and for each n let ν̂ = (1/na) N(∪_{i=1}^n A_i). Then

a) ν̂ → ν almost surely;

b) √(na) [ν̂ − ν] →d N(0, ν). □

The simplest choice of the A_i is as translates of a fixed set A_0. More generally, Theorem 1.71 and the final result of the section enable one to construct numerous consistent estimators of ν (see Chapter 9 for specific choices).

Proposition 6.9. A stationary Poisson process N on R^d is ergodic.

Proof: The Palm distribution P̃ of N, by Example 1.57, is just the P-distribution of N + ε_0; here we have taken ν = 1 for simplicity.
Therefore, P and P̃ agree on the invariant σ-algebra I (Definition 1.70), and hence by Theorem 1.71a), E[N(f)|I] = μ0(f) for all f, which proves that I is P-degenerate. □

This concludes our general treatment of estimation, but not our consideration of the topic. Additional aspects appear in Sections 3–5.

6.2 Equivalence and Singularity; Hypothesis Testing

This section is more about properties of likelihood ratios and probability distributions of Poisson processes than about hypothesis testing per se, but even the less overtly statistical portions can be interpreted as addressing the simple-vs.-simple hypothesis test

H_0 : μ = μ_0  versus  H_1 : μ = μ_1    (6.10)

for a Poisson process N with unknown mean measure μ. As in Section 1 the setting is nonparametric, but here we accentuate the single-realization case, with asymptotics pertaining to behavior of Poisson process likelihood ratios and probability laws associated with bounded sets increasing to a noncompact space E.

In particular we examine equivalence and singularity of the probability laws P_{μ0} and P_{μ1} engendering Poisson processes with mean measures μ0 and μ1, especially in the case that μ0 and μ1 are equivalent. In this case P_{μ0} and P_{μ1} must be either equivalent or singular; intermediate cases cannot arise. This dichotomy theorem (Theorem 6.12; see also Proposition 3.24) has analogues for Gaussian processes and diffusions (Liptser and Shiryaev, 1978), and is the main result of the section, although Proposition 6.10, which derives explicitly the form of likelihood ratios, is probably the most useful.
The general discussion concludes with a local asymptotic normality theorem for likelihood ratios over bounded sets.

Let E be noncompact but locally compact and let (Ω, F) be a measurable space supporting a (simple) point process N on E; we consider the statistical model P = {P_μ : μ ∈ M_d}, where under P_μ, N is a Poisson process with (diffuse) mean measure μ. For construction of likelihood ratios the principal interest is equivalence or singularity of the probability measures P_{μ0} and P_{μ1} corresponding to the hypotheses (6.10), on σ-algebras F^N(B) with B bounded as well as on F^N = F^N(E), as a function of equivalence or singularity of the mean measures μ0 and μ1. When P_{μ0} ~ P_{μ1} there is a positive, finite likelihood ratio, and hence a log-likelihood ratio usable for testing the hypotheses (6.10). At the other extreme, if P_{μ0} ⊥ P_{μ1}, then the test is rendered trivial: every realization of N leads without error to identification of the "true" mean measure. Singularity of P_{μ0} and P_{μ1} cannot occur unless at least one of μ0(E) and μ1(E) is infinite; see Exercise 6.9.

Within this setting the most interesting case, mathematically and physically, is that μ0 ~ μ1. We commence with a criterion for equivalence of P_{μ0} and P_{μ1} on F^N(B) when B is bounded and μ0 ~ μ1 on B; thereafter we develop equivalence criteria for the unbounded case, including the dichotomy theorem.

Proposition 6.10. Suppose that B is bounded and that μ1 ~ μ0 on the σ-algebra E ∩ B. Then P_{μ1} ≪ P_{μ0} on F^N(B), with

dP_{μ1}/dP_{μ0} = exp{ ∫_B (1 − dμ1/dμ0) dμ0 + ∫_B log(dμ1/dμ0) dN }.    (6.11)

Proof: For f ≥ 0 vanishing outside B,

E_{μ0}[ e^{−N(f)} exp{ ∫_B (1 − dμ1/dμ0) dμ0 + ∫_B log(dμ1/dμ0) dN } ]
= exp[ ∫_B (1 − dμ1/dμ0) dμ0 ] E_{μ0}[ exp{ −∫_B (f − log(dμ1/dμ0)) dN } ]
= exp[ ∫_B (1 − dμ1/dμ0) dμ0 ] exp[ −∫_B (1 − e^{−f} dμ1/dμ0) dμ0 ]
= exp[ −∫_B (1 − e^{−f}) dμ1 ] = E_{μ1}[ e^{−N(f)} ];

hence (6.11) holds by Theorem 1.12. □

Note the resemblance of (6.11) to the likelihood function of Section 5.3 for multiplicative intensity processes. By interchanging μ0 and μ1 we establish a criterion for equivalence.

Corollary 6.11.
If μ0 ~ μ1 on B, then P_{μ0} ~ P_{μ1} on F^N(B) and (6.11) holds. □

In the setting of Proposition 6.10, Neyman–Pearson critical regions for the hypotheses (6.10) have the form {∫_B log(dμ1/dμ0) dN > c}, with c a constant depending on B, μ0, μ1 and the power bound under H_0 (the probability of type I error). The null distribution of the test statistic is difficult to express in closed form; however, its Laplace transform is computable and can be used, together for example with Chebyshev's inequality, to construct approximate critical regions. Alternatively, if μ0(B) is large, then the test statistic — by independent increments — is approximately normally distributed, yielding another way to derive approximate critical regions (see M. Brown, 1972, for details).

Remaining a moment longer with the hypotheses (6.10) and observations F^N(B): even when μ0 and μ1 are not equivalent, one can formulate a likelihood ratio test by the following device. Both μ0 and μ1 are absolutely continuous with respect to μ* = μ0 + μ1, and Proposition 6.10 implies that on F^N(B),

dP_{μi}/dP_{μ*} = exp{ ∫_B (1 − dμi/dμ*) dμ* + ∫_B log(dμi/dμ*) dN }

for i = 0, 1, so we may form the likelihood ratio (not a Radon–Nikodym derivative, but a ratio of likelihoods notwithstanding)

(dP_{μ1}/dP_{μ*}) / (dP_{μ0}/dP_{μ*}) = exp{ μ0(B) − μ1(B) + ∫_B log[ (dμ1/dμ*)/(dμ0/dμ*) ] dN }

and use it to test the hypotheses (6.10).

For observation of N over the noncompact set E it is possible that μ0 ~ μ1 on E, and hence that P_{μ0} ~ P_{μ1} on F^N(B) for every bounded set B, but that nevertheless P_{μ0} ⊥ P_{μ1} on F^N (Exercise 6.10). Next we give a necessary and sufficient condition for P_{μ0} ~ P_{μ1} in the unbounded case when μ0 ~ μ1, but this dichotomy theorem yields more: if P_{μ0} and P_{μ1} are not equivalent, then they must be singular.

Theorem 6.12. Suppose that μ0 ~ μ1 on E and that μ0(E) = μ1(E) = ∞. Then on F^N either P_{μ0} ~ P_{μ1} or P_{μ0} ⊥ P_{μ1}, according as the integral

∫_E (1 − √(dμ1/dμ0))² dμ0    (6.12)

converges or diverges.
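Before the proof, a numeric illustration of the criterion (our own example; the densities are assumptions, not the text's). Take μ0 to be Lebesgue measure on [0, ∞). For dμ1/dμ0 = 1 + 1/(1+x) the integrand in (6.12) behaves like 1/(4(1+x)²), so the integral converges and the laws are equivalent; for dμ1/dμ0 ≡ 2 the integrand is the constant (1 − √2)², so the integral diverges and the laws are singular.

```python
# Numeric sketch of criterion (6.12) with mu0 = Lebesgue measure on [0, inf).
# Both density ratios below are our own illustrative choices.
import math

def criterion(ratio, T, steps=50000):
    """Midpoint-rule value of int_0^T (1 - sqrt(ratio(x)))**2 dx."""
    h = T / steps
    return h * sum((1.0 - math.sqrt(ratio((i + 0.5) * h))) ** 2
                   for i in range(steps))

near = lambda x: 1.0 + 1.0 / (1.0 + x)   # ratio -> 1; integral converges
flat = lambda x: 2.0                     # intensity doubled; integral diverges

for T in (10.0, 100.0, 1000.0):
    print(T, criterion(near, T), criterion(flat, T))
# criterion(near, T) stabilizes as T grows, so P_mu0 ~ P_mu1 in that case;
# criterion(flat, T) grows like (1 - sqrt(2))**2 * T, so the laws are singular.
```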
Proof: We show first that convergence in (6.12) implies that P_{μ1} ≪ P_{μ0}; since convergence of (6.12) implies that of the corresponding integral with μ0 and μ1 interchanged, this proves the "convergence implies equivalence" part of the theorem. By Liptser and Shiryaev (1978, Lemma 19.13) it is enough to show that for bounded sets B_n ↑ E, lim_n (dP_{μ1}/dP_{μ0})|_{F^N(B_n)} exists and is finite almost surely with respect to P_{μ1}. From (6.11),

(dP_{μ1}/dP_{μ0})|_{F^N(B_n)} = exp{ ∫_{B_n} (1 − dμ1/dμ0) dμ0 + ∫_{B_n} log(dμ1/dμ0) dN },

and by the P_{μ1}-strong law of large numbers for N,

lim_n [ ∫_{B_n} log(dμ1/dμ0) dN ] / [ ∫_{B_n} log(dμ1/dμ0) dμ1 ] = 1

almost surely with respect to P_{μ1} (one can choose the B_n in a manner ensuring that this holds); consequently,

lim_n (dP_{μ1}/dP_{μ0})|_{F^N(B_n)} = exp{ ∫_E [ 1 − dμ1/dμ0 + (dμ1/dμ0) log(dμ1/dμ0) ] dμ0 }.

The proof that P_{μ1} ≪ P_{μ0} is completed by showing that

∫_E [ 1 − dμ1/dμ0 + (dμ1/dμ0) log(dμ1/dμ0) ] dμ0 < ∞.    (6.13)

That (6.13) and convergence in (6.12) are equivalent is seen by first noting that the function h(y) = 1 − y + y log y in (6.13) can be replaced, without affecting convergence of the integral, by k(y) = 1 − y + y φ(log y), where φ(z) = z for |z| ≤ 1 and φ(z) = z/|z| otherwise, and then using the property that there are constants c and c′ such that c(1 − √y)² ≤ k(y) ≤ c′(1 − √y)² (see Grenander, 1981, Chap. 8, and Liptser and Shiryaev, 1978, Chap. 19, as well as the proof of Theorem 5.18, for details).

Conversely, given divergence in (6.12), almost surely with respect to P_{μ1}

lim_n (dP_{μ0}/dP_{μ1})|_{F^N(B_n)} = [ lim_n (dP_{μ1}/dP_{μ0})|_{F^N(B_n)} ]^{−1} = 1/∞ = 0,

but by Neveu (1975, Proposition III-1-5) the limit is the Radon–Nikodym derivative of the absolutely continuous component in the Lebesgue decomposition of P_{μ0} with respect to P_{μ1}, and therefore P_{μ0} ⊥ P_{μ1}. □

Equivalent means of stating convergence in (6.12) exist. For example, M. Brown (1971, 1972) demonstrates that P_{μ0}
~ P_{μ1} if and only if for some ε > 0 (and with g = 1 − dμ1/dμ0)

∫_{{|g|>ε}} |g| dμ0 + ∫_{{|g|≤ε}} g² dμ0 < ∞.

An advantage of (6.12) as opposed to this criterion is that the former generalizes to point processes with nondeterministic compensators (see Liptser and Shiryaev, 1978).

Here is an extended criterion for singularity, in which it is not assumed that μ0 ~ μ1.

Proposition 6.13. Suppose that μ1(dx) = f(x) μ0(dx) + ν(dx) is the Lebesgue decomposition of μ1 with respect to μ0. Then the following are equivalent:

a) P_{μ0} ⊥ P_{μ1} on F^N;

b) Either ν(E) = ∞ or

∫_E (1 − √f)² dμ0 = ∞.    (6.14)

Proof: a) ⇒ b). If both parts of b) fail, then there exists A ∈ E with μ0(A) = ν(A^c) = 0 and P_{μ1}{N(A) = 0} = e^{−ν(E)} > 0. Conditional on the event {N(A) = 0}, which has P_{μ0}-probability 1, N is Poisson with mean measure μ0 under P_{μ0} and Poisson with mean measure f(x) μ0(dx) under P_{μ1}. Failure of (6.14) implies convergence in (6.12) for these conditional processes; hence by the "convergence implies equivalence" part of Theorem 6.12, P_{μ1} ≪ P_{μ0} on an event having positive probability for both, which makes P_{μ0} ⊥ P_{μ1} impossible.

b) ⇒ a). If ν(E) = ∞, then there is a set A with μ0(A) = 0 (hence P_{μ0}{N(A) = 0} = 1) but ν(A) = ∞, which implies that P_{μ1}{N(A) = 0} = 0; therefore, a) holds. If (6.14) is fulfilled and P_{μ2} denotes the probability law of the Poisson process on E with mean measure μ2(dx) = f(x) μ0(dx), then P_{μ0} ⊥ P_{μ2} by Theorem 6.12. By construction, P_{μ1} is the convolution of P_{μ2} and P_ν, the first of which is singular with respect to P_{μ0}, as is the second, except possibly on {N(E) = 0}; therefore P_{μ0} ⊥ P_{μ1}. □

From the perspective of hypothesis testing, Theorem 6.12 describes the pleasant situation that from a single realization of N there is either error-free discrimination between P_{μ0} and P_{μ1} (the singular case) or a positive, finite log-likelihood ratio

∫_E (1 − dμ1/dμ0) dμ0 + ∫_E log(dμ1/dμ0) dN,

which can be used to perform likelihood ratio tests.
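The log-likelihood ratio just displayed is easy to compute from a realized point pattern. The sketch below is our own example, not the text's: E = [0,1], dμ0 = 50 dx, and dμ1/dμ0 = 0.5 + x, chosen so that both mean measures have the same total mass. It simulates under each hypothesis and evaluates (6.11); the statistic is positive on average under H_1 and negative under H_0, as the Kullback–Leibler interpretation of the log-likelihood ratio suggests.

```python
# Illustrative likelihood-ratio test for a Poisson process on E = [0,1].
# LAM0 and the density ratio r are our own assumed choices.
import random
import math

LAM0 = 50.0                      # dmu0 = LAM0 dx on [0, 1]
r = lambda x: 0.5 + x            # density ratio dmu1/dmu0; mu1(E) = mu0(E)

def poisson(mean, rng):
    l, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def simulate(ratio, rng, bound=1.5):
    """Points of a Poisson process with intensity LAM0 * ratio(x), by thinning."""
    total = poisson(LAM0 * bound, rng)
    return [x for x in (rng.random() for _ in range(total))
            if rng.random() <= ratio(x) / bound]

def log_lr(points, m=400):
    """(6.11): int_E (1 - r) dmu0 (midpoint rule) + sum of log r over atoms."""
    integral = sum((1.0 - r((j + 0.5) / m)) * LAM0 / m for j in range(m))
    return integral + sum(math.log(r(x)) for x in points)

rng = random.Random(7)
mean_h1 = sum(log_lr(simulate(r, rng)) for _ in range(200)) / 200
mean_h0 = sum(log_lr(simulate(lambda x: 1.0, rng)) for _ in range(200)) / 200
print(mean_h1, mean_h0)   # positive under H1, negative under H0, on average
```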
Proposition 6.13 provides additional conditions for perfect discrimination between μ0 and μ1. If either measure is infinite, then there is no error in testing the hypotheses (6.10), while if both are finite, then every nonzero realization of N yields unerring determination of μ0 or μ1; but on {N(E) = 0} flawless discrimination is not possible. However, on this event, an atom of F^N, one can employ the log-likelihood ratio in the usual fashion.

We now examine log-likelihood function asymptotics for a sequence of processes on a bounded set with mean measures converging to infinity. As Proposition 3.25 and Theorem 5.21 suggest, the appropriate milieu for such study is local asymptotic normality under contiguous alternatives, formulated in this case as follows. Let N_1, N_2, … be point processes on a bounded set E and let μ0 and μ* be elements of M_d with μ* ≪ μ0. Suppose that under the probability measure P_0 the N_i are i.i.d. Poisson processes with mean measure μ0, and that there are probabilities P^n under which N_1, …, N_n are i.i.d. Poisson processes with mean measure μ0 + μ*/√n. The contiguity theorem describes asymptotic behavior of log-likelihood ratios associated with the superpositions N^n = Σ_{i=1}^n N_i. Since under P_0, N^n is Poisson with mean measure nμ0, while under P^n its mean measure is nμ0 + √n μ*, by (6.11) the log-likelihood ratio is

log(dP^n/dP_0) = nμ0(E) − [nμ0(E) + √n μ*(E)] + ∫_E log[1 + (1/√n)(dμ*/dμ0)] dN^n
= −√n μ*(E) + ∫_E log[1 + (1/√n)(dμ*/dμ0)] dN^n.

More generally, allowing for observation over a variable subset A of E, we have the log-likelihood function

L_n(A) = −√n μ*(A) + ∫_A log[1 + (1/√n)(dμ*/dμ0)] dN^n,    (6.15)

a signed random measure on E. In Theorem 5.21 we used martingale methods to analyze such processes; here we employ instead the independent increments property of Poisson processes.

Theorem 6.14. Suppose that ∫_E (dμ*/dμ0) dμ* < ∞. Then with notation and hypotheses as in the preceding paragraph, under P_0, L_n →d G as random measures on E, where G is a Gaussian random measure with independent increments, mean E[G(A)] = −(1/2) ∫_A (dμ*/dμ0) dμ* and variance Var(G(A)) = ∫_A (dμ*/dμ0) dμ*.

Proof: By Theorem 1.21 and the fact that the L_n and G have independent increments (for details of the construction of G, see Neveu, 1965, pp. 84ff.), it suffices to show that L_n(A) →d G(A) for each set A. Continuing from (6.15), we have

L_n(A) = (1/√n) Σ_{i=1}^n [ ∫_A (dμ*/dμ0) dN_i − ∫_A (dμ*/dμ0) dμ0 ] + [ ∫_A log(1 + (1/√n)(dμ*/dμ0)) dN^n − (1/√n) ∫_A (dμ*/dμ0) dN^n ].

The summands in the first term are i.i.d. under P_0 with mean zero and finite variance ∫_A (dμ*/dμ0)² dμ0 = ∫_A (dμ*/dμ0) dμ*, while by the strong law of large numbers the second term converges to −(1/2) ∫_A (dμ*/dμ0) dμ*; hence L_n(A) →d G(A) by the central limit theorem and Slutsky's theorem. □

The ease with which further testing problems can be posed for Poisson processes on general spaces is matched by the paucity of rigorous procedures available for addressing them in nonparametric settings. For homogeneous Poisson processes on R, there do exist techniques (some to be examined presently) for testing structure of rate functions or comparing rate functions. For planar Poisson processes there are "distance methods," developed mainly by British statisticians, based on distances to nearest neighbors within the point process and distances from points of R² to points of N, that have been applied extensively to real problems of data analysis. The reader interested in applications is urged to pursue the latter; see the chapter notes for some references.

Returning to general spaces, among testing problems of both mathematical and practical interest, but concerning which little is known, are the following.

1. Structure of the mean measure. The prototypical problem is to test whether μ is absolutely continuous with respect to a prescribed measure μ0.
Consistent estimators of dμ/dμ0 under H_0 can be constructed using Theorem 6.7, but it is unclear what properties of them (intuitively, "largeness," indicative of a singular component) should lead to rejection of H_0.

2. Equality and ordering of mean measures. Given independent Poisson processes N_1 and N_2 with mean measures μ1 and μ2, except on R_+ and with further assumptions, few generally applicable procedures are available for testing even the hypothesis that μ1 = μ2, let alone the hypothesis that μ1 ≥ μ2.

3. Whether a point process is Poisson. The main difficulty, beyond those already present for ordinary Poisson processes on R, is lack of credible, tractable alternative hypotheses.

For Poisson processes on R_+ martingale methods are available. One could use techniques described in Section 5.3 to test, for example, whether a Poisson process N on (0,1], known to have an intensity function, has a prescribed intensity function. Two-sample tests can be effected with martingale test statistics from Section 5.3, which under H_0 : α(1) = α(2) {here N(1) and N(2) are independent Poisson processes with intensity functions α(1), α(2)} have the advantage of removing the nuisance parameter α(1) = α(2) from the problem. Under H_0 the process M_t = N_t(1) − N_t(2) is a mean zero martingale; procedures whose asymptotics are described by Theorem 5.24 test for this property.

6.3 Partially Observed Poisson Processes

In this section we consider three models of Poisson processes that are only partially observable, and for each present results on statistical estimation and on state estimation.
We begin with p-thinned Poisson processes, extending and specializing results appearing in Sections 4.5 and 3.4; in particular, we develop a maximum likelihood interpretation of the estimators of Theorem 4.18 and establish consistency under weaker conditions. Then we examine the stochastic integral process studied in Chapter 3 (Example 3.19), but in greater generality: for some results the integrand is allowed to be a semi-Markov process rather than a Markov process. For this model we also give an explicit application of the state estimation procedure emanating from Theorem 5.30. Finally, we treat inference for Poisson processes on product spaces such that only one component of the process is observable. The three models are illustrative rather than exhaustive; in some ways the sum of the three models and the techniques with which they are analyzed is the real content of the section.

6.3.1 p-Thinned Poisson Processes

We recall the setting established in Section 4.5. Let E be a compact space and let N'_1, N'_2, … be observable point processes. They are thinnings of unobservable Poisson processes N_i, so we retain the "prime" notation from Section 4.5. The statistical model is P = {P_p}, where p ranges over the family of measurable functions from E to [0,1]; under P_p the N'_i are i.i.d. copies of the p-thinning of the Poisson process with known mean measure μ0 ∈ M_d. The model is hence a submodel of the model treated in Theorem 6.7, as well as a specialized version of Model 3.3. Our goal is to estimate the thinning function p.

Denote by P the probability measure corresponding to p ≡ 1 (i.e., no thinning), under which the N'_i have mean measure μ0, and for each n let N'^n = Σ_{i=1}^n N'_i. Then from (6.11) we obtain the log-likelihood functions

L_n(p) = n ∫_E (1 − p) dμ0 + ∫_E (log p) dN'^n.    (6.16)

By contrast with the situation of Theorem 6.7, there do exist maximum likelihood estimators p̂, but they are not consistent. Since log p ≤ 0, the second term in the log-likelihood function is maximized by taking p̂ equal to one at each atom of N'^n, and then the first term is maximized by taking p̂ to be zero everywhere else. Because P_p{N'^n(E) < ∞} = 1 and μ0 is diffuse, ‖p̂ − p‖₁ = ‖p‖₁, where ‖·‖_r is the norm on L^r(μ0); consequently, the p̂ are not consistent. However, the histogram sieve estimators of (6.8), which coincide with the estimators of Theorem 4.18, are strongly consistent under conditions weaker in some ways than those there. Let {A_mj : m ≥ 1, 1 ≤ j ≤ l_m} be a null array of partitions of E with μ0(A_mj) > 0 for each m and j. For each m, let I_m be the sieve of functions p : E → [0,1] constant on each A_mj. Then we have the following counterpart to Theorems 4.18 and 6.7.

Theorem 6.15. a) For each n and m, the n-sample maximum likelihood estimator of p relative to I_m satisfies

p̂(n,m)(x) = min{ N'^n(A_mj)/(n μ0(A_mj)), 1 }, x ∈ A_mj;

b) Suppose that 0 ≤ δ < 1 and that m = m_n is chosen so that, with δ_n = l_{m_n},

Σ_n δ_n⁴ n^{δ−2} < ∞.    (6.17)

Then with p̂ = p̂(n, m_n), for each p such that

lim_n n^{δ/4} Σ_j ∫_{A_mj} |μ0(p; A_mj)/μ0(A_mj) − p(x)| μ0(dx) = 0,    (6.18)

we have n^{δ/4} ‖p̂ − p‖₁ → 0 almost surely.

Proof: a) For p = Σ_j a_j 1_{A_mj} belonging to I_m, (6.16) gives

L_n(p) = Σ_j [ n(1 − a_j) μ0(A_mj) + N'^n(A_mj) log a_j ].

Since

∂L_n(p)/∂a_j = −n μ0(A_mj) + N'^n(A_mj)/a_j,

maximization with respect to a_j occurs at â_j = N'^n(A_mj)/(n μ0(A_mj)) provided that this quantity does not exceed 1. If it does, then the derivative is positive everywhere on (0,1), so the maximizing value in (0,1] is â_j = 1.

b) The argument follows the proof of Theorem 6.7. One splits n^{δ/4} ‖p̂ − p‖₁ into random and nonrandom terms; for the former,

P_p{ n^{δ/4} Σ_j |N'^n(A_mj)/n − E_p[N'(A_mj)]| > ε } ≤ (n^δ/ε⁴) E_p[ ( Σ_j |N'^n(A_mj)/n − E_p[N'(A_mj)]| )⁴ ] = O(δ_n⁴ n^{δ−2})

for each ε > 0, so that (6.17) implies that the random term converges to zero almost surely. That (6.18) suffices to dispose of the nonrandom term follows from the proof of Theorem 6.7. □

Unlike Theorem 4.18, Theorem 6.15 with δ = 0 imposes no continuity requirement on p.
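The estimator of Theorem 6.15a) is simple enough to check by simulation. The sketch below is our own illustration (assumed setup: E = [0,1], μ0 = 20·Lebesgue, thinning function p(x) = x, n = 2000 observed processes, m = 10 cells); it forms the cellwise count ratios clipped at 1 and reports a discretized L¹ distance from the true p.

```python
# Illustrative sketch of the histogram-sieve thinning estimator
# p_hat(x) = min{ N'_n(A_mj) / (n * mu0(A_mj)), 1 } on each cell A_mj.
# LAM0 and p_true are our own assumed choices, not the text's.
import random
import math

LAM0 = 20.0                  # known mean measure mu0 = LAM0 * Lebesgue on [0,1]
p_true = lambda x: x         # thinning function to be estimated

def poisson(mean, rng):
    l, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def thinned_sample(rng):
    """One p-thinning of a Poisson process with mean measure LAM0 * Lebesgue."""
    pts = []
    for _ in range(poisson(LAM0, rng)):
        x = rng.random()
        if rng.random() <= p_true(x):   # each point retained with probability p(x)
            pts.append(x)
    return pts

def p_hat(samples, m):
    """Cellwise counts over n * mu0(A_mj), clipped at 1 (Theorem 6.15a))."""
    counts = [0] * m
    for pts in samples:
        for x in pts:
            counts[min(int(x * m), m - 1)] += 1
    n, mu0_cell = len(samples), LAM0 / m
    return [min(c / (n * mu0_cell), 1.0) for c in counts]

rng = random.Random(3)
est = p_hat([thinned_sample(rng) for _ in range(2000)], m=10)
l1_err = sum(abs(e - p_true((j + 0.5) / 10)) / 10 for j, e in enumerate(est))
print(est)
print(l1_err)   # small: p_hat recovers p(x) = x cell by cell
```

Note that nothing here requires p to be continuous; the estimator only averages counts over cells, which is the point of the remark above.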
A Poisson process N with mean measure μ0 satisfies

E[N(A_mj)³] = μ0(A_mj) + 3μ0(A_mj)² + μ0(A_mj)³,

so that (4.20) in this case becomes (essentially) δ_n ≈ n^{1/3−ε} for some ε > 0, while (6.17) is more restrictive, requiring that δ_n ≈ n^{1/4−ε}.

If each N'_i, under P_p, is the p-thinning of an unobservable Poisson process N_i with mean measure μ0, with the pairs (N'_i, N_i) i.i.d., then state estimation for the underlying processes N_i is nearly trivial. By Exercise 16, N'_i and N''_i = N_i − N'_i are independent Poisson processes with mean measures p(x)μ0(dx) and [1 − p(x)]μ0(dx); consequently,

E[N_{n+1} | F^{N'_{n+1}}] = E[N'_{n+1} + N''_{n+1} | F^{N'_{n+1}}] = N'_{n+1} + (1 − p)μ0,    (6.19)

where [(1 − p)μ0](dx) = [1 − p(x)]μ0(dx). Of course, however, (6.19) can be implemented only if p is known; otherwise we are in the combined statistical inference and state estimation setting of Section 3.4. With p̂ the estimators in Theorem 6.15, we can form, guided by principles enunciated in Section 3.4, the pseudo-state estimators

Ê[N_{n+1} | F^{N'_{n+1}}] = N'_{n+1} + (1 − p̂)μ0,    (6.20)

which have the following consistency property.

Proposition 6.16. Under the conditions of Theorem 6.15b), for each δ and p satisfying (6.17)–(6.18),

n^{δ/4} ‖ Ê[N_{n+1} | F^{N'_{n+1}}] − E_p[N_{n+1} | F^{N'_{n+1}}] ‖ → 0

almost surely, where ‖·‖ denotes the total variation norm on the set of finite signed measures on E.

Proof: From (6.19)–(6.20),

( Ê[N_{n+1} | F^{N'_{n+1}}] − E_p[N_{n+1} | F^{N'_{n+1}}] )(dx) = [p(x) − p̂(x)] μ0(dx),

which implies that

n^{δ/4} ‖ Ê[N_{n+1} | F^{N'_{n+1}}] − E_p[N_{n+1} | F^{N'_{n+1}}] ‖ = n^{δ/4} ‖p̂ − p‖₁;

therefore, the result follows from Theorem 6.15b). □

6.3.2 Stochastic Integral Process

Let N be a Poisson process on R_+ having unknown rate λ under the probability measure P_λ, and let (X_t) be a semi-Markov process with state space {0,1} that under each P_λ is independent of N; sojourns of X in 0 having distribution G alternate with sojourns in 1 having distribution F, and all sojourns are mutually independent (see Section 8.4).
We assume that F and G are known, although if, as in Examples 3.5 and 3.10, X is a Markov process with generator

A = ( −a   a )
    (  b  −b ),    (6.21)

then the transition rates a and b can be unknown as well (see Karr, 1984a). The observations are the stochastic integral process (X∗N)_t = ∫_0^t X_s dN_s, which is a point process, together perhaps with X itself. The interpretation is that an atom of N at u is observable in X∗N if and only if X_u = 1. When X is observable, that is, when the observed history is

H_t = F^{X∗N}_t ∨ F^X_t,    (6.22)

inference concerning λ and state estimation for unobserved portions of N are both straightforward. Some statistical results are given in Exercise 6.16; Exercises 3.15–3.17 treat the Markov case. After noting elementary properties of X∗N we examine state estimation for unobserved portions of N given the observations (6.22); thereafter we move on to the more difficult and interesting case that only X∗N is observable.

Proposition 6.17. Under each probability P_λ,

a) X∗N is a Cox process directed by the random measure M(A) = λ ∫_A X_u du;

b) If F and G are nonarithmetic with finite, positive means F̄ and Ḡ, then with q = F̄/(F̄ + Ḡ),

(X∗N)_t/(qt) → λ almost surely;

c) If F and G are nonarithmetic with finite variances σ²(F) and σ²(G), then with σ² = [F̄² σ²(G) + Ḡ² σ²(F)]/(F̄ + Ḡ)³, as t → ∞,

[(X∗N)_t − λqt] / √((1 + λσ²/q) λqt) →d N(0,1). □

The quantity q is the asymptotic fraction of time X spends in state 1 (i.e., the fraction of time during which N is observable). For proofs, see Karr (1982) and Grandell (1976); independence of N and X is crucial.
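Proposition 6.17b) can be checked by simulation. The sketch below is our own example, in the Markov special case of (6.21): exponential sojourns with means F̄ = 1 and Ḡ = 0.5 (so q = 2/3) and λ = 3 are assumed parameters. It generates the on/off trajectory, counts the observable atoms, and forms (X∗N)_t/(qt).

```python
# Illustrative sketch of the estimator (X*N)_t / (q t) of Proposition 6.17b).
# Parameters (lam, sojourn means, horizon) are our own assumed choices.
import random
import math

def poisson(mean, rng):
    l, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def estimate_lam(lam=3.0, f_mean=1.0, g_mean=0.5, t_end=5000.0, seed=11):
    """Simulate alternating on/off sojourns and return (X*N)_t / (q t)."""
    rng = random.Random(seed)
    q = f_mean / (f_mean + g_mean)       # asymptotic fraction of time in state 1
    t, state, observed = 0.0, 1, 0
    while t < t_end:
        sojourn = rng.expovariate(1.0 / (f_mean if state == 1 else g_mean))
        if state == 1:
            # conditionally on the trajectory, the number of observable atoms in
            # this sojourn is Poisson with mean lam * (observable duration),
            # by the Cox-process structure of Proposition 6.17a)
            observed += poisson(lam * min(sojourn, t_end - t), rng)
        t += sojourn
        state = 1 - state
    return observed / (q * t_end)

print(estimate_lam())   # close to the true rate lam = 3
```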
