10 Inference for Stochastic Processes Based on Poisson Process Samples

This chapter focuses not on inference for point processes but rather on inference for other stochastic processes given observations resulting from a point process sampling mechanism. It is also a sampler in itself: we treat only Markov processes, stationary processes on R, and stationary random fields, and only Poisson process samples. We use marked point processes to unify the problem formulation, but this produces a coherent statement, not broadly applicable methods. Nevertheless, problems that arise in point process sampling are of practical interest and do lead to challenging mathematics. To impart structure and harmony, we concentrate on Poisson sampling of stochastic processes. One facet, Poisson sampling of renewal processes, has been treated in Section 8.1.

In more general terms the problem is as follows. Given a process X = (X_t) on R_+ with unknown probability law and a Poisson process N = Σ ε_{T_n}, often but not invariably presumed homogeneous, such that X and N are independent, the goals are to effect statistical inference for the law of X and state estimation concerning unobserved past values or not yet (and possibly never to be) observed future values, given as observations single realizations of the Poisson samples X(T_n), the values of X at the points of N, together possibly with N itself. Observable aspects are encapsulated in the marked point process N̄ = Σ_n ε_{(T_n, X(T_n))}, in which each point of N is marked by the current value of X. As in Chapter 9, asymptotics arise given observations over [0,t] as t → ∞.

Before proceeding we consider the foundational issue of whether inference for the law of X is even possible: is the law of X uniquely determined by that of the Poisson sample sequence? One form of the answer is affirmative, with no restriction on X other than that the X(T_n) be random variables. A spatial version appears in Section 3.

Proposition 10.1. Let X_1 and X_2 be measurable stochastic processes, each continuous in probability, and let N be a Poisson process with rate one, independent of X_1 and X_2. If (X_1(T_n))_{n≥0} =_d (X_2(T_n))_{n≥0}, then X_1 =_d X_2.

It is important to remember that X_1 =_d X_2 means only equality of finite-dimensional distributions.

Proof: With n, the points t_1 < ⋯ < t_n and x_1, …, x_n fixed, define

F_i(t_1, …, t_n) = P{X_i(t_j) ≤ x_j, j = 1, …, n},   i = 1, 2.

Given positive integers k_1, …, k_n, let K_j = Σ_{i≤j} k_i and let S_j = T_{K_j}. Then

∫_0^∞ ⋯ ∫_0^∞ F_i(u_1, u_1 + u_2, …, Σ_{j=1}^n u_j) Π_{j=1}^n [e^{−u_j} u_j^{k_j−1}/(k_j − 1)!] du_1 ⋯ du_n = E[F_i(S_1, …, S_n)] = P{X_i(S_j) ≤ x_j, j = 1, …, n},

and these expressions are equal for i = 1, 2 by assumption. Multiplication by Π_{j=1}^n (1 − θ_j)^{k_j−1}, where 0 < θ_j < 1, followed by summation over k_1, …, k_n, confirms that

∫_0^∞ ⋯ ∫_0^∞ F_1(u_1, …, Σ_{j=1}^n u_j) exp{−Σ_{j=1}^n θ_j u_j} du_1 ⋯ du_n = ∫_0^∞ ⋯ ∫_0^∞ F_2(u_1, …, Σ_{j=1}^n u_j) exp{−Σ_{j=1}^n θ_j u_j} du_1 ⋯ du_n.

By the assumption of continuity in probability, F_1 and F_2 are continuous; therefore, this last expression implies that F_1 = F_2. Since n and the x_j were arbitrary, X_1 =_d X_2. ∎

For the Markov process model of Section 1, the Poisson sample sequence is a Markov chain, to which, given assumptions of irreducibility and recurrence, well-known methods apply. Indeed, there is a converse, Theorem 10.3, to Proposition 10.2: if the Poisson sample sequence is Markov, then so must be X itself. For binary (0–1 valued) Markov processes, consistent estimation is possible even if the rate of N is unknown and the observations are only the thinned point process N′ = Σ X(T_n) ε_{T_n}. This situation has been discussed, from the viewpoint of partially observed Poisson processes, in Section 6.3.
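To make the sampling mechanism concrete, here is a minimal simulation sketch in Python; it is not from the text, and the two-state assumption and the rates a, b, λ are illustrative placeholders. It generates a path of a binary Markov process X, an independent homogeneous Poisson process N, and the observable marked point process N̄ recorded as the pairs (T_n, X(T_n)).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_binary_markov(a, b, t_max, x0=0):
    """Piecewise-constant path of a 0-1 valued Markov process with
    transition rates a (0 -> 1) and b (1 -> 0), on [0, t_max]."""
    jump_times, states = [0.0], [x0]
    t, x = 0.0, x0
    while t < t_max:
        t += rng.exponential(1.0 / (a if x == 0 else b))
        x = 1 - x
        jump_times.append(t)
        states.append(x)
    return np.array(jump_times), np.array(states)

def path_value(jump_times, states, s):
    """Value of the simulated path at time s (paths are right-continuous)."""
    return states[np.searchsorted(jump_times, s, side="right") - 1]

# Illustrative parameters (placeholders, not taken from the text).
a, b, lam, t_max = 0.7, 1.3, 2.0, 100.0
jump_times, states = simulate_binary_markov(a, b, t_max)

# Independent Poisson sampling process N with rate lam on [0, t_max].
T = np.sort(rng.uniform(0.0, t_max, size=rng.poisson(lam * t_max)))
marks = np.array([path_value(jump_times, states, s) for s in T])
# The observable marked point process is the collection of pairs (T[k], marks[k]).
```

The pairs (T, marks) are all the statistician sees; the path of X is never observed between sample points.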
In Section 1 we also consider combined inference and state estimation, for which, in the finite state space case, a satisfactory solution is available.

In Section 2 we examine Poisson sampling of stationary stochastic processes on R. Although the law of X may not be recoverable from single realizations, consistent estimation of the mean and the covariance function is possible from asynchronous and synchronous observations of the Poisson sample process. In fact, Poisson sampling is superior to sampling at regularly spaced instants t_n = nΔ: because of "aliasing," estimation of the spectral density function given regular samples is impossible in general.

In Section 3 we consider Poisson sampling of stationary random fields. Results mimic those in Section 2 but are more fragmentary.

10.1 Markov Processes

In this section we examine Poisson sampling for Markov processes. Let X be a Markov process with finite state space E, let P_i be the law of X under the initial condition X_0 = i, let P_t(i,j) = P_i{X_t = j} be the transition function, let A = P_0′ be the generator matrix, and for α > 0 let U^α(i,j) = ∫_0^∞ e^{−αt} P_t(i,j) dt be the α-potential matrix. Then

U^α = (αI − A)^{−1},

so that (P_t) is determined by each U^α.

Suppose that N = Σ ε_{T_n} is a Poisson process with rate λ, and is independent of X. Then the following result is immediate.

Proposition 10.2. The Poisson sample process (X(T_n)) is a Markov chain with transition matrix Q = λU^λ. ∎

Less apparent is the converse: a Markov Poisson sample process can arise only from a Markov process.

Theorem 10.3. Let X be a measurable process, continuous in probability, with finite state space E, and let N be a unit rate Poisson process independent of X. If the Poisson sample process is a (homogeneous) Markov chain, then there exists a Markov process X̃ such that X =_d X̃.

Proof: Let R_t(i,j) = P{X_t = j | X_0 = i}; these are continuous functions of t, and the transition matrix Q of (X(T_n)) satisfies

Q^n(i,j) = ∫_0^∞ R_u(i,j) [e^{−u} u^{n−1}/(n − 1)!] du

for each n. From the Chapman–Kolmogorov equation for (X(T_n)) we infer that for each n and m,

∫_0^∞ ∫_0^∞ [R_{u+v}(i,j) − Σ_{k∈E} R_u(i,k) R_v(k,j)] u^n v^m e^{−(u+v)} du dv = 0,

and consequently (by transform or Hilbert space arguments)

R_{u+v}(i,j) = Σ_{k∈E} R_u(i,k) R_v(k,j)

for all i and j. There exists a Markov process X̃ with transition function (R_t), and it is immediate that (X(T_n)) =_d (X̃(T_n)), from which the theorem follows by appeal to Proposition 10.1. ∎

We now consider statistical inference, for which we suppose that the rate λ of the Poisson process is unknown. Our goal is nonparametric estimation of λ and of the generator A from the observations F_t^{N̄} of the marked point process N̄ = Σ ε_{(T_n, X(T_n))}, with asymptotics as t → ∞. We do not impose the complication of an unobservable sampling process, but permit N to be observed: F_t^{N̄} corresponds to observation of N over [0,t] together with the Poisson samples X(T_1), …, X(T_{N_t}). Since the transition function satisfies

P_t = e^{tA} = Σ_{n≥0} t^n A^n / n!,   (10.1)

the likelihood function is

L_t(λ, A) = [λ^{N_t} e^{−λt}] Π_{k=1}^{N_t} P_{T_k − T_{k−1}}(X(T_{k−1}), X(T_k)),   (10.2)

where

N_t(i,j) = Σ_{k=1}^{N_t} 1(X(T_{k−1}) = i, X(T_k) = j)   (10.3)

is the number of i-to-j transitions for (X(T_n)) during [0,t] and

U_t(i,j) = Σ_{k=1}^{N_t} (T_k − T_{k−1}) 1(X(T_{k−1}) = i, X(T_k) = j)

is the time spent in state i during sojourns that terminate with a jump to j. Maximization of L_t with respect to λ and A seems impossible. Even in the two-state case (see Karr, 1984a), dependence of L_t on A is too complicated to permit explicit solution of the likelihood equations.
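As a numerical illustration of (10.1)–(10.2), the following sketch evaluates the log-likelihood with matrix exponentials. It assumes the convention T_0 = 0, so that X_0 = X(T_0) is among the observations, codes the states as 0, …, |E|−1, and uses a placeholder two-state generator; none of these specifics come from the text.

```python
import numpy as np
from scipy.linalg import expm

def log_likelihood(lam, A, x0, T, marks, t):
    """log L_t(lam, A) as in (10.2): the Poisson factor lam^{N_t} e^{-lam t}
    times the product over sample intervals of P_u(X(T_{k-1}), X(T_k)),
    where P_u = expm(u * A) as in (10.1)."""
    ll = len(T) * np.log(lam) - lam * t
    prev_time, prev_state = 0.0, x0
    for time, state in zip(T, marks):
        P = expm((time - prev_time) * A)   # transition function over the gap
        ll += np.log(P[prev_state, state])
        prev_time, prev_state = time, state
    return ll

# Hypothetical two-state generator with rates 0->1 and 1->0 (placeholders).
A = np.array([[-0.7,  0.7],
              [ 1.3, -1.3]])
# Example call, reusing T and marks from the simulation sketch above:
# log_likelihood(2.0, A, 0, T, marks, 100.0)
```

Numerical maximization over λ and the entries of A is possible in principle, but explicit solution of the likelihood equations is not; the substitution estimators introduced next sidestep the problem.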
Of course, since L_t factors into a function of λ alone and a function of A alone, we can easily derive that the maximum likelihood estimator of λ is λ̂ = N_t/t. For estimation of A we resort to a simplification: (X(T_n)) has transition matrix Q = λU^λ = λ(λI − A)^{−1}; consequently,

A = λ(I − Q^{−1}).   (10.4)

If the Markov chain (X(T_n)) alone were observable, the nonparametric maximum likelihood estimator of Q would be (Billingsley, 1961a, 1961b)

Q̂(i,j) = N_t(i,j) / Σ_{k=1}^{N_t} 1(X(T_{k−1}) = i),   (10.5)

where N_t(i,j) is given by (10.3). To estimate A, we amalgamate (10.4)–(10.5):

Â = λ̂(I − Q̂^{−1}).   (10.6)

While not maximum likelihood estimators, λ̂, Q̂ and Â are strongly consistent and asymptotically normal notwithstanding, and inferior only to the extent that they are inefficient.

Theorem 10.4. Assume that X is irreducible (hence positive recurrent). Then as t → ∞, (λ̂, Â) → (λ, A) almost surely.

Proof: That λ̂ → λ almost surely has been established previously, while by the strong law of large numbers for irreducible Markov chains (Doob, 1953),

Σ_{k=1}^n 1(X(T_{k−1}) = i, X(T_k) = j) / Σ_{k=1}^n 1(X(T_{k−1}) = i) → ν(i)Q(i,j)/ν(i) = Q(i,j)

almost surely, where ν is the unique limit distribution of (X(T_n)). Since N_t → ∞ almost surely, it follows, with Q̂ given by (10.5), that Q̂ → Q almost surely. In particular, since Q is nonsingular, Q̂ is nonsingular for all sufficiently large t, removing any doubt that the estimators Â are well defined. Since matrix inversion is continuous, consistency is confirmed. ∎
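A direct implementation of (10.4)–(10.6) is short; the sketch below assumes, as in the earlier examples, that X(T_0) is observed and that every state is visited, so that the row sums and the inverse of Q̂ exist.

```python
import numpy as np

def substitution_estimators(T, marks, x0, t, n_states):
    """lam_hat = N_t / t, Q_hat as in (10.5) from the embedded chain
    (X(T_0), X(T_1), ...), and A_hat = lam_hat (I - Q_hat^{-1}) as in (10.6)."""
    lam_hat = len(T) / t
    chain = np.concatenate(([x0], marks))
    counts = np.zeros((n_states, n_states))          # N_t(i, j) of (10.3)
    for i, j in zip(chain[:-1], chain[1:]):
        counts[i, j] += 1
    Q_hat = counts / counts.sum(axis=1, keepdims=True)
    # Q_hat is nonsingular for all sufficiently large t (cf. the proof above).
    A_hat = lam_hat * (np.eye(n_states) - np.linalg.inv(Q_hat))
    return lam_hat, Q_hat, A_hat

# Example, continuing the earlier simulation:
# lam_hat, Q_hat, A_hat = substitution_estimators(T, marks, 0, 100.0, 2)
```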
Theorem 10.5. Assume that X is irreducible. Then

√t (Â − A) →_d N(0, R)   (10.7)

on R^d, where d = |E|² and the covariance matrix R is given by (10.12) below.

Proof: Consider the random elements (λ̂, Q̂) of R^{1+d}. By Proposition 3.21,

√n [n/T_n − λ] →_d N(0, λ²),   (10.8)

while by the central limit theorem for Markov chains (Doob, 1953; Basawa and Prakasa Rao, 1980; Billingsley, 1961a, 1961b), with Q̂*_n the estimator (10.5) computed from the first n transitions of (X(T_k)),

√n [Q̂*_n − Q] →_d N(0, R_0),   (10.9)

where R_0 is the d × d matrix

R_0((i,j),(i′,j′)) = [1(i = i′)/ν(i)] [Q(i,j) 1(j = j′) − Q(i,j) Q(i′,j′)].   (10.10)

(Again ν is the unique invariant distribution of (X(T_n)).) Independence of X and N allows us to combine (10.8) and (10.9):

√n [(n/T_n, Q̂*_n) − (λ, Q)] →_d N(0, R_1)

on R^{1+d}, where R_1(0,0) = λ², R_1(0,(i,j)) = 0 and R_1((i,j),(i′,j′)) = R_0((i,j),(i′,j′)). By appeal to Serfozo (1975, Theorem 8.1), we then have

√t [(λ̂, Q̂) − (λ, Q)] →_d N(0, R_2),   (10.11)

where R_2 = (1/λ)R_1. Now let G be the open set of points (y, V) in R^{1+d} such that the matrix V is nonsingular, and let H : G → R^d be the function

H(y, V) = y(I − V^{−1}).

According to (10.4), A = H(λ, Q), while Theorem 10.4 implies that almost surely (λ̂, Q̂) ∈ G for all sufficiently large t, so that (10.6) gives Â = H(λ̂, Q̂). With J_H(λ, Q) the Jacobian of H evaluated at (λ, Q), it follows from (10.11) and multivariate transformation theory that (10.7) holds with

R = J_H(λ, Q) R_2 J_H(λ, Q)^T.   (10.12) ∎

State estimation for X is easy. Given the observations F_t^{N̄} = F^{N̄}([0,t] × E), in order to calculate the conditional distribution P{X_u = j | F_t^{N̄}}, we notice that by independence of X and N, for u ∈ [T_{k−1}, T_k),

P{X_u = j | F_t^{N̄}} = P_{u − T_{k−1}}(X(T_{k−1}), j) P_{T_k − u}(j, X(T_k)) / P_{U_k}(X(T_{k−1}), X(T_k)),   (10.13)

where (P_t) is the transition function and U_k = T_k − T_{k−1} is the kth interarrival time of N. For u > T_{N_t},

P{X_u = j | F_t^{N̄}} = P_{u − T_{N_t}}(Z_t, j),   (10.14)

where Z_t = X(T_{N_t}) is the most recently observed value of X. In particular, choosing u = t yields the solution of the filtering problem:

P{X_t = j | F_t^{N̄}} = P_{V_t}(Z_t, j),   (10.15)

where V_t = t − T_{N_t} is the backward recurrence time of N at t (see Chapter 8). These formulas do not depend on the Poisson nature of N; they remain valid for any sampling process independent of X. However, for combined statistical inference and state estimation the Poisson assumption will be important again.

We now examine a more restricted form of observation, but under the assumption that E = {0,1}, so that X is a binary Markov process with transition rates a (0 → 1) and b (1 → 0). Suppose that the point T_n is observed if and only if X(T_n) = 1, so that the observations are the point process

N′_t = Σ_n 1(T_n ≤ t) X(T_n) = ∫_0^t X dN.   (10.16)

This process is a thinning of N, albeit not in the sense of Definition 1.38, and has already appeared several times, viewed as partial observation of N forced by X, rather than vice versa. From Theorem 8.24, N′ is a renewal process whose interarrival distribution F satisfies

∫_0^∞ e^{−αt} dF(t) = λ(α + a) / (α² + (a + b + λ)α + aλ).   (10.17)

We assume that a, b and λ are all positive. From observations F_t^{N′}, consistent estimation of a, b and λ is possible in the following manner. There exists (Exercise 10.4) an invertible function H satisfying

(a, b, λ) = H(1 − F(1), 1 − F(2), 1 − F(3)).   (10.18)

With F̂(x) = (1/N′_t) Σ_{k=1}^{N′_t} 1(U_k ≤ x) the F_t^{N′}-estimator of F from Section 8.1, (a, b, λ) is then estimated by substitution:

(â, b̂, λ̂) = H(1 − F̂(1), 1 − F̂(2), 1 − F̂(3)).

These estimators (see Karr, 1984a, for proofs) are strongly consistent and asymptotically normal.

Proposition 10.6. a) (â, b̂, λ̂) → (a, b, λ) almost surely;
b) √t [(â, b̂, λ̂) − (a, b, λ)] →_d N(0, R), where R is a covariance matrix not calculated here. ∎

The principal state estimation problem, the filtering problem of calculating P{X_t = 1 | F_t^{N′}}, is solved in Theorem 6.22; variants are treated in Karr (1984a).

Reverting to Poisson sampling with observations N̄ = Σ ε_{(T_n, X(T_n))}, we conclude the section with analysis of combined statistical inference and state estimation. The state space E is again an arbitrary finite set. Since the main state estimation issue is filtering, we confine attention to it (for extension to prediction, see Exercise 10.5). By (10.1) and (10.15), for each t,

P{X_t = j | F_t^{N̄}} = e^{V_t A}(Z_t, j),   (10.19)

where V_t is the backward recurrence time at t in the sampling process N and Z_t = X(T_{N_t}) is the most recent observation of X. Following our method in Sections 7.4 and 8.3, we construct pseudo-state estimators by substituting the F_t^{N̄}-estimator Â into (10.19):

P̂{X_t = j | F_t^{N̄}} = e^{V_t Â}(Z_t, j).

The main feature is asymptotic normality of the difference between the pseudo- and the true state estimators.

Theorem 10.7. Assume that X is irreducible, and let π be the limit distribution of X_t as t → ∞. Then as random elements of R^{|E|},

√t [P̂{X_t ∈ (·) | F_t^{N̄}} − P{X_t ∈ (·) | F_t^{N̄}}] →_d Y(·),

where Y(·) = [∫_0^{V_∞} e^{uA} A_∞ e^{(V_∞ − u)A} du](Z_∞, ·), the derivative of A ↦ e^{V_∞ A}(Z_∞, ·) in the direction A_∞, and
a) A_∞ ~ N(0, R), with R the covariance matrix of (10.12);
b) V_∞ is exponentially distributed with parameter λ;
c) Z_∞ is a random element of E with distribution π;
d) A_∞, V_∞ and Z_∞ are mutually independent.

Proof: It suffices to show that (√t[Â − A], V_t, Z_t) →_d (A_∞, V_∞, Z_∞) with a)–d) fulfilled. That √t[Â − A] →_d A_∞ satisfying a) is established in Theorem 10.5; Â is asymptotically independent of (V_t, Z_t) by the mixing character of X and the independence of X and N. Concerning (V_t, Z_t), let F be the exponential distribution with parameter λ and R(t) = 1 + λt the associated renewal function (see Chapter 8). Then for each j and y,

P{V_t > y, Z_t = j} = Σ_{k=0}^∞ P{T_k ≤ t − y, X(T_k) = j, T_{k+1} > t}
 = Σ_{k=0}^∞ ∫_0^{t−y} F^{k*}(du) P{X_u = j} [1 − F(t − u)]
 = P{X_0 = j} e^{−λt} + λ ∫_0^{t−y} P{X_u = j} e^{−λ(t−u)} du

(i.e., integration with respect to the renewal measure R(du) = ε_0(du) + λ du).
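Once Â is in hand, the pseudo-state estimator obtained by substituting Â into (10.19) is a one-line computation; a minimal sketch, continuing the earlier Python examples (with Â taken from the estimator sketch above), returns the vector P̂{X_t = · | F_t^{N̄}}.

```python
import numpy as np
from scipy.linalg import expm

def pseudo_filter(A_hat, T, marks, t):
    """Empirical version of (10.19): exp(V_t * A_hat)(Z_t, .), where
    V_t = t - T_{N_t} is the backward recurrence time of the sampling
    process and Z_t is the most recently observed value of X."""
    V_t = t - T[-1]
    Z_t = marks[-1]
    return expm(V_t * A_hat)[Z_t, :]   # row Z_t of the matrix exponential

# Example, continuing the earlier sketches:
# pseudo_filter(A_hat, T, marks, 100.0)
```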
In particular, if ∫ |R(t)| dt < ∞,

E[Ĥ(ν)] = ν² f(ν) + νR(0)/2π + O(1/t)

and

Cov(Ĥ(ν_1), Ĥ(ν_2)) = (1/t)[ν² f(ν_1) + νR(0)/2π][ν² f(ν_2) + νR(0)/2π][Δ_t(ν_1 + ν_2) + Δ_t(ν_1 − ν_2)] + O(1/t),

where Δ_t(ν) = (1/t)[2 sin(νt/2)/ν]² is the Féjer kernel, which arises in time series analysis (see Masry, 1978a, or Exercise 10.15). Notwithstanding finite sample differences, the asymptotics are the same as for synchronous data.

10.3 Stationary Random Fields

This section is a spatial version of Section 2, but is incomplete, and emphasizes problems, not solutions. Before proceeding to the main results, we deal with random fields and present a multidimensional analogue of Proposition 10.1. A random field is a stochastic process with a multidimensional (Euclidean) parameter set, and stationarity is translation invariance in the obvious sense.

Definition 10.16. a) A random field on R^d is a measurable stochastic process (Y(x))_{x∈R^d} taking values in R;
b) The random field Y is stationary if (Y(x + y))_{x∈R^d} =_d (Y(x))_{x∈R^d} for each y ∈ R^d.

As in the one-dimensional case, if Y is stationary and if E[Y(x)²] < ∞ for each x, then Y is L²-stationary and there exist a covariance function R(y) = Cov(Y(x), Y(x + y)) and a spectral measure F satisfying

R(y) = ∫_{R^d} e^{i⟨y,ν⟩} F(dν),   y ∈ R^d,

where ⟨·,·⟩ denotes the inner product. However, whereas in one dimension the covariance function is symmetric and hence a function only of |t|, in the multidimensional case the stronger condition of isotropy is not implied by stationarity.

Definition 10.17. An L²-stationary random field Y is isotropic if for every rotation τ of R^d, Cov(Y(τx), Y(τy)) = Cov(Y(x), Y(y)) for all x and y in R^d.

For an isotropic random field the covariance function reduces to a function R on R_+ satisfying Cov(Y(x), Y(y)) = R(|x − y|) for all x and y. For simplicity we assume that EY(x) ≡ 0. There exists a spectral representation

Y(x) = ∫_{R^d} e^{i⟨x,ν⟩} Z(dν),

where Z is a random measure with orthogonal increments. For isotropic random fields more detailed spectral representation material can be found in Yadrenko (1983).

Our main topic in the section is estimation for stationary random fields given Poisson samples. That is, let Y be a stationary random field on R^d and let N = Σ ε_{X_n} be a stationary Poisson process on R^d, independent of Y, with intensity ν. The data are the marked point process N̄ = Σ ε_{(X_n, Y(X_n))}, with each point X_n of N marked by the value Y(X_n). We assume that the sampling process is observable, and consider "synchronous" data F^{N̄}(K) = F^{N̄}(K × R), where K is a compact, convex subset of R^d.
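The following sketch produces one realization of such synchronous data over K = [0, s]²: Poisson sites in K together with the field values at those sites, which is all that the marked point process records. The Gaussian field with exponential covariance is chosen only for illustration; nothing in the text restricts Y to be Gaussian, and the intensity and window are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def poisson_field_samples(nu, side, cov):
    """Sites of a rate-nu Poisson process in K = [0, side]^2, each marked by
    the value of a centered Gaussian random field with isotropic covariance
    function cov(r).  Simulating the field only at the sites suffices to
    realize the marked point process."""
    n = rng.poisson(nu * side ** 2)
    sites = rng.uniform(0.0, side, size=(n, 2))
    dists = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
    marks = rng.multivariate_normal(np.zeros(n), cov(dists))
    return sites, marks

# Illustrative covariance R(r) = exp(-r).
sites, marks = poisson_field_samples(nu=0.5, side=10.0, cov=lambda r: np.exp(-r))
```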
Before examining estimation of the mean and covariance function we present a generalization of Proposition 10.1, to the effect that the law of the marked point process determines that of the random field. Indeed, in the following result neither process need be stationary.

Theorem 10.18. Let Y be a random field on R^d that is continuous in probability, and let N be a Poisson process on R^d with diffuse mean measure μ satisfying μ(G) > 0 for every open set G; assume that Y and N are independent. Then the law of the marked point process N̄ = Σ ε_{(X_n, Y(X_n))} determines that of Y.

Proof: Let Y_1 and Y_2 be random fields fulfilling the hypotheses of the theorem, with associated marked point processes N̄_1 and N̄_2, and suppose without loss of generality that |Y_1| ≤ 1 and |Y_2| ≤ 1. Fix x_1, …, x_k and choose sets G_{1n}, …, G_{kn} shrinking to x_1, …, x_k as n → ∞. Since μ(G_{jn}) > 0 for every n, the marks of N̄_i carried by points in G_{1n}, …, G_{kn}, suitably normalized by μ(G_{1n}), …, μ(G_{kn}), converge in distribution, by continuity in probability of Y_1 and Y_2, to (Y_i(x_1), …, Y_i(x_k)), i = 1, 2. Hence equality in law of N̄_1 and N̄_2 forces equality in law of these limits, so that Y_1 =_d Y_2. ∎

Suppose now that Y is an L²-stationary random field on R^d with unknown mean m and covariance function R, to be estimated from synchronous observations F^{N̄}(K) of the Poisson sample process N̄. (Once again, N is a stationary Poisson process with known intensity ν.) By analogy with (10.20) we introduce the F^{N̄}(K)-estimators

m̂ = (1/(ν λ_d(K))) ∫_K Y dN̄,   (10.38)

where K is compact and convex and λ_d denotes Lebesgue measure on R^d. As customary, the "sample size" K is suppressed from the notation. Were ν unknown, one could use the estimators m̂* = (1/N(K)) ∫_K Y dN̄, whose properties are superior in some ways even when ν is known (see Karr, 1986a). Mean square consistency of the m̂ is one consequence of the following analogue of Proposition 10.8, whose straightforward computational proof is omitted.

Proposition 10.19. Let m̂ be given by (10.38). Then for each K, E[m̂] = m and

Var(m̂) = (R(0) + m²)/(ν λ_d(K)) + (1/λ_d(K)²) ∫_{R^d} R(y) λ_d(K ∩ (K − y)) dy,   (10.39)

where K − y = {x − y : x ∈ K}. ∎

To establish consistency not only of the m̂ but also of estimators of the covariance function, we need the following property. Recall from Chapters 1 and 9 that for a compact, convex set K, δ(K) denotes the supremum of radii of Euclidean balls contained in K. Then, for each y ∈ R^d,

lim_{δ(K)→∞} λ_d(K ∩ (K − y))/λ_d(K) = 1.   (10.40)

We can now show consistency of the m̂.
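Both estimators of the mean mentioned above are immediate to compute from the synchronous data; a sketch, reusing the marks from the previous example with K = [0, side]² so that λ_d(K) = side², is given below. The specific values of nu and side are again placeholders.

```python
def mean_estimators(marks, nu, side):
    """m_hat of (10.38), normalized by nu * lebesgue(K), and the alternative
    m_hat_star, normalized by the observed number of points N(K)."""
    m_hat = marks.sum() / (nu * side ** 2)
    m_hat_star = marks.mean() if len(marks) > 0 else 0.0
    return m_hat, m_hat_star

m_hat, m_hat_star = mean_estimators(marks, nu=0.5, side=10.0)
```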
