Adv. Appl. Prob. 29, 986-1003 (1997). Printed in N. Ireland. © Applied Probability Trust 1997
0. Introduction

In recent years, non-linear time series analysis has developed rapidly [1], [12]. Using modern Markov chain theory [13], [14], many types of sufficient and necessary conditions have been worked out for the existence of ergodic distributions. These conditions have been applied to some non-linear time series models to show that these models are asymptotically strictly stationary under certain conditions. This is a 'probability domain' approach. In contrast, classical time series analysis concentrates on the 'time domain' and the 'frequency domain'. In the time domain, ARMA models and ARMA estimation methods have been developed. In the frequency domain, sinusoidal models and the spectra of second-order stationary processes have been investigated. It is well known that the time and frequency domains are interrelated. New approaches have established the relationship between time domain models and probability domain results. For example, threshold autoregression (TAR) models have been shown to be asymptotically strictly stationary [1].
Received 4 October 1995; revision received 17 June 1996.
* Postal address: School of Mathematics, Queensland University of Technology, GPO Box 2434, Brisbane, QLD 4001, Australia.
In this paper, combining methods in the three domains, we investigate the following model in communications:

(0.1)  s_t = A cos(tω_c + θ_t),  0 < ω_c < π,  θ_t = θ_{t−1} + βm_t,  t = 1, 2, 3, …,
where A and ω_c are the amplitude and carrier frequency respectively, {m_t} is the message process to be transmitted, and β is the modulation index that controls the bandwidth of the spectrum of the FM signal. The initial phase θ_0 may be constant or random; we will specify it later in different situations. We call (0.1) a stochastic frequency modulation (FM) model. This model can be regarded as a non-linear time series model since the observation is a non-linear function of the process of interest {m_t}.

The spectral analysis of signals in communications is very important. As mobile and cellular phone communications are developing rapidly, we have to accommodate more and more channels in a limited frequency band. This requires us to calculate the spectra of FM signals accurately to guarantee that the bandwidth in each channel is narrow enough without overlap with others. For example, if the message process is a pure sinusoidal function, the bandwidth of the FM signals can be calculated by Bessel functions. The calculation shows that the bandwidth is a function of β, so we can choose the modulation index β to satisfy a bandwidth requirement. This is the method introduced in most textbooks in communications.

However, we do not transmit pure sinusoidal messages in real communication systems. A more plausible model of {m_t} is that it is a stochastic process with certain properties. A thorough study of the continuous version of the model (0.1) can be found in [8, ch. 14]. This study is based on two types of Fourier transforms. Firstly, the autocovariance function of the signal process {s_t} can be calculated using the characteristic function, that is, the Fourier transform of the probability distribution of a linear combination of the m_t. When the message process {m_t} is a Gaussian stationary process, a linear combination of its values is still a Gaussian random variable. So the autocovariance function of {s_t} can be represented with Gaussian characteristic functions. It turns out that under certain conditions this autocovariance function does not depend on time t, so {s_t} is second-order stationary. Secondly, the spectrum of {s_t} can be obtained through the inverse Fourier transform of the autocovariance function. However, the resultant spectrum is described by an integration. Although some limiting forms are given in [8] to simplify this result under some special conditions, we do not know any error bound between the limiting form and the true spectrum. Another spectral study for stochastic FM models is described in Sections 4.12 and 4.13 in [11]. This result is simple: the power spectral density of the FM signal process 'is determined by, and has the same form as, the (probability) density function' of the message process. However, the authors of [11] admitted that their approach is only 'heuristic' and that 'we shall not be able to deduce the spectrum with the (same) precision' as when the message process is a deterministic sinusoidal
D. HUANG
function (p. 159 in [11]). We shall show that the second result may have a large error in some cases.

Our contribution to this problem is in developing parametric ARMA models for some FM signals and ARMA approximations for the spectra of general FM signals. Firstly, for any MA(q) Gaussian process {m_t}, we show that the signal process {s_t} is an exact ARMA(2, q+1) process. Secondly, we introduce a method that can provide an error bound for an ARMA approximation in the spectrum for a general Gaussian process {m_t}. Examples show that this method works much better than the method in [8] even in the case when {m_t} is not an MA process. Thus, the non-linear model (0.1) in the frequency domain can be converted to, or approximated by, a linear ARMA model in the time domain.

These results can also be applied to FM signal demodulation, i.e. recovering {m_t} from noisy observations. Many existing methods have been developed for this purpose. However, to the author's knowledge, no exactly optimal results are available for this non-linear filtering problem. Using the ARMA representation of FM signals, we can establish a state space model. Let {x_t} be the Markovian extension (see [2]) of the ARMA representation of the FM signal and {y_t = s_t + v_t} be the noisy observation sequence. Then

(0.2)  x_t = G x_{t−1} + u_t,  y_t = [1, 0, …, 0] x_t + v_t.

So, the best linear tracking algorithm for {s_t} can be developed by the Kalman filter. We will discuss this problem in full detail in a subsequent paper.

In this paper we also study the distribution of {s_t}, which provides a focus of non-linear time series analysis. Firstly, assume that {m_t} is a Markov chain process, which may not be Gaussian. We consider a vector-valued Markov chain
(0.3)  X_t = [θ̃_t, m_t]^T,  θ̃_t = (tω_c + θ_t) (mod 2π),  t = 1, 2, 3, ….
Using the classical result for ergodic Markov chains [3], we show that under certain conditions on the transition kernel of {m_t}, {X_t} is Harris ergodic [13], [14]. Then {s_t} will be asymptotically strictly stationary. Usually it is difficult to find a closed form of the limit distribution for an ergodic Markov chain. Fortunately, for a very large class of stationary processes {m_t}, we have

(0.4)  lim_{t→∞} Pr{s_t ≤ x} = arcsin(x/A)/π + 1/2,  |x| ≤ A.

This result shows the probabilistic characteristic of the non-linear model (0.1). It also has significance in FM demodulation. Although the Kalman filter can be used for FM demodulation based on the model (0.2), it is efficient only when the state and observation variables in that model are Gaussian. The distribution in (0.4) shows that this is not true. To obtain a more efficient estimator, we have to consider probabilistic approaches, such as Bayesian and Gaussian sum methods [10]. The limit distribution (0.4) provides a basis for this approach. We will continue the study in this direction in the future.
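Model (0.1) is straightforward to simulate. The following sketch generates a sample path of the FM signal; the Gaussian white message, amplitude A = 1, carrier ω_c = 0.25π and modulation index β = 0.1 are all illustrative choices, not values fixed by the text:

```python
import math
import random

random.seed(1)
A, wc, beta = 1.0, 0.25 * math.pi, 0.1   # amplitude, carrier, modulation index (illustrative)

theta = 0.0                               # initial phase theta_0 = 0
signal = []
for t in range(1, 1001):
    m_t = random.gauss(0.0, 1.0)          # Gaussian white message process {m_t}
    theta += beta * m_t                   # theta_t = theta_{t-1} + beta * m_t
    signal.append(A * math.cos(t * wc + theta))   # s_t = A cos(t*omega_c + theta_t)

print(max(abs(s) for s in signal))        # the signal never exceeds the amplitude A
```

The non-linearity of the model is visible here: the observation s_t is a bounded, non-linear function of the message path, which is exactly why linear filtering alone cannot be optimal for demodulation.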
There are other types of signal modulation in communications. Amplitude modulation (AM) and phase modulation (PM) are two such types. In contrast to FM signals, they are not second-order stationary unless they are phase-randomized, i.e. unless the initial phase θ_0 is a random variable uniformly distributed on [a − π, a + π] for a real number a (see [4, p. 370]). Without phase randomization, AM and PM signals should be modelled by cyclostationary or periodically correlated processes [4], [5], [7]. They are more general processes than stationary processes. However, according to [4, p. 352], at least the FM signals with 'low-pass (or all-pass) modulation' are asymptotically second-order stationary even without phase randomization. In this paper we obtain a more accurate expression for the condition that FM signals are asymptotically stationary (see (1.2) in Section 1). This expression shows that we do not need to use the more complicated cyclostationary processes for modelling most FM signals (see Remark 1.2 in Section 1). Finally, since the sufficient and necessary condition for a scalar process being cyclostationary is that it can be converted to a vector-valued stationary process [5], extending the above results from the scalar case to the vector case, we expect in the future to apply the methods in this paper to the non-stationary AM and PM signals.

This paper is organised as follows. Section 1 starts from the discrete version of the results in [8] (Lemma 1.1). A comparison of condition (1.2) with the statements in [4], both being for second-order stationary FM signals, is in Remark 1.2. Then the ARMA spectrum (Theorem 1.3) and the ARMA approximation within an error bound (Theorem 1.4) are presented. Some examples of spectra for different message processes {m_t} are given at the end of Section 1. In Section 2, we show that the {X_t} in (0.3) is a Harris ergodic Markov chain under certain conditions (Theorem 2.1 and Corollary 2.2).
Then the limit distribution of sin(ω_t + θ_t) and cos(ω_t + θ_t), where ω_t is a deterministic function of t or a stochastic function but independent of θ_t, is obtained in Lemma 2.3. The result (0.4) is obtained in Theorem 2.4. Other results about a special FM signal, including the best non-linear prediction and the limit behaviour of sample statistics, can be found in a previous paper [6].
1. ARMA spectra and ARMA approximations

Set

(1.1)  φ(0) = 0,  φ(k) = (1/2) E[(Σ_{j=1}^{k} m_j)²],  k = 1, 2, 3, ….
Similar to the result in [8], which deals with the continuous case, we have the following result.

Lemma 1.1. Assume that: (i) {m_t} in (0.1) is a scalar Gaussian stationary sequence with zero mean and autocovariance function (acf) r_k = E{m_{t+k} m_t}; (ii) the initial
phase θ_0 is constant or random but independent of {m_t}; (iii) {m_t} has a spectral density function f_m(λ) that satisfies

(1.2)  f_m(0+) + f_m(0−) = 2δ/π > 0,

where f_m(0+) and f_m(0−) are the right- and left-hand side limits respectively. Then {s_t} in (0.1) is an asymptotically second-order stationary sequence with zero mean and acf
(1.3)  R_0 = lim_{t→∞} E(s_t²) = A²/2,  R_k = lim_{t→∞} E(s_{t+k} s_t) = (A²/2) e^{−β²φ(k)} cos kω_c = R_{−k},  k ≠ 0,

and spectral density function

(1.4)  f_s(λ) = (1/2π) Σ_{k=−∞}^{∞} R_k e^{−ikλ},

where, by (1.1),

φ(k) = (1/2) Σ_{m=1}^{k} Σ_{n=1}^{k} r_{m−n} = (k/2) Σ_{n=1−k}^{k−1} (1 − |n|/k) r_n,
where the series on the right-hand side is the Cesàro sum of the acf sequence {r_n} (see, for example, [9, ch. 2]). Note that {r_n} is the sequence of Fourier coefficients of f_m(λ). According to a well-known result in Fourier analysis (see, for example, Theorem 2 on p. 30 in [9]), we have that
(1.5)  φ(k)/k = (1/2) Σ_{n=1−k}^{k−1} (1 − |n|/k) r_n → (π/2)[f_m(0+) + f_m(0−)] = δ  as k → ∞.
Since θ_t − θ_0 = β Σ_{j=1}^{t} m_j is Gaussian and has characteristic function exp[−φ(t)β²u²], we have

E{s_t} = Re(E{A exp[i(tω_c + θ_t)]}) = Re{A exp(itω_c) E exp(iβ Σ_{j=1}^{t} m_j + iθ_0)}
= Re{A exp(itω_c) exp[−β²φ(t)] E[exp(iθ_0)]}.

Then it follows from (1.5) that E{s_t} tends to zero geometrically. Also,

E{s_t s_{t+k}} = (1/2)A² Re(E{exp[i(kω_c + θ_{t+k} − θ_t)] + exp[i((2t+k)ω_c + θ_{t+k} + θ_t)]})
= (1/2)A² Re{e^{ikω_c} E[exp(iβ Σ_{j=t+1}^{t+k} m_j)]
+ e^{i(2t+k)ω_c} E[exp(iβ Σ_{j=t+1}^{t+k} m_j + 2iβ Σ_{j=1}^{t} m_j)] E[exp(i2θ_0)]}.

Similarly, under (1.2),

E[exp(iβ Σ_{j=t+1}^{t+k} m_j + 2iβ Σ_{j=1}^{t} m_j)] → 0  as t → ∞.
So (1.3) holds. Finally, for any δ > ε > 0, when k is large enough, it follows from (1.5) that φ(k) ≥ (δ − ε)k. So R_k tends to zero geometrically. Thus, (1.4) is true.

Remark 1.2. All FM signals with 'low-pass (or all-pass) modulation' [4] satisfy the condition (1.2). However, many FM signals with 'high-pass or band-pass modulation' also satisfy (1.2). Although theoretically the transfer function of ideal high-pass or band-pass filters should be zero at frequency zero, most real digital filters are not. For example, any {m_t} generated by AR (all-pole) models satisfies (1.2), no matter in which frequency band the peaks of the AR spectral density are located. Also, if (1.2) is not true but θ_0 is uniformly distributed on [a − π, a + π], then both E[exp(iθ_0)] and E[exp(i2θ_0)] vanish. So {s_t} will be exactly second-order stationary and (1.3) holds.

When {m_t} is an MA(q) sequence, we have that r_n = 0 for all |n| > q. So it follows from (1.5) that when k ≥ q, φ(k) is a linear function of k, i.e.

(1.6)  φ(k) = (k/2) Σ_{n=−q}^{q} r_n − (1/2) Σ_{n=−q}^{q} |n| r_n,  for all k ≥ q.
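The exact linearity in (1.6) is easy to verify numerically. A sketch for an MA(1) message with β₁ = 0.9 and σ² = 1 (illustrative values), computing φ(k) from the Cesàro form and comparing with δk − b:

```python
q, beta1, sigma2 = 1, 0.9, 1.0
# acf of an MA(1) process m_t = eps_t + beta1*eps_{t-1}; r_n = 0 for |n| > 1
r = {0: sigma2 * (1 + beta1 ** 2), 1: sigma2 * beta1, -1: sigma2 * beta1}

def phi(k):
    # phi(k) = (k/2) * sum_{n=1-k}^{k-1} (1 - |n|/k) r_n
    return 0.5 * k * sum((1.0 - abs(n) / k) * r.get(n, 0.0) for n in range(1 - k, k))

delta = 0.5 * sum(r.get(n, 0.0) for n in range(-q, q + 1))        # (1/2) sum r_n
b = 0.5 * sum(abs(n) * r.get(n, 0.0) for n in range(-q, q + 1))   # (1/2) sum |n| r_n

for k in range(q, 12):
    assert abs(phi(k) - (delta * k - b)) < 1e-12   # phi(k) = delta*k - b for k >= q
print(delta, b)
```

For this model the slope is δ = (σ²/2)(1 + β₁)² = 1.805 and the offset is b = σ²β₁ = 0.9.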
This gives us the clue to simplifying (1.4) for MA message processes. For an MA process {m_t}, we denote {r_n}, {φ(k)} and {R_k} as above and put
(1.7)  m_t = Σ_{k=0}^{q} β_k ε_{t−k},  β_0 = 1,  E{ε_t²} = σ²,  E{ε_t ε_{t+k}} = 0 for all k ≠ 0,

(1.8)  δ = (σ²/2)(Σ_{k=0}^{q} β_k)²,  α = e^{−δβ²},

(1.9)  a_0 = 1,  a_1 = −2α cos ω_c,  a_2 = α²,  γ_t = Σ_{k=0}^{2} Σ_{j=0}^{2} a_k a_j R_{t+k−j}.
Theorem 1.3. (i) If Σ_{k=0}^{q} β_k ≠ 0, then for any initial phase θ_0 that is constant or random but independent of {m_t}, {s_t} is asymptotically a purely non-deterministic ARMA(2, q+1) process satisfying

(1.10)  s_t − 2α cos ω_c s_{t−1} + α² s_{t−2} = Σ_{k=0}^{q+1} b_k e_{t−k},

where {e_t} is a white noise sequence and the coefficients {b_k} satisfy

(1.11)  Σ_{k=0}^{q+1−j} b_k b_{k+j} = γ_j,  j = 0, 1, 2, …, q+1,  and  Σ_{k=0}^{q+1} b_k z^k ≠ 0 for all |z| < 1.
The spectral density function of {s_t} is

(1.12)  f_s(λ) = |Σ_{k=0}^{q+1} b_k e^{ikλ}|² / (2π |1 − 2α cos ω_c e^{iλ} + α² e^{i2λ}|²).

(ii) If the sum of the β_k equals zero and the initial phase θ_0 is uniformly distributed on [a − π, a + π], with a a real number, then {s_t} has a mixed spectrum. The purely non-deterministic component of {s_t} is an MA(q − 1) process, while the deterministic component is the sinusoidal function with carrier frequency ω_c.

Proof. When {m_t} is an MA(q) process, (1.6) is true and
(1/2) Σ_{n=−q}^{q} r_n = (σ²/2)(Σ_{k=0}^{q} β_k)² = δ.

So, let b = (1/2) Σ_{n=−q}^{q} |n| r_n; we have φ(k) = δk − b. Then, according to Remark 1.2, under either condition in (i) or (ii) we have

R_k = (A²/2) exp[β²(b − δk)] cos kω_c = (A²/2) exp(bβ²) α^k cos kω_c,  for all k ≥ q.

Since cos kω_c − 2 cos ω_c cos (k−1)ω_c + cos (k−2)ω_c = 0, we have

R_k − 2α cos ω_c R_{k−1} + α² R_{k−2} = 0,  for all k ≥ q + 2.
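The recursion for R_k rests only on the cosine identity, so it can be checked numerically. A sketch for an MA(1) message, with illustrative parameters (A = 1, β = 0.1, ω_c = 0.25π, β₁ = 0.9, σ² = 1) and R_k in the closed form just derived:

```python
import math

A, beta, wc = 1.0, 0.1, 0.25 * math.pi      # illustrative amplitude, modulation index, carrier
q, beta1, sigma2 = 1, 0.9, 1.0              # MA(1) message parameters
delta = 0.5 * sigma2 * (1 + beta1) ** 2     # (1.8)
b = sigma2 * beta1                          # b = (1/2) sum |n| r_n = sigma2*beta1 for MA(1)
alpha = math.exp(-delta * beta ** 2)        # (1.8)

def R(k):
    # R_k = (A^2/2) exp(beta^2 (b - delta*k)) cos(k*wc), valid for k >= q
    return 0.5 * A ** 2 * math.exp(beta ** 2 * (b - delta * k)) * math.cos(k * wc)

for k in range(q + 2, 20):
    lhs = R(k) - 2 * alpha * math.cos(wc) * R(k - 1) + alpha ** 2 * R(k - 2)
    assert abs(lhs) < 1e-12                 # recursion holds for all k >= q + 2
print("recursion verified")
```

The vanishing left-hand side is exactly what forces the AR(2) part z² − 2α cos ω_c z + α² in the ARMA representation.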
Thus, {s_t} must be an ARMA(2, q+1) process and (1.11) has a unique solution {b_k} satisfying Σ_t γ_t e^{itλ} = |Σ_{k=0}^{q+1} b_k e^{ikλ}|² (see, for example, Theorems 3.9 and 3.10 in [2]). Finally, when the sum of the β_k does not equal zero, we have that α < 1, so the roots of the polynomial z² − 2α cos ω_c z + α² are inside the unit circle; otherwise the roots are on the unit circle. Then the results in Theorem 1.3 follow directly from Theorem 3.9 in [2].

Using Theorem 1.3, we can construct examples to show that the statement in [11] may not be correct. In fact, we can find different Gaussian MA processes sharing the same mean and variance but having different spectral densities, for example, {m_t = ε_t + 0.9ε_{t−1}} and {m_t = ε_t − 0.9ε_{t−1}}. According to Sections 4.12 and 4.13 in [11], the two corresponding spectral density functions should have the same shape. However, using (1.8) we find different parameters δ and α for them. So they are different ARMA(2, 2) processes. We plot them in Figure 1 to show the difference.
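The difference between the two models is visible directly from (1.8): both MA(1) messages have the same mean and variance σ²(1 + β₁²), yet give different δ, hence different α. A sketch with σ² = 1 and β = 0.1 as in Figure 1:

```python
import math

beta, sigma2 = 0.1, 1.0                      # modulation index and innovation variance, as in Figure 1

def delta_alpha(beta1):
    # (1.8) for an MA(1) message m_t = eps_t + beta1*eps_{t-1}
    delta = 0.5 * sigma2 * (1 + beta1) ** 2
    return delta, math.exp(-delta * beta ** 2)

d_plus, a_plus = delta_alpha(0.9)            # m_t = eps_t + 0.9 eps_{t-1}
d_minus, a_minus = delta_alpha(-0.9)         # m_t = eps_t - 0.9 eps_{t-1}

# same message variance sigma2*(1 + 0.9^2) in both cases, yet different FM parameters
print(d_plus, d_minus)                       # 1.805 versus 0.005
print(a_plus, a_minus)
```

Since α controls the pole radius of the AR(2) part, the two FM spectra concentrate around ω_c very differently, contradicting the claim in [11] that they should share the same shape.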
Figure 1. FM spectral density functions. Two MA(1) models, m_t = ε_t + 0.9ε_{t−1} and m_t = ε_t − 0.9ε_{t−1}, are used as modulating processes, where {ε_t} is a Gaussian white noise process with mean zero and variance one. Both models have the same mean and variance. Using (1.12) we can find the corresponding spectral density functions for {s_t}. They are totally different. The x-axis in the graph is frequency in units of π. ω_c = 0.25π, β = 0.1, A = 10, and β₁ = 0.9 (top) and −0.9 (bottom).
Now we consider the error bound of ARMA approximations for general FM signals. We denote φ(k), {R_k}, a_k, α and {γ_t} as above and let

(1.13)  c_t = Σ_{k=0}^{2} Σ_{j=0}^{2} a_k a_j R_{t+k−j},  P(λ) = Σ_{t=−q−2}^{q+2} c_t e^{itλ},  C = 1/[π(1 − α)⁴].
Theorem 1.4. Assume that the message sequence {m_t} is a Gaussian stationary sequence and that (1.2) holds. Then the spectral density function f_s(λ) of {s_t} satisfies

(1.14)  |f_s(λ) − P(λ)/(2π |Σ_{k=0}^{2} a_k e^{ikλ}|²)| ≤ Σ_{t=q+3}^{∞} |c_t| / (π |Σ_{k=0}^{2} a_k e^{ikλ}|²) ≤ C Σ_{t=q+3}^{∞} |c_t|,
(1.15)  |c_t| ≤ A²β² e^{−β²φ(t)} (|r_t| + |r_{t−1}| + 2 Σ_{n=t}^{∞} |r_n|) max[1, exp(−β²(r_1 + 2 Σ_{n=2}^{∞} r_n))].
Proof. Multiplying f_s(λ) = Σ_t R_t e^{itλ}/(2π) by |Σ_{k=0}^{2} a_k e^{ikλ}|² and using (1.13), we have

|Σ_{k=0}^{2} a_k e^{ikλ}|² f_s(λ) = (1/2π) Σ_{t=−∞}^{∞} c_t e^{itλ},

so that

(1.16)  f_s(λ) − P(λ)/(2π |Σ_{k=0}^{2} a_k e^{ikλ}|²) = Σ_{|t|≥q+3} c_t e^{itλ}/(2π |Σ_{k=0}^{2} a_k e^{ikλ}|²) ≡ Δ.

Since c_t = c_{−t},

|Δ| ≤ Σ_{t=q+3}^{∞} |c_t|/(π |Σ_{k=0}^{2} a_k e^{ikλ}|²),

and the first inequality in (1.14) follows from (1.16). Now we evaluate the denominator in Δ. Manipulating trigonometric operations, we have

|1 − 2α cos ω_c e^{iλ} + α² e^{i2λ}|² = |1 − αe^{i(λ+ω_c)}|² |1 − αe^{i(λ−ω_c)}|²
= [1 + α² − 2α cos(λ + ω_c)][1 + α² − 2α cos(λ − ω_c)] ≥ (1 − α)⁴.

Note that |cos λ| ≤ 1. The second inequality in (1.14) is true.
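The factorization of the AR(2) polynomial used in this denominator bound is elementary to confirm numerically; a sketch over a grid of λ, with illustrative α and ω_c:

```python
import cmath
import math

alpha, wc = 0.9, 0.25 * math.pi       # illustrative values with 0 < alpha < 1

for j in range(100):
    lam = -math.pi + 2.0 * math.pi * j / 99.0
    lhs = abs(1 - 2 * alpha * math.cos(wc) * cmath.exp(1j * lam)
              + alpha ** 2 * cmath.exp(2j * lam)) ** 2
    rhs = (abs(1 - alpha * cmath.exp(1j * (lam + wc))) ** 2
           * abs(1 - alpha * cmath.exp(1j * (lam - wc))) ** 2)
    assert abs(lhs - rhs) < 1e-12         # |1 - 2a cos(wc) z + a^2 z^2|^2 factorizes
    assert lhs >= (1 - alpha) ** 4 - 1e-12  # ... and is bounded below by (1 - alpha)^4
print("factorization and lower bound verified")
```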
Finally, we evaluate the c_t. Since cos tω_c = 2 cos ω_c cos (t−1)ω_c − cos (t−2)ω_c, the cosine factors in c_t = Σ_{k=0}^{2} Σ_{j=0}^{2} a_k a_j R_{t+k−j} cancel up to differences of the exponentials e^{−β²φ(t+k−j)}. Note that r_t = r_{−t}. Since {r_t} is summable, it follows from (1.5) that

φ(t) − φ(t−1) = (1/2) Σ_{n=1−t}^{t−1} r_n,  so  φ(t) − φ(t−1) − (1/2) Σ_{n=−∞}^{∞} r_n = −Σ_{n=t}^{∞} r_n,

and similarly

φ(t−1) − φ(t−2) − (1/2) Σ_{n=−∞}^{∞} r_n = −Σ_{n=t−1}^{∞} r_n.

Combining these with the inequality |1 − e^x| ≤ |x| e^{|x|},
we have that (1.15) holds.

Example. We now apply the results in Theorem 1.4 to the Gaussian Markov process

(1.17)  m_t = ρ m_{t−1} + ε_t,  |ρ| < 1,

where {ε_t} is a Gaussian i.i.d. sequence with zero mean and variance σ², and ε_t is independent of the past of {m_t}. Then

f_m(λ) = σ²/(2π |1 − ρe^{iλ}|²),  r_n = σ²ρ^{|n|}/(1 − ρ²),  δ = σ²/(2(1 − ρ)²),

and

φ(k) = δk − σ²ρ(1 − ρ^k)/((1 − ρ)²(1 − ρ²)).

Let C_0 = exp(β²σ²ρ/((1 − ρ)²(1 − ρ²))) and h_k = exp(−β²σ²ρ^{k+1}/((1 − ρ)²(1 − ρ²))); then

R_k = (A²/2) C_0 h_k α^k cos kω_c.

Assume that ρ > 0. Then 0 < h_k < 1 and we have that

(1.18)  |R_k| ≤ A² C_0 α^k / 2.
So, when we use a truncated Fourier series Σ_{k=−t}^{t} R_k e^{ikλ}/(2π) to approximate the spectral density function f_s(λ) in (1.4), a tight error bound is

(1.19)  (1/π) Σ_{k=t+1}^{∞} |R_k| ≤ A² C_0 α^{t+1}/(2π(1 − α)).
In contrast, for the ARMA approximation, let P(λ) and C be defined as in Theorem 1.4 and C_1 = C_0²(1 + 3ρ)A²β²/[ρ(1 − ρ²)(1 + ρ)(1 − αρ)]. It follows from (1.14) and (1.15) that

(1.20)  |f_s(λ) − P(λ)/(2π |1 − 2α cos ω_c e^{iλ} + α² e^{i2λ}|²)| ≤ C C_1 (αρ)^q,

so, given a tolerance Δ, it suffices to choose the smallest q with

(1.21)  C C_1 (αρ)^q ≤ Δ.
Two examples are shown in Figure 2. For ρ = 0.8, σ = 1, A = √2 and ω_c = π/4, we set a tolerance Δ = 0.00001. When β = 0.05, using (1.19) we need t = 448 in the truncated Fourier series to achieve this tolerance, while using (1.21) we only need an ARMA(2, 43) spectral density function to obtain the required accuracy. When 45 or 100 terms are used in the truncated Fourier series, Figure 2 shows that the truncated Fourier series has negative values in some places. So they are really not accurate enough. When β = 0.01, the case is much worse for the truncated Fourier series: we have to use 13648 terms! However, an ARMA(2, 48) spectral density function can achieve the required accuracy. From (1.20) and (1.18) we can conclude that the closer ρ is to one, the more efficient the ARMA approximation will be.

2. The distributions of stochastic FM signals

In communications, we can assume that the message m_t, which usually represents either sound or image, is bounded. So we develop the following result for bounded Markov chains.
Theorem 2.1. Let {m_t} be a Markov chain with transition probability density kernel
Figure 2. FM spectral density function approximation by truncated Fourier series and ARMA(2, q + 2) process for an AR(1) modulating process. (a) β = 0.05 (ARMA(2, 43), error bound 8.579e−006); (b) β = 0.01.
998
D. HUANG
K(y | v) such that K(y | v) = 0 for all y and v outside a bounded interval [a, b]. We denote Π₁ = [0, 2π] × [a, b], Π₂ = Π₁ × Π₁, and

(2.1)  K̃₂(x, y | u, v) = Σ_{k=−∞}^{∞} K(y | x − u − 2ω_c − y + 2kπ) K(x − u − 2ω_c − y + 2kπ | v),  (x, y, u, v) ∈ Π₂,

(2.2)  K̃_{2n}(x, y | u, v) = ∫∫ K̃₂(x, y | s, t) K̃_{2n−2}(s, t | u, v) ds dt,  n > 1.
If there is an integer n such that K̃_{2n}(x, y | u, v) is positive almost surely in Π₂ and is continuous on Π₂, then the process {X_t} defined by (0.3) is a Harris ergodic process and the corresponding FM signal s_t has a limit distribution.

Proof. First, to define the transform modulo 2π uniquely, we define

ψ(x) = Σ_{k=−∞}^{∞} (x − 2kπ) I{2kπ ≤ x < 2(k+1)π},

θ̃_t = ψ(tω_c + θ_t), and then {X_t = [θ̃_t, m_t]^T} in (0.3) is a Markov chain. Further,

Pr{θ̃_t ≤ x, m_t ≤ y | θ̃_{t−2} = u, m_{t−2} = v}
= ∫_a^y ∫_0^x Σ_{k=−∞}^{∞} K(w | s − u − 2ω_c − w + 2kπ) K(s − u − 2ω_c − w + 2kπ | v) ds dw.

Thus, the Markov chain {X_{2t}} has the transition probability density kernel K̃₂(x, y | u, v) in (2.1). Further, for a given integer n, the Markov chain {X_{2nt}} has the transition probability density kernel K̃_{2n}(x, y | u, v) in (2.2). Note that when the density K̃_{2n}(x, y | u, v) is positive almost surely in Π₂ and is continuous on Π₂, Theorem 1 in [3, p. 272] holds (see Example (b) on the same page). Thus {X_{2nt}, t = 1, 2, 3, …} is a Harris ergodic Markov chain. Then it follows from Lemma 3.1 in [12] that {X_t} is also a Harris ergodic chain. Note that s_t = A cos θ̃_t; then the corresponding FM signal s_t has a limit distribution.
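The limit distribution guaranteed by Theorem 2.1 can be seen in simulation. A sketch with an i.i.d. uniform message on [−1, 1] (an i.i.d. sequence is a Markov chain with a bounded transition kernel of the kind assumed here); the amplitude, carrier, modulation index and the burn-in length are illustrative choices. Along one long path, the empirical distribution of s_t comes out close to the arcsine law (0.4):

```python
import math
import random

random.seed(0)
A, wc, beta = 1.0, 0.3 * math.pi, 0.5    # illustrative amplitude, carrier, modulation index

theta, samples = 0.0, []
for t in range(1, 202001):
    theta += beta * random.uniform(-1.0, 1.0)   # bounded (here i.i.d. uniform) message
    if t > 2000:                                # discard a burn-in period
        samples.append(A * math.cos(t * wc + theta))

frac_neg = sum(s <= 0.0 for s in samples) / len(samples)
print(frac_neg)   # the arcsine law (0.4) gives Pr{s_t <= 0} = 1/2
```

This is a time-average check along a single path, i.e. exactly the kind of statement Harris ergodicity licenses.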
A class of message processes satisfying the conditions in Theorem 2.1 is given in the following result.

Corollary 2.2. Let {m_t} be a Markov chain with a continuous transition density kernel K(y | v) on R² such that K(y | v) > 0 for (y, v) ∈ [a, b] × [a, b], −∞ < a < b < ∞, and K(y | v) = 0 otherwise. Then the Markov chain {[θ̃_t, m_t]^T} is Harris ergodic and the corresponding FM signal {s_t} in (0.1) is asymptotically strictly stationary.
Proof. Let K̃₂(x, y | u, v) = K(y | x − u − y − 2ω_c) K(x − u − y − 2ω_c | v). As in the proof of Theorem 2.1, K̃₂(x, y | u, v) is the density kernel of the Markov chain {[2tω_c + θ_{2t}, m_{2t}]^T, t = 0, 1, 2, …}. Under these conditions, we have

(2.3)  K̃₂(x, y | u, v) > 0 on {(x, y, u, v): a < v < b, a < y < b, a + 2ω_c < x − y − u < b + 2ω_c}.

Further, let

K̃_{2n}(x, y | u, v) = ∫∫ K̃₂(x, y | s, t) K̃_{2n−2}(s, t | u, v) ds dt.

Then K̃_{2n}(x, y | u, v) is the density kernel of {[2ntω_c + θ_{2nt}, m_{2nt}]^T, t = 0, 1, 2, …}. Note that, for any given (x, y, u, v), K̃₂(x, y | s, t) K̃_{2n−2}(s, t | u, v) is a continuous function of (s, t). So K̃_{2n}(x, y | u, v) > 0 if there is an overlap between the two regions D = {(s, t): K̃₂(x, y | s, t) > 0} and D_{2n−2} = {(s, t): K̃_{2n−2}(s, t | u, v) > 0}. When n = 2,

D = {(s, t): a < t < b, a < x − y − s − 2ω_c < b} = {(s, t): a < t < b, x − y − 2ω_c − b < s < x − y − 2ω_c − a},
D₂ = {(s, t): a < t < b, a + 2ω_c + u < s − t < b + 2ω_c + u}.

Since D is a rectangle and D₂ is a parallelogram, they overlap if 2b + 2ω_c + u > x − y − 2ω_c − b and 2a + 2ω_c + u < x − y − 2ω_c − a. Thus,

(2.4)  K̃₄(x, y | u, v) > 0 on {(x, y, u, v): a < v < b, a < y < b, 3a + 4ω_c < x − y − u < 3b + 4ω_c}.

By induction, we can show that

(2.5)  K̃_{2n}(x, y | u, v) > 0 on {(x, y, u, v): a < v, y < b, (2n − 1)a < x − y − u − 2nω_c < (2n − 1)b}.

Now, we choose the integer n satisfying (2n − 1)(b − a) > 2π. Let Π₁ and Π₂ be defined as in Theorem 2.1. Then for any (x, y, u, v) ∈ Π₂ we can find an integer k such that

a < v < b,  a < y < b,  (2n − 1)a < x − y − u − 2nω_c + 2kπ < (2n − 1)b.

This means that starting from (u, v) the Markov chain {[tω_c + θ_t, m_t]^T} will arrive at
a neighbourhood of (x + 2kπ, y) with positive probability. Note that θ̃_t = ψ(tω_c + θ_t). We can conclude that the kernel K̃_{2n}(x, y | u, v) > 0 for all (x, y, u, v) ∈ Π₂. Thus, Theorem 2.1 is valid in our case and this corollary holds.

Next, we develop through the following lemma some probability domain results without the assumption that {m_t} is a Markov chain.

Lemma 2.3. Let {θ_t, t = 1, 2, 3, …} be a sequence of random variables with density functions {f_t(u), t = 1, 2, 3, …} satisfying the following conditions.
(i) There exists an integer n for which all functions f_t(u) are n-piecewise monotonic, i.e. for each function f_t(u) we can find a partition −∞ = a₀(t) < a₁(t) < ⋯ < a_{n−1}(t) < a_n(t) = ∞ such that f_t(u) is monotonic on each interval [a_j(t), a_{j+1}(t)].
(ii) lim_{t→∞} sup{f_t(u): u ∈ (−∞, ∞)} = 0.
(iii) ω_t is a deterministic function of t, or it is a random variable independent of θ_t.

Then, uniformly in x ∈ [−1, 1], we have that

(2.6a)  lim_{t→∞} Pr{sin(ω_t + θ_t) ≤ x} = arcsin x/π + 1/2,

(2.6b)  lim_{t→∞} Pr{cos(ω_t + θ_t) ≤ x} = arcsin x/π + 1/2.
Proof. When ω_t is a random variable let C_t(x) be its cumulative distribution function; when ω_t is a deterministic function, set C_t(x) = 0 for x < ω_t and C_t(x) = 1 for x ≥ ω_t. It follows from (iii) that

Pr{a < ω_t + θ_t ≤ b} = ∫_{−∞}^{∞} ∫_a^b f_t(u − v) du dC_t(v).

Then

(2.7)  F_t(x) := Pr{sin(ω_t + θ_t) ≤ x} = ∫_{−∞}^{∞} Σ_{k=−∞}^{∞} ∫_{2kπ−π−arcsin x}^{2kπ+arcsin x} f_t(u − v) du dC_t(v),

and, differentiating with respect to x,

F_t′(x) = (1/√(1 − x²)) ∫_{−∞}^{∞} Σ_{k=−∞}^{∞} [f_t(2kπ + arcsin x − v) + f_t(2kπ − π − arcsin x − v)] dC_t(v).

Since |arcsin x| ≤ π/2, each value f_t(2kπ + arcsin x − v), as well as the average (1/π) ∫_{2kπ−π/2}^{2kπ+π/2} f_t(u − v) du, lies between inf_{|2kπ−u|≤π/2} f_t(u − v) and sup_{|2kπ−u|≤π/2} f_t(u − v), so

|f_t(2kπ + arcsin x − v) − (1/π) ∫_{2kπ−π/2}^{2kπ+π/2} f_t(u − v) du| ≤ sup_{|2kπ−u|≤π/2} f_t(u − v) − inf_{|2kπ−u|≤π/2} f_t(u − v),

with the same bound for the terms centred at (2k − 1)π. Since the intervals {u: |kπ − u| ≤ π/2} partition the real line,

Σ_{k=−∞}^{∞} (1/π) ∫_{kπ−π/2}^{kπ+π/2} f_t(u − v) du = 1/π,

and summing the bounds over k gives

(2.8)  |Σ_{k=−∞}^{∞} [f_t(2kπ + arcsin x − v) + f_t(2kπ − π − arcsin x − v)] − 1/π|
≤ Σ_{k=−∞}^{∞} [sup_{|kπ−u|≤π/2} f_t(u − v) − inf_{|kπ−u|≤π/2} f_t(u − v)]
≤ 2 Σ_{j=1}^{n} |f_t(a_{j−1}(t)) − f_t(a_j(t))| ≤ 4n sup_u f_t(u).

Note that ∫_{−∞}^{∞} dC_t(v) = 1. It follows from (2.7), (2.8) and condition (ii) that, for any given x ∈ (−1, 1),

|F_t′(x) − 1/(π√(1 − x²))| ≤ 4n sup_u f_t(u)/√(1 − x²) → 0  as t → ∞.

Integrating F_t′ then gives F_t(y) → arcsin y/π + 1/2 as t → ∞, uniformly in y, which is (2.6a).
The same arguments can be used to prove (2.6b).

Theorem 2.4. For the FM signal process defined by (0.1), if the message process {m_t} is a Gaussian stationary process and satisfies the condition (1.2), then (0.4) holds.

Proof. Under these conditions, θ_t − θ_0 is Gaussian with zero mean and variance 2β²φ(t). So all density functions f_t(u) of the θ_t are increasing on (−∞, 0) and decreasing on (0, ∞). Also, since under condition (1.2) φ(t) → ∞ as t → ∞ (see the proof of Lemma 1.1),

sup_u f_t(u) = 1/(2β√(πφ(t))) → 0  as t → ∞.
Thus, all conditions in Lemma 2.3 hold and (0.4) follows from (2.6b) with ω_t = tω_c.

In fact, for Gaussian r.v.'s with zero mean and variances greater than two, the formula (0.4) is very accurate, as shown in Table 1. The values in the table are calculated within tolerance 0.00001. In the last column, F(x) = arcsin x/π + 0.5 is the limiting distribution.

Stochastic FM models and non-linear time series analysis

TABLE 1. Cumulative distribution of sin normal. F_t(x) = Σ_{k=−∞}^{∞} Pr{2kπ − π − arcsin x ≤ θ_t ≤ 2kπ + arcsin x}, θ_t ~ N(0, t²)

   x       F_1(x)    F_2(x)    F_3(x)    F(x)
 −0.9      0.1098    0.1435    0.1436    0.1436
 −0.8      0.1635    0.2047    0.2048    0.2048
 −0.7      0.2101    0.2531    0.2532    0.2532
 −0.6      0.2538    0.2951    0.2952    0.2952
 −0.5      0.2960    0.3332    0.3333    0.3333
 −0.4      0.3374    0.3689    0.3690    0.3690
 −0.3      0.3783    0.4030    0.4030    0.4030
 −0.2
 −0.1
  0
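The entries of Table 1 can be reproduced directly: for θ_t ~ N(0, t²), each F_t(x) is a rapidly converging sum of normal probabilities. A sketch using the error function for the normal cdf (the truncation K = 60 is an illustrative choice, far more than needed):

```python
import math

def norm_cdf(z):
    # standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def F_t(x, t, K=60):
    # F_t(x) = sum_k Pr{2k*pi - pi - arcsin x <= theta_t <= 2k*pi + arcsin x}, theta_t ~ N(0, t^2)
    s = 0.0
    for k in range(-K, K + 1):
        lo = (2 * k * math.pi - math.pi - math.asin(x)) / t
        hi = (2 * k * math.pi + math.asin(x)) / t
        s += norm_cdf(hi) - norm_cdf(lo)
    return s

def F(x):
    # limiting arcsine distribution
    return math.asin(x) / math.pi + 0.5

print(round(F_t(-0.9, 1), 4), round(F_t(-0.9, 2), 4), round(F(-0.9), 4))
```

This reproduces the first row of Table 1 to the printed precision and shows how quickly F_t approaches the arcsine limit as the standard deviation t grows.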
One can confirm that, for standard deviation greater than two, F_t(x) and F(x) are very close.

Acknowledgments

The author is grateful to the editor and referee for their valuable comments and suggestions.

References
[1] Chan, K. S. and Tong, H. (1985) On the use of the deterministic Lyapunov function for the ergodicity of stochastic difference equations. Adv. Appl. Prob. 17, 666-678.
[2] Doob, J. L. (1944) The elementary Gaussian processes. Ann. Math. Statist. 15, 229-282.
[3] Feller, W. (1971) An Introduction to Probability Theory and its Applications. 2nd edn. Wiley, New York.
[4] Gardner, W. A. (1987) Introduction to Random Processes with Applications to Signals and Systems. 2nd edn. McGraw-Hill, New York.
[12] Tjøstheim, D. (1990) Non-linear time series and Markov chains. Adv. Appl. Prob. 22, 587-611.
[13] Tweedie, R. L. (1975) Sufficient conditions for ergodicity and recurrence of Markov chains on a general state space. Stoch. Proc. Appl. 3, 385-403.
[14] Tweedie, R. L. (1988) Invariant measures for Markov chains with no irreducibility assumptions. J. Appl. Prob. 25A, 275-285.