
# Random processes - Chapter 4: Random process

4.1 Random process

❘ ❙ ❚ ■ Random process ■❚❙❘

❖ Random process, stochastic process
The infinite set {Xt, t ∈ T } of random variables is called a random process, where
the index set T is an infinite set.

❑ In other words, a random vector with an infinite number of elements is called a
random process.

❖ Discrete time process, continuous time process
A random process is said to be discrete time if the index set is a countably infinite set.
When the index set is an uncountable set, the random process is called a continuous
time random process.


❖ Discrete (alphabet) process, continuous (alphabet) process
A random process is called a discrete alphabet, discrete amplitude, or discrete state
process if all finite length random vectors drawn from the random process are discrete
random vectors. A process is called a continuous alphabet, continuous amplitude, or
continuous state process if all finite length random vectors drawn from the random
process are continuous random vectors.

❑ A random process {X(·)} maps an element ω of the sample space to a time
function X(ω, t).
❑ A random process {X(·)} is the collection of time functions X(t), called sample
functions. This collection is called an ensemble. For each fixed t, the value X(t)
is a random variable.


❑ A random process and sample functions


❑ Since a random process is a collection of random variables, with X(t) denoting a random variable at t, the statistical characteristics of the random process can be considered via the cdf and pdf of X(t).

❑ For example, the first-order, second-order, and nth-order cdf of the random process {X(t)} are
FX(t)(x) = Pr{X(t) ≤ x},
FX(t1),X(t2)(x1, x2) = Pr{X(t1) ≤ x1, X(t2) ≤ x2},
FX(t1),···,X(tn)(x1, · · · , xn) = Pr{X(t1) ≤ x1, · · · , X(tn) ≤ xn},
respectively, and the first-order pdf is fX(t)(x) = dFX(t)(x)/dx.

❖ Mean function
The mean function mX(t) of a random process {X(t)} is defined by
mX(t) = E{X(t)} = ∫_{−∞}^{∞} x fX(t)(x) dx.

❖ Autocorrelation function
The autocorrelation function RX(t1, t2) of a random process {X(t)} is defined by
RX(t1, t2) = E{X(t1)X∗(t2)}.
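❑ A minimal Python sketch (added for illustration, not part of the original notes): the mean and autocorrelation functions can be estimated by averaging over an ensemble of sample paths. The toy process X(t) = A cos t with A ~ N(0, 1), for which mX(t) = 0 and RX(t1, t2) = cos t1 cos t2, is an assumption made only for this example.

```python
import numpy as np

# Ensemble estimates of m_X(t) and R_X(t1, t2) for the hypothetical
# process X(t) = A cos(t), A ~ N(0, 1): m_X(t) = 0, R_X(t1, t2) = cos(t1)cos(t2).
rng = np.random.default_rng(0)
n_paths = 200_000
t1, t2 = 0.3, 1.1

A = rng.standard_normal(n_paths)      # one draw of A per sample path
X_t1 = A * np.cos(t1)                 # ensemble of values X(t1)
X_t2 = A * np.cos(t2)                 # ensemble of values X(t2)

m_hat = X_t1.mean()                   # estimates m_X(t1) = E{X(t1)} = 0
R_hat = np.mean(X_t1 * X_t2)          # estimates R_X(t1, t2) = E{X(t1)X(t2)}
R_theory = np.cos(t1) * np.cos(t2)
```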

❖ Known signal
An extreme example of a random process is a known or deterministic signal. When X(t) = s(t) is a known signal, we have m(t) = E{s(t)} = s(t) and
RX(t1, t2) = E{s(t1)s(t2)} = s(t1)s(t2).

❑ Consider the random process {X(t)} with mean E{X(t)} = 3 and autocorrelation function R(t1, t2) = 9 + 4 exp(−0.2|t1 − t2|). If Z = X(5) and W = X(8), we can easily obtain E{Z} = E{X(5)} = 3, E{W} = 3, E{Z²} = R(5, 5) = 13, E{W²} = R(8, 8) = 13, Var{Z} = 13 − 3² = 4, Var{W} = 13 − 3² = 4, and E{ZW} = R(5, 8) = 9 + 4e^{−0.6} ≈ 11.195. In other words, the random variables Z and W have variance σ² = 4 and covariance Cov(5, 8) = 4e^{−0.6} ≈ 2.195.
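❑ A quick numerical check of the example above (a sketch added for illustration):

```python
import math

# Verify the moments computed from R(t1, t2) = 9 + 4 exp(-0.2 |t1 - t2|)
# with E{X(t)} = 3, Z = X(5), W = X(8).
def R(t1, t2):
    return 9.0 + 4.0 * math.exp(-0.2 * abs(t1 - t2))

mean = 3.0
EZ2 = R(5, 5)                # E{Z^2} = 13
var_Z = EZ2 - mean**2        # Var{Z} = 4
EZW = R(5, 8)                # E{ZW} = 9 + 4 e^{-0.6}
cov_ZW = EZW - mean * mean   # Cov(Z, W) = 4 e^{-0.6}
```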

❖ Autocovariance function
The autocovariance function KX(t1, t2) of a random process {X(t)} is defined by
KX(t1, t2) = E{[X(t1) − mX(t1)][X∗(t2) − m∗X(t2)]}.

❑ The autocovariance function can be expressed in terms of the autocorrelation and mean functions as KX(t1, t2) = RX(t1, t2) − mX(t1)m∗X(t2).

❑ In general, the autocovariance and autocorrelation functions are functions of t1 and t2.

❖ The autocorrelation function of X(t) = Ae^{jωt}, defined with a random variable A, can be obtained as
RX(t1, t2) = E{Ae^{jωt1}A∗e^{−jωt2}} = e^{jω(t1−t2)}E{|A|²}.

❖ Uncorrelated random process
A random process {Xt} is said to be uncorrelated if
RX(t, s) = E{XtXs∗} = E{|Xt|²} for t = s, and RX(t, s) = E{Xt}E{Xs∗} for t ≠ s.

❑ If a random process {Xt} is uncorrelated, then RX(t, s) = E{Xt}E{Xs∗} or, equivalently, KX(t, s) = 0 for t ≠ s; the autocovariance function is
KX(t, s) = σ²Xt for t = s, and KX(t, s) = 0 for t ≠ s.

❖ Correlation coefficient function
The correlation coefficient function ρX(t1, t2) of a random process {X(t)} is defined by
ρX(t1, t2) = KX(t1, t2) / √(KX(t1, t1) KX(t2, t2)) = KX(t1, t2) / (σ(t1)σ(t2)),
where σ(ti) is the standard deviation of X(ti).

❑ We can show that |ρX(t1, t2)| ≤ 1 and ρX(t, t) = 1.

❖ Crosscorrelation function
The crosscorrelation function RXY(t1, t2) of random processes {X(t)} and {Y(t)} is defined by RXY(t1, t2) = E{X(t1)Y∗(t2)}.

❑ The autocorrelation and crosscorrelation functions satisfy
RX(t, t) = E{X(t)X∗(t)} = σ²X(t) + |E{X(t)}|² ≥ 0,
RX(t1, t2) = R∗X(t2, t1), and RXY(t1, t2) = R∗YX(t2, t1).

❑ The autocorrelation function RX(t1, t2) is positive semi-definite. In other words,
Σ_i Σ_j ai aj R(ti, tj) ≥ 0 for any constants {ak}.

❖ Crosscovariance function
The crosscovariance function KXY(t1, t2) of random processes {X(t)} and {Y(t)} is defined by
KXY(t1, t2) = E{[X(t1) − mX(t1)][Y∗(t2) − m∗Y(t2)]} = RXY(t1, t2) − mX(t1)m∗Y(t2).

❖ Two random processes which are uncorrelated
The random processes {X(t)} and {Y(t)} are said to be uncorrelated if RXY(t1, t2) = E{X(t1)}E{Y∗(t2)} or, equivalently, KXY(t1, t2) = 0 for all t1 and t2.

❖ Orthogonality
The two random processes {X(t)} and {Y(t)} are said to be orthogonal if RXY(t1, t2) = 0 for all t1 and t2.

4.2 Properties of random processes

4.2.1 Stationary process and independent process

❘ ❙ ❚ ■ Stationary process and independent process ■❚❙❘

❖ A random process is said to be stationary if its probabilistic properties do not change under time shifts.

❖ Stationary process
A random process {X(t)} is stationary, strict-sense stationary (s.s.s.), or strongly-stationary if the joint cdf of {X(t1), X(t2), · · · , X(tn)} is the same as the joint cdf of {X(t1 + τ), X(t2 + τ), · · · , X(tn + τ)} for all n, τ, and t1, t2, · · · , tn.

❖ Wide-sense stationary (w.s.s.) process
A random process {X(t)} is w.s.s., weakly-stationary, or second-order stationary if (1) the mean function is constant and (2) the autocorrelation function RX(t, s) depends only on t − s, not on t and s individually.

❑ The mean function mX and the autocorrelation function RX of a w.s.s. process {X(t)} are thus mX(t) = m and RX(t1, t2) = R(t1 − t2).

❑ In other words, the autocorrelation function RX(t1, t2) of a w.s.s. process {X(t)} is a function of τ = t1 − t2: for all t and τ, we have RX(τ) = E{X(t + τ)X∗(t)}. When τ = 0, RX(0) = E{|X(t)|²}.

❑ Consider two sequences A0, A1, · · · , Am and B0, B1, · · · , Bm of uncorrelated random variables having mean zero and variance σk² (for Ak and Bk). Assume that the two sequences are uncorrelated with each other. Let ω0, ω1, · · · , ωm be distinct frequencies in [0, 2π), and let
Xn = Σ_{k=0}^{m} {Ak cos nωk + Bk sin nωk}, n = 0, ±1, ±2, · · · .
Then we can obtain
E{Xn} = 0,
E{XnXn+l} = Σ_{k=0}^{m} σk² cos lωk.
Thus, {Xn} is w.s.s.

❖ Properties of the autocorrelation function RX(τ) of a real stationary process {X(t)}
❑ RX(−τ) = RX(τ): RX(τ) is an even function.
❑ |RX(τ)| ≤ RX(0): RX(τ) is maximum at the origin.
❑ If RX(τ) is continuous at τ = 0, then it is continuous at every value of τ.
❑ If there is a constant T > 0 such that RX(0) = RX(T), then RX(τ) is periodic.

❖ Independent random process
A random process is said to be independent if the joint cdf satisfies
FXt1,Xt2,···,Xtn(x) = Π_{i=1}^{n} FXti(xi)
for all n, t1, t2, · · · , tn, and x = (x1, x2, · · · , xn).

❖ Independent and identically distributed (i.i.d.) process
A random process is said to be i.i.d. if the joint cdf satisfies
FXt1,Xt2,···,Xtn(x) = Π_{i=1}^{n} FX(xi)
for all n, t1, t2, · · · , tn, and x = (x1, x2, · · · , xn).

❖ The i.i.d. process is the simplest process, and yet it is the most 'stochastic' process in that past outputs do not have any information on the future. The i.i.d. process is sometimes called a memoryless process or a white noise.

❖ Bernoulli process
An i.i.d. random process with two possible values is called a Bernoulli process. For example, when we toss a coin infinitely, consider the random process {Xn} defined by
Xn = 1 if the nth result is head, Xn = 0 if the nth result is tail.
The success ('head') and failure ('tail') probabilities are P{Xn = 1} = p and P{Xn = 0} = 1 − p, respectively. The random process {Xn} is a discrete-time discrete-amplitude random process.

We can easily obtain mX(n) = E{Xn} = p and
KX(n1, n2) = E{Xn1Xn2} − mX(n1)mX(n2) = p(1 − p) for n1 = n2, and 0 for n1 ≠ n2.
The mean function mX(n) of a Bernoulli process is not a function of time but a constant, and the autocovariance KX(n1, n2) depends not on n1 and n2 individually, but only on the difference n1 − n2.

❑ Clearly, the Bernoulli process is w.s.s.
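❑ A simulation sketch (added for illustration) confirming the Bernoulli-process moments derived above:

```python
import numpy as np

# Estimate m_X(n) = p and K_X(n1, n2) = p(1 - p) delta_{n1,n2}
# from 100,000 simulated sample paths of a Bernoulli process.
rng = np.random.default_rng(1)
p = 0.3
X = (rng.random((100_000, 50)) < p).astype(float)    # paths x time steps

m_hat = X[:, 10].mean()                              # ~ p
var_hat = X[:, 10].var()                             # ~ p(1 - p) = 0.21
cov_hat = np.mean((X[:, 10] - p) * (X[:, 25] - p))   # ~ 0 since n1 != n2
```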

❖ Two random processes independent of each other
The random processes {X(t)} and {Y(t)} are said to be independent of each other if the random vector (Xt1, Xt2, · · · , Xtk) is independent of the random vector (Ys1, Ys2, · · · , Ysl) for all k, l, t1, t2, · · · , tk, and s1, s2, · · · , sl.

❖ Normal process, Gaussian process
A random process {Xt} is said to be normal if (Xt1, Xt2, · · · , Xtk) is a k-dimensional normal random vector for all k and t1, t2, · · · , tk.

❑ A stationary process is always w.s.s., but the converse is not always true. On the other hand, a w.s.s. normal process is s.s.s. This result can be obtained from the pdf
fX(x) = (2π)^{−n/2}|KX|^{−1/2} exp{−(1/2)(x − mX)^T KX^{−1}(x − mX)}
of a jointly normal random vector.

❖ Jointly w.s.s. processes
Two random processes are said to be jointly w.s.s. if (1) the mean functions are constants and (2) the autocorrelation functions and the crosscorrelation function are all functions only of time differences.

❑ If two random processes {X(t)} and {Y(t)} are jointly w.s.s., then {X(t)} and {Y(t)} are both w.s.s. The crosscorrelation function of {X(t)} and {Y(t)} is
R̃XY(τ) = RXY(t + τ, t) = E{X(t + τ)Y∗(t)} = (E{Y(t)X∗(t + τ)})∗ = R̃∗YX(−τ).

❑ The crosscorrelation function RXY of two jointly w.s.s. random processes has the following properties:
1. RYX(τ) = R∗XY(−τ).
2. |RXY(τ)| ≤ √(RXX(0)RYY(0)) ≤ (1/2){RXX(0) + RYY(0)}.
3. RXY(τ) is not always maximum at the origin.

❖ Linear combination and jointly w.s.s. processes
Two processes X(t) and Y(t) are jointly w.s.s. if the linear combination Z(t) = aX(t) + bY(t) is w.s.s. for all a and b. The converse is also true.

❖ Moving average (MA) process
Let a1, a2, · · · , al be a sequence of real numbers and W0, W1, W2, · · · be a sequence of uncorrelated random variables with mean E{Wn} = m and variance Var{Wn} = σ². Then the following process {Xn} is called a moving average process:
Xn = a1Wn + a2Wn−1 + · · · + alWn−l+1 = Σ_{i=1}^{l} ai Wn−i+1.
The mean and variance of Xn are
E{Xn} = (a1 + a2 + · · · + al)m,
Var{Xn} = (a1² + a2² + · · · + al²)σ².

Since E{X̂n²} = σ² when X̂n = Wn − m, from
Cov(Xn, Xn+k) = E{(Xn − m Σ_{i=1}^{l} ai)(Xn+k − m Σ_{i=1}^{l} ai)}
= E{(Σ_{i=1}^{l} ai X̂n−i+1)(Σ_{j=1}^{l} aj X̂n+k−j+1)}
= (al al−k + · · · + ak+1 a1)σ² for k ≤ l − 1, and 0 for k ≥ l,
the autocovariance depends only on the lag k. Thus {Xn} is w.s.s.
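❑ The MA process above can be simulated directly; a sketch (added for illustration) checking E{Xn} = (a1 + · · · + al)m and Var{Xn} = (a1² + · · · + al²)σ²:

```python
import numpy as np

# Moving average of an uncorrelated (here i.i.d. Gaussian) sequence W_n.
rng = np.random.default_rng(2)
a = np.array([0.5, 0.3, 0.2])            # coefficients a1, a2, a3
m, sigma = 1.0, 2.0

W = rng.normal(m, sigma, size=500_000)
X = np.convolve(W, a, mode="valid")      # X_n = a1 W_n + a2 W_{n-1} + a3 W_{n-2}

mean_theory = a.sum() * m                # (a1 + a2 + a3) m = 1.0
var_theory = (a**2).sum() * sigma**2     # (a1^2 + a2^2 + a3^2) sigma^2 = 1.52
mean_hat = X.mean()
var_hat = X.var()
```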

❖ Autoregressive (AR) process
Let the variance of an uncorrelated zero-mean random sequence Z0, Z1, · · · be
Var{Zn} = σ²/(1 − λ²) for n = 0, and Var{Zn} = σ² for n ≥ 1,
where λ² < 1. Then the random process {Xn} defined by X0 = Z0 and
Xn = λXn−1 + Zn, n ≥ 1,
is called the first order autoregressive process. We can obtain
Xn = λ(λXn−2 + Zn−1) + Zn = λ²Xn−2 + λZn−1 + Zn = · · · = Σ_{i=0}^{n} λ^{n−i} Zi.

Thus the autocovariance function of {Xn} is
Cov(Xn, Xn+m) = Cov(Σ_{i=0}^{n} λ^{n−i} Zi, Σ_{i=0}^{n+m} λ^{n+m−i} Zi)
= Σ_{i=0}^{n} λ^{n−i} λ^{n+m−i} Cov(Zi, Zi)
= σ² λ^{2n+m} (1/(1 − λ²) + Σ_{i=1}^{n} λ^{−2i})
= σ² λ^m / (1 − λ²).
Now, from the result above and the fact that the mean of {Xn} is E{Xn} = 0, it follows that {Xn, n ≥ 0} is w.s.s.
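❑ A simulation sketch (added for illustration) of the first order AR process, checking the stationary autocovariance σ²λ^m/(1 − λ²):

```python
import numpy as np

# First order autoregressive process X_n = lam * X_{n-1} + Z_n, with
# Var{Z_0} = sigma^2/(1 - lam^2) so that {X_n} is stationary from n = 0.
rng = np.random.default_rng(3)
lam, sigma = 0.8, 1.0
n_steps = 300_000

Z = rng.normal(0.0, sigma, size=n_steps)
Z[0] = rng.normal(0.0, sigma / np.sqrt(1 - lam**2))
X = np.empty(n_steps)
X[0] = Z[0]
for n in range(1, n_steps):
    X[n] = lam * X[n - 1] + Z[n]

var_theory = sigma**2 / (1 - lam**2)         # Cov(X_n, X_n)
cov1_theory = sigma**2 * lam / (1 - lam**2)  # Cov(X_n, X_{n+1})
var_hat = np.mean(X * X)
cov1_hat = np.mean(X[:-1] * X[1:])
```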

❖ Square-law detector
Let Y(t) = X²(t), where {X(t)} is a Gaussian random process with mean 0 and autocorrelation RX(τ). Then the expectation of Y(t) is E{Y(t)} = E{X²(t)} = RX(0). Since X(t + τ) and X(t) are jointly Gaussian with mean 0, the autocorrelation of Y(t) can be found as
RY(t, t + τ) = E{X²(t)X²(t + τ)} = E{X²(t + τ)}E{X²(t)} + 2E²{X(t + τ)X(t)} = RX²(0) + 2RX²(τ).
Thus E{Y²(t)} = RY(0) = 3RX²(0) and σY² = 3RX²(0) − RX²(0) = 2RX²(0). Clearly, {Y(t)} is w.s.s.

❖ Limiter
Let {Y(t)} = {g(X(t))} be a random process defined by a random process {X(t)} and a limiter
g(x) = 1 for x > 0, g(x) = −1 for x < 0.
Then we can easily obtain P{Y(t) = 1} = P{X(t) > 0} = 1 − FX(0) and P{Y(t) = −1} = P{X(t) < 0} = FX(0). Thus the mean and autocorrelation of {Y(t)} are
E{Y(t)} = 1 · P{Y(t) = 1} + (−1) · P{Y(t) = −1} = 1 − 2FX(0),
RY(τ) = E{Y(t)Y(t + τ)} = P{Y(t)Y(t + τ) = 1} − P{Y(t)Y(t + τ) = −1} = P{X(t)X(t + τ) > 0} − P{X(t)X(t + τ) < 0}.
Now, if {X(t)} is a stationary Gaussian random process with mean 0, then X(t + τ) and X(t) are jointly Gaussian with mean 0, variance RX(0), and correlation coefficient RX(τ)/RX(0). Clearly, FX(0) = 1/2.

We have (refer to (3.100), p. 156, Song, Park, and Nam, Random Processes, 2004)
P{X(t)X(t + τ) < 0} = P{X(t)/X(t + τ) < 0} = FZ(0)
= 1/2 + (1/π) tan⁻¹(−rσ1/(σ1√(1 − r²)))
= 1/2 − (1/π) sin⁻¹ r
= 1/2 − (1/π) sin⁻¹(RX(τ)/RX(0)),
P{X(t)X(t + τ) > 0} = 1 − P{X(t)X(t + τ) < 0} = 1/2 + (1/π) sin⁻¹(RX(τ)/RX(0)).
Thus the autocorrelation of the limiter output is
RY(τ) = (2/π) sin⁻¹(RX(τ)/RX(0)),
from which we have E{Y²(t)} = RY(0) = 1 and σY² = 1 − {1 − 2FX(0)}² = 1.
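❑ A Monte Carlo sketch (added for illustration) of the arcsine law above for the limiter output:

```python
import numpy as np

# For zero-mean jointly Gaussian X(t), X(t + tau) with correlation
# coefficient r = R_X(tau)/R_X(0), the limiter output satisfies
# R_Y(tau) = (2/pi) arcsin(r).
rng = np.random.default_rng(4)
r = 0.6
n = 1_000_000

cov = np.array([[1.0, r], [r, 1.0]])
pairs = rng.multivariate_normal([0.0, 0.0], cov, size=n)
Y = np.sign(pairs)                          # limiter: +1 for x > 0, -1 for x < 0

RY_hat = np.mean(Y[:, 0] * Y[:, 1])
RY_theory = (2 / np.pi) * np.arcsin(r)
```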

4.2.2 Power of random process

❘ ❙ ❚ ■ Power of a random process ■❚❙❘

❖ Power spectrum
The Fourier transform of the autocorrelation function of a w.s.s. random process is called the power spectrum, power spectral density, or spectral density.

❑ When the autocorrelation is RX, the power spectrum is
SX(ω) = F{RX} = Σ_k RX(k)e^{−jωk} (discrete time),
SX(ω) = ∫_{−∞}^{∞} RX(τ)e^{−jωτ} dτ (continuous time).

❑ If the mean is not zero, the power spectral density is defined by the Fourier transform of the autocovariance instead of the autocorrelation. It is usually assumed that the mean is zero.

❖ White noise, white process
Suppose that a discrete time random process {Xn} is uncorrelated so that RX(k) = σ²δk. Then we have
SX(ω) = Σ_k σ²δk e^{−jωk} = σ².
Such a process is called a white noise or white process. If {Xn} is Gaussian in addition, it is called a white Gaussian noise.
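❑ A short sketch (added for illustration): the sample autocorrelation of an uncorrelated sequence is σ² at lag 0 and near 0 elsewhere, consistent with the flat spectrum SX(ω) = σ²:

```python
import numpy as np

# White (uncorrelated) Gaussian sequence: R_X(k) = sigma^2 delta_k.
rng = np.random.default_rng(5)
sigma = 1.5
X = rng.normal(0.0, sigma, size=1_000_000)

R0_hat = np.mean(X * X)            # ~ sigma^2 = 2.25
R1_hat = np.mean(X[:-1] * X[1:])   # ~ 0
R5_hat = np.mean(X[:-5] * X[5:])   # ~ 0
```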

❖ Telegraph signal
Consider Poisson points with parameter λ, and let N(t) be the number of points in the interval (0, t]. Here, Wi is the time between adjacent Poisson points. Consider the continuous-time random process X(t) = (−1)^{N(t)} with X(0) = 1. Assuming τ > 0, the autocorrelation of X(t) is
RX(τ) = E{X(t + τ)X(t)}
= 1 · P{the number of points in the interval (t, t + τ] is even}
+ (−1) · P{the number of points in the interval (t, t + τ] is odd}
= e^{−λτ}{1 + (λτ)²/2! + · · · } − e^{−λτ}{λτ + (λτ)³/3! + · · · }
= e^{−λτ} cosh λτ − e^{−λτ} sinh λτ
= e^{−2λτ}.

We can obtain a similar result when τ < 0. Combining the two results, we have RX(τ) = e^{−2λ|τ|}. Now consider a random variable A which is independent of X(t) and takes on 1 or −1 with equal probability, and let Y(t) = AX(t). We then have E{Y(t)} = E{A}E{X(t)} = 0 and
E{Y(t1)Y(t2)} = E{A²}E{X(t1)X(t2)} = E{A²}RX(t1 − t2) = e^{−2λ|t1−t2|}
since E{A} = 0 and E{A²} = 1. Thus {Y(t)} is w.s.s., and the power spectral density of {Y(t)} is
SY(ω) = F{e^{−2λ|τ|}} = 4λ/(ω² + 4λ²).
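❑ A Monte Carlo sketch (added for illustration) of the telegraph-signal autocorrelation: the number of Poisson points in (t, t + τ] is Poisson(λτ), and X(t + τ)X(t) = (−1) raised to that count, so RX(τ) = E{(−1)^K} = e^{−2λτ}:

```python
import numpy as np

# K ~ Poisson(lambda * tau) counts the sign changes in (t, t + tau].
rng = np.random.default_rng(6)
lam, tau = 1.5, 0.4
n = 1_000_000

K = rng.poisson(lam * tau, size=n)
R_hat = np.mean((-1.0) ** K)
R_theory = np.exp(-2 * lam * tau)
```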

❖ Band limited noise, colored noise
When W > 0, let us consider a random process with the power spectral density
SX(ω) = 1 for ω ∈ [−W, W], and SX(ω) = 0 otherwise.
Such a process is called a colored noise. The autocorrelation of this colored noise is thus
RX(τ) = F⁻¹{SX(ω)} = sin(Wτ)/(πτ).


❖ Power spectral density is not less than 0. That is,

SX (ω) ≥ 0.

❖ Cross power spectral density
The cross power spectral density SXY(ω) of jointly w.s.s. processes {X(t)} and {Y(t)} is
SXY(ω) = ∫_{−∞}^{∞} RXY(τ)e^{−jωτ} dτ.

❑ Thus SXY(ω) = S∗YX(ω), and the inverse Fourier transform of SXY(ω) is
RXY(τ) = (1/2π) ∫_{−∞}^{∞} SXY(ω)e^{jωτ} dω.


❖ Time delay process
Consider a w.s.s. process {X(t)} of which the power spectral density is SX (ω).
Letting Y(t) = X(t − d), we have
RY(t, s) = E{Y(t)Y∗(s)} = E{X(t − d)X∗(s − d)} = RX(t − s).

Thus the process {Y(t)} is w.s.s. and the power spectral density SY(ω) equals
SX(ω). In addition, the crosscorrelation and cross power spectral density of {X(t)}
and {Y(t)} are
RXY(t, s) = E{X(t)Y∗(s)} = E{X(t)X∗(s − d)}
= RX(t − s + d) = RX(τ + d), where τ = t − s,
and
SXY(ω) = F{RX(τ + d)} = ∫_{−∞}^{∞} RX(τ + d)e^{−jωτ} dτ
= ∫_{−∞}^{∞} RX(u)e^{−jωu}e^{jωd} du = SX(ω)e^{jωd}
with the substitution u = τ + d.

That is, {X(t)} and {Y(t)} are jointly w.s.s.

4.2.3 Random process and linear systems

❘ ❙ ❚ ■ Random process and linear systems ■❚❙❘

❖ If the input random process is two-sided and w.s.s., the output of a
linear time invariant (LTI) filter is also w.s.s..

❖ If the input random process is one-sided and w.s.s., however, the output
of an LTI filter is not w.s.s. in general.


❖ Let h(t) be the impulse response of an LTI system and let
H(ω) = F{h(t)} be the transfer function. Then the crosscorrelation
RXY (t1, t2) of the input random process {X(t)} and output random
process {Y (t)} and autocorrelation RY (t1, t2) of the output are
RXY(t1, t2) = E{X(t1)Y∗(t2)} = E{X(t1) ∫ X∗(t2 − α)h∗(α) dα}
= ∫ E{X(t1)X∗(t2 − α)}h∗(α) dα = ∫ RX(t1, t2 − α)h∗(α) dα

and

RY(t1, t2) = E{Y(t1)Y∗(t2)}
= E{∫ X(t1 − α)h(α) dα ∫ X∗(t2 − β)h∗(β) dβ}
= ∫∫ RX(t1 − α, t2 − β)h(α)h∗(β) dα dβ
= ∫ RXY(t1 − α, t2)h(α) dα,

respectively.

❑ If the input and output are jointly w.s.s., since RX(t1, t2 − α) = RX(t1 − t2 + α) = RX(τ + α) and RXY(t1 − α, t2) = RXY(t1 − t2 − α) = RXY(τ − α), we can obtain
RXY(τ) = ∫ RX(τ + α)h∗(α) dα = RX(τ) ∗ h∗(−τ),
RY(τ) = RXY(τ) ∗ h(τ).

❑ The cross power spectral density and power spectral density of the output are
SXY(ω) = SX(ω)H∗(ω) and SY(ω) = SXY(ω)H(ω),
respectively.

❑ We can express the autocorrelation and power spectral density of the output in terms of those of the input. Let SY(ω) be the power spectral density of the output process {Yt}. Then we can obtain
RY(τ) = F⁻¹{SY(ω)} = F⁻¹{|H(ω)|²SX(ω)}.
Specifically, we have
RY(τ) = RX(τ) ∗ ρh(τ) and SY(ω) = SX(ω)|H(ω)|²,
where ρh(t), called the deterministic autocorrelation of h(t), is defined by
ρh(t) = F⁻¹(|H(ω)|²) = h(t) ∗ h∗(−t) = ∫ h(t + τ)h∗(τ) dτ.
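❑ A simulation sketch (added for illustration) of the input-output relations above: white noise through an FIR filter h, so that RY(τ) = σ²ρh(τ) and SY(ω) = σ²|H(ω)|²:

```python
import numpy as np

# Filter white noise with a short FIR impulse response h and compare the
# output autocorrelation with sigma^2 * rho_h(tau).
rng = np.random.default_rng(7)
sigma = 1.0
h = np.array([1.0, -0.5, 0.25])
X = rng.normal(0.0, sigma, size=1_000_000)
Y = np.convolve(X, h, mode="valid")

RY0_theory = sigma**2 * np.sum(h * h)            # rho_h(0) = 1.3125
RY1_theory = sigma**2 * np.sum(h[:-1] * h[1:])   # rho_h(1) = -0.625
RY0_hat = np.mean(Y * Y)
RY1_hat = np.mean(Y[:-1] * Y[1:])
```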

❖ Coherence function
A measure of the degree to which two w.s.s. processes are related by an LTI transformation is the coherence function ρ̃(ω) defined by
ρ̃XY(ω) = SXY(ω) / [SX(ω)SY(ω)]^{1/2}.
The coherence function exhibits the property |ρ̃(ω)| ≤ 1, similar to the correlation coefficient between two random variables, which is a measure of the degree to which two random variables are linearly related. Here, |ρ̃(ω)| = 1 if and only if {X(t)} and {Y(t)} are linearly related, that is, Y(t) = X(t) ∗ h(t). Note the similarity between the coherence function ρ̃(ω) and the correlation coefficient ρ.

4.2.4 Ergodic theorem*

❘ ❙ ❚ ■ Ergodic theorem* ■❚❙❘

❖ If there exists a random variable X̂ such that
(1/n) Σ_{i=0}^{n−1} Xi → X̂ as n → ∞ (discrete time random process), or
(1/T) ∫_0^T X(t) dt → X̂ as T → ∞ (continuous time random process),
the process {Xn, n ∈ I} is said to satisfy an ergodic theorem.

❑ When a process satisfies an ergodic theorem, the sample mean converges to 'something', which may be different from the expectation.

❑ In some cases, a random process with time-varying mean satisfies an ergodic theorem, as shown in the example below.

❑ Suppose that nature at the beginning of time randomly selects one of two coins with equal probability, one having bias p and the other having bias q. After the coin is selected it is flipped once per second forever. The output random process is a one-zero sequence depending on the face of the coin. Clearly, the time average will converge: it will converge to p if the first coin was selected and to q if the second coin was selected. That is, the time average will converge to a random variable. In particular, it will not converge to the expected value p/2 + q/2.

❖ If lim_{n→∞} E{(Yn − Y)²} = 0, then Yn, n = 1, 2, · · · , is said to converge to Y in mean square, which is denoted as l.i.m._{n→∞} Yn = Y, where l.i.m. denotes 'limit in the mean'.
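❑ A simulation sketch (added for illustration) of the two-coin example: the time average converges, but to a random variable (p or q), not to the overall mean (p + q)/2:

```python
import numpy as np

# Nature picks one coin once; the time average then tracks that coin's bias.
rng = np.random.default_rng(8)
p, q = 0.9, 0.1
n_flips = 200_000

bias = p if rng.random() < 0.5 else q
flips = (rng.random(n_flips) < bias).astype(float)
time_average = flips.mean()

dist_to_bias = abs(time_average - bias)          # small
dist_to_mean = abs(time_average - (p + q) / 2)   # ~ 0.4, not small
```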

❖ Mean ergodic theorem
Let {Xn} be an uncorrelated discrete time random process with finite mean E{Xn} = m and finite variance σ²Xn = σ²X. Then the sample mean Sn = (1/n) Σ_{i=0}^{n−1} Xi converges to the expected value E{Xn} = m in mean square. That is,
l.i.m._{n→∞} (1/n) Σ_{i=0}^{n−1} Xi = m.

❖ A sufficient condition for a w.s.s. discrete time random process {Xn, n ∈ I} to satisfy a mean ergodic theorem is KX(0) < ∞ and lim_{n→∞} KX(n) = 0.

❖ Let {Xn} be a discrete time random process with mean E{Xn} and autocovariance function KX(i, j). The process need not be stationary in any sense. A necessary and sufficient condition for
l.i.m._{n→∞} (1/n) Σ_{i=0}^{n−1} Xi = m
is
lim_{n→∞} (1/n) Σ_{i=0}^{n−1} E{Xi} = m and lim_{n→∞} (1/n²) Σ_{i=0}^{n−1} Σ_{k=0}^{n−1} KX(i, k) = 0.
That is, the sample mean converges in mean square if and only if the process is asymptotically uncorrelated in this averaged sense and its sample averages converge.

❖ Mean square ergodic theorem
Let X̄n = (1/n) Σ_{i=1}^{n} Xi, where {Xn, n ≥ 1} is a second order stationary process with mean m and autocovariance K(i) = Cov(Xn, Xn+i). Then lim_{n→∞} E{(X̄n − m)²} = 0 if and only if lim_{n→∞} (1/n) Σ_{i=0}^{n−1} K(i) = 0.

❑ Let K(i) be the autocovariance of a second order stationary Gaussian process with mean 0. If
lim_{T→∞} (1/T) Σ_{i=1}^{T} K²(i) = 0,
then
lim_{T→∞} E{|K̂T(i) − K(i)|²} = 0 for i = 1, 2, · · · ,
where K̂T(i) = (1/T) Σ_l Xl Xl+i is the sample autocovariance.

4.2.5 Ergodicity*

❘ ❙ ❚ ■ Ergodicity* ■❚❙❘

❖ Shift operator
An operator T for which Tω = {xt+1, t ∈ I}, where ω = {xt, t ∈ I} is an infinite sequence in the probability space (R^I, B(R)^I, P), is called a shift operator.

❖ Stationary process
A discrete time random process with process distribution P is stationary if P(T⁻¹F) = P(F) for any element F in B(R)^I.

❖ Invariant event
An event F is said to be invariant with respect to the shift operator T if and only if T⁻¹F = F. For example, F = {sequences of alternating 0 and 1} is an invariant event.

❖ Ergodicity, ergodic process
A random process is said to be ergodic if P(F) = 0 or P(F) = 1 for any invariant event F.

❑ Ergodicity has nothing to do with stationarity or convergence of sample averages.

❑ Consider a two-sided process with distribution
P(· · · , x−1 = 0, x0 = 1, x1 = 0, x2 = 1, · · · ) = p,
P(· · · , x−1 = 1, x0 = 0, x1 = 1, x2 = 0, · · · ) = 1 − p.
Clearly, the invariant event F = {sequences of alternating 0 and 1} has probability P(F) = 1. Any other invariant event that does not include F (for example, the all-1 sequence) has probability 0. Thus the random process is ergodic.

4.3 Process with i.i.s.

❘ ❙ ❚ ■ Process with i.i.s. ■❚❙❘

❖ Process with independent increments
A random process {Yt, t ∈ I} is said to have independent increments or to be an independent increment process if for all choices of k = 1, 2, · · · and all choices of ordered sample times {t0, t1, · · · , tk}, the k increments Yti − Yti−1, i = 1, 2, · · · , k, are independent random variables.

❖ Process with stationary increments
When the increments {Yt − Ys} are stationary, the random process {Yt} is called a stationary increment random process.

❖ Process with i.i.s.
A random process is called an independent and stationary increment (i.i.s.) process if its increments are independent and stationary.

❖ A discrete time random process is an i.i.s. process if and only if it can be represented as the sum of i.i.d. random variables.

❑ Clearly, an i.i.s. process itself is not stationary.

❖ Mean and autocovariance of i.i.s. process
The mean and autocovariance of a discrete time i.i.s. process are
E{Yt} = tE{Y1}, t ≥ 0,
KY(t, s) = σ²Y1 min(t, s), t, s ≥ 0.

❖ Let the process {Xt, t ∈ T} be an i.i.s. process. If m0 = E{X0}, m1 = E{X1} − m0, σ0² = E{(X0 − m0)²}, and σ1² = E{(X1 − m1)²} − σ0², we have
E{Xt} = m0 + m1t,
Var{Xt} = σ0² + σ1²t.

❖ Point process and counting process
A sequence T1 ≤ T2 ≤ T3 ≤ · · · of ordered random variables is called a point process. For example, the set of times defined by Poisson points is a point process. A counting process Y(t) can be defined as the number of points in the interval [0, t). With T0 = 0, we have
Y(t) = i, Ti ≤ t < Ti+1, i = 0, 1, · · · .

❘ ❙ ❚ ■ A counting process ■❚❙❘

❖ A process constructed by summing the outputs of a Bernoulli process
❑ Let {Xn, n = 1, 2, · · · } be a Bernoulli process with parameter p. Define the random process {Yn, n = 0, 1, 2, · · · } as Y0 = 0 and
Yn = Σ_{i=1}^{n} Xi = Yn−1 + Xn, n = 1, 2, · · · .
Since the random variable Yn represents the number of 1's in {X1, X2, · · · , Xn}, we have Yn = Yn−1 or Yn = Yn−1 + 1. In general, a discrete time process satisfying this relation is called a counting process since it is nondecreasing and, when it jumps, it is always with an increment of 1.

❖ Properties of the random process {Yn}
❑ E{Yn} = np, Var{Yn} = np(1 − p), and KY(k, j) = p(1 − p) min(k, j).
❑ Marginal pmf for Yn:
pYn(y) = Pr{there are exactly y ones in X1, X2, · · · , Xn} = (n choose y) p^y (1 − p)^{n−y}, y = 0, 1, · · · , n.
❑ Since the marginal pmf is binomial, the process {Yn} is called a binomial counting process.
❑ The process {Yn} is not stationary since the marginal pmf depends on the time index n.
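❑ A simulation sketch (added for illustration) of the binomial counting process properties above:

```python
import numpy as np

# Check E{Y_n} = np, Var{Y_n} = np(1 - p), K_Y(k, j) = p(1 - p) min(k, j).
rng = np.random.default_rng(9)
p, n_steps, n_paths = 0.4, 30, 200_000

X = (rng.random((n_paths, n_steps)) < p).astype(float)
Y = np.cumsum(X, axis=1)                   # Y_n = X_1 + ... + X_n

n, k, j = 20, 10, 25
mean_hat = Y[:, n - 1].mean()              # ~ np = 8
var_hat = Y[:, n - 1].var()                # ~ np(1 - p) = 4.8
cov_hat = np.mean((Y[:, k - 1] - k * p) * (Y[:, j - 1] - j * p))
cov_theory = p * (1 - p) * min(k, j)       # 2.4
```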

❘ ❙ ❚ ■ Random walk process ■❚❙❘

❖ One dimensional random walk, random walk
❑ Consider the modified Bernoulli process for which the event 'failure' is represented by −1 instead of 0:
Zn = +1 for 'success' in the nth trial, Zn = −1 for 'failure' in the nth trial,
and consider the sum
Wn = Σ_{i=1}^{n} Zi
of the variables Zn. The process {Wn} is referred to as a one dimensional random walk or random walk.
❑ Let p = Pr{Zn = 1}.

❑ Since Zi = 2Xi − 1, it follows that the random walk process {Wn} is related to the binomial counting process {Yn} by Wn = 2Yn − n. Using the results on the mean and autocorrelation functions of the binomial counting process and the linearity of expectation, we have
mW(n) = (2p − 1)n,
σ²Wn = 4p(1 − p)n,
KW(n1, n2) = 4p(1 − p) min(n1, n2).

4.4 Discrete time process with i.i.s.

❘ ❙ ❚ ■ Discrete time process with i.i.s. ■ ❚ ❙ ❘

❖ Discrete time discrete alphabet process {Yn} with i.i.s.
As mentioned before, {Yn} can be defined by the sum of i.i.d. random variables
{Xi}.
❑ Let us consider the following conditional pmf,

pYn|Y n−1(yn|y n−1) = pYn|Yn−1,···,Y1(yn|yn−1, · · · , y1)
                    = Pr(Yn = yn|Y n−1 = y n−1),

where Y n = (Yn, Yn−1, · · · , Y1) and y n = (yn, yn−1, · · · , y1).
❑ The conditioning event {Yi = yi, i = 1, 2, · · · , n − 1} above is the same as the
event {X1 = y1, Xi = yi − yi−1, i = 2, · · · , n − 1}. In addition, under the
conditioning event, we have Yn = yn if and only if Xn = yn − yn−1.

❑ Assuming y0 = 0,

pYn|Y n−1(yn|y n−1)
  = Pr(Yn = yn|X1 = y1, Xi = yi − yi−1, i = 2, · · · , n − 1)
  = Pr(Xn = yn − yn−1|Xi = yi − yi−1, i = 2, · · · , n − 1)
  = pXn|X n−1(yn − yn−1|yn−1 − yn−2, · · · , y2 − y1, y1),

where X n−1 = (Xn−1, Xn−2, · · · , X1).
❑ If {Xn} are i.i.d.,

pYn|Y n−1(yn|y n−1) = pX(yn − yn−1)

since Xn is independent of Xk for k < n.
❑ Thus the joint pmf is

pY n(y n) = pYn|Y n−1(yn|y n−1) pY n−1(y n−1)
          = pY1(y1) Π_{i=2}^{n} pYi|Yi−1,···,Y1(yi|yi−1, · · · , y1)
          = Π_{i=1}^{n} pX(yi − yi−1).

❑ We can express the conditional pmf of Yn given Yn−1 as follows:

pYn|Yn−1(yn|yn−1) = Pr(Yn = yn|Yn−1 = yn−1)
                  = Pr(Xn = yn − yn−1|Yn−1 = yn−1).

The conditioning event {Yn−1 = yn−1} depends only on Xk for k < n, and Xn is
independent of Xk for k < n. Thus, this conditioning event does not affect Xn.
Consequently,

pYn|Yn−1(yn|yn−1) = pX(yn − yn−1).

❖ Applying the result above to the binomial counting process, we obtain

pY n(y n) = Π_{i=1}^{n} p^(yi−yi−1) (1 − p)^(1−(yi−yi−1)),

where yi − yi−1 = 0 or 1 for i = 1, 2, · · · , n and y0 = 0.

❖ Properties of processes with i.i.s.
❑ Discrete time i.i.s. processes (such as the binomial counting process and the
discrete random walk) have the following property:

pYn|Y n−1(yn|y n−1) = pYn|Yn−1(yn|yn−1),

i.e., Pr{Yn = yn|Y n−1 = y n−1} = Pr{Yn = yn|Yn−1 = yn−1}.
❑ A discrete time discrete alphabet random process with this property is called a
Markov process. Roughly speaking, given the most recent past sample (or the
current sample), the other past samples do not affect the probability of what
happens next. Thus all i.i.s. processes are Markov processes.
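The product form of the joint pmf for the binomial counting process can be verified by brute-force enumeration of all Bernoulli outcomes; a sketch, with `path_pmf` an illustrative helper name:

```python
from itertools import product

def path_pmf(ys, p):
    """Joint pmf of a binomial-counting path (y_1, ..., y_n) with y_0 = 0:
    the product of p^(y_i - y_{i-1}) (1-p)^(1 - (y_i - y_{i-1}))."""
    prob, prev = 1.0, 0
    for y in ys:
        d = y - prev
        if d not in (0, 1):
            return 0.0       # impossible path
        prob *= p**d * (1 - p) ** (1 - d)
        prev = y
    return prob

p, n = 0.35, 6
# Brute force: each Bernoulli outcome (x_1..x_n) maps to one path.
brute = {}
for xs in product((0, 1), repeat=n):
    ys, t, pr = [], 0, 1.0
    for x in xs:
        t += x
        ys.append(t)
        pr *= p if x == 1 else 1 - p
    brute[tuple(ys)] = brute.get(tuple(ys), 0.0) + pr

for ys, pr in brute.items():
    assert abs(path_pmf(ys, p) - pr) < 1e-12
```

Each path probability matches the product formula, and the path probabilities sum to one.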

❖ Gambler's ruin problem
A person wants to buy a new car whose price is N won. The person has k
(0 < k < N) won, and he intends to earn the difference from gambling. The game
this person is going to play is that if a toss of a coin results in a head, he will earn
1 won, and if it results in a tail, he will lose 1 won. Let p represent the probability
of heads, and q = 1 − p. Assuming the man continues to play the game until he
earns enough money for the new car or loses all the money he has, what is the
probability that the man loses all the money he has?
❑ Let Ak be the event that the man loses all the money when he has started with
k won, and B be the event that the man wins a game. Then,

P(Ak) = P(Ak|B)P(B) + P(Ak|B^c)P(B^c).

Since the game will start again with k + 1 won if a toss of a coin results in a head
and with k − 1 won if it results in a tail, it is easy to see that

P(Ak|B) = P(Ak+1),  P(Ak|B^c) = P(Ak−1).

❑ Let pk = P(Ak). Then p0 = 1, pN = 0, and

pk = p pk+1 + q pk−1, 1 ≤ k ≤ N − 1.

Assuming pk = θ^k, we get from the equation above

pθ² − θ + q = 0,

which gives θ1 = 1 and θ2 = q/p, so that pk = A1 θ1^k + A2 θ2^k.
❑ If p ≠ 0.5, using the boundary conditions p0 = 1 and pN = 0, we can find

pk = ((q/p)^k − (q/p)^N) / (1 − (q/p)^N).

❑ If p = 0.5, we have pk = (A1 + A2 k)θ1^k since q/p = 1. Thus using the
boundary conditions p0 = 1 and pN = 0, we can find

pk = 1 − k/N.
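The closed-form ruin probabilities can be cross-checked against a direct numerical solution of the recursion pk = p·pk+1 + q·pk−1; a sketch using repeated in-place sweeps (function names are ours):

```python
def ruin_closed_form(k, N, p):
    """Ruin probability p_k from the closed-form solutions above."""
    if abs(p - 0.5) < 1e-12:
        return 1.0 - k / N
    r = (1 - p) / p                      # r = q/p
    return (r**k - r**N) / (1 - r**N)

def ruin_by_recursion(k, N, p, sweeps=20000):
    """Solve p_k = p*p_{k+1} + q*p_{k-1} with p_0 = 1, p_N = 0
    by repeated sweeps; boundaries stay fixed."""
    q = 1 - p
    v = [1.0 - i / N for i in range(N + 1)]   # initial guess
    for _ in range(sweeps):
        for i in range(1, N):
            v[i] = p * v[i + 1] + q * v[i - 1]
    return v[k]

for p in (0.45, 0.5, 0.55):
    for k in (2, 5, 8):
        assert abs(ruin_closed_form(k, 10, p) - ruin_by_recursion(k, 10, p)) < 1e-6
```

Both branches of the closed form (p ≠ 0.5 and p = 0.5) agree with the fixed point of the recursion.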

❘ ❙ ❚ ■ Discrete time Wiener process ■ ❚ ❙ ❘

❖ Discrete time Wiener process
Let {Xn} be an i.i.d. zero-mean Gaussian process with variance σ². The discrete
time Wiener process {Yn} is defined by Y0 = 0 and

Yn = Σ_{i=1}^{n} Xi = Yn−1 + Xn, n = 1, 2, · · · .

❑ The discrete time Wiener process is also called the discrete time diffusion
process or discrete time Brownian motion.
❑ Since the discrete time Wiener process is formed as sums of an i.i.d. process, it
has i.i.s. Thus we have E{Yn} = 0 and KY(k, j) = σ² min(k, j).
❑ The Wiener process is a first-order autoregressive process.
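A Monte Carlo sketch of the discrete time Wiener process, checking E{Yn} = 0 and KY(k, j) = σ² min(k, j) within sampling error (helper name, seed, and tolerances are ours):

```python
import random

def wiener_paths(n_steps, sigma, n_paths, seed=0):
    """Sample paths of the discrete time Wiener process Y_n = Y_{n-1} + X_n,
    with X_n i.i.d. zero-mean Gaussian of standard deviation sigma."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        y, path = 0.0, [0.0]
        for _ in range(n_steps):
            y += rng.gauss(0.0, sigma)
            path.append(y)
        paths.append(path)
    return paths

paths = wiener_paths(10, 1.0, 20000)
m = len(paths)
mean5 = sum(p[5] for p in paths) / m
cov37 = sum(p[3] * p[7] for p in paths) / m
assert abs(mean5) < 0.1          # E{Y_5} = 0
assert abs(cov37 - 3.0) < 0.3    # K_Y(3, 7) = sigma^2 * min(3, 7) = 3
```

The empirical covariance between Y3 and Y7 is governed by the earlier index, exactly the min(k, j) behavior shared with the binomial counting process.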

❖ The discrete time Wiener process is a Gaussian process with mean function
m(t) = 0 and autocovariance function KX(t, s) = σ² min(t, s).
❖ Since the discrete time Wiener process is an i.i.s. process, we have

fYn|Y n−1(yn|y n−1) = fYn|Yn−1(yn|yn−1) = fX(yn − yn−1).

❖ Markov process
A discrete time random process {Yn} is called a first order Markov process if it
satisfies

Pr{Yn ≤ yn|yn−1, yn−2, · · · } = Pr{Yn ≤ yn|yn−1} for all n.

❑ As in the discrete alphabet case, a process with this property is called a Markov
process.

4.5 Continuous time i.i.s. process

❘ ❙ ❚ ■ Continuous time i.i.s. process ■ ❚ ❙ ❘

❖ Continuous time i.i.s. process
❑ When we deal with a continuous time process with i.i.s., we need to consider a
more general collection of sample times than in the case of a discrete time process.
❑ In the continuous time case, we assume that we are given the cdf of the
increments as

FYt−Ys(y) = FY|t−s|−Y0(y) = FY|t−s|(y), t > s.

i=1 n fYt1 .··· .s.Ytn (y1.i. · · · .Random processes . · · · .··· .5 Continuous time i.Ytn (y1. these can be used to ﬁnd the joint pmf or pdf as n  pYt1 . i=1 Pr{Ytn ≤ yn|Ytn−1 = yn−1.Chapter 4 Random process 3 ❖ The joint probability functions of a continuous time process Deﬁne the random variable {Xn} by Xn = Ytn − Ytn−1 . i=1 4. · · · } = FXn (yn − yn−1) = FYtn −Ytn−1 (yn − yn−1). yn) = pYti −Yti−1 (yi − yi−1). Then {Xn} are independent and n  Ytn = Xi. yn) = fYti −Yti−1 (yi − yi−1). As in the case of discrete time processes. process . Ytn−2 = yn−2.

❖ If a process {Yt} has i.i.s., and the cdf, pdf, or pmf of Yt = Yt − Y0 is given,
the process can be completely described as shown above.
❖ As in the discrete time case, a continuous time random process {Yt} is called a
Markov process if we have

Pr{Ytn ≤ yn|Ytn−1 = yn−1, Ytn−2 = yn−2, · · · } = Pr{Ytn ≤ yn|Ytn−1 = yn−1},

or, equivalently,

fYtn|Ytn−1,···,Yt1(yn|yn−1, · · · , y1) = fYtn|Ytn−1(yn|yn−1),
pYtn|Ytn−1,···,Yt1(yn|yn−1, · · · , y1) = pYtn|Ytn−1(yn|yn−1),

for all n, yn, yn−1, · · · , and t1, t2, · · · , tn.
❑ A continuous time i.i.s. process is a Markov process.

5.i. t ≥ 0. and Gaussian. ❑ The mean is zero. process. W (0) = 0.5 Continuous time i.Random processes .Chapter 4 Random process 5 ❘ ❙ ❚ ■ Wiener process ■ ❚ ❙ ❘ ❖ Wiener process A process is called Wiener process if it satisﬁes ❑ The initial position is zero. ❖ Wiener process is a continuous time i. That is.s. ❖ The increments of Wiener process are Gaussian random variables with zero mean. stationary.s. process / 4.1 Wiener process . E{W (t)} = 0. ❑ The increments of W (t) are independent.i. That is. ❖ The ﬁrst order pdf of Wiener process is   1 x2 fWt (x) = √ exp − 2 . 2πtσ 2 2tσ 4.

❖ Wiener process, Brownian motion
The Wiener process is the limit of the random-walk process.

❖ Properties of the Wiener process
❑ The distribution of Xt2 − Xt1, t2 > t1, depends only on t2 − t1, not on t1 and
t2 individually.

❖ Let us show that the Wiener process is Gaussian. From the definition of the
Wiener process, the random variables W(t1), W(t2) − W(t1), W(t3) − W(t2), · · · ,
W(tk) − W(tk−1) are independent Gaussian random variables. The random
variables W(t1), W(t2), W(t3), · · · , W(tk) can be obtained from the following
linear transformation of W(t1) and the increments:

W(t1) = W(t1),
W(t2) = W(t1) + {W(t2) − W(t1)},
W(t3) = W(t1) + {W(t2) − W(t1)} + {W(t3) − W(t2)},
· · · ,
W(tk) = W(t1) + {W(t2) − W(t1)} + · · · + {W(tk) − W(tk−1)}.

Since W(t1), W(t2), · · · , W(tk) are linear combinations of independent Gaussian
random variables, they are jointly Gaussian. Thus {W(t)} is Gaussian.
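The linear transformation from increments to samples is just a cumulative sum; a minimal sketch (the time points and σ² below are our illustrative choices):

```python
import random

rng = random.Random(7)
sigma2 = 1.0
times = [0.5, 1.0, 2.5, 4.0]

# Independent Gaussian increments W(t1), W(t2)-W(t1), ..., each with
# variance sigma2 * (t_i - t_{i-1}), as the stationarity of increments requires.
incs, prev = [], 0.0
for t in times:
    incs.append(rng.gauss(0.0, (sigma2 * (t - prev)) ** 0.5))
    prev = t

# The linear transformation above is the cumulative sum of the increments.
w, acc = [], 0.0
for inc in incs:
    acc += inc
    w.append(acc)

for k in range(len(times)):
    assert abs(w[k] - sum(incs[: k + 1])) < 1e-12
```

Each W(tk) is a sum of independent Gaussians, which is why the samples are jointly Gaussian.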

❘ ❙ ❚ ■ Poisson counting process ■ ❚ ❙ ❘

❖ Poisson counting process
A continuous time counting process {Nt, t ≥ 0} with the following properties is
called the Poisson counting process.
❑ N0 = 0.
❑ The process has i.i.s. Hence, the increments over non-overlapping time intervals
are independent random variables.
❑ In a very small time interval, the probability of an increment of 1 is
proportional to the length of the interval, and the probability of an increment
larger than 1 is 0. That is,

Pr{Nt+∆t − Nt = 1} = λ∆t + o(∆t),
Pr{Nt+∆t − Nt ≥ 2} = o(∆t),

and thus

Pr{Nt+∆t − Nt = 0} = 1 − λ∆t + o(∆t),

where λ is a proportionality constant.

❖ The Poisson counting process is a continuous time discrete alphabet i.i.s.
process.
❖ We have obtained the Wiener process as the limit of a discrete time discrete
amplitude random-walk process. Similarly, the Poisson counting process can be
derived as the limit of a binomial counting process using the Poisson
approximation.

❖ The probability mass function pNt−N0(k) = pNt(k) of the increment Nt − N0
between the starting time 0 and t > 0
Let us use the notation p(k, t) = pNt−N0(k), t > 0. Using the independence of
increments and the third property of the Poisson counting process, we have

p(k, t + ∆t) = Σ_{n=0}^{k} Pr{Nt = n} Pr{Nt+∆t − Nt = k − n|Nt = n}
             = Σ_{n=0}^{k} Pr(Nt = n) Pr(Nt+∆t − Nt = k − n)
             ≈ p(k, t)(1 − λ∆t) + p(k − 1, t)λ∆t,

which yields

(p(k, t + ∆t) − p(k, t))/∆t = p(k − 1, t)λ − p(k, t)λ.

When ∆t → 0, the equation above becomes the following differential equation:

(d/dt) p(k, t) = λp(k − 1, t) − λp(k, t),

where the initial conditions are p(0, 0) = 1 and p(k, 0) = 0 for k = 1, 2, · · · ,
since Pr{N0 = 0} = 1. Solving the differential equation gives

pNt(k) = p(k, t) = ((λt)^k e^(−λt))/k!, k = 0, 1, · · · , t > 0.

❖ The pmf for k jumps in an arbitrary interval (s, t), t ≥ s, is

pNt−Ns(k) = ((λ(t − s))^k e^(−λ(t−s)))/k!, k = 0, 1, · · · .
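The system of differential equations can also be integrated numerically and compared with the closed-form Poisson pmf; a sketch using a simple Euler scheme (step count and tolerance are our choices):

```python
from math import exp, factorial

def poisson_pmf_by_ode(k_max, lam, t_end, n_steps=100000):
    """Euler-integrate d/dt p(k,t) = lam*p(k-1,t) - lam*p(k,t),
    with p(0,0) = 1 and p(k,0) = 0 for k >= 1."""
    dt = t_end / n_steps
    p = [1.0] + [0.0] * k_max
    for _ in range(n_steps):
        # Simultaneous update: the comprehension reads the old vector p.
        p = [p[k] + dt * (lam * (p[k - 1] if k > 0 else 0.0) - lam * p[k])
             for k in range(k_max + 1)]
    return p

lam, t = 2.0, 1.5
p = poisson_pmf_by_ode(8, lam, t)
for k in range(9):
    exact = (lam * t) ** k * exp(-lam * t) / factorial(k)
    assert abs(p[k] - exact) < 1e-4
```

The numerically integrated p(k, t) matches (λt)^k e^(−λt)/k! to the discretization accuracy.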

❘ ❙ ❚ ■ Martingale ■ ❚ ❙ ❘

❖ Martingale property
An independent increment process {Xt} with zero mean satisfies

E{X(tn) − X(tn−1)|X(t1), X(t2), · · · , X(tn−1)} = 0

for all t1 < t2 < · · · < tn and integer n ≥ 2. This property can be rewritten as

E{X(tn)|X(t1), X(t2), · · · , X(tn−1)} = X(tn−1),

which is called the martingale property.

❖ Martingale
A process {Xn, n ≥ 0} with the following properties is called a martingale process.
❑ E{|Xn|} < ∞.
❑ E{Xn+1|X0, · · · , Xn} = Xn.
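As an illustration of the defining condition (the centered binomial counting process Mn = Yn − np is our example, not from the text), E{Mn+1|X1, · · · , Xn} = Mn can be checked by enumerating every possible past:

```python
from itertools import product

p, n = 0.3, 4

# M_n = Y_n - n*p, where Y_n is the binomial counting process: the centering
# removes the drift, so the increments have zero conditional mean.
for xs in product((0, 1), repeat=n):
    y = sum(xs)
    m_now = y - n * p
    # X_{n+1} is 1 w.p. p and 0 w.p. 1-p, independent of the past:
    m_next = p * (y + 1 - (n + 1) * p) + (1 - p) * (y - (n + 1) * p)
    assert abs(m_next - m_now) < 1e-12   # E{M_{n+1} | past} = M_n
```

The same computation with uncentered Yn gives E{Yn+1|past} = Yn + p ≠ Yn, so the zero-mean-increment condition is essential.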

4.6 Compound process*

❘ ❙ ❚ ■ Discrete time compound process* ■ ❚ ❙ ❘

❖ Discrete time compound process
❑ Let {Nk, k = 0, 1, · · · } be a discrete time counting process such as the binomial
counting process, and let {Xk, k = 1, 2, · · · } be an i.i.d. random process. Assume
that the two processes are independent of each other. Define the random process
{Yk, k = 0, 1, · · · } by Y0 = 0 and

Yk = Σ_{i=1}^{Nk} Xi, k = 1, 2, · · · ,

where we assume Yk = 0 if Nk = 0. The process {Yk} is called a discrete time
compound process. The process is also referred to as a ‘doubly stochastic’ process
because of the two sources {Nk} and {Xk} of randomness.

❑ The expectation of Yk can be obtained using conditional expectation as

E{Yk} = E{Σ_{i=1}^{Nk} Xi}                  (1)
      = E{E{Yk|Nk}}
      = Σ_n pNk(n) E{Yk|Nk = n}
      = Σ_n pNk(n) n E{X}
      = E{Nk} E{X}.

❖ Let X1, X2, · · · be an i.i.d. sequence of random variables and GX be their
common moment generating function. Let the random variable N be independent
of the Xi and GN be the moment generating function of N. Then the moment
generating function of S = Σ_{i=1}^{N} Xi is

GS(t) = GN(GX(t)).
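The composition formula can be illustrated numerically with probability generating functions GN(z) = E{z^N} and GX(z) = E{z^X}; the Binomial/Bernoulli choices below are our assumptions for the sake of a checkable example:

```python
from math import comb

def pgf_N(z, n, q):
    """pgf of N ~ Binomial(n, q)."""
    return (1 - q + q * z) ** n

def pgf_X(z, p):
    """pgf of X ~ Bernoulli(p)."""
    return 1 - p + p * z

def pgf_S_direct(z, n, q, p):
    """E{z^S} for S = X_1 + ... + X_N, computed by conditioning on N:
    E{z^S | N = m} = pgf_X(z)^m."""
    total = 0.0
    for m in range(n + 1):
        prob_N = comb(n, m) * q**m * (1 - q) ** (n - m)
        total += prob_N * pgf_X(z, p) ** m
    return total

n, q, p = 6, 0.4, 0.7
for z in (0.0, 0.3, 0.9, 1.0):
    assert abs(pgf_S_direct(z, n, q, p) - pgf_N(pgf_X(z, p), n, q)) < 1e-12
```

Differentiating the composition at z = 1 recovers E{S} = E{N}E{X}, the expectation formula derived above.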

❘ ❙ ❚ ■ Continuous time compound process* ■ ❚ ❙ ❘

❖ When a continuous time counting process {Nt, t ≥ 0} and an i.i.d. process
{Xi, i = 1, 2, · · · } are independent of each other, the process

Yt = Y(t) = Σ_{i=1}^{Nt} Xi

is called a continuous time compound process. Here, we put Y(t) = 0 when
Nt = 0.
❖ We have

E{Yt} = E{Nt} E{X},
MYt(u) = E{MX^Nt(u)}.

❖ A compound process is continuous and differentiable except at the ‘jumps’,
where a new random variable is added.
❖ If {Nt} is a Poisson counting process, we have E{Yt} = λtE{X} and

MYt(u) = Σ_{k=0}^{∞} MX^k(u) ((λt)^k e^(−λt))/k!
       = e^(−λt) Σ_{k=0}^{∞} (λt MX(u))^k / k!
       = e^(−λt(1−MX(u))).
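The compound Poisson MGF can be checked by truncating the series above and comparing with the closed form; a sketch with X ~ Bernoulli(p), a choice of ours for concreteness:

```python
from math import exp, factorial

def mgf_X(u, p):
    """MGF of X ~ Bernoulli(p)."""
    return 1 - p + p * exp(u)

def mgf_Yt_series(u, lam, t, p, k_max=60):
    """Truncated series: sum_k M_X(u)^k (lam t)^k e^{-lam t} / k!."""
    return sum(mgf_X(u, p) ** k * (lam * t) ** k * exp(-lam * t) / factorial(k)
               for k in range(k_max + 1))

lam, t, p = 2.0, 1.5, 0.3
for u in (-0.5, 0.0, 0.4):
    closed = exp(-lam * t * (1 - mgf_X(u, p)))
    assert abs(mgf_Yt_series(u, lam, t, p) - closed) < 1e-9

# E{Y_t} = lam * t * E{X}: numerical derivative of the MGF at u = 0.
h = 1e-6
deriv = (exp(-lam * t * (1 - mgf_X(h, p)))
         - exp(-lam * t * (1 - mgf_X(-h, p)))) / (2 * h)
assert abs(deriv - lam * t * p) < 1e-4
```

The series collapses to the exponential closed form, and its derivative at u = 0 reproduces E{Yt} = λtE{X}.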