Gloria González-Rivera
University of California, Riverside
and
Jesús Gonzalo U. Carlos III de Madrid
Spring 2002
Copyright (© MTS-2002 GG): You are free to use and modify these slides for educational purposes,
but if you improve this material, please send us your new version.
Brief Review of Probability
• Sample Space: Ω = {ω}, the set of possible outcomes of some random experiment
• State Space: S, a space containing the possible values of a random variable; common choices are
the natural numbers N, the reals R, k-vectors R^k, the complex numbers C, the positive reals R^+, etc.
• Probability: P : F → [0, 1], obeying the three axioms that you should already know well
F_Z(z) = P(Z_1 ≤ z_1, ..., Z_n ≤ z_n)
= P({ω : Z_1(ω) ≤ z_1, ..., Z_n(ω) ≤ z_n})
Stochastic Processes
Z_t1(ω), Z_t2(ω), ..., Z_tn(ω)
From which a realization is:
z_t1, z_t2, ..., z_tn
This collection of random variables is called a STOCHASTIC PROCESS
A realization of the stochastic process is called a TIME SERIES
Examples of stochastic processes
E1: Let the index set be T = {1, 2, 3} and let the space of outcomes Ω be the possible
outcomes associated with tossing one die:
Ω = {1, 2, 3, 4, 5, 6}
Define
Z(t, ω) = t + [value on die]² · t
Therefore, for a particular ω, say ω_3 = {3}, the realization or path would be (10, 20, 30).
Q1: Draw all the different realizations (six) of this stochastic process.
Q2: Think of an economically relevant variable as a stochastic process and write down an
example similar to E1 with it. Specify very clearly the sample space and the "rule" that
generates the stochastic process.
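Q1 can be answered mechanically. As a small sketch (Python is our choice here, not the slides'), the six realizations of the process in E1 are:

```python
# Sketch for Q1: enumerate all six realizations of the process in E1,
# Z(t, w) = t + (value on die)^2 * t, for t in T = {1, 2, 3}.

def realization(value, times=(1, 2, 3)):
    """Path of Z for a fixed die outcome `value`."""
    return [t + value ** 2 * t for t in times]

for v in range(1, 7):
    print(f"die = {v}: {realization(v)}")
# die = 3 gives [10, 20, 30], matching the path quoted in E1
```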
E2: A Brownian motion B = (B_t, t ∈ [0, ∞)):
• It starts at zero: B_0 = 0
• It has stationary, independent increments
• For every t > 0, B_t has a normal N(0, t) distribution
• It has continuous sample paths: "no jumps".
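A Brownian path cannot be simulated exactly; a standard discretization (a sketch under the assumption of a finite time grid, with the step size and seed our own choices) uses independent N(0, dt) increments:

```python
import random

def brownian_path(n_steps, dt=0.01, seed=0):
    """Approximate Brownian path on a grid of step dt:
    B_0 = 0, and each increment is an independent N(0, dt) draw."""
    rng = random.Random(seed)
    path = [0.0]                           # it starts at zero: B_0 = 0
    for _ in range(n_steps):
        path.append(path[-1] + rng.gauss(0.0, dt ** 0.5))
    return path

path = brownian_path(1000)                 # an approximation to (B_t, 0 <= t <= 10)
```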
Distribution of a Stochastic Process
In analogy to random variables and random vectors, we want to introduce non-random
characteristics of a stochastic process, such as its distribution, expectation, etc., and
describe its dependence structure. This is a task much more complicated than the
description of a random vector. Indeed, a non-trivial stochastic process Z = (Z_t, t ∈ T)
with an infinite index set T is an infinite-dimensional object; it can be understood as the
infinite collection of the random variables Z_t, t ∈ T. Since the values of Z are functions
on T, the distribution of Z should be defined on subsets of a certain "function space", i.e.
P(Z ∈ A), A ∈ F,
where F is a collection of suitable subsets of this space of functions. This approach is
possible, but requires advanced mathematics, so we will try something simpler.
Definition.
A process is strongly (strictly) stationary if it is an n-th-order stationary
process for any n.
Moments
E(Z_t) = μ_t = ∫ z_t f(z_t) dz_t

Var(Z_t) = σ_t² = E(Z_t − μ_t)² = ∫ (z_t − μ_t)² f(z_t) dz_t

Cov(Z_t1, Z_t2) = E[(Z_t1 − μ_t1)(Z_t2 − μ_t2)]

ρ(t1, t2) = Cov(Z_t1, Z_t2) / √(σ_t1² σ_t2²)
Moments (cont)
For a strictly stationary process: μ_t = μ and σ_t² = σ², because
F(z_t) = F(z_{t+k}) ∀t, ∀k
provided that E(Z_t) < ∞ and E(Z_t²) < ∞. Moreover,
F(z_t1, z_t2) = F(z_{t1+k}, z_{t2+k})
⇒ Cov(z_t1, z_t2) = Cov(z_{t1+k}, z_{t2+k})
⇒ ρ(t1, t2) = ρ(t1+k, t2+k)
Let t1 = t − k and t2 = t; then
ρ(t1, t2) = ρ(t − k, t) = ρ(t, t + k) = ρ_k
The correlation between any two random variables depends only on the
time difference.
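As an illustration of this fact (our own example, not from the slides): for the MA(1) process Y_t = e_t + 0.5 e_{t−1} with e_t iid N(0, 1), the lag-1 correlation is ρ_1 = 0.5 / 1.25 = 0.4 at every date t, and a long simulated path recovers it:

```python
import random

# Simulate one long path of the MA(1) process Y_t = e_t + 0.5 * e_{t-1}
rng = random.Random(1)
n = 200_000
e = [rng.gauss(0.0, 1.0) for _ in range(n + 1)]
y = [e[t] + 0.5 * e[t - 1] for t in range(1, n + 1)]

# Sample mean, variance (gamma_0) and lag-1 autocovariance (gamma_1)
ybar = sum(y) / n
gamma0 = sum((v - ybar) ** 2 for v in y) / n
gamma1 = sum((y[t] - ybar) * (y[t + 1] - ybar) for t in range(n - 1)) / n
rho1 = gamma1 / gamma0
print(rho1)     # close to the theoretical value 0.4, whichever date we start from
```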
Weak Stationarity
1. If γ_0 = Var(Z_t) < ∞, then ρ_0 = 1
2. ρ_k = ρ_{−k},
since γ_k = E[(Z_{t−k} − μ)(Z_{(t−k)+k} − μ)]
= E[(Z_{t−k} − μ)(Z_t − μ)] = γ_{−k}
Partial Autocorrelation Function (conditional correlation)
E4: Z_t = { Y_t      if t is even
          { Y_{t+1}  if t is odd

φ_kk = { 1  if k = 0
       { 0  if k ≠ 0
Dependence: Ergodicity
• See Reading 1 from Leo Breiman (1969) “Probability and Stochastic Processes: With
a View Toward Applications”
• We want to allow as much dependence as the Law of Large Numbers (LLN) allows
• Stationarity is not enough as the following example shows:
E7: Let {Ut} be a sequence of iid r.v uniformly distributed on [0, 1] and let Z be N(0,1)
independent of {Ut}.
Define Yt=Z+Ut . Then Yt is stationary (why?), but
Ȳ_n = (1/n) Σ_{t=1}^{n} Y_t  does not converge to  E(Y_t) = 1/2; instead,
Ȳ_n → Z + 1/2
The problem is that there is too much dependence in the sequence {Yt}. In fact the
correlation between Y1 and Yt is always positive for any value of t.
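E7 is easy to see numerically. In this sketch (seed and sample size are our own choices), Z is drawn once and shared by the whole path, so the time average settles at Z + 1/2 rather than at E(Y_t) = 1/2:

```python
import random

# E7: Y_t = Z + U_t, with Z ~ N(0,1) drawn ONCE and U_t iid Uniform[0,1].
rng = random.Random(42)
z = rng.gauss(0.0, 1.0)                  # one draw, shared by the whole path
n = 100_000
ybar = sum(z + rng.random() for _ in range(n)) / n   # time average of Y_t

print(ybar, z + 0.5)   # these two numbers are close; E(Y_t) = 1/2 need not be
```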
Ergodicity for the mean
• Ensemble average: z̄ = (1/m) Σ_{i=1}^{m} z_i (across m realizations at a fixed date)
• Time average: z̄ = (1/n) Σ_{t=1}^{n} z_t (along one realization)
Which estimator is the most appropriate? The ensemble average; but with a single
observed realization we must rely on the time average.
Var(z̄) = (1/n²) Var(z_t1 + z_t2 + ⋯ + z_tn)
= (γ_0/n²) [(ρ_0 + ρ_1 + ⋯ + ρ_{n−1}) + (ρ_1 + ρ_0 + ρ_1 + ⋯ + ρ_{n−2})
+ ⋯ + (ρ_{n−1} + ρ_{n−2} + ⋯ + ρ_0)]
Ergodicity for the mean (cont)
Var(z̄) = (γ_0/n) Σ_{k=−(n−1)}^{n−1} (1 − |k|/n) ρ_k

lim_{n→∞} Var(z̄) = lim_{n→∞} (γ_0/n) Σ_{k=−(n−1)}^{n−1} (1 − |k|/n) ρ_k = 0

provided that Σ_{k=0}^{∞} |γ_k| < ∞, or equivalently Σ_{k=0}^{∞} |ρ_k| < ∞;
in particular, ρ_k → 0 as k → ∞.
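The finite-n formula can be checked numerically (our own check, with hypothetical MA(1) autocovariances γ_0 = 1.25, γ_1 = 0.5) against the direct double sum Var(z̄) = (1/n²) Σ_s Σ_t γ_{s−t}:

```python
# Verify that (gamma_0/n) * sum_k (1 - |k|/n) * rho_k equals the direct
# double sum (1/n^2) * sum_{s,t} gamma_{s-t}, for hypothetical MA(1)
# autocovariances gamma_0 = 1.25, gamma_1 = 0.5, gamma_k = 0 for |k| > 1.
def gamma(k):
    return {0: 1.25, 1: 0.5}.get(abs(k), 0.0)

n = 50
direct = sum(gamma(s - t) for s in range(n) for t in range(n)) / n ** 2
formula = (gamma(0) / n) * sum(
    (1 - abs(k) / n) * (gamma(k) / gamma(0)) for k in range(-(n - 1), n)
)
print(direct, formula)   # the two expressions agree
```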
Ergodicity under Gaussianity
P1: Let {Zt} be a sequence of uncorrelated real-valued variables with zero means and unit variances,
and define the “moving average”
Y_t = Σ_{i=0}^{r} a_i Z_{t−i}
for constants a_0, a_1, ..., a_r. Show that Y is weakly stationary and find its autocovariance function.
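A numerical sketch of the answer to P1 (the coefficients below are hypothetical): with uncorrelated unit-variance Z_t, the autocovariance is c(k) = Σ_i a_i a_{i+|k|} for |k| ≤ r and 0 beyond:

```python
# Autocovariance of Y_t = sum_{i=0}^{r} a_i * Z_{t-i} when the Z_t are
# uncorrelated with zero mean and unit variance: only products of
# coefficients k apart survive the expectation.
def ma_autocovariance(a, k):
    k = abs(k)
    if k >= len(a):
        return 0.0
    return sum(a[i] * a[i + k] for i in range(len(a) - k))

a = [1.0, 0.5, 0.25]     # hypothetical a_0, a_1, a_2
print([ma_autocovariance(a, k) for k in range(4)])
```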
P2: Show that a Gaussian process is strongly stationary if and only if it is weakly stationary
P3: Let X be a stationary Gaussian process with zero mean, unit variance, and autocovariance function
c. Find the autocovariance functions of the process
log(Z_t) − log(Z_{t−1}) = log(Z_t / Z_{t−1}) = log(1 + (Z_t − Z_{t−1}) / Z_{t−1}) ≈ (Z_t − Z_{t−1}) / Z_{t−1}
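A quick numerical check of this approximation (the price levels below are hypothetical): for small relative changes the log difference and the simple return are nearly equal.

```python
import math

# Compare the simple return with the log return for a small price change.
z_prev, z_now = 100.0, 101.5                       # hypothetical levels of Z
simple_return = (z_now - z_prev) / z_prev          # (Z_t - Z_{t-1}) / Z_{t-1}
log_return = math.log(z_now) - math.log(z_prev)    # log(Z_t) - log(Z_{t-1})
print(simple_return, log_return)                   # they differ by about r^2/2
```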