
Gaussian Processes, Multivariate Probability Density Function, Transforms

A real-valued random process X(t) is called a Gaussian process if all of its nth-order joint probability density functions are n-variate Gaussian pdfs. The nth-order joint probability density function of a Gaussian vector

X = [X1 X2 ... Xn]^T = [X(t1) X(t2) ... X(tn)]^T

is given by
$$ p(\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^n |\mathbf{C}_x|}} \exp\!\left[ -\tfrac{1}{2} (\mathbf{x} - \mathbf{m}_x)^T \mathbf{C}_x^{-1} (\mathbf{x} - \mathbf{m}_x) \right] $$

where

x = [x1 x2 ... xn]^T

m_x = E[X] = [m_x(t1) m_x(t2) ... m_x(tn)]^T = mean vector

C_x = E[(X - m_x)(X - m_x)^T] = covariance matrix

|C_x| = determinant of matrix C_x


If the random variables X(t1), X(t2), ..., X(tn) are uncorrelated, then the values of the autocovariance function are given by

$$ C_x(t_i, t_j) = E[(X(t_i) - m_x(t_i))(X(t_j) - m_x(t_j))] = \begin{cases} \sigma_x^2(t_i), & i = j \\ 0, & i \neq j \end{cases} $$

Thus, C_x is a diagonal matrix, and from this it follows that

$$ \tfrac{1}{2} (\mathbf{x} - \mathbf{m}_x)^T \mathbf{C}_x^{-1} (\mathbf{x} - \mathbf{m}_x) = \frac{1}{2} \sum_{k=1}^{n} \frac{(x_k - m_x(t_k))^2}{\sigma_x^2(t_k)} $$

and

$$ |\mathbf{C}_x| = \prod_{k=1}^{n} \sigma_x^2(t_k) $$

and the pdf can be factored into a product of n univariate Gaussian pdfs:

$$ p(\mathbf{x}) = \prod_{k=1}^{n} \frac{1}{\sqrt{2\pi}\, \sigma_x(t_k)}\, e^{-(x_k - m_x(t_k))^2 / 2\sigma_x^2(t_k)} $$

=> If random variables X(t1), X(t2), ..., X(tn) from a Gaussian process are uncorrelated, then they are also statistically independent.
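As a quick numerical check of this factorization (a minimal sketch, not part of the original notes; the means, variances, and evaluation point below are arbitrary assumed values), the n-variate Gaussian pdf with a diagonal covariance matrix equals the product of the univariate Gaussian pdfs:

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Assumed example values: mean and variance at three sample instants t1, t2, t3
m = np.array([1.0, -0.5, 2.0])      # mean vector m_x
var = np.array([0.5, 2.0, 1.5])     # variances sigma_x^2(t_k)
C = np.diag(var)                    # diagonal covariance (uncorrelated samples)

x = np.array([0.7, 0.1, 2.4])       # arbitrary evaluation point

joint = multivariate_normal(mean=m, cov=C).pdf(x)
product = np.prod(norm.pdf(x, loc=m, scale=np.sqrt(var)))
print(joint, product)               # the two values agree up to floating-point error
```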
The n-variate Gaussian pdf is completely determined by its mean vector and covariance matrix. If a Gaussian process is wide-sense stationary, the mean m_x(t) and the autocovariance C_x(t, t+τ) do not depend on the time t. Thus the pdf of the process, and the statistical properties derived from the pdf, are invariant over time.

=> If a Gaussian process is wide-sense stationary, then the process is also strictly stationary.

Besides this, it can also be shown that if a Gaussian process is wide-sense stationary, then the process is also ergodic.

Another extremely important property of a Gaussian process is that any linear operation on a Gaussian process X(t) produces another Gaussian process.

=> linear filtering of Gaussian signals retains their Gaussianity
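A small simulation sketch of this property (not from the original notes; the filter taps, noise variance, and sample size are arbitrary assumptions): white Gaussian noise passed through an FIR filter stays Gaussian, with output variance σ² Σ h_k²:

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
sigma = 1.5
x = rng.normal(0.0, sigma, size=200_000)   # white Gaussian input process (sampled)
h = np.array([0.4, 0.3, 0.2, 0.1])         # arbitrary FIR filter taps
y = np.convolve(x, h, mode="valid")        # linear filtering of the Gaussian signal

# Theory: output samples are Gaussian with variance sigma^2 * sum(h_k^2)
print(np.var(y), sigma**2 * np.sum(h**2))  # sample vs. theoretical variance
print(skew(y), kurtosis(y))                # both close to 0, as expected for a Gaussian
```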


Example 1:

Let us consider a two-dimensional case, i.e. n = 2:

$$ \mathbf{x} = [x_1\ x_2]^T, \qquad \mathbf{m}_x = [2\ \ 1]^T, \qquad \mathbf{C}_x = \begin{bmatrix} 6 & 3 \\ 3 & 4 \end{bmatrix} $$

Then

$$ \mathbf{C}_x^{-1} = \begin{bmatrix} \frac{4}{15} & -\frac{1}{5} \\ -\frac{1}{5} & \frac{2}{5} \end{bmatrix} $$

and

$$ |\mathbf{C}_x| = 15 $$

and further

$$ (\mathbf{x} - \mathbf{m}_x)^T \mathbf{C}_x^{-1} (\mathbf{x} - \mathbf{m}_x) = \left[ \tfrac{4}{15}(x_1 - 2) - \tfrac{1}{5}(x_2 - 1),\ -\tfrac{1}{5}(x_1 - 2) + \tfrac{2}{5}(x_2 - 1) \right] \begin{bmatrix} x_1 - 2 \\ x_2 - 1 \end{bmatrix} $$

$$ = \left[ \tfrac{4x_1}{15} - \tfrac{x_2}{5} - \tfrac{1}{3},\ -\tfrac{x_1}{5} + \tfrac{2x_2}{5} \right] \begin{bmatrix} x_1 - 2 \\ x_2 - 1 \end{bmatrix} $$

$$ = \left( \tfrac{4x_1}{15} - \tfrac{x_2}{5} - \tfrac{1}{3} \right)(x_1 - 2) + \left( -\tfrac{x_1}{5} + \tfrac{2x_2}{5} \right)(x_2 - 1) $$

$$ = \frac{2\left( 2x_1^2 - 3x_1 x_2 + 3x_2^2 - 5x_1 + 5 \right)}{15} $$

Thus, the pdf is given as

$$ p(\mathbf{x}) = \frac{1}{2\pi\sqrt{15}} \exp\!\left[ \left( 5x_1 - 2x_1^2 + 3x_1 x_2 - 3x_2^2 - 5 \right)/15 \right] $$
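The matrix inverse, determinant, and resulting pdf can be checked numerically (a minimal sketch; the test point is an arbitrary assumed value):

```python
import numpy as np
from scipy.stats import multivariate_normal

m = np.array([2.0, 1.0])
C = np.array([[6.0, 3.0],
              [3.0, 4.0]])

print(np.linalg.inv(C))    # [[ 4/15, -1/5], [-1/5, 2/5]]
print(np.linalg.det(C))    # 15.0

def pdf_closed_form(x1, x2):
    # exponent derived above: (5*x1 - 2*x1^2 + 3*x1*x2 - 3*x2^2 - 5)/15
    return np.exp((5*x1 - 2*x1**2 + 3*x1*x2 - 3*x2**2 - 5) / 15) / (2*np.pi*np.sqrt(15))

x = np.array([1.0, 2.5])   # arbitrary test point
print(pdf_closed_form(*x), multivariate_normal(mean=m, cov=C).pdf(x))  # should match
```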
Example 2:

Let us consider another two-dimensional case, i.e. n = 2:

$$ \mathbf{x} = [x_1\ x_2]^T, \qquad \mathbf{m}_x = [2\ \ 1]^T, \qquad \mathbf{C}_x = \begin{bmatrix} 6 & 0 \\ 0 & 4 \end{bmatrix} $$

Then

$$ \mathbf{C}_x^{-1} = \begin{bmatrix} \frac{1}{6} & 0 \\ 0 & \frac{1}{4} \end{bmatrix} $$

and

$$ |\mathbf{C}_x| = 24 $$

and further

$$ (\mathbf{x} - \mathbf{m}_x)^T \mathbf{C}_x^{-1} (\mathbf{x} - \mathbf{m}_x) = \left[ \frac{x_1 - 2}{6},\ \frac{x_2 - 1}{4} \right] \begin{bmatrix} x_1 - 2 \\ x_2 - 1 \end{bmatrix} $$

$$ = \left( \frac{x_1}{6} - \frac{1}{3} \right)(x_1 - 2) + \left( \frac{x_2}{4} - \frac{1}{4} \right)(x_2 - 1) $$

$$ = \frac{2x_1^2 - 8x_1 + 3x_2^2 - 6x_2 + 11}{12} $$

Thus, the pdf is given as

exp(8 x1 - 2 x12 + 6 x 2 - 3 x 22 - 11)/24


1
p( x ) =
2  24

1 -( x1- 2 )2 /12 1 -( x2 -1 )2 /8
= e e
2 6 2 4
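Because C_x is diagonal here, the joint pdf factors into the two univariate Gaussians above. A minimal numerical check (the test point is an arbitrary assumed value):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

m = np.array([2.0, 1.0])
C = np.diag([6.0, 4.0])              # diagonal covariance => independent components

x = np.array([3.0, -0.5])            # arbitrary test point
joint = multivariate_normal(mean=m, cov=C).pdf(x)
factored = norm.pdf(x[0], loc=2.0, scale=np.sqrt(6.0)) * norm.pdf(x[1], loc=1.0, scale=np.sqrt(4.0))
print(joint, factored)               # identical: the joint pdf factors
```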
Example 3: Randomly phased sinusoid with AWGN

A random signal x(t) is given by

$$ x(t) = A\cos(\omega_0 t + \theta) $$

where A and ω0 are constants and the phase θ is a uniformly distributed random variable with pdf

$$ p(\theta) = \frac{1}{2\pi}, \qquad 0 \le \theta \le 2\pi $$

Let

$$ y(t) = x(t) + n(t) $$

where n(t) is a zero-mean white Gaussian process with variance σ². Find the joint pdf of Y1, Y2, ..., Yn, where Yi = y(ti).

Let us consider the case of a given value of the phase θ, in which case x(t) is a deterministic signal. Then

Y = [Y1, Y2, ..., Yn]^T

is a Gaussian random vector with (conditional) mean

$$ \mathbf{m}_x = [A\cos(\omega_0 t_1 + \theta),\ A\cos(\omega_0 t_2 + \theta),\ \dots,\ A\cos(\omega_0 t_n + \theta)]^T $$


Since n(t) is white noise, the samples Y1, Y2, ..., Yn are uncorrelated, and the conditional pdf of Y is given by

$$ p(\mathbf{y} \mid \theta) = \prod_{k=1}^{n} \frac{1}{\sqrt{2\pi}\, \sigma}\, e^{-(y_k - A\cos(\omega_0 t_k + \theta))^2 / 2\sigma^2} $$

$$ = \frac{1}{(2\pi)^{n/2} \sigma^n} \exp\!\left[ -\frac{1}{2\sigma^2} \sum_{k=1}^{n} \left( y_k - A\cos(\omega_0 t_k + \theta) \right)^2 \right] $$

$$ = \frac{1}{(2\pi)^{n/2} \sigma^n} \exp\!\left[ -\frac{1}{2\sigma^2} \left( \sum_{k=1}^{n} y_k^2 - 2A \sum_{k=1}^{n} y_k \cos(\omega_0 t_k + \theta) + A^2 \sum_{k=1}^{n} \cos^2(\omega_0 t_k + \theta) \right) \right] $$

To find the unconditional pdf of Y we should evaluate the integral

$$ p(\mathbf{y}) = \int_{-\infty}^{\infty} p(\mathbf{y}, \theta)\, d\theta = \int_{-\infty}^{\infty} p(\theta)\, p(\mathbf{y} \mid \theta)\, d\theta = \frac{1}{2\pi} \int_{0}^{2\pi} p(\mathbf{y} \mid \theta)\, d\theta $$

$$ = \frac{1}{(2\pi)^{1+n/2} \sigma^n} \int_{0}^{2\pi} \exp\!\left[ -\frac{1}{2\sigma^2} \left( \sum_{k=1}^{n} y_k^2 - 2A \sum_{k=1}^{n} y_k \cos(\omega_0 t_k + \theta) + A^2 \sum_{k=1}^{n} \cos^2(\omega_0 t_k + \theta) \right) \right] d\theta $$
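In general this integral has no simple closed form, but it can be evaluated numerically for a given observation vector. A minimal sketch (the amplitude, frequency, noise standard deviation, sample times, and observed vector below are arbitrary assumed values):

```python
import numpy as np
from scipy.integrate import quad

A, w0, sigma = 1.0, 2*np.pi*5.0, 0.8      # assumed amplitude, angular frequency, noise std
t = np.array([0.00, 0.01, 0.02, 0.03])    # assumed sampling instants t_k
y = np.array([0.9, 0.4, -0.6, -1.1])      # assumed observed vector y

def p_y_given_theta(theta):
    # conditional pdf p(y | theta): product of univariate Gaussians
    mean = A * np.cos(w0 * t + theta)
    n = len(y)
    return np.exp(-np.sum((y - mean)**2) / (2*sigma**2)) / ((2*np.pi)**(n/2) * sigma**n)

# unconditional pdf: p(y) = (1/2pi) * integral over theta from 0 to 2pi of p(y|theta)
integral, _ = quad(p_y_given_theta, 0.0, 2*np.pi)
p_y = integral / (2*np.pi)
print(p_y)
```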
Let us consider a complex random variable Z = X + jY, where X and Y are independent Gaussian variables with the same variance σ². Then

$$ m_z = m_x + j m_y $$

$$ \sigma_z^2 = E[|Z - m_z|^2] = E[(X - m_x)^2 + (Y - m_y)^2] = \sigma_x^2 + \sigma_y^2 = 2\sigma^2 $$

The second-order joint probability density function of X and Y is the bivariate Gaussian pdf

$$ p_{XY}(x, y) = p_X(x)\, p_Y(y) = \frac{1}{2\pi\sigma^2} \exp\!\left[ -\frac{(x - m_x)^2 + (y - m_y)^2}{2\sigma^2} \right] $$

$$ = \frac{1}{\pi \sigma_z^2} \exp\!\left[ -|z - m_z|^2 / \sigma_z^2 \right] = p_Z(z) $$

We have found the pdf of a complex random variable Z.
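A minimal numerical check of this identity (the means, variance, and test point are arbitrary assumed values):

```python
import numpy as np
from scipy.stats import norm

sigma = 0.7                       # assumed common standard deviation of X and Y
mx, my = 1.0, -2.0                # assumed means
mz = mx + 1j*my
sigma_z2 = 2 * sigma**2           # sigma_z^2 = 2 sigma^2

z = 1.3 - 1.4j                    # arbitrary test point z = x + jy
x, y = z.real, z.imag

p_xy = norm.pdf(x, mx, sigma) * norm.pdf(y, my, sigma)
p_z = np.exp(-abs(z - mz)**2 / sigma_z2) / (np.pi * sigma_z2)
print(p_xy, p_z)                  # identical
```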


Let Z = X + jY be a complex random vector from a complex-valued random process Z(t),

Z = [Z(t1) Z(t2) ... Z(tn)]^T = [X1 X2 ... Xn]^T + j[Y1 Y2 ... Yn]^T

where X and Y are statistically independent and jointly distributed according to a real multivariate Gaussian distribution, and the covariance matrices of X and Y fulfill the conditions

$$ \mathbf{C}_x = \mathbf{C}_y, \qquad \mathbf{C}_{xy} = \mathbf{C}_{yx}^T = \mathbf{0} $$

Under these conditions the nth-order joint probability density function of a complex-valued Gaussian vector Z is given by

$$ p_Z(\mathbf{z}) = \frac{1}{\pi^n |\mathbf{C}_z|} \exp\!\left[ -(\mathbf{z} - \mathbf{m}_z)^H \mathbf{C}_z^{-1} (\mathbf{z} - \mathbf{m}_z) \right] $$

where

z = [z1 z2 ... zn]^T

m_z = E[Z] = [m_z(t1) m_z(t2) ... m_z(tn)]^T = mean vector

C_z = E[(Z - m_z)(Z - m_z)^H] = 2C_x = covariance matrix

|C_z| = determinant of matrix C_z

[.]^H denotes the Hermitian operation, i.e. transposition and complex conjugation of a matrix.
By using basic equations of matrix algebra, it can easily be seen that

$$ |\mathbf{C}_z| = 2^n |\mathbf{C}_x| $$

and

$$ \mathbf{C}_z^{-1} = \tfrac{1}{2} \mathbf{C}_x^{-1} $$
And further

$$ (\mathbf{z} - \mathbf{m}_z)^H \mathbf{C}_z^{-1} (\mathbf{z} - \mathbf{m}_z) = (\mathbf{x} - j\mathbf{y} - \mathbf{m}_x + j\mathbf{m}_y)^T\, \tfrac{1}{2} \mathbf{C}_x^{-1}\, (\mathbf{x} + j\mathbf{y} - \mathbf{m}_x - j\mathbf{m}_y) $$

$$ = \tfrac{1}{2} \left[ (\mathbf{x} - \mathbf{m}_x)^T \mathbf{C}_x^{-1} - j(\mathbf{y} - \mathbf{m}_y)^T \mathbf{C}_x^{-1} \right] \left[ (\mathbf{x} - \mathbf{m}_x) + j(\mathbf{y} - \mathbf{m}_y) \right] $$

$$ = \tfrac{1}{2} \left[ (\mathbf{x} - \mathbf{m}_x)^T \mathbf{C}_x^{-1} (\mathbf{x} - \mathbf{m}_x) + (\mathbf{y} - \mathbf{m}_y)^T \mathbf{C}_x^{-1} (\mathbf{y} - \mathbf{m}_y) \right] $$

where the imaginary cross terms cancel in the last step because C_x^{-1} is symmetric.
The pdf of the complex random vector Z is equivalent to the joint probability density function of the random vectors X and Y, or, equivalently, to the (2n)th-order pdf of the random vector U

U = [X^T Y^T]^T = [X(t1), X(t2), ..., X(tn), Y(t1), Y(t2), ..., Y(tn)]^T

with

m_u = E[U] = [m_x^T m_y^T]^T

and

$$ \mathbf{C}_u = E[(\mathbf{U} - \mathbf{m}_u)(\mathbf{U} - \mathbf{m}_u)^T] = \begin{bmatrix} E[(\mathbf{X} - \mathbf{m}_x)(\mathbf{X} - \mathbf{m}_x)^T] & E[(\mathbf{X} - \mathbf{m}_x)(\mathbf{Y} - \mathbf{m}_y)^T] \\ E[(\mathbf{Y} - \mathbf{m}_y)(\mathbf{X} - \mathbf{m}_x)^T] & E[(\mathbf{Y} - \mathbf{m}_y)(\mathbf{Y} - \mathbf{m}_y)^T] \end{bmatrix} = \begin{bmatrix} \mathbf{C}_x & \mathbf{C}_{xy} \\ \mathbf{C}_{yx} & \mathbf{C}_y \end{bmatrix} = \begin{bmatrix} \mathbf{C}_x & \mathbf{0} \\ \mathbf{0} & \mathbf{C}_x \end{bmatrix} $$

From this it follows

$$ |\mathbf{C}_u| = |\mathbf{C}_x|^2 $$

and

$$ \mathbf{C}_u^{-1} = \begin{bmatrix} \mathbf{C}_x^{-1} & \mathbf{0} \\ \mathbf{0} & \mathbf{C}_x^{-1} \end{bmatrix} $$
And further

$$ (\mathbf{u} - \mathbf{m}_u)^T \mathbf{C}_u^{-1} (\mathbf{u} - \mathbf{m}_u) = \left[ (\mathbf{x} - \mathbf{m}_x)^T,\ (\mathbf{y} - \mathbf{m}_y)^T \right] \begin{bmatrix} \mathbf{C}_x^{-1} & \mathbf{0} \\ \mathbf{0} & \mathbf{C}_x^{-1} \end{bmatrix} \begin{bmatrix} \mathbf{x} - \mathbf{m}_x \\ \mathbf{y} - \mathbf{m}_y \end{bmatrix} $$

$$ = \left[ (\mathbf{x} - \mathbf{m}_x)^T \mathbf{C}_x^{-1},\ (\mathbf{y} - \mathbf{m}_y)^T \mathbf{C}_x^{-1} \right] \begin{bmatrix} \mathbf{x} - \mathbf{m}_x \\ \mathbf{y} - \mathbf{m}_y \end{bmatrix} $$

$$ = (\mathbf{x} - \mathbf{m}_x)^T \mathbf{C}_x^{-1} (\mathbf{x} - \mathbf{m}_x) + (\mathbf{y} - \mathbf{m}_y)^T \mathbf{C}_x^{-1} (\mathbf{y} - \mathbf{m}_y) $$

Thus

$$ p(\mathbf{x}, \mathbf{y}) = p(\mathbf{u}) = \frac{1}{\sqrt{(2\pi)^{2n} |\mathbf{C}_u|}} \exp\!\left[ -\tfrac{1}{2} (\mathbf{u} - \mathbf{m}_u)^T \mathbf{C}_u^{-1} (\mathbf{u} - \mathbf{m}_u) \right] $$

$$ = \frac{1}{(2\pi)^n |\mathbf{C}_x|} \exp\!\left[ -\tfrac{1}{2} \left( (\mathbf{x} - \mathbf{m}_x)^T \mathbf{C}_x^{-1} (\mathbf{x} - \mathbf{m}_x) + (\mathbf{y} - \mathbf{m}_y)^T \mathbf{C}_x^{-1} (\mathbf{y} - \mathbf{m}_y) \right) \right] $$

$$ = \frac{1}{\pi^n |\mathbf{C}_z|} \exp\!\left[ -(\mathbf{z} - \mathbf{m}_z)^H \mathbf{C}_z^{-1} (\mathbf{z} - \mathbf{m}_z) \right] = p_Z(\mathbf{z}) $$
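This equivalence is easy to verify numerically for a small n. A minimal sketch (the covariance matrix, means, and test point below are arbitrary assumed values) compares the real (2n)-dimensional pdf of U with the complex Gaussian pdf of Z for n = 2:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Assumed example: n = 2, X and Y independent with the same real covariance C_x
Cx = np.array([[2.0, 0.5],
               [0.5, 1.0]])
mx = np.array([1.0, 0.0])
my = np.array([-1.0, 2.0])
mz = mx + 1j*my
Cz = 2 * Cx                                   # C_z = 2 C_x
n = 2

x = np.array([1.5, -0.3])                     # arbitrary test point
y = np.array([-0.8, 2.4])
z = x + 1j*y

# Real (2n)-dimensional pdf of U = [X^T Y^T]^T
Cu = np.block([[Cx, np.zeros((n, n))], [np.zeros((n, n)), Cx]])
p_u = multivariate_normal(mean=np.concatenate([mx, my]), cov=Cu).pdf(np.concatenate([x, y]))

# Complex Gaussian pdf p_Z(z) = exp(-(z - m_z)^H C_z^{-1} (z - m_z)) / (pi^n |C_z|)
d = z - mz
quad_form = np.real(d.conj() @ np.linalg.inv(Cz) @ d)
p_z = np.exp(-quad_form) / (np.pi**n * np.linalg.det(Cz))
print(p_u, p_z)                               # the two values agree
```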

Complex-valued Gaussian processes also have the important property that any linear operation on the process produces another Gaussian process.
