# Random Vectors

## Dr. Adam Panagos

Random Signals in Communications

## Outline

1. Introduction
2. Basic Quantities
   - Definitions
3. Transformation
   - Transformation of Random Vectors
4. Covariance Matrices
   - Covariance Matrices
   - Transformation/Diagonalization
   - Examples
5. Gaussian Random Vectors
   - pdf
   - Transformed pdf


## Introduction

In this chapter we study random vectors. A random vector is simply a collection of random variables. Consider the random vectors $\mathbf{X}$ and $\mathbf{Y}$:

$$\mathbf{X} = (X_1, \ldots, X_n)^T, \qquad \mathbf{Y} = (Y_1, \ldots, Y_m)^T$$

- Each entry is itself a random variable.
- Each dimension may have an identical or a different distribution.
- The dimensions may be independent or correlated.

## Joint Distribution Function

**Definition.** The joint distribution function of the random variables $X$ and $Y$ is defined as

$$F_{XY}(x, y) = P[X \le x,\ Y \le y]$$

Properties include:

- $F_{XY}(\infty, \infty) = 1$
- $F_{XY}(-\infty, y) = F_{XY}(x, -\infty) = 0$
- $F_{XY}(\infty, y) = F_Y(y)$
- $F_{XY}(x, \infty) = F_X(x)$
- $\dfrac{\partial^2}{\partial x\, \partial y} F_{XY}(x, y) = f_{XY}(x, y)$

These definitions generalize to $N$ random variables.

## Probability Distribution Function

**Definition.** The probability distribution function (PDF) of the random vector $\mathbf{X} = (X_1, X_2, \ldots, X_n)^T$ is defined as

$$F_{\mathbf{X}}(\mathbf{x}) = P(\mathbf{X} \le \mathbf{x}) = P(X_1 \le x_1,\ X_2 \le x_2,\ \ldots,\ X_n \le x_n) = P\!\left( \bigcap_{k=1}^{n} \{X_k \le x_k\} \right)$$

- The intersection of events is still an $F$, i.e. the probability of a joint event.
- It has the same properties: $F_{\mathbf{X}}(\infty) = 1$ and $F_{\mathbf{X}}(-\infty) = 0$.

## Probability Density Function

**Definition.** The probability density function (pdf) of the random vector $\mathbf{X} = (X_1, X_2, \ldots, X_n)^T$ can be obtained from $F_{\mathbf{X}}(\mathbf{x})$ as

$$f_{\mathbf{X}}(\mathbf{x}) = \frac{\partial^n F_{\mathbf{X}}(\mathbf{x})}{\partial x_1 \cdots \partial x_n}$$

Just take the partial derivative with respect to each dimension; the derivatives must exist.


## Joint Distribution Function (Two Vectors)

**Definition.** The joint distribution function of the random vectors $\mathbf{X} = (X_1, X_2, \ldots, X_n)^T$ and $\mathbf{Y} = (Y_1, Y_2, \ldots, Y_m)^T$ is defined as

$$F_{\mathbf{X}\mathbf{Y}}(\mathbf{x}, \mathbf{y}) = P(\mathbf{X} \le \mathbf{x},\ \mathbf{Y} \le \mathbf{y})$$

## Joint Density Function

**Definition.** The joint density function of the random vectors $\mathbf{X} = (X_1, X_2, \ldots, X_n)^T$ and $\mathbf{Y} = (Y_1, Y_2, \ldots, Y_m)^T$ is defined as

$$f_{\mathbf{X},\mathbf{Y}}(\mathbf{x}, \mathbf{y}) = \frac{\partial^{n+m} F_{\mathbf{X},\mathbf{Y}}(\mathbf{x}, \mathbf{y})}{\partial x_1 \cdots \partial x_n\, \partial y_1 \cdots \partial y_m}$$

Just take the partial derivative with respect to each dimension; the derivatives must exist.


## Marginal Density Function

**Definition.** The marginal density function of the random vector $\mathbf{X} = (X_1, X_2, \ldots, X_n)^T$ can be obtained from the joint density function as

$$f_{\mathbf{X}}(\mathbf{x}) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_{\mathbf{X}\mathbf{Y}}(\mathbf{x}, \mathbf{y})\, dy_1 \cdots dy_m$$
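As a quick numerical sketch of marginalization (a NumPy illustration, not part of the original notes): for a 2-D Gaussian with independent standard-normal components, integrating the joint pdf over $y$ on a fine grid recovers the 1-D marginal.

```python
import numpy as np

# Joint pdf of independent X ~ N(0,1), Y ~ N(0,1), evaluated along y at x = x0
x0 = 0.7
y = np.linspace(-8.0, 8.0, 2001)
f_xy = np.exp(-(x0**2 + y**2) / 2.0) / (2.0 * np.pi)

# Integrate out y numerically (Riemann sum on a fine grid)
f_x = np.sum(f_xy) * (y[1] - y[0])

# The exact N(0,1) marginal evaluated at x0
f_x_exact = np.exp(-x0**2 / 2.0) / np.sqrt(2.0 * np.pi)
```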


## Mean Vector

**Definition.** The mean vector $\boldsymbol{\mu}$ of the random vector $\mathbf{X} = (X_1, X_2, \ldots, X_n)^T$ is the vector whose elements are given by

$$\mu_i = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} x_i\, f_{\mathbf{X}}(x_1, \ldots, x_n)\, dx_1 \cdots dx_n$$
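The mean-vector definition can be sanity-checked by Monte Carlo (a NumPy sketch; the distribution and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0, 0.5])                       # true mean vector
X = rng.normal(loc=mu, scale=1.0, size=(100_000, 3))  # rows are realizations of X

mu_hat = X.mean(axis=0)                               # elementwise sample means
```

With 100,000 samples the sample mean approximates $\boldsymbol{\mu}$ to within a few thousandths.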


## Jointly Gaussian (N-D)

**Definition.** The random vector $\mathbf{X} = (X_1, X_2, \ldots, X_n)^T$ is jointly Gaussian iff the joint density function has the form

$$f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{(2\pi)^{n/2} |\det(\mathbf{K})|^{1/2}} \exp\!\left( -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \mathbf{K}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right)$$

where $\mathbf{K}$ is the $n \times n$ covariance matrix of the random vector $\mathbf{X}$, with $K_{i,j} = E\{(X_i - \mu_i)(X_j - \mu_j)\}$, $\mu_i$ is the mean of $X_i$, $\mathbf{x} = (x_1, \ldots, x_n)^T$, and $\boldsymbol{\mu} = (\mu_1, \ldots, \mu_n)^T$.
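The density formula above can be implemented directly (a NumPy sketch; `gaussian_pdf` is a hypothetical helper name, not from the notes):

```python
import numpy as np

def gaussian_pdf(x, mu, K):
    """Evaluate the jointly Gaussian density f_X(x) for an n-D vector x."""
    n = len(mu)
    d = x - mu
    quad = d @ np.linalg.solve(K, d)          # (x - mu)^T K^{-1} (x - mu)
    norm = (2.0 * np.pi) ** (n / 2.0) * np.sqrt(abs(np.linalg.det(K)))
    return np.exp(-0.5 * quad) / norm

mu = np.zeros(2)
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])
peak = gaussian_pdf(mu, mu, K)                # value at the mean: 1 / (2*pi*sqrt(det K))
```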


## Uncorrelated

**Definition.** Let $\mathbf{X}$ and $\mathbf{Y}$ be real $n$-dimensional random vectors with mean vectors $\boldsymbol{\mu}_X$ and $\boldsymbol{\mu}_Y$ respectively. The random vectors are uncorrelated if

$$E\{\mathbf{X}\mathbf{Y}^T\} = \boldsymbol{\mu}_X \boldsymbol{\mu}_Y^T$$

- The outer product $\mathbf{X}\mathbf{Y}^T$ yields an $n \times n$ matrix.
- This is the natural extension of the uncorrelated definition for random variables.


## Orthogonal

**Definition.** Let $\mathbf{X}$ and $\mathbf{Y}$ be real $n$-dimensional random vectors. The random vectors are orthogonal if

$$E\{\mathbf{X}\mathbf{Y}^T\} = \mathbf{0}$$

- Here $\mathbf{0}$ is the $n \times n$ matrix of all zeros.
- This implies that the expected value of the inner product is also zero, i.e. $E\{\mathbf{X}^T\mathbf{Y}\} = 0$; the inner product yields a scalar.


## Independent

**Definition.** Let $\mathbf{X}$ and $\mathbf{Y}$ be real $n$-dimensional random vectors with joint pdf $f_{\mathbf{X}\mathbf{Y}}(\mathbf{x}, \mathbf{y})$. The random vectors are independent if

$$f_{\mathbf{X}\mathbf{Y}}(\mathbf{x}, \mathbf{y}) = f_{\mathbf{X}}(\mathbf{x})\, f_{\mathbf{Y}}(\mathbf{y})$$

- The joint pdf factors into the product of the marginal pdfs.
- This is the natural extension of the independence definition for random variables.
- Independence $\Rightarrow$ uncorrelated.
- Uncorrelated + Gaussian $\Rightarrow$ independence.
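The uncorrelated condition $E\{\mathbf{X}\mathbf{Y}^T\} = \boldsymbol{\mu}_X \boldsymbol{\mu}_Y^T$ can be checked empirically for independent vectors (a NumPy sketch with arbitrary means):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
X = rng.standard_normal((N, 2)) + np.array([1.0, 2.0])   # mean (1, 2)
Y = rng.standard_normal((N, 2)) - np.array([3.0, 0.0])   # mean (-3, 0), drawn independently of X

EXYt = X.T @ Y / N                                       # estimate of E{X Y^T} (2x2 matrix)
outer_means = np.outer(X.mean(axis=0), Y.mean(axis=0))   # mu_X mu_Y^T
```

Since independence implies uncorrelatedness, the two matrices agree up to sampling noise.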

## Transformation of Random Vectors

**Problem statement.** Assume we have a random vector $\mathbf{X} = (X_1, X_2, \ldots, X_n)^T$ with pdf $f_{\mathbf{X}}(\mathbf{x})$. We create a new vector of random variables $\mathbf{Y}$ using the transformation

$$\begin{aligned}
Y_1 &= g_1(X_1, X_2, \ldots, X_n) \\
Y_2 &= g_2(X_1, X_2, \ldots, X_n) \\
&\ \ \vdots \\
Y_n &= g_n(X_1, X_2, \ldots, X_n)
\end{aligned}$$


**Problem solution.** Solve the equations for each $x_i$ to yield

$$\begin{aligned}
x_1 &= \phi_1(y_1, y_2, \ldots, y_n) \\
x_2 &= \phi_2(y_1, y_2, \ldots, y_n) \\
&\ \ \vdots \\
x_n &= \phi_n(y_1, y_2, \ldots, y_n)
\end{aligned}$$

Denote this solution as $\mathbf{x} = (x_1, x_2, \ldots, x_n)$. In general, there could be multiple solutions, denoted as $\mathbf{x}^{(i)}$ for $i = 1, \ldots, r$.


The solution for $f_{\mathbf{Y}}(\mathbf{y})$ is given by

$$f_{\mathbf{Y}}(\mathbf{y}) = \sum_{i=1}^{r} f_{\mathbf{X}}\big(\mathbf{x}^{(i)}\big) / |J_i|$$

where $J_i$ is the Jacobian evaluated at solution $\mathbf{x}^{(i)}$ and $|\cdot|$ denotes the determinant operation. The Jacobian is defined as

$$\mathbf{J} = \begin{vmatrix}
\dfrac{\partial g_1}{\partial x_1} & \cdots & \dfrac{\partial g_1}{\partial x_n} \\
\vdots & \ddots & \vdots \\
\dfrac{\partial g_n}{\partial x_1} & \cdots & \dfrac{\partial g_n}{\partial x_n}
\end{vmatrix}$$
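As a concrete check of the formula (a sketch with a single-solution linear transform, not from the original notes): take $Y_1 = X_1 + X_2$, $Y_2 = X_1 - X_2$ with $X_1, X_2$ i.i.d. $N(0,1)$. The inverse is $x_1 = (y_1+y_2)/2$, $x_2 = (y_1-y_2)/2$, and $|J| = |\det \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}| = 2$, so $f_{\mathbf{Y}}(\mathbf{y}) = f_{\mathbf{X}}(\mathbf{x}^{(1)})/2$, which matches $Y_1, Y_2$ being i.i.d. $N(0,2)$.

```python
import numpy as np

def f_X(x1, x2):
    # Joint pdf of independent standard normals X1, X2
    return np.exp(-(x1**2 + x2**2) / 2.0) / (2.0 * np.pi)

def f_Y(y1, y2):
    # Single inverse solution x^(1), and |J| = |det [[1, 1], [1, -1]]| = 2
    x1, x2 = (y1 + y2) / 2.0, (y1 - y2) / 2.0
    return f_X(x1, x2) / 2.0

# Y1, Y2 are independent N(0, 2); compare against that density directly
y1, y2 = 1.0, -0.3
direct = np.exp(-(y1**2 + y2**2) / 4.0) / (2.0 * np.pi * 2.0)
```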


A worked example of random vector transformation is given in the accompanying notes.


## Covariance Matrices

**Definition.** Let $\mathbf{X}$ be a real-valued random vector with associated mean vector $\boldsymbol{\mu}$. The covariance matrix $\mathbf{K}$ is

$$\mathbf{K} = E\big[(\mathbf{X} - \boldsymbol{\mu})(\mathbf{X} - \boldsymbol{\mu})^T\big]$$

The covariance form is derived in the accompanying notes.
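A quick empirical check of the definition (a NumPy sketch; the particular $\mathbf{K}$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([1.0, -1.0])
K_true = np.array([[2.0, 0.8],
                   [0.8, 1.0]])
X = rng.multivariate_normal(mu, K_true, size=200_000)

d = X - X.mean(axis=0)          # subtract the (estimated) mean vector
K_hat = d.T @ d / len(X)        # sample version of E[(X - mu)(X - mu)^T]
```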


A matrix $\mathbf{A}$ is positive semidefinite if

$$\mathbf{z}^T \mathbf{A} \mathbf{z} \ge 0$$

for all real-valued vectors $\mathbf{z}$. A matrix $\mathbf{A}$ is positive definite if

$$\mathbf{z}^T \mathbf{A} \mathbf{z} > 0$$

for all real-valued vectors $\mathbf{z} \ne \mathbf{0}$. A proof that covariance matrices have this property is given in the accompanying notes.

An eigenvalue $\lambda$ of a matrix $\mathbf{A}$ is a number that satisfies the characteristic equation

$$\mathbf{A}\boldsymbol{\phi} = \lambda\boldsymbol{\phi}$$

for $\boldsymbol{\phi} \ne \mathbf{0}$, and $\boldsymbol{\phi}$ is called an eigenvector of $\mathbf{A}$. We typically normalize eigenvectors to unit length, i.e.

$$\boldsymbol{\phi}^T \boldsymbol{\phi} = ||\boldsymbol{\phi}||^2 = 1$$


- Eigenvalues and eigenvectors play a large role in adaptive signal processing, STAP, information theory, pattern recognition, classification, etc.
- One can show that the eigenvalues of a real, symmetric, positive semidefinite matrix (such as a covariance matrix) are always $\ge 0$.
- One can show that the eigenvectors of a real, symmetric (r.s.) matrix can be chosen to be mutually orthogonal.
- The eigenvalues satisfy $\det(\mathbf{A} - \lambda\mathbf{I}) = 0$.

Eigenvalue and eigenvector computations are demonstrated in the accompanying notes/Matlab examples.
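A NumPy version of the same computations (a sketch; the matrix is randomly generated): build a covariance-like matrix and confirm its eigenvalues are nonnegative and its eigenvectors orthonormal.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
K = A @ A.T                      # real, symmetric, positive semidefinite

lam, Phi = np.linalg.eigh(K)     # eigh is the routine for symmetric matrices
```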


## Transformation/Diagonalization

In this section we examine how to transform covariance matrices:

- Diagonalization of a covariance matrix
- Joint diagonalization of two covariance matrices

These techniques are useful in signal processing, classification/discrimination, etc., and are often seen in journal papers and textbooks. We are essentially going to rotate the covariance matrix to get uncoupled random variables. First we need some basic definitions and theorems.


**Definition.** Two $n \times n$ matrices $\mathbf{A}$ and $\mathbf{B}$ are similar if there exists an $n \times n$ matrix $\mathbf{T}$ with $\det(\mathbf{T}) \ne 0$ such that

$$\mathbf{T}^{-1} \mathbf{A} \mathbf{T} = \mathbf{B}$$

$\mathbf{T}$ is a transformation matrix.


**Theorem.** Let $\mathbf{M}$ be a real symmetric (r.s.) matrix with eigenvalues $\lambda_1, \ldots, \lambda_n$. Then $\mathbf{M}$ has $n$ mutually orthogonal unit eigenvectors $\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_n$.

**Theorem.** An $n \times n$ matrix $\mathbf{M}$ is similar to a diagonal matrix if and only if $\mathbf{M}$ has $n$ linearly independent eigenvectors.


**Diagonalization.** Let the matrix $\mathbf{U}$ consist of the ordered eigenvectors of the covariance matrix $\mathbf{M}$, i.e.

$$\mathbf{U} = (\boldsymbol{\phi}_1, \ldots, \boldsymbol{\phi}_n)$$

The matrix $\mathbf{M}$ is then transformed as

$$\mathbf{U}^{-1} \mathbf{M} \mathbf{U} = \boldsymbol{\Lambda}$$

where $\boldsymbol{\Lambda}$ is the diagonal matrix $\boldsymbol{\Lambda} = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$. Covariance diagonalization is demonstrated in the accompanying Matlab examples.
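In NumPy the diagonalization $\mathbf{U}^{-1}\mathbf{M}\mathbf{U} = \boldsymbol{\Lambda}$ looks like this (a sketch with a randomly generated covariance-like matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((3, 3))
M = B @ B.T                       # covariance-like r.s. matrix

lam, U = np.linalg.eigh(M)        # columns of U are ordered unit eigenvectors
Lambda = np.linalg.inv(U) @ M @ U # U^{-1} M U (here U^{-1} = U^T since U is orthogonal)
```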

**Joint diagonalization.** Consider two r.s. matrices $\mathbf{P}$ and $\mathbf{Q}$, and assume $\mathbf{P}$ is positive definite. There exists an $n \times n$ matrix $\mathbf{V}$ such that

$$\mathbf{V}^T \mathbf{P} \mathbf{V} = \mathbf{I}$$

$$\mathbf{V}^T \mathbf{Q} \mathbf{V} = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$$

where $\lambda_1, \ldots, \lambda_n$ are generalized eigenvalues satisfying

$$\mathbf{Q}\mathbf{v}_i = \lambda_i \mathbf{P}\mathbf{v}_i$$


The matrix $\mathbf{V}$ can be constructed as follows:

1. Compute the eigenvalues $\lambda_i$ of $\mathbf{P}^{-1}\mathbf{Q}$.
2. Calculate the unnormalized eigenvectors $\mathbf{v}_i'$ for $i = 1, \ldots, n$ by solving $(\mathbf{P}^{-1}\mathbf{Q} - \lambda_i\mathbf{I})\mathbf{v}_i' = \mathbf{0}$.
3. Find normalization constants $K_i$ for $i = 1, \ldots, n$ such that $\mathbf{v}_i = K_i \mathbf{v}_i'$ satisfies $\mathbf{v}_i^T \mathbf{P} \mathbf{v}_i = 1$.
4. The vectors $\mathbf{v}_i$ form the columns of $\mathbf{V}$.
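The four steps above can be sketched in NumPy as follows (illustrative random matrices; generalized-eigenvalue routines such as SciPy's `scipy.linalg.eigh(Q, P)` accomplish the same thing):

```python
import numpy as np

rng = np.random.default_rng(5)
Bq = rng.standard_normal((3, 3))
Q = Bq @ Bq.T                              # r.s.
Bp = rng.standard_normal((3, 3))
P = Bp @ Bp.T + 3.0 * np.eye(3)            # r.s. and positive definite

# Steps 1-2: eigenvalues and unnormalized eigenvectors of P^{-1} Q
lam, V0 = np.linalg.eig(np.linalg.inv(P) @ Q)
lam, V0 = lam.real, V0.real                # eigenvalues are real since P > 0 and Q is r.s.

# Step 3: rescale each column v_i so that v_i^T P v_i = 1
scale = np.sqrt(np.sum(V0 * (P @ V0), axis=0))
V = V0 / scale

# Step 4: V now jointly diagonalizes P and Q
D = V.T @ Q @ V
```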


## Examples

A joint diagonalization example is worked in the accompanying Matlab notes.


**Classification example.** We want to distinguish between different classes $\omega_i$. The random vector $\mathbf{X}$ is our observation or measurement vector. We define

$$\boldsymbol{\mu}_i = E[\mathbf{X} \mid \omega_i]$$

$$\mathbf{K}_i = E\big[(\mathbf{X} - \boldsymbol{\mu}_i)(\mathbf{X} - \boldsymbol{\mu}_i)^T \mid \omega_i\big]$$


To keep things simple, assume there are just two classes, $\omega_1$ and $\omega_2$. Given the $n$-dimensional observation $\mathbf{X}$, we reduce to the scalar feature $Y$ via

$$Y = \mathbf{a}^T \mathbf{X}$$

This operation projects $\mathbf{X}$ along the direction $\mathbf{a}$, where $||\mathbf{a}|| = 1$.


The class-conditional mean and variance of the feature are

$$\mu_i = E[Y \mid \omega_i] = \mathbf{a}^T \boldsymbol{\mu}_i$$

$$\sigma_i^2 = \mathbf{a}^T \mathbf{K}_i \mathbf{a}$$


For accurate classification, we want a gap between the features of the different classes.


Figure: Class Distributions


The choice of $\mathbf{a}$ is important: we want to maximize the distance between the means $\mu_i$ and minimize the variances $\sigma_i^2$. That is, we want to maximize the cost function

$$J(\mathbf{a}) = \frac{(\mu_1 - \mu_2)^2}{\sigma_1^2 + \sigma_2^2}$$


We can write $J(\mathbf{a})$ as

$$\begin{aligned}
J(\mathbf{a}) &= \frac{(\mu_1 - \mu_2)^2}{\sigma_1^2 + \sigma_2^2} \\
&= \frac{(\mathbf{a}^T \boldsymbol{\mu}_1 - \mathbf{a}^T \boldsymbol{\mu}_2)^2}{\mathbf{a}^T \mathbf{K}_1 \mathbf{a} + \mathbf{a}^T \mathbf{K}_2 \mathbf{a}} \\
&= \frac{\big(\mathbf{a}^T (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)\big)^2}{\mathbf{a}^T (\mathbf{K}_1 + \mathbf{K}_2) \mathbf{a}} \\
&= \frac{\mathbf{a}^T (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)^T \mathbf{a}}{\mathbf{a}^T (\mathbf{K}_1 + \mathbf{K}_2) \mathbf{a}}
\end{aligned}$$


Define the matrices

$$\mathbf{Q} = (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)^T, \qquad \mathbf{P} = \mathbf{K}_1 + \mathbf{K}_2$$

so that $J(\mathbf{a})$ can be written as

$$J(\mathbf{a}) = \frac{\mathbf{a}^T \mathbf{Q} \mathbf{a}}{\mathbf{a}^T \mathbf{P} \mathbf{a}}$$


Perform joint diagonalization with the matrix $\mathbf{V}$ and let $\mathbf{a} = \mathbf{V}\mathbf{b}$, so that $J(\mathbf{a})$ can be written as

$$J(\mathbf{a}) = \frac{\mathbf{b}^T \mathbf{V}^T \mathbf{Q} \mathbf{V} \mathbf{b}}{\mathbf{b}^T \mathbf{V}^T \mathbf{P} \mathbf{V} \mathbf{b}} = \frac{\mathbf{b}^T \boldsymbol{\Lambda} \mathbf{b}}{||\mathbf{b}||^2}$$

$J$ has now been written in a special form that makes it easy to maximize via the theorem that follows.

**Theorem.** Let $\mathbf{M}$ be a real symmetric (r.s.) matrix with largest eigenvalue $\lambda_1$. Then

$$\lambda_1 = \max_{\mathbf{x}} \frac{\mathbf{x}^T \mathbf{M} \mathbf{x}}{||\mathbf{x}||^2}$$

and the maximum is achieved for $\mathbf{x} = K\boldsymbol{\phi}_1$, where $\boldsymbol{\phi}_1$ is the unit eigenvector associated with $\lambda_1$ and $K$ is any real-valued constant.

Applying the theorem with $\mathbf{M} = \boldsymbol{\Lambda}$, the optimal $\mathbf{b}$ is the eigenvector associated with $\lambda_1$, so

$$\mathbf{a} = \mathbf{V}\mathbf{b} = \mathbf{V}\boldsymbol{\phi}_1 = \mathbf{v}_1$$


Since $\mathbf{a}$ is just the eigenvector associated with $\lambda_1$, it satisfies

$$\mathbf{P}^{-1}\mathbf{Q}\mathbf{a} = \lambda_1 \mathbf{a}$$

Substituting for $\mathbf{Q}$, we have that $\mathbf{a}$ satisfies

$$\mathbf{P}^{-1}(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)^T \mathbf{a} = \lambda_1 \mathbf{a}$$

But $(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)^T \mathbf{a}$ is just a scalar; denote it as $k$. Then

$$\mathbf{a} = \frac{k}{\lambda_1} \mathbf{P}^{-1}(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)$$

- $\mathbf{a}$ is called the Fisher linear discriminant.
- We usually normalize such that $||\mathbf{a}|| = 1$.
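A small NumPy check of this result (the class statistics below are illustrative, not from the notes): the closed form $\mathbf{a} \propto \mathbf{P}^{-1}(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)$ should score at least as high on $J$ as any random direction.

```python
import numpy as np

rng = np.random.default_rng(6)
mu1, mu2 = np.array([2.0, 0.0]), np.array([0.0, 1.0])
K1 = np.array([[1.0, 0.3], [0.3, 1.0]])
K2 = np.array([[1.5, -0.2], [-0.2, 0.5]])
P = K1 + K2

a = np.linalg.solve(P, mu1 - mu2)    # Fisher direction, a proportional to P^{-1}(mu1 - mu2)
a /= np.linalg.norm(a)               # normalize so ||a|| = 1

def J(v):
    # Separation criterion (mu1 - mu2)^2 / (sigma1^2 + sigma2^2) for direction v
    return (v @ (mu1 - mu2)) ** 2 / (v @ P @ v)

# Compare against many random unit directions
best_random = max(J(v / np.linalg.norm(v))
                  for v in rng.standard_normal((500, 2)))
```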

We have just solved for the optimal projection vector $\mathbf{a}$. Using this vector gives us the best chance of making the correct decision. This example and others like it would be covered in a detection and estimation theory course.


## Gaussian Random Vectors: pdf

Recall the pdf of a scalar Gaussian random variable:

$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^2 \right)$$

Let $\mathbf{X}$ be a random vector with independent Gaussian components. We can write its pdf as

$$f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\, \sigma_1 \cdots \sigma_n} \exp\!\left[ -\frac{1}{2} \sum_{i=1}^{n} \left( \frac{x_i - \mu_i}{\sigma_i} \right)^2 \right]$$


## Compact pdf Expression

This can be written more compactly using matrices as

$$f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{(2\pi)^{n/2} \det(\mathbf{K})^{1/2}} \exp\!\left( -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \mathbf{K}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right)$$

where

$$\mathbf{K} = \begin{pmatrix} \sigma_1^2 & & 0 \\ & \ddots & \\ 0 & & \sigma_n^2 \end{pmatrix}$$

- What is $\det(\mathbf{K})$? For this diagonal $\mathbf{K}$, $\det(\mathbf{K}) = \sigma_1^2 \cdots \sigma_n^2$.
- What is $\mathbf{K}^{-1}$? Here $\mathbf{K}^{-1} = \mathrm{diag}(1/\sigma_1^2, \ldots, 1/\sigma_n^2)$.
- It is easy to see the compact form holds for independent variables, but it also holds for arbitrary $\mathbf{K}$.
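A numerical check that the product form and the compact matrix form agree for diagonal $\mathbf{K}$ (a sketch with arbitrary parameters):

```python
import numpy as np

sig = np.array([1.0, 2.0, 0.5])    # standard deviations
mu = np.array([0.0, 1.0, -1.0])
K = np.diag(sig**2)                # det(K) = prod(sig^2), K^{-1} = diag(1/sig^2)
x = np.array([0.3, 0.7, -1.2])

# Product of n one-dimensional Gaussian pdfs
prod_form = np.prod(np.exp(-0.5 * ((x - mu) / sig) ** 2)
                    / (np.sqrt(2.0 * np.pi) * sig))

# Compact matrix form
d = x - mu
mat_form = (np.exp(-0.5 * d @ np.linalg.inv(K) @ d)
            / ((2.0 * np.pi) ** (len(x) / 2) * np.sqrt(np.linalg.det(K))))
```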

## Transformed pdf

Consider now transforming $\mathbf{X}$ using the nonsingular $n \times n$ transformation matrix $\mathbf{A}$ to yield

$$\mathbf{Y} = \mathbf{A}\mathbf{X}$$

What is the distribution of $\mathbf{Y}$? Using the transformation of random vectors, one can show that $\mathbf{Y}$ is also a Gaussian random vector with

$$E[\mathbf{Y}] = E[\mathbf{A}\mathbf{X}] = \mathbf{A}E[\mathbf{X}] = \mathbf{A}\boldsymbol{\mu} = \boldsymbol{\mu}_Y$$

$$\begin{aligned}
\mathbf{K}_Y &= E\big[(\mathbf{Y} - \boldsymbol{\mu}_Y)(\mathbf{Y} - \boldsymbol{\mu}_Y)^T\big] = E\big[(\mathbf{A}\mathbf{X} - \mathbf{A}\boldsymbol{\mu})(\mathbf{A}\mathbf{X} - \mathbf{A}\boldsymbol{\mu})^T\big] \\
&= E\big[\mathbf{A}(\mathbf{X} - \boldsymbol{\mu})(\mathbf{X} - \boldsymbol{\mu})^T \mathbf{A}^T\big] \\
&= \mathbf{A}\, E\big[(\mathbf{X} - \boldsymbol{\mu})(\mathbf{X} - \boldsymbol{\mu})^T\big]\, \mathbf{A}^T \\
&= \mathbf{A}\mathbf{K}\mathbf{A}^T
\end{aligned}$$
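The results $\boldsymbol{\mu}_Y = \mathbf{A}\boldsymbol{\mu}$ and $\mathbf{K}_Y = \mathbf{A}\mathbf{K}\mathbf{A}^T$ can be verified by simulation (a NumPy sketch with an arbitrary nonsingular $\mathbf{A}$):

```python
import numpy as np

rng = np.random.default_rng(7)
mu = np.array([0.5, -0.5])
K = np.array([[1.0, 0.4],
              [0.4, 2.0]])
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])                 # nonsingular transformation

X = rng.multivariate_normal(mu, K, size=300_000)
Y = X @ A.T                                # each row is y = A x

mu_Y = Y.mean(axis=0)
d = Y - mu_Y
K_Y = d.T @ d / len(Y)                     # sample covariance of Y
```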

The distribution for $\mathbf{Y}$ can be written compactly as

$$f_{\mathbf{Y}}(\mathbf{y}) = \frac{1}{(2\pi)^{n/2} \det(\mathbf{K}_Y)^{1/2}} \exp\!\left( -\frac{1}{2} (\mathbf{y} - \boldsymbol{\mu}_Y)^T \mathbf{K}_Y^{-1} (\mathbf{y} - \boldsymbol{\mu}_Y) \right)$$
