
Unit IV (Part B) Random Vectors

Multiple Random Variables


In probability and statistics, a single system or statistical unit often has more than one
characteristic feature, and each feature is represented by a random variable.
For example, the age, height, weight, sex and marital status of an individual in a group of people
are all attributes that vary randomly from person to person, so each is a random variable.
Thus a host of features is represented as a list of random variables, each of whose value is
unknown, either because the value has not yet occurred or because there is imperfect knowledge
of its value.
In these situations each outcome of a random experiment is described by a set of N > 1 random
variables {X_1, X_2, X_3, ..., X_N}, called a multivariate random variable. In vector form
it is described as

X [ X1 , X 2 , X 3 , .........., X N ]

This is called a random vector.


In signal processing, X is often used to represent a set of N samples of a random signal X(t) (a
random process).
Each sample point of X in the sample space has N values, so the sample space is N-dimensional.
For simplicity and convenience, consider a two-dimensional sample space in which each sample point
is an ordered pair (x, y), where x and y are values of the random variables X and Y respectively.
The random vector is then denoted as $[X, Y]^T$.
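As a purely illustrative sketch (not part of the original notes), the short Python snippet below draws a few realizations of a two-dimensional random vector $[X, Y]^T$; the mean vector and covariance matrix are made-up values chosen only for demonstration.

import numpy as np

# Minimal sketch: draw realizations of a 2-D random vector [X, Y]^T.
# The mean vector and covariance matrix are assumed, illustrative values.
rng = np.random.default_rng(0)
mean = np.array([1.0, -2.0])            # [E[X], E[Y]]
cov = np.array([[2.0, 0.6],
                [0.6, 1.0]])            # covariance matrix of [X, Y]^T

samples = rng.multivariate_normal(mean, cov, size=5)
print(samples)                          # each row is one outcome (x, y)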
Joint Distribution Function of 2-dimensional Random Vector
The joint distribution function of two random variables X and Y is defined as

$F_{X,Y}(x, y) = P\{X \le x,\ Y \le y\}$

Properties of the joint distribution function

(i) $F_{X,Y}(-\infty, -\infty) = 0$
(ii) $F_{X,Y}(-\infty, y) = 0$
(iii) $F_{X,Y}(x, -\infty) = 0$
(iv) $F_{X,Y}(\infty, \infty) = 1$
(v) $0 \le F_{X,Y}(x, y) \le 1$
(vi) $F_{X,Y}(x, y)$ is a non-decreasing function of both x and y
(vii) $P\{x_1 < X \le x_2,\ y_1 < Y \le y_2\} = F_{X,Y}(x_2, y_2) + F_{X,Y}(x_1, y_1) - F_{X,Y}(x_1, y_2) - F_{X,Y}(x_2, y_1) \ge 0$
(viii) $F_{X,Y}(x, \infty) = F_X(x)$ and $F_{X,Y}(\infty, y) = F_Y(y)$

The last property describes what are known as the marginal probability distribution functions.
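A small numerical sketch of property (viii), assuming for illustration that X and Y are independent standard normal variables (an assumption made only for this example): letting y grow very large should make the empirical joint CDF approach the marginal CDF of X.

import numpy as np
from scipy.stats import norm

# Sketch: check F_{X,Y}(x0, y -> infinity) ~ F_X(x0) by simulation.
rng = np.random.default_rng(1)
x_samples = rng.standard_normal(200_000)
y_samples = rng.standard_normal(200_000)

x0, big_y = 0.5, 1e6                    # big_y stands in for y -> +infinity
F_joint = np.mean((x_samples <= x0) & (y_samples <= big_y))
F_X = norm.cdf(x0)                      # exact marginal CDF of X at x0
print(F_joint, F_X)                     # the two values should nearly agree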


Joint probability density function


For a two-dimensional random vector it is defined as

$f_{X,Y}(x, y) = \dfrac{\partial^2 F_{X,Y}(x, y)}{\partial x\, \partial y}$

Properties of the joint probability density function

(i) $f_{X,Y}(x, y) \ge 0$
(ii) $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx\, dy = 1$
(iii) $F_{X,Y}(x, y) = \int_{-\infty}^{y}\int_{-\infty}^{x} f_{X,Y}(u, v)\, du\, dv$
(iv) $F_X(x) = \int_{-\infty}^{x}\int_{-\infty}^{\infty} f_{X,Y}(u, y)\, dy\, du$ and $F_Y(y) = \int_{-\infty}^{y}\int_{-\infty}^{\infty} f_{X,Y}(x, v)\, dx\, dv$
(v) $f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dy$ and $f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx$

(Here u and v are dummy variables of integration.)
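To illustrate properties (ii) and (v) numerically, here is a sketch that takes a bivariate Gaussian as the joint density (an assumed example with arbitrary parameters) and integrates it with SciPy; finite limits of ±8 stand in for ±∞.

import numpy as np
from scipy.stats import multivariate_normal, norm
from scipy.integrate import dblquad, quad

# Assumed example joint density: a bivariate Gaussian.
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, 0.5], [0.5, 2.0]])

# (ii) the joint density integrates to 1; dblquad expects f(y, x).
total, _ = dblquad(lambda y, x: joint.pdf([x, y]), -8, 8, lambda x: -8, lambda x: 8)
print(total)                                     # approximately 1

# (v) marginal density of X at x0 = 0.7, obtained by integrating out y.
x0 = 0.7
f_X_x0, _ = quad(lambda y: joint.pdf([x0, y]), -8, 8)
print(f_X_x0, norm.pdf(x0))                      # matches the N(0, 1) marginal of X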

Marginal density functions


Marginal density functions are the density functions of the individual variables X and Y and are defined as

$f_X(x) = \dfrac{d F_X(x)}{dx}$

$f_Y(y) = \dfrac{d F_Y(y)}{dy}$

Statistical Independence and Independent Random Variables


Two random variables are said to be statistically independent if and only if

$P\{X \le x,\ Y \le y\} = P\{X \le x\}\, P\{Y \le y\}$

From this it follows that

$F_{X,Y}(x, y) = F_X(x)\, F_Y(y)$


$f_{X,Y}(x, y) = f_X(x)\, f_Y(y)$
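The sketch below (an illustration with assumed distributions: X exponential and Y standard normal, generated independently) checks the defining factorization of the joint probability by simulation.

import numpy as np

# Sketch: for independently generated X and Y, the empirical joint probability
# P{X <= x0, Y <= y0} should be close to the product of the marginal probabilities.
rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=500_000)    # assumed: X ~ Exp(1)
y = rng.standard_normal(500_000)                # assumed: Y ~ N(0, 1), independent of X

x0, y0 = 1.0, 0.3
joint_prob = np.mean((x <= x0) & (y <= y0))     # estimate of P{X <= x0, Y <= y0}
product    = np.mean(x <= x0) * np.mean(y <= y0)
print(joint_prob, product)                      # the two estimates should be close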

Expected value of a function of Random Variables


If g(X_1, X_2, X_3, ..., X_N) is a function of N random variables, then the expected value of the
function is given by

$E[g(X_1, X_2, \ldots, X_N)] = \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} g(x_1, x_2, \ldots, x_N)\, f_{X_1, X_2, \ldots, X_N}(x_1, x_2, \ldots, x_N)\, dx_1\, dx_2 \cdots dx_N$

For a function of a two-dimensional random vector the expectation or mean is given by

$E[g(X, Y)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x, y)\, f_{X,Y}(x, y)\, dx\, dy$

Assume $g(X_1, X_2, X_3, \ldots, X_N) = a_1 X_1 + a_2 X_2 + \cdots + a_N X_N$. Then

$E\left[\sum_{i=1}^{N} a_i X_i\right] = \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} (a_1 x_1 + a_2 x_2 + \cdots + a_N x_N)\, f_{X_1, X_2, \ldots, X_N}(x_1, x_2, \ldots, x_N)\, dx_1\, dx_2 \cdots dx_N$

$E\left[\sum_{i=1}^{N} a_i X_i\right] = \sum_{i=1}^{N} a_i\, E[X_i]$

That is, the expected value of a weighted sum of random variables equals the weighted sum of their
expected values, whether or not the variables are independent.

Joint moments about the origin

Let $g(X, Y) = X^n Y^k$.

Then the joint moments of two random variables about the origin are given by

$m_{nk} = E[X^n Y^k] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^n y^k f_{X,Y}(x, y)\, dx\, dy$

where n = 0, 1, 2, ... and k = 0, 1, 2, 3, ...

The sum (n + k) is called the order of the joint moment about the origin.
There is one zeroth-order joint moment about the origin, namely

$m_{00} = E[X^0 Y^0] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx\, dy = 1$

There are two first-order joint moments about the origin, $m_{10}$ and $m_{01}$, given below:


 
m10   E[ X 1Y 0 ]   xf
 
X ,Y ( x, y) dx dy  E[ X ]

 
m01   E[ X Y ]   yf ( x, y ) dx dy  E[Y ]
0 1
X ,Y
 

There are three 2nd order joint moments m20, m02, m11as given below
 
m20   E[ X Y ]   x f X , Y ( x, y) dx dy  E[ X 2 ]
2 0 2

 

 
m02   E[ XY 2 ]  y f X , Y ( x, y) dx dy  E[Y 2 ]
2

 

 
m11   E[ X Y ]    xy f ( x, y) dx dy  E[ XY ]
1 1
X ,Y
 

m20 denotes mean square value of X, m02 gives the mean square value of Y and m11 is called
correlation of X and Y and denoted as RXY.
The correlation between two random variables X and Y is defined as
 
RXY  E[ XY ]    xy f X ,Y ( x, y) dx dy
 

If X and Y are statistically independent, then $R_{XY} = E[XY] = E[X]\, E[Y]$.

Whenever $E[XY] = E[X]\, E[Y]$ holds, the two random variables X and Y are called uncorrelated.
Statistical independence is a sufficient condition for two random variables to be uncorrelated.
However, the converse is not true: if two random variables are uncorrelated, they may or may
not be independent.
If two random variables are jointly Gaussian and uncorrelated, however, then they are also
statistically independent.
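The classic counterexample can be checked numerically. In the sketch below (an illustration, not part of the original notes), X is uniform on (-1, 1) and Y = X^2: the pair is uncorrelated, yet Y is completely determined by X, so the two are certainly not independent.

import numpy as np

# Sketch: uncorrelated does not imply independent.
rng = np.random.default_rng(5)
x = rng.uniform(-1.0, 1.0, 1_000_000)
y = x**2                                          # Y is a deterministic function of X

print(np.mean(x * y), np.mean(x) * np.mean(y))    # E[XY] ~ E[X]E[Y] ~ 0: uncorrelated
# ... but the joint probability does not factor, so X and Y are dependent:
p_joint   = np.mean((np.abs(x) <= 0.5) & (y <= 0.25))        # ~ 0.5
p_product = np.mean(np.abs(x) <= 0.5) * np.mean(y <= 0.25)   # ~ 0.25
print(p_joint, p_product)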
Joint Central Moments of Two Random Variables
The joint central moments of two random variables X and Y are defined as
 
E[( X  mX ) n (Y  mY ) k ]    (x  m
 
X ) n ( y  mY ) k f X , Y ( x, y) dx dy

Where mX and mY are mean values of X and Y respectively.


Zeroth order joint central moments µ00 is 1


The first-order joint central moments are $\mu_{10} = \mu_{01} = 0$.

There are three second-order joint central moments, $\mu_{20}$, $\mu_{02}$ and $\mu_{11}$, defined as

$\mu_{20} = E[(X - m_X)^2 (Y - m_Y)^0] = E[(X - m_X)^2] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x - m_X)^2 f_{X,Y}(x, y)\, dx\, dy = \sigma_X^2 = \operatorname{Var}(X)$

Similarly,

$\mu_{02} = E[(Y - m_Y)^2] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (y - m_Y)^2 f_{X,Y}(x, y)\, dx\, dy = \sigma_Y^2 = \operatorname{Var}(Y)$

and

$\mu_{11} = E[(X - m_X)(Y - m_Y)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x - m_X)(y - m_Y) f_{X,Y}(x, y)\, dx\, dy$

$\mu_{11}$ is known as the covariance between X and Y and is denoted $C_{XY}$. Thus the covariance $C_{XY}$
is given by

$C_{XY} = E[(X - m_X)(Y - m_Y)] = E[XY] - m_X m_Y$

$C_{XY} = R_{XY} - m_X m_Y$
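A short sketch (with assumed, arbitrary parameters) verifying the identity $C_{XY} = R_{XY} - m_X m_Y$ on simulated data and comparing it with NumPy's covariance estimate:

import numpy as np

# Sketch: C_XY = E[XY] - m_X m_Y, compared against np.cov.
rng = np.random.default_rng(6)
xy = rng.multivariate_normal([3.0, -1.0], [[2.0, 1.2], [1.2, 1.5]], size=1_000_000)
x, y = xy[:, 0], xy[:, 1]

c_manual = np.mean(x * y) - x.mean() * y.mean()   # R_XY - m_X m_Y
c_numpy  = np.cov(x, y, bias=True)[0, 1]          # biased sample covariance (divides by N)
print(c_manual, c_numpy)                          # both approximately 1.2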

Various special cases arise:

(i) If $C_{XY} = 0$, the two random variables X and Y are uncorrelated.
(ii) If $C_{XY} = -m_X m_Y$ (equivalently $R_{XY} = 0$), the two random variables are said to be orthogonal.
(iii) If the two random variables have zero mean, then $C_{XY} = R_{XY}$.

The normalized covariance is called the correlation coefficient ρ between X and Y and is given by

$\rho = \dfrac{C_{XY}}{\sigma_X \sigma_Y} = \dfrac{E[(X - m_X)(Y - m_Y)]}{\sqrt{\sigma_X^2\, \sigma_Y^2}}$

ρ can take any value from -1 to +1, and ρ = 0 implies that the two random variables are uncorrelated.
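Finally, a sketch (again with made-up parameters) that computes ρ directly from its definition and compares it with NumPy's built-in estimate:

import numpy as np

# Sketch: correlation coefficient rho = C_XY / (sigma_X * sigma_Y).
rng = np.random.default_rng(7)
xy = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.9], [0.9, 4.0]], size=1_000_000)
x, y = xy[:, 0], xy[:, 1]

c_xy = np.mean((x - x.mean()) * (y - y.mean()))   # covariance C_XY
rho_manual = c_xy / (x.std() * y.std())           # definition of rho
rho_numpy  = np.corrcoef(x, y)[0, 1]              # NumPy's estimate
print(rho_manual, rho_numpy)                      # both approximately 0.9 / (1 * 2) = 0.45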
