Introduction to Probability and Stochastic Processes - Part I


Lecture 2
Henrik Vie Christensen
vie@control.auc.dk

Department of Control Engineering


Institute of Electronic Systems
Aalborg University
Denmark

Slides originally by: Line Ørtoft Endelt

Introduction to Probability and Stochastic Processes - Part I – p. 1/26


From Experiment to Probability
- Experiment, E
- Sample space, S (containing the outcomes of the experiment)
- Events and/or a random variable are defined on the sample space
- A probability measure is found/chosen

A Probabilistic Model contains:
- Sample space.
- Probability measure.
- Class of sets forming the domain of the probability measure.

Example
The joint density function of X and Y is

f_{X,Y}(x, y) = axy, \quad 1 \le x \le 3,\ 2 \le y \le 4
f_{X,Y}(x, y) = 0 \quad \text{elsewhere}

Find a:

1 = \int_2^4 \int_1^3 axy \, dx \, dy
  = a \int_2^4 y \left[ \frac{x^2}{2} \right]_1^3 dy
  = a \int_2^4 4y \, dy = 24a

so a = \frac{1}{24}.
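As a sanity check (not part of the original slides), the normalization can be verified numerically with a midpoint Riemann sum; the function name and grid size below are illustrative:

```python
# Verify that a = 1/24 normalizes f(x, y) = a*x*y over 1 <= x <= 3, 2 <= y <= 4.
# The midpoint rule is exact for bilinear integrands like x*y.

def integral_axy(a: float, n: int = 200) -> float:
    """Midpoint-rule approximation of the double integral of a*x*y."""
    dx = (3 - 1) / n
    dy = (4 - 2) / n
    total = 0.0
    for i in range(n):
        x = 1 + (i + 0.5) * dx
        for j in range(n):
            y = 2 + (j + 0.5) * dy
            total += a * x * y * dx * dy
    return total

print(integral_axy(1 / 24))  # close to 1.0
```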

Example (continued)
The marginal pdf of X:

f_X(x) = \frac{1}{24} \int_2^4 xy \, dy = \frac{x}{4}, \quad 1 \le x \le 3
f_X(x) = 0 \quad \text{elsewhere}

The distribution function of Y is

F_Y(y) = 0, \quad y \le 2
F_Y(y) = 1, \quad y > 4
F_Y(y) = \frac{1}{24} \int_2^y \int_1^3 xv \, dx \, dv
       = \frac{1}{6} \int_2^y v \, dv
       = \frac{1}{12}(y^2 - 4), \quad 2 \le y \le 4
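These results can be checked numerically as well (an illustrative sketch, not from the slides): f_X should integrate to 1 over [1, 3], and F_Y should run from 0 at y = 2 to 1 at y = 4.

```python
# Check the marginal pdf and the distribution function derived above.

def f_X(x: float) -> float:
    return x / 4 if 1 <= x <= 3 else 0.0

def F_Y(y: float) -> float:
    if y <= 2:
        return 0.0
    if y > 4:
        return 1.0
    return (y ** 2 - 4) / 12

n = 1000
dx = 2 / n
area = sum(f_X(1 + (i + 0.5) * dx) * dx for i in range(n))  # midpoint rule
print(area, F_Y(2.0), F_Y(4.0))  # close to 1.0, then 0.0 and 1.0
```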

Uniform Probability Density Function
X has a uniform pdf if

f_X(x) = \begin{cases} \frac{1}{b-a} & a \le x \le b \\ 0 & \text{elsewhere} \end{cases}

The mean and variance are

\mu_X = \frac{b+a}{2}, \qquad \sigma_X^2 = \frac{(b-a)^2}{12}
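A quick numerical check of the two formulas (illustrative, not from the slides), here for a = 2, b = 5:

```python
# Midpoint-rule estimates of E[X] and Var[X] for X ~ U(a, b),
# compared against (b+a)/2 and (b-a)^2/12.

def uniform_moments(a: float, b: float, n: int = 10000):
    """Numerical first and second central moments of the uniform density."""
    dx = (b - a) / n
    pts = [a + (i + 0.5) * dx for i in range(n)]
    mean = sum(x * dx / (b - a) for x in pts)
    var = sum((x - mean) ** 2 * dx / (b - a) for x in pts)
    return mean, var

m, v = uniform_moments(2.0, 5.0)
print(m, v)  # approximately (2+5)/2 = 3.5 and (5-2)^2/12 = 0.75
```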

The Uniform pdf
[Figure: the uniform pdf f_X(x), constant at 1/(b-a) on [a, b] with mean \mu at the midpoint, and the distribution function F_X(x), rising linearly from 0 at a to 1 at b.]
Gaussian Probability Density Function
“Electrical noise in communication systems is often due to
the cumulative effects of a large number of randomly
moving charged particles and hence the instantaneous
value of the noise will tend to have a Gaussian distribution.”
The Gaussian pdf is given by

f_X(x) = \frac{1}{\sqrt{2\pi\sigma_X^2}} \exp\left( -\frac{(x - \mu_X)^2}{2\sigma_X^2} \right)

P(X > a) = \int_a^\infty \frac{1}{\sqrt{2\pi\sigma_X^2}} \exp\left( -\frac{(x - \mu_X)^2}{2\sigma_X^2} \right) dx
         = \int_{(a - \mu_X)/\sigma_X}^\infty \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{z^2}{2} \right) dz

Gaussian Probability Density Function
The Q function is defined by

Q(y) = \frac{1}{\sqrt{2\pi}} \int_y^\infty \exp\left( -\frac{z^2}{2} \right) dz, \quad y > 0

P(X > a) = Q[(a - \mu_X)/\sigma_X]

For the standard normal distribution (\mu = 0, \sigma = 1)

P(X \le x) = 1 - Q(x)
P(-a \le X \le a) = 2P(-a \le X \le 0) = 2P(0 \le X \le a)
P(X \le 0) = \frac{1}{2} = Q(0)
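Q(y) has no closed form, but it can be evaluated with the standard complementary error function, since Q(y) = ½ erfc(y/√2). A minimal Python sketch:

```python
import math

def Q(y: float) -> float:
    """Standard normal tail probability, via Q(y) = 0.5 * erfc(y / sqrt(2))."""
    return 0.5 * math.erfc(y / math.sqrt(2))

print(Q(0.0))               # 0.5, matching Q(0) = P(X <= 0) = 1/2
print(1 - Q(1.0))           # P(X <= 1) for the standard normal
print(Q(-1.0), 1 - Q(1.0))  # symmetry: Q(-y) = 1 - Q(y)
```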

The Standard Normal Distribution
The normal distribution with \mu = 0 and \sigma_X^2 = 1 is called the standard normal distribution.

[Figure: the standard normal pdf over -5 \le x \le 5, symmetric about 0 with peak value approximately 0.4.]
The Normal Distribution
[Figure: normal pdfs over -5 \le x \le 5 for (\mu = 0, \sigma = 0.5), (\mu = 0, \sigma = 1), (\mu = 2, \sigma = 1), and (\mu = 0, \sigma = 2); a smaller \sigma gives a taller, narrower curve, and \mu shifts the peak.]
Example I
X: the voltage output of a noise generator, standard normal distribution.
Find P(X > 2.3) and P(1 \le X \le 2.3).

y      Q(y)       y      Q(y)
0.90   0.1841     2.20   0.0139
0.95   0.1711     2.30   0.0107
1.00   0.1587     2.40   0.0082

P(X > 2.3) = Q(2.3) \approx 0.0107

P(1 \le X \le 2.3) = (1 - Q(2.3)) - (1 - Q(1))
                   = Q(1) - Q(2.3) \approx 0.148
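The tabulated values can be reproduced with the erfc-based Q function from the previous slide (an illustrative check, not part of the original example):

```python
import math

def Q(y: float) -> float:
    """Standard normal tail probability."""
    return 0.5 * math.erfc(y / math.sqrt(2))

p_tail = Q(2.3)           # P(X > 2.3)
p_band = Q(1.0) - Q(2.3)  # P(1 <= X <= 2.3)
print(round(p_tail, 4), round(p_band, 3))  # 0.0107 0.148
```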

Example II
V: the velocity of the wind at a certain location, normally distributed with \mu = 2 and \sigma = 5.
Find P(-3 \le V \le 8).

y     Q(y)
1.0   0.1587
1.2   0.1151

P(-3 \le V \le 8) = \int_{-3}^{8} \frac{1}{\sqrt{2\pi \cdot 25}} \exp\left[ -\frac{(v-2)^2}{2 \cdot 25} \right] dv
                  = \int_{(-3-2)/5}^{(8-2)/5} \frac{1}{\sqrt{2\pi}} \exp\left[ -\frac{x^2}{2} \right] dx
                  = P(X \le 1.2) - P(X \le -1)
                  = (1 - Q(1.2)) - (1 - Q(-1)) \approx 0.726

since Q(-1) = 1 - Q(1) due to symmetry.
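The same standardization can be carried out in code (illustrative check, reusing the erfc-based Q function):

```python
import math

def Q(y: float) -> float:
    """Standard normal tail probability."""
    return 0.5 * math.erfc(y / math.sqrt(2))

mu, sigma = 2.0, 5.0
lo = (-3 - mu) / sigma   # standardized lower limit: -1.0
hi = (8 - mu) / sigma    # standardized upper limit:  1.2
p = (1 - Q(hi)) - (1 - Q(lo))
print(round(p, 3))  # 0.726
```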


Bivariate Gaussian pdf
The bivariate Gaussian pdf is given by

f_{X,Y}(x,y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}
  \exp\left\{ \frac{-1}{2(1-\rho^2)} \left[
    \left( \frac{x-\mu_X}{\sigma_X} \right)^2
    + \left( \frac{y-\mu_Y}{\sigma_Y} \right)^2
    - \frac{2\rho(x-\mu_X)(y-\mu_Y)}{\sigma_X\sigma_Y}
  \right] \right\}

where

\rho = \rho_{XY} = \frac{E\{(X-\mu_X)(Y-\mu_Y)\}}{\sigma_X\sigma_Y} = \frac{\sigma_{XY}}{\sigma_X\sigma_Y}
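A direct implementation of this formula makes the role of ρ concrete: with ρ = 0 the joint pdf factors into the product of the two marginal Gaussians. A minimal sketch (function names are illustrative):

```python
import math

def bivariate_gauss(x, y, mx, my, sx, sy, rho):
    """Bivariate Gaussian pdf with means mx, my, std devs sx, sy, correlation rho."""
    q = (((x - mx) / sx) ** 2 + ((y - my) / sy) ** 2
         - 2 * rho * (x - mx) * (y - my) / (sx * sy))
    norm = 2 * math.pi * sx * sy * math.sqrt(1 - rho ** 2)
    return math.exp(-q / (2 * (1 - rho ** 2))) / norm

def gauss(x, mu, s):
    """Univariate Gaussian pdf."""
    return math.exp(-((x - mu) ** 2) / (2 * s ** 2)) / math.sqrt(2 * math.pi * s ** 2)

# With rho = 0 the joint pdf equals the product of the two marginals.
joint = bivariate_gauss(0.3, -0.7, 0.0, 0.0, 1.0, 2.0, 0.0)
print(joint, gauss(0.3, 0.0, 1.0) * gauss(-0.7, 0.0, 2.0))
```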

Complex Random Variables
Given two random variables X and Y, a complex random variable can be defined as

Z = X + jY

and the expected value of g(Z) is defined as

E\{g(Z)\} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(z) f_{X,Y}(x,y) \, dx \, dy

\mu_Z = E\{Z\} = E\{X\} + jE\{Y\} = \mu_X + j\mu_Y
\sigma_Z^2 = E\{|Z - \mu_Z|^2\}

and the covariance of two complex RVs is

C_{Z_m Z_n} = E\{(Z_m - \mu_{Z_m})^* (Z_n - \mu_{Z_n})\}
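The definitions above can be illustrated on a small, made-up sample, with Python's built-in complex type playing the role of j: the empirical mean of Z = X + jY equals mean(X) + j·mean(Y), and σ_Z² = E{|Z − µ_Z|²} is real and non-negative.

```python
# Empirical illustration on made-up data (not from the slides).
xs = [1.0, 2.0, 0.5, -1.5]
ys = [0.5, -0.5, 2.0, 1.0]
zs = [x + 1j * y for x, y in zip(xs, ys)]

mu_x = sum(xs) / len(xs)
mu_y = sum(ys) / len(ys)
mu_z = sum(zs) / len(zs)
var_z = sum(abs(z - mu_z) ** 2 for z in zs) / len(zs)  # real and >= 0
print(mu_z, mu_x + 1j * mu_y, var_z)
```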


Joint Distribution and Density
The joint probability distribution function for m random variables X_1, \ldots, X_m is given by

F_{X_1,\ldots,X_m}(x_1, \ldots, x_m) = P[(X_1 \le x_1), \ldots, (X_m \le x_m)]

The density function for continuous random variables is the partial derivative of the distribution function,

f_{X_1,\ldots,X_m}(x_1, \ldots, x_m) = \frac{\partial^m F_{X_1,\ldots,X_m}(x_1, \ldots, x_m)}{\partial x_1 \cdots \partial x_m}

Marginal PDF
The marginal density function for X_1 is given by

f_{X_1}(x_1) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_{X_1,\ldots,X_m}(x_1, \ldots, x_m) \, dx_2 \cdots dx_m

The joint marginal density function of two of the RVs is found as

f_{X_1,X_2}(x_1, x_2) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f_{X_1,\ldots,X_m}(x_1, \ldots, x_m) \, dx_3 \cdots dx_m

Conditional PDF
The conditional density functions are given by

f_{X_2,X_3,\ldots,X_m|X_1}(x_2, x_3, \ldots, x_m | x_1) = \frac{f_{X_1,X_2,\ldots,X_m}(x_1, x_2, \ldots, x_m)}{f_{X_1}(x_1)}

and

f_{X_3,X_4,\ldots,X_m|X_1,X_2}(x_3, x_4, \ldots, x_m | x_1, x_2) = \frac{f_{X_1,X_2,\ldots,X_m}(x_1, x_2, \ldots, x_m)}{f_{X_1,X_2}(x_1, x_2)}

Expected values
The expected value of a function g(X_1, \ldots, X_m) defined on m random variables is given by

E\{g(X_1, \ldots, X_m)\}
  = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} g(x_1, \ldots, x_m) f_{X_1,\ldots,X_m}(x_1, \ldots, x_m) \, dx_1 \cdots dx_m

and the conditional expectation given X_1, X_2 is

E\{g(X_1, \ldots, X_m) | X_1, X_2\}
  = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} g(x_1, \ldots, x_m) f_{X_3,\ldots,X_m|X_1,X_2}(x_3, \ldots, x_m | x_1, x_2) \, dx_3 \cdots dx_m

Mean value and covariance
The mean value

\mu_{X_i} = E\{X_i\}

and the covariance

\sigma_{X_i X_j} = E\{X_i X_j\} - \mu_{X_i}\mu_{X_j}

Note that when i = j, \sigma_{X_i X_i} is the variance of X_i.

Random vectors
The m random variables X_1, \ldots, X_m can be represented using a vector

X = \begin{bmatrix} X_1 \\ \vdots \\ X_m \end{bmatrix}
\quad \text{or} \quad
X = (X_1, X_2, \ldots, X_m)^T

A possible value of the random vector (relating to an outcome of the underlying experiment) is represented as x = (x_1, x_2, \ldots, x_m)^T.
The joint pdf (the same as on slide 15) is denoted by

f_X(x) = f_{X_1,\ldots,X_m}(x_1, \ldots, x_m)

Mean Value and Covariance-matrix
The mean vector is defined as

\mu_X = E(X) = \begin{bmatrix} E(X_1) \\ E(X_2) \\ \vdots \\ E(X_m) \end{bmatrix}

and the covariance matrix is defined as

\Sigma_X = E\{XX^T\} - \mu_X\mu_X^T =
\begin{bmatrix}
\sigma_{X_1 X_1} & \sigma_{X_1 X_2} & \cdots & \sigma_{X_1 X_m} \\
\sigma_{X_2 X_1} & \sigma_{X_2 X_2} & \cdots & \sigma_{X_2 X_m} \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{X_m X_1} & \sigma_{X_m X_2} & \cdots & \sigma_{X_m X_m}
\end{bmatrix}
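The formula Σ_X = E{XX^T} − µ_X µ_X^T can be tried out on a small, made-up two-dimensional sample, using plain lists (an illustrative sketch, not from the slides):

```python
# Empirical covariance matrix via Sigma = E{X X^T} - mu mu^T.
data = [(1.0, 2.0), (2.0, 1.0), (3.0, 4.0), (0.0, 1.0)]  # made-up sample
n, m = len(data), 2

mu = [sum(row[i] for row in data) / n for i in range(m)]
# Entry (i, j) is E{X_i X_j} minus mu_i * mu_j.
sigma = [[sum(row[i] * row[j] for row in data) / n - mu[i] * mu[j]
          for j in range(m)] for i in range(m)]
print(mu)     # [1.5, 2.0]
print(sigma)  # symmetric, with the variances on the diagonal
```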

Independent RV
The random variables are uncorrelated when their covariances are 0, i.e.

\sigma_{X_i X_j} = \sigma_{ij} = 0, \quad \text{for } i \neq j

and independent when

f_X(x) = f_{X_1,\ldots,X_m}(x_1, \ldots, x_m) = \prod_{i=1}^{m} f_{X_i}(x_i).

Multivariate Gaussian Distribution
A random vector X is multivariate Gaussian if its pdf is given by

f_X(x) = \frac{1}{(2\pi)^{m/2} |\Sigma_X|^{1/2}} \exp\left( -\frac{1}{2}(x - \mu_X)^T \Sigma_X^{-1} (x - \mu_X) \right)

where |\Sigma_X| is the determinant of \Sigma_X.
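A hand-rolled implementation for m = 2 (an illustrative sketch; the function name and test point are made up) also shows that a diagonal covariance makes the joint pdf the product of two univariate normal densities:

```python
import math

def mvn_pdf_2d(x, mu, cov):
    """The multivariate Gaussian pdf above, specialized to m = 2 (2x2 cov)."""
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det, cov[0][0] / det]]
    d = [x[0] - mu[0], x[1] - mu[1]]
    quad = (d[0] * (inv[0][0] * d[0] + inv[0][1] * d[1])
            + d[1] * (inv[1][0] * d[0] + inv[1][1] * d[1]))
    return math.exp(-0.5 * quad) / (2 * math.pi * math.sqrt(det))

# Diagonal covariance: the joint pdf factors into univariate normals.
p = mvn_pdf_2d([1.0, -0.5], [0.0, 0.0], [[1.0, 0.0], [0.0, 4.0]])
print(p)
```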

Multivariate Gaussian Distribution
If X has a multivariate Gaussian distribution, then
1. If X is partitioned as

X = \begin{bmatrix} X_1 \\ X_2 \end{bmatrix}, \quad
X_1 = \begin{bmatrix} X_1 \\ \vdots \\ X_k \end{bmatrix}, \quad
X_2 = \begin{bmatrix} X_{k+1} \\ \vdots \\ X_m \end{bmatrix}

and

\mu_X = \begin{bmatrix} \mu_{X_1} \\ \mu_{X_2} \end{bmatrix}, \quad
\Sigma_X = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix}

then X_1 has a k-dimensional multivariate Gaussian distribution, with mean value \mu_{X_1} and covariance \Sigma_{11}.
Multivariate Gaussian Distribution
2. If \Sigma_X is diagonal, then the components of X are independent.
Note that this ONLY holds for the Gaussian distribution; in general,
uncorrelated random variables need not be independent.

3. If A is a k \times m matrix of rank k, then Y = AX has a
k-variate Gaussian distribution with

\mu_Y = A\mu_X
\Sigma_Y = A\Sigma_X A^T
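Property 3 can be sketched for a 1 × 2 matrix A (so k = 1); the numbers below are made up for illustration:

```python
# mu_Y = A mu_X and Sigma_Y = A Sigma_X A^T for a 1 x 2 matrix A.
A = [2.0, -1.0]
mu_X = [1.0, 3.0]
Sigma_X = [[1.0, 0.5], [0.5, 2.0]]

mu_Y = sum(A[i] * mu_X[i] for i in range(2))       # A mu_X
Sigma_Y = sum(A[i] * Sigma_X[i][j] * A[j]          # A Sigma_X A^T
              for i in range(2) for j in range(2))
print(mu_Y, Sigma_Y)  # -1.0 4.0
```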

Multivariate Gaussian Distribution
4. If X is partitioned as in 1, the conditional density of X_1 given
X_2 = x_2 is a k-dimensional multivariate Gaussian with

\mu_{X_1|X_2} = E[X_1 | X_2 = x_2] = \mu_{X_1} + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_{X_2})
\Sigma_{X_1|X_2} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}
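In the simplest case k = 1, m = 2, every block of Σ_X is a scalar, and the two formulas reduce to ordinary arithmetic (the numbers below are made up for illustration):

```python
# Conditional mean and variance of X_1 given X_2 = x_2, scalar-block case.
mu1, mu2 = 1.0, 2.0
s11, s12, s22 = 2.0, 0.8, 1.0   # Sigma_11, Sigma_12 = Sigma_21, Sigma_22
x2 = 3.0                        # observed value of X_2

cond_mean = mu1 + s12 / s22 * (x2 - mu2)   # mu_{X1|X2}
cond_var = s11 - s12 * s12 / s22           # Sigma_{X1|X2}
print(cond_mean, cond_var)  # approximately 1.8 and 1.36
```

Note how observing X_2 above its mean pulls the conditional mean of X_1 upward (since Σ_12 > 0) and always shrinks the variance.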
