
Unit 2 (Part B)

Random variable: A random variable is a function defined over the sample space of a random
experiment that maps the sample points onto the real number line. A random variable is denoted by
capital letters such as X, Y, Z, etc.
Example: A coin is tossed three times. S is the sample space containing the sample points. Let X be a
random variable denoting the number of heads; then X can take the values 0, 1, 2, 3. The mapping from S
to X is as shown in Fig. 2.1. Whenever the experiment of tossing three coins is conducted,
one can observe the number of heads. Accordingly, X changes randomly from one trial to the
next.

[S = {TTT, TTH, THT, HTT, THH, HTH, HHT, HHH} maps to X = {0, 1, 2, 3}: TTT → 0; TTH, THT, HTT → 1; THH, HTH, HHT → 2; HHH → 3]

Fig.2.1: Mapping from S to X
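As a quick illustration (not part of the original notes), this mapping can be reproduced in a few lines of Python by enumerating the sample space and counting heads in each outcome:

```python
from itertools import product

# Enumerate the sample space S of three coin tosses
S = ["".join(toss) for toss in product("HT", repeat=3)]

# Map each sample point to the value of X = number of heads
X = {outcome: outcome.count("H") for outcome in S}
print(X)   # {'HHH': 3, 'HHT': 2, 'HTH': 2, ..., 'TTT': 0}
```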


Random variables can be of discrete type or continuous type.
A discrete random variable takes only discrete values from a set of values. A random
experiment or phenomenon whose outcomes are discrete gives rise to a discrete random variable,
e.g. the number of errors in a message, or the number of students failing in a subject.
A continuous random variable takes any value from a continuous range of values, from a
minimum to a maximum. For example, X may denote the temperature of a place.
Probability Density Function (pdf): It is denoted by fX(x). It gives the relative probability
distribution among the various values of the random variable X.
Consider the example of tossing three coins simultaneously, where H and T are equiprobable.
If X is the random variable denoting the number of heads, then as seen above X can be 0, 1, 2 or 3.
P(X=0) = 1/8,
P(X=1) =3/8,
P(X=2) = 3/8,
P(X=3) =1/8.
The plot of these probabilities is as shown in Fig. 2.2(a). This plot forms the pdf of the random
variable X.
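These values follow from P(X = k) = C(3, k)(1/2)^3; a minimal, illustrative check using only the Python standard library:

```python
from math import comb

# P(X = k) = C(3, k) * (1/2)**3 for a fair coin tossed three times
for k in range(4):
    print(k, comb(3, k) / 8)   # 0.125, 0.375, 0.375, 0.125
```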
Again, consider an example of transmission of five messages in sequence. The probability that a
message is received in error is 0.2. Let X be the random variable denoting the number of messages
received in error; sketch the pdf of X.
If p denotes the probability of receiving a message in error, then


P(X < 0) = 0
P(X = 0) = (0.8)^5 = 0.32768
P(X = 1) = 5(0.2)(0.8)^4 = 0.40960
P(X = 2) = 10(0.2)^2(0.8)^3 = 0.20480
P(X = 3) = 10(0.2)^3(0.8)^2 = 0.05120
P(X = 4) = 5(0.2)^4(0.8) = 0.00640
P(X = 5) = (0.2)^5 = 0.00032
P(X > 5) = 0
The pdf of this example is shown in Fig. 2.2(b).
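The same numbers can be generated from the binomial formula P(X = k) = C(5, k) p^k (1 − p)^(5−k) with p = 0.2; a short sketch (standard-library Python assumed):

```python
from math import comb

n, p = 5, 0.2   # five messages, error probability 0.2
for k in range(n + 1):
    prob = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"P(X = {k}) = {prob:.5f}")
# 0.32768, 0.40960, 0.20480, 0.05120, 0.00640, 0.00032
```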

[Impulse plots of fX(x): (a) three-coin example, x = 0, 1, 2, 3; (b) five-message example, x = 0, 1, 2, 3, 4, 5]

Fig.2.2: Probability Density Function fx(x)

A discrete random variable has a discrete pdf and a continuous random variable has a continuous
pdf. The pdfs shown above are of discrete type. Mathematically, a discrete pdf can be expressed as
f_X(x) = \sum_{k=0}^{N} P_X(k)\, \delta(x - k)
where P_X(k) = P(X = k) and δ(x) is the unit impulse.

A continuous random variable has a continuous pdf; it may be uniform, increasing, decreasing, or
partly increasing and partly decreasing.
Properties of pdf:
Properties of fX(x) are as given below
(i) The pdf of a random variable is a non-negative function:
f_X(x) \ge 0
(ii) The total area under a pdf is unity:
\int_{-\infty}^{\infty} f_X(x)\, dx = 1

(iii) The cumulative probability distribution function (CDF) is the integral of f_X(x):
F_X(x) = \int_{-\infty}^{x} f_X(x)\, dx

(iv) The probability that a random variable lies between x1 and x2 is equal to the area under the
pdf from x1 to x2:


P(x_1 \le X \le x_2) = \int_{x_1}^{x_2} f_X(x)\, dx
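These properties can be checked numerically for any candidate pdf. The sketch below is only an illustration; it assumes SciPy is available and uses an exponential pdf f(x) = e^(−x), x ≥ 0, as the example:

```python
import numpy as np
from scipy.integrate import quad

# Example pdf: exponential with rate 1, f(x) = e^(-x) for x >= 0
f = lambda x: np.exp(-x)

total_area, _ = quad(f, 0, np.inf)    # property (ii): should be 1
p_interval, _ = quad(f, 1.0, 2.0)     # property (iv): P(1 <= X <= 2)
print(total_area)   # ~1.0
print(p_interval)   # e^-1 - e^-2 ≈ 0.2325
```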

Probability Distribution Function (PDF): It is also known as the cumulative probability
distribution function (CDF). It is defined as
F_X(x) = P(X \le x)
For a discrete random variable the PDF is expressed as
F_X(x) = \sum_{k=0}^{N} P(X = k)\, u(x - k)
where u(x) is the unit step function.

For a continuous random variable we have
F_X(x_0) = \int_{-\infty}^{x_0} f_X(x)\, dx

The PDFs of the above two examples are sketched in Fig. 2.3(a) and (b).

[Staircase plots of FX(x): (a) three-coin example, x = 0, 1, 2, 3; (b) five-message example, x = 0, 1, 2, 3, 4, 5]

The PDF of a discrete random variable is a staircase function, and for a continuous random variable
the PDF is a continuous increasing function.
Properties of PDF:
The properties of the PDF are
(i) F_X(-\infty) = 0
(ii) F_X(+\infty) = 1
(iii) 0 \le F_X(x) \le 1
(iv) If x_1 \le x_2 then F_X(x_1) \le F_X(x_2)
(v) P(x_1 < X \le x_2) = F_X(x_2) - F_X(x_1)

Types of probability density functions and random variables


Random variables are also classified as per the pdf function they obey. Some of the important
pdf functions are
(i) Uniform pdf
(ii) Binomial pdf
(iii) Poisson pdf


(iv) Gaussian pdf
(v) Exponential pdf
(vi) Rayleigh pdf
Uniform pdf
A random variable X is said to be uniform in nature if its fX(x) is constant over a range and
zero outside it i.e
0 x a
1
f X ( x)  k  a  x b
ba
0 xb
The PDF of a uniform random variable is given by
F_X(x) = \begin{cases} 0, & x < a \\ \dfrac{x - a}{b - a}, & a \le x \le b \\ 1, & x > b \end{cases}

[Sketch: fX(x) is constant at 1/(b - a) between x = a and x = b and zero elsewhere; FX(x) rises linearly from 0 at x = a to 1 at x = b]
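A small sketch (illustrative only, with assumed limits a = 2 and b = 5) evaluating the uniform pdf and its PDF:

```python
def uniform_pdf(x, a, b):
    # f_X(x) = 1/(b - a) inside [a, b], zero outside
    return 1.0 / (b - a) if a <= x <= b else 0.0

def uniform_cdf(x, a, b):
    # F_X(x) rises linearly from 0 at x = a to 1 at x = b
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return (x - a) / (b - a)

a, b = 2.0, 5.0
print(uniform_pdf(3.0, a, b))   # 1/3
print(uniform_cdf(3.0, a, b))   # (3 - 2)/(5 - 2) = 1/3
```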

Binomial density function


A binomial pdf is a discrete function of X. A binomial pdf is defined as
f_X(x) = \sum_{k=0}^{n} C(n, k)\, p^k (1 - p)^{n-k}\, \delta(x - k)

A binomial pdf is a collection of impulse functions. It is a function of two parameters, p and n,
where p indicates the probability of success and n denotes the number of trials. Both p and n
must be positive, and X takes the integer values 0, 1, 2, ..., n.


Its PDF is a staircase-type function and is defined as
F_X(x) = \sum_{k=0}^{n} C(n, k)\, p^k (1 - p)^{n-k}\, u(x - k)

Poisson distribution Function


A binomial distribution is characterized by the values of two parameters: n and p. A Poisson
distribution is simpler and it has only one parameter, which is denoted by λ, pronounced lambda.
The parameter λ must be positive: λ >0. Below is the formula for computing probabilities for the
Poisson.
P(X = k) = \frac{e^{-\lambda} \lambda^k}{k!}
The parameter λ is the mean as well as the variance of the Poisson variable. It is also known as
the mean arrival rate in some applications.
The Poisson distribution is applied when p is very small, n is large, and λ ≈ np.
Under these conditions the Poisson distribution can be derived from the binomial distribution.
From Binomial distribution we know that
P(X = k) = C(n, k)\, p^k (1 - p)^{n-k}
P(X = k) = \frac{n!}{(n - k)!\, k!}\, p^k (1 - p)^{n-k}
If λ≈ np then p=λ/n, hence
nk
n!   
k
P( X  k )  1  
(n  k )! k ! n k  n 
   k n
n! 1
P( X  k )  1  
(n  k )! n    k ! 
k k
n
1  
 n
Under the condition n→∞
\frac{n!}{(n - k)!\, n^k} \rightarrow 1
\left(1 - \frac{\lambda}{n}\right)^{-k} \rightarrow 1
\left(1 - \frac{\lambda}{n}\right)^{n} \rightarrow e^{-\lambda}
Hence
P(X = k) = \frac{e^{-\lambda} \lambda^k}{k!}
The Poisson distribution has a number of applications.
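The limiting argument above can be illustrated numerically: for large n and small p with λ = np, the binomial probabilities approach the Poisson ones. A sketch with assumed values n = 1000, p = 0.002 (so λ = 2):

```python
from math import comb, exp, factorial

n, p = 1000, 0.002
lam = n * p   # λ = np = 2

for k in range(5):
    binom = comb(n, k) * p**k * (1 - p)**(n - k)
    poisson = exp(-lam) * lam**k / factorial(k)
    print(k, round(binom, 5), round(poisson, 5))
# The two columns agree to about three decimal places
```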


Normal (Gaussian) random variable and pdf


A random variable X is said to be normally distributed with mean m (also written μ) and variance σ²
if its probability density function (pdf) is
f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x - m)^2}{2\sigma^2}}, \quad -\infty < x < \infty \qquad (1)
The Normal or Gaussian distribution of X is usually represented by,
X \sim N(m, \sigma^2)
The Normal or Gaussian pdf is a bell-shaped curve that is symmetric about the mean and
attains its maximum value of 1/\sqrt{2\pi\sigma^2} at the mean, as shown in Figure 1.

Figure 1: Gaussian or Normal pdf, N(2, 1.5²)


The Gaussian pdf N(μ, σ²) is completely characterized by the two parameters μ and σ², which are
the mean and variance of the Gaussian random variable X, respectively.
The mean, or the expected value of the variable, is the centroid of the pdf. In this particular case
of the Gaussian pdf, the mean is also the point at which the pdf is maximum. The variance σ² is a
measure of the dispersion of the random variable around the mean.
The Gaussian pdf with zero mean and unit variance (i.e. N(0, 1)) is called the standard Gaussian
pdf. It is also called the normalized Gaussian pdf. Figure 2 shows the standard Gaussian pdf.

Fig.2: Standard Gaussian density function
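Equation (1) can be evaluated directly; the illustrative sketch below computes the standard Gaussian pdf N(0, 1) and, for comparison, the peak of the N(2, 1.5²) pdf of Figure 1:

```python
from math import exp, pi, sqrt

def gaussian_pdf(x, m=0.0, sigma=1.0):
    # f_X(x) = (1 / sqrt(2*pi*sigma^2)) * exp(-(x - m)^2 / (2*sigma^2))
    return exp(-(x - m) ** 2 / (2 * sigma ** 2)) / sqrt(2 * pi * sigma ** 2)

print(gaussian_pdf(0.0))           # peak of N(0, 1): 1/sqrt(2*pi) ≈ 0.3989
print(gaussian_pdf(2.0, 2, 1.5))   # peak of N(2, 1.5^2) ≈ 0.2660
```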


The pdfs represented in Figure 3 have the same mean, μ = 2, and σ₁² > σ₂² > σ₃², showing that the
larger the variance, the greater the dispersion around the mean.

Figure 3: Gaussian pdfs with different variances (σ₁² = 3², σ₂² = 2², σ₃² = 1)

The probability distribution FX(x) of a normal random variable is defined as


F_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{x} e^{-\frac{(x - m)^2}{2\sigma^2}}\, dx \qquad (2)

This integral has no closed form and can be evaluated only through numerical methods. The
results could be tabulated for different combinations of (m, σ), but this would require an infinite
number of tables. However, using a normalization procedure, a single table, that of the standard
Gaussian function, can be used for all combinations of (m, σ). This is achieved by the change of
variable explained below.
xm
Put z

1
dz  dx

Z z2
1 
Fz ( z ) 
2 e
x  
2
dz (3)

 xm
Hence FX ( x)  FZ ( z )  F   (4)
  
Because of the symmetry of the Gaussian density function we have
F_Z(0) = 0.5
F_Z(-z) = 1 - F_Z(z) \qquad (5)
Another function Q(z) is defined as
Q(z) = 1 - F_Z(z) = \frac{1}{\sqrt{2\pi}} \int_{z}^{\infty} e^{-\frac{z^2}{2}}\, dz

where Q(0) = 0.5.
Hence F_Z(z) = 1 - Q(z) for z \ge 0
and F_Z(z) = Q(-z) for z < 0.
Q(z) has been tabulated in the following table.
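In practice Q(z) is usually computed from the complementary error function, Q(z) = ½ erfc(z/√2), rather than read from a table. An illustrative sketch that also applies the normalization F_X(x) = F_Z((x − m)/σ):

```python
from math import erfc, sqrt

def Q(z):
    # Q(z) = 1 - F_Z(z) = 0.5 * erfc(z / sqrt(2))
    return 0.5 * erfc(z / sqrt(2))

def gaussian_cdf(x, m=0.0, sigma=1.0):
    # F_X(x) = F_Z((x - m) / sigma) = 1 - Q((x - m) / sigma)
    return 1.0 - Q((x - m) / sigma)

print(Q(0))                        # 0.5
print(gaussian_cdf(2.0, 2, 1.5))   # 0.5 (x equal to the mean)
print(gaussian_cdf(3.5, 2, 1.5))   # F_Z(1) ≈ 0.8413
```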
