Probability

In everyday life we encounter two types of phenomena, namely:

• Deterministic
• Probabilistic

In a deterministic (predictable, or sure) phenomenon the outcome is certain: the answer
is definitely 'YES' or definitely 'NO'.
Examples:

1. After 2 years, the age of a living person will have increased by 2 years. (Here the
result is surely true, i.e., 'yes'.)

2. A person is living on the Sun. (Here the result is surely false, i.e., 'no'.)

Most physical and chemical phenomena are of the deterministic type.

In a probabilistic (indeterministic, or unpredictable) phenomenon we cannot obtain a
sure 'YES' or 'NO'. In such cases we study random (or chance) phenomena, where the
likelihood of a result lies between 0% and 100%.

Examples:

1. There is a 50-50 chance of rain today.

2. In a toss of an unbiased (fair) coin, it is not certain whether a head or a tail will appear.

The concept of probability has its origin in the early 17th century. Probabilistic
phenomena are frequently observed in games of chance, marks in an examination,
profit in a business, birth and death rates of a country, system failures, machine
lifetimes, queueing theory, radar detection, the social sciences, and the economy of a
country.

(1) Definition of Probability:

Let E be an experiment and let S be the sample space (universal set) of E, i.e., the set of
all possible outcomes. Let A be an event, a subset of the sample space S. Then the
probability of the event A is

p = P(A) = (Number of favorable outcomes)/(Total number of outcomes) = n(A)/n(S) = m/n

and

q = P(Ā) = P(not A) = 1 − P(A) = 1 − p = 1 − m/n

where Ā is the complement of A and A ∪ Ā = S.

∴ P(A) + P(Ā) = 1

(2) Axioms of probability:

(i) P(A) ≥ 0; if P(A) = 0, then A is an impossible event.
(ii) P(S) = 1, i.e., S is a sure event.

Important Note * : The probability of any event A always lies between 0 and 1,
i.e., 0 ≤ P(A) ≤ 1. [This follows from (i) and (ii).]

(iii) For mutually exclusive events,

P(A1 ∪ A2 ∪ · · · ∪ An) = P(A1) + P(A2) + · · · + P(An)

(3) Theorems:

(a) Addition Theorem:

P(A or B) = P(A ∪ B)
          = P(A) + P(B) − P(A ∩ B)   (for non-mutually exclusive events)
          = P(A) + P(B)              (for mutually exclusive, i.e., disjoint, events)

Similarly,

P(A or B or C) = P(A ∪ B ∪ C)
               = P(A) + P(B) + P(C) − P(A ∩ B) − P(B ∩ C) − P(A ∩ C) + P(A ∩ B ∩ C)
               = P(A) + P(B) + P(C)   (for mutually exclusive events)

Thus, for mutually exclusive events, OR ⇒ Addition.

(b) Compound Probability (Multiplication) Theorem:

P(A and B) = P(A ∩ B) = P(A) P(B/A) = P(B) P(A/B)

Similarly,

P(A and B and C) = P(A ∩ B ∩ C)
                 = P(A) P(B/A) P(C/A ∩ B)
                 = P(B) P(C/B) P(A/B ∩ C)
                 = P(C) P(A/C) P(B/C ∩ A)

Conditional Probability:

P(B/A) = P(A ∩ B)/P(A),    P(A/B) = P(A ∩ B)/P(B)

For independent events:
(i) P(A/B) = P(A)
(ii) P(A ∩ B) = P(A) P(B), which is the necessary and sufficient condition.
(iii) AND ⇒ multiplication

For mutually exclusive events:

P(A ∩ B) = P(φ) = 0

(c) P(happening of at least one of A1, A2, · · ·, An)
      = 1 − P(happening of none of A1, A2, · · ·, An)

(d) Total Probability:

P(B) = Σ (i = 1 to n) P(Ai) P(B/Ai)

∴ For events A1, A2, A3 which are mutually exclusive and collectively exhaustive,

P(B) = P(A1) P(B/A1) + P(A2) P(B/A2) + P(A3) P(B/A3)

(e) Bayes' Theorem:

P(Ai/B) = P(Ai) P(B/Ai) / P(B)
        = P(Ai) P(B/Ai) / [P(A1) P(B/A1) + P(A2) P(B/A2) + · · · + P(An) P(B/An)]
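These two formulas are easy to check numerically. The short Python sketch below is a
minimal illustration (added here, not part of the original text): the priors and
conditional probabilities are assumed example values for three mutually exclusive and
collectively exhaustive causes A1, A2, A3 and an effect B.

# Minimal numerical illustration of total probability and Bayes' theorem.
# The priors and likelihoods below are assumed example values, not from the text.
priors = [0.2, 0.5, 0.3]            # P(A1), P(A2), P(A3)
likelihoods = [0.5, 0.7, 0.9]       # P(B/A1), P(B/A2), P(B/A3)

# Total probability: P(B) = sum_i P(Ai) * P(B/Ai)
p_b = sum(p * l for p, l in zip(priors, likelihoods))

# Bayes' theorem: P(Ai/B) = P(Ai) * P(B/Ai) / P(B)
posteriors = [p * l / p_b for p, l in zip(priors, likelihoods)]

print(f"P(B) = {p_b:.4f}")
for i, post in enumerate(posteriors, start=1):
    print(f"P(A{i}/B) = {post:.4f}")
# The posteriors always sum to 1, which is a quick sanity check.
print("sum of posteriors =", round(sum(posteriors), 10))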
(4) Useful results:

(a) Algebra of Sets:
(i) Commutative law:
    A ∩ B = B ∩ A
    A ∪ B = B ∪ A
(ii) Distributive law:
    A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
    A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
(iii) Associative law:
    (A ∪ B) ∪ C = A ∪ (B ∪ C) = A ∪ B ∪ C
    (A ∩ B) ∩ C = A ∩ (B ∩ C) = A ∩ B ∩ C
(iv) De Morgan's laws: the complement of A ∪ B is Ā ∩ B̄, and the complement of
     A ∩ B is Ā ∪ B̄.

(b) If A and B are independent, then (A and B̄), (Ā and B), and (Ā and B̄) are also
    independent.

(c) P(Ā/B) = 1 − P(A/B)

(d) P(A/B̄) = [P(A) − P(A ∩ B)] / [1 − P(B)]

(5) Mathematical Preliminaries:

(a) n! = n(n − 1)(n − 2) · · · 3 · 2 · 1 = n(n − 1)!,   0! = 1

(b) nCr = n! / [r! (n − r)!]
    nCr = nC(n−r),   nC1 = n = nC(n−1),   nC0 = 1 = nCn

(c) For an Arithmetic Progression (A.P.) a, a + d, a + 2d, · · · :
    Sum of n terms, Sn = na + [n(n − 1)/2] d

(d) For a Geometric Progression (G.P.) a, ar, ar², · · · :
    Sum of n terms, Sn = a(1 − r^n)/(1 − r) if r < 1, and Sn = a(r^n − 1)/(r − 1) if r > 1
    S∞ = a/(1 − r), for |r| < 1
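These preliminaries map directly onto Python's standard library; the short sketch
below (an added illustration, not part of the original text) verifies a few of the
identities numerically for assumed example values.

from math import comb, factorial

n, r = 10, 3
# (a) n! = n(n-1)!
assert factorial(n) == n * factorial(n - 1)
# (b) nCr = n!/(r!(n-r)!) and the symmetry nCr = nC(n-r)
assert comb(n, r) == factorial(n) // (factorial(r) * factorial(n - r))
assert comb(n, r) == comb(n, n - r)

# (c) A.P. sum: Sn = na + n(n-1)d/2
a, d, terms = 2, 3, 8
ap_sum = terms * a + terms * (terms - 1) * d // 2
assert ap_sum == sum(a + k * d for k in range(terms))

# (d) G.P. sum: Sn = a(1 - r^n)/(1 - r) for r < 1 (here r = 1/2)
a, ratio, terms = 1.0, 0.5, 6
gp_sum = a * (1 - ratio**terms) / (1 - ratio)
assert abs(gp_sum - sum(a * ratio**k for k in range(terms))) < 1e-12
print("all identities verified")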

Example 0.1 A coin is tossed. Find the probability of getting (a) a head (b) a tail
(c) both head and tail (d) either a head or a tail (e) 2 heads.

Solution: Let the experiment be E : tossing a coin.

Sample space S : {H, T}
n(S) = number of elements in S = 2 ( = 2¹, for 1 coin)

(a) Let A be the event of getting a head, i.e., A = {H}, n(A) = 1.
    ∴ P(A) = n(A)/n(S) = 1/2.

(b) Let B be the event of getting a tail, i.e., B = {T}, n(B) = 1.
    ∴ P(B) = n(B)/n(S) = 1/2.

(c) Let C be the event of getting both a head and a tail in one toss,
    i.e., C = A ∩ B = { } = φ, the null set; n(C) = 0.
    ∴ P(C) = n(C)/n(S) = 0/2 = 0    (⇒ C is an impossible event)

(d) Let D be the event of getting either a head or a tail,
    i.e., D = A ∪ B = {H} ∪ {T}; n(D) = 2.
    By the addition theorem,
    P(D) = P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
         = 1/2 + 1/2 − 0 = 1        (∵ by (a), (b) & (c))
    (⇒ D is a sure event.)

(e) Getting 2 heads in a single toss is impossible, so P(2 heads) = 0.

Example 0.2 Two coins are tossed. Find the probability of getting (a) 2 heads (b) 1
head (c) 0 heads (d) at least 1 head (e) at most 1 head.

Solution: Let the experiment be E : tossing 2 coins.

Sample space S : {HH, HT, TH, TT}
n(S) = number of elements in S = 4 ( = 2², for 2 coins)

(a) Let A be the event of getting 2 heads, i.e., A = {HH}, n(A) = 1.
    ∴ P(A) = n(A)/n(S) = 1/4.

(b) Let B be the event of getting exactly 1 head, i.e., B = {HT, TH}, n(B) = 2.
    ∴ P(B) = n(B)/n(S) = 2/4 = 1/2.

(c) Let C be the event of getting no heads, i.e., C = {TT}; n(C) = 1.
    ∴ P(C) = n(C)/n(S) = 1/4.

(d) Let D be the event of getting at least 1 head, i.e., the number of heads ≥ 1,
    i.e., D = {HH, HT, TH}, n(D) = 3.
    ∴ P(D) = n(D)/n(S) = 3/4.

(e) Let F be the event of getting at most 1 head, i.e., the number of heads ≤ 1,
    i.e., F = {HT, TH, TT}, n(F) = 3.
    ∴ P(F) = n(F)/n(S) = 3/4.

Example 0.3 A coin is tossed 3 times. Find the probability of getting (a) more heads
than tails (b) a tail only on odd-numbered tosses.

Solution: Let the experiment be E : tossing 3 coins.

Sample space S : {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}
n(S) = number of elements in S = 8 ( = 2³, for 3 coins)

(a) Let A be the event of getting more heads than tails,
    i.e., A = {HHH, HHT, HTH, THH}, n(A) = 4.
    ∴ P(A) = n(A)/n(S) = 4/8 = 1/2.

(b) Let B be the event that a tail occurs only on odd-numbered tosses,
    i.e., B = {HHT, THH, THT}, n(B) = 3.
    ∴ P(B) = n(B)/n(S) = 3/8.

Example 0.4 A die is thrown. Find the probability of getting (a) an odd number (b)
an even number (c) a multiple of 3 (d) a divisor of 6 (e) a number more than 2.
Solution: Let the experiment be E : throwing a die.
Sample space S : {1, 2, 3, 4, 5, 6}
n(S) = number of elements in S = 6 ( = 6¹, for 1 die)

(a) Let A be the event of getting an odd number, i.e., A = {1, 3, 5}, n(A) = 3.
    ∴ P(A) = n(A)/n(S) = 3/6 = 1/2.

(b) Let B be the event of getting an even number, i.e., B = {2, 4, 6}, n(B) = 3.
    ∴ P(B) = n(B)/n(S) = 3/6 = 1/2.

(c) Let C be the event of getting a multiple of 3, i.e., C = {3, 6}, n(C) = 2.
    ∴ P(C) = n(C)/n(S) = 2/6 = 1/3.

(d) Let D be the event of getting a divisor of 6, i.e., D = {1, 2, 3, 6}, n(D) = 4.
    ∴ P(D) = n(D)/n(S) = 4/6 = 2/3.

(e) Let F be the event of getting a number more than 2, i.e., F = {3, 4, 5, 6}, n(F) = 4.
    ∴ P(F) = n(F)/n(S) = 4/6 = 2/3.
    (or)
    P(F) = 1 − P(a number less than or equal to 2) = 1 − 2/6 = 2/3.

Example 0.5 Two dice are thrown. Find the probability of getting (a) a doublet (b) an
even number on the first die and an odd number on the second (c) a total of 7 (d) a
total more than 9 (e) a total less than or equal to 9.

Solution: Let the experiment be E : throwing 2 dice.

Sample space S :
(1,1), (1,2), (1,3), (1,4), (1,5), (1,6)
(2,1), (2,2), (2,3), (2,4), (2,5), (2,6)
(3,1), (3,2), (3,3), (3,4), (3,5), (3,6)
(4,1), (4,2), (4,3), (4,4), (4,5), (4,6)
(5,1), (5,2), (5,3), (5,4), (5,5), (5,6)
(6,1), (6,2), (6,3), (6,4), (6,5), (6,6)

n(S) = number of elements in S = 36 ( = 6², for 2 dice)

(a) A = {(1,1), (2,2), (3,3), (4,4), (5,5), (6,6)}, n(A) = 6.
    ∴ P(A) = n(A)/n(S) = 6/36 = 1/6.

(b) B = {(2,1), (2,3), (2,5), (4,1), (4,3), (4,5), (6,1), (6,3), (6,5)}, n(B) = 9.
    ∴ P(B) = n(B)/n(S) = 9/36 = 1/4.

(c) C = {(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)}, n(C) = 6.
    ∴ P(C) = n(C)/n(S) = 6/36 = 1/6.

(d) D = {(4,6), (5,5), (5,6), (6,4), (6,5), (6,6)}, n(D) = 6.
    ∴ P(D) = n(D)/n(S) = 6/36 = 1/6.

(e) P(F) = 1 − P(D) = 1 − 1/6 = 5/6.
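Results such as those in Example 0.5 can be double-checked by enumerating the
36-point sample space directly; the short Python sketch below (added here as an
illustration, not part of the original text) counts favorable outcomes for several of
the events above.

from itertools import product
from fractions import Fraction

# Enumerate the sample space of two dice: 36 equally likely ordered pairs.
sample_space = list(product(range(1, 7), repeat=2))
n_s = len(sample_space)                      # 36

def prob(event):
    """P(event) = (favorable outcomes) / (total outcomes)."""
    favorable = sum(1 for outcome in sample_space if event(outcome))
    return Fraction(favorable, n_s)

print("P(doublet)    =", prob(lambda o: o[0] == o[1]))        # 1/6
print("P(total is 7) =", prob(lambda o: sum(o) == 7))         # 1/6
print("P(total > 9)  =", prob(lambda o: sum(o) > 9))          # 1/6
print("P(total <= 9) =", prob(lambda o: sum(o) <= 9))         # 5/6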

Arrangement of a 52-card deck:

Black  Spade (♠)   : Ace, King, Queen, Jack, 10, 9, 8, 7, 6, 5, 4, 3, 2  — 13 cards
Black  Club (♣)    : Ace, King, Queen, Jack, 10, 9, 8, 7, 6, 5, 4, 3, 2  — 13 cards
Red    Heart (♥)   : Ace, King, Queen, Jack, 10, 9, 8, 7, 6, 5, 4, 3, 2  — 13 cards
Red    Diamond (♦) : Ace, King, Queen, Jack, 10, 9, 8, 7, 6, 5, 4, 3, 2  — 13 cards

Total: 2 colors, 4 suits, 4 cards of each denomination, 52 cards in all.

Example 0.6 From a well-shuffled deck of 52 playing cards, find the probability of
getting (a) a spade (b) 2 spades (c) a black king (d) 3 hearts (e) 4 diamonds (f) 5 kings
(g) 3 spades and 1 diamond (h) 2 kings, 2 aces and 2 queens (i) one card of each suit
when 4 cards are drawn (j) a spade or a king (k) a black card or a spade or an ace.
Solution: Let the experiment be E : drawing from a well-shuffled deck of 52 cards.
Sample space S : refer to the arrangement of the 52-card deck above.
n(S) = number of elements in S = 52

(a) P(a spade) = 13C1 / 52C1 = 13/52 = 1/4.

(b) P(two spades) = 13C2 / 52C2 = (13 × 12)/(52 × 51) = 3/51 = 1/17.

(c) P(a black king) = 2C1 / 52C1 = 2/52 = 1/26.

(d) P(3 hearts) = 13C3 / 52C3 = (13 × 12 × 11)/(52 × 51 × 50).

(e) P(4 diamonds) = 13C4 / 52C4.

(f) P(5 kings) = 0 (an impossible event, since there are only 4 kings).

(g) P(3 spades and 1 diamond) = (13C3 × 13C1) / 52C4.

(h) P(2 kings, 2 aces and 2 queens) = (4C2 × 4C2 × 4C2) / 52C6.

(i) P(one card of each suit) = (13C1 × 13C1 × 13C1 × 13C1) / 52C4.

(j) P(a spade or a king)
    = P(a spade) + P(a king) − P(a spade and a king)
    = 13C1/52C1 + 4C1/52C1 − 1/52 = 16/52 = 4/13.

(k) P(a black card or a spade or an ace)
    = P(A ∪ B ∪ C), where A = a black card, B = a spade, C = an ace
    = P(A) + P(B) + P(C) − P(A ∩ B) − P(B ∩ C) − P(C ∩ A) + P(A ∩ B ∩ C)
    = 26/52 + 13/52 + 4/52 − 13/52 − 1/52 − 2/52 + 1/52
    = 28/52 = 7/13.

Example 0.7 A box contains 4 orange, 5 white and 7 green balls. Find the probability
of getting (a) 2 orange, 3 white and 1 green all together at a time (b) 2 orange or
(3 white and 1 green) (c) 2 orange, 3 white and 1 green successively with replacement
(d) 2 orange, 3 white and 1 green successively without replacement (e) 2 orange, then
1 orange and 3 white successively with replacement (f) 2 orange, then 1 orange and
3 white successively without replacement.
Solution: The box holds 4 orange + 5 white + 7 green = 16 balls.
Let A be the event of getting 2 orange,
let B be the event of getting 3 white,
let C be the event of getting 1 green.

(a) P(A ∩ B ∩ C) = (4C2 × 5C3 × 7C1) / 16C6.

(b) P(A ∪ (B ∩ C)) = P(A) + P(B ∩ C) − P[A ∩ (B ∩ C)]   (∵ by the addition theorem)
                   = 4C2/16C2 + (5C3 × 7C1)/16C4 − 0.

(c) Let F be the event of getting 2 orange: P(F) = 4C2/16C2.
    Let G be the event of getting 3 white given that F has occurred, with replacement:
    ∴ P(G/F) = 5C3/16C3.
    Let H be the event of getting 1 green given that F ∩ G has occurred, with replacement:
    ∴ P(H/(F ∩ G)) = 7C1/16C1.
    ∴ P(F ∩ G ∩ H) = P(F) · P(G/F) · P(H/(F ∩ G)) = (4C2/16C2)(5C3/16C3)(7C1/16C1)
    Here F, G, H are independent events.

(d) P(F) = 4C2/16C2
    P(G/F) = 5C3/14C3          [∵ G occurs after F, without replacement]
    P(H/(F ∩ G)) = 7C1/11C1    [∵ H occurs after (F ∩ G), without replacement]
    ∴ P(F ∩ G ∩ H) = P(F) · P(G/F) · P(H/(F ∩ G)) = (4C2/16C2)(5C3/14C3)(7C1/11C1)
    Here F, G, H are dependent events.

(e) Let I be the event of getting 2 orange: P(I) = 4C2/16C2.
    Let J be the event of getting 1 orange given that I has occurred, with replacement:
    ∴ P(J/I) = 4C1/16C1.
    Let K be the event of getting 3 white given that I ∩ J has occurred, with replacement:
    ∴ P(K/(I ∩ J)) = 5C3/16C3.
    ∴ P(I ∩ J ∩ K) = P(I) · P(J/I) · P(K/(I ∩ J)) = (4C2/16C2)(4C1/16C1)(5C3/16C3)
    Here I, J, K are independent events.

(f) P(I) = 4C2/16C2
    P(J/I) = 2C1/14C1          [∵ J occurs after I, without replacement]
    P(K/(I ∩ J)) = 5C3/13C3    [∵ K occurs after (I ∩ J), without replacement]
    ∴ P(I ∩ J ∩ K) = P(I) · P(J/I) · P(K/(I ∩ J)) = (4C2/16C2)(2C1/14C1)(5C3/13C3)
    Here I, J, K are dependent events.
Example 0.8 A bag contains balls numbered 1 to 100. One ball is drawn at random.
Find the probability of getting (a) a multiple of 5 (b) a multiple of 7 (c) a multiple of
5 or 7 (d) an odd number or a multiple of 5.

Solution: S = {1, 2, 3, · · ·, 100}, n(S) = 100.

(a) Let A be the event of getting a multiple of 5, i.e.,
    A = {5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 100},
    n(A) = 100/5 = 20.
    ∴ P(A) = n(A)/n(S) = 20/100 = 1/5.

(b) Let B be the event of getting a multiple of 7,
    i.e., B = {7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84, 91, 98}, n(B) = 14.
    P(B) = n(B)/n(S) = 14/100 = 7/50.

(c) Let C be the event of getting a multiple of 5 or a multiple of 7.
    P(C) = P(A ∪ B) = P(A) + P(B) − P(A ∩ B)     (∵ by the addition theorem)
         = 20/100 + 14/100 − 2/100 = 32/100
    (A ∩ B is the event of getting a multiple of both 5 and 7, i.e., of 35: {35, 70}.)

(d) Let D be the event of getting an odd number, i.e., D = {1, 3, 5, · · ·, 99}, n(D) = 50.
    P(D) = 50/100 = 1/2.
    From (a), P(A) = 20/100 = 1/5, where A is the event of getting a multiple of 5.
    Let F be the event of getting an odd number or a multiple of 5, i.e., F = D ∪ A.
    ∴ P(F) = P(D ∪ A) = P(D) + P(A) − P(D ∩ A)   (∵ by the addition theorem)
           = 50/100 + 20/100 − 10/100 = 60/100 = 3/5
    (D ∩ A is the event of getting an odd multiple of 5: {5, 15, · · ·, 95}, 10 numbers.)

Example 0.9 Five persons are selected at random from a group of 4 men, 4 women
and 4 children. Find the probability that the selection contains (a) exactly 2 children
(b) exactly 1 child.

Solution: Total size of the group = 4 + 4 + 4 = 12, and 5 persons are to be selected
in all.

(a) We select 2 children from the 4 children and the remaining 3 from the 4 men and
    4 women (8 persons).
    ∴ P(selection of 5 persons with exactly 2 children) = (4C2 × 8C3) / 12C5.

(b) ∴ P(selection of 5 persons with exactly 1 child) = (4C1 × 8C4) / 12C5.

Example 0.10 A problem is given to 3 students A, B, C whose chances of solving it
are 1/2, 1/3, 1/4 respectively. Find the probability that the problem is solved by (a)
none of them (b) exactly one (c) exactly two (d) all three (e) at least one.

Solution: Given P(A) = 1/2, P(B) = 1/3, P(C) = 1/4.
The students A, B, C solve the problem independently.
∴ P(Ā) = 1 − P(A) = 1 − 1/2 = 1/2 = P(problem not solved by A)
  P(B̄) = 1 − P(B) = 1 − 1/3 = 2/3 = P(problem not solved by B)
  P(C̄) = 1 − P(C) = 1 − 1/4 = 3/4 = P(problem not solved by C)

(a) P(none of the students solves the problem) = P(Ā ∩ B̄ ∩ C̄)
    = P(Ā) P(B̄) P(C̄) = (1/2)(2/3)(3/4) = 1/4

(b) P(problem solved by exactly one)
    = P(A ∩ B̄ ∩ C̄) + P(Ā ∩ B ∩ C̄) + P(Ā ∩ B̄ ∩ C)
    = P(A)P(B̄)P(C̄) + P(Ā)P(B)P(C̄) + P(Ā)P(B̄)P(C)
    = (1/2)(2/3)(3/4) + (1/2)(1/3)(3/4) + (1/2)(2/3)(1/4) = (6 + 3 + 2)/24
    = 11/24

(c) P(problem solved by exactly two)
    = P(A ∩ B ∩ C̄) + P(A ∩ B̄ ∩ C) + P(Ā ∩ B ∩ C)
    = P(A)P(B)P(C̄) + P(A)P(B̄)P(C) + P(Ā)P(B)P(C)
    = (1/2)(1/3)(3/4) + (1/2)(2/3)(1/4) + (1/2)(1/3)(1/4) = (3 + 2 + 1)/24
    = 6/24 = 1/4

(d) P(problem solved by all 3 students)
    = P(A ∩ B ∩ C) = P(A)P(B)P(C)
    = (1/2)(1/3)(1/4) = 1/24

(e) P(problem solved by at least one student)
    = P(number of students solving the problem ≥ 1)
    = 1 − P(number of students solving the problem = 0)
    = 1 − P(none of them solves it)
    = 1 − 1/4 = 3/4
Example 0.11 What is the chance that a leap year selected at random will contain
53 Sundays?

Solution: We know that a leap year has 366 days; 366/7 gives 52 complete weeks and
2 days over.
The remaining 2 days may be any of the following combinations:
(1) Sunday, Monday (2) Monday, Tuesday (3) Tuesday, Wednesday
(4) Wednesday, Thursday (5) Thursday, Friday (6) Friday, Saturday
(7) Saturday, Sunday
Of these 7 equally likely pairs, 2 contain a Sunday.
∴ P(53 Sundays in a leap year) = 2/7
Example 0.12 Box A contains 5 orange and 7 green balls. Box B contains 4 orange
and 6 green balls. One box is chosen at random. Find the probability (a) of getting
2 orange balls (b) of getting 3 green balls (c) that, given 2 orange balls were obtained,
they came from box A (d) that, given 3 green balls were obtained, they came from box B.

Solution: Given
Box A : 5 orange + 7 green = 12 balls
Box B : 4 orange + 6 green = 10 balls
The total number of boxes is 2 and the box is selected at random,
so P(selection of a box) = 1/2.

(a) P(2 orange) = P(2 orange from A or B)
    = P(first enter box A) P(take 2 orange from A)
      + P(first enter box B) P(take 2 orange from B)     (∵ only one box is selected)
    = (1/2)(5C2/12C2) + (1/2)(4C2/10C2).                  (⇒ Total Probability)

(b) P(3 green) = P(3 green from A or B)
    = P(first enter box A) P(take 3 green from A)
      + P(first enter box B) P(take 3 green from B)       (∵ only one box is selected)
    = (1/2)(7C3/12C3) + (1/2)(6C3/10C3).                  (⇒ Total Probability)

(c) P(box A / 2 orange) = P(select box A and get 2 orange) / P(2 orange from A or B)
    = [(1/2)(5C2/12C2)] / [(1/2)(5C2/12C2) + (1/2)(4C2/10C2)]      (∵ Bayes' theorem)

(d) P(box B / 3 green) = P(select box B and get 3 green) / P(3 green from A or B)
    = [(1/2)(6C3/10C3)] / [(1/2)(7C3/12C3) + (1/2)(6C3/10C3)]      (∵ Bayes' theorem)

Example 0.13 A factory has three machines A, B and C which produce items in the
proportion 2 : 6 : 3. Of the items produced by A, B and C, 50%, 70% and 90%
respectively are known to be of standard quality. An item selected at random from a
day's production is found to be of standard quality. What is the chance that it came
from machine C?

Solution: E1 : the item is produced by machine A, P(E1) = 2/11.
E2 : the item is produced by machine B, P(E2) = 6/11.
E3 : the item is produced by machine C, P(E3) = 3/11.
A : the item is of standard quality.
P(A/E1) = 50% = 0.5, P(A/E2) = 70% = 0.7, P(A/E3) = 90% = 0.9

P(E3/A) = P(E3)P(A/E3) / [P(E1)P(A/E1) + P(E2)P(A/E2) + P(E3)P(A/E3)]
                                                            (∵ Bayes' theorem)
        = (3/11 × 0.9) / (2/11 × 0.5 + 6/11 × 0.7 + 3/11 × 0.9) ≈ 0.34
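As a quick numerical cross-check of this Bayes' theorem calculation (an illustration
added here, not part of the original text), the fractions 2/11, 6/11, 3/11 and the
quality rates can be plugged into a few lines of Python:

from fractions import Fraction

# Priors P(E1), P(E2), P(E3) from the production proportion 2 : 6 : 3.
priors = [Fraction(2, 11), Fraction(6, 11), Fraction(3, 11)]
# Likelihoods P(A/Ei): probability an item from each machine is of standard quality.
quality = [Fraction(1, 2), Fraction(7, 10), Fraction(9, 10)]

p_a = sum(p * q for p, q in zip(priors, quality))          # total probability P(A)
p_c_given_a = priors[2] * quality[2] / p_a                 # Bayes: P(E3/A)

print("P(A)    =", p_a)                                    # 79/110
print("P(E3/A) =", p_c_given_a, "≈", float(p_c_given_a))   # 27/79 ≈ 0.34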

1 UNIT I - RANDOM VARIABLES

Discrete and continuous random variables - Moments - Moment
generating functions and their properties. Binomial, Poisson,
Geometric, Uniform, Exponential, Gamma and Normal distributions
- Function of Random Variable.

A random variable is a real-valued function defined over the sample space of an
experiment, i.e.,
a random variable is a function X(s) which assigns a real number x to every element
s of the sample space S corresponding to the random experiment E.

For example, consider the experiment of tossing a coin twice, and define the random
variable X as the number of heads. Then

Sample space : HH  HT  TH  TT
Value of X   :  2   1   1   0

The range space of the r.v. X is RX = {x ∈ R : x = X(s), s ∈ S} = {0, 1, 2}.

∴ P(X = 2) = 1/4, P(X = 1) = 2/4, P(X = 0) = 1/4

NOTE : In the above example, the r.v. X is a discrete random variable.
Comparison of discrete and continuous random variables:

Definition
  Discrete: A random variable which can take only a finite or countably infinite
  number of values is called a discrete random variable.
  Continuous: A random variable which can take uncountably infinite values, or all
  values in an interval, is called a continuous random variable.

Example
  Discrete: (1) The number of telephone calls received by a telephone operator.
  (2) The number of printing mistakes in a book.
  Continuous: (1) The duration of a telephone conversation.
  (2) The path of an aeroplane from Chennai to Hyderabad.

Probability function
  Discrete: probability mass function (p.m.f.), also called the probability function or
  point probability function, P(X = xi) = Pi = P(xi), where
  (i) Pi ≥ 0   (ii) Σ (over all i) Pi = 1.
  Continuous: probability density function (p.d.f.) f(x), where
  (i) f(x) ≥ 0   (ii) ∫ from −∞ to ∞ of f(x) dx = 1.

Cumulative distribution function (c.d.f.), F(x) = P(X ≤ x)
  Discrete: F(x) = Σ (over xi ≤ x) P(X = xi)
  Continuous: F(x) = ∫ from −∞ to x of f(x) dx

Mean of X, E(X)
  Discrete: E(X) = Σ (over all x) x P(X = x)
  Continuous: E(X) = ∫ from −∞ to ∞ of x f(x) dx

Variance of X, V(X) = E(X²) − [E(X)]²
  Discrete: E(X²) = Σ (over all x) x² P(X = x)
  Continuous: E(X²) = ∫ from −∞ to ∞ of x² f(x) dx

Moment generating function (m.g.f.) of X, MX(t) = E(e^(tX))
  Discrete: E(e^(tX)) = Σ (over all x) e^(tx) P(X = x)
  Continuous: E(e^(tX)) = ∫ from −∞ to ∞ of e^(tx) f(x) dx

Probability distribution
  Discrete: {xi, P(xi)}
  Continuous: {x, f(x)}

Relation between interval probabilities and F(x)
  Discrete:
    P(a < X < b)  = F(b) − F(a) − P(X = b)
    P(a ≤ X < b)  = F(b) − F(a) + P(X = a) − P(X = b)
    P(a < X ≤ b)  = F(b) − F(a)
    P(a ≤ X ≤ b)  = F(b) − F(a) + P(X = a)
  Continuous (endpoints carry zero probability):
    P(a < X < b) = P(a ≤ X < b) = P(a < X ≤ b) = P(a ≤ X ≤ b) = ∫ from a to b of f(x) dx
(1) Properties of the cumulative distribution function (c.d.f.):
(a) 0 ≤ F(x) ≤ 1
(b) F(x1) ≤ F(x2) if x1 < x2 (F is non-decreasing)
(c) F(−∞) = lim (x → −∞) F(x) = 0 and F(∞) = lim (x → ∞) F(x) = 1

(2) For a continuous r.v.:
P(a ≤ X < b) = F(b) − F(a)
Also, d F(x)/dx = f(x)

(3) Variance of X = Var(X) = E[(X − X̄)²], where X̄ = E(X), and
Standard deviation of X = σX = √Var(X)

(a) Var(X) = Σ (over x) (x − X̄)² P(x) for a discrete r.v. X,
    Var(X) = ∫ (over x) (x − X̄)² f(x) dx for a continuous r.v. X

(b) Var(aX + b) = Var(aX) + Var(b) = a² Var(X) + 0 = a² Var(X)

(4) Theorems on Expectation
(a) E(a) = a
(b) E(aX) = a E(X)
(c) E(aX + b) = a E(X) + b
(d) E(k1X1 + k2X2 + · · · + knXn) = E(k1X1) + E(k2X2) + · · · + E(knXn)
(e) If X1, X2, · · ·, Xn are independent r.v.s, then
    E(X1X2 · · · Xn) = E(X1) E(X2) · · · E(Xn)
Moments

(a) rth moment about the origin (raw moment): μ'r = E(X^r)
(i) μ'r = E(X^r) = Σ (over x) x^r P(x) for a discrete r.v. X,
    μ'r = ∫ from −∞ to ∞ of x^r f(x) dx for a continuous r.v. X
(ii) Note:
    μ'0 = 1
    μ'1 = Mean(X) = E(X)
    μ'2 = E(X²)

(b) rth moment about the mean (central moment): μr = E[(X − X̄)^r]
(i) μr = Σ (over x) (x − X̄)^r P(x) for a discrete r.v. X,
    μr = ∫ from −∞ to ∞ of (x − X̄)^r f(x) dx for a continuous r.v. X
(ii) Note:
    μ1 = 0
    μ2 = Var(X) = μ'2 − (μ'1)²
    μ3 = μ'3 − 3 μ'2 μ'1 + 2 (μ'1)³
    μ4 = μ'4 − 4 μ'3 μ'1 + 6 μ'2 (μ'1)² − 3 (μ'1)⁴
    In general, μr = μ'r − rC1 μ'(r−1) μ'1 + rC2 μ'(r−2) (μ'1)² − · · ·
(iii) MX(t) = Σ (r = 0 to ∞) μ'r t^r / r!, where μ'r = E(X^r) is the rth raw moment.
(iv) M.G.F. of X about X = a :
    [MX(t)] about x = a is E[e^(t(X−a))] = Σ (r = 0 to ∞) μ'r t^r / r!,
    where μ'r = E[(X − a)^r].

(c) Properties of the M.G.F.:
(i) μ'r = coefficient of t^r / r! in the expansion of MX(t) in a series of ascending
    powers of t.
(ii) μ'r = [d^r MX(t) / dt^r] evaluated at t = 0
    ∴ μ'1 = [M'X(t)] at t = 0 and μ'2 = [M''X(t)] at t = 0
(iii) M(cX)(t) = MX(ct)
(iv) If Y = aX + b, then MY(t) = e^(bt) MX(at)
(v) If X1, X2, · · ·, Xn are independent r.v.s, then
    M(X1 + X2 + · · · + Xn)(t) = MX1(t) · MX2(t) · · · MXn(t)
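Property (ii) — raw moments as derivatives of the MGF at t = 0 — can be illustrated
symbolically; the sketch below (an added illustration, not part of the original text)
uses the sympy library to recover the mean and second moment of an exponential r.v.
from its MGF α/(α − t).

import sympy as sp

t, a = sp.symbols("t alpha", positive=True)
M = a / (a - t)                      # MGF of the exponential distribution

mu1 = sp.diff(M, t, 1).subs(t, 0)    # first raw moment  = E(X)   = 1/alpha
mu2 = sp.diff(M, t, 2).subs(t, 0)    # second raw moment = E(X^2) = 2/alpha^2
var = sp.simplify(mu2 - mu1**2)      # Var(X) = E(X^2) - [E(X)]^2 = 1/alpha^2

print(mu1, mu2, var)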

1.1 Discrete distributions

Why distributions? If 2 or 3 coins are tossed, we can easily write down the sample
space and compute probabilities directly from the experiment. But if 10 or 100 or more
coins are tossed, writing out the sample space (for instance, during an exam) is not
practical. So we use distributions.
Refer to the examples:
Example 0.2   [probability computed directly]
Example 1.1   [probability computed via a random variable]
Example 1.17  [probability computed via the Binomial distribution]
*** All the above examples give the same results for the same experiment.

1.1.1 Binomial Distribution

Assumptions:

(1) The random experiment has exactly two possible outcomes (success or failure).

(2) The number of trials is finite.

(3) The trials are independent.

(4) The probability of success is constant from trial to trial.

Formulae:

(1) P(X = x) = nCx p^x q^(n−x), x = 0, 1, 2, 3, ..., n, with p + q = 1

(2) The frequency function, i.e., the expected number among 'N' sets of 'n' trials each
    having 'x' successes, is f(x) = N[P(X = x)] = N[nCx p^x q^(n−x)]
where
n → number of trials
x → value of the random variable X
p → probability of success in a single trial
q → probability of failure in a single trial
X → the (discrete) random variable representing the number of successes, which
    follows the Binomial distribution
N → number of times the set of 'n' trials is repeated (i.e., total number of sets).

Note : Symbolically, X ∼ Binomial distribution (n, p), where '∼' means 'follows'.
P.M.F. : P(X = x) = nCx p^x q^(n−x),  Mean = np,  Variance = npq,
M.G.F. = (p e^t + q)^n
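As a small sanity check of these formulae (an illustration added here, not part of the
original text), the p.m.f., mean and variance of a binomial random variable can be
computed directly in Python and compared with np and npq, using assumed example
parameters n = 10, p = 0.3:

from math import comb

n, p = 10, 0.3          # assumed example parameters
q = 1 - p

# Binomial p.m.f.: P(X = x) = nCx p^x q^(n-x)
pmf = [comb(n, x) * p**x * q**(n - x) for x in range(n + 1)]

mean = sum(x * px for x, px in enumerate(pmf))
var = sum(x**2 * px for x, px in enumerate(pmf)) - mean**2

print(f"sum of pmf = {sum(pmf):.6f}")                 # 1.000000
print(f"mean = {mean:.4f}  (np  = {n * p:.4f})")      # 3.0
print(f"var  = {var:.4f}  (npq = {n * p * q:.4f})")   # 2.1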

1.1.2 Poisson Distribution

Assumptions:
(1) The number of trials is indefinitely large, i.e., n → ∞.
(2) The probability of success is very small, i.e., p → 0 (with p + q = 1), while
    np = λ remains finite.

Formulae:

(1) P(X = x) = e^(−λ) λ^x / x!,  x = 0, 1, 2, 3, · · · ∞.

(2) The frequency function, i.e., the expected number among 'N' sets of 'n' trials each
    having 'x' successes, is f(x) = N[P(X = x)] = N[e^(−λ) λ^x / x!]
where
x → value of the random variable X
X → the (discrete) random variable representing the number of successes, which
    follows the Poisson distribution
N → number of times the set of 'n' trials is repeated (i.e., total number of sets).

Note : Symbolically, X ∼ Poisson distribution (λ = np)
P.M.F. : P(X = x) = e^(−λ) λ^x / x!,
Mean = λ,
Variance = λ,
M.G.F. = e^(λ(e^t − 1))
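The Poisson p.m.f. arises as a limit of the binomial when n is large and p is small with
np = λ fixed; the sketch below (an added illustration, not part of the original text)
compares the two numerically for assumed values n = 1000, p = 0.002, so λ = 2.

from math import comb, exp, factorial

n, p = 1000, 0.002          # assumed: many trials, small success probability
lam = n * p                 # λ = np = 2

for x in range(5):
    binom = comb(n, x) * p**x * (1 - p)**(n - x)
    poisson = exp(-lam) * lam**x / factorial(x)
    print(f"x={x}:  binomial={binom:.5f}  poisson={poisson:.5f}")
# The two columns agree to about three decimal places, illustrating the limit.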

1.1.3 Geometric Distribution

Assumptions:
Independent trials are performed until the first success occurs. The random variable
may count:
the number of trials to get the first success
(or) the number of trials required to get the first success
(or) the number of trials needed to obtain the first success
(or) the number of failures preceding the first success,
corresponding to the two forms of the p.m.f. given below.

Formulae:

P(X = x) = q^x p, x = 0, 1, 2, · · · ∞, with p + q = 1
(or)
P(X = x) = q^(x−1) p, x = 1, 2, 3, · · · ∞, with p + q = 1
(and similarly shifted forms), where
x → value of the random variable X
X → the (discrete) random variable representing the number of trials (or failures),
    which follows the Geometric distribution
p → probability of success in a single trial
q → probability of failure in a single trial

Note : Symbolically, X ∼ Geometric distribution (p)
P.M.F. : P(X = x) = q^x p,      Mean = q/p,  Variance = q/p²,  M.G.F. = p/(1 − q e^t)
(or)
P.M.F. : P(X = x) = q^(x−1) p,  Mean = 1/p,  Variance = q/p²,  M.G.F. = p e^t/(1 − q e^t)

1.1.4 Memoryless property of the Geometric Distribution:

If X is a geometrically distributed random variable, then

P[X > (s + t) / X > s] = P[X > t],   for non-negative integers s, t.
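A quick numerical check of the memoryless property (an added illustration, not part
of the original text), using the form P(X = x) = q^(x−1) p, x = 1, 2, · · ·, for which
P[X > t] = q^t:

p = 0.3                     # assumed success probability
q = 1 - p

def tail(t):
    """P[X > t] = q^t for the geometric r.v. with pmf q^(x-1) p, x = 1, 2, ..."""
    return q**t

s, t = 4, 3
lhs = tail(s + t) / tail(s)     # P[X > s+t | X > s]
rhs = tail(t)                   # P[X > t]
print(lhs, rhs, abs(lhs - rhs) < 1e-12)   # the two agree: memorylessness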

1.2 Continuous Distributions

1.2.1 Uniform distribution (Rectangular distribution):

A continuous random variable X is said to follow the uniform distribution over a
finite interval if its p.d.f. is constant in that interval; the p.d.f. is given by

f(x) = 1/(b − a),  a < x < b

Note : Symbolically, X ∼ Uniform distribution (parameters a, b)
P.D.F. : f(x) = 1/(b − a),
Mean = (a + b)/2,
Variance = (b − a)²/12,
M.G.F. = 1 + (b + a)t/2! + (b² + ba + a²)t²/3! + · · ·

1.2.2 Exponential distribution

A continuous random variable X is said to follow the exponential distribution with
parameter α > 0 if its p.d.f. is given by

f(x) = α e^(−αx), x ≥ 0

Note : Symbolically, X ∼ Exponential distribution (α)
If X ∼ Exponential distribution (λ), then f(x) = λ e^(−λx), x ≥ 0, λ > 0
P.D.F. : f(x) = α e^(−αx),
Mean = 1/α,
Variance = 1/α²,
M.G.F. = α/(α − t)

1.2.3 Memoryless property of the exponential distribution:

If X is an exponentially distributed random variable, then

P[X > (s + t) / X > s] = P[X > t] = e^(−αt),  s > 0, t > 0
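The memoryless property can also be checked by simulation; the sketch below (an
added illustration, not part of the original text) estimates P[X > s + t | X > s] and
P[X > t] for an assumed rate α = 0.5 and compares both with e^(−αt).

import math
import random

random.seed(0)
alpha = 0.5                     # assumed rate parameter
s, t = 2.0, 3.0
n = 200_000

samples = [random.expovariate(alpha) for _ in range(n)]

beyond_s = [x for x in samples if x > s]
lhs = sum(x > s + t for x in beyond_s) / len(beyond_s)   # P[X > s+t | X > s]
rhs = sum(x > t for x in samples) / n                    # P[X > t]

print(f"conditional   ≈ {lhs:.4f}")
print(f"unconditional ≈ {rhs:.4f}")
print(f"exact e^(-αt) = {math.exp(-alpha * t):.4f}")     # 0.2231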

1.2.4 Gamma distribution

A continuous random variable X is said to follow the Gamma distribution with
parameter β > 0 if its p.d.f. is given by

f(x) = e^(−x) x^(β−1) / Γ(β),  β > 0, 0 ≤ x < ∞

Note : Γ(β) = ∫ from 0 to ∞ of e^(−x) x^(β−1) dx
       Γ(β + 1) = ∫ from 0 to ∞ of e^(−x) x^β dx
       Γ(n + 1) = n!, if n is a positive integer; Γ(n + 1) = n Γ(n), if n is a fraction.

Note : Symbolically, X ∼ Gamma distribution (β)
P.D.F. : f(x) = e^(−x) x^(β−1) / Γ(β),
Mean = β,  Variance = β,  M.G.F. = 1/(1 − t)^β

1.2.5 Erlang distribution (Generalized Gamma distribution)

A continuous random variable X is said to follow the Erlang distribution with
parameters α, β > 0 if its p.d.f. is given by

f(x) = α^β e^(−αx) x^(β−1) / Γ(β),  α, β > 0, 0 ≤ x < ∞

Note : Symbolically, X ∼ Erlang distribution (α, β)
If α = 1, the Erlang distribution becomes the Gamma distribution, e^(−x) x^(β−1)/Γ(β).
If β = 1, the Erlang distribution becomes the Exponential distribution, α e^(−αx).
1.2.6 Normal distribution

A continuous random variable X is said to follow the Normal distribution with
parameters mean μ and variance σ², and its p.d.f. is given by

[Figure: the bell-shaped normal curve, centred at x = μ (i.e., z = 0) and extending
from −∞ to ∞.]

f(x) = (1/(σ√(2π))) e^(−(1/2)((x−μ)/σ)²);  −∞ < x < ∞, −∞ < μ < ∞, σ > 0;
z = (x − μ)/σ

Note : Symbolically, X ∼ Normal distribution (μ, σ²)
P.D.F. : f(x) = (1/(σ√(2π))) e^(−(1/2)((x−μ)/σ)²),
M.G.F. = e^(tμ) e^(σ²t²/2),
Mean = μ,
Variance = σ²
1.2.7 Properties of the Normal distribution

(1) The normal probability curve is the bell-shaped curve, symmetric about x = μ.

(2) P(x1 < X < x2) = P(z1 < Z < z2) = ∫ from z1 to z2 of φ(z) dz,
    where z1 = (x1 − μ)/σ, z2 = (x2 − μ)/σ   (∵ z = (x − μ)/σ)
    and φ(z) = (1/√(2π)) e^(−z²/2) is the p.d.f. of the standard normal variate,
    where Z ∼ N(0, 1).

(3) P(−∞ < Z < 0) = P(0 < Z < ∞) = 1/2.

(4) The standard normal table gives the area under the standard normal curve
    between the lines z = 0 and z = z1.

(5) The number of cases with x1 < X < x2 out of a total of N cases is
    N[P(x1 < X < x2)] = N P(z1 < Z < z2),  Z = (X − μ)/σ.
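In practice the probability in property (2) is read from a standard normal table or
computed numerically; the sketch below (an added illustration, not part of the original
text) evaluates P(x1 < X < x2) for assumed values μ = 50, σ = 10 using the error
function, Φ(z) = ½[1 + erf(z/√2)].

from math import erf, sqrt

def std_normal_cdf(z):
    """Φ(z) = P(Z <= z) for Z ~ N(0, 1), via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 50.0, 10.0        # assumed parameters
x1, x2 = 45.0, 62.0

z1, z2 = (x1 - mu) / sigma, (x2 - mu) / sigma
prob = std_normal_cdf(z2) - std_normal_cdf(z1)    # P(x1 < X < x2) = P(z1 < Z < z2)
print(f"z1 = {z1:.2f}, z2 = {z2:.2f}, P(x1 < X < x2) ≈ {prob:.4f}")
# With N = 1000 cases, the expected count in the interval is N * prob.
print("expected count out of 1000 cases ≈", round(1000 * prob))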

1.3 M.G.F., Mean and Variance of Distributions

1.3.1 M.G.F., Mean and Variance of Binomial Distribution

P(X = x) = nCx px qn−x , x = 0, 1, · · · , n where p + q = 1


M.G.F. of X:
 
MX (t) = E etX
Xn
= etx P(X = x)

n
x=0
n n

g.i
X X  x
= tx
e nCx p q x n−x
= nCx e p qn−x
t

x=0 x=0

n
0  1  2  n
= nC0 e p q +nC1 e p q +nC2 e p q +· · ·+nCn et p q0
t n t n−1 t n−2

eri
 1  2  n
= qn + nC1 et p qn−1 + nC2 et p qn−2 + · · · + et p
 n
ine
= pe + q
t

Mean of X:
( )
d
ng

E (X) = [MX (t)]


dt t=0
( )
d
E

h  ni
  n−1 
= pe + q
t
= n pe + q
t
pet
dt t=0
arn

t=0
n−1
= np p + q
= np
Le

Variance of X :
 
Var(X) = E X2 − [E (X)]2
w.

( 2 ) (  )
  d d  n−1 
Now, E X2 = 2
[MX (t)] = np pet + q et
dt dt
ww

t=0 t=0
  n−2   n−1 
= np (n − 1) pet + q pet et + pet + q et
t=0
= np (n − 1)p + 1
 

= n2 p2 − np2 + np
2
∴ Var(X) = n2 p2 − np2 + np − np
= np − np2 = np 1 − p


= npq
∴ PMF :P(X = x) = nCx px qn−x Mean = np
 n
M.G.F. = pet + q Variance = npq
[Link]
[Link]
28 UNIT I - Random Variables

1.3.2 M.G.F., Mean and Variance of Poisson Distribution

e−λ λx
P(X = x) = , x = 0, 1, 2, · · ·
x!
M.G.F. of X:
  X∞
MX (t) = E e =
tX
etx P(X = x)
x=0
∞ ∞ x
X
tx e λ
−λ x X et λ
= e =e −λ

x=0
x! x=0
x!

n
t 0 t 1 t 2
 
λ λ λ
  
e e e

g.i
= e−λ  + + + · · · 

 
0! 1! 2!
= e−λ eλe
t

n
= eλ(e −1)
t

Mean of X:
d
( ) (
d h λ(et −1) i
) eri
ine
E (X) = [MX (t)] = e
dt t=0
dt
( )t=0
d −λ λet
h i
= e e
ng

dt
( )t=0
d λet
= e−λ λet eλe
h i n t
o
= e−λ e
E

dt t=0
t=0
arn

= e−λ λeλ
n o


Le

Variance of X :
 
Var(X) = E X2 − [E (X)]2
w.

( 2 ) ( )
d d
e−λ λet eλe
  h t
i
Now, E X2 = 2
[MX (t)] =
dt t=0
dt t=0
ww

( )
d t λet
= e−λ λ et eλe λet + et eλe
h i nh t t
io
= e−λ λ ee
dt t=0
o t=0
= e−λ λ eλ λ + eλ
n

= λ2 + λ
∴ Var(X) = λ2 + λ − (λ)2

e−λ λx
∴ PMF :P(X = x) = Mean = λ
x!
λ(et −1) Variance = λ
M.G.F. = e
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 29

1.3.3 M.G.F., Mean and Variance of Geometric Distribution

P(X = x) = qx p, x = 0, 1, 2, · · ·
M.G.F. of X:  
MX (t) = E etX

X ∞
X ∞ 
X x
= e P(X = x) =
tx
e q p=p
tx x t
eq
x=0
 0  1 x=0  2 x=0   1  2 
= p e q + e q + e q + ···∞ = p 1 + e q + e q + ···∞
t t t t t

n
h  i−1 p
= p 1 − et q =

g.i
1 − et q
Mean of X: ( )

n
d
E (X) = [MX (t)]

eri
dt
( " #)t=0 (  )
d p d  −1 
= t
= p 1 − et q
dt 1 − e q t=0 dt
ine
t=0
  −2  
= p (−1) 1 − et q −et q
t=0
 pq
ng

−2
= p (−1) 1 − q −q = 2
p
q
E

=
p
arn

Variance of X :
 
Var(X) = E X − [E (X)]2 2
Le

( 2 )
  d
Now, E X2 = [MX (t)]
dt2 t=0
w.

(  ) (  )
d  −2   d  −2 
= p (−1) 1 − et q −et q = pq 1 − et q et
dt t=0
dt t=0
ww

 −2  −3  


= pq 1 − et q et + et (−2) 1 − et q −et q
t=0
nh −2 −3 io
= pq 1 − q + (−2) 1 − q −q
q 2q2
( )
n o pq pq
= pq p + 2q p = 2 + 3 2q = + 2
−2 −3
 
p p p p
2
q 2q2 q q2 qp + q2 q p + q
! 
q q
∴ Var(X) = + 2 − = + 2= = =
p p p p p p2 p2 p2
q
∴ PMF :P(X = x) = qx p Mean =
p
p q
M.G.F. = t Variance =
1−eq p2
[Link]
[Link]
30 UNIT I - Random Variables

1.3.4 M.G.F., Mean and Variance of Uniform Distribution


1
f (x) = ,a < x < b
b−a
M.G.F. of X:
 
MX (t) = E e tX

Z ∞ Z b Z b
1 1
= etx f (x)dx = etx dx = etx dx
−∞ a b−a b−a a
bt (bt)2 at (at)2

n
" # " #
!b 1+ + + ··· − 1 + + + ···

g.i
1 etx etb − eta 1! 2! 1! 2!
= = =
b − a t a t (b − a) t (b − a)

n
   
2 2 2
(b−a) t b −a t b3 −a3 t3
+ + +· · ·
 
(b+a) t b +ba+a t
eri
2 2 2
= 1! 2! 3! = 1+ + + ···
t (b − a) 2! 3!
ine
Mean of X:
    
(b + a) t b 2
+ ba + a2 2
t
( )
d  d 
  
E (X) = = + + +
 
[MX (t)] 1 · · ·

dt  dt 
 
2! 3!
ng

  
t=0

t=0
 
b + ba + a 2t
 2 2

 (b + a)  (b + a) (b + a)
= + + + = =
 
E

0 · · ·
 
2! 3! 2! 2
 

 
t=0
arn

Variance of X :
 
Var(X) = E X2 − [E (X)]2
Le

    
 d  (b + a) b + ba + a 2t
2 2
( 2 )
  d   
Now, E X = = + +
2
 
[M (t)] · · ·

X
dt2 dt
 
2! 3!

  

w.

t=0
  
t=0
   

 b2 + ba + a2 2 
= + +
 
0 · · ·
ww


 

 3! 


t=0
 
b + ba + a
2 2
=
 3 
b2 + ba + a2
!2
a+b 1
∴ Var(X) = − = (b − a)2
3 2 12
1 a+b
∴ PDF : f (x) = ,a < x < b Mean =
b−a   2
(b + a) t b + ba + a t2
2 2
1
M.G.F. = 1 + + + ··· Variance = (b − a)2
2! 3! 12
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 31

1.3.5 M.G.F., Mean and Variance of Exponential Distribution

f (x) = αe−αx , α > 0, 0 ≤ X < ∞

M.G.F. of X:
  Z ∞
MX (t) = E e tX
= etx f (x)dx
Z−∞

= etx αe−αx dx
0
∞ " #∞
e−(α−t)x
Z
=α dx = α

n
−(α−t)x
e
0 −(α − t)

g.i
0
α
=
(α − t)

n
Mean of X:

eri
( )
d
E (X) = [MX (t)]
dt
#)t=0
ine
α
( "
d
=
dt (α − t) t=0
α
(" #)
ng

=
(α − t)2 t=0
1
E

=
α
arn

Variance of X :
 
Var(X) = E X − [E (X)]2 2
Le

( 2 )
  d
Now, E X2 = [MX (t)]
dt2
w.

#)t=0
α
( "
d
=
dt (α − t)2 t=0
ww

(" #)

=
(α − t)3 t=0
2
= 2
α
 2
2 1 1
∴ Var(X) = 2 − = 2
α α α
1
∴ PDF : f (x) = αe−αx Mean =
α α
M.G.F. = 1
(α − t) Variance = 2
α
[Link]
[Link]
32 UNIT I - Random Variables

1.3.6 M.G.F., Mean and Variance of Gamma Distribution

e−x xβ−1
f (x) = , β > 0, 0 ≤ X < ∞
β
M.G.F. of X:

∞ −x β−1

Z Z Z ∞
e x 1
e−x etx xβ−1 dx
 
MX (t) = E etX = etx f (x)dx = etx dx =
−∞ 0 β β 0
u β−1 du
Z ∞ Z ∞
1 1
 
−(1−t)x β−1
= e x dx = e−u

n
β 0 β 0 1−t (1 − t)

g.i
∵ put (1 − t) x = u ⇒ (1 − t) dx = du ⇒ dx = du
!
(1−t)
x = 1−t
u
, x → 0 ⇒ u → 0, x → ∞ ⇒ u → ∞

n
Z ∞
1
= e−u uβ−1 du

eri
β
β (1 − t) 0
1
= β
ine
β (1 − t)β
1
=
(1 − t)β
ng

Mean of X:
( ) ( " #)
d d 1
E

E (X) = [MX (t)] =


dt dt (1 − t)β t=0
arn

t=0
( )
d h i h i
= (1 − t)−β = −β (1 − t)−β−1 (−1)

dt t=0
i t=0
Le

h
= β(1 − t)−β−1 =β
t=0
Variance of X :
w.

 
Var(X) = E X − [E (X)]2 2

( 2 ) ( )
d d
ww

  h i
Now, E X2 = 2
[MX (t)] = β(1 − t)−β−1
dt t=0
dt t=0
nh io
= β −β − 1 (1 − t) −β−2

(−1)
t=0
= β(1 + β)
∴ Var(X) = β(1 + β) − (β)2

e−x xβ−1
∴ PDF : f (x) =
β Mean = β
1 Variance = β
M.G.F. =
(1 − t)β
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 33

1.3.7 M.G.F., Mean and Variance of Normal Distribution

−∞ < x < ∞
1 x−µ 2
− 12 ( σ )
x−µ
f (x) = √ e ; −∞ < µ < ∞ ; z=
α 2π σ
σ>0
M.G.F. of X:
Z ∞ Z ∞
  1 x−µ 2
− 12 ( σ )
MX (t) = E e tX
= tx
e f (x) dx = e √ e tx
dx
−∞ −∞ α 2π
Z ∞
1 x−µ 2
tx − 21 ( σ )
= √ e e dx
α 2π Z−∞

n
∞ Z ∞
1 etµ

g.i
t(σu+µ) − 12 (u)2 1 2
= √ e e σdu = √ etσu e− 2 u du
α 2π −∞     2π −∞
x−µ
∵ put σ = u ⇒ dx = = σdu
!

n
σ du ⇒ dx
x =σu+µ, x → −∞ ⇒ u → −∞, x → ∞ ⇒ u → ∞

= √
etµ
Z ∞
1 2
e− 2 (u −2σtu) du eri
ine
2π −∞
2 2 Z
tµ σ 2t
Z ∞ ∞
etµ 1 2 σ2 t2 e e 1 2
= √ e − 2 (u−σt)
e 2 du = √ e− 2 (u−σt) du
2π −∞ 2π −∞
ng

Z ∞ − 1 (u−σt)2
σ2 t2 e 2 σ2 t2
= etµ e 2 √ du = etµ e 2 . (1)

E

−∞
σ2 t2
= etµ e 2
arn

Mean of X:
( )
d
E (X) = [MX (t)]
Le

dt
)t=0 "
σ2 t2 σ
(  2
#
d tµ σ2 t2

σ2 t2
= e e 2 = etµ e 2 2t + µetµ e 2 =µ
w.

dt t=0
2 t=0
Variance of X :
ww

 
Var(X) = E X − [E (X)]2 2
( 2 )
  d
Now, E X2 = [MX (t)]
dt2 t=0
d tµ σ2 t2 σ
( " 2
#)
σ2 t2
= e e 2 2t + µe e 2

= µ2 + σ2
dt 2 t=0
∴ Var(X) = µ2 + σ2 − (µ)2 = σ2
1 1 x−µ 2
∴ PDF : f (x) = f (x) = √ e− 2 ( σ ) Mean = µ
α 2π
σ2 t2 Variance = σ2
M.G.F. = etµ e 2

[Link]
[Link]

1.4 Summary of Discrete Distributions

1. Binomial distribution
   Parameters: n, p.  Relation: Mean > Variance.
   P.M.F.: P(X = x) = nCx p^x q^(n−x), x = 0, 1, 2, 3, ..., n, with p + q = 1.
   M.G.F.: (p e^t + q)^n.  Mean: np.  Variance: npq.

2. Poisson distribution
   Parameter: λ (= np).  Relation: Mean = Variance.
   P.M.F.: P(X = x) = e^(−λ) λ^x / x!, x = 0, 1, 2, 3, · · · ∞.
   M.G.F.: e^(λ(e^t − 1)).  Mean: λ.  Variance: λ.

3. Geometric distribution
   Parameter: p.  Relation: Mean < Variance.
   P.M.F.: P(X = x) = q^x p, x = 0, 1, 2, · · · ∞, with p + q = 1;
           M.G.F.: p/(1 − q e^t), Mean: q/p, Variance: q/p².
   (or)
   P.M.F.: P(X = x) = q^(x−1) p, x = 1, 2, 3, · · · ∞, with p + q = 1;
           M.G.F.: p e^t/(1 − q e^t), Mean: 1/p, Variance: q/p².

When to use which P(X = x):
1. Binomial : probability of getting 'x' successes in 'n' trials (where n is finite).
2. Poisson : probability of getting 'x' successes in an indefinitely large number 'n' of
   trials (i.e., n → ∞) with p → 0.
3. Geometric : probability of having to perform 'x' trials to get the first success.

'x' successes in 'n' trials — use the Binomial distribution (n a finite number of trials).
'x' successes in 'n' trials — use the Poisson distribution (n indefinitely large, i.e.,
n → ∞, and p → 0).
First success on the last trial — use the Geometric distribution.
1.5 Summary of Continuous Distributions

1. Uniform distribution
   Parameters: a, b.
   P.D.F.: f(x) = 1/(b − a), a < x < b.
   M.G.F.: 1 + (b + a)t/2! + (b² + ba + a²)t²/3! + · · ·
   Mean: (a + b)/2.  Variance: (b − a)²/12.

2. Exponential distribution
   Parameter: α.
   P.D.F.: f(x) = α e^(−αx), x ≥ 0.
   M.G.F.: α/(α − t).  Mean: 1/α.  Variance: 1/α².

3. Gamma distribution
   Parameter: β.
   P.D.F.: f(x) = e^(−x) x^(β−1)/Γ(β), β > 0, 0 ≤ x < ∞.
   M.G.F.: 1/(1 − t)^β.  Mean: β.  Variance: β.

4. Normal distribution
   Parameters: μ, σ².
   P.D.F.: f(x) = (1/(σ√(2π))) e^(−(1/2)((x−μ)/σ)²), −∞ < x < ∞, −∞ < μ < ∞, σ > 0;
           z = (x − μ)/σ.
   M.G.F.: e^(tμ) e^(σ²t²/2).  Mean: μ.  Variance: σ².

* A continuous random variable X is said to follow the Erlang distribution with
parameters α, β > 0 if its p.d.f. is given by
f(x) = α^β e^(−αx) x^(β−1)/Γ(β), α, β > 0, 0 ≤ X < ∞.
Note : Symbolically, X ∼ Erlang distribution (α, β).
If α = 1, the Erlang distribution becomes the Gamma distribution, e^(−x) x^(β−1)/Γ(β).
If β = 1, the Erlang distribution becomes the Exponential distribution, α e^(−αx).

1.6 Part - A [Unit I - Random Variables (R.V.)

1. The cumulative distribution function of a random variable X is


F(x) = 1 − (1 + x)e−x , x > 0. Find the probability density function of X
and mean of X.
Solution : Given F(x) = 1 − (1 + x)e−x = 1 − e−x − xe−x , x > 0.
d d 
P.d.f. of X = f (x) = [F (x)] = 1 − e−x − xex = xe−x , x > 0

dx dx Z
Z ∞ ∞ Z ∞
Mean of X = E(X) = x f (x) dx = x (xe ) dx =
−x
x2 e−x dx = 2

n
0 0 0

g.i
2. The number of hardware failures of a computer system in a week of
operations has the following p.m.f.:

n
Number of failures 0 1 2 3 4 5 6

eri
Probability .18 .28 .25 .18 .06 .04 .01

Find the mean and variance of the number of failures in a week.


ine
Solution : Here X is discrete r.v.
X 6
X
Mean of X = E(X) = xP (X = x) = xP (X = x)
ng

∀x x=0
= (0)(.18) + (1)(.28) + (2)(.25) + (3)(.18) + (4)(.06)
E

+ (5)(.04) + (6)(.01) = 1.82


arn

 
Var (X) = E X2 − [E (x)]2 = 4.90 − (1.82)2 = 1.5876
3. The mean of a binomial distribution is 20 and standard deviation is 4.
Find the parameters of the distribution.
Le

p : Given Mean
Solution = np = 20.

S.D. = Var (X) = npq = 4 ⇒ npq = 16⇒ (20) q = 16 ⇒ q = 4/5,
w.

∴ p = 1 − q = 1/5
np = 20 ⇒ n (1/5) = 20 ⇒ n = 100
ww

∴ The parameters are n = 100 and p = 1/5


4. If X is a random variable with c.d.f. as F(x), show that the random
variable Y = F(x) is uniformly distributed in (0, 1).
Solution : Y = F(x). F is a monotonic function.
" #
dx 1 1 d
P.d.f. of Y = g y = f (x) = f (x) dy = f (x) ∵ f (x) = F (x)

dy f (x) dx
dx
⇒g y =1


1, 0 < y < 1
(
∴g y =

0, otherwise
5. Is the function defined as follows a density function?
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 37

x<2


 0,
f (x) =  (3 + 2x) , 2 ≤ x ≤ 4

 1
18
x>4

 0,

Z ∞
Solution : Condition for p.d.f. is f (x) dx = 1
−∞ Z
Z 2 Z 4 ∞
1
LHS = 0 dx + (3 + 2x) dx + 0 dx = 1
−∞ 2 18 4
Hence the given function is density function.

n
6. Define Poisson distribution and state any two instance where Poisson

g.i
distribution may be successfully employed.
Solution : Poisson distribution : A random variable X is said to be
follow Poisson distribution if it assumes only non-negative values and

n
its probability mass function is given by

P (X = x) = p (x) =
(
eri
e−λ λx
x! , x = 01, 2, 3, ...∞ and λ > 0
ine
0, otherwise

λ is known as the parameter of Poisson distribution.


ng

State any two instances where Poisson distribution may be successfully


employed.
E

1. Number of printing mistakes at each page of the book.


2. Number of suicides reported in a particular day.
arn

3. Number of deaths due to a rare disease.


4. Number of defective items produced in the factory.
Le

7. Given ( the random variable X with density function


2x, 0 < x < 1
f (x) = . Find the p.d.f. of Y = 8X3 .
w.

0, elsewhere
2x, 0 < x < 1
(
Solution: Given f (x) =
0, elsewhere
ww

Let y = 8x .
3

dx 1 1/3 1 1 (1/3)−1 1 1
 
fY y = fX (x) =2 y = y1/3 y−2/3 = y−1/3

y
dy 2 23 6 6
8. If Var(X) = 4, find Variance(3X + 8), where X is a random variable.
Solution : W.K.T. Var(aX + b) = a2 Var (X)

∴ Var (3X + 8) = 32 Var (X) = 32 (4) = 36.

9. Let X be a r.v. with E(X) = 1 and E[X(X − 1)] = 4. Find Variance(X)


and Variance(2 − 3X).
Solution : Given E(X) = 1 and E(X(X − 1)) = 4
[Link]
[Link]
38 UNIT I - Random Variables

h i
E [X (X − 1)] = E X − X 2
     
= E X2 − E (X) ⇒ 4 = E X2 − 1 ⇒ E X2 = 4 + 1 = 5
 
Var (X) = E X2 − [E (X)]2
=5−1=4
h i
∴ Var (2 − 3x) = (−3)2 Var (X) ∵ Var (aX ± b) = a2 Var (X)
= 9(4) = 36
10. Let X be r.v. taking values -1,0 and 1 such that
P(X = −1) = 2P(X = 0) = P(X = 1). Find the mean of 2X − 5.

n
Solution : Let P(X = 1) = K.

g.i
Then P(X = −1) = K and P(X = 0) = K/2.
∴ The distribution of X is given by

n
eri
X −1 0 1
2 K 1 2
P(X = x) K = = K=
5 2 5 5
ine
Since the total probability = 1
5K 2
=1⇒K=
ng

We have
2 X5 2 1 2
   
Mean of X = E (X) = x · P(X = x) = (−1) + 0 +1 =0
E

x
5 5 5
Mean of 2X − 5 = E (2X − 5) = 2E (X) − 5 = −5.
arn

11. A continuous random variable X can assume any value between x = 2


and x = 5 has a density function given by f (x) = k(1+x). Find P(X < 4).
Le

Solution : Since f Zis probability density function, we must have that


Z ∞ ∞
f (x)dx = 1⇒ k(1 + x)dx = 1
w.

−∞ −∞
" #5 " ! !#
x2 52 22
⇒k x+ =1⇒k 5+ − 2+ =1
ww

2 2 2 2
21
 
⇒k 3+ =1
2
2
⇒k=
27 Z 4 Z 4
2
Now P[X < 4] = P[2 < X < 4] = k(1 + x) dx = (1 + x)dx
2 2 27
2 4
!
2 x 16
= x+ =
27 2 2 27

12. A continuous random variable X has p.d.f. f (x) = 3x2 ; 0 < x < 1. Find
a such that P(X = a) = P(X > a).
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 39

1
Solution: P(X = a) = P(X > a) =
Z a " 3 #a 2
1 x 1 h ia 1 1
i.e., 3x dx = ⇒ 3
2
= ⇒ x3 = ⇒ a3 =
0 2 3 0 2 0 2 2
 1/3
1
⇒a=
2
13. A continuous random variable X has probability density function
given by f (x) = 3x2 , 0 = x = 1. Find k such that P(X > k) = 0.05.
Solution : P(X > k) = P(X = k) = 0.05

n
R∞ R∞  
k
f (x) dx = 0.05 ⇒ k 3x dx = 0.05 ⇒ x3 1k = 0.05
2

g.i
⇒ 1 − k3 = 0.05 ⇒ k = 0.9830
14. Find the cumulative distribution function F(x) corresponding to the

n
1 1
p.d.f. f (x) = , −∞ < x < ∞.

eri
π 1 + x2
Solution : The cumulative distribution function F(x) corresponding to
the p.d.f. f(x) is given by
ine
Z x Z x
1 1 1 h −1 ix
F (x) = f (x)dx = dx = tan x
−∞ π 1 + x π
2 −∞
−∞
ng

1 h −1 i 1 π
 
= tan x − tan (−∞) =
−1
tan−1 x +
π π 2
E

15. The p.d.f. of a random variable X is f (x) = 2x, 0 < x < 1, find the
probability density function, of Y = 3x + 1.
arn

Solution : Given p.d.f. of X is f (x) = 2x, 0 < x < 1.


To find the p.d.f. of Y = 3X + 1
Le

Y−1 dX
X= ⇒ = 1/3
3 dY
w.

dX Y−1 1 2
  
y = fX (x) =2 = (Y − 1)

fY
dY 3 3 9
ww

16. A random variable X has the p.d.f. f (x) given by


Cxe ; x > 0
−x
(
f (x) = . Find the value of C and c.d.f. of X.
0; x ≤ 0
R∞
Solution : W.K.T. −∞ f (x)dx = 1
R∞  ∞
−x −x

−∞
Ce−x dx = 1 ⇒ C x (−1) − (1) (−1)2 = 1
e e
0
⇒ C [(0 − 0) − (−1)] = 1 ⇒ C = 1
xe ; x > 0
( −x
∴ f (x) =
0; x ≤ 0
Rx  x
e−x e−x
∴ C.d.f. F (x) = 0 xe dx = x (−1) − (−1)2 = 1 − (1 + x) e−x
−x
0
[Link]
[Link]
40 UNIT I - Random Variables

1.6.1 Examples of Discrete random variable

Example 1.1 Let X denotes the number of heads in an experiment of tossing two
coins. Find
(a) probability distribution (b) Cumulative distribution
(c) Mean of X (d) Variance of X
(e) MGF of X (f) P (X ≤ 1)

n
(g) P (|X| ≤ 1) (h) find minimum value of c

g.i
such that P (X ≤ c) > 1/2.

n
Solution : W.K.T. by tossing two coins, the sample space is

eri
S = {HH, HT, TH, TT}, n(S) = 4
Given X is a random variable which denotes the number of heads.
ine
X(HH) = 2, X(HT) = 1, X(TH) = 1, X(TT) = 0
∴ The range space of X is RX = {0, 1, 2}.
ng

⇒ X is a discrete random variable.


(a) Probability distribution P(X = x) :
E

X: 0 1 2
arn

1 2 1 1
P(X = x) : =
4 4 2 4
(b) Cumulative distribution F(X = x) = P(X ≤ x) :
Le

X: 0 1 2
1 3 4
F(X = x) : P(X ≤ 0) = P(X ≤ 1) = P(X ≤ 2) = = 1
w.

4 4 4
(c) Mean of X :
ww

X
E(x) = xP(X = x)
∀x
x=2
X
= x · P(X = x)
x=0
= 0 · P(X = 0) + 1 · P(X = 1) + 2 · P(X = 2)
2 1
=0+ +2· (refer (a))
4 4
1 1
= +
2 2
=1
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 41

 
(d) Variance of X : V(X) = E X − [E(X)]2 2

  X
Now, E X 2
= x2 · P(X = x)
∀x
x=2
X
= x2 · P(X = x)
x=0
= 0 · P(X = 0) + 12 · P(X = 1) + 22 · P(X = 2)
2

2 1
=0+1· +4·

n
(refer a)
4 4

g.i
1
= +1
2
3

n
=
2
3
∴ V(X) = − 1
2 eri (refer c)
ine
1
=
2
ng

h i
(e) MGF of X : MX (t) = E etX
E

h i
MX (t) = E etX
arn

X
= etx · P(X = x)
∀x
Le

x=2
X
= etx · P(X = x)
x=0
w.

=e · P(X = 0) + et·1 · P(X = 1) + et·2 · P(X = 2)


t·0

1 2 1
= 1 · + et · + e2t · (refer a)
ww

4 4 4
1 1 1
= + et + e2t
4 2 4

(f) P (X ≤ 1) =

= P(X = 0) + P(X = 1)
1 2
= + (refer a)
4 4
3
=
4
[Link]
[Link]
42 UNIT I - Random Variables

(g) P (|X| ≤ 1) =
= P(X = −1) + P(X = 0) + P(X = 1)
1 2
=0+ + (refer a)
4 4
3
=
4
(h) Minimum value of c such that P (X ≤ c) > 1/2.
1
X P(X ≤ c) >
2

n
P(X ≤ 0) = P(X = 0)

g.i
1 1
0 = >
4 2

n
∴c,0
1 2

1
P(X ≤ 1) = P(X = 0) + P(X = 1) =

=
3 1
>
+
4 4
eri
ine
4 2
∴c=1
1 2 1
P(X ≤ 2) = P(X = 0) + P(X = 1) + P(X = 2) = + +
ng

4 4 4
2 1
=1>
E

2
∴c=2
arn

1
∴ P(X ≤ c) > satisfies for c = 1, 2.
2
∴ Minimum value of C is 1.
Le

Example 1.2 A discrete random variable X has the probability function given
w.

below:
ww

X 0 1 2 3 4 5 6 7
P(X) 0 a 2a 2a 3a a2 2a2 7a2 + a
Find
(a) a (b) cumulative distribution
(c) P (X < 6) (d)P (3 ≤ X < 6)
(e) P (2X + 3 < 7) (f) P (X > 1 / X < 3)
(g) find minimum value of c such that
3
P (X ≤ c) > .
4
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 43

Solution : X
(a) By the property of p.m.f., W.K.T. P(X = x) = 1
∀x

x=7
X
P(X = x) = 1
x=0
0 + a + 2a + 3a + a + 2a + 7a2 + a = 1
2 2

10a2 + 9a = 1
10a2 + 9a − 1 = 0

n
10a2 + 10a − a − 1 = 0

g.i
10a(a + 1) − (a + 1) = 0
(10a − 1)(a + 1) = 0

n
1
a=

eri
or − 1
10
But a , −1 (∵ Probability ∈ [0,1])
ine
1
∴a=
10
ng

∴ Probability distribution table becomes


E

X: 0 1 2 3 4 5 6 7
P(X) : 0 1/10 2/10 2/10 3/10 1/100 2/100 17/100
arn

(b) Cumulative distribution F(X = x) = P(X ≤ x) :


Le

X: 0 1 2 3 4 5 6 7
F(X = x) : 0 1/10 3/10 5/10 8/10 81/100 83/100 1
w.

(c) P(X < 6) = P(X = 0)+P(X = 1)+P(X = 2)+P(X = 3)+P(X = 4)+P(X = 5)


= 1 − [P(X ≥ 6)]
ww

= 1 − [P(X = 6) + P(X = 7)]


= 1 − [2/100 + 17/100]
= 1 − [19/100] ∵ P(A) = 1 − P(A)
100 − 19
=
100
81
= = 0.81
100
(d) P(3 ≤ X < 6) = P(X = 3) + P(X = 4) + P(X = 5)
= 2/10 + 3/10 + 1/100
51
= = 0.51
100
[Link]
[Link]
44 UNIT I - Random Variables

(e) P(2X + 3 < 7) = P(2X+ < 7 − 3)


= P(2X < 4)
= P(X < 2)
= P(X = 0) + P(X = 1)
= 0 + 1/10
1
= = 0.1
10
P[(X > 1) ∩ (X < 3)]
" #
P(A ∩ B)
(f) P(X > 7/X < 3) = ∵ P(A/B) =
P(X < 3) P(B)

n
P[(X = 2, 3, 4, 5, 6, 7) ∩ (X = 0, 1, 2)]
=

g.i
P(X = 0, 1, 2)
P[(X = 2)]
= P(X = 2)

n
P(X = 0) + P(X = 1)+

eri
2
10
= 1
10 + 2
10
ine
2
=
= 0.666666
3
3
(g) Minimum value of c such that P (X ≤ c) > .
ng

4
3
X P(X ≤ c) > = 0.75 Remarks
E

4
0 P(X ≤ 0) = F(X = 0) = 0
>0.75 ∴c,0
arn

1
1 P(X ≤ 1) = F(X = 1) =  >
0.75 ∴c,1
10
3
2 P(X ≤ 2) = F(X = 2) =  >
0.75 ∴c,2
Le

10
5
3 P(X ≤ 3) = F(X = 3) =  >
0.75 ∴c,3
10
w.

8
4 P(X ≤ 4) = F(X = 4) = > 0.75 ∴c=4
10
81
ww

5 P(X ≤ 5) = F(X = 5) = > 0.75 ∴c=5


100
83
6 P(X ≤ 6) = F(X = 6) = > 0.75 ∴c=6
100
100
7 P(X ≤ 7) = F(X = 7) = > 0.75 ∴c=7
100
3
∴ P(X ≤ c) > satisfies for c = 4, 5, 6, 7.
4
∴ Minimum value of c is 4.

Example 1.3 Let X represents the sum of number of two dice. Find
(a) distribution of X (b) P (|X − 1| ≤ 3).
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 45

Solution : By rolling two dice, W.K.T. the sample space is


 


 (1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6) 


 
(2, 1), (2, 2), (2, 3), (2, 4), (2, 5), (2, 6)

 


 

 
(3, 1), (3, 2), (3, 3), (3, 4), (3, 5), (3, 6)

 

S=

 

(4, 1), (4, 2), (4, 3), (4, 4), (4, 5), (4, 6)
 


 


 




 (5, 1), (5, 2), (5, 3), (5, 4), (5, 5), (5, 6) 



 
 (6, 1), (6, 2), (6, 3), (6, 4), (6, 5), (6, 6)
 

n
 
n(S) = number of elements in S = 36 = 62 dice

g.i
Given X = sum of number of two dice.
∴ X = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}

n
(a) Probability distribution of X:

X: 2 3 4 eri
5 6 7 8 9 10 11 12
ine
1 2 3 4 5 6 5 4 3 2 1
P(X = x) :
36 36 36 36 36 36 36 36 36 36 36
ng

(b) P(|X − 1| ≤ 3) = P(−3 ≤ X − 1 ≤ 3)


= P(−3 + 1 ≤ X ≤ 3 + 1)
E

= P(−2 ≤ X ≤ 4) = P(X = −2,


 −1,
 0,
  1,
  2, 3, 4)
arn

= P(X = 2) + P(X = 3) + P(X = 4)


1 2 3
= + ++
36 36 36
Le

6 1
= =
36 6
w.

Example 1.4 Let X takes the values −1, 0, 1 such that P (X = −1) = 2P (X = 0) =
ww

P (X = 1). Find the distribution of X.

Solution : Let P(X = −1) = P(X = 0) = P(X = 1) = k

∴ P(X = −1) = k
k
P(X = 0) =
2
P(X = 1) = k

W.K.T. by the property of probability


[Link]
[Link]
46 UNIT I - Random Variables

X
P(X = x) = 1
∀x
x=1
X
P(X = x) = 1
x=−1
P(X = −1) + P(X = 0) + P(X = 1) = 1
k
k+ +k=1
2

n
2
5k = 2 ⇒ ∴ k =

g.i
5
The probability distribution table is

n
X: −1 0 1
P(X = x) : 2/5 1/5 2/5
eri
Example 1.5 Let X takes values 1, 2, 3, 4 such that 2P (X = 1) = 3P (X = 2) =
ine
P (X = 3) = 5P (X = 4). Find the distributions of X.
ng

Solution : Let 2P (X = 1) = 3P (X = 2) = P (X = 3) = 5P (X = 4) = k
E

k
∴ P(X = 1) =
2
arn

k
P(X = 2) =
3
P(X = 3) = k
Le

k
P(X = 4) =
5
w.

W.K.T. by the property of probability


X
P(X = x) = 1
ww

∀x
x=4
X
P(X = x) = 1
x=1
P(X = 1) + P(X = 2) + P(X = 3) + P(X = 4) = 1
k k k
+ +k+ =1
2 3 5
15k + 10k + 30k + 6k
=1
30
30
61k = 30 ⇒ ∴ k =
61
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 47

The probability and cumulative distributions table is

X: 1 2 3 4
P(X = x) : 15/61 10/61 30/61 6/61
F(X = x) : 15/61 20/61 55/61 61/61 = 1

1.6.2 Examples of Continuous random variable

n
Example 1.6 Check whether the following functions are probability density

g.i
function (p.d.f.):
1 1
(a) f (x) = 6x(1 − x), 0 ≤ X ≤ 1 (b) f (x) = , −∞ < X < ∞

n
 100 π 1 + x2
 x2 , X > 100
eri


(c) f (x) =  (d) f (x) = sin x, 0 < x < π

0, X < 100


ine

Solution : Here f (x) is defined in the interval (a, b) which contains


ng

uncountably infinite values.

If f (x) is defined in the interval of the form (a, b), then


E

Zb

arn

 = 1, given f (x) is pdf in (a, b)





f (x)dx 
 , 1, given f (x) is not pdf in (a, b)

a
Le

 
(a) Given f (x) = 6x(1 − x) = 6 x − x , 0 ≤ X ≤ 1
2

Z∞
f (x)dx = 1
w.

We have to prove (1)


−∞
ww

Z1 Z1  
LHS of (1) = f (x)dx = 2
6 x − x dx
0 0
" #1
6x2 6x3 h i1
= − = 3x2 − 2x3
2 3 0 0

= [(3 − 2) − (0 − 0)] = 1
= RHS of (1)

∴ The given f (x) is p.d.f.


1 1
(b) Given f (x) = , −∞ < X < ∞
π 1 + x2
[Link]
[Link]
48 UNIT I - Random Variables

Z∞
We have to prove f (x)dx = 1 (2)
−∞
Z∞
LHS of (2) = f (x)dx
−∞
1 h −1 i∞
= tan x
π    −∞ 
1 π −π
= −

n
π 2 2
1

g.i
= [π] = 1
π
= RHS of (2)

n
∴ The given f (x) is p.d.f.
100
eri

 2 , X > 100

(c) Given f (x) = 

 x 0, X < 100
ine

Z∞
We have to prove f (x)dx = 1 (3)
ng

−∞
Z∞ Z∞
100
E

LHS of (3) = f (x)dx = dx


x2
arn

100 100
 ∞
−1 1
 
= 100 = 100 0 +
x 100 100
Le

=1
= RHS of (3)
w.

∴ The given f (x) is p.d.f.

(d) Given f (x) = sin x, 0 < x < π


ww

Z∞
We have to prove f (x)dx = 1 (4)
−∞
Zπ Zπ
LHS of (4) = f (x)dx = sin xdx = [− cos x]π0 = −[cos x]π0
0 0
= −[(cos 0) − (cos π)] = −[(−1) − (1)] = 2 , 1
, RHS of (4)
∴ The given f (x) is not p.d.f.

[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 49


ax, X ∈ [0, 1]







a, X ∈ [1, 2]


Example 1.7 Given p.d.f. f (x) = 

. Find

−ax + 3a, X ∈ [2, 3]







0, X > 3



1 3
 
(a) a (b) P ≤ X ≤ (c) P (|X| ≤ 2)
2 2
1

(d) P (X + 1 < 3) (e) P X > / X < 2 (f) P(X=2)
2

n

ax, X ∈ [0, 1]

g.i



 a, X ∈ [1, 2]
Solution : Given f (x) = 

. (1)

 −ax + 3a, X ∈ [2, 3]

n


0, X > 3

eri
Here f (x) is not defined clearly. i.e., it has unknown quantity a. So we
have to find that unknown from the property of pdf f (x)
ine
Z∞
i.e., f (x)dx = 1 (2)
ng

−∞

(a) To find a :
E

Z0 Z1 Z2 Z3 Z∞
f (x)dx = 1
arn

∴ (2) ⇒ f (x)dx+ f (x)dx+ f (x)dx+ f (x)dx+


−∞ 0 1 2 3
Z0 Z1 Z2 Z3 Z∞
Le

0dx + axdx + adx + (−ax + 3a)dx + 0dx = 1


−∞ 0 1 2 3
w.

2 1
" # 2
#3
"
ax −ax
0+ + [ax]21 + + 3ax + 0 = 1
2 0
2 2
ww

a −9a
     
− (0) + [(2a) − (a)] + + 9a − (−2a + 6a) = 1
2 2
−4a + 6a = 1
1
2a = 1 ⇒ ∴ a =
2
 x

 , X ∈ [0, 1]

 2
1


,

 X ∈ [1, 2]
∴ (1) ⇒ f (x) =  .

2 (3)

 x 3
− + ,

X ∈ [2, 3]


 2 20,



X>3

[Link]
[Link]
50 UNIT I - Random Variables

Z3/2
1 3
 
(b) P ≤X≤ = f (x)dx
2 2
1/2
Z1 Z3/2
x 1
= dx + dx [∵ by (3)]
2 2
1/2 1
#1 "
x2
 3/2
x
= +
4 1/2 2 1

n
1 h i1 1
= x2 + [x]13/2

g.i
4  1/2   2
1 1 1 3
  
= (1) − + − (1)
4  4  2 2

n
1 3 1 1
= +

eri
4 4 2 2
3 1
= +
16 4
ine
7
=
16
Z2
ng

(c) P (|X| ≤ 2) = P(−2 < X < 2) = f (x)dx


E

−2
Z0 Z1 Z2
arn

= f (x)dx + f (x)dx + f (x)dx


−2 0 1
Z0 Z1 Z2
Le

x 1
= (0)dx + dx + dx [∵ by (3)]
2 2
w.

−2 0 1
2 1
"
 2 #
x x 1 1
=0+ + = +1−
4 0 2 1 4 2
ww

1 1 3
= + =
4 2 4
Z2
(d) P (X + 1 < 3) = P (X < 3 − 1) = P (X < 2) = f (x)dx
−∞
Z0 Z1 Z2
x 1
= (0)dx + dx + dx [∵ by (3)]
2 2
−∞ 0 1
#1  2"
x2 x 1 1 3
=0+ + = + =
4 0 2 1 4 2 4
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 51

h  i
 P X > 1 ∩ (X < 2) " #
1 P (A ∩ B)

2
(e) P X > / X < 2 = ∵ P(A/B) =
2 P(X < 2) P(B)
h i
P 2 <X<2
1
=
P(X < 2)
Now, P(1/2 < X < 2) = ∫_{1/2}^{2} f (x)dx = ∫_{1/2}^{1} (x/2) dx + ∫_{1}^{2} (1/2) dx     [∵ by (3)]
= 11/16     [by simplification]
By (d), we have P(X < 2) = 3/4
∴ P(X > 1/2 / X < 2) = (11/16) / (3/4) = 11/12

eri
Z2
(f) P(X = 2) = f (x)dx
ine
2
=0 [∵ lower and upper limits are same]
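Note (illustrative addition): a short SciPy sketch can confirm that a = 1/2 normalises the density of Example 1.7 and reproduce parts (b)–(e); the value a = 1/2 found above is taken as given.

# Numerical check of Example 1.7 with a = 1/2.
from scipy.integrate import quad

a = 0.5
def f(x):
    if 0 <= x <= 1:  return a*x
    if 1 <= x <= 2:  return a
    if 2 <= x <= 3:  return -a*x + 3*a
    return 0.0

print(quad(f, 0, 3)[0])                          # 1.0, so a = 1/2 normalises f
print(quad(f, 0.5, 1.5)[0])                      # part (b): 7/16 = 0.4375
print(quad(f, -2, 2)[0])                         # parts (c), (d): 3/4
print(quad(f, 0.5, 2)[0] / quad(f, -10, 2)[0])   # part (e): (11/16)/(3/4) = 11/12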
ng

Example 1.8 A r.v. X has a cumulative distribution function given by
F(x) = 0, X < 1;  k(x − 1)⁴, 1 ≤ X ≤ 3;  1, X > 3.
Find (a) f (x)   (b) k   (c) P(1 < X < 2)
(d) P(|X| ≤ 2)   (e) P(X + 3 < 5)   (f) P(X > 1 / X < 2)

Solution : Given F(x) = 0, X < 1;  k(x − 1)⁴, 1 ≤ X ≤ 3;  1, X > 3.



(a) To find f (x) (p.d.f.) :

d[F(x)]
W.K.T. f (x) =
dx



f (x) = d[0]/dx, X < 1;  d[k(x − 1)⁴]/dx, 1 ≤ X ≤ 3;  d[1]/dx, X > 3

∴ f (x) = 0, X < 1;  4k(x − 1)³, 1 ≤ X ≤ 3;  0, X > 3


n
g.i
(b) to find k :

n
eri
Z∞
W.K.T. f (x)dx = 1 (∵ f (x) has an unknown quantity k)
ine
−∞
∫_1^3 4k(x − 1)³ dx = 1
4k [(x − 1)⁴/4]_1^3 = 1     (∵ ∫(ax + b)ⁿ dx = (ax + b)ⁿ⁺¹ / [(n + 1)a])
arn

h 1i
k (3 − 1)4 − (1 − 1)4 = 1
k[16] = 1
Le

∴ k = 1/16

∴ f (x) = 0, X < 1;  (1/4)(x − 1)³, 1 ≤ X ≤ 3;  0, X > 3

Z2
(c) P(1 < X < 2) = f (x)dx
1
Z2 " 4 2
#
1 1 (x − 1)
= (x − 1)3 dx =
4 4 4 1
1
1
∴ P(1 < X < 2) =
16
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 53

(d )P(|X| ≤ 2) = P(−2 ≤ X ≤ 2)
Z2
= f (x)dx
−2
Z1 Z2 Z3
0 *0
= f (x)
 * dx + f (x)dx + f (x)
  dx
−2 1 2
Z2 " 4 2
#
1 1 (x − 1)
= (x − 1)3 dx =

n
4 4 4 1

g.i
1
1
∴ P(|X| ≤ 2) =
16

n
(e )P(X + 3 < 5) = P(X < 5 − 3) = P(X < 2)

=
Z2
f (x)dx eri
ine
−∞
Z1 Z2
*0 1
ng

= f(x)
 dx + (x − 1)3 dx
4
−∞ 1
E

1
∴ P(X + 3 < 5) =
 
∵ by (c)
16
arn

1
  
 P X> ∩ (X < 2) " #
1 P (A ∩ B)

2
(f )P X > / X < 2 = ∵ P(A/B) =
Le

2 P(X < 2) P(B)


P(X < 2)
=
P(X < 2)
w.

=1
ww

Example 1.9 Given p.d.f.
f (x) = x/2, X ∈ [0, 1];  1/2, X ∈ [1, 2];  −x/2 + 3/2, X ∈ [2, 3];  0, X > 3.
Find c.d.f. F(x).
Solution : Given
f (x) = x/2, X ∈ [0, 1];  1/2, X ∈ [1, 2];  −x/2 + 3/2, X ∈ [2, 3];  0, X > 3.     (1)




W.K.T., the relation to find F(x) from f (x) is

Zx

n
F(x) = f (x)dx

g.i
−∞

n
In [0, 1]

F(x) =
Zx
f (x)dx
eri
ine
−∞
Z0 Zx
ng

= f (x)dx + f (x)dx
−∞ 0
E

Z0 x∈[0,1]
Z
*0 x
= dx +
arn

f (x)
 dx (∵ by (1))

2
−∞ 0
2
x
=
Le

4
w.

In [1, 2]
ww

Zx
F(x) = f (x)dx
−∞
Z0 Z1 Zx
= f (x)dx + f (x)dx + f (x)dx
−∞ 0 1
Z0 1∈[0,1]
Z x∈[1,2]
Z
*0 x 1
= f(x)
 dx + dx + dx (∵ by (1))
2 2
−∞ 0 1
x 1
= −
2 4
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 55

In [2, 3]
Zx
F(x) = f (x)dx
−∞
Z0 Z1 Z2 Zx
= f (x)dx + f (x)dx + f (x)dx + f (x)dx
−∞ 0 1 2
Z0 1∈[0,1]
Z 2∈[1,2]
Z x∈[2,3]
Z
0 x 1 −x 3
 
= + dx + dx + + dx

n
f (x)
* dx (∵ by (1))

2 2 2 2

g.i
−∞ 0 1 2
2
−x 3x 5
= + −
4 2 4

n
eri
In [3, ∞)
Zx
ine
F(x) = f (x)dx
−∞
Z0 Z1 Z2 Z3 Z∞
ng

= f (x)dx + f (x)dx + f (x)dx + f (x)dx + f (x)dx


−∞
E

0 1 2 3
Z0 1∈[0,1]
Z 2∈[1,2]
Z 3∈[2,3]
Z Z∞
arn

*0 x 1 −x 3
 
= f (x)
 dx+ dx+ dx+ + dx+ (0)dx

2 2 2 2
−∞ 0 1 2 3
(∵ by (1))
Le

=1
w.

∴ F(x) = x²/4, X ∈ [0, 1];  x/2 − 1/4, X ∈ [1, 2];  −x²/4 + 3x/2 − 5/4, X ∈ [2, 3];  1, X > 3

Note : Check that dF(x)/dx = f (x) holds for the above F(x).
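A numerical version of that check (added illustration, not from the original text) is sketched below: F(x) obtained by integrating f from 0 to x is compared with the closed-form piecewise answer.

# Cross-check of Example 1.9: numeric CDF vs. the closed-form piecewise F(x).
from scipy.integrate import quad

def f(x):
    if 0 <= x <= 1:  return x/2
    if 1 <= x <= 2:  return 0.5
    if 2 <= x <= 3:  return -x/2 + 1.5
    return 0.0

def F_closed(x):
    if x < 0:   return 0.0
    if x <= 1:  return x**2/4
    if x <= 2:  return x/2 - 0.25
    if x <= 3:  return -x**2/4 + 1.5*x - 1.25
    return 1.0

for x in [0.5, 1.0, 1.7, 2.4, 3.0, 4.0]:
    numeric = quad(f, 0, min(x, 3))[0]
    print(x, round(numeric, 4), round(F_closed(x), 4))   # the two columns agree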
dx

Example 1.10 A continuous random variable X has a p.d.f.


f (x) = 3x2 , 0 ≤ X ≤ 1. Find k&α such that (a) P (X ≤ k) = P (X > k) (b)
P (X > α) = 0.1 (c) P (|X| ≤ 1)

Solution : Given pdf f (x) = 3x2 , 0 ≤ x ≤ 1. (1)


(a) Find k from given equation P (X ≤ k) = P (X > k) (2)

W.K.T. P (X ≤ k) + P (X > k) = 1
2P (X ≤ k) = 1 (∵ P (X ≤ k) = P (X > k))
1
P (X ≤ k) =
2
1
∴ (2) ⇒ P (X ≤ k) = P (X > k) = (3)
2

n
From (3), we have

g.i
1 1
P (X ≤ k) = or P (X > k) =
2 2

n
Zk Z∞
1 1

eri
i.e., f (x)dx = i.e., f (x)dx =
2 2
−∞ k
ine
Zk Z1
1 1
3x2 dx = [∵ by (1)] 3x2 dx = [∵ by (1)]
2 2
0 k
ng

h ik 1 h i1 1
x3 = x3 =
0 2 k 2
E

1 1
k3 = 1 − k3 =
2 2
arn

1 1
1 3 1 3
   
∴k= ∴k=
2 2
Le

(b) Find α from given equation P(X > α) = 0.1


Z∞
w.

i.e., f (x)dx = 0.1


α
ww

Z1
3x2 dx = 0.1 [∵ by (1)]
α
h i1 1
x3 = 0.1 =
α 10
1
1 − α3 =
10
9
α3 =
10
1
9 3
 
∴α=
10
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 57

(c) P(|X| ≤ 1) :

i.e., P(|X| ≤ 1) = P(−1 ≤ X ≤ 1)


Z1
= f (x)dx
−1
Z0 Z1
*0
= f (x)
  dx + 3x2 dx (∵ by (1))
−1 0

n
= [x³]_0^1
= 1

n
eri
1.6.3 Examples of moments

Example 1.11 The first 4 raw moments about X = 4 are 1, 4, 10, 45 respectively.
ine
Show that the mean is 5, variance is 3, µ3 = 0 and µ4 = 26.
ng

Solution : W.K.T. The rth moment about the point X = A is


0
µr = E(X − A)r .
E
arn

Given
0
h i
The 1 moment about the point X = 4 is
st
µ1 = E (X−4) = 1 1
(1)
Le

0
h i
The 2nd moment about the point X = 4 is µ2 = E (X−4)2 = 4 (2)
0
h i
The 3rd moment about the point X = 4 is µ3 = E (X−4) = 10
3
(3)
w.

0
h i
The 4th moment about the point X = 4 is µ4 = E (X−4)4 = 45 (4)
ww

∴ (1) ⇒ E[X − 4] = 1
⇒ E[X] − 4 = 1
∴ E[X] = 5 (5)
h i
∴ (2) ⇒ E (X − 4) = 4
2
h i
⇒ E X2 − 8X + 16 = 4
h i
⇒ E X2 − 8E[X] + 16] = 4
h i
⇒ E X2 − 8(5) + 16 = 4
h i
∴ E X2 = 28 (6)
[Link]
[Link]
58 UNIT I - Random Variables

h i
∴ (3) ⇒ E (X − 4) = 103
h i
⇒ E X3 − 12X2 + 48X − 64 = 10
h i h i
⇒ E X − 12E X2 + 48E[X] − 48 = 10
3
h i
⇒ E X3 − 12(28) + 48(5) − 48 = 10
h i
∴ E X3 = 170 (7)
h i
∴ (4) ⇒ E (X − 4) = 454

n
h i
⇒ E X4 − 16X3 + 96X2 − 256X + 256 = 45

g.i
h i h i h i
⇒ E X − 16E X + 96E X2 − 256E [X] + 256 = 45
4 3
h i
⇒ E X4 − 16(170) + 96(28) − 256(5) + 256 = 45

n
h i
∴ E X4 = 1101 (8)
∴ The central moments about X = A are
eri
ine
µ1 = Mean = E[X] = 5
h i
µ2 = Variance = E X2 − [E(X)]2 = 28 − 52 = 3
ng

µ3 = µ3′ − 3µ2′µ1′ + 2[µ1′]³ = 10 − 3(4)(1) + 2(1)³ = 0
µ4 = µ4′ − 4µ3′µ1′ + 6µ2′[µ1′]² − 3[µ1′]⁴ = 45 − 4(10)(1) + 6(4)(1)² − 3(1)⁴ = 26
E
arn

Alternative Method :

Let the point be A = 4


Le

∴ Mean = A + µ11 = 4 + 1 = 5
0
h 0 i2
Variance = µ2 = µ2 − µ1 = 4 − 1 = 3
w.

0 0 0
h 0 i3
µ3 = µ3 − 3µ2 µ1 + 2 µ1
ww

= 10 − 3(4)(1) + 2(1)3 = 10 − 12 + 2 = 0
0 0 0 0
h 0 i2 h 0 i4
µ4 = µ4 − 4µ3 µ1 + 6µ2 µ1 − 3 µ1
= 45 − 4(10)(1) + 6(4)(1)2 − 3(1)4 = 26
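Note (illustrative addition): the conversion from raw moments about a point A to the mean and central moments can be wrapped in a small helper; the function name below is only for illustration.

# Convert raw moments about X = A into mean, variance, mu3, mu4 (Example 1.11).
def central_from_raw_about_A(A, m1, m2, m3, m4):
    mean = A + m1
    var  = m2 - m1**2
    mu3  = m3 - 3*m2*m1 + 2*m1**3
    mu4  = m4 - 4*m3*m1 + 6*m2*m1**2 - 3*m1**4
    return mean, var, mu3, mu4

print(central_from_raw_about_A(4, 1, 4, 10, 45))   # (5, 3, 0, 26), as in the example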

1.6.4 Examples of Mean, variance, moment generating function

Example 1.12 Find m.g.f. of random variable X having


1
P (X = x) = x , x = 1, 2, 3, · · ·. Hence find mean, variance.
2
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 59

Solution : Given X is a discrete random variable.

W.K.T., M.g.f. of X = MX (t)


= E[e^{tX}]
= Σ_{∀x} e^{tx} P(X = x)
= Σ_{x=1}^{∞} e^{tx} (1/2ˣ) = Σ_{x=1}^{∞} (eᵗ/2)ˣ
= (eᵗ/2)¹ + (eᵗ/2)² + (eᵗ/2)³ + · · ·
= (eᵗ/2) [1 + (eᵗ/2) + (eᵗ/2)² + · · ·]
= (eᵗ/2) [1 − eᵗ/2]⁻¹     (∵ 1 + x + x² + x³ + · · · = [1 − x]⁻¹)
= (eᵗ/2) · 2/(2 − eᵗ)
∴ M.g.f. of X = eᵗ / (2 − eᵗ)
Le
w.

Mean of X = E[X]
ww

( )
d
= [MX (t)]
dt
( " t #)t=0
d e
=
dt 2 − et t=0
2 − et et + e2t
("  #)
=
(2 − et )2 t=0
1+1
=
1
∴ Mean of X = 2
 
Variance of X = VarX = E X2 − [E(X)]2

( )
  d2
Now, E X 2
= [MX (t)]
dt2
 t=0 2t #)
d 2−e e +e t t
( "
= (refer Mean of X)
dt (2 − et )2 t=0
( " t
#)
d 2e
=
dt (2 − et )2 t=0
 2 2t  2t 
 2 2 − e e + 4 2 − e e 
t t

n
 
=
 
(2 − et )4
 


g.i

 

t=0
2+4
=
1

n
=6

eri
ine
∴ Var(X) = 6 − 4 (∵ Mean of X = E(X) = 2)

=2
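Note (illustrative addition): the same mean and variance can be recovered symbolically by differentiating the m.g.f. at t = 0; SymPy is assumed to be available.

# Symbolic check of Example 1.12: moments from M_X(t) = e^t / (2 - e^t).
import sympy as sp

t = sp.symbols('t')
M = sp.exp(t) / (2 - sp.exp(t))

EX  = sp.diff(M, t, 1).subs(t, 0)   # first raw moment
EX2 = sp.diff(M, t, 2).subs(t, 0)   # second raw moment
print(EX, EX2, EX2 - EX**2)         # 2, 6, 2  -> mean = 2, variance = 2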
E ng


 2e
−2x
,X ≥ 0

arn


Example 1.13 Given p.d.f. f (x) = 

.
0 , otherwise



Find m.g.f. and hence find mean, variance, standard deviation and µ3 .
Le

Solution : Here X is a continuous random variable.


w.

h i
M.G.F. of X = MX (t) = E E tX

Z∞
ww

= etx f (x)dx
−∞
Z∞ Z∞
= 2 etx e−2x dx = 2 e−(2−t)x dx

"0 −(2−t)x #∞ "0 #


e 1
=2 =2
−(2 − t) 0 (2 − t)
" #
2
∴ MX (t) =
(2 − t)
To find mean, variance, standard deviation and µ3 , expand MX (t) as
infinite series as follows.
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 61

" #
2
MX (t) =
(2 − t)
 
 2 
=  h i , |t| < 2
 
2 1 − t 
2
−1
t

= 1−
2
   2  3
t t t
=1+ + ++ + ···
2 2 2

n
! !
1 t 1 t2 3 t3
 
=1+ + + + ···

g.i
2 1! 2 2! 4 3!

t 0 1
W.K.T. Coefficient of = µ1 = = Mean

n
1! 2

eri
2
t 0 1
Coefficient of = µ2 =
2! 2
3
t 3
ine
0
Coefficient of = µ3 =
3! 4
 0 2 1  1 2 1  1  1
ng

0
∴ Variance = µ2 = µ2 − µ1 = − = − =
r 2 2 2 4 4
√ 1 1
E

∴ S.D. = Variance = =
4 2
arn

µ3 = µ3′ − 3µ2′µ1′ + 2[µ1′]³ = 3/4 − 3(1/2)(1/2) + 2(1/2)³ = 3/4 − 3/4 + 1/4 = 1/4
Le
w.
ww

Example 1.14 A continuous random variable X has a p.d.f. f (x) = Kx2 e−x , x ≥ 0.
Find mean, variance and m.g.f.

Solution : Given pdf of a continuous random variable X is

f (x) = Kx2 e−x , x ≥ 0. (1)


[Link]
[Link]
62 UNIT I - Random Variables

Z∞
By pdf, wkt f (x)dx = 1 (∵ pdf contains an unknown K)
−∞
Z∞
K x2 e−x dx = 1 (∵ by (1))

" " −x # " −x # 0 " −x ##∞


e e e
K x2 − 2x +2 =1
−1 1 −1 0
h  −x i∞
2  −x 
=1
 −x 
K −x e − 2x e − 2 e

n
0
K[(0 − 0 − 0) − (0 − 0 − 2)] = 1

g.i
1
∴K=
2

n
1
∴ f (x) = x2 e−x , x ≥ 0. eri
ine
2
Mean of X:
ng

Z∞
E

E(X) = x f (x)dx
arn

−∞
Z∞
1 3 −x
= x e dx
Le

2
0
" " # " −x # " −x ##∞
1 3 e−x e e
= x − 3x2 +6
w.

2 −1 1 −1 0
ww

1 h 3 −x i∞
=
x e − 3x2 e−x − 6e−x
2 0
1
= [(0 − 0 − 0) − (0 − 0 − 6)]
2
1
= [6]
2
∴ E[X] = 3

h i
Variance of X : Var(X) = E X − [E(X)]2 2

h i Z∞
Now, E X2 = x2 f (x)dx
−∞
Z∞
1 4 −x
=x e dx

n
2
0

g.i
" " # " −x # " −x # " −x # " −x ##∞
1 4 e−x e e e e
= x − 4x3 + 12x2 − 24x + 24
2 −1 1 −1 1 −1 0

n
1h i∞
= −x4 e−x − 4x3 e−x − 12x2 e−x − 24xe−x − 24e−x
2
eri
0
1
= [(−0 − 0 − 0 − 0 − 0) − (−0 − 0 − 0 − 0 − 24)]
ine
2
1
= [24]
2
∴ E[X²] = 12
ng

h i
∴ Var(X) = E X2 − [E(X)]2 = 12 − 32 = 3
E
arn
Le

MGF of X:
w.

h i
MGF = MX (t) = E etX
ww

Z∞
= etx f (x)dx
−∞
Z∞
1
= etx x2 e−x dx
2
0
Z∞
1
= x2 e−(1−t)x dx
2
0

[Link]
[Link]
64 UNIT I - Random Variables

" " # " −(1−t)x # " −(1−t)x ##∞


1 2 e−(1−t)x e e
= x − 2x 2
+2
2 −(1 − t) (1 − t) −(1 − t)3 0
" " −(1−t)x # " −(1−t)x # " −(1−t)x ##∞
1 e e e
= −x2 − 2x − 2
2 (1 − t) (1 − t)2 (1 − t)3 0
" #
1 2
= (0 − 0 − 0) − (0 − 0 − )
2 (1 − t)3
" #
∴ M.G.F. = 1/(1 − t)³
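Note (illustrative addition): with K = 1/2 the density of Example 1.14 is the Gamma(3, 1) density, so the normalisation, mean and variance can be confirmed numerically; SciPy/NumPy are assumed.

# Numerical cross-check of Example 1.14: f(x) = (1/2) x^2 e^{-x}, x >= 0.
import numpy as np
from scipy.integrate import quad

f   = lambda x: 0.5 * x**2 * np.exp(-x)
tot = quad(f, 0, np.inf)[0]                        # 1.0, so K = 1/2
m1  = quad(lambda x: x * f(x), 0, np.inf)[0]       # 3.0  (mean)
m2  = quad(lambda x: x**2 * f(x), 0, np.inf)[0]    # 12.0 (second raw moment)
print(tot, m1, m2 - m1**2)                         # 1.0 3.0 3.0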

n
g.i
Example 1.15 Let X be a random variable with p.d.f. f (x) = (1/3) e^{−x/3}, x ≥ 0. Find
P(X > 3), … , mean and variance.

Solution :
 eri
ine
x , X ∈ [0, 1]






Example 1.16 For the triangular distribution f (x) = 

2 − x , X ∈ [1, 2] .


ng



0 , otherwise




Find mean, variance and m.g.f.
E

Solution : Given pdf of a continuous random variable X is


arn

x , X ∈ [0, 1]



f (x) =  2 − x , X ∈ [1, 2]

(1)

Le

0 , otherwise


Mean of X:
w.

Z∞
E(X) = x f (x)dx
ww

−∞
Z0 Z1 Z2 Z∞
0 *0
= f(x)
* dx + x · xdx + x · (2 − x)dx + f(x)
 dx (∵ by(1))
−∞ 0 1 0
Z1 Z2
= x2 dx + (2x − x2 )dx
0 1
"#1 " 2 #2
x3 2x x3
= + −
3 0 2 3 1

[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 65

= [1/3 − 0] + [(4 − 8/3) − (1 − 1/3)]
= 1/3 + (3 − 7/3)
= 1/3 + 2/3
∴ E[X] = 1
h i
Variance of X : Var(X) = E X2 − [E(X)]2

n
Z∞

g.i
h i
Now, E X2 = x2 f (x)dx
−∞

n
Z0 Z1 Z2 Z∞
*0 *0
= dx + x2 · xdx + x2 · (2 − x)dx +

eri
f(x)
 f(x)
 dx
−∞ 0 1 0
(∵ by(1))
ine
Z1 Z2  
= x3 dx + 2x2 − x3 dx
ng

0 1
#1 " 3
" #2
x4 2x x4
= + −
E

4 0 3 4
  1
arn

= [1/4 − 0] + [(16/3 − 16/4) − (2/3 − 1/4)]
= 1/4 + (14/3 − 15/4)
= 1/4 + (56 − 45)/12
= 1/4 + 11/12 = 14/12
∴ E[X²] = 7/6
∴ Var(X) = E[X²] − [E(X)]² = 7/6 − 1 = 1/6
1.6.5 Examples of Binomial distribution

Example 1.17 Let X denotes the number of heads in an experiment of tossing two
coins. Find probability distribution by using Binomial distribution.
[Link]
[Link]
66 UNIT I - Random Variables

Solution :
X = a discrete r.v. which denotes the number of heads. (1)
1
p = Probability of getting a head from a single coin = (2)
2
1 1
q=1−p=1− = (3)
2 2
n = number of coins = 2 (3)
W.K.T. Binomial PMF of B.D. is P(X = x) = nCx px qn−x (4)
Probability distribution P(X = x) :

n
 0  2−0
1 1 1 1
When X = 0, P(X = 0) = 2C0 =1·1· =

g.i
2 2 4 4
 1  2−1
1 1 1 1 2
When X = 1, P(X = 1) = 2C1 =2· · =

n
2 2 2 2 4

eri
 2  2−2
1 1 1 1
When X = 2, P(X = 2) = 2C2 =1· ·1=
2 2 4 4
Example 1.18 Find Binomial distribution and P(X = 4), for
ine
(a) mean 4, variance = 3 (b) mean 4, variance = 5.
ng

Solution : Given distribution is binomial.

W.K.T. PMF of B.D. is P(X = x) = nCx px qn−x , x = 0, 1, 2, · · · , n


E

(1)
Mean of B.D. = np (2)
arn

Variance of B.D. = npq (3)


Relation between Mean & Variance is Mean > Variance (4)
Le

(a) Mean = np = 4 · · · (5)


Variance = npq = 3 · · · (6)
w.

4q = 3
3
∴q = (b) Mean = np = 4
ww

4
1 Variance = npq = 5
W.K.T. p = 1 − q = 1 −
4 4q = 5
(5) ⇒ np = 4 5
∴q = >1
1 4
∴ n = 16 ∵p=
4 which is not possible
 x  16−x
1 3 ∵ 0 ≤ Probability ≤ 1
∴ (1) ⇒ P(X = x) = 16Cx
4 4
 4  12
1 3
i.e., P(X = 4) = 16C4
4 4
= 0.22
Here Mean > Variance Here Mean < Variance
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 67

Example 1.19 A machine manufacturing screws is known to produce 5%
defectives. In a random sample of 15 screws, find the probability that there are
(a) exactly 3 defectives (b) all good screws
(c) not more than 3 defectives (at most 3 defectives) (d) at least 3 defectives
(e) defectives between 1 and 3 (inclusive) (f)

Solution :

n
Let X = Number of defectives

g.i
5
p = Probability of getting a defective = = 0.05
→0

100
q = 1 − p = 0.95

n
n = Number of screws = 15

eri
ine →∞


So, X∼B.D.(n, p)
ng

W.K.T. P.M.F. of B.D. is P(X = x) = nCx px qn−x


= 15Cx (0.05)x (0.95)15−x
E
arn

(a) P(X = 3) = 15C3 (0.05)³ (0.95)¹² = 0.031

(b) P(all good screws) = P(X = 0) = 15C0 (0.05)⁰ (0.95)¹⁵ = 0.463

(c) P(X ≤ 3) = P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3)
= 0.463 + 0.366 + 0.135 + 0.031
= 0.995

(d) P(X ≥ 3) = 1 − P(X < 3)
= 1 − [P(X = 0) + P(X = 1) + P(X = 2)]
= 1 − [0.463 + 0.366 + 0.135] = 0.036

(e) P(1 ≤ X ≤ 3) = P(X = 1) + P(X = 2) + P(X = 3)
= 0.366 + 0.135 + 0.031 = 0.531
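Note (illustrative addition): the binomial probabilities of Example 1.19 can be reproduced with scipy.stats.binom, assuming SciPy is available.

# Check of Example 1.19, X ~ B(n = 15, p = 0.05).
from scipy.stats import binom

n, p = 15, 0.05
print(binom.pmf(3, n, p))                        # (a) exactly 3 defectives  ~ 0.0307
print(binom.pmf(0, n, p))                        # (b) all good screws       ~ 0.4633
print(binom.cdf(3, n, p))                        # (c) at most 3 defectives  ~ 0.9945
print(1 - binom.cdf(2, n, p))                    # (d) at least 3 defectives ~ 0.0362
print(binom.cdf(3, n, p) - binom.cdf(0, n, p))   # (e) 1 to 3 inclusive      ~ 0.5312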
[Link]
[Link]
68 UNIT I - Random Variables

Example 1.20 In a certain city, 20% of the population is literate. Assume that 200
investigators each take a sample of 10 individuals to see whether they are literate. How many
investigators would you expect to report that three people or less are literate in the
sample?

Solution :

n
Let X =Number of literates

g.i
p =Probability of getting a literate
20
=Percentage of getting a literate = = 0.2

n
→0

100
q =1 − 0.2 = 0.8
n = Sample size (individuals per investigator) = 10
→∞

eri
ine
N =Number of investigators = 200
ng

So, X∼B.D.(n, p)
E

W.K.T. P.M.F. of B.D. is P(X = x) = nCx px qn−x


arn

= 10Cx (0.2)x (0.8)10−x


Le

∴ P(X ≤ 3) =P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3)


w.

=10C0 (0.2)0 (0.8)10 + 10C1 (0.2)1 (0.8)9


+ 10C2 (0.2)2 (0.8)8 + 10C3 (0.2)3 (0.8)7
ww

= 0.107 + 0.268 + 0.302 + 0.201
= 0.878
∴ Number of investigators reporting 3 or less literates = N × P(X ≤ 3)
= 200 × 0.878
= 175.6
= 176

Example 1.21 In 256 sets of 12 tosses of coins, in how many cases may one expect
eight heads and 4 tails?
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 69

Solution :

Let X =Number of heads


p =Probability of getting a head from a single coin
1
= = 0.5 →0

2
q =1 − 0.5 = 0.5
n =Number of tosses = 12 →∞


N =Number of sets = 256

n
So, X∼B.D.(n, p)

n g.i
W.K.T. P.M.F. of B.D. is P(X = x) = nCx pˣ qⁿ⁻ˣ
= 12Cx (0.5)ˣ (0.5)¹²⁻ˣ

∴ P(X = 8) = 12C8 (0.5)⁸ (0.5)⁴
= 0.121
ng

∴ Number of sets having 8 heads = N × P(X = 8)


= 256 × 0.121
E

= 30.98
arn

= 31

Example 1.22 6 dice are thrown 729 times, how many times do you expect at
Le

least 3 dice to show a 5 or 6?


w.

Solution :
ww

Let X =Number of dice that shows 5 or 6


p =Probability of getting a 5 or 6 from a single die
=P(to get 5) + P(to get 6) − P(to get 5 and 6 )
1 1 2
= + −0=
6 6 6
1
=  0.3333 →0

3
1 2
q =1 − =
3 3
n =Number of dice = 6 →∞


N =Number of times 6 dice thrown = 729


[Link]
[Link]
70 UNIT I - Random Variables

So, X∼B.D.(n, p)

W.K.T. P.M.F. of B.D. is P(X = x) = nCx px qn−x


1 x 2 6−x
= 6Cx
3 3

n
∴ P(getting atleast 3 dice to get 5 or 6)

n g.i
=P(X ≥ 3)
=1 − P(X < 3) = 1 − P(X = 0, 1, 2)
=1 − [P(X = 0) + P(X = 1) + P(X = 2)] eri
ine
"  0  6  1  5  2  4 #
1 2 1 2 1 2
=1 − 6C0 + 6C1 + 6C2
3 3 3 3 3 3
ng

=1 − [0.088 + 0.263 + 0.329]


=0.320
E
arn

∴ Number of times that show at least 3 dice showing either 5 or 6


Le

= N × P(X ≥ 3)
w.

= 729 × 0.320
= 233
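Note (illustrative addition): Example 1.22 is a one-line computation with scipy.stats.binom.

# Check of Example 1.22: X ~ B(6, 1/3); expected count in 729 throws.
from scipy.stats import binom

p_at_least_3 = 1 - binom.cdf(2, 6, 1/3)
print(p_at_least_3, round(729 * p_at_least_3))   # ~0.3196, about 233 times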
ww

Example 1.23 With the usual notation, find p for a binomial random
variable X, if n = 6 and 9 P(X = 4) = P(X = 2).

Solution : Given X ∼ B.D. (n, p), where n = 6.


To find p from given relation 9[P(X = 4)] = P(X = 2) (1)
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 71

W.K.T. P.M.F. of B.D. is P(X = x) = nCx px qn−x

(1) ⇒ 9P(X = 4) = P(X = 2)


9 [6C4] p⁴ q² = [6C2] p² q⁴     (∵ 6C4 = 6C2 by nCr = nCn−r)
i.e., 9p² = q²
9p2 − q2 = 0
9p2 − (1 − p)2 = 0
9p2 − 1 − p2 + 2p = 0

n
8p2 + 2p − 1 = 0

g.i
8p2 + 4p − 2p − 1 = 0
4p(2p + 1) − 1(2p + 1) = 0

n
(4p − 1)(2p + 1) = 0

eri
1 −1
p = or
4 2
−1
ine
p, (∵ 0 ≤ p ≤ 1)
2
1
∴p=
ng

4
E
arn
Le
w.
ww

[Link]
[Link]
72 UNIT I - Random Variables

1.6.6 Examples of Poisson distribution

Example 1.24 X is a Poisson distribution random variable such that P (X = 1) =


0.3, P (X = 2) = 0.2. Find P (X = 0).
Solution : Given X is a Poisson distribution random variable.
i.e., X ∼ P.D.(λ = np)
W.K.T. P.M.F. of P.D. is
e−λ λx

n
P(X = x) = , x = 0, 1, 2, 3, · · · , ∞
x!

g.i
Given P (X = 1) = 0.3, P (X = 2) = 0.2. (1)
e−λ λ0
We have to find P(X = 0) = = e−λ

n
0!
which indicates to find λ from (1)

P (X = 1) = 0.3
eri
P (X = 2) = 0.2
ine
e−λ λ1 e−λ λ2
= 0.3 = 0.2
1! 2!
ng

e−λ λ = 0.3 · · · (2) e−λ λ2 = 0.4 · · · (3)


E

(3) e−λ λ2 0.4 4


⇒ −λ = =
e λ
arn

(2) 0.3 3
4
λ=
3
Le

e−λ λ0 −4
P(X = 0) = = e 3 ∴ P(X = 0) = 0.263
w.

0!
Example 1.25 A manufacturer of pins knows that 5% of his product is defective.
ww

If he sells pins in boxes of 100 and guarantees that not more than 10 pins will be
defective, what is the probability that a box will fail to meet the guaranteed quality?
Solution :
Let X =Number of defectives
p =Probability of getting a defective
5
=Percentage of getting a defective = = 0.05
→0

100
n = Number of pins in a box = 100
→∞


np =5
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 73

So, X∼B.D.(n, p) (or) X∼P.D.(λ = np)

e−λ λx
W.K.T. P.M.F. of P.D. is P(X = x) =
x!
−5 x
e 5
=
x!
P(box fails to meet the guaranteed quality)
= P(number of defectives is more than 10)
= P(X > 10)

n
=P(X ≥ 11)

g.i
=P(X = 11) + P(X = 12) + P(X = 13) + · · · + P(X = 100)
=(or)

n
=1 − P(X ≤ 10) = 1 − [P(X = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)]

=1 −
" −5 0
e 5
0!
+
1!
+ eri
e−5 51 e−5 52 e−5 53 e−5 54
2!
+
3!
+
4!
ine
#
e−5 55 e−5 56 e−5 57 e−5 58 e−5 59 e−5 510
+ + + + + +
5! 6! 7! 8! 9! 10!
ng

" 0 1 2 3 4 5 6 7 8 9 10
#
5 5 5 5 5 5 5 5 5 5 5
=1 − e−5 + + + + + + + + + +
0! 1! 2! 3! 4! 5! 6! 7! 8! 9! 10!
E

=0.013695
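Note (illustrative addition): the same tail probability follows directly from the Poisson c.d.f.

# Check of Example 1.25: X ~ Poisson(lambda = 5); box fails if X > 10.
from scipy.stats import poisson

print(1 - poisson.cdf(10, 5))   # ~0.01370, matching the value computed above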
arn

Example 1.26 Find the probability that of ace of spade will drawn from pack of
well shuffled cards atleast once in 104 consecutive trials.
Le

Solution :
Let r.v.X =Number of ace spade
w.

p =Probability of getting a ace spade in a single trial


ww

1
= →0 (which indicates to use Poisson distribution)
52
n =Number of trials = 104→∞


1
np =104 = 2(= λ)
52
So, X∼P.D.(λ = np)

e−λ λx
W.K.T. P.M.F. of P.D. is P(X = x) =
x!
P(Number of ace spades at least one)

[Link]
[Link]
74 UNIT I - Random Variables

Using P.D. (or) Using B.D.


Using P.D. Using B.D.
=P(X ≥ 1) =P(X ≥ 1)
=P(X = 1, 2, · · · 104) =P(X = 1, 2, · · · 104)
(or) (or)
=1 − P(X < 1) = 1 − [P(X = 0)] =1 − P(X < 1) = 1 − [P(X = 0)]
" −2 0 # "  0  1 #
e 2 1 51
=1 − =1 − 104C0 04
0! 52 52
=1 − e−2 =1 − 0.1327

n
=0.8646 =0.8672

g.i
1
Example 1.27 In a blade factory there is a chance of for any blade to be

n
500

eri
defective. The blades are in packets of 10. Use Poisson distribution to calculate
the number of packets that containing
ine
(a) no defective (b) one defective (c) two defectives
ng

respectively in a consignment of 10,000 packets.


E
arn

Solution :

Let X =Number of defective blade


Le

p =Probability of getting a defective blade


1
w.

=
500
=0.002→0 (which indicates to use Poisson distribution)
ww

n =Number of blades = 10 →∞




10 1
np = = (= λ)
500 50
N =Number of Total Packets = 10000

e−λ λx
W.K.T. P.M.F. of P.D. is P(X = x) =
x!  
1 x
1
e− 50 50
= x = 0, 1, 2, · · · , 10
x!

Probability of getting Number of packets that contains


X number of defectives X number of defectives
0
( )
1

− 50 1
e − 50
(a) P(X = 0) = (a) N × P(X = 0) = 10000 × 0.98019
0! = 9802
=0.98019
1
 1
− 50 1
e − 50 (b) N × P(X = 1) = 10000 × 0.0196
(b) P(X = 1) =
1! = 196
=0.0196
2

n
( )
1

− 50 1
e − 50 (c) N × P(X = 2) = 10000 × 0.00019603
(c) P(X = 2) =

g.i
2! = 1.9603  2
=0.00019603

n
1.6.7 Examples of Geometric distribution:
eri
ine
Example 1.28 With a probability that a child get a disease is 0.4. Find probability
that a child get disease (a) at 4th attempt (b) at 7th attempt.
ng

Solution :
E

Let X =Number of trials are performed until the first time disease occurs.
arn

p =Probability that a child get disease at x attempt


=0.4
Le

q =1 − p = 0.6
w.

W.K.T. P.M.F. of Geo.D. is P(X = x) = qx−1 p, x = 1, 2, 3, · · ·


ww

(a) P(X = 4) =q3 p = (0.6)3 (0.4) = 0.0864


[∵ start with 3 continuous failures and finally attacks by disease]
(b) P(X = 7) =q6 p = (0.6)6 (0.4) = 0.0186624
[∵ start with 6 continuous failures and finally attacks by disease]

Example 1.29 Probability of a student passing a subject is 0.8. What is the


probability that he will pass the subject in
(a) 3rd attempt (b) before 3rd attempt (c) after 3rd attempt
(d) even number of attempts (e) odd number of attempts.
[Link]
[Link]
76 UNIT I - Random Variables

Solution :

Let X =Number of trials are performed until passing the subject.


p =Probability that a student get pass at x attempt
80
=Percentage that a student get pass at x attempt = 80% =
100
=0.8
q =1 − 0.8 = 0.2

n
g.i
W.K.T. P.M.F. of Geo.D. is P(X = x) = qx−1 p, x = 1, 2, 3, · · ·

n
eri
 
(a) P 3rd attempt = P(X = 3) = q3−1 p = q2 p = (0.2)2 (0.8) = 0.032
 
(b) P before 3rd attempt = P(X < 3) = P(X ≤ 2)
ine
x=2
X x=2
X
= q p=
x−1
(0.2)x−1 (0.8)
ng

x=1 x=1
h i
= (0.8) (0.2) + (0.2) = (0.8)[1.2] = 0.96
0 1
 
E

(c) P(after 3rd attempt) = P(X > 3) = 1 − P(X ≤ 3)
= 1 − [P(X = 1) + P(X = 2) + P(X = 3)] = 1 − [0.8 + 0.16 + 0.032]
= 0.008
(d) P(even number of attempts) = P(X = 2) + P(X = 4) + P(X = 6) + · · ·
= (0.2)¹(0.8) + (0.2)³(0.8) + (0.2)⁵(0.8) + · · ·


h i
= (0.2)(0.8) 1 + (0.2)2 + (0.2)4 + · · ·
w.

h i−1
= (0.2)(0.8) 1 − (0.2)2
ww

= (0.16)(1.041666667) = 0.166666667
(e) P odd attempts = 1 − P(even attempts)


= 1 − 0.166666667
= 0.833333333
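Note (illustrative addition): scipy.stats.geom uses the same support x = 1, 2, 3, … and reproduces these answers; the even/odd probabilities are approximated by a finite sum.

# Check of Example 1.29, geometric with p = 0.8.
from scipy.stats import geom

p = 0.8
print(geom.pmf(3, p))        # (a) pass on 3rd attempt  = 0.032
print(geom.cdf(2, p))        # (b) before 3rd attempt   = 0.96
print(1 - geom.cdf(3, p))    # (c) after 3rd attempt    = 0.008
p_even = sum(geom.pmf(x, p) for x in range(2, 200, 2))
print(p_even, 1 - p_even)    # (d) ~1/6, (e) ~5/6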

Example 1.30 A and B shoot independently until each has hit his own target.
3 5
The probabilities of their hitting the target at each shot 5 and 7 respectively. Find
the probability that B will require more shots than A.
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 77

Solution :
Let X = Number of trials required by A to get his first success.
X ∼ Geo. Dist.(p)
3
Given p = Probability of Ahitting the target = = P(A)
5
  3 2
P A = 1 − P(A) = 1 − =
5 5
Let Y = Number of trials required by B to get his first success.
Y ∼ Geo. Dist.(p)
5

n
Givenp = Probability of Bhitting the target = = P(B)
7

g.i
  5 2
P B = 1 − P(B) = 1 − =
7 7

n
eri
W.K.T. P.M.F. of Geo.D. is P(X = x) = qx−1 p, x = 1, 2, 3, · · ·
ine
P(B requires more shots to get his first success thanAto get his first success)
x→∞
X
= P[X = x and Y = (x + 1)(or)Y = (x + 2)(or)Y = (x + 2)(or) · · · ]
ng

x=1
x→∞
E

X
= P(X = x) · P[Y = (x + 1)(or)Y = (x + 2)(or)Y = (x + 3)(or) · · · ]
arn

x=1
(∵ shooters A and B are hitting the target in independent fashion)
x→∞
X  x−1 k→∞
X  (x+k)−1
=
Le

P A · P(A) P B · P(B)
x=1 k=1
x→∞ x−1   k→∞
X  (x+k)−1 
2 3 2 5
w.

X 
= ·
5 5 7 7
x=1 k=1
ww

    x→∞ k→∞  
3 5 X 2 x−1 X 2 x−1 2 k
   
= ·
5 7 5 7 7
x=1 k=1
  x→∞ x−1 k→∞
3 X 2 x−1 2
k
2
   X
= ·
7 5 7 7
x=1 k=1
 x→∞  k→∞  
3 4 x−1 X 2 k
 X
=
7 35 7
x=1 k=1
  x→∞
3 X 4 x−1 2 1
  "   2  3 #
2 2
= + + + ···
7 35 7 7 7
x=1

[Link]
[Link]
78 UNIT I - Random Variables

  x→∞
3 X 4 x−1 2
   "  1  2 #
2 2
= 1+ + + ···
7 35 7 7 7
x=1
    x→∞
3 2 X 4 x−1
 −1
2
  
= 1−
7 7 35 7
x=1
      x→∞
3 2 7 X 4 x−1
 
=
7 7 5 35
x=1
 "  1  2 #
6 4 4
= 1+ + + ···
35 35 35

n
 −1
6 4

g.i
 
= 1−
 35   35
6 35
=

n
35 31

eri
6
=
31
ine
1.6.8 Examples of Uniform distribution
ng

Example 1.31 If a random variable X is uniformly distributed with parameters (−3, 3), find


E

(a) P(X < 2) (b) P(|X| < 2) (c) P(|X − 2| < 2)


arn

(d) k such that P(X > k) = 1/4 (e) P(X = 2).


Le

Solution : Given X is defined in (−3, 3).


⇒X ∼ U.D. (−3, 3).
w.

1
∴ P.D.F. of X = f (x) = ,a < x < b
b−a
1 1 1
ww

f (x) = = = , −3 < x < 3 (1)


3 − (−3) 3 + 3 6
Z2
(a) P(X < 2) = f (x)dx
−∞
Z2
1
= dx [∵ by (1)]
6
−3
1
= [x]2−3
6
1 5
= [2 − (−3)] =
6 6
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 79

(b) P(|X| < 2) = P(−2 < X < 2)
= ∫_{−2}^{2} f (x)dx = ∫_{−2}^{2} (1/6) dx     [∵ by (1)]
= (1/6) [x]_{−2}^{2} = (1/6) [2 − (−2)] = 4/6 = 2/3

n
(c) P(|X − 2| < 2) = P(−2 < X − 2 < 2)

g.i
= P(0 < X < 4)
= P(0 < X < 3) [∵ X ∼ in (−3, 3)]

n
Z3 Z3

eri
1
= f (x)dx = dx
6
0 0
ine
Z3
1 1 1 1
= dx = [x]30 = [3 − 0] =
6 6 6 2
ng

0
Z∞
1 1
(d) P(X > k) = ⇒ f (x)dx =
E

4 4
k
arn

Z3
1 1 1 1
dx = ⇒ [x]3k =
6 4 6 4
Le

k
6 3
[3 − k] = =
4 2
w.

3 −6 + 3
−k = −3 + =
2 2
3
ww

−k = −3/2
k = 3/2
(e) P(X = 2) = 0     [∵ for a continuous r.v., the probability at any single point is 0]

Example 1.32 Buses arrive at a specific stop at 15 minutes interval starting at


7.00 a.m. (i.e.) they arrive at 7.00, 7.15, 7.30 and so on. If a passenger arrives at
that stop at a random time which is uniformly distributed between 7.00 and 7.30
a.m. Find the probability that he waits

(a) less than 5 minutes (b) more than 12 minutes.


[Link]
[Link]
80 UNIT I - Random Variables

Solution :
Using X denotes Waiting time (OR) Using X denotes Arrival time
Let X be a random variable that Let X be a random variable that denotes
denotes the waiting time of the the arrival time of the passenger to get
passenger to get the bus who the bus who arrives between 7.00 a.m.
arrives between 7.00 a.m. and and 7.30 a.m.
7.30 a.m. The buses arrive for every 30 minutes.
The buses arrive for every 15 Whenever the passenger arrives
minutes. between 7.00 a.m. 7.30 a.m., his arrival
Whenever the passenger arrives time with minimum 0 minutes to

n
between 7.00 a.m. 7.30 a.m., he maximum 30 minutes.

g.i
has wait for minimum 0 minutes
X ∼ U.D. (0,30).
to maximum 15 minutes.

n
∴ X is uniformly distributed in (0, 30).
X ∼ U.D. (0,15).

eri
1 1 1
∴ X is uniformly distributed in f (x) = = =
b − a 30 − 0 30
(0, 15).
ine
(a) P(waiting time less than 5 minutes)
1 1 1
f (x) = = = =P(7.10 ≤ X ≤ 7.15)+P(7.25 ≤ X ≤ 7.30)
b − a 15 − 0 15
Z7.15 Z7.30
ng

Z5
= f (x)dx + f (x)dx
(a) P(X < 5) = f (x)dx
E

7.10 7.25
−∞
Z15 Z30
Z5
arn

1 1 1 h 15 i
1 1 = dx+ dx = [x]10 +[x]30
= dx = [x]50 15 15 30 25
15 15 10 25
0
1 10 1
Le

1 1 = [[(5) − (0)] + [(5) − (0)]] = =


= [5 − 0] = 30 30 3
15 3 (b) P(waiting time at least 12 minutes)
(b) P(X ≥ 12) = 1 − P(X < 12)
w.

=P(7.00 ≤ X ≤ 7.03)+P(7.15 ≤ X ≤ 7.18)


Z12
Z7.03 Z7.18
=1− f (x)dx
ww

= f (x)dx + f (x)dx
−∞
Z12 Z1 7.00 7.15
1 1 Z3 Z18
=1− dx = 1 − dx 1 1
15 15 = dx + dx
0 0 30 30
1 12 0 15
=1− [x] 1 h 3
15 0
i
= [x]0 + [x]15
18
1 1 30
= 1 − [12 − 0] = 1 − [12] 1
15 15 = [[(3) − (0)] + [(18) − (15)]]
4 30
=1− 6 1
5 = =
1 30 5
=
5 [Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 81

Example 1.33 An electric train runs every half an hour from 12.00 a.m. to 6.00
a.m. in the morning. Find the probability of a man entering the station at a
random time during this period, will have to wait

(a) atmost 20 minutes (b) atleast 20 minutes.

Solution : Let X be a random variable that denotes the waiting time of


the passenger to get the bus who arrives between 12.00 a.m. and 6.00 a.m.
The buses arrive for every half an hour for every 30 minutes.

n
Whenever the passenger arrives between 12.00 a.m. and 6.00 a.m., he has

g.i
wait for minimum 0 minutes to maximum 30 minutes.
X ∼ U.D. (0,30).
∴ X is uniformly distributed in (0, 30).

n
eri
1 1 1
f (x) = = =
b − a 30 − 0 30
Z20
ine
(a) P(X ≤ 20) = f (x)dx
−∞
ng

Z20
1
= dx
30
E

0
1 20
arn

= [x]
30 0
1
= [20 − 0]
Le

30
2
=
3
w.

(b) P(X ≥ 20) = 1 − P(X < 20)


2
= 1 − P(X ≤ 20) = 1 −
ww

3
1
=
3

1.6.9 Examples of Exponential Distribution:

x
Example 1.34 If X be the time to repair in hours with p.d.f. f (x) = ce− 5 , x > 0.
Find (a) c (b) mean (c) variance (d) P(X > 3) (e) P(3 < X ≤ 6).

Solution : Given pdf of a continuous random variable X is


x
f (x) = ce− 5 , x > 0 (1)
[Link]
[Link]
82 UNIT I - Random Variables

Z∞
(a) By pdf, wkt f (x)dx = 1 ∵ pdf contains an unknown c
−∞
c ∫_0^∞ e^{−x/5} dx = 1     (∵ by (1))
c [e^{−x/5} / (−1/5)]_0^∞ = 1
c [0 + 5] = 1
∴ c = 1/5
∴ f (x) = (1/5) e^{−x/5}, x ≥ 0.
1 x 1
Note : This f (x) = e− 5 is of the form f (x) = αe−αx , where α = .
5 5
ng

⇒ X ∼ Exponential Distribution.
1
(b) Mean =
α
E

1
= 1
arn

5
=5
Le

1
(c) Variance =
α2
1
=  2
w.

1
5
ww

= 25
Z∞
(d) P(X > 3) = f (x)dx
3
Z∞  x ∞
1 −x 1  e− 5 
= e 5 dx =  1 
5 5 −5
3 3

1 5 −x
  h i∞
−3 ∞
h i
= − e 5 =− 0−e 5
5 1 3 3
= e−0.6
= 0.5488

Z6
(e) P(3 < X ≤ 6) = f (x)dx
3
Z6
1 −x
= e 5 dx
5
3
 x 6
1  e− 5 
=  1 
5 −5
  3h i6

n
1 5 −x
= − e 5
5h 1

g.i
3
−6 −3
i
=− e5 −e5
= e−0.6 − e−1.2

n
= 0.2476
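Note (illustrative addition): with rate 1/5 (SciPy scale = 5), scipy.stats.expon confirms the mean, variance and parts (d) and (e).

# Check of Example 1.34: X ~ Exp(rate 1/5), i.e. scale = 5 in SciPy.
from scipy.stats import expon

X = expon(scale=5)
print(X.mean(), X.var())      # 5, 25
print(X.sf(3))                # (d) P(X > 3)      ~ 0.5488
print(X.cdf(6) - X.cdf(3))    # (e) P(3 < X <= 6) ~ 0.2476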

eri
ine
Example 1.35 The mileage of a tyre have exponential distribution with mean
40,000 kms. Find the probability that one of these tyres will last
ng

(a) atleast 20,000 kms (b) atmost 30,000 kms


(c) atleast 30,000 kms., given that it has been used for 20,000 kms.
E

Solution : Given distribution is exponential.


arn

Let X be a random variable which denotes the mileage of a tyre.


W.K.T. f (x) = αe−αx , x ≥ 0. (∵ X ∼ Exp. dist. (parameter = α))
Le

Given Mean = 40000 kms


1
= 40000
w.

α
1
α=
40000
ww

1 − x
∴ f (x) = e 40000
40000
20000
h i
(a) P(X ≥ 20000) = e− 40000 ∵ P(X > k) = P(X ≥ k) = e−αk is true for Exp. D
1
= e− 2
= 0.6065
(b) P(X ≤ 30000) = 1 − P(X > 30000)
h i
− 30000
=1−e 40000 ∵ P(X > k) = P(X ≥ k) = e is true for E. D.
−αk

3
= 1 − e− 4
= 0.5276
[Link]
[Link]
84 UNIT I - Random Variables

(c) P(X > 30000/X > 20000) = P(X > 20000 + 10000/X > 20000)
= P(X > 10000) [∵ P[X > (s + t)/P(X > s)] = P(X > t) is true for E. D.]
h i
− 14
=e ∵ P(X > k) = P(X ≥ k) = e is true for E. D.
−αk

= 0.7788
Note : P(X > t) for n tyres is [P(X > t)]n .

Example 1.36 The daily consumption of milk in excess 20,000 litres is


1
approximately exponential distributed with = 3000. The city has a stock of

n
λ

g.i
35,000 litres. Find the probability of 2 days select at random, the stock is
insufficient(inadequate) for both days.

n
eri
Solution : Given distribution is exponential.
Let X be a random variable which denotes daily milk consumption in
excess of 20000 litres.
ine
If this excess is upto 15000 litres, the stock is sufficient.
If this excess is more than 15000 litres, the stock is insufficient.
ng

W.K.T. f (x) = λe^{−λx}, x ≥ 0.     (∵ X ∼ Exp. dist. (parameter = λ))


E

1
Given Mean = = 3000
arn

λ
1
λ=
3000
Le

1 − x
∴ f (x) = e 3000
3000
w.

15000
h i
∴ P(X > 15000) = e− 3000 ∵ P(X > k) = P(X ≥ k) = e−λk is true for Exp. D
= e−5
ww

= 0.006737947
∴ For 2 days, probability for the insufficient of milk is
 2 
[P(X > 15000)]2 = e−5 ∵ P(X > k) for n days = [P(X > k)]n


= 0.0000453999298391533

1.6.10 Examples of Gamma & Erlang distribution

Example 1.37 In a certain locality, the daily consumption of water can be treated
as random variable having Gamma distribution with an average of 3 million litres.
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 85

If the pumping station has a daily supply of 4 million litres. What is the probability
that this water supply will be inadequate on any day?

Solution : Given distribution is Gamma.


Let X be a random variable which denotes daily consumption of water.

e−x xβ−1
W.K.T. f (x) = , β > 0, 0 ≤ X < ∞.

n
β

g.i
(∵ X ∼ Gamma. dist. (parameter = β))
Given Mean = β = 3 million litres

n
e−x x2 e−x x2
f (x) = =
3
eri
2!
ine
∴ Probability that this water supply will be inadequate on any day is
ng

Z∞
P(X > 4) = f (x)dx
E

4
arn

Z∞
1
= e−x x2 dx
2
"4 " −x #
Le

" −x # " −x ##∞


1 2 e e e
= x − 2x +2
2 −1 1 −1 4
w.

1h  i∞
= −x2 e−x − 2x e−x − 2 e−x
   
2 4
ww

1 h  i
= (0 − 0 − 0) − −16e − 8e − 2e
−4 −4 −4
2
= 0.2381

Example 1.38 In a certain city the daily consumption of electric power in million
of Kilowatts can treated as a random variable having Erlang distribution with
 
parameters 12 , 3 . If the power plants daily capacity of 12 millions kilowatts. Find
the probability that this power supply will be inadequate on any given day?
[Link]
[Link]
86 UNIT I - Random Variables

Solution : Given distribution is Erlang.

1
 
X ∼ Erlang Distribution with parameters α = , β = 3 .
2
Let X be a Erl. r.v. which denotes daily consumption of electricity.
e−αx xβ−1
p.d.f. f (x) = αβ , α, β > 0, 0 ≤ X < ∞
β
1
 3
1
e− 2 x x2 2
=

n
3

g.i
− 12 x
e x2
=
16

n
Z∞ eri
ine
P(X > 12) = f (x)dx
12
Z∞ 1
ng

e− 2 x x2
= dx
16
12
E

      ∞
  − 12 x   − 12 x   − 21 x 
1  2  e   e  e
arn

= x     − 2x     + 2    


 
16   − 1   1  2 3
 1 
2 − 2 − 2 12
1 h h 1 i h 1 i h 1 ii∞
= −2x2 e− 2 x − 8x e− 2 x − 16 e− 2 x
Le

16 12
1 h  i
= 2 −6 −6
(0 − 0 − 0) − −2(12) e − 8(12)e − 16e −6
w.

16
1 h i
= 2(12)2 e−6 + 96e−6 + 16e−6
16
ww

1 h i
= 400e−6
16
= 25e−6
= 0.0620

Example 1.39 The daily consumption of milk in a city, in excess of 20,000 litres
is approximately distributed as Gamma variant with parameters α = 10,000 , β
1
= 2.
The city has a daily stock of 30,000 litres. What is the probability that the stock is
insufficient?
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 87

Solution : Given distribution is Erlang.

1
 
X ∼ Erlang Distribution with parameters α = ,β = 2 .
10000
Let X be a Erl. r.v. which denotes daily consumption of milk
in excess of 20000 litres.
e−αx xβ−1
p.d.f. f (x) = αβ , α, β > 0, 0 ≤ X < ∞
β

n
1
 2
1
e− 10000 x x1 10000
=

g.i
2
1
− 10000 x
e x

n
=
(10000)2

eri
If this excess is upto 10000 litres, the stock is sufficient. If this excess is
more than 10000 litres, the stock is insufficient.
ine
Z∞
ng

P(X > 10000) = f (x)dx


E

10000
2 Z∞
1

arn

1
= e− 10000 x x1 dx
10000
10000
    ∞
1 1
Le

2   − 10000 x  − x
1   e   e 10000 
 
= x     −   2  dx
10000    − 1   − 1 
10000 10000 10000
w.

1 2h h
  i h ii∞
1 1
= x −10000e− 10000 x − (10000)2 e− 10000 x dx
10000 10000
ww

1 2h
  h i h ii
1 1
= (0 − 0) − −(10000)2 e− 10000 − (10000)2 e− 10000
10000
= 2e−1
= 0.7358

1.6.11 Examples of Normal distribution:

Refer area under Normal curve for P(0 < Z < z1 ):

Example 1.40 Find


[Link]
[Link]
88 UNIT I - Random Variables

(a) P(0 < Z < 1.25) (b) P(0.6 < Z < 1.25)
(c) P(Z < 1.25) (d) P(Z > 1.25)
(e) P(−1.25 < Z < 1.25) = P(|Z| < 1.25) (f) P(−2.5 < Z < 1.25)

n
Solution : Here Z is continuous normal varible.

n g.i
(a) P(0 < Z < 1.25)
= 0.3944 eri z=0 z = 1.25
ine
ng

(b) P(0.6 < Z < 1.25)


E

= P(0 < Z < 1.25) − P(0 < Z < 0.6) z = .6 z = 1.25


arn

= 0.3944 − 0.2258
= 0.1686
Le
w.

(c) P(Z < 1.25)


ww

= P(−∞ < Z < 0) + P(0 < Z < 1.25) −∞ ← z z = 1.25


= 0.5 + 0.3944
= 0.8944

(d) P(Z > 1.25) = 1 − P(Z < 1.25)


z = 1.25 z → ∞
= 1 − 0.8944
= 0.1056
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 89

(e) P(|Z| < 1.25)


= P(−1.25 < Z < 1.25)
= P(−1.25 < Z < 0)+P(0 < Z < 1.25)
z = −1.25 z = 1.25
= 2 × P(0 < Z < 1.25)
[∵ Normal curve is symmetric]
= 2 × 0.3944
= 0.7888

n
g.i
(f) P(−2.5 < Z < 1.25)
= P(−2.5 < Z < 0) + P(0 < Z < 1.25)

n
= P(0 < Z < 2.5) + P(0 < Z < 1.25) z = −2.5 z = 1.25

eri
[∵ Normal curve is symmetric]
= 0.3944 + 0.4938
ine
= 0.8882
Example 1.41 X is a normal variant with mean µ = 30 and σ = 5. Find
ng

(a) P(26 < X < 40) (b) P(X > 45) (c) P(|X − 30| > 5)
E

Solution : Given
arn

µ = 30
σ=5
∴ X ∼ N.D. (µ, σ)
Le

X−µ
W.K.T. Z =
σ
w.

X − 30
=
5
ww

∴ X = 5Z + 30

(a) P(26 < X < 40)


= P(26 < 5Z + 30 < 40)
= P(26 − 30 < 5Z < 40 − 30)
= P(−4 < 5Z < 10)
z = −0.8 z=2
−4 10
 
=P <Z<
5 5
= P(−0.8 < Z < 2)
= P(0 < Z < 0.8) + P(0 < Z < 2)
= 0.7655
[Link]
[Link]
90 UNIT I - Random Variables

(b) P(X > 45) = P(5Z + 30 > 45)


15
 
= P(5Z > 45 − 30) = P Z >
5 z=3 z→∞
= P(Z > 3)
= 0.5 − P(0 < Z < 3)
= 0.0013
(c) P(|X − 30| > 5)

n
= P(|5Z + 30 − 30| > 5)

g.i
= P(|5Z| > 5)
= P(|Z| > 1)

n
= 1 − P(|Z| < 1) z = −1 z=1
= 1 − P(−1 < Z < 1)
= 1 − 2 × P(0 < Z < 1)
eri
ine
[∵ Normal curve is symmetric]
= 1 − 2 × 0.3413 = 1 − 0.6826
= 0.3174
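Note (illustrative addition): the three probabilities of Example 1.41 follow from the normal c.d.f. and survival function.

# Check of Example 1.41: X ~ N(mu = 30, sigma = 5).
from scipy.stats import norm

X = norm(loc=30, scale=5)
print(X.cdf(40) - X.cdf(26))   # (a) P(26 < X < 40)  ~ 0.7653
print(X.sf(45))                # (b) P(X > 45)       ~ 0.00135
print(2 * X.sf(35))            # (c) P(|X - 30| > 5) ~ 0.3173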
E
arn

Example 1.42 The mean height of soldiers to be 172 cm with variance (27cm)2 .
Le

How many soldiers in 1000 can be expected to be over 182 cms?


w.

Solution : Given
ww

µ = 172cm
σ2 = (27cm)2 ⇒ σ = (27cm)
X → denotes the height of a soldier
∴ X ∼ N.D. (µ, σ)
X−µ
W.K.T. Z =
σ
X − 172
=
27
∴ X = 27Z + 172
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 91

∴ The probability of height of soldier is more than 182 cm is


P(X > 182) = P(27Z + 172 > 182)
= P(27Z > 10)
10
 
=P Z>
27
= P(Z > 0.37)
= 0.5 − P(0 < Z < 0.37)
= 0.5 − 0.1443
= 0.3557

n
∴ Number of soldiers having height over 182 cms is

g.i
= 1000 × P(X > 182)
= 1000 × 0.3557

n
= 355.7

eri
 356 soldiers
ine
Example 1.43 In a normal distribution, 31% of items are under 45 and 8% are
over 64. Find mean and variance.
ng

Solution : Given distribution is Normal.


E

X−µ
arn

W.K.T. Z = (1)
σ
We have to find µ, σ2 .
Le

Given that 31% of items are under 45 and 8% are over 64.
w.

When X1 = 45 When X2 = 64
X1 − µ X2 − µ
Let Z = z1 = [∵ (1)] Let Z = z2 = [∵ (1)]
ww

σ σ
45 − µ 64 − µ
= =
σ σ
i.e., P(Z < z1 ) = 31% = 0.31 i.e., P(Z > z2 ) = 8% = 0.08
i.e., P(z1 < Z < 0) = 19% = 0.19 i.e., P(0 < Z < z2 ) = 42% = 0.42
Using table, ∴ z1 = −0.5 From table, ∴ z2 = 1.4
45 − µ 64 − µ
= −0.5 = 1.4
σ σ
45 − µ = −0.5σ 64 − µ = 1.4σ
µ − 0.5σ = 45 (1) µ + 1.4σ = 64 (2)
Solving (2) and (3)
[Link]
[Link]
92 UNIT I - Random Variables

Mean = µ = 50
S.D. = σ = 10
Variance = σ2 = 100
Example 1.44 In a normal distribution, 7% of items are under 35 and 89% are
under 65. Find µ, σ2 .
Solution : Given distribution is Normal.

X−µ
W.K.T. Z =

n
(1)
σ

g.i
We have to find µ, σ2 .
Given that 7% of items are under 35 and 89% are under 65.

n
eri
When X1 = 35 When X2 = 65
X1 − µ X2 − µ
Let Z = z1 = [∵ (1)] Let Z = z2 = [∵ (1)]
σ σ
ine
35 − µ 65 − µ
= =
σ σ
i.e., P(Z < z1 ) = 7% = 0.07 i.e., P(Z < z2 ) = 89% = 0.89
ng

i.e., P(z1 < Z < 0) = 43% = 0.43          i.e., P(0 < Z < z2) = 39% = 0.39
Using the table, z1 ≈ −1.48          From the table, z2 ≈ 1.23
(35 − µ)/σ = −1.48          (65 − µ)/σ = 1.23
35 − µ = −1.48σ          65 − µ = 1.23σ
µ − 1.48σ = 35     (2)          µ + 1.23σ = 65     (3)
Solving (2) and (3),
Mean = µ ≈ 51.4
S.D. = σ ≈ 11.07
Variance = σ² ≈ 122.5

1.7 Functions of random variable

X is a random variable with p.d.f. f (x) with range space RX . Y is a


function of X.
Y = g(X) (1)
From (1), y = g(x), find x in terms of y.
dx
P.d.f. of Y = fY y = fX (x)

, find range space RY .
dy
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 93

1.7.1 Examples of Functions of random variable

Example 1.45 X is a random variable with p.d.f. f (x), 0 < X < 1, with
Y = aX + b. Find p.d.f. of Y.

Solution : Given p.d.f. of random variable X is f (x), 0 < X < 1.

Y = aX + b
y = ax + b

n
ax = y − b

g.i
y−b
x=
a

n
dx 1
=
dy a

W.K.T. p.d.f. of Y = fY (y) = fX (x)eri


dx
ine
dy
!
y−b 1
= fX (1)
a a
ng

Range of Y:
E

Given 0 < X < 1


x>0 x<1
arn

y−b y−b
>0 <1
a a
y>b y<a+b
Le

∴ Range of Y is b < Y < a + b


w.

P.d.f. of Y is given by
 y−b 
fX
ww

a
∴ (1) ⇒ fY (y) = , b < y < a + b.
a
Example 1.46 Let X be a continuous random variable with
 x
 12 , 1 < X < 5


f (x) = 

. Find p.d.f. of 2X − 3.
 0 , otherwise

Solution : Given p.d.f. of random variable X is


 x
 ,1 < X < 5
f (x) =  .

12 (1)

 0 , otherwise

[Link]
[Link]
94 UNIT I - Random Variables

Let Y = 2X − 3
y = 2x − 3
2x = y + 3
y+3
x=
2
dx 1
=
dy 2
dx
W.K.T. p.d.f. of Y = fY (y) = fX (x)
dy

n
y+3 1
!

g.i
= fX
2 2
y+3
!

n
fX
2

eri
=
 y+3 2
ine
 2 
 
12 
 
= [∵ by (1)]
2
ng

y+3
= (2)
48
Range of Y:
E

Given 1 < X < 5


arn

x>1 x<5
y+3 y+3
>1 <5
Le

2 2
y > −1 y<7
∴ Range of Y is −1 < Y < 7
w.

P.d.f. of Y is given by
ww

y+3
∴ (1) ⇒ fY (y) = , −1 < y < 7.
2

 2x , 0 < X < 1


Example 1.47 Given random variable X with p.d.f. f (x) = 

.
 0 , otherwise


Find p.d.f. of Y = 8X3 .

Solution : Given p.d.f. of random variable X is

2x , 0 < X < 1
(
f (x) = . (1)
0 , otherwise
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 95

Y = 8X3
y = 8x3
y
x3 =
8
1
y3
x=
2
2
dx y− 3
=
dy 6

n
g.i
dx
W.K.T. p.d.f. of Y = fY (y) = fX (x)
dy
 1

n
 y 3  1 2
= fX   y− 3

eri
2 6
1 1 2
= y 3 y− 3 [∵ by(1)]
6
ine
1
y− 3
= (2)
6
ng

Range of Y:
E
arn

Given 0 < X < 1


x>0 x<1
1
y3
Le

1
>0 y3
2 <1
1 2
1
3 < 2y < 8
w.

y
y 3 > 0y > 0
∴ Range of Y is 0 < Y < 8
ww

P.d.f. of Y is given by

1
y− 3
∴ (1) ⇒ fY (y) = , 0 < y < 8.
6

Mean of Y:
[Link]
[Link]
96 UNIT I - Random Variables

Z∞
E[Y] = y f (y)dy
−∞
Z8 1
y− 3
= y dy
6
0
Z8
1 2
= y 3 dy
6
0

n
 5 8
1  y 3 

g.i
=  5  dy
6 3
" 5 0#
83

n
= −0
10

eri
" 5#  
2 32
= =
10 10
ine
∴ E[Y] = 3.2
Z∞
ng

h i
E Y2 = y2 f (y)dy
−∞
E

Z8 1
y− 3
=
arn

y dy
6
0
Z8 5
y3
Le

= dy
6
0
w.

 8 8
1  y 3 
=  8  dy
6
ww

" 8 3 0#
83
= −0
16
" 8 # " 4 4#
2 2 ·2
= =
16 16
h i
∴E Y 2
= 16
h i
∴ Variance = E Y − [E(Y)]2 2

= 16 − (3.2)2
= 5.76
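Note (illustrative addition): a quick Monte Carlo simulation (inverse-CDF sampling of X, then transforming) confirms the mean 3.2 and variance 5.76 of Y = 8X³; the sample size and seed are arbitrary.

# Simulation check of Example 1.47: X has density 2x on (0,1), Y = 8 X^3.
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(size=1_000_000)
x = np.sqrt(u)             # inverse-CDF sampling: F_X(x) = x^2  =>  X = sqrt(U)
y = 8 * x**3
print(y.mean(), y.var())   # ~3.2 and ~5.76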
Example 1.48 X is a random variable uniformly distributed in (0, 1).
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 97

Find p.d.f. of Y = sin X.


Solution : Given X is a random variable uniformly distributed in (0, 1).

X ∼ Uni. Dist. (0, 1)


1
f (x) = =1
1−0
Given Y = sin X
y = sin x

n
x = sin−1 y

g.i
dx 1
= p
dy 1 − y2

n
dx
W.K.T. p.d.f. of Y = fY (y) = fX (x)

eri
dy

= fX sin−1 y p
1
ine
1 − y2
1 1
=1· p = p (1)
ng

1 − y2 1 − y2
Range of Y:
E

Given 0 < X < 1
x > 0 ⇒ sin⁻¹ y > 0 ⇒ y > 0
x < 1 ⇒ sin⁻¹ y < 1 ⇒ y < sin 1 ≈ 0.8415
∴ Range of Y is 0 < Y < sin 1

P.d.f. of Y is given by
∴ (1) ⇒ fY(y) = 1/√(1 − y²), 0 < y < sin 1.

1.8 Part - B[Random Variables (R.V.)](Anna University Questions)

1. Find the Moment generating function of a Poisson variate. Deduce


that the sum of two independent Poisson variate is a Poisson variate,
while the difference is not a Poisson variate.
λ(et −1)
{Solution : MGF = e }[May/June 2006, ECE]
2. If X is a random variable uniformly distributed in (0, 1), find the
p.d.f. of Y = sinX. Also find the mean and variance of Y. {PDF =
1/√(1 − y²), Mean = 0.4597, Var ≈ 0.0614}[May/June 2006, ECE]
[Link]
[Link]
98 UNIT I - Random Variables

3. Establish the memoryless property of Geometric distribution.


{P[X > m + n / X > m] = P [X > n]}[May/June 2006, ECE]
4. If X1 and X2 are independent Poisson variates with parameters
λ1 and λ2 , show that the conditional distribution of X1 , given
X1 + X2 follow binomial distribution.[May/June 2006, ECE]
5. Define binomial distribution. Also obtain its Moment generating
function and hence the mean and variance. {P.m.f. of B.D :nCx px qn−x ,
x = 0,1,. . . ,n, where p + q = 1; Mean = np; Var = npq; Mgf =
n
q + pet }[May/June 2006, ECE]

n
g.i
6. The monthly breakdowns of a computer is a random variable having
Poisson distribution with a mean equal to 1.8. Find the probability

n
that this computer will function for a month (a) Without a breakdown
(b) With only one breakdown (c) with oatleast one breakdown.

eri
n
(a) = e−1.8 = 0.1653, (b) = 0.2975, (c) = 0.8347 [M/J 07, CSE]
ine
7. Obtain the Moment generating function of exponential distribution
and hence or otherwise compute the first four moments.
α
h i
{Mgf= α−t ;Mean=E [X] = α , E X2 = α22 , ...}
1
ng

[N/D 2006, ECE]


8. If X1 , X2 be independent random variable each having Geometric
E

distribution qk p,k = 0,1,2,. . . Show that the conditional distribution of


arn

X1 given X1 + X2 is Uniform.
 X−µ 2
9. If X ˜ N( µ , σ2 ). Obtain the probability density function of U = 21 σ .
Le

1 −u 21 −1
{g (u) = e u ,u ≥0}
1/2
w.

10. A random
( −θx variable X has the density function
θe , f or x > 0
f (x) = find the generating function and hence
0, otherwise
ww

θ
n o
find the mean and variance. Mg f = θ−t , Mean = θ , Var = θ2
1 1

11. The savings bank account of a customer showed an average balance of


Rs. 150 and standard deviation of Rs. 50. Assuming that the account
balances are normally distributed (1) What percentage of account is
over Rs. 200? {15.87%} (2) What percentage of account is between
Rs.120 and Rs.170? {38.11%} (3) What percentage of account is less that
Rs. 75? {6.68%}
12. Find the Moment generating function of a Poisson distribution and
hence find its mean and variance. {Mgf =eλ(e −1) ,Mean = λ,Var = λ }
t

[N/D 07, ECE] [M/J 06, CSE]


[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 99

13. Suppose that a trainee soldier shoots a target according to a Geometric


distribution. If the probability that the target is shot on any one shot is
0.7. What is the probability that it takes him an even number of shots?
{0.2308}
14. If X has an exponential distribution with parameter α, find the
probability density function of Y logX.
15. Find the mean, variance and Moment generating function of an
bt
−eat
uniformly distributed random variable. {M.g.f = et(b−a) , Mean = a+b
2 , Var
(b−a)2

n
= 12 } [N/D 06, CSE]

g.i
16. The density function of a random variable X is given by f (x) = Kx(2 −
x), 0 = x = 2. Find K, mean, variance and rth moment.

n
( )
32r+1

eri
K = 3/4, Mean = 1, Var = 1 / 5 , µr = E [X ] =
1 r
(r + 2) (r + 3)
[M/J 2007, ECE]
ine
17. A random variable X has the following probability distribution.
x: 0 1 2 3 4 5 6 7
ng

P(x): 0 k 2k 2k 3k k2 2k2 7k2 Find (1) the value of k (2)


+k
E

P(1.5 < x < 4.5 / x > 2) (3) the smallest value of λ for which
P(x = λ) > 1/2. {1/10, 5/7, 4} [N/D 2008, ECE]
arn

18. Prove that Poisson distribution is the limiting case of binomial


distribution. [N/D 2008, ECE]
Le

19. Define Gamma Distribution and find its mean and variance. [N/D08,
ECE]
w.

20. Find the nth central moments of normal distribution.


2n/2 σn ∞ n −t2
ww

Z
{µn = √ t e dt}
π −∞
[N/D 2007, ECE]
21. A random variable X has the following probability distribution.
x: –2 –1 0 1 2 3
P(x): 0.1 K 0.2 2k 0.3 3k
Find (1) the value of k 2 Evaluate P(X < 2) and P(−2 < X < 2) (3) Find
the cumulative distribution of X (4) Evaluate the mean of X {1/15, 1/2
and 2/5, mean = 16/15, cumulative distribution is
F(x) 0.1 1/6 11/30 1/2 4/5 1 }
:
[Link]
[Link]
100 UNIT I - Random Variables

22. If the density function of a continuous random variable X is given by





 ax, 0≤x≤1
 a, 1≤x≤2

f (x) = 



 3a − ax, 2 ≤ x ≤ 3

 0, elsewhere
Find (1) the value of a (2) the cumulative distribution function of X (3) if
x1 , x2 and x3 are independent observation of X. What is the probability
that exactly one of these 3 is greater than 1.5?[N/D08,ECE]

n
<
  

 
 0, x 0 

g.i
  2

x
,
  

 
 4 0 ≤ x ≤ 1 

1  3
 
, = , , > =
  x 1
 
F (x) − 1 ≤ x ≤ 2 P exactlyone value 1.5

2 4
2 8

 
 

2
3 x 5
,
  

n





 2 x − 4 − 4 2 ≤ x ≤ 3 


x > 3
  
  1,
 

eri
 

23. Find mean and variance of Gamma distribution and hence find mean
ine
and variance of Exponential distribution.{ Gamma distribution: Mean
= λ , Var = λ}, {E.D. Mean = α1 , Var = α12 } [M/J 07, CSE]
24. VLSI chips, essential to the running of a computer system, fail in
ng

accordance with a Poisson distribution with the rate of one chip in


about 5 weeks. If there are two spare chips on hand, and if a new
E

supply will arrive in 8 weeks. What is the probability that during the
arn

next 8 weeks the system will be down for a week or more, owing to a
lack of chips?{0.1665}
25. Find the probability distribution of the total number of heads obtained
Le

in four tosses of a balanced coin. Hence obtain the MGF of X, mean of


X and variance of X.
w.

1 h
 i 
Mg f = 1 + 4e + 6e + 4e + e , Mean = 2, variance = 1
t 2t 3t 4t
16
ww

[A/M 2008, ECE]


26. If X has the distribution function



 0, x<1
1≤x<4



 1/3,
F (x) =  4≤x<6

1/2,

6 ≤ x < 10

5/6,





 1,
 x ≥ 10
Find (1) the probability
n distribution of X (2) P(2 < X < 6) (3) mean
o of X
(4) variance of X. P (x) : 0 1/3 1/6 2/6 1/6 , 1/6, 28/6, 89/9 [A/M
2008, ECE]
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 101

27. Obtain the MGF of Poisson distribution and hence compute the first
four moments.

mg f = eλ(e −1) , µ11 = λ, µ12 = λ2 + λ, µ13 = λ3 + 3λ2 + λ, µ13 = λ4 + 6λ3 + 7λ2


n t

[A/M 2008, ECE]


 
28. State and explain the properties of Normal N µ, σ2 distribution. [A/M
2008, ECE]
29. What is the variance of the random variable X which is uniformly

n
distributed?[A/M 2008, ECE]

g.i
30. In an engineering examination, a student is considered to have failed,
secured second class, first class and distinction, according as he scored

n
less than 45%, between 45% and 60%, between 60% and 75% and

eri
above 75% respectively. In a particular year 10% of the students failed
in the examination and 5% of the students get distinction. Find the
percentages of students who have got first class and second class.
ine
(Assume normal distribution of marks){I calss = 38%, II class = 47%}
[N/D 2008, ECE]
ng

31. =21 [N/D 2007, ECE]


32. A continuous random variable has the pdf f (x) = kx4 , −1 < x < 0. Find
E

the value of k and P[X > −1/2 / X < −1/4].{k=5,1/33}


arn

33. Define geometric distribution. Obtain its m.g.f. and hence compute
the first four moments.
Le

34. For a normal distribution with mean 2 and variance 9, find the value
of x1 , of the variable such that the probability of the variable lying in
w.

the interval (2, x1 ) is 0.4115.


35. A random variable X has a uniform distribution over the interval
ww

(−3, 3). Compute (1) P(X = 2) (2) P( jX − 2 j < 2) (3) find k such that
P(X > k) = 1/3. {0, 2/3, k = 1}
36. Define gamma distribution. Prove that the sum of independent gamma
variates is a gamma variate.
37. In a book of 520 pages, 390 typographical errors occur. Assuming
Poisson’s law for the number or errors per page, find the probability
that a random sample of 5 pages will contain no error.
Kx, x = 1, 2, 3, 4, 5
(
38. If P(X = x) = represents a p.m.f. Find(1) k (2)
0, elsewhere
P(x is a prime number) (3) P(1/2 < x < 5/2 / x > 1) (4) distribution
[Link]
[Link]
102 UNIT I - Random Variables

function.
1 2 3
 
k = , , , pm f is F(x) 1
15
3
15
6
15
10
15 1
15 3 4
[N/D 2009, ECE]
39. The probability mass function of a r.v. x is given by
cλi
P (i) = i! , (i = 0, 1, 2, ...) whereλ > 0. Find (1) P(x = 0) (2) P(x > 2)

λ2
( " #)
1 1 1
c = λ, λ, 1 − λ 1 + λ +
e e e 2!

n
g.i
[N/D 2009, ECE]
40. The time (in hours) required to repairs a machine is exponential,

n
distributed with parameter λ = 1/2. (1) What is the probability that

eri
the repair time exceeds 2 hours? (2) What is the conditional
probability that a repair takes atleast 10 h given that its duration
exceeds 9h? {1/e, P(X > 1) = }[N/D 2009, ECE]
ine
ng
E
arn
Le
w.
ww

[Link]
[Link]

2 UNIT II - TWO DIMENSIONAL RANDOM


VARIBLES

n
Joint distributions - Marginal and conditional distributions -

g.i
Covariance - Correlation and Regression - Transformation of random

n
variables - Central limit theorem (for iid random variables).

eri
ine
2.1 Joint distributions - Marginal and conditional distributions
ng

2.1.1 Joint Probability Function


E

(a) For discrete r.v. (X, Y):


arn

Joint Probability mass function (jpmf) is


 
pi j = P X = xi , Y = y j , i = 1, 2, ..., m and j = 1, 2, ..., n
Le

and satisfying basic conditions (properties)


i) pij ≥ 0 ∀ i and j
ii) Σj Σi pij = 1
ii) pij = 1
j i
ww

(b) For continuous r.v. (X, Y):


Joint Probability density function (jpdf) is
fXY (x, y) or f (x, y)
and satisfying basic conditions (properties)
i) f (x, y) ≥ 0 ∀ (x, y)
Z∞ Z∞
ii) f (x, y) dx dy = 1
−∞ −∞
Zd Zb
iii) P(a < X < b, c < Y < d) = f (x, y)dx dy
y=c x=a
[Link]
[Link]
104 UNIT II - Two Dimensional Random Variables

2.1.2 Marginal Probability functions

(a) For discrete (X, Y):


(i) Marginal p.m.f. of X:
X
pi∗ = PX (xi ) = P (X = xi ) = pi j
j

(ii) Marginal p.m.f. of Y:


    X
p∗ j = PY y j = P Y = x j = pi j

n
i

g.i
(b) For continuous (X, Y):
(i) Marginal p.d.f. of X:

n
Z∞
fX (x) or f (x) =
eri

f x, y dy and
−∞
ine
Zb
P(a < X < b) = fX (x) dx
a
ng

Note : fX (x) ⇒ It is function of only x and hence Integration is


w.r.t. y.
E

(ii) Marginal p.d.f. of Y:


arn

Z∞
fY (y) or f (y) =

f x, y dx and
−∞
Le

Zb
P(a < Y < b) = fY (y) dy
w.

a
Note : fY (y) ⇒ It is function of only y and hence Integration is
ww

w.r.t. x.

2.1.3 Conditional Probability Functions

(a) For discrete (X, Y):


(i) Conditional p.m.f. of X given Y = y j is
 
  P X = xi , Y = y j pi j
P X = xi /Y = y j =   =
P Y = yj p∗ j
(ii) Conditional p.m.f. of Y given X = xi is
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 105

 
  P X = xi , Y = y j pi j
P Y = y j /X = xi = =
P (X = xi ) pi∗
(iii) For independent X and Y :
pi j = pi∗ p∗ j
and the converse is also true.
(b) For continuous (X, Y):
(i) Conditional p.d.f. of X given Y = y is

 f x, y

n
fX/Y x/y or f x/y =

 and
fY y

g.i
Zb
P a < X < b/Y = y =
 
f x/y dx

n
a

eri
(ii) Conditional p.d.f. of Y given X = x is

fY/X y/x or f y/x =


  f x, y

and
ine
fX (x)
Zd
P (c < Y < d/X = x) =

f y/x dy
ng

c
(iii) For independent X and Y:
E

f (x, y) = fX (x) · fY (y) and the converse is also true.


arn

i.e. f (x, y) = fX (x) · fY (y) ⇒ X and Y are independent.


Le

2.1.4 Distribution Functions


w.

(a) Joint Distribution Function:


FXY (x, y)
ww

or F(x, y) = P(X ≤ x, Y ≤ y)
, for discrete r.v. (X, Y)
 PP


 pi j
 j i
F(x, y) = 

 x y
R R
f (x, y) dx dy , for continuous r.v. (X, Y)





−∞ −∞

(b) Properties of F(x, y):


(i) FX,Y (−∞, y) = FX,Y (x, −∞) = FX,Y (−∞, −∞) = 0
(ii) FX,Y (∞, ∞) = 1
(iii) 0 ≤ FX,Y (x, y) ≤ 1
(iv) FX,Y (x, y) is a non-decreasing function of x and y.
[Link]
[Link]
106 UNIT II - Two Dimensional Random Variables

(v) FX,Y (x, ∞) = FX (x) and FX,Y (∞, y) = FY (y)


(vi) P(a < X < b,c < Y < d) = FX,Y (b, d)+FX,Y (a, c)−FX,Y (a, d)−FX,Y (b, c)
(vii) P(a < X < b, Y = y) = F(b, y) − F(a, y)
(viii) P(X = x, c < Y < d) = F(x, d) − F(x, c)
∂2 F(x, y)
(ix) At points of continuity : f (x, y) =
∂x∂y
(c) Marginal Distribution Function: FX (x) and FY (y)
P  
 P X ≤ xi , Y = y j , for d.r.v. (X, Y)

n


 j

(i) FX (x) = 

Rx Rx R∞

g.i
fX (x) dx = f (x, y) dy dx , for c.r.v. (X, Y)





−∞ −∞ −∞
d

n
and fX (x) = [FX (x)]
dx

eri
P  


 P X = x i , Y ≤ y j , for d.r.v. (X, Y)
i

(ii) FY (y) = 

 y
Ry R∞
ine
R
fY y dy = f (x, y) dx dy , for c.r.v. (X, Y)

 



−∞ −∞ −∞
d 
and fY (y) =

FY (y)
ng

dy
(d) Conditional Distribution Functions:
E

∂ 
(i) f y/x =
 
F y/x
∂y
arn

∂ 
(ii) f x/y =
 
F x/y
∂x
Le

2.1.5 Expectations :
w.

P P  
g xi , y j pi j , for d.r.v. (X, Y)
ww




 j i
(a) E[g(X, Y)] = 

 ∞ ∞
R R
dx dy , for c.r.v. (X, Y)
  



 g x, y f x, y
−∞ −∞
(b) Properties of Expected Values :
Z∞
(i) E g (X) =
 
g (x) fX (x) dx
−∞

, for discrete r.v.


P

 xi pi∗
 i∞

∴ Mean of X = E (X) = 

R
x fX (x) dx , for continuous r.v.




−∞
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 107

R∞
E [h (Y)] =
 
h y fY y dy
−∞

, for discrete r.v.


P


 y j p∗ j
 j
∴ Mean of Y = E (Y) = 

 ∞
R
dy , for continuous r.v.

y f y




 Y
−∞

(ii) E(X + Y) = E(X) + E(Y) and


E (a1 X1 + a2 X2 + · · · + an Xn ) = a1 E (X1 ) + a2 E (X2 ) + · · · + an E (Xn )

n
g.i
(iii) In general, E(XY) , E(X) · E(Y).
If X and Y are independent, then
E(XY) = E(X) · E(Y)

n
eri
Note : However the converse may not be true i.e. E(XY) =
E(X)E(Y) ; X and Y are independent. They are independent
only when f x, y = fX (x) · fY y
 
ine
(c) Conditional Expectations :
(i) For discrete r.v. (X, Y) :
ng

h i P    
E g (X, Y) /Y = y j = g xi , y j · P X = xi /Y = y j
i  pi j
P 
E

= g xi , y j p∗j
i
arn

 P    
E g (X, Y) X = xi = g xi , y j · P Y = y j /X = xi

j
P  p
= g xi , y j pi∗i j
Le

(ii) For continuous r.v. (X, Y):


Z∞
w.

E g(X, Y) /Y = y =
   
g x, y · f x/y dx
ww

−∞
Z∞
and conditional mean is E X/Y = y =
  
x f x/y dx
−∞
R∞
E g(X, Y) /X = x =
   
g x, y · f y/x dy
−∞
Z∞
and conditional mean is E [Y/X = x] =

y f y/x dy
−∞
(iii) If X and Y are independent, then
E(X/Y) = E(X)andE(Y/X) = E(Y).
[Link]
[Link]
108 UNIT II - Two Dimensional Random Variables

2.2 Covariance - Correlation and Regression

2.2.1 Covariance

(a) Cov(X, Y) = E{[X − E(X)][Y − E(Y)]} · · · Definition


(b) Properties:
(i) Cov(X, Y) = E(XY) − E(X)E(Y) · · · useful result

n
(ii) If X and Y are independent, then Cov(X, Y) = 0.

g.i
(iii) If a, b are constants,

n
Cov(aX, bY) = abCov(X, Y)

eri
Cov(X + a, Y + b) = Cov(X, Y)
(iv) Var(X ± Y) = Variance(X) + Variance(Y) ± 2Cov(X, Y)
ine
ng

2.2.2 Correlation

The relationship between any two variables.


E

Used to measure the degree of relationship between any two variables.


arn

[Scatter diagrams omitted. The six panels illustrate: positive linear correlation (e.g., rainfall vs. supply, 0 < r < 1); negative linear correlation (e.g., supply vs. price, −1 < r < 0); perfect positive linear correlation (r = +1); perfect negative linear correlation (r = −1); positive non-linear correlation; and negative non-linear correlation.]

Classification of correlation between X and Y:

  Correlation coefficient : Discrete r.v.s · · · (r1) ; Continuous r.v.s · · · (r2)

  Rank correlation : Non-repeated ranks · · · (r3) ; Repeated ranks · · · (r4)

 
(a) Karl Pearson's correlation coefficient (r or r_xy or ρ_xy) :

    r = Cov(X, Y)/(σ_X σ_Y) = [E(XY) − E(X)E(Y)] / { √(E(X²) − [E(X)]²) · √(E(Y²) − [E(Y)]²) }

    For n pairs of (x, y) values,

    r1 = [ (1/n)Σxy − x̄ ȳ ] / { √((1/n)Σx² − x̄²) · √((1/n)Σy² − ȳ²) }

    (or), by the direct method,

    r1 = [ nΣxy − (Σx)(Σy) ] / √{ [nΣx² − (Σx)²] · [nΣy² − (Σy)²] }

    where σ_X and σ_Y are standard deviations of X and Y.


    For a continuous bivariate distribution,

    r2 = [ ∫∫ xy f(x, y) dx dy − (∫ x f(x) dx)(∫ y f(y) dy) ]
         / { √[ ∫ x² f(x) dx − (∫ x f(x) dx)² ] · √[ ∫ y² f(y) dy − (∫ y f(y) dy)² ] }

    (all integrals taken from −∞ to ∞).

    Rank correlation (non-repeated ranks):

    r3 = 1 − [ 6 / (n(n² − 1)) ] Σ d_i² ,  where d_i = x_i − y_i

    Rank correlation (repeated ranks):

    r4 = 1 − [ 6 / (n(n² − 1)) ] [ Σ d_i²  +  Σ_{∀i} m_i(m_i² − 1)/12 ]

    where d_i = x_i − y_i and m_i = number of times an item is repeated.
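The direct-method formula for r1 can be computed in a few lines of Python; the sample data below (heights of fathers and sons in inches, as used in a later worked example) is included only for illustration:

    from math import sqrt

    def pearson_r(x, y):
        """Karl Pearson's r by the direct method:
        [n*Sxy - Sx*Sy] / sqrt([n*Sxx - Sx**2] * [n*Syy - Sy**2])."""
        n = len(x)
        sx, sy = sum(x), sum(y)
        sxy = sum(a * b for a, b in zip(x, y))
        sxx, syy = sum(a * a for a in x), sum(b * b for b in y)
        return (n * sxy - sx * sy) / sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))

    x = [65, 66, 67, 67, 68, 69, 70, 72]   # fathers' heights (inches)
    y = [67, 68, 65, 68, 72, 72, 69, 71]   # sons' heights (inches)
    print(round(pearson_r(x, y), 4))       # ≈ 0.603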

n
(b) Properties of r:

(i) −1 ≤ r ≤ 1

    r = +1 ⇒ perfect +ve correlation (perfect direct linear relationship between X and Y)
    r = −1 ⇒ perfect −ve correlation (perfect inverse linear relationship between X and Y)
    r = 0  ⇒ no correlation

(ii) Two independent variables are uncorrelated, i.e. r = 0 (the converse need not hold).

(iii) The correlation coefficient is independent of change of origin (a, b) and scale (h, k),
      i.e., if u = (x − a)/h and v = (y − b)/k, then r_xy = r_uv.

2.2.3 Regression
(a) Regression Lines :

(i) Regression Line of Y on X : y − ȳ = b_yx (x − x̄)

    where b_yx = regression coefficient of y on x
              = Cov(X, Y)/Var(X) = Cov(X, Y)/σ_x²
              = r σ_y/σ_x
              = [nΣxy − (Σx)(Σy)] / [nΣx² − (Σx)²]   ... direct method

(ii) Regression Line of X on Y : x − x̄ = b_xy (y − ȳ)

    where b_xy = regression coefficient of x on y
              = Cov(X, Y)/Var(Y) = Cov(X, Y)/σ_y²
              = r σ_x/σ_y
              = [nΣxy − (Σx)(Σy)] / [nΣy² − (Σy)²]   ... direct method
(b) Properties of Regression Lines :

(i) The regression lines intersect at the point (x̄, ȳ).

(ii) Angle between the regression lines :

     Acute angle,  θ  = tan⁻¹ [ ((1 − r²)/|r|) · (σ_x σ_y / (σ_x² + σ_y²)) ]

     Obtuse angle, θ′ = tan⁻¹ [ ((r² − 1)/|r|) · (σ_x σ_y / (σ_x² + σ_y²)) ]

     If r = 0  ⇒ the regression lines are perpendicular.
     If r = ±1 ⇒ the regression lines are parallel (coincide).

(c) Properties of Regression Coefficients :

(i) b_xy ≠ b_yx in general.

(ii) b_xy and b_yx have the same sign, and if one of them is greater than 1
     (numerically), then the other is less than 1 (numerically).

(iii) r² = b_xy b_yx , i.e. r = ± √(b_xy b_yx) , and r has the same sign as b_xy and b_yx.

(iv) Regression coefficients are independent of change of origin but not of scale:
     if u = (x − a)/h and v = (y − b)/k, then b_xy = (h/k) b_uv and b_yx = (k/h) b_vu.
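A short Python sketch of the direct-method regression coefficients and the relation r² = b_yx·b_xy; the toy data set is assumed purely for illustration:

    def regression_coefficients(x, y):
        """Direct method: b_yx = [nSxy - SxSy]/[nSxx - Sx^2],
        b_xy = [nSxy - SxSy]/[nSyy - Sy^2]."""
        n = len(x)
        sx, sy = sum(x), sum(y)
        sxy = sum(a * b for a, b in zip(x, y))
        sxx, syy = sum(a * a for a in x), sum(b * b for b in y)
        num = n * sxy - sx * sy
        return num / (n * sxx - sx ** 2), num / (n * syy - sy ** 2)

    x = [1, 2, 3, 4, 5]               # assumed toy data
    y = [2, 4, 5, 4, 5]
    b_yx, b_xy = regression_coefficients(x, y)
    r = (b_yx * b_xy) ** 0.5          # |r| = sqrt(b_yx * b_xy); sign follows the coefficients
    print(round(b_yx, 2), round(b_xy, 2), round(r, 3))   # 0.6, 1.0, 0.775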

2.3 Transformation of random variables:

[Figure: a point (x, y) in the XY-plane is carried to the point (u, v) in the
UV-plane by the transformation u = f1(x, y), v = f2(x, y).]

        XY-plane                                   UV-plane
  f(x, y) = joint pdf of X & Y = f_XY(x, y)   g(u, v) = joint pdf of U & V = g_UV(u, v)
  f(x) = marginal pdf of X                    g(u) = marginal pdf of U
       = ∫_{−∞}^{∞} f(x, y) dy = f_X(x)            = ∫_{−∞}^{∞} g(u, v) dv = g_U(u)
  f(y) = marginal pdf of Y                    g(v) = marginal pdf of V
       = ∫_{−∞}^{∞} f(x, y) dx = f_Y(y)            = ∫_{−∞}^{∞} g(u, v) du = g_V(v)

Let U = f1(x, y) and V = f2(x, y).
Find x and y in terms of u and v.

Define g(u, v) = f(x, y) |J| ,  where J = ∂(x, y)/∂(u, v) = det [ ∂x/∂u  ∂x/∂v ; ∂y/∂u  ∂y/∂v ]
     (or)  g(u, v) = f(x, y) · (1/|J′|) ,  where J′ = ∂(u, v)/∂(x, y) = det [ ∂u/∂x  ∂u/∂y ; ∂v/∂x  ∂v/∂y ]

Find the range space of u and v.

P.D.F. of U :  g(u) = ∫_{−∞}^{∞} g(u, v) dv

P.D.F. of V :  g(v) = ∫_{−∞}^{∞} g(u, v) du

(a) Joint probability density function of transformed r.v.s:

    Consider a two dimensional random variable (X, Y) having joint probability
    density function f_XY(x, y). Now let (U, V) be another two dimensional random
    variable whose components are defined by U = f1(X, Y) and V = f2(X, Y). Also let
    the inverse transformation be x = g1(u, v), y = g2(u, v), where g1 and g2 are
    continuous and possess continuous partial derivatives. Then the joint p.d.f. of
    the transformed r.v.s (U, V) is given by f_UV(u, v) = f_XY(x, y) |J| , where |J|
    is the modulus of the Jacobian J of (x, y) with respect to (u, v), given by

    J = ∂(x, y)/∂(u, v) = det [ x_u  x_v ; y_u  y_v ]
    Note : On obtaining the joint p.d.f. f_UV(u, v), we can find the marginal
    p.d.f.s of U and V as

    f_U(u) = f(u) = ∫_{−∞}^{∞} f_UV(u, v) dv

    f_V(v) = f(v) = ∫_{−∞}^{∞} f_UV(u, v) du

(b) Theorem : If X and Y are independent continuous r.v.s, then the p.d.f. of
    U = X + Y is given by

    f(u) = ∫_{−∞}^{∞} f_X(v) f_Y(u − v) dv

(c) Working rule for finding f_UV(u, v) given the j.p.d.f. f_XY(x, y):

    (i) Find the joint p.d.f. of (X, Y), if not already given.

    (ii) Consider the new random variables u = f1(x, y), v = f2(x, y) and rearrange
         them to express x and y in terms of u and v, getting x = g1(u, v), y = g2(u, v).

    (iii) Find the Jacobian J = ∂(x, y)/∂(u, v) = det [ x_u  x_v ; y_u  y_v ] and hence
          the value of |J|.

    (iv) Find the j.p.d.f. f_UV(u, v) using the formula f_UV(u, v) = f_XY(x, y) |J| and
         express it in terms of u and v.

    (v) Obtain the range space of U and V by considering the range spaces of X and Y
        and the relations U = f1(x, y) and V = f2(x, y).

    (vi) Write down the expression for f_UV(u, v).

    (vii) If required, find the marginal p.d.f.s of U and V as

          f_U(u) = f(u) = ∫_{−∞}^{∞} f_UV(u, v) dv

          f_V(v) = f(v) = ∫_{−∞}^{∞} f_UV(u, v) du
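The working rule can also be carried out symbolically; the sketch below (Python with SymPy, using the exponential joint p.d.f. e^(−(x+y)) as an assumed example) performs steps (ii)-(vii) for U = X + Y, V = X:

    import sympy as sp

    x, y, u, v = sp.symbols('x y u v', positive=True)
    f_xy = sp.exp(-(x + y))                  # assumed joint p.d.f. of (X, Y), x > 0, y > 0

    # Step (ii): U = X + Y, V = X  =>  x = v, y = u - v
    x_expr, y_expr = v, u - v

    # Step (iii): Jacobian of (x, y) with respect to (u, v)
    J = sp.Matrix([[sp.diff(x_expr, u), sp.diff(x_expr, v)],
                   [sp.diff(y_expr, u), sp.diff(y_expr, v)]]).det()

    # Step (iv): joint p.d.f. of (U, V)
    f_uv = f_xy.subs({x: x_expr, y: y_expr}) * sp.Abs(J)

    # Step (vii): marginal p.d.f. of U; the range space is 0 < v < u (x > 0, y > 0)
    f_u = sp.integrate(f_uv, (v, 0, u))
    print(sp.simplify(f_u))                  # u*exp(-u): a Gamma(2, 1) density, as expected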

2.4 Central Limit Theorem

This is one of the most widely used theorems in probability: it gives the
approximate distribution of the sum (or average) of a large number of random
variables, which tends to a Normal distribution.

(a) Liapounoff's form (of the central limit theorem):

    If X1, X2, · · · , Xn, · · · is a sequence of independent random variables with
    E(Xi) = µi and Var(Xi) = σi², i = 1, 2, · · · , n, · · · , and if Sn = X1 + X2 + · · · + Xn,
    then under certain general conditions Sn follows a Normal distribution with
    mean µ = Σ_{i=1}^{n} µi and variance σ² = Σ_{i=1}^{n} σi² as n tends to infinity.
    Thus Sn ∼ N(µ, σ²) as n → ∞.
(b) Lindberg-Levy's form (of the central limit theorem):

    If X1, X2, · · · , Xn, · · · is a sequence of independent, identically distributed
    random variables with E(Xi) = µ and Var(Xi) = σ², i = 1, 2, · · · , n, · · · , and if
    Sn = X1 + X2 + · · · + Xn, then under certain general conditions Sn follows a
    Normal distribution with mean nµ and variance nσ² as n tends to infinity.
    Thus Sn ∼ N(nµ, nσ²) as n → ∞.

(c) Corollary :

    If X̄ = (1/n)(X1 + X2 + · · · + Xn), then

    E(X̄) = (1/n) E(X1 + X2 + ... + Xn) = (1/n) nµ = µ  and

    Var(X̄) = (1/n²) Var(X1 + X2 + ... + Xn) = (1/n²) nσ² = σ²/n

    ∴ X̄ follows a Normal distribution with mean µ and variance σ²/n as n tends
    to infinity, i.e., X̄ ∼ N(µ, σ²/n) as n → ∞.

(d) Uses of central limit theorem


(1) It states that almost all theoretical distributions converge to the
Normal distribution as n → ∞.
(2) It provides a simple method for calculating the approximate
probabilities of sum of a large number of independent random
variables.
(3) It is very useful in statistical surveys. For a large sample size, it
helps provide fairly accurate results.
(4) It also implies that empirical frequencies of many natural
‘populations’ (where we consider all possible observations)
exhibit a bell shaped (or normal) curve.
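A quick Monte Carlo sketch of use (2): the sample mean of n i.i.d. Exponential variables is compared with the Normal approximation N(µ, σ²/n) given by the corollary above (the parameter values are assumed for illustration only):

    import random
    from math import erf, sqrt

    mu = sigma = 10.0            # Exponential with mean 10 also has sigma = 10 (assumed)
    n, trials = 36, 50_000
    threshold = 10.5

    # Empirical estimate of P(sample mean >= threshold)
    count = sum(
        1 for _ in range(trials)
        if sum(random.expovariate(1 / mu) for _ in range(n)) / n >= threshold
    )
    empirical = count / trials

    # CLT approximation: Xbar ~ N(mu, sigma^2/n), so P(Xbar >= t) = 1 - Phi((t - mu)/(sigma/sqrt(n)))
    z = (threshold - mu) / (sigma / sqrt(n))
    clt_approx = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    print(round(empirical, 4), round(clt_approx, 4))   # both close to 0.38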
(e) Note :


(1) Students are advised to refer quickly the section on ‘Normal


distribution’ described earlier in chapter on ‘Standard
Distributions’ for solving the problems.
(2) In statistical surveys ‘population’ refers to the collection of N
observations under study. However, practically data is
collected in the form of a limited number of observations from
the population which forms the sample. If the population has
a mean µ and variance σ2 , by corollary to Lindberg-Levy’s
form of C.L.T., the mean X of a random sample of size n

follows a normal distribution with mean µ and variance σ²/n if n → ∞, i.e., if n
is very large.

n
2.4.1 Part - A[Problems of Marginal, conditional distributions(Discrete r.v.)]

1. X and Y are two random variables having the joint density function
   f(x, y) = (1/27)(x + 2y), where x and y assume the integer values 0, 1 and 2.
   Find the marginal probability distribution of X. [N/D 2006, IT]

   Solution : Given f(x, y) = (1/27)(x + 2y) ; x = 0, 1, 2; y = 0, 1, 2.

f (x, y) X PY (y)
0 1 2
arn

0 0 1/27 2/27 3/27


Y 1 2/27 3/27 4/27 9/27
2 4/27 5/27 6/27 15/27
Le

PX (x) 6/27 9/27 12/27 1


w.

2. Three balls are drawn at random without replacement from a box


containing 2 white, 3 red and 4 black balls. If X denotes the number of
ww

white balls drawn and Y denotes the number of red balls drawn, find
the probability distribution of (X, Y).
Solution :

X(White) Y(Red)
0 1 2 3
0 1/21 3/14 1/7 1/84
1 1/7 2/7 1/14 0
2 1/21 1/28 0 0

3. The joint distribution of (X, Y) where X and Y are discrete is given in


the following table
[Link]
[Link]
116 UNIT II - Two Dimensional Random Variables

X Y
0 1 2
0 0.1 0.04 0.06
1 0.2 0.08 0.12
2 0.2 0.08 0.12

Verify whether X, Y are independent. [M/J 2009, ECE]

n
2.4.2 Part - A [Problems of Marginal, conditional distributions(Cont r.v.)] :

g.i
1. Define joint probability distribution function of two random variables
X and Y and state its properties. [M/J 2007, ECE]

n
Solution : Joint probability density function of two random variables
X and Y :
eri
We define the probability of the joint event X = x, Y = y, which is a
ine
function of the numbers x and y, by a joint probability distribution
function and denote it by the symbol FXY x, y . Hence FXY x, y =
 

p X ≤ x, Y ≤ y
ng

Properties of the joint distribution :


(i) FX,Y (−∞, y) = FX,Y (x, −∞) = FX,Y (−∞, −∞) = 0
E

(ii) FX,Y (∞, ∞) = 1


arn

(iii) 0 ≤ FX,Y (x, y) ≤ 1


(iv) FX,Y (x, y) is a non-decreasing function of x and y.
Le

(v) FX,Y (x, ∞) = FX (x) and FX,Y (∞, y) = FY (y)


(vi)
w.

P(a < X < b, c < Y < d) = FX,Y (b, d) + FX,Y (a, c) − FX,Y (a, d) − FX,Y (b, c)
(vii) P(a < X < b, Y = y) = F(b, y) − F(a, y)
ww

(viii) P(X = x, c < Y < d) = F(x, d) − F(x, c)


∂2 F(x, y)
(ix) At points of continuity : f (x, y) =
∂x∂y
For a given function to be valid joint distribution function of two
dimensional r.v.s X and Y, it must satisfy the properties (i),(ii) and (vi).

2. The joint probability density function pf a bivariate r.v. (X, Y) is


k(x + y), 0 < x < 2, 0 < y < 2
(
fXY x, y =

find k. [N/D 2006, ECE]
0, otherwise
Solution: By the property of joint p.d.f. we have
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 117

" Z2 Z2
f x, y dxdy = 1 ⇒ k x + y dxdy = 1
 

R 0 0
Z2 " #x=2 2
x2
Z
+ xy dy = 1 ⇒ k 2 + 2y dy = 1
 
k
2 x=0
0 0
1
k=
8
3. The joint p.d.f. of 2 random variables X and Y is f (x, y) = cx(x − y), 0 <

n
x < 2, −x < y < x. Find c. [M/J 2007, CSE]

g.i
Solution : By the property of joint probability density function, we
have

n
Z∞ Z∞ Z2 Zx

eri
f (x, y)dxdy = 1 ⇒ cx x − y dydx = 1


−∞ −∞ 0 −x
ine
1
c=
8
ng

4. Find k if the joint probability density function of a bivariate random


 k (1 − x) 1 − y , 0 < x, y < 1
( 
variable (X, Y) is given by f x, y
E

0, otherwise
Solution : By the property of joint p.d.f. we have
arn

" Z1 Z1
f x, y dxdy = 1 ⇒ k (1 − x) 1 − y dxdy = 1
 
Le

R 0 0
Z2 #x=1 Z1
x2 y
"
x2 y
w.

1

k x − − xy + dy = 1 ⇒ k − y + dy = 1
2 2 x=0 2 2
0 0
ww

k=4

x + y, 0 < x < 1, 0 < y < 1


(
5. If X and Y have joint p.d.f. f x, y =

.
0, otherwise
Check whether X and Y are independent. [M/J 06, ECE]
Solution : If X and Y are independent then f (x, y) = f (x)
The marginal density function of X is

R∞ R1
f (x) = f (x, y)dy = (x + y)dy = x + 1
2
−∞ 0

The marginal density function of Y is


[Link]
[Link]
118 UNIT II - Two Dimensional Random Variables

R∞ R1
f (y) = f (x, y)dx = (x + y)dx = y + 1
2
−∞ 0
   
∴ f (x) f y = x + 12 y + 12 , f x, y


Hence X and Y are not independent.


1 − λe−λ(x+y) , if x > 0, y > 0
(
6. For λ > 0, let F x, y =

. Check whether
0, otherwise
F can be the joint probability distribution function of two random

n
variables X and Y. [N/D 2007, CSE]
Solution :

g.i
1 − λe−λ(x+y) , if x > 0, y > 0
(
F x, y =

0, otherwise

n
∂ λ2 e−λ(x+y) , if x > 0, y > 0
(
F x, y =
eri


∂y 0, otherwise
∂2 −λ3 e−λ(x+y) , if x > 0, y > 0
(
ine
F x, y =


∂x∂y 0, otherwise
∂2
F x, y < 0, it cannot be a joint p.d.f.,

Since ∂x∂y
ng

∵ F is not a joint probability distribution function.


E

7. Let X and( Y be continuous random variables with joint p.d.f.


2xy + (3/2) y , 0 < x < 1, 0 < y < 1
2
arn

f x, y = . Find P(X + Y < 1).



0, otherwise

8. The joint probability density function of the r.v. (X, Y) is given by


Le

f x, y = kxye−(x +y ) , x > 0, y > 0. Find the value of k and prove that X


 2 2

and Y are independent. [N/D 2006, IT]


w.

Solution : By the property of the joint p.d.f., we have,


Z Z Z∞ Z∞
ww

kxye−(x +y ) dxdy = 1 ⇒ k xye−(x +y ) dxdy = 1


2 2 2 2

x>0 y>0 0 0
Z∞
 ∞ 
Z " #
du

2  2
⇒k ye−y  xe−x dx dy = 1, take x2 = u ⇒ xdx =

  2
0 0
Z∞ Z∞ " #
k 2 dv
⇒ ye−y e−t dtdy = 1, take y2 = v ⇒ ydy =
2 2
0 0
Z∞
k1
⇒ e−v dv = 1 ⇒ k = 4
22
0
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 119

The marginal density of X is given by


R∞ R∞
f x, y dy = 4x ye−(x +y ) dy = 4xe−x 12 = 2xe−x , x > 0
2 2 2 2
fX (x) =

0 0

|||ly the marginal density of Y is given by


 R∞ R∞
y = f x, y dx = 4y xe−(x +y ) dx = 4ye−y 12 = 2ye−y , y > 0.
 2 2 2 2
fY
0 0

Consider f (x). f (y) = 4xye−(x +y ) = f x, y


2 2 
⇒ X and Y are independent random variables.

n
g.i
9. Find the marginal density function of X and Y if
f x, y = 2 2x + 5y /5, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1[N/D 06, ECE][M/J 06, IT]
 
Solution :

n
The marginal density function of X is given by

fX (x) = f (x) =
R∞
eri
f (x, y)dy =
R1
2
5 (2x + 5y)dy = 4x+5
5 .
ine
−∞ 0

The marginal density function of Y is given by


ng

 R∞ R1 2+10y
y = f y = f (x, y)dx = 52 (2x + 5y)dx =

fY 5
−∞ 0
E

10. The joint probability density function of the random variable (X, Y)
is given by f x, y = Kxye−(x +y ) , x > 0, y > 0. Find the value of K,
 2 2
arn

marginal density function X and Y. Also check X and Y independent?.


[N/D 07, ECE] [N/D 07, IT]
Le

Solution : Here the range space is the entire first quadrant of the
xy-plane.
By the property of the j.p.d., we have
w.

Z Z
Kxye−(x +y ) dxdy = 1
2 2
ww

x>0 y>0
Z∞ Z∞
2 2
⇒K ye−y .xe−x dxdy = 1
0 0
∞ Z∞
put x2 = t ⇒ 2xdx = dt
Z (
K −y2
⇒ ye .e dtdy = 1,
−t
2 put y2 = v ⇒ 2ydy = dv
0 0
Z∞
K1
⇒ e−v dv = 1
22
0
⇒k=4
[Link]
[Link]
120 UNIT II - Two Dimensional Random Variables

The marginal density of X is given by


Z∞ Z∞
2 2
fX (x) = f (x) = f (x, y)dy = 4xe−x ye−y dy
0 0
1 2 2
= 4xe−x= 2xe−x , x > 0
2
|||ly The marginal density of Y is given by
Z∞ Z∞
2 2
fY y = f y = f (x, y)dx = 4ye−y xe−x dx
 

n
g.i
0 0
1 2 2
= 4ye−y= 2ye−y , y > 0
2

n
∴ f (x) f y = 4xye−(x +y ) = f x, y
 2 2 
⇒ X and Y are independent R.V.s.
11. Let X and Y be random variables with joint density
eri
function f x, y

=
ine
( XY
4xy, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1
. Find E(XY). [M/J 07, ECE]
0, otherwise
ng

Solution :
Z ∞Z ∞
E [XY] = xy f (x, y)dxdy
E

Z−∞ −∞
arn

1Z 1
=

xy 4xy dxdy
0 0
Z 1Z 1
=4
Le

x2 y2 dxdy
0 0
4
=
w.

( xy variable X and Y have joint probability density function


12. Two random
ww

, 0 < x < 4, 1 < y < 5


f x, y = 96

. Find E[XY]. [N/D 06, CSE]
0, otherwise
Solution :
Z ∞Z ∞
E [XY] = xy f (x, y)dxdy
Z−∞ −∞
5Z 4  xy 
= xy dxdy
0 0 96
Z 5Z 4
1
= x2 y2 dxdy
96 0 0
248
=
27
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 121

13. If the function f (x, y) = c(1 − x)(1 − y), 0 < x < 1, 0 < y < 1, y < 1 to be
a density function, find the value of c.
Solution : Given f (x, y) = c(1 − x)(1 − y), 0 < x < 1, 0 < y < 1, y < 1
Given is a density function. We have

Z∞ Z∞ Z1 Z1
f x, y dxdy = 1 ⇒ c (1 − x) 1 − y dxdy = 1
 

−∞ −∞ 0 0
Z1 !1 !1 !1
x2 x2 y2
1 − y dy = 1 ⇒ c x − =1

⇒c x− y−

n
2 0
2 0
2 0

g.i
0
1
⇒c=
4

n
14. Is the function defined as follows a density function?


 0, erix<2
ine
f (x) =  (3 + 2x) , 2 ≤ x ≤ 4

 1
18
x>4

 0,

ng

R∞
Solution : Condition for p.d.f. is f (x) dx = 1
−∞
E

R2 R4 R∞
L.H.S. = (0)dx + 1
18
(3 + 2x) dx + (0)dx = 1
arn

−∞ 2 4
Hence the given function is density function.
15. Can the joint distributions of two random variables X and Y be got if
Le

their marginal distributions are known? [M/J06, ECE]


Solution : We know that fXY x, y = fX (x) fY y
 
w.

Two joint distributed random variables X and Y are statistically


independent of each other, if and only if the joint probability density
ww

function is equals the product of the tow marginal probability density


functions.
16. Define marginal and conditional probabilities of a bivariate probability
distribution. [M/J 2006, CSE]
Solution :
Marginal probability : A probability of only one event that takes place
is called a marginal probability.
Conditional probability : The conditional probability of A given B is
P(A∩B)
P (A/B) = P(B) i f P (B) , 0

and it is undefined otherwise.


[Link]
[Link]
122 UNIT II - Two Dimensional Random Variables

2.4.3 Part - A [Covariance - Correlation and Regression]:

1. Let (X, Y) be a two dimensional random variable. Define covariance


of (X, Y). If X and Y are independent, what will be the covariance of
(X, Y). [M/J 2007, CSE]
Solution : Cov(X, Y) = E[(X − E(X)(Y − E(Y)] = E(XY) − E(X)E(Y)
If X and Y are independent, then cov(X, Y) = 0.
2. If Y = −2x + 3, find the cov(X, Y).
Solution : Given Y = −2x + 3.
W.K.T. COV(X, Y) = E[XY] − E[X]E[Y]

n
= E[x(−2x + 3)] − E[x]E[−2x + 3]

g.i
= E[−2x2 + 3x] − E[x][−2E(x) + 3]
= −2E(x2 ) + 3E(x) + 2[E(x)]2 − 3E(x)

n
= −2E(x2 ) + 2[E(x)]2
= −2Var(x)
eri
ine
3. Prove that the correlation coefficient ρxy takes value in the range -1 to
1. [M/J 2007, IT]
Solution : Let X and Y be two random variables with variances σ2x , σ2y
ng

respectively.
We know that variance of any random variable = 0.
E

X Y
 
∴ Var + =0
σX σY
arn

X Y X Y
     
⇒ Var + Var + 2Cov + =0
σX σY σX σY
(By the property of variance)
Le

1 1 2
⇒ 2 Var (X) + 2 Var (Y) + σY Cov (X + Y) = 0
σX σ σX
w.

( Y
∵ Var (aX) = a2 Var (X) ,
)

Cov (aX, b) = abCov (X, Y)


ww

2
⇒ 1 + 1 + σY Cov (X + Y) = 0
σX
⇒ 2 + 2ρXY = 0
⇒ 1 + ρXY = 0
⇒ ρXY = −1
X Y
 
Similarly, Var − =0
σX σY
⇒ 2 − 2ρXY = 0
⇒ 1 − ρXY = 0
⇒ ρXY = 1
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 123

Thus −1 = ρXY and ρXY = 1 ⇒ −1 = ρXY = 1


So, correlation coefficient always lies between -1 and +1.
4. If X and Y are random variables such that Y = aX + b where a and b
are real constants, show that the correlation coefficient r(X, Y) between
them has magnitude one. [M/J 06, ECE]
Solution : Given : Y = aX + b
 cov (X, Y) E[XY] − E[X]E[Y]
r x, y = =
σx σ y σx σ y
E[X(aX + b)] − E[X]E[aX + b]
=

n
σx σ y

g.i
E[aX2 + bX] − E[X][aE[X] + b]
=
σx σ y

n
aE[X2 ] + bE[X] − a[E[X]]2 − bE[X]
=

eri
σx σ y
h i
2 2
a E[X ] − [E[X]] aσ2x aσx
= = =
ine
σx σ y σx σ y σy
h i h i
σ y = E Y − [E [Y]] = E (aX + b) − [E (aX + b)]2
2 2 2 2
ng

h i
= E a2 X2 + 2abX + b2 − [aE [X] + b]2
h i n o
= a E X + 2abE [X] + b − [aE [X]] + b + 2abE [X]
2 2 2 2 2
E

h i
= a2 E X2 + 2abE [X] + b2 − a2 [E [X]]2 − b2 − 2abE [X]
arn

h i n h i o
2 2 2 2
= a E X − a [E [X]] = a E X − [E [X]] = a2 σ2x
2 2 2

∴ σ y = ±aσx
Le

aσx
r (X, Y) = = ±1 ⇒ |r (X, Y)| = 1
±aσx
w.

5. If X and Y have a bivariate normal distribution and U = X + Y and


V = X − Y, find an expression for the correlation coefficient of U and
ww

V.
6. The two regression lines are 4x − 5y + 33 = 0 and 20x − 9y = 107. Find
the means of x and y. Also find the value of r. [M/J 2006, CSE]
Solution : Given
4x − 5y + 33 = 0 (A)
20x − 9y = 107 (B)
Since both the lines of regression passes through the mean values

x and y, the point x, y must satisfy the two given regression lines.
4x − 5y = −33 (1)
20x − 9y = 107 (2)
[Link]
[Link]
124 UNIT II - Two Dimensional Random Variables

By simplifying (1) and (2),

x = 13, y = 17

Choose equation (A) is the equation of the line of regression of y on x.

(A) ⇒ 5y = 4x + 33 ⇒ y = 54 x + 33
5 ⇒ b yx = 4
5

Choose equation (B) is the equation of the line of regression of x on y.

n
(B) ⇒ 20x = 9y + 107 ⇒ x = 9
20 y + 107
20 ⇒ bxy = 9
20

g.i
We know that r2 = bxy bxy = 9 4
20 5 = 9
25 ⇒r= 3
5 = 0.6

n
eri
ine
2.4.4 Part - A[Problems of Central limit theorem(C.L.T.)]

1. State central limit theorem. [N/D 06, CSE][M/J 06, IT] [M/J 07, IT]
ng

Solution : Let X1 , X2 , · · · be independent, identically distributed


random variables each having mean µ and finite and non-zero
E

Sn −nµ
variance σ2 . Then lim P √
σ n
≤ x = φ (x) , −∞ < x < ∞.
n→∞
arn

2. State the importance of central limit theorem. [M/J 2006, ECE]


Solution : The practical usefulness of the central limit theorem does
not reside so much in the exactness of the Gaussian distribution for
Le

N → ∞ because the variance of 1/N becomes infinite from


σx2i > > 0, i = 1, 2, · · · , N
w.

Usefulness derives more the fact that YN = for finite N may have a
ww

distribution that is closely approximated as Gaussian.


However, the approximation can be every in accurate in the tail
regions away from the mean, even for large values of N. Of course,
the approximation is made more accurate by increasing N.

3. Write the applications of central limit theorem.[N/D 06, ECE]


Solution :
(i) Central limit theorem provides on simple method for computing
approximate probabilities of sums of independent random variables.
(ii) It also gives us the wonderful fact that the empirical frequencies of
so man natural populations exhibit a bell shaped curve (that is normal
curve)
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 125

4. The lifetime of a certain brand of an electric bulb may be considered a


R.V. with mean 1200h and standard deviation 250h. Find the
probability, using central limit theorem, that the average lifetime of 60
bulbs exceeds 1250h. [N/D 2007, IT]
Solution :
Let Xi represents the lifetime of the bulb.
E(Xi ) = 1200h and Var(Xi ) = 2502
Let X denote the mean life time of 60 bulbs.
By corollary of Linderg-Levy form of C.L.T.,

n
   
σ
X̄ follows N(µ, σ/√n) ⇒ X̄ follows N(1200, 250/√60)

P(X̄ > 1250) = P( (X̄ − 1200)/(250/√60) > (1250 − 1200)/(250/√60) ) = P(Z > 1.55)

(here Z = (X̄ − E(Xi))/(σ/√n), the standard Normal variate)

∴ P(X̄ > 1250) = P(Z > 1.55) = P(0 < Z < ∞) − P(0 < Z < 1.55)
              = 0.5 − 0.4394 = 0.0606 (from the table of areas under the normal curve)
0.5 − 0.4394 = 0.0606 (from the table of areas under normal curve)
5. The lifetime of a TV tube (in years) is an Exponential random variable
ng

with mean 10. What is the probability that the average lifetime of a
random sample of 36 TV tubes is atleast 10.5?[N/D 07,CSE]
E

Solution : λ = 1/10.
arn

E (Xi ) = σxi = λ1 = 10, Xi = lifetimeof


 i th TV tube, 1 ≤ i ≤ 36
 
P X ≥ 10.5 = P X−10 √ ≥ 0.30 ≈ 0.3821 by central limit theorem.
10/ 36
Le

2.4.5 Part - B[Problems of Marginal, conditional distributions(Discrete r.v.)]


w.

Example 2.1 Two coins are tossed. X denotes number of heads and Y denotes
ww

number of tails. Find


(a) Joint p.m.f. (b) Marginal p.m.f. of (X, Y)
(c) P(X ≤ 1, Y = 1) (d) P(Y ≤ 1)
(e) FX (1) = F(X = 1) (f) FY (2) = F(Y = 2)
(g) Conditional probability of X given Y = 1
(h) Conditional probability of Y given X = 2
(i) Are X and Y independent?

Solution : W.K.T. by tossing two coins,


[Link]
[Link]
126 UNIT II - Two Dimensional Random Variables

The sample space is S = {HH, HT, TH, TT}, n(S) = 4


Let the r.v. denotes number of heads is X takes the values 2,1,1,0
∴ X = {0, 1, 2}
Let the r.v. denotes number of tails is Y takes the values 0,1,1,2
∴ Y = {0, 1, 2}
(a) Joint P.M.F. :
X
0 1 2

n
1
0 P(X = 0, Y = 0) = 0 P(X = 1, Y = 0) = 0 P(X = 2, Y = 0) =

g.i
4
2
Y 1 P(X = 0, Y = 1) = 0 P(X = 1, Y = 1) = P(X = 2, Y = 1) = 0
4

n
1
2 P(X = 0, Y = 2) = P(X = 1, Y = 2) = 0 P(X = 2, Y = 2) = 0
4
(b) Marginal P.M.F. of X&Y :
eri
ine
X Mar. PMF of Y :
0 2 1 PY (y) = P∗ j
1 1
0 0 0 P(Y = 0) =
ng

4 4
2 2
Y 1 0 0 P(Y = 1) =
4 4
E

1 1
2 0 0 P(Y = 2) =
arn

4 XX  4 
Mar. PMF of X : P(X = 0) P(X = 1) P(X = 2) P X = xi , Y = y j
∀i ∀j
1 2 1
Le

PX (x) = Pi∗ = = = =1
4 4 4
Marginal P.M.F. of X Marginal P.M.F. of Y
w.

j=2 i=2
1 X   1 X
P(X = 0) = = P X = 0, Y = y j P(Y = 0) = = P (X = xi , Y = 0)
ww

4 4
j=0 i=0
j=2 i=2
2 X   2 X
P(X = 1) = = P X = 1, Y = y j P(Y = 1) = = P (X = xi , Y = 1)
4 4
j=0 i=0
j=2 i=2
1 X   1 X
P(X = 2) = = P X = 2, Y = y j P(Y = 2) = = P (X = xi , Y = 2)
4 4
j=0 i=0

(c) P(X ≤ 1, Y = 1) = P(X = {0, 1}, Y = 1)


= P(X = 0, Y = 1) + P(X = 1, Y = 1)
2 2
=0+ =
4 4
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 127

(d) P(Y ≤ 1) = P(Y = {0, 1})


= P(Y = 0) + P(Y = 1)
1 2 3
= + =
4 4 4
(e) FX(1) = FX(x = 1)
    = P(X ≤ 1)
    = P(X = 0, 1)
    = P(X = 0) + P(X = 1)
    = 1/4 + 2/4
    = 3/4
(f) FY(2) = FY(y = 2)
    = P(Y ≤ 2)
    = P(Y = 0, 1, 2)
    = P(Y = 0) + P(Y = 1) + P(Y = 2)
    = 1/4 + 2/4 + 1/4
    = 1
ng

(g) Conditional probability of X given Y = 1:


E

P(X = 0/Y = 1) P(X = 1/Y =1) P(X = 2/Y = 1)


arn

2
P(X = 0, Y = 1) 0 P(X = 1, Y = 1) 4 P(X = 2, Y = 1) 0
=   =0 =   =1 =   =0
P(Y = 1) 2 P(Y = 1) 2 P(Y = 1) 2
4 4 4
Le

(h) Conditional probability of Y given X = 2:


P(Y = 0/X = 2) P(Y = 1/X = 2) P(Y = 2/X = 2)
w.

P(Y = 0, X = 2) P(Y = 1, X = 2) P(Y = 2, X = 2)


P(X = 2) P(X = 2) P(X = 2)
ww

(or)   (or) (or)


1
P(X = 2, Y = 0) 4 P(X = 2, Y = 1) 0 P(X = 2, Y = 2) 0
=   =1 =   =0 =   =0
P(X = 2) 2 P(X = 2) 1 P(X = 2) 1
4 4 4
(i) Independence or dependence of X and Y:

    If P_ij = P_i* · P_*j for all i, j ⇒ X & Y are independent.
    If P_ij ≠ P_i* · P_*j for at least one (i, j) ⇒ X & Y are not independent.
    i.e.,
    Independent     : P(X = x_i, Y = y_j) = P(X = x_i) · P(Y = y_j), ∀ i, j.
    Not independent : P(X = x_i, Y = y_j) ≠ P(X = x_i) · P(Y = y_j), for some i, j.

For X = 0, Y = 0:
Here P(X = 0, Y = 0) = 0
But P(X = 0) · P(Y = 0) = (1/4) · (1/4) = 1/16 ≠ P(X = 0, Y = 0)
∴ X and Y are not independent.
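The joint p.m.f. of this example can also be reproduced by brute-force enumeration of the sample space; a small Python sketch (illustrative only):

    from itertools import product
    from collections import Counter

    # Enumerate the sample space of two fair coin tosses.
    outcomes = list(product("HT", repeat=2))
    joint = Counter((o.count("H"), o.count("T")) for o in outcomes)  # (X, Y) = (#heads, #tails)

    n = len(outcomes)
    pmf = {xy: c / n for xy, c in joint.items()}
    print(pmf)                                  # {(2, 0): 0.25, (1, 1): 0.5, (0, 2): 0.25}

    pX = {x: sum(p for (a, _), p in pmf.items() if a == x) for x in (0, 1, 2)}
    pY = {y: sum(p for (_, b), p in pmf.items() if b == y) for y in (0, 1, 2)}
    print(pmf.get((0, 0), 0), pX[0] * pY[0])    # 0 vs 0.0625: X and Y are not independent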

Example 2.2 A two dimensional random variable has (X, Y) as a joint p.m.f.
P(x, y) = k(2x + 3y), x = 0, 1, 2; y = 1, 2, 3. Find
(a) Cond. prob. of X given Y. (b) Prob. dist. of X + Y

n
(c) P(X > Y) (d) P[Max(X, Y) = 3]

g.i
(e) Prob. dist. of Z = Min(X, Y).

n
Solution : Given discrete random variables X and Y with joint pmf as

eri
P(x, y) = k(2x + 3y), x = 0, 1, 2; y = 1, 2, 3.
which contains an unknown k.
ine
The joint pmf is
X
0 1 2
ng

1 3k 5k 7k
Y 2 6k 8k 10k
E

3 9k 11k 13k
arn

To find k, use
XX
P(X = x, Y = y) = 1
∀x ∀y
Le

P(X ≤ 2, Y ≤ 3) = 1
3k + 5k + 7k + 6k + 8k + 10k + 9k + 11k + 13k = 1
w.

72k = 1
1
∴k=
ww

72
X Mar. PMF of Y :
0 1 2 PY (y) = P∗ j
3 5 7 15
1 P(Y = 1) =
72 72 72 72
6 8 10 24
Y 2 P(Y = 2) =
72 72 72 72
9 11 13 33
3 P(Y = 3) =
72 72 72 XX  72 
Mar. PMF of X : P(X = 0) P(X = 1) P(X = 2) P X = xi , Y = y j
∀i ∀j
18 24 30
PX (x) = Pi∗ = = = =1
72 72 72
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 129

(a) Marginal p.m.f. of X, Y:

Marginal P.M.F. of X Marginal P.M.F. of Y


18 15
P(X = 0) = P(Y = 1) =
72 72
24 24
P(X = 1) = P(Y = 2) =
72 72
30 33
P(X = 2) = P(Y = 3) =
72 72
(b) Conditional probability of X given Y = 3:

n
P(X = 0/Y = 3) P(X = 1/Y = 3) P(X = 2/Y = 3)

g.i
9 11 13
P(X = 0, Y = 3) 72 9 P(X = 1, Y = 3) 72 11 P(X = 2, Y = 3) 72 13
= = = = = =
P(Y = 3) 33 33 P(Y = 3) 33 33 P(Y = 3) 33 33
72 72 72

n
(c) Prob. dist. of X + Y:

X + Y P(X+Y) eri
ine
3
1 P(X = 0, Y = 1) =
72
6 5 11
P(X = 0, Y = 2) + P(X = 1, Y = 1) = + =
ng

2
72 72 72
9 8 7 24
3 P(X = 0, Y = 3)+P(X = 1, Y = 2)+P(X = 2, Y = 1) = + + =
E

72 72 72 72
11 10 21
4 P(X = 1, Y = 3) + P(X = 2, Y = 2) = + =
arn

72 72 72
13
5 P(X = 2, Y = 3) =
72
Le

7
(d) P(X > Y) = P(X = 2, Y = 1) =
72
(e) P[Max(X, Y) = 3]:
w.

P[Max(X, Y) = 3] = P(X = 0, Y = 3) + P(X = 1, Y = 3) + P(X = 2, Y = 3)


ww

9 11 13
= + +
72 72 72
33
=
72
(f) P[Z = Min(X, Y)]:
Z = Min(X, Y) P(Z)
3 6 9 18
0 P(0, 1) + P(0, 2) + P(0, 3) = + + =
72 72 72 72
5 8 11 7 31
1 P(1, 1)+P(1, 2)+P(1, 3)+P(2, 1) = + + + =
72 72 72 72 72
10 13 23
2 P(2, 2) + P(2, 3) = + =
72 72 72
[Link]
[Link]
130 UNIT II - Two Dimensional Random Variables

x + 2y
Example 2.3 (X, Y) as a joint p.m.f. is given by , x = 0, 1, 2; y = 0, 1, 2.
27
Find
(a) Cond. prob. of Y given X = 1 (b) Cond. prob. of X given Y = 1
(c) Are X and Y independent?
 
Solution : (a) {1/9, 1/3, 5/9} (b) {2/9, 3/9, 4/9} (c) not independent

Example 2.4 Three balls are drawn from a box in random containing 3 red, 2

n
white and 4 black. If X denotes number of white balls drawn and Y denotes

g.i
number of red balls drawn. Find joint probability distribution P(X,Y).

n
Solution : Given that a box contains 3 red, 2 white and 4 black.

eri
X = denotes the number of white balls
Y = denotes the number of red balls
ine
RX = {0, 1, 2}
RY = {0, 1, 2, 3}
ng

The joint probability distribution P(X, Y) :


Y
E

0 1 2 3
arn

4C3 3C1 · 4C2 3C2 · 4C1 3C3


0
9C3 9C3 9C3 9C3
2C1 · 4C2 2C1 · 3C1 · 6C1 2C1 · 3C2 0
X 1
Le

9C3 9C3 9C3 9C3


2C2 · 4C1 2C2 · 4C1 0 0
2
9C3 9C3 9C3 9C3
w.

i.e.,
ww

Y Mar. PMF of Y
0 1 2 3 PX (x) = Pi∗
4 18 12 1 35
0 P(X = 0) =
84 84 84 84 84
12 24 6 42
X 1 0 P(X = 1) =
84 84 84 84
4 3 7
2 0 0 P(X = 2) =
84 84 XX 84
Mar. PMF of X : P(X = 0) P(X = 1) P(X = 2) P(X = 3) P(i, j)
∀i ∀j
20 45 18 1
PY (y) = P∗j = = = = =1
84 84 84 84
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 131

2.4.6 Anna University Questions

1. The joint probability mass function of (X, Y) is given by


P(x, y) = K(2x + 3y), x = 0, 1, 2; y = 1, 2, 3. Find all the marginal and
conditional probability distributions.[N/D 07, CSE]
Sol: Marginal distributions of X :
P(X = 0) = 18/72, P(X = 1) = 24/72, P(X = 2) = 30/72
Marginal distributions of Y :
P(Y = 1) = 15/72, P(Y = 2) = 24/72, P(Y = 3) = 33/72
Conditional distribution of X given Y :

n
P(X = 0/Y = 1) = 1/5, P(X = 1/Y = 1) = 1/3,

g.i
P(X = 2/Y = 1) = 7/15, · · ·
Conditional distribution of Y given X :
P(Y = 1/X = 0) = 1/6, P(Y = 2/X = 0) = 1/3,

n
P(Y = 3/X = 0) = 1/2, · · ·

eri
2. Let the joint probability distribution of X and Y be given by
ine
X
-1 0 1
ng

-1 1/6 1/3 1/6


Y 0 0 0 0
E

1 1/6 0 1/6
arn

Show that their covariance is zero even though the two random
variables are not independent.
Le

3. The joint probability mass function of X and Y is


w.

0 1 2
Y
ww

X
0 0.1 0.04 0.02
1 0.08 0.20 0.06
2 0.06 0.14 0.30

Compute the marginal distribution function of X and Y, P(X = 1, Y = 1)


and check whether X and Y are independent. [A/M 08, ECE]

4. The joint probability mass function of (X, Y) is given by


p(x, y) = k(2x + 3y), x = 0, 1, 2; y = 1, 2, 3. Find all the marginal and
conditional probability distributions. Also find the probability
distribution of (X + Y). [N/D 2008, ECE]
[Link]
[Link]
132 UNIT II - Two Dimensional Random Variables

2.4.7 PART - B[Problems of Marginal, conditional distributions (Cont r.v.)]

Example 2.5 Given joint p.d.f. f (x, y) = 2, 0 < y < x < 1. Find
(a) Marginal p.d.f. of X (b) Marginal p.d.f. of Y
(c) Conditional. prob. of X given Y (d) Conditional prob. of Y given X
(e) Are X and Y independent?

Solution : Given joint p.d.f. f (x, y) = 2, 0 < y < x < 1. (1)

n
(a) Marginal p.d.f. of X

g.i
Z∞
f (x) = f (x, y)dy

n
eri
−∞
Zy=x
= 2dy (∵ by (1))
ine
y=0
y=x
= 2[y] y=0
ng

= 2[(x) − (0)]
∴ f (x) = 2x, 0 < x < 1 (use extreme limits)
E

(b) Marginal p.d.f. of Y


arn

Z∞
f (y) = f (x, y)dx
Le

−∞
Zx=1
=
w.

2dy (∵ by (1))
x=y
ww

= 2[x]x=1
x=y
= 2[(1) − (y)]
∴ f (y) = 2(1 − y), 0 < y < 1 (use extreme limits)
(c) Conditional probability of X given Y
f (x, y)
f (x/y) =
f (y)
2
= (∵ by (1) and (b))
2(1 − y)
1
∴ f (x/y) = ,0 < y < 1
1−y
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 133

(d) Conditional probability of Y given X

f (x, y)
f (y/x) =
f (x)
2
= (∵ by (1) and (a))
2x
1
∴ f (x/y) = , 0 < y < 1
x

n
g.i
(e) Are X and Y independent?

n
If X and Y are independent, then f (x, y) = f (x) · f (y)

eri
If X and Y are not independent, then f (x, y) , f (x) · f (y)
ine
Here f (x, y) = 2 (∵ by (1))
ng

f (x) · f (y) = 2x · 2(1 − y)


= 4x(1 − y)
E

, f (x, y)
arn
Le

∴ X and Y are not independent.


w.
ww

Example 2.6 The bivariant random variable X and Y has the p.d.f.
f (x, y) = kx2 8 − y , x < y < 2x, 0 ≤ x ≤ 2. Find


(a) k (b) f (x) (c) f (y) (d) f (x/y)


 
(e) f (y/x) (f) f (y/x = 1) 1 3
(g) P 4 ≤ X ≤ 4

Solution : f (x, y) = kx2 8 − y , x < y < 2x, 0 ≤ x ≤ 2.



(1)
[Link]
[Link]
134 UNIT II - Two Dimensional Random Variables

(a) k:
Z∞ Z∞
W.K.T. f (x, y)dxdy = 1 (∵ f (x, y) has an unknown k)
−∞ −∞
Z2 Z2x
kx2 (8 − y)dydx = 1
0 x
(∵ by(1), y is a function of variable x)
Z2 2 y=2x

n
" #
y
k x2 8y − dx = 1

g.i
2 y=x
0
Z2 " !#
x2

n
 
k x2 16x − 2x2 − 8x − dx = 1
2
0
Z2 "
x2
# eri
ine
k x2 16x − 2x2 − 8x + dx = 1
2
0
Z2
ng

" 2
#
x
k x2 8x − 3 dx = 1
2
0
E

Z2 " 4
#
x
arn

k 8x3 − 3 dx = 1
2
0
" 4 #2
x x5
Le

k 8 −3 =1
4 10 0
5 2
" #
x
w.

 
k 2x4 − 3 =1
10 0
32
  
ww

k (32 − 0) − 3 =1
10
32
  
k (32) − =1
5
160 − 48
 
k =1
5
112

k =1
5
5
∴k=
112
5 2
∴ (1) ⇒ f (x, y) = x 8 − y , x < y < 2x, 0 ≤ x ≤ 2.

(2)
112
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 135

(b) f (x) :

Z∞
f (x) = f (x, y)dy
−∞
Z2x Z2x
5 2 5 2
= x 8 − y dy =
 
x 8 − y dy
112 112
x x
(∵ integration w.r.t. y, so treat remaining as constant)

n
!2x
y2
" ! !#
5 2 5 2 4x2 x2
= x 8y − = x 16x − − 8x −

g.i
112 2 x 112 2 2
" ! !#
5 2 4x2 x2
= x 16x − − 8x −

n
112 2 2

eri
" #
5 2 4x2
= x 8x − 3
112 2
ine
5 h 3 i
= 16x − 3x4 , 0 < x < 2
224
ng

(b) In Region R1 , f (y) :


E

Y Y = 2X
arn

51
(2, 4)
4
Le

Z∞ X=Y
f (y) = f (x, y)dx 3 R2
w.

−∞
(2, 2)
Zy 2
5 2 R1
ww

=

x 8 − y dx
112
y 1
2

[Refer region R1 ] X
0 1 2 3 40
Zy
5
= x2 dx

8−y
112
y
2
" #y
5  x3
= 8−y
112 3 y
" 3 !2
y3
!#
5  y
= 8−y −
112 3 24
[Link]
[Link]
136 UNIT II - Two Dimensional Random Variables

 7y3
" #
5
= 8−y
112 24
5 3
= y 8 − y ,0 < y < 2

384
In Region R2 , f (y) :
Z∞
f (y) = f (x, y)dx
−∞
Z2

n
5 2
=

x 8 − y dx [Refer region R2 ]

g.i
112
y
2

Z2

n
5
= x2 dx

8−y

eri
112
y
2
" #2
 x3
ine
5
= 8−y
112 3 y
"  2
y3
!#
5  8
ng

= 8−y −
112 3 24
5 h i
E

= 8 − y 64 − y3 , 2 < y < 4

112
arn

(d) f (x/y) :
f (x, y)
f (x/y) =
Le

f (y)
 h i
5 2 2



 112 8x − x y
w.


5 3
 
yh8−y i

=

 384
5


 112 8x2 − x2 y
ww



 5 8 − y 64 − y3 


112

(e) f (y/x) :
f (x, y)
f (y/x) =
f (x)
h i
5 2 2
112 8x − x y
= 5
224
(16x3 − 3x4 )
h i
2 2
2 8x − x y
=
(16x3 − 3x4 )

[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 137

(f) f (y/x = 1) :
f (x = 1, y)
f (y/x = 1) =
f (x = 1)
5  
112 8 − y
= 5
(16 − 3)
224
2 
=

8−y
13
 
1 3
(g) P 4 ≤X≤ 4 :

n
 Z3/4

g.i
1 3

P ≤X≤ = f (x)dx
4 4
1/4

n
Z3/4

eri
5  3 
= 4
16x − 3x dx
224
1/4
ine
5 3 5 3/4
 
= 4
4x − x
224 5 1/4
= 0.0247366769
ng

Example 2.7 Joint distribution function of X and Y is given by


E

F(x, y) = (1 − e−x ) (1 − e−y ) , x > 0, y > 0. Find


arn

(a) Marginal p.d.f. of X and Y (b) Marginal p.d.f. of Y


(c) Are X and Y independent? (d) P(1 < x < 3, 1 < y < 2)
Le

Solution : Given joint distribution function of continuous r.v.s X and Y is


w.

given by
F(x, y) = (1 − e−x ) (1 − e−y ) , x > 0, y > 0 (1)
ww

To find Marginal p.d.f. f (x) and f (y), first we have to find joint p.d.f. f (x, y).
i.e.,
∂2 ∂2
f (x, y) = F(x, y) (or) F(x, y)
∂x∂y ∂y∂x
∂ ∂ 
" #
= −x
(1 − e ) (1 − e )−y 
∂x ∂y
∂ 
= (1 − e−x ) e−y

∂x
= e−x e−y
= e−(x+y)
[Link]
[Link]
138 UNIT II - Two Dimensional Random Variables

∴ Joint p.d.f. of X and Y is f (x, y) = e−(x+y) . (2)


(a) Marginal p.d.f. of X:
Z∞
f (x) = f (x, y)dy
−∞
Z∞
= e−(x+y) dy (∵ by (2))

"0 −(x+y) #∞
e

n
=
−1 0

g.i
= (0) − (−e−x )
 

∴ f (x) = e−x , x > 0

n
eri
(b) Marginal p.d.f. of Y:
Z∞
f (y) =
ine
f (x, y)dx
−∞
Z∞
ng

= e−(x+y) dx (∵ by (2))

"0 −(x+y) #∞
E

e
=
arn

−1 0
= (0) − (−e−y )
 

∴ f (y) = e−y , y > 0


Le

(c) Independent of X and Y:


f (x, y) = e−(x+y)
w.

= e−x · e−y
ww

= f (x) · f (y)
∴ X and Y are independent.
(d) P(1 < x < 3, 1 < y < 2) :
Z2 Z3
P(1 < x < 3, 1 < y < 2) = e−x e−y dxdy
1 1
=e −5
− e−4 − e−3 + e−2

Example 2.8 Given joint p.d.f. f (x, y) = 81 (6 − x − y), 0 < x < 2, 2 < y < 4.
Find
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 139

(a) P(X < 1, Y < 3) (b) P(X + Y < 3) (c) P(X + Y > 3)
(d) P(X < 1/Y < 3) (e) f (x/y = 1)

Solution : Given joint p.d.f. of continuous r.v.s X and Y is

n
g.i
1
f (x, y) = (6 − x − y), 0 < x < 2, 2 < y < 4. (1)
8

n
(a) P(X < 1, Y < 3) : eri
ine
ng

Z∞ Z∞
P(X < 1, Y < 3) = f (x, y)dxdy
E

−∞ −∞
arn

Z1 Z3
1
= (6 − x − y)dydx (∵ by (1))
8
0 2
Le

Z1 !3
1 y2
= 6y − xy − dx
8 2
w.

2
0
Z1 
1 9
 
ww

= 18 − 3x − − (12 − 2x − 2) dx
8 2
0
Z1 
1 7

= − x dx
8 2
0
" #1
1 7 x2
= x−
8 2 2 0
1 7 1
  
= − − (0 − 0)
8 2 2
3
=
8
[Link]
[Link]
140 UNIT II - Two Dimensional Random Variables

41

3
(b) P(X + Y < 3) : X+Y=3
2
Z∞ Z∞

n
P(X + Y < 3) = f (x, y)dxdy 1

g.i
−∞ −∞
Z3 Z3−y X
1 0 1 2 30

n
= (6 − x − y)dxdy [∵ by (1)]
8

eri
2 0
3 !3−y
x2
Z
1
=
ine
6x − − yx dy
8 2 0
2
Z3 "
9 y2
! #
1
ng

= 18 − 6y − − + 3y − 3y + y2 − (0 − 0 − 0) dy
8 2 2
2
E

Z3 "
y2
#
1 27
= − 6y +
arn

dy
8 2 2
2
3 3
" #
1 27 y
= y − 3y2 −
Le

8 2 6 2
1 27 27 8
   
= 3 − 27 + − 27 − 12 +
w.

8 2  6 6
1 4
= 3−
ww

8 3
5
=
24

(c) P(X + Y > 3) :

P(X + Y > 3) = 1 − P(X + Y < 3)


5
=1−
24
19
=
24
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 141

(d) P(X < 1/Y < 3) :

P(X < 1, Y < 3)


P(X < 1/Y < 3) = (2)
P(Y < 3)

Z3
where, P(Y < 3) = f (y)dy (3)
2
Z∞

n
Now, f (y) = f (x, y)dx

g.i
−∞
Z2
1
=

n
(6 − x − y)dx
8

eri
0
Z2
1
= (6 − x − y)dx
ine
8
0
!2
1 x2
=
ng

6x − − xy
8 2 0
1
=
 
12 − 2 − 2y − ((0 − 0 − 0))
E

8
1
arn

= 10 − 2y

8
1
= 5−y

4
Le

Z3
1
∴ (3) ⇒ P(Y < 3) =

5 − y dy
w.

4
2
#3
y2
"
1
ww

= 5y −
4 2 2
1 9
  
= 15 − − (10 − 2)
4 2
1 9
= 7−
4  2
1 5
=
4 2
5
=
8
P(X < 1, Y < 3) 3/8 3
∴ (2) ⇒ P(X < 1/Y < 3) = = = [∵ by (a) and (3)]
P(Y < 3) 5/8 5
[Link]
[Link]
142 UNIT II - Two Dimensional Random Variables

(e) f (x/y = 1) :
" #
f (x, y)
f (x/y = 1) =
f (y) y=1
1 
 8 (6 − x − y) 
=  1 
(5 − y)

4 y=1
1 5−x
 
=
2 4
5−x
=

n
8

n g.i
Example 2.9 A gun is aimed at a certain point (origin of coordinate system),

eri
because of the random factors, the actual hit point can be any point (x, y)in a circle
of radius R about the origin. Assume that the joint density function of X and Y is
ine

 c ,x + y ≤ R

 2 2 2
constant in this circle and given by f (x, y) = 

.
 0 , otherwise

ng


 q  2
2
 πR 1 − R
x
, −R ≤ X ≤ R


(b) show that f (x) = 

(a) Find c
E

0 , otherwise



arn

Solution : Given that the joint density function of X and Y is constant in


this circle and given by
Le

c , x2 + y2 ≤ R2
(
f (x, y) = (1)
0 , otherwise
w.

(a) Value of c :
ww

Z∞ Z∞
W.K.T. f (x, y)dxdy = 1 (∵ f (x, y) contains an unknown)
−∞ −∞
Z Z
cdx = 1

πR2 c = 1
1
∴c=
πR2
1

, x2 + y2 ≤ R2

∴ (1) ⇒ f (x, y) = 

 πR0 , otherwise
2 (2)


[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 143

(b) Proof of f (x) :


Z∞
f (x) = f (x, y)dy
−∞

ZR2 −x2
1
= dy

πR2
− R2 −x2

ZR2 −x2
2

n
= dy (∵ constant fn. which is an even function)
πR2

g.i
0
2 h √ 2  i
= 2
R − x − (0)
πR2  q

n

 1 −  x 2 
=
2 
πR 
2

 R
R  

eri

ine
r  2
2 x
= 1− , −R ≤ X ≤ R.
πR R
ng

2.4.8 Anna University Questions


E

8xy, 0 < x < y < 1


(
1. Given the p.d.f of (X, Y) as f x, y =

Find the
arn

0, otherwise
marginal and conditional probability density function of X and Y. Are
X and Y independent? [May/June 2006, ECE][M/J 07, CSE]
Le

Solution : f_X(x) = 4x(1 − x²), 0 < x < 1 ;  f_Y(y) = 4y³, 0 < y < 1 ;
f(x/y) = f(x, y)/f_Y(y) = 2x/y², 0 < x < y ;  f(y/x) = f(x, y)/f_X(x) = 2y/(1 − x²),
x < y < 1 ;  f_X(x) f_Y(y) ≠ f(x, y), so X and Y are not independent.
   

2. If the joint p.d.f. of a two dimensional random variable (X, Y) is given


ww

by ( 2 xy
x + 3 ; 0 < x < 1, 0 < y < 2  
f x, y = Find (a) p x > 21 (b) P(Y < X)

0, elsewhere.
 
(c) p y < 21 /x < 12
{(a) = 5/6 , (b) = 7/24 , (c) = 5/32}
3. If X is the proportion of persons who will respond to one kind of mail
order solicitation, Y is the proportion of persons who will respond to
another kind of mail-order solicitation and the joint probability
density (function of X and Y is given by
5 x + 4y , 0 < x < 1, 0 < y < 1, find the probabilities
2 
f x, y =

0, elsewhere
[Link]
[Link]
144 UNIT II - Two Dimensional Random Variables

that
(1) atleast 30% will respond to the first kind of mail-order solicitation.
(2) atmost 50% will respond to the second kind of mail-order
solicitation given that there has been 20% response to the first kind of
mail-order solicitation.

4. The joint of p.d.f. of random variable X and Y is given by f x, y =



Kxye−(x +y ) , x > 0y > 0. Find the value of K and prove also that X and
2 2

Y are independent.[N/D06,CSE][N/D 07, IT] {K=4}

n
=

5. The joint p.d.f of a bivariate random variable (X, Y) is fXY x, y
k x + y , i f 0 < x < 2, 0 < y < 2
(

g.i

0, otherwise
(1) Find k {1/8}o

n
1+y
n
4 , 0 < x < 2; 4 , 0 < y < 2
x+1
(2) Find the marginal p.d..f of X and Y.
(3) Are X and Y independent?

(4) Find fY/X y/x and fX/Y x/y

eri {not
n independent}
1 x+y
   x+y o
1
2 x+1 ; 2 y+1
ine
6. The joint probability density function of a two dimensional random
x2
= 2
+ 8 , 0 ≤ x ≤ 2, 0 ≤ y ≤ 1.

variable (X, Y) is given by f x, y xy
ng

  
Compute P (X > 1) , P y < 12 , P X > 1/Y < 12 , P Y < 12 /X > 1 ,
P(X<Y) and P (X + Y ≤ 1) . [M/J 07, ECE][N/D 07, CSE] {19/24, 1/4, 5/6,
E

5/19, 53/480, 13/480}


arn

7. Given the joint probability density function of (X, Y):


, x > 0, y > 0
( −(x+y)
e
f x, y =

.
0, elsewhere
Le

Find the marginal densities of X and Y. Are X and Y independent?


[A/M 2008, ECE]
w.

8. Given ( the joint probability density


2
(x + 2y), 0 < x < 1, 0 < y < 1
f x, y = 3
ww


Find the marginal densities,
0, otherwise
 
conditional density of X given Y = y and P X ≤ 12 /Y = 12 . [A/M 2008,
ECE]

9. The joint p.d.f. of X and Y is given by


kx(x − y), 0 < x < 2, −x < y < x
(
f (x, y) = . Evaluate the constant k
0, otherwise
and find the marginal [Link] of the random variables. Find
also conditional p.d.f. of Y given X = x. [M/J 09, ECE]

10. Find the marginal and conditional densities if f (x, y) = k(x3 y + xy3 ), 0 =
x = 2, 0 = y = 2. [M/J 09, ECE]
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 145

11. The joint p.d.f.


( of a two dimensional random variable (X, Y) is given
xy
x + 3 , 0 < x < 1, 0 < y < 2
2
by f (x, y) = . Find (1) P(X > 1/2) (2)
0, elsewhere
P(Y < X) (3) P(Y < 1/2 / X < 1/2). [M/J 09, ECE]

( the joint p.d.f. of a two dimensional R.V. (x, y) is given by f (x, y) =


12. If
k(6 − x − y), 0 < x < 2, 2 < y < 4
, find (a) the value of k (b) p(x <
0, elsewhere
1, y < 3) (c) p(x < 1/y < 3) (d) p(y < 3/x < 1) (e) p(x + y < 3). [N/D
2009, ECE]

n
g.i
2.4.9 Part-B[Problems of Correlation]

n
Example 2.10 For the following data about the use of fertilizers, X in kg and the

X 12 12 eri
output yield Y in kg from a field, find the correlation coefficient.

15 17 20 26
ine
Y 140 160 100 190 200 260
ng

Solution : Here X and Y are discrete random variables.


W.K.T. the correlation coefficient for the discrete random variables is
Cov(X, Y)
E

r=
σX · σY
arn

E(XY) − E(X)E(Y)
= q q
E (X2 ) − [E(X)] E (Y2 ) − [E(Y)]2
2
Le

1
 P
n xy − x y
, for n pairs of (x, y) values



 q q

 1
P 2 1
P 2
x2 − (x) n y2 − y

w.



n


=

(or)

 P  P P 
n xy − ( x) y

ww


, direct method



 q
 h P P 2i h P 2 P 2 i
2


 n x − ( x) n y − y

# X Y XY X2 Y2 X
1 12 140 1680 144 19600
2 12 160 1920 144 25600
3 15 100 1500 225 10000
4 17 190 3230 289 36100
5 20 200 4000 400 40000
6 26 260 6760 676 67600
n= X= Y= XY = X = Y =
P P P P 2 P 2
6 102 1050 19090 1878 198900
[Link]
[Link]
146 UNIT II - Two Dimensional Random Variables

∴ r(X, Y) = [ (19090/6) − (102/6)(1050/6) ] / { √[ (1878/6) − (102/6)² ] · √[ (198900/6) − (1050/6)² ] }
          = [ (19090/6) − (107100/36) ] / ( √24 · √2525 )
          = 206.6666667 / 246.1706725
∴ r = 0.8395    (check: −1 ≤ r ≤ 1)

n
g.i
Note : Here r is positive and close to 1, so the relation between fertilizer use
and yield is close to a perfect positive correlation (r = 1).
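If NumPy is available, the hand computation above can be cross-checked in one line; np.corrcoef returns the matrix of Pearson correlation coefficients, and the off-diagonal entry is r(X, Y):

    import numpy as np

    x = [12, 12, 15, 17, 20, 26]        # fertilizer (kg)
    y = [140, 160, 100, 190, 200, 260]  # yield (kg)

    r = np.corrcoef(x, y)[0, 1]         # off-diagonal entry of the 2x2 correlation matrix
    print(round(r, 4))                  # ≈ 0.8395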

n
eri
Example 2.11 Calculate the coefficient of correlation from the following heights
in inches of fathers X and sons Y.
ine
X 65 66 67 67 68 69 70 72
ng

Y 67 68 65 68 72 72 69 71
E

Solution : {Ans : r=0.6032}


arn

Example 2.12 Joint p.d.f. of X and Y is given by f (x, y) = x + y, 0 ≤ x, y ≤ 1.


Le

Find correlation coefficient.


w.

Solution : Joint p.d.f. of continuous r.v.s X and Y is given by

f (x, y) = x + y, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1.
ww

(1)

Cov(X, Y)
r=
σX · σY
E(XY) − E(X)E(Y)
= q q
E (X2 ) − [E(X)] E (Y2 ) − [E(Y)]2
2

R∞ R∞ R∞
! ∞ !
R
xy f (x, y)dxdy − x f (x)dx y f (y)dy
−∞ −∞ −∞ −∞
= s #2 s ∞ #2

"∞ "∞
R R R R
x2 f (x)dx − x f (x)dx y2 f (y)dy − y f (y)dy
−∞ −∞ −∞ −∞
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 147

Z∞ Z∞
E(XY) = xy f (x, y)dxdy
−∞ −∞
Z1 Z1
= xy(x + y)dxdy
0 0
Z1 Z1  
= x2 y + xy2 dxdy
0 0

n
Z1 " #1
x3 x2 2

g.i
= y + y dy
3 2 0
0

n
Z1 "
y y2
! #

eri
= + − (0 + 0) dy
3 2
0
ine
#1
y2 y3
"
= +
6 6 0
1 1 2
  
ng

= + − (0 + 0) =
6 6 6
1
=
E

3
arn
Le

Z∞
E(X) = x f (x)dx (2)
w.

−∞
Z∞
ww

where f (x) = f (x, y)dy


−∞
Z1
= (x + y)dy
0
#1
y2
"
= xy +
2 0
1
  
= x + − (0 + 0)
2
1
= x + ,0 < x < 1
2
[Link]
[Link]
148 UNIT II - Two Dimensional Random Variables

Z1 
1

∴ (2) ⇒ E(X) = x x + dx
2
0
Z1 
x

= x + dx
2
2
0
" #1
x3 x2
= +
3 4 0
1 1
  

n
= + − (0 + 0)
3 4

g.i
7
=
12

n
Z∞

eri
E(Y) = y f (y)dy (3)
−∞
Z∞
ine
where f (y) = f (x, y)dx
−∞
ng

Z1
= (x + y)dx
E

0
#1
arn

"
x2
= + xy
2
 0
1
 
= + y − (0 + 0)
Le

2
1
= y + ,0 < y < 1
w.

2
Z1
1
 
ww

∴ (3) ⇒ E(Y) = y y + dy
2
0
Z1 
y
= y + dy
2
2
0
#1
y3 y2
"
= +
3 4 0
1 1
  
= + − (0 + 0)
3 4
7
=
12
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 149


  Z
E X2 = X2 f (x)dx
−∞
Z1
1
 
= x x + dx
2
2
0
Z1 2
!
x
= x3 + dx
2
0

n
" #1
x4 x3

g.i
= +
4 12 0
1 1
  
= + − (0 + 0)

n
4 12

eri
5
=
12

ine
  Z
E Y2 = y2 f (y)dy
−∞
ng

Z1
1
 
= y y + dy
2
2
E

0
arn

Z1 2
!
y
= y3 + dy
2
0
Le

#1
y4 y3
"
= +
4 12 0
w.

1 1
  
= + − (0 + 0)
4 12
ww

5
=
12
r = [ 1/3 − (7/12)(7/12) ] / { √(5/12 − 49/144) · √(5/12 − 49/144) }
  = (−1/144) / (11/144)
  = −1/11 ≈ −0.0909

Example 2.13 10 participants were ranked according to their performance in a


musical competition as follows.
[Link]
[Link]
150 UNIT II - Two Dimensional Random Variables

X 1 6 5 10 3 2 4 9 7 8
Y 3 5 8 4 7 10 2 1 6 9
Z 6 4 9 8 1 2 3 10 5 7
Using rank correlation method, discuss which pair of judges has the nearest
approximation to common likings of music.
Solution : Here X, Y, Z are discrete random variables.

n
W.K.T. the rank correlation coefficient for the given arrangement of ranks.
Here ranks are not repeated.

g.i
So, use Non-Repeated rank correlation formula. i.e.,
" # X

n
6 h i
r(X, Y) = 1 − dXY , d = x − y
2
(1)
n (n2 − 1)

r(Y, Z) = 1 −
"
6
n (n2 − 1)
# X
h
2
i
eri
dYZ , d = y − z (2)
ine
" # X
6 h i
r(Z, X) = 1 − dZX , d = z − x
2
(3)
n (n2 − 1)
ng

# Rank Rank Rank dXY dYZ dZX d2XY d2YX d2ZX


of X of Y of Z = X−Y = Y−Z = Z−X
E

1 1 3 6 -2 -5 5 4 25 25
arn

2 6 5 4 1 1 -2 1 1 4
3 5 8 9 -3 -1 4 9 1 16
4 10 4 8 6 -4 -2 36 16 4
Le

5 3 7 1 -4 6 -2 16 36 4
6 2 10 2 -8 8 0 64 64 0
w.

7 4 2 3 2 -1 -1 4 1 1
8 9 1 10 8 -9 1 64 81 1
ww

9 7 6 5 1 1 -2 1 1 4
10 8 9 7 -1 2 -1 1 4 1
P 2 P 2 P 2
n dXY dYZ dZX
= 10 =200 =214 =60
" # X
6 h i
∴ (1) ⇒ r(X, Y) = 1 − 200 2
= −0.2121
10 (102 − 1)
" # X
6 h i
(2) ⇒ r(Y, Z) = 1 − 214 2
= −0.2969
10 (102 − 1)
" # X
6 h i
(3) ⇒ r(Z, X) = 1 − 60 = 0.6363
2
10 (102 − 1)
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 151

Best pair of judges is decided by maximum(near to value 1) of above r


values.
Here Max[r(X, Y) = −0.2121, r(Y, Z) = −0.2969, r(Z, X) = 0.6363] = 0.6363.
∴ The judges X and Z are the best pair in common likeness of music.
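The three pairwise rank correlations can be cross-checked numerically; the sketch below applies the non-repeated-ranks formula r = 1 − 6Σd²/(n(n² − 1)) directly to the given rankings:

    def rank_corr(a, b):
        """Rank correlation for non-repeated ranks: 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
        n = len(a)
        d2 = sum((x - y) ** 2 for x, y in zip(a, b))
        return 1 - 6 * d2 / (n * (n ** 2 - 1))

    X = [1, 6, 5, 10, 3, 2, 4, 9, 7, 8]
    Y = [3, 5, 8, 4, 7, 10, 2, 1, 6, 9]
    Z = [6, 4, 9, 8, 1, 2, 3, 10, 5, 7]

    print(round(rank_corr(X, Y), 4))   # ≈ -0.2121
    print(round(rank_corr(Y, Z), 4))   # ≈ -0.2970
    print(round(rank_corr(Z, X), 4))   # ≈ 0.6364  -> judges X and Z agree the most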

Example 2.14 Find rank correlation coefficient for X and Y given below:

X 68 64 75 50 64 80 75 40 55 64
Y 62 58 68 45 81 60 68 48 50 70

n
Solution : Here X, Y are discrete random variables.

g.i
Here ranks of X and Y are not given directly as in previous problem. So
find ranks for the given data of X and Y.

n
Here some of data of X and Y are repeated.

eri
So, use Repeated rank correlation formula. i.e.,
#    
2
m m − 1
"
6  X  X i i

r(X, Y) = 1 − +
 2
 
ine
d (1)
  
2 XY  
n (n − 1)  12

 
 

∀i

where d = x − y;
ng

mi = number of times an item is repeated


E

# X Y Rank of X Rank of Y d = x − y d2
arn

1 68 62 4 5 -1 1
2 64 58 6 7 -1 1
3 75 68 2.5 3.5 -1 1
Le

4 50 45 9 10 -1 1
5 64 81 6 1 5 25
6 80 60 1 6 -5 25
w.

7 75 68 2.5 3.5 -1 1
8 40 48 10 9 1 1
ww

9 55 50 8 8 0 0
10 64 70 6 2 4 16
n = 10 d2XY = 72
P

In X− series:
∗ 75 is repeated 2 times, m1 = 2
   
2 2
m1 m1 − 1 2 2 −1 1
= =
12 12 2
∗ 64 is repeated 3 times, m2 = 3
   
2 2
m2 m2 − 1 3 3 −1
= =2
12 12
[Link]
[Link]
152 UNIT II - Two Dimensional Random Variables

In Y− series:
∗ 68 is repeated 2 times, m3 = 2
   
m3 m23 − 1 2 22 − 1 1
= =
12 12 2
" #
6 1 1
 
∴ (1) ⇒ r(X, Y) = 1 − 72 + + 2 +
10 (102 − 1) 2 2
= 1 − 0.4545
∴ r = 0.5455

n
g.i
2.4.10 Part-B[Problems of Correlation](Anna University Questions)

1. If the independent random variables X and Y have the variances 36

n
and 16 respectively,
√ find the correlation coefficient between (X +Y) and

eri
(X − Y). {r = 5/13 } [N/D 2006, ECE]
6−x−y
2. If f (x, y) = 8 , 0 ≤ x ≤ 2, 2 ≤ y ≤ 4 for a bivariate r. v. (X, Y) find the
ine
correlation coefficient ρXY . {-1/11}
3. Find the correlation coefficient for the following data: [N/D07,ECE]
ng

X 10 14 18 22 26 30
Y 18 12 24 6 30 36 {0.6}
E

4. Find the correlation between X and Y if the joint probability density of


arn

2, x > 0, y > 0, x + y < 1


(
X and Y is f x, y = .[A/M 08, ECE]

0, elsewhere
Le

5. If the independent random variables X and Y have the variance 36 and


16 respectively, find the correlation coefficient between (X + Y) and
(X − Y). [A/M 08, ECE]
w.

6. Calculate the Karl-Pearson’s coefficient of correlation from the


following data:
ww

X : 39 65 62 90 82 75 25 98 36 78
Y : 47 53 58 86 62 68 60 91 51 84

7. Let the joint p.d.f. of (X, Y) be fXY (x, y) = e−y , 0 < x < y < 8. Find the
correlation coefficient r(X, Y).[16 marks] [N/D 2008, ECE]

2.4.11 Part - B[Problems of regression]

Example 2.15 The following table gives age X in years if cars and annual
maintenance cost Y(in 100 Rs.)
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 153

X 1 3 5 7 9
Y 15 18 21 23 22

Find
(a) regression equations (b) correlation coefficient
(c) Estimate the maintenance cost for a 4 year old car.

Solution :

n
(a) W.K.T. regression equations, i.e.,

g.i
Regression equation of Y on X is
y − y = b yx (x − x)

(1)

n
Regression equation of X on Y is

eri
(x − x) = bxy y − y

(2)
ine
where

bxy = Regression coefficient of x on y


b yx = Regression coefficient of y on x
ng

Cov (X, Y) Cov (X, Y) Cov (X, Y) Cov (X, Y)


= = = =
σ2x Var (Y) σ2y
E

Var (X)
σy σx
=r =r
arn

σP
x σy
P P  P P P 
n xy − ( x) y n xy − ( x) y
= P 2 P 2 = P 2
n x − ( x)
P 2
n y − y
Le

X X X X X
Now, find the values of x, y, x, y, xy, x2 , y2 by the table :
w.

X2 Y2
ww

# X Y XY
1 1 15 15 1 225
2 3 18 54 9 324
3 5 21 105 25 441
4 7 23 161 49 529
5 9 22 198 81 484
X = 25 Y = 99 XY = 533 X = 165 Y = 2003
P P P P 2 P 2
n=5

5(533) − (25)(99)
P
x 99
x= = = 19.8 b yx = 2
= 0.95
Pn 5 5(165) − (25)
y 25 5(533) − (25)(99)
y= = =5 bxy = = 0.8878
n 5 5(2003) − (99)2
[Link]
[Link]
154 UNIT II - Two Dimensional Random Variables

∴ Regression equation of Y on X is from (1) is


(y − 19.8) = 0.95(x − 5) ⇒ y = 0.95x + 15.05 (3)
∴ Regression equation of X on Y is from (2) is
(x − 5) = 0.8878(y − 19.8) ⇒ x = 0.8878y − 12.5784 (4)
(b) Correlation Coefficient r :
q
r = ± b yx · bxy

n
= ± 0.8878 × 0.95

g.i
∴ r = ±0.9183
(c) Cost(Y) for 4 year old(X) car :

n
i.e., we have to use regression equation of Y on X. i.e.,
∴ (3) ⇒ y = 0.95x + 15.05
= 0.95(4) + 15.05
eri
ine
∴ y = 18.85
ng

i.e., The cost for 4 year old car is Rs.1885.
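As a quick numerical cross-check of the fitted line of Y on X, NumPy's polyfit of degree 1 returns the slope and intercept of the least-squares line for the same data:

    import numpy as np

    x = [1, 3, 5, 7, 9]           # age of car (years)
    y = [15, 18, 21, 23, 22]      # annual maintenance cost (hundreds of Rs.)

    slope, intercept = np.polyfit(x, y, 1)        # least-squares line of Y on X
    print(round(slope, 2), round(intercept, 2))   # ≈ 0.95 and 15.05
    print(round(slope * 4 + intercept, 2))        # predicted cost for a 4-year-old car ≈ 18.85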

Example 2.16 The two lines of regression are


E
arn

8x − 10y + 66 = 0

40x − 18y − 214 = 0


Le

Find
w.

(a) mean values of X and Y (b) r (X, Y)


ww

Solution : Given
8x − 10y + 66 = 0 (1)
40x − 18y − 214 = 0 (2)

(a) Since both the lines of regression pass through the point x, y , where

(x) and y are mean values of x and y, which satisfies the two given
lines of regression.
Find the mean value point x, y by solving equations (1) and (2) : i.e.,
8x − 10y = −66 (3)
40x − 18y = 214 (4)
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 155

On solving,

x = 13
y = 17

(b) r(x, y) :

(1) ⇒ 8x − 10y + 66 = 0 (1) ⇒ 8x − 10y + 66 = 0


⇒ 8x = 10y − 66 ⇒ 10y = 8x + 666
10 66 8 66
⇒ x= y − ⇒ y= x +

n
8 8 10 10
10 8

g.i
∴ bxy = ∴ b yx =
8 10
(2) ⇒ 40x − 18y − 214 = 0 (2) ⇒ 40x − 18y − 214 = 0

n
⇒ 40x − 214 = 18y ⇒ 40x = 18y + 214
⇒ y= y −
40
18
40
214
18 eri ⇒ x= y +
18
40
18
214
40
ine
∴ b yx = ∴ bxy =
18q 40q
∴ r=± bxy · b yx ∴ r=± bxy · b yx
ng

r r
10 40 18 8
r=± · r=± ·
8 18 40 10
E

r = ±2.777 r = ±0.6
arn

W.K.T. 0 ≤ r ≤ 1.

∴ r = ±0.6
Le

Example 2.17 For 10 observations on price X and supply Y the following data
w.

X = 130, Y = 220, X2 = 2288, Y2 = 5506, XY =


P P P P P
were obtained :
ww

3467. Obtain the line of regression of Y on X and estimate the supply when the
price is 16 units. [M/J 2009, ECE]

Solution :

2.4.12 Part - B[Problems of correlation & regression]

1. If y = 2x − 3 and y = 5x + 7 are the two regression lines, find the


mean values of x and y. Find the correlation coefficient between x and
y. Find an estimate of √ x when y = 1. [M/J 2006, ECE] {Mean values
x = − 3 , y = − 3 , r = 2/5, for y = 1, x = −6/5}
10 29

[Link]
[Link]
156 UNIT II - Two Dimensional Random Variables

2. If two dimensional random variables X and Y are uniformly


distributed in 0 < x, y < 1 find (i) correlation coefficient rxy . (ii)
Regression equations. [N/D 2009, ECE]

2.4.13 Part - B[Problems of Transformation of random variables] :

Example 2.18 If X and Y are continuous random variables and U = X + Y. Find


p.d.f. of U.

n
Solution : Given U = X + Y. (1)

g.i
Let us define a new variable Let us define a new variable
V=X (2) V=Y (4)

n
(2) ⇒ x = v (2) ⇒ y = v
(1) ⇒ u = x + y
i.e., u = v + y [∵ by (2)]
(1) ⇒ u = x + y
eri
i.e., u = x + v [∵ by (4)]
ine
∴ y=u−v ∴x=u−v
g(u, v) = f (x, y) |J| (3) g(u, v) = f (x, y) |J| (5)
ng

0 1 1 −1
where J = = −1 where J = =1
1 −1 0 1
E

∴ |J| = |−1| = 1 ∴ |J| = |1| = 1


arn

∴ (3) ⇒ g(u, v) = f (v, u − v) · 1 ∴ (3) ⇒ g(u, v) = f (u − v, v) · 1


Z∞ Z∞
∴ g(u) = g(u, v)dv ∴ g(u) = g(u, v)dv
Le

−∞ −∞
Z∞ Z∞
= =
w.

f (v, u − v)dv f (u − v.v)dv


−∞ −∞
Find range space of u and v.
ww

Example 2.19 Joint p.d.f. of X&Y is given by f (x, y) = e−(x+y) , x > 0, y > 0.
X+Y
Find p.d.f. of .
2
Solution : Given Joint p.d.f. of X&Y is given by

f (x, y) = e−(x+y) , x > 0, y > 0. (1)

X+Y
Let U = . (2)
2
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 157

Define a new variable V = X. (3)

(3) ⇒ v = x ⇒∴ x = v (4)
x+y
∴ (2) ⇒ u =
2
⇒ 2u = v + y ⇒∴ y = 2u − v (5)

Now, joint pdf of (U, V) :

g(u, v) = f (x, y) |J|

n
(6)

g.i
0 1
where J = = −2 ⇒ |J| = |−2| = 2
2 −1

n
∴ g(u, v) = e−(x+y) · 2

eri
= 2e−(v+2u−v)
g(u, v) = 2e−2u
ine
x>0
y>0
2u − v > 0
ng

To find range space of u and v: v > 0 2u > v


i.e., v < 2u
E

∴ 0 < v < 2u
V(U = 0)
arn

v = 2u
Z∞
Le

∴ Pdf of U is g(u) = g(u, v)dv


−∞
Z∞ 0 < v < 2u
w.

= 2e−2u dv
U(V = 0)
ww

−∞
Z2u
= 2e−2u du
0
= 2e−2u · 2u
= 4ue−2u , u > 0
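The derived density can be sanity-checked by simulation; the sketch below compares a histogram-style density estimate of U = (X + Y)/2, built from Exponential(1) samples (assumed illustration), with 4u e^(−2u) at a few points:

    import random
    from math import exp

    random.seed(1)
    N = 200_000
    u_samples = [(random.expovariate(1.0) + random.expovariate(1.0)) / 2 for _ in range(N)]

    # Compare an empirical density estimate with g(u) = 4u * exp(-2u) at a few points.
    h = 0.05
    for u0 in (0.25, 0.5, 1.0, 2.0):
        count = sum(1 for u in u_samples if u0 - h / 2 <= u < u0 + h / 2)
        empirical = count / (N * h)
        print(u0, round(empirical, 3), round(4 * u0 * exp(-2 * u0), 3))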

Example 2.20 If X and Y are independent exponential random variates with


parameter 1. Find p.d.f. of U = X − Y.

Solution : Given X and Y are independent exponential random variates


with parameter 1. i.e.,
[Link]
[Link]
158 UNIT II - Two Dimensional Random Variables

X ∼ Exp. Dist. (α = 1) Y ∼ Exp. Dist. (α = 1)


Pdf of X : f (x) = αe−αx , x > 0 Pdf of Y : f (y) = αe−αy , y > 0
= e−x , x ≥ 0 = e−y , y ≥ 0
∴ Joint p.d.f. of X&Y is given by

f (x, y) = f (x) · f (y) (∵ X and Y are independent)


= e−x · e−y
= e−(x+y) , x > 0, y > 0. (1)

n
g.i
Given U = X − Y. (2)
Define a new variable V = X. (3)

n
eri
(3) ⇒ v = x ⇒∴ x = v (4)
∴ (2) ⇒ u = x − y
⇒ u = v − y ⇒∴ y = v − u
ine
(5)
ng

Now, joint pdf of (U, V) :

g(u, v) = f (x, y) |J| (6)


E

0 1
where J = = 1 ⇒ |J| = |1| = 1
arn

−1 1
∴ g(u, v) = e−[x+y] · 1
Le

= e−[v+v−u]
g(u, v) = eu · e−2v
w.

x≥0
y≥0
v−u≥0
ww

To find range space of u and v: v ≥ 0


v≥u
∴u≤0<v

V(U = 0)
u=v

R1 R2

v≥u v>0
U(V = 0)
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 159

Z∞
∴ Pdf of U is g(u) = g(u, v)dv (7)
−∞

In R1 In R2
Z∞ Z∞
∴ (7) ⇒ g(u) = eu e−2v dv ∴ (7) ⇒ g(u) = eu e−2v dv

"0 −2v #∞ "u −2v #∞


e e
= eu = eu
−2 0 −2 u

n
" −2u
#

1
 e

g.i
= e (0) −
u = eu (0) −
−2 −2
u
e e −u
= ,u < 0 = ,u ≥ 0

n
2 2
eu

eri

,u < 0



∴ The required pdf of U : g(u) =  2


−u
e
,u ≥ 0


ine

2

Example 2.21 If X and Y are independent uniform random variates on [0, 1],
find the p.d.f. of U = XY.

Solution : Given X and Y are independent uniform random variates on [0, 1], i.e.,
        X ∼ U(a = 0, b = 1) with p.d.f. f (x) = 1/(1 − 0) = 1, 0 < x < 1,
        Y ∼ U(a = 0, b = 1) with p.d.f. f (y) = 1/(1 − 0) = 1, 0 < y < 1.
∴ The joint p.d.f. of X and Y is
        f (x, y) = f (x) · f (y) = 1, 0 < x < 1, 0 < y < 1.                    (1)
Given U = XY.                                                                 (2)
Define a new variable V = Y.                                                  (3)
(3) ⇒ v = y, so y = v.                                                        (4)
(2) ⇒ u = xy = xv, so x = u/v.                                                (5)
Now the joint p.d.f. of (U, V) is
        g(u, v) = f (x, y) |J|,                                               (6)
where J = ∂(x, y)/∂(u, v) = (1/v)(1) − (−u/v²)(0) = 1/v, so |J| = 1/v.
∴ g(u, v) = 1 · (1/v) = 1/v.
Range space of u and v: 0 < y < 1 ⇒ 0 < v < 1, and 0 < x < 1 ⇒ 0 < u/v < 1,
i.e. 0 < u < v. ∴ 0 < u < v < 1 (the region between the line v = u and v = 1).
∴ The p.d.f. of U is
        g(u) = ∫_{−∞}^{∞} g(u, v) dv = ∫_u^1 (1/v) dv = [log v]_u^1
             = (log 1) − (log u) = − log u, 0 < u < 1.
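A minimal check (same U(0, 1) assumptions as above) that the derived density
− log u behaves as expected:

```python
import random, math

# (1) -log u should integrate to 1 over (0,1); (2) P(U <= a) should match
# a - a*log(a), the c.d.f. obtained by integrating -log u from 0 to a.

du = 1e-5
area = sum(-math.log(u) * du for u in (du * k for k in range(1, int(1 / du))))
print("∫ -log u du over (0,1) ≈", round(area, 4))          # ≈ 1

N = 200_000
a = 0.5
hits = sum(random.random() * random.random() <= a for _ in range(N))
print("P(U <= 0.5): simulated", hits / N, " exact", round(a - a * math.log(a), 4))
```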

Example 2.22 If X and Y are independent uniform random variates on [0, 1],
find the p.d.f. of U = X/Y.

Solution : Given X and Y are independent uniform random variates on [0, 1], i.e.,
        X ∼ U(a = 0, b = 1) with p.d.f. f (x) = 1, 0 < x < 1,
        Y ∼ U(a = 0, b = 1) with p.d.f. f (y) = 1, 0 < y < 1.
∴ The joint p.d.f. of X and Y is
        f (x, y) = f (x) · f (y) = 1, 0 < x < 1, 0 < y < 1.                    (1)
Given U = X/Y.                                                                (2)
Define a new variable V = X.                                                  (3)
(3) ⇒ v = x, so x = v.                                                        (4)
(2) ⇒ u = x/y = v/y, so y = v/u.                                              (5)
Now the joint p.d.f. of (U, V) is
        g(u, v) = f (x, y) |J|,                                               (6)
where J = ∂(x, y)/∂(u, v) = (0)(1/u) − (1)(−v/u²) = v/u², so |J| = v/u².
∴ g(u, v) = 1 · (v/u²) = v/u².
Range space of u and v: 0 < x < 1 ⇒ 0 < v < 1, and 0 < y < 1 ⇒ 0 < v/u < 1,
i.e. 0 < v < u. Hence 0 < v < min(u, 1) and 0 < u < ∞.
∴ The p.d.f. of U is g(u) = ∫_{−∞}^{∞} g(u, v) dv :
For 0 < u < 1 :  g(u) = ∫_0^u (v/u²) dv = (1/u²)(u²/2) = 1/2.
For u ≥ 1 :      g(u) = ∫_0^1 (v/u²) dv = 1/(2u²).
∴ g(u) = 1/2 for 0 < u < 1, and g(u) = 1/(2u²) for u ≥ 1
(this agrees with the answer quoted in Question 1 of Section 2.4.14 below).
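A short simulation sketch (same U(0, 1) assumptions) comparing probabilities
implied by the piecewise density with simulated values of U = X/Y:

```python
import random

# From the derived density: P(U <= 1) = 1/2 and, for a > 1,
# P(U > a) = ∫_a^∞ 1/(2u²) du = 1/(2a). The denominator uses 1 - random()
# so that it lies in (0, 1] and can never be exactly zero.

N = 200_000
u = [random.random() / (1 - random.random()) for _ in range(N)]

print("P(U <= 1):  simulated", sum(x <= 1 for x in u) / N, " theory 0.5")
for a in (2, 4):
    print(f"P(U > {a}): simulated", sum(x > a for x in u) / N, " theory", 1 / (2 * a))
```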
Example 2.23 If X and Y are independent normal random variables with mean 0 and
variance σ², find the p.d.f. of R = √(X² + Y²) and φ = tan⁻¹(Y/X).

Solution : Given X and Y are independent N(0, σ²) random variables, i.e.,
        f (x) = (1/(σ√(2π))) e^(−x²/(2σ²)) , −∞ < x < ∞,
        f (y) = (1/(σ√(2π))) e^(−y²/(2σ²)) , −∞ < y < ∞.
∴ The joint p.d.f. of X and Y is
        f (x, y) = f (x) · f (y) = (1/(2πσ²)) e^(−(x²+y²)/(2σ²)) , −∞ < x, y < ∞.  (1)
Given
        R = √(X² + Y²)                                                        (2)
        φ = tan⁻¹(Y/X).                                                       (3)
The transformation (R, φ) is the polar form of (x, y), so define
        x = R cos φ, y = R sin φ.                                             (4)
Then
        g(R, φ) = f (x, y) |J|,                                               (5)
where J = ∂(x, y)/∂(R, φ) = (cos φ)(R cos φ) − (−R sin φ)(sin φ)
        = R cos²φ + R sin²φ = R, so |J| = R.
∴ g(R, φ) = (R/(2πσ²)) e^(−R²/(2σ²)) .
Range space: −∞ < x, y < ∞ ⇒ 0 ≤ R < ∞ and 0 ≤ φ ≤ 2π.
∴ P.d.f. of R :
        g(R) = ∫_0^{2π} (R/(2πσ²)) e^(−R²/(2σ²)) dφ
             = (R/(2πσ²)) e^(−R²/(2σ²)) [φ]_0^{2π}
             = (R/σ²) e^(−R²/(2σ²)) , R ≥ 0   (a Rayleigh distribution).
∴ P.d.f. of φ :
        g(φ) = ∫_0^{∞} (R/(2πσ²)) e^(−R²/(2σ²)) dR.
Put t = R²/(2σ²), so dt = (R/σ²) dR; then
        g(φ) = (1/(2π)) ∫_0^{∞} e^(−t) dt = (1/(2π)) [−e^(−t)]_0^{∞}
             = −(1/(2π)) [(0) − (1)]
∴ g(φ) = 1/(2π), 0 ≤ φ ≤ 2π   (a uniform distribution).
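A simulation sketch with an assumed value σ = 2 (any σ works) confirming that R
follows the derived Rayleigh law and that φ is uniform on (0, 2π):

```python
import random, math

# For X, Y ~ N(0, σ²), the Rayleigh c.d.f. is P(R <= r) = 1 - exp(-r²/(2σ²)),
# and the angle φ should have mean π when mapped into [0, 2π).

sigma, N = 2.0, 200_000
r_vals, phi_vals = [], []
for _ in range(N):
    x, y = random.gauss(0, sigma), random.gauss(0, sigma)
    r_vals.append(math.hypot(x, y))
    phi_vals.append(math.atan2(y, x) % (2 * math.pi))

for r in (1.0, 2.0, 4.0):
    sim = sum(v <= r for v in r_vals) / N
    print(f"P(R<={r}): simulated {sim:.3f}  theory {1 - math.exp(-r*r/(2*sigma**2)):.3f}")
print("mean of φ:", round(sum(phi_vals) / N, 3), " (theory π ≈ 3.142)")
```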

2.4.14 Part - B[Problems of Transformation of random variables](Anna University Questions)

1. If X and Y are independent variates uniformly distributed in (0, 1), find the
   joint p.d.f. and the marginal p.d.f. of XY and X/Y. [M/J 06, ECE]
   {For U = XY, V = Y : joint p.d.f. 1/v, marginal p.d.f. − log u, 0 < u < 1;
   for U = X/Y : marginal p.d.f. 1/2 for 0 < u < 1 and 1/(2u²) for 1 < u < ∞}
2. If the joint p.d.f. of (X, Y) is given by f (x, y) = x + y, 0 ≤ x, y ≤ 1, find
   the p.d.f. of U = XY. [N/D 06, ECE] [N/D 07, IT] { fU (u) = 2(1 − u)}
3. The random variables X and Y are statistically independent, having gamma
   distributions with parameters (m, 1/2) and (n, 1/2) respectively. Derive the
   probability density function of the random variable U = (X + Y)/X.
   [A/M 2008, ECE]
4. Let X and Y be independent random variables with common probability density
   function fX (x) = e^(−2x) , x > 0. Find the joint p.d.f. of U = X + Y and
   V = e^X . [A/M 2008, ECE]

2.4.15 Part - B[Problems of Central limit theorem(C.L.T.)]

# General format of CLT problems :   (1) ∼ N((2), (3)),    Z = [(1) − (2)] / √(3)

(a) Single variable :   X ∼ N(µ, σ²),              Z = (X − µ)/σ
(b) Sum = Total :       Sn = ΣX ∼ N(nµ, nσ²),      Z = (Sn − nµ)/(σ√n)
(c) Mean = Average :    X̄ = ΣX/n ∼ N(µ, σ²/n),     Z = (X̄ − µ)/(σ/√n)

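The helper functions below (the names are mine, not from the text) simply encode
the three standardisations above as a convenience sketch.

```python
import math

def z_single(x, mu, sigma):          # case (a): one observation
    return (x - mu) / sigma

def z_sum(s_n, n, mu, sigma):        # case (b): total of n observations
    return (s_n - n * mu) / (sigma * math.sqrt(n))

def z_mean(x_bar, n, mu, sigma):     # case (c): average of n observations
    return (x_bar - mu) / (sigma / math.sqrt(n))

# Example 2.24 below uses case (c): z ≈ 1.55
print(round(z_mean(1250, 60, 1200, 250), 2))
```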
Example 2.24 The life time of a certain brand of an electric bulb may be considered
a random variable with mean 1200 hrs and standard deviation 250 hrs. Find the
probability, using CLT, that the average life time of 60 bulbs exceeds 1250 hrs.

Solution : Let X = life time of a bulb,
        X̄ = average life time of 60 bulbs,
        n = number of bulbs = 60,
        σ = standard deviation = 250 hrs,
        µ = mean = 1200 hrs.
We have to find the probability that the average life time of 60 bulbs exceeds
1250 hrs, i.e. P(X̄ > 1250). So use CLT for the average of random variables:
        X̄ ∼ N(µ, σ²/n)  ⇒  Z = (X̄ − µ)/(σ/√n).
∴ P(X̄ > 1250) = P( (X̄ − µ)/(σ/√n) > (1250 − 1200)/(250/√60) )
              = P(z > 1.55)
              = 0.5 − P(0 < z < 1.55)
              = 0.5 − 0.4394
∴ P(X̄ > 1250) = 0.0606
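A check of this example using the standard normal c.d.f. written through
math.erf (a sketch; the value 0.0606 above comes from the printed normal table):

```python
import math

# P(Z > z) = 0.5 * (1 - erf(z / sqrt(2))) for a standard normal Z.
def upper_tail(z):
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

z = (1250 - 1200) / (250 / math.sqrt(60))
print(round(z, 3), round(upper_tail(z), 4))   # ≈ 1.549 and ≈ 0.0607 (table: 0.0606)
```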
Example 2.25 If Vi , i = 1, 2, · · · , 20 are independent noise voltages received in
an adder and V is the sum of the voltages received, find the probability that the
total incoming voltage exceeds 105, using C.L.T. Assume that each random variable
Vi is uniformly distributed over (0, 10).

Solution : Given Vi = voltage in the adder,
        V = total voltage = ΣVi = V1 + V2 + · · · + Vn ,
        n = number of voltages = 20.
We have to find the probability that the total voltage exceeds 105, i.e. P(V > 105).
So use CLT for the sum of random variables:
        V = Sn = ΣVi ∼ N(nµ, nσ²)  ⇒  Z = (Sn − nµ)/(σ√n).
To find µ and σ : Vi ∼ Uniform(a = 0, b = 10), so
        f (vi ) = 1/(b − a) = 1/10,
        µ = (a + b)/2 = 5,
        σ² = (b − a)²/12 = 100/12 ⇒ σ = √(100/12).
∴ P(V > 105) = P( (V − nµ)/(σ√n) > (105 − (20)(5))/(√(100/12) · √20) )
             = P(z > 5/12.91)
             = P(z > 0.3872)
             = 0.5 − 0.1480
∴ P(V > 105) = 0.3520
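A simulation sketch for this example (parameters as stated) comparing the
simulated tail probability with the CLT approximation:

```python
import random, math

# 20 independent U(0,10) voltages; estimate P(V > 105) directly and compare with
# the normal-approximation value obtained through math.erf.

N = 100_000
count = sum(sum(random.uniform(0, 10) for _ in range(20)) > 105 for _ in range(N))
print("simulated P(V > 105):", count / N)

z = (105 - 100) / math.sqrt(20 * 100 / 12)
print("CLT value:", round(0.5 * (1 - math.erf(z / math.sqrt(2))), 4))
```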
Example 2.26 An unbiased die is rolled 300 times. Find the probability that the
number of outcomes which are odd will be between 140 and 150.

Solution : Given an unbiased die is rolled 300 times.
        p = probability of an odd number in a single throw = 3/6 = 1/2,
        q = 1 − p = 1/2,
        X = number of odd outcomes, so X ∼ B(n, p) with n = 300,
        np = mean = 300 × 1/2 = 150 ⇒ µ = 150,
        npq = variance = 150 × 1/2 = 75 ⇒ σ² = 75.
We have to find the probability of getting odd numbers between 140 and 150,
i.e. P(140 < X < 150). By C.L.T., as n → ∞ the binomial distribution approaches
the normal distribution:
        X ∼ N(µ, σ²)  ⇒  z = (X − µ)/σ.
∴ P(140 < X < 150) = P( (140 − 150)/√75 < z < (150 − 150)/√75 )
                   = P(−1.15 < z < 0)
                   = P(0 < z < 1.15)
∴ P(140 < X < 150) = 0.3749
Example 2.27 A random sample of size 100 is taken from a population whose mean is
60 and variance is 400. Using CLT, find the probability that the mean of the sample
will not differ from µ = 60 by more than 4.

Solution : Given n = size of the sample = 100,
        µ = mean of the population = 60,
        σ² = variance of the population = 400 ⇒ σ = 20.
We have to find the probability that the sample mean will not differ from µ = 60
by more than 4, i.e. P(|X̄ − 60| ≤ 4). So use CLT for the average of random
variables:
        X̄ ∼ N(µ, σ²/n)  ⇒  Z = (X̄ − µ)/(σ/√n).
∴ P(|X̄ − 60| ≤ 4) = P(−4 ≤ X̄ − 60 ≤ 4)
                  = P(56 ≤ X̄ ≤ 64)
                  = P( (56 − 60)/(20/√100) ≤ z ≤ (64 − 60)/(20/√100) )
                  = P(−2 ≤ z ≤ 2)
                  = 2P(0 ≤ z ≤ 2)
                  = 2(0.4772)
∴ P(|X̄ − 60| ≤ 4) = 0.9544

2.4.16 Part - B[Problems of Central limit theorem(CLT)](Anna University Questions)


1. State and prove the central limit theorem for a sequence of independent and
   identically distributed random variables. [M/J 06, ECE]
2. The life time of a certain brand of a tube light may be considered as a random
   variable with mean 1200 hours and standard deviation 250 hours. Find the
   probability, using the central limit theorem, that the average life time of 60
   lights exceeds 1250 hours. {0.0606} [N/D 2006, ECE]
3. State and prove the Central Limit Theorem. [N/D 2007, ECE]
4. A random sample of size 100 is taken from a population whose mean is 60 and
   variance 400. Using C.L.T., with what probability can we assert that the mean
   of the sample will not differ from µ = 60 by more than 4? [A/M 2008, ECE]
5. The life time of a certain brand of an electric bulb may be considered a RV
   with mean 1200 h and standard deviation 250 h. Find the probability, using the
   central limit theorem, that the average lifetime of 60 bulbs exceeds 1250 h.
   [N/D 2008, ECE]
6. If 20 random numbers are selected independently from the interval (0, 1), the
   probability density function of each number being fX (x) = 1, 0 < x < 1, what
   is the approximate probability that the sum of these numbers is at least 8?
   [N/D 2008, ECE]

3 UNIT III - CLASSIFICATION OF RANDOM PROCESSES

Definition and examples - first order, second order, strictly stationary,
wide-sense stationary and ergodic processes - Markov process - Binomial,
Poisson and Normal processes - Sine wave process - Random telegraph process.

3.1 Definition and examples - first order, second order, strictly stationary, W.S.S. :

Introduction:
A deterministic signal (a voltage or current waveform) can be described by the
usual mathematical function with time ‘t’ as the independent variable. The
probabilistic model used for characterizing a random signal is called a random
process.

Definition :
A random process is a collection (or ensemble) of random variables {X(s, t)} that
are functions of a real variable, namely time t, where s ∈ S, the sample space,
and t ∈ T, the parameter set or index set.
(OR)
A family of random variables which are functions of time t is called a stochastic
process {X(t), t ∈ T}, 0 ≤ t < ∞.

1. If t ∈ T is continuous, then the random process is denoted by {X(t)}.
2. If t ∈ T is discrete, then the random process is denoted by {X(n)} or {Xn }.

3.1.1 Classification of random process

Based on the parameter set (index/time set) t ∈ T and the state (sample) space,
a random process falls into one of four classes:

1. Discrete parameter, discrete state (discrete random sequence), T = {1, 2, 3, · · · }.
   Example: Xn = outcome of the nth toss of a fair die, Xn ∈ {1, 2, 3, 4, 5, 6};
   its sample function x(t) is a step wave.
2. Discrete parameter, continuous state (continuous random sequence).
   Example: Xn = temperature at a point in a room at the end of the nth hour of a
   day, 1 ≤ n ≤ 24; its sample function is a non-step wave.
3. Continuous parameter, discrete state (discrete random process), 0 ≤ t < ∞.
   Example: X(t) = number of telephone calls received by a hospital in the time
   interval [0, t]; its sample function x(t) is a step wave.
4. Continuous parameter, continuous state (continuous random process).
   Example: X(t) = maximum temperature at a place in the interval [0, t]; its
   sample function is a non-step wave.

Example :
1. If a FM station is broadcasting a tone X(t) = 10 cos 20t. It is received
by a large number of receivers in given area whose distance from a
transmitter can be considered as a continuous random variable as the
amplitude and phase of the waveform received by any receiver will be
dependent on this distance, they are all random variables. Hence the
ensemble of the received waveforms can be represented by a random
process {X(t)} of the form X(t) = A cos(20t + θ), where A and θ will be
random variables representing amplitude and phase of the received signal.
2. If Xn denotes the presence or absence of a pulse at the end of the nth
time (say nth sec) in a digital communication system or digital data
processing system, then {Xn , n ≥ 1} is a random process.

n
3.2 Stationary random processes

3.2.1 First order stationary processes

A random process {X(t)} is called stationary to order 1 if its first order density
function does not change with a shift in the time origin:
        fX (x, t) = fX (x, t + ∆), ∀t and every real number ∆,
where F(x, t) = P{X(t) ≤ x} is the first order distribution function of the random
process X and f (x, t) = ∂/∂x [F(x, t)] is the first order density function.


3.2.2 Second order stationary processes

A random process {X(t)} is called stationary to order 2 if its second order density
function does not change with a shift in the time origin:
        f (x1 , x2 ; t1 , t2 ) = f (x1 , x2 ; t1 + ∆, t2 + ∆), ∀t1 , t2 and every real number ∆,
where F(x1 , x2 ; t1 , t2 ) = P{X(t1 ) ≤ x1 , X(t2 ) ≤ x2 } is the second order
distribution function and f (x1 , x2 ; t1 , t2 ) = ∂²/(∂x1 ∂x2 ) [F(x1 , x2 ; t1 , t2 )]
is the second order density function.

3.2.3 Strictly (strongly) sense stationary process (S.S.S.)

A random process X(t) is said to be S.S.S. if all its finite dimensional
distributions (density functions) are invariant under translation of the time
parameter, i.e.,
        f (x1 , x2 , ..., xn ; t1 , t2 , ..., tn ) = f (x1 , x2 , ..., xn ; t1 + ∆, t2 + ∆, ..., tn + ∆),
for all ti and every real number ∆.

3.2.4 Mean of the random process

The mean of the random process X(t) is denoted by µX (t) and, when the random
variable appearing in X(t) follows a continuous distribution, is computed as
        E[X(t)] = ∫ (R.P.) f [(R.V.)] d(R.V.),
the integral being taken between the lower and upper limits of that random
variable.
(OR)
Otherwise (for instance for a discrete-state process) E[X(t)] is evaluated using
series-expansion ideas.


3.2.5 Auto correlation of the random process

The auto correlation of the process {X(t)} is denoted by RXX (t, t + τ) (OR)
RXX (t1 , t2 ) and is defined as
        RXX (t, t + τ) = E[X(t).X(t + τ)]
(OR)
        RXX (t1 , t2 ) = E[X(t1 ).X(t2 )] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 f (x1 , x2 ; t1 , t2 ) dx1 dx2

3.2.6 Wide (weak) sense stationary process (W.S.S.)

A random process X(t) is said to be WSS if its mean is constant and its auto
correlation depends only on the time difference, i.e.,
(1) E[X(t)] = constant = µ
(2) RXX (t, t + τ) = E[X(t).X(t + τ)] = a function of τ (or) a function of the
    time difference (t2 − t1 ) (OR) (t1 − t2 ).

3.2.7 Evolutionary process

A random process X(t) that is not stationary in any sense is called an
evolutionary process.

3.2.8 Jointly WSS processes

Processes X(t) and Y(t) are said to be jointly WSS if
(1) each of X(t) and Y(t) is individually WSS, and
(2) the cross correlation RXY (t, t + τ) = E[X(t).Y(t + τ)] is a function of τ.

3.2.9 Auto covariance of the random process X(t)

        CXX (t, t + τ) = RXX (t, t + τ) − E[X(t)] . E[X(t + τ)]
or
        CXX (t1 , t2 ) = RXX (t1 , t2 ) − E[X(t1 )] . E[X(t2 )]

3.3 Ergodic random process

3.3.1 Time average of the random process X(t)

The time average of the random process X(t) over (−T, T) is denoted by XT and is
defined as
        XT = (1/2T) ∫_{−T}^{T} X(t) dt.

3.3.2 Definition [Ergodic random process]

A random process X(t) is called ergodic if its ensemble averages are equal to the
appropriate time averages.

3.3.3 Mean ergodic random process (based on the first order average)

Consider a random process X(t) with constant mean µ {i.e., E[X(t)] = µ} and time
average XT . Then X(t) is said to be mean ergodic (or ergodic in the mean) if XT
tends to µ as T → ∞, i.e.,
        lim_{T→∞} XT = lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t) dt = E[X(t)] = µ.

3.3.4 Correlation ergodic random process (based on the second order average)

A stationary process X(t) is said to be correlation ergodic (or ergodic in
correlation) if the process Y(t) = X(t).X(t + τ) is mean ergodic, i.e.,
        E[Y(t)] = lim_{T→∞} YT = lim_{T→∞} (1/2T) ∫_{−T}^{T} Y(t) dt,
where YT is the time average of Y(t).

Note :
Ergodic process ⊂ Stationary process ⊂ W.S.S. ⊂ Random process

3.4 Markov Process(Markovian process)

Definition :
ine
A random process X(t) is called Markov process, if for t0 < t1 < t2 <
· · · < tn < tn+1 , we have
ng

P[X(tn+1 ) ≤ xn+1 /X(tn ) ≤ xn , X(tn−1 ) ≤ xn−1 , ..., X(t0 ) ≤ x0 ] = P[X(tn+1 ) ≤


xn+1 /X(tn ) ≤ xn ]
E

i.e., in words,
arn

If the future behaviour of a process depends only on the present state,


but not on the past, then the process is called a Markov process.
Le

Note :
(1) Thus from the definition, the future state xn+1 of the process at
w.

time tn+1 {i.e., X(tn+1 ) = xn+1 }, depends only on the present state
xn of the process at tn {i.e., X(tn ) = xn } and not on the past values
x0 , x1 , · · · , xn−1 .
ww

(2) X(t0 ), X(t1 ), · · · , X(tn+1 ) are random variables


t0 , t1 , · · · , tn+1 are times
x0 , x1 , · · · , xn+1 are states of random process
Examples of Markov process:
(1) Probability of raining today depends only on the just
previous weather conditions existing for last two days and
not the earlier weather conditions.
(2) The market shares of three brands of the tooth pastes A, B, C
for a month depend only on their market shares in the
previous month and not of earlier months.

(3) Let random variable Sn represents the total number hits in first n
trials and defined by Sn = x1 + x2 + · · · + xn and possible values of
Sn are 0, 1, 2, · · · , n, then random variable
Sn+1 = x1 + x2 + · · · + xn + xn+1
= Sn + xn+1 , then
k + 1, if the (n + 1)st trial result is a hit.
(
Sn+1 =
k, if the (n + 1)st trial result is a miss.
Thus
P[Sn+1 = k + 1/Sn = k] = P[Xn+1 = 1] = p

n
P[Sn+1 = k + 1/Sn = k] = P[Xn+1 = 0] = q

g.i
These conditions probabilities depends only on the value of Sn
( i.e., the just previous value) and not on the values of random

n
variables S1 , S2 , · · · , Sn−1 (past values) as it does not matter how the
value Sn = k is reached.
eri
ine
3.4.1 Types of Markov process
ng

(1) Continuous random process + Markov property = continuous


parameter Markov process
E

(2) Continuous random sequence + Markov property = discrete


arn

parameter Markov process


(3) Discrete random process + Markov property = continuous
parameter Markov chain
Le

(4) Discrete random sequence + Markov property = discrete


parameter Markov chain
w.
ww

3.4.2 Markov chain

The random process {X(t)} satisfying Markov property where X(t)


takes only discrete values, whether t is discrete or continuous is
called Markov chain.

3.4.3 States of Markov chain and state space

The random {Xn }, n = 0, 1, 2, · · · where

P[Xn = an /Xn−1 = an−1 , Xn−2 = an−2 , X1 = a1 , X0 = a0 ]


= P[Xn = an /Xn−1 = an−1 ]

for all n is called a Markov chain.


Here a0 , a1 , · · · , an , · · · , ak , are called the states of Markov chain and
{a0 , a1 , · · · , an , · · · , ak } is the state space.
One step transition probability :
The conditional probability P[Xn = a j /Xn−1 = ai ] is called one step
transition probably from state ai to a j at the nth step(trial) and is denoted
by pi j (n − 1, n).
Homogeneous Markov chain :
If the one step transition probability does not depend on the step i.e.,

n
pij (n − 1, n)=pi j (m − 1, m), then Markov chain is called a homogeneous

g.i
Markov chain or the chain is said to have stationary (constant)
transition probabilities.

n
Transition probability matrix(t.p.m.) :

eri
When the Markov chain is homogeneous, the one step (stationary)
transition probability is denoted by pi j and the matrix of all pi j values
i.e., matrix P = {pi j } is called (one - step) t.p.m.
ine
Note :
The t.p.m. of a Markov chain is a stochastic matrix since
ng

(1)all pi j ≥ 0 and
(2) j pi j = 1, ∀i (i.e., sum of all elements in each row =1)
P
E

n step Transition probability :


arn

(n)
P[xn = a j /x0 = ai ] = pi j
Chapman - Kolmogorov theorem(for n step tpm) :
Le

If P is the t.p.m. of a homogeneous Markov chain, then the n-step


t.p.m. P(n) = Pn , where P is the (one–step) t.p.m.
w.

Probability distributions of a Markov chain :


If the probability that the process is in state ai is pi (where i = 1, 2, · · · , k)
at any arbitrary step then the row vector p = p1 , p2 , ..., pk is called the

ww

probability distribution of the process at that time (or step).


(n) (n)
P[xn = a1 ] = p1 , P[xn = a2 ] = p2 , · · ·
and
(n) (n) (n) (n) (n) (n)
p(n) = [p1 , p2 , ..., pk ] = {p1 , p2 , ..., pk }
= probability distributin of the Markov chain at nth step.
Probability distribution for the nth step :
(n) (n) (n)
If p(n) = {p1 , p2 , ..., pk } is (state) probability distribution of the process
at step (time) t = n, then probability distribution at the step (time)
t = n + 1 is p(n+1) = p(n) .P where P is t.p.m. of the chain.
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 179

3.4.4 Classification of states of a Markov chain

Irreducible :
A Markov chain is said to tbe irreducible if every state can be
(n)
reached from every other state, where Pi j > 0 for some n and for
all i and j.
The t.p.m. of an irreducible chain is an irreducible matrix.
Otherwise, the chain is reducible or non-reducible.

n
Return state :
(n)
If Pi j > 0, for some n > 1, then we call the state i of the Markov

g.i
chain as return state.

n
Period :
(m)
Let Pii > 0 for all m. Let i be a return state. Then we define the
period di as follows,
eri
di = GCD{m, Pm > 0},
ine
ii
where GCD = Greatest common divisor.
If di = 1, then state i is said to be aperiodic.
ng

If di > 1, then state i is said to be periodic with period di .


Regular matrix :
E

A stochastic matrix P is said to be a regular matrix, if all the entries


arn

of Pm are positive, where m is some ‘+’ integer.


Non null persistent :
The state i is said Non null persistent if its mean recurrence time
Le


µii = n fiin is finite and null persistent if µii → ∞.
P
n=1
w.

Ergodic state:
A non null persistent and a periodic state is called ergodic.
ww

3.4.5 Steady state probability of Markov chain / stationary distribution of Markov
chain / long run probability of Markov chain / limiting form of the state
probability distribution :

Let π = ( π1 π2 · · · πn ) be the steady state (row) vector; it satisfies
        π1 + π2 + ... + πn = 1  and  πP = π.
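A sketch (hypothetical t.p.m., not from the text) of how the steady state vector
can be obtained numerically from πP = π together with Σ πi = 1:

```python
import numpy as np

# Solve (P^T - I) pi^T = 0 together with the normalisation sum(pi) = 1 as an
# (over-determined but consistent) linear system.

P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
k = P.shape[0]

A = np.vstack([P.T - np.eye(k), np.ones(k)])
b = np.append(np.zeros(k), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady state:", pi)             # for this chain: [0.25, 0.5, 0.25]
print("check piP ≈ pi:", pi @ P)
```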

3.5 Binomial, Poisson and Normal processes

Bernoulli Process :
The Bernoulli random variable Xi is defined as
        X(ti ) = Xi = 1, if the ith trial is a success,
                    = 0, if the ith trial is a failure,
where the time index ti is such that i = · · · , −2, −1, 0, 1, 2, · · ·
and the state space is S = {0, 1}.
The ensemble {X(t)} of the r.v.s is called as a Bernoulli process.

n
Properties of Bernoulli Process :

g.i
(1) It is a discrete sequence.
(2) It is a SSS process.

n
h i
(3) E[Xi ] = p, E Xi2 = p and

eri
h i
Var[Xi ] = E Xi2 − [E (Xi )]2 = p − p2 = p 1 − p = pq


Binomial process :
ine
Let Sn be the number of successes in n Bernoulli trials.
i.e., Sn = X1 + X2 + · · · + Xn .
ng

Thus, Sn is a partial sum of Bernoulli variates and hence is a Binomial


variable. Then the Binomial process is defined as a sequence of these
partial sums.
E

i.e., Sn = X1 + X2 + · · · + Xn , n = 1, 2, 3, · · · .
arn

3.5.1 Properties of Binomial process


Le

(1) Binomial process is a Markov process :


w.

Proof: We have Sn = X1 + X2 + · · · + Xn−1 + Xn = Sn−1 + Xn


ww

∴ P[Sn = k/Sn−1 = k] =P[there are k number of successes in n trials given that


there are k number of successes in (n − 1) trials]
=P[Xn = 0] = 1 − P[Xn = 1] [∵ Xn is a Bernoulli trial]
∴ P[Sn = k/Sn−1 = k] =1 − p = q
Also P[Sn = k/Sn−1 = k − 1] = P[Xn = 1] = p
Thus, these conditional properties depend only on the just previous
value i.e., of Sn−1 and hence, by definition, the Binomial process
{Sn } is a Markov process.
(2) Mean and Variance of Binomial random variable :
Since Sn is a Binomial random variable,
        P [Sn = k] = nCk p^k (1 − p)^(n−k) = nCk p^k q^(n−k) , k = 0, 1, 2, · · · , n
E [Sn ] = np
Var [Sn ] = npq
(3) The Binomial distribution of the processes approaches the Poisson
distribution when n is very large and p is very small.

3.5.2 Poisson Process (Poisson counting process)

n
Poisson process is a continuous parameter (t) and discrete state X

g.i
process. This is practically a very useful random process.

Uses :

n
(1)To find number of incoming telephone calls received in particular
time.
eri
(2)The arrival of customers at a bank in a day.
ine
(3)The emission of an electron from the surface of a light - sensitive
material (photo detector)
ng

Mean of the Poisson process {X(t)} :

E [X (t)] = Σ_{x=0}^{∞} x P [X (t) = x]
          = Σ_{x=0}^{∞} x e^(−λt) (λt)^x / x!
          = e^(−λt) [ 1·(λt)/1! + 2·(λt)²/2! + 3·(λt)³/3! + · · · ]
          = e^(−λt) (λt) [ 1 + (λt)/1! + (λt)²/2! + · · · ]
          = e^(−λt) (λt) e^(λt)
          = λt

3.5.3 Variance of the Poisson process {X(t)}

Var [X (t)] = E[X²(t)] − {E[X (t)]}²

Now, E[X²(t)] = Σ_{x=0}^{∞} x² P [X (t) = x]
             = Σ_{x=0}^{∞} [x(x − 1) + x] e^(−λt) (λt)^x / x!
             = Σ_{x=2}^{∞} x(x − 1) e^(−λt) (λt)^x / x! + Σ_{x=0}^{∞} x e^(−λt) (λt)^x / x!
             = e^(−λt) (λt)² Σ_{x=2}^{∞} (λt)^(x−2) / (x − 2)! + λt
               (∵ mean of the Poisson process = λt)
             = e^(−λt) (λt)² e^(λt) + λt
             = (λt)² + λt

∴ Var [X (t)] = (λt)² + λt − (λt)² = λt

Note : Mean [X(t)] = Var[X(t)] = λt = parameter of Poisson process.
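A simulation sketch (assumed rate λ = 2 and horizon t = 3) illustrating that the
count of a Poisson process has mean = variance = λt:

```python
import random

# Generate arrivals by summing Exp(lam) inter-arrival times up to time t and
# count them; repeat many times and compare sample mean and variance with lam*t.

lam, t, N = 2.0, 3.0, 100_000
counts = []
for _ in range(N):
    total, n = 0.0, 0
    while True:
        total += random.expovariate(lam)
        if total > t:
            break
        n += 1
    counts.append(n)

mean = sum(counts) / N
var = sum((c - mean) ** 2 for c in counts) / N
print("mean ≈", round(mean, 3), " variance ≈", round(var, 3), " λt =", lam * t)
```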


Le
w.

3.5.4 Autocorrelation of the Poisson process

RXX (t1 , t2 ) = E[X(t1 )X(t2 )]
             = E{X(t1 ) [X(t2 ) − X(t1 ) + X(t1 )]}
             = E{X(t1 ) [X(t2 ) − X(t1 )]} + E[X²(t1 )]
             = E[X(t1 )] E[X(t2 ) − X(t1 )] + E[X²(t1 )]
               (∵ X(t) is a process of independent increments)
             = (λt1 ) λ (t2 − t1 ) + λt1 + λ²t1² = λ²t1 t2 − λ²t1² + λt1 + λ²t1²
             = λ²t1 t2 + λt1 , if t2 ≥ t1
             = λ²t1 t2 + λ [min (t1 , t2 )]

3.5.5 Autocovarince of Poisson process

CXX (t1 , t2 ) = RXX (t1 , t2 ) − E [X (t1 )] E [X (t2 )]


= λ2 t1 t2 + λt1 − (λt1 ) (λt2 )
= λt1 , i f t2 ≥ t1
∴ CXX (t1 , t2 ) = λ [min (t1 , t2 )]

3.5.6 Correlation coefficient of Poisson process

CXX (t1 , t2 )

n
rXX (t1 , t2 ) = p p

g.i
Var [X (t1 )] Var [X (t2 )]
λt1
= p , if t2 ≥ t1
(λt1 ) (λt2 )

n
r

eri
t1
= , i f t2 ≥ t1
t2
ine
3.5.7 M.G.F. of the Poisson process

MX(t) (u) = E[ e^(uX(t)) ]
          = Σ_{x=0}^{∞} e^(ux) P [X (t) = x]
          = Σ_{x=0}^{∞} e^(ux) e^(−λt) (λt)^x / x!
          = e^(−λt) Σ_{x=0}^{∞} [(λt) e^u]^x / x!
          = e^(−λt) e^((λt)e^u)
          = e^(λt(e^u − 1)) , where u is a parameter.

3.5.8 Random Telegraph process

A random telegraph process is a discrete random process X(t), which


satisfies the following:

1. X(t) takes only two values either 1 or −1 at any time 0 t0 .


1
2. X(0) = 1 or −1 with equal probability .
2
[Link]
[Link]
184 UNIT III - Classification of Random Processes

3. The number of occurrences N(t) from one value to another occurring


in any interval of length 0 t0 is a Poisson process with mean rate λ so
that the probability of exactly 0 x0 transitions is

P[N(t) = x] = e^(−λt) (λt)^x / x! , x = 0, 1, 2, · · ·


3.5.9 Part - A(Anna University Questions)

(1) Prove that a first order stationary random process has a constant
mean.[Nov/Dec 2006 ECE]
Solution : Consider a random process X(t) at two different instants t1
and t2 .

Z∞
E [X (t1 )] = x fX (x, t1 )dx
−∞

n
Z∞

g.i
E [X (t2 )] = x fX (x, t2 )dx
−∞

n
Let t2 = t1 + C

Z∞ eri
ine
E [X (t2 )] = x fX (x, t1 + C)dx
−∞
Z ∞
ng

=
 
x fX (x, t1 )dx ∵ X(t) is first order stationary
−∞
=E [X (t1 )]
E
arn

Since E [X (t2 )] = E [X (t1 )] , its value must be a constant.

(2) Define Binomial process. Give an example for its sample function.[N/D
Le

2006, ECE]
Solution : Binomial process can be defined as a sequence of partial
w.

sums.
{Sn /n = 1, 2, · · · } where Sn = X1 + X2 + · · · + Xn . Example: A sample
ww

function of the binomial random process with

(x1 , x2 , x3 , · · · ) = (1, 1, 0, 0, 1, 0, 1, · · · ) is
(S1 , S2 , S3 , · · · ) = (1, 2, 2, 2, 3, 3, 4, · · · )

The process increments by 1 only at the discrete times ti = iT, i = 1, 2, · · · .

(3) Prove that the difference of two independent Poisson processes is not a
Poisson process. [M/J 07, ECE]
Solution : Let {X1 (t)} & {X2 (t)} be two independent Poisson processes
with parameters λ1 and λ2 respectively. Let X(t) = X1 (t) − X2 (t). Then

E[X(t)] =E[X1 (t) − X2 (t)] = λ1 t − λ2 t = (λ1 − λ2 )t


E[X2 (t)] =E[X1 (t) − X2 (t)]2
h i
=E X1 (t) − 2X1 (t) X2 (t) + X2 (t)
2 2
h i h i
=E X12 (t) − 2E [X1 (t) X2 (t)] + E X22 (t)
h i h i
=E X1 (t) − 2E [X1 (t)] E [X2 (t)] + E X2 (t)
2 2

(∵ X1 (t) and X2 (t) are independent)


= λ1²t² + λ1 t − 2(λ1 t)(λ2 t) + λ2²t² + λ2 t
= (λ1 − λ2 )²t² + (λ1 + λ2 )t
≠ (λ1 − λ2 )²t² + (λ1 − λ2 )t
∴ X(t) = X1 (t) − X2 (t) is not a Poisson process.

n
(4) Define Strictly sense stationary (SSS) process. [M/J 07, ECE]

eri
Solution : A random process is called a strongly stationary process
or strict sense stationary process (abbreviated as SSS process), if all its
ine
finite dimensional distributions are invariant under translation of time
parameter.
i.e., time series {X(t)} is said to be strictly stationary is
ng

f (x1 , x2 , ...xn ; t1 , t2 , ..., tn ) = f (x1 , x2 , ...xn ; t1 + ∆, t2 + ∆, ..., tn + ∆), for any


t and any real number ∆.
E

(5) Prove that sum of two independent Poisson processes is also


arn

Poisson.(Additiveproperty) [N/D 07, ECE]


Solution : Let X(t) = X1 (t) + X2 (t).
Le

n
X
P [X (t) = n] = [P [X1 (t) = r]] [P [X2 (t) = n − r]]
r=0
w.

n
X e−λ1 t (λ1 t)r e−λ2 t (λ2 t)n−r
=
r! (n − r)!
ww

r=0
n
e−(λ1 +λ2 )t X n!
= (λ1 t)r (λ2 t)n−r
n! r=0
r!(n − r)!
n
e−(λ1 +λ2 )t X
= nCr (λ1 t)r (λ2 t)n−r
n! r=0
e−(λ1 +λ2 )t
= (λ1 t + λ2 t)n
n!
⇒ X (t) = X1 (t)+X2 (t) is a Poisson distribution with parameter λ1 +λ2
(6) Define wide - sense stationary process.
Solution : A random process X(t) is said to be wide sense stationary

if its mean is constant and its autocorrelation depends only on time


difference. i.e., (1) E[X(t)] = constant.
(2) RXX (t, t + τ) = E[X(t).X(t + τ)] = a function of τ = RXX (τ) = φ(τ).

(7) What is Markov process? [M/J 08, ECE]


Solution : A random process X(t) is called Markov process, if for
t0 < t1 < t2 < · · · < tn < tn+1 , we have

P[X(tn+1 ) ≤ xn+1 /X(tn ) ≤ xn , X(tn−1 ) ≤ xn−1 , · · · , X(t0 ) ≤ x0 ]


= P[X(tn+1 ) ≤ xn+1 /X(tn ) ≤ xn ]

n
g.i
i.e., in words,
If the future behaviour of a process depends only on the present state,

n
but not on the past, then the process is called a Markov process.

eri
(8) When is a stochastic process said to be ergodic? [M/J 08, ECE]
Solution : All the time averages are equal to the corresponding
ine
ensemble averages.

(9) What is mean by autocorrelation?


ng

Solution : Autocorrelation of the process {X(t)} is defined by RXX(t,


t+τ) = E[X(t)X(t + τ)] , τ is time difference.
E

(10) State the four types of stochastic processes.


arn

Solution : Refer (Classification of random process table)

(11) Define strict sense stationary process and wide sense stationary
Le

process.[N/D08, ECE] Solution: Refer (S.S.S. and W.S.S. definitions)

(12) Define Markov process. [N/D 08, ECE]


w.

Solution: (= 7th question)

(13) Prove that the sum of two independent Poisson processes is a Poisson
ww

process. [M/J 09, ECE]


Solution: (= 5th question)

(14) Define sine wave process. [M/J 09, ECE]


Solution : A sine wave random process is represented as
X(t) = Asin(ωt + φ), where A is amplitude, ω is frequency and φ is
phase, a combination of these three may be random.

(15) Define wide sense stationary process. Give an example. [M/J 09, ECE]
Solution : (= 6th question)
Example: If X(t) = sin(ωt + y), where y a uniform random variable in
(0, 2π), then X(t) is W.S.S.

(16) Determine
 whether the given matrix is irreducible or not
 0.3 0.7 0 
P =  0.1 0.4 0.5  [N/D 09, ECE]
 
0 0.2 0.8
 
Solution : In this Markov chain, as all states communicate with each
other, the chain is irreducible. For example, if we assume the three
states 0,1 and 2, in this chain, it is possible to go from state 0 to1 with
probability 0.7 and from state 1 to 2 with probability 0.5. So it is
possible to go from state 0 to 2. So the chain is irreducible and all the
states are recurrent.

n
g.i
(17) Show that the random process X(t) = Acos(ω0 t + θ) is not stationary, if
A and ω0 are constants and θ is uniformly distributed in (0, π). [N/D
09,EC]

n
Solution : E[X(t)] = − 2Aπ sin ω0 t = a function of t ⇒ X(t) is not
stationary.
eri
(18) Examine whether the Poisson process {X(t)} given by the law
ine
−λt (λt)r
P [X (t) = r] = e r! , r = 0, 1, 2, · · · is covariance stationary.[M/J 06,
CSE]
ng

Solution : E[X(t)] = λt , a constant.


∴ Poisson process is not a stationary process, as its statistical (mean,
E

autocorrelation,. . . ) are time dependent.


arn

(19) Define Markov process and a Markov chain. [M/J 06, CSE]
Solution : If for t1 < t2 < ··· < tn < t.
P [X (t) ≤ x/X(t1 ) = x1 , X(t2 ) = x2 , · · · , X(tn ) = xn ] =
Le

P [X (t) ≤ x/X(tn ) = xn ] , then the process {X(t)} is a Markov process. A


discrete parameter Markov process is called a Markov chain.
w.

(20) Define random process and its classification. [N/D 06, CSE]
Solution : Solution : Refer (Classification of random process table)
ww

(21) Let X bet the random variable which gives the inter arrival time (time
between successive arrivals), where the arrival process is a Poisson
process. What will be the distribution of X? How? [M/J 06, CSE]
Solution : X follows an exponential distribution.
P(X = t) = P(number of arrivals in time t) = e−λt . F(t) = P(X = t) =
1 − e−λt is the distribution function.
F(t) = λe−λt which is the p.d.f. of an exponential variate.

(22) Define strict sense stationary process and give an example.[M/J 06]
Solution: Refer(S.S.S. 4th question)
Example :

(1) Ergodic function (2) If X(t) = Acos(ω0 t + θ), where A and ω0 are
constants and θ is uniformly distributed r.v. in (0, 2π).

(23) A man tosses a fair coin until 3 heads occur in a row. Let Xn denotes
the longest string of heads ending at the nth trial (i.e.,) Xn = k, if at the
nth trial, the last tail occurred at the (n − k)th trial. Find the transition
probability matrix. [M/J06,CSE]
Solution : The state space = {0, 1, 2, 3}, since the coin is tossed until 3
heads occur in a row.

n
State o f Xn

g.i
0 1 2 3
 
0  1/2 1/2 0 0 

n
1  1/2 0 1/2 0
 
P = State o f Xn−1


eri
 
2  1/2 0 0 1/2 

3 0 0 0 1

ine
(24) Define strict sense and wide sense stationary process. [N/D 06, CSE]
Solution : Refer(4th and 15th questions)
ng

(25) The transition probability matrix of aMarkov chain


E

/4


 3/4 0 

{Xn }, n = 1, 2, 3, · · · with three state 0,1, and 2 is P = 
 
1/4 1/2 1/4
 
arn

 

 0 3/4 1/4 
 
with initial P( 0) = (1/3, 1/3, 1/3}. Find
P(x3 = 1, X2 = 2, X1 = 1, X0 = 2).[N/D 06, CSE]
Le
w.
ww

3.5.10 Part - B[Problems of S.S.S.]

Example 3.1 Let random process X(t) = cos(t + ϕ) where ϕ is a random variable
1 π π
with f (φ) = , − < φ < . Check whether density function stationary or not.
π 2 2

Solution : Given: X(t) = Random process = cos(t + ϕ).


π π
φ = a continuous Random variable, − <φ< .
2 2
1
f (φ) = P.d.f. of φ = .
π
Check X(t) is stationary or not:
[Link]
[Link]
190 UNIT III - Classification of Random Processes

Recall General Format of Mean of the Random process with limits :


Upper limit
R.V.Z

E[(R.P.)] = (R.P.) f [(R.V.)] d(R.V.)


[Link] limit

n
Z2
1

g.i
∴ E[X(t)] = cos(t + φ) dφ
π
π

n
2

1 sin(t + φ) 2
eri
"
= π
π 1 −
ine
 2  
1 π π
  
= sin t + − sin t −
π 2 2
ng

π ∵ sin(t + 90) = cos t


 " #
1
  
= [cos (t)] − − sin −t
π 2 sin(t − 90) = − sin(90 − t)
π
E

1
   
= [cos t] + sin −t
π 2
arn

1
= {cos t + cos (t)} (∵ sin(90 − t) = cos t)
π
1
Le

= [2 cos t]
π
= a function of t
w.
ww

∴ X(t) is not stationary.

Example 3.2 Show that X(t) = A cos(ωt + θ) is not stationary if A and ω are
constants and θ is uniformly distributed in (0, π).
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 191

Solution : Given: X(t) = Random process = A cos(ωt + θ),


where A and ω are constants
θ = a uniformly distributed r. v. defined in (0, π).
1
f (θ) = P.d.f. of θ = .
π
To show X(t) is not stationary ⇒ E[X(t)] is a function of t


1
∴ E[X(t)] = A cos(ωt + θ)

n

π

g.i
0
A
= [sin(ωt + θ)]π0
π

n
A

eri
= {[sin (ωt + π)] − [sin (ωt − 0)]}
π
A
= {[−sinωt − sin ωt]} ( ∵ sin(ωt + 180) = − sin ωt )
ine
π
−2A
= sin ωt
π
ng

= a function of t
E

∴ X(t) is not stationary.


arn
Le

Example 3.3 Let X(t) = cos(ωt+θ), where θ is uniformly distributed in (−π, π).
w.

Check whether X(t) is stationary or not? Also find first and second moments of
ww

process.

Solution : Given: X(t) = Random process = cos(ωt + θ),


where ω is a constant
θ = a uniformly distributed r. v. defined in (−π, π).
1
f (θ) = P.d.f. of θ = .

First moment of the Random process : E[X(t)] h i
2
Second moment of the Random process : E X (t)
[Link]
[Link]
192 UNIT III - Classification of Random Processes

Check X(t) is stationary or not:



1
∴ E[X(t)] = cos(ωt + θ) dθ

−π
1
= [sin(ωt + θ)]π−π

1
= {[sin (ωt + π)] − [sin (ωt − π)]}

∵ sin(ωt + 180) = − sin ωt
" #
1
= {[− sin ωt + sin(π − ωt)]}
sin(ωt − π) = − sin(π − ωt)

n

g.i
1
= [− sin ωt + sin ωt]

= 0 = a constant

n
= not a function of t
∴ X(t) is stationary.
eri
∴ First moment of the Random process X(t) = E[X(t)] = 0
ine
h i
II moment of the Random process : E X2 (t)
ng

h i h i
∴ E X (t) = E cos (ωt + θ)
2 2

1 + cos 2(ωt + θ)
E

" #
=E
2
arn

1 1
= E[1] + E [cos(2ωt + 2θ)]
2 2

Le

1 1 1
= + cos(2ωt + 2θ) dθ
2 2 2π
−π
w.

(∵ E[k] = k, where k is a constant)



1 sin(2ωt + 2θ)
"
1
ww

= +
2 4π 2 −π
1 1
= + [sin(2ωt + 2π) − sin(2ωt − 2π)]
2 8π
1 1
= + [sin(2π + 2ωt) + sin(2π − 2ωt)]
2 8π
(∵ sin(A − 2π) = − sin(2π − A), A = 2ωt)
1 1
= + [sin(2ωt) − sin(2ωt)]
2 8π
(∵ sin(2π + A) = sin(A), sin(2π − A) = − sin(A), A = 2ωt)
1 1
= +0=
2 2
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 193

h i 1
∴ Second moment of the Random process X(t) = E X2 (t) =
2
∴ X(t) is stationary w.r.t. to both I and II moment.h i
Note : Variance of the random process X(t) = E X2 (t) − {E[X(t)]}2
1 1
= − (0)2 =
2 2

3.5.11 Part - B[Problems of W.S.S.]

Example 3.4 Show that random process X(t) = A cos(ωt + θ) is W.S.S.(or

n
g.i
covariance stationary) if A and ω are constants θ is uniformly distributed
random variable in (0, 2π).

n
eri
Solution : Given: X(t) = Random process = A cos(ωt + θ),
where A and ω are constants
ine
θ = a uniformly distributed r. v. defined in (0, 2π).
1
f (θ) = P.d.f. of θ = .

ng

To show X(t) is Wide sense stationary, we have to prove the following

(i) Mean of X(t) = E[X(t)] = a constant (1)


E

(ii) Autocorrelation of X(t) = E[X(t).X(t + τ)] = a function of τ (2)


arn

(OR)
Autocorrelation of X(t) = E[X(t1 ).X(t2 )] = a function of time difference
= f (t2 − t1 ) (or) f (t1 − t2 )
Le

Now,
w.

Z2π
1
(1) ⇒ E[X(t)] = A cos(ωt + θ) dθ
ww


0
A
= [sin(ωt + θ)]2π
0

A
= {[sin (ωt + 2π)] − [sin (ωt − 0)]}

A
= {[sin ωt − sin ωt]} ( ∵ sin(ωt + 2π) = sin ωt )

−2A
= [0]
π
=0
= a constant
[Link]
[Link]
194 UNIT III - Classification of Random Processes

(2) ⇒ A.C. = RXX (t, t + τ)


= E[X(t) · X(t + τ)]
= E[A cos(ωt + θ) · A cos(ω(t + τ) + θ)]
A2 
= ωt + 
θ)−( ωt
 +ωτ+ θ)
 +cos [(ωt + θ)+(ωt+ωτ+θ)]
  
E cos ( 
2


(∵ cos A cos B = 12 [cos(A−B)+cos(A+B)])


A2

n
= E {cos [ωτ] + cos [2ωt + ωτ + 2θ]} (∵ cos(−ωt) = cos ωt)
2

g.i
A2 A2
= cos [ωτ] + E[cos(2ωt + ωτ + 2θ)] (3)
2 2

n
(∵ E[R.P. without r.v. ] = E[constant] = constant])

Consider E[cos(2ωt + ωτ + 2θ)]


eri
ine
Z2π
1
= A2 cos [2ωt + ωτ + 2θ] dθ

ng

2
Z2π
A
E

= cos [2ωt + ωτ + 2θ] dθ



arn

0
#2π
A2 sin (2ωt + ωτ + 2θ)
"
=
2π 2 0
Le

2
A
= [sin (2ωt + ωτ + 4π) − sin (2ωt + ωτ)]

w.

A2
= [sin (2ωt + ωτ) − sin (2ωt + ωτ)] (∵ sin(4π + A) = sin(A))

A2
ww

= [0]

=0

∴ (3) ⇒ RXX (t, t + τ) = A2 cos ωτ = a function of τ.


∴ (1) and (2) are proved.
∴ X(t) is WSS.

Example 3.5 Consider X(t) = sin(ωt + θ) where θ is uniformly distributed in


(0, 2π). Show that X(t) is WSS.
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 195

Solution : Given: X(t) = Random process = sin(ωt + θ),


where θ is a uniformly distributed r. v. defined in (
1
f (θ) = P.d.f. of θ = .

To show X(t) is Wide sense stationary, we have to prove the following

⇒ Mean of X(t) = E[X(t)] = a constant (1)


Autocorrelation of X(t) = E[X(t).X(t + τ)] = a function of τ (2)
(OR)

n
Autocorrelation of X(t) = E[X(t1 ).X(t2 )] = a function of time difference

g.i
= f (t2 − t1 ) (or) f (t1 − t2 )

n
Now,

(1) ⇒ E[X(t)] =
Z2π
eri
sin(ωt + θ)
1


ine
0
1
= [− cos(ωt + θ)]2π
0

ng

1
= − {[cos (ωt + 2π)] − [cos (ωt − 0)]}

E

−1
= {[cos ωt − cos ωt]} ( ∵ cos(ωt + 2π) = cos ωt )
arn


−1
= [0]

=0
Le

= a constant
w.
ww

(2) ⇒ A.C. = RXX (t, t + τ)


= E[X(t) · X(t + τ)]
= E[A sin(ωt + θ) · sin(ω(t + τ) + θ)]
1 
= E cos ( ωt + θ)−( ωt
 +ωτ+ θ)
 −cos [(ωt + θ)+(ωt+ωτ+θ)]
  

2


(∵ sin A sin B = 12 [cos(A−B)−cos(A+B)])


1
= E {cos [ωτ] + cos [2ωt + ωτ + 2θ]} (∵ cos(−ωt) = cos ωt)
2
1 1
= cos [ωτ] + E[cos(2ωt + ωτ + 2θ)] (3)
2 2
(∵ E[R.P. without r.v. ] = E[constant] = constant])
[Link]
[Link]
196 UNIT III - Classification of Random Processes

Consider E[cos(2ωt + ωτ + 2θ)]


Z2π
1
= A2 cos [2ωt + ωτ + 2θ] dθ

0
Z2π
A2
= cos [2ωt + ωτ + 2θ] dθ

0
#2π
A2 sin (2ωt + ωτ + 2θ)
"
=
2π 2

n
0
2
A

g.i
= [sin (2ωt + ωτ + 4π) − sin (2ωt + ωτ)]

A2

n
= [sin (2ωt + ωτ) − sin (2ωt + ωτ)] (∵ sin(4π + A) = sin(A))

=
=0
A2

[0]
eri
ine
1
∴ (3) ⇒ RXX (t, t + τ) = cos ωτ = a function of τ.
2
ng

∴ (1) and (2) are proved.


∴ X(t) is WSS.
E

Example 3.6 If X(t) is WSS process with auto correlation R(τ) = Ae−α|τ| . Find
arn

second order moment of the random variable X(8) − X(5).


Solution : Given: X(t) = WSS Random process
Le

R(τ) = Autocorrelation of X(t) = E[X(t) · X(t + τ)]


= Ae−α|τ| , τ = time difference.
w.

Second order moment of the r.v. X(8) − X(5) :


n o
II order moment = E [X(8) − X(5)]2
ww

h i
= E X (8) − 2X(8) · X(5) + X (5)
2 2

= E[X(8) · X(8)] + E[X(5) · X(5)] − 2E[X(8) · X(5)]


= Ae−α|8−8| + Ae−α|5−5| − 2Ae−α|8−5|
= A + A − 2Ae−3α
= 2A − 2Ae−3α
h i
= 2A 1 − e−3α

Example 3.7 A random variable Y with characteristic function φ(ω) = E(eiωY )


and X(t) = cos(λt + Y). Show that X(t) is WSS, if φ(1) = φ(2) = 0.
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 197

Solution : Given: X(t) = random process = cos(λt + Y)


Y = a random variable
[a r.v. without range, so use expansion ideas]
φ(ω) = characteristic function of Y = φY (ω)
h i
= E eiωY
= E [cos ωY + i sin ωY]
= E [cos ωY] + iE [sin ωY]

n
g.i
Also given φ(1) = 0
⇒ E [cos Y] + iE [sin Y] = 0

n
⇒ E [cos Y] = 0, E [sin Y] = 0 (1)

eri
and also φ(2) = 0
E [cos 2Y] + iE [sin 2Y] = 0
ine
⇒ E [cos 2Y] = 0, E [sin 2Y] = 0 (2)
ng

To show X(t) is WSS, we have to prove


E

(i)E[X(t)] = a constant
(ii)RXX (t, t + τ) = a function of τ
arn

(OR)
RXX (t1 , t2 ) = a function of time difference
Le

Now (i)
w.

E[X(t)] = E[cos(λt + Y)]


ww

= E[cos λt cos Y − sin λt sin Y]


= cos λtE[cos Y] − sin λtE[sin Y]
= cos λt(0) − sin λt(0) [∵ E[cos Y] = 0, E[sin Y] = 0]
=0

Now (ii)

RXX (t1 , t2 ) = E [X (t1 ) · X (t2 )]

[Link]
[Link]
198 UNIT III - Classification of Random Processes

= E [cos (λt1 + Y) · cos (λt2 + Y)]


1 
= E cos (λt1 + Y  − λt2 − Y) + cos (λt1 + Y + λt2 + Y)

 
2
1 1
= E [cos λ (t1 − t2 )] + E [cos λ (t1 + t2 + 2Y)]
2 2
1 1
= cos λ (t1 − t2 ) + E [cos λ (t1 + t2 ) cos 2Y − sin λ (t1 + t2 ) sin 2Y]
2 2
1
= cos λ (t1 − t2 )
2
1
+ [cos λ (t1 + t2 ) E[cos 2Y] − sin λ (t1 + t2 ) E[sin 2Y]]

n
2

g.i
1
= cos λ (t1 − t2 )
2 " #
1 0 0

n
+ cos λ (t1 + t2 )  − sin λ (t1 + t2 ) 
:
 :

E[cos E[sin
 2Y]  2Y]
2
1
= cos λ (t1 − t2 )
2 eri
ine
1
= cos λτ (where τ = t1 − t2 or τ = t2 − t1 )
2
∴ X(t) is WSS.
ng

Example 3.8 Let random process X(t) = B cos(50t + φ) where B and φ are
E
arn

independent random variables. B is a random variable with mean ‘0’ and variance
‘1’ and φ is uniformly distributed in interval (−π, π). Find mean and auto
Le

correlation of process.

Solution : Given: X(t) = random process = B cos(50t + φ)


w.

B = a random variable with


Mean of B = 0 ⇒ E[B] = 0
ww

" #2
h i 0 h i
Variance of B = 1 ⇒ E B2 −  E(B) = 1 ⇒ E B2 = 1
*



φ = a r.v. uniformly distributed in (−π, π)


1
f (φ) =

Mean of the R.P. X(t) :

E[X(t)] = E[B cos(50t + φ)]


*0
=  E[cos(50t + φ)]
E[B]


=0
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 199

Autocorrelation of the R.P. X(t) :

E[X(t) · X(t + τ)]


= E[B cos(50t + φ) · B cos(50(t + τ) + φ)]
1 h   i
= E B2 · cos  50t

 +φ−
50t

 − 50τ − φ + cos(50t + φ + 50t + 50τ + φ)
2  
1 h i
= E B2 · E[cos (−50τ) + cos(100t + 50τ + 2φ)]
2
1
= 1 · E[cos (−50τ)] + E[cos(100t + 50τ + 2φ)]
2

n
1
= cos (50τ) + 0 (∵ cos(−ωτ) = cos(ωτ))

g.i
2
1
= cos (50τ)
2

n
eri
Note : In the previous problem, X(t) is WSS.
ine
Example 3.9 Show that the process X(t) = A cos λt + B sin λt is WSS, where A
ng

and B are random variables if

(a) E(A) = E(B) = 0 (b) E(A2 ) = E(B2 ) (c)E(AB) = 0.


E
arn

Solution : Given: X(t) = random process = A cos λt + B sin λt


where A&B are random variable
(a) ⇒ E(A) = E(B) = 0
Le

    
(b) ⇒ E A = E B = σ , say
2 2 2

(c) ⇒ E(AB) = 0
w.

To show X(t) is WSS, we have to prove


ww

(i)E[X(t)] = a constant
(ii)RXX (t, t + τ) = a function of τ
(OR)
RXX (t1 , t2 ) = a function of time difference

Now (i)

E[X(t)] = E[A cos λt + B sin λt]


= E[A] cos λt + E[B] sin λt
= (0) cos λt + (0) sin λt (∵ by (a))
=0
[Link]
[Link]
200 UNIT III - Classification of Random Processes

Now (ii)

RXX (t1 , t2 )
= E [X (t1 ) · X (t2 )]
= E [(A cos λt1 + B sin λt1 ) · (A cos λt2 + B sin λt2 )]
h i
= E A cos λt1 cos λt2 +AB cos λt1 sin λt2 +AB cos λt2 sin λt1 +B sin λt1 sin λt2
2 2
h i
= E A2 cos λt1 cos λt2 + 0 + 0 + B2 sin λt1 sin λt2 (∵ by (c))
   
= E A2 cos λt1 cos λt2 + E B2 sin λt1 sin λt2

n
   
= E A2 cos λt1 cos λt2 + E A2 sin λt1 sin λt2 (∵ by (b))

g.i
= σ2 cos λt1 cos λt2 + σ2 sin λt1 sin λt2 (∵ by (b))
= σ2 cos λ (t1 − t2 ) (∵ cos(A − B) = cos A cos B + sin A sin B)

n
eri
(or)
= σ2 cos λτ
= a function of time difference
ine
∴ X(t) is WSS.
E ng

Example 3.10 The process X(t) whose probability distribution under certain
arn

conditions is given by
Le

(at)n−1

 (1 + at)n+1 , n = 1, 2, · · ·



P [X(t) = n] = 

w.

 at
, n = 0.


1 + at

ww

Show that it is not stationary (or evolutionary).

(at)n−1

 (1 + at)n+1 , n = 1, 2, · · ·



Solution : Given: P [X(t) = n] = 

(1)
 at
, n = 0.


1 + at

where X(t) = a discrete random process
To show X(t) is not stationary, we have to show

E [Xr (t)] = a function of t, r = 1, 2, · · · (2)


[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 201

When r=1
n→∞
X
E[X(t)] = nPn (t)
n=0
n→∞
X
= [nPn (t)]n=0 + nPn (t) (∵ by (1))
n=1
n→∞
at (at)n−1
  X
= 0· + n
1 + at (1 + at)n+1
n=1

n
1 (at) (at)2
=0+1· + 2 · + 3 ·

g.i
(1 + at)2 (1 + at)3 (1 + at)4
at 2
" #
1 at
  
= 1+2 +3

n
(1 + at)2 1 + at 1 + at

eri
−2
1 at
 
= 1 − (∵ 1 + 2x + 3x2 + · · · = (1 − x)−2 )
(1 + at)2 1 + at
ine
#−2
1 + at − at
"
1
=
(1 + at)2 1 + at
1 1 −2
 
ng

=
(1 + at)2 1 + at
1
= [1 + at]2
E

(1 + at)2
arn

=1 (3)
= a constant
Le

⇒ X(t) is stationary process. h i


But we have to show X(t) is not stationary, for that find E X2
w.

i.e., substitute r = 2 in (2)


ww

h i n→∞
X
E X (t) =
2
n2 Pn (t)
n=0
h i n→∞
X
= n Pn (t)
2
+ n2 Pn (t) (∵ by (1))
n=0
n=1
n→∞
X
=0+ [n(n + 1) − n]Pn (t)
n=1
n→∞
X n→∞
X
= [n(n + 1)]Pn (t) − [n]Pn (t)
n=1 n=1

[Link]
[Link]
202 UNIT III - Classification of Random Processes

n→∞
(at)n−1
X " #
= [n(n + 1)] −1 (∵ by (3))
n=1
(1 + at)n+1

1 (at) (at)2
= 1(2) + 2(3) + 3(4) −1
(1 + at)2 (1 + at)3 (1 + at)4
at 2
" #
2 at
  
= 1+3 +6 −1
(1 + at)2 1 + at 1 + at
−3
2 at
 
= 1− −1 (∵ 1 + 3x + 6x2 + · · · = (1 − x)−3 )
(1 + at)2 1 + at
#−3

n
1 + at − at
"
2
= −1

g.i
(1 + at)2 1 + at
2 1 −3
 
= −1
(1 + at)2 1 + at

n
2
=
(1 + at)2
[1 + at]3 − 1

= 2(1 + at) − 1 = 2 + 2at − 1 eri


ine
= 1 + 2at
= a function of t
ng

⇒ X(t) is not stationary.


h i
Note : Var[X(t)] = E X2 (t) − {E[X(t)]}2
E

= 1 + 2at − (1)2
arn

= 2at

Example 3.11 Assume a random process


Le

X(t, s1 ) = cos t, X(t, s2 ) = − cos t, X(t, s3 ) = sin t, X(t, s4 ) = − sin t,


w.

which are equally likely events. Show that it is WSS.


ww

Solution : Given
 that 4 equally likely discrete random processes  i.e.,
1
= =
 

 X (t, s1 ) cos t ⇒ P [X (t, s 1 )] 

4

 


 

 1 
X s2 = − cos t ⇒ P [X s2 =
 
(t, ) (t, )]

 

 

 4 

(1)

 1 

X (t, s ) = sin t ⇒ P [X (t, s )] =

 

 3 3 


 4 


1

 

= =
 

 X (t, s4 ) − sin t ⇒ P [X (t, s 4 )] 

4
 
To show that X(t) is WSS, we have to show
(i)E[X(t)] = a constant
(ii)RXX (t, t + τ) = a function of τ
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 203

Now (i)
i=4
X
E[X(t)] = X (t, si ) P [X (t, si )]
i=1
= X (t, s1 ) P [X (t, s1 )] + X (t, s2 ) P [X (t, s2 )]
+ X (t, s3 ) P [X (t, s3 )] + X (t, s4 ) P [X (t, s4 )]
1 1 1 1
       
= cos + sin
 

 4
− cos 
 4
 − sin  (∵ by (1))
  

4 

4
= 0 = a constant (2)

n
Now (ii)

g.i
RXX (t, t + τ)

n
= E[X(t) · E(t + τ)]

eri
= E[X (t, si ) · X (t + τ, si )]
Xi=4
= [X (t, si ) · X (t + τ, si )]P [X (t, si )]
ine
i=1
= [X (t, s1 )·X (t+τ, s1 )]P [X (t, s1 )]+[X (t, s2 )·X (t+τ, s2 )]P [X (t, s2 )]
ng

+[X (t, s3 )·X (t+τ, s3 )]P [X (t, s3 )]+[X (t, s4 )·X (t+τ, s4 )]P [X (t, s4 )]
1 1
   
= (cos t)[cos(t + τ)] + (− cos t)[− cos(t + τ)]
E

4  4 
1 1
arn

+ (sin t)[sin(t + τ)] + (− sin t)[− sin(t + τ)]


4 4
1
= [2 cos t cos(t + τ) + 2 sin t sin(t + τ)] (∵ by (1))
4
Le

2
= [cos t cos(t + τ) + sin t sin(t + τ)]
4
w.

1
= [cos(t − (t + τ))] (∵ cos(A − B) = cos A cos B + sin A sin B)
2
1
ww

= [cos(−τ)]
2
1
= [cos τ]
2
= a function of τ (3)

∴ From (2) and (3) ⇒ X(t) is WSS.

3.5.12 Part - B[Problems of Jointly WSS process]

Example 3.12 Two random processes X(t) and Y(t) are defined by
X(t) = A cos λt + B sin λt, Y(t) = B cos λt − A sin λt, where A and B are random
[Link]
[Link]
204 UNIT III - Classification of Random Processes

variables and λ is a constant. If A and B are uncorrelated with zero means and
same variances. Prove that X(t) and Y(t) are jointly WSS.

Solution : Given: X(t) = a random process = A cos λt + B sin λt


Y(t) = a random process = B cos λt − A sin λt
where λ = a constant
Given A and B are random variables with zero means i.e.,
Mean of A = 0 ⇒ E[A] = 0
( )
(1)
Mean of B = 0 ⇒ E[B] = 0

n
And given A and B are random variables with same variances i.e.,

g.i
Variance(A) = Variance(B)

n
h i h i
E A − [E(A)] = E B2 − [E(B)]2
2 2
h i h i
E A2 − 0 = E B 2 − 0
eri (∵ by (1))
ine
h i h i 
E A = E B = σ , say
2 2 2
(2)

Also given A and B are uncorrelated random variables


ng

i.e., Covariance(A, B) = 0 ⇒ Cov(A, B) = 0


E

E[AB] − E[A] · E[B] = 0


arn

E[AB] = E[A] · E[B] = 0 · 0 (∵ by (1))


E[AB] = 0 (3)
Le

To show X(t) is jointly WSS, we have to prove


w.

(i) X(t) is WSS ⇒ {(a) E[X(t)] = a constant, (b) RXX (t1 , t2 ) = a fn. of τ}
(ii) Y(t) is WSS ⇒ {(a) E[Y(t)] = a constant, (b) RYY (t1 , t2 ) = a fn. of τ}
ww

(iii) RXY (t, t + τ) = a function of τ


(OR)
RXY (t1 , t2 ) = a function of time difference

Now (i)(a)

E[X(t)] = E[A cos λt + B sin λt]


= E[A] cos λt + E[B] sin λt
= (0) cos λt + (0) sin λt (∵ by (1))
=0
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 205

Now (i)(b)

RXX (t1 , t2 )
= E [X (t1 ) · X (t2 )]
= E [(A cos λt1 + B sin λt1 ) · (A cos λt2 + B sin λt2 )]
h i
= E A2 cos λt1 cos λt2 +ABcos λt1 sin λt2 +ABcos λt2 sin λt1 +B2 sin λt1 sin λt2
h i
= E A cos λt1 + 0 + 0 + B sin λt1 sin λt2
2 2
(∵ by (3))
   
= E A2 cos λt1 + E B2 sin λt1 sin λt2

n
g.i
   
= E A cos λt1 + E A2 sin λt1 sin λt2
2
(∵ by (2))
= σ2 cos λt1 + σ2 sin λt1 sin λt2 (∵ by (2))

n
= σ2 cos λ (t1 − t2 ) (∵ cos(A − B) = cos A cos B + sin A sin B)
(or)
= σ2 cos λτ eri (∵ τ = t1 − t2 or ∵ τ = t2 − t1 )
ine
= a function of time difference
ng

∴ X(t) is WSS.
Similarly, we can prove (ii)(a) and (ii)(b)
E

∴ Y(t) is also WSS.


arn

Now (iii)
Le

RXY (t1 , t2 )
= E [X (t1 ) · Y (t2 )]
= E [(A cos λt1 + B sin λt1 ) · (B cos λt2 − A sin λt2 )]
w.

h i
= E ABcos λt1 cos λt2 −A2 cos λt1 sin λt2 +B2 sin λt1 cos λt2 −ABsin λt1 sin λt2
ww

h i
= E 0−A cos λt1 sin λt2 +B sin λt1 cos λt2 − 0
2 2
(∵ by (1))
h i h i
= E −A2 cos λt1 sin λt2 +E B2 sin λt1 cos λt2
h i h i
= −E A cos λt1 sin λt2 +E B2 sin λt1 cos λt2
2

= −σ2 cos λt1 sin λt2 +σ2 sin λt1 cos λt2 (∵ by (2))
= −σ2 [cos λt1 sin λt2 −sin λt1 cos λt2 ]
= −σ2 [sin λ (t2 − t1 )] (∵ sin(A − B) = sin A cos B + cos A sin B)
(or)
= −σ2 sin λτ
= a function of time difference
[Link]
[Link]
206 UNIT III - Classification of Random Processes

3.5.13 Part - B[Problems of Auto covariance of Random process]

Example 3.13 Let random process X(t) = A cos ωt + B sin ωt where A and B are
 
independent normally distributed random variables N 0, σ . Show that X(t) is
2

WSS. Also find auto covariance.

 
Solution : Given X ∼ N 0, σ2

n
⇒ Mean = 0 ⇒ E[X(t)] = 0 (1)

g.i
h i h i
Variance = σ ⇒ E X (t) − {E[X(t)]} = σ ⇒ E X (t) = σ2
2 2 2 2 2
(2)

n
eri
To show X(t) is WSS :

E [X (t1 )] = E [A cos ωt1 + B sin ωt1 ] = 0 (· · · (3), Refer Example 3.9)


ine
RXX (t1 , t2 ) = σ2 cos ωτ (· · · (4),Refer Example 3.9)
⇒ X(t) = is WSS. (Refer Example 3.9)
E ng

Auto covariance of X(t) :


arn

CXX (t1 , t2 ) = RXX (t1 , t2 ) − E [X (t)] E [X (t2 )]


= σ cos ωτ − 0 (∵ from (3) and (4))
Le

= σ cos ωτ
w.

3.5.14 Part - B (Problems of Ergodic process)


ww

Example 3.14 Show that the random process X(t) = cos(t + φ) where φ is a
random variable uniformly distributed in (0, 2π) is

(a) first order stationary

(b) stationary in wide sense

 
(c) ergodic based on first order and 2nd order arguments .
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 207

Solution : Given: X(t) = random process = cos(t + φ)


φ = a uniform random variable in (0, 2π)
1
f (φ) = p.d.f. of φ =

(a) First order stationary : (Ensemble average in I order)
φupper limit
Z
E[X(t)] = X(t) f (φ)dφ
φlower limit
Z2π

n
1
= cos(t + φ) dφ

g.i

0
1 h i2π

n
= sin(t + φ)
2π 0

eri
1
= [sin(t + 2π) − sin(t + 0)]

1
ine
= [sin(t) − sin(t)]

= 0 = a constant (1)
ng

⇒ X(t) is I order stationary.


(b) Stationary in wide sense : (Ensemble average in II order)
E

E[X(t) · X(t + τ)] = E[cos(t + φ) · cos(t + τ + φ)]


arn

1
= E[cos(t + φ − t − τ − φ)
 + cos(t + φ + t + τ + φ)]
2  
1
= E[cos(−τ) + cos(τ + 2t + 2φ)]
Le

2
1 1
= E[cos(−τ)] + E[cos(τ + 2t + 2φ)]
w.

2 2
Z2π
1 1 1
= cos(−τ) + cos(2t + τ + 2φ) dφ
ww

2 2 2π
0
#2π
1 sin(2t + τ + 2φ)
"
1
= cos(τ) +
2 4π 2 0
1 1 h i 2π
= cos(τ) + sin(2t + τ + 2φ)
2 8π 0
1 1
= cos(τ) + [sin(2t + +τ + 4π) − sin(2t + τ + 0)]
2 8π
1 1
= cos(τ) + [sin(2t + τ) − sin(2t + τ)]
2 8π
1
= cos(τ) = a function of τ (2)
2
[Link]
[Link]
208 UNIT III - Classification of Random Processes

⇒ X(t) is Stationary in wide sense.


∴ From above (a) and (b) ⇒ X(t) is WSS.
(c) I order :
To show X(t) is ergodic (based on first order), we have to show

E[X(t)] = lim_{T→∞} X̄_T                                              (3)

Now, LHS of (3):

E[X(t)] = 0                                                           (∵ by (1))

Now, RHS of (3):

lim_{T→∞} X̄_T = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} cos(t + φ) dt    (∵ by the definition of time average)

   = lim_{T→∞} (1/(2T)) [sin(t + φ)]_{−T}^{T}

   = lim_{T→∞} (1/(2T)) [sin(T + φ) − sin(−T + φ)]

   = lim_{T→∞} (1/(2T)) [sin(T + φ) + sin(T − φ)]

   = lim_{T→∞} (1/(2T)) · 2 sin T cos φ

   = 0                         (∵ the numerator is bounded and the denominator → ∞)

∴ E[X(t)] = lim_{T→∞} X̄_T ⇒ X(t) is ergodic based on first order averages (mean ergodic).
arn

(c) II order :
To show X(t) is ergodic (based on second order), we have to show

RXX(t, t + τ) = lim_{T→∞} Ȳ_T,   where Y(t) = X(t) · X(t + τ)        (4)

Now, LHS of (4):

RXX(t, t + τ) = E[X(t) X(t + τ)] = (1/2) cos τ                        (∵ by (2))

Now, RHS of (4):

lim_{T→∞} Ȳ_T = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} cos(t + φ) cos(t + τ + φ) dt

   = lim_{T→∞} (1/(4T)) ∫_{−T}^{T} [cos τ + cos(2t + τ + 2φ)] dt

   = (1/2) cos τ + lim_{T→∞} (1/(8T)) [sin(2T + τ + 2φ) − sin(−2T + τ + 2φ)]

   = (1/2) cos τ               (∵ the bracketed term is bounded and the denominator → ∞)

∴ LHS = RHS of (4) ⇒ X(t) is ergodic based on second order averages (correlation ergodic).

Hence, from (a), (b) and (c), X(t) is ergodic.
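The ergodicity argument can also be checked numerically. The sketch below (illustrative only; the fixed phase, the time grid and the value of τ are arbitrary choices) computes the time averages over one long realisation of cos(t + φ) and compares them with the ensemble averages 0 and (1/2) cos τ obtained above.

import numpy as np

rng = np.random.default_rng(1)
phi = rng.uniform(0.0, 2 * np.pi)          # one fixed phase = one realisation of the process
t = np.linspace(-5000.0, 5000.0, 2_000_000)
tau = 0.8

time_mean = np.mean(np.cos(t + phi))                              # time average of X(t)
time_corr = np.mean(np.cos(t + phi) * np.cos(t + tau + phi))      # time average of X(t)X(t+tau)

print(time_mean, 0.0)                     # should be close to the ensemble mean 0
print(time_corr, 0.5 * np.cos(tau))       # should be close to (1/2) cos(tau)

Because the time averages computed from a single realisation agree with the ensemble averages, the numerical experiment is consistent with the mean and correlation ergodicity shown above.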

Example 3.15 Determine whether X(t) = A sin t + B cos t is ergodic, if A and B are
normally distributed random variables with zero means and unit variances.

Solution : Given X(t) = A sin t + B cos t

A, B = normally distributed r.v.s with zero means and unit variances

i.e., E[A] = E[B] = 0

and Var[A] = Var[B] = 1 ⇒ E[A²] − {E[A]}² = E[B²] − {E[B]}² = 1

⇒ E[A²] = E[B²] = 1

To check ergodicity (based on first order), verify whether

E[X(t)] = lim_{T→∞} X̄_T                                              (1)

Now, LHS of (1) :

E[X(t)] = E[A sin t + B cos t]
        = E[A] sin t + E[B] cos t
        = 0 + 0
        = 0 = µ = mean of X(t)                                        (2)

Now, RHS of (1) :

lim_{T→∞} X̄_T = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} (A sin t + B cos t) dt   (∵ by the definition of time average)

   = lim_{T→∞} (1/(2T)) [−A cos t + B sin t]_{−T}^{T}

   = lim_{T→∞} (1/(2T)) [−A cos T + B sin T + A cos T + B sin T]

   = lim_{T→∞} (1/(2T)) [2B sin T]

   = lim_{T→∞} (B sin T)/T

   = 0                         (∵ the numerator is bounded and the denominator → ∞)

∴ LHS = RHS of (1) ⇒ X(t) is ergodic based on first order averages, i.e., X(t) is mean ergodic.


Le

3.5.15 Part - B [Problems of Transition probability matrix(t.p.m)]


w.

Example 3.16 A particle performs a random walk with absorbing barriers 0 and
4. It moves to r + 1 with probability p or to r − 1 with probability q, where
p + q = 1. As soon as it reaches 0 or 4, it remains in that state. Find the t.p.m.
and the initial probability distribution.

Solution : From step Xn to Xn+1 the particle moves from position r to r + 1 with
probability p and to r − 1 with probability q. i.e.,

P[Xn+1 = r + 1 / Xn = r] = p   ⇒ moves towards the right
P[Xn+1 = r − 1 / Xn = r] = q   ⇒ moves towards the left
P[Xn+1 = 0 / Xn = 0] = 1       ⇒ remains at the absorbing barrier 0
P[Xn+1 = 4 / Xn = 4] = 1       ⇒ remains at the absorbing barrier 4

∴ The Transition Probability Matrix (TPM) P :

                          Future state Xn+1
                          0   1   2   3   4
                     0    1   0   0   0   0
                     1    q   0   p   0   0
Present state Xn     2    0   q   0   p   0
                     3    0   0   q   0   p
                     4    0   0   0   0   1

Initial probability distribution :

P(0) = (1/5   1/5   1/5   1/5   1/5)

(∵ the initial position of the particle is not given, so the five states are taken as equally likely)

Important Note : Each row of P sums to 1, and the entries of P(0) also sum to 1.
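The matrix above is easy to build and check numerically. The sketch below is a minimal illustration assuming an arbitrary numeric value of p (the problem leaves p symbolic); it verifies the row sums and evolves the uniform initial distribution a few steps.

import numpy as np

p = 0.6                       # illustrative value only; any p in (0, 1) with q = 1 - p works
q = 1.0 - p

P = np.array([                # random walk on {0,...,4} with absorbing barriers 0 and 4
    [1, 0, 0, 0, 0],
    [q, 0, p, 0, 0],
    [0, q, 0, p, 0],
    [0, 0, q, 0, p],
    [0, 0, 0, 0, 1],
])

p0 = np.full(5, 1 / 5)                       # equally likely initial distribution

print(P.sum(axis=1))                         # every row sums to 1
print(p0 @ np.linalg.matrix_power(P, 3))     # state distribution after 3 steps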
ine
Example 3.17 The t.p.m. of a Markov chain {Xn} having the state space S = {0, 1, 2, 3} is

       0     1     0     0
      0.3    0    0.7    0
P =    0    0.3    0    0.7  .
       0     0     0     1

Verify the Chapman-Kolmogorov theorem.

Solution :
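The worked solution is left blank in the source. As a numerical aid, the sketch below checks the Chapman-Kolmogorov relation P(m+n) = P(m) · P(n) for this particular matrix, here with m = 1 and n = 2.

import numpy as np

P = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.3, 0.0, 0.7, 0.0],
    [0.0, 0.3, 0.0, 0.7],
    [0.0, 0.0, 0.0, 1.0],
])

P2 = P @ P                        # two-step transition probabilities
P3 = P2 @ P                       # three-step transition probabilities

print(np.allclose(P3, P @ P2))    # Chapman-Kolmogorov with m = 1, n = 2 -> True
print(P2)                         # the two-step t.p.m. itself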
Le

Example 3.18 A gambler has Rs. 2. He bets Re. 1 at a time and wins Re. 1 with
w.

probability 1/2. He stops playing if he loses Rs. 2 or wins Rs.4.


(a) Find the t.p.m. of the related Markov chain?
ww

(b) Find the probability that he lost his money at the end of 5 plays?
(c) Find the probability that the game lasts more than 7 plays?

Solution : Let Xn represent the amount with the player at the end of the nth
round of play. The game is over if the player loses all his money, i.e., (Xn = 0),
or wins Rs. 4 more, i.e., (Xn = 6).

The state space is Xn = {0, 1, 2, 3, 4, 5, 6}.

If he wins a round, the probability is p = 1/2.

If he loses a round, the probability is q = 1 − p = 1/2.

He starts with Rs. 2.


(a) The corresponding TPM of the related Markov chain is

                          Rs. in future state (Xn+1)
                       0     1     2     3     4     5     6
                0      1     0     0     0     0     0     0
                1     1/2    0    1/2    0     0     0     0
                2      0    1/2    0    1/2    0     0     0
T.P.M., P =     3      0     0    1/2    0    1/2    0     0           (1)
                4      0     0     0    1/2    0    1/2    0
                5      0     0     0     0    1/2    0    1/2
                6      0     0     0     0     0     0     1
(b) Probability that the player has lost money at the end of the five plays:

n
eri
Since the player initially has got Rs. 2, the initial state probability
distribution of {Xn } is
ine
 
P(0) = 0 0 1 0 0 0 0 (2)
After the first play, the state distribution is

P(1) = P(0) · P   (multiplying the row vector P(0) by the TPM in (1))

     = (0     1/2    0    1/2    0     0     0  )                      (after play I)

where the components give the probabilities of holding Rs. 0, 1, 2, 3, 4, 5, 6 respectively.

Similarly,

P(2) = P(1) · P = (1/4    0    1/2    0    1/4    0     0  )           (after play II)

P(3) = P(2) · P = (1/4   1/4    0    3/8    0    1/8    0  )           (after play III)

P(4) = P(3) · P = (3/8    0    5/16   0    1/4    0    1/16)           (after play IV)

P(5) = P(4) · P = (3/8   5/32   0    9/32   0    1/8   1/16)           (after play V)

∴ P[the man has lost his money at the end of the 5th play]
   = the probability that the gambler has zero rupees after play V
   = the entry corresponding to the state '0' in P(5)
   = 3/8
(c) The probability that the game lasts more than 7 plays :

P(6) = P(5) · P = (29/64    0     7/32     0    13/64    0      1/8)   (after play VI)

P(7) = P(6) · P = (29/64  7/64     0    27/128    0    13/128   1/8)   (after play VII)

Now,

P[the game lasts more than 7 plays]
   = P[the system is neither in state 0 nor in state 6 at the end of the seventh play]
   = P[X7 = 1, 2, 3, 4 or 5]
   = 7/64 + 0 + 27/128 + 0 + 13/128
   = 27/64
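The repeated vector-matrix multiplications above are purely mechanical; the short sketch below reproduces the two required probabilities for the same chain and starting state.

import numpy as np

P = np.zeros((7, 7))                 # gambler's chain on Rs. 0,...,6 with p = q = 1/2
P[0, 0] = P[6, 6] = 1.0              # absorbing barriers
for r in range(1, 6):
    P[r, r - 1] = P[r, r + 1] = 0.5

dist = np.zeros(7)
dist[2] = 1.0                        # he starts with Rs. 2

for n in range(1, 8):
    dist = dist @ P
    if n == 5:
        print("P[lost by play 5]      =", dist[0])     # 3/8
print("P[game lasts > 7 plays] =", dist[1:6].sum())    # 27/64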

3.5.16 Part - B [Classification(Nature) of states of a Markov chain]

Example 3.19 Three students S1, S2, S3 are throwing a ball to each other. S1
always throws the ball to S2 and S2 always throws the ball to S3, but S3 is just as
likely to throw the ball to S2 as to S1. Prove that the process is Markovian. Find
the t.p.m. and classify the states.

Solution : Given that the three students S1 , S2 , S3 are throwing a ball to


each other.
∴ The state space is Xn = {S1 , S2 , S3 }.

The corresponding TPM for the problem is

            S1    S2    S3
     S1      0     1     0
P =  S2      0     0     1
     S3     1/2   1/2    0

Here the state at the (n + 1)th step depends only on the nth step and not on the
previous steps. Hence the process is Markovian.

Now,

                  0     0     1
P² = P × P =     1/2   1/2    0           (TPM after throw II)
                  0    1/2   1/2

                 1/2   1/2    0
P³ = P² × P =     0    1/2   1/2          (TPM after throw III)
                 1/4   1/4   1/2

                  0    1/2   1/2
P⁴ = P³ × P =    1/4   1/4   1/2          (TPM after throw IV)
                 1/4   1/2   1/4

                 1/4   1/4   1/2
P⁵ = P⁴ × P =    1/4   1/2   1/4          (TPM after throw V)
                 1/8   3/8   1/2

and so on · · ·
Classification of states :

Irreducibility of the Markov chain :

Here P_{ij}^(n) > 0 for some n, for every pair (i, j). i.e.,

P_{S1S1}^(3) > 0,   P_{S1S2}^(1) > 0,   P_{S1S3}^(2) > 0,
P_{S2S1}^(2) > 0,   P_{S2S2}^(2) > 0,   P_{S2S3}^(1) > 0,
P_{S3S1}^(1) > 0,   P_{S3S2}^(1) > 0,   P_{S3S3}^(2) > 0.

Hence the given Markov chain is irreducible.
Also, there are only 3 states S1, S2, S3, so the Markov chain is finite.

Return states of the Markov chain :

Here P_{ii}^(n) > 0 for some n ≥ 1. i.e.,

P_{S1S1}^(n) > 0, P_{S2S2}^(n) > 0, P_{S3S3}^(n) > 0 for n = 3, 5.

Hence all the states S1, S2, S3 of the given Markov chain are return states.

Periodicity of the Markov chain :

Find the values of n for which P_{SiSi}^(n) > 0; the period of Si is di = GCD{n}. i.e.,

P_{S1S1}^(3) > 0, P_{S1S1}^(5) > 0, · · ·  ⇒ period of S1 = d1 = GCD{3, 5, · · · } = 1
P_{S2S2}^(2) > 0, P_{S2S2}^(3) > 0, P_{S2S2}^(4) > 0, · · ·  ⇒ period of S2 = d2 = GCD{2, 3, 4, · · · } = 1
P_{S3S3}^(2) > 0, P_{S3S3}^(3) > 0, P_{S3S3}^(4) > 0, · · ·  ⇒ period of S3 = d3 = GCD{2, 3, 4, · · · } = 1

So all the states are aperiodic.

Non-null persistent states :

µ_ii = Σ_{n=1}^{∞} n f_ii^(n)  =  { finite  ⇒ the state i is non-null persistent
                                  { → ∞    ⇒ the state i is null persistent

where µ_ii is the mean recurrence time of the state i.

Here all the states are non-null persistent.

∴ Finally, the Markov chain is finite, and all its states are aperiodic, irreducible
and non-null persistent. So the states are ergodic, i.e., the Markov chain is ergodic.

Example 3.20 * Find the nature of the states of the Markov chain with the t.p.m.

       0     1     0
P =   1/2    0    1/2
       0     1     0

Solution :
arn

3.5.17 Part - B (Steady state / Long run)

Example 3.21 Let a 3-state Markov chain have the t.p.m.

       0    2/3   1/3
P =   1/2    0    1/2  .
      1/2   1/2    0

Find the long run probabilities of the Markov chain.

Solution : Let the TPM of the Markov chain be

       0    2/3   1/3
P =   1/2    0    1/2                                                  (1)
      1/2   1/2    0

Let π = [π1, π2, π3] be the stationary state distribution of the 3-state Markov chain.

Find the values of π1, π2, π3 by using the relations :

πP = π                                                                 (2)

Σ_i πi = 1                                                             (3)

Now,

∴ (2) ⇒ [π1  π2  π3] P = [π1  π2  π3]

i.e.,   π2/2 + π3/2 = π1                                               (4)

        2π1/3 + π3/2 = π2                                              (5)

        π1/3 + π2/2 = π3                                               (6)

Solving (4), (5), (6) using (3), we get

π1 = 9/27  = long run probability of state I

π2 = 10/27 = long run probability of state II

π3 = 8/27  = long run probability of state III

i.e., the long run probabilities of the given 3 states = lim_{n→∞} P(n)

     = [π1  π2  π3]

     = (9/27   10/27   8/27)
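The same stationary distribution can be obtained numerically. The sketch below solves πP = π together with Σ πi = 1 as an (over-determined) linear system and also shows the rows of P^n approaching the same vector.

import numpy as np

P = np.array([
    [0,   2/3, 1/3],
    [1/2, 0,   1/2],
    [1/2, 1/2, 0  ],
])

# pi P = pi  <=>  (P^T - I) pi = 0, solved together with the normalisation sum(pi) = 1
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)                                   # ~ [0.3333, 0.3704, 0.2963] = (9/27, 10/27, 8/27)
print(np.linalg.matrix_power(P, 50)[0])     # each row of P^n converges to the same vector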

Example 3.22 A man either drives a car or catches a train to go to office each day.
He never goes two days in a row by train, but if he drives one day, then the next day he
is just as likely to drive again as he is to travel by train. Now, suppose that on the
first day of the week the man tossed a fair die and drove to work iff a six appeared. Find

(a) the probability that he takes a train on the third day.

(b) the probability that he drives to work in the long run.

Solution : Let Xn be the mode of travel on day n.

The state space S = {Train, Car} = {T, C}

               T    C
TPM is P = T   0    1                                                  (1)
           C  1/2  1/2

(a) The initial probability distribution is based on the outcome of the toss of the fair die.

P[X1 = C] = probability of travelling by car (C) = 1/6

P[X1 = T] = probability of travelling by train (T) = 5/6

∴ The initial distribution of the Markov chain is

P(1) = (5/6 T   1/6 C)

P(2) = P(1) · P = (5/6   1/6) · ( 0    1 )
                                (1/2  1/2)

     = (1/12   11/12)

P(3) = P(2) · P = (1/12   11/12) · ( 0    1 )
                                   (1/2  1/2)

     = (11/24 T   13/24 C)

∴ P[the man travels by train on the third day] = 11/24

(b) Long run probability distribution for the 2 states Train and Car :

Let π = [π1  π2] be the stationary distribution of the process.

W.K.T.   πP = π                                                        (2)
and      π1 + π2 = 1                                                   (3)

i.e., (2) ⇒ [π1  π2] ( 0    1 ) = [π1  π2]
                     (1/2  1/2)

i.e.,   0 · π1 + π2/2 = π1                                             (4)

        π1 + π2/2 = π2                                                 (5)

Solving (4), (5) using (3), we get

π1 = 1/3 = long run probability of travelling by train

π2 = 2/3 = long run probability of travelling by car

∴ P[the man travels by car in the long run] = 2/3.
Example 3.23 Find the steady state probabilities of the Markov chain with t.p.m.

      2/3   1/3
P =   1/6   5/6  .

Solution :

3.5.18 Part - B (TPM with conditional probability)

n
Example 3.24 The t.p.m. of a Markov chain {Xn, n = 0, 1, 2, 3, . . . } with 3 states 0, 1, 2 is

      3/4   1/4    0
P =   1/4   1/2   1/4
       0    3/4   1/4

and the initial distribution is P{X0 = i} = 1/3, i = 0, 1, 2. Find

(a) P[X2 = 2 / X1 = 0]

(b) P[X3 = 1, X2 = 2, X1 = 1, X0 = 2]

(c) P[X2 = 1]

(d) the state distribution of the process at the second step

(e) the stationary distribution of the process.

Solution : Given the TPM of the 3-state M.C. is

               0     1     2
        0     3/4   1/4    0
P(1) =  1     1/4   1/2   1/4                                          (1)
        2      0    3/4   1/4

Initial probability distribution P{X0 = i} = 1/3, i = 0, 1, 2.

i.e.,   P{X0 = 0} = 1/3                                                (2)
        P{X0 = 1} = 1/3                                                (3)
        P{X0 = 2} = 1/3                                                (4)

i.e.,   P(0) = (1/3   1/3   1/3)                                       (5)

(a) P[X2 = 1 / X1 = 0] = P01^(1) = 1/4        {∵ P[Xn+1 = ai / Xn = aj] = Pij}
(b) P[X3 = 1, X2 = 2, X1 = 1, X0 = 2] :

Use P[A/B] = P(A ∩ B)/P(B) = P(AB)/P(B)

⇒ P(AB) = P[A/B] · P(B)

Similarly,  P(ABC) = P[A/BC] · P[B/C] · P(C)
            P(ABCD) = P[A/BCD] · P[B/CD] · P[C/D] · P(D)

∴ P[X3 = 1, X2 = 2, X1 = 1, X0 = 2]
   = P[X3 = 1/X2 = 2, X1 = 1, X0 = 2] · P[X2 = 2/X1 = 1, X0 = 2] · P[X1 = 1/X0 = 2] · P[X0 = 2]
   = P[X3 = 1/X2 = 2] · P[X2 = 2/X1 = 1] · P[X1 = 1/X0 = 2] · P[X0 = 2]      (Markov property)
   = P21^(1) · P12^(1) · P21^(1) · P2^(0)
   = (3/4) · (1/4) · (3/4) · (1/3)                                           (Refer (1) and (4))
   = 3/64

(c) P[X2 = 1] = Σ_i P[X2 = 1/X0 = i] · P[X0 = i]

   = P[X2 = 1/X0 = 0] · P[X0 = 0] + P[X2 = 1/X0 = 1] · P[X0 = 1] + P[X2 = 1/X0 = 2] · P[X0 = 2]

   = P01^(2) · P0^(0) + P11^(2) · P1^(0) + P21^(2) · P2^(0)                  (6)

To find P01^(2), P11^(2), P21^(2), we need P(2) = P² = P × P. i.e.,

              3/4   1/4    0        3/4   1/4    0         5/8    5/16   1/16
P(2) = P² =   1/4   1/2   1/4   ×   1/4   1/2   1/4   =    5/16   1/2    3/16
               0    3/4   1/4        0    3/4   1/4        3/16   9/16   4/16

∴ (6) ⇒ P[X2 = 1] = (5/16)·(1/3) + (1/2)·(1/3) + (9/16)·(1/3)

                  = (5 + 8 + 9)/48 = 22/48

                  = 11/24

(d) The state distribution of the process at the second step :

P(2) = P(0) · P(2) = P(0) · P²                              (∵ P(n+m) = P(n) · P(m))

                            5/8    5/16   1/16
   = (1/3   1/3   1/3)  ·   5/16   1/2    3/16
                            3/16   9/16   4/16

   = ( (1/3)(5/8 + 5/16 + 3/16)   (1/3)(5/16 + 1/2 + 9/16)   (1/3)(1/16 + 3/16 + 4/16) )

   = ( (1/3)(18/16)   (1/3)(22/16)   (1/3)(8/16) )

   = (3/8   11/24   1/6)

   = the state distribution at the 2nd step
(e) Stationary distribution of the process :

W.K.T.   πP = π                                                        (7)
and      π1 + π2 + π3 = 1                                              (8)

where π = [π1  π2  π3] is the stationary distribution of the 3-state Markov chain.

Now,

                          3/4   1/4    0
∴ (7) ⇒ [π1  π2  π3]  ·   1/4   1/2   1/4   = [π1  π2  π3]
                           0    3/4   1/4

i.e.,   (3/4)π1 + (1/4)π2 + 0 · π3 = π1                                (9)

        (1/4)π1 + (1/2)π2 + (3/4)π3 = π2                               (10)

        0 · π1 + (1/4)π2 + (1/4)π3 = π3                                (11)

Solving (9), (10), (11) using (8), we get

π = [π1  π2  π3] = (3/7   3/7   1/7), which is the required stationary distribution.
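For reference, the sketch below reproduces parts (c) and (d) with numpy and illustrates the convergence to the stationary distribution found in (e); the power 60 used for the long-run check is an arbitrary large value.

import numpy as np

P = np.array([
    [3/4, 1/4, 0  ],
    [1/4, 1/2, 1/4],
    [0,   3/4, 1/4],
])
p0 = np.full(3, 1/3)                          # uniform initial distribution

dist2 = p0 @ np.linalg.matrix_power(P, 2)
print(dist2)                                  # (d): [0.375, 0.4583..., 0.1666...] = (3/8, 11/24, 1/6)
print(dist2[1])                               # (c): P[X2 = 1] = 11/24

print(np.linalg.matrix_power(P, 60)[0])       # rows of P^n approach (3/7, 3/7, 1/7) from (e)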

Example 3.25 Consider a Markov chain {Xn, n = 1, 2, 3, . . . } with states S = {1, 2, 3}, t.p.m.

      0.1   0.5   0.4
P =   0.6   0.2   0.2
      0.3   0.4   0.3

and initial probability distribution P(0) = (0.7   0.2   0.1). Find

(a) P(X2 = 3)    (b) P[X3 = 2, X2 = 3, X1 = 3, X0 = 2].

Solution :

3.5.19 Part - B [Problems of Poisson process]

Recall the formulas,

P[X = x] = e^(−λ) λ^x / x!,   x = 0, 1, 2, · · ·                       (1)
           (Poisson distribution of Unit I, where λ is the parameter)

P[X(t) = n] = e^(−λt) (λt)^n / n!,   n = 0, 1, 2, · · ·                (2)
           (Poisson process of Unit III (this unit), where λt is the parameter)

P[a ≤ T ≤ b] = ∫_a^b f(t) dt,  where f(t) = λ e^(−λt)                  (3)
           (Poisson process of Unit III (this unit); by the property of the Poisson
           process, the interval T between 2 consecutive arrivals ∼ an exponential
           distribution with parameter λ)

P[N(t) = n] = e^(−(λp)t) [(λp)t]^n / n!,   n = 0, 1, 2, · · ·          (4)
           (Poisson process of Unit III (this unit), where λp is the parameter and
           p is the probability that an occurrence is recorded)
ww

Example 3.26 Customers arrive at a bank counter in accordance with a Poisson process
with mean rate 3 per minute. Find the probability that during a time interval of 2 minutes

(a) exactly 4 customers arrive    (b) more than 2 customers arrive

Also find the probability that the interval between two consecutive arrivals is

(c) more than 1 minute    (d) between 1 and 2 minutes    (e) 4 minutes or less

Solution : Let X(t) be the number of customers arriving at the bank counter
in a time interval of length t.

λ = mean arrival rate = 3 per minute
t = time interval = 2 minutes

(a) Probability of exactly 4 customers in 2 minutes :

n = number of arrivals = 4

X(t) ∼ Poisson process, so

P[X(t) = n] = e^(−λt) (λt)^n / n!,   n = 0, 1, 2, · · ·        (by using (2) above)

P[X(2) = 4] = e^(−(3×2)) (3 × 2)^4 / 4! = 0.1339
(b) Probability of more than 2 customers in 2 minutes :

n = number of arrivals > 2

X(t) ∼ Poisson process, so

P[X(t) = n] = e^(−λt) (λt)^n / n!,   n = 0, 1, 2, · · ·        (by using (2) above)

P[X(2) > 2] = 1 − P[X(2) ≤ 2] = 1 − P[X(2) = 0, 1 or 2]

            = 1 − [ e^(−6) 6^0/0! + e^(−6) 6^1/1! + e^(−6) 6^2/2! ]

            = 1 − 25 e^(−6)

            = 0.938

(c) Probability that the interval between 2 consecutive arrivals is more than 1 minute :

The interarrival time T of a Poisson process is exponential with parameter λ = 3, so

P[a ≤ T ≤ b] = ∫_a^b f(t) dt,  where f(t) = λ e^(−λt) = 3 e^(−3t)      (by using (3) above)

P[T > 1] = ∫_1^∞ 3 e^(−3t) dt

         = 3 [ e^(−3t) / (−3) ]_1^∞

         = e^(−3)

         = 0.0498

(d) Probability that the interval between 2 consecutive arrivals is between 1 and 2 minutes :

P[1 < T < 2] = ∫_1^2 3 e^(−3t) dt

             = 3 [ e^(−3t) / (−3) ]_1^2

             = e^(−3) − e^(−6)

             = 0.0473

(e) Probability that the interval between 2 consecutive arrivals is 4 minutes or less :

P[T ≤ 4] = ∫_0^4 3 e^(−3t) dt

         = 3 [ e^(−3t) / (−3) ]_0^4

         = 1 − e^(−12)

         ≈ 0.99999
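All five answers follow directly from formulas (2) and (3) recalled above; the sketch below evaluates them with the standard library only.

from math import exp, factorial

lam, t = 3.0, 2.0                     # rate 3 per minute, 2-minute window
mu = lam * t                          # Poisson parameter of the 2-minute count

def poisson(n):
    return exp(-mu) * mu ** n / factorial(n)

print(poisson(4))                               # (a) exactly 4 arrivals   ~ 0.1339
print(1 - sum(poisson(n) for n in range(3)))    # (b) more than 2 arrivals ~ 0.938

# Interarrival time T ~ Exp(lam): P[T > x] = exp(-lam x)
print(exp(-lam * 1))                            # (c) more than 1 minute   ~ 0.0498
print(exp(-lam * 1) - exp(-lam * 2))            # (d) between 1 and 2 min  ~ 0.0473
print(1 - exp(-lam * 4))                        # (e) 4 minutes or less    ~ 0.99999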

Example 3.27 A machine generates defective items at the rate of 2 per minute.
Find the chance that in an interval of 5 minutes (a) exactly 10 defective items are
generated (b) at most 4 defective items are generated (c) at least 2 defective items are generated.

Solution :

Example 3.28 The number of particles emitted by a radioactive source is
distributed in Poisson fashion. The source emits particles at the rate of 6 per
minute. Each emitted particle has a probability of 0.7 of being counted. Find the
chance that 11 particles are counted in 4 minutes.

Solution : Let N(t) = the number of particles counted in (0, t), a Poisson process

λ = mean emission rate = 6

p = probability that an emitted particle is counted = 0.7

∴ The chance that 11 particles are counted in 4 minutes :

P[N(t) = n] = e^(−(λp)t) [(λp)t]^n / n!,   n = 0, 1, 2, · · ·    (by using (4) above)

P[N(4) = 11] = e^(−(6×0.7)×4) [(6 × 0.7) × 4]^11 / 11!

             = 0.038
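Since the recorded particles form a thinned Poisson process with parameter (λp)t, the answer is a one-line evaluation; the sketch below reproduces it.

from math import exp, factorial

lam, p, t, n = 6.0, 0.7, 4.0, 11          # data of Example 3.28
mu = lam * p * t                          # counted particles ~ Poisson(lam * p * t)

print(exp(-mu) * mu ** n / factorial(n))  # ~ 0.038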
ine
Example 3.29 A radioactive source emits particles at the rate of 5 per minute
in accordance with a Poisson process. Each particle emitted has a probability 0.6 of
being recorded. Find the probability that
E
arn

(a) 10 particles are recorded in 4 minutes


(b) less than 2 particles in 1 minute period
Le

(c) more than 2 particles are recorded in 1 minute period


w.

Solution : {(a)=0.1048, (b)=0.4233, (c)=0.5767}


ww

3.5.20 Part - B [Problems of Normal Process (Gaussian process)]

Recall the formulas,

Cov[X(ti), X(ti)] = C[X(ti), X(ti)] = σi² = Var(Xi)                    (1)

Cov[X(ti), X(tj)] = λij = a function of (ti − tj)                      (2)

Var[Xi ± Xj] = Var(Xi) + Var(Xj) ± 2 Cov(Xi, Xj)                       (3)

Example 3.30 If X(t) is a Gaussian process with µ(t) = 10 and C(t1, t2) = 16 e^(−|t1 − t2|),
find the probability that (a) X(10) ≤ 8   (b) |X(10) − X(6)| ≤ 4.

Solution : Given X(t) is a Gaussian process (normal process) with

µ(t) = 10

C(t1, t2) = 16 e^(−|t1 − t2|)

(a) P[X(10) ≤ 8] = P[ (X(10) − µ(10))/σ(10) ≤ (8 − µ(10))/σ(10) ]

                 = P[ z ≤ (8 − 10)/4 ]                  (∵ σ(10) = √C(10, 10) = √16 = 4)

                 = P[z ≤ −0.5]

                 = P[z ≥ 0.5]

                 = 0.5 − P[0 ≤ z ≤ 0.5]

                 = 0.3085

(b) P[|X(10) − X(6)| ≤ 4] :

Let U = X(10) − X(6), which is also a random variable.

µ[U] = E[U] = E[X(10) − X(6)] = E[X(10)] − E[X(6)] = 10 − 10 = 0

Now,

Var[U] = Var[X(10) − X(6)]

       = Cov[X(10), X(10)] + Cov[X(6), X(6)] − 2 Cov[X(10), X(6)]

       = 16 e^(−|10−10|) + 16 e^(−|6−6|) − 2 [16 e^(−|10−6|)]

       = 16 + 16 − 32 e^(−4)

       = 32 (1 − e^(−4)) = 32 (1 − 0.01832) = 32 (0.98168)

⇒ σ²[U] = 31.4139

∴ σ[U] = √31.4139 = 5.6048

∴ (b) ⇒ P[|X(10) − X(6)| ≤ 4] = P[|U| ≤ 4]

   = P[−4 ≤ U ≤ 4]

   = P[ (−4 − µ(U))/σ(U) ≤ z ≤ (4 − µ(U))/σ(U) ]

   = P[ (−4 − 0)/5.6048 ≤ z ≤ (4 − 0)/5.6048 ]

   = P[−0.7137 ≤ z ≤ 0.7137]

   = 2 × P[0 ≤ z ≤ 0.7137]

   = 2 × 0.2611

   = 0.5222
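The normal-table lookups above can be reproduced with the standard normal CDF. The sketch below uses math.erf; the exact CDF values differ slightly from the four-digit table entries used in the text.

from math import erf, exp, sqrt

def Phi(x):                                  # standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

# (a) X(10) ~ N(10, 16)
print(Phi((8 - 10) / 4))                     # ~ 0.3085

# (b) U = X(10) - X(6) ~ N(0, 32(1 - e^-4))
sigma_u = sqrt(32 * (1 - exp(-4)))
print(Phi(4 / sigma_u) - Phi(-4 / sigma_u))  # ~ 0.52 (the table lookup above gives 0.5222)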

n
Example 3.31 If X(t) is a Gaussian process with µ(t) = 3 and
C(t1, t2) = 4 e^(−0.2|t1 − t2|), find the probability that

(a) X(5) ≤ 2    (b) |X(8) − X(5)| ≤ 1

Solution : {(a) = 0.3085, (b) = 0.4038}



3.5.21 Part - B [Problems of Sine wave process]

Example 3.32 Verify whether the sine wave process X(t) = Y cos ωt, where Y is a
uniformly distributed R.V. in (0, 1), is SSS or not.

Solution : Given X(t) = Y cos ωt

Y = a uniformly distributed r.v. in (0, 1)

∴ the p.d.f. of Y is f(y) = 1/(1 − 0) = 1,  0 < y < 1

If X(t) is to be stationary, then its mean must not be a function of t.

E[X(t)] = ∫_{−∞}^{∞} X(t) f(y) dy

        = ∫_0^1 y cos ωt · 1 · dy

        = cos ωt ∫_0^1 y dy

        = cos ωt [y²/2]_0^1

        = cos ωt (1/2 − 0)

        = (1/2) cos ωt

        = a function of t

∴ the mean of X(t) depends on t ⇒ X(t) is not a stationary process ⇒ X(t) is not SSS.


ww

Example 3.33 For a random process X(t) = Y sin ωt, Y is a uniformly distributed r.v. in
(−1, 1). Check whether the process is WSS or not.

Solution :

3.5.22 Part - B(Anna University Questions)

(1) Show that the random process X(t) = cos(t + φ) where φ is a random
variable uniformly distributed in (0, 2π) is (a) first order stationary (b)
stationary in the wide sense (c) ergodic (based on first order or
second order averages) [May/June 2006, ECE]

Solution:

(a) E[X(t)] = 0 ⇒ X(t) is first order stationary

(b) RXX(t, t + τ) = (1/2) cos τ ⇒ X(t) is WSS

(c) If lim_{T→∞} X̄_T = E[X(t)] = 0 ⇒ X(t) is ergodic based on first order averages

    If lim_{T→∞} Ȳ_T = E[Y(t)] = (1/2) cos τ ⇒ X(t) is ergodic based on second order averages

(2) Define Poisson process and obtain the probability distribution for that.
Also find the autocorrelation function of the process.
Solution :

Definition : If X(t) represents the number of occurrences of a certain
event in (0, t), then the discrete random process X(t) is called a Poisson
process, provided that the following postulates are satisfied:

(a) P[1 occurrence in (t, t + ∆t)] = λ∆t + O(∆t)
(b) P[0 occurrences in (t, t + ∆t)] = 1 − λ∆t + O(∆t)
(c) P[2 or more occurrences in (t, t + ∆t)] = O(∆t)
(d) X(t) is independent of the number of occurrences of the event in any
    interval prior to and after the interval (0, t).
(e) The probability that the event occurs a specified number of times in
    (t0, t0 + t) depends only on t, but not on t0.

Probability distribution:

Let X(t) be the Poisson process, let 'λ' be the rate of occurrence of the event per
unit time, let 't' be the time and let ∆t be an increment in time. Write
Pn(t) = P[X(t) = n], where 'n' denotes the number of occurrences. We have
(a), (b) and (c) from the definition of the Poisson process.

Now  Pn(t + ∆t) = P[X(t + ∆t) = n]
                = P{(n − 1) occurrences in the interval (0, t) and one occurrence in (t, t + ∆t)}
                  + P{n occurrences in the interval (0, t) and zero occurrences in (t, t + ∆t)}
                = Pn−1(t)[λ∆t + O(∆t)] + Pn(t)[1 − λ∆t + O(∆t)]

Now  [Pn(t + ∆t) − Pn(t)]/∆t = λPn−1(t) − λPn(t) + O(∆t)/∆t.  As ∆t → 0, we have

lim_{∆t→0} [Pn(t + ∆t) − Pn(t)]/∆t = λ[Pn−1(t) − Pn(t)] + 0,   since lim_{∆t→0} O(∆t)/∆t = 0

P'n(t) = λ[Pn−1(t) − Pn(t)]                                            (1)

Let the solution of equation (1) be of the form

Pn(t) = [(λt)^n / n!] f(t)                                             (2)

where f(t) is a function of time 't'. Differentiating (2) w.r.t. 't',

P'n(t) = (λ^n / n!) [n t^(n−1) f(t) + t^n f '(t)]                      (3)

Substituting (2) and (3) in (1), we get

(λ^n / n!) [n t^(n−1) f(t) + t^n f '(t)] = λ [ ((λt)^(n−1) / (n−1)!) f(t) − ((λt)^n / n!) f(t) ]

(λ^n t^(n−1) / (n−1)!) f(t) + ((λt)^n / n!) f '(t) = (λ^n t^(n−1) / (n−1)!) f(t) − λ ((λt)^n / n!) f(t)

⇒ f '(t) = −λ f(t)

f '(t) / f(t) = −λ

Integrating w.r.t. 't', we get

∫ [f '(t)/f(t)] dt = − ∫ λ dt

log f(t) + log c = −λt

⇒ log[c f(t)] = −λt

⇒ c f(t) = e^(−λt)

⇒ f(t) = (1/c) e^(−λt)

∴ f(t) = k e^(−λt)

From equation (2),

f(0) = P0(0) = P[X(0) = 0] = P[no event occurs in (0, 0)] = 1

When t = 0, we have f(0) = k e^0 ⇒ k = 1

∴ f(t) = e^(−λt)

(2) becomes,  Pn(t) = [(λt)^n / n!] e^(−λt),   n = 0, 1, 2, · · ·

Auto correlation function of the Poisson process:

RXX(t1, t2) = E[X(t1) X(t2)]

            = E{X(t1) [X(t2) − X(t1) + X(t1)]}

            = E{X(t1) [X(t2) − X(t1)] + X²(t1)}

            = E{X(t1) [X(t2) − X(t1)]} + E[X²(t1)]

            = E[X(t1)] E[X(t2) − X(t1)] + E[X²(t1)]     (∵ X(t) is a process of independent increments)

            = (λt1) λ(t2 − t1) + λt1 + λ²t1²

            = λ²t1t2 − λ²t1² + λt1 + λ²t1²

            = λ²t1t2 + λt1,   if t2 ≥ t1

i.e., RXX(t1, t2) = λ²t1t2 + λ min(t1, t2)

n
(3) Show that when events occur as a Poisson process, the time interval
between successive events follows an exponential distribution. [N/D 2006, ECE]

Solution : Consider two consecutive occurrences of the event, Ei and Ei+1.
Let Ei take place at time ti and let T be the interval between the occurrences of
Ei and Ei+1. T is a continuous R.V.

P(T > t) = P[Ei+1 did not occur in (ti, ti + t)]
         = P[no event occurs in an interval of length t]
         = P[X(t) = 0]
         = e^(−λt)

The cdf of T is given by F(t) = P(T ≤ t) = 1 − P(T > t) = 1 − e^(−λt)

∴ The pdf of T is given by f(t) = λ e^(−λt), t ≥ 0

This is an exponential distribution with mean 1/λ.

This is a exponential distribution with mean λ1


(4) For a random process X(t) = Y sin ωt, Y is an uniform random variable
ww

in the interval -1 to +1. Check whether the process is wide-sense


stationary or not.
Solution : Y is uniformly distributed in the interval - 1 to 1.
1
∴ f (y) = , −1 < y < 1.
2
Z1
1
Mean of X(t) = E[X(t)] = y sin ωt dy = 0
2
−1
A.C. of X(t) = RXX [X(t)] = E[X(t)X(t + τ)]
1
= [cos ωt − cos ω(2t + τ] = a function of t.
6
[Link]
[Link]
230 UNIT III - Classification of Random Processes

Mean is constant, the auto correlation function is not a function of time


differences of the two random variables.
The process is not WSS.
(5) Show that sum of two independent Poisson processes is again a Poisson
process.
Solution : Refer ( 2 marks question and answers - 5th question)
(6) The transition probability
 matrix of a Markov chain {Xn }, three states

 0.1 0.5 0.4  
1, 2 and 3 is P =  and the initial distribution is P( 0) =
 
0.6 0.2 0.2

n
 
 

 0.3 0.4 0.3 
 

g.i
(0.7, 0.2, 0.1).
Find (1) P{X2 = 3} and (2) P{X3 = 2, X2 = 3, X1 = 3, X0 = 2}.

n
Solution : {(1) = 0.279 ,(2) = 0.0048}

eri
(7) If {X(t)} and {Y(t)} are two independent Poisson processes, show that
the conditional distribution {X(t)} and {X(t) + Y(t)} is
ine
binomial.[May/June 2007, ECE]
Solution : Let {X(t)} and {Y(t)} be two independent Poisson
processes with λ1 t and λ2 t respectively.
ng

Hence {X(t) + Y(t)} is also Poisson process with parameter λ1 t + λ2 t.


P [X(t) = m∩Y(t) = n−m]
P{X(t) = m/X(t)+ y(t) = n} = = nCm px qn−m
E

P [X(t)+Y(t) = n]
λ1 λ2
arn

where p = ;q = ;p + q = 1
λ1 + λ2 λ1 + λ2
⇒ Binomial.
Le

(8) Show that the random process X(t) = A cos(ω0 t + θ) is wide-sense


stationary, A and ω0 are constants and θ is a uniformly distributed
random variable in (0, 2π).
w.

A2
Solution : {E[X(t)] = 0, RXX [X(t)] = cos(ω0 t) ⇒ WSS}
2
ww

(9) If X(t) = Y cos ωt+Z sin ωt, where Y and Z are two independent normal
random variables with E(Y) = E(Z) = 0, E(Y2 ) = E(Z2 ) = σ2 and ω is a
constant, prove that {X(t)} is strict sense stationary process of order 2.
Solution : {R(t, t + τ) = k cos λτ, where k = E(Y2 ) = E(Z2 )}
(10) Define Poisson process and derive the probability law for the Poisson
process {X(t)}.

Solution : Refer(Definition and probability law of 2nd question.)


(11) A man either drives a car (or) catches a train to go to office each day.
He never goes 2 days in a row by train but if he drives one day, then

the next day he is just as likely to drive again as he is to travel by


train. Now suppose that on the first day of the week, the man tossed
a fair die and drove to work if and if only if a 6 appeared. Find (1) the
probability that he takes a train on the third day and (2) the probability
that he drives to work in the long run. [N/D 2007, ECE]
Solution:(1) P(the man travels by train on the third day) = 11 / 24.
(2) P(the man travels by car in the long run) = 2 / 3.
(12) Define random process. Classify it with an example.
(13) The process {X(t)} whose probability distribution under certain

n
conditions is given by

g.i
 (at)n−1
 (1+at)n+1 , n = 1, 2, ...
P{X(t) = n} = 

Show that it is evolutionary (not
 at , n = 0

n
1+at
stationary).

eri
(14) Three boys A, B and C are throwing a ball to each other. A always
throws the ball to B and B always throws the ball to C, but C is just as
ine
likely to throw the ball to B as to A. Show that the process is Markovian.
Find the transition matrix and classify the states. [A/M 2008, ECE]
 
A B C
ng


 

   

 A 0 1 0 irreducible 

Solution : Refer notes. 
   
P = B  0 0 1  , aperiodic 
 
 
 
E


 

 
C 1/2 1/2 0 ergodic 
  
arn

(15) Write a detailed note on Normal process.


Solution :
Le

Definition :
A real valued random process X(t) is called Gaussian process or
normal process, if the random variables X(t1 ), X(t2 ), · · · , X(tn ) are
w.

jointly normal for every n = 1, 2, · · · and for any set of ti .


The nth order density of a Gaussian process is given by
ww

f (x1,x2,· · ·,xn;t1,t2,· · ·,tn )


n X
n
 
 1
 X 
|Λ|i j (xi − µi )(x j − µ j )

−

 2 |Λ|
1 
=p i=1 j=1

e
n
(2π) |Λ|
where µi = E{X(ti )} and Λ is the nth order square matrix (λi j ),
where λi j = CXX {X(ti ), X(t j )} = autocovariance and jΛ ji j =
cofactor of λi j , in jΛ j.
Note :
Gaussian process is completely specified by the first and second
order moments, viz., means and covariances(variances).

Properties :
(1) If a Gaussian process is WSS it is also strict-sense stationary.
(2) If the member functions of a Gaussian process are
uncorrelated, then they are independent.
(3) If the input {X(t)} is a linear system is a Gaussian process, the
output will also be a Gaussian process.
(4) Random phenomena in communication system are well
approximated by a Gaussian process. For example, the voltage
across a resister at the output of an amplifier can be random due
to a random current that is the result of many contributions from

n
other random currents at various places within the amplifier.

g.i
Random thermal agitation of electrons causes the randomness of
the various current. This type of noise is call Gaussian because

n
the random variable representing the noise voltage has the

eri
Gaussian density.
Uses : One of the important uses of Gaussian process is to
analyze the effects of thermal noise in electronic circuits used in
ine
communication systems.
(16) Distinguish between ‘stationary’ and weakly stationary stochastic
ng

processes Give an example to each type. Show that Poisson process is


an evolutionary process. [N/D 2008, ECE]
E

Solution :
# Strict sense stationary Wise sense stationary
arn

processes (SSS) processes (WSS)


(1) Every WSS process need not Every SSS process of order 2
be a SSS process of order 2 is a WSS.
Le

(2) E[X(t)] = constant. E[X(t)] = constant.


The condition is enough The condition is not enough
w.

(3) Example: Ergodic process Example: Every SSS process


is a WSS
ww

Poisson process is not stationary (evolutionary) since all statistical


process are functions of t.
P[X(t1 ) = n1 ; X(t2 ) = n2 ]
= P[X(t1 ) = n1 ] · P[X(t2 ) = n1 jX(t1 ) = n], t2 > t1
= P[X(t1 ) = n1 ]·
P[the event occurs (n2 − n1 ) times in the interval (t2 − t1 )]
e−λt1 (λt1 )n1 e−λ(t2 −t1 ) [λ(t2 − t1 )n2 −n1 ]
= , i f n2 ≥ n1
n1 n2 − n1
e−λt2 λn2 tn1 2 (t2 − t1 )n2 −n1
= , i f n2 ≥ n1
n1 n2 − n1

Proceeding similarly, we can get the 3rd probability function as,

P[X(t1 ) = n1 , X(t2 ) = n2 , X(t3 ) = n3 ] =


−λt3 n3 n2
e λ t1 (t2 − t1 )n2 −n1 (t3 − t2 )n3 −n2
, ifn3 ≥ n2 ≥ n1
n1 n2 − n1 n3 − n2

(17) State the postulates of a Poisson process and derive its probability
law.[M/J 2009, ECE]
Solution : Refer Definition and probability distribution of 2nd
question

n
g.i
(18) Classify the random process and explain with an

n
example(=12question)

eri
(19) Given a random variable Y with characteristic function ϕ(ω) and a
random process X(t) = cos(λt + Y). Show that {X(t)} is stationary in the
wide sense if ϕ(1) = ϕ(2) = 0.
ine
Solution : {E(X(t) = 0, RXX (t, t + τ) = 12 cos λτ}

(20) A machine goes out of order whenever a component fails. The failure
ng

of this part follows a Poisson process with a mean rate of 1 per week.
Find the probability that 2 weeks have elapsed since last failure. If
E

there are 5 spare parts of this component in an inventory and that


arn

the next supply is not due in 10 weeks, find the probability that the
machine will not be out of order in the next 10 weeks. [N/D 2009, ECE]
Solution : Here the unit time is 1 week.
Le

(a) Mean failure rate λ = 1.

P[2 weeks have elapsed since last failure]


w.

= P[no failures in the 2 weeks since last failure]


e−λt (λt)x
ww

P [X (t) = x] =
x
e−(1)(2) [(1)(2)]0
⇒ P [X (2) = 0] = = e−2 = 0.135
0

(b) There are only 5 spare parts and the machine should not go out of
order in the next 10 weeks.

P[for this event] = P[X(10) ≤ 5]


5 5
X e−λt (λt)x X e−(2)(5) [(2)(5)]x
= = = 0.068
x=0 x x=0 x

(21) x(t) = A sin(ωt+θ), where A and ω are constants and θ is R.V. uniformly
distributed over (−π, π). Find the auto correlation of {y(t)}, where
y(t) = x2 (t). ( )
A4
Solution : R (t1 , t2 ) = [2 + cos2ω (t1 − t2 )]
8
(22) For a random process x(t) = y sin ωt, y is an uniformly distributed
random variable in the interval (−1, 1). Check whether the process is
wide sense(stationary or not. )
A4
Solution : E[X(t)] = 0, R (t1 , t2 ) = [2 + cos2ω (t1 − t2 )]

n
8

g.i
(23) In a village road, buses cross a particular place at a Poisson rate of 4
per hour. If a boy starts counting at 9.00 am.

n
(i) What is the probability that his count is 1 by 9.30 a.m?

eri
(ii) What is the probability that his count is 3 by 11.00 a.m?
(iii) What is the probability that his count is more than 5 by noon?
ine
ng
E
arn
Le
w.
ww

UNIT - IV
UNIT - V