Understanding Probability Concepts

Probability
Phenomena are of two types:
• Deterministic
• Probabilistic

In the deterministic (or predictable, or sure) type, we can state the result with certainty as 'YES' or 'NO'.
Examples:
1. After 2 years, the age of a living person has increased. (Here the result is certainly true, i.e., yes.)
2. A person is living on the sun. (Here the result is certainly false, i.e., no.)

In the probabilistic type, we cannot state sure results as YES or NO. In such cases, for the study of random (or chance) phenomena, the result lies between 0% and 100%.
Examples: The concept of probability has its origin in the early 17th century. Such probabilistic phenomena are frequently observed in games, marks in an exam, profit in a business, birth and death rates of a country, system failures, machine lifetimes, queueing theory, radar detection, the social sciences, and the economy of a country.
(2) Axioms of probability:
(i) P(A) ≥ 0; if P(A) = 0, then A is an impossible event.
(ii) P(S) = 1, i.e., S is a sure event.
(iii) For mutually exclusive events A and B, P(A ∪ B) = P(A) + P(B).

Important Note: The probability of any event A always lies between 0 and 1,
i.e., 0 ≤ P(A) ≤ 1. [This follows from (i) and (ii).]

(3) Theorems:
P(A or B) = P(A ∪ B) = P(A) + P(B) − P(A ∩ B)   (for non-mutually exclusive events)
Similarly,
P(A or B or C) = P(A ∪ B ∪ C)
P(A and B and C) = P(A ∩ B ∩ C)
                 = P(A) P(B/A) P(C/(A ∩ B))
              or = P(B) P(C/B) P(A/(B ∩ C))
              or = P(C) P(A/C) P(B/(C ∩ A))
Conditional Probability:
P(B/A) = P(A ∩ B) / P(A)
P(A/B) = P(A ∩ B) / P(B)

For independent events:
(i) P(A/B) = P(A)
(ii) P(A ∩ B) = P(A) P(B), which is the necessary and sufficient condition.
(iii) AND ⇒ multiplication

(d) Total Probability: If A1, A2, ..., An are mutually exclusive and collectively exhaustive events, then for any event B,
P(B) = Σ_{i=1}^{n} P(Ai) P(B/Ai)
     = P(A1) P(B/A1) + P(A2) P(B/A2) + · · · + P(An) P(B/An)

(e) Bayes' Theorem:
P(Ai/B) = P(Ai) P(B/Ai) / P(B)
        = P(Ai) P(B/Ai) / [P(A1) P(B/A1) + P(A2) P(B/A2) + · · · + P(An) P(B/An)]
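The two formulas translate directly into a few lines of code. A minimal sketch in plain Python; the priors and conditional probabilities below are made-up numbers chosen only for illustration:

```python
# Total probability and Bayes' theorem for three hypotheses A1, A2, A3.
priors = [0.5, 0.3, 0.2]          # P(A1), P(A2), P(A3): mutually exclusive, exhaustive
likelihoods = [0.10, 0.40, 0.70]  # P(B/A1), P(B/A2), P(B/A3)

# Total probability: P(B) = sum_i P(Ai) P(B/Ai)
p_b = sum(p * l for p, l in zip(priors, likelihoods))

# Bayes' theorem: P(Ai/B) = P(Ai) P(B/Ai) / P(B)
posteriors = [p * l / p_b for p, l in zip(priors, likelihoods)]

print("P(B) =", p_b)
print("P(Ai/B) =", posteriors)    # the posteriors sum to 1
```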
(4) Useful results:
(a) Algebra of Sets:
(i) Commutative law:
A ∩ B = B ∩ A
A ∪ B = B ∪ A
(ii) Distributive law:
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
(iii) Associative law:
(A ∪ B) ∪ C = A ∪ (B ∪ C) = A ∪ B ∪ C
(A ∩ B) ∩ C = A ∩ (B ∩ C) = A ∩ B ∩ C
(iv) De Morgan's law:
(A ∪ B)' = A' ∩ B'
(A ∩ B)' = A' ∪ B'
(b) If A and B are independent, then A and B', A' and B, and A' and B' are also independent.
(c) P(A'/B) = 1 − P(A/B)
(d) P(A/B') = [P(A) − P(A ∩ B)] / [1 − P(B)]

Permutations, combinations and series:
(a) 0! = 1
(b) nCr = n!/[r! (n − r)!],  nCr = nC(n−r),  nC1 = n = nC(n−1),  nC0 = 1 = nCn
(c) For an Arithmetic Progression (A.P.) a, a + d, a + 2d, ...:
Sum of n terms, Sn = na + [n(n − 1)/2] d
(d) For a Geometric Progression (G.P.) a, ar, ar², ...:
Sum of n terms, Sn = a(1 − rⁿ)/(1 − r) if r < 1;  Sn = a(rⁿ − 1)/(r − 1) if r > 1
Sum to infinity, S∞ = a/(1 − r), |r| < 1
Example 0.1 A coin is tossed. Find the probability of getting (a) a head (b) a tail
(c) both head and tail (d) either a head or a tail (e) 2 heads.
Solution: Sample space S = {H, T}, n(S) = 2.
(a) Let A be the event of getting a head, i.e., A = {H}, n(A) = 1.
∴ P(A) = n(A)/n(S) = 1/2.
(b) Let B be the event of getting a tail, i.e., B = {T}, n(B) = 1.
∴ P(B) = n(B)/n(S) = 1/2.
(c) Let C be the event of getting both head and tail.
i.e., C = A ∩ B = { } = φ = null set; n(C) = 0.
∴ P(C) = n(C)/n(S) = 0/2 = 0   (⇒ C is an impossible event)
(d) Let D be the event of getting either a head or a tail, i.e., D = A ∪ B.
∴ P(D) = P(A) + P(B) − P(A ∩ B) = 1/2 + 1/2 − 0 = 1   (∵ by (a), (b) & (c);  ⇒ D is a sure event)
(e) Getting 2 heads in a single toss is impossible, so the probability is 0.
Example 0.2 Two coins are tossed. Find the probability of getting (a) 2 heads (b) 1 head (c) no head (d) at least 1 head (e) at most 1 head.
Solution: Sample space S = {HH, HT, TH, TT}, n(S) = 4 = 2² (2 coins).
(a) Let A be the event of getting 2 heads, i.e., A = {HH}, n(A) = 1. ∴ P(A) = 1/4.
(b) Let B be the event of getting exactly 1 head, i.e., B = {HT, TH}, n(B) = 2. ∴ P(B) = 2/4 = 1/2.
(c) Let C be the event of getting no head, i.e., C = {TT}, n(C) = 1. ∴ P(C) = 1/4.
(d) Let D be the event of getting at least 1 head, i.e., D = {HH, HT, TH}, n(D) = 3.
∴ P(D) = n(D)/n(S) = 3/4.
(e) Let F be the event of getting at most 1 head, i.e., the number of heads ≤ 1, so F = {HT, TH, TT}, n(F) = 3.
∴ P(F) = n(F)/n(S) = 3/4.
Example 0.3 A coin is tossed 3 times. Find the probability of getting (a) more heads than tails (b) a tail only on an odd-numbered toss.
Solution: Sample space S = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT},
n(S) = number of elements in S = 8 = 2³ (3 coins).
(a) Let A be the event of getting more heads than tails, i.e., A = {HHH, HHT, HTH, THH}, n(A) = 4.
∴ P(A) = n(A)/n(S) = 4/8 = 1/2.
(b) Let B be the event that a tail occurs only on an odd-numbered toss.
Example 0.4 A die is thrown. Find the probability of getting (a) an odd number (b) an even number (c) a multiple of 3 (d) a divisor of 6 (e) a number more than 2.
Solution: Let the experiment be E: throwing a die.
Sample space S = {1, 2, 3, 4, 5, 6},
n(S) = number of elements in S = 6 = 6¹ (1 die).
(a) Let A be the event of getting an odd number, i.e., A = {1, 3, 5}, n(A) = 3.
∴ P(A) = n(A)/n(S) = 3/6 = 1/2.
(b) Let B be the event of getting an even number, i.e., B = {2, 4, 6}, n(B) = 3.
∴ P(B) = n(B)/n(S) = 3/6 = 1/2.
(c) Let C be the event of getting a multiple of 3, i.e., C = {3, 6}, n(C) = 2.
∴ P(C) = n(C)/n(S) = 2/6 = 1/3.
(d) Let D be the event of getting a divisor of 6, i.e., D = {1, 2, 3, 6}, n(D) = 4.
∴ P(D) = n(D)/n(S) = 4/6 = 2/3.
(e) Let F be the event of getting a number more than 2, i.e., F = {3, 4, 5, 6}, n(F) = 4.
∴ P(F) = n(F)/n(S) = 4/6 = 2/3.
(or) P(F) = 1 − P(F̄) = 1 − 2/6 = 2/3.
Example 0.5 Two dice are thrown. Find the probability of getting (a) a doublet (b) an even number on the first die and an odd number on the second (c) a total of 7 (d) a total more than 9 (e) a total less than or equal to 9.
Solution: n(S) = number of elements in S = 36 = 6² (2 dice).
(a) A = {(1,1), (2,2), (3,3), (4,4), (5,5), (6,6)}, n(A) = 6.
∴ P(A) = n(A)/n(S) = 6/36 = 1/6.
(b) B = {(2,1), (2,3), (2,5), (4,1), (4,3), (4,5), (6,1), (6,3), (6,5)}, n(B) = 9.
∴ P(B) = n(B)/n(S) = 9/36 = 1/4.
(c) C = {(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)}, n(C) = 6.
∴ P(C) = n(C)/n(S) = 6/36 = 1/6.
(d) D = {(4,6), (5,5), (5,6), (6,4), (6,5), (6,6)}, n(D) = 6.
∴ P(D) = n(D)/n(S) = 6/36 = 1/6.
(e) P(F) = 1 − P(D) = 1 − 1/6 = 5/6.
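These two-dice probabilities are easy to confirm by brute-force enumeration of the 36 equally likely outcomes. A minimal sketch in Python (the helper name `prob` is introduced here only for illustration):

```python
from fractions import Fraction
from itertools import product

space = list(product(range(1, 7), repeat=2))   # 36 equally likely outcomes

def prob(event):
    """Probability of an event given as a predicate over (die1, die2)."""
    favourable = sum(1 for outcome in space if event(outcome))
    return Fraction(favourable, len(space))

print(prob(lambda o: o[0] == o[1]))            # doublet      -> 1/6
print(prob(lambda o: o[0] + o[1] == 7))        # total is 7   -> 1/6
print(prob(lambda o: o[0] + o[1] > 9))         # total > 9    -> 1/6
print(prob(lambda o: o[0] + o[1] <= 9))        # total <= 9   -> 5/6
```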
Example 0.6 From a well-shuffled deck of 52 playing cards, find the probability of getting (a) a spade (b) 2 spades (c) 3 hearts (d) 5 kings (e) a black king (f) 4 diamonds (g) 3 spades and 1 diamond (h) 2 kings, 2 aces and 2 queens (i) 4 cards of each suit (j) a spade or a king (k) a black card or a spade or an ace.
Solution: Let the experiment be E: drawing from a well-shuffled deck of 52 cards.
Sample space S: refer to the arrangement of a 52-card deck.
n(S) = number of elements in S = 52.
(a) P(a spade) = 13C1/52C1 = 13/52 = 1/4.
(b) P(two spades) = 13C2/52C2 = (13 × 12)/(52 × 51) = 3/51 = 1/17.
(c) P(3 hearts) = 13C3/52C3 = (13 × 12 × 11)/(52 × 51 × 50).
(d) P(5 kings) = 0 (an impossible event, since there are only 4 kings).
(e) P(a black king) = 2C1/52C1 = 2/52 = 1/26.
(f) P(4 diamonds) = 13C4/52C4.
(g) P(3 spades and 1 diamond) = (13C3 × 13C1)/52C4.
(h) P(2 kings, 2 aces and 2 queens) = (4C2 × 4C2 × 4C2)/52C6.
(i) P(4 cards of each suit) = (13C1 × 13C1 × 13C1 × 13C1)/52C4.
(j) P(a spade or a king) = P(a spade) + P(a king) − P(a spade and a king)
    = 13C1/52C1 + 4C1/52C1 − 1/52 = 16/52 = 4/13.
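The combination counts above can be evaluated directly with Python's `math.comb`; a short sketch:

```python
from math import comb

# P(two spades) = 13C2 / 52C2
print(comb(13, 2) / comb(52, 2))                    # 0.0588... = 1/17

# P(3 spades and 1 diamond) = 13C3 * 13C1 / 52C4
print(comb(13, 3) * comb(13, 1) / comb(52, 4))

# P(2 kings, 2 aces and 2 queens) = 4C2 * 4C2 * 4C2 / 52C6
print(comb(4, 2) ** 3 / comb(52, 6))

# P(a spade or a king) = 13/52 + 4/52 - 1/52
print(13 / 52 + 4 / 52 - 1 / 52)                    # 0.3077 = 4/13
```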
Example 0.7 A box contains 4 orange, 5 white and 7 green balls. Find the probability of getting (a) 2 orange, 3 white and 1 green balls all together at a time (b) 2 orange or (3 white and 1 green) (c) 2 orange, 3 white and 1 green successively with replacement (d) 2 orange, 3 white and 1 green successively without replacement (e) 2 orange, 1 orange and 3 white successively with replacement (f) 2 orange, 1 orange and 3 white successively without replacement.
Solution: The box contains 4 orange + 5 white + 7 green = 16 balls.
Let A be the event of getting 2 orange balls,
B the event of getting 3 white balls,
C the event of getting 1 green ball.
(a) P(A ∩ B ∩ C) = (4C2 × 5C3 × 7C1)/16C6.
(c) With replacement: let F, G, H be the events of getting 2 orange, 3 white and 1 green respectively in successive draws with replacement. Since the balls are replaced, the composition of the box is the same at every draw; for example, H is the event of getting 1 green ball given that F ∩ G has occurred, with replacement, so
P(F) = 4C2/16C2,  P(G/F) = 5C3/16C3,  P(H/(F ∩ G)) = 7C1/16C1,
and P(F ∩ G ∩ H) = P(F) · P(G/F) · P(H/(F ∩ G)).
(f) Without replacement: let I, J, K be the events of getting 2 orange, 1 orange and 3 white respectively in successive draws without replacement. Then
P(K/(I ∩ J)) = 5C3/13C3, and
∴ P(I ∩ J ∩ K) = P(I) · P(J/I) · P(K/(I ∩ J)) = (4C2/16C2) · (2C1/14C1) · (5C3/13C3).
Here I, J, K are dependent events.
Example 0.8 A bag contains balls numbered 1 to 100. One ball is drawn at random. Find the probability of getting (a) a multiple of 5 (b) a multiple of 7 (c) an odd number (d) an odd number or a multiple of 5.
Solution: n(S) = 100.
(a) Let A be the event of getting a multiple of 5. n(A) = 100/5 = 20.
∴ P(A) = n(A)/n(S) = 20/100 = 1/5.
(c) Let D be the event of getting an odd number. P(D) = 50/100 = 1/2.
(d) We have from (a), P(A) = 20/100 = 1/5, where A is the event of getting a multiple of 5.
Let F be the event of getting an odd number or a multiple of 5, i.e., F = D ∪ A.
P(F) = P(D) + P(A) − P(D ∩ A) = 50/100 + 20/100 − 10/100 = 60/100 = 3/5.
Example 0.9 Five persons are selected at random from a group of 4 men, 4 women and 4 children. Find the probability of getting (a) exactly 2 children (b) exactly 1 child.
Solution: Here the group total = 4 + 4 + 4 = 12.
We have to select 5 persons in total.
(a) We select 2 children from the 4 children and the remaining 3 from the 4 men and 4 women (8 persons).
∴ P(selection of 5 persons with exactly 2 children) = (4C2 × 8C3)/12C5.
(b) ∴ P(selection of 5 persons with exactly 1 child) = (4C1 × 8C4)/12C5.
Example 0.10 A problem is given to three students A, B and C, whose chances of solving it are 1/2, 1/3 and 1/4 respectively. Find the probability that the problem is solved by (a) none (b) exactly 1 (c) exactly 2 (d) all 3 (e) at least one of them.
Solution: Given P(A) = 1/2, P(B) = 1/3, P(C) = 1/4.
Students A, B, C solve the problem independently.
∴ P(Ā) = 1 − P(A) = 1 − 1/2 = 1/2 = P(problem not solved by A)
P(B̄) = 1 − P(B) = 1 − 1/3 = 2/3 = P(problem not solved by B)
P(C̄) = 1 − P(C) = 1 − 1/4 = 3/4 = P(problem not solved by C)
(a) P(problem solved by none) = P(Ā)P(B̄)P(C̄) = (1/2)(2/3)(3/4) = 1/4
(b) P(problem solved by exactly one student)
= P(A ∩ B̄ ∩ C̄) + P(Ā ∩ B ∩ C̄) + P(Ā ∩ B̄ ∩ C)
= P(A)P(B̄)P(C̄) + P(Ā)P(B)P(C̄) + P(Ā)P(B̄)P(C)
= (1/2)(2/3)(3/4) + (1/2)(1/3)(3/4) + (1/2)(2/3)(1/4) = (6 + 3 + 2)/24 = 11/24
(c) P(problem solved by exactly two students)
= P(A)P(B)P(C̄) + P(A)P(B̄)P(C) + P(Ā)P(B)P(C)
= (1/2)(1/3)(3/4) + (1/2)(2/3)(1/4) + (1/2)(1/3)(1/4) = (3 + 2 + 1)/24 = 1/4
(d) P(problem solved by all 3 students)
= P(A ∩ B ∩ C) = P(A)P(B)P(C) = (1/2)(1/3)(1/4) = 1/24
(e) P(problem solved by at least one student)
= P(number of students who solved it ≥ 1)
= 1 − P(none) = 1 − 1/4 = 3/4
Example 0.11 What is the chance that a leap year selected at random will contain 53 Sundays?
Solution: We know that in a leap year the total number of days is 366 = 52 complete weeks and 2 days.
The remaining 2 days may occur in the following possible combinations:
(1) Sunday, Monday (2) Monday, Tuesday (3) Tuesday, Wednesday
(4) Wednesday, Thursday (5) Thursday, Friday (6) Friday, Saturday
(7) Saturday, Sunday
Two of these seven equally likely combinations contain a Sunday.
∴ P(53 Sundays in a leap year) = 2/7
Example 0.12 Box A contains 5 orange and 7 green balls. Box B contains 4 orange and 6 green balls. One box is chosen at random. Find the probability of getting (a) 2 orange (b) 3 green (c) 2 orange from box A (d) 3 green from box B.
Solution:
(a) P(2 orange) = P(2 orange from A or B)
= P(2 orange from A) + P(2 orange from B) − P(2 orange from A and B)
= P(first enter box A and take 2 orange from A) + P(first enter box B and take 2 orange from B) − 0
(∵ only one box is selected at random)
= P(first enter box A) P(take 2 orange from A) + P(first enter box B) P(take 2 orange from B)
= (1/2)(5C2/12C2) + (1/2)(4C2/10C2).   (⇒ total probability)
(b) P(3 green) = P(3 green from A or B)
= P(3 green from A) + P(3 green from B) − P(3 green from A and B)
= P(first enter box A and take 3 green from A) + P(first enter box B and take 3 green from B) − 0
= (1/2)(7C3/12C3) + (1/2)(6C3/10C3).   (⇒ total probability)
(c) P(the 2 orange came from box A) = P(select box A and get 2 orange) / P(2 orange from A or B)
= [(1/2)(5C2/12C2)] / [(1/2)(5C2/12C2) + (1/2)(4C2/10C2)]   (∵ Bayes' theorem)
(d) P(the 3 green came from box B) = P(select box B and get 3 green) / P(3 green from A or B)
= [(1/2)(6C3/10C3)] / [(1/2)(7C3/12C3) + (1/2)(6C3/10C3)]   (∵ Bayes' theorem)
Example 0.13 A factory has three machines A, B and C which generate items in the proportion 2 : 6 : 3. 50%, 70% and 90% of the items generated by A, B and C respectively are known to have standard quality. An item selected at random from a day's production is found to have standard quality. What is the chance that it came from C?
Solution: E1: the item is generated by machine A, P(E1) = 2/11.
E2: the item is generated by machine B, P(E2) = 6/11.
E3: the item is generated by machine C, P(E3) = 3/11.
A: the item has standard quality.
P(A/E1) = 50% = 0.5, P(A/E2) = 70% = 0.7, P(A/E3) = 90% = 0.9
By Bayes' theorem,
P(E3/A) = P(E3)P(A/E3) / [P(E1)P(A/E1) + P(E2)P(A/E2) + P(E3)P(A/E3)]
= [(3/11) × 0.9] / [(2/11) × 0.5 + (6/11) × 0.7 + (3/11) × 0.9]
= 0.34
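A quick numerical check of this posterior, written as a minimal sketch in plain Python:

```python
# Verify Example 0.13: posterior probability that a standard-quality item came from machine C.
priors = {"A": 2 / 11, "B": 6 / 11, "C": 3 / 11}
quality = {"A": 0.5, "B": 0.7, "C": 0.9}    # P(standard quality | machine)

p_quality = sum(priors[m] * quality[m] for m in priors)    # total probability
posterior_C = priors["C"] * quality["C"] / p_quality       # Bayes' theorem

print(round(posterior_C, 2))   # 0.34
```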
UNIT I - Random Variables
Moment generating functions and their properties. Binomial, Poisson, Geometric, Uniform, Exponential, Gamma and Normal distributions - Function of a Random Variable.

A random variable is a real-valued function defined over the sample space of an experiment, i.e.,
a random variable is a function X(s) which assigns a real number (x) to every element (s) of the sample space (S) corresponding to a random experiment (E).
For example: consider the experiment of tossing a coin twice. Define a random variable X as the number of heads. Then
Sample space : HH HT TH TT
Value of X   :  2  1  1  0
∴ P(X = 2) = 1/4, P(X = 1) = 2/4, P(X = 0) = 1/4
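The mapping from sample points to values of X, and the resulting probabilities, can be expressed in a few lines. A minimal sketch in Python:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Sample space of tossing a coin twice; X = number of heads.
sample_space = ["".join(s) for s in product("HT", repeat=2)]   # ['HH', 'HT', 'TH', 'TT']
counts = Counter(outcome.count("H") for outcome in sample_space)

pmf = {x: Fraction(c, len(sample_space)) for x, c in counts.items()}
print(pmf)   # {2: 1/4, 1: 1/2, 0: 1/4}
```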
For a discrete random variable X, the p.m.f. is
P(X = xi) = pi = p(xi), where (i) pi ≥ 0 and (ii) Σ_{∀i} pi = 1.
For a continuous random variable X, the p.d.f. f(x) satisfies (i) f(x) ≥ 0 and (ii) ∫_{−∞}^{∞} f(x) dx = 1.

Cumulative distribution function (c.d.f.) F(x) = P(X ≤ x):
  discrete: F(x) = Σ_{xi ≤ x} P(X = xi);   continuous: F(x) = ∫_{−∞}^{x} f(x) dx.
Mean of X, E(X):
  discrete: E(X) = Σ_{∀x} x P(X = x);   continuous: E(X) = ∫_{−∞}^{∞} x f(x) dx.
Variance of X, V(X) = E(X²) − [E(X)]², where
  discrete: E(X²) = Σ_{∀x} x² P(X = x);   continuous: E(X²) = ∫_{−∞}^{∞} x² f(x) dx.
Moment generating function (m.g.f.) of X, MX(t) = E[e^{tX}]:
  discrete: MX(t) = Σ_{∀x} e^{tx} P(X = x);   continuous: MX(t) = ∫_{−∞}^{∞} e^{tx} f(x) dx.
P(a ≤ X ≤ b):
  discrete: P(a ≤ X ≤ b) = F(b) − F(a) + P(X = a);   continuous: P(a ≤ X ≤ b) = ∫_{a}^{b} f(x) dx.
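These definitions are straightforward to evaluate numerically once a p.m.f. is given. A minimal sketch for a discrete random variable (the example p.m.f. is the number-of-heads distribution from above):

```python
import math

# p.m.f. of X = number of heads in two coin tosses
pmf = {0: 0.25, 1: 0.5, 2: 0.25}

mean = sum(x * p for x, p in pmf.items())        # E(X)
ex2 = sum(x**2 * p for x, p in pmf.items())      # E(X^2)
variance = ex2 - mean**2                         # V(X) = E(X^2) - [E(X)]^2

def mgf(t):
    """Moment generating function MX(t) = E[e^{tX}]."""
    return sum(math.exp(t * x) * p for x, p in pmf.items())

print(mean, variance, mgf(0.0))   # 1.0 0.5 1.0
```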
(1) Properties of the cumulative distribution function (c.d.f.):
(a) 0 ≤ F(x) ≤ 1
(b) F(x1) ≤ F(x2) if x1 < x2 (F is non-decreasing)
(c) F(−∞) = lim_{x→−∞} F(x) = 0,  F(∞) = lim_{x→∞} F(x) = 1
(2) For a continuous r.v., also d/dx F(x) = f(x).

(a) Var(X) = Σ_x (x − X̄)² P(x) for a discrete r.v. X,
    Var(X) = ∫_x (x − X̄)² f(x) dx for a continuous r.v. X.
Moments
(a) rth moment about the origin (raw moment): µr' = E(X^r)
(i) µr' = E(X^r) = Σ_x x^r P(x) for a discrete r.v. X,
    µr' = E(X^r) = ∫_{−∞}^{∞} x^r f(x) dx for a continuous r.v. X.
(ii) Note:
µ0' = 1
µ1' = Mean(X) = E(X)
µ2' = E(X²)
(b) rth moment about the mean (central moment): µr = E[(X − X̄)^r]
(i) µr = Σ_x (x − X̄)^r P(x) for a discrete r.v. X,
    µr = ∫_{−∞}^{∞} (x − X̄)^r f(x) dx for a continuous r.v. X.
(ii) Note:
µ1 = 0
µ2 = Var(X) = µ2' − (µ1')²
µ3 = µ3' − 3µ2'µ1' + 2(µ1')³
µ4 = µ4' − 4µ3'µ1' + 6µ2'(µ1')² − 3(µ1')⁴
In general, µr = µr' − rC1 µ(r−1)'µ1' + rC2 µ(r−2)'(µ1')² − · · · + (−1)^r (µ1')^r.
(iii) MX(t) = Σ_{r=0}^{∞} µr' t^r / r!, where µr' = E(X^r) is the rth raw moment.
(iv) MGF of X about X = a:
[MX(t)]_{X=a} = E[e^{t(X−a)}] = Σ_{r=0}^{∞} µr' t^r / r!, where µr' = E[(X − a)^r].
For a small experiment we can write down the sample space and find probabilities directly. But suppose 10 or 100 or more coins are tossed; writing out the sample space in exam time is not possible. So we use distributions.
Refer to the examples:
Example 0.2   [Probability computed directly]
Example 1.1   [Probability by random variable]
Example 1.17  [Probability by Binomial distribution]
*** All the above examples give the same results for the same experiment.
Binomial distribution
Assumptions:
(1) Each trial has exactly two possible outcomes (success or failure).
(2) The number of trials n is finite, and the probability of success p is the same in every trial.
Formula:
P(X = x) = nCx p^x q^{n−x}, x = 0, 1, 2, ..., n, with p + q = 1.
Mean = np,  Variance = npq.

Poisson distribution
Assumptions:
(1) The number of trials is indefinitely large, i.e., n → ∞.
(2) The probability of success is very small, i.e., p → 0, with p + q = 1.
Formula:
(1) P(X = x) = e^{−λ} λ^x / x!, x = 0, 1, 2, 3, ..., ∞.
(2) The frequency function, or expected number out of 'N' sets of 'n' trials each having 'x' successes, is f(x) = N[P(X = x)] = N [e^{−λ} λ^x / x!],
where
x → value of the random variable X,
X → a discrete random variable representing the number of successes, which follows a Poisson distribution with parameter λ (= np).
P.M.F.: P(X = x) = e^{−λ} λ^x / x!,
Mean = λ,
Variance = λ,
M.G.F. = e^{λ(e^t − 1)}.
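If scipy is available, these p.m.f.s and their moments can be checked directly. A sketch (the parameter values n = 10, p = 0.3 and λ = 2 are illustrative only):

```python
from scipy import stats

# Binomial(n=10, p=0.3): P(X = 4), mean np, variance npq
binom = stats.binom(n=10, p=0.3)
print(binom.pmf(4), binom.mean(), binom.var())     # 0.2001..., 3.0, 2.1

# Poisson(lambda=2): P(X = 4); mean and variance are both lambda
pois = stats.poisson(mu=2)
print(pois.pmf(4), pois.mean(), pois.var())        # 0.0902..., 2.0, 2.0
```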
Geometric distribution
Assumptions:
Suppose that independent trials are performed until the first success occurs. X counts
(or) the number of trials to get the first success,
(or) the number of trials required to get the first success,
(or) the number of failures preceding the first success,
(or) the number of trials needed to obtain the first success.
Formula:
P(X = x) = q^x p, x = 0, 1, 2, ..., ∞, with p + q = 1
(or)
P(X = x) = q^{x−1} p, x = 1, 2, 3, ..., ∞, with p + q = 1,
depending on whether X counts the failures before the first success or the trials up to and including it, where
x → value of the random variable X,
X → a discrete random variable representing the number of trials (or failures), which follows a Geometric distribution,
p → probability of success in a single trial,
q → probability of failure in a single trial.
Note: Symbolically, X ∼ Geometric distribution (p).
P.M.F.: P(X = x) = q^x p,       Mean = q/p,  Variance = q/p²,  M.G.F. = p/(1 − e^t q)
(or)
P.M.F.: P(X = x) = q^{x−1} p,   Mean = 1/p,  Variance = q/p²,  M.G.F. = p e^t/(1 − e^t q)
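The two conventions differ only by a shift of one; scipy's `geom` uses the "number of trials" convention (support starting at 1). A minimal sketch, with p = 0.25 as an illustrative value:

```python
from scipy import stats

p = 0.25
geom = stats.geom(p)                 # P(X = x) = q^(x-1) p, x = 1, 2, 3, ...

print(geom.pmf(3))                   # q^2 * p = 0.75^2 * 0.25 = 0.140625
print(geom.mean(), geom.var())       # 1/p = 4.0,  q/p^2 = 12.0
```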
1.2.1 Uniform distribution
A continuous random variable X is said to be uniformly distributed over a finite interval (a, b) if its p.d.f. is constant in that interval; the p.d.f. is given by
f(x) = 1/(b − a), a < X < b.
Note: Symbolically, X ∼ Uniform distribution (parameters: a, b).
P.D.F.: f(x) = 1/(b − a),
Mean = (a + b)/2,  Variance = (b − a)²/12,
M.G.F. = 1 + (b + a)t/2! + (b² + ba + a²)t²/3! + · · ·
1.2.2 Exponential distribution
A continuous random variable X is said to follow an exponential distribution with parameter α > 0 if its p.d.f. is given by
f(x) = αe^{−αx}, x ≥ 0.
Mean = 1/α,  Variance = 1/α²,  M.G.F. = α/(α − t), t < α.
1.2.3 Memoryless property of exponential distribution:
If X is exponentially distributed, then P(X > s + t / X > s) = P(X > t) for all s, t ≥ 0.

1.2.4 Gamma distribution
A continuous random variable X is said to follow a Gamma distribution with parameter β if its p.d.f. is given by
f(x) = e^{−x} x^{β−1} / Γ(β), β > 0, 0 ≤ X < ∞.
Note: Γ(β) = ∫_0^∞ e^{−x} x^{β−1} dx,
Γ(β + 1) = ∫_0^∞ e^{−x} x^β dx,
Γ(n + 1) = n! if n is a +ve integer, and Γ(n + 1) = n Γ(n) if n is a fraction.
Note: Symbolically, X ∼ Gamma distribution (β).
P.D.F.: f(x) = e^{−x} x^{β−1} / Γ(β),
Mean = β,  Variance = β,  M.G.F. = 1/(1 − t)^β.
1.2.5 Erlang distribution
A continuous random variable X is said to follow an Erlang distribution with parameters α, β if its p.d.f. is given by
f(x) = α^β e^{−αx} x^{β−1} / Γ(β), α, β > 0, 0 ≤ X < ∞.
If β = 1, the Erlang distribution becomes the Exponential distribution, with f(x) = αe^{−αx}.
1.2.6 Normal distribution
A continuous random variable X is said to follow a normal distribution with parameters µ and σ² if its p.d.f. is given by
f(x) = (1/(σ√(2π))) e^{−(1/2)((x−µ)/σ)²},  −∞ < x < ∞, −∞ < µ < ∞, σ > 0.
[Figure: the bell-shaped normal curve, centred at x = µ, i.e., z = 0.]
Note: Symbolically, X ∼ Normal distribution (µ, σ²).
P.D.F.: f(x) = (1/(σ√(2π))) e^{−(1/2)((x−µ)/σ)²},
Mean = µ,  Variance = σ²,  M.G.F. = e^{tµ} e^{σ²t²/2}.
(1) The standard normal variable is Z = (x − µ)/σ, with Z ∼ N(0, 1).
(2) P(x1 < X < x2) = P(z1 < Z < z2) = ∫_{z1}^{z2} φ(z) dz,
where z1 = (x1 − µ)/σ, z2 = (x2 − µ)/σ (∵ z = (x − µ)/σ) and
φ(z) = (1/√(2π)) e^{−z²/2} is the p.d.f. of the standard normal variate.
(3) P(−∞ < Z < 0) = P(0 < Z < ∞) = 1/2.
(4) The normal-area tables give the area under the standard normal curve between z = 0 and z = z1.
(5) The number of cases with x1 < X < x2 out of a total of N cases is
N[P(x1 < X < x2)] = N P(z1 < Z < z2), Z = (x − µ)/σ.
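The areas that the standard normal tables provide can also be computed with `scipy.stats.norm`. A small sketch, using µ = 30, σ = 5 as illustrative values:

```python
from scipy.stats import norm

mu, sigma = 30, 5
z = lambda x: (x - mu) / sigma               # standardise

# P(26 < X < 40) = P(z1 < Z < z2) via the standard normal c.d.f.
print(norm.cdf(z(40)) - norm.cdf(z(26)))     # ~0.7653

# Area between z = 0 and z = 1.25 (the quantity listed in printed tables)
print(norm.cdf(1.25) - 0.5)                  # ~0.3944
```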
Binomial distribution - M.G.F., mean and variance:
P.M.F.: P(X = x) = nCx p^x q^{n−x}, x = 0, 1, 2, ..., n, with p + q = 1.
M.G.F. of X:
MX(t) = E[e^{tX}] = Σ_{x=0}^{n} e^{tx} nCx p^x q^{n−x} = Σ_{x=0}^{n} nCx (e^t p)^x q^{n−x}
      = nC0 (e^t p)^0 q^n + nC1 (e^t p)^1 q^{n−1} + nC2 (e^t p)^2 q^{n−2} + · · · + nCn (e^t p)^n q^0
      = q^n + nC1 (e^t p) q^{n−1} + nC2 (e^t p)^2 q^{n−2} + · · · + (e^t p)^n
      = (pe^t + q)^n
Mean of X:
E(X) = [d/dt MX(t)]_{t=0} = [n(pe^t + q)^{n−1} pe^t]_{t=0} = np(p + q)^{n−1} = np
Variance of X:
Var(X) = E(X²) − [E(X)]²
Now, E(X²) = [d²/dt² MX(t)]_{t=0} = [d/dt {np(pe^t + q)^{n−1} e^t}]_{t=0}
           = np[(n − 1)(pe^t + q)^{n−2} pe^t · e^t + (pe^t + q)^{n−1} e^t]_{t=0}
           = np[(n − 1)p + 1] = n²p² − np² + np
∴ Var(X) = n²p² − np² + np − (np)² = np − np² = np(1 − p) = npq
∴ P.M.F.: P(X = x) = nCx p^x q^{n−x},  M.G.F. = (pe^t + q)^n,  Mean = np,  Variance = npq.
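The differentiation of the m.g.f. at t = 0 can also be checked symbolically. A sketch using sympy (assumed to be available); the exact printed form of the variance may differ algebraically:

```python
import sympy as sp

t, p, n = sp.symbols('t p n', positive=True)
q = 1 - p
M = (p * sp.exp(t) + q) ** n                 # M.G.F. of the Binomial distribution

mean = sp.diff(M, t).subs(t, 0)              # E(X)   = M'(0)
ex2 = sp.diff(M, t, 2).subs(t, 0)            # E(X^2) = M''(0)
var = sp.simplify(ex2 - mean ** 2)

print(sp.simplify(mean))   # n*p
print(var)                 # n*p*(1 - p), i.e. npq
```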
28 UNIT I - Random Variables
e−λ λx
P(X = x) = , x = 0, 1, 2, · · ·
x!
M.G.F. of X:
X∞
MX (t) = E e =
tX
etx P(X = x)
x=0
∞ ∞ x
X
tx e λ
−λ x X et λ
= e =e −λ
x=0
x! x=0
x!
n
t 0 t 1 t 2
λ λ λ
e e e
g.i
= e−λ + + + · · ·
0! 1! 2!
= e−λ eλe
t
n
= eλ(e −1)
t
Mean of X:
d
( ) (
d h λ(et −1) i
) eri
ine
E (X) = [MX (t)] = e
dt t=0
dt
( )t=0
d −λ λet
h i
= e e
ng
dt
( )t=0
d λet
= e−λ λet eλe
h i n t
o
= e−λ e
E
dt t=0
t=0
arn
= e−λ λeλ
n o
=λ
Le
Variance of X :
Var(X) = E X2 − [E (X)]2
w.
( 2 ) ( )
d d
e−λ λet eλe
h t
i
Now, E X2 = 2
[MX (t)] =
dt t=0
dt t=0
ww
( )
d t λet
= e−λ λ et eλe λet + et eλe
h i nh t t
io
= e−λ λ ee
dt t=0
o t=0
= e−λ λ eλ λ + eλ
n
= λ2 + λ
∴ Var(X) = λ2 + λ − (λ)2
=λ
∴ P.M.F.: P(X = x) = e^{−λ} λ^x / x!,  M.G.F. = e^{λ(e^t − 1)},  Mean = λ,  Variance = λ.
PROBABILITY AND RANDOM PROCESSES 29
P(X = x) = qx p, x = 0, 1, 2, · · ·
M.G.F. of X:
MX (t) = E etX
∞
X ∞
X ∞
X x
= e P(X = x) =
tx
e q p=p
tx x t
eq
x=0
0 1 x=0 2 x=0 1 2
= p e q + e q + e q + ···∞ = p 1 + e q + e q + ···∞
t t t t t
n
h i−1 p
= p 1 − et q =
g.i
1 − et q
Mean of X: ( )
n
d
E (X) = [MX (t)]
eri
dt
( " #)t=0 ( )
d p d −1
= t
= p 1 − et q
dt 1 − e q t=0 dt
ine
t=0
−2
= p (−1) 1 − et q −et q
t=0
pq
ng
−2
= p (−1) 1 − q −q = 2
p
q
E
=
p
arn
Variance of X :
Var(X) = E X − [E (X)]2 2
Le
( 2 )
d
Now, E X2 = [MX (t)]
dt2 t=0
w.
( ) ( )
d −2 d −2
= p (−1) 1 − et q −et q = pq 1 − et q et
dt t=0
dt t=0
ww
Z ∞ Z b Z b
1 1
= etx f (x)dx = etx dx = etx dx
−∞ a b−a b−a a
bt (bt)2 at (at)2
n
" # " #
!b 1+ + + ··· − 1 + + + ···
g.i
1 etx etb − eta 1! 2! 1! 2!
= = =
b − a t a t (b − a) t (b − a)
n
2 2 2
(b−a) t b −a t b3 −a3 t3
+ + +· · ·
(b+a) t b +ba+a t
eri
2 2 2
= 1! 2! 3! = 1+ + + ···
t (b − a) 2! 3!
ine
Mean of X:
(b + a) t b 2
+ ba + a2 2
t
( )
d d
E (X) = = + + +
[MX (t)] 1 · · ·
dt dt
2! 3!
ng
t=0
t=0
b + ba + a 2t
2 2
(b + a) (b + a) (b + a)
= + + + = =
E
0 · · ·
2! 3! 2! 2
t=0
arn
Variance of X :
Var(X) = E X2 − [E (X)]2
Le
d (b + a) b + ba + a 2t
2 2
( 2 )
d
Now, E X = = + +
2
[M (t)] · · ·
X
dt2 dt
2! 3!
w.
t=0
t=0
b2 + ba + a2 2
= + +
0 · · ·
ww
3!
t=0
b + ba + a
2 2
=
3
∴ Var(X) = (b² + ba + a²)/3 − ((a + b)/2)² = (b − a)²/12
∴ P.D.F.: f(x) = 1/(b − a), a < x < b,
M.G.F. = 1 + (b + a)t/2! + (b² + ba + a²)t²/3! + · · ·,  Mean = (a + b)/2,  Variance = (b − a)²/12.
PROBABILITY AND RANDOM PROCESSES 31
M.G.F. of X:
Z ∞
MX (t) = E e tX
= etx f (x)dx
Z−∞
∞
= etx αe−αx dx
0
∞ " #∞
e−(α−t)x
Z
=α dx = α
n
−(α−t)x
e
0 −(α − t)
g.i
0
α
=
(α − t)
n
Mean of X:
eri
( )
d
E (X) = [MX (t)]
dt
#)t=0
ine
α
( "
d
=
dt (α − t) t=0
α
(" #)
ng
=
(α − t)2 t=0
1
E
=
α
arn
Variance of X :
Var(X) = E X − [E (X)]2 2
Le
( 2 )
d
Now, E X2 = [MX (t)]
dt2
w.
#)t=0
α
( "
d
=
dt (α − t)2 t=0
ww
(" #)
2α
=
(α − t)3 t=0
2
= 2
α
2
∴ Var(X) = 2/α² − (1/α)² = 1/α²
∴ P.D.F.: f(x) = αe^{−αx},  M.G.F. = α/(α − t),  Mean = 1/α,  Variance = 1/α².
32 UNIT I - Random Variables
e−x xβ−1
f (x) = , β > 0, 0 ≤ X < ∞
β
M.G.F. of X:
∞ −x β−1
∞
Z Z Z ∞
e x 1
e−x etx xβ−1 dx
MX (t) = E etX = etx f (x)dx = etx dx =
−∞ 0 β β 0
u β−1 du
Z ∞ Z ∞
1 1
−(1−t)x β−1
= e x dx = e−u
n
β 0 β 0 1−t (1 − t)
g.i
∵ put (1 − t) x = u ⇒ (1 − t) dx = du ⇒ dx = du
!
(1−t)
x = 1−t
u
, x → 0 ⇒ u → 0, x → ∞ ⇒ u → ∞
n
Z ∞
1
= e−u uβ−1 du
eri
β
β (1 − t) 0
1
= β
ine
β (1 − t)β
1
=
(1 − t)β
ng
Mean of X:
( ) ( " #)
d d 1
E
t=0
( )
d h i h i
= (1 − t)−β = −β (1 − t)−β−1 (−1)
dt t=0
i t=0
Le
h
= β(1 − t)−β−1 =β
t=0
Variance of X :
w.
Var(X) = E X − [E (X)]2 2
( 2 ) ( )
d d
ww
h i
Now, E X2 = 2
[MX (t)] = β(1 − t)−β−1
dt t=0
dt t=0
nh io
= β −β − 1 (1 − t) −β−2
(−1)
t=0
= β(1 + β)
∴ Var(X) = β(1 + β) − (β)2
=β
∴ P.D.F.: f(x) = e^{−x} x^{β−1}/Γ(β),  M.G.F. = 1/(1 − t)^β,  Mean = β,  Variance = β.
PROBABILITY AND RANDOM PROCESSES 33
−∞ < x < ∞
1 x−µ 2
− 12 ( σ )
x−µ
f (x) = √ e ; −∞ < µ < ∞ ; z=
α 2π σ
σ>0
M.G.F. of X:
Z ∞ Z ∞
1 x−µ 2
− 12 ( σ )
MX (t) = E e tX
= tx
e f (x) dx = e √ e tx
dx
−∞ −∞ α 2π
Z ∞
1 x−µ 2
tx − 21 ( σ )
= √ e e dx
α 2π Z−∞
n
∞ Z ∞
1 etµ
g.i
t(σu+µ) − 12 (u)2 1 2
= √ e e σdu = √ etσu e− 2 u du
α 2π −∞ 2π −∞
x−µ
∵ put σ = u ⇒ dx = = σdu
!
n
σ du ⇒ dx
x =σu+µ, x → −∞ ⇒ u → −∞, x → ∞ ⇒ u → ∞
= √
etµ
Z ∞
1 2
e− 2 (u −2σtu) du eri
ine
2π −∞
2 2 Z
tµ σ 2t
Z ∞ ∞
etµ 1 2 σ2 t2 e e 1 2
= √ e − 2 (u−σt)
e 2 du = √ e− 2 (u−σt) du
2π −∞ 2π −∞
ng
Z ∞ − 1 (u−σt)2
σ2 t2 e 2 σ2 t2
= etµ e 2 √ du = etµ e 2 . (1)
2π
E
−∞
σ2 t2
= etµ e 2
arn
Mean of X:
( )
d
E (X) = [MX (t)]
Le
dt
)t=0 "
σ2 t2 σ
( 2
#
d tµ σ2 t2
σ2 t2
= e e 2 = etµ e 2 2t + µetµ e 2 =µ
w.
dt t=0
2 t=0
Variance of X :
ww
Var(X) = E X − [E (X)]2 2
( 2 )
d
Now, E X2 = [MX (t)]
dt2 t=0
d tµ σ2 t2 σ
( " 2
#)
σ2 t2
= e e 2 2t + µe e 2
tµ
= µ2 + σ2
dt 2 t=0
∴ Var(X) = µ2 + σ2 − (µ)2 = σ2
∴ P.D.F.: f(x) = (1/(σ√(2π))) e^{−(1/2)((x−µ)/σ)²},  M.G.F. = e^{tµ} e^{σ²t²/2},  Mean = µ,  Variance = σ².
1.4 Summary of Discrete Distributions

1. Binomial (n, p) [Mean > Variance]:
   P(X = x) = nCx p^x q^{n−x}, x = 0, 1, 2, 3, ..., n, with p + q = 1;
   MX(t) = (pe^t + q)^n;  Mean = np;  Variance = npq.
2. Poisson (λ = np) [Mean = Variance]:
   P(X = x) = e^{−λ} λ^x / x!, x = 0, 1, 2, 3, ..., ∞;
   MX(t) = e^{λ(e^t − 1)};  Mean = λ;  Variance = λ.
3. Geometric (p) [Mean < Variance]:
   P(X = x) = q^x p, x = 0, 1, 2, ..., ∞, with p + q = 1;
   MX(t) = p/(1 − e^t q);  Mean = q/p;  Variance = q/p²;
   (or) P(X = x) = q^{x−1} p, x = 1, 2, 3, ..., ∞;
   MX(t) = pe^t/(1 − e^t q);  Mean = 1/p;  Variance = q/p².

Which distribution to use:
'x' successes in 'n' trials → Binomial distribution (when n is a finite number of trials).
'x' successes in 'n' trials → Poisson distribution (when the number of trials is indefinitely large, i.e., n → ∞, and p → 0).
Probability of performing 'x' trials to get the first success (first success in the last trial) → Geometric distribution.
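The Mean and Variance columns above can be checked empirically by simulation. A minimal sketch using numpy (parameter values chosen only for illustration; note numpy's geometric sampler uses the "number of trials" convention):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 10, 0.3, 2.0

samples = {
    "binomial": rng.binomial(n, p, size=100_000),   # mean np = 3,  variance npq = 2.1
    "poisson": rng.poisson(lam, size=100_000),      # mean = variance = 2
    "geometric": rng.geometric(p, size=100_000),    # mean 1/p,  variance q/p^2
}
for name, x in samples.items():
    print(name, round(x.mean(), 3), round(x.var(), 3))
```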
1.5 Summary of Continuous Distributions

1. Uniform (a, b):
   f(x) = 1/(b − a), a < X < b;
   MX(t) = 1 + (b + a)t/2! + (b² + ba + a²)t²/3! + · · ·;  Mean = (a + b)/2;  Variance = (b − a)²/12.
2. Exponential (α):
   f(x) = αe^{−αx}, x ≥ 0;
   MX(t) = α/(α − t);  Mean = 1/α;  Variance = 1/α².
3. Gamma (β):
   f(x) = e^{−x} x^{β−1}/Γ(β), β > 0, 0 ≤ x < ∞;
   MX(t) = 1/(1 − t)^β;  Mean = β;  Variance = β.
4. Normal (µ, σ²):
   f(x) = (1/(σ√(2π))) e^{−(1/2)((x−µ)/σ)²}, −∞ < x < ∞, −∞ < µ < ∞, σ > 0;  z = (x − µ)/σ;
   MX(t) = e^{tµ} e^{σ²t²/2};  Mean = µ;  Variance = σ².

* A continuous random variable X is said to follow an Erlang distribution with parameters α, β > 0 if its p.d.f. is given by
  f(x) = α^β e^{−αx} x^{β−1}/Γ(β), α, β > 0, 0 ≤ X < ∞.
  Note: Symbolically, X ∼ Erlang distribution (α, β).
36 UNIT I - Random Variables
n
0 0 0
g.i
2. The number of hardware failures of a computer system in a week of
operations has the following p.m.f.:
n
Number of failures 0 1 2 3 4 5 6
eri
Probability .18 .28 .25 .18 .06 .04 .01
∀x x=0
= (0)(.18) + (1)(.28) + (2)(.25) + (3)(.18) + (4)(.06)
E
Var (X) = E X2 − [E (x)]2 = 4.90 − (1.82)2 = 1.5876
3. The mean of a binomial distribution is 20 and standard deviation is 4.
Find the parameters of the distribution.
Le
p : Given Mean
Solution = np = 20.
√
S.D. = Var (X) = npq = 4 ⇒ npq = 16⇒ (20) q = 16 ⇒ q = 4/5,
w.
∴ p = 1 − q = 1/5
np = 20 ⇒ n (1/5) = 20 ⇒ n = 100
ww
1, 0 < y < 1
(
∴g y =
0, otherwise
5. Is the function defined as follows a density function?
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 37
x<2
0,
f (x) = (3 + 2x) , 2 ≤ x ≤ 4
1
18
x>4
0,
Z ∞
Solution : Condition for p.d.f. is f (x) dx = 1
−∞ Z
Z 2 Z 4 ∞
1
LHS = 0 dx + (3 + 2x) dx + 0 dx = 1
−∞ 2 18 4
Hence the given function is density function.
n
6. Define Poisson distribution and state any two instance where Poisson
g.i
distribution may be successfully employed.
Solution : Poisson distribution : A random variable X is said to be
follow Poisson distribution if it assumes only non-negative values and
n
its probability mass function is given by
P(X = x) = p(x) = e^{−λ} λ^x / x!, x = 0, 1, 2, 3, ..., ∞ and λ > 0; p(x) = 0 otherwise.
7. A random variable X has p.d.f. f(x) = 2x, 0 < x < 1; 0 elsewhere. Find the p.d.f. of Y = 8X³.
Solution: Given f(x) = 2x, 0 < x < 1; 0 elsewhere.
ww
Let y = 8x .
3
dx 1 1/3 1 1 (1/3)−1 1 1
fY y = fX (x) =2 y = y1/3 y−2/3 = y−1/3
y
dy 2 23 6 6
8. If Var(X) = 4, find Variance(3X + 8), where X is a random variable.
Solution : W.K.T. Var(aX + b) = a2 Var (X)
h i
E [X (X − 1)] = E X − X 2
= E X2 − E (X) ⇒ 4 = E X2 − 1 ⇒ E X2 = 4 + 1 = 5
Var (X) = E X2 − [E (X)]2
=5−1=4
h i
∴ Var (2 − 3x) = (−3)2 Var (X) ∵ Var (aX ± b) = a2 Var (X)
= 9(4) = 36
10. Let X be r.v. taking values -1,0 and 1 such that
P(X = −1) = 2P(X = 0) = P(X = 1). Find the mean of 2X − 5.
n
Solution : Let P(X = 1) = K.
g.i
Then P(X = −1) = K and P(X = 0) = K/2.
∴ The distribution of X is given by
n
eri
X −1 0 1
2 K 1 2
P(X = x) K = = K=
5 2 5 5
ine
Since the total probability = 1
5K 2
=1⇒K=
ng
We have
2 X5 2 1 2
Mean of X = E (X) = x · P(X = x) = (−1) + 0 +1 =0
E
x
5 5 5
Mean of 2X − 5 = E (2X − 5) = 2E (X) − 5 = −5.
arn
−∞ −∞
" #5 " ! !#
x2 52 22
⇒k x+ =1⇒k 5+ − 2+ =1
ww
2 2 2 2
21
⇒k 3+ =1
2
2
⇒k=
27 Z 4 Z 4
2
Now P[X < 4] = P[2 < X < 4] = k(1 + x) dx = (1 + x)dx
2 2 27
2 4
!
2 x 16
= x+ =
27 2 2 27
12. A continuous random variable X has p.d.f. f (x) = 3x2 ; 0 < x < 1. Find
a such that P(X = a) = P(X > a).
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 39
1
Solution: P(X = a) = P(X > a) =
Z a " 3 #a 2
1 x 1 h ia 1 1
i.e., 3x dx = ⇒ 3
2
= ⇒ x3 = ⇒ a3 =
0 2 3 0 2 0 2 2
1/3
1
⇒a=
2
13. A continuous random variable X has probability density function
given by f (x) = 3x2 , 0 = x = 1. Find k such that P(X > k) = 0.05.
Solution : P(X > k) = P(X = k) = 0.05
n
R∞ R∞
k
f (x) dx = 0.05 ⇒ k 3x dx = 0.05 ⇒ x3 1k = 0.05
2
g.i
⇒ 1 − k3 = 0.05 ⇒ k = 0.9830
14. Find the cumulative distribution function F(x) corresponding to the
n
1 1
p.d.f. f (x) = , −∞ < x < ∞.
eri
π 1 + x2
Solution : The cumulative distribution function F(x) corresponding to
the p.d.f. f(x) is given by
ine
Z x Z x
1 1 1 h −1 ix
F (x) = f (x)dx = dx = tan x
−∞ π 1 + x π
2 −∞
−∞
ng
1 h −1 i 1 π
= tan x − tan (−∞) =
−1
tan−1 x +
π π 2
E
15. The p.d.f. of a random variable X is f (x) = 2x, 0 < x < 1, find the
probability density function, of Y = 3x + 1.
arn
Y−1 dX
X= ⇒ = 1/3
3 dY
w.
dX Y−1 1 2
y = fX (x) =2 = (Y − 1)
fY
dY 3 3 9
ww
−∞
Ce−x dx = 1 ⇒ C x (−1) − (1) (−1)2 = 1
e e
0
⇒ C [(0 − 0) − (−1)] = 1 ⇒ C = 1
xe ; x > 0
( −x
∴ f (x) =
0; x ≤ 0
Rx x
e−x e−x
∴ C.d.f. F (x) = 0 xe dx = x (−1) − (−1)2 = 1 − (1 + x) e−x
−x
0
[Link]
[Link]
40 UNIT I - Random Variables
Example 1.1 Let X denotes the number of heads in an experiment of tossing two
coins. Find
(a) probability distribution (b) Cumulative distribution
(c) Mean of X (d) Variance of X
(e) MGF of X (f) P (X ≤ 1)
n
(g) P (|X| ≤ 1) (h) find minimum value of c
g.i
such that P (X ≤ c) > 1/2.
n
Solution : W.K.T. by tossing two coins, the sample space is
eri
S = {HH, HT, TH, TT}, n(S) = 4
Given X is a random variable which denotes the number of heads.
ine
X(HH) = 2, X(HT) = 1, X(TH) = 1, X(TT) = 0
∴ The range space of X is RX = {0, 1, 2}.
ng
X: 0 1 2
arn
1 2 1 1
P(X = x) : =
4 4 2 4
(b) Cumulative distribution F(X = x) = P(X ≤ x) :
Le
X: 0 1 2
1 3 4
F(X = x) : P(X ≤ 0) = P(X ≤ 1) = P(X ≤ 2) = = 1
w.
4 4 4
(c) Mean of X :
ww
X
E(x) = xP(X = x)
∀x
x=2
X
= x · P(X = x)
x=0
= 0 · P(X = 0) + 1 · P(X = 1) + 2 · P(X = 2)
2 1
=0+ +2· (refer (a))
4 4
1 1
= +
2 2
=1
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 41
(d) Variance of X : V(X) = E X − [E(X)]2 2
X
Now, E X 2
= x2 · P(X = x)
∀x
x=2
X
= x2 · P(X = x)
x=0
= 0 · P(X = 0) + 12 · P(X = 1) + 22 · P(X = 2)
2
2 1
=0+1· +4·
n
(refer a)
4 4
g.i
1
= +1
2
3
n
=
2
3
∴ V(X) = − 1
2 eri (refer c)
ine
1
=
2
ng
h i
(e) MGF of X : MX (t) = E etX
E
h i
MX (t) = E etX
arn
X
= etx · P(X = x)
∀x
Le
x=2
X
= etx · P(X = x)
x=0
w.
1 2 1
= 1 · + et · + e2t · (refer a)
ww
4 4 4
1 1 1
= + et + e2t
4 2 4
(f) P (X ≤ 1) =
= P(X = 0) + P(X = 1)
1 2
= + (refer a)
4 4
3
=
4
[Link]
[Link]
42 UNIT I - Random Variables
(g) P (|X| ≤ 1) =
= P(X = −1) + P(X = 0) + P(X = 1)
1 2
=0+ + (refer a)
4 4
3
=
4
(h) Minimum value of c such that P (X ≤ c) > 1/2.
1
X P(X ≤ c) >
2
n
P(X ≤ 0) = P(X = 0)
g.i
1 1
0 = >
4 2
n
∴c,0
1 2
1
P(X ≤ 1) = P(X = 0) + P(X = 1) =
=
3 1
>
+
4 4
eri
ine
4 2
∴c=1
1 2 1
P(X ≤ 2) = P(X = 0) + P(X = 1) + P(X = 2) = + +
ng
4 4 4
2 1
=1>
E
2
∴c=2
arn
1
∴ P(X ≤ c) > satisfies for c = 1, 2.
2
∴ Minimum value of C is 1.
Le
Example 1.2 A discrete random variable X has the probability function given
w.
below:
ww
X 0 1 2 3 4 5 6 7
P(X) 0 a 2a 2a 3a a2 2a2 7a2 + a
Find
(a) a (b) cumulative distribution
(c) P (X < 6) (d)P (3 ≤ X < 6)
(e) P (2X + 3 < 7) (f) P (X > 1 / X < 3)
(g) find minimum value of c such that
3
P (X ≤ c) > .
4
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 43
Solution : X
(a) By the property of p.m.f., W.K.T. P(X = x) = 1
∀x
x=7
X
P(X = x) = 1
x=0
0 + a + 2a + 3a + a + 2a + 7a2 + a = 1
2 2
10a2 + 9a = 1
10a2 + 9a − 1 = 0
n
10a2 + 10a − a − 1 = 0
g.i
10a(a + 1) − (a + 1) = 0
(10a − 1)(a + 1) = 0
n
1
a=
eri
or − 1
10
But a , −1 (∵ Probability ∈ [0,1])
ine
1
∴a=
10
ng
X: 0 1 2 3 4 5 6 7
P(X) : 0 1/10 2/10 2/10 3/10 1/100 2/100 17/100
arn
X: 0 1 2 3 4 5 6 7
F(X = x) : 0 1/10 3/10 5/10 8/10 81/100 83/100 1
w.
n
P[(X = 2, 3, 4, 5, 6, 7) ∩ (X = 0, 1, 2)]
=
g.i
P(X = 0, 1, 2)
P[(X = 2)]
= P(X = 2)
n
P(X = 0) + P(X = 1)+
eri
2
10
= 1
10 + 2
10
ine
2
=
= 0.666666
3
3
(g) Minimum value of c such that P (X ≤ c) > .
ng
4
3
X P(X ≤ c) > = 0.75 Remarks
E
4
0 P(X ≤ 0) = F(X = 0) = 0
>0.75 ∴c,0
arn
1
1 P(X ≤ 1) = F(X = 1) = >
0.75 ∴c,1
10
3
2 P(X ≤ 2) = F(X = 2) = >
0.75 ∴c,2
Le
10
5
3 P(X ≤ 3) = F(X = 3) = >
0.75 ∴c,3
10
w.
8
4 P(X ≤ 4) = F(X = 4) = > 0.75 ∴c=4
10
81
ww
Example 1.3 Let X represents the sum of number of two dice. Find
(a) distribution of X (b) P (|X − 1| ≤ 3).
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 45
n
n(S) = number of elements in S = 36 = 62 dice
g.i
Given X = sum of number of two dice.
∴ X = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}
n
(a) Probability distribution of X:
X: 2 3 4 eri
5 6 7 8 9 10 11 12
ine
1 2 3 4 5 6 5 4 3 2 1
P(X = x) :
36 36 36 36 36 36 36 36 36 36 36
ng
6 1
= =
36 6
w.
Example 1.4 Let X takes the values −1, 0, 1 such that P (X = −1) = 2P (X = 0) =
ww
∴ P(X = −1) = k
k
P(X = 0) =
2
P(X = 1) = k
X
P(X = x) = 1
∀x
x=1
X
P(X = x) = 1
x=−1
P(X = −1) + P(X = 0) + P(X = 1) = 1
k
k+ +k=1
2
n
2
5k = 2 ⇒ ∴ k =
g.i
5
The probability distribution table is
n
X: −1 0 1
P(X = x) : 2/5 1/5 2/5
eri
Example 1.5 Let X takes values 1, 2, 3, 4 such that 2P (X = 1) = 3P (X = 2) =
ine
P (X = 3) = 5P (X = 4). Find the distributions of X.
ng
Solution : Let 2P (X = 1) = 3P (X = 2) = P (X = 3) = 5P (X = 4) = k
E
k
∴ P(X = 1) =
2
arn
k
P(X = 2) =
3
P(X = 3) = k
Le
k
P(X = 4) =
5
w.
∀x
x=4
X
P(X = x) = 1
x=1
P(X = 1) + P(X = 2) + P(X = 3) + P(X = 4) = 1
k k k
+ +k+ =1
2 3 5
15k + 10k + 30k + 6k
=1
30
30
61k = 30 ⇒ ∴ k =
61
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 47
X: 1 2 3 4
P(X = x) : 15/61 10/61 30/61 6/61
F(X = x) : 15/61 20/61 55/61 61/61 = 1
n
Example 1.6 Check whether the following functions are probability density
g.i
function (p.d.f.):
1 1
(a) f (x) = 6x(1 − x), 0 ≤ X ≤ 1 (b) f (x) = , −∞ < X < ∞
n
100 π 1 + x2
x2 , X > 100
eri
(c) f (x) = (d) f (x) = sin x, 0 < x < π
0, X < 100
ine
Zb
arn
(a) Given f (x) = 6x(1 − x) = 6 x − x , 0 ≤ X ≤ 1
2
Z∞
f (x)dx = 1
w.
Z1 Z1
LHS of (1) = f (x)dx = 2
6 x − x dx
0 0
" #1
6x2 6x3 h i1
= − = 3x2 − 2x3
2 3 0 0
= [(3 − 2) − (0 − 0)] = 1
= RHS of (1)
Z∞
We have to prove f (x)dx = 1 (2)
−∞
Z∞
LHS of (2) = f (x)dx
−∞
1 h −1 i∞
= tan x
π −∞
1 π −π
= −
n
π 2 2
1
g.i
= [π] = 1
π
= RHS of (2)
n
∴ The given f (x) is p.d.f.
100
eri
2 , X > 100
(c) Given f (x) =
x 0, X < 100
ine
Z∞
We have to prove f (x)dx = 1 (3)
ng
−∞
Z∞ Z∞
100
E
100 100
∞
−1 1
= 100 = 100 0 +
x 100 100
Le
=1
= RHS of (3)
w.
Z∞
We have to prove f (x)dx = 1 (4)
−∞
Zπ Zπ
LHS of (4) = f (x)dx = sin xdx = [− cos x]π0 = −[cos x]π0
0 0
= −[(cos 0) − (cos π)] = −[(−1) − (1)] = 2 , 1
, RHS of (4)
∴ The given f (x) is not p.d.f.
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 49
ax, X ∈ [0, 1]
a, X ∈ [1, 2]
Example 1.7 Given p.d.f. f (x) =
. Find
−ax + 3a, X ∈ [2, 3]
0, X > 3
1 3
(a) a (b) P ≤ X ≤ (c) P (|X| ≤ 2)
2 2
1
(d) P (X + 1 < 3) (e) P X > / X < 2 (f) P(X=2)
2
n
ax, X ∈ [0, 1]
g.i
a, X ∈ [1, 2]
Solution : Given f (x) =
. (1)
−ax + 3a, X ∈ [2, 3]
n
0, X > 3
eri
Here f (x) is not defined clearly. i.e., it has unknown quantity a. So we
have to find that unknown from the property of pdf f (x)
ine
Z∞
i.e., f (x)dx = 1 (2)
ng
−∞
(a) To find a :
E
Z0 Z1 Z2 Z3 Z∞
f (x)dx = 1
arn
2 1
" # 2
#3
"
ax −ax
0+ + [ax]21 + + 3ax + 0 = 1
2 0
2 2
ww
a −9a
− (0) + [(2a) − (a)] + + 9a − (−2a + 6a) = 1
2 2
−4a + 6a = 1
1
2a = 1 ⇒ ∴ a =
2
x
, X ∈ [0, 1]
2
1
,
X ∈ [1, 2]
∴ (1) ⇒ f (x) = .
2 (3)
x 3
− + ,
X ∈ [2, 3]
2 20,
X>3
[Link]
[Link]
50 UNIT I - Random Variables
Z3/2
1 3
(b) P ≤X≤ = f (x)dx
2 2
1/2
Z1 Z3/2
x 1
= dx + dx [∵ by (3)]
2 2
1/2 1
#1 "
x2
3/2
x
= +
4 1/2 2 1
n
1 h i1 1
= x2 + [x]13/2
g.i
4 1/2 2
1 1 1 3
= (1) − + − (1)
4 4 2 2
n
1 3 1 1
= +
eri
4 4 2 2
3 1
= +
16 4
ine
7
=
16
Z2
ng
−2
Z0 Z1 Z2
arn
x 1
= (0)dx + dx + dx [∵ by (3)]
2 2
w.
−2 0 1
2 1
"
2 #
x x 1 1
=0+ + = +1−
4 0 2 1 4 2
ww
1 1 3
= + =
4 2 4
Z2
(d) P (X + 1 < 3) = P (X < 3 − 1) = P (X < 2) = f (x)dx
−∞
Z0 Z1 Z2
x 1
= (0)dx + dx + dx [∵ by (3)]
2 2
−∞ 0 1
#1 2"
x2 x 1 1 3
=0+ + = + =
4 0 2 1 4 2 4
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 51
h i
P X > 1 ∩ (X < 2) " #
1 P (A ∩ B)
2
(e) P X > / X < 2 = ∵ P(A/B) =
2 P(X < 2) P(B)
h i
P 2 <X<2
1
=
P(X < 2)
2 Z1 Z2
Z
1 x 1
Now, P < X < 2 = f (x)dx = dx + dx [∵ by (3)]
2 2 2
1/2 1/2 1
11
= [by simplification]
n
16
3
g.i
By (d), we have P(X < 2) =
4
1 11/16 11
∴P X> /X<2 = =
n
2 3/4 12
eri
Z2
(f) P(X = 2) = f (x)dx
ine
2
=0 [∵ lower and upper limits are same]
ng
0, X < 1
F(x) =
k(x − 1)4 , 1 ≤ X ≤ 3 .
Le
1, X > 3
w.
0, X < 1
Solution : Given F(x) = k(x − 1)4 , 1 ≤ X ≤ 3 .
1, X > 3
(a) To find f (x) (p.d.f.) :
d[F(x)]
W.K.T. f (x) =
dx
[Link]
[Link]
52 UNIT I - Random Variables
d [0]
, X<1
dx
h i
d k(x − 1)4
=
, 1≤X≤3
dx
d [1]
1, X > 3
dx
0, X < 1
∴ f (x) = 4k(x − 1)3 , 1 ≤ X ≤ 3
0, X > 3
n
g.i
(b) to find k :
n
eri
Z∞
W.K.T. f (x)dx = 1 (∵ f (x) has an unknown quantity k)
ine
−∞
Z3
4k(x − 1)3 dx = 1
ng
1
n+1
4 3 ax1 + b
" # Z
(x − 1) n
E
4k =1 (∵ ax1 + b = )
4 (n + 1)(a)
arn
h 1i
k (3 − 1)4 − (1 − 1)4 = 1
k[16] = 1
Le
1
∴k=
16
0, X < 1
w.
1
∴ f (x) =
(x − 1)3 , 1 ≤ X ≤ 3
4
ww
0, X > 3
Z2
(c) P(1 < X < 2) = f (x)dx
1
Z2 " 4 2
#
1 1 (x − 1)
= (x − 1)3 dx =
4 4 4 1
1
1
∴ P(1 < X < 2) =
16
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 53
(d )P(|X| ≤ 2) = P(−2 ≤ X ≤ 2)
Z2
= f (x)dx
−2
Z1 Z2 Z3
0 *0
= f (x)
* dx + f (x)dx + f (x)
dx
−2 1 2
Z2 " 4 2
#
1 1 (x − 1)
= (x − 1)3 dx =
n
4 4 4 1
g.i
1
1
∴ P(|X| ≤ 2) =
16
n
(e )P(X + 3 < 5) = P(X < 5 − 3) = P(X < 2)
=
Z2
f (x)dx eri
ine
−∞
Z1 Z2
*0 1
ng
= f(x)
dx + (x − 1)3 dx
4
−∞ 1
E
1
∴ P(X + 3 < 5) =
∵ by (c)
16
arn
1
P X> ∩ (X < 2) " #
1 P (A ∩ B)
2
(f )P X > / X < 2 = ∵ P(A/B) =
Le
=1
ww
x
, X ∈ [0, 1]
2
1
,
X ∈ [1, 2]
Example 1.9 Given p.d.f. f (x) = 2
. Find c.d.f. F(x).
x 3
− + ,
X ∈ [2, 3]
2 2
X>3
0,
[Link]
[Link]
54 UNIT I - Random Variables
x
, X ∈ [0, 1]
2
1
, X ∈ [1, 2]
Solution : Given f (x) =
2 . (1)
x 3
− + , X ∈ [2, 3]
2 20, X > 3
W.K.T., the relation to find F(x) from f (x) is
Zx
n
F(x) = f (x)dx
g.i
−∞
n
In [0, 1]
F(x) =
Zx
f (x)dx
eri
ine
−∞
Z0 Zx
ng
= f (x)dx + f (x)dx
−∞ 0
E
Z0 x∈[0,1]
Z
*0 x
= dx +
arn
f (x)
dx (∵ by (1))
2
−∞ 0
2
x
=
Le
4
w.
In [1, 2]
ww
Zx
F(x) = f (x)dx
−∞
Z0 Z1 Zx
= f (x)dx + f (x)dx + f (x)dx
−∞ 0 1
Z0 1∈[0,1]
Z x∈[1,2]
Z
*0 x 1
= f(x)
dx + dx + dx (∵ by (1))
2 2
−∞ 0 1
x 1
= −
2 4
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 55
In [2, 3]
Zx
F(x) = f (x)dx
−∞
Z0 Z1 Z2 Zx
= f (x)dx + f (x)dx + f (x)dx + f (x)dx
−∞ 0 1 2
Z0 1∈[0,1]
Z 2∈[1,2]
Z x∈[2,3]
Z
0 x 1 −x 3
= + dx + dx + + dx
n
f (x)
* dx (∵ by (1))
2 2 2 2
g.i
−∞ 0 1 2
2
−x 3x 5
= + −
4 2 4
n
eri
In [3, ∞)
Zx
ine
F(x) = f (x)dx
−∞
Z0 Z1 Z2 Z3 Z∞
ng
0 1 2 3
Z0 1∈[0,1]
Z 2∈[1,2]
Z 3∈[2,3]
Z Z∞
arn
*0 x 1 −x 3
= f (x)
dx+ dx+ dx+ + dx+ (0)dx
2 2 2 2
−∞ 0 1 2 3
(∵ by (1))
Le
=1
w.
x2
, X ∈ [0, 1]
4
x 1
ww
− , X ∈ [1, 2]
∴ F(x) =
2
2 4
x 3x 5
+ , X ∈ [2, 3]
− −
4 4 4
1, X > 3
dF(x)
Note : Check = f (x) is true for the above problem.
dx
W.K.T. P (X ≤ k) + P (X > k) = 1
2P (X ≤ k) = 1 (∵ P (X ≤ k) = P (X > k))
1
P (X ≤ k) =
2
1
∴ (2) ⇒ P (X ≤ k) = P (X > k) = (3)
2
n
From (3), we have
g.i
1 1
P (X ≤ k) = or P (X > k) =
2 2
n
Zk Z∞
1 1
eri
i.e., f (x)dx = i.e., f (x)dx =
2 2
−∞ k
ine
Zk Z1
1 1
3x2 dx = [∵ by (1)] 3x2 dx = [∵ by (1)]
2 2
0 k
ng
h ik 1 h i1 1
x3 = x3 =
0 2 k 2
E
1 1
k3 = 1 − k3 =
2 2
arn
1 1
1 3 1 3
∴k= ∴k=
2 2
Le
Z1
3x2 dx = 0.1 [∵ by (1)]
α
h i1 1
x3 = 0.1 =
α 10
1
1 − α3 =
10
9
α3 =
10
1
9 3
∴α=
10
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 57
(c) P(|X| ≤ 1) :
n
h i1
= x3 dx
g.i
0
=1
n
eri
1.6.3 Examples of moments
Example 1.11 The first 4 raw moments about X = 4 are 1, 4, 10, 45 respectively.
ine
Show that the mean is 5, variance is 3, µ3 = 0 and µ4 = 26.
ng
Given
0
h i
The 1 moment about the point X = 4 is
st
µ1 = E (X−4) = 1 1
(1)
Le
0
h i
The 2nd moment about the point X = 4 is µ2 = E (X−4)2 = 4 (2)
0
h i
The 3rd moment about the point X = 4 is µ3 = E (X−4) = 10
3
(3)
w.
0
h i
The 4th moment about the point X = 4 is µ4 = E (X−4)4 = 45 (4)
ww
∴ (1) ⇒ E[X − 4] = 1
⇒ E[X] − 4 = 1
∴ E[X] = 5 (5)
h i
∴ (2) ⇒ E (X − 4) = 4
2
h i
⇒ E X2 − 8X + 16 = 4
h i
⇒ E X2 − 8E[X] + 16] = 4
h i
⇒ E X2 − 8(5) + 16 = 4
h i
∴ E X2 = 28 (6)
[Link]
[Link]
58 UNIT I - Random Variables
h i
∴ (3) ⇒ E (X − 4) = 103
h i
⇒ E X3 − 12X2 + 48X − 64 = 10
h i h i
⇒ E X − 12E X2 + 48E[X] − 48 = 10
3
h i
⇒ E X3 − 12(28) + 48(5) − 48 = 10
h i
∴ E X3 = 170 (7)
h i
∴ (4) ⇒ E (X − 4) = 454
n
h i
⇒ E X4 − 16X3 + 96X2 − 256X + 256 = 45
g.i
h i h i h i
⇒ E X − 16E X + 96E X2 − 256E [X] + 256 = 45
4 3
h i
⇒ E X4 − 16(170) + 96(28) − 256(5) + 256 = 45
n
h i
∴ E X4 = 1101 (8)
∴ The central moments about X = A are
eri
ine
µ1 = Mean = E[X] = 5
h i
µ2 = Variance = E X2 − [E(X)]2 = 28 − 52 = 3
ng
0 0 0
h 0 i3
µ3 = µ3 − 3µ2 µ1 + 2 µ1 = 10 − 3(4)(1) + 2(1)3 = 0
0 0 0 0
h 0 i4
µ4 = µ4 − 4µ3 µ1 + 6µ2 − 3 µ1 = 45 − 4(10)(1) + 6(4)(1)2 − 3(1)4 = 26
E
arn
Alternative Method :
∴ Mean = A + µ11 = 4 + 1 = 5
0
h 0 i2
Variance = µ2 = µ2 − µ1 = 4 − 1 = 3
w.
0 0 0
h 0 i3
µ3 = µ3 − 3µ2 µ1 + 2 µ1
ww
= 10 − 3(4)(1) + 2(1)3 = 10 − 12 + 2 = 0
0 0 0 0
h 0 i2 h 0 i4
µ4 = µ4 − 4µ3 µ1 + 6µ2 µ1 − 3 µ1
= 45 − 4(10)(1) + 6(4)(1)2 − 3(1)4 = 26
n
∞
X 1
= etx
g.i
2x
x=1
∞ t 2
!
X e
n
=
2
eri
x=1
!1 !2 !3
e1t e2t e3t
= + + + ···
2 2 2
ine
! !1 !2
e1t e1t e2t
= 1 + + + · · ·
2 2 2
ng
!" !#−1
e1t et
= 1− (∵ 1 + x + x2 + x3 · · · = [1 − x]−1 )
2 2
E
!
e1t 2
=
arn
2 2 − et
et
∴ M.g.f of X =
2 − et
Le
w.
Mean of X = E[X]
ww
( )
d
= [MX (t)]
dt
( " t #)t=0
d e
=
dt 2 − et t=0
2 − et et + e2t
(" #)
=
(2 − et )2 t=0
1+1
=
1
∴ Mean of X = 2
Variance of X = VarX = E X2 − [E(X)]2
[Link]
[Link]
60 UNIT I - Random Variables
( )
d2
Now, E X 2
= [MX (t)]
dt2
t=0 2t #)
d 2−e e +e t t
( "
= (refer Mean of X)
dt (2 − et )2 t=0
( " t
#)
d 2e
=
dt (2 − et )2 t=0
2 2t 2t
2 2 − e e + 4 2 − e e
t t
n
=
(2 − et )4
g.i
t=0
2+4
=
1
n
=6
eri
ine
∴ Var(X) = 6 − 4 (∵ Mean of X = E(X) = 2)
=2
E ng
2e
−2x
,X ≥ 0
arn
Example 1.13 Given p.d.f. f (x) =
.
0 , otherwise
Find m.g.f. and hence find mean, variance, standard deviation and µ3 .
Le
h i
M.G.F. of X = MX (t) = E E tX
Z∞
ww
= etx f (x)dx
−∞
Z∞ Z∞
= 2 etx e−2x dx = 2 e−(2−t)x dx
" #
2
MX (t) =
(2 − t)
2
= h i , |t| < 2
2 1 − t
2
−1
t
= 1−
2
2 3
t t t
=1+ + ++ + ···
2 2 2
n
! !
1 t 1 t2 3 t3
=1+ + + + ···
g.i
2 1! 2 2! 4 3!
t 0 1
W.K.T. Coefficient of = µ1 = = Mean
n
1! 2
eri
2
t 0 1
Coefficient of = µ2 =
2! 2
3
t 3
ine
0
Coefficient of = µ3 =
3! 4
0 2 1 1 2 1 1 1
ng
0
∴ Variance = µ2 = µ2 − µ1 = − = − =
r 2 2 2 4 4
√ 1 1
E
∴ S.D. = Variance = =
4 2
arn
0 0 0
h 0 i3 3 3 1
µ3 = µ3 − 3µ2 µ1 + 2 µ1 = − + = 0
4 4 4
Le
w.
ww
Example 1.14 A continuous random variable X has a p.d.f. f (x) = Kx2 e−x , x ≥ 0.
Find mean, variance and m.g.f.
Z∞
By pdf, wkt f (x)dx = 1 (∵ pdf contains an unknown K)
−∞
Z∞
K x2 e−x dx = 1 (∵ by (1))
n
0
K[(0 − 0 − 0) − (0 − 0 − 2)] = 1
g.i
1
∴K=
2
n
1
∴ f (x) = x2 e−x , x ≥ 0. eri
ine
2
Mean of X:
ng
Z∞
E
E(X) = x f (x)dx
arn
−∞
Z∞
1 3 −x
= x e dx
Le
2
0
" " # " −x # " −x ##∞
1 3 e−x e e
= x − 3x2 +6
w.
2 −1 1 −1 0
ww
1 h 3 −x i∞
=
x e − 3x2 e−x − 6e−x
2 0
1
= [(0 − 0 − 0) − (0 − 0 − 6)]
2
1
= [6]
2
∴ E[X] = 3
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 63
h i
Variance of X : Var(X) = E X − [E(X)]2 2
h i Z∞
Now, E X2 = x2 f (x)dx
−∞
Z∞
1 4 −x
=x e dx
n
2
0
g.i
" " # " −x # " −x # " −x # " −x ##∞
1 4 e−x e e e e
= x − 4x3 + 12x2 − 24x + 24
2 −1 1 −1 1 −1 0
n
1h i∞
= −x4 e−x − 4x3 e−x − 12x2 e−x − 24xe−x − 24e−x
2
eri
0
1
= [(−0 − 0 − 0 − 0 − 0) − (−0 − 0 − 0 − 0 − 24)]
ine
2
1
= [24]
2
∴ E[X] = 12
ng
h i
∴ Var(X) = E X2 − [E(X)]2 = 12 − 32 = 3
E
arn
Le
MGF of X:
w.
h i
MGF = MX (t) = E etX
ww
Z∞
= etx f (x)dx
−∞
Z∞
1
= etx x2 e−x dx
2
0
Z∞
1
= x2 e−(1−t)x dx
2
0
[Link]
[Link]
64 UNIT I - Random Variables
n
g.i
1 −x
Example 1.15 Let X be a random variable with p.d.f. f (x) = e 3 , x ≥ 0. Find
3
n
P (X > 3), [Link] X, mean and variance.
Solution :
eri
ine
x , X ∈ [0, 1]
Example 1.16 For the triangular distribution f (x) =
2 − x , X ∈ [1, 2] .
ng
0 , otherwise
Find mean, variance and m.g.f.
E
x , X ∈ [0, 1]
f (x) = 2 − x , X ∈ [1, 2]
(1)
Le
0 , otherwise
Mean of X:
w.
Z∞
E(X) = x f (x)dx
ww
−∞
Z0 Z1 Z2 Z∞
0 *0
= f(x)
* dx + x · xdx + x · (2 − x)dx + f(x)
dx (∵ by(1))
−∞ 0 1 0
Z1 Z2
= x2 dx + (2x − x2 )dx
0 1
"#1 " 2 #2
x3 2x x3
= + −
3 0 2 3 1
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 65
1 8 1
= − (0) + 4 − − 1 −
3 3 3
1 7
= + 4−
3 3
1 2
= +
3 3
∴ E[X] = 1
h i
Variance of X : Var(X) = E X2 − [E(X)]2
n
Z∞
g.i
h i
Now, E X2 = x2 f (x)dx
−∞
n
Z0 Z1 Z2 Z∞
*0 *0
= dx + x2 · xdx + x2 · (2 − x)dx +
eri
f(x)
f(x)
dx
−∞ 0 1 0
(∵ by(1))
ine
Z1 Z2
= x3 dx + 2x2 − x3 dx
ng
0 1
#1 " 3
" #2
x4 2x x4
= + −
E
4 0 3 4
1
arn
1 16 16 2 1
= − (0) + − − −
4 4 4 3 4
1 14 15
= + −
Le
3 3 4
1 56 − 45
= +
4 12
w.
1 11
= +
4 12
ww
14
=
12
h i 7
∴ E X2 =
6
h i 7 1
∴ Var(X) = E X2 − [E(X)]2 = − 1 =
6 6
Example 1.17 Let X denotes the number of heads in an experiment of tossing two
coins. Find probability distribution by using Binomial distribution.
[Link]
[Link]
66 UNIT I - Random Variables
Solution :
X = a discrete r.v. which denotes the number of heads. (1)
1
p = Probability of getting a head from a single coin = (2)
2
1 1
q=1−p=1− = (3)
2 2
n = number of coins = 2 (3)
W.K.T. Binomial PMF of B.D. is P(X = x) = nCx px qn−x (4)
Probability distribution P(X = x) :
n
0 2−0
1 1 1 1
When X = 0, P(X = 0) = 2C0 =1·1· =
g.i
2 2 4 4
1 2−1
1 1 1 1 2
When X = 1, P(X = 1) = 2C1 =2· · =
n
2 2 2 2 4
eri
2 2−2
1 1 1 1
When X = 2, P(X = 2) = 2C2 =1· ·1=
2 2 4 4
Example 1.18 Find Binomial distribution and P(X = 4), for
ine
(a) mean 4, variance = 3 (b) mean 4, variance = 5.
ng
(1)
Mean of B.D. = np (2)
arn
4q = 3
3
∴q = (b) Mean = np = 4
ww
4
1 Variance = npq = 5
W.K.T. p = 1 − q = 1 −
4 4q = 5
(5) ⇒ np = 4 5
∴q = >1
1 4
∴ n = 16 ∵p=
4 which is not possible
x 16−x
1 3 ∵ 0 ≤ Probability ≤ 1
∴ (1) ⇒ P(X = x) = 16Cx
4 4
4 12
1 3
i.e., P(X = 4) = 16C4
4 4
= 0.22
Here Mean > Variance Here Mean < Variance
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 67
Solution :
n
Let X = Number of defectives
g.i
5
p = Probability of getting a defective = = 0.05
→0
100
q = 1 − p = 0.95
n
n = Number of screws = 15
eri
ine →∞
So, X∼B.D.(n, p)
ng
Example 1.20 In a certain city, 20% of population is literate, assume that 200
investigators take sample 3 of a individual to see they are literate. How many
investigators would you expect to report that three people or less are literate in the
sample?
Solution :
n
Let X =Number of literates
g.i
p =Probability of getting a literate
20
=Percentage of getting a literate = = 0.2
n
→0
100
q =1 − 0.2 = 0.8
n =Number of literates = 10
→∞
eri
ine
N =Number of investigators = 200
ng
So, X∼B.D.(n, p)
E
Example 1.21 In 256 sets of 12 tosses of coins, in how many cases may one expect
eight heads and 4 tails?
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 69
Solution :
n
So, X∼B.D.(n, p)
n g.i
W.K.T. P.M.F. of B.D. is P(X = x) = nCx px qn−x
= 30.212
arn
= 31
Example 1.22 6 dice are thrown 729 times, how many times do you expect at
Le
Solution :
ww
So, X∼B.D.(n, p)
n
∴ P(getting atleast 3 dice to get 5 or 6)
n g.i
=P(X ≥ 3)
=1 − P(X < 3) = 1 − P(X = 0, 1, 2)
=1 − [P(X = 0) + P(X = 1) + P(X = 2)] eri
ine
" 0 6 1 5 2 4 #
1 2 1 2 1 2
=1 − 6C0 + 6C1 + 6C2
3 3 3 3 3 3
ng
= N × P(X ≥ 3)
w.
= 729 × 0.320
= 233
ww
Example 1.23 With the usual notation, find p for a binomial distribution random
variable X, if n = 6 and 9[P(X = 4)] = P(X = 2).
9 [
6C
4
q2
4] p = [
6C p2 q4
2]
(∵ 6C4 = 6C2 by nCr = nCn−r )
i.e., 9p = q
2 2
9p2 − q2 = 0
9p2 − (1 − p)2 = 0
9p2 − 1 − p2 + 2p = 0
n
8p2 + 2p − 1 = 0
g.i
8p2 + 4p − 2p − 1 = 0
4p(2p + 1) − 1(2p + 1) = 0
n
(4p − 1)(2p + 1) = 0
eri
1 −1
p = or
4 2
−1
ine
p, (∵ 0 ≤ p ≤ 1)
2
1
∴p=
ng
4
E
arn
Le
w.
ww
[Link]
[Link]
72 UNIT I - Random Variables
n
P(X = x) = , x = 0, 1, 2, 3, · · · , ∞
x!
g.i
Given P (X = 1) = 0.3, P (X = 2) = 0.2. (1)
e−λ λ0
We have to find P(X = 0) = = e−λ
n
0!
which indicates to find λ from (1)
P (X = 1) = 0.3
eri
P (X = 2) = 0.2
ine
e−λ λ1 e−λ λ2
= 0.3 = 0.2
1! 2!
ng
(2) 0.3 3
4
λ=
3
Le
e−λ λ0 −4
P(X = 0) = = e 3 ∴ P(X = 0) = 0.263
w.
0!
Example 1.25 A manufacturer of pins knows that 5% of his product is defective.
ww
If he sells pins in boxes of 100, and guarantee is that not more 10 pins will be
defective. What is the probability that a box will fail to meet the guaranteed quality.
Solution :
Let X =Number of defectives
p =Probability of getting a defective
5
=Percentage of getting a defective = = 0.05
→0
100
n =Number of pins = 10
→∞
np =5
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 73
e−λ λx
W.K.T. P.M.F. of P.D. is P(X = x) =
x!
−5 x
e 5
=
x!
P(Number of defectives not more than 10)
=P(Number of defectives less than 10)
=P(X > 10)
n
=P(X ≥ 11)
g.i
=P(X = 11) + P(X = 12) + P(X = 13) + · · · + P(X = 100)
=(or)
n
=1 − P(X ≤ 10) = 1 − [P(X = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)]
=1 −
" −5 0
e 5
0!
+
1!
+ eri
e−5 51 e−5 52 e−5 53 e−5 54
2!
+
3!
+
4!
ine
#
e−5 55 e−5 56 e−5 57 e−5 58 e−5 59 e−5 510
+ + + + + +
5! 6! 7! 8! 9! 10!
ng
" 0 1 2 3 4 5 6 7 8 9 10
#
5 5 5 5 5 5 5 5 5 5 5
=1 − e−5 + + + + + + + + + +
0! 1! 2! 3! 4! 5! 6! 7! 8! 9! 10!
E
=0.013695
arn
Example 1.26 Find the probability that of ace of spade will drawn from pack of
well shuffled cards atleast once in 104 consecutive trials.
Le
Solution :
Let r.v.X =Number of ace spade
w.
1
= →0 (which indicates to use Poisson distribution)
52
n =Number of trials = 104→∞
1
np =104 = 2(= λ)
52
So, X∼P.D.(λ = np)
e−λ λx
W.K.T. P.M.F. of P.D. is P(X = x) =
x!
P(Number of ace spades at least one)
[Link]
[Link]
74 UNIT I - Random Variables
n
=0.8646 =0.8672
g.i
1
Example 1.27 In a blade factory there is a chance of for any blade to be
n
500
eri
defective. The blades are in packets of 10. Use Poisson distribution to calculate
the number of packets that containing
ine
(a) no defective (b) one defective (c) two defectives
ng
Solution :
=
500
=0.002→0 (which indicates to use Poisson distribution)
ww
10 1
np = = (= λ)
500 50
N =Number of Total Packets = 10000
e−λ λx
W.K.T. P.M.F. of P.D. is P(X = x) =
x!
1 x
1
e− 50 50
= x = 0, 1, 2, · · · , 10
x!
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 75
n
( )
1
− 50 1
e − 50 (c) N × P(X = 2) = 10000 × 0.00019603
(c) P(X = 2) =
g.i
2! = 1.9603 2
=0.00019603
n
1.6.7 Examples of Geometric distribution:
eri
ine
Example 1.28 With a probability that a child get a disease is 0.4. Find probability
that a child get disease (a) at 4th attempt (b) at 7th attempt.
ng
Solution :
E
Let X =Number of trials are performed until the first time disease occurs.
arn
q =1 − p = 0.6
w.
Solution :
n
g.i
W.K.T. P.M.F. of Geo.D. is P(X = x) = qx−1 p, x = 1, 2, 3, · · ·
n
eri
(a) P 3rd attempt = P(X = 3) = q3−1 p = q2 p = (0.2)2 (0.8) = 0.032
(b) P before 3rd attempt = P(X < 3) = P(X ≤ 2)
ine
x=2
X x=2
X
= q p=
x−1
(0.2)x−1 (0.8)
ng
x=1 x=1
h i
= (0.8) (0.2) + (0.2) = (0.8)[1.2] = 0.96
0 1
E
= 0.04
(d) P even attempts = P(X = 1) + P(X = 3) + P(X = 5) + · · ·
Le
h i−1
= (0.2)(0.8) 1 − (0.2)2
ww
= (0.16)(1.041666667) = 0.166666667
(e) P odd attempts = 1 − P(even attempts)
= 1 − 0.166666667
= 0.833333333
Example 1.30 A and B shoot independently until each has hit his own target.
3 5
The probabilities of their hitting the target at each shot 5 and 7 respectively. Find
the probability that B will require more shots than A.
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 77
Solution :
Let X = Number of trials required by A to get his first success.
X ∼ Geo. Dist.(p)
3
Given p = Probability of Ahitting the target = = P(A)
5
3 2
P A = 1 − P(A) = 1 − =
5 5
Let Y = Number of trials required by B to get his first success.
Y ∼ Geo. Dist.(p)
5
n
Givenp = Probability of Bhitting the target = = P(B)
7
g.i
5 2
P B = 1 − P(B) = 1 − =
7 7
n
eri
W.K.T. P.M.F. of Geo.D. is P(X = x) = qx−1 p, x = 1, 2, 3, · · ·
ine
P(B requires more shots to get his first success thanAto get his first success)
x→∞
X
= P[X = x and Y = (x + 1)(or)Y = (x + 2)(or)Y = (x + 2)(or) · · · ]
ng
x=1
x→∞
E
X
= P(X = x) · P[Y = (x + 1)(or)Y = (x + 2)(or)Y = (x + 3)(or) · · · ]
arn
x=1
(∵ shooters A and B are hitting the target in independent fashion)
x→∞
X x−1 k→∞
X (x+k)−1
=
Le
P A · P(A) P B · P(B)
x=1 k=1
x→∞ x−1 k→∞
X (x+k)−1
2 3 2 5
w.
X
= ·
5 5 7 7
x=1 k=1
ww
x→∞ k→∞
3 5 X 2 x−1 X 2 x−1 2 k
= ·
5 7 5 7 7
x=1 k=1
x→∞ x−1 k→∞
3 X 2 x−1 2
k
2
X
= ·
7 5 7 7
x=1 k=1
x→∞ k→∞
3 4 x−1 X 2 k
X
=
7 35 7
x=1 k=1
x→∞
3 X 4 x−1 2 1
" 2 3 #
2 2
= + + + ···
7 35 7 7 7
x=1
[Link]
[Link]
78 UNIT I - Random Variables
x→∞
3 X 4 x−1 2
" 1 2 #
2 2
= 1+ + + ···
7 35 7 7 7
x=1
x→∞
3 2 X 4 x−1
−1
2
= 1−
7 7 35 7
x=1
x→∞
3 2 7 X 4 x−1
=
7 7 5 35
x=1
" 1 2 #
6 4 4
= 1+ + + ···
35 35 35
n
−1
6 4
g.i
= 1−
35 35
6 35
=
n
35 31
eri
6
=
31
ine
1.6.8 Examples of Uniform distribution
ng
1
∴ P.D.F. of X = f (x) = ,a < x < b
b−a
1 1 1
ww
n
(c) P(|X − 2| < 2) = P(−2 < X − 2 < 2)
g.i
= P(0 < X < 4)
= P(0 < X < 3) [∵ X ∼ in (−3, 3)]
n
Z3 Z3
eri
1
= f (x)dx = dx
6
0 0
ine
Z3
1 1 1 1
= dx = [x]30 = [3 − 0] =
6 6 6 2
ng
0
Z∞
1 1
(d) P(X > k) = ⇒ f (x)dx =
E
4 4
k
arn
Z3
1 1 1 1
dx = ⇒ [x]3k =
6 4 6 4
Le
k
6 3
[3 − k] = =
4 2
w.
3 −6 + 3
−k = −3 + =
2 2
3
ww
−k = −
2
3
k=
2
Solution :
Using X denotes Waiting time (OR) Using X denotes Arrival time
Let X be a random variable that Let X be a random variable that denotes
denotes the waiting time of the the arrival time of the passenger to get
passenger to get the bus who the bus who arrives between 7.00 a.m.
arrives between 7.00 a.m. and and 7.30 a.m.
7.30 a.m. The buses arrive for every 30 minutes.
The buses arrive for every 15 Whenever the passenger arrives
minutes. between 7.00 a.m. 7.30 a.m., his arrival
Whenever the passenger arrives time with minimum 0 minutes to
n
between 7.00 a.m. 7.30 a.m., he maximum 30 minutes.
g.i
has wait for minimum 0 minutes
X ∼ U.D. (0,30).
to maximum 15 minutes.
n
∴ X is uniformly distributed in (0, 30).
X ∼ U.D. (0,15).
eri
1 1 1
∴ X is uniformly distributed in f (x) = = =
b − a 30 − 0 30
(0, 15).
ine
(a) P(waiting time less than 5 minutes)
1 1 1
f (x) = = = =P(7.10 ≤ X ≤ 7.15)+P(7.25 ≤ X ≤ 7.30)
b − a 15 − 0 15
Z7.15 Z7.30
ng
Z5
= f (x)dx + f (x)dx
(a) P(X < 5) = f (x)dx
E
7.10 7.25
−∞
Z15 Z30
Z5
arn
1 1 1 h 15 i
1 1 = dx+ dx = [x]10 +[x]30
= dx = [x]50 15 15 30 25
15 15 10 25
0
1 10 1
Le
= f (x)dx + f (x)dx
−∞
Z12 Z1 7.00 7.15
1 1 Z3 Z18
=1− dx = 1 − dx 1 1
15 15 = dx + dx
0 0 30 30
1 12 0 15
=1− [x] 1 h 3
15 0
i
= [x]0 + [x]15
18
1 1 30
= 1 − [12 − 0] = 1 − [12] 1
15 15 = [[(3) − (0)] + [(18) − (15)]]
4 30
=1− 6 1
5 = =
1 30 5
=
5 [Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 81
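The waiting-time reading of Example 1.32 (X uniform on (0, 15) minutes) can be checked with `scipy.stats.uniform`; a sketch:

```python
from scipy.stats import uniform

# Waiting time uniformly distributed on (0, 15) minutes (buses every 15 minutes).
wait = uniform(loc=0, scale=15)
print(wait.cdf(5))              # P(wait < 5)  = 1/3
print(1 - wait.cdf(12))         # P(wait > 12) = 1/5
```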
Example 1.33 An electric train runs every half an hour from 12.00 a.m. to 6.00
a.m. in the morning. Find the probability of a man entering the station at a
random time during this period, will have to wait
n
Whenever the passenger arrives between 12.00 a.m. and 6.00 a.m., he has
g.i
wait for minimum 0 minutes to maximum 30 minutes.
X ∼ U.D. (0,30).
∴ X is uniformly distributed in (0, 30).
n
eri
1 1 1
f (x) = = =
b − a 30 − 0 30
Z20
ine
(a) P(X ≤ 20) = f (x)dx
−∞
ng
Z20
1
= dx
30
E
0
1 20
arn
= [x]
30 0
1
= [20 − 0]
Le
30
2
=
3
w.
3
1
=
3
x
Example 1.34 If X be the time to repair in hours with p.d.f. f (x) = ce− 5 , x > 0.
Find (a) c (b) mean (c) variance (d) P(X > 3) (e) P(3 < X ≤ 6).
Z∞
(a) By pdf, wkt f (x)dx = 1 ∵ pdf contains an unknown c
−∞
Z∞
x
c e− 5 dx = 1 ∵ by (1)
0
x ∞
e− 5
c 1 = 1
−5 0
1
n
c(0) − 1 = 1
−5
g.i
1
c0 + 1 = 1
n
5
1
1 x
∴ f (x) = e− 5 , x ≥ 0.
∴c=
5
eri
ine
5
1 x 1
Note : This f (x) = e− 5 is of the form f (x) = αe−αx , where α = .
5 5
ng
⇒ X ∼ Exponential Distribution.
1
(b) Mean =
α
E
1
= 1
arn
5
=5
Le
1
(c) Variance =
α2
1
= 2
w.
1
5
ww
= 25
Z∞
(d) P(X > 3) = f (x)dx
3
Z∞ x ∞
1 −x 1 e− 5
= e 5 dx = 1
5 5 −5
3 3
1 5 −x
h i∞
−3 ∞
h i
= − e 5 =− 0−e 5
5 1 3 3
= e−0.6
= 0.5488
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 83
Z6
(e) P(3 < X ≤ 6) = f (x)dx
3
Z6
1 −x
= e 5 dx
5
3
x 6
1 e− 5
= 1
5 −5
3h i6
n
1 5 −x
= − e 5
5h 1
g.i
3
−6 −3
i
=− e5 −e5
= e−0.6 − e−1.2
n
= 0.2476
eri
ine
Example 1.35 The mileage of a tyre has an exponential distribution with mean 40,000 km. Find the probability that one of these tyres will last (a) at least 20,000 km (b) at most 30,000 km (c) more than 30,000 km given that it has already lasted 20,000 km.
α
1
α=
40000
ww
1 − x
∴ f (x) = e 40000
40000
20000
h i
(a) P(X ≥ 20000) = e− 40000 ∵ P(X > k) = P(X ≥ k) = e−αk is true for Exp. D
1
= e− 2
= 0.6065
(b) P(X ≤ 30000) = 1 − P(X > 30000)
h i
− 30000
=1−e 40000 ∵ P(X > k) = P(X ≥ k) = e is true for E. D.
−αk
3
= 1 − e− 4
= 0.5276
[Link]
[Link]
84 UNIT I - Random Variables
(c) P(X > 30000/X > 20000) = P(X > 20000 + 10000/X > 20000)
= P(X > 10000) [∵ P[X > (s + t)/P(X > s)] = P(X > t) is true for E. D.]
h i
− 14
=e ∵ P(X > k) = P(X ≥ k) = e is true for E. D.
−αk
= 0.7788
Note : P(X > t) for n tyres is [P(X > t)]n .
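These survival probabilities, including the memoryless step in part (c), can be verified with `scipy.stats.expon`; a minimal sketch:

```python
from scipy.stats import expon

mean_km = 40000
tyre = expon(scale=mean_km)          # exponential with mean 40000 (rate 1/40000)

print(tyre.sf(20000))                # P(X >= 20000) = e^(-1/2)   ~ 0.6065
print(tyre.cdf(30000))               # P(X <= 30000) = 1 - e^(-3/4) ~ 0.5276
# Memoryless property: P(X > 30000 | X > 20000) = P(X > 10000)
print(tyre.sf(30000) / tyre.sf(20000), tyre.sf(10000))   # both ~ 0.7788
```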
n
λ
g.i
35,000 litres. Find the probability of 2 days select at random, the stock is
insufficient(inadequate) for both days.
n
eri
Solution : Given distribution is exponential.
Let X be a random variable which denotes daily milk consumption in
excess of 20000 litres.
ine
If this excess is upto 15000 litres, the stock is sufficient.
If this excess is more than 15000 litres, the stock is insufficient.
ng
1
Given Mean = = 3000
arn
λ
1
λ=
3000
Le
1 − x
∴ f (x) = e 3000
3000
w.
15000
h i
∴ P(X > 15000) = e− 3000 ∵ P(X > k) = P(X ≥ k) = e−λk is true for Exp. D
= e−5
ww
= 0.006737947
∴ For 2 days, probability for the insufficient of milk is
2
[P(X > 15000)]2 = e−5 ∵ P(X > k) for n days = [P(X > k)]n
= 0.0000453999298391533
Example 1.37 In a certain locality, the daily consumption of water can be treated as a random variable having a Gamma distribution with an average of 3 million litres. If the pumping station has a daily supply of 4 million litres, what is the probability that this water supply will be inadequate on any day?

W.K.T. f(x) = e^{−x} x^{β−1} / Γ(β),  β > 0,  0 ≤ x < ∞     (∵ X ∼ Gamma dist. with parameter β)

Given Mean = β = 3 million litres, so

    f(x) = e^{−x} x² / Γ(3) = e^{−x} x² / 2!

∴ The probability that this water supply will be inadequate on any day is

P(X > 4) = ∫_4^∞ f(x) dx
         = (1/2) ∫_4^∞ e^{−x} x² dx
         = (1/2) [ −x² e^{−x} − 2x e^{−x} − 2 e^{−x} ]_4^∞
         = (1/2) [ (0 − 0 − 0) − (−16e^{−4} − 8e^{−4} − 2e^{−4}) ]
         = 13 e^{−4}
         = 0.2381
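A one-line cross-check of this Gamma tail probability (a sketch, assuming SciPy; the shape/scale choice below reflects the assumption β = 3 with unit scale used in the worked solution):

    from scipy.stats import gamma

    X = gamma(a=3)                    # shape beta = 3, unit scale => mean 3
    print(X.sf(4))                    # P(X > 4) = 13 e^{-4} ~ 0.2381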
Example 1.38 In a certain city, the daily consumption of electric power (in millions of kilowatts) can be treated as a random variable having an Erlang distribution with parameters (1/2, 3). If the power plant has a daily capacity of 12 million kilowatts, find the probability that this power supply will be inadequate on any given day.

Let X be an Erlang r.v. which denotes the daily consumption of electricity,
X ∼ Erlang Distribution with parameters α = 1/2, β = 3.

p.d.f. f(x) = α^β e^{−αx} x^{β−1} / Γ(β),  α, β > 0,  0 ≤ x < ∞
            = (1/2)³ e^{−x/2} x² / Γ(3)
            = e^{−x/2} x² / 16

P(X > 12) = ∫_{12}^∞ f(x) dx = ∫_{12}^∞ e^{−x/2} x² / 16 dx
          = (1/16) [ −2x² e^{−x/2} − 8x e^{−x/2} − 16 e^{−x/2} ]_{12}^∞
          = (1/16) [ (0 − 0 − 0) − (−2(12)² e^{−6} − 8(12) e^{−6} − 16 e^{−6}) ]
          = (1/16) [ 2(12)² e^{−6} + 96 e^{−6} + 16 e^{−6} ]
          = (1/16) [ 400 e^{−6} ]
          = 25 e^{−6}
          = 0.0620
Example 1.39 The daily consumption of milk in a city, in excess of 20,000 litres, is approximately distributed as a Gamma (Erlang) variate with parameters α = 1/10000, β = 2. The city has a daily stock of 30,000 litres. What is the probability that the stock is insufficient?

Let X be an Erlang r.v. which denotes the daily consumption of milk in excess of 20,000 litres,
X ∼ Erlang Distribution with parameters α = 1/10000, β = 2.

p.d.f. f(x) = α^β e^{−αx} x^{β−1} / Γ(β),  α, β > 0,  0 ≤ x < ∞
            = (1/10000)² e^{−x/10000} x / Γ(2)
            = x e^{−x/10000} / (10000)²

If this excess is up to 10,000 litres, the stock is sufficient. If this excess is more than 10,000 litres, the stock is insufficient.

P(X > 10000) = ∫_{10000}^∞ f(x) dx
             = (1/10000)² ∫_{10000}^∞ x e^{−x/10000} dx
             = (1/10000)² [ −10000 x e^{−x/10000} − (10000)² e^{−x/10000} ]_{10000}^∞
             = (1/10000)² [ (0 − 0) − (−(10000)² e^{−1} − (10000)² e^{−1}) ]
             = 2 e^{−1}
             = 0.7358
Example 1.40 For the standard normal variable Z, find
(a) P(0 < Z < 1.25)                        (b) P(0.6 < Z < 1.25)
(c) P(Z < 1.25)                            (d) P(Z > 1.25)
(e) P(−1.25 < Z < 1.25) = P(|Z| < 1.25)    (f) P(−2.5 < Z < 1.25)

Solution : Here Z is a continuous standard normal variable, and the areas are read from the normal table.

(a) P(0 < Z < 1.25) = 0.3944

(b) P(0.6 < Z < 1.25) = P(0 < Z < 1.25) − P(0 < Z < 0.6)
                      = 0.3944 − 0.2258 = 0.1686

(f) P(−2.5 < Z < 1.25) = P(−2.5 < Z < 0) + P(0 < Z < 1.25)
                       = P(0 < Z < 2.5) + P(0 < Z < 1.25)     [∵ the normal curve is symmetric]
                       = 0.4938 + 0.3944 = 0.8882
Example 1.41 X is a normal variate with mean µ = 30 and σ = 5. Find
(a) P(26 < X < 40)    (b) P(X > 45)    (c) P(|X − 30| > 5)

Solution : Given µ = 30, σ = 5, so X ∼ N(µ, σ).

W.K.T. Z = (X − µ)/σ = (X − 30)/5,  i.e.  X = 5Z + 30.

(c) P(|X − 30| > 5) = P(|5Z + 30 − 30| > 5)
                    = P(|5Z| > 5) = P(|Z| > 1)
                    = 1 − P(|Z| < 1)
                    = 1 − P(−1 < Z < 1)
                    = 1 − 2 × P(0 < Z < 1)        [∵ the normal curve is symmetric]
                    = 1 − 2 × 0.3413 = 1 − 0.6826
                    = 0.3174
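For completeness, all three probabilities can be obtained directly from the normal c.d.f. (a sketch, assuming SciPy; the values quoted in the comments come from the standard normal table):

    from scipy.stats import norm

    X = norm(loc=30, scale=5)
    print(X.cdf(40) - X.cdf(26))      # (a) P(26 < X < 40)  ~ 0.7653
    print(X.sf(45))                   # (b) P(X > 45)       ~ 0.0013
    print(2 * X.sf(35))               # (c) P(|X - 30| > 5) ~ 0.3174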
Example 1.42 The mean height of soldiers is 172 cm with variance (27 cm)². Find how many soldiers in a regiment of 1000 may be expected to be over 182 cm tall.

Solution : Given µ = 172 cm, σ² = (27 cm)² ⇒ σ = 27 cm.
X → denotes the height of a soldier, so X ∼ N(µ, σ).

W.K.T. Z = (X − µ)/σ = (X − 172)/27,  i.e.  X = 27Z + 172.

P(X > 182) = P(Z > (182 − 172)/27) = P(Z > 0.37)
           = 0.5 − P(0 < Z < 0.37) = 0.5 − 0.1443 = 0.3557

∴ The number of soldiers having height over 182 cm is
    1000 × P(X > 182) = 1000 × 0.3557 = 355.7 ≈ 356 soldiers.
Example 1.43 In a normal distribution, 31% of the items are under 45 and 8% are over 64. Find the mean and variance.

W.K.T. Z = (X − µ)/σ                                        (1)
We have to find µ and σ².
Given that 31% of the items are under 45 and 8% are over 64.

When X₁ = 45 : let z₁ = (45 − µ)/σ [∵ (1)]. Then P(Z < z₁) = 31% = 0.31, i.e. P(z₁ < Z < 0) = 19% = 0.19.
Using the table, z₁ = −0.5, so (45 − µ)/σ = −0.5, i.e. µ − 0.5σ = 45.         (2)

When X₂ = 64 : let z₂ = (64 − µ)/σ [∵ (1)]. Then P(Z > z₂) = 8% = 0.08, i.e. P(0 < Z < z₂) = 42% = 0.42.
From the table, z₂ = 1.4, so (64 − µ)/σ = 1.4, i.e. µ + 1.4σ = 64.            (3)

Solving (2) and (3) :
    Mean = µ = 50,  S.D. = σ = 10,  Variance = σ² = 100.
Example 1.44 In a normal distribution, 7% of the items are under 35 and 89% are under 65. Find µ, σ².

Solution : The given distribution is Normal.
W.K.T. Z = (X − µ)/σ                                        (1)
We have to find µ and σ².
Given that 7% of the items are under 35 and 89% are under 65.

When X₁ = 35 : let z₁ = (35 − µ)/σ [∵ (1)]. Then P(Z < z₁) = 7% = 0.07, i.e. P(z₁ < Z < 0) = 43% = 0.43.
Using the table, z₁ ≈ −1.5, so (35 − µ)/σ = −1.5, i.e. µ − 1.5σ = 35.         (2)

When X₂ = 65 : let z₂ = (65 − µ)/σ [∵ (1)]. Then P(Z < z₂) = 89% = 0.89, i.e. P(0 < Z < z₂) = 39% = 0.39.
From the table, z₂ ≈ 1.25, so (65 − µ)/σ = 1.25, i.e. µ + 1.25σ = 65.         (3)

Solving (2) and (3) : 2.75σ = 30 ⇒ σ ≈ 10.91, µ ≈ 51.36.
    Mean = µ ≈ 51.36,  S.D. = σ ≈ 10.91,  Variance = σ² ≈ 119.0.
Example 1.45 X is a random variable with p.d.f. f(x), 0 < X < 1, and Y = aX + b. Find the p.d.f. of Y.

Y = aX + b ⇒ y = ax + b ⇒ x = (y − b)/a  and  dx/dy = 1/a.

W.K.T. the p.d.f. of Y is  f_Y(y) = f_X(x) |dx/dy| = f_X((y − b)/a) · (1/a).       (1)

Range of Y : since 0 < x < 1,
    (y − b)/a > 0 ⇒ y > b    and    (y − b)/a < 1 ⇒ y < a + b.

∴ (1) ⇒ f_Y(y) = f_X((y − b)/a) / a,   b < y < a + b.
Example 1.46 Let X be a continuous random variable with
    f(x) = x/12, 1 < X < 5;  0, otherwise.                  (1)
Find the p.d.f. of 2X − 3.

Let Y = 2X − 3 ⇒ y = 2x − 3 ⇒ x = (y + 3)/2  and  dx/dy = 1/2.

W.K.T. the p.d.f. of Y is
    f_Y(y) = f_X(x) |dx/dy|
           = f_X((y + 3)/2) · (1/2)
           = [ ((y + 3)/2) / 12 ] · (1/2)        [∵ by (1)]
           = (y + 3)/48                           (2)

Range of Y :  x > 1 ⇒ (y + 3)/2 > 1 ⇒ y > −1;   x < 5 ⇒ (y + 3)/2 < 5 ⇒ y < 7.
∴ Range of Y is −1 < Y < 7.

∴ The p.d.f. of Y is  f_Y(y) = (y + 3)/48,  −1 < y < 7.
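A Monte Carlo sanity check of this transformation (a sketch, assuming NumPy; the inverse-c.d.f. sampling step and the test interval (0, 4) are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    # sample X with density x/12 on (1, 5) via the inverse c.d.f. F(x) = (x^2 - 1)/24
    u = rng.random(200_000)
    x = np.sqrt(24 * u + 1)
    y = 2 * x - 3

    # compare an empirical probability with the analytic density (y + 3)/48 on (-1, 7)
    emp = np.mean((y > 0) & (y < 4))
    exact = ((4 + 3) ** 2 - (0 + 3) ** 2) / 96     # integral of (y + 3)/48 from 0 to 4
    print(emp, exact)                              # both ~ 0.4167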
Example 1.47 A random variable X has p.d.f.
    f(x) = 2x, 0 < X < 1;  0, otherwise.                    (1)
Find the p.d.f. of Y = 8X³.

Y = 8X³ ⇒ y = 8x³ ⇒ x³ = y/8 ⇒ x = y^{1/3}/2  and  dx/dy = y^{−2/3}/6.

W.K.T. the p.d.f. of Y is
    f_Y(y) = f_X(x) |dx/dy|
           = f_X(y^{1/3}/2) · y^{−2/3}/6
           = 2 (y^{1/3}/2) · y^{−2/3}/6          [∵ by (1)]
           = y^{−1/3}/6                           (2)

Range of Y :  0 < x < 1 ⇒ 0 < y^{1/3}/2 < 1 ⇒ 0 < y < 8.

∴ The p.d.f. of Y is  f_Y(y) = y^{−1/3}/6,  0 < y < 8.

Mean of Y :
    E[Y]  = ∫_{−∞}^{∞} y f(y) dy = ∫_0^8 y · y^{−1/3}/6 dy = (1/6) ∫_0^8 y^{2/3} dy
          = (1/6) [ y^{5/3}/(5/3) ]_0^8 = 8^{5/3}/10 = 32/10
    ∴ E[Y] = 3.2

    E[Y²] = ∫_{−∞}^{∞} y² f(y) dy = ∫_0^8 y² · y^{−1/3}/6 dy = (1/6) ∫_0^8 y^{5/3} dy
          = (1/6) [ y^{8/3}/(8/3) ]_0^8 = 8^{8/3}/16 = (2⁴ · 2⁴)/16
    ∴ E[Y²] = 16

∴ Variance = E[Y²] − [E(Y)]² = 16 − (3.2)² = 5.76
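These moments are easy to confirm by simulation (a sketch, assuming NumPy; the only derived ingredient is the inverse c.d.f. x = √u for F(x) = x² on (0, 1)):

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.sqrt(rng.random(500_000))           # X with density 2x on (0, 1)
    y = 8 * x ** 3
    print(y.mean(), (y ** 2).mean(), y.var())  # ~3.2, ~16, ~5.76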
Example 1.48 X is a random variable uniformly distributed in (0, 1). Find the p.d.f. of Y = sin X.

y = sin x ⇒ x = sin⁻¹ y  and  dx/dy = 1/√(1 − y²).

W.K.T. the p.d.f. of Y is
    f_Y(y) = f_X(x) |dx/dy| = f_X(sin⁻¹ y) · 1/√(1 − y²) = 1 · 1/√(1 − y²) = 1/√(1 − y²)     (1)

Range of Y :  0 < x < 1 ⇒ 0 < y < sin(1).

∴ (1) ⇒ f_Y(y) = 1/√(1 − y²),  0 < y < sin(1).
6. The monthly number of breakdowns of a computer is a random variable having a Poisson distribution with mean 1.8. Find the probability that this computer will function for a month (a) without a breakdown (b) with only one breakdown (c) with at least one breakdown.
   {(a) e^{−1.8} = 0.1653, (b) = 0.2975, (c) = 0.8347} [M/J 07, CSE]
ine
7. Obtain the Moment generating function of exponential distribution
and hence or otherwise compute the first four moments.
α
h i
{Mgf= α−t ;Mean=E [X] = α , E X2 = α22 , ...}
1
ng
X1 given X1 + X2 is Uniform.
X−µ 2
9. If X ˜ N( µ , σ2 ). Obtain the probability density function of U = 21 σ .
Le
1 −u 21 −1
{g (u) = e u ,u ≥0}
1/2
w.
10. A random
( −θx variable X has the density function
θe , f or x > 0
f (x) = find the generating function and hence
0, otherwise
ww
θ
n o
find the mean and variance. Mg f = θ−t , Mean = θ , Var = θ2
1 1
n
= 12 } [N/D 06, CSE]
g.i
16. The density function of a random variable X is given by f (x) = Kx(2 −
x), 0 = x = 2. Find K, mean, variance and rth moment.
n
( )
32r+1
eri
K = 3/4, Mean = 1, Var = 1 / 5 , µr = E [X ] =
1 r
(r + 2) (r + 3)
[M/J 2007, ECE]
ine
17. A random variable X has the following probability distribution.
x: 0 1 2 3 4 5 6 7
ng
P(1.5 < x < 4.5 / x > 2) (3) the smallest value of λ for which
P(x = λ) > 1/2. {1/10, 5/7, 4} [N/D 2008, ECE]
arn
19. Define Gamma Distribution and find its mean and variance. [N/D08,
ECE]
w.
Z
{µn = √ t e dt}
π −∞
[N/D 2007, ECE]
21. A random variable X has the following probability distribution.
x: –2 –1 0 1 2 3
P(x): 0.1 K 0.2 2k 0.3 3k
Find (1) the value of k 2 Evaluate P(X < 2) and P(−2 < X < 2) (3) Find
the cumulative distribution of X (4) Evaluate the mean of X {1/15, 1/2
and 2/5, mean = 16/15, cumulative distribution is
F(x) 0.1 1/6 11/30 1/2 4/5 1 }
:
[Link]
[Link]
100 UNIT I - Random Variables
n
<
0, x 0
g.i
2
x
,
4 0 ≤ x ≤ 1
1 3
, = , , > =
x 1
F (x) − 1 ≤ x ≤ 2 P exactlyone value 1.5
2 4
2 8
2
3 x 5
,
n
2 x − 4 − 4 2 ≤ x ≤ 3
x > 3
1,
eri
23. Find mean and variance of Gamma distribution and hence find mean
ine
and variance of Exponential distribution.{ Gamma distribution: Mean
= λ , Var = λ}, {E.D. Mean = α1 , Var = α12 } [M/J 07, CSE]
24. VLSI chips, essential to the running of a computer system, fail in
ng
supply will arrive in 8 weeks. What is the probability that during the
arn
next 8 weeks the system will be down for a week or more, owing to a
lack of chips?{0.1665}
25. Find the probability distribution of the total number of heads obtained
Le
1 h
i
Mg f = 1 + 4e + 6e + 4e + e , Mean = 2, variance = 1
t 2t 3t 4t
16
ww
27. Obtain the MGF of Poisson distribution and hence compute the first
four moments.
n
distributed?[A/M 2008, ECE]
g.i
30. In an engineering examination, a student is considered to have failed,
secured second class, first class and distinction, according as he scored
n
less than 45%, between 45% and 60%, between 60% and 75% and
eri
above 75% respectively. In a particular year 10% of the students failed
in the examination and 5% of the students get distinction. Find the
percentages of students who have got first class and second class.
ine
(Assume normal distribution of marks){I calss = 38%, II class = 47%}
[N/D 2008, ECE]
ng
33. Define geometric distribution. Obtain its m.g.f. and hence compute
the first four moments.
Le
34. For a normal distribution with mean 2 and variance 9, find the value
of x1 , of the variable such that the probability of the variable lying in
w.
(−3, 3). Compute (1) P(X = 2) (2) P( jX − 2 j < 2) (3) find k such that
P(X > k) = 1/3. {0, 2/3, k = 1}
36. Define gamma distribution. Prove that the sum of independent gamma
variates is a gamma variate.
37. In a book of 520 pages, 390 typographical errors occur. Assuming
Poisson’s law for the number or errors per page, find the probability
that a random sample of 5 pages will contain no error.
Kx, x = 1, 2, 3, 4, 5
(
38. If P(X = x) = represents a p.m.f. Find(1) k (2)
0, elsewhere
P(x is a prime number) (3) P(1/2 < x < 5/2 / x > 1) (4) distribution
[Link]
[Link]
102 UNIT I - Random Variables
function.
1 2 3
k = , , , pm f is F(x) 1
15
3
15
6
15
10
15 1
15 3 4
[N/D 2009, ECE]
39. The probability mass function of a r.v. x is given by
cλi
P (i) = i! , (i = 0, 1, 2, ...) whereλ > 0. Find (1) P(x = 0) (2) P(x > 2)
λ2
( " #)
1 1 1
c = λ, λ, 1 − λ 1 + λ +
e e e 2!
n
g.i
[N/D 2009, ECE]
40. The time (in hours) required to repairs a machine is exponential,
n
distributed with parameter λ = 1/2. (1) What is the probability that
eri
the repair time exceeds 2 hours? (2) What is the conditional
probability that a repair takes atleast 10 h given that its duration
exceeds 9h? {1/e, P(X > 1) = }[N/D 2009, ECE]
n
Joint distributions - Marginal and conditional distributions -
g.i
Covariance - Correlation and Regression - Transformation of random
n
variables - Central limit theorem (for iid random variables).
eri
ine
2.1 Joint distributions - Marginal and conditional distributions
ng
ii) pij = 1
j i
ww
n
i
g.i
(b) For continuous (X, Y):
(i) Marginal p.d.f. of X:
n
Z∞
fX (x) or f (x) =
eri
f x, y dy and
−∞
ine
Zb
P(a < X < b) = fX (x) dx
a
ng
Z∞
fY (y) or f (y) =
f x, y dx and
−∞
Le
Zb
P(a < Y < b) = fY (y) dy
w.
a
Note : fY (y) ⇒ It is function of only y and hence Integration is
ww
w.r.t. x.
P X = xi , Y = y j pi j
P Y = y j /X = xi = =
P (X = xi ) pi∗
(iii) For independent X and Y :
pi j = pi∗ p∗ j
and the converse is also true.
(b) For continuous (X, Y):
(i) Conditional p.d.f. of X given Y = y is
f x, y
n
fX/Y x/y or f x/y =
and
fY y
g.i
Zb
P a < X < b/Y = y =
f x/y dx
n
a
eri
(ii) Conditional p.d.f. of Y given X = x is
c
(iii) For independent X and Y:
E
or F(x, y) = P(X ≤ x, Y ≤ y)
, for discrete r.v. (X, Y)
PP
pi j
j i
F(x, y) =
x y
R R
f (x, y) dx dy , for continuous r.v. (X, Y)
−∞ −∞
n
j
(i) FX (x) =
Rx Rx R∞
g.i
fX (x) dx = f (x, y) dy dx , for c.r.v. (X, Y)
−∞ −∞ −∞
d
n
and fX (x) = [FX (x)]
dx
eri
P
P X = x i , Y ≤ y j , for d.r.v. (X, Y)
i
(ii) FY (y) =
y
Ry R∞
ine
R
fY y dy = f (x, y) dx dy , for c.r.v. (X, Y)
−∞ −∞ −∞
d
and fY (y) =
FY (y)
ng
dy
(d) Conditional Distribution Functions:
E
∂
(i) f y/x =
F y/x
∂y
arn
∂
(ii) f x/y =
F x/y
∂x
Le
2.1.5 Expectations :
w.
P P
g xi , y j pi j , for d.r.v. (X, Y)
ww
j i
(a) E[g(X, Y)] =
∞ ∞
R R
dx dy , for c.r.v. (X, Y)
g x, y f x, y
−∞ −∞
(b) Properties of Expected Values :
Z∞
(i) E g (X) =
g (x) fX (x) dx
−∞
R∞
E [h (Y)] =
h y fY y dy
−∞
n
g.i
(iii) In general, E(XY) , E(X) · E(Y).
If X and Y are independent, then
E(XY) = E(X) · E(Y)
n
eri
Note : However the converse may not be true i.e. E(XY) =
E(X)E(Y) ; X and Y are independent. They are independent
only when f x, y = fX (x) · fY y
ine
(c) Conditional Expectations :
(i) For discrete r.v. (X, Y) :
ng
h i P
E g (X, Y) /Y = y j = g xi , y j · P X = xi /Y = y j
i pi j
P
E
= g xi , y j p∗j
i
arn
P
E g (X, Y) X = xi = g xi , y j · P Y = y j /X = xi
j
P p
= g xi , y j pi∗i j
Le
E g(X, Y) /Y = y =
g x, y · f x/y dx
ww
−∞
Z∞
and conditional mean is E X/Y = y =
x f x/y dx
−∞
R∞
E g(X, Y) /X = x =
g x, y · f y/x dy
−∞
Z∞
and conditional mean is E [Y/X = x] =
y f y/x dy
−∞
(iii) If X and Y are independent, then
E(X/Y) = E(X)andE(Y/X) = E(Y).
2.2.1 Covariance

(i) Cov(X, Y) = E(XY) − E(X) E(Y)
(ii) If X and Y are independent, then Cov(X, Y) = 0.
(iii) If a, b are constants,
      Cov(aX, bY) = ab Cov(X, Y)
      Cov(X + a, Y + b) = Cov(X, Y)
(iv) Var(X ± Y) = Var(X) + Var(Y) ± 2 Cov(X, Y)
2.2.2 Correlation
[Scatter diagrams illustrating positive linear correlation (0 < r < 1, e.g. rainfall vs. supply), negative linear correlation (−1 < r < 0, e.g. supply vs. price), perfect positive and perfect negative linear correlation (r = ±1), and cases with no linear correlation.]
Classification of correlation coefficients between X and Y :
    r₁ — for discrete r.v.s (raw data),  r₂ — for continuous r.v.s,
    r₃ — rank correlation (no repeated ranks),  r₄ — rank correlation (repeated ranks).

(a) Karl Pearson's correlation coefficient r (or r_{xy} or ρ_{xy}) :

    r = Cov(X, Y) / (σ_X σ_Y)
      = [ E(XY) − E(X) E(Y) ] / [ √(E(X²) − [E(X)]²) · √(E(Y²) − [E(Y)]²) ]

    For n pairs of (x, y) values,
    r₁ = [ (1/n)Σxy − x̄ ȳ ] / [ √((1/n)Σx² − x̄²) · √((1/n)Σy² − ȳ²) ]
       = [ nΣxy − (Σx)(Σy) ] / √{ [nΣx² − (Σx)²] [nΣy² − (Σy)²] }              (direct method)

    For continuous (X, Y),
    r₂ = [ ∫∫ xy f(x, y) dx dy − (∫ x f(x) dx)(∫ y f(y) dy) ]
         / [ √( ∫ x² f(x) dx − (∫ x f(x) dx)² ) · √( ∫ y² f(y) dy − (∫ y f(y) dy)² ) ],
    all integrals being taken over (−∞, ∞).

    Rank correlation (no repeated ranks) :
    r₃ = 1 − [ 6 / (n(n² − 1)) ] Σ dᵢ²,   where dᵢ = xᵢ − yᵢ (difference of ranks).

    Rank correlation (repeated ranks) :
    r₄ = 1 − [ 6 / (n(n² − 1)) ] [ Σ d² + Σᵢ mᵢ(mᵢ² − 1)/12 ],
    where dᵢ = xᵢ − yᵢ and mᵢ = number of times an item is repeated.
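As a small illustration of the direct method above, a helper along these lines can be applied to raw (x, y) data (a sketch; the function name is illustrative):

    import math

    def pearson_r(xs, ys):
        """Karl Pearson's r by the direct (raw-score) formula."""
        n = len(xs)
        sx, sy = sum(xs), sum(ys)
        sxy = sum(x * y for x, y in zip(xs, ys))
        sxx = sum(x * x for x in xs)
        syy = sum(y * y for y in ys)
        num = n * sxy - sx * sy
        den = math.sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))
        return num / den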
n
(b) Properties of r:
(i) −1 ≤ r ≤ 1 eri
ine
r = +1 ⇒ perfect +ve (Best Relationship between X and Y)
r = −1 ⇒ perfect −ve (Worst Relationship between X and Y)
r = 0 ⇒ No correlation
ng
y−b
i.e., if u = x−a
h , v = k
rxy = ruv
Le
2.2.3 Regression
w.
ww
n
(i) The regression lines intersect at the point (x̄, ȳ).
(ii) Angle between the regression lines :
     Acute angle,  θ  = tan⁻¹ [ ((1 − r²)/|r|) · σ_x σ_y / (σ_x² + σ_y²) ]
     Obtuse angle, θ′ = tan⁻¹ [ ((r² − 1)/|r|) · σ_x σ_y / (σ_x² + σ_y²) ]
     If r = 0  ⇒ the regression lines are perpendicular.
     If r = ±1 ⇒ the regression lines are parallel (coincide).

Properties of the regression coefficients :
(ii) b_{xy} and b_{yx} have the same sign, and if one of them is greater than 1 (numerically), then the other is less than 1 (numerically).
(iii) r² = b_{xy} b_{yx}, i.e. r = ±√(b_{xy} b_{yx}), and r has the same sign as b_{xy} and b_{yx}.
If u = (x − a)/h and v = (y − b)/k, then b_{xy} = (h/k) b_{uv} and b_{yx} = (k/h) b_{vu}.
ww
Y(X = 0) V(U = 0)
3
1 v = f2 (x, y)
(x, y) v (u, v)
2
1
u = f1 (x, y)
U(V = 0)
0 1 2 0 X(Y = 0)
3 u
[Link]
[Link]
112 UNIT II - Two Dimensional Random Variables
XY − plane U V − plane
f (x, y) = Joint pdf of X&Y g(u, v) = Joint pdf of U&V
= fXY (x, y) = gUV (u, v)
= fxy (x, y) = guv (u, v)
f (x) = Marginal pdf of X g(u) = Marginal pdf of U
Z∞ Z∞
= f (x, y)dy = g(u, v)dv
−∞ −∞
= fX (x) = gU (u)
n
f (y) = Marginal pdf of Y g(v) = Marginal pdf of V
g.i
Z∞ Z∞
= f (x, y)dx = g(u, v)du
n
−∞ −∞
eri
= fY (y) = gV (v)
Let U = f1 (x, y) and V = f2 (x, y)
ine
Find x andy in terms of u and v
∂x ∂x
f (x, y) |J| , J = ∂u ∂v
∂y ∂y
ng
∂u ∂v
Define g(u, v) = (or)
E
∂u ∂u
∂x ∂y
arn
f (x, y) |J10 | , J0 = ∂v ∂v
∂x ∂y
Find Range of space of u and v
Le
Z∞
P.D.F. of U = g(u) = g(u, v)dv
w.
−∞
Z∞
ww
is given by
∂x ∂x
∂(x,y) ∂u ∂v xu xv
J= ∂(u,v) = ∂y ∂y =
∂u ∂v
yu yv
Note : On obtaining the joint p.d.f. fUV (u, v) we can find the
marginal p.d.f.s of U and V as
R∞
fU (u) = f (u) = fUV (u, v) dv
−∞
R∞
fV (v) = f (v) = fUV (u, v) du
n
−∞
(b) Theorem : If X and Y are independent continuous r.v.s then the
g.i
p.d.f. of U = X + Y is given by
R∞
n
f (u) = fX (v) fY (u − v) dv
eri
−∞
(c) Working rule for finding fUV (u, v) given the j.p.d.f. fXY x, y as in
following steps:
ine
(i) Find out the joint p.d.f. of (X, Y) if not already given.
(ii) Consider the new random variables, u = f1 x, y , v = f2 x, y
ng
∂x ∂x
∂(x,y) ∂u ∂v xu xv
(iii) Find the Jacobian J = = = and hence
arn
∂(u,v) ∂y ∂y
∂u ∂v
yu yv
the value of |J|
(iv) Find the j.p.d.f. fUV (u, v) using the formula:
Le
(vii) If required, find out the expression for marginal p.d.f.s of U and
V as
R∞
fU (u) = f (u) = fUV (u, v) dv
−∞
R∞
fV (v) = f (v) = fUV (u, v) du
−∞
This is one of the best and widely used theorems which helps find the
distribution of the sum of a large number of random variables.
[Link]
[Link]
114 UNIT II - Two Dimensional Random Variables
n
as n tends to infinity. Thus Sn ∼ N µ, σ as n → ∞. 2
g.i
(b) Lindberg-Levy’s form(of central limit theorem):
If X1 , X2 , · · · , Xn , · · · is a sequence of independent identically
n
distributed random variables with E (Xi ) = µ and
eri
Var (Xi ) = σ , i = 1, 2, · · · n, · · · and if Sn = X1 + X2 + · · · + Xn , then
2
If X = n1 (X1 + X2 + · · · + Xn ) then
1 1
$E X = E (X1 + X2 + ... + Xn ) = nµ = µ and
E
n n
1 1 2 σ2
arn
Var X = 2 Var (X1 + X2 + ... + Xn ) = 2 nσ =
n n n
σ2
∴ X follows a Normal distribution with mean µ and variance n as
Le
n tends to infinity.
i.e., X ∼ N µ, σn as n → ∞.
2
w.
Normal distribution as n → ∞.
(2) It provides a simple method for calculating the approximate
probabilities of sum of a large number of independent random
variables.
(3) It is very useful in statistical surveys. For a large sample size, it
helps provide fairly accurate results.
(4) It also implies that empirical frequencies of many natural
‘populations’ (where we consider all possible observations)
exhibit a bell shaped (or normal) curve.
(e) Note :
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 115
n
follows a normal distribution with mean µ and variance σ2 /n
g.i
by if n → ∞ i.e., if n is very large.
n
2.4.1 Part - A[Problems of Marginal, conditional distributions(Discrete r.v.)]
f x, y =
1 eri
1. X and Y are two random variables having the joint density function
x + 2y , where x and y assumes the integer values 0,1 and
ine
27
2. Find the marginal probability distribution of X.[N/D 2006, IT]
1
Solution : Given f x, y = x + 2y ; x = 0, 1, 2; y = 0, 1, 2.
ng
27
E
f (x, y) X PY (y)
0 1 2
arn
white balls drawn and Y denotes the number of red balls drawn, find
the probability distribution of (X, Y).
Solution :
X(White) Y(Red)
0 1 2 3
0 1/21 3/14 1/7 1/84
1 1/7 2/7 1/14 0
2 1/21 1/28 0 0
X Y
0 1 2
0 0.1 0.04 0.06
1 0.2 0.08 0.12
2 0.2 0.08 0.12
n
2.4.2 Part - A [Problems of Marginal, conditional distributions(Cont r.v.)] :
g.i
1. Define joint probability distribution function of two random variables
X and Y and state its properties. [M/J 2007, ECE]
n
Solution : Joint probability density function of two random variables
X and Y :
eri
We define the probability of the joint event X = x, Y = y, which is a
ine
function of the numbers x and y, by a joint probability distribution
function and denote it by the symbol FXY x, y . Hence FXY x, y =
p X ≤ x, Y ≤ y
ng
P(a < X < b, c < Y < d) = FX,Y (b, d) + FX,Y (a, c) − FX,Y (a, d) − FX,Y (b, c)
(vii) P(a < X < b, Y = y) = F(b, y) − F(a, y)
ww
" Z2 Z2
f x, y dxdy = 1 ⇒ k x + y dxdy = 1
R 0 0
Z2 " #x=2 2
x2
Z
+ xy dy = 1 ⇒ k 2 + 2y dy = 1
k
2 x=0
0 0
1
k=
8
3. The joint p.d.f. of 2 random variables X and Y is f (x, y) = cx(x − y), 0 <
n
x < 2, −x < y < x. Find c. [M/J 2007, CSE]
g.i
Solution : By the property of joint probability density function, we
have
n
Z∞ Z∞ Z2 Zx
eri
f (x, y)dxdy = 1 ⇒ cx x − y dydx = 1
−∞ −∞ 0 −x
ine
1
c=
8
ng
0, otherwise
Solution : By the property of joint p.d.f. we have
arn
" Z1 Z1
f x, y dxdy = 1 ⇒ k (1 − x) 1 − y dxdy = 1
Le
R 0 0
Z2 #x=1 Z1
x2 y
"
x2 y
w.
1
k x − − xy + dy = 1 ⇒ k − y + dy = 1
2 2 x=0 2 2
0 0
ww
k=4
R∞ R1
f (x) = f (x, y)dy = (x + y)dy = x + 1
2
−∞ 0
R∞ R1
f (y) = f (x, y)dx = (x + y)dx = y + 1
2
−∞ 0
∴ f (x) f y = x + 12 y + 12 , f x, y
n
variables X and Y. [N/D 2007, CSE]
Solution :
g.i
1 − λe−λ(x+y) , if x > 0, y > 0
(
F x, y =
0, otherwise
n
∂ λ2 e−λ(x+y) , if x > 0, y > 0
(
F x, y =
eri
⇒
∂y 0, otherwise
∂2 −λ3 e−λ(x+y) , if x > 0, y > 0
(
ine
F x, y =
⇒
∂x∂y 0, otherwise
∂2
F x, y < 0, it cannot be a joint p.d.f.,
Since ∂x∂y
ng
x>0 y>0 0 0
Z∞
∞
Z " #
du
2 2
⇒k ye−y xe−x dx dy = 1, take x2 = u ⇒ xdx =
2
0 0
Z∞ Z∞ " #
k 2 dv
⇒ ye−y e−t dtdy = 1, take y2 = v ⇒ ydy =
2 2
0 0
Z∞
k1
⇒ e−v dv = 1 ⇒ k = 4
22
0
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 119
n
g.i
9. Find the marginal density function of X and Y if
f x, y = 2 2x + 5y /5, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1[N/D 06, ECE][M/J 06, IT]
Solution :
n
The marginal density function of X is given by
fX (x) = f (x) =
R∞
eri
f (x, y)dy =
R1
2
5 (2x + 5y)dy = 4x+5
5 .
ine
−∞ 0
R∞ R1 2+10y
y = f y = f (x, y)dx = 52 (2x + 5y)dx =
fY 5
−∞ 0
E
10. The joint probability density function of the random variable (X, Y)
is given by f x, y = Kxye−(x +y ) , x > 0, y > 0. Find the value of K,
2 2
arn
Solution : Here the range space is the entire first quadrant of the
xy-plane.
By the property of the j.p.d., we have
w.
Z Z
Kxye−(x +y ) dxdy = 1
2 2
ww
x>0 y>0
Z∞ Z∞
2 2
⇒K ye−y .xe−x dxdy = 1
0 0
∞ Z∞
put x2 = t ⇒ 2xdx = dt
Z (
K −y2
⇒ ye .e dtdy = 1,
−t
2 put y2 = v ⇒ 2ydy = dv
0 0
Z∞
K1
⇒ e−v dv = 1
22
0
⇒k=4
[Link]
[Link]
120 UNIT II - Two Dimensional Random Variables
n
g.i
0 0
1 2 2
= 4ye−y= 2ye−y , y > 0
2
n
∴ f (x) f y = 4xye−(x +y ) = f x, y
2 2
⇒ X and Y are independent R.V.s.
11. Let X and Y be random variables with joint density
eri
function f x, y
=
ine
( XY
4xy, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1
. Find E(XY). [M/J 07, ECE]
0, otherwise
ng
Solution :
Z ∞Z ∞
E [XY] = xy f (x, y)dxdy
E
Z−∞ −∞
arn
1Z 1
=
xy 4xy dxdy
0 0
Z 1Z 1
=4
Le
x2 y2 dxdy
0 0
4
=
w.
13. If the function f (x, y) = c(1 − x)(1 − y), 0 < x < 1, 0 < y < 1, y < 1 to be
a density function, find the value of c.
Solution : Given f (x, y) = c(1 − x)(1 − y), 0 < x < 1, 0 < y < 1, y < 1
Given is a density function. We have
Z∞ Z∞ Z1 Z1
f x, y dxdy = 1 ⇒ c (1 − x) 1 − y dxdy = 1
−∞ −∞ 0 0
Z1 !1 !1 !1
x2 x2 y2
1 − y dy = 1 ⇒ c x − =1
⇒c x− y−
n
2 0
2 0
2 0
g.i
0
1
⇒c=
4
n
14. Is the function defined as follows a density function?
0, erix<2
ine
f (x) = (3 + 2x) , 2 ≤ x ≤ 4
1
18
x>4
0,
ng
R∞
Solution : Condition for p.d.f. is f (x) dx = 1
−∞
E
R2 R4 R∞
L.H.S. = (0)dx + 1
18
(3 + 2x) dx + (0)dx = 1
arn
−∞ 2 4
Hence the given function is density function.
15. Can the joint distributions of two random variables X and Y be got if
Le
n
= E[x(−2x + 3)] − E[x]E[−2x + 3]
g.i
= E[−2x2 + 3x] − E[x][−2E(x) + 3]
= −2E(x2 ) + 3E(x) + 2[E(x)]2 − 3E(x)
n
= −2E(x2 ) + 2[E(x)]2
= −2Var(x)
eri
ine
3. Prove that the correlation coefficient ρxy takes value in the range -1 to
1. [M/J 2007, IT]
Solution : Let X and Y be two random variables with variances σ2x , σ2y
ng
respectively.
We know that the variance of any random variable is ≥ 0.

∴ Var( X/σ_X + Y/σ_Y ) ≥ 0
⇒ Var(X/σ_X) + Var(Y/σ_Y) + 2 Cov(X/σ_X, Y/σ_Y) ≥ 0                     (by the properties of variance)
⇒ (1/σ_X²) Var(X) + (1/σ_Y²) Var(Y) + (2/(σ_X σ_Y)) Cov(X, Y) ≥ 0        [∵ Var(aX) = a² Var(X)]
⇒ 1 + 1 + 2ρ_{XY} ≥ 0
⇒ 1 + ρ_{XY} ≥ 0
⇒ ρ_{XY} ≥ −1

Similarly, Var( X/σ_X − Y/σ_Y ) ≥ 0
⇒ 2 − 2ρ_{XY} ≥ 0
⇒ 1 − ρ_{XY} ≥ 0
⇒ ρ_{XY} ≤ 1

∴ −1 ≤ ρ_{XY} ≤ 1.
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 123
n
σx σ y
g.i
E[aX2 + bX] − E[X][aE[X] + b]
=
σx σ y
n
aE[X2 ] + bE[X] − a[E[X]]2 − bE[X]
=
eri
σx σ y
h i
2 2
a E[X ] − [E[X]] aσ2x aσx
= = =
ine
σx σ y σx σ y σy
h i h i
σ y = E Y − [E [Y]] = E (aX + b) − [E (aX + b)]2
2 2 2 2
ng
h i
= E a2 X2 + 2abX + b2 − [aE [X] + b]2
h i n o
= a E X + 2abE [X] + b − [aE [X]] + b + 2abE [X]
2 2 2 2 2
E
h i
= a2 E X2 + 2abE [X] + b2 − a2 [E [X]]2 − b2 − 2abE [X]
arn
h i n h i o
2 2 2 2
= a E X − a [E [X]] = a E X − [E [X]] = a2 σ2x
2 2 2
∴ σ y = ±aσx
Le
aσx
r (X, Y) = = ±1 ⇒ |r (X, Y)| = 1
±aσx
w.
V.
6. The two regression lines are 4x − 5y + 33 = 0 and 20x − 9y = 107. Find
the means of x and y. Also find the value of r. [M/J 2006, CSE]
Solution : Given
4x − 5y + 33 = 0 (A)
20x − 9y = 107 (B)
Since both the lines of regression passes through the mean values
x and y, the point x, y must satisfy the two given regression lines.
4x − 5y = −33 (1)
20x − 9y = 107 (2)
[Link]
[Link]
124 UNIT II - Two Dimensional Random Variables
x = 13, y = 17
(A) ⇒ 5y = 4x + 33 ⇒ y = 54 x + 33
5 ⇒ b yx = 4
5
n
(B) ⇒ 20x = 9y + 107 ⇒ x = 9
20 y + 107
20 ⇒ bxy = 9
20
g.i
We know that r2 = bxy bxy = 9 4
20 5 = 9
25 ⇒r= 3
5 = 0.6
n
eri
ine
2.4.4 Part - A[Problems of Central limit theorem(C.L.T.)]
1. State central limit theorem. [N/D 06, CSE][M/J 06, IT] [M/J 07, IT]
ng
Sn −nµ
variance σ2 . Then lim P √
σ n
≤ x = φ (x) , −∞ < x < ∞.
n→∞
arn
Usefulness derives more the fact that YN = for finite N may have a
ww
n
σ
X follows N µ, √n ⇒ X follows N 1200, 250
g.i
√
60
√
n
P(X > 60) = N X−1200
250
√
> 1250−1200
250
√
=P Z> 60
5
eri
60 60
X−E(Xi )
(here Z = √σ
, the standard Normal variate)
n
∴ P(X > 60) = P(Z > 1.55) = P(0 < Z < ∞) − P(0 < Z < 1.55) =
ine
0.5 − 0.4394 = 0.0606 (from the table of areas under normal curve)
5. The lifetime of a TV tube (in years) is an Exponential random variable
ng
with mean 10. What is the probability that the average lifetime of a
random sample of 36 TV tubes is atleast 10.5?[N/D 07,CSE]
E
Solution : λ = 1/10.
arn
Example 2.1 Two coins are tossed. X denotes number of heads and Y denotes
ww
n
1
0 P(X = 0, Y = 0) = 0 P(X = 1, Y = 0) = 0 P(X = 2, Y = 0) =
g.i
4
2
Y 1 P(X = 0, Y = 1) = 0 P(X = 1, Y = 1) = P(X = 2, Y = 1) = 0
4
n
1
2 P(X = 0, Y = 2) = P(X = 1, Y = 2) = 0 P(X = 2, Y = 2) = 0
4
(b) Marginal P.M.F. of X&Y :
eri
ine
X Mar. PMF of Y :
0 2 1 PY (y) = P∗ j
1 1
0 0 0 P(Y = 0) =
ng
4 4
2 2
Y 1 0 0 P(Y = 1) =
4 4
E
1 1
2 0 0 P(Y = 2) =
arn
4 XX 4
Mar. PMF of X : P(X = 0) P(X = 1) P(X = 2) P X = xi , Y = y j
∀i ∀j
1 2 1
Le
PX (x) = Pi∗ = = = =1
4 4 4
Marginal P.M.F. of X Marginal P.M.F. of Y
w.
j=2 i=2
1 X 1 X
P(X = 0) = = P X = 0, Y = y j P(Y = 0) = = P (X = xi , Y = 0)
ww
4 4
j=0 i=0
j=2 i=2
2 X 2 X
P(X = 1) = = P X = 1, Y = y j P(Y = 1) = = P (X = xi , Y = 1)
4 4
j=0 i=0
j=2 i=2
1 X 1 X
P(X = 2) = = P X = 2, Y = y j P(Y = 2) = = P (X = xi , Y = 2)
4 4
j=0 i=0
n
4 4
g.i
3
=
4
(f) FY (2) = FY (y = 1)
n
= P(Y ≤ 2)
= P(X = 0, 1, 2)
eri
= P(X = 0) + P(X = 1) + P(X = 2)
ine
1 2 12
= + +
4 4 4
=1
ng
2
P(X = 0, Y = 1) 0 P(X = 1, Y = 1) 4 P(X = 2, Y = 1) 0
= =0 = =1 = =0
P(Y = 1) 2 P(Y = 1) 2 P(Y = 1) 2
4 4 4
Le
For X = 0, Y = 0:
Here P(X = 0, Y = 0) = 0
1 1 1
But P(X = 0) · P(Y = 0) = · = , P(X = 0, Y = 0)
4 4 16
∴ X and Y are not independent.
Example 2.2 A two dimensional random variable has (X, Y) as a joint p.m.f.
P(x, y) = k(2x + 3y), x = 0, 1, 2; y = 1, 2, 3. Find
(a) Cond. prob. of X given Y. (b) Prob. dist. of X + Y
n
(c) P(X > Y) (d) P[Max(X, Y) = 3]
g.i
(e) Prob. dist. of Z = Min(X, Y).
n
Solution : Given discrete random variables X and Y with joint pmf as
eri
P(x, y) = k(2x + 3y), x = 0, 1, 2; y = 1, 2, 3.
which contains an unknown k.
ine
The joint pmf is
X
0 1 2
ng
1 3k 5k 7k
Y 2 6k 8k 10k
E
3 9k 11k 13k
arn
To find k, use
XX
P(X = x, Y = y) = 1
∀x ∀y
Le
P(X ≤ 2, Y ≤ 3) = 1
3k + 5k + 7k + 6k + 8k + 10k + 9k + 11k + 13k = 1
w.
72k = 1
1
∴k=
ww
72
X Mar. PMF of Y :
0 1 2 PY (y) = P∗ j
3 5 7 15
1 P(Y = 1) =
72 72 72 72
6 8 10 24
Y 2 P(Y = 2) =
72 72 72 72
9 11 13 33
3 P(Y = 3) =
72 72 72 XX 72
Mar. PMF of X : P(X = 0) P(X = 1) P(X = 2) P X = xi , Y = y j
∀i ∀j
18 24 30
PX (x) = Pi∗ = = = =1
72 72 72
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 129
n
P(X = 0/Y = 3) P(X = 1/Y = 3) P(X = 2/Y = 3)
g.i
9 11 13
P(X = 0, Y = 3) 72 9 P(X = 1, Y = 3) 72 11 P(X = 2, Y = 3) 72 13
= = = = = =
P(Y = 3) 33 33 P(Y = 3) 33 33 P(Y = 3) 33 33
72 72 72
n
(c) Prob. dist. of X + Y:
X + Y P(X+Y) eri
ine
3
1 P(X = 0, Y = 1) =
72
6 5 11
P(X = 0, Y = 2) + P(X = 1, Y = 1) = + =
ng
2
72 72 72
9 8 7 24
3 P(X = 0, Y = 3)+P(X = 1, Y = 2)+P(X = 2, Y = 1) = + + =
E
72 72 72 72
11 10 21
4 P(X = 1, Y = 3) + P(X = 2, Y = 2) = + =
arn
72 72 72
13
5 P(X = 2, Y = 3) =
72
Le
7
(d) P(X > Y) = P(X = 2, Y = 1) =
72
(e) P[Max(X, Y) = 3]:
w.
9 11 13
= + +
72 72 72
33
=
72
(f) P[Z = Min(X, Y)]:
Z = Min(X, Y) P(Z)
3 6 9 18
0 P(0, 1) + P(0, 2) + P(0, 3) = + + =
72 72 72 72
5 8 11 7 31
1 P(1, 1)+P(1, 2)+P(1, 3)+P(2, 1) = + + + =
72 72 72 72 72
10 13 23
2 P(2, 2) + P(2, 3) = + =
72 72 72
[Link]
[Link]
130 UNIT II - Two Dimensional Random Variables
x + 2y
Example 2.3 (X, Y) as a joint p.m.f. is given by , x = 0, 1, 2; y = 0, 1, 2.
27
Find
(a) Cond. prob. of Y given X = 1 (b) Cond. prob. of X given Y = 1
(c) Are X and Y independent?
Solution : (a) {1/9, 1/3, 5/9} (b) {2/9, 3/9, 4/9} (c) not independent
Example 2.4 Three balls are drawn at random from a box containing 3 red, 2 white and 4 black balls. If X denotes the number of white balls drawn and Y denotes the number of red balls drawn, find the joint probability distribution P(X, Y).
n
Solution : Given that a box contains 3 red, 2 white and 4 black.
eri
X = denotes the number of white balls
Y = denotes the number of red balls
ine
RX = {0, 1, 2}
RY = {0, 1, 2, 3}
ng
0 1 2 3
arn
i.e.,
ww
Y Mar. PMF of Y
0 1 2 3 PX (x) = Pi∗
4 18 12 1 35
0 P(X = 0) =
84 84 84 84 84
12 24 6 42
X 1 0 P(X = 1) =
84 84 84 84
4 3 7
2 0 0 P(X = 2) =
84 84 XX 84
Mar. PMF of X : P(X = 0) P(X = 1) P(X = 2) P(X = 3) P(i, j)
∀i ∀j
20 45 18 1
PY (y) = P∗j = = = = =1
84 84 84 84
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 131
n
P(X = 0/Y = 1) = 1/5, P(X = 1/Y = 1) = 1/3,
g.i
P(X = 2/Y = 1) = 7/15, · · ·
Conditional distribution of Y given X :
P(Y = 1/X = 0) = 1/6, P(Y = 2/X = 0) = 1/3,
n
P(Y = 3/X = 0) = 1/2, · · ·
eri
2. Let the joint probability distribution of X and Y be given by
ine
X
-1 0 1
ng
1 1/6 0 1/6
arn
Show that their covariance is zero even though the two random
variables are not independent.
Le
0 1 2
Y
ww
X
0 0.1 0.04 0.02
1 0.08 0.20 0.06
2 0.06 0.14 0.30
Example 2.5 Given joint p.d.f. f (x, y) = 2, 0 < y < x < 1. Find
(a) Marginal p.d.f. of X (b) Marginal p.d.f. of Y
(c) Conditional. prob. of X given Y (d) Conditional prob. of Y given X
(e) Are X and Y independent?
n
(a) Marginal p.d.f. of X
g.i
Z∞
f (x) = f (x, y)dy
n
eri
−∞
Zy=x
= 2dy (∵ by (1))
ine
y=0
y=x
= 2[y] y=0
ng
= 2[(x) − (0)]
∴ f (x) = 2x, 0 < x < 1 (use extreme limits)
E
Z∞
f (y) = f (x, y)dx
Le
−∞
Zx=1
=
w.
2dy (∵ by (1))
x=y
ww
= 2[x]x=1
x=y
= 2[(1) − (y)]
∴ f (y) = 2(1 − y), 0 < y < 1 (use extreme limits)
(c) Conditional probability of X given Y
f (x, y)
f (x/y) =
f (y)
2
= (∵ by (1) and (b))
2(1 − y)
1
∴ f (x/y) = ,0 < y < 1
1−y
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 133
f (x, y)
f (y/x) =
f (x)
2
= (∵ by (1) and (a))
2x
1
∴ f (x/y) = , 0 < y < 1
x
n
g.i
(e) Are X and Y independent?
n
If X and Y are independent, then f (x, y) = f (x) · f (y)
eri
If X and Y are not independent, then f (x, y) , f (x) · f (y)
ine
Here f (x, y) = 2 (∵ by (1))
ng
, f (x, y)
arn
Le
Example 2.6 The bivariant random variable X and Y has the p.d.f.
f (x, y) = kx2 8 − y , x < y < 2x, 0 ≤ x ≤ 2. Find
(a) k:
Z∞ Z∞
W.K.T. f (x, y)dxdy = 1 (∵ f (x, y) has an unknown k)
−∞ −∞
Z2 Z2x
kx2 (8 − y)dydx = 1
0 x
(∵ by(1), y is a function of variable x)
Z2 2 y=2x
n
" #
y
k x2 8y − dx = 1
g.i
2 y=x
0
Z2 " !#
x2
n
k x2 16x − 2x2 − 8x − dx = 1
2
0
Z2 "
x2
# eri
ine
k x2 16x − 2x2 − 8x + dx = 1
2
0
Z2
ng
" 2
#
x
k x2 8x − 3 dx = 1
2
0
E
Z2 " 4
#
x
arn
k 8x3 − 3 dx = 1
2
0
" 4 #2
x x5
Le
k 8 −3 =1
4 10 0
5 2
" #
x
w.
k 2x4 − 3 =1
10 0
32
ww
k (32 − 0) − 3 =1
10
32
k (32) − =1
5
160 − 48
k =1
5
112
k =1
5
5
∴k=
112
5 2
∴ (1) ⇒ f (x, y) = x 8 − y , x < y < 2x, 0 ≤ x ≤ 2.
(2)
112
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 135
(b) f (x) :
Z∞
f (x) = f (x, y)dy
−∞
Z2x Z2x
5 2 5 2
= x 8 − y dy =
x 8 − y dy
112 112
x x
(∵ integration w.r.t. y, so treat remaining as constant)
n
!2x
y2
" ! !#
5 2 5 2 4x2 x2
= x 8y − = x 16x − − 8x −
g.i
112 2 x 112 2 2
" ! !#
5 2 4x2 x2
= x 16x − − 8x −
n
112 2 2
eri
" #
5 2 4x2
= x 8x − 3
112 2
ine
5 h 3 i
= 16x − 3x4 , 0 < x < 2
224
ng
Y Y = 2X
arn
51
(2, 4)
4
Le
Z∞ X=Y
f (y) = f (x, y)dx 3 R2
w.
−∞
(2, 2)
Zy 2
5 2 R1
ww
=
x 8 − y dx
112
y 1
2
[Refer region R1 ] X
0 1 2 3 40
Zy
5
= x2 dx
8−y
112
y
2
" #y
5 x3
= 8−y
112 3 y
" 3 !2
y3
!#
5 y
= 8−y −
112 3 24
[Link]
[Link]
136 UNIT II - Two Dimensional Random Variables
7y3
" #
5
= 8−y
112 24
5 3
= y 8 − y ,0 < y < 2
384
In Region R2 , f (y) :
Z∞
f (y) = f (x, y)dx
−∞
Z2
n
5 2
=
x 8 − y dx [Refer region R2 ]
g.i
112
y
2
Z2
n
5
= x2 dx
8−y
eri
112
y
2
" #2
x3
ine
5
= 8−y
112 3 y
" 2
y3
!#
5 8
ng
= 8−y −
112 3 24
5 h i
E
= 8 − y 64 − y3 , 2 < y < 4
112
arn
(d) f (x/y) :
f (x, y)
f (x/y) =
Le
f (y)
h i
5 2 2
112 8x − x y
w.
5 3
yh8−y i
=
384
5
112 8x2 − x2 y
ww
5 8 − y 64 − y3
112
(e) f (y/x) :
f (x, y)
f (y/x) =
f (x)
h i
5 2 2
112 8x − x y
= 5
224
(16x3 − 3x4 )
h i
2 2
2 8x − x y
=
(16x3 − 3x4 )
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 137
(f) f (y/x = 1) :
f (x = 1, y)
f (y/x = 1) =
f (x = 1)
5
112 8 − y
= 5
(16 − 3)
224
2
=
8−y
13
1 3
(g) P 4 ≤X≤ 4 :
n
Z3/4
g.i
1 3
P ≤X≤ = f (x)dx
4 4
1/4
n
Z3/4
eri
5 3
= 4
16x − 3x dx
224
1/4
ine
5 3 5 3/4
= 4
4x − x
224 5 1/4
= 0.0247366769
ng
given by
F(x, y) = (1 − e−x ) (1 − e−y ) , x > 0, y > 0 (1)
ww
To find Marginal p.d.f. f (x) and f (y), first we have to find joint p.d.f. f (x, y).
i.e.,
∂2 ∂2
f (x, y) = F(x, y) (or) F(x, y)
∂x∂y ∂y∂x
∂ ∂
" #
= −x
(1 − e ) (1 − e )−y
∂x ∂y
∂
= (1 − e−x ) e−y
∂x
= e−x e−y
= e−(x+y)
[Link]
[Link]
138 UNIT II - Two Dimensional Random Variables
"0 −(x+y) #∞
e
n
=
−1 0
g.i
= (0) − (−e−x )
n
eri
(b) Marginal p.d.f. of Y:
Z∞
f (y) =
ine
f (x, y)dx
−∞
Z∞
ng
= e−(x+y) dx (∵ by (2))
"0 −(x+y) #∞
E
e
=
arn
−1 0
= (0) − (−e−y )
= e−x · e−y
ww
= f (x) · f (y)
∴ X and Y are independent.
(d) P(1 < x < 3, 1 < y < 2) :
Z2 Z3
P(1 < x < 3, 1 < y < 2) = e−x e−y dxdy
1 1
=e −5
− e−4 − e−3 + e−2
Example 2.8 Given joint p.d.f. f (x, y) = 81 (6 − x − y), 0 < x < 2, 2 < y < 4.
Find
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 139
(a) P(X < 1, Y < 3) (b) P(X + Y < 3) (c) P(X + Y > 3)
(d) P(X < 1/Y < 3) (e) f (x/y = 1)
n
g.i
1
f (x, y) = (6 − x − y), 0 < x < 2, 2 < y < 4. (1)
8
n
(a) P(X < 1, Y < 3) : eri
ine
ng
Z∞ Z∞
P(X < 1, Y < 3) = f (x, y)dxdy
E
−∞ −∞
arn
Z1 Z3
1
= (6 − x − y)dydx (∵ by (1))
8
0 2
Le
Z1 !3
1 y2
= 6y − xy − dx
8 2
w.
2
0
Z1
1 9
ww
= 18 − 3x − − (12 − 2x − 2) dx
8 2
0
Z1
1 7
= − x dx
8 2
0
" #1
1 7 x2
= x−
8 2 2 0
1 7 1
= − − (0 − 0)
8 2 2
3
=
8
[Link]
[Link]
140 UNIT II - Two Dimensional Random Variables
41
3
(b) P(X + Y < 3) : X+Y=3
2
Z∞ Z∞
n
P(X + Y < 3) = f (x, y)dxdy 1
g.i
−∞ −∞
Z3 Z3−y X
1 0 1 2 30
n
= (6 − x − y)dxdy [∵ by (1)]
8
eri
2 0
3 !3−y
x2
Z
1
=
ine
6x − − yx dy
8 2 0
2
Z3 "
9 y2
! #
1
ng
= 18 − 6y − − + 3y − 3y + y2 − (0 − 0 − 0) dy
8 2 2
2
E
Z3 "
y2
#
1 27
= − 6y +
arn
dy
8 2 2
2
3 3
" #
1 27 y
= y − 3y2 −
Le
8 2 6 2
1 27 27 8
= 3 − 27 + − 27 − 12 +
w.
8 2 6 6
1 4
= 3−
ww
8 3
5
=
24
Z3
where, P(Y < 3) = f (y)dy (3)
2
Z∞
n
Now, f (y) = f (x, y)dx
g.i
−∞
Z2
1
=
n
(6 − x − y)dx
8
eri
0
Z2
1
= (6 − x − y)dx
ine
8
0
!2
1 x2
=
ng
6x − − xy
8 2 0
1
=
12 − 2 − 2y − ((0 − 0 − 0))
E
8
1
arn
= 10 − 2y
8
1
= 5−y
4
Le
Z3
1
∴ (3) ⇒ P(Y < 3) =
5 − y dy
w.
4
2
#3
y2
"
1
ww
= 5y −
4 2 2
1 9
= 15 − − (10 − 2)
4 2
1 9
= 7−
4 2
1 5
=
4 2
5
=
8
P(X < 1, Y < 3) 3/8 3
∴ (2) ⇒ P(X < 1/Y < 3) = = = [∵ by (a) and (3)]
P(Y < 3) 5/8 5
[Link]
[Link]
142 UNIT II - Two Dimensional Random Variables
(e) f (x/y = 1) :
" #
f (x, y)
f (x/y = 1) =
f (y) y=1
1
8 (6 − x − y)
= 1
(5 − y)
4 y=1
1 5−x
=
2 4
5−x
=
n
8
n g.i
Example 2.9 A gun is aimed at a certain point (origin of coordinate system),
eri
because of the random factors, the actual hit point can be any point (x, y)in a circle
of radius R about the origin. Assume that the joint density function of X and Y is
ine
c ,x + y ≤ R
2 2 2
constant in this circle and given by f (x, y) =
.
0 , otherwise
ng
q 2
2
πR 1 − R
x
, −R ≤ X ≤ R
(b) show that f (x) =
(a) Find c
E
0 , otherwise
arn
c , x2 + y2 ≤ R2
(
f (x, y) = (1)
0 , otherwise
w.
(a) Value of c :
ww
Z∞ Z∞
W.K.T. f (x, y)dxdy = 1 (∵ f (x, y) contains an unknown)
−∞ −∞
Z Z
cdx = 1
πR2 c = 1
1
∴c=
πR2
1
, x2 + y2 ≤ R2
∴ (1) ⇒ f (x, y) =
πR0 , otherwise
2 (2)
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 143
n
= dy (∵ constant fn. which is an even function)
πR2
g.i
0
2 h √ 2 i
= 2
R − x − (0)
πR2 q
n
1 − x 2
=
2
πR
2
R
R
eri
ine
r 2
2 x
= 1− , −R ≤ X ≤ R.
πR R
ng
0, otherwise
marginal and conditional probability density function of X and Y. Are
X and Y independent? [May/June 2006, ECE][M/J 07, CSE]
Le
x < y < 1, f (x) f y , f x, y
by ( 2 xy
x + 3 ; 0 < x < 1, 0 < y < 2
f x, y = Find (a) p x > 21 (b) P(Y < X)
0, elsewhere.
(c) p y < 21 /x < 12
{(a) = 5/6 , (b) = 7/24 , (c) = 5/32}
3. If X is the proportion of persons who will respond to one kind of mail
order solicitation, Y is the proportion of persons who will respond to
another kind of mail-order solicitation and the joint probability
density (function of X and Y is given by
5 x + 4y , 0 < x < 1, 0 < y < 1, find the probabilities
2
f x, y =
0, elsewhere
[Link]
[Link]
144 UNIT II - Two Dimensional Random Variables
that
(1) atleast 30% will respond to the first kind of mail-order solicitation.
(2) atmost 50% will respond to the second kind of mail-order
solicitation given that there has been 20% response to the first kind of
mail-order solicitation.
n
=
5. The joint p.d.f of a bivariate random variable (X, Y) is fXY x, y
k x + y , i f 0 < x < 2, 0 < y < 2
(
g.i
0, otherwise
(1) Find k {1/8}o
n
1+y
n
4 , 0 < x < 2; 4 , 0 < y < 2
x+1
(2) Find the marginal p.d..f of X and Y.
(3) Are X and Y independent?
(4) Find fY/X y/x and fX/Y x/y
eri {not
n independent}
1 x+y
x+y o
1
2 x+1 ; 2 y+1
ine
6. The joint probability density function of a two dimensional random
x2
= 2
+ 8 , 0 ≤ x ≤ 2, 0 ≤ y ≤ 1.
variable (X, Y) is given by f x, y xy
ng
Compute P (X > 1) , P y < 12 , P X > 1/Y < 12 , P Y < 12 /X > 1 ,
P(X<Y) and P (X + Y ≤ 1) . [M/J 07, ECE][N/D 07, CSE] {19/24, 1/4, 5/6,
E
Find the marginal densities,
0, otherwise
conditional density of X given Y = y and P X ≤ 12 /Y = 12 . [A/M 2008,
ECE]
10. Find the marginal and conditional densities if f (x, y) = k(x3 y + xy3 ), 0 =
x = 2, 0 = y = 2. [M/J 09, ECE]
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 145
n
g.i
2.4.9 Part-B[Problems of Correlation]
n
Example 2.10 For the following data about the use of fertilizer X (in kg) and the output yield Y (in kg) from a field, find the correlation coefficient.

X :  12   12   15   17   20   26
Y : 140  160  100  190  200  260
ng
r=
σX · σY
arn
E(XY) − E(X)E(Y)
= q q
E (X2 ) − [E(X)] E (Y2 ) − [E(Y)]2
2
Le
1
P
n xy − x y
, for n pairs of (x, y) values
q q
1
P 2 1
P 2
x2 − (x) n y2 − y
w.
n
=
(or)
P P P
n xy − ( x) y
ww
, direct method
q
h P P 2i h P 2 P 2 i
2
n x − ( x) n y − y
# X Y XY X2 Y2 X
1 12 140 1680 144 19600
2 12 160 1920 144 25600
3 15 100 1500 225 10000
4 17 190 3230 289 36100
5 20 200 4000 400 40000
6 26 260 6760 676 67600
n= X= Y= XY = X = Y =
P P P P 2 P 2
6 102 1050 19090 1878 198900
∴ r(X, Y) = [ 19090/6 − (102/6)(1050/6) ] / [ √( 1878/6 − (102/6)² ) · √( 198900/6 − (1050/6)² ) ]
          = [ 19090/6 − 107100/36 ] / [ √24 · √2525 ]
          = 206.6666667 / 246.1706725

∴ r = 0.8395        (Check : −1 ≤ r ≤ 1)
Note : Here r is positive and close to 1, i.e. the relation between fertilizer use and yield is close to a perfect positive correlation (r = 1).
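The arithmetic above is easy to confirm with NumPy (a sketch; np.corrcoef implements the same product-moment formula):

    import numpy as np

    x = [12, 12, 15, 17, 20, 26]
    y = [140, 160, 100, 190, 200, 260]
    print(np.corrcoef(x, y)[0, 1])    # ~0.8395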
n
eri
Example 2.11 Calculate the coefficient of correlation from the following heights
in inches of fathers X and sons Y.
ine
X 65 66 67 67 68 69 70 72
ng
Y 67 68 65 68 72 72 69 71
E
f (x, y) = x + y, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1.
ww
(1)
Cov(X, Y)
r=
σX · σY
E(XY) − E(X)E(Y)
= q q
E (X2 ) − [E(X)] E (Y2 ) − [E(Y)]2
2
R∞ R∞ R∞
! ∞ !
R
xy f (x, y)dxdy − x f (x)dx y f (y)dy
−∞ −∞ −∞ −∞
= s #2 s ∞ #2
∞
"∞ "∞
R R R R
x2 f (x)dx − x f (x)dx y2 f (y)dy − y f (y)dy
−∞ −∞ −∞ −∞
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 147
Z∞ Z∞
E(XY) = xy f (x, y)dxdy
−∞ −∞
Z1 Z1
= xy(x + y)dxdy
0 0
Z1 Z1
= x2 y + xy2 dxdy
0 0
n
Z1 " #1
x3 x2 2
g.i
= y + y dy
3 2 0
0
n
Z1 "
y y2
! #
eri
= + − (0 + 0) dy
3 2
0
ine
#1
y2 y3
"
= +
6 6 0
1 1 2
ng
= + − (0 + 0) =
6 6 6
1
=
E
3
arn
Le
Z∞
E(X) = x f (x)dx (2)
w.
−∞
Z∞
ww
Z1
1
∴ (2) ⇒ E(X) = x x + dx
2
0
Z1
x
= x + dx
2
2
0
" #1
x3 x2
= +
3 4 0
1 1
n
= + − (0 + 0)
3 4
g.i
7
=
12
n
Z∞
eri
E(Y) = y f (y)dy (3)
−∞
Z∞
ine
where f (y) = f (x, y)dx
−∞
ng
Z1
= (x + y)dx
E
0
#1
arn
"
x2
= + xy
2
0
1
= + y − (0 + 0)
Le
2
1
= y + ,0 < y < 1
w.
2
Z1
1
ww
∴ (3) ⇒ E(Y) = y y + dy
2
0
Z1
y
= y + dy
2
2
0
#1
y3 y2
"
= +
3 4 0
1 1
= + − (0 + 0)
3 4
7
=
12
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 149
∞
Z
E X2 = X2 f (x)dx
−∞
Z1
1
= x x + dx
2
2
0
Z1 2
!
x
= x3 + dx
2
0
n
" #1
x4 x3
g.i
= +
4 12 0
1 1
= + − (0 + 0)
n
4 12
eri
5
=
12
∞
ine
Z
E Y2 = y2 f (y)dy
−∞
ng
Z1
1
= y y + dy
2
2
E
0
arn
Z1 2
!
y
= y3 + dy
2
0
Le
#1
y4 y3
"
= +
4 12 0
w.
1 1
= + − (0 + 0)
4 12
ww
5
=
12
1 7 7
3 − 12 · 12
r= q q
5 49 5 49
12 − 144 12 − 144
= −0.0909
X 1 6 5 10 3 2 4 9 7 8
Y 3 5 8 4 7 10 2 1 6 9
Z 6 4 9 8 1 2 3 10 5 7
Using rank correlation method, discuss which pair of judges has the nearest
approximation to common likings of music.
Solution : Here X, Y, Z are discrete random variables.
n
W.K.T. the rank correlation coefficient for the given arrangement of ranks.
Here ranks are not repeated.
g.i
So, use Non-Repeated rank correlation formula. i.e.,
" # X
n
6 h i
r(X, Y) = 1 − dXY , d = x − y
2
(1)
n (n2 − 1)
r(Y, Z) = 1 −
"
6
n (n2 − 1)
# X
h
2
i
eri
dYZ , d = y − z (2)
ine
" # X
6 h i
r(Z, X) = 1 − dZX , d = z − x
2
(3)
n (n2 − 1)
ng
1 1 3 6 -2 -5 5 4 25 25
arn
2 6 5 4 1 1 -2 1 1 4
3 5 8 9 -3 -1 4 9 1 16
4 10 4 8 6 -4 -2 36 16 4
Le
5 3 7 1 -4 6 -2 16 36 4
6 2 10 2 -8 8 0 64 64 0
w.
7 4 2 3 2 -1 -1 4 1 1
8 9 1 10 8 -9 1 64 81 1
ww
9 7 6 5 1 1 -2 1 1 4
10 8 9 7 -1 2 -1 1 4 1
P 2 P 2 P 2
n dXY dYZ dZX
= 10 =200 =214 =60
" # X
6 h i
∴ (1) ⇒ r(X, Y) = 1 − 200 2
= −0.2121
10 (102 − 1)
" # X
6 h i
(2) ⇒ r(Y, Z) = 1 − 214 2
= −0.2969
10 (102 − 1)
" # X
6 h i
(3) ⇒ r(Z, X) = 1 − 60 = 0.6363
2
10 (102 − 1)
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 151
Example 2.14 Find rank correlation coefficient for X and Y given below:
X 68 64 75 50 64 80 75 40 55 64
Y 62 58 68 45 81 60 68 48 50 70
n
Solution : Here X, Y are discrete random variables.
g.i
Here ranks of X and Y are not given directly as in previous problem. So
find ranks for the given data of X and Y.
n
Here some of data of X and Y are repeated.
eri
So, use Repeated rank correlation formula. i.e.,
#
2
m m − 1
"
6 X X i i
r(X, Y) = 1 − +
2
ine
d (1)
2 XY
n (n − 1) 12
∀i
where d = x − y;
ng
# X Y Rank of X Rank of Y d = x − y d2
arn
1 68 62 4 5 -1 1
2 64 58 6 7 -1 1
3 75 68 2.5 3.5 -1 1
Le
4 50 45 9 10 -1 1
5 64 81 6 1 5 25
6 80 60 1 6 -5 25
w.
7 75 68 2.5 3.5 -1 1
8 40 48 10 9 1 1
ww
9 55 50 8 8 0 0
10 64 70 6 2 4 16
n = 10 d2XY = 72
P
In the X-series :
  * 75 is repeated 2 times, m₁ = 2 :   m₁(m₁² − 1)/12 = 2(2² − 1)/12 = 1/2
  * 64 is repeated 3 times, m₂ = 3 :   m₂(m₂² − 1)/12 = 3(3² − 1)/12 = 2

In the Y-series :
  * 68 is repeated 2 times, m₃ = 2 :   m₃(m₃² − 1)/12 = 2(2² − 1)/12 = 1/2

∴ (1) ⇒ r(X, Y) = 1 − [ 6 / (10(10² − 1)) ] [ 72 + 1/2 + 2 + 1/2 ]
                = 1 − 0.4545
∴ r = 0.5455
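For comparison, SciPy's Spearman coefficient can be run on the same data (a sketch; note that spearmanr computes the Pearson correlation of the midranks, which for tied data differs slightly from the correction-factor formula used above):

    from scipy.stats import spearmanr

    X = [68, 64, 75, 50, 64, 80, 75, 40, 55, 64]
    Y = [62, 58, 68, 45, 81, 60, 68, 48, 50, 70]
    print(spearmanr(X, Y).correlation)    # ~0.556 here, versus 0.5455 from the formula above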
n
g.i
2.4.10 Part-B[Problems of Correlation](Anna University Questions)
n
and 16 respectively,
√ find the correlation coefficient between (X +Y) and
eri
(X − Y). {r = 5/13 } [N/D 2006, ECE]
6−x−y
2. If f (x, y) = 8 , 0 ≤ x ≤ 2, 2 ≤ y ≤ 4 for a bivariate r. v. (X, Y) find the
ine
correlation coefficient ρXY . {-1/11}
3. Find the correlation coefficient for the following data: [N/D07,ECE]
ng
X 10 14 18 22 26 30
Y 18 12 24 6 30 36 {0.6}
E
X : 39 65 62 90 82 75 25 98 36 78
Y : 47 53 58 86 62 68 60 91 51 84
7. Let the joint p.d.f. of (X, Y) be fXY (x, y) = e−y , 0 < x < y < 8. Find the
correlation coefficient r(X, Y).[16 marks] [N/D 2008, ECE]
Example 2.15 The following table gives age X in years if cars and annual
maintenance cost Y(in 100 Rs.)
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 153
X 1 3 5 7 9
Y 15 18 21 23 22
Find
(a) regression equations (b) correlation coefficient
(c) Estimate the maintenance cost for a 4 year old car.
Solution :
n
(a) W.K.T. regression equations, i.e.,
g.i
Regression equation of Y on X is
y − y = b yx (x − x)
(1)
n
Regression equation of X on Y is
eri
(x − x) = bxy y − y
(2)
ine
where
Var (X)
σy σx
=r =r
arn
σP
x σy
P P P P P
n xy − ( x) y n xy − ( x) y
= P 2 P 2 = P 2
n x − ( x)
P 2
n y − y
Le
X X X X X
Now, find the values of x, y, x, y, xy, x2 , y2 by the table :
w.
X2 Y2
ww
# X Y XY
1 1 15 15 1 225
2 3 18 54 9 324
3 5 21 105 25 441
4 7 23 161 49 529
5 9 22 198 81 484
X = 25 Y = 99 XY = 533 X = 165 Y = 2003
P P P P 2 P 2
n = 5

x̄ = Σx / n = 25/5 = 5                 b_yx = [5(533) − (25)(99)] / [5(165) − (25)²] = 190/200 = 0.95
ȳ = Σy / n = 99/5 = 19.8              b_xy = [5(533) − (25)(99)] / [5(2003) − (99)²] = 190/214 = 0.8878
(a) Regression equation of Y on X :  y − ȳ = b_yx (x − x̄) ⇒ y − 19.8 = 0.95(x − 5) ⇒ y = 0.95x + 15.05     (3)
    Regression equation of X on Y :  x − x̄ = b_xy (y − ȳ) ⇒ x − 5 = 0.8878(y − 19.8)                        (4)

(b) r = ±√(b_xy · b_yx) = ±√(0.8878 × 0.95)
    ∴ r = 0.9183   (taken positive, since both regression coefficients are positive)
(c) Cost (Y) for a 4-year-old (X) car :
    We use the regression equation of Y on X, i.e.
    (3) ⇒ y = 0.95x + 15.05 = 0.95(4) + 15.05
    ∴ y = 18.85
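A quick least-squares check of this regression line (a sketch, assuming NumPy):

    import numpy as np

    x = np.array([1, 3, 5, 7, 9])
    y = np.array([15, 18, 21, 23, 22])
    b_yx, c = np.polyfit(x, y, 1)     # least-squares line y = b_yx * x + c
    print(b_yx, c)                    # ~0.95, ~15.05
    print(b_yx * 4 + c)               # estimated maintenance cost for a 4-year-old car ~ 18.85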
ng
8x − 10y + 66 = 0
Find
w.
Solution : Given
8x − 10y + 66 = 0 (1)
40x − 18y − 214 = 0 (2)
(a) Since both the lines of regression pass through the point x, y , where
(x) and y are mean values of x and y, which satisfies the two given
lines of regression.
Find the mean value point x, y by solving equations (1) and (2) : i.e.,
8x − 10y = −66 (3)
40x − 18y = 214 (4)
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 155
On solving,
x = 13
y = 17
(b) r(x, y) :
n
8 8 10 10
10 8
g.i
∴ bxy = ∴ b yx =
8 10
(2) ⇒ 40x − 18y − 214 = 0 (2) ⇒ 40x − 18y − 214 = 0
n
⇒ 40x − 214 = 18y ⇒ 40x = 18y + 214
⇒ y= y −
40
18
40
214
18 eri ⇒ x= y +
18
40
18
214
40
ine
∴ b yx = ∴ bxy =
18q 40q
∴ r=± bxy · b yx ∴ r=± bxy · b yx
ng
r r
10 40 18 8
r=± · r=± ·
8 18 40 10
E
r = ±2.777 r = ±0.6
arn
W.K.T. 0 ≤ r ≤ 1.
∴ r = ±0.6
Le
Example 2.17 For 10 observations on price X and supply Y the following data
w.
3467. Obtain the line of regression of Y on X and estimate the supply when the
price is 16 units. [M/J 2009, ECE]
Solution :
[Link]
[Link]
156 UNIT II - Two Dimensional Random Variables
n
Solution : Given U = X + Y. (1)
g.i
Let us define a new variable Let us define a new variable
V=X (2) V=Y (4)
n
(2) ⇒ x = v (2) ⇒ y = v
(1) ⇒ u = x + y
i.e., u = v + y [∵ by (2)]
(1) ⇒ u = x + y
eri
i.e., u = x + v [∵ by (4)]
ine
∴ y=u−v ∴x=u−v
g(u, v) = f (x, y) |J| (3) g(u, v) = f (x, y) |J| (5)
ng
0 1 1 −1
where J = = −1 where J = =1
1 −1 0 1
E
−∞ −∞
Z∞ Z∞
= =
w.
Example 2.19 Joint p.d.f. of X&Y is given by f (x, y) = e−(x+y) , x > 0, y > 0.
X+Y
Find p.d.f. of .
2
Solution : Given Joint p.d.f. of X&Y is given by
X+Y
Let U = . (2)
2
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 157
(3) ⇒ v = x ⇒∴ x = v (4)
x+y
∴ (2) ⇒ u =
2
⇒ 2u = v + y ⇒∴ y = 2u − v (5)
n
(6)
g.i
0 1
where J = = −2 ⇒ |J| = |−2| = 2
2 −1
n
∴ g(u, v) = e−(x+y) · 2
eri
= 2e−(v+2u−v)
g(u, v) = 2e−2u
ine
x>0
y>0
2u − v > 0
ng
∴ 0 < v < 2u
V(U = 0)
arn
v = 2u
Z∞
Le
= 2e−2u dv
U(V = 0)
ww
−∞
Z2u
= 2e−2u du
0
= 2e−2u · 2u
= 4ue−2u , u > 0
n
g.i
Given U = X − Y. (2)
Define a new variable V = X. (3)
n
eri
(3) ⇒ v = x ⇒∴ x = v (4)
∴ (2) ⇒ u = x − y
⇒ u = v − y ⇒∴ y = v − u
ine
(5)
ng
0 1
where J = = 1 ⇒ |J| = |1| = 1
arn
−1 1
∴ g(u, v) = e−[x+y] · 1
Le
= e−[v+v−u]
g(u, v) = eu · e−2v
w.
x≥0
y≥0
v−u≥0
ww
V(U = 0)
u=v
R1 R2
v≥u v>0
U(V = 0)
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 159
Z∞
∴ Pdf of U is g(u) = g(u, v)dv (7)
−∞
In R1 In R2
Z∞ Z∞
∴ (7) ⇒ g(u) = eu e−2v dv ∴ (7) ⇒ g(u) = eu e−2v dv
n
" −2u
#
1
e
g.i
= e (0) −
u = eu (0) −
−2 −2
u
e e −u
= ,u < 0 = ,u ≥ 0
n
2 2
eu
eri
,u < 0
∴ The required pdf of U : g(u) = 2
−u
e
,u ≥ 0
ine
2
Example 2.21 If X and Y are independent uniform random variates on [0, 1].
ng
(1)
[Link]
[Link]
160 UNIT II - Two Dimensional Random Variables
n
g.i
0<x<1 0<y<1
u
0< <1
n
To find range space of u and v: v 0<v<1
0<u<v
∴0<u<v<1
eri
V(U = 0)
ine
u=v
Z∞
ng
v=1
∴ Pdf of U is g(u) = g(u, v)dv v=1
−∞
E
Z1
1 v=u
arn
= dv U(V = 0)
v
u
Z
= [log v]1u
Le
= − log u, u > 0
ww
Example 2.22 If X and Y are independent uniform random variates on [0, 1].
X
Find p.d.f. of U = .
Y
n
X
g.i
Given U = . (2)
Y
n
Define a new variable V = X. (3)
eri
(3) ⇒ v = X ⇒∴ x = v
ine
(4)
x
∴ (2) ⇒ u =
y
ng
v v
⇒ u = ⇒∴ y = (5)
y u
E
arn
0 1 −v −v v
where J = −v 1 = 2 ⇒ |J| = 2 = 2
ww
u u u
u2 u
v
∴ g(u, v) = 1 · 2
u
v
= 2
u
0<x<1
0<y<1
v
0< <1
To find range space of u and v: 0 < v < 1 u
0<v<u
∴0<v<u<1
[Link]
[Link]
162 UNIT II - Two Dimensional Random Variables
V(U = 0)
u=v
Z∞
∴ Pdf of U is g(u) = g(u, v)dv v=1
−∞
Z1
1 R1 R2
= dv U(V = 0)
v
u
Z
n
= [log v]1u
g.i
= [(log 1) − (log u)]
= − log u, u > 0
n
eri
Example 2.23 If X and Y are independent random variables with mean ‘0’ and
√
variance σ2 . Find p.d.f. of R = X2 + Y2 , φ = tan−1 XY .
ine
Solution : Given X and Y are independent random variables with mean
‘0’ and variance σ2 . i.e.,
ng
X ∼ Nor. Dist. µ = 0, σ1 Y ∼ Nor. Dist. µ = 0, σ1
1 x−µ 2 1 y−µ 2
E
1 − 1 −
f (x) = √ e 2 σ ; f (y) = √ e 2 σ ;
arn
σ 2π σ 2π
−∞ < x < ∞ −∞ < y < ∞
x−µ y−µ
z= ; −∞ < µ < ∞ z= ; −∞ < µ < ∞
σ σ
Le
σ>0 σ>0
Joint pdf of X and Y :
!2 !2
w.
1 y − 0 1 y − 0
− −
1 σ 1 σ
f (x, y) = f (x) · f (y) = √ e 2 · √ e 2
σ 2π σ 2π
ww
1 x +y
2 2
1 −
= e 2 σ2 ; −∞ < x, y < ∞ (1)
2πσ 2
Given
√
R = X 2 + Y2 (2)
Y
φ = tan−1 (3)
X
(2) ⇒ a circle of radius R and argument φ.
Since the given transformations R and φ are in polar form of (x, y). So
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 163
define
x = R cos φ (3)
y = R sin φ (4)
n
g.i
g(R, φ) = f (x, y) |J| (6)
n
∂x ∂x
∂R ∂φ
where J = ∂y ∂y =
eri
cos φ sin φ
−R sin φ R cos φ
= R cos2 φ + R sin2 φ = R ⇒ |J| = R
ine
∂R ∂φ
∴ g(R, φ) = f (x, y) · |J|
ng
1 x2
+ y2
1 −
= e 2 σ2 ·R
2πσ 2
E
2
1 R
arn
R −
= 2
e 2 σ2 (∵ by(1))
2πσ
Le
w.
−∞ < x, y < ∞
To find range space of R and φ:
0 ≤ φ ≤ 2π and 0 ≤ R < ∞
ww
V(U = 0)
R=0 R→∞
U(V = 0)
0 ≤ φ ≤ 2π
[Link]
[Link]
164 UNIT II - Two Dimensional Random Variables
∴ Pdf of R is : ∴ Pdf of φ is :
Z∞
g(φ) = g(R, φ)dR
−∞
2
Z∞ Z∞ 1 R
R −
g(R) = g(R, φ)dφ = 2
e 2 σ2 dR
2πσ
−∞ 0
2 2
Z2π 1 R Z∞ 1 R
R − R −
n
= e 2 σ2 dφ = Re 2 σ2 dR
2πσ 2 2πσ 2
g.i
0 0
R2
2 2RdR
1 R Take t = 2 ⇒ dt =
R −
n
2σ 2σ2
= 2
e 2 σ2 [φ]2π 0 Z∞
eri
2πσ 1
∴ g(φ) = σ2 e−t dt
2
1 R 2πσ 2
R − " −t0 #∞
= e 2 σ2 [2π]
ine
2
2πσ 1 e
2 =
1 R 2π −1 0
R − 1 h −∞ 0 i
∴ g(R) = 2 e 2 σ2
ng
=− (e ) − e
σ 2π
1
= − [(0) − (1)]
E
2π
1
arn
∴ g(φ) = , 0 ≤ φ ≤ 2π
2π
Questions)
w.
# General format of CLT problems :   (1) ∼ N( (2), (3) ),   Z = [ (1) − (2) ] / √(3)

(a) X ∼ N(µ, σ²),                                    Z = (X − µ)/σ
(b) Sum = Total = ΣX = S_n ∼ N(nµ, nσ²),             Z = (S_n − nµ)/(σ√n)
(c) Mean = Average = ΣX/n = X̄ ∼ N(µ, σ²/n),          Z = (X̄ − µ)/(σ/√n)
Example 2.24 The lifetime of a certain brand of electric bulb may be considered a random variable with mean 1200 hrs and standard deviation 250 hrs. Using the CLT, find the probability that the average lifetime of 60 bulbs exceeds 1250 hrs.

By the CLT, the sample mean  X̄ = ΣX/n ∼ N(µ, σ²/n),  so  Z = (X̄ − µ)/(σ/√n).

∴ P(X̄ > 1250) = P( Z > (1250 − 1200)/(250/√60) )
              = P(Z > 1.55)
              = 0.5 − P(0 < Z < 1.55)
              = 0.5 − 0.4394
∴ P(X̄ > 1250) = 0.0606
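A Monte Carlo illustration of this CLT approximation (a sketch, assuming NumPy; the bulb-lifetime distribution is not specified in the problem, so a normal population is assumed here purely for the simulation):

    import numpy as np

    rng = np.random.default_rng(2)
    means = rng.normal(1200, 250, size=(100_000, 60)).mean(axis=1)   # average of 60 bulbs
    print(np.mean(means > 1250))      # ~0.0606, matching P(Xbar > 1250)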
n
eri
Example 2.25 If Vi , i = 1, 2, · · · , 20 are independent noise voltages received in
an adder and V is sum of voltages received. Find the probability that the total
ine
incoming exceeds 105, using C.L.T., assume that each of the random variable Vi
is uniformly distributed over (0, 10)?
ng
X i=n
X
arn
Vi = V = Total voltage = Vi = V1 + V2 + V3 + · · · + Vn
i=1
n = Number of voltages = 20
Le
We have to find the probability of total voltage exceeds 105 = P(V > 105) :
So use CLT for sum of random variables.
w.
X
Sum = Total = Vi = Sn = V ∼ N nµ, nσ 2
Sn − nµ
ww
⇒Z= √
σ n
To find µ and σ:
V−nµ 105−(20)(5)
∴ P(V > 105) = P √ > r
σ n
100 √
20
12 z = 0.38 z → ∞
5
=P z>
12.90
= P(z > 0.3872)
= 0.5 − 0.1480
n
∴ P(V > 105) = 0.3520
n g.i
Example 2.26 An unbiased die is rolled 300 times find the probability that number
eri
of outcomes which are odd will be between 140 and 150.
ine
Solution : Given, an unbiased die is rolled.
ng
3 1
E
1
q=1−p=
2
X = Number of odd outcomes
Le
i.e., X ∼ [Link](n, p)
1
np = Mean = 300 × = 150 ⇒ µ = 150
w.
2
150
npq = (np)q = Variance = = 75 ⇒ σ2 = 75
ww
We have to find the probability of getting odd numbers between 140 and
150 = P(140 < X < 150) :
So use C.L.T. for a random variable.
i.e., By C.L.T., [Link]. approaches as n → ∞.
X ∼ N µ, σ 2
X−µ
⇒z=
σ
[Link]
[Link]
168 UNIT II - Two Dimensional Random Variables
!
X−µ
∴ P(140 < X < 150) = P 140 < < 150
σ
140 − µ 150 − µ
!
=P <z<
σ σ
! z=0 z = 1.15
140 − 150 150 − 150
=P √ <z< √
75 75
= P (−1.15 < z < 0)
= P (0 < z < 1.15)
n
∴ P(140 < X < 150) = 0.3749
g.i
Example 2.27 A random sample of size 100 is taken from a population whose
mean is 60 and variance is 400. Use CLT, find probability of mean of sample will
n
not differ from µ = 60 by more than 4.
Solution : Given n = Size of the sample = 100
eri
ine
µ = Mean of the sample = 60
σ2 = Variance of the sample = 400
ng
We have to find, the probability of mean of sample will not differ from
µ = 60 by more than 4 = P (|X − 60| ≤ 4):
So use CLT for average of random variables.
E
σ2
P !
X
∴ Mean = Average = = X ∼ N µ,
arn
n n
X−µ
⇒Z= σ
Le
√
n
∴ P X − 60 ≤ 4 = P −4 ≤ X − 60 ≤ 4
w.
= P −4 + 60 ≤ X ≤ 4 + 60
ww
= P 56 ≤ X ≤ 64
56 − µ X − µ 64 − µ
= P σ ≤ σ ≤
√σ
√ √
n n
n
z=0 z=2
56 − 60 64 − 60
= P 20 ≤z≤
√ √20
100 100
= P(−2 ≤ z ≤ 2)
= 2P(0 ≤ z ≤ 2)
= 2(0.4772)
∴ P X − 60 ≤ 4 = 0.9544
[Link]
[Link]
PROBABILITY AND RANDOM PROCESSES 169
1. State and prove the central limit theorem for a sequence of independent
and identically distributed random variables.[M/J 06, ECE]
2. The life time of a certain brand of a tube light may be considered
as a random variable with mean 1200 hours and standard deviation
250 hours. Find the probability, using central limit theorem, that the
average life time of 60 lights exceeds 1250 hours. {0.0606} [N/D 2006,
n
ECE]
g.i
3. State and prove Central Limit Theorem. [N/D 2007, ECE]
4. A random sample of size 100 is taken from a population whose mean is
n
60 and variance 400. Using C.L.T. with what probability can we assert
eri
that the mean of the sample will not differ from = 60 more that 4?
[A/M 2008, ECE]
ine
5. The life time of a certain brand of an electric bulb may be considered
a RV with mean 1200h and standard deviation 250h. Find the
probability, using central limit theorem, that the average lifetime of 60
ng
1, 0 < x < 1, what is the approximate probability that the sum of these
numbers is atleast 8? [N/D 2008, ECE]
Le
w.
ww
UNIT III - Classification of Random Processes

Definition and examples - first order, second order, strictly stationary,
wide-sense stationary and ergodic processes - Markov process -
Binomial, Poisson and Normal processes - Sine wave process -
Random telegraph process.

Introduction:

Definition :
A random process is a collection (or ensemble) of random variables
{X(s, t)} that are functions of a real variable, namely time t, where s ∈ S,
the sample space, and t ∈ T, the parameter (index) set.
(OR)
A family of random variables which are functions of time t is called a stochastic
process {X(t), t ∈ T}, 0 ≤ t < ∞.
3.1.1 Classification of random process
Depending on whether the index set T and the state space are discrete or
continuous, a random process is one of four types:
1. Discrete random sequence (discrete time, discrete state):
Xn takes one of the values {1, 2, 3, 4, 5, 6}, T = {1, 2, 3, · · · }.
2. Continuous random sequence (discrete time, continuous state):
Xn represents the temperature at a point in a room at the end of the nth hour
of a day; Xn , 1 ≤ n ≤ 24.
3. Discrete random process (continuous time, discrete state):
X(t) represents the number of telephone calls received by a hospital in the
time interval [0, t]; the sample functions x(t) are step-wave functions.
4. Continuous random process (continuous time, continuous state):
X(t) represents the maximum temperature at a place in the interval [0, t];
the sample functions f (t) are non-step (continuous) wave functions.
Example :
1. Suppose an FM station is broadcasting a tone X(t) = 10 cos 20t. It is received
by a large number of receivers in a given area whose distances from the
transmitter can be considered as a continuous random variable. Since the
amplitude and phase of the waveform received by any receiver depend on
this distance, they are all random variables. Hence the ensemble of the
received waveforms can be represented by a random process {X(t)} of the
form X(t) = A cos(20t + θ), where A and θ are random variables representing
the amplitude and phase of the received signal.
2. If Xn denotes the presence or absence of a pulse at the end of the nth
time slot (say the nth second) in a digital communication system or digital data
processing system, then {Xn , n ≥ 1} is a random process.
3.2 Stationary random processes

3.2.1 First order stationary processes
A random process is called a first order stationary process if its first
order density function does not change with a shift in the time origin.

The mean of a random process is defined as
E[X(t)] = ∫ x f (x, t) dx ,
the integral being taken over the range of the random variable X(t)
(its lower and upper limits for a continuous distribution).

The autocorrelation of a random process X(t) is defined as
RXX (t1 , t2 ) = E[X(t1 )·X(t2 )] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 f (x1 , x2 ; t1 , t2 ) dx1 dx2
3.2.9 Auto covariance of a random process X(t) is defined as
CXX (t1 , t2 ) = RXX (t1 , t2 ) − E[X(t1 )] · E[X(t2 )]

3.3 Ergodic random process
The time average of a random process X(t) over (−T, T) is defined as
X̄T = (1/2T) ∫_{−T}^{T} X(t) dt
Consider a random process X(t) with constant mean µ {i.e., E[X(t)] = µ}
and time average X̄T ; then X(t) is said to be mean ergodic (or ergodic
in mean) if X̄T tends to µ as T → ∞, i.e.,
lim_{T→∞} X̄T = E[X(t)] = µ ,  i.e.,  lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t) dt = E[X(t)] = µ

Note :
Ergodic process ⊂ Stationary process ⊂ W.S.S. process ⊂ Random process
3.4 Markov Process (Markovian process)
Definition :
A random process X(t) is called a Markov process if, for t0 < t1 < t2 <
· · · < tn < tn+1 , we have
P[X(tn+1 ) ≤ xn+1 / X(tn ) = xn , X(tn−1 ) = xn−1 , · · · , X(t0 ) = x0 ] = P[X(tn+1 ) ≤ xn+1 / X(tn ) = xn ]
i.e., in words, if the future behaviour of the process depends only on the
present state and not on the past, then the process is called a Markov process.
Note :
(1) Thus from the definition, the future state xn+1 of the process at
time tn+1 {i.e., X(tn+1 ) = xn+1 } depends only on the present state
xn of the process at tn {i.e., X(tn ) = xn } and not on the past values
x0 , x1 , · · · , xn−1 .
(3) Let the random variable Sn represent the total number of hits in the first n
trials, defined by Sn = x1 + x2 + · · · + xn , with possible values
0, 1, 2, · · · , n. Then the random variable
Sn+1 = x1 + x2 + · · · + xn + xn+1 = Sn + xn+1 , so that
Sn+1 = k + 1, if the (n + 1)st trial results in a hit,
     = k,     if the (n + 1)st trial results in a miss.
Thus
P[Sn+1 = k + 1 / Sn = k] = P[Xn+1 = 1] = p
P[Sn+1 = k / Sn = k] = P[Xn+1 = 0] = q
These conditional probabilities depend only on the value of Sn
(i.e., the just previous value) and not on the values of the random
variables S1 , S2 , · · · , Sn−1 (past values), as it does not matter how the
value Sn = k is reached.
3.4.1 Types of Markov process
If the one-step transition probabilities do not depend on the step, i.e.,
pij (n − 1, n) = pij (m − 1, m), then the Markov chain is called a homogeneous
Markov chain, or the chain is said to have stationary (constant)
transition probabilities.

Transition probability matrix (t.p.m.) :
When the Markov chain is homogeneous, the one step (stationary)
transition probability is denoted by pij , and the matrix of all pij values,
i.e., the matrix P = {pij }, is called the (one - step) t.p.m.
Note :
The t.p.m. of a Markov chain is a stochastic matrix since
(1) all pij ≥ 0, and
(2) Σj pij = 1, ∀i (i.e., the sum of all elements in each row = 1).

The n-step transition probability is P[xn = aj / x0 = ai ] = pij^(n).
Chapman - Kolmogorov theorem (for the n-step t.p.m.) :
If P is the one-step t.p.m. of a homogeneous Markov chain, then the
n-step t.p.m. is P^(n) = P^n , i.e., [pij^(n)] = [pij ]^n .

Irreducible :
A Markov chain is said to be irreducible if every state can be
reached from every other state, i.e., pij^(n) > 0 for some n and for
all i and j.
The t.p.m. of an irreducible chain is an irreducible matrix.
Otherwise, the chain is said to be reducible (non-irreducible).

Return state :
If pii^(n) > 0 for some n > 1, then we call the state i of the Markov
chain a return state.

Period :
Let i be a return state with pii^(m) > 0 for some m. Then the period di of
state i is defined as
di = GCD{m : pii^(m) > 0},
where GCD = Greatest Common Divisor.
If di = 1, then state i is said to be aperiodic; if di > 1, it is periodic.

Persistent (recurrent) states :
A return state i is non-null persistent if its mean recurrence time
µii = Σ_{n=1}^{∞} n fii^(n) is finite, and null persistent if µii → ∞.

Ergodic state :
A non-null persistent and aperiodic state is called ergodic.

Stationary (steady state) distribution :
A row vector π = (π1 , π2 , · · · , πn ) is a stationary distribution of the chain if
π1 + π2 + ... + πn = 1
and πP = π.
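The stochastic-matrix conditions, the n-step t.p.m. P^(n) = P^n and the stationary distribution πP = π can all be checked numerically. The sketch below uses an assumed example matrix; any stochastic matrix could be substituted.

```python
import numpy as np

# Assumed illustration t.p.m.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])

# Stochastic matrix check: non-negative entries, each row sums to 1
assert (P >= 0).all() and np.allclose(P.sum(axis=1), 1)

# n-step t.p.m. via Chapman-Kolmogorov: P^(3) = P^3
print(np.linalg.matrix_power(P, 3))

# Stationary distribution: solve pi P = pi together with sum(pi) = 1
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)                                  # long-run state probabilities
```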
Bernoulli Process :
The Bernoulli random variable Xi is defined as
X(ti ) = Xi = 1, if the ith trial is a success
            = 0, if the ith trial is a failure
where the time index ti is such that i = · · · , −2, −1, 0, 1, 2, · · ·
and the state space is S = {0, 1}.
The ensemble {X(t)} of these r.v.s is called a Bernoulli process.

Properties of Bernoulli Process :
(1) It is a discrete sequence.
(2) It is a SSS process.
(3) E[Xi ] = p, E[Xi²] = p and
Var[Xi ] = E[Xi²] − [E(Xi )]² = p − p² = p(1 − p) = pq

Binomial process :
Let Sn be the number of successes in n Bernoulli trials,
i.e., Sn = X1 + X2 + · · · + Xn , n = 1, 2, 3, · · · .
Properties :
(1) P[Sn = k] = nCk p^k (1 − p)^{n−k} = nCk p^k q^{n−k} ,  k = 0, 1, 2, · · · , n
(2) E[Sn ] = np ,  Var[Sn ] = npq
(3) The Binomial distribution of the process approaches the Poisson
distribution when n is very large and p is very small.
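A short simulation makes these two processes concrete: the partial sums Sn of an i.i.d. 0/1 sequence form the Binomial process, with E[Sn ] = np and Var[Sn ] = npq. The value of p below is only an assumed illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, paths = 0.3, 50, 100_000

X = rng.random((paths, n)) < p          # Bernoulli trials X1, ..., Xn
S = X.cumsum(axis=1)                    # Binomial process Sn = X1 + ... + Xn

print("E[Sn]  :", S[:, -1].mean(), "vs np  =", n * p)
print("Var[Sn]:", S[:, -1].var(),  "vs npq =", n * p * (1 - p))
```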
3.5 Poisson process
The Poisson process is a continuous parameter (t), discrete state
process. It is a practically very useful random process.
Uses :
(1) To count the number of incoming telephone calls received in a particular
time interval.
(2) The arrival of customers at a bank in a day.
(3) The emission of electrons from the surface of a light - sensitive
material (photo detector).
Mean of the Poisson process :
E[X(t)] = Σ_{x=0}^{∞} x P[X(t) = x]
        = Σ_{x=0}^{∞} x e^{−λt} (λt)^x / x!
        = e^{−λt} [ 1·(λt)/1! + 2·(λt)²/2! + 3·(λt)³/3! + · · · ]
        = e^{−λt} (λt) [ 1 + (λt)/1! + (λt)²/2! + · · · ]
        = e^{−λt} (λt) e^{λt}
        = λt
Now, E[X²(t)] = Σ_{x=0}^{∞} x² P[X(t) = x]
             = Σ_{x=0}^{∞} x² e^{−λt} (λt)^x / x!
             = Σ_{x=0}^{∞} [x(x + 1) − x] e^{−λt} (λt)^x / x!
             = Σ_{x=0}^{∞} x(x + 1) e^{−λt} (λt)^x / x! − λt    (∵ mean of the Poisson process = λt)
             = e^{−λt} [ 1·2·(λt)/1! + 2·3·(λt)²/2! + 3·4·(λt)³/3! + · · · ] − λt
             = e^{−λt} (λt) Σ_{k=0}^{∞} (k + 2) (λt)^k / k! − λt
             = e^{−λt} (λt) [ (λt) e^{λt} + 2 e^{λt} ] − λt
             = (λt)² + 2λt − λt
             = (λt)² + λt
∴ Var[X(t)] = E[X²(t)] − {E[X(t)]}² = (λt)² + λt − (λt)² = λt
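The results E[X(t)] = λt and Var[X(t)] = λt can be verified by simulating the counting variable X(t); the rate λ and time t below are only assumed illustration values.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, t, paths = 3.0, 2.0, 200_000

# Number of events in (0, t] of a Poisson process with rate lam
N = rng.poisson(lam * t, size=paths)

print("mean    :", N.mean(), "vs lambda*t =", lam * t)
print("variance:", N.var(),  "vs lambda*t =", lam * t)
```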
Correlation coefficient of the Poisson process :
rXX (t1 , t2 ) = CXX (t1 , t2 ) / ( √Var[X(t1 )] √Var[X(t2 )] )
             = λt1 / √( (λt1 )(λt2 ) ) ,  if t2 ≥ t1
             = √( t1 / t2 ) ,            if t2 ≥ t1
3.5.7 M.G.F. of Poisson process
M_{X(t)}(u) = E[ e^{uX(t)} ]
            = Σ_{x=0}^{∞} e^{ux} P[X(t) = x]
            = Σ_{x=0}^{∞} e^{ux} e^{−λt} (λt)^x / x!
            = e^{−λt} Σ_{x=0}^{∞} [ (λt) e^u ]^x / x!
            = e^{−λt} [ 1 + (λt)e^u /1! + ((λt)e^u )²/2! + · · · ]
            = e^{−λt} e^{(λt)e^u}
            = e^{λt(e^u − 1)} ,  where u is a parameter.

Recall that P[N(t) = x] = e^{−λt} (λt)^x / x! ,  x = 0, 1, 2, · · ·
(1) Prove that a first order stationary random process has a constant
mean. [Nov/Dec 2006, ECE]
Solution : Consider a random process X(t) at two different instants t1
and t2 .
E[X(t1 )] = ∫_{−∞}^{∞} x fX (x, t1 ) dx
E[X(t2 )] = ∫_{−∞}^{∞} x fX (x, t2 ) dx
Let t2 = t1 + C. Then
E[X(t2 )] = ∫_{−∞}^{∞} x fX (x, t1 + C) dx
          = ∫_{−∞}^{∞} x fX (x, t1 ) dx      (∵ X(t) is first order stationary)
          = E[X(t1 )]

(2) Define Binomial process. Give an example for its sample function. [N/D 2006, ECE]
Solution : A Binomial process can be defined as a sequence of partial sums
{Sn / n = 1, 2, · · · }, where Sn = X1 + X2 + · · · + Xn .
Example: the sample sequence (x1 , x2 , x3 , · · · ) = (1, 1, 0, 0, 1, 0, 1, · · · ) gives
(S1 , S2 , S3 , · · · ) = (1, 2, 2, 2, 3, 3, 4, · · · )
(3) Show that the difference of two independent Poisson processes is not a
Poisson process.
Solution : For X(t) = X1 (t) − X2 (t), E[X(t)] = (λ1 − λ2 )t, but
E[X²(t)] = (λ1 + λ2 )t + (λ1 − λ2 )² t² ≠ (λ1 − λ2 )t + (λ1 − λ2 )² t² ,
so the mean–variance relation of a Poisson process fails.
∴ X(t) = X1 (t) − X2 (t) is not a Poisson process.

(4) Define Strict sense stationary (SSS) process. [M/J 07, ECE]
Solution : A random process is called a strongly stationary process
or strict sense stationary process (abbreviated as SSS process) if all its
finite dimensional distributions are invariant under translation of the time
parameter, i.e., the time series {X(t)} is strictly stationary if
f (x1 , · · · , xn ; t1 , · · · , tn ) = f (x1 , · · · , xn ; t1 + h, · · · , tn + h) for all n and h.

(5) Show that the sum of two independent Poisson processes is again a
Poisson process.
Solution :
P[X(t) = n] = Σ_{r=0}^{n} P[X1 (t) = r] P[X2 (t) = n − r]
            = Σ_{r=0}^{n} [ e^{−λ1 t} (λ1 t)^r / r! ] [ e^{−λ2 t} (λ2 t)^{n−r} / (n − r)! ]
            = ( e^{−(λ1 +λ2 )t} / n! ) Σ_{r=0}^{n} [ n! / (r!(n − r)!) ] (λ1 t)^r (λ2 t)^{n−r}
            = ( e^{−(λ1 +λ2 )t} / n! ) Σ_{r=0}^{n} nCr (λ1 t)^r (λ2 t)^{n−r}
            = ( e^{−(λ1 +λ2 )t} / n! ) (λ1 t + λ2 t)^n
⇒ X(t) = X1 (t) + X2 (t) has a Poisson distribution with parameter (λ1 + λ2 )t.
(6) Define wide - sense stationary process.
Solution : A random process X(t) is said to be wide sense stationary if its
mean is a constant and its autocorrelation depends only on the time difference,
i.e., E[X(t)] = a constant and RXX (t, t + τ) is a function of τ only.

(7) Define Markov process.
Solution : If the future behaviour of a process depends only on the present state,
but not on the past, then the process is called a Markov process.

(8) When is a stochastic process said to be ergodic? [M/J 08, ECE]
Solution : When all the time averages are equal to the corresponding
ensemble averages.

(11) Define strict sense stationary process and wide sense stationary process.

(13) Prove that the sum of two independent Poisson processes is a Poisson
process.

(15) Define wide sense stationary process. Give an example. [M/J 09, ECE]
Solution : (= 6th question)
Example: If X(t) = sin(ωt + y), where y is a uniform random variable in
(0, 2π), then X(t) is W.S.S.
(16) Determine
whether the given matrix is irreducible or not
0.3 0.7 0
P = 0.1 0.4 0.5 [N/D 09, ECE]
0 0.2 0.8
Solution : In this Markov chain, as all states communicate with each
other, the chain is irreducible. For example, if we assume the three
states 0, 1 and 2 in this chain, it is possible to go from state 0 to 1 with
probability 0.7 and from state 1 to 2 with probability 0.5. So it is
possible to go from state 0 to 2. So the chain is irreducible and all the
states are recurrent.
(17) Show that the random process X(t) = A cos(ω0 t + θ) is not stationary, if
A and ω0 are constants and θ is uniformly distributed in (0, π). [N/D 09, ECE]
Solution : E[X(t)] = −(2A/π) sin ω0 t = a function of t ⇒ X(t) is not
stationary.

(18) Examine whether the Poisson process {X(t)} given by the law
P[X(t) = r] = e^{−λt} (λt)^r / r! ,  r = 0, 1, 2, · · · , is covariance stationary. [M/J 06, CSE]
(19) Define Markov process and a Markov chain. [M/J 06, CSE]
Solution : If, for t1 < t2 < · · · < tn < t,
P[X(t) ≤ x / X(t1 ) = x1 , X(t2 ) = x2 , · · · , X(tn ) = xn ] = P[X(t) ≤ x / X(tn ) = xn ] ,
then X(t) is a Markov process. A Markov process with a discrete state space
is called a Markov chain.

(20) Define random process and give its classification. [N/D 06, CSE]
Solution : Refer to the classification of random process table.

(21) Let X be the random variable which gives the inter arrival time (time
between successive arrivals), where the arrival process is a Poisson
process. What will be the distribution of X? How? [M/J 06, CSE]
Solution : X follows an exponential distribution.
P(X > t) = P(no arrival in time t) = e^{−λt} , so F(t) = P(X ≤ t) = 1 − e^{−λt}
is the distribution function, and f (t) = F'(t) = λe^{−λt} , which is the p.d.f. of
an exponential variate.
(22) Define strict sense stationary process and give an example. [M/J 06]
Solution : Refer (S.S.S., 4th question).
Example : (1) An ergodic process. (2) X(t) = A cos(ω0 t + θ), where A and ω0 are
constants and θ is a uniformly distributed r.v. in (0, 2π).
(23) A man tosses a fair coin until 3 heads occur in a row. Let Xn denote
the longest string of heads ending at the nth trial (i.e., Xn = k, if at the
nth trial, the last tail occurred at the (n − k)th trial). Find the transition
probability matrix. [M/J 06, CSE]
Solution : The state space = {0, 1, 2, 3}, since the coin is tossed until 3
heads occur in a row.
                        State of Xn
                         0    1    2    3
                    0   1/2  1/2   0    0
P = State of Xn−1   1   1/2   0   1/2   0
                    2   1/2   0    0   1/2
                    3    0    0    0    1
(24) Define strict sense and wide sense stationary process. [N/D 06, CSE]
Solution : Refer (4th and 15th questions)

(25) The one-step t.p.m. of a Markov chain {Xn }, n = 1, 2, 3, · · · with three states 0, 1 and 2 is
     1/4  3/4   0
P =  1/4  1/2  1/4
      0   3/4  1/4
with the initial distribution P(0) = (1/3, 1/3, 1/3). Find
P(X3 = 1, X2 = 2, X1 = 1, X0 = 2). [N/D 06, CSE]
Example 3.1 Let the random process X(t) = cos(t + φ), where φ is a random variable
with f (φ) = 1/π , −π/2 < φ < π/2. Check whether the process is stationary or not.
Solution :
E[X(t)] = ∫_{−π/2}^{π/2} cos(t + φ) (1/π) dφ
        = (1/π) [ sin(t + φ) ]_{−π/2}^{π/2}
        = (1/π) [ sin(t + π/2) − sin(t − π/2) ]
        = (1/π) [ cos t + cos t ]          (∵ sin(90° − t) = cos t)
        = (2/π) cos t
        = a function of t
∴ X(t) is not stationary.
Example 3.2 Show that X(t) = A cos(ωt + θ) is not stationary if A and ω are
constants and θ is uniformly distributed in (0, π).
Solution : f (θ) = 1/π, 0 < θ < π.
E[X(t)] = ∫_{0}^{π} A cos(ωt + θ) (1/π) dθ
        = (A/π) [ sin(ωt + θ) ]_{0}^{π}
        = (A/π) { sin(ωt + π) − sin ωt }
        = (A/π) { −sin ωt − sin ωt }        (∵ sin(ωt + 180°) = −sin ωt)
        = −(2A/π) sin ωt
        = a function of t
∴ X(t) is not stationary.
Example 3.3 Let X(t) = cos(ωt + θ), where θ is uniformly distributed in (−π, π).
Check whether X(t) is stationary or not. Also find the first and second moments of the
process.
Solution : f (θ) = 1/2π, −π < θ < π.
E[X(t)] = ∫_{−π}^{π} cos(ωt + θ) (1/2π) dθ
        = (1/2π) [ sin(ωt + θ) ]_{−π}^{π}
        = (1/2π) [ sin(ωt + π) − sin(ωt − π) ]
        = (1/2π) [ −sin ωt + sin ωt ]
        = 0 = a constant = not a function of t
∴ X(t) is stationary (in the mean).
∴ First moment of the random process X(t) = E[X(t)] = 0.

II moment of the random process : E[X²(t)]
E[X²(t)] = E[ cos²(ωt + θ) ]
         = E[ (1 + cos 2(ωt + θ)) / 2 ]
         = (1/2) E[1] + (1/2) E[ cos(2ωt + 2θ) ]
         = 1/2 + (1/2)(1/2π) ∫_{−π}^{π} cos(2ωt + 2θ) dθ
         = 1/2 + (1/8π) [ sin(2ωt + 2π) − sin(2ωt − 2π) ]
         = 1/2 + (1/8π) [ sin 2ωt − sin 2ωt ]      (∵ sin(2π + A) = sin A, sin(A − 2π) = sin A)
         = 1/2 + 0 = 1/2
∴ Second moment of the random process X(t) = E[X²(t)] = 1/2.
∴ X(t) is stationary w.r.t. both the I and II moments.
Note : Variance of the random process X(t) = E[X²(t)] − {E[X(t)]}²
     = 1/2 − (0)² = 1/2
Example 3.4 Show that the random process X(t) = A cos(ωt + θ) is W.S.S. (i.e.,
covariance stationary) if A and ω are constants and θ is a uniformly distributed
random variable in (0, 2π).
Solution : Given: X(t) = Random process = A cos(ωt + θ),
where A and ω are constants,
θ = a uniformly distributed r.v. defined in (0, 2π),
f (θ) = P.d.f. of θ = 1/2π.
To show X(t) is WSS, we have to show
(1) E[X(t)] = a constant
(2) RXX (t, t + τ) = a function of τ
(OR) Autocorrelation of X(t) = E[X(t1 )·X(t2 )] = a function of the time difference
= f (t2 − t1 ) (or) f (t1 − t2 ).
Now,
(1) ⇒ E[X(t)] = ∫_{0}^{2π} A cos(ωt + θ) (1/2π) dθ
             = (A/2π) [ sin(ωt + θ) ]_{0}^{2π}
             = (A/2π) { sin(ωt + 2π) − sin ωt }
             = (A/2π) { sin ωt − sin ωt }        (∵ sin(ωt + 2π) = sin ωt)
             = 0 = a constant
(2) ⇒ RXX (t, t + τ) = E[ X(t)·X(t + τ) ]
   = E[ A cos(ωt + θ) · A cos(ω(t + τ) + θ) ]
   = (A²/2) E{ cos[ωτ] + cos[2ωt + ωτ + 2θ] }        (∵ cos(−ωτ) = cos ωτ)
   = (A²/2) cos ωτ + (A²/2) E[ cos(2ωt + ωτ + 2θ) ]                       (3)
     (∵ E[constant] = constant)
Now, E[ cos(2ωt + ωτ + 2θ) ] = ∫_{0}^{2π} cos(2ωt + ωτ + 2θ) (1/2π) dθ
   = (1/2π) [ sin(2ωt + ωτ + 2θ)/2 ]_{0}^{2π}
   = (1/4π) [ sin(2ωt + ωτ + 4π) − sin(2ωt + ωτ) ]
   = (1/4π) [ sin(2ωt + ωτ) − sin(2ωt + ωτ) ]        (∵ sin(4π + A) = sin A)
   = 0
∴ (3) ⇒ RXX (t, t + τ) = (A²/2) cos ωτ = a function of τ.
∴ X(t) is WSS.
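These two properties can also be checked empirically by ensemble averaging over many realisations of θ; the sketch below estimates E[X(t)] and R(t, t + τ) for the process of this example. The values of A, ω, τ and the grid of t values are only assumed illustration choices.

```python
import numpy as np

rng = np.random.default_rng(3)
A, w, tau, paths = 2.0, 5.0, 0.4, 200_000

theta = rng.uniform(0.0, 2.0 * np.pi, paths)
for t in (0.0, 1.0, 2.5):                       # different absolute times
    x_t   = A * np.cos(w * t + theta)
    x_tau = A * np.cos(w * (t + tau) + theta)
    # mean ~ 0 and R ~ (A^2/2) cos(w*tau), independent of t
    print(t, x_t.mean(), (x_t * x_tau).mean())

print("theory R:", A**2 / 2 * np.cos(w * tau))
```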
Example 3.5 Show that the random process X(t) = sin(ωt + θ), where θ is a
uniformly distributed random variable in (0, 2π), is W.S.S.
Solution : To show X(t) is WSS, we need E[X(t)] = a constant and the
autocorrelation E[X(t1 )·X(t2 )] = a function of the time difference f (t2 − t1 ) (or) f (t1 − t2 ).
Now,
(1) ⇒ E[X(t)] = ∫_{0}^{2π} sin(ωt + θ) (1/2π) dθ
             = (1/2π) [ −cos(ωt + θ) ]_{0}^{2π}
             = −(1/2π) { cos(ωt + 2π) − cos ωt }
             = −(1/2π) { cos ωt − cos ωt }        (∵ cos(ωt + 2π) = cos ωt)
             = 0 = a constant
(2) ⇒ RXX (t, t + τ) = E[ sin(ωt + θ) sin(ω(t + τ) + θ) ]
   = (1/2) E{ cos ωτ − cos(2ωt + ωτ + 2θ) }
   = (1/2) cos ωτ − (1/2) E[ cos(2ωt + ωτ + 2θ) ]                         (3)
and, exactly as in the previous example,
E[ cos(2ωt + ωτ + 2θ) ] = (1/4π) [ sin(2ωt + ωτ + 4π) − sin(2ωt + ωτ) ] = 0    (∵ sin(4π + A) = sin A)
∴ (3) ⇒ RXX (t, t + τ) = (1/2) cos ωτ = a function of τ.
∴ X(t) is WSS.
Example 3.6 If X(t) is a WSS process with autocorrelation R(τ) = A e^{−α|τ|} , find
the second moment of the random variable X(8) − X(5).
Solution :
E[ {X(8) − X(5)}² ] = E[ X²(8) − 2X(8)·X(5) + X²(5) ]
                    = R(0) − 2R(3) + R(0)
                    = 2A − 2A e^{−3α}
Example 3.7 Given a random variable Y with characteristic function φ(ω) and a
random process X(t) = cos(λt + Y), show that {X(t)} is stationary in the wide
sense if φ(1) = φ(2) = 0.
Solution : φ(ω) = E[e^{iωY}] = E[cos ωY] + i E[sin ωY].
Given φ(1) = 0
⇒ E[cos Y] + i E[sin Y] = 0
⇒ E[cos Y] = 0, E[sin Y] = 0                                               (1)
and also φ(2) = 0
⇒ E[cos 2Y] + i E[sin 2Y] = 0
⇒ E[cos 2Y] = 0, E[sin 2Y] = 0                                             (2)
To show X(t) is WSS, we have to show
(i) E[X(t)] = a constant
(ii) RXX (t, t + τ) = a function of τ (OR) RXX (t1 , t2 ) = a function of the time difference.
Now (i) :
E[X(t)] = E[cos(λt + Y)] = cos λt E[cos Y] − sin λt E[sin Y] = 0           (∵ by (1))
Now (ii) :
RXX (t1 , t2 ) = E[ cos(λt1 + Y) cos(λt2 + Y) ]
   = (1/2) E[ cos λ(t1 − t2 ) + cos(λ(t1 + t2 ) + 2Y) ]
   = (1/2) cos λ(t1 − t2 )
     + (1/2) [ cos λ(t1 + t2 ) E[cos 2Y] − sin λ(t1 + t2 ) E[sin 2Y] ]
   = (1/2) cos λ(t1 − t2 )        (∵ by (2), E[cos 2Y] = E[sin 2Y] = 0)
   = (1/2) cos λτ        (where τ = t1 − t2 or τ = t2 − t1 )
∴ X(t) is WSS.
Example 3.8 Let the random process X(t) = B cos(50t + φ), where B and φ are
independent random variables, B has mean 0 and variance 1, and φ is uniformly
distributed in the interval (−π, π). Find the mean and the autocorrelation of the
process.
Solution : Variance of B = 1 ⇒ E[B²] − {E(B)}² = 1 ⇒ E[B²] = 1   (∵ E(B) = 0)
Mean : E[X(t)] = E[B] E[cos(50t + φ)] = 0   (∵ B and φ are independent and E[B] = 0)
Autocorrelation :
RXX (t, t + τ) = E[ B² cos(50t + φ) cos(50(t + τ) + φ) ]
   = E[B²] (1/2) E[ cos(50τ) + cos(100t + 50τ + 2φ) ]
   = (1/2) cos(50τ) + 0        (∵ cos(−ωτ) = cos(ωτ) and E[cos(100t + 50τ + 2φ)] = 0)
   = (1/2) cos(50τ)
Note : In this problem, X(t) is WSS.
Example 3.9 Show that the process X(t) = A cos λt + B sin λt is WSS, where A
and B are random variables with (a) E(A) = E(B) = 0, (b) E(A²) = E(B²) and
(c) E(AB) = 0, and λ is a constant.
Solution : Given
(a) ⇒ E(A) = E(B) = 0
(b) ⇒ E(A²) = E(B²) = σ², say
(c) ⇒ E(AB) = 0
To show X(t) is WSS, we have to show
(i) E[X(t)] = a constant
(ii) RXX (t, t + τ) = a function of τ (OR) RXX (t1 , t2 ) = a function of the time difference.
Now (i) :
E[X(t)] = E(A) cos λt + E(B) sin λt = 0 = a constant        (∵ by (a))
Now (ii) :
RXX (t1 , t2 ) = E[X(t1 )·X(t2 )]
   = E[ (A cos λt1 + B sin λt1 )(A cos λt2 + B sin λt2 ) ]
   = E[ A² cos λt1 cos λt2 + AB cos λt1 sin λt2 + AB cos λt2 sin λt1 + B² sin λt1 sin λt2 ]
   = E[A²] cos λt1 cos λt2 + 0 + 0 + E[B²] sin λt1 sin λt2        (∵ by (c))
   = σ² (cos λt1 cos λt2 + sin λt1 sin λt2 )                      (∵ by (b))
   = σ² cos λ(t1 − t2 )        (∵ cos(A − B) = cos A cos B + sin A sin B)
   = σ² cos λτ = a function of the time difference
∴ X(t) is WSS.
Example 3.10 The process X(t) whose probability distribution under certain
conditions is given by
P[X(t) = n] = (at)^{n−1} / (1 + at)^{n+1} ,  n = 1, 2, · · ·
            = at / (1 + at) ,               n = 0.
Show that X(t) is not stationary (i.e., it is evolutionary).
Solution : Given the discrete random process X(t) with
P[X(t) = n] = (at)^{n−1} / (1 + at)^{n+1} ,  n = 1, 2, · · ·
            = at / (1 + at) ,               n = 0.                         (1)
To show X(t) is not stationary, we show that its variance depends on t.
First moment :
E[X(t)] = Σ_{n=0}^{∞} n Pn (t)
        = [n Pn (t)]_{n=0} + Σ_{n=1}^{∞} n Pn (t)                          (∵ by (1))
        = 0 · at/(1 + at) + Σ_{n=1}^{∞} n (at)^{n−1} / (1 + at)^{n+1}
        = 0 + 1 · 1/(1 + at)² + 2 · (at)/(1 + at)³ + 3 · (at)²/(1 + at)⁴ + · · ·
        = [1/(1 + at)²] [ 1 + 2(at/(1 + at)) + 3(at/(1 + at))² + · · · ]
        = [1/(1 + at)²] [ 1 − at/(1 + at) ]^{−2}        (∵ 1 + 2x + 3x² + · · · = (1 − x)^{−2})
        = [1/(1 + at)²] [ (1 + at − at)/(1 + at) ]^{−2}
        = [1/(1 + at)²] (1 + at)²
        = 1                                                                 (3)
        = a constant
Le
E[X²(t)] = Σ_{n=0}^{∞} n² Pn (t)
         = [n² Pn (t)]_{n=0} + Σ_{n=1}^{∞} n² Pn (t)                        (∵ by (1))
         = 0 + Σ_{n=1}^{∞} [n(n + 1) − n] Pn (t)
         = Σ_{n=1}^{∞} n(n + 1) Pn (t) − Σ_{n=1}^{∞} n Pn (t)
         = Σ_{n=1}^{∞} n(n + 1) (at)^{n−1} / (1 + at)^{n+1} − 1             (∵ by (3))
         = [ 1(2)/(1 + at)² + 2(3)(at)/(1 + at)³ + 3(4)(at)²/(1 + at)⁴ + · · · ] − 1
         = [2/(1 + at)²] [ 1 + 3(at/(1 + at)) + 6(at/(1 + at))² + · · · ] − 1
         = [2/(1 + at)²] [ 1 − at/(1 + at) ]^{−3} − 1       (∵ 1 + 3x + 6x² + · · · = (1 − x)^{−3})
         = [2/(1 + at)²] [ (1 + at − at)/(1 + at) ]^{−3} − 1
         = [2/(1 + at)²] (1 + at)³ − 1
         = 2(1 + at) − 1 = 1 + 2at
∴ Var[X(t)] = E[X²(t)] − {E[X(t)]}² = 1 + 2at − (1)² = 2at ,
which is a function of t.
∴ X(t) is not stationary (the process is evolutionary).
Example 3.11 A random process X(t) has four equally likely sample functions
X(t, s1 ) = cos t, X(t, s2 ) = −cos t, X(t, s3 ) = sin t and X(t, s4 ) = −sin t.
Show that X(t) is WSS.
Solution : Given four equally likely discrete sample functions, i.e.,
X(t, s1 ) = cos t   ⇒ P[X(t, s1 )] = 1/4
X(t, s2 ) = −cos t  ⇒ P[X(t, s2 )] = 1/4
X(t, s3 ) = sin t   ⇒ P[X(t, s3 )] = 1/4
X(t, s4 ) = −sin t  ⇒ P[X(t, s4 )] = 1/4                                    (1)
To show that X(t) is WSS, we have to show
(i) E[X(t)] = a constant
(ii) RXX (t, t + τ) = a function of τ
Now (i) :
E[X(t)] = Σ_{i=1}^{4} X(t, si ) P[X(t, si )]
        = (cos t)(1/4) + (−cos t)(1/4) + (sin t)(1/4) + (−sin t)(1/4)       (∵ by (1))
        = 0 = a constant                                                     (2)
Now (ii) :
RXX (t, t + τ) = E[X(t) · X(t + τ)]
   = Σ_{i=1}^{4} X(t, si ) X(t + τ, si ) P[X(t, si )]
   = (cos t)[cos(t + τ)](1/4) + (−cos t)[−cos(t + τ)](1/4)
     + (sin t)[sin(t + τ)](1/4) + (−sin t)[−sin(t + τ)](1/4)
   = (1/2) [ cos t cos(t + τ) + sin t sin(t + τ) ]
   = (1/2) cos(t − (t + τ))        (∵ cos(A − B) = cos A cos B + sin A sin B)
   = (1/2) cos(−τ) = (1/2) cos τ
   = a function of τ                                                         (3)
∴ From (2) and (3), X(t) is WSS.
Example 3.12 Two random processes X(t) and Y(t) are defined by
X(t) = A cos λt + B sin λt and Y(t) = B cos λt − A sin λt, where A and B are random
variables and λ is a constant. If A and B are uncorrelated with zero means and
the same variances, prove that X(t) and Y(t) are jointly WSS.
Solution : Given that A and B are uncorrelated with zero means, i.e.,
E(A) = E(B) = 0 and E(AB) = E(A)E(B) = 0                                    (1)
and that A and B have the same variances, i.e.,
E[A²] − [E(A)]² = E[B²] − [E(B)]²
⇒ E[A²] = E[B²] = σ², say        (∵ by (1))                                 (2)
To show X(t) and Y(t) are jointly WSS, we have to show
(i) X(t) is WSS, (ii) Y(t) is WSS, and (iii) RXY (t1 , t2 ) = a function of the time difference.
Now (i)(a) : E[X(t)] = E(A) cos λt + E(B) sin λt = 0 = a constant           (∵ by (1))
Now (i)(b) :
RXX (t1 , t2 ) = E[X(t1 )·X(t2 )]
   = E[ (A cos λt1 + B sin λt1 )(A cos λt2 + B sin λt2 ) ]
   = E[ A² cos λt1 cos λt2 + AB cos λt1 sin λt2 + AB cos λt2 sin λt1 + B² sin λt1 sin λt2 ]
   = E[A²] cos λt1 cos λt2 + 0 + 0 + E[B²] sin λt1 sin λt2                  (∵ by (1))
   = σ² (cos λt1 cos λt2 + sin λt1 sin λt2 )                                (∵ by (2))
   = σ² cos λ(t1 − t2 ) = σ² cos λτ = a function of the time difference
∴ X(t) is WSS. Similarly we can prove (ii)(a) and (ii)(b), so Y(t) is also WSS.
Now (iii) :
RXY (t1 , t2 ) = E[X(t1 )·Y(t2 )]
   = E[ (A cos λt1 + B sin λt1 )(B cos λt2 − A sin λt2 ) ]
   = E[ AB cos λt1 cos λt2 − A² cos λt1 sin λt2 + B² sin λt1 cos λt2 − AB sin λt1 sin λt2 ]
   = 0 − E[A²] cos λt1 sin λt2 + E[B²] sin λt1 cos λt2 − 0                  (∵ by (1))
   = −σ² (cos λt1 sin λt2 − sin λt1 cos λt2 )                               (∵ by (2))
   = −σ² sin λ(t2 − t1 )        (∵ sin(A − B) = sin A cos B − cos A sin B)
   = −σ² sin λτ = a function of the time difference
∴ X(t) and Y(t) are jointly WSS.
Example 3.13 Let the random process X(t) = A cos ωt + B sin ωt, where A and B are
independent normally distributed random variables N(0, σ²). Show that X(t) is WSS.
Solution : Given A, B ∼ N(0, σ²)
⇒ E[A] = E[B] = 0                                                           (1)
⇒ E[A²] = E[B²] = σ², and E[AB] = E[A] E[B] = 0 (by independence)           (2)
To show X(t) is WSS : proceeding exactly as in Example 3.9 with (1) and (2),
E[X(t)] = 0 = a constant and RXX (t, t + τ) = σ² cos ωτ = a function of τ.
∴ X(t) is WSS.
Example 3.14 Show that the random process X(t) = cos(t + φ), where φ is a
random variable uniformly distributed in (0, 2π), is
(a) first order stationary, (b) wide-sense stationary, and
(c) ergodic based on first order and second order arguments.
Solution : f (φ) = 1/2π, 0 < φ < 2π.
(a) E[X(t)] = ∫_{0}^{2π} cos(t + φ) (1/2π) dφ
           = (1/2π) [ sin(t + φ) ]_{0}^{2π}
           = (1/2π) [ sin(t + 2π) − sin t ]
           = (1/2π) [ sin t − sin t ]
           = 0 = a constant                                                  (1)
so the first order statistics do not depend on t : X(t) is first order stationary.
(b) RXX (t, t + τ) = E[ cos(t + φ) cos(t + τ + φ) ]
   = (1/2) E[ cos(−τ) + cos(2t + τ + 2φ) ]
   = (1/2) cos τ + (1/2)(1/2π) ∫_{0}^{2π} cos(2t + τ + 2φ) dφ
   = (1/2) cos τ + (1/8π) [ sin(2t + τ + 4π) − sin(2t + τ) ]
   = (1/2) cos τ + (1/8π) [ sin(2t + τ) − sin(2t + τ) ]
   = (1/2) cos τ = a function of τ                                           (2)
With (1) and (2), X(t) is WSS.
(c) I order : the time average is
lim_{T→∞} X̄T = lim_{T→∞} (1/2T) ∫_{−T}^{T} cos(t + φ) dt     (∵ by the definition of the time average)
            = lim_{T→∞} (1/2T) [ sin(t + φ) ]_{−T}^{T}
            = lim_{T→∞} (1/2T) [ sin(T + φ) − sin(−T + φ) ]
            = lim_{T→∞} (1/2T) [ sin(T + φ) + sin(T − φ) ]
            = lim_{T→∞} (1/2T) [ 2 sin T cos φ ]
            = 0 ,
which equals E[X(t)] = 0, so X(t) is ergodic based on first order averages.
(c) II order :
To show X(t) is ergodic based on second order averages, we have to show that the
time autocorrelation equals the ensemble autocorrelation
E[X(t) X(t + τ)] = (1/2) cos τ        (∵ by (2)).
Example 3.15 Let X(t) = A sin t + B cos t, where A and B are random variables
with zero means. Check whether X(t) is mean ergodic.
Solution : To check ergodicity (in the mean), verify whether
E[X(t)] = lim_{T→∞} X̄T                                                     (1)
Now, LHS of (1) :
E[X(t)] = E[A sin t + B cos t]
        = E[A] sin t + E[B] cos t
        = 0 + 0
        = 0 = µ = Mean of X(t)                                              (2)
Now, RHS of (1) :
lim_{T→∞} X̄T = lim_{T→∞} (1/2T) ∫_{−T}^{T} (A sin t + B cos t) dt     (∵ by the definition of the time average)
            = lim_{T→∞} (1/2T) [ −A cos t + B sin t ]_{−T}^{T}
            = lim_{T→∞} (1/2T) [ −A cos T + A cos T + B sin T + B sin T ]
            = lim_{T→∞} (1/2T) [ 2B sin T ]
            = lim_{T→∞} (B sin T)/T
            = 0        (∵ the numerator is finite and the denominator → ∞)
∴ From (1), (2) and the above, X(t) is mean ergodic.
Example 3.16 A particle performs a random walk with absorbing barriers 0 and
4. From any interior state r it moves to r + 1 with probability p or to r − 1 with
probability q (p + q = 1). Find the transition probability matrix and the initial
probability distribution.
Solution :
                        Future State Xn+1
                         0   1   2   3   4
                     0   1   0   0   0   0
                     1   q   0   p   0   0
Present State Xn     2   0   q   0   p   0
                     3   0   0   q   0   p
                     4   0   0   0   0   1
Initial probability distribution :
P(0) = ( 1/5  1/5  1/5  1/5  1/5 )
(∵ the initial position of the particle is not given, so the states are taken as equally likely)
Important Note : Each row sum of P and of P(0) is equal to 1.
Example 3.17 The t.p.m. of a Markov chain {Xn } having the state space S = {0, 1, 2, 3}
is
      0    1    0    0
     0.3   0   0.7   0
P =
      0   0.3   0   0.7
      0    0    0    1
Verify the Chapman - Kolmogorov theorem.
Solution :
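Since no worked solution is printed here, the verification can at least be sketched numerically: compute P² and P³ directly and confirm that P^(m+n) = P^(m) P^(n), which is what the Chapman-Kolmogorov theorem asserts for this chain.

```python
import numpy as np

P = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.3, 0.0, 0.7, 0.0],
              [0.0, 0.3, 0.0, 0.7],
              [0.0, 0.0, 0.0, 1.0]])

P2 = P @ P
P3 = P2 @ P
# Chapman-Kolmogorov: P^(3) = P^(1) P^(2) = P^(2) P^(1) = P^3
print(np.allclose(P3, P @ P2), np.allclose(P3, np.linalg.matrix_power(P, 3)))
print(P2)
```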
Example 3.18 A gambler has Rs. 2. He bets Re. 1 at a time and wins Re. 1 with
probability 1/2. He stops playing if he loses his money or reaches Rs. 6 (i.e., wins Rs. 4).
(a) Write the t.p.m. of the related Markov chain.
(b) Find the probability that he has lost his money at the end of 5 plays.
(c) Find the probability that the game lasts more than 7 plays.
Solution : Let Xn represent the amount with the player at the end of the nth
round of the play. The game is over if the player loses all his money, i.e.,
(Xn = 0), or wins Rs. 4, i.e., (Xn = 6).
The state space is Xn = {0, 1, 2, 3, 4, 5, 6}.
If he wins a round, the probability p = 1/2; if he loses a round, q = 1 − p = 1/2.
(a) The t.p.m. (with absorbing states 0 and 6) is
       0    1    2    3    4    5    6
  0    1    0    0    0    0    0    0
  1   1/2   0   1/2   0    0    0    0
  2    0   1/2   0   1/2   0    0    0
  3    0    0   1/2   0   1/2   0    0
  4    0    0    0   1/2   0   1/2   0
  5    0    0    0    0   1/2   0   1/2
  6    0    0    0    0    0    0    1
(b) Probability that the player has lost his money at the end of five plays :
Since the player initially has Rs. 2, the initial state probability
distribution of {Xn } is
P(0) = ( 0  0  1  0  0  0  0 )                                              (2)
The state distributions after successive plays are P(n) = P(n−1) · P, where the
entries are the probabilities of holding Rs. 0, 1, 2, 3, 4, 5, 6 respectively:
P(1) = ( 0    1/2    0    1/2    0     0     0  )      (after I play)
P(2) = ( 1/4   0    1/2    0    1/4    0     0  )      (after II)
P(3) = ( 1/4  1/4    0    3/8    0    1/8    0  )      (after III)
P(4) = ( 3/8   0    5/16   0    1/4    0    1/16 )     (after IV)
P(5) = ( 3/8  5/32   0    9/32   0    1/8   1/16 )     (after V)
∴ P[the man has lost his money at the end of the 5th play]
  = the probability that the gambler has zero rupees after the V play
  = the entry corresponding to state 0 in P(5)
  = 3/8
(c) The probability that the game lasts more than 7 plays :
P(6) = ( 29/64   0    7/32    0    13/64   0    1/8 )      (after VI)
P(7) = ( 29/64  7/64   0    27/128   0   13/128  1/8 )     (after VII)
Now, P[the game lasts more than 7 plays]
  = P[the system is neither in state 0 nor in state 6 at the end of the seventh play]
  = P[X7 = 1, 2, 3, 4 or 5]
  = 7/64 + 0 + 27/128 + 0 + 13/128
  = 27/64
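The state distributions P(n) = P(0) · P^n can also be generated mechanically, which is a convenient check on the hand computation above; the sketch below reproduces the 3/8 and 27/64 answers.

```python
import numpy as np

p = q = 0.5
P = np.zeros((7, 7))
P[0, 0] = P[6, 6] = 1.0                 # absorbing states: Rs. 0 and Rs. 6
for i in range(1, 6):
    P[i, i - 1], P[i, i + 1] = q, p

dist = np.zeros(7); dist[2] = 1.0       # the gambler starts with Rs. 2
for n in range(1, 8):
    dist = dist @ P
    if n == 5:
        print("P[lost money by play 5] =", dist[0])      # 3/8
print("P[game lasts > 7 plays]  =", dist[1:6].sum())     # 27/64
```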
Example 3.19 Find the nature of the states of the Markov chain with the t.p.m.
      0    1    0
P =   0    0    1
     1/2  1/2   0
Solution : Computing successive powers of P :
       0    0    1            1/2  1/2   0            0   1/2  1/2
P² =  1/2  1/2   0  ,  P³ =    0   1/2  1/2  ,  P⁴ =  1/4  1/4  1/2  ,
       0   1/2  1/2           1/4  1/4  1/2           1/4  1/2  1/4
and so on · · ·
Classification of states :
For each state Si , P_{Si Si}^{(n)} > 0 for some n :
P_{S1 S1}^{(3)} > 0, P_{S1 S1}^{(5)} > 0, · · · ⇒ Period of S1 = d1 = GCD{3, 5, · · · } = 1
P_{S2 S2}^{(2)} > 0, P_{S2 S2}^{(3)} > 0, P_{S2 S2}^{(4)} > 0, · · · ⇒ Period of S2 = d2 = GCD{2, 3, 4, · · · } = 1
P_{S3 S3}^{(2)} > 0, P_{S3 S3}^{(3)} > 0, P_{S3 S3}^{(4)} > 0, · · · ⇒ Period of S3 = d3 = GCD{2, 3, 4, · · · } = 1
so all states are aperiodic.
Non-null persistent states :
µii = Σ_{n=1}^{∞} n fii^{(n)}  finite ⇒ the state i is non-null persistent;
µii → ∞ ⇒ the state i is null persistent,
where µii is the mean recurrence time of state i.
Here all states are non-null persistent.
∴ Finally, the Markov chain is finite, and all states are aperiodic, irreducible
and non-null persistent. So the states are ergodic, i.e., the Markov chain is
ergodic.
Example 3.20 * Find the nature of the states of the Markov chain with the t.p.m.
ine
0 1 0
P = 1/2 0 1/2
ng
0 1 0
E
Solution :
arn
Example 3.21 Consider a 3-state Markov chain with t.p.m.
      0   2/3  1/3
P =  1/2   0   1/2
     1/2  1/2   0
Find the long run (stationary) probabilities of the states.
Solution : Let π = [π1 π2 π3 ] be the stationary distribution, so that
πP = π                                                                      (2)
and π1 + π2 + π3 = 1                                                        (3)
Now,
(2) ⇒ [π1 π2 π3 ] P = [π1 π2 π3 ]
i.e., π2 /2 + π3 /2 = π1                                                    (4)
      2π1 /3 + π3 /2 = π2                                                   (5)
      π1 /3 + π2 /2 = π3                                                    (6)
Solving (4), (5), (6) using (3), we get
π1 = 9/27 = Long run probability for the I state
π2 = 10/27 = Long run probability for the II state
π3 = 8/27 = Long run probability for the III state
∴ π = [π1 π2 π3 ] = [ 9/27  10/27  8/27 ]
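Solving πP = π together with Σπi = 1 is a small linear system; a numpy sketch reproduces the fractions above.

```python
import numpy as np
from fractions import Fraction

P = np.array([[0.0, 2/3, 1/3],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)                                                     # [0.333 0.370 0.296]
print([Fraction(float(x)).limit_denominator(27) for x in pi]) # 9/27, 10/27, 8/27
```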
Example 3.22 A man either drives a car or catches a train to go to office each day.
He never goes two days in a row by train; but if he drives one day, then the next day
he is just as likely to drive again as he is to travel by train. Suppose that on the
first day of the week the man tossed a fair die and drove to work iff a six appeared.
Find the probability that he takes the train on the third day, and the probability
that he drives to work in the long run.
Solution :
(a) The initial probability distribution is based on the outcome of the fair-die toss:
Since he drove on the first day iff a six appeared,
P(1) = ( 5/6   1/6 )   (T = train, C = car)
The one-step t.p.m. (T, C) is
P = (  0    1  )
    ( 1/2  1/2 )
(he never goes by train two days in a row; after driving he is equally likely to
drive or take the train).
P(2) = P(1) · P = ( 1/12   11/12 )
P(3) = P(2) · P = ( 11/24  13/24 )
∴ P[he travels by train on the third day] = 11/24.
(b) Long run probabilities : let π = [π1 π2 ] with
πP = π                                                                      (2)
and π1 + π2 = 1                                                             (3)
(2) ⇒ 0 · π1 + π2 /2 = π1                                                   (4)
      π1 + π2 /2 = π2                                                       (5)
Solving (4), (5) using (3), we get
π1 = 1/3 = long run probability of travelling by train
π2 = 2/3 = long run probability of travelling by car
∴ P[the man travels by car in the long run] = 2/3.
Example 3.23 Find the steady state probabilities of the two-state Markov chain with
t.p.m.
P = ( 2/3  1/3 )
    ( 1/6  5/6 )
Solution :
Example 3.24 The t.p.m. of a Markov chain {Xn , n = 0,1,2,3,. . . } with 3 states
g.i
3/4 1/4 0
0,1,2 is P = 1/4 1/2 1/4 and the initial distribution is P{X0 = i} = 31 , i =
n
eri
0 3/4 1/4
0, 1, 2. Find
ine
(a) P[x2 = 2/x1 = 0]
(b) P[x3 = 1, x2 = 2, x1 = 1, x0 = 2]
ng
(c) P[x2 = 1]
E
0 1 2
Le
0 3/4 1/4 0
Solution : Given TPM of 3 state M.C. is P(1) = (1)
1 1/4 1/2 1/4
2 0 3/4 1/4
w.
1
Initial probability distribution P{X0 = i} = , i = 0, 1, 2.
3
ww
i.e.,
1
P{X0 = 0} = (2)
3
1
P{X0 = 1} = (3)
3
1
P{X0 = 2} = (4)
3
i.e., P = 13
(0) 1
3
1
3 (5)
(1) 1 n h i o
(a) P [X2 = 1/X1 = 0] = P01 = ∴ P Xn+1 = ai /Xn = a j = Pi j
4
(b) P[x3 = 1, x2 = 2, x1 = 1, x0 = 2]:
[Link]
[Link]
218 UNIT III - Classification of Random Processes
P(A ∩ B) P(AB)
Use P[A/B] = =
P(B) P(B)
⇒ P(AB) = P[A/B] · P(B)
Similarly, P(ABC) = P[A/BC] · P[B/C] · P(C)
P(ABCD) = P[A/BCD] · P[B/CD] · P[C/D] · P(D)
∴ P[x3 = 1,x2 = 2, x1 = 1, x0 = 2]
= P[x3 = 1/x2 = 2, x1 = 1, x0 = 2] · P[x2 = 2/x1 = 1, x0 = 2]
· P[x1 = 1/x0 = 2] · P[x0 = 2]
n
= P[x3 = 1/x2 = 2] · P[x2 = 2/x1 = 1] · P[x1 = 1/x0 = 2] · P[x0 = 2]
g.i
(1) (1) (1) (0)
= P21 · P12 · P21 · P2
3 1 3 1
n
= · · · (Refer (1) and (4))
4 4 4 3
=
3
64 eri
ine
(c) P[X2 = 1] = Σ_{i=0}^{2} P[X2 = 1 / X0 = i] · P[X0 = i]                  (6)
The two-step t.p.m. is
               3/4  1/4   0     3/4  1/4   0     10/16  5/16  1/16
P(2) = P² =    1/4  1/2  1/4  × 1/4  1/2  1/4  =  5/16  8/16  3/16
                0   3/4  1/4     0   3/4  1/4     3/16  9/16  4/16
∴ (6) ⇒ P[X2 = 1] = (5/16)(1/3) + (8/16)(1/3) + (9/16)(1/3)
                  = (5 + 8 + 9)/48 = 22/48
                  = 11/24
(d) The state distribution at the 2nd step :
P(2) = P(0) · P² = ( 1/3  1/3  1/3 ) P² = ( 3/8  11/24  1/6 )
(e) Stationary distribution of the process :
W.K.T. πP = π                                                               (7)
and π1 + π2 + π3 = 1 ,                                                      (8)
where π = [π1 π2 π3 ] is the stationary distribution of the 3-state Markov chain.
Now,
                   3/4  1/4   0
(7) ⇒ [π1 π2 π3 ]  1/4  1/2  1/4  = [π1 π2 π3 ]
                    0   3/4  1/4
i.e., (3/4)π1 + (1/4)π2 + 0 · π3 = π1                                       (9)
      (1/4)π1 + (1/2)π2 + (3/4)π3 = π2                                      (10)
      0 · π1 + (1/4)π2 + (1/4)π3 = π3                                       (11)
Solving (9), (10), (11) using (8), we get
π = [π1 π2 π3 ] = [ 3/7  3/7  1/7 ] , which is the required stationary distribution.
P(0) = ( .7 .2 .1 ). Find
Solution :
Recall the following results, used in the Poisson-process examples below.
(1) Poisson distribution (Unit I) : P[X = x] = e^{−λ} λ^x / x! ,  x = 0, 1, 2, · · · ,
    where λ is the parameter.
(2) Poisson process (this unit) : P[X(t) = n] = e^{−(λt)} (λt)^n / n! ,  n = 0, 1, 2, · · · ,
    where λt is the parameter.
(3) Inter-arrival time T of a Poisson process : T is exponential with density
    f (t) = λ e^{−λt} , so that P[a ≤ T ≤ b] = ∫_{a}^{b} f (t) dt.
(4) Thinned Poisson process (each event recorded with probability p) :
    P[N(t) = n] = e^{−(λp)t} [(λp)t]^n / n! ,  n = 0, 1, 2, · · ·
Example 3.26 Customers arrive at a bank counter in accordance with a Poisson
process with a mean rate of 3 per minute. Find the probability that during a time
interval of 2 minutes (a) exactly 4 customers arrive, (b) more than 2 customers arrive.
Also find the probability that the interval between two consecutive arrivals is
(c) more than 1 minute, (d) between 1 and 2 minutes, (e) 4 minutes or less.
Solution : λ = mean arrival rate = 3 per minute, t = 2 minutes,
X(t) ∼ Poisson process, so P[X(t) = n] = e^{−(λt)} (λt)^n / n! ,  n = 0, 1, 2, · · ·   (by (2) above)
(a) Probability of exactly 4 customers in 2 minutes, n = 4 :
P[X(2) = 4] = e^{−(3×2)} (3 × 2)⁴ / 4! = 0.1339
(b) Probability of more than 2 customers :
P[X(2) > 2] = 1 − P[X(2) ≤ 2] = 1 − P[X(2) = 0, 1, 2] = 0.9881
(c) The interval T between 2 consecutive arrivals is exponential with density
f (t) = λ e^{−λt} , so P[a ≤ T ≤ b] = ∫_{a}^{b} f (t) dt and
P[T > 1] = ∫_{1}^{∞} λ e^{−λt} dt = e^{−λ} .
(d) Probability that the interval is between 1 and 2 minutes :
P[1 ≤ T ≤ 2] = ∫_{1}^{2} 2 e^{−2t} dt = 2 [ e^{−2t} /(−2) ]_{1}^{2} = e^{−2} − e^{−4} = 0.119
(e) Probability that the interval is 4 minutes or less :
P[0 ≤ T ≤ 4] = ∫_{0}^{4} 2 e^{−2t} dt = 2 [ e^{−2t} /(−2) ]_{0}^{4} = 1 − e^{−8} = 0.9997
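The Poisson-count and exponential-interarrival probabilities used above can be evaluated with scipy (an assumed dependency); the sketch below applies the formulas directly, so its outputs follow the formulas rather than any particular rounding used in the text.

```python
from scipy.stats import poisson, expon

lam, t = 3.0, 2.0                          # arrivals per minute, window length
print(poisson.pmf(4, lam * t))             # (a) exactly 4 arrivals in 2 min, ~0.1339
print(1 - poisson.cdf(2, lam * t))         # (b) more than 2 arrivals in 2 min
print(expon.sf(1.0, scale=1 / lam))        # (c) interarrival time T > 1 min, = e^(-lam)
```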
Example 3.27 A machine generates defective items at the rate of 2 per minute.
Find the chance that in an interval of 5 minutes (a) exactly 10 defective items are
generated, (b) at most 4 defective items, (c) at least 2 defective items.
Solution :
Example 3.28 A radioactive source emits particles at the rate of 6 per minute in
accordance with a Poisson process. Each particle emitted has a probability 0.7 of
being counted. Find the probability that 11 particles are counted in 4 minutes.
Solution : The counted particles form a (thinned) Poisson process with rate λp, so
P[N(t) = n] = e^{−(λp)t} [(λp)t]^n / n! ,  n = 0, 1, 2, · · ·      (by (4) above)
∴ The chance that 11 particles are counted in 4 minutes is
P[N(4) = 11] = e^{−(6×0.7)4} [(6 × 0.7)4]^{11} / 11! = 0.038
Example 3.29 A radio active source emits particles at the rate of 5 per minutes
ng
in accordance with Poisson fashion. Each particle emitted has a probability 0.6 of
being recorded. Find probability that
E
arn
Example 3.30 If {X(t)} is a Gaussian process with µ(t) = 10 and
C(t1 , t2 ) = 16 e^{−|t1 − t2 |} , find the probability that (a) X(10) ≤ 8 and
(b) |X(10) − X(6)| ≤ 4.
Solution :
(a) X(10) is normal with mean 10 and variance C(10, 10) = 16, so
P[X(10) ≤ 8] = P[ z ≤ (8 − 10)/4 ] = P[z ≤ −0.5]
             = 0.5 − P[0 ≤ z ≤ 0.5]
             = 0.3085
(b) P[|X(10) − X(6)| ≤ 4] :
Let U = X(10) − X(6), which is also a (normal) random variable.
E[U] = E[X(10)] − E[X(6)] = 10 − 10 = 0
Var[U] = C(10, 10) + C(6, 6) − 2 C(10, 6)
       = 16 e^{−|10−10|} + 16 e^{−|6−6|} − 2 · 16 e^{−|10−6|}
       = 16 + 16 − 32 e^{−4} = 32(1 − e^{−4}) = 32(0.9817)
⇒ σ²[U] = 31.4139 , σ[U] = √31.4139 = 5.6048
∴ P[|U| ≤ 4] = P[ |z| ≤ 4/5.6048 ]
            = 2 × P[0 ≤ z ≤ 0.7137]
            = 2 × 0.2611
            = 0.5222
Example 3.31 If X(t) is a Gaussian process with µ(t) = 3 and
C(t1 , t2 ) = 4 e^{−0.2|t1 − t2 |} , find the probability that
Example 3.32 Verify whether the sine wave process X(t) = Y cos ωt, where Y is a
uniformly distributed R.V. in (0, 1), is SSS or not.
Solution : For X(t) to be stationary, its mean must not be a function of t.
E[X(t)] = ∫_{−∞}^{∞} X(t) f (y) dy
        = ∫_{0}^{1} y cos ωt · 1 · dy
        = cos ωt ∫_{0}^{1} y dy
        = cos ωt [ y²/2 ]_{0}^{1}
        = (1/2) cos ωt
        = a function of t
∴ X(t) is not stationary, and hence not SSS.
Example 3.33 For a R.V. X(t) = Y sin ωt, Y is an uniformly distributed r.v. in
(−1, 1). Check whether the process is WSS or not.
Solution :
(1) Show that the random process X(t) = cos(t + φ) where φ is a random
variable uniformly distributed in (0, 2π) is (a) first order stationary (b)
stationary in the wide sense (c) ergodic (based on first order or
second order averages) [May/June 2006, ECE]
Solution:
(a) E[X(t)] = 0 ⇒ X(t) is first order stationary.
(b) RXX (t, t + τ) = (1/2) cos τ ⇒ X(t) is WSS.
(c) Since lim_{T→∞} X̄T = E[X(t)] = 0, X(t) is ergodic based on first order averages;
since the time autocorrelation also equals (1/2) cos τ, X(t) is ergodic based on
second order averages.
(2) Define Poisson process and obtain the probability distribution for that.
Also find the autocorrelation function of the process.
Solution :
Definition : If X(t) represents the number of occurrences of a certain
n
event in (0, t), then the discrete random process X(t) is called Poisson
g.i
process, provided that the following postulates are satisfied
(a) P[1 occurrence in (t, t + ∆t)] = λ.∆t + O(∆t)
n
(b) P[0 occurrence in (t, t + ∆t)] = 1 − λ.∆t + O(∆t)
eri
(c) P[2 or more occurrences in (t, t + ∆t)] = O(∆t)
(d) X(t) is independent of the number of occurrences of the event in
ine
any interval prior and after the interval (0, t).
(e) The probability that the event occurs a specified number of times
in (t0 , t0 + t) depends only on t, but not on t0 .
ng
Probability distribution:
Let X(t) be the Poisson process. Let ‘λ’ be the occurrences of the event
E
per unit time and let ‘t’ be the time. Let ∆t be the increment in time.
arn
Assume a solution of the form Pn (t) = (λt)^n f (t) / n! , where Pn (t) = P[X(t) = n].
Then
P'n (t) = (λ^n /n!) [ n t^{n−1} f (t) + t^n f '(t) ]                        (3)
Substituting in the governing equation P'n (t) = λ [ Pn−1 (t) − Pn (t) ] gives
(λ^n /n!) [ n t^{n−1} f (t) + t^n f '(t) ] = λ [ (λt)^{n−1} f (t)/(n − 1)! − (λt)^n f (t)/n! ]
⇒ (λ^n t^{n−1} /(n − 1)!) f (t) + (λ^n t^n /n!) f '(t) = (λ^n t^{n−1} /(n − 1)!) f (t) − λ (λt)^n f (t)/n!
⇒ f '(t) = −λ f (t)
⇒ f '(t) / f (t) = −λ
Integrating w.r.t. t, we get
∫ f '(t)/ f (t) dt = − ∫ λ dt
log f (t) + log c = −λt
⇒ log[ c f (t) ] = −λt
⇒ c f (t) = e^{−λt}
⇒ f (t) = (1/c) e^{−λt}
∴ f (t) = k e^{−λt}
Since P0 (0) = 1, we get f (0) = 1, so k = 1 and
Pn (t) = P[X(t) = n] = e^{−λt} (λt)^n / n! ,  n = 0, 1, 2, · · ·
Autocorrelation of the Poisson process :
RXX (t1 , t2 ) = λ² t1 t2 + λ t1 ,  if t2 ≥ t1 ,
i.e., RXX (t1 , t2 ) = λ² t1 t2 + λ min(t1 , t2 ).
(3) Show that when events occur as a Poisson process, the time interval
eri
between successive events follow exponential distribution.[N/D 2006,
ECE]
ine
Solution : Consider two consecutive occurrences of the event Ei and
Ei+1 .
Let Ei take place at the instant ti and let T be the interval between the occurrences
of Ei and Ei+1 . Then
P[T > t] = P[no event occurs in (ti , ti + t)]
         = P[X(t) = 0]
         = e^{−λt}
∴ F(t) = P[T ≤ t] = 1 − e^{−λt} and f (t) = F'(t) = λ e^{−λt} ,
so the interval between successive events follows an exponential distribution.
Le
n
0.3 0.4 0.3
g.i
(0.7, 0.2, 0.1).
Find (1) P{X2 = 3} and (2) P{X3 = 2, X2 = 3, X1 = 3, X0 = 2}.
n
Solution : {(1) = 0.279 ,(2) = 0.0048}
eri
(7) If {X(t)} and {Y(t)} are two independent Poisson processes, show that
the conditional distribution {X(t)} and {X(t) + Y(t)} is
ine
binomial.[May/June 2007, ECE]
Solution : Let {X(t)} and {Y(t)} be two independent Poisson
processes with λ1 t and λ2 t respectively.
ng
P [X(t)+Y(t) = n]
λ1 λ2
arn
where p = ;q = ;p + q = 1
λ1 + λ2 λ1 + λ2
⇒ Binomial.
Le
Solution : {E[X(t)] = 0, RXX (τ) = (A²/2) cos(ω0 τ) ⇒ WSS}
ww
(9) If X(t) = Y cos ωt+Z sin ωt, where Y and Z are two independent normal
random variables with E(Y) = E(Z) = 0, E(Y2 ) = E(Z2 ) = σ2 and ω is a
constant, prove that {X(t)} is strict sense stationary process of order 2.
Solution : {R(t, t + τ) = k cos λτ, where k = E(Y2 ) = E(Z2 )}
(10) Define Poisson process and derive the probability law for the Poisson
process {X(t)}.
The process {X(t)} whose probability distribution under certain
conditions is given by
P{X(t) = n} = (at)^{n−1} / (1 + at)^{n+1} ,  n = 1, 2, ...
            = at / (1 + at) ,               n = 0.
Show that it is evolutionary (not stationary).
eri
(14) Three boys A, B and C are throwing a ball to each other. A always
throws the ball to B and B always throws the ball to C, but C is just as
ine
likely to throw the ball to B as to A. Show that the process is Markovian.
Find the transition matrix and classify the states. [A/M 2008, ECE]
A B C
ng
A 0 1 0 irreducible
Solution : Refer notes.
P = B 0 0 1 , aperiodic
E
C 1/2 1/2 0 ergodic
arn
Definition :
A real valued random process X(t) is called a Gaussian process or
normal process if the random variables X(t1 ), X(t2 ), · · · , X(tn ) are
jointly normal for every n and for every set of instants t1 , t2 , · · · , tn .
Properties :
(1) If a Gaussian process is WSS it is also strict-sense stationary.
(2) If the member functions of a Gaussian process are
uncorrelated, then they are independent.
(3) If the input {X(t)} to a linear system is a Gaussian process, the
output will also be a Gaussian process.
(4) Random phenomena in communication systems are well
approximated by a Gaussian process. For example, the voltage
across a resistor at the output of an amplifier can be random due
to a random current that is the result of many contributions from
other random currents at various places within the amplifier.
Random thermal agitation of electrons causes the randomness of
the various currents. This type of noise is called Gaussian because
the random variable representing the noise voltage has the
Gaussian density.
Uses : One of the important uses of Gaussian process is to
analyze the effects of thermal noise in electronic circuits used in
ine
communication systems.
(16) Distinguish between strict sense stationary and weakly (wide sense) stationary
stochastic processes.
Solution : Refer the definitions of the S.S.S. and W.S.S. processes above.
arn
(17) State the postulates of a Poisson process and derive its probability
law.[M/J 2009, ECE]
Solution : Refer Definition and probability distribution of 2nd
question
n
g.i
(18) Classify the random process and explain with an
n
example(=12question)
eri
(19) Given a random variable Y with characteristic function ϕ(ω) and a
random process X(t) = cos(λt + Y). Show that {X(t)} is stationary in the
wide sense if ϕ(1) = ϕ(2) = 0.
ine
Solution : {E[X(t)] = 0, RXX (t, t + τ) = (1/2) cos λτ}
(20) A machine goes out of order whenever a component fails. The failure
ng
of this part follows a Poisson process with a mean rate of 1 per week.
Find the probability that 2 weeks have elapsed since last failure. If
E
the next supply is not due in 10 weeks, find the probability that the
machine will not be out of order in the next 10 weeks. [N/D 2009, ECE]
Solution : Here the unit time is 1 week.
Le
P [X (t) = x] =
x
e−(1)(2) [(1)(2)]0
⇒ P [X (2) = 0] = = e−2 = 0.135
0
(b) There are only 5 spare parts and the machine should not go out of
order in the next 10 weeks.
(21) x(t) = A sin(ωt+θ), where A and ω are constants and θ is R.V. uniformly
distributed over (−π, π). Find the auto correlation of {y(t)}, where
y(t) = x2 (t). ( )
A4
Solution : R (t1 , t2 ) = [2 + cos2ω (t1 − t2 )]
8
(22) For a random process x(t) = y sin ωt, where y is a uniformly distributed
random variable in the interval (−1, 1), check whether the process is
wide sense stationary or not.
Solution : E[X(t)] = 0, but R(t1 , t2 ) = E[y²] sin ωt1 sin ωt2 = (1/3) sin ωt1 sin ωt2 ,
which is not a function of t1 − t2 alone, so the process is not WSS.
g.i
(23) In a village road, buses cross a particular place at a Poisson rate of 4
per hour. If a boy starts counting at 9.00 am.
n
(i) What is the probability that his count is 1 by 9.30 a.m?
eri
(ii) What is the probability that his count is 3 by 11.00 a.m?
(iii) What is the probability that his count is more than 5 by noon?
UNIT - IV