
18MAB204T-U1-Common-Lecture Notes

18MAB204T-Probability and Queueing Theory

Prepared by
Dr. S. ATHITHAN

Assistant Professor

Department of Mathematics

Faculty of Engineering and Technology



SRM INSTITUTE OF SCIENCE AND TECHNOLOGY



Kattankulathur-603203, Chengalpattu District.



18MAB204T-Probability and Queueing Theory S. ATHITHAN

Contents

1 Probability (Review)
2 Discrete and Continuous Distributions
3 Mathematical Expectation
4 Moments and Moment Generating Function
5 Examples
6 Function of a Random Variable
7 Exercise/Practice/Assignment Problems

Page 1 of 38 https://sites.google.com/site/lecturenotesofathithans/home

1 Probability (Review)
1.1 Definition of Probability and Basic Theorems
1.1.1 Random Experiment
An experiment whose outcome or result can be predicted with certainty is called a deterministic experiment.
An experiment for which all possible outcomes may be known in advance, but whose outcome on a particular performance cannot be predicted owing to a number of unknown causes, is called a random experiment.

Definition 1.1.1 (Sample Space). The set of all possible outcomes, which are assumed to be equally likely, is called the sample space and is usually denoted by S. Any subset A of S containing favorable outcomes is called an event.
1.1.2 The Approaches to Define the Probability

Definition 1.1.2 (Mathematical or Apriori Definition of Probability). Let S be a sample space and A be an event associated with a random experiment. Let n(S) and n(A) be the number of elements of S and A. Then the probability of the event A occurring, denoted by P(A), is defined as

P(A) = n(A)/n(S) = (Number of cases favorable to A)/(Exhaustive number of cases in S, the set of all possible cases)
Definition 1.1.3 (Statistical or Aposteriori Definition of Probability). Let a random experiment be repeated n times and let an event A occur n1 times out of the n trials. The ratio n1/n is called the relative frequency of the event A. As n increases, n1/n shows a tendency to stabilize and to approach a constant value; this value, denoted by P(A), is called the probability of the event A. i.e.,

P(A) = lim_{n→∞} (n1/n)

Definition 1.1.4 (Axiomatic Approach/Definition of Probability). Let S be a sample space and A be an event associated with a random experiment. The probability of the event A, denoted by P(A), is defined as a real number satisfying the following conditions/axioms:

(i) 0 ≤ P(A) ≤ 1
(ii) P(S) = 1
(iii) If A and B are mutually exclusive events, then P(A ∪ B) = P(A) + P(B).

Axiom (iii) can be extended to an arbitrary number of events; i.e., if A1, A2, . . . , An, . . . are mutually exclusive, then P(A1 ∪ A2 ∪ A3 ∪ · · · ∪ An ∪ . . . ) = P(A1) + P(A2) + P(A3) + · · · + P(An) + . . .

Theorem 1.1. The probability of the impossible event is zero; i.e., if ϕ is the empty subset of S (containing no outcomes), then P(ϕ) = 0.


Theorem 1.2. If Ā is the complementary event of A, then P(Ā) = 1 − P(A) ≤ 1.

Theorem 1.3 (Addition Theorem of Probability). If A and B are any two events, then P(A ∪ B) = P(A) + P(B) − P(A ∩ B) ≤ P(A) + P(B).

The above theorem can be extended to three events:

P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(B ∩ C) − P(C ∩ A) + P(A ∩ B ∩ C)

Theorem 1.4. If B ⊂ A, then P(B) ≤ P(A).

1.2 Conditional Probability and Independent Events

The conditional probability of an event B, assuming that the event A has happened, is denoted by P(B/A) and is defined as

P(B/A) = P(A ∩ B)/P(A), provided P(A) ≠ 0

Theorem 1.5 (Product Theorem of Probability). P(A ∩ B) = P(A) · P(B/A)

The product theorem can be extended to three events:

P(A ∩ B ∩ C) = P(A) · P(B/A) · P(C/A ∩ B)

There are a few further properties of conditional probability; please go through them in the textbook.

When two events A and B are independent, we have P(B/A) = P(B), and the product theorem takes the form P(A ∩ B) = P(A) · P(B). The converse is also true; i.e., if P(A ∩ B) = P(A) · P(B), then the events A and B are independent.

This result can be extended to any number of events; i.e., if A1, A2, . . . , An are n mutually independent events, then

P(A1 ∩ A2 ∩ A3 ∩ · · · ∩ An) = P(A1) · P(A2) · P(A3) · · · P(An)

Theorem 1.6. If the events A and B are independent, then the events A and B̄ (and similarly Ā and B, Ā and B̄) are also independent.

1.3 Total Probability


Theorem 1.7 (Theorem of Total Probability). If B1, B2, . . . , Bn is a set of exhaustive and mutually exclusive events and A is another event associated with (or caused by) Bi, then

P(A) = Σ_{i=1}^{n} P(Bi) P(A/Bi)
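As a quick numerical illustration of the theorem, the Python sketch below uses a made-up two-cause setup (the two urns, their selection probabilities, and the ball counts are hypothetical, chosen only for the check):

```python
from fractions import Fraction as F

# Hypothetical setup: urn B1 is picked with prob 1/3 (3 red out of 5 balls),
# urn B2 with prob 2/3 (1 red out of 5); A = "a red ball is drawn".
P_B = [F(1, 3), F(2, 3)]            # P(Bi): exhaustive, mutually exclusive
P_A_given_B = [F(3, 5), F(1, 5)]    # P(A/Bi)

# Theorem of total probability: P(A) = sum_i P(Bi) * P(A/Bi)
P_A = sum(pb * pa for pb, pa in zip(P_B, P_A_given_B))
print(P_A)  # 1/3
```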


2 Discrete and Continuous Distributions


Definition 2.0.1. A Random Variable (abbreviated as RV) is a function that assigns a real
number X(s) to every element s ∈ S, where S is the sample space corresponding to a
random experiment E.

Discrete Random Variable


If X is a random variable (RV) which can take a finite or countably infinite number of values, X is called a discrete RV. When the RV is discrete, the possible values of X may be listed as x1, x2, . . . , xn, . . . . In the finite case the list of values terminates, and in the countably infinite case the list goes on indefinitely.

For example, the number shown when a die is thrown and the number of alpha particles emitted by a radioactive source are discrete RVs.
Probability Mass Function

If X is a discrete RV which can take the values x1, x2, x3, . . . such that P(X = xi) = pi, then pi is called the PROBABILITY FUNCTION or PROBABILITY MASS FUNCTION or POINT PROBABILITY FUNCTION, provided the pi (i = 1, 2, 3, . . . ) satisfy the following conditions:

(a) pi ≥ 0, for all i, and
(b) Σi pi = 1

The collection of pairs {(xi, pi)}, i = 1, 2, 3, . . . is called the probability distribution of the RV X, which is sometimes displayed in the form of a table as given below:

X = xi       x1   x2   x3   . . .   xn   . . .
P(X = xi)    p1   p2   p3   . . .   pn   . . .
Continuous Random Variable

If X is a RV which can take all values (i.e., infinitely many values) in an interval, then X is called a continuous RV.

For example, the length of time during which a vacuum tube installed in a circuit functions is a continuous RV.

Probability Density Function

If X is a continuous RV such that

P(x − (1/2)dx ≤ X ≤ x + (1/2)dx) = f(x)dx

then f(x) is called the PROBABILITY DENSITY FUNCTION (PDF) of X, provided f(x) satisfies the following conditions:
(a) f(x) ≥ 0, for all x ∈ RX (the range space of X), and


(b) ∫_{RX} f(x) dx = 1

Moreover, P(a ≤ X ≤ b) or P(a < X < b) is defined as P(a ≤ X ≤ b) = ∫_{a}^{b} f(x) dx.

The curve y = f (x) is called the probability curve of the RV X.


Note: When X is a continuous RV, P(X = a) = P(a ≤ X ≤ a) = ∫_{a}^{a} f(x) dx = 0

This means that it is almost impossible that a continuous RV assumes a specific value. Hence P(a ≤ X ≤ b) = P(a ≤ X < b) = P(a < X ≤ b) = P(a < X < b).

Cumulative Distribution Function (CDF)

If X is an RV, discrete or continuous, then P(X ≤ x) is called the CUMULATIVE DISTRIBUTION FUNCTION of X, or DISTRIBUTION FUNCTION of X, and is denoted by F(x). If X is discrete,

F(x) = Σ_{j: xj ≤ x} pj

If X is continuous, F(x) = P(−∞ < X ≤ x) = ∫_{−∞}^{x} f(t) dt

Properties of the CDF F(x)

(i) F(x) is a non-decreasing function of x; i.e., if x1 < x2, then F(x1) ≤ F(x2).

(ii) F(−∞) = 0 and F(∞) = 1.

(iii) If X is a discrete RV taking values x1, x2, x3, . . . , where x1 < x2 < x3 < · · · < xi−1 < xi < . . . , then P(X = xi) = F(xi) − F(xi−1).

(iv) If X is a continuous RV, then (d/dx)F(x) = f(x) at all points where F(x) is differentiable.
Note: Although we may talk of the probability distribution of a continuous RV, it cannot be represented by a table as in the case of a discrete RV. The probability distribution of a continuous RV is said to be known if either its pdf or its CDF is given.
For the problems on discrete and continuous random variables (RVs), we note the following points from the table below.


                            Discrete RV*                     Continuous RV*
                            (Probability Mass Function)      (Probability Density Function)
Operator                    Σ                                ∫
Takes values of the form    (e.g.) 1, 2, 3, . . .            0 < x < ∞
Standard notation           P(X = x)                         f(x)
CDF**                       F(x) = P(X ≤ x)                  F(x) = P(X ≤ x)
Total probability           Σ P(X = x) = 1                   ∫_{−∞}^{∞} f(x) dx = 1

*RV: Random Variable; **CDF: Cumulative Distribution Function (or Distribution Function)

3 Mathematical Expectation
Definition 3.0.1 (Expectation). An average of the probability distribution of a random variable X is called the expectation (or the expected value, or the mathematical expectation) of X and is denoted by E(X).

Definition 3.0.2 (Expectation for a Discrete Random Variable). Let X be a discrete random variable taking values x1, x2, x3, . . . with probabilities P(X = xi) = p(xi), i = 1, 2, 3, . . . . The expected value of X is defined as E(X) = Σi xi p(xi), if the sum on the right-hand side exists.

Definition 3.0.3 (Expectation for a Continuous Random Variable). Let X be a continuous random variable with pdf f(x) defined on (−∞, ∞). The expected value of X is defined as E(X) = ∫_{−∞}^{∞} x f(x) dx.

Note: E(X) is called the mean of the distribution, or the mean of X, and is denoted by X̄ or µ.

3.1 Properties of Expectation

1. If c is any constant, then E(c) = c.
2. If a, b are constants, then E(aX + b) = aE(X) + b.
3. If (X, Y) is a two-dimensional random variable, then E(X + Y) = E(X) + E(Y).
4. If (X, Y) is a two-dimensional random variable, then E(XY) = E(X)E(Y), provided X and Y are independent random variables.


3.2 Variance and its Properties

Definition 3.2.1. Let X be a random variable with mean µ = E(X). The variance of X is defined as σ² = E{(X − µ)²} and is denoted by Var(X) or σ²X.
Note: Var(X) = E(X²) − [E(X)]².

Properties of Variance
1. Var(a) = 0, where a is any constant.
2. Var(aX) = a² Var(X)
3. Var(aX ± b) = a² Var(X)
4. Var(aX + bY) = a² Var(X) + b² Var(Y), provided X and Y are independent random variables.
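The expectation and variance properties above can be verified exactly on any small pmf; the three-point distribution and the constants a, b in this Python sketch are invented purely for illustration:

```python
from fractions import Fraction as F

xs = [0, 1, 2]
ps = [F(1, 4), F(1, 2), F(1, 4)]   # a made-up pmf summing to 1

E = lambda g: sum(p * g(x) for x, p in zip(xs, ps))
EX = E(lambda x: x)
Var = E(lambda x: x * x) - EX ** 2

a, b = 3, 7
# E(aX + b) = aE(X) + b
assert E(lambda x: a * x + b) == a * EX + b
# Var(aX + b) = a^2 Var(X): compute E[(aX+b - E(aX+b))^2] directly
assert E(lambda x: (a * x + b - (a * EX + b)) ** 2) == a ** 2 * Var
print(EX, Var)  # 1 1/2
```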
4 Moments and Moment Generating Function

4.1 Moments

Definition 4.1.1 (Moments about the origin). The r-th moment of a random variable X about the origin is defined as E(X^r) and is denoted by µ′r. Moments about the origin are known as raw moments.

µ′r = E(X^r) = Σx x^r P(X = x),  if X is discrete
µ′r = E(X^r) = ∫ x^r f(x) dx over the range of X,  if X is continuous

Note: By moments we mean the moments about the origin, or raw moments.


The first four moments about the origin are given by

1. µ′1 = E(X) = Mean
2. µ′2 = E(X²)
3. µ′3 = E(X³)
4. µ′4 = E(X⁴)

Note: Var(X) = E(X²) − [E(X)]² = µ′2 − (µ′1)² = second moment − square of the first moment. The Standard Deviation is SD = σX = √Var(X).

Definition 4.1.2 (Moments about mean or Central moments). The r th moment of a random
variable X about the mean µ is defined as E[(X − µ)r ] and is denoted by µr .


The first four moments about the mean are given by


1. µ1 = E(X − µ) = E(X) − E(µ) = µ − µ = 0
2. µ2 = E[(X − µ)2 ] = V ar(X)
3. µ3 = E[(X − µ)3 ]
4. µ4 = E[(X − µ)4 ]

Definition 4.1.3 (Moments about any point a). The r th moment of a random variable X about
any point a is defined as E[(X − a)r ] and we denote it by m′r .
The first four moments about a point ‘a’ are given by

1. m′1 = E(X − a) = E(X) − a = µ − a
2. m′2 = E[(X − a)²]
3. m′3 = E[(X − a)³]
4. m′4 = E[(X − a)⁴]

Relation between moments about the mean and moments about an arbitrary point a:
Let µr be the r-th moment about the mean, m′r the r-th moment about any point a, and µ the mean of X.
∴ µr = E[(X − µ)^r]
     = E[{(X − a) − (µ − a)}^r]
     = E[{(X − a) − m′1}^r]
     = E[(X − a)^r − rC1 (X − a)^{r−1} m′1 + rC2 (X − a)^{r−2} (m′1)² − · · · + (−1)^r (m′1)^r]
     = E(X − a)^r − rC1 E(X − a)^{r−1} · m′1 + rC2 E(X − a)^{r−2} · (m′1)² − rC3 E(X − a)^{r−3} · (m′1)³ + rC4 E(X − a)^{r−4} · (m′1)⁴ − · · · + (−1)^r (m′1)^r
     = m′r − rC1 m′(r−1) m′1 + rC2 m′(r−2) (m′1)² − rC3 m′(r−3) (m′1)³ + rC4 m′(r−4) (m′1)⁴ − · · · + (−1)^r (m′1)^r

We define (fix) m′0 = 1; then we have

µ1 = m′1 − m′0 m′1 = 0
µ2 = m′2 − 2C1 m′1 · m′1 + (m′1)² = m′2 − (m′1)²
µ3 = m′3 − 3C1 m′2 · m′1 + 3C2 m′1 · (m′1)² − (m′1)³ = m′3 − 3m′2 · m′1 + 2(m′1)³
µ4 = m′4 − 4C1 m′3 · m′1 + 4C2 m′2 · (m′1)² − 4C3 m′1 · (m′1)³ + (m′1)⁴ = m′4 − 4m′3 · m′1 + 6m′2 · (m′1)² − 3(m′1)⁴
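The formulas for µ2, µ3 and µ4 can be sanity-checked by computing both sides exactly on a small pmf; the distribution and the reference point a below are arbitrary illustrative choices:

```python
from fractions import Fraction as F

xs = [1, 2, 6]
ps = [F(1, 2), F(1, 4), F(1, 4)]   # made-up pmf
E = lambda g: sum(p * g(x) for x, p in zip(xs, ps))

a = 3                               # arbitrary reference point
mu = E(lambda x: x)                 # the mean
m = {r: E(lambda x: (x - a) ** r) for r in range(1, 5)}   # moments about a
c = {r: E(lambda x: (x - mu) ** r) for r in range(1, 5)}  # central moments

assert c[2] == m[2] - m[1] ** 2
assert c[3] == m[3] - 3 * m[2] * m[1] + 2 * m[1] ** 3
assert c[4] == m[4] - 4 * m[3] * m[1] + 6 * m[2] * m[1] ** 2 - 3 * m[1] ** 4
```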


4.2 Moment Generating Function (MGF)

Definition 4.2.1 (Moment Generating Function (MGF)). The moment generating function of a random variable X is defined as E(e^{tX}), for all t ∈ (−∞, ∞). It is denoted by M(t) or MX(t).

MX(t) = E(e^{tX}) = Σx e^{tx} P(X = x),  if X is discrete
MX(t) = E(e^{tX}) = ∫ e^{tx} f(x) dx over the range of X,  if X is continuous

If X is a random variable, its MGF MX(t) is given by

MX(t) = E(e^{tX}) = E[1 + (tX)/1! + (tX)²/2! + (tX)³/3! + · · · + (tX)^r/r! + . . .]
      = 1 + (t/1!)E(X) + (t²/2!)E(X²) + (t³/3!)E(X³) + · · · + (t^r/r!)E(X^r) + . . .
      = 1 + (t/1!)µ′1 + (t²/2!)µ′2 + (t³/3!)µ′3 + · · · + (t^r/r!)µ′r + . . .
i.e.

µ′1 = coefficient of t in the expansion of MX(t)
µ′2 = coefficient of t²/2! in the expansion of MX(t)
...
µ′r = coefficient of t^r/r! in the expansion of MX(t)
...

∴ MX(t) generates all the moments about the origin (that is why it is called the Moment Generating Function (MGF)).


Another device which we often use to find the moments is the following:

µ′r = MX^{(r)}(0)

We have

MX(t) = E(e^{tX}) = 1 + (t/1!)µ′1 + (t²/2!)µ′2 + (t³/3!)µ′3 + · · · + (t^r/r!)µ′r + . . .

Differentiating with respect to t, we get



M′X(t) = µ′1 + tµ′2 + (t²/2!)µ′3 + · · · + (t^{r−1}/(r−1)!)µ′r + . . .
M″X(t) = µ′2 + tµ′3 + (t²/2!)µ′4 + · · · + (t^{r−2}/(r−2)!)µ′r + . . .
...
MX^{(r)}(t) = µ′r + terms containing higher powers of t
...

Putting t = 0, we get

M′X(0) = µ′1
M″X(0) = µ′2
...
MX^{(r)}(0) = µ′r
...

The Maclaurin series expansion given below yields all the moments:

MX(t) = MX(0) + (t/1!)M′X(0) + (t²/2!)M″X(0) + . . .

The MGF of X about its mean µ is MX−µ(t) = E[e^{t(X−µ)}].
Similarly, the MGF of X about any point a is MX−a(t) = E[e^{t(X−a)}].

Properties of MGF

1. McX(t) = MX(ct)
2. MX+c(t) = e^{ct} MX(t)
3. MaX+b(t) = e^{bt} MX(at)
4. If X and Y are independent RVs, then MX+Y(t) = MX(t) · MY(t).
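Property 3 (which contains 1 and 2 as special cases) can be checked numerically at a sample value of t; the pmf and the constants a, b below are illustrative assumptions:

```python
import math

xs = [0, 1, 3]
ps = [0.2, 0.5, 0.3]        # made-up pmf

def mgf(values, probs, t):
    # MX(t) = E(e^{tX}) for a discrete RV
    return sum(p * math.exp(t * x) for x, p in zip(values, probs))

a, b, t = 2.0, 3.0, 0.1
lhs = mgf([a * x + b for x in xs], ps, t)    # M_{aX+b}(t)
rhs = math.exp(b * t) * mgf(xs, ps, a * t)   # e^{bt} MX(at)
assert abs(lhs - rhs) < 1e-12
```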

5 Examples
Example: 5.1. From 6 positive and 8 negative numbers, 4 are chosen at random (without replacement) and multiplied. What is the probability that the product is negative?
Solution: The product of the four numbers is negative (-ve) when either exactly one or exactly three of them are -ve.
No. of ways of choosing 3 positive and 1 negative number = 6C3 · 8C1 = 20 · 8 = 160.
No. of ways of choosing 1 positive and 3 negative numbers = 6C1 · 8C3 = 6 · 56 = 336.


Total no. of ways choosing 4 nos. out of 14 nos. =14 C4 = 1001.


∴ P(the product is -ve) = (160 + 336)/1001 = 496/1001.
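The counting argument can be cross-checked in Python, both with the binomial-coefficient formula and by brute-force enumeration of all 4-subsets (only the signs matter):

```python
from fractions import Fraction
from itertools import combinations
from math import comb

# Counting argument from the example
p = Fraction(comb(6, 3) * comb(8, 1) + comb(6, 1) * comb(8, 3), comb(14, 4))
assert p == Fraction(496, 1001)

# Brute-force cross-check over all 14C4 = 1001 choices
signs = [1] * 6 + [-1] * 8
neg = sum(1 for c in combinations(range(14), 4)
          if signs[c[0]] * signs[c[1]] * signs[c[2]] * signs[c[3]] < 0)
assert Fraction(neg, comb(14, 4)) == p
print(p)  # 496/1001
```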

Example: 5.2. A box contains 6 bad and 4 good apples. Two are drawn out from the box at a time. One of them is tested and found to be good. What is the probability that the other one is also good?
Solution: Let A be the event that the tested apple is good and let B be the event that the other one is also good.

∴ The required probability is

P(B/A) = P(A ∩ B)/P(A) = (4C2/10C2)/(4/10) = (6/45)/(4/10) = 1/3

Aliter: Probability that the first drawn apple is good = 4/10.
Once this apple is drawn and found good, the box contains only 9 apples in total, of which 3 are good.
∴ Probability that the second drawn apple is also good = 3/9 = 1/3.
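Both methods agree with a direct enumeration of ordered draws without replacement:

```python
from fractions import Fraction
from itertools import permutations

# 0 = bad, 1 = good; 6 bad and 4 good apples
apples = [0] * 6 + [1] * 4
draws = list(permutations(range(10), 2))   # ordered pairs of distinct apples

first_good = [d for d in draws if apples[d[0]] == 1]
both_good = [d for d in first_good if apples[d[1]] == 1]

p = Fraction(len(both_good), len(first_good))  # P(B/A)
assert p == Fraction(1, 3)
```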

Example: 5.3. A random variable X has the following distribution:

x         0   1   2   3   4   5    6    7
P[X=x]    0   k   2k  2k  3k  k²   2k²  7k² + k

Find (i) the value of k, (ii) the Cumulative Distribution Function (CDF), (iii) P(1.5 < X < 4.5 / X > 2) and (iv) the smallest value of α for which P(X ≤ α) > 1/2.
Solution:

(i) We know that Σi P[X = xi] = 1.
Here,
P[X = 0] + P[X = 1] + P[X = 2] + P[X = 3] + P[X = 4] + P[X = 5] + P[X = 6] + P[X = 7] = 1
i.e. 0 + k + 2k + 2k + 3k + k² + 2k² + 7k² + k = 1
i.e. 10k² + 9k − 1 = 0
i.e. (10k − 1)(k + 1) = 0
⟹ k = 1/10 or k = −1
Since a probability cannot be negative, we take the value k = 1/10.
∴ The probability distribution is given by


x         0   1     2     3     4     5      6      7
P[X=x]    0   k     2k    2k    3k    k²     2k²    7k² + k
P[X=x]    0   1/10  2/10  2/10  3/10  1/100  2/100  17/100
(ii) The Cumulative Distribution Function (CDF):

F(x) = P(X ≤ x)
F(0) = P(X ≤ 0) = P(X = 0) = 0
F(1) = P(X ≤ 1) = P(X = 0) + P(X = 1) = 0 + 1/10 = 1/10
F(2) = P(X ≤ 2) = P(X = 0) + P(X = 1) + P(X = 2) = 0 + 1/10 + 2/10 = 3/10
...
F(7) = P(X ≤ 7) = P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3) + P(X = 4) + P(X = 5) + P(X = 6) + P(X = 7) = 100/100 = 1
The detailed CDF F(x) is given in the table:

x         0   1     2     3     4     5       6       7
P[X=x]    0   k     2k    2k    3k    k²      2k²     7k² + k
P[X=x]    0   1/10  2/10  2/10  3/10  1/100   2/100   17/100
F(x)      0   1/10  3/10  5/10  8/10  81/100  83/100  100/100 = 1
E

(iii)
R
TU

P [(1.5 < X < 4.5) ∩ (X > 2)]


P (1.5 < X < 4.5/X > 2) =
P (X > 2)
C

5
P (1.5 < X < 4.5) = P (X = 3) + P (X = 4) =
LE

10
P (X > 2) = 1 − P (X ≤ 2)
= 1 − {P (X = 0) + P (X = 1) + P (X = 2)}
3 7
= 1− =
10 10
5
10
5
P (1.5 < X < 4.5/X > 2) = 7 =
10
7

1
(iv) The smallest value of α for which P (X ≤ α) > = 0.5.
2

Page 13 of 38 https://sites.google.com/site/lecturenotesofathithans/home
18MAB204T-Probability and Queueing Theory S. ATHITHAN

5
From the CDF table, we found that P (X ≤ 3) = = 0.5. But we need the probability
10
8
which is more than 0.5, for this we have P (X ≤ 4) = = 0.8 > 0.5. ∴ α = 4 satisfies
10
the given condition.
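Parts (i), (iii) and (iv) of this example can be verified with exact fractions:

```python
from fractions import Fraction as F

k = F(1, 10)
p = {0: 0, 1: k, 2: 2*k, 3: 2*k, 4: 3*k, 5: k**2, 6: 2*k**2, 7: 7*k**2 + k}
assert sum(p.values()) == 1                      # k = 1/10 normalises the pmf

# (iii) P(1.5 < X < 4.5 / X > 2)
num = sum(v for x, v in p.items() if 1.5 < x < 4.5 and x > 2)
den = sum(v for x, v in p.items() if x > 2)
assert num / den == F(5, 7)

# (iv) smallest alpha with P(X <= alpha) > 1/2
cdf = 0
for x in sorted(p):
    cdf += p[x]
    if cdf > F(1, 2):
        break
assert x == 4
```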

Example: 5.4. A random variable X has the following distribution:

x         0   1    2    3    4    5    6     7     8
P[X=x]    k   3k   5k   7k   9k   11k  13k   15k   17k

Find (i) the value of k, (ii) the Distribution Function (CDF), (iii) P(0 < X < 3 / X > 2) and (iv) the smallest value of α for which P(X ≤ α) > 1/2.
Solution:

x         0     1     2     3     4     5      6      7      8
P[X=x]    k     3k    5k    7k    9k    11k    13k    15k    17k
P[X=x]    1/81  3/81  5/81  7/81  9/81  11/81  13/81  15/81  17/81

(i) Since Σ P[X = x] = 1, we have k + 3k + 5k + · · · + 17k = 81k = 1, so the value of k = 1/81.

(ii) The Distribution Function (CDF):

x         0     1     2     3      4      5      6      7      8
P[X=x]    1/81  3/81  5/81  7/81   9/81   11/81  13/81  15/81  17/81
F(x)      1/81  4/81  9/81  16/81  25/81  36/81  49/81  64/81  81/81 = 1

(iii) P(0 < X < 3 / X > 2)

P(0 < X < 3 / X > 2) = P[(0 < X < 3) ∩ (X > 2)] / P(X > 2) = 0 (∵ there is no common element between the two events)

(iv) The smallest value of α for which P(X ≤ α) > 1/2 = 0.5:
From the CDF table, P(X ≤ 5) = 36/81 < 0.5, but we need a probability greater than 0.5, and P(X ≤ 6) = 49/81 > 0.5. ∴ α = 6 is the smallest value satisfying the given condition.

Example: 5.5. Find the value of k for the pdf

f(x) = kx,        when 0 ≤ x ≤ 1
       k,         when 1 ≤ x ≤ 2
       3k − kx,   when 2 ≤ x ≤ 3
       0,         otherwise

Also find (a) the CDF of X (b) P(1.5 < X < 3.2 / 0.5 < X < 1.8).
Solution:

We know that ∫_{−∞}^{∞} f(x) dx = 1

i.e. ∫_{−∞}^{0} f(x) dx + ∫_{0}^{1} f(x) dx + ∫_{1}^{2} f(x) dx + ∫_{2}^{3} f(x) dx + ∫_{3}^{∞} f(x) dx = 1

i.e. ∫_{−∞}^{0} 0 · dx + ∫_{0}^{1} kx dx + ∫_{1}^{2} k dx + ∫_{2}^{3} (3k − kx) dx + ∫_{3}^{∞} 0 · dx = 1

i.e. 0 + k[x²/2]₀¹ + k[x]₁² + [3kx − kx²/2]₂³ + 0 = 1

i.e. 2k = 1  ⟹  k = 1/2

(a) Since k = 1/2,

f(x) = x/2,         when 0 ≤ x ≤ 1
       1/2,         when 1 ≤ x ≤ 2
       (3 − x)/2,   when 2 ≤ x ≤ 3
       0,           otherwise


The CDF of X is F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt

When x < 0, F(x) = 0, since f(x) = 0 for x < 0.

When 0 ≤ x < 1, F(x) = ∫_{0}^{x} (t/2) dt = x²/4

When 1 ≤ x < 2, F(x) = ∫_{0}^{1} (t/2) dt + ∫_{1}^{x} (1/2) dt = (2x − 1)/4

When 2 ≤ x < 3, F(x) = ∫_{0}^{1} (t/2) dt + ∫_{1}^{2} (1/2) dt + ∫_{2}^{x} ((3 − t)/2) dt = (−x² + 6x − 5)/4

When x ≥ 3, F(x) = 1

∴ F(x) = 0,                  when x < 0
         x²/4,               when 0 ≤ x < 1
         (2x − 1)/4,         when 1 ≤ x < 2
         (−x² + 6x − 5)/4,   when 2 ≤ x < 3
         1,                  when x ≥ 3

(b) P(1.5 < X < 3.2 / 0.5 < X < 1.8) = P[(1.5 < X < 3.2) ∩ (0.5 < X < 1.8)] / P(0.5 < X < 1.8)

P[(1.5 < X < 3.2) ∩ (0.5 < X < 1.8)] = P(1.5 < X < 1.8) = ∫_{1.5}^{1.8} (1/2) dx = 3/20

P(0.5 < X < 1.8) = ∫_{0.5}^{1} (x/2) dx + ∫_{1}^{1.8} (1/2) dx = 3/16 + 2/5 = 47/80

∴ P(1.5 < X < 3.2 / 0.5 < X < 1.8) = (3/20)/(47/80) = 12/47
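The probabilities in part (b) can be cross-checked from the closed-form CDF of part (a), a sketch using exact fractions:

```python
from fractions import Fraction as F

# CDF of the piecewise pdf with k = 1/2
def cdf(x):
    x = F(x)
    if x < 0:  return F(0)
    if x <= 1: return x**2 / 4
    if x <= 2: return (2*x - 1) / 4
    if x <= 3: return (-x**2 + 6*x - 5) / 4
    return F(1)

assert cdf(3) == 1                               # total probability is 1
p_num = cdf(F(18, 10)) - cdf(F(15, 10))          # P(1.5 < X < 1.8)
p_den = cdf(F(18, 10)) - cdf(F(5, 10))           # P(0.5 < X < 1.8)
print(p_num, p_den, p_num / p_den)               # 3/20 47/80 12/47
```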
Example: 5.6. Experience has shown that, while walking in a certain park, the time X (in minutes) between seeing two people smoking has the density function

f(x) = kxe^{−x},   when x > 0
       0,          otherwise

Find the (a) value of k (b) distribution function of X (c) the probability that a person who has just seen a person smoking will see another person smoking in 2 to 5 minutes; in at least 7 minutes.

Solution: Given the pdf of X is

f(x) = kxe^{−x},   when x > 0
       0,          otherwise

∴ f(x) ≥ 0 ∀ x ⟹ k > 0.

(a) We know that ∫_{−∞}^{∞} f(x) dx = 1

i.e. ∫_{0}^{∞} kxe^{−x} dx = 1

i.e. k [x · e^{−x}/(−1) − e^{−x}/(−1)²]₀^{∞} = 1   (integrating by parts)

⟹ k = 1


∴ f(x) = xe^{−x},   when x > 0
         0,         otherwise

(b) The distribution function (CDF) is given by

F(x) = P(X ≤ x) = ∫_{−∞}^{x} f(t) dt

Here, for x > 0, F(x) = ∫_{0}^{x} te^{−t} dt = 1 − (1 + x)e^{−x}; and F(x) = 0 for x ≤ 0.

(c)

P(2 < X < 5) = ∫_{2}^{5} f(x) dx = ∫_{2}^{5} xe^{−x} dx = 3e^{−2} − 6e^{−5} = 0.406 − 0.040 = 0.366

and

P(X ≥ 7) = ∫_{7}^{∞} f(x) dx = ∫_{7}^{∞} xe^{−x} dx = 8e^{−7} = 0.007
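Using the antiderivative of xe^{−x}, the two probabilities in (c) can be confirmed numerically:

```python
import math

# f(x) = x e^{-x}; an antiderivative of x e^{-x} is -(x + 1) e^{-x}
G = lambda x: -(x + 1) * math.exp(-x)

p_2_5 = G(5) - G(2)    # P(2 < X < 5) = 3e^{-2} - 6e^{-5}
p_ge7 = 0 - G(7)       # P(X >= 7) = 8e^{-7}, since G(x) -> 0 as x -> infinity

assert abs(p_2_5 - (3 * math.exp(-2) - 6 * math.exp(-5))) < 1e-12
assert abs(p_ge7 - 8 * math.exp(-7)) < 1e-12
print(round(p_2_5, 3), round(p_ge7, 3))  # 0.366 0.007
```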


Example: 5.7. The sales of a convenience store on a randomly selected day are X thousand dollars, where X is a random variable with distribution function

F(x) = 0,                when x < 0
       x²/2,             when 0 ≤ x < 1
       k(4x − x²) − 1,   when 1 ≤ x < 2
       1,                when x ≥ 2

Suppose that this convenience store's total sales on any given day are less than 2000 units (dollars or pounds or rupees). Find (a) the value of k. (b) Let A and B be the events that tomorrow the store's total sales are between 500 and 1500 units, and over 1000 units, respectively. Find P(A) and P(B). (c) Are A and B independent events?
Solution: The pdf of X is given by

f(x) = F′(x) = (d/dx)F(x) = 0,            when x < 0
                            x,            when 0 ≤ x < 1
                            k(4 − 2x),    when 1 ≤ x < 2
                            0,            when x ≥ 2

Since f(x) is a pdf, we have ∫_{−∞}^{∞} f(x) dx = 1

i.e. ∫_{0}^{1} x dx + ∫_{1}^{2} k(4 − 2x) dx = 1

i.e. [x²/2]₀¹ + k[4x − x²]₁² = 1

i.e. 1/2 + k = 1  ⟹  k = 1/2

∴ f(x) = 0,        when x < 0
         x,        when 0 ≤ x < 1
         2 − x,    when 1 ≤ x < 2
         0,        when x ≥ 2


Since the total sales X is in thousands of units, sales between 500 and 1500 units correspond to the event A = {0.5 < X < 1.5}, and sales over 1000 units to the event B = {X > 1}. ⟹ A ∩ B = {1 < X < 1.5}.
Now

P(A) = P(0.5 < X < 1.5) = ∫_{0.5}^{1} x dx + ∫_{1}^{1.5} (2 − x) dx = 3/4

P(B) = P(X > 1) = ∫_{1}^{2} (2 − x) dx = 1/2

P(A ∩ B) = P(1 < X < 1.5) = ∫_{1}^{1.5} (2 − x) dx = 3/8

The condition for independent events: P(A) · P(B) = P(A ∩ B).
Here, P(A) · P(B) = (3/4) · (1/2) = 3/8 = P(A ∩ B).
∴ A and B are independent events.
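The three probabilities and the independence check can be reproduced from the CDF with exact fractions:

```python
from fractions import Fraction as F

# distribution function with k = 1/2: F(x) = (4x - x^2)/2 - 1 on [1, 2)
def cdf(x):
    x = F(x)
    if x < 0:  return F(0)
    if x < 1:  return x**2 / 2
    if x < 2:  return (4*x - x**2) / 2 - 1
    return F(1)

PA = cdf(F(3, 2)) - cdf(F(1, 2))    # P(0.5 < X < 1.5)
PB = 1 - cdf(1)                     # P(X > 1)
PAB = cdf(F(3, 2)) - cdf(1)         # P(1 < X < 1.5)

assert (PA, PB, PAB) == (F(3, 4), F(1, 2), F(3, 8))
assert PA * PB == PAB               # A and B are independent
```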

Example: 5.8. If a random variable X has the following probability distribution, find E(X), E(X²), Var(X), E(2X + 1), Var(2X + 1).

x       −1    0    1    2
p(x)    0.3  0.1  0.4  0.2

Solution: Here X is a discrete RV. ∴

E(X) = Σi xi p(xi)
     = (−1) × 0.3 + 0 × 0.1 + 1 × 0.4 + 2 × 0.2
     = −0.3 + 0 + 0.4 + 0.4 = 0.5



E(X²) = Σi xi² p(xi)
      = (−1)² × 0.3 + 0² × 0.1 + 1² × 0.4 + 2² × 0.2
      = 0.3 + 0 + 0.4 + 0.8 = 1.5

Var(X) = E(X²) − [E(X)]² = 1.5 − (0.5)² = 1.5 − 0.25 = 1.25

E(2X + 1) = 2E(X) + 1 = 2 × 0.5 + 1 = 2

Var(2X + 1) = 2² Var(X) = 4 × 1.25 = 5
Example: 5.9. If a random variable X has the following probability distribution, find E(X), E(X²), Var(X), E(3X − 4), Var(3X − 4).

x       0     1     2     3     4
p(x)    1/25  3/25  5/25  7/25  9/25

Solution: Here X is a discrete RV. ∴

E(X) = Σi xi p(xi) = (0 + 3 + 10 + 21 + 36)/25 = 70/25 = 14/5

E(X²) = Σi xi² p(xi) = (0 + 3 + 20 + 63 + 144)/25 = 230/25 = 46/5


Var(X) = E(X²) − [E(X)]² = 46/5 − (14/5)² = 230/25 − 196/25 = 34/25

E(3X − 4) = 3E(X) − 4 = 42/5 − 20/5 = 22/5

Var(3X − 4) = 3² Var(X) = 9 × 34/25 = 306/25
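A fractions-based check of the whole example:

```python
from fractions import Fraction as F

xs = [0, 1, 2, 3, 4]
ps = [F(2 * x + 1, 25) for x in xs]   # 1/25, 3/25, 5/25, 7/25, 9/25
assert sum(ps) == 1

EX = sum(x * p for x, p in zip(xs, ps))
EX2 = sum(x * x * p for x, p in zip(xs, ps))
Var = EX2 - EX ** 2

assert (EX, EX2, Var) == (F(14, 5), F(46, 5), F(34, 25))
assert 3 * EX - 4 == F(22, 5)         # E(3X - 4)
assert 9 * Var == F(306, 25)          # Var(3X - 4)
```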
Example: 5.10. A test engineer found that the lifetime X (in years) of an equipment follows the distribution function

F(x) = 0,              when x < 0
       1 − e^{−x/5},   when x ≥ 0

Find the pdf, mean and variance of X.

Solution: The pdf of X is given by

f(x) = F′(x) = (d/dx)F(x) = 0,               when x < 0
                            (1/5)e^{−x/5},   when x ≥ 0

The mean E(X) = ∫_{−∞}^{∞} x f(x) dx = ∫_{0}^{∞} x · (1/5)e^{−x/5} dx = [−(1/5)(5x + 25)e^{−x/5}]₀^{∞} = 5

Var(X) = E(X²) − [E(X)]²


Now E(X²) = ∫_{−∞}^{∞} x² f(x) dx = ∫_{0}^{∞} x² · (1/5)e^{−x/5} dx = [−(1/5)(5x² + 50x + 250)e^{−x/5}]₀^{∞} = 50

∴ Var(X) = 50 − 5² = 25
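The mean and variance can be confirmed by a crude numerical integration (a midpoint Riemann sum on a truncated range; the step size and cutoff below are arbitrary sketch choices):

```python
import math

# f(x) = (1/5) e^{-x/5} for x >= 0; integrate on [0, 200] (tail mass ~ e^{-40})
f = lambda x: math.exp(-x / 5) / 5
h = 0.001
mean = sum((i + 0.5) * h * f((i + 0.5) * h) * h for i in range(200_000))
ex2 = sum(((i + 0.5) * h) ** 2 * f((i + 0.5) * h) * h for i in range(200_000))

assert abs(mean - 5) < 1e-3
assert abs(ex2 - mean ** 2 - 25) < 1e-2
```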

Example: 5.11. Find the mean and standard deviation of the distribution

f(x) = kx(2 − x),   when 0 ≤ x ≤ 2
       0,           otherwise

Solution: Given that X is a continuous RV whose pdf is

f(x) = kx(2 − x),   when 0 ≤ x ≤ 2
       0,           otherwise

Since f(x) is a pdf, we have ∫_{−∞}^{∞} f(x) dx = 1

i.e. ∫_{−∞}^{0} 0 · dx + ∫_{0}^{2} kx(2 − x) dx + ∫_{2}^{∞} 0 · dx = 1

i.e. 0 + k[x² − x³/3]₀² + 0 = 1

i.e. 4k/3 = 1  ⟹  k = 3/4


The mean E(X) = ∫_{−∞}^{∞} x f(x) dx = ∫_{0}^{2} (3/4) x² (2 − x) dx = (3/4)[2x³/3 − x⁴/4]₀² = 1

Var(X) = E(X²) − [E(X)]²

Now E(X²) = ∫_{−∞}^{∞} x² f(x) dx = ∫_{0}^{2} (3/4) x³ (2 − x) dx = (3/4)[x⁴/2 − x⁵/5]₀² = 6/5

∴ Var(X) = 6/5 − 1² = 1/5  ⟹  S.D. = √(1/5) = 1/√5

Example: 5.12. If X has the distribution function

F(x) = 0,     when x < 1
       1/3,   when 1 ≤ x < 4
       1/2,   when 4 ≤ x < 6
       5/6,   when 6 ≤ x < 10
       1,     when x ≥ 10

find (a) the probability distribution of X (b) the mean and variance of X.


Solution: Given that the CDF of X is

F(x) = 0,     when x < 1
       1/3,   when 1 ≤ x < 4
       1/2,   when 4 ≤ x < 6
       5/6,   when 6 ≤ x < 10
       1,     when x ≥ 10

We know that P(X = xi) = F(xi) − F(xi−1), i = 1, 2, 3, . . . , where F is constant on xi−1 ≤ x < xi.
The CDF changes its value at x = 1, 4, 6, 10. ∴ The probability distribution takes its values as follows:

P(X = 1) = F(1) − F(0) = 1/3 − 0 = 1/3
P(X = 4) = F(4) − F(1) = 1/2 − 1/3 = 1/6
P(X = 6) = F(6) − F(4) = 5/6 − 1/2 = 1/3
P(X = 10) = F(10) − F(6) = 1 − 5/6 = 1/6

∴ The probability distribution is given by

x       1    4    6    10
p(x)    1/3  1/6  1/3  1/6

Mean = E(X) = Σ x p(x) = 14/3.
E(X²) = Σ x² p(x) = 95/3.
Var(X) = E(X²) − [E(X)]² = 285/9 − 196/9 = 89/9.
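Recovering the pmf from the CDF jumps and recomputing the moments with exact fractions:

```python
from fractions import Fraction as F

# Jump points of the CDF and the CDF values there
steps = [(1, F(1, 3)), (4, F(1, 2)), (6, F(5, 6)), (10, F(1))]

pmf, prev = {}, F(0)
for x, Fx in steps:
    pmf[x] = Fx - prev       # P(X = xi) = F(xi) - F(xi-1)
    prev = Fx

EX = sum(x * p for x, p in pmf.items())
EX2 = sum(x * x * p for x, p in pmf.items())
assert pmf == {1: F(1, 3), 4: F(1, 6), 6: F(1, 3), 10: F(1, 6)}
assert (EX, EX2 - EX ** 2) == (F(14, 3), F(89, 9))
```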


Example: 5.13. If X has the distribution function

F(x) = 0,     when x < 0
       1/6,   when 0 ≤ x < 2
       1/2,   when 2 ≤ x < 4
       5/8,   when 4 ≤ x < 6
       1,     when x ≥ 6

find (a) the probability distribution of X (b) the mean and variance of X.

Solution: Given that the CDF of X is

F(x) = 0,     when x < 0
       1/6,   when 0 ≤ x < 2
       1/2,   when 2 ≤ x < 4
       5/8,   when 4 ≤ x < 6
       1,     when x ≥ 6

We know that P(X = xi) = F(xi) − F(xi−1), i = 1, 2, 3, . . . , where F is constant on xi−1 ≤ x < xi.
The CDF changes its value at x = 0, 2, 4, 6. ∴ The probability distribution takes its values as follows:

P(X = 0) = F(0) = 1/6
P(X = 2) = F(2) − F(0) = 1/2 − 1/6 = 1/3
P(X = 4) = F(4) − F(2) = 5/8 − 1/2 = 1/8
P(X = 6) = F(6) − F(4) = 1 − 5/8 = 3/8

∴ The probability distribution is given by

x       0    2    4    6
p(x)    1/6  1/3  1/8  3/8

Mean = E(X) = Σ x p(x) = 2/3 + 1/2 + 9/4 = 41/12.
E(X²) = Σ x² p(x) = 4/3 + 2 + 27/2 = 101/6.
Var(X) = E(X²) − [E(X)]² = 101/6 − (41/12)² = 16.83 − 11.67 = 5.16.

Example: 5.14. The first four moments about X = 5 are 1, 5, 12 and 48. Find the first four central moments.

Solution: Let m′1, m′2, m′3, m′4 be the first four moments about X = 5; i.e., m′1 = 1, m′2 = 5, m′3 = 12, m′4 = 48.

m′1 = E(X − 5) = 1 ⟹ E(X) − 5 = 1 ⟹ E(X) = µ = 6
µ2 = m′2 − (m′1)² = 5 − 1 = 4
µ3 = m′3 − 3m′2 · m′1 + 2(m′1)³ = 12 − 3 × 5 × 1 + 2(1) = −1
µ4 = m′4 − 4m′3 · m′1 + 6m′2 · (m′1)² − 3(m′1)⁴ = 48 − 4(12)(1) + 6(5)(1) − 3(1) = 27
Example: 5.15. The first four moments about x = 4 are 1, 4, 10 and 45. Find the first four moments about the mean.

Solution: Let m′1, m′2, m′3, m′4 be the first four moments about X = 4, so m′1 = 1, m′2 = 4, m′3 = 10, m′4 = 45.

m′1 = E(X − 4) = 1 =⇒ E(X) − 4 = 1 =⇒ E(X) = µ1 = 5
µ2 = m′2 − (m′1)² = 4 − 1 = 3
µ3 = m′3 − 3m′2 m′1 + 2(m′1)³ = 10 − 3 × 4 × 1 + 2(1) = 0
µ4 = m′4 − 4m′3 m′1 + 6m′2 (m′1)² − 3(m′1)⁴ = 45 − 4(10)(1) + 6(4)(1) − 3(1) = 26
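The moment-conversion formulas used in Examples 5.14 and 5.15 can be packaged as a small helper and checked numerically. A sketch (the function name `central_moments` is mine):

```python
def central_moments(m):
    """Given m = [m'_1, m'_2, m'_3, m'_4], the moments E[(X - a)^r] about
    any point a, return the central moments (mu2, mu3, mu4) obtained by
    expanding E[((X - a) - m'_1)^r] binomially."""
    m1, m2, m3, m4 = m
    mu2 = m2 - m1 ** 2
    mu3 = m3 - 3 * m2 * m1 + 2 * m1 ** 3
    mu4 = m4 - 4 * m3 * m1 + 6 * m2 * m1 ** 2 - 3 * m1 ** 4
    return mu2, mu3, mu4

print(central_moments([1, 5, 12, 48]))  # Example 5.14 -> (4, -1, 27)
print(central_moments([1, 4, 10, 45]))  # Example 5.15 -> (3, 0, 26)
```

Note that the reference point a drops out entirely: only the moments about a are needed.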


Example: 5.16. A random variable X has the pdf f(x) = kx² e^{−x}, x ≥ 0. Find the r-th moment and hence find the first four moments.
Solution: We know that

∫_{−∞}^{∞} f(x) dx = 1
i.e., ∫_{0}^{∞} kx² e^{−x} dx = 1
i.e., k ∫_{0}^{∞} e^{−x} x^{3−1} dx = 1
i.e., kΓ(3) = 1    [∵ Γ(n) = ∫_{0}^{∞} e^{−x} x^{n−1} dx]
i.e., k · 2! = 1 =⇒ k = 1/2.    [∵ Γ(n) = (n − 1)!]

Now, the r-th moment is given by

µ′r = E(X^r) = ∫_{−∞}^{∞} x^r f(x) dx
    = ∫_{0}^{∞} x^r · kx² e^{−x} dx
    = k ∫_{0}^{∞} e^{−x} x^{r+3−1} dx
    = (1/2) Γ(r + 3) = (1/2)(r + 2)!

Now, First Moment µ′1 = (1/2)(3)! = 3
Second Moment µ′2 = (1/2)(4)! = 12
Third Moment µ′3 = (1/2)(5)! = 60
Fourth Moment µ′4 = (1/2)(6)! = 360
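As a numerical sanity check on µ′r = (r + 2)!/2, one can integrate x^r f(x) directly. A rough sketch using a simple trapezoidal rule (the cutoff and step size are arbitrary choices of mine; the e^{−x} decay makes the truncation error negligible):

```python
import math

def f(x):
    # pdf of Example 5.16 with k = 1/2
    return 0.5 * x * x * math.exp(-x)

def moment(r, upper=80.0, n=80000):
    # crude trapezoidal rule on [0, upper]
    h = upper / n
    s = 0.5 * (0.0 + (upper ** r) * f(upper))  # endpoint terms (x = 0 gives 0)
    for i in range(1, n):
        x = i * h
        s += x ** r * f(x)
    return s * h

for r in range(1, 5):
    exact = math.factorial(r + 2) / 2      # (1/2) * (r + 2)!
    print(r, moment(r), exact)
```

The quadrature agrees with the Gamma-function values 3, 12, 60, 360 to several decimal places.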


Example: 5.17. A random variable X has the pdf f(x) = (1/2) e^{−x/2}, x ≥ 0. Find the MGF (Moment Generating Function) and hence find its mean and variance.
Solution: The MGF of X is given by

MX(t) = E(e^{tX}) = ∫_{−∞}^{∞} e^{tx} f(x) dx
      = ∫_{0}^{∞} e^{tx} (1/2) e^{−x/2} dx
      = (1/2) ∫_{0}^{∞} e^{−(1−2t)x/2} dx
      = (1/2) [ e^{−(1−2t)x/2} / (−(1 − 2t)/2) ]_{0}^{∞}
∴ MX(t) = 1/(1 − 2t), if t < 1/2
        = (1 − 2t)^{−1} = 1 + 2t + 4t² + 8t³ + · · ·

Now, differentiating w.r.t. t, we get

M′X(t) = 2 + 8t + 24t² + · · ·
M″X(t) = 8 + 48t + · · ·

Now, First Moment = Mean = E(X) = µ′1 = M′X(0) = 2
Second Moment = E(X²) = µ′2 = M″X(0) = 8
∴ Var(X) = E(X²) − [E(X)]² = 8 − 4 = 4.
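This pdf is that of an exponential random variable with rate 1/2, so a seeded simulation should reproduce the mean 2 and variance 4 found from the MGF. A minimal sketch (sample size and seed are my choices):

```python
import random

# X ~ exponential with rate 1/2, i.e. pdf (1/2)e^{-x/2}, x >= 0
random.seed(42)
n = 200_000
xs = [random.expovariate(0.5) for _ in range(n)]

mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(mean, var)   # should be close to 2 and 4
```

With 2 × 10⁵ samples the standard error of the sample mean is about 2/√n ≈ 0.0045, so agreement to two decimals is expected.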


Example: 5.18. A random variable X has the pmf p(x) = 1/2^x, x = 1, 2, 3, . . . . Find the MGF (Moment Generating Function) and hence find its mean and variance.


Solution: The MGF of X is given by

MX(t) = E(e^{tX}) = Σ_{x=1}^{∞} e^{tx} p(x)
      = Σ_{x=1}^{∞} e^{tx} (1/2^x) = Σ_{x=1}^{∞} (e^t/2)^x
      = (e^t/2) [1 + (e^t/2) + (e^t/2)² + · · ·]
      = (e^t/2) (1 − e^t/2)^{−1}
∴ MX(t) = e^t/(2 − e^t), if e^t < 2, i.e., t < log 2.

Now, differentiating w.r.t. t, we get

M′X(t) = 2e^t/(2 − e^t)²
M″X(t) = 2[(2 − e^t)e^t + 2e^{2t}]/(2 − e^t)³

Now, First Moment = Mean = E(X) = µ′1 = M′X(0) = 2
Second Moment = E(X²) = µ′2 = M″X(0) = 6
∴ Var(X) = E(X²) − [E(X)]² = 6 − 4 = 2.
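The series Σ x/2^x and Σ x²/2^x converge geometrically, so truncating them far out reproduces the moments to machine precision. A quick sketch:

```python
# p(x) = 1/2^x, x = 1, 2, 3, ...  Truncating at x = 200 leaves a tail
# smaller than 200^2 / 2^200, far below double precision.
mean = sum(x / 2 ** x for x in range(1, 201))
ex2 = sum(x * x / 2 ** x for x in range(1, 201))
var = ex2 - mean ** 2
print(mean, ex2, var)   # close to 2, 6 and 2
```

This matches E(X) = 2, E(X²) = 6, Var(X) = 2 obtained from the MGF above.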



Example: 5.19. A random variable X has the r-th moment of the form µ′r = (r + 1)! 2^r. Find the MGF (Moment Generating Function) and hence find its mean and variance.

Solution: Given µ′r = (r + 1)! 2^r

∴ µ′1 = 2! · 2
µ′2 = 3! · 2²
µ′3 = 4! · 2³
...

∴ MX(t) = E(e^{tX}) = 1 + (t/1!) µ′1 + (t²/2!) µ′2 + (t³/3!) µ′3 + · · · + (t^r/r!) µ′r + · · ·
        = 1 + (t/1!) 2! · 2 + (t²/2!) 3! · 2² + (t³/3!) 4! · 2³ + · · ·
        = 1 + 2(2t) + 3(2t)² + 4(2t)³ + · · ·
∴ MX(t) = (1 − 2t)^{−2}.

Now, differentiating w.r.t. t, we get

M′X(t) = −2(1 − 2t)^{−3}(−2) = 4(1 − 2t)^{−3}
M″X(t) = 6(1 − 2t)^{−4}(−2)² = 24(1 − 2t)^{−4}

Now, First Moment = Mean = E(X) = µ′1 = M′X(0) = 4
Second Moment = E(X²) = µ′2 = M″X(0) = 24
∴ Var(X) = E(X²) − [E(X)]² = 24 − 16 = 8.
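Instead of differentiating by hand, the derivatives of MX(t) = (1 − 2t)^{−2} at t = 0 can be approximated by finite differences to confirm µ′1 = 4 and µ′2 = 24. A sketch (step size h is my choice):

```python
# M(t) = (1 - 2t)^(-2); its derivatives at 0 are the raw moments.
def M(t):
    return (1 - 2 * t) ** (-2)

h = 1e-5
m1 = (M(h) - M(-h)) / (2 * h)            # central first difference ~ M'(0)
m2 = (M(h) - 2 * M(0) + M(-h)) / h ** 2  # central second difference ~ M''(0)
print(m1, m2)   # close to 4 and 24
```

The O(h²) truncation error of central differences keeps both estimates accurate to several digits at this step size.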



Example: 5.20. A random variable X takes the values −1, 0, 1 with probabilities 1/8, 3/4, 1/8 respectively. Evaluate P{|X − µ| ≥ 2σ} and compare it with the upper bound given by Tchebycheff's (Chebychev's) inequality.


Solution:

E(X) = 0
E(X²) = 1/4
Var(X) = 1/4, so σ = 1/2

∴ P{|X − µ| ≥ 2σ} = P{|X| ≥ 1} = P{X = −1 or X = 1} = 1/4

By Chebychev's inequality,
P{|X − µ| ≥ c} ≤ σ²/c²
Choosing c = 2σ,
P{|X − µ| ≥ 2σ} ≤ 1/4

In this problem both the values coincide.
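The coincidence of the exact probability and the Chebychev bound can be verified with exact arithmetic. A brief sketch:

```python
from fractions import Fraction as F

pmf = {-1: F(1, 8), 0: F(3, 4), 1: F(1, 8)}

mean = sum(x * p for x, p in pmf.items())                   # 0
var = sum(x * x * p for x, p in pmf.items()) - mean ** 2    # 1/4, so sigma = 1/2

# exact probability: |X - mu| >= 2*sigma means |X| >= 1 here
exact = sum(p for x, p in pmf.items() if abs(x) >= 1)

# Chebychev bound with c = 2*sigma: sigma^2 / c^2 = sigma^2 / (4*sigma^2)
bound = var / (4 * var)
print(exact, bound)
```

This is one of the rare distributions for which Chebychev's inequality is tight.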

Note: Alternative form of Chebychev's inequality: If we put c = kσ, where k > 0, then it takes the form

P{|X − µ| ≥ kσ} ≤ 1/k²
P{|X − µ| < kσ} ≥ 1 − 1/k²
Example: 5.21. A fair die is tossed 720 times. Use Tchebycheff's (Chebychev's) inequality to find a lower bound for the probability of getting 100 to 140 sixes.

Solution:

p = P{getting '6' in a single toss} = 1/6
q = 1 − 1/6 = 5/6 and n = 720

X follows a binomial distribution with mean np = 120 and variance npq = 100, i.e., µ = 120 and σ = 10.

By the alternative form of Chebychev's inequality,

P{|X − µ| ≤ kσ} ≥ 1 − 1/k²
P{|X − 120| ≤ 10k} ≥ 1 − 1/k²
P{120 − 10k ≤ X ≤ 120 + 10k} ≥ 1 − 1/k²

Choosing k = 2, we get

P{100 ≤ X ≤ 140} ≥ 3/4

∴ The required lower bound is 3/4.
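The exact binomial probability can be computed and compared with the Chebychev lower bound 3/4. A sketch using log-space summation to avoid enormous binomial coefficients (the use of `lgamma` for log C(n, k) is my implementation choice):

```python
import math

n, p = 720, 1 / 6

def log_pmf(k):
    # log of C(n, k) p^k (1-p)^(n-k), via lgamma to avoid huge intermediates
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

prob = sum(math.exp(log_pmf(k)) for k in range(100, 141))
print(prob)   # well above the Chebychev guarantee of 3/4
```

As usual, the Chebychev bound is conservative: the true probability of 100 to 140 sixes is much closer to 1 than to 3/4.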


6 Function of a Random Variable


In the analysis of electrical systems, we will often be interested in finding the properties of a signal after it has been subjected to certain processing operations by the system, such as integration, weighted averaging, etc. These signal processing operations may be viewed as transformations of a set of input variables to a set of output variables. If the input is a set of random variables (RVs), then the output will also be a set of RVs. In this chapter, we deal with techniques for obtaining the probability law (distribution) for the set of output RVs when the probability law for the set of input RVs and the nature of the transformation are known.
Finding fY(y), when fX(x) is given or known

Let us now derive a procedure to find fY(y), the pdf of Y, when Y = g(X), where X is a continuous RV with pdf fX(x) and g(x) is a strictly monotonic function of x.


Case (i): g(x) is a strictly increasing function of x.

FY(y) = P[Y ≤ y], where FY(y) is the cdf of Y
      = P[g(X) ≤ y]
      = P[X ≤ g^{−1}(y)]
      = FX[g^{−1}(y)]

Differentiating both sides with respect to y,

fY(y) = fX(x) dx/dy, where x = g^{−1}(y)    (6.1)

Case (ii): g(x) is a strictly decreasing function of x.

FY(y) = P[Y ≤ y], where FY(y) is the cdf of Y
      = P[g(X) ≤ y]
      = P[X ≥ g^{−1}(y)]
      = 1 − P[X ≤ g^{−1}(y)]
      = 1 − FX[g^{−1}(y)]

Differentiating both sides with respect to y,

fY(y) = −fX(x) dx/dy, where x = g^{−1}(y)    (6.2)
dy

Combining (6.1) and (6.2), we get

fY(y) = fX(x) |dx/dy|

i.e., fY(y) = fX(x) / |g′(x)|

Note: The above formula for fY(y) can be used only when x = g^{−1}(y) is single valued. When x = g^{−1}(y) takes finitely many values x1, x2, x3, . . . , xn, i.e., when the roots of the equation y = g(x) are x1, x2, x3, . . . , xn, the following extended formula should be used to find fY(y):

fY(y) = fX(x1) |dx1/dy| + fX(x2) |dx2/dy| + · · · + fX(xn) |dxn/dy|

or

fY(y) = fX(x1)/|g′(x1)| + fX(x2)/|g′(x2)| + · · · + fX(xn)/|g′(xn)|

Example: 6.1. Let X be a random variable with pdf f(x) = (2/9)(x + 1), −1 < x < 2. Find the pdf of the random variable Y = X².
Solution:
Divide the interval into two parts, (−1, 1) and (1, 2). Here y = x² has roots x = ±√y, with |dx/dy| = 1/(2√y).
When −1 < x < 1, both roots contribute:
fY(y) = (1/(2√y)) [(2/9)(1 + √y) + (2/9)(1 − √y)] = 2/(9√y), 0 < y < 1.
When 1 < x < 2, only x = √y contributes:
fY(y) = (1/(2√y)) · (2/9)(1 + √y) = (1 + √y)/(9√y), 1 < y < 4.
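The first branch of fY(y) can be checked by Monte Carlo: P(Y ≤ 1) should equal ∫₀¹ 2/(9√y) dy = 4/9. A sketch using rejection sampling to draw X from f(x) = (2/9)(x + 1) (the envelope constant 2/3 is the maximum of f on (−1, 2); seed and sample size are my choices):

```python
import random

random.seed(1)

def sample_x():
    # rejection sampling: f(x) = (2/9)(x+1) on (-1, 2) is bounded by 2/3
    while True:
        x = random.uniform(-1.0, 2.0)
        if random.uniform(0.0, 2.0 / 3.0) <= (2.0 / 9.0) * (x + 1.0):
            return x

n = 50_000
hits = sum(1 for _ in range(n) if sample_x() ** 2 <= 1.0)

# P(Y <= 1) = integral of 2/(9*sqrt(y)) over (0, 1) = 4/9
print(hits / n)
```

Equivalently, P(Y ≤ 1) = P(−1 < X < 1) = ∫₋₁¹ (2/9)(x + 1) dx = 4/9, so both routes agree.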
2 y 9


Example: 6.2. Let X be a random variable with pdf f(x) = x/12, 1 < x < 5. Find the pdf of the random variable Y = 2X − 3.
Solution:
fY(y) = fX(x) |dx/dy| = (1/2) · (x/12) = x/24, with x = (y + 3)/2.
When 1 < x < 5, fY(y) = (y + 3)/48, −1 < y < 7.
Example: 6.3. If X is uniformly distributed in (−π/2, π/2), find the pdf of Y = tan X.
Solution:
Given that X is uniformly distributed. ∴ fX(x) = 1/(π/2 − (−π/2)) = 1/π

Now, fY(y) = fX(x) |dx/dy| = (1/π) · 1/(1 + y²)    [since x = tan^{−1} y, dx/dy = 1/(1 + y²)]
When x ∈ (−π/2, π/2), fY(y) = 1/(π(1 + y²)), −∞ < y < ∞.
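The result (the standard Cauchy density) can be verified empirically: the CDF obtained by integrating fY(y) is FY(y) = 1/2 + tan^{−1}(y)/π, and the empirical CDF of tan X for uniform X should match it. A sketch (seed and evaluation points are my choices):

```python
import math
import random

random.seed(7)
n = 100_000
ys = [math.tan(random.uniform(-math.pi / 2, math.pi / 2)) for _ in range(n)]

def cauchy_cdf(y):
    # antiderivative of 1 / (pi * (1 + y^2))
    return 0.5 + math.atan(y) / math.pi

for y0 in (-1.0, 0.0, 1.0):
    empirical = sum(1 for y in ys if y <= y0) / n
    print(y0, empirical, cauchy_cdf(y0))
```

At y0 = −1, 0, 1 the Cauchy CDF equals 1/4, 1/2, 3/4, and the empirical fractions agree to about two decimals with 10⁵ samples.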

7 Exercise/Practice/Assignment Problems
1. Find the probability of drawing two red balls in succession from a bag containing 3 red
and 6 black balls when (i) the ball that is drawn first is replaced, (ii) it is not replaced.
2. A bag contains 3 red and 4 white balls. Two draws are made without replacement; what is the probability that both balls are red?
3. A random variable X has the following distribution
   X       -2   -1   0    1    2    3
   P[X=x]  0.1  k    0.2  2k   0.3  3k
   Find (i) the value of k, (ii) the Distribution Function (CDF), (iii) P (0 < X < 3/X < 2) and (iv) the smallest value of α for which P (X ≤ α) > 1/2.
   Ans: k = 1/15

4. A random variable X has the following distribution
   X       0   1    2    3    4
   P[X=x]  k   2k   5k   7k   9k
   Find (i) the value of k, (ii) the Distribution Function (CDF), (iii) P (0 < X < 3/X < 2) and (iv) the smallest value of α for which P (X ≤ α) > 1/3.
   Ans: k = 1/24
5. A random variable X has a pdf
   f(x) = 3x², when 0 ≤ x ≤ 1
          0, otherwise
   Find (i) a, b such that P (X ≤ a) = P (X > a) and P (X > b) = 0.05, (ii) the Distribution Function (CDF), (iii) P (0 < X < 0.5/X < 0.8).
   Ans: a = ∛(1/2), b = ∛(19/20)
20
6. The amount of bread (in hundred kgs) that a certain bakery is to sell in a day is a random variable X with a pdf
   f(x) = Ax, when 0 ≤ x < 5
          A(10 − x), when 5 ≤ x < 10
          0, otherwise
   Find (i) the value of A, (ii) the Distribution Function (CDF), (iii) P (X > 5/X < 5), P (X > 5/2.5 < X < 7.5), (iv) the probability that in a day the sales is (a) more than 500 kgs, (b) less than 500 kgs, (c) between 250 and 750 kgs.
   Ans: A = 1/25
7. The Cumulative Distribution Function (CDF) of a random variable X is given by
   F(x) = 1 − 4/x², when x > 2
          0, otherwise
   Find (i) the pdf of X, (ii) P (X > 5/X < 5), P (X > 5/2.5 < X < 7.5), (iii) P (X < 3), P (3 < X < 5).
8. A coin is tossed until a head appears. What is the expected value of the number of tosses? Also find its variance.
9. The pdf of a random variable X is given by
   f(x) = a + bx, when 0 ≤ x ≤ 1
          0, otherwise
   Find (i) the values of a, b if the mean is 1/2, (ii) the variance of X, (iii) P (X > 0.5/X < 0.5)
10. The first three moments about the origin are 5, 26, 78. Find the first three moments about the value x = 3. Ans: 2, 5, −48
11. The first two moments about x = 3 are 1 and 8. Find the mean and variance. Ans: 4, 7
12. The pdf of a random variable X is given by
    f(x) = k(1 − x), when 0 ≤ x ≤ 1
           0, otherwise
    Find (i) the value of k, (ii) the r-th moment about the origin, (iii) the mean and variance.
    Ans: k = 2
13. An unbiased coin is tossed three times. If X denotes the number of heads that appear, find the MGF of X and hence find the mean and variance.
14. Find the MGF of the distribution whose pdf is f(x) = ke^{−x}, x > 0, and hence find its mean and variance.


15. The pdf of a random variable X is given by
    f(x) = x, when 0 ≤ x ≤ 1
           2 − x, when 1 < x ≤ 2
           0, otherwise
    For this, find the MGF, show that the mean and variance cannot be found directly from this MGF, and then find the mean and variance using expectation.
16. The pdf of a random variable X is given by f(x) = ke^{−|x|}, −∞ < x < ∞. Find (i) the value of k, (ii) the r-th moment about the origin, (iii) the MGF and hence the mean and variance. Ans: k = 1/2
17. Find the MGF of the RV whose moments are given by µ′r = (2r)!. Find also its mean
and variance.
18. Use Tchebycheff's (Chebychev's) inequality to find how many times a fair coin must be tossed in order that the ratio of the number of heads to the number of tosses will lie between 0.45 and 0.55 with high probability.

Page 37 of 38 https://sites.google.com/site/lecturenotesofathithans/home
18MAB204T-Probability and Queueing Theory S. ATHITHAN

19. If X denotes the sum of the numbers obtained when 2 dice are thrown, obtain an upper
bound for P {|X − 7| ≥ 4} (Use Chebychev’s inequality). Compare with the exact
probability.
20. A RV X has mean µ = 12, variance σ² = 9 and unknown probability distribution. Find P [6 < X < 18].
21. If a RV X is uniformly distributed over (−√3, √3), compute P {|X − µ| ≥ 3σ/2}.
22. A RV X is exponentially distributed with parameter 1. Use Chebychev’s inequality to
show that P (−1 < X < 3) ≥ 3/4. Find the actual probability to compare.

23. Given a RV X with density function
    f(x) = 2x, when 0 ≤ x ≤ 1
           0, otherwise
    find the pdf of Y = 8X³.

24. If X has an exponential distribution with parameter a, find the pdf of Y = log X.
25. If X has an exponential distribution with parameter 1, find the pdf of Y = √X.
26. If X is uniformly distributed in (1, 2), find the pdf of Y = 1/X.
27. If X is uniformly distributed in (−2π, 2π), find the pdf of (i) Y = X³ and (ii) Y = X⁴.
S
TE

S OLVE /P RACTICE M ORE P ROBLEMS FROM THE T EXTBOOK AND R EFERENCE B OOKS

Figure 7.1: Values of e^{−λ}.



Contact: (+91) 979 111 666 3 (or) athithansingaram@gmail.com


Visit: https://sites.google.com/site/lecturenotesofathithans/home
