Probability Distribution
Probability distribution: In a probability distribution, the values of a random variable are distributed according to some definite probability function. More formally, any statement of a function associating each of a set of mutually exclusive and exhaustive classes or class intervals with its probability is a probability distribution.

Bernoulli trial: A random experiment whose outcomes have been classified into two categories, called 'success' and 'failure' and represented by the letters s and f respectively, is called a Bernoulli trial.

Bernoulli random variable: A discrete random variable X is called a Bernoulli random variable if it takes the value 1 when the Bernoulli trial is a success and the value 0 when the same Bernoulli trial is a failure.

Bernoulli distribution: A discrete random variable X is said to have a Bernoulli distribution if its probability function is given by

f(x; p) = Pr[X = x] = p^x q^(1−x); x = 0, 1

where p = parameter of the distribution, satisfying 0 ≤ p ≤ 1 and p + q = 1,
x = number of successes.

Mean, E(X) = p and variance, Var(X) = pq.
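The Bernoulli probability function and its moments can be checked with a minimal Python sketch; the value p = 0.3 below is a hypothetical choice for illustration.

```python
# A minimal sketch of the Bernoulli distribution: pmf, mean and variance.
def bernoulli_pmf(x, p):
    """f(x; p) = p^x * q^(1-x) for x in {0, 1}, with q = 1 - p."""
    q = 1 - p
    return p**x * q**(1 - x)

p = 0.3  # hypothetical parameter value
# E(X) = sum of x * f(x) over x = 0, 1, which reduces to p
mean = 0 * bernoulli_pmf(0, p) + 1 * bernoulli_pmf(1, p)
# Var(X) = sum of (x - mean)^2 * f(x), which reduces to pq
var = (0 - mean)**2 * bernoulli_pmf(0, p) + (1 - mean)**2 * bernoulli_pmf(1, p)
print(mean, var)  # mean = p = 0.3, variance = pq = 0.21
```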

Binomial distribution: A discrete random variable X is said to have a binomial distribution if its probability function is defined by

f(x; n, p) = Pr[X = x] = nCx p^x q^(n−x); x = 0, 1, 2, …, n

where p = the probability of success on a single Bernoulli trial, satisfying 0 ≤ p ≤ 1,
n = the number of Bernoulli trials,
x = the number of successes in n trials.

The binomial distribution satisfies the two essential properties of a probability distribution:
(i) f(x) ≥ 0, (ii) Σ f(x) = 1.
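The two essential properties can be verified numerically; this is a sketch using Python's standard library, with n = 10 and p = 0.4 as hypothetical values.

```python
from math import comb

# Sketch of the binomial pmf: f(x; n, p) = nCx * p^x * q^(n-x).
def binomial_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 0.4  # hypothetical values
probs = [binomial_pmf(x, n, p) for x in range(n + 1)]
# Property (i): every f(x) is non-negative.
assert all(f >= 0 for f in probs)
# Property (ii): the probabilities sum to 1.
print(sum(probs))
```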

Inherent conditions of binomial distribution: The binomial distribution is also known as the outcome of a Bernoulli process and is associated with the name of Jacob Bernoulli. A Bernoulli process is a random process in which:
(a) The process is performed under the same conditions for a fixed and finite number of
trials, say n.
(b) Each trial is independent of other trials.
(c) Each trial has two mutually exclusive possible outcomes such as “success” or “failure”,
“yes” or “no”, “hit” or “miss” and so on.
(d) The probability of success p remains constant from trial to trial. So the probability of
failure is q, where q=1- p.

The distribution of a random variable is called a binomial distribution if it satisfies the above inherent conditions.

Derivation of Binomial Distribution: Consider a series of n independent Bernoulli trials with probability of success p; then the probability of failure is q = 1 − p. The probability of x successes, and consequently (n − x) failures, in n independent trials in a specified order, say

SSS……S FFF……F (S repeated x times, F repeated (n − x) times),

where S and F represent success and failure, is given by the compound probability theorem as

Pr[SSS……S FFF……F] = P(S)P(S)P(S)……P(S) P(F)P(F)P(F)……P(F)
= ppp……p qqq……q = p^x q^(n−x)

But x successes in n trials can occur in nCx ways, and the probability of each of these ways is p^x q^(n−x). Hence the probability of x successes in n trials in any order is given by

f(x; n, p) = nCx p^x q^(n−x); x = 0, 1, 2, …, n
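The counting step of the derivation can be checked by brute force: enumerating every ordered outcome of n trials confirms that exactly nCx of them contain x successes. The values n = 5, p = 0.3, x = 2 below are hypothetical.

```python
from itertools import product
from math import comb

# Sketch of the derivation: enumerate every ordered outcome of n Bernoulli
# trials and confirm x successes occur in exactly nCx orderings.
n, p, x = 5, 0.3, 2  # hypothetical values
q = 1 - p
orders = [seq for seq in product("SF", repeat=n) if seq.count("S") == x]
assert len(orders) == comb(n, x)        # nCx distinct orderings
prob = len(orders) * p**x * q**(n - x)  # each ordering has probability p^x q^(n-x)
print(prob)  # f(2; 5, 0.3) = 10 * 0.09 * 0.343 = 0.3087
```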

Mean of binomial distribution: The mean of the binomial random variable X, denoted by µ or E(X), is the theoretical expected number of successes in n trials, i.e.,

µ = E(X) = Σ_{x=0}^{n} x f(x)
= 0 × f(0) + Σ_{x=1}^{n} x f(x)
= Σ_{x=1}^{n} x nCx p^x q^(n−x)
= Σ_{x=1}^{n} x [n!/(x!(n−x)!)] p^x q^(n−x)
= Σ_{x=1}^{n} x [n(n−1)!/(x(x−1)!(n−x)!)] p·p^(x−1) q^(n−x)
= np Σ_{x=1}^{n} [(n−1)!/((x−1)!(n−x)!)] p^(x−1) q^(n−x)
= np Σ_{x=1}^{n} (n−1)C(x−1) p^(x−1) q^(n−x)   [putting x − 1 = 0, 1, 2, …, n − 1]
= np(q + p)^(n−1)   [since Σ_{x=0}^{n} nCx p^x q^(n−x) = (q + p)^n]
= np   [∵ q + p = 1]

∴ µ = E(X) = np

Thus the mean of the binomial distribution is np.

The variance: The variance of the binomial random variable X measures the variation of the binomial distribution and is given by

σ² = V(X) = E(X²) − {E(X)}² = E(X²) − µ²

Now, E(X²) = E[X(X − 1) + X]
= E[X(X − 1)] + E(X)
= Σ_{x=0}^{n} x(x − 1) f(x) + np
= 0(0 − 1)f(0) + 1(1 − 1)f(1) + Σ_{x=2}^{n} x(x − 1) nCx p^x q^(n−x) + np
= Σ_{x=2}^{n} x(x − 1) [n!/(x!(n−x)!)] p^x q^(n−x) + np
= Σ_{x=2}^{n} x(x − 1) [n(n−1)(n−2)!/(x(x−1)(x−2)!(n−x)!)] p²·p^(x−2) q^(n−x) + np
= n(n − 1)p² Σ_{x=2}^{n} [(n−2)!/((x−2)!(n−x)!)] p^(x−2) q^(n−x) + np
= n(n − 1)p² Σ_{x=2}^{n} (n−2)C(x−2) p^(x−2) q^((n−2)−(x−2)) + np   [putting x − 2 = 0, 1, …, n − 2]
= n(n − 1)p²(q + p)^(n−2) + np
= n(n − 1)p² + np   [∵ q + p = 1]

So, V(X) = n(n − 1)p² + np − (np)²
= n²p² − np² + np − n²p²
= np − np²
= np(1 − p)
= npq   [∵ q + p = 1, ∴ q = 1 − p]

Therefore, variance σ² = V(X) = npq.

Thus, the standard deviation of the binomial distribution is σ = √V(X) = √(npq).
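Both derived moments, E(X) = np and V(X) = npq, can be confirmed by computing them directly from the pmf; n = 12 and p = 0.25 below are hypothetical sample values.

```python
from math import comb

# Numeric check of the derived moments: E(X) = np and Var(X) = npq,
# computed directly from the pmf rather than the closed forms.
n, p = 12, 0.25  # hypothetical values
q = 1 - p
pmf = [comb(n, x) * p**x * q**(n - x) for x in range(n + 1)]
mean = sum(x * f for x, f in enumerate(pmf))           # should equal np = 3.0
var = sum((x - mean)**2 * f for x, f in enumerate(pmf))  # should equal npq = 2.25
print(mean, var)
```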



Fitting a Binomial Distribution: When a binomial distribution is to be fitted to observed data, the following procedure is adopted:

(i) Determine the values of p and q. If one of these values is known, the other can be found from the simple relationship p = 1 − q and q = 1 − p.

(ii) Expand the binomial (p + q)^n. The power n is equal to one less than the number of terms in the expanded binomial.

(iii) Multiply each term of the expanded binomial by N (the total frequency), in order to obtain the expected frequency in each category.

It is convenient to use the following recurrence relation for fitting the binomial distribution:

f(x) = Pr[X = x] = nCx p^x q^(n−x)

f(x + 1) = Pr[X = x + 1] = nC(x+1) p^(x+1) q^(n−x−1)

Now,

f(x + 1)/f(x) = [nC(x+1) p^(x+1) q^(n−x−1)] / [nCx p^x q^(n−x)]
= {[n!/((x+1)!(n−x−1)!)] / [n!/(x!(n−x)!)]} p^(x+1−x) q^(n−x−1−n+x)
= [x!(n−x)! / ((x+1)!(n−x−1)!)] p q^(−1)
= [x!(n−x)(n−x−1)! / ((x+1)x!(n−x−1)!)] (p/q)
= p(n−x) / (q(x+1))

⇒ f(x + 1) = [p(n−x) / (q(x+1))] f(x)

When x = 0, f(1) = [p(n−0)/(q(0+1))] f(0) = n(p/q) f(0)

x = 1, f(2) = [p(n−1)/(q(1+1))] f(1) = [p(n−1)/(q(1+1))] n(p/q) f(0) = (p/q)² [n(n−1)/2!] f(0)

x = 2, f(3) = [p(n−2)/(q(2+1))] f(2) = [p(n−2)/(q(2+1))] (p/q)² [n(n−1)/2!] f(0) = (p/q)³ [n(n−1)(n−2)/3!] f(0)

and so on.

This formula provides a very convenient method for fitting the binomial distribution. The only probability we need to calculate directly is f(0), which is equal to q^n, where q can be estimated from the given data.
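The recurrence-based fitting procedure can be sketched as below; n = 4, p = 0.5 and N = 160 are hypothetical values chosen so the expected frequencies come out as round numbers.

```python
# Fitting sketch using the recurrence f(x+1) = [p(n-x) / (q(x+1))] f(x),
# starting from f(0) = q^n. The parameters below are hypothetical.
def binomial_frequencies(n, p, N):
    q = 1 - p
    f = q**n                 # the only probability computed directly: f(0) = q^n
    freqs = [N * f]
    for x in range(n):
        f *= p * (n - x) / (q * (x + 1))  # recurrence step gives f(x+1)
        freqs.append(N * f)
    return freqs

freqs = binomial_frequencies(n=4, p=0.5, N=160)
print([round(v, 1) for v in freqs])  # expected frequencies summing to N
```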

Example: The mean of a binomial distribution is 40 and the standard deviation 6. Calculate n, p
and q.

Solution: We know the mean of a binomial distribution is np and the standard deviation is √(npq).

So, np = 40 and √(npq) = 6
⇒ npq = 36
⇒ 40 × q = 36
⇒ q = 36/40 = 0.9

Now, p + q = 1
⇒ p = 1 − q = 1 − 0.9 = 0.1

Given, np = 40
⇒ n × 0.1 = 40
⇒ n = 40/0.1 = 400

Hence, for the given distribution, n = 400, p = 0.1 and q = 0.9.
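The same steps translate directly into code: dividing npq by np isolates q, and the remaining parameters follow.

```python
# Recovering n, p, q from the mean np = 40 and standard deviation sqrt(npq) = 6.
mean, sd = 40, 6
q = sd**2 / mean   # npq / np = q = 36/40 = 0.9
p = 1 - q          # p = 0.1
n = mean / p       # n = 40 / 0.1 = 400
print(n, p, q)
```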



Illustration: 4, 5, 22, 35

Normal (Gaussian) Distribution:

A continuous random variable X is said to have a normal distribution with parameters µ (mean) and σ² (variance) if its density function is given by:

y = f(x) = [1/(σ√(2π))] e^(−(1/2)((x−µ)/σ)²); −∞ < x < ∞, −∞ < µ < ∞, σ² > 0

where y = the computed height of an ordinate at a distance of x from the mean,
σ = standard deviation of the given normal distribution,
π = the constant 3.1416,
e = the constant 2.7183,
µ = mean of the given normal distribution.

In symbols, it can be expressed as X ~ N(µ, σ²).
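The density function can be written out directly as a short sketch; the inputs used in the print call are hypothetical.

```python
from math import exp, pi, sqrt

# Sketch of the normal density: f(x) = [1/(sigma*sqrt(2*pi))] * exp(-0.5*((x-mu)/sigma)**2)
def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return exp(-0.5 * z * z) / (sigma * sqrt(2 * pi))

# The ordinate is highest at x = mu, where it equals 1/(sigma*sqrt(2*pi)).
print(normal_pdf(0, 0, 1))  # peak height of the standard normal curve
```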

Shape of Normal Distribution:

If we draw the graph of the normal distribution, the curve obtained is known as the normal curve:

Figure: Symmetric bell-shaped curve (Mean = Median = Mode)

The graph of y = f(x) is a famous "bell-shaped" curve. The top of the bell is directly above the mean µ. For large values of σ the curve tends to flatten out, and for small values of σ it has a sharp peak. When we say that the curve has unit area, we mean that the total frequency N is equal to one.

Standard Normal Variate:

For a normal distribution with mean µ and standard deviation σ, the standardized variable Z is obtained as Z = (X − µ)/σ. Here Z is called a standard normal variate, which has mean zero and standard deviation one. In symbols, if X ~ N(µ, σ²) then Z ~ N(0, 1). The probability density function (p.d.f.) of the standard normal variate Z is given by

y = f(z) = [1/√(2π)] e^(−z²/2); −∞ < z < ∞
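Standardization and table look-ups can be sketched in code: the standard normal c.d.f. Φ(z) has the closed form Φ(z) = (1 + erf(z/√2))/2, which substitutes for the printed table. The µ, σ and X values used below are hypothetical.

```python
from math import erf, sqrt

# Standardizing X ~ N(mu, sigma^2) to Z ~ N(0, 1).
def z_score(x, mu, sigma):
    return (x - mu) / sigma

# Standard normal CDF via the error function: Phi(z) = (1 + erf(z/sqrt(2))) / 2.
def std_normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

z = z_score(6500, 5000, 1000)        # hypothetical X, mu, sigma
print(z, std_normal_cdf(0.0))        # z = 1.5; Phi(0) = 0.5 by symmetry
```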

Properties of Normal Distribution: The following are the important properties of the normal curve and the normal distribution:
1. The normal curve is symmetrical about the mean (skewness = 0). The number of cases below the mean in a normal distribution is equal to the number of cases above the mean, which makes the mean and median coincide.
2. For a normal distribution, the mean, median and mode are all equal.
3. There is one maximum point of the normal curve, which occurs at the mean.
4. Since there is only one maximum point, the normal curve is unimodal.
5. The points of inflection occur at x = µ ± σ, where y = f(x) = [1/(σ√(2π))] e^(−1/2).
6. The variable distributed according to the normal curve is a continuous one.
7. The first and third quartiles are equidistant from the median.
8. Mean deviation about the mean = (4/5)σ, or 0.7979 times the standard deviation.
9. If X₁ and X₂ are two independent normal variates and a₁ and a₂ are given constants, then the linear combination a₁X₁ + a₂X₂ will also follow a normal distribution.
10. All odd central moments of the normal distribution are zero, i.e., µ₂ₙ₊₁ = 0 (n = 0, 1, 2, …).
11. β₁ = 0 (the normal distribution is perfectly symmetrical) and β₂ = 3, which implies that the normal curve is neither leptokurtic nor platykurtic.
12. Mean ± σ, mean ± 2σ and mean ± 3σ cover 68.27%, 95.45% and 99.73% of the area respectively.

Importance of Normal Distribution: The normal distribution has great significance in


statistical work because of the following reasons:
1. The normal distribution has the remarkable property stated in the so-called central limit theorem, which asserts that the sample mean and sample variance tend to be normally distributed as the sample size becomes large.
2. Even if a variable is not normally distributed, it can sometimes be brought to normal form
by simple transformation of variable.
3. Many of the sampling distributions like Student’s t, F etc., also tend to normal
distribution.
4. The sampling distribution and tests of hypothesis are based upon the assumption that
samples have been drawn from a normal population with mean µ and variance 𝜎𝜎 2 .
5. Normal distribution finds large applications in statistical quality control.
6. As n becomes large, the normal distribution serves as a good approximation for many
discrete distributions.
7. In theoretical statistics, many problems can be solved only under the assumption of a
normal population.
8. The normal distribution has numerous mathematical properties which make it popular and comparatively easy to manipulate.

Fitting of Normal Distribution: In order to fit a normal distribution to given data, we first calculate the mean µ and standard deviation σ from the given data. Then the normal curve fitted to the data is given by

y = f(x) = [1/(σ√(2π))] e^(−(1/2)((x−µ)/σ)²); −∞ < x < ∞

To calculate the expected normal frequencies, we first find the standard normal variates corresponding to the lower limit of each class interval. Then, the area under the normal curve up to each standard normal variate is read from the tables. Finally, the areas for the successive class intervals are obtained by subtracting the successive areas, and multiplying these areas by N gives the normal frequencies.
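The fitting procedure above can be sketched as follows, with `erf` standing in for the printed table. The class limits, µ, σ and N below are all hypothetical illustration values.

```python
from math import erf, sqrt

# Sketch of fitting: area under the fitted normal curve up to each class limit,
# successive areas subtracted, then scaled by N. All data here are hypothetical.
def area_below(x, mu, sigma):
    z = (x - mu) / sigma                 # standard normal variate for this limit
    return 0.5 * (1 + erf(z / sqrt(2)))  # table value: area to the left of z

mu, sigma, N = 50.0, 10.0, 200
limits = [20, 30, 40, 50, 60, 70, 80]    # boundaries of six class intervals
areas = [area_below(x, mu, sigma) for x in limits]
freqs = [N * (b - a) for a, b in zip(areas, areas[1:])]  # expected frequencies
print([round(f, 1) for f in freqs])
```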

Applications of Normal Distribution: The normal distribution is mostly used for the following
purposes:
1. To approximate or 'fit' a distribution of measurements under certain conditions.
2. To approximate the binomial distribution and other discrete or continuous probability distributions under suitable conditions.
3. To approximate the distributions of means and certain other statistics calculated from samples, especially large samples.

Example 1: How many workers have a salary between Rs. 4000 and Rs. 6500, if the arithmetic mean is Rs. 5000, the standard deviation is Rs. 1000, the number of workers is 15000, and the salaries are assumed to follow the normal law?

Solution: Here given, X₁ = 4000, X₂ = 6500, µ = 5000, σ = 1000.

Now, z₁ = (X₁ − µ)/σ = (4000 − 5000)/1000 = −1.00 (left of the mean)

z₂ = (X₂ − µ)/σ = (6500 − 5000)/1000 = 1.50 (right of the mean)

From the table, we find that 34.13% of workers fall between Rs. 4000 and Rs. 5000, and 43.32% fall between Rs. 5000 and Rs. 6500.
Hence, 34.13 + 43.32 = 77.45% of workers have a salary between Rs. 4000 and Rs. 6500. So the number of workers earning between Rs. 4000 and Rs. 6500 is 0.7745 × 15000 = 11618 (approximately).
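Example 1 can be reproduced in code, using the erf-based c.d.f. in place of the printed table:

```python
from math import erf, sqrt

# Reproducing Example 1: workers with salary between 4000 and 6500.
def area_below(z):
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF

mu, sigma, N = 5000, 1000, 15000
z1 = (4000 - mu) / sigma                 # -1.00
z2 = (6500 - mu) / sigma                 #  1.50
frac = area_below(z2) - area_below(z1)   # ~0.7745 of all workers
print(round(frac * N))  # ~11618 workers
```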

Example 2: A normal curve has µ = 20 and σ = 10. Find the area between X₁ = 15 and X₂ = 40.

Solution:

Here, z₁ = (X₁ − µ)/σ = (15 − 20)/10 = −0.50

z₂ = (X₂ − µ)/σ = (40 − 20)/10 = 2.00

From the table, the areas corresponding to z₁ = −0.50 and z₂ = 2.00 are 0.1915 and 0.4772, and thus the desired area between X₁ = 15 and X₂ = 40 is 0.1915 + 0.4772 = 0.6687.

Example 3: A workshop produces 2000 units per day. The average weight of the units is 130 kg with a standard deviation of 10 kg. Assuming a normal distribution, how many units are expected to weigh less than 142 kg?

Solution: Here given, µ = 130, σ = 10, N = 2000, X = 142.

Now, z = (X − µ)/σ = (142 − 130)/10 = 1.2

The area between z = 0 and z = 1.2 is 0.3849.

Probability of a unit weighing less than 142 kg = 0.5 + 0.3849 = 0.8849
Expected number of units weighing less than 142 kg = 2000 × 0.8849 = 1769.8, or about 1770.
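Example 3 can likewise be checked in code, again replacing the table with the erf-based c.d.f.:

```python
from math import erf, sqrt

# Reproducing Example 3: units expected to weigh less than 142 kg.
def area_below(z):
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF

mu, sigma, N = 130, 10, 2000
z = (142 - mu) / sigma      # 1.2
frac = area_below(z)        # ~0.8849 = 0.5 + area between 0 and 1.2
print(round(frac * N))      # ~1770 units
```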

Illustration: 19, 25, 26, 29, 43



Normal Probability table: Page-692
