
ESTIMATION

Gauranga C. Samanta

P. G. Department of Mathematics
F. M. University, Odisha

5th Feb 2024


Estimation

POPULATION:
A population is the collection of all observations about which
conclusions are to be drawn.
SAMPLE:
A sample is a portion of the population.
PARAMETERS:
Parameters are statistical quantities computed from the population.
STATISTICS:
Statistics are statistical measures computed from the sample.

Gauranga C. Samanta F. M. University, Odisha


Estimation Continued

INFERENCE THEORY

Inference theory deals with methods of drawing valid conclusions
and predictions about a population using information contained in
a sample.
Topics to be covered
1 ESTIMATION OF PARAMETERS:
POINT ESTIMATION
INTERVAL ESTIMATION
2 TESTING OF HYPOTHESIS

Gauranga C. Samanta F. M. University, Odisha


Point Estimation

POINT ESTIMATE:
An estimate of a population parameter given by a single number is
called a point estimate.
POINT ESTIMATOR:
A point estimator is a statistic used for estimating the population
parameter θ; it is denoted by θ̂.

Gauranga C. Samanta F. M. University, Odisha


Point Estimation Continued

Definition
Let X ∼ f(x; θ) and X1, X2, · · ·, Xn be a random sample from the
population X. Any statistic that can be used to guess the
parameter θ is called an estimator of θ. The numerical value of
this statistic is called an estimate of θ. The estimator of the
parameter θ is denoted by θ̂.

Definition
Let X be a population with density function f(x; θ), where θ is
an unknown parameter. The set of all admissible values of θ is
called the parameter space, denoted by Ω; that is,
Ω = {θ ∈ R^n | f(x; θ) is a pdf} for some natural number n.

Gauranga C. Samanta F. M. University, Odisha


Point Estimation Continued

One of the basic problems is how to find an estimator of the
population parameter θ. There are several methods for finding an
estimator of θ. Some of these methods are:
1 Moment Method.
2 Maximum Likelihood Method.
3 Bayes Method.
4 Least Squares Method.
5 Minimum Chi-Square Method.
6 Minimum Distance Method.
Here, we only discuss the first two methods of estimating a
population parameter.

Gauranga C. Samanta F. M. University, Odisha


Moment Method

Let X1, X2, · · ·, Xn be a random sample from a population X with
probability density function f(x; θ1, θ2, · · ·, θm), where
θ1, θ2, · · ·, θm are m unknown parameters. Let

E[X^k] = ∫_{−∞}^{∞} x^k f(x; θ1, θ2, · · ·, θm) dx    (1)

be the k-th population moment about the origin.

Further, let

M_k = (1/n) Σ_{i=1}^{n} X_i^k    (2)

be the k-th sample moment about the origin.

Gauranga C. Samanta F. M. University, Odisha


Moment Method Continued

In the moment method, we find estimators for the parameters
θ1, θ2, · · ·, θm by equating the first m population moments (if they
exist) to the first m sample moments, that is,

E[X] = M1 = (1/n) Σ_{i=1}^{n} X_i = X̄

E[X^2] = M2 = (1/n) Σ_{i=1}^{n} X_i^2

E[X^3] = M3 = (1/n) Σ_{i=1}^{n} X_i^3

and so forth, up to

E[X^m] = M_m = (1/n) Σ_{i=1}^{n} X_i^m

Gauranga C. Samanta F. M. University, Odisha
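The sample moments M_k defined above are direct to compute. A minimal
Python sketch (the function name and data values are mine, chosen for
illustration only):

```python
# Sample moments about the origin: M_k = (1/n) * sum(X_i^k), k = 1..m
def sample_moments(xs, m):
    n = len(xs)
    return [sum(x ** k for x in xs) / n for k in range(1, m + 1)]

data = [0.2, 0.6, 0.5, 0.3]  # hypothetical observations
M1, M2 = sample_moments(data, 2)
```

Here M1 is the sample mean X̄ and M2 is the mean of the squared
observations; equating these to E[X] and E[X^2] gives the moment
equations used below.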
Moment Method Continued

The moment method is one of the classical methods for estimating
parameters; its motivation comes from the fact that the sample
moments are in some sense estimates of the population moments.
The moment method was first proposed by the British statistician
Karl Pearson in 1902.
Now we provide some examples to illustrate this method:

Example
Let X ∼ N(µ, σ²) and X1, X2, · · ·, Xn be a random sample of size
n from the population X. What are the estimators of the
population parameters µ and σ² if we use the moment method?

Gauranga C. Samanta F. M. University, Odisha


Solution

Solution: Since X ∼ N(µ, σ²), we know that E[X] = µ and
V(X) = σ².
Hence µ = E[X] = M1 = (1/n) Σ_{i=1}^{n} X_i = X̄.
So, the estimator for the parameter µ is X̄, i.e. µ̂ = X̄.
Next we find the estimator for σ² by equating E[X²] to M2. Note
that E[X²] = M2 = (1/n) Σ_{i=1}^{n} X_i², and
E[X²] = σ² + µ² = σ² + X̄² (replacing µ by its estimator X̄).
This implies
σ² + X̄² = (1/n) Σ_{i=1}^{n} X_i², so σ² = (1/n) Σ_{i=1}^{n} X_i² − X̄²
= (1/n) Σ_{i=1}^{n} (X_i − X̄)².
Thus the estimator for σ² is
σ̂² = (1/n) Σ_{i=1}^{n} (X_i − X̄)².

Gauranga C. Samanta F. M. University, Odisha
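The two moment estimators derived above can be checked numerically; a
minimal sketch (function name and data are mine, for illustration):

```python
# Moment estimators for N(mu, sigma^2):
# mu_hat = M1 = sample mean; sigma2_hat = M2 - M1^2 = (1/n) sum (X_i - X̄)^2
def mom_normal(xs):
    n = len(xs)
    m1 = sum(xs) / n                 # first sample moment
    m2 = sum(x * x for x in xs) / n  # second sample moment
    return m1, m2 - m1 * m1

xs = [4.0, 6.0, 5.0, 5.0]  # hypothetical sample
mu_hat, sigma2_hat = mom_normal(xs)
```

Note that sigma2_hat divides by n, not n − 1: the moment estimator of
σ² is the (biased) population-style variance of the sample.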


Solution Continued

(1/n) Σ_{i=1}^{n} (X_i − X̄)² = (1/n) Σ_{i=1}^{n} (X_i² − 2X_i X̄ + X̄²)
= (1/n) Σ_{i=1}^{n} X_i² − 2X̄ (1/n) Σ_{i=1}^{n} X_i + (1/n) Σ_{i=1}^{n} X̄²
= (1/n) Σ_{i=1}^{n} X_i² − 2X̄·X̄ + X̄²
= (1/n) Σ_{i=1}^{n} X_i² − X̄²

Gauranga C. Samanta F. M. University, Odisha
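The algebraic identity above is easy to confirm numerically; a short
check on hypothetical data:

```python
# Check that (1/n) Σ (X_i - X̄)^2 equals Σ X_i^2 / n - X̄^2
xs = [1.0, 2.0, 4.0, 7.0]  # hypothetical data
n = len(xs)
xbar = sum(xs) / n
lhs = sum((x - xbar) ** 2 for x in xs) / n
rhs = sum(x * x for x in xs) / n - xbar ** 2
```

Both sides evaluate to the same number (5.25 for this data), as the
expansion of the square predicts.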


Moment Method Continued

Example
Let X1, X2, · · ·, Xn be a random sample of size n from a
population X with probability density function

f(x; θ) = θx^{θ−1} if 0 < x < 1, and 0 otherwise,

where 0 < θ < ∞ is an unknown parameter. Using the method of
moments, find an estimator of θ. If
x1 = 0.2, x2 = 0.6, x3 = 0.5, x4 = 0.3 is a random sample of size 4,
then what is the estimate of θ?
Hint: To find an estimator, equate the first population
moment to the first sample moment.
ANS: θ̂ = X̄/(1 − X̄); θ̂ = 2/3

Gauranga C. Samanta F. M. University, Odisha


Solution

Solution: The population moment E[X] is given by
E[X] = ∫_0^1 x f(x; θ) dx = ∫_0^1 x·θx^{θ−1} dx = θ/(1 + θ).
We know M1 = X̄ = E[X]. This implies that X̄ = θ/(1 + θ),
so θ̂ = X̄/(1 − X̄).
Given x1 = 0.2, x2 = 0.6, x3 = 0.5, x4 = 0.3, we have x̄ = 0.4; then
θ̂ = 0.4/(1 − 0.4) = 2/3.

Gauranga C. Samanta F. M. University, Odisha
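The numerical estimate at the end of this solution takes two lines in
Python, using the sample given in the example:

```python
# Moment estimate theta_hat = X̄ / (1 - X̄) for f(x; θ) = θ x^(θ-1) on (0, 1)
xs = [0.2, 0.6, 0.5, 0.3]  # the sample given in the example
xbar = sum(xs) / len(xs)
theta_hat = xbar / (1 - xbar)
```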


Moment Method Continued

Example
Suppose X1, X2, · · ·, X7 is a random sample from a population X
with density function

f(x; β) = x^6 e^{−x/β} / (Γ(7) β^7) if 0 < x < ∞, and 0 otherwise.

Find an estimator of β by the moment method.
Hint: Since we have only one parameter, we need to compute only
the first population moment E[X] about 0.
ANS: β̂ = X̄/7

Gauranga C. Samanta F. M. University, Odisha
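This density is a gamma density with shape 7 and scale β, for which
E[X] = 7β, giving the answer β̂ = X̄/7. A sketch on a hypothetical
sample of size 7 (the data values are mine):

```python
# For the gamma density with shape 7 and scale β, E[X] = 7β,
# so the moment estimator is beta_hat = X̄ / 7
xs = [10.5, 14.0, 7.0, 12.6, 8.4, 9.1, 11.9]  # hypothetical sample, n = 7
beta_hat = sum(xs) / len(xs) / 7
```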


Moment Method Continued

Example
Suppose X1, X2, · · ·, Xn is a random sample from a population X
with density function

f(x; θ) = 1/θ if 0 < x < θ, and 0 otherwise.

Find an estimator of θ by the moment method.

ANS: θ̂ = 2X̄

Gauranga C. Samanta F. M. University, Odisha
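For this uniform density, E[X] = θ/2, so equating to X̄ gives θ̂ = 2X̄.
A one-line sketch on hypothetical data:

```python
# Uniform(0, θ): E[X] = θ/2, so the moment estimator is theta_hat = 2 X̄
xs = [1.2, 3.4, 0.7, 2.9, 1.8]  # hypothetical sample
theta_hat = 2 * sum(xs) / len(xs)
```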


Moment Method Continued

Example
Suppose X1, X2, · · ·, Xn is a random sample from a population X
with density function

f(x; α, β) = 1/(β − α) if α < x < β, and 0 otherwise.

Find estimators of α and β by the moment method.
Hint: Since the distribution has two unknown parameters, we
need the first two population moments.
ANS: α̂ = X̄ − √((3/n) Σ_{i=1}^{n} (X_i − X̄)²),
β̂ = X̄ + √((3/n) Σ_{i=1}^{n} (X_i − X̄)²)

Gauranga C. Samanta F. M. University, Odisha
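The two-parameter answer follows from mean (α + β)/2 and variance
(β − α)²/12 of the uniform distribution. A sketch (function name and
data are mine):

```python
# Uniform(α, β): mean (α+β)/2, variance (β-α)²/12. Solving the first two
# moment equations gives α̂ = X̄ - sqrt(3 s²), β̂ = X̄ + sqrt(3 s²),
# where s² = (1/n) Σ (X_i - X̄)²
import math

def mom_uniform(xs):
    n = len(xs)
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / n
    half_width = math.sqrt(3 * s2)
    return xbar - half_width, xbar + half_width

a_hat, b_hat = mom_uniform([2.0, 4.0, 6.0, 8.0])  # hypothetical sample
```

By construction the estimated interval is centered at X̄, with
half-width √(3 s²).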


Moment Method Continued

Example
Suppose X1, X2, · · ·, Xn is a random sample from a population X
with density function

f(x; θ) = 1/(2θ) if −θ < x < θ, and 0 otherwise.

Find an estimator of θ by the moment method.

ANS: θ̂ = √((3/n) Σ_{i=1}^{n} X_i²)

Gauranga C. Samanta F. M. University, Odisha


Solution

Solution: Equating the first population moment to the first
sample moment gives E[X] = M1 = (1/n) Σ X_i = X̄.
But here E[X] = 0, so the first moment carries no information
about θ and we have to use the second moment:
E[X²] = M2 = (1/n) Σ X_i².
We know V(X) = E[X²] − (E[X])², which implies
E[X²] = V(X) = (2θ)²/12. This implies
(2θ)²/12 = (1/n) Σ X_i², hence
θ̂ = √((3/n) Σ_{i=1}^{n} X_i²)

Gauranga C. Samanta F. M. University, Odisha
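The estimator from this solution in code, on a hypothetical sample
(the data values are mine):

```python
# Uniform(−θ, θ): E[X] = 0 and E[X²] = θ²/3, so theta_hat = sqrt(3 Σ X_i² / n)
import math

xs = [-1.0, 0.5, 1.5, -0.5, 0.0]  # hypothetical sample
theta_hat = math.sqrt(3 * sum(x * x for x in xs) / len(xs))
```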


Maximum Likelihood Method

Let X1, X2, · · ·, Xn be a random sample from a population X
with probability density function f(x; θ), where θ is an
unknown parameter.
The likelihood function, L(θ), is the joint distribution of the
sample. That is,

L(θ) = Π_{i=1}^{n} f(x_i; θ)

This definition says that the likelihood function of a random
sample X1, X2, · · ·, Xn is the joint density of the random
variables X1, X2, · · ·, Xn.
The value of θ that maximizes the likelihood function L(θ) is
called the maximum likelihood estimator of θ, and it is denoted
by θ̂.

Gauranga C. Samanta F. M. University, Odisha


Maximum Likelihood Method Continued

The method is summarized below:

1 Obtain a random sample X1, X2, · · ·, Xn from the distribution
of a population X with probability density function f(x; θ).
2 Define the likelihood function for the sample x1, x2, · · ·, xn by
L(θ) = f(x1; θ)f(x2; θ) · · · f(xn; θ).
3 Find the expression for θ that maximizes L(θ). This can be
done directly or by maximizing ln L(θ).
4 Replace θ by θ̂ to obtain an expression for the maximum
likelihood estimator of θ.
5 Find the observed value of this estimator for a given sample.

Gauranga C. Samanta F. M. University, Odisha
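The steps above can be sketched numerically. As a worked case I reuse
the earlier density f(x; θ) = θx^{θ−1} on (0, 1), where
ln L(θ) = n ln θ + (θ − 1) Σ ln x_i and setting the derivative to zero
gives the closed form θ̂ = −n / Σ ln x_i; the grid search stands in for
step 3 when no closed form is available (grid bounds and step size are
my choices):

```python
# Maximum likelihood, sketched for f(x; θ) = θ x^(θ-1) on (0, 1)
import math

def log_likelihood(theta, xs):
    # ln L(θ) = n ln θ + (θ - 1) Σ ln x_i
    return len(xs) * math.log(theta) + (theta - 1) * sum(math.log(x) for x in xs)

xs = [0.2, 0.6, 0.5, 0.3]  # sample reused from the moment-method example
# Step 3, done numerically: maximize ln L(θ) over a fine grid
grid = [k / 1000 for k in range(1, 5000)]
theta_grid = max(grid, key=lambda t: log_likelihood(t, xs))
# Closed-form maximizer, for comparison
theta_closed = -len(xs) / sum(math.log(x) for x in xs)
```

For this sample both approaches give θ̂ ≈ 0.996; note the MLE differs
from the moment estimate (2/3) for the same data.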


Maximum Likelihood Method Continued

Example
If X1 , X2 , · · · , Xn is a random sample from a distribution with
density function
f(x; θ) = (1 − θ) x^{−θ} if 0 < x < 1, and 0 otherwise

what is the maximum likelihood estimator of θ?


ANS: θ̂ = 1 + n / Σ_{i=1}^{n} ln X_i

Gauranga C. Samanta F. M. University, Odisha




Solution

The likelihood function is L(θ) = ∏_{i=1}^{n} f(x_i; θ), so
ln L(θ) = Σ_{i=1}^{n} ln f(x_i; θ) = Σ_{i=1}^{n} ln[(1 − θ) x_i^{−θ}]
= n ln(1 − θ) − θ Σ_{i=1}^{n} ln x_i
Setting d ln L(θ)/dθ = −n/(1 − θ) − Σ_{i=1}^{n} ln x_i = 0 gives
1 − θ = −n / Σ_{i=1}^{n} ln x_i, that is, θ̂ = 1 + n / Σ_{i=1}^{n} ln X_i

Gauranga C. Samanta F. M. University, Odisha
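The closed form θ̂ = 1 + n/Σ ln x_i can be sanity-checked by simulation. The inverse-CDF sampler below (X = U^{1/(1−θ)}, obtained from F(x) = x^{1−θ}) and the sample size are assumptions added for the check:

```python
import math
import random

def theta_mle(xs):
    # theta-hat = 1 + n / sum(ln x_i), from setting d ln L / d theta = 0
    return 1 + len(xs) / sum(math.log(x) for x in xs)

random.seed(2)
theta_true = 0.5
# inverse-CDF sampling (an added assumption): F(x) = x^(1 - theta) on (0, 1),
# so X = U^(1 / (1 - theta)) with U uniform on (0, 1)
xs = [random.random() ** (1 / (1 - theta_true)) for _ in range(2000)]

theta_hat = theta_mle(xs)
print(theta_hat)  # close to the true value 0.5
```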




Maximum Likelihood Method Continued

Example
Suppose X1 , X2 , · · · , Xn is a random sample from a population X
with density function
f(x; β) = x^6 e^{−x/β} / (Γ(7) β^7) if 0 < x < ∞, and 0 otherwise

Find the maximum likelihood estimator of β.



ANS: β̂ = X̄/7

Gauranga C. Samanta F. M. University, Odisha




Solution

The likelihood function is L(β) = ∏_{i=1}^{n} f(x_i; β), so
ln L(β) = 6 Σ_{i=1}^{n} ln x_i − (1/β) Σ_{i=1}^{n} x_i − n ln(6!) − 7n ln β
Setting d ln L(β)/dβ = (1/β²) Σ_{i=1}^{n} x_i − 7n/β = 0 gives
β̂ = (1/(7n)) Σ_{i=1}^{n} X_i = X̄/7

Gauranga C. Samanta F. M. University, Odisha
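A quick numerical check of β̂ = X̄/7; the gamma sampler, the sample size, and the search grid below are assumptions for illustration:

```python
import math
import random

def log_likelihood(beta, xs):
    # ln L(beta) = 6*sum(ln x_i) - sum(x_i)/beta - n*ln(6!) - 7*n*ln(beta)
    n = len(xs)
    return (6 * sum(math.log(x) for x in xs) - sum(xs) / beta
            - n * math.log(math.factorial(6)) - 7 * n * math.log(beta))

random.seed(3)
beta_true = 2.0
xs = [random.gammavariate(7, beta_true) for _ in range(400)]  # shape 7, scale beta

beta_closed = sum(xs) / (7 * len(xs))              # closed form: beta-hat = x-bar / 7
grid = [0.01 * k for k in range(50, 501)]          # beta in [0.50, 5.00]
beta_grid = max(grid, key=lambda b: log_likelihood(b, xs))
print(beta_closed, beta_grid)  # both near the true beta = 2
```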




Maximum Likelihood Method Continued

Remark: Note that the maximum likelihood estimator of β is


the same as the one found for β using the method of moments in a
previous example. However, in general the estimators obtained by
different methods differ, as the following example illustrates.
Example
Suppose X1 , X2 , · · · , Xn is a random sample from a population X
with density function
f(x; θ) = 1/θ if 0 < x < θ, and 0 otherwise

Find an estimator of θ by MLE.

ANS: θ̂ = X(n) = max{Xi }

Gauranga C. Samanta F. M. University, Odisha




Solution

Hint:
For θ ≥ max{x1 , · · · , xn }, L(θ) = 1/θ^n , so
d ln L(θ)/dθ = −n/θ < 0.
Thus L(θ) is decreasing in θ, and it is maximized at the smallest
admissible value, θ̂ = X(n) .

Gauranga C. Samanta F. M. University, Odisha




Maximum Likelihood Method Continued

Note:
If we estimate the parameter θ of a distribution with uniform
density on the interval (0, θ), then the maximum likelihood
estimator is given by
θ̂ = X(n) ,
whereas
θ̂ = 2X̄
is the estimator obtained by the method of moments.
Hence, in general these two methods do not provide the same
estimator of an unknown parameter.

Gauranga C. Samanta F. M. University, Odisha
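A small simulation makes the difference between the two estimators concrete; the parameter value, sample size, and mean-squared-error comparison below are illustrative assumptions, not claims from the slides:

```python
import random

random.seed(4)
theta_true = 10.0
n, reps = 100, 1000

mom_sq_err, mle_sq_err = [], []
for _ in range(reps):
    xs = [random.uniform(0, theta_true) for _ in range(n)]
    xbar = sum(xs) / n
    mom_sq_err.append((2 * xbar - theta_true) ** 2)  # method of moments: 2 * x-bar
    mle_sq_err.append((max(xs) - theta_true) ** 2)   # MLE: largest observation

mse_mom = sum(mom_sq_err) / reps
mse_mle = sum(mle_sq_err) / reps
print(mse_mom, mse_mle)  # the MLE is far more accurate here
```

Although both are reasonable estimators, the MLE X(n) concentrates much faster around θ than 2X̄ does for this family.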




Maximum Likelihood Method Continued

Example
Suppose X1 , X2 , · · · , Xn is a random sample from a normal
population with mean µ and variance σ 2 . What are the maximum
likelihood estimators of µ and σ 2 ?

ANS: µ̂ = X̄ and σ̂² = (1/n) Σ_{i=1}^{n} (Xi − X̄)²

Gauranga C. Samanta F. M. University, Odisha




Maximum Likelihood Method Continued

Example
Suppose X1 , X2 , · · · , Xn is a random sample from a population X
with density function
f(x; α, β) = 1/(β − α) if α < x < β, and 0 otherwise

Find estimators of α and β by MLE.

ANS: α̂ = X(1) and β̂ = X(n)

Gauranga C. Samanta F. M. University, Odisha




Maximum Likelihood Method Continued

Example
Suppose X1 , X2 , · · · , Xn is a random sample from a population X
with density function
f(x; θ) = √(2/π) e^{−(x−θ)²/2} if x ≥ θ, and 0 otherwise

What is the MLE of θ?


ANS: θ̂ = X(1)

Gauranga C. Samanta F. M. University, Odisha




Solution

Solution: L(θ) = (2/π)^{n/2} ∏_{i=1}^{n} e^{−(x_i − θ)²/2}
ln L(θ) = (n/2) ln(2/π) − (1/2) Σ_{i=1}^{n} (x_i − θ)²
d ln L(θ)/dθ = Σ_{i=1}^{n} (x_i − θ) = 0 implies θ = X̄. Unfortunately
this is not an admissible estimator for θ: the density requires θ ≤ x_i
for every i, so on the admissible range θ ≤ x(1) the derivative
Σ (x_i − θ) ≥ 0 and ln L(θ) is increasing. Hence the maximum is
attained at θ̂ = X(1) .

Gauranga C. Samanta F. M. University, Odisha




Maximum Likelihood Method Continued
Note: The following example shows that the maximum likelihood
estimator of a parameter is not necessarily unique.
Example
If X1 , X2 , · · · , Xn is a random sample from a distribution with
density function
f(x; θ) = 1/2 if θ − 1 < x < θ + 1, and 0 otherwise,

then what is the maximum likelihood estimator of θ?


ANS: The likelihood function of this sample is given by
L(θ) = (1/2)^n if max{x1 , x2 , · · · , xn } − 1 ≤ θ ≤ min{x1 , x2 , · · · , xn } + 1,
and 0 otherwise.
Since the likelihood function is a constant, any value in the interval
[max{x1 , x2 , · · · , xn } − 1, min{x1 , x2 , · · · , xn } + 1] is a maximum
likelihood estimate of θ.
Gauranga C. Samanta F. M. University, Odisha
Maximum Likelihood Method Continued
Theorem
Let θ̂ be a maximum likelihood estimator of a parameter θ and let
g (θ) be a function of θ. Then the maximum likelihood estimator
of g (θ) is given by g (θ̂).

Now we give two examples to illustrate the importance of this


theorem.
Example
Suppose X1 , X2 , · · · , Xn is a random sample from a population X
with density function
f(x; α, β) = 1/(β − α) if α < x < β, and 0 otherwise
Find an estimator of √(α² + β²) by MLE.

ANS: √(X(1)² + X(n)²)
Gauranga C. Samanta F. M. University, Odisha
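For the uniform example above, the invariance property says we may simply plug the MLEs X(1) and X(n) into g. A quick check (the values of α and β below are assumed for illustration):

```python
import math
import random

random.seed(8)
alpha_true, beta_true = 2.0, 5.0  # assumed values for the illustration
xs = [random.uniform(alpha_true, beta_true) for _ in range(1000)]

alpha_hat, beta_hat = min(xs), max(xs)             # MLEs of alpha and beta
g_hat = math.sqrt(alpha_hat ** 2 + beta_hat ** 2)  # invariance: plug in the MLEs
print(g_hat)  # near sqrt(2^2 + 5^2) = sqrt(29)
```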
Maximum Likelihood Method Continued

Overall concluding remarks about the MLE:


The idea is to choose a value of the parameter for which the
observed data have as high a probability or density as possible.
In other words, a maximum likelihood estimate is a parameter
value under which the sample data have the highest
probability.

Gauranga C. Samanta F. M. University, Odisha




CRITERIA FOR EVALUATING THE GOODNESS OF
ESTIMATORS

In general, different parameter estimation methods yield


different estimators.
For example, if X ∼ U(0, θ) and X1 , X2 , · · · , Xn is a random
sample from the population X , then the estimator of θ
obtained by the moment method is θ̂MM = 2X̄ , whereas the
estimator obtained by the maximum likelihood method is
θ̂ML = X(n)
where X̄ and X(n) are the sample average and the nth order
statistic, respectively.
Now the question arises: which of the two estimators is
better?

Gauranga C. Samanta F. M. University, Odisha




Unbiased Estimator

We need some criteria to evaluate the goodness of an estimator.


Some well known criteria for evaluating the goodness of an
estimator are:
1 Unbiasedness
2 Efficiency and Relative Efficiency
3 Uniform Minimum Variance
4 Sufficiency, and
5 Consistency.
Here, we will discuss only the first one.

Gauranga C. Samanta F. M. University, Odisha




Unbiased Estimator Continued

Definition
An estimator θ̂ of θ is said to be an unbiased estimator of θ if and
only if

E (θ̂) = θ

If θ̂ is not unbiased, then it is called a biased estimator of θ.

Gauranga C. Samanta F. M. University, Odisha


Unbiased Estimator Continued

Example
Let X1 , X2 , · · · , Xn be a random sample from a normal population
with mean µ and variance σ 2 > 0. Is the sample mean X̄ an
unbiased estimator of the parameter µ?

ANS: Yes
E [X̄ ] = E [(X1 + X2 + · · · + Xn )/n] = (1/n) E [X1 + · · · + Xn ] = (1/n)(nµ) = µ

Example
Let X1 , X2 , · · · , Xn be a random sample from a normal population
with mean µ and variance σ 2 > 0. What is the maximum
likelihood estimator of σ 2 ? Is this maximum likelihood estimator
an unbiased estimator of the parameter σ 2 ?

Gauranga C. Samanta F. M. University, Odisha




Solution

Solution: σ̂² = (1/n) Σ_{i=1}^{n} (Xi − X̄)²
E[σ̂²] = E[(1/n) Σ_{i=1}^{n} (Xi − X̄)²]
= ((n−1)/n) E[(1/(n−1)) Σ_{i=1}^{n} (Xi − X̄)²] = ((n−1)/n) E[S²]
= (1/n) σ² E[(n−1)S²/σ²] = (1/n) σ² E[χ²_{n−1}]
= (1/n) σ² (n − 1)
So E[σ̂²] = ((n−1)/n) σ² ≠ σ². Therefore (1/n) Σ_{i=1}^{n} (Xi − X̄)² is a
biased estimator for σ². However, the sample variance
S² = (1/(n−1)) Σ_{i=1}^{n} (Xi − X̄)² is an unbiased estimator for σ².

Gauranga C. Samanta F. M. University, Odisha
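The bias factor (n−1)/n derived above is easy to see by Monte Carlo; the normal population, the sample size n = 5, and the repetition count below are assumptions for the experiment:

```python
import random

random.seed(5)
mu, sigma2 = 0.0, 4.0   # assumed population: normal with variance 4
n, reps = 5, 20000

biased, unbiased = [], []
for _ in range(reps):
    xs = [random.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    biased.append(ss / n)          # MLE sigma-hat^2: divides by n
    unbiased.append(ss / (n - 1))  # sample variance S^2: divides by n - 1

mean_biased = sum(biased) / reps
mean_unbiased = sum(unbiased) / reps
print(mean_biased, mean_unbiased)  # near (n-1)/n * sigma^2 = 3.2 and sigma^2 = 4.0
```

With n as small as 5 the downward bias of the 1/n estimator is clearly visible, while the 1/(n−1) version averages to σ².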




Unbiased Estimator Continued

Example
Let X1 , X2 , · · · , Xn be a random sample from a population with
mean µ and variance σ 2 > 0. Is the sample variance S 2 an
unbiased estimator of the population variance σ 2 ?

ANS: Yes.
E[S²] = E[(1/(n−1)) Σ_{i=1}^{n} (Xi − X̄)²]
= (σ²/(n−1)) E[(n−1)S²/σ²] = (σ²/(n−1)) E[χ²_{n−1}]
= (σ²/(n−1)) (n − 1) = σ²
Therefore E[S²] = σ², so S² is an unbiased estimator for σ².

Gauranga C. Samanta F. M. University, Odisha




Alternative Solution

E[S²] = E[(1/(n−1)) Σ_{i=1}^{n} (Xi − X̄)²]
= (1/(n−1)) E[Σ_{i=1}^{n} (Xi² − 2Xi X̄ + X̄²)]
= (1/(n−1)) E[Σ Xi² − nX̄²]
= (1/(n−1)) [n(σ² + µ²) − n(µ² + σ²/n)]
(since V(X̄) = σ²/n and V(X̄) = E[X̄²] − E[X̄]², which implies
E[X̄²] = σ²/n + µ²)
= ((n−1)/(n−1)) σ² = σ²
Hence, S² is an unbiased estimator for σ².

Gauranga C. Samanta F. M. University, Odisha




Unbiased Estimator Continued

Example
Let X be a random variable with mean 2. Let θˆ1 and θˆ2 be
unbiased estimators of the second and third moments, respectively,
of X about the origin. Find an unbiased estimator of the third
moment of X about its mean in terms of θˆ1 and θˆ2 .

Solution:
Given that θ̂1 and θ̂2 are unbiased estimators of the second and third
moments, that is,
E[θ̂1 ] = E[X²] and E[θ̂2 ] = E[X³].
The third moment of X about its mean is
E[(X − 2)³] = E[X³ − 6X² + 12X − 8] = E[X³] − 6E[X²] + 12E[X] − 8
= E[θ̂2 ] − 6E[θ̂1 ] + 24 − 8 = E[θ̂2 − 6θ̂1 + 16].
Hence θ̂2 − 6θ̂1 + 16 is an unbiased estimator of the 3rd moment of X
about its mean.
Gauranga C. Samanta F. M. University, Odisha
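The combination θ̂2 − 6θ̂1 + 16 can also be checked by simulation. With a single observation, X² and X³ are themselves unbiased for the second and third raw moments; the N(2, 1) population below (mean 2, third central moment 0) is an assumption added for the check:

```python
import random

random.seed(7)
reps = 100_000
# Assumed population for the check: X ~ N(2, 1), whose third central
# moment E[(X - 2)^3] is 0. With one observation, X^2 and X^3 are
# unbiased for the raw moments, so X^3 - 6*X^2 + 16 plays the role of
# theta2-hat - 6*theta1-hat + 16 and should average to 0.
vals = []
for _ in range(reps):
    x = random.gauss(2, 1)
    vals.append(x ** 3 - 6 * x ** 2 + 16)

avg = sum(vals) / reps
print(avg)  # close to 0
```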
Unbiased Estimator Continued

Example
Let X be a random variable with mean 2. Let θˆ1 and θˆ2 be
unbiased estimators of the second and third moments, respectively,
of X about the origin. Find an unbiased estimator of the third
moment of X about its mean in terms of θˆ1 and θˆ2 .

Solution:
Given θ̂1 and θ̂2 are an unbiased estimator of the second and third
moments, that is
E [θ̂1 ] = E [X 2 ] and E [θ̂2 ] = E [X 3 ]
The unbiased estimator of the third order moment of X about its
mean is
E [(X −2)3 ] = E [X 3 −6X 2 +12X −8] = E [X 3 ]−6E [X 2 ]+12E [X ]−8
= θ̂2 − 6θ̂1 + 24 − 8
= θ̂2 − 6θ̂1 + 16 is an unbiased estimator of the 3rd moment of X
about its mean.
Gauranga C. Samanta F. M. University, Odisha
Unbiased Estimator Continued

Example
Let X₁, X₂, · · · , Xₙ be a sample of size n from a distribution with
unknown mean −∞ < µ < ∞ and unknown variance σ² > 0. Show that the
statistics X̄ and Y = (X₁ + 2X₂ + · · · + nXₙ)/(n(n+1)/2) are both
unbiased estimators of µ.

Example
Let X₁, X₂, · · · , X₅ be a sample of size 5 from a uniform
distribution on the interval (0, θ), where θ is unknown. Let the
estimator of θ be kX_max, where k is some constant and X_max is the
largest observation. For kX_max to be an unbiased estimator,
what should be the value of the constant k?

ANS: k = 6/5

Gauranga C. Samanta F. M. University, Odisha


Solution

Solution:
F_{X_max}(t) = P(X_max ≤ t)
             = P(Max{X_i} ≤ t)
             = Π_{i=1}^{5} P(X_i ≤ t)
             = ( ∫₀ᵗ (1/θ) dx )⁵
             = t⁵/θ⁵
The pdf of X_max is f(t) = 5t⁴/θ⁵, 0 < t < θ.
E[θ̂] = E[kX_max] = kE[X_max] = k ∫₀^θ t · (5t⁴/θ⁵) dt = (5/6)kθ
Setting (5/6)kθ = θ gives k = 6/5.
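A Monte Carlo sketch of this answer; θ = 10 and the replication count are arbitrary choices made for illustration.

```python
# Monte Carlo check that (6/5) * X_max is unbiased for theta when
# X_1,...,X_5 ~ U(0, theta); illustrative choice theta = 10.
import random

random.seed(1)
theta, reps = 10.0, 50000
total = 0.0
for _ in range(reps):
    xmax = max(random.uniform(0, theta) for _ in range(5))
    total += (6 / 5) * xmax
estimate = total / reps
print(round(estimate, 2))  # close to theta = 10
```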

Gauranga C. Samanta F. M. University, Odisha


Good Estimator

Definition
Let θ̂1 and θ̂2 be two unbiased estimators for θ. The estimator θ̂1
is said to be more efficient than θ̂2 if Var (θ̂1 ) < Var (θ̂2 ).
Remark: The ratio η(θ̂₁, θ̂₂) = Var(θ̂₂)/Var(θ̂₁) is called the
relative efficiency of θ̂₁ with respect to θ̂₂.

Example
Let X₁, X₂, X₃ be a random sample of size 3 from the population
with mean µ and variance σ². If the statistics X̄ and Y given by
Y = (X₁ + 2X₂ + 3X₃)/6 are two unbiased estimators of the population
mean µ, then which one is more efficient?
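This comparison can be done exactly, since Var(X̄) = σ²/3 and Var(Y) = (1² + 2² + 3²)σ²/36; a short sketch in units of σ², using exact rational arithmetic:

```python
# Variances of the two unbiased estimators, in units of sigma^2:
# Var(Xbar) = 1/3, Var(Y) = (1^2 + 2^2 + 3^2)/6^2 = 7/18.
from fractions import Fraction

var_xbar = Fraction(1, 3)
var_y = Fraction(1**2 + 2**2 + 3**2, 6**2)
print(var_xbar < var_y)    # True: Xbar is the more efficient estimator
print(var_y / var_xbar)    # relative efficiency of Xbar w.r.t. Y: 7/6
```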

Gauranga C. Samanta F. M. University, Odisha


Challenging Questions

Example
Let X ∼ U(0, 1/θ) and X₁, X₂, · · · , Xₙ be a random sample of size
n. Find the MLE and verify whether the estimator is biased or
unbiased.

ANS: E[θ̂] = (n/(n−1))θ, so the MLE is biased.

Example
Let X ∼ U(0, θ) and X₁, X₂, · · · , X₅ be the random sample. Find the
MLE and verify whether the estimator is biased or unbiased.

ANS: E[θ̂] = (5/6)θ, so the MLE is biased.

Example
Let X ∼ U(2θ, 3θ) and X₁, X₂, · · · , Xₙ be a random sample of
size n. Find the MLE.
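For the U(0, θ) case the MLE is X_max, whose bias E[X_max] = nθ/(n+1) can be seen in a quick Monte Carlo; θ = 1, n = 5, and the replication count are illustrative choices.

```python
# The MLE of theta for U(0, theta) is X_max, which is biased:
# E[X_max] = n*theta/(n+1).  Monte Carlo sketch with n = 5, theta = 1.
import random

random.seed(2)
n, theta, reps = 5, 1.0, 50000
total = 0.0
for _ in range(reps):
    total += max(random.uniform(0, theta) for _ in range(n))
avg = total / reps
print(round(avg, 3))  # close to n/(n+1) = 5/6, i.e. about 0.833
```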

Gauranga C. Samanta F. M. University, Odisha


Final Remarks

1. Under very general conditions on the joint distribution of the
   sample, when the sample size n is large, the maximum
   likelihood estimator of any parameter θ is approximately
   unbiased [E(θ̂) ≈ θ] and has variance that is either as small as
   or nearly as small as can be achieved by any estimator. Stated
   another way, the MLE θ̂ is approximately the MVUE of θ.

Gauranga C. Samanta F. M. University, Odisha


Interval Estimation and Confidence Interval for Parameters

By using point estimation, we may not get the desired degree of
accuracy in estimating a parameter. Therefore, it is better to
replace point estimation by interval estimation.
Definition
Let X1 , X2 , · · · , Xn be a random sample of size n from a
population X with density f (x; θ), where θ is an unknown
parameter. The interval estimator of θ is a pair of statistics
L = L(X1 , X2 , · · · , Xn ) and U = U(X1 , X2 , · · · , Xn ) with L ≤ U
such that if x1 , x2 , · · · , xn is a set of sample data, then θ belongs
to the interval [L(X1 , X2 , · · · , Xn ), U(X1 , X2 , · · · , Xn )].

Gauranga C. Samanta F. M. University, Odisha


Confidence Interval Continued

Definition
Let X1 , X2 , · · · , Xn be a random sample of size n from a
population X with density f (x; θ), where θ is an unknown
parameter. The interval estimator of θ is called a 100(1 − α)%
confidence interval for θ if

P(L ≤ θ ≤ U) = 1 − α.

The random variable L is called the lower confidence limit and U is
called the upper confidence limit. The number (1 − α) is called the
confidence coefficient or degree of confidence.

Gauranga C. Samanta F. M. University, Odisha


Confidence Interval for µ (σ² known)

Let X₁, X₂, · · · , Xₙ be a random sample from a population
X ∼ N(µ, σ²), where µ is an unknown parameter and σ² is a
known parameter.
Since each Xᵢ ∼ N(µ, σ²), the distribution of the sample mean is
given by X̄ ∼ N(µ, σ²/n).
If we standardize X̄, then we get √n(X̄ − µ)/σ ∼ N(0, 1).
The distribution of the standardized X̄ is independent of the
parameter µ.
Hence, the 100(1 − α)% confidence interval for µ when the
population X is normal with known variance σ² is given by
[X̄ − (σ/√n) z_{α/2}, X̄ + (σ/√n) z_{α/2}]

Gauranga C. Samanta F. M. University, Odisha


Confidence Interval for µ (σ² known) Continued

This says that if samples of size n are taken from a normal
population with mean µ and known variance σ², and if the
interval
[X̄ − (σ/√n) z_{α/2}, X̄ + (σ/√n) z_{α/2}]
is constructed for every sample,

then in the long run 100(1 − α)% of the intervals will cover
the unknown parameter µ, and hence with a confidence of
100(1 − α)% we can say that µ lies in the interval
[X̄ − (σ/√n) z_{α/2}, X̄ + (σ/√n) z_{α/2}].

That is, “with 100(1 − α) percent confidence” we assert that the
true mean lies within z_{α/2} σ/√n of the observed sample mean.
The interval
[x̄ − (σ/√n) z_{α/2}, x̄ + (σ/√n) z_{α/2}]
is called a 100(1 − α) percent confidence interval estimate of µ.
Gauranga C. Samanta F. M. University, Odisha
Derivation

Variance is known, X̄ ∼ N(µ, σ²/n)

Z = (X̄ − µ)/(σ/√n) ∼ N(0, 1)
P(−z_{α/2} ≤ Z ≤ z_{α/2}) = 1 − α
⇐⇒ P(−z_{α/2} ≤ (X̄ − µ)/(σ/√n) ≤ z_{α/2}) = 1 − α
⇐⇒ P(−(σ/√n) z_{α/2} ≤ X̄ − µ ≤ (σ/√n) z_{α/2}) = 1 − α
⇐⇒ P(X̄ − (σ/√n) z_{α/2} ≤ µ ≤ X̄ + (σ/√n) z_{α/2}) = 1 − α

So [X̄ − (σ/√n) z_{α/2}, X̄ + (σ/√n) z_{α/2}] is a 100(1 − α)%
confidence interval for µ.
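The derivation translates directly into code. A minimal sketch using the standard normal quantile from Python's `statistics` module (Python 3.8+); the numeric inputs x̄ = 50, σ = 5, n = 25 are arbitrary illustrative values.

```python
# Two-sided 100(1-alpha)% confidence interval for mu with known sigma,
# following the derivation above.
import math
from statistics import NormalDist

def mean_ci(xbar, sigma, n, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha / 2)   # z_{alpha/2}
    half = z * sigma / math.sqrt(n)
    return (xbar - half, xbar + half)

lo, hi = mean_ci(xbar=50.0, sigma=5.0, n=25)
print(round(lo, 2), round(hi, 2))   # 48.04 51.96
```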

Gauranga C. Samanta F. M. University, Odisha


Example

Example
Suppose that when a signal having value µ is transmitted from
location A the value received at location B is normally distributed
with mean µ and variance 4. That is, if µ is sent, then the value
received is µ + N, where N, representing noise, is normal with
mean 0 and variance 4. To reduce error, suppose the same value is
sent 9 times. If the successive values received are
5, 8.5, 12, 15, 7, 9, 7.5, 6.5, 10.5, let us construct a 95 percent
confidence interval for µ.

ANS: (7.69, 10.31)
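The answer can be reproduced directly: σ = 2 since the variance is 4, and z₀.₀₂₅ = 1.96.

```python
# Check of the worked answer: 9 received values, sigma = 2 (variance 4),
# 95% CI uses z_{0.025} = 1.96.
import math

data = [5, 8.5, 12, 15, 7, 9, 7.5, 6.5, 10.5]
xbar = sum(data) / len(data)                 # 9.0
half = 1.96 * 2 / math.sqrt(len(data))       # 1.96 * 2/3
print(round(xbar - half, 2), round(xbar + half, 2))  # 7.69 10.31
```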

Gauranga C. Samanta F. M. University, Odisha


Normal Table

(standard normal distribution table shown as a figure)

Gauranga C. Samanta F. M. University, Odisha


Example

Example
From past experience it is known that the weights of salmon grown
at a commercial hatchery are normal with a mean that varies from
season to season but with a standard deviation that remains fixed
at 0.3 pounds. If we want to be 95 percent certain that our
estimate of the present season’s mean weight of a salmon is
correct to within ±0.1 pounds, how large a sample is needed?

ANS: n ≥ 35

Gauranga C. Samanta F. M. University, Odisha


Solution

A 95 percent confidence interval estimate for the unknown mean
µ, based on a sample of size n, is
µ ∈ ( x̄ − 1.96 σ/√n, x̄ + 1.96 σ/√n ).
Because the estimate x̄ is within 1.96 σ/√n = 0.588/√n of any point in
the interval, it follows that we can be 95 percent certain that x̄ is
within 0.1 of µ provided that
0.588/√n ≤ 0.1, which implies that n ≥ 34.57.
That is, a sample size of 35 or larger will suffice.
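The arithmetic in one line:

```python
# Smallest n with 1.96 * 0.3 / sqrt(n) <= 0.1, i.e. n >= (1.96*0.3/0.1)^2.
import math

n_min = math.ceil((1.96 * 0.3 / 0.1) ** 2)
print(n_min)   # 35
```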

Gauranga C. Samanta F. M. University, Odisha


Example

Example
Extensive monitoring of a computer time-sharing system has
suggested that response time to a particular editing command is
normally distributed with standard deviation 25 millisec. A new
operating system has been installed, and we wish to estimate the
true average response time µ for the new environment. Assuming
that response times are still normally distributed with σ = 25,
what sample size is necessary to ensure that the resulting 95% CI
has a width of (at most) 10?

ANS: n = 97

Gauranga C. Samanta F. M. University, Odisha


Solution

The sample size n must satisfy 10 = 2 × 1.96 × 25/√n
√n = 9.80, so n = 96.04.
Since n must be an integer, a sample size of 97 is required.

Remarks:
1. A general formula for the sample size n necessary to ensure an
   interval width w is obtained by equating w to
   2 × z_{α/2} × σ/√n and solving for n.
2. The sample size necessary for the two-sided CI to have a
   width w is n = (2 z_{α/2} σ/w)².
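The formula in the remark as a small helper; the default z = 1.96 corresponds to a 95% CI, and the inputs reproduce the example above.

```python
# Sample size for a two-sided CI of width at most w:
# n = ceil( (2 * z_{alpha/2} * sigma / w)^2 ).
import math

def n_for_width(sigma, w, z=1.96):
    return math.ceil((2 * z * sigma / w) ** 2)

print(n_for_width(sigma=25, w=10))  # 97
```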

Gauranga C. Samanta F. M. University, Odisha


Confidence Interval for µ (σ² known) Continued

One-Sided Confidence Bounds

One-sided confidence bounds are developed in the same
fashion as two-sided intervals. However, the source is a
one-sided probability statement that makes use of the Central
Limit Theorem:
P( (X̄ − µ)/(σ/√n) < z_α ) = 1 − α.

This implies that
P(µ > X̄ − z_α σ/√n) = 1 − α.
A similar argument applied to
P( (X̄ − µ)/(σ/√n) > −z_α ) = 1 − α
gives
P(µ < X̄ + z_α σ/√n) = 1 − α.

Gauranga C. Samanta F. M. University, Odisha


Confidence Interval for µ (σ² known) Continued

Theorem
If X̄ is the mean of a random sample of size n from a population
with variance σ², the one-sided 100(1 − α)% confidence bounds
for µ are given by

upper one-sided bound: x̄ + z_α σ/√n;
lower one-sided bound: x̄ − z_α σ/√n.

Gauranga C. Samanta F. M. University, Odisha


Confidence Interval for µ (σ² known) Continued

Example
Let X₁, X₂, · · · , X₁₁ be a random sample of size 11 from a normal
distribution with unknown mean µ and variance σ² = 9.9. If
Σ_{i=1}^{11} xᵢ = 132, then what is the 95% confidence interval for µ?

ANS: [10.141, 13.859]

Example
Let X₁, X₂, · · · , X₁₁ be a random sample of size 11 from a normal
distribution with unknown mean µ and variance σ² = 9.9. If
Σ_{i=1}^{11} xᵢ = 132, then for what value of the constant k is
[12 − k√0.9, 12 + k√0.9] a 90% confidence interval for µ?

ANS: k = 1.64, and the corresponding 90% confidence interval is
[10.444, 13.556].
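Both answers can be verified numerically: x̄ = 132/11 = 12 and σ/√n = √(9.9/11) = √0.9, with z-values 1.96 (95%) and 1.64 (90%) from the table.

```python
# Check of both confidence intervals: n = 11, variance 9.9, sum = 132.
import math

xbar = 132 / 11               # 12.0
se = math.sqrt(9.9 / 11)      # sigma/sqrt(n) = sqrt(0.9)
for z in (1.96, 1.64):        # 95% and 90% table values
    print(round(xbar - z * se, 3), round(xbar + z * se, 3))
# 95%: 10.141 13.859;  90%: 10.444 13.556
```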

Gauranga C. Samanta F. M. University, Odisha


Confidence Interval for µ (σ² known) Continued

Remark:
Notice that the length of the 90% confidence interval for µ is
3.112, whereas the length of the 95% confidence interval is 3.718.
Thus, the higher the confidence level, the longer the
confidence interval.
Hence, the confidence level is directly proportional to the
length of the confidence interval.
In view of this fact, we see that if the confidence level is zero,
then the length is also zero.
That is, when the confidence level is zero, the confidence
interval for µ degenerates into the single point X̄.

Gauranga C. Samanta F. M. University, Odisha




Confidence Interval for µ (σ² known) Continued

Until now we have considered the case where the population is
normal with unknown mean µ and known variance σ².
Now we consider the case where the population is non-normal
but its probability density function is continuous, symmetric
and unimodal.
If the sample size is large, then by the central limit theorem

(X̄ − µ)/(σ/√n) ∼ N(0, 1) as n → ∞.




Confidence Interval for µ (σ² known) Continued

Example
Let X1 , X2 , · · · , X40 be a random sample of size 40 from a
distribution with unknown mean µ and variance σ² = 10. If
∑_{i=1}^{40} x_i = 286.56, then what is the 90% confidence interval
for the population mean µ?

ANS: [6.344, 7.984].

Example
In sampling from a non-normal distribution with a variance of 25,
how large must the sample size be so that the length of a 95%
confidence interval for the mean is 1.96?

ANS: n = 100.
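The sample-size question can be answered by inverting the interval-length formula, length = 2 z_{α/2} σ/√n. A short sketch (the helper name is my own, standard library only):

```python
from math import ceil
from statistics import NormalDist

def sample_size_for_length(sigma, length, conf):
    """Smallest n so that the z-interval for µ has the given total length."""
    z = NormalDist().inv_cdf((1 + conf) / 2)
    return ceil((2 * z * sigma / length) ** 2)

print(sample_size_for_length(5, 1.96, 0.95))  # 100
```

The factor 2 matters here: 1.96 is the total length of the interval, not the half-width.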




Sampling Distribution (T )

The T Distribution
Definition
Let Z be the standard normal r. v. and let χ²_γ be an independent
chi-squared random variable with γ degrees of freedom. The r. v.

T = Z / √(χ²_γ / γ)

is said to follow a T distribution with γ degrees of freedom.


Sampling Distribution (T ) Continued
1 There are infinitely many T distributions, each identified by
one parameter γ, called the degrees of freedom. This parameter
is always a positive integer.
2 The notation Tγ denotes a T r. v. with γ degrees of freedom.
3 Each T r. v. is continuous. The density for a T r. v. with γ
degrees of freedom is given by

f (t) = [Γ((γ + 1)/2) / (Γ(γ/2) √(πγ))] (1 + t²/γ)^{−(γ+1)/2} , −∞ < t < ∞

4 The graph of the density of a Tγ r. v. is a symmetric
bell-shaped curve centered at zero.
5 The parameter γ is a shape parameter in the sense that as its
value increases, the variance of the r. v. Tγ decreases.
6 Thus as the value of γ increases, the bell-shaped curve
associated with Tγ becomes more compact.
7 As the number of degrees of freedom increases, the bell-shaped
curve associated with the Tγ r. v. approaches the
standard normal curve.
Sampling Distribution (T ) Continued

Remarks:
The T distribution is a generalization of the Cauchy
distribution and the normal distribution.
That is, if γ = 1, then the probability density function of T
becomes

f (t) = 1/(π(1 + t²)) , −∞ < t < ∞

which is the Cauchy distribution.
Further, if γ → ∞, then

f (t) = (1/√(2π)) e^{−t²/2} , −∞ < t < ∞

which is the probability density function of the standard
normal distribution.
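Both limiting cases can be verified directly from the density formula for Tγ. The sketch below (my own helper, not from the slides, standard library only) evaluates the density via log-gamma to avoid overflow for large γ, then compares it with the Cauchy density at γ = 1 and the standard normal density for large γ:

```python
from math import exp, lgamma, log, pi, sqrt

def t_pdf(t, g):
    """Density of the T distribution with g degrees of freedom,
    computed from the slide's formula using lgamma for stability."""
    logc = lgamma((g + 1) / 2) - lgamma(g / 2) - 0.5 * log(pi * g)
    return exp(logc - (g + 1) / 2 * log(1 + t * t / g))

# γ = 1: the density reduces to the Cauchy density 1/(π(1 + t²))
print(t_pdf(0.7, 1), 1 / (pi * (1 + 0.7 ** 2)))

# large γ: the density approaches the standard normal density
print(t_pdf(0.7, 2000), exp(-0.5 * 0.7 ** 2) / sqrt(2 * pi))
```

Using lgamma instead of gamma is deliberate: math.gamma overflows for arguments above roughly 170, so the direct formula would fail for large γ.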


Sampling Distribution (T ) Continued

Theorem
If the random variable X has a t-distribution with γ degrees of
freedom, then

E [X ] = 0 if γ ≥ 2, and E [X ] does not exist if γ = 1,

and

V [X ] = γ/(γ − 2) if γ ≥ 3, and V [X ] does not exist if γ = 1, 2.


Sampling Distribution (T ) Continued

Theorem
If X ∼ N(µ, σ²) and X1 , X2 , · · · , Xn is a random sample from the
population X , then

(X̄ − µ)/(S/√n) ∼ T(n−1)

Example
Let X1 , X2 , X3 , X4 be a random sample of size 4 from a standard
normal distribution. If the statistic W is given by

W = (X1 − X2 + X3) / √(X1² + X2² + X3² + X4²)

then what is the expected value of W ?

ANS: E (W ) = 0




Example
Suppose X1 , X2 , · · · , Xn is a random sample from a normal
distribution with mean µ and variance σ². If X̄ = ∑_{i=1}^{n} Xi/n and
V² = ∑_{i=1}^{n} (Xi − X̄)²/n, and Xn+1 is an additional observation,
what is the value of m so that the statistic m(X̄ − Xn+1)/V has a T
distribution?

ANS: m = √((n − 1)/(n + 1))




Confidence Interval for µ (σ² unknown)

So far, we have discussed the method of construction of the
confidence interval for the population mean when the variance
is known.
Now we treat the case of constructing the confidence interval for
the population mean when the population variance is also
unknown.
First of all, we begin with the construction of the confidence
interval assuming the population X is normal.
The previous approach is not useful when the variance is unknown.




Confidence Interval for µ (σ² unknown) Continued

Suppose X1 , X2 , · · · , Xn is a random sample from a normal
population X with mean µ and variance σ² > 0.
Let the sample mean and sample variance be X̄ and S²
respectively.
Then (n − 1)S²/σ² ∼ χ²_{n−1} and (X̄ − µ)/(σ/√n) ∼ N(0, 1).
Therefore, the random variable defined as the ratio of
(X̄ − µ)/(σ/√n) to √(((n − 1)S²/σ²)/(n − 1)) has a t-distribution
with (n − 1) degrees of freedom.
That is,

[(X̄ − µ)/(σ/√n)] / √((n − 1)S²/((n − 1)σ²)) = (X̄ − µ)/(S/√n) ∼ t_{n−1}






Confidence Interval for µ (σ² unknown) Continued

Hence, the 100(1 − α)% confidence interval for µ when the
population X is normal with unknown variance σ² is given by

[X̄ − (S/√n) t_{α/2,(n−1)} , X̄ + (S/√n) t_{α/2,(n−1)}].

Example
A random sample of 9 observations from a normal population
yields the observed statistics x̄ = 5 and (1/8) ∑_{i=1}^{9} (xi − x̄)² = 36.
What is the 95% confidence interval for µ?

ANS: [0.388, 9.612]
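For the 9-observation example, s² = 36 and t_{0.025,8} = 2.306 (a t-table value, hard-coded below since the Python standard library has no t quantile function). A short check, not from the slides:

```python
from math import sqrt

t_crit = 2.306                      # t_{0.025, 8}, from a t table

n, xbar, s2 = 9, 5.0, 36.0          # (1/8) Σ (xi − x̄)² = 36  ⇒  s² = 36
half = t_crit * sqrt(s2 / n)        # t_{α/2, n−1} · s/√n = 2.306 · 2
print(xbar - half, xbar + half)     # ≈ 0.388, 9.612
```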






Confidence Interval for µ (σ² unknown) Continued

One-sided confidence bounds for µ with σ unknown are as follows:

x̄ + tα (s/√n) and x̄ − tα (s/√n)

These are the upper and lower 100(1 − α)% bounds, respectively.
Here tα is the t-value having an area of α to the right.


Confidence Interval for Variance (σ²) (known mean µ)

Confidence Interval for σ²

We will first describe the method for constructing the
confidence interval for the variance σ² when the population is
normal with a known population mean µ.
Then we treat the case when the population mean µ is also
unknown.
Let X1 , X2 , · · · , Xn be a random sample from a normal
population X with known mean µ and unknown variance σ².
We would like to construct a 100(1 − α)% confidence interval
for the variance σ²; that is, we would like to find estimates
L and U such that

P(L ≤ σ² ≤ U) = 1 − α




Confidence Interval for Variance (σ²) (known mean µ)
Continued

To find these estimates L and U, we use the facts:

Xi ∼ N(µ, σ²),
(Xi − µ)/σ ∼ N(0, 1),
((Xi − µ)/σ)² ∼ χ²_1 ,
∑_{i=1}^{n} ((Xi − µ)/σ)² ∼ χ²_n


Confidence Interval for Variance (σ²) (known mean µ)
Continued

The confidence interval for σ² with known mean µ is given by

[∑_{i=1}^{n} (Xi − µ)² / χ²_{α/2,n} , ∑_{i=1}^{n} (Xi − µ)² / χ²_{1−α/2,n}]

or, for σ,

[√(∑_{i=1}^{n} (Xi − µ)² / χ²_{α/2,n}) , √(∑_{i=1}^{n} (Xi − µ)² / χ²_{1−α/2,n})]




Confidence Interval for Variance (σ²) (unknown mean µ)
Continued

Consider a random sample X1 , X2 , · · · , Xn from a normal
population X ∼ N(µ, σ²), where the population mean µ and
population variance σ² are unknown.
We want to construct a 100(1 − α)% confidence interval for
the population variance.
We know that

(n − 1)S²/σ² ∼ χ²_{n−1}

This implies that

∑_{i=1}^{n} (Xi − X̄)² / σ² ∼ χ²_{n−1}




Confidence Interval for Variance (σ²) (unknown mean µ)
Continued
Hence, the 100(1 − α)% confidence interval for the variance σ²
when the population mean µ is unknown is given by

P((n − 1)S²/χ²_{α/2,n−1} ≤ σ² ≤ (n − 1)S²/χ²_{1−α/2,n−1}) = 1 − α

That is,

[(n − 1)S²/χ²_{α/2,n−1} , (n − 1)S²/χ²_{1−α/2,n−1}]

or, for σ,

[√((n − 1)S²/χ²_{α/2,n−1}) , √((n − 1)S²/χ²_{1−α/2,n−1})]


Examples

Example
A random sample of 9 observations from a normal population with
µ = 5 yields the observed statistics (1/8) ∑_{i=1}^{9} xi² = 39.125 and
∑_{i=1}^{9} xi = 45. What is the 95% confidence interval for σ²?

ANS: [4.625, 32.59]

Example
Let X1 , X2 , · · · , Xn be a random sample of size 13 from a normal
distribution N(µ, σ²). If ∑_{i=1}^{13} xi² = 4806.61 and
∑_{i=1}^{13} xi = 246.61, find the 90% confidence interval for σ².

ANS: [6.11, 24.55]
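The first example can be checked by expanding Σ(xi − µ)² = Σxi² − 2µΣxi + nµ² and dividing by the χ² table values with n = 9 degrees of freedom. A sketch, not from the slides; the χ² critical values are hard-coded since the standard library has no χ² quantile function:

```python
# data summary from the example: n = 9, µ = 5 known,
# (1/8) Σ xi² = 39.125  ⇒  Σ xi² = 313,  and  Σ xi = 45
n, mu = 9, 5.0
sum_x, sum_x2 = 45.0, 313.0
ss = sum_x2 - 2 * mu * sum_x + n * mu * mu   # Σ (xi − µ)² = 88

# χ² table values with 9 degrees of freedom
chi2_hi, chi2_lo = 19.023, 2.700             # χ²_{0.025,9}, χ²_{0.975,9}
print(ss / chi2_hi, ss / chi2_lo)            # ≈ 4.626, 32.59
```

The lower endpoint comes out as 4.626 with these table values; the slide's 4.625 reflects rounding along the way.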






Error of Estimate
When we use a sample mean to estimate the population
mean, we know that although we are using a method of
estimation which has certain desirable properties, the chances
are slim, virtually nonexistent, that the estimate will actually
equal the population mean µ.

Definition
The error of estimate is the difference between the estimator and
the quantity it is supposed to estimate; X̄ − µ is the error of
estimate for the population mean.

To examine this error, let us make use of the fact that for
large n

(X̄ − µ)/(σ/√n)

is a random variable having approximately the standard
normal distribution.
Error of Estimate Continued

The maximum error E = max |X̄ − µ| will be less than z_{α/2} (σ/√n)
with probability 1 − α.

Example
An industrial engineer intends to use the mean of a random sample
of size n = 150 to estimate the average mechanical aptitude (as
measured by a certain test) of assembly line workers in a large
industry. If, on the basis of experience, the engineer can assume
that σ = 6.2 for such data, what can he assert with probability
0.99 about the maximum size of his error?

ANS: E = z_{α/2} (σ/√n), α/2 = 0.005, z_{0.005} = 2.58; E ≈ 1.30
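The engineer's maximum-error bound E = z_{α/2} σ/√n can be computed directly (the helper name is my own, standard library only):

```python
from math import sqrt
from statistics import NormalDist

def max_error(sigma, n, conf):
    """Maximum error of X̄ as an estimate of µ, with probability conf."""
    z = NormalDist().inv_cdf((1 + conf) / 2)   # z_{α/2}
    return z * sigma / sqrt(n)

print(round(max_error(6.2, 150, 0.99), 2))  # ≈ 1.30
```

The exact z_{0.005} ≈ 2.576 gives 1.304, agreeing with the table value 2.58 used on the slide to two decimals.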




Error of Estimate Continued

Example
A research worker wants to determine the average time it takes a
mechanic to rotate the tires of a car, and he wants to be able to
assert with 95% confidence that the mean of his sample is off by
at most 0.50 minute. If he can presume from past experience that
σ = 1.6 minutes, how large a sample will he have to take?

ANS: n = 39.3 ≈ 40
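Solving E = z_{α/2} σ/√n for n gives n = (z_{α/2} σ/E)², rounded up to the next integer. A short sketch (helper name is my own):

```python
from math import ceil
from statistics import NormalDist

def sample_size_for_error(sigma, error, conf):
    """Smallest n so that |X̄ − µ| ≤ error with probability conf."""
    z = NormalDist().inv_cdf((1 + conf) / 2)
    return ceil((z * sigma / error) ** 2)

print(sample_size_for_error(1.6, 0.5, 0.95))  # 40
```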




Practice Problems

Q1 You take a random sample from some population and form a
96% confidence interval for the population mean, µ. Which
quantity is guaranteed to be in the interval you form?
(a) 0 (b) µ (c) X̄ (d) 0.96
ANS: (c)
Q2 Decreasing the sample size, while holding the confidence level
the same, will do what to the length of your confidence
interval?
(a) make it bigger (b) make it smaller
(c) it will stay the same (d) cannot be determined from the
given information
ANS: (a)




Practice Problems
Q3 Decreasing the confidence level, while holding the sample size
the same, will do what to the length of your confidence
interval?
(a) make it bigger (b) make it smaller (c) it will stay the same
(d) cannot be determined from the given information
ANS: (b)
Q4 If you increase the sample size and confidence level at the
same time, what will happen to the length of your confidence
interval?
(a) make it bigger (b) make it smaller (c) it will stay the same
(d) cannot be determined from the given information
ANS: (d) We know that when the sample size increases, the
confidence interval will be smaller. However, it will become bigger
as the confidence level increases. Therefore, we cannot conclude
how the confidence interval will change in this question, since we
don't have enough information to determine whether the change in
sample size or the confidence level is more influential here.
Practice Problems

Q5 Which of the following is a property of the Sampling
Distribution of X̄ ?
(a) if you increase your sample size, X̄ will always get closer
to µ, the population mean.
(b) the standard deviation of the sample mean is the same as
the standard deviation from the original population σ
(c) the mean of the sampling distribution of X̄ is µ, the
population mean.
(d) X̄ always has a Normal distribution.
ANS: (c)
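Q5(c) can be illustrated by simulation: the mean of many sample means is close to µ, and their standard deviation is close to σ/√n, not σ. The population Normal(µ = 50, σ = 10) and the sample size below are arbitrary choices for the sketch:

```python
import random
from math import sqrt
from statistics import mean, pstdev

random.seed(0)
mu, sigma, n = 50.0, 10.0, 30

# draw 2000 samples of size n and record each sample mean
xbars = [mean(random.gauss(mu, sigma) for _ in range(n)) for _ in range(2000)]

print(mean(xbars))   # close to mu = 50
print(pstdev(xbars)) # close to sigma/sqrt(n), about 1.83 -- not sigma = 10
```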



Practice Problems

Q6 A 95% confidence interval for the mean number of televisions
per American household is (1.15, 4.20). For each of the
following statements about the above confidence interval,
choose true or false.
(a) The probability that µ is between 1.15 and 4.20 is 0.95.
(b) We are 95% confident that the true mean number of
televisions per American household is between 1.15 and 4.20.
(c) 95% of all samples should have X̄ between 1.15 and 4.20.
(d) 95% of all American households have between 1.15 and
4.20 televisions.
(e) Of 100 intervals calculated the same way (95%), we
expect 95 of them to capture the population mean.
(f) Of 100 intervals calculated the same way (95%), we
expect 100 of them to capture the sample mean.
ANS: (b), (e) and (f) are true.
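The frequentist reading in (e) can be checked by simulation: build many 95% z-intervals from independent samples (known σ and a Normal population are simplifying assumptions for the sketch) and count how often they capture µ:

```python
import random
from math import sqrt

random.seed(1)
mu, sigma, n, z95 = 50.0, 10.0, 25, 1.96

hits, trials = 0, 1000
for _ in range(trials):
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    half = z95 * sigma / sqrt(n)          # known-sigma z-interval half-width
    if xbar - half <= mu <= xbar + half:  # every interval contains its own xbar,
        hits += 1                         # but only about 95% contain mu
print(hits / trials)  # typically close to 0.95
```

This also shows why (f) is trivially true: each interval is centered at its own sample mean, so all 100 capture it.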



Practice Problems

Q7 A waiter believes that his tips from various customers have a
slightly right-skewed distribution with a mean of 10 dollars and
a standard deviation of 2.50 dollars. What is the probability
that the average of 35 customers will be more than 13 dollars?
(a) almost 1 (b) almost zero (c) 0.1151 (d) 0.8849
ANS: (b) P(X̄ > 13) = P( (X̄ − 10)/(2.5/√35) > (13 − 10)/(2.5/√35) )
= P(Z > 7.09) ≈ 0
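By the Central Limit Theorem, the average of 35 tips is approximately Normal even though individual tips are right-skewed, so the tail probability above can be evaluated directly (assuming `scipy` is available):

```python
from math import sqrt
from scipy.stats import norm

mu, sigma, n = 10.0, 2.5, 35

# standardize: Z = (Xbar - mu) / (sigma / sqrt(n))
z = (13 - mu) / (sigma / sqrt(n))
print(z)           # about 7.1
print(norm.sf(z))  # P(Z > 7.1): effectively zero
```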



Example
In 6 determinations of the melting point of tin, a chemist obtained
a mean of 232.26°C with a standard deviation of 0.14°C. What can the
chemist assert with 98% confidence about the maximum error?
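With n = 6 and σ unknown, the maximum error at 98% confidence uses the t critical value with n − 1 = 5 degrees of freedom: E = t₀.₀₁ · s/√n. A sketch of that computation (assuming `scipy` is available):

```python
from math import sqrt
from scipy.stats import t

n, s, conf = 6, 0.14, 0.98

t_crit = t.ppf(1 - (1 - conf) / 2, df=n - 1)  # t_{0.01, 5}, about 3.365
E = t_crit * s / sqrt(n)                      # maximum error of the estimate
print(round(E, 2))  # about 0.19 degrees C
```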



THANK YOU
