
STAT 3202: Practice 03

Spring 2019, OSU

Exercise 1
Let X₁, X₂, …, Xₙ be iid Poisson(λ). That is

f(x | λ) = λ^x e^{−λ} / x!,  x = 0, 1, 2, …,  λ > 0

(a) Obtain a method of moments estimator for λ, λ̃. Calculate an estimate using this estimator when

x1 = 1, x2 = 2, x3 = 4, x4 = 2.

Solution:
Recall that for a Poisson distribution we have E[X] = λ.
Now to obtain the method of moments estimator we simply equate the first population mean to the first
sample mean. (And then we need to “solve” this equation for λ. . . )

E[X] = X̄

λ = X̄

Thus, after “solving” we obtain the method of moments estimator.

λ̃ = X̄

Thus for the given data we can use this estimator to calculate the estimate.

λ̃ = x̄ = (1/4)(1 + 2 + 4 + 2) = 2.25

(b) Find the maximum likelihood estimator for λ, λ̂. Calculate an estimate using this estimator when

x1 = 1, x2 = 2, x3 = 4, x4 = 2.

Solution:
L(λ) = ∏_{i=1}^n f(x_i | λ) = ∏_{i=1}^n λ^{x_i} e^{−λ} / x_i! = λ^{∑_{i=1}^n x_i} e^{−nλ} / ∏_{i=1}^n x_i!

log L(λ) = (∑_{i=1}^n x_i) log λ − nλ − ∑_{i=1}^n log(x_i!)

d/dλ log L(λ) = (∑_{i=1}^n x_i)/λ − n = 0

λ̂ = (1/n) ∑_{i=1}^n x_i

d²/dλ² log L(λ) = −(∑_{i=1}^n x_i)/λ² < 0
We then have the estimator, and for the given data, the estimate.

λ̂ = (1/n) ∑_{i=1}^n x_i = (1/4)(1 + 2 + 4 + 2) = 2.25

(c) Find the maximum likelihood estimator of P [X = 4], call it P̂ [X = 4]. Calculate an estimate using
this estimator when

x1 = 1, x2 = 2, x3 = 4, x4 = 2.

Solution:
Here we use the invariance property of the MLE. Since λ̂ is the MLE for λ,

P̂[X = 4] = λ̂⁴ e^{−λ̂} / 4!

is the maximum likelihood estimator for P [X = 4].


For the given data we can calculate an estimate using this estimator.

P̂[X = 4] = λ̂⁴ e^{−λ̂} / 4! = 2.25⁴ e^{−2.25} / 4! = 0.1126

Exercise 2
Let X₁, X₂, …, Xₙ be iid N(θ, σ²).
Find a method of moments estimator for the parameter vector (θ, σ²).


Solution:
Since we are estimating two parameters, we will need the first two population and sample moments.

E[X] = θ

E[X²] = Var[X] + (E[X])² = σ² + θ²

We equate the first population moment to the first sample moment, X̄, and we equate the second population moment to the second sample moment, (1/n) ∑_{i=1}^n X_i².

E[X] = X̄

E[X²] = (1/n) ∑_{i=1}^n X_i²

For this example, that is,

θ = X̄

σ² + θ² = (1/n) ∑_{i=1}^n X_i²

Solving this system of equations for θ and σ 2 we find the method of moments estimators.

θ̃ = X̄

σ̃² = (1/n) ∑_{i=1}^n X_i² − (X̄)² = (1/n) ∑_{i=1}^n (X_i − X̄)²

Exercise 3
Let X₁, X₂, …, Xₙ be iid N(1, σ²).
Find a method of moments estimator of σ 2 , call it σ̃ 2 .
Solution:
The first moment is not useful because it is not a function of the parameter of interest σ 2 .

E[X] = 1

As a result, we instead use the second moment.

E[X²] = Var[X] + (E[X])² = σ² + 1² = σ² + 1

We equate this second population moment to the second sample moment, (1/n) ∑_{i=1}^n X_i².

E[X²] = (1/n) ∑_{i=1}^n X_i²

σ² + 1 = (1/n) ∑_{i=1}^n X_i²

Now solving for σ 2 we obtain the method of moments estimator.

σ̃² = (1/n) ∑_{i=1}^n X_i² − 1
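As a quick simulation sketch (the true value σ = 2 below is hypothetical), σ̃² recovers σ² for a large sample:

    import random

    random.seed(1)
    sigma = 2.0                                            # hypothetical true value
    x = [random.gauss(1, sigma) for _ in range(100_000)]   # sample from N(1, sigma^2)
    sigma2_tilde = sum(xi**2 for xi in x) / len(x) - 1     # the MoM estimator
    print(sigma2_tilde)                                    # close to sigma^2 = 4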

Exercise 4
Let X1 , X2 , . . . , Xn be a random sample from a population with pdf

f(x | θ) = (1/θ) x^{(1−θ)/θ},  0 < x < 1,  0 < θ < ∞

(a) Find the maximum likelihood estimator of θ, call it θ̂. Calculate an estimate using this estimator
when

x1 = 0.10, x2 = 0.22, x3 = 0.54, x4 = 0.36.

Solution:

L(θ) = ∏_{i=1}^n f(x_i | θ) = ∏_{i=1}^n (1/θ) x_i^{(1−θ)/θ} = θ^{−n} (∏_{i=1}^n x_i)^{(1−θ)/θ}

log L(θ) = −n log θ + ((1−θ)/θ) ∑_{i=1}^n log x_i = −n log θ + (1/θ) ∑_{i=1}^n log x_i − ∑_{i=1}^n log x_i

d/dθ log L(θ) = −n/θ − (1/θ²) ∑_{i=1}^n log x_i = 0

θ̂ = −(1/n) ∑_{i=1}^n log x_i

Note that θ̂ > 0, since each log x_i < 0 (because 0 < x_i < 1).

d²/dθ² log L(θ) = n/θ² + (2/θ³) ∑_{i=1}^n log x_i

d²/dθ² log L(θ̂) = n/θ̂² + (2/θ̂³)(−n θ̂) = n/θ̂² − 2n/θ̂² = −n/θ̂² < 0
We then have the estimator, and for the given data, the estimate.

θ̂ = −(1/n) ∑_{i=1}^n log x_i = −(1/4) log(0.10 · 0.22 · 0.54 · 0.36) = 1.3636

(b) Obtain a method of moments estimator for θ, θ̃. Calculate an estimate using this estimator when

x1 = 0.10, x2 = 0.22, x3 = 0.54, x4 = 0.36.

Solution:

E[X] = ∫₀¹ x · (1/θ) x^{(1−θ)/θ} dx = … some calculus happens … = 1/(θ + 1)

E[X] = X̄

1/(θ + 1) = X̄

Solving for θ results in the method of moments estimator.

θ̃ = (1 − X̄)/X̄

x̄ = (1/4)(0.10 + 0.22 + 0.54 + 0.36) = 0.305
Thus for the given data we can calculate the estimate.

θ̃ = (1 − x̄)/x̄ = (1 − 0.305)/0.305 = 2.2787

Exercise 5
Let X₁, X₂, …, Xₙ be iid from a population with pdf

f(x | θ) = θ/x²,  0 < θ ≤ x

Obtain the maximum likelihood estimator for θ, θ̂.


Solution:
First, be aware that the values of x for this pdf are restricted by the value of θ.

L(θ) = ∏_{i=1}^n θ/x_i²,  0 < θ ≤ x_i for all x_i

     = θ^n / ∏_{i=1}^n x_i²,  0 < θ ≤ min{x_i}

log L(θ) = n log θ − 2 ∑_{i=1}^n log x_i

d/dθ log L(θ) = n/θ > 0

So, here we have a log-likelihood that is increasing on the region where the likelihood is nonzero, that is, when 0 < θ ≤ min{x_i}. Thus, the likelihood is maximized at the largest allowable value of θ in this region, so the maximum likelihood estimator is given by

θ̂ = min{Xi }
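A small simulation sketch supports this result. Here F(x) = 1 − θ/x for x ≥ θ, so inverse-CDF sampling gives x = θ/(1 − u); the true value θ = 2 below is hypothetical:

    import random

    random.seed(42)
    theta = 2.0                                                # hypothetical true value
    x = [theta / (1 - random.random()) for _ in range(1000)]   # inverse-CDF samples
    print(min(x))   # the MLE; slightly above theta = 2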

Exercise 6
Let X₁, X₂, …, Xₙ be a random sample of size n from a distribution with probability density function

f(x; α) = α^{−2} x e^{−x/α},  x > 0,  α > 0

(a) Obtain the maximum likelihood estimator of α, α̂. Calculate the estimate when

x1 = 0.25, x2 = 0.75, x3 = 1.50, x4 = 2.5, x5 = 2.0.

Solution:
We first obtain the likelihood by multiplying the probability density function for each Xi . We then simplify
this expression.

L(α) = ∏_{i=1}^n f(x_i; α) = ∏_{i=1}^n α^{−2} x_i e^{−x_i/α} = α^{−2n} (∏_{i=1}^n x_i) exp(−∑_{i=1}^n x_i / α)

Instead of directly maximizing the likelihood, we maximize the log-likelihood.

log L(α) = −2n log α + ∑_{i=1}^n log x_i − (∑_{i=1}^n x_i)/α

To maximize this function, we take a derivative with respect to α.


d/dα log L(α) = −2n/α + (∑_{i=1}^n x_i)/α²
We set this derivative equal to zero, then solve for α.
−2n/α + (∑_{i=1}^n x_i)/α² = 0
Solving gives our estimator, which we denote with a hat.

α̂ = (∑_{i=1}^n x_i)/(2n) = x̄/2

Using the given data, we obtain an estimate.

α̂ = (0.25 + 0.75 + 1.50 + 2.50 + 2.0)/(2 · 5) = 0.70

(We should also verify that this point is a maximum, which is omitted here.)
(b) Obtain the method of moments estimator of α, α̃. Calculate the estimate when

x1 = 0.25, x2 = 0.75, x3 = 1.50, x4 = 2.5, x5 = 2.0.

Hint: Recall the probability density function of an exponential random variable.

f(x | θ) = (1/θ) e^{−x/θ},  x > 0,  θ > 0
Note that, the moments of this distribution are given by


E[X^k] = ∫₀^∞ (x^k/θ) e^{−x/θ} dx = k! · θ^k.

This hint will also be useful in the next exercise.


Solution:
We first obtain the first population moment. Notice the integration is done by recognizing that the integral has the form of the second moment of an exponential distribution.

E[X] = ∫₀^∞ x · α^{−2} x e^{−x/α} dx = (1/α) ∫₀^∞ (x²/α) e^{−x/α} dx = (1/α)(2α²) = 2α

We then set the first population moment, which is a function of α, equal to the first sample moment.
2α = (∑_{i=1}^n x_i)/n
Solving for α, we obtain the method of moments estimator.

α̃ = (∑_{i=1}^n x_i)/(2n) = x̄/2

Using the given data, we obtain an estimate.

α̃ = (0.25 + 0.75 + 1.50 + 2.50 + 2.0)/(2 · 5) = 0.70
Note that, in this case, the MLE and MoM estimators are the same.
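A one-line Python check of this shared estimate:

    x = [0.25, 0.75, 1.50, 2.50, 2.00]
    print(sum(x) / (2 * len(x)))   # xbar / 2 = 0.7, both the MLE and the MoM estimate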

Exercise 7
Let X₁, X₂, …, Xₙ be a random sample of size n from a distribution with probability density function

f(x | β) = (1/(2β³)) x² e^{−x/β},  x > 0,  β > 0

(a) Obtain the maximum likelihood estimator of β, β̂. Calculate the estimate when

x1 = 2.00, x2 = 4.00, x3 = 7.50, x4 = 3.00.

Solution:
We first obtain the likelihood by multiplying the probability density function for each Xi . We then simplify
this expression.

L(β) = ∏_{i=1}^n f(x_i; β) = ∏_{i=1}^n (1/(2β³)) x_i² e^{−x_i/β} = 2^{−n} β^{−3n} (∏_{i=1}^n x_i²) exp(−∑_{i=1}^n x_i / β)

Instead of directly maximizing the likelihood, we maximize the log-likelihood.

log L(β) = −n log 2 − 3n log β + 2 ∑_{i=1}^n log x_i − (∑_{i=1}^n x_i)/β

To maximize this function, we take a derivative with respect to β.


d/dβ log L(β) = −3n/β + (∑_{i=1}^n x_i)/β²

We set this derivative equal to zero, then solve for β.


−3n/β + (∑_{i=1}^n x_i)/β² = 0

Solving gives our estimator, which we denote with a hat.

β̂ = (∑_{i=1}^n x_i)/(3n) = x̄/3

Using the given data, we obtain an estimate.

β̂ = (2.00 + 4.00 + 7.50 + 3.00)/(3 · 4) = 1.375

(We should also verify that this point is a maximum, which is omitted here.)
(b) Obtain the method of moments estimator of β, β̃. Calculate the estimate when

x1 = 2.00, x2 = 4.00, x3 = 7.50, x4 = 3.00.

Solution:
We first obtain the first population moment. Notice the integration is done by recognizing that the integral has the form of the third moment of an exponential distribution.

E[X] = ∫₀^∞ x · (1/(2β³)) x² e^{−x/β} dx = (1/(2β²)) ∫₀^∞ (x³/β) e^{−x/β} dx = (1/(2β²))(6β³) = 3β

We then set the first population moment, which is a function of β, equal to the first sample moment.

E[X] = X̄
3β = (∑_{i=1}^n x_i)/n
Solving for β, we obtain the method of moments estimator.

β̃ = (∑_{i=1}^n x_i)/(3n) = x̄/3

Using the given data, we obtain an estimate.

β̃ = (2.00 + 4.00 + 7.50 + 3.00)/(3 · 4) = 1.375
Note again, the MLE and MoM estimators are the same.
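Again, a one-line Python check of the shared estimate:

    x = [2.00, 4.00, 7.50, 3.00]
    print(sum(x) / (3 * len(x)))   # xbar / 3 = 1.375, both the MLE and the MoM estimate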

Exercise 8
Let Y1 , Y2 , . . . , Yn be a random sample from a distribution with pdf

f(y | α) = (2/α) · y · exp(−y²/α),  y > 0,  α > 0.

(a) Find the maximum likelihood estimator of α.


Solution:
The likelihood function of the data is the joint distribution viewed as a function of the parameter, so we have:
L(α) = (2^n/α^n) (∏_{i=1}^n y_i) exp{−(1/α) ∑_{i=1}^n y_i²}

We want to maximize this function. First, we can take the logarithm:

log L(α) = n log 2 − n log α + ∑_{i=1}^n log y_i − (1/α) ∑_{i=1}^n y_i²

And then take the derivative:

d/dα log L(α) = −n/α + (1/α²) ∑_{i=1}^n y_i²

Setting this equal to 0 and solving for α:

−n/α + (1/α²) ∑_{i=1}^n y_i² = 0

⟺ n/α = (1/α²) ∑_{i=1}^n y_i²

⟺ α = (1/n) ∑_{i=1}^n y_i²

So, our candidate for the MLE is

α̂ = (1/n) ∑_{i=1}^n y_i².

Taking the second derivative,

d²/dα² log L(α) = n/α² − (2/α³) ∑_{i=1}^n y_i² = n/α² − (2n/α³) α̂

so that:

d²/dα² log L(α̂) = n/α̂² − (2n/α̂³) α̂ = −n/α̂² < 0

Thus, the (log-)likelihood is concave down at α̂, which confirms that the value of α that maximizes the
likelihood is:

α̂_MLE = (1/n) ∑_{i=1}^n Y_i²

(b) Let Z₁ = Y₁². Find the distribution of Z₁. Is the MLE for α an unbiased estimator of α?
Solution:

If Z_i = Y_i², then Y_i = √Z_i, and dy_i/dz_i = 1/(2√z_i), so that:

f_Z(z) = (2/α) √z · exp(−z/α) · (1/(2√z)) = (1/α) exp(−z/α)

which is the pdf of an exponential distribution with parameter α. Thus,

E[(1/n) ∑_{i=1}^n Y_i²] = E[Z̄] = E[Z₁] = α,

so that α̂_MLE is unbiased for α.
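Since Z = Y² is exponential with mean α, we can also check this unbiasedness by simulation, sampling Y as √Z; a Python sketch with a hypothetical α = 3:

    import random
    from math import sqrt

    random.seed(0)
    alpha = 3.0            # hypothetical true value
    n, reps = 50, 2000
    estimates = []
    for _ in range(reps):
        y = [sqrt(random.expovariate(1 / alpha)) for _ in range(n)]   # Y = sqrt(Z), Z ~ Exp with mean alpha
        estimates.append(sum(yi**2 for yi in y) / n)                  # the MLE for this sample
    print(sum(estimates) / reps)   # close to alpha = 3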


Note: I typically do not remember the “formula” for the pdf of a transformed variable, so I usually start from:

for positive z, F_Z(z) = P(Z ≤ z) = P(Y² ≤ z) = P(Y ≤ √z) = F_Y(√z)

and then take a derivative:

f_Z(z) = d/dz P(Z ≤ z) = d/dz F_Y(√z) = f_Y(√z) · (d/dz) √z

Exercise 9
Let X be a single observation from a Binom(n, p), where p is an unknown parameter. (In this case, we will
consider n known.)
(a) Find the maximum likelihood estimator (MLE) of p.
Solution:
We just have one observation, so the likelihood is just the pmf:
 
L(p) = C(n, x) p^x (1 − p)^{n−x},  0 < p < 1,  x = 0, 1, …, n

The log-likelihood is:


 
log L(p) = log C(n, x) + x log(p) + (n − x) log(1 − p).

The derivative of the log-likelihood is:

d/dp log L(p) = x/p − (n − x)/(1 − p).

Setting this to be 0, we solve:

x/p − (n − x)/(1 − p) = 0 ⟺ x − px = np − px ⟺ p = x/n.

Thus, p̂ = x/n is our candidate.
We take the second derivative:

d²/dp² log L(p) = −x/p² − (n − x)/(1 − p)²

which is always less than 0; thus

p̂ = X/n

is the maximum likelihood estimator for p.


(b) Suppose you roll a 6-sided die 40 times and observe eight rolls of a 6. What is the maximum likelihood
estimate of the probability of observing a 6?
Solution:
Here, we can let X be the number of sixes in 40 (independent) rolls of the die: X ∼ Binom(40, p), where p is
the probability of rolling a 6 on this die.
Then
p̂ = 8/40 = 0.2

is the maximum likelihood estimate for p.

(c) Using the same observed data, suppose you now plan to perform a second experiment with the same die,
and will roll the die 5 more times. What is the maximum likelihood estimate of the probability that you
will observe no 6’s in this next experiment?
Solution:
Let Y ∼ Binom(5, p) represent the number of sixes you will obtain in this second experiment. Based on the
pmf of the binomial, we know that:

P(Y = 0) = C(5, 0) p^0 (1 − p)^{5−0} = (1 − p)^5

Let us call this new parameter of interest θ. Then we have

θ = (1 − p)^5

We are asked to find the MLE θ̂.


Based on the invariance property of the MLE,

θ̂ = (1 − p̂)^5

With the observed data, the maximum likelihood estimate is thus

(1 − 0.2)^5 = 0.32768 ≈ 0.33

Thus, our best guess (using the maximum likelihood framework) at the chance that we will observe no sixes
in the next 5 rolls is 33%.
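Both the estimate and the invariance step are easy to check in Python:

    p_hat = 8 / 40                  # MLE of p from the observed data
    theta_hat = (1 - p_hat)**5      # invariance: estimated P(no sixes in 5 rolls)
    print(p_hat, round(theta_hat, 4))   # 0.2 0.3277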

Exercise 10
Suppose that a random variable X follows a discrete distribution, which is determined by a parameter θ
which can take only two values, θ = 1 or θ = 2. The parameter θ is unknown.
• If θ = 1, then X follows a Poisson distribution with parameter λ = 2.
• If θ = 2, then X follows a Geometric distribution with parameter p = 1/4.
Now suppose we observe X = 3. Based on this data, what is the maximum likelihood estimate of θ?
Solution:
Because there are only two possible values of θ (1 and 2) rather than a whole range of possible values (like examples with 0 < θ < ∞), the approach of taking a derivative with respect to θ will not work. Instead, we need to think about the definition of the MLE: we just want to determine which value of θ makes our observed data, X = 3, most likely.
If θ = 1, then X follows a Poisson distribution with parameter λ = 2. Thus, if θ = 1,

P(X = 3) = e^{−2} · 2^3 / 3! = 0.180447

If θ = 2, then X follows a Geometric distribution with parameter p = 1/4. Thus, if θ = 2,

P(X = 3) = (1/4)(1 − 1/4)^{3−1} = 0.140625

Thus, observing X = 3 is more likely when θ = 1 (0.18) than when θ = 2 (0.14), so 1 is the maximum
likelihood estimate of θ.
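The two likelihood values are quick to verify with a Python sketch:

    from math import exp, factorial

    # likelihood of observing X = 3 under each candidate value of theta
    L1 = exp(-2) * 2**3 / factorial(3)    # theta = 1: Poisson(2) pmf at x = 3
    L2 = (1/4) * (1 - 1/4)**(3 - 1)       # theta = 2: Geometric(1/4) pmf at x = 3
    print(L1, L2)                         # 0.1804... 0.140625, so theta_hat = 1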

Exercise 11
Let Y1 , Y2 , . . . , Yn be a random sample from a population with pdf

f(y | θ) = 2θ²/y³,  θ ≤ y < ∞

Find the maximum likelihood estimator of θ.


Solution:
The likelihood is:

L(θ) = ∏_{i=1}^n 2θ²/y_i³ = 2^n θ^{2n} / ∏_{i=1}^n y_i³,  0 < θ ≤ y_i < ∞, for every i.

Note that

0 < θ ≤ yi < ∞ for every i ⇐⇒ 0 < θ ≤ min {yi } .

To understand the behavior of L(θ), we can take the log and take the derivative:

log L(θ) = n log 2 + 2n log θ − log(∏_{i=1}^n y_i³)

d/dθ log L(θ) = 2n/θ > 0 on θ ∈ (0, min{y_i})
Thus, the MLE is the largest possible value of θ:

θ̂ = min{Yi }
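As in Exercise 5, a simulation sketch: here F(y) = 1 − θ²/y² for y ≥ θ, so inverse-CDF sampling gives y = θ/√(1 − u); the true value θ = 1.5 below is hypothetical:

    import random
    from math import sqrt

    random.seed(7)
    theta = 1.5                                                    # hypothetical true value
    y = [theta / sqrt(1 - random.random()) for _ in range(1000)]   # inverse-CDF samples
    print(min(y))   # the MLE; slightly above theta = 1.5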

