
Lecture 34

Review for Final Examination


Part I: Chapters 1 – 4 & Appendix C
Part II: Chapter 5

(1) 2023 final examination

• Time and Date
  19:00 – 22:00, 12 June 2022 (3 hours)
• Venue
The Third Teaching Building,
Rooms 206 & 205 for Class I & Class II
• Range
Chapters 1–5, Appendix C
• Assessment
Final Examination (50%)
(2) Please bring your calculators

• Candidates taking examinations that permit the use of calculators may use any calculator which fulfils the following criteria:
  – (a) it should be self-contained, silent, battery-operated and pocket-sized;
  – (b) it should have numeral-display facilities only;
  – (c) it should be used only for the purposes of calculation.
(2) Please bring your calculators (cont’d)

• It is the candidate’s responsibility to ensure that the calculator operates satisfactorily.
• The candidate must record the name and type of the calculator on the front page of the examination scripts.

(3) Please bring two pens/pencils

• If one pen does not work, you can use the other.
• You may prepare anything on two sides of an A4 sheet of paper and bring it with you to the Final Examination venue.
• You are not allowed to bring any other material (including iPhone/iPad) to the Final Examination venue.

(4) The range of the examination

• Chapter 1, excluding §1.11–§1.12
• Chapter 2, excluding Theorem 2.1 on p.74, §2.5.2, §2.5.3 and §2.5.4
• Chapter 3
• Chapter 4
• Chapter 5
• Appendix C, excluding C.3
(5) The distribution and marks of
the questions in the Exam of 2022

• There are five questions in the Final Examination, with a total of 100 marks.
• Q1 has 20 sub-questions worth 2 marks each, ranging from Chapter 1 to Chapter 5; give your answers directly.
• Q2, from Chapter 1, has a total of 9 marks.

(5) The distribution and marks of the
questions in the Exam of 2022 (cont’d)

• Q3 (including MLE, unbiased estimator, Fisher information, sufficient statistics, UMVUE), from Chapter 3, has a total of 18 marks.
• There is 1 sub-question from Chapter 4 with a total of 3 marks.
• There are 2 questions from Chapter 5 with a total of 30 marks: MPT (10), UMPT (10), LRT (0), and goodness-of-fit test (10).
(6) Remember some densities

• Please remember the pmf/pdf of the Bernoulli(p), Binomial(n, p), Uniform(a, b), Beta(a, b), and N(µ, σ²) distributions.
• Other pmfs or pdfs will be given.

(7) Key points for Chapters 1–4

Chapter 1: Probability and Distributions


Chapter 2: Sampling Distribution
Chapter 3: Point Estimation
Chapter 4: CI Estimation
Appendix C

(8) Given conditional expectation/variance
to find the expectation and variance

E(X) = E{E(X|Y )},


Var(X) = E{Var(X|Y )} + Var{E(X|Y )}.
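These two identities can be verified exactly on a small discrete model. The two-stage model below is my own illustration (not from the notes): Y ~ Bernoulli(1/2), with X uniform on {0, 1} given Y = 0 and uniform on {2, 4} given Y = 1.

```python
from fractions import Fraction as F

# Exact check of E(X) = E{E(X|Y)} and Var(X) = E{Var(X|Y)} + Var{E(X|Y)}
# on a toy two-stage model (my own example, not from the lecture notes).
joint = {}
for y, xs in ((0, (0, 1)), (1, (2, 4))):
    for x in xs:
        joint[(x, y)] = F(1, 2) * F(1, 2)   # P(Y=y) * P(X=x|Y=y)

def e(fn):  # expectation of fn(x, y) under the joint pmf
    return sum(p * fn(x, y) for (x, y), p in joint.items())

ex, ex2 = e(lambda x, y: x), e(lambda x, y: x * x)
var_x = ex2 - ex ** 2

m = {0: F(1, 2), 1: F(3)}                   # E(X|Y=y)
v = {0: F(1, 4), 1: F(1)}                   # Var(X|Y=y)
e_cond_mean = F(1, 2) * (m[0] + m[1])       # E{E(X|Y)}
e_cond_var = F(1, 2) * (v[0] + v[1])        # E{Var(X|Y)}
var_cond_mean = F(1, 2) * (m[0] ** 2 + m[1] ** 2) - e_cond_mean ** 2

assert ex == e_cond_mean
assert var_x == e_cond_var + var_cond_mean
```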

(9) Given two conditional densities,
to find the marginal densities

\[ f_X(x)\, f_{Y|X}(y|x) = f_Y(y)\, f_{X|Y}(x|y) \]

1. Continuous case
\[ f_X(x) \propto \frac{f_{X|Y}(x|y_0)}{f_{Y|X}(y_0|x)}, \qquad
   f_X(x) = \left\{ \int \frac{f_{Y|X}(y|x)}{f_{X|Y}(x|y)}\, dy \right\}^{-1}. \]

2. Discrete case
\[ \Pr(X = x_i)\Pr(Y = y_j | X = x_i) = \Pr(Y = y_j)\Pr(X = x_i | Y = y_j), \]
\[ p_i = \Pr(X = x_i) \propto \frac{\Pr(X = x_i | Y = y_{j_0})}{\Pr(Y = y_{j_0} | X = x_i)} \mathrel{\hat{=}} q_i, \qquad
   p_i = \frac{q_i}{\sum_{i'} q_{i'}}, \]
\[ \Pr(X = x_i) = \left\{ \sum_j \frac{\Pr(Y = y_j | X = x_i)}{\Pr(X = x_i | Y = y_j)} \right\}^{-1}. \]
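The discrete formula can be checked exactly on a toy model (my own illustration, not from the notes): take X ∈ {0, 1} with Pr(X = 1) = 2/3 and Y|X = x ~ Bernoulli((x + 1)/3), then recover Pr(X = x) from the two conditional pmfs alone.

```python
from fractions import Fraction as F

# Exact check of Pr(X=x_i) = [ sum_j Pr(Y=y_j|X=x_i)/Pr(X=x_i|Y=y_j) ]^{-1}
# on a made-up discrete model (my own example, not from the lecture notes).
px = {0: F(1, 3), 1: F(2, 3)}                                  # ground truth
py_given_x = {x: {1: F(x + 1, 3), 0: 1 - F(x + 1, 3)} for x in (0, 1)}
py = {y: sum(px[x] * py_given_x[x][y] for x in (0, 1)) for y in (0, 1)}
px_given_y = {(x, y): px[x] * py_given_x[x][y] / py[y]
              for x in (0, 1) for y in (0, 1)}

# apply the formula using only the two conditional pmfs
recovered = {x: 1 / sum(py_given_x[x][y] / px_given_y[(x, y)] for y in (0, 1))
             for x in (0, 1)}
assert recovered == px
```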
(10) Three methods to find the distribution
of a function of random variables (§2.1)

3. Cumulative distribution function technique (§2.1.1)

4. Transformation technique (§2.1.2)
(a) Monotone transformation
\[ g(y) = f(x) \times |dx/dy|. \tag{2.1} \]
(b) Piecewise monotone transformation
\[ g(y) = \sum_{i=1}^{n} f(h_i^{-1}(y)) \times \left| \frac{dh_i^{-1}(y)}{dy} \right|. \tag{2.2} \]
e.g., if X ∼ N(0, 1), then Y = X² ∼ χ²(1).
(c) Bivariate transformation
\[ g(y_1, y_2) = f(x_1, x_2) \times \left| \frac{\partial(x_1, x_2)}{\partial(y_1, y_2)} \right|. \tag{2.3} \]
– Examples 2.8 and 2.9 (p.65–68)
– Another example: Let X ∼ Beta(a, b), Y ∼ U(0, 1), and X ⊥⊥ Y. Find the distribution of Z = XY.

(d) Multivariate transformation
\[ g(y_1, \ldots, y_n) = f(x_1, \ldots, x_n) \times \left| \frac{\partial(x_1, \ldots, x_n)}{\partial(y_1, \ldots, y_n)} \right|. \]

5. Moment generating function technique (§2.1.3)
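The χ²(1) example under item 4(b) is easy to sanity-check by simulation (my own illustration, not from the notes): χ²(1) has mean 1 and variance 2, so the squares of standard normal draws should match those moments.

```python
import random
import statistics

# Simulation check of the piecewise-monotone transformation example:
# if X ~ N(0, 1), then Y = X^2 ~ chi-square(1), with mean 1 and variance 2.
# (Illustrative only; sample size and seed are my own choices.)
random.seed(42)
ys = [random.gauss(0.0, 1.0) ** 2 for _ in range(200_000)]
mean_y = statistics.fmean(ys)
var_y = statistics.pvariance(ys)
assert abs(mean_y - 1.0) < 0.05     # close to E[chi2(1)] = 1
assert abs(var_y - 2.0) < 0.15      # close to Var[chi2(1)] = 2
```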

(11) Order statistics (§2.4, p.81)

6. The cdf and pdf of X_(1) = min{X_1, ..., X_n}:
\[ G_1(x) = 1 - [1 - F(x)]^n, \qquad g_1(x) = n[1 - F(x)]^{n-1} f(x). \]

7. The cdf and pdf of X_(n) = max{X_1, ..., X_n}:
\[ G_n(x) = [F(x)]^n, \qquad g_n(x) = n[F(x)]^{n-1} f(x). \]

8. The cdf and pdf of X_(r):
\[ G_r(x) = \sum_{i=r}^{n} \binom{n}{i} F^i(x)[1 - F(x)]^{n-i}, \tag{2.21} \]
\[ g_r(x) = \frac{n!}{(r-1)!\,(n-r)!}\, f(x)\, F^{r-1}(x)[1 - F(x)]^{n-r}. \tag{2.23} \]

9. The joint pdf of X_(1), ..., X_(n) is
\[ g_{X_{(1)}, \ldots, X_{(n)}}(x_{(1)}, \ldots, x_{(n)}) = n!\, f_X(x_{(1)}) \cdots f_X(x_{(n)}), \tag{2.27} \]
where x_(1) ≤ ··· ≤ x_(n) and f_X(·) is the density function of the population random variable X, i.e., X_1, ..., X_n iid∼ f_X(x).
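The cdf of the minimum in item 6 can be checked by simulation (my own illustration, not from the notes): for U(0, 1) samples, F(x) = x, so Pr(X_(1) ≤ x) = 1 − (1 − x)^n.

```python
import random

# Simulation check of G1(x) = 1 - [1 - F(x)]^n for the minimum of n
# i.i.d. U(0, 1) variables (illustrative parameters of my own choosing).
random.seed(7)
n, x0, reps = 5, 0.2, 100_000
hits = sum(min(random.random() for _ in range(n)) <= x0 for _ in range(reps))
empirical = hits / reps
theoretical = 1 - (1 - x0) ** n          # 1 - 0.8**5
assert abs(empirical - theoretical) < 0.01
```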

(12) Central limit theorem (§2.5.5, p.94)

10. Theorem 2.9. Let {X_n}_{n=1}^∞ be a sequence of i.i.d. r.v.’s with common mean µ and common variance σ² > 0. Let
\[ \bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i \qquad \text{and} \qquad Y_n = \frac{\sqrt{n}(\bar{X}_n - \mu)}{\sigma}; \]
then Y_n →_L Z, where Z ∼ N(0, 1). ¶
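A quick simulation of the theorem (my own illustration, not from the notes): standardized means of Exponential(1) samples (µ = σ = 1) should look approximately N(0, 1) in mean and spread.

```python
import random
import statistics

# CLT simulation: Y_n = sqrt(n) (X-bar - mu) / sigma for Exponential(1) data,
# where mu = sigma = 1.  (Illustrative sample sizes of my own choosing.)
random.seed(3)
n, reps = 50, 20_000
ys = []
for _ in range(reps):
    xbar = statistics.fmean(random.expovariate(1.0) for _ in range(n))
    ys.append(n ** 0.5 * (xbar - 1.0) / 1.0)
assert abs(statistics.fmean(ys)) < 0.05          # mean near 0
assert abs(statistics.pstdev(ys) - 1.0) < 0.05   # sd near 1
```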

(13) Point estimation (Chapter 3)

11. Joint density and likelihood function (p.104)
\[ L(\theta) = f(\mathbf{x}; \theta) = \prod_{i=1}^{n} f(x_i; \theta), \quad \theta \in \Theta. \]
– Example 3.3 (p.108):
\[ f(\mathbf{x}; \theta) = \begin{cases} 1/\theta^n, & \text{if } 0 < x_i \le \theta,\ i = 1, \ldots, n, \\ 0, & \text{elsewhere}, \end{cases} \]
\[ L(\theta) = \begin{cases} 1/\theta^n, & \text{if } \theta \ge x_{(n)} \mathrel{\hat{=}} \max\{x_1, \ldots, x_n\}, \\ 0, & \text{elsewhere}. \end{cases} \]

12. MLE and mle (p.105)
\[ \hat{\theta} = \arg\max_{\theta \in \Theta} L(\theta) = \arg\max_{\theta \in \Theta} \ell(\theta); \]
then θ̂ = u(X_1, ..., X_n) is called the MLE of θ, and u(x_1, ..., x_n) is called a maximum likelihood estimate (mle) of θ.

13. Unrestricted MLE

Solving
\[ \frac{d\ell(\theta)}{d\theta} = 0, \]
we can obtain the unrestricted MLE.

– Example 3.1: Bernoulli(θ)
– Example 3.2: N(µ, σ²)

14. How to find MLEs of parameters in other distributions, e.g.,
– X_1, ..., X_n iid∼ Poisson(λ),
– X_i ind∼ Binomial(n_i, θ),
– X_1, ..., X_n iid∼ U[θ_1, θ_2],
– X_1, ..., X_n iid∼ Exponential(β),
– X_i ind∼ Gamma(n_i, β).
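For the Poisson case, the closed-form MLE is the sample mean; this can be confirmed numerically by maximizing the log-likelihood over a grid (my own illustration, not from the notes — the stdlib has no Poisson sampler, so one is sketched with the inversion method).

```python
import math
import random
import statistics

# Numerical check that the Poisson(lam) MLE is the sample mean
# (illustrative; sampler, seed and grid are my own choices).
random.seed(1)
true_lam = 3.0

def poisson_draw(lam):
    # Knuth's multiplicative inversion method
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

xs = [poisson_draw(true_lam) for _ in range(500)]

def loglik(lam):
    # log L(lam) up to the additive constant -sum(log x_i!), free of lam
    return sum(x * math.log(lam) - lam for x in xs)

grid = [0.01 * i for i in range(1, 1001)]       # lam in (0, 10]
lam_hat_grid = max(grid, key=loglik)
# concavity of loglik => grid maximizer lies within one grid step of x-bar
assert abs(lam_hat_grid - statistics.fmean(xs)) < 0.02
```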

15. MLE with equality constraints

– Example 3.6: Multinomial distribution

16. MLE with inequality constraints

– Example 3.7: Normal distribution with constraints a ≤ µ ≤ b.

17. Moment estimator (§3.2): replace the MLE by the moment estimator in Item 14.

18. Bayesian estimator (§3.3)

19. Efficiency (§3.4.2):

– 19.1 How to calculate the Fisher information (Theorem 3.4, p.131): If E[S(θ)] = 0, then
\[ I_n(\theta) = nI(\theta), \]
where
\[ I(\theta) = E\left[\left(\frac{d \log f(X; \theta)}{d\theta}\right)^2\right] = E\left[-\frac{d^2 \log f(X; \theta)}{d\theta^2}\right]. \]

– 19.2 How to calculate the efficiency of an unbiased estimator θ̂ for θ (p.136):
\[ \mathrm{Eff}_{\hat{\theta}}(\theta) = \frac{1/I_n(\theta)}{\mathrm{Var}(\hat{\theta})}. \tag{3.26} \]
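As a worked instance of (3.26) (my own illustration, not from the notes): for Bernoulli(θ), I(θ) = 1/(θ(1 − θ)), and the unbiased estimator X̄ has Var(X̄) = θ(1 − θ)/n, so its efficiency equals 1 — the Cramér–Rao lower bound is attained.

```python
# Efficiency (3.26) of X-bar for Bernoulli(theta), computed numerically
# (illustrative parameter values of my own choosing).
theta, n = 0.3, 50
fisher_one = 1.0 / (theta * (1.0 - theta))   # I(theta) per observation
crlb = 1.0 / (n * fisher_one)                # 1 / I_n(theta)
var_xbar = theta * (1.0 - theta) / n         # Var(X-bar)
efficiency = crlb / var_xbar
assert abs(efficiency - 1.0) < 1e-12         # CRLB attained, efficiency = 1
```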

20. Sufficiency (§3.4.3):

– 20.1 The definition of a sufficient statistic.

– 20.2 Use Theorem 3.5 (Factorization Theorem, p.141) to find a sufficient statistic T(x) for θ:
\[ f(x_1, \ldots, x_n; \theta) = f(\mathbf{x}; \theta) = g(T(\mathbf{x}); \theta) \times h(\mathbf{x}). \tag{3.27} \]

– 20.3 Jointly sufficient statistics. For example, let X_1, ..., X_n iid∼ Gamma(α, β); the joint density is
\[ f(\mathbf{x}; \alpha, \beta) = \prod_{i=1}^{n} \frac{\beta^\alpha}{\Gamma(\alpha)}\, x_i^{\alpha-1} e^{-\beta x_i}, \]
so \(\prod_{i=1}^{n} X_i\) and \(\sum_{i=1}^{n} X_i\) are jointly sufficient statistics for (α, β). Hence the distribution of
\[ (X_1, \ldots, X_n) \,\Big|\, \Big(\textstyle\prod_{i=1}^{n} X_i = t_1,\ \sum_{i=1}^{n} X_i = t_2\Big) \]
does not depend on (α, β).
– 20.4 Let X_1, ..., X_n iid∼ N(µ, 1); find the joint distribution of (X_1, ..., X_n) | (Σ_{i=1}^n X_i = t).

– 20.5 Let X_1, ..., X_n iid∼ Bernoulli(θ); find the joint distribution of (X_1, ..., X_n) | (Σ_{i=1}^n X_i = t).

– 20.6 Let X_1, ..., X_n iid∼ Poisson(λ); find the joint distribution of (X_1, ..., X_n) | (Σ_{i=1}^n X_i = t).
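The Bernoulli case 20.5 has a clean answer — conditional on the sum, every arrangement is equally likely, with no dependence on θ — which can be verified exactly by enumeration (my own illustration, not from the notes):

```python
from itertools import product
from fractions import Fraction

# Exact check for 20.5 with n = 3, t = 1: the conditional distribution of
# (X1, X2, X3) given sum = 1 is uniform over the 3 arrangements, for any theta.
# (Illustrative theta values of my own choosing.)
def conditional_dist(theta, n=3, t=1):
    probs = {}
    for xs in product((0, 1), repeat=n):
        if sum(xs) == t:
            probs[xs] = theta ** t * (1 - theta) ** (n - t)
    total = sum(probs.values())
    return {xs: p / total for xs, p in probs.items()}

d1 = conditional_dist(Fraction(1, 4))
d2 = conditional_dist(Fraction(2, 3))
assert d1 == d2                                    # free of theta
assert all(p == Fraction(1, 3) for p in d1.values())  # uniform
```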
21. Data reduction
– Let X_1, ..., X_n iid∼ f(x; θ). To estimate the parameter θ, first we need to find a sufficient statistic T(X_1, ..., X_n) = T(x) = T for θ.
– Then the MLE, moment estimator, and Bayesian estimator of θ are functions of T, say g_1(T), g_2(T), g_3(T).
– A pivotal quantity is a function of both T and θ, i.e., g_4(T, θ).
– Finally, the lower limit and upper limit of the CI of θ are also functions of T, say [g_5(T), g_6(T)].
22. Completeness (§3.4.4):
– 22.1 How to prove that a statistic T(X_1, ..., X_n) is complete for θ (Definition 3.9, p.147):

The statistic T is said to be complete if, for any h(t),
E[h(T)] = 0 for all θ ∈ Θ
implies that h(T) = 0 with probability 1.

– 22.2 How to find the unique UMVUE for τ(θ) (Theorem 3.7, Lehmann–Scheffé Theorem, p.149):

Step 1: Prove that T(x) is sufficient for θ;
Step 2: Prove that T(x) is complete for θ;
Step 3: Find a function of T, say g(T), which is an unbiased estimator of τ(θ).
Then g(T) is the unique UMVUE for τ(θ).

23. Limiting Properties of MLE (§3.5):
\[ [nI(\theta)]^{1/2}(\hat{\theta}_n - \theta) \xrightarrow{L} Z \sim N(0, 1) \quad \text{as } n \to \infty. \tag{3.34} \]
– 23.1 Let g(·) be a function whose first derivative g′(·) exists. Then, using the first-order Taylor expansion, we have
\[ g(\hat{\theta}_n) \approx g(\theta) + (\hat{\theta}_n - \theta)\, g'(\theta) \sim N\big(g(\theta),\, [g'(\theta)]^2 \mathrm{Var}(\hat{\theta}_n)\big), \]
i.e.,
\[ \frac{\sqrt{nI(\theta)}\,[g(\hat{\theta}_n) - g(\theta)]}{g'(\theta)} \mathrel{\dot\sim} N(0, 1). \tag{3.35} \]
– 23.2 Let X_1, ..., X_n iid∼ Bernoulli(θ); then the MLE of θ is θ̂_n = (1/n) Σ_{i=1}^n X_i. Let g(x) = arcsin √x; then g′(x) = 1/(2√(x(1−x))). Note that Var(θ̂_n) = Var(X̄) = θ(1−θ)/n, so
\[ [g'(\theta)]^2\, \mathrm{Var}(\hat{\theta}_n) = \frac{1}{4n} \]
is a constant. From (3.35), we have
\[ \frac{\arcsin\sqrt{\bar{X}} - \arcsin\sqrt{\theta}}{1/\sqrt{4n}} \mathrel{\dot\sim} N(0, 1), \]
which yields a CI for θ.
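The resulting variance-stabilized (arcsine) CI can be computed directly (hedged sketch; the sample counts below are made up for illustration): invert the pivot to get arcsin √X̄ ± z_{α/2}/√(4n), then map back through sin².

```python
import math
import statistics

# Sketch of a 95% arcsine CI for a Bernoulli theta, following (3.35)
# (illustrative data: 37 successes in 100 trials, my own numbers).
n, successes = 100, 37
xbar = successes / n
z = statistics.NormalDist().inv_cdf(0.975)      # z_{alpha/2} for 95%
half = z / math.sqrt(4 * n)                     # z_{alpha/2} / sqrt(4n)
center = math.asin(math.sqrt(xbar))
lo = math.sin(max(center - half, 0.0)) ** 2     # map back via sin^2, clipped
hi = math.sin(min(center + half, math.pi / 2)) ** 2
assert 0.0 <= lo < xbar < hi <= 1.0
```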
(14) CI estimation (Chapter 4)

24. Upper α-th quantile points:


α = Pr{Z > zα}, Z ∼ N (0, 1),
α = Pr{t(n) > t(α, n)},
α = Pr{χ2(n) > χ2(α, n)},
α = Pr{F (n, m) > F (α, n, m)}.

25. Pivotal quantity (Definition 4.1, p.164)

26. The CI of normal mean (§4.2):
– 26.1 If σ₀² is known, use the pivotal quantity
\[ \frac{\sqrt{n}(\bar{X} - \mu)}{\sigma_0} \sim N(0, 1) \]
to construct a 100(1 − α)% CI of µ as follows:
\[ \left( \bar{X} - z_{\alpha/2}\frac{\sigma_0}{\sqrt{n}},\ \bar{X} + z_{\alpha/2}\frac{\sigma_0}{\sqrt{n}} \right). \tag{4.4} \]

– 26.2 If σ² is unknown, use the pivotal quantity
\[ T = \frac{\sqrt{n}(\bar{X} - \mu)}{S} \sim t(n-1) \]
to construct a 100(1 − α)% CI of µ as follows:
\[ \left( \bar{X} - t(\alpha/2, n-1)\frac{S}{\sqrt{n}},\ \bar{X} + t(\alpha/2, n-1)\frac{S}{\sqrt{n}} \right). \tag{4.6} \]
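The known-variance interval (4.4) is straightforward to compute with the standard library's normal quantile function (hedged sketch; the data and σ₀ below are made up for illustration):

```python
import math
import statistics

# Sketch of the 100(1-alpha)% CI (4.4) for mu with known sigma_0
# (illustrative data of my own invention).
data = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 5.1, 4.7]
sigma0, alpha = 0.2, 0.05
n, xbar = len(data), statistics.fmean(data)
z = statistics.NormalDist().inv_cdf(1 - alpha / 2)   # z_{alpha/2}
half = z * sigma0 / math.sqrt(n)
ci = (xbar - half, xbar + half)
assert ci[0] < xbar < ci[1]
```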

27. The CI of normal variance (§4.4):
– 27.1 If µ is known, use the pivotal quantity
\[ \sum_{i=1}^{n} \frac{(X_i - \mu)^2}{\sigma^2} \sim \chi^2(n) \]
to construct a 100(1 − α)% CI of σ² as follows:
\[ \left[ \frac{\sum_{i=1}^{n}(X_i - \mu)^2}{\chi^2(\alpha/2, n)},\ \frac{\sum_{i=1}^{n}(X_i - \mu)^2}{\chi^2(1 - \alpha/2, n)} \right]. \tag{4.14} \]

– 27.2 If µ is unknown, use the pivotal quantity
\[ \frac{(n-1)S^2}{\sigma^2} \sim \chi^2(n-1) \]
to construct a 100(1 − α)% CI of σ² as follows:
\[ \left[ \frac{(n-1)S^2}{\chi^2(\alpha/2, n-1)},\ \frac{(n-1)S^2}{\chi^2(1 - \alpha/2, n-1)} \right]. \tag{4.15} \]

28. Large–sample confidence intervals
(§4.6, three methods)

(15) Appendix C

Please review C.1, C.2 and C.4

(16) Key points for Chapter 5

29. Type I error function

α(θ) = Pr(Type I error)


= Pr(rejecting H0|H0 is true)
= Pr(x ∈ C|θ ∈ Θ0), (5.3)
where C is the critical region.

30. Type II error function

β(θ) = Pr(Type II error)


= Pr(accepting H0|H0 is false)
= Pr(x ∈ C0|θ ∈ Θ1), (5.4)
where C0 is the acceptance region.

31. Power function and
the relationship with α(θ) and β(θ)

p(θ) = Pr(rejecting H0 | θ) = Pr(x ∈ C | θ). (5.6)

— If θ ∈ Θ0, then
p(θ) = Pr(x ∈ C | θ ∈ Θ0) = α(θ). (5.7)
— If θ ∈ Θ1, then
p(θ) = Pr(x ∈ C | θ ∈ Θ1) = 1 − β(θ).
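As a concrete instance (my own illustration, not from the notes): for a one-sided z-test of H0: µ = 0 against H1: µ > 0 with known σ, rejecting when X̄ > z_α σ/√n, the power at µ₁ is 1 − Φ(z_α − √n µ₁/σ), which exceeds the size α whenever µ₁ > 0.

```python
import math
import statistics

# Power of a one-sided z-test at theta = mu1 (illustrative parameters
# of my own choosing; this example is not from the lecture notes).
nd = statistics.NormalDist()
n, sigma, alpha, mu1 = 25, 1.0, 0.05, 0.5
z_a = nd.inv_cdf(1 - alpha)
power = 1 - nd.cdf(z_a - math.sqrt(n) * mu1 / sigma)
assert alpha < power < 1          # power exceeds the size under H1
```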

32. Neyman–Pearson Lemma
Let X_1, ..., X_n be a random sample from a population with the pdf (or pmf) f(x; θ). Let the likelihood function be L(θ). Then a test ϕ with critical region
\[ C = \left\{ \mathbf{x} = (x_1, \ldots, x_n)^\top : \frac{L(\theta_0)}{L(\theta_1)} \le k \right\} \tag{5.12} \]
and size α is the most powerful test of size α for testing H0: θ = θ0 against H1: θ = θ1, where k is a value determined by the size α.
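A standard worked case of the lemma (my own illustration, not from the notes): for N(θ, 1) data with H0: θ = 0 vs H1: θ = 1, the ratio L(θ₀)/L(θ₁) ≤ k reduces to X̄ ≥ c, so the most powerful size-α test rejects when X̄ ≥ z_α/√n.

```python
import math
import statistics

# Most powerful test for N(theta, 1): H0 theta=0 vs H1 theta=1.
# The NP critical region {L(0)/L(1) <= k} simplifies to {xbar >= c}.
# (Illustrative n and alpha of my own choosing.)
nd = statistics.NormalDist()
n, alpha = 16, 0.05
c = nd.inv_cdf(1 - alpha) / math.sqrt(n)      # critical value for xbar
# size check: under H0, xbar ~ N(0, 1/n), so Pr(xbar >= c) should equal alpha
size = 1 - statistics.NormalDist(0, 1 / math.sqrt(n)).cdf(c)
assert abs(size - alpha) < 1e-6
```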
33. Uniformly most powerful test

• Definition 5.3 on page 199


• Examples 5.7 and 5.8
• Questions 5.1–5.12 in Assignment 5.

34. How to find a UMPT

(a) For the given two composite hypotheses H0: θ ∈ Θ0 and H1: θ ∈ Θ1, first consider two simple hypotheses H0s: θ = θ0 ∈ Θ0 and H1s: θ = θ1 ∈ Θ1.

(b) Using the Neyman–Pearson lemma, find a most powerful test ϕ of size α with critical region C.

(c) If C is free from θ1, then ϕ is the UMPT of size α for testing H0s: θ = θ0 ∈ Θ0 against H1: θ ∈ Θ1.

(d) If sup_{θ∈Θ0} p_ϕ(θ) = α = p_ϕ(θ0), then ϕ is the UMPT of size α for testing H0: θ ∈ Θ0 against H1: θ ∈ Θ1.

35. Likelihood ratio (LR) test

• Step 1: Find θ̂_R, the restricted MLE of θ in Θ0, and θ̂, the unrestricted MLE of θ in Θ.
• Step 2: Calculate the LR statistic
λ(x) = L(θ̂_R)/L(θ̂). (5.23)
• Step 3: Determine λ_α or c in
C = {x: λ(x) ≤ λ_α} = {x: log λ(x) ≤ c}
based on
α = Pr{log λ(x) ≤ c | H0 is true}.
36. How to determine the c in the LR test

• Step I. Express λ(x) or log λ(x) as a function of a statistic Q, e.g., λ(x) = h(Q), such that under H0, Q follows a known distribution (e.g., Normal, Gamma, Chi-square, t, F).
• Step II. Prove that h(Q) is concave with a maximum or convex with a minimum.

• Step III. If h(Q) is concave, then h(Q) ≤ c is equivalent to Q ≤ c1 or Q ≥ c2.

• Step IV. If h(Q) is convex, then h(Q) ≥ c is equivalent to Q ≤ c1 or Q ≥ c2.

• Step V. Use the equal-tail approach: let
α/2 = Pr(Q ≤ c1 | H0)
and
α/2 = Pr(Q ≥ c2 | H0)
to determine c1 and c2.

37. Tests on normal means

• One-sample normal test when variance


is known (§5.4.1)
• One-sample t test (§5.4.2)
• Two-sample t test (§5.4.3)

38. The critical region approach and
the p-value approach

• If you are asked to use the critical region approach, you must obey.
• If you are asked to use the p-value approach, you must obey.
• You may use either approach if no approach is specified in the question.

39. Goodness of fit test

Please review §5.5.1 – §5.5.3
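Pearson's chi-square statistic, the workhorse of the goodness-of-fit test, is worth being able to compute by hand; here is a hedged sketch (the die-roll counts are made up for illustration):

```python
# Pearson's chi-square goodness-of-fit statistic (illustrative data:
# 120 made-up die rolls; H0: the die is fair).
observed = [18, 22, 21, 19, 24, 16]
expected = [sum(observed) / 6] * 6             # 20 per face under H0
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
# Compare chi2 with the chi-square(5) upper-alpha quantile; the 5% critical
# value is about 11.07, so this small chi2 is consistent with H0.
assert chi2 < 11.07
```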

40. All questions in Assignments 1–5.
41. All questions in Tutorials.

End of the Review

God bless you!


