2023 Final Review
(1) 2023 final examination
(3) Please bring two pens/pencils
(4) The range of the examination
• Chapter 4
• Chapter 5
(5) The distribution and marks of
the questions in the Exam of 2022
(5) The distribution and marks of the
questions in the Exam of 2022 (cont'd)
(7) Key points for Chapters 1–4
(8) Given conditional expectation/variance
to find the expectation and variance
(9) Given two conditional densities,
to find the marginal densities
Pr(X = xi) Pr(Y = yj | X = xi) = Pr(Y = yj) Pr(X = xi | Y = yj).
2. Discrete case: fixing yj0,
pi = Pr(X = xi) ∝ Pr(X = xi | Y = yj0) / Pr(Y = yj0 | X = xi) =: qi,
so that
pi = qi / Σ_{i′} qi′,
and
Pr(X = xi) = [ Σ_j Pr(Y = yj | X = xi) / Pr(X = xi | Y = yj) ]^{−1}.
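The renormalisation in the discrete case above can be checked numerically. In this sketch the 2×2 joint pmf is invented for illustration; fixing yj0 and forming qi from the two conditionals recovers the marginal of X after renormalising:

```python
# Recovering the marginal of X from the two conditional families, as on
# the slide. The joint pmf below is a made-up 2x2 example, not course data.
joint = {(0, 0): 0.12, (0, 1): 0.18,   # Pr(X=0, Y=yj)
         (1, 0): 0.42, (1, 1): 0.28}   # Pr(X=1, Y=yj)

px = {x: sum(p for (xi, _), p in joint.items() if xi == x) for x in (0, 1)}
py = {y: sum(p for (_, yj), p in joint.items() if yj == y) for y in (0, 1)}

# The two conditional families Pr(X=xi | Y=yj) and Pr(Y=yj | X=xi).
x_given_y = {(x, y): joint[(x, y)] / py[y] for (x, y) in joint}
y_given_x = {(x, y): joint[(x, y)] / px[x] for (x, y) in joint}

# Fix yj0 = 0 and form qi = Pr(X=xi | Y=yj0) / Pr(Y=yj0 | X=xi);
# qi is proportional to pi, so renormalising recovers the marginal of X.
y0 = 0
q = {x: x_given_y[(x, y0)] / y_given_x[(x, y0)] for x in (0, 1)}
p_hat = {x: q[x] / sum(q.values()) for x in (0, 1)}
print(p_hat)  # matches px = {0: 0.3, 1: 0.7}
```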
(10) Three methods to find the distribution
of the function of random variables (§2.1)
(d) Multivariate transformation
g(y1, . . . , yn) = f(x1, . . . , xn) × |∂(x1, . . . , xn) / ∂(y1, . . . , yn)|.
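As a worked instance of the Jacobian formula (not from the slides): take X1, X2 iid N(0, 1) and the linear transformation Y1 = X1 + X2, Y2 = X1 − X2, so x1 = (y1 + y2)/2 and x2 = (y1 − y2)/2.

```latex
\[
\frac{\partial(x_1,x_2)}{\partial(y_1,y_2)}
= \det\begin{pmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{pmatrix}
= -\tfrac12,
\qquad
g(y_1,y_2)
= f\!\left(\tfrac{y_1+y_2}{2},\tfrac{y_1-y_2}{2}\right)\cdot\tfrac12
= \frac{1}{4\pi}\,e^{-(y_1^2+y_2^2)/4},
\]
```

which factors into two N(0, 2) densities, so Y1 and Y2 are independent N(0, 2).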
(11) Order statistics (§2.4, p.81)
8. The cdf and pdf of X(r)
Gr(x) = Σ_{i=r}^{n} (n choose i) F^i(x) [1 − F(x)]^{n−i},   (2.21)
gr(x) = n! / [(r − 1)!(n − r)!] · f(x) F^{r−1}(x) [1 − F(x)]^{n−r}.   (2.23)
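Formulas (2.21) and (2.23) can be sanity-checked against each other numerically: integrating the pdf should reproduce the cdf. This sketch uses a Uniform(0, 1) parent with arbitrary choices n = 5, r = 3 (not values from the notes):

```python
# Numerical check of (2.21) and (2.23) for a Uniform(0,1) parent,
# where F(x) = x and f(x) = 1 on [0,1].
from math import comb, factorial

n, r = 5, 3  # sample size and rank; arbitrary choices for the check

def G(x):    # cdf of X_(r), formula (2.21)
    return sum(comb(n, i) * x**i * (1 - x)**(n - i) for i in range(r, n + 1))

def g(x):    # pdf of X_(r), formula (2.23)
    return factorial(n) / (factorial(r - 1) * factorial(n - r)) \
        * x**(r - 1) * (1 - x)**(n - r)

# Integrating the pdf from 0 to t (midpoint rule) should give the cdf at t.
t, m = 0.7, 200_000
integral = sum(g((k + 0.5) * t / m) * t / m for k in range(m))
print(abs(integral - G(t)) < 1e-6)  # True
```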
9. The joint pdf of X(1), . . . , X(n) is
g_{X(1),...,X(n)}(x(1), . . . , x(n)) = n! ∏_{i=1}^{n} f(x(i)),   x(1) ≤ · · · ≤ x(n).
(12) Central limit theorem (§2.5.5, p.94)
(13) Point estimation (Chapter 3)
13. Unrestricted MLE
Setting
dℓ(θ)/dθ = 0
and solving for θ gives the unrestricted MLE.
14. How to find MLEs of parameters in other distributions, e.g.
– X1, . . . , Xn iid∼ Poisson(λ),
– Xi ind∼ Binomial(ni, θ),
– X1, . . . , Xn iid∼ U[θ1, θ2],
– X1, . . . , Xn iid∼ Exponential(β),
– Xi ind∼ Gamma(ni, β).
15. MLE with equality constraints
16. MLE with inequality constraints
17. Moment estimator
(§3.2): Replace the MLE by the moment estimator in Item 14.
19. Efficiency (§3.4.2):
20. Sufficiency (§3.4.3):
– 20.3 Jointly sufficient statistics. For example, let X1, . . . , Xn iid∼ Gamma(α, β); the joint density is
f(x; α, β) = ∏_{i=1}^{n} [β^α / Γ(α)] x_i^{α−1} e^{−β x_i},
so
∏_{i=1}^{n} Xi and Σ_{i=1}^{n} Xi
are jointly sufficient statistics for (α, β). Hence the distribution of
(X1, . . . , Xn) | (∏_{i=1}^{n} Xi = t1, Σ_{i=1}^{n} Xi = t2)
does not depend on (α, β).
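One way to see the joint sufficiency in Item 20.3 concretely: two Gamma samples that share the same product and sum must have identical likelihoods for every (α, β). The two three-point samples below are invented so that both have sum 14 and product 40:

```python
# Illustration (made-up numbers) of joint sufficiency of (prod Xi, sum Xi)
# for a Gamma(alpha, beta) sample: the likelihood depends on the data only
# through sum(log xi) (equivalently prod xi) and sum(xi).
from math import lgamma, log

def log_lik(x, alpha, beta):
    # log of prod_i beta^alpha / Gamma(alpha) * x_i^(alpha-1) * exp(-beta x_i)
    n = len(x)
    return (n * (alpha * log(beta) - lgamma(alpha))
            + (alpha - 1) * sum(log(xi) for xi in x)
            - beta * sum(x))

x1 = [1.0, 5.0, 8.0]   # sum = 14, product = 40
x2 = [2.0, 2.0, 10.0]  # same sum and product, different sample
for alpha, beta in [(0.5, 1.0), (2.0, 0.7), (9.0, 3.0)]:
    assert abs(log_lik(x1, alpha, beta) - log_lik(x2, alpha, beta)) < 1e-9
print("likelihoods agree for every (alpha, beta) tried")
```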
– 20.4 Let X1, . . . , Xn iid∼ N(µ, 1); find the joint distribution of
(X1, . . . , Xn) | (Σ_{i=1}^{n} Xi = t).
– 20.5 Let X1, . . . , Xn iid∼ Bernoulli(θ); find the joint distribution of
(X1, . . . , Xn) | (Σ_{i=1}^{n} Xi = t).
– 20.6 Let X1, . . . , Xn iid∼ Poisson(λ); find the joint distribution of
(X1, . . . , Xn) | (Σ_{i=1}^{n} Xi = t).
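For Item 20.5 the answer can be verified exactly for small n: given Σ Xi = t, a Bernoulli(θ) sample is uniform over the C(n, t) zero–one sequences with t ones, whatever θ is. The values n = 4, t = 2 below are arbitrary choices for the check:

```python
# Exact check that the conditional law in Item 20.5 is free of theta.
from itertools import product
from math import comb

n, t = 4, 2  # arbitrary small case

def cond_prob(x, theta):
    # Pr(X = x) / Pr(sum X = t) for a 0/1 sequence x with sum(x) == t
    px = theta**sum(x) * (1 - theta)**(n - sum(x))
    pt = comb(n, t) * theta**t * (1 - theta)**(n - t)
    return px / pt

for x in product((0, 1), repeat=n):
    if sum(x) == t:
        for theta in (0.2, 0.5, 0.9):
            assert abs(cond_prob(x, theta) - 1 / comb(n, t)) < 1e-12
print("conditional law is uniform over sequences, free of theta")
```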
21. Data reduction
– Let X1, . . . , Xn iid∼ f(x; θ). To estimate the parameter θ, we first find a sufficient statistic T(X1, . . . , Xn) = T(x) = T for θ.
– Then the MLE, moment estimator, and Bayesian estimator of θ are functions of T, say g1(T), g2(T), g3(T).
– A pivotal quantity is a function of both T and θ, i.e., g4(T, θ).
– Finally, the lower and upper limits of the CI of θ are also functions of T, say [g5(T), g6(T)].
22. Completeness (§3.4.4):
– 22.1 How to prove that a statistic
T (X1, . . . , Xn) is complete for θ (Definition 3.9, p.147):
– 22.2 How to find the unique UMVUE for θ (Theorem 3.7, Lehmann–Scheffé Theorem, p.149):
23. Limiting Properties of MLE (§3.5):
[nI(θ)]^{1/2} (θ̂n − θ) →L Z ∼ N(0, 1) as n → ∞.   (3.34)
– 23.1 Let g(·) be a function whose first derivative g′(·) exists. Then, by a first-order Taylor expansion,
g(θ̂n) ≈ g(θ) + (θ̂n − θ) g′(θ) ∼ N(g(θ), [g′(θ)]² Var(θ̂n)),
i.e.,
√(nI(θ)) [g(θ̂n) − g(θ)] / g′(θ) ∼ N(0, 1) approximately.   (3.35)
– 23.2 Let X1, . . . , Xn iid∼ Bernoulli(θ); then the MLE of θ is θ̂n = (1/n) Σ_{i=1}^{n} Xi = X̄. Let
g(x) = arcsin √x, so that g′(x) = 1 / [2√(x(1 − x))].
Note that Var(θ̂n) = Var(X̄) = θ(1 − θ)/n, so that
[g′(θ)]² Var(θ̂n) = 1/(4n)
is a constant free of θ. From (3.35), we have
(arcsin √X̄ − arcsin √θ) / (1/√(4n)) ∼ N(0, 1) approximately,
which yields a CI for θ.
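A minimal sketch of the arcsine CI from Item 23.2, on a made-up 0/1 sample; the interval is formed on the arcsin √· scale and mapped back through sin²:

```python
# Arcsine variance-stabilised CI for a Bernoulli theta (Item 23.2).
from math import asin, sin, sqrt
from statistics import NormalDist

x = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]   # hypothetical 0/1 sample
n, xbar = len(x), sum(x) / len(x)
z = NormalDist().inv_cdf(0.975)       # z_{alpha/2} for a 95% CI

# arcsin(sqrt(xbar)) +/- z / sqrt(4n), then map back through sin^2;
# the max/min clamp keeps the endpoints inside [0, pi/2].
lo = sin(max(asin(sqrt(xbar)) - z / sqrt(4 * n), 0.0)) ** 2
hi = sin(min(asin(sqrt(xbar)) + z / sqrt(4 * n), asin(1.0))) ** 2
print(f"95% CI for theta: [{lo:.3f}, {hi:.3f}]")
```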
(14) CI estimation (Chapter 4)
26. The CI of normal mean (§4.2):
– 26.1 If σ0² is known, use the pivotal quantity
√n (X̄ − µ) / σ0 ∼ N(0, 1)
to construct a 100(1 − α)% CI of µ as follows:
[X̄ − z_{α/2} σ0/√n, X̄ + z_{α/2} σ0/√n].   (4.4)
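Formula (4.4) as code, on an invented sample with an assumed known σ0:

```python
# Known-variance z interval for a normal mean, formula (4.4).
from math import sqrt
from statistics import NormalDist, mean

x = [4.9, 5.3, 5.1, 4.7, 5.4, 5.0]       # hypothetical sample
sigma0, alpha = 0.2, 0.05                # assumed known sd, 95% level
n, xbar = len(x), mean(x)
z = NormalDist().inv_cdf(1 - alpha / 2)  # z_{alpha/2} = 1.96 for alpha = .05

half = z * sigma0 / sqrt(n)
ci = (xbar - half, xbar + half)
print(ci)
```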
– 26.2 If σ² is unknown, use the pivotal quantity
T = √n (X̄ − µ) / S ∼ t(n − 1)
to construct a 100(1 − α)% CI of µ as follows:
[X̄ − t(α/2, n − 1) S/√n, X̄ + t(α/2, n − 1) S/√n].   (4.6)
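Formula (4.6) as code; the sample is made up, and since the standard library has no t quantile, the critical value t(0.025, 5) = 2.571 is taken from a standard t table rather than computed:

```python
# Unknown-variance t interval for a normal mean, formula (4.6).
from math import sqrt
from statistics import mean, stdev

x = [4.9, 5.3, 5.1, 4.7, 5.4, 5.0]       # hypothetical sample, n = 6
n, xbar, s = len(x), mean(x), stdev(x)   # stdev uses the n-1 divisor, i.e. S
t_crit = 2.571                           # t(alpha/2, n-1) for alpha = 0.05, from tables

half = t_crit * s / sqrt(n)
print((xbar - half, xbar + half))
```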
27. The CI of normal variance (§4.4):
– 27.1 If µ is known, use the pivotal quantity
Σ_{i=1}^{n} (Xi − µ)² / σ² ∼ χ²(n)
to construct a 100(1 − α)% CI of σ² as follows:
[ Σ_{i=1}^{n} (Xi − µ)² / χ²(α/2, n), Σ_{i=1}^{n} (Xi − µ)² / χ²(1 − α/2, n) ].   (4.14)
– 27.2 If µ is unknown, use the pivotal quantity
(n − 1)S² / σ² ∼ χ²(n − 1)
to construct a 100(1 − α)% CI of σ² as follows:
[ (n − 1)S² / χ²(α/2, n − 1), (n − 1)S² / χ²(1 − α/2, n − 1) ].   (4.15)
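Formula (4.15) as code; the sample is invented and the chi-square critical values for 9 df (19.023 and 2.700) come from a standard table, since the standard library has no chi-square quantile. Note the larger quantile gives the lower endpoint:

```python
# Unknown-mean chi-square interval for a normal variance, formula (4.15).
from statistics import variance

x = [10.2, 9.8, 10.5, 10.1, 9.6, 10.4, 9.9, 10.3, 10.0, 9.7]  # n = 10
n, s2 = len(x), variance(x)           # S^2 with the n-1 divisor
chi2_hi, chi2_lo = 19.023, 2.700      # chi^2(.025, 9), chi^2(.975, 9), from tables

ci = ((n - 1) * s2 / chi2_hi, (n - 1) * s2 / chi2_lo)
print(ci)   # the larger quantile produces the lower endpoint
```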
28. Large-sample confidence intervals (§4.6, three methods)
(15) Appendix C
(16) Key points for Chapter 5
29. Type I error function
30. Type II error function
31. Power function and
the relationship with α(θ) and β(θ)
32. Neyman–Pearson Lemma
Let X1, . . . , Xn be a random sample from a population with pdf (or pmf) f(x; θ), and let the likelihood function be L(θ). Then a test ϕ with critical region
C = { x = (x1, . . . , xn) : L(θ0)/L(θ1) ≤ k }   (5.12)
and size α is the most powerful test of size α for testing H0: θ = θ0 against H1: θ = θ1, where k is a value determined by the size α.
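A standard worked case of the lemma (not taken from the slides): for X1, . . . , Xn iid N(θ, 1), testing H0: θ = 0 against H1: θ = 1, the condition L(θ0)/L(θ1) ≤ k is decreasing in x̄ and so reduces to x̄ ≥ c, with c fixed by the size requirement under H0. The sample below is made up:

```python
# Neyman-Pearson most powerful test for N(theta, 1), H0: 0 vs H1: 1.
from math import sqrt, exp
from statistics import NormalDist, mean

n, alpha = 25, 0.05
theta0, theta1 = 0.0, 1.0

# Size condition: Pr(Xbar >= c | theta0) = alpha, where Xbar ~ N(theta0, 1/n).
c = theta0 + NormalDist().inv_cdf(1 - alpha) / sqrt(n)

def likelihood_ratio(x):
    # L(theta0)/L(theta1); the normal constants cancel in the ratio
    return exp(sum(-(xi - theta0)**2 / 2 + (xi - theta1)**2 / 2 for xi in x))

x = [0.6] * n            # a made-up sample with xbar = 0.6 > c
print(mean(x) >= c)      # rejects H0: the ratio is small when xbar is large
```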
33. Uniformly most powerful test
34. How to find a UMPT
(c) If C is free from θ1, then ϕ is the UMPT of size α for testing H0: θ = θ0 against H1: θ ∈ Θ1.
35. Likelihood ratio (LR) test
• Step 3: Determine λα or c in
C = {x : λ(x) ≤ λα} = {x : log λ(x) ≤ c}
based on
α = Pr{log λ(x) ≤ c | H0 is true}.
36. How to determine the c in LR Test
• Step III. If h(Q) is concave, then
h(Q) ≤ c
is equivalent to
Q ≤ c1 or Q ≥ c2.
• Step V. Use the equal-tail approach, let
37. Tests on normal means
38. The critical region approach and
the p-value approach
39. Goodness of fit test
40. All questions in Assignments 1–5.
41. All questions in Tutorials.
End of the Review