Dale L. Zimmerman

Linear Model Theory
Exercises and Solutions
This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
In loving gratitude to my late father, Dean
Zimmerman, and my mother, Wendy
Zimmerman
Contents

1 A Brief Introduction 1
2 Selected Matrix Algebra Topics and Results 3
3 Generalized Inverses and Solutions to Systems of Linear Equations 7
4 Moments of a Random Vector and of Linear and Quadratic Forms in a Random Vector 21
5 Types of Linear Models 31
6 Estimability 39
7 Least Squares Estimation for the Gauss–Markov Model 63
8 Least Squares Geometry and the Overall ANOVA 91
9 Least Squares Estimation and ANOVA for Partitioned Models 103
10 Constrained Least Squares Estimation and ANOVA 131
11 Best Linear Unbiased Estimation for the Aitken Model 153
12 Model Misspecification 171
13 Best Linear Unbiased Prediction 185
14 Distribution Theory 223
15 Inference for Estimable and Predictable Functions 255
16 Inference for Variance–Covariance Parameters 325
17 Empirical BLUE and BLUP 351
1 A Brief Introduction
This book contains 296 solved exercises on the theory of linear models. The
exercises are taken from the author’s graduate-level textbook, Linear Model Theory:
With Examples and Exercises, which was published by Springer in 2020. The
exercises themselves have been restated, when necessary and feasible, to make them
as comprehensible as possible independently of the textbook, but the solutions refer
liberally to theorems and other results therein. They are arranged in chapters, the
numbers and titles of which are identical to those of the chapters in the textbook
that have exercises.
Some of the exercises and solutions are short, while others have multiple parts
and are quite lengthy. Some are proofs of theorems presented but not proved in the
aforementioned textbook, but most are specializations of said theorems and other
general results to specific linear models. In this respect they are quite similar to the
textbook’s examples. A few of the exercises require the use of a computer, but none
involve the analysis of actual data.
The author is not aware of any other published set of solved exercises for a
graduate-level course on the theory of linear models. It is hoped that students
and instructors alike, possibly even those not using Linear Model Theory: With
Examples and Exercises for their course, will find these exercises and solutions
useful.
2 Selected Matrix Algebra Topics and Results

This chapter presents exercises on selected matrix algebra topics and results and provides solutions to those exercises.
Exercise 4 Prove Theorem 2.8.3: For any matrix A and any nonzero scalar c,
rank(cA) = rank(A).
Solution If $A$ is positive definite and $c > 0$, then $x^T(cA)x = cx^TAx > 0$ for all $x \neq 0$ [and trivially $0^T(cA)0 = 0$], implying that $cA$ is positive definite. Alternatively, if $A$ is positive semidefinite, then $x^T(cA)x = cx^TAx \geq 0$ for all $x$ and $x^T(cA)x = cx^TAx = 0$ for some $x \neq 0$, implying that $cA$ is positive semidefinite.
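The rank statement of Exercise 4 is also easy to probe numerically. The following is a minimal sketch, assuming NumPy is available; the rank-4 test matrix and the scalars are arbitrary illustrative choices.

```python
# Sanity check of Theorem 2.8.3: rank(cA) = rank(A) for any nonzero scalar c.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4)) @ rng.standard_normal((4, 6))  # rank 4 (a.s.)
for c in (2.0, -3.5, 1e-3):
    assert np.linalg.matrix_rank(c * A) == np.linalg.matrix_rank(A)
```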
Now, if $A$ and $B$ are nonnegative definite, then $x_1^T A x_1 \geq 0$ for all $x_1$ and $x_2^T B x_2 \geq 0$ for all $x_2$, implying (by the result displayed above) that
$$x^T \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} x \geq 0 \quad \text{for all } x,$$
i.e., that $\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}$ is nonnegative definite. Conversely, if $\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}$ is nonnegative definite, then
$$(x_1^T, 0^T) \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} \begin{pmatrix} x_1 \\ 0 \end{pmatrix} \geq 0 \ \text{ for all } x_1 \quad \text{and} \quad (0^T, x_2^T) \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} \begin{pmatrix} 0 \\ x_2 \end{pmatrix} \geq 0 \ \text{ for all } x_2,$$
i.e., $x_1^T A x_1 \geq 0$ for all $x_1$ and $x_2^T B x_2 \geq 0$ for all $x_2$. Thus $A$ and $B$ are nonnegative definite.

On the other hand, if $A$ and $B$ are positive definite, then $x_1^T A x_1 > 0$ for all $x_1 \neq 0$ and $x_2^T B x_2 > 0$ for all $x_2 \neq 0$, implying (by the result displayed above and by the trivial results $0^T A 0 = 0^T B 0 = 0$) that
$$x^T \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} x > 0 \quad \text{for all } x \neq 0,$$
i.e., that $\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}$ is positive definite. Conversely, if $\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}$ is positive definite, then
$$(x_1^T, 0^T) \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} \begin{pmatrix} x_1 \\ 0 \end{pmatrix} > 0 \ \text{ for all } x_1 \neq 0 \quad \text{and} \quad (0^T, x_2^T) \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} \begin{pmatrix} 0 \\ x_2 \end{pmatrix} > 0 \ \text{ for all } x_2 \neq 0,$$
i.e., $x_1^T A x_1 > 0$ for all $x_1 \neq 0$ and $x_2^T B x_2 > 0$ for all $x_2 \neq 0$. Thus $A$ and $B$ are positive definite.
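A small numerical illustration of the block-diagonal part of this solution follows; it is a sketch assuming NumPy and SciPy are available, and the particular blocks $A$ (positive definite by construction) and $B$ (only semidefinite) are arbitrary choices.

```python
# diag(A, B) is positive definite exactly when both blocks are.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
A = M @ M.T + 3 * np.eye(3)      # positive definite by construction
B = np.diag([2.0, 0.0])          # nonnegative definite, not positive definite

is_pd = lambda X: np.all(np.linalg.eigvalsh(X) > 1e-10)
print(is_pd(block_diag(A, A)))   # True: both blocks positive definite
print(is_pd(block_diag(A, B)))   # False: B is only semidefinite
```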
Solution
$$\left(\oplus_{i=1}^k A_i\right)\left(\oplus_{i=1}^k A_i^{-1}\right) = \begin{pmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_k \end{pmatrix}\begin{pmatrix} A_1^{-1} & 0 & \cdots & 0 \\ 0 & A_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_k^{-1} \end{pmatrix} = \begin{pmatrix} I & 0 & \cdots & 0 \\ 0 & I & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & I \end{pmatrix} = I.$$
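The same identity can be checked numerically; this is a minimal sketch assuming SciPy's `block_diag` for the direct sum, with two arbitrary nonsingular blocks.

```python
# The direct sum of the inverses inverts the direct sum, as in the display above.
import numpy as np
from scipy.linalg import block_diag

A1 = np.array([[2.0, 1.0], [0.0, 3.0]])
A2 = np.array([[1.0, 4.0], [2.0, 9.0]])
S = block_diag(A1, A2)                                           # direct sum of the A_i
S_inv = block_diag(np.linalg.inv(A1), np.linalg.inv(A2))         # direct sum of the inverses
assert np.allclose(S @ S_inv, np.eye(4))
```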
Solution
(a)
$$\nabla f = \frac{\partial (a^T x)}{\partial x} = \begin{pmatrix} \partial\left(\sum_{i=1}^n a_i x_i\right)/\partial x_1 \\ \vdots \\ \partial\left(\sum_{i=1}^n a_i x_i\right)/\partial x_n \end{pmatrix} = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix} = a.$$
(b)
$$\nabla f = \frac{\partial (x^T A x)}{\partial x} = \begin{pmatrix} \partial\left(\sum_{i=1}^n \sum_{j=1}^n a_{ij} x_i x_j\right)/\partial x_1 \\ \vdots \\ \partial\left(\sum_{i=1}^n \sum_{j=1}^n a_{ij} x_i x_j\right)/\partial x_n \end{pmatrix} = \begin{pmatrix} 2a_{11}x_1 + \sum_{j=2}^n a_{1j}x_j + \sum_{i=2}^n a_{i1}x_i \\ \vdots \\ 2a_{nn}x_n + \sum_{j=1}^{n-1} a_{nj}x_j + \sum_{i=1}^{n-1} a_{in}x_i \end{pmatrix} = \begin{pmatrix} 2\sum_{j=1}^n a_{1j}x_j \\ \vdots \\ 2\sum_{j=1}^n a_{nj}x_j \end{pmatrix} = 2Ax,$$
where the penultimate equality uses the symmetry of $A$ (i.e., $a_{ij} = a_{ji}$).
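Both gradient formulas are easy to confirm with central finite differences; the sketch below assumes NumPy and a symmetric $A$ (which the identity $\nabla(x^TAx) = 2Ax$ requires), with all test values drawn at random.

```python
# Finite-difference check of grad(a'x) = a and grad(x'Ax) = 2Ax (A symmetric).
import numpy as np

rng = np.random.default_rng(2)
n = 4
a = rng.standard_normal(n)
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                     # symmetrize
x = rng.standard_normal(n)

def num_grad(f, x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

assert np.allclose(num_grad(lambda v: a @ v, x), a, atol=1e-6)
assert np.allclose(num_grad(lambda v: v @ A @ v, x), 2 * A @ x, atol=1e-5)
```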
3 Generalized Inverses and Solutions to Systems of Linear Equations
Solution
(b) $\dfrac{1}{\|a\|^2\|b\|^2}\,ba^T$ is a generalized inverse of $ab^T$ because
$$ab^T\left(\frac{1}{\|a\|^2\|b\|^2}\,ba^T\right)ab^T = \frac{(b^Tb)(a^Ta)}{\|a\|^2\|b\|^2}\,ab^T = ab^T.$$
(e) Let
$$C = \begin{pmatrix} & & c_1 \\ & \iddots & \\ c_n & & \end{pmatrix} \quad \text{and} \quad D = \begin{pmatrix} & & d_n \\ & \iddots & \\ d_1 & & \end{pmatrix},$$
where
$$d_k = \begin{cases} 1/c_k, & \text{if } c_k \neq 0 \\ 0, & \text{if } c_k = 0 \end{cases} \quad \text{for } k = 1, \ldots, n.$$
Then $D$ is a generalized inverse of $C$ because
$$CDC = \begin{pmatrix} & & c_1 \\ & \iddots & \\ c_n & & \end{pmatrix}\begin{pmatrix} & & d_n \\ & \iddots & \\ d_1 & & \end{pmatrix}\begin{pmatrix} & & c_1 \\ & \iddots & \\ c_n & & \end{pmatrix} = \begin{pmatrix} c_1 d_1 & & \\ & \ddots & \\ & & c_n d_n \end{pmatrix}\begin{pmatrix} & & c_1 \\ & \iddots & \\ c_n & & \end{pmatrix} = \begin{pmatrix} & & c_1 d_1 c_1 \\ & \iddots & \\ c_n d_n c_n & & \end{pmatrix} = C.$$
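Reading the displays above as matrices whose only nonzero entries lie along the anti-diagonal, the identity $CDC = C$ can be checked numerically; this sketch assumes NumPy, and the entries (including a zero, to exercise the $c_k = 0$ branch) are arbitrary.

```python
# Verify CDC = C for the anti-diagonal matrices above.
import numpy as np

c = np.array([3.0, 0.0, -2.0, 5.0])
d = np.where(c != 0, 1 / np.where(c != 0, c, 1), 0.0)  # d_k = 1/c_k, or 0 when c_k = 0
C = np.fliplr(np.diag(c))         # c_1, ..., c_n along the anti-diagonal
D = np.fliplr(np.diag(d[::-1]))   # d_n, ..., d_1 along the anti-diagonal
assert np.allclose(C @ D @ C, C)
```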
$$J_{m\times n}\left[(1/mn)J_{n\times m}\right]J_{m\times n} = (1/mn)\mathbf{1}_m\mathbf{1}_n^T\mathbf{1}_n\mathbf{1}_m^T\mathbf{1}_m\mathbf{1}_n^T = (1/mn)\mathbf{1}_m(nm)\mathbf{1}_n^T = J_{m\times n}.$$
In the other case, i.e., if $b = -a/n$, then $aI + bJ_n = a(I_n - \frac{1}{n}J_n)$, and because $I_n - \frac{1}{n}J_n$ is idempotent, $\frac{1}{a}(I_n - \frac{1}{n}J_n)$ is a generalized inverse by the solutions to parts (a) and (c) of this exercise. Another generalized inverse in this case is $(1/a)I_n$.
(j) $A^{-1}$ is a generalized inverse of $A + bd^T$ because
$$(A + bd^T)A^{-1}(A + bd^T) = AA^{-1}A + bd^TA^{-1}A + AA^{-1}bd^T + bd^TA^{-1}bd^T = A + 2bd^T + (d^TA^{-1}b)\,bd^T = A + bd^T,$$
the last equality using the exercise's condition that $d^TA^{-1}b = -1$.
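The identity above needs $d^TA^{-1}b = -1$, so in the numerical sketch below $b$ is rescaled to enforce that condition; NumPy is assumed, and $A$, $b$, $d$ are otherwise arbitrary.

```python
# Check that A^{-1} is a generalized inverse of A + b d^T once d'A^{-1}b = -1.
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)   # safely nonsingular
b = rng.standard_normal(n)
d = rng.standard_normal(n)
b = -b / (d @ np.linalg.inv(A) @ b)               # now d' A^{-1} b = -1

M = A + np.outer(b, d)
Ainv = np.linalg.inv(A)
assert np.allclose(M @ Ainv @ M, M)
```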
(a) Prove that A has a nonsingular generalized inverse. (Hint: Use Theorem 3.1.1,
which was stated in the previous exercise.)
(b) Let G represent a nonsingular generalized inverse of A. Show, by giving a
counterexample, that G−1 need not equal A.
Solution
(a) Let $n$ be the number of rows (and columns) of $A$, and let $P$ and $Q$ represent nonsingular matrices such that
$$PAQ = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix},$$
which clearly are not consistent. Here $A = J_2$, for which one generalized inverse is $\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$. Then
$$AA^-b = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \neq b.$$
Exercise 7 Prove Theorem 3.3.5: For any n × m matrix A of rank r, AA− and
A− A are idempotent matrices of rank r, and In −AA− and Im −A− A are idempotent
matrices of ranks n − r and m − r, respectively.
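Theorem 3.3.5 is easy to probe numerically. In this sketch (NumPy assumed), the Moore–Penrose inverse of an arbitrary rank-2 test matrix stands in for a generalized inverse, and the idempotency and all four rank claims are checked.

```python
# Check AA^-, A^-A idempotent of rank r; I - AA^- and I - A^-A of ranks n-r, m-r.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))   # n=4, m=5, r=2
G = np.linalg.pinv(A)                                           # one generalized inverse
P, Q = A @ G, G @ A
rank = np.linalg.matrix_rank
assert np.allclose(P @ P, P) and np.allclose(Q @ Q, Q)          # idempotent
assert rank(P) == rank(Q) == 2
assert rank(np.eye(4) - P) == 4 - 2 and rank(np.eye(5) - Q) == 5 - 2
```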
(a) If $C(B)$ is a subspace of $C(A)$ and $R(C)$ is a subspace of $R(A)$ (as would be the case, e.g., if $A$ was nonsingular), then the partitioned matrix
$$\begin{pmatrix} A^- + A^-BQ^-CA^- & -A^-BQ^- \\ -Q^-CA^- & Q^- \end{pmatrix},$$
where $Q = D - CA^-B$, is a generalized inverse of $\begin{pmatrix} A & B \\ C & D \end{pmatrix}$.
(b) If $C(C)$ is a subspace of $C(D)$ and $R(B)$ is a subspace of $R(D)$ (as would be the case, e.g., if $D$ was nonsingular), then
$$\begin{pmatrix} P^- & -P^-BD^- \\ -D^-CP^- & D^- + D^-CP^-BD^- \end{pmatrix},$$
where $P = A - BD^-C$, is a generalized inverse of $\begin{pmatrix} A & B \\ C & D \end{pmatrix}$.
Solution We prove part (a) only; the proof of part (b) is very similar. The given
condition C(B) ⊆ C(A) implies that B = AF for some F, or equivalently that
AA− B = B. Similarly, R(C) ⊆ R(A) implies that C = HA for some H, or
equivalently that CA− A = C. Then
$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}\begin{pmatrix} A^- + A^-BQ^-CA^- & -A^-BQ^- \\ -Q^-CA^- & Q^- \end{pmatrix}\begin{pmatrix} A & B \\ C & D \end{pmatrix}$$
$$= \begin{pmatrix} AA^- + AA^-BQ^-CA^- - BQ^-CA^- & -AA^-BQ^- + BQ^- \\ CA^- + CA^-BQ^-CA^- - DQ^-CA^- & -CA^-BQ^- + DQ^- \end{pmatrix}\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{pmatrix},$$
say, where
$$K_{11} = AA^-A + AA^-BQ^-CA^-A - BQ^-CA^-A - AA^-BQ^-C + BQ^-C = A + BQ^-C - BQ^-C - BQ^-C + BQ^-C = A,$$
$$K_{12} = AA^-B + AA^-BQ^-CA^-B - BQ^-CA^-B - AA^-BQ^-D + BQ^-D = B + BQ^-CA^-B - BQ^-CA^-B - BQ^-D + BQ^-D = B,$$
$$K_{21} = CA^-A + CA^-BQ^-CA^-A - DQ^-CA^-A - CA^-BQ^-C + DQ^-C = C + CA^-BQ^-C - DQ^-C - CA^-BQ^-C + DQ^-C = C,$$
and
$$K_{22} = CA^-B + CA^-BQ^-CA^-B - DQ^-CA^-B - CA^-BQ^-D + DQ^-D = CA^-B - (D - CA^-B)Q^-CA^-B + (D - CA^-B)Q^-D = CA^-B + QQ^-Q = CA^-B + Q = D.$$
Thus the displayed partitioned matrix is a generalized inverse of $\begin{pmatrix} A & B \\ C & D \end{pmatrix}$, proving part (a).
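The formula from part (a) can be verified numerically in the simplest qualifying case, $A$ nonsingular (so $C(B) \subseteq C(A)$ and $R(C) \subseteq R(A)$ hold automatically); this sketch assumes NumPy, takes $A^- = A^{-1}$ and $Q^-$ as the Moore–Penrose inverse of $Q = D - CA^-B$, and uses arbitrary dimensions.

```python
# Numerical check of the partitioned generalized-inverse formula.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # nonsingular upper-left block
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = rng.standard_normal((2, 2))

Am = np.linalg.inv(A)
Q = D - C @ Am @ B
Qm = np.linalg.pinv(Q)
G = np.block([[Am + Am @ B @ Qm @ C @ Am, -Am @ B @ Qm],
              [-Qm @ C @ Am,              Qm]])
M = np.block([[A, B], [C, D]])
assert np.allclose(M @ G @ M, M)                  # G is a generalized inverse of M
```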
Solution Define Q = C + CDA− BC. Because C(H) ⊆ C(A) and R(H) ⊆ R(A),
matrices F and K exist such that H = AF and H = KA. It follows that AA− H =
AA− AF = AF = H and HA− A = KAA− A = KA = H. Therefore
where
Exercise 10 Prove Theorem 3.3.11: Let $A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$, where $A_{11}$ is of dimensions $n \times m$, represent a partitioned matrix and let $G = \begin{pmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{pmatrix}$, where $G_{11}$ is of dimensions $m \times n$, represent any generalized inverse of $A$. If $R(A_{21}) \cap R(A_{11}) = \{0\}$ and $C(A_{12}) \cap C(A_{11}) = \{0\}$, then $G_{11}$ is a generalized inverse of $A_{11}$.
Solution
$$\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}\begin{pmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{pmatrix}\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} = \begin{pmatrix} A_{11}G_{11} + A_{12}G_{21} & A_{11}G_{12} + A_{12}G_{22} \\ A_{21}G_{11} + A_{22}G_{21} & A_{21}G_{12} + A_{22}G_{22} \end{pmatrix}\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix},$$
implying (by considering only the upper left blocks of the block matrices on the two sides of the matrix equation) that
$$A_{11}G_{11}A_{11} + A_{12}G_{21}A_{11} + A_{11}G_{12}A_{21} + A_{12}G_{22}A_{21} = A_{11},$$
i.e.,
$$A_{12}(G_{21}A_{11} + G_{22}A_{21}) = A_{11}(I - G_{11}A_{11} - G_{12}A_{21}).$$
Because the column spaces of $A_{12}$ and $A_{11}$ are essentially disjoint, both sides of this equality equal $0$; in particular $A_{11}(I - G_{11}A_{11} - G_{12}A_{21}) = 0$, i.e.,
$$(A_{11}G_{12})A_{21} = A_{11} - A_{11}G_{11}A_{11}.$$
The rows of the left-hand side are elements of $R(A_{21})$, while those of the right-hand side are elements of $R(A_{11})$. Because the row spaces of $A_{11}$ and $A_{21}$ are essentially disjoint, both sides equal $0$, i.e.,
$$A_{11}G_{11}A_{11} = A_{11}.$$
Thus $G_{11}$ is a generalized inverse of $A_{11}$.
Exercise 11
(a) Prove Theorem 3.3.12: Let $A$ and $B$ represent two $m \times n$ matrices. If $C(A) \cap C(B) = \{0\}$ and $R(A) \cap R(B) = \{0\}$, then any generalized inverse of $A + B$ is also a generalized inverse of $A$.
(b) Show that the converse of Theorem 3.3.12 is false by constructing a counterexample based on the matrices
$$A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$$
Solution
(a) Let $(A + B)^-$ represent any generalized inverse of $A + B$. Expanding $(A + B)(A + B)^-(A + B) = A + B$ and rearranging terms yields
$$A[(A + B)^-(A + B) - I] = -B[(A + B)^-(A + B) - I].$$
The columns of the matrix on the left of this last equality are elements of $C(A)$, while those of the matrix on the right are elements of $C(B)$. Because $C(A) \cap C(B) = \{0\}$, both matrices equal $0$, so $A(A + B)^-(A + B) = A$, i.e.,
$$A - A(A + B)^-A = A(A + B)^-B.$$
The rows of the matrix on the left of this last equality are elements of $R(A)$, while those of the matrix on the right are elements of $R(B)$. Because $R(A) \cap R(B) = \{0\}$, $A - A(A + B)^-A = 0$. Thus $A = A(A + B)^-A$, showing that any generalized inverse of $A + B$ is also a generalized inverse of $A$. This proves the theorem.
(b) Let $A$ and $B$ be as defined in part (b). Then $A + B = I_2$, which is nonsingular, so its only generalized inverse is its ordinary inverse, $I_2$. It is easily verified that $AI_2A = A$, hence $I_2$ is also a generalized inverse of $A$. However, $C(A) = \left\{c\begin{pmatrix}1\\0\end{pmatrix} : c \in \mathbb{R}\right\}$ and $C(B) = \left\{c\begin{pmatrix}0\\1\end{pmatrix} : c \in \mathbb{R}\right\}$. Thus $C(A) \cap C(B) = \{0\}$, and by the symmetry of $A$ and $B$, $R(A) \cap R(B) = \{0\}$ also.
Solution
(a) It is easy to show that $(1/c)A^+$ satisfies the four Moore–Penrose conditions.
(b) It was shown in Exercise 3.1b that $\frac{1}{\|a\|^2\|b\|^2}\,ba^T$ is a generalized inverse of $ab^T$. Observe that
$$\left(\frac{1}{\|a\|^2\|b\|^2}\,ba^T\right)(ab^T)\left(\frac{1}{\|a\|^2\|b\|^2}\,ba^T\right) = \frac{1}{\|a\|^2\|b\|^2}\,ba^T,$$
and
$$(ab^T)\left(\frac{1}{\|a\|^2\|b\|^2}\,ba^T\right) = \frac{1}{\|a\|^2}\,aa^T,$$
where
$$d_k = \begin{cases} 1/c_k, & \text{if } c_k \neq 0, \\ 0, & \text{if } c_k = 0. \end{cases}$$
Observe that
$$\frac{1}{a}\left(I_n - \frac{1}{n}J_n\right)\left[a\left(I_n - \frac{1}{n}J_n\right)\right]\frac{1}{a}\left(I_n - \frac{1}{n}J_n\right) = \frac{1}{a}\left(I_n - \frac{1}{n}J_n\right).$$
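Because $I_n - \frac{1}{n}J_n$ is symmetric and idempotent, $\frac{1}{a}(I_n - \frac{1}{n}J_n)$ is in fact the Moore–Penrose inverse here, which a one-line NumPy check confirms (a sketch with illustrative $n$ and $a$):

```python
# (1/a)(I_n - J_n/n) is the Moore-Penrose inverse of a(I_n - J_n/n).
import numpy as np

n, a = 5, 2.5
K = np.eye(n) - np.ones((n, n)) / n    # centering matrix: symmetric, idempotent
assert np.allclose(np.linalg.pinv(a * K), K / a)
```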
Solution It suffices to show that $(A^+)^T$ satisfies the four Moore–Penrose conditions for the Moore–Penrose inverse of $A^T$. Using standard properties of the transpose and the fact that $A^+$ satisfies the four Moore–Penrose conditions for the Moore–Penrose inverse of $A$, we obtain:
Solution By the first two Moore–Penrose conditions for A+ and Theorem 2.8.4,
Thus rank(A+ ) = rank(A). This result does not hold for an arbitrary generalized
inverse of A. Consider, for example, A = J2 , for which 0.5I2 is a generalized inverse
as noted in (3.1). But rank(J2 ) = 1, whereas rank(0.5I2 ) = 2.
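Both halves of this remark are easy to confirm numerically; the sketch below (NumPy assumed; the rank-2 test matrix is arbitrary) checks that the Moore–Penrose inverse preserves rank while the generalized inverse $0.5I_2$ of $J_2$ does not.

```python
# rank(A^+) = rank(A), but an arbitrary generalized inverse need not preserve rank.
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))   # rank 2
assert np.linalg.matrix_rank(np.linalg.pinv(A)) == np.linalg.matrix_rank(A)

J2 = np.ones((2, 2))
G = 0.5 * np.eye(2)
assert np.allclose(J2 @ G @ J2, J2)    # G is a generalized inverse of J2
print(np.linalg.matrix_rank(J2), np.linalg.matrix_rank(G))   # 1 2
```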
4 Moments of a Random Vector and of Linear and Quadratic Forms in a Random Vector
This chapter presents exercises on moments of a random vector and linear and
quadratic forms in a random vector and provides solutions to those exercises.
Solution Let $x$ represent a random $k$-vector such that $E(x) = \mu$, and let $a \in \mathbb{R}^k$. Then $a^T\mathrm{var}(x)a = a^TE[(x - \mu)(x - \mu)^T]a = E[a^T(x - \mu)(x - \mu)^Ta] = E\{[a^T(x - \mu)]^2\} \geq 0$. Conversely, let $z$ represent a vector of independent random variables with common variance one, so that $\mathrm{var}(z) = I$, and let $A$ represent any nonnegative definite matrix. By Theorem 2.15.12, a nonnegative definite matrix $A^{1/2}$ exists such that $A = A^{1/2}A^{1/2}$. Then $\mathrm{var}(A^{1/2}z) = A^{1/2}IA^{1/2} = A$.
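The converse direction lends itself to simulation; this sketch (NumPy assumed) builds the symmetric square root from an eigendecomposition, playing the role Theorem 2.15.12 plays in the proof, and checks that the sample covariance of $A^{1/2}z$ is approximately $A$. The test matrix and sample size are arbitrary.

```python
# Any nonnegative definite A is a variance matrix: var(A^{1/2} z) = A.
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((3, 3))
A = M @ M.T                                   # nonnegative definite
w, V = np.linalg.eigh(A)
A_half = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T   # symmetric square root
assert np.allclose(A_half @ A_half, A)

z = rng.standard_normal((3, 200_000))         # independent, variance-one entries
x = A_half @ z
print(np.round(np.cov(x), 2))                 # approximately equals A
```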
Solution Without loss of generality consider cov(x1 , x2 ), where x1 and x2 are the
first two elements of x. Because x1 and x2 are independent,
cov(x1 , x2 ) = E[(x1 − E(x1 ))(x2 − E(x2 ))] = E[x1 − E(x1 )] · E[x2 − E(x2 )] = 0 · 0 = 0.
$$= \sigma^2\begin{pmatrix} 1 & 1 & \cdots & 1 & 1 \\ 1 & 2 & \cdots & 2 & 2 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & 2 & \cdots & n-1 & n-1 \\ 1 & 2 & \cdots & n-1 & n \end{pmatrix},$$
$$= 0.$$
$$x_n = 0, \quad x_{n-1} + x_n = 0, \quad \ldots, \quad \sum_{i=2}^n x_i = 0, \quad \sum_{i=1}^n x_i = 0,$$
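The displayed matrix appears to be $\sigma^2[\min(i, j)]$; a quick numerical check (NumPy assumed, with illustrative $n$ and $\sigma^2$) confirms it is positive definite, consistent with the chain of equations above forcing $x = 0$.

```python
# The matrix sigma^2 * [min(i, j)] is positive definite.
import numpy as np

n, sigma2 = 6, 1.7
i = np.arange(1, n + 1)
A = sigma2 * np.minimum.outer(i, i)        # A[i, j] = sigma^2 * min(i, j)
assert np.all(np.linalg.eigvalsh(A) > 0)   # all eigenvalues positive
```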
where σii > 0 and −1 < ρi < 1 for all i, is positive definite.
Exercise 5 Prove Theorem 4.1.3: Suppose that the skewness matrix $\Lambda = (\lambda_{ijk})$ of a random $n$-vector $x$ exists.
(a) If the elements of $x$ are triple-wise independent (meaning that each trivariate marginal cdf of $x$ factors into the product of its univariate marginal cdfs), then for some constants $\lambda_1, \ldots, \lambda_n$,
$$\lambda_{ijk} = \begin{cases} \lambda_i & \text{if } i = j = k, \\ 0 & \text{otherwise,} \end{cases}$$
or equivalently,
Solution
(a) For $i = j = k$, $\lambda_{ijk} = \lambda_{iii} \equiv \lambda_i$. For any other case of $i$, $j$, and $k$, at least one index is not equal to the other two. Without loss of generality assume that $i \neq j$ and $i \neq k$. Then
(b) Because two random vectors having the same distribution (in this case x − μ
and −(x − μ)) necessarily have the same moments,
Exercise 7 Prove Theorem 4.2.1: Let x represent a random n-vector with mean
μ, let A represent a t × n matrix of constants, and let a represent a t-vector of
constants. Then
E(Ax + a) = Aμ + a.
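Theorem 4.2.1 can also be illustrated by simulation; the sketch below (NumPy assumed; $\mu$, $A$, and $a$ are arbitrary choices) compares the empirical mean of $Ax + a$ with $A\mu + a$.

```python
# Monte Carlo illustration of E(Ax + a) = A mu + a.
import numpy as np

rng = np.random.default_rng(8)
mu = np.array([1.0, -2.0, 0.5])
A = rng.standard_normal((2, 3))
a = np.array([3.0, -1.0])

x = mu[:, None] + rng.standard_normal((3, 500_000))   # E(x) = mu
lhs = (A @ x + a[:, None]).mean(axis=1)               # empirical E(Ax + a)
print(np.round(lhs, 2), np.round(A @ mu + a, 2))      # approximately equal
```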
Exercise 8 Prove Theorem 4.2.2: Let x and y represent random n-vectors, and
let z and w represent random m-vectors. If var(x), var(y), var(z), and var(w) exist,
then:
Solution
(a)