Chapter 2
Quadratic Forms of Random Variables
2.1 Quadratic Forms
For a $k \times k$ symmetric matrix $A = \{a_{ij}\}$, the quadratic function of $k$ variables $x = (x_1, \dots, x_k)^\top$ defined by
\[
Q(x) = x^\top A x = \sum_{i=1}^k \sum_{j=1}^k a_{ij} x_i x_j
\]
is called the quadratic form with matrix $A$.
If $A$ is not symmetric, we obtain an equivalent quadratic form by replacing $A$ with the symmetric matrix $(A + A^\top)/2$.
Definition 1. $Q(x)$ and the matrix $A$ are called positive definite if
\[
Q(x) = x^\top A x > 0 \quad \forall\, x \in \mathbb{R}^k,\; x \neq 0
\]
and positive semi-definite if
\[
Q(x) = x^\top A x \geq 0 \quad \forall\, x \in \mathbb{R}^k.
\]
For negative definite and negative semi-definite, replace the $>$ and $\geq$ in the above definitions by $<$ and $\leq$, respectively.
Theorem 1. A symmetric matrix $A$ is positive definite if and only if it has a Cholesky decomposition $A = R^\top R$ with strictly positive diagonal elements in $R$, so that $R^{-1}$ exists. (In practice this means that none of the diagonal elements of $R$ are very close to zero.)
Proof. The "if" part is proven by construction. The Cholesky factor, $R$, is constructed a row at a time, and the diagonal elements are evaluated as the square roots of expressions calculated from the current row of $A$ and previous rows of $R$. If the expression whose square root is to be calculated is not positive, then you can determine a non-zero $x \in \mathbb{R}^k$ for which $x^\top A x \leq 0$.

For the "only if" part, suppose that $A = R^\top R$ with $R$ invertible. Then
\[
x^\top A x = x^\top R^\top R x = \|Rx\|^2 \geq 0
\]
with equality only if $Rx = 0$. But if $R^{-1}$ exists then $x = R^{-1} 0$ must also be zero.
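This characterization is easy to check numerically in R: chol() signals an error when its argument is not (numerically) positive definite. A minimal sketch, with test matrices of our own choosing:

> A1 <- crossprod(matrix(rnorm(9), 3, 3)) # of the form R'R, hence positive definite
> all(diag(chol(A1)) > 0)                 # Cholesky succeeds, with positive diagonal
> A2 <- diag(c(1, 1, -1))                 # indefinite: chol(A2) signals an error
> inherits(tryCatch(chol(A2), error = function(e) e), "error")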
Transformation of Quadratic Forms:
Theorem 2. Suppose that $B$ is a $k \times k$ nonsingular matrix. Then the quadratic form $Q^*(y) = y^\top B^\top A B y$ is positive definite if and only if $Q(x) = x^\top A x$ is positive definite. Similar results hold for positive semi-definite, negative definite and negative semi-definite.
Proof.
\[
Q^*(y) = y^\top B^\top A B y = x^\top A x > 0
\]
where $x = By \neq 0$ because $y \neq 0$ and $B$ is nonsingular.
Theorem 3. For any $k \times k$ symmetric matrix $A$, the quadratic form defined by $A$ can be written using its spectral decomposition as
\[
Q(x) = x^\top A x = \sum_{i=1}^k \lambda_i (q_i^\top x)^2
\]
where the eigendecomposition of $A$ is $\mathbf{Q} \Lambda \mathbf{Q}^\top$, with $\Lambda$ diagonal with diagonal elements $\lambda_i$, $i = 1, \dots, k$, and $\mathbf{Q}$ the orthogonal matrix with the eigenvectors $q_i$, $i = 1, \dots, k$, as its columns. (Be careful to distinguish the boldface $\mathbf{Q}$, which is a matrix, from the unbolded $Q(x)$, which is the quadratic form.)
Proof. For any $x \in \mathbb{R}^k$ let $y = \mathbf{Q}^\top x = \mathbf{Q}^{-1} x$. Then
\[
Q(x) = \operatorname{tr}(x^\top A x) = \operatorname{tr}(x^\top \mathbf{Q} \Lambda \mathbf{Q}^\top x) = \operatorname{tr}(y^\top \Lambda y) = \operatorname{tr}(\Lambda y y^\top) = \sum_{i=1}^k \lambda_i y_i^2 = \sum_{i=1}^k \lambda_i (q_i^\top x)^2.
\]
This proof uses a common trick of expressing the scalar $Q(x)$ as the trace of a $1 \times 1$ matrix so we can reverse the order of some matrix multiplications.
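Theorem 3 is also easy to verify numerically. A sketch, with an arbitrary symmetric matrix of our own choosing:

> A <- crossprod(matrix(rnorm(16), 4, 4))  # an arbitrary symmetric matrix
> ev <- eigen(A, symmetric = TRUE)         # eigenvalues and eigenvectors
> x <- rnorm(4)
> all.equal(drop(crossprod(x, A %*% x)),   # x'Ax ...
+           sum(ev$values * drop(crossprod(ev$vectors, x))^2)) # ... equals sum of lambda_i (q_i'x)^2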
Corollary 1. A symmetric matrix $A$ is positive definite if and only if its eigenvalues are all positive, negative definite if and only if its eigenvalues are all negative, and positive semi-definite if all its eigenvalues are non-negative.
Corollary 2. $\operatorname{rank}(A) = \operatorname{rank}(\Lambda)$, hence $\operatorname{rank}(A)$ equals the number of non-zero eigenvalues of $A$.
2.2 Idempotent Matrices
Definition 2 (Idempotent). The $k \times k$ matrix $A$ is idempotent if $A^2 = AA = A$.
Definition 3 (Projection matrices). A symmetric, idempotent matrix $A$ is a projection matrix. The effect of the mapping $x \mapsto Ax$ is the orthogonal projection of $x$ onto $\operatorname{col}(A)$.
Theorem 4. All the eigenvalues of an idempotent matrix are either zero or one.
Proof. Suppose that $\lambda$ is an eigenvalue of the idempotent matrix $A$. Then there exists a non-zero $x$ such that $Ax = \lambda x$. But $Ax = AAx$ because $A$ is idempotent. Thus
\[
\lambda x = Ax = AAx = A(\lambda x) = \lambda(Ax) = \lambda^2 x
\]
and
\[
0 = \lambda^2 x - \lambda x = \lambda(\lambda - 1)x
\]
for some non-zero $x$, which implies that $\lambda = 0$ or $\lambda = 1$.
Corollary 3. The $k \times k$ symmetric matrix $A$ is idempotent with $\operatorname{rank}(A) = r$ if and only if $A$ has $r$ eigenvalues equal to 1 and $k - r$ eigenvalues equal to 0.
Proof. A matrix $A$ with $r$ eigenvalues of 1 and $k - r$ eigenvalues of zero has $r$ non-zero eigenvalues and hence $\operatorname{rank}(A) = r$. Because $A$ is symmetric its eigendecomposition is $A = \mathbf{Q} \Lambda \mathbf{Q}^\top$ for an orthogonal $\mathbf{Q}$ and a diagonal $\Lambda$. Because the eigenvalues of $\Lambda$ are the same as those of $A$, they must be all zeros or ones. That is, all the diagonal elements of $\Lambda$ are zero or one. Hence $\Lambda$ is idempotent, $\Lambda\Lambda = \Lambda$, and
\[
AA = \mathbf{Q} \Lambda \mathbf{Q}^\top \mathbf{Q} \Lambda \mathbf{Q}^\top = \mathbf{Q} \Lambda \Lambda \mathbf{Q}^\top = \mathbf{Q} \Lambda \mathbf{Q}^\top = A
\]
so $A$ is also idempotent.
Corollary 4. For a symmetric idempotent matrix A, we have tr(A) = rank(A), which is the
dimension of col(A), the space into which A projects.
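Corollaries 3 and 4 can be illustrated with the hat matrix of a model matrix. A sketch, using an arbitrary small matrix of our own choosing:

> X <- cbind(1, 1:5)                       # an arbitrary full-rank 5 x 2 model matrix
> H <- X %*% solve(crossprod(X), t(X))     # orthogonal projection onto col(X)
> all.equal(H %*% H, H)                    # idempotent
> zapsmall(eigen(H, symmetric = TRUE)$values) # r = 2 ones and k - r = 3 zeros
> sum(diag(H))                             # tr(H) = rank(H) = 2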
2.3 Expected Values and Covariance Matrices of Random Vectors
A $k$-dimensional vector-valued random variable (or, more simply, a random vector), $X$, is a $k$-vector composed of $k$ scalar random variables
\[
X = (X_1, \dots, X_k)^\top.
\]
If the expected values of the component random variables are $\mu_i = E(X_i)$, $i = 1, \dots, k$, then
\[
E(X) = \mu_X = (\mu_1, \dots, \mu_k)^\top.
\]
Suppose that $Y = (Y_1, \dots, Y_m)^\top$ is an $m$-dimensional random vector; then the covariance of $X$ and $Y$, written $\operatorname{Cov}(X, Y)$, is
\[
\Sigma_{XY} = \operatorname{Cov}(X, Y) = E[(X - \mu_X)(Y - \mu_Y)^\top].
\]
The variance-covariance matrix of $X$ is
\[
\operatorname{Var}(X) = \Sigma_{XX} = E[(X - \mu_X)(X - \mu_X)^\top].
\]
Suppose that $c$ is a constant $m$-vector, $A$ is a constant $m \times k$ matrix and $Z = AX + c$ is a linear transformation of $X$. Then
\[
E(Z) = A\,E(X) + c
\]
and
\[
\operatorname{Var}(Z) = A \operatorname{Var}(X) A^\top.
\]
If we let $W = BY + d$ be a linear transformation of $Y$ for suitably sized $B$ and $d$, then
\[
\operatorname{Cov}(Z, W) = A \operatorname{Cov}(X, Y) B^\top.
\]
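These rules can be checked by simulation. A sketch, assuming the MASS package is available for its mvrnorm() function:

> library(MASS)                            # for mvrnorm()
> Sigma <- matrix(c(2, 1, 1, 3), 2, 2)     # an arbitrary covariance matrix
> X <- mvrnorm(100000, mu = c(0, 0), Sigma = Sigma)
> A <- matrix(c(1, 0, 1, 1), 2, 2)         # an arbitrary transformation matrix
> var(X %*% t(A))                          # sample Var(AX); each row of X is one realization
> A %*% Sigma %*% t(A)                     # theoretical Var(AX) = A Sigma A'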
Theorem 5. The variance-covariance matrix $\Sigma_{XX}$ of $X$ is a symmetric and positive semi-definite matrix.
Proof. The result follows from the property that the variance of a scalar random variable is non-negative. Suppose that $b$ is any nonzero, constant $k$-vector. Then
\[
0 \leq \operatorname{Var}(b^\top X) = b^\top \Sigma_{XX} b,
\]
which is the positive semi-definite condition.
2.4 Mean and Variance of Quadratic Forms
Theorem 6. Let $X$ be a $k$-dimensional random vector and $A$ be a constant $k \times k$ symmetric matrix. If $E(X) = \mu$ and $\operatorname{Var}(X) = \Sigma$, then
\[
E(X^\top A X) = \operatorname{tr}(A\Sigma) + \mu^\top A \mu.
\]
Proof.
\begin{align*}
E(X^\top A X) &= \operatorname{tr}(E(X^\top A X)) \\
&= E[\operatorname{tr}(X^\top A X)] \\
&= E[\operatorname{tr}(A X X^\top)] \\
&= \operatorname{tr}(A\,E[X X^\top]) \\
&= \operatorname{tr}(A(\Sigma + \mu\mu^\top)) \\
&= \operatorname{tr}(A\Sigma) + \operatorname{tr}(A\mu\mu^\top) \\
&= \operatorname{tr}(A\Sigma) + \mu^\top A \mu
\end{align*}
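A simulation sketch of Theorem 6, reusing the hypothetical Sigma and mvrnorm() call from the previous sketch, now with a nonzero mean and a symmetric matrix:

> mu <- c(1, -1)
> X <- mvrnorm(100000, mu = mu, Sigma = Sigma)
> Asym <- matrix(c(2, 1, 1, 1), 2, 2)      # an arbitrary symmetric matrix
> mean(rowSums((X %*% Asym) * X))          # Monte Carlo estimate of E(X'AX)
> sum(diag(Asym %*% Sigma)) + drop(crossprod(mu, Asym %*% mu)) # tr(A Sigma) + mu'A mu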
2.5 Distribution of Quadratic Forms in Normal Random Variables
Definition 4 (Non-Central $\chi^2$). If $X$ is a (scalar) normal random variable with $E(X) = \mu$ and $\operatorname{Var}(X) = 1$, then the random variable $V = X^2$ is distributed as $\chi^2_1(\lambda^2)$, which is called the non-central $\chi^2$ distribution with 1 degree of freedom and non-centrality parameter $\lambda^2 = \mu^2$. The mean and variance of $V$ are
\[
E[V] = 1 + \lambda^2 \quad \text{and} \quad \operatorname{Var}[V] = 2 + 4\lambda^2.
\]
As described in the previous chapter, we are particularly interested in random $n$-vectors, $Y$, that have a spherical normal distribution.
Theorem 7. Let $Y \sim \mathcal{N}(\mu, \sigma^2 I_n)$ be an $n$-vector with a spherical normal distribution and $A$ be an $n \times n$ symmetric matrix. Then the ratio $Y^\top A Y / \sigma^2$ will have a $\chi^2_r(\lambda^2)$ distribution with $\lambda^2 = \mu^\top A \mu / \sigma^2$ if and only if $A$ is idempotent with $\operatorname{rank}(A) = r$.
Proof. Suppose that $A$ is idempotent (which, in combination with being symmetric, means that it is a projection matrix) and has $\operatorname{rank}(A) = r$. Its eigendecomposition, $A = V \Lambda V^\top$, is such that $V$ is orthogonal and $\Lambda$ is $n \times n$ diagonal with exactly $r = \operatorname{rank}(A)$ ones and $n - r$ zeros on the diagonal. Without loss of generality we can (and do) arrange the eigenvalues in decreasing order so that $\lambda_j = 1$, $j = 1, \dots, r$ and $\lambda_j = 0$, $j = r + 1, \dots, n$. Let $X = V^\top Y$. Then
\[
\frac{Y^\top A Y}{\sigma^2} = \frac{Y^\top V \Lambda V^\top Y}{\sigma^2} = \frac{X^\top \Lambda X}{\sigma^2} = \frac{\sum_{j=1}^n \lambda_j X_j^2}{\sigma^2} = \sum_{j=1}^r \frac{X_j^2}{\sigma^2}.
\]
(Notice that the last sum is to $j = r$, not $j = n$.) However, $X_j / \sigma \sim \mathcal{N}(v_j^\top \mu / \sigma, 1)$ so $X_j^2 / \sigma^2 \sim \chi^2_1\big((v_j^\top \mu / \sigma)^2\big)$. Therefore
\[
\sum_{j=1}^r \frac{X_j^2}{\sigma^2} \sim \chi^2_r(\lambda^2) \quad \text{where} \quad \lambda^2 = \frac{\mu^\top V \Lambda V^\top \mu}{\sigma^2} = \frac{\mu^\top A \mu}{\sigma^2}.
\]
Corollary 5. For $A$ a projection of rank $r$, $(Y^\top A Y)/\sigma^2$ has a central $\chi^2$ distribution if and only if $A\mu = 0$.
Proof. The $\chi^2_r$ distribution will be central if and only if
\[
0 = \mu^\top A \mu = \mu^\top A A \mu = (A\mu)^\top (A\mu) = \|A\mu\|^2.
\]
Corollary 6. In the full-rank Gaussian linear model, $Y \sim \mathcal{N}(X\beta, \sigma^2 I_n)$ with $X$ an $n \times p$ model matrix, the residual sum of squares, $\|y - X\widehat{\beta}\|^2$, has a central $\sigma^2 \chi^2_{n-p}$ distribution.
Proof. In the full-rank model with the QR decomposition of $X$ given by
\[
X = \begin{bmatrix} Q_1 & Q_2 \end{bmatrix} \begin{bmatrix} R \\ 0 \end{bmatrix}
\]
and $R$ invertible, the fitted values are $Q_1 Q_1^\top Y$ and the residuals are $Q_2 Q_2^\top Y$, so the residual sum of squares is the quadratic form $Y^\top Q_2 Q_2^\top Y$. The matrix defining the quadratic form, $Q_2 Q_2^\top$, is a projection matrix. It is obviously symmetric and it is idempotent because $Q_2 Q_2^\top Q_2 Q_2^\top = Q_2 Q_2^\top$ (using $Q_2^\top Q_2 = I$). As
\[
Q_2^\top \mu = Q_2^\top X \beta = Q_2^\top Q_1 R \beta = \underbrace{0}_{(n-p) \times p} R \beta = 0_{n-p},
\]
since $Q_2^\top Q_1 = 0$ by orthogonality, the non-centrality parameter is zero, the ratio
\[
\frac{Y^\top Q_2 Q_2^\top Y}{\sigma^2} \sim \chi^2_{n-p},
\]
and the RSS has a central $\sigma^2 \chi^2_{n-p}$ distribution.
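The pieces of this proof can be inspected directly in R with qr(). A sketch, using an arbitrary small model matrix:

> X <- cbind(1, 1:6)                       # an arbitrary n = 6, p = 2 model matrix
> Qfull <- qr.Q(qr(X), complete = TRUE)    # all n orthonormal columns
> Q2 <- Qfull[, 3:6]                       # columns p+1, ..., n
> P2 <- tcrossprod(Q2)                     # Q2 Q2', the residual projection matrix
> all.equal(P2 %*% P2, P2)                 # idempotent
> zapsmall(crossprod(Q2, X))               # Q2'X = 0, so Q2'mu = Q2'X beta = 0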
R Exercises: Let's check some of these results by simulation. First we claim that if $X \sim \mathcal{N}(\mu, 1)$ then $X^2 \sim \chi^2_1(\lambda^2)$ where $\lambda^2 = \mu^2$. First simulate from a standard normal distribution
> set.seed(1234) # reproducible "random" values
> X <- rnorm(100000) # standard normal values
> zapsmall(summary(V <- X^2)) # a very skew distribution
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.0000 0.1026 0.4521 0.9989 1.3190 20.3300
> var(V)
[1] 1.992403
The mean and variance of the simulated values agree quite well with the theoretical values of 1 and
2, respectively.
To check the form of the distribution we could plot an empirical density function, but this distribution has its maximum density at 0 and is zero to the left of 0, so an empirical density is a poor indication of the actual shape of the density. Instead, in Fig. 2.1, we present the quantile-quantile plot for this sample versus the (theoretical) quantiles of the $\chi^2_1$ distribution.
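Such a plot can be produced, at least approximately, with base graphics (a sketch; the original figure may have been drawn with a different plotting system):

> qqplot(qchisq(ppoints(500), df = 1), V,
+        xlab = "Theoretical quantiles", ylab = "Sample quantiles")
> abline(0, 1, lty = 2)                    # reference line through the origin with slope 1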
Now simulate a non-central $\chi^2$ with non-centrality parameter $\lambda^2 = 4$
> V1 <- rnorm(100000, mean=2)^2
> zapsmall(summary(V1))
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.000 1.773 3.994 5.003 7.144 39.050
> var(V1)
[1] 17.95924

Figure 2.1: A quantile-quantile plot of the squares of simulated N(0, 1) random variables versus the quantiles of the $\chi^2_1$ distribution. The dashed line is a reference line through the origin with a slope of 1.
The sample mean is close to the theoretical value of $5 = 1 + \lambda^2$ and the sample variance is close to the theoretical value of $2 + 4\lambda^2 = 18$, although perhaps not as close as one would hope in a sample of size 100,000.
A quantile-quantile plot versus the non-central distribution, $\chi^2_1(4)$ (Fig. 2.2, left panel), and versus the central distribution, $\chi^2_1$ (right panel), shows that the sample does follow the claimed distribution $\chi^2_1(4)$ and is stochastically larger than the $\chi^2_1$ distribution.
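A sketch of the two panels, again with base graphics rather than whatever the original used:

> qqplot(qchisq(ppoints(500), df = 1, ncp = 4), V1, # non-central quantiles
+        xlab = "Theoretical quantiles", ylab = "Sample quantiles")
> abline(0, 1, lty = 2)
> qqplot(qchisq(ppoints(500), df = 1), V1,          # central quantiles
+        xlab = "Theoretical quantiles", ylab = "Sample quantiles")
> abline(0, 1, lty = 2)                    # points fall above the line: stochastically larger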
More interesting, perhaps, is the distribution of the residual sum of squares from a regression model. We simulate from our previously fitted model lm1
> lm1 <- lm(optden ~ carb, Formaldehyde)
> str(Ymat <- data.matrix(unname(simulate(lm1, 10000))))
num [1:6, 1:10000] 0.088 0.258 0.444 0.521 0.619 ...
- attr(*, "dimnames")=List of 2
..$ : chr [1:6] "1" "2" "3" "4" ...
..$ : NULL
Figure 2.2: Quantile-quantile plots of a sample of squares of N(2, 1) random variables versus the quantiles of a $\chi^2_1(4)$ non-central distribution (left panel) and a $\chi^2_1$ central distribution (right panel).
> str(RSS <- deviance(fits <- lm(Ymat ~ carb, Formaldehyde)))
num [1:10000] 0.000104 0.000547 0.00055 0.000429 0.000228 ...
> fits[["df.residual"]]
[1] 4
Here the Ymat matrix is 10,000 simulated response vectors from model lm1, using the estimated parameters as the true values of $\beta$ and $\sigma^2$. Notice that we can fit the model to all 10,000 response vectors in a single call to the lm() function.

The deviance() function applied to a model fit by lm() returns the residual sum of squares, which is not technically the deviance but is often the quantity of interest.
These simulated residual sums of squares should have a $\sigma^2 \chi^2_4$ distribution, where $\sigma^2$ is the residual sum of squares in model lm1 divided by 4.
> (sigsq <- deviance(lm1)/4)
[1] 7.48e-05
> summary(RSS)
Min. 1st Qu. Median Mean 3rd Qu. Max.
6.997e-07 1.461e-04 2.537e-04 3.026e-04 4.114e-04 1.675e-03
Figure 2.3: Quantile-quantile plot of the scaled residual sums of squares, RSSsc, from simulated responses versus the quantiles of a $\chi^2_4$ distribution (left panel) and the corresponding probability-probability plot (right panel).
We expect a mean of $4\sigma^2$ and a variance of $2 \cdot 4 (\sigma^2)^2$. It is easier to see this if we divide these values by $\sigma^2$, giving a scaled RSS with mean 4 and variance 8
> summary(RSSsc <- RSS/sigsq)
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.009354 1.953000 3.392000 4.045000 5.500000 22.390000
> var(RSSsc)
[1] 8.000057
A quantile-quantile plot with respect to the $\chi^2_4$ distribution (Fig. 2.3) shows very good agreement between the empirical and theoretical quantiles. Also shown in Fig. 2.3 is the probability-probability plot. Instead of plotting the sample quantiles versus the theoretical quantiles, we take equally spaced values on the probability scale (function ppoints()), evaluate the sample quantiles and then apply the theoretical cdf to the empirical quantiles. This should also produce a straight line. It has the advantage that the points are equally spaced on the x-axis.
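A sketch of that probability-probability construction:

> probs <- ppoints(100)                    # equally spaced probabilities
> plot(probs, pchisq(quantile(RSSsc, probs), df = 4), # theoretical cdf at empirical quantiles
+      xlab = "Cumulative probability",
+      ylab = "Theoretical cdf at observed quantiles")
> abline(0, 1, lty = 2)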
We could also plot the empirical density of these simulated values and overlay it with the
theoretical density (Fig. 2.4).
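For example (a sketch with base graphics):

> plot(density(RSSsc), main = "", xlab = "Scaled residual sums of squares")
> curve(dchisq(x, df = 4), add = TRUE, lty = 2) # overlay the theoretical density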
Figure 2.4: Empirical density plot of the scaled residual sums of squares, RSSsc, from simulated responses. The overlaid dashed line is the density of a $\chi^2_4$ random variable. The peak of the empirical density is shifted a bit to the right because of the way the empirical density is calculated: it uses a symmetric kernel, which is not a good choice for a skewed density like this.