
UNIVERSITY OF OSLO
Faculty of Mathematics and Natural Sciences

Examination in: MAT-INF 4130 — Numerical Linear Algebra.
Day of examination: Tuesday 4. December 2018.
Examination hours: 09:00 – 13:00.
This problem set consists of 5 pages.
Appendices: None.
Permitted aids: None.

Please make sure that your copy of the problem set is complete before you attempt to answer anything.

All 10 part questions will be weighted equally.

Problem 1.

a) Let $A$ be the matrix
$$A = \begin{pmatrix} \sqrt{2} & \sqrt{2} \\ 0 & \sqrt{3} \end{pmatrix}.$$
Find the singular values of $A$, and compute $\|A\|_2$.

Solution: We first compute
$$A^*A = \begin{pmatrix} 2 & 2 \\ 2 & 5 \end{pmatrix},$$
and we find the eigenvalues of this to be 6 and 1. The singular values are thus $\sigma_1 = \sqrt{6}$ and $\sigma_2 = 1$. In particular $\|A\|_2 = \sigma_1 = \sqrt{6}$.
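This is easy to check numerically; a minimal Python/NumPy sketch (illustrative only):

    import numpy as np

    A = np.array([[np.sqrt(2), np.sqrt(2)],
                  [0.0,        np.sqrt(3)]])
    print(np.linalg.svd(A, compute_uv=False))  # approx [2.449, 1.0] = [sqrt(6), 1]
    print(np.linalg.norm(A, 2))                # approx 2.449 = sqrt(6)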
 
b) Consider the matrix $A = \begin{pmatrix} 3 & \alpha \\ \alpha & 1 \end{pmatrix}$, where $\alpha$ is a real number. For which values of $\alpha$ is $A$ positive definite?

Solution: The eigenvalues of $A$ are the solutions to $(3 - \lambda)(1 - \lambda) - \alpha^2 = 0$, i.e. to $\lambda^2 - 4\lambda + 3 - \alpha^2 = 0$. The solutions are
$$\lambda = \frac{4 \pm \sqrt{16 - 4(3 - \alpha^2)}}{2}.$$
Since $16 - 4(3 - \alpha^2) = 4 + 4\alpha^2 > 0$, both eigenvalues are real, as they must be for a symmetric matrix. Both are bigger than zero if and only if $\sqrt{16 - 4(3 - \alpha^2)} < 4$, i.e. when $16 - 4(3 - \alpha^2) < 16$, i.e. when $\alpha^2 < 3$, i.e. when $\alpha \in (-\sqrt{3}, \sqrt{3})$ (since $\alpha$ is real).
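A quick numerical sanity check is possible here as well; the following sketch prints the eigenvalues for a few sample values of $\alpha$ (at $\alpha = \sqrt{3}$ the smallest eigenvalue is 0 up to rounding):

    import numpy as np

    for alpha in [0.0, 1.5, 1.7, np.sqrt(3), 2.0]:
        A = np.array([[3.0, alpha], [alpha, 1.0]])
        print(alpha, np.linalg.eigvalsh(A))  # positive definite iff both > 0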
c) We would like to fit the points $p_1 = (0, 1)$, $p_2 = (1, 0)$, $p_3 = (2, 1)$ to a straight line in the plane. Find a line $p(t)$ which minimizes
$$\sum_{i=1}^{3} (p(x_i) - y_i)^2$$
(here $p_i = (x_i, y_i)$). Is this solution unique?

Solution: With $p(t) = x + ty$, we would like to find a least squares solution to
$$\begin{pmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}.$$


Denote the coefficient matrix by $A$, and the right hand side by $b$. We have that
$$A^T A = \begin{pmatrix} 3 & 3 \\ 3 & 5 \end{pmatrix}, \qquad A^T b = \begin{pmatrix} 2 \\ 2 \end{pmatrix}.$$
Using row reduction we obtain
$$\begin{pmatrix} 3 & 3 & 2 \\ 3 & 5 & 2 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & 2/3 \\ 0 & 1 & 0 \end{pmatrix},$$
so that $x = 2/3$, $y = 0$. It follows that the horizontal line $p(t) = 2/3$ is the least squares fit. The least squares solution is unique since $A$ has linearly independent columns.
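The same least squares problem can be solved directly in a short NumPy sketch (illustrative only):

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [1.0, 2.0]])
    b = np.array([1.0, 0.0, 1.0])
    coef, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)
    print(coef)  # approx [2/3, 0]: the horizontal line p(t) = 2/3
    print(rank)  # 2 = number of columns, confirming uniqueness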
Problem 2. Assume that $A$ is an $n \times n$ symmetric positive definite matrix with Cholesky factorization $A = LL^*$. Assume also that $z$ is a given column vector of length $n$.
a) Explain why A + zz∗ has a unique Cholesky factorization.
Solution: Since A is symmetric positive definite, and zz∗ is symmetric
positive semidefinite, it follows that A + zz∗ is symmetric positive definite.
Any symmetric matrix is positive definite if and only if it has a (unique)
Cholesky factorization (Theorem 3.11 in the book).
b) Assume that
$$\begin{pmatrix} L^* \\ z^* \end{pmatrix} = Q \begin{pmatrix} R \\ 0 \end{pmatrix}$$
is a known QR-decomposition of $\begin{pmatrix} L^* \\ z^* \end{pmatrix}$, with $R$ square and upper triangular. Explain why $R$ is nonsingular. Explain also that, if $R$ also has nonnegative diagonal entries, then $A + zz^*$ has the Cholesky factorization $R^*R$.
Solution: We obtain
$$A + zz^* = LL^* + zz^* = \begin{pmatrix} L & z \end{pmatrix} \begin{pmatrix} L^* \\ z^* \end{pmatrix} = \begin{pmatrix} R^* & 0 \end{pmatrix} Q^T Q \begin{pmatrix} R \\ 0 \end{pmatrix} = \begin{pmatrix} R^* & 0 \end{pmatrix} \begin{pmatrix} R \\ 0 \end{pmatrix} = R^* R.$$
Since $L$ is nonsingular, $\begin{pmatrix} L^* \\ z^* \end{pmatrix}$ has rank $n$, and so must also $\begin{pmatrix} R \\ 0 \end{pmatrix}$. But this is the case if and only if $R$ is nonsingular. If the diagonal elements of $R$ are nonnegative, they must be positive, since $R$ is nonsingular. $R^*$ is thus lower triangular with positive diagonal elements, so that $R^*R$ is the Cholesky factorization.
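The identity $R^*R = A + zz^*$ can be verified numerically. A minimal sketch, assuming a random example matrix and NumPy's QR routine (with a sign fix so that $R$ gets a positive diagonal, as in the Cholesky convention):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)     # symmetric positive definite example
    L = np.linalg.cholesky(A)       # A = L L^T
    z = rng.standard_normal((n, 1))

    B = np.vstack([L.T, z.T])       # the (n+1) x n matrix [L*; z*]
    Q, R = np.linalg.qr(B)          # R is n x n upper triangular
    S = np.diag(np.sign(np.diag(R)))
    R = S @ R                       # flip signs: positive diagonal, same R^T R
    print(np.allclose(R.T @ R, A + z @ z.T))  # True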

Recall that a plane rotation in the $i, j$-plane, denoted $P_{i,j}$, is an $n \times n$-matrix which differs from the identity matrix only in the entries $(i, i)$, $(i, j)$, $(j, i)$, $(j, j)$, which equal those of a Givens rotation, i.e. they are
$$\begin{pmatrix} p_{ii} & p_{ij} \\ p_{ji} & p_{jj} \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}.$$


c) Explain how one can find plane rotations of the form $P_{i_1,n+1}, P_{i_2,n+1}, \ldots, P_{i_n,n+1}$ so that
$$P_{i_1,n+1} P_{i_2,n+1} \cdots P_{i_n,n+1} \begin{pmatrix} L^* \\ z^* \end{pmatrix} = \begin{pmatrix} R' \\ 0 \end{pmatrix}, \qquad (1)$$
with $R'$ upper triangular, and explain how to obtain a QR-decomposition of $\begin{pmatrix} L^* \\ z^* \end{pmatrix}$ from this. In particular you should write down the numbers $i_1, \ldots, i_n$. Is it possible to choose the plane rotations so that $R'$ in (1) also has positive diagonal entries?

Solution: In the book, rectangular matrices which are zero below the main diagonal were called upper trapezoidal. $B_0 := \begin{pmatrix} L^* \\ z^* \end{pmatrix}$ is upper trapezoidal except for the last row. We can clearly find a Givens rotation $P_{1,n+1}$ in the $1, (n+1)$-plane so that $B_1 := P_{1,n+1} B_0$ has a zero in entry $(n+1, 1)$ and a nonzero entry in $(1, 1)$: since a Givens rotation with angle $\theta$ maps the vector $\begin{pmatrix} r\cos\alpha \\ r\sin\alpha \end{pmatrix}$ to $\begin{pmatrix} r\cos(\alpha - \theta) \\ r\sin(\alpha - \theta) \end{pmatrix}$, all we have to do is choose $\theta = \alpha$ (which maps the vector to the positive $x$-axis) or $\theta = \alpha + \pi$ (which maps it to the negative $x$-axis). The resulting matrix will still be upper trapezoidal except for the last row, since $P_{1,n+1}$ changes only rows 1 and $n+1$. Assume now that we have found Givens rotations so that we arrive at a matrix $B_k$ with zeroes in the first $k$ entries of row $n+1$, and which is upper trapezoidal with nonzero diagonal elements, except for the last row. We find a Givens rotation $P_{k+1,n+1}$ so that $B_{k+1} := P_{k+1,n+1} B_k$ has a zero in entry $(n+1, k+1)$. This rotation will affect only rows $n+1$ and $k+1$, and since the first $k$ elements in both these rows in $B_k$ are zero, the same will be the case for $B_{k+1}$. This proves that the final matrix we obtain after $n$ Givens rotations will be upper trapezoidal, so that
$$P_{n,n+1} P_{n-1,n+1} \cdots P_{1,n+1} \begin{pmatrix} L^* \\ z^* \end{pmatrix} = \begin{pmatrix} R' \\ 0 \end{pmatrix}$$
with $R'$ upper triangular. In particular we can set $i_k = n + 1 - k$ for all $k$. We now obtain
$$\begin{pmatrix} L^* \\ z^* \end{pmatrix} = (P_{1,n+1})^T (P_{2,n+1})^T \cdots (P_{n,n+1})^T \begin{pmatrix} R' \\ 0 \end{pmatrix}.$$
Since all the Givens rotations are unitary, their product is also unitary, so that we have factored the matrix as a product of a unitary matrix and an upper trapezoidal matrix, i.e. we have a QR-decomposition.
If we choose the angles in the Givens rotations so that all vectors are mapped to the positive $x$-axis, the diagonal elements of $R'$ will also be positive.
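The procedure translates directly into code. A minimal sketch, assuming a random example and a hypothetical helper givens_update that rotates rows $k$ and $n+1$ so that the rotated pair lands on the positive $x$-axis:

    import numpy as np

    def givens_update(B, k):
        # Rotate rows k and n+1 of B so that B[n, k] becomes 0 and B[k, k] > 0.
        n = B.shape[1]
        a, b = B[k, k], B[n, k]
        r = np.hypot(a, b)                    # length of the rotated pair
        c, s = (1.0, 0.0) if r == 0 else (a / r, b / r)
        row_k, row_n = B[k, :].copy(), B[n, :].copy()
        B[k, :] = c * row_k + s * row_n       # maps (a, b) to (r, 0)
        B[n, :] = -s * row_k + c * row_n
        B[n, k] = 0.0                         # exact zero instead of roundoff
        return B

    rng = np.random.default_rng(1)
    n = 4
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)
    L = np.linalg.cholesky(A)
    z = rng.standard_normal((n, 1))
    B = np.vstack([L.T, z.T])
    for k in range(n):                        # planes (1, n+1), ..., (n, n+1)
        B = givens_update(B, k)
    R = B[:n, :]
    print(np.allclose(R.T @ R, A + z @ z.T))  # True
    print(np.all(np.diag(R) > 0))             # True: positive diagonal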
Problem 3.

a) Let $\{v_i\}_{i=1}^k$ be a set of linearly independent vectors in $\mathbb{R}^n$, and let $\langle \cdot, \cdot \rangle$ be an inner product in $\mathbb{R}^n$. Explain why the $k \times k$-matrix $N$ with entries $n_{ij} = \langle v_i, v_j \rangle$ is symmetric positive definite.

Solution: The matrix is clearly symmetric. Let $x \neq 0$. We have that
$$x^T N x = \sum_{i,j=1}^{k} x_i \langle v_i, v_j \rangle x_j = \Big\langle \sum_{i=1}^{k} x_i v_i, \sum_{j=1}^{k} x_j v_j \Big\rangle = \Big\| \sum_{i=1}^{k} x_i v_i \Big\|^2 > 0,$$


where we used the linear independence of the $v_i$ in the last step (so that $\sum_i x_i v_i \neq 0$ when $x \neq 0$). It follows that the matrix is positive definite.
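As a numerical illustration (a sketch using the standard inner product on $\mathbb{R}^6$ and a random example):

    import numpy as np

    rng = np.random.default_rng(2)
    V = rng.standard_normal((6, 3))           # columns v_1, v_2, v_3
    N = V.T @ V                               # Gram matrix, n_ij = <v_i, v_j>
    print(np.allclose(N, N.T))                # symmetric
    print(np.all(np.linalg.eigvalsh(N) > 0))  # positive definite
    x = rng.standard_normal(3)
    print(np.isclose(x @ N @ x, np.linalg.norm(V @ x)**2))  # x^T N x = ||sum x_i v_i||^2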

In the rest of the problem we consider linear systems of the form $Ax = b$, where $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^n$ are given, and $x \in \mathbb{R}^n$ is the unknown vector. We assume throughout that $A$ is nonsingular.
b) Let $W$ be any linear subspace of $\mathbb{R}^n$. Show that there is one and only one vector $\hat{x} \in W$ so that
$$w^T A^T A \hat{x} = w^T A^T b \quad \text{for all } w \in W,$$
and that $\hat{x}$ satisfies
$$\|b - A\hat{x}\|_2 \le \|b - Aw\|_2 \quad \text{for all } w \in W.$$

Solution: Assume that $\{w_i\}_i$ is a basis for $W$. The first equation is satisfied if and only if
$$w_i^T A^T A \hat{x} = w_i^T A^T b$$
for all $i$. Any $\hat{x} \in W$ can be written uniquely as $\sum_j c_j w_j$, and inserting this gives $\sum_j w_i^T A^T A w_j c_j = w_i^T A^T b$. With $\mathbf{W}$ the matrix with the $w_i$ as columns, this can also be written as $\mathbf{W}^T A^T A \mathbf{W} c = \mathbf{W}^T A^T b$, or
$$(A\mathbf{W})^T A\mathbf{W} c = (A\mathbf{W})^T b. \qquad (2)$$
The matrix on the left hand side has entries $\langle Aw_i, Aw_j \rangle$, which we proved in a) to form a symmetric positive definite matrix (the $Aw_i$ are linearly independent whenever the $w_i$ are, since $A$ is nonsingular), and it is thus nonsingular. It follows that the $c_j$, and hence $\hat{x}$, are unique.
Equation (2) is also the normal equations for the least squares problem $\min_c \|A\mathbf{W}c - b\|_2$. Thus $c$ is the least squares solution to $A\mathbf{W}c = b$, so that $\hat{x} = \mathbf{W}c$ is the least squares solution from $W$ to $A\hat{x} = b$. This is equivalent to the second statement. The least squares view also gives an alternative proof of the first statement: $A\mathbf{W}$ has linearly independent columns, which ensures uniqueness of the least squares solution.
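A compact numerical sketch of this (random example data; the subspace $W$ is spanned by the columns of a matrix $\mathbf{W}$):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 5
    A = rng.standard_normal((n, n)) + n * np.eye(n)  # nonsingular in practice
    b = rng.standard_normal(n)
    W = rng.standard_normal((n, 2))           # basis of a 2-dimensional subspace
    AW = A @ W
    c = np.linalg.solve(AW.T @ AW, AW.T @ b)  # normal equations (2)
    x_hat = W @ c
    print(np.allclose(AW.T @ (b - A @ x_hat), 0))  # residual orthogonal to A W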

Recall the definition of the $A$-inner product in $\mathbb{R}^n$,
$$\langle v, w \rangle_A = v^T A^T A w.$$

c) In the rest of this problem we consider the situation above, but where the vector space $W$ is taken to be the Krylov spaces
$$W_k = \mathrm{span}(b, Ab, \ldots, A^{k-1}b).$$
The associated approximations of $x$, corresponding to $\hat{x}$ in $W_k$, are then denoted $x_k$. Assume that $x_k \in W_k$ is already determined. In addition, assume that we have already computed a "search direction" $p_k \in W_{k+1}$ such that $\|Ap_k\|_2 = \|p_k\|_A = 1$, and such that
$$\langle p_k, w \rangle_A = 0 \quad \text{for all } w \in W_k.$$


Show that $x_{k+1} = x_k + \alpha_k p_k$ for a suitable $\alpha_k \in \mathbb{R}$, and express $\alpha_k$ in terms of the residual $r_k = b - Ax_k$ and $p_k$.

Solution: From b) it follows that $w^T A^T A(x_{k+1} - x_k) = 0$ for all $w \in W_k$, so that $x_{k+1} - x_k$ is orthogonal to $W_k$ w.r.t. $\langle \cdot, \cdot \rangle_A$. Since the orthogonal complement of $W_k$ in $W_{k+1}$ is one-dimensional, the stated vector $p_k$ spans this orthogonal complement, so that we can write $x_{k+1} - x_k = \alpha_k p_k$ for some scalar $\alpha_k$, i.e. $x_{k+1} = x_k + \alpha_k p_k$. From b) it also follows that
$$w^T A^T A x_{k+1} = w^T A^T b \quad \text{for all } w \in W_{k+1}.$$
To determine $\alpha_k$, insert $w = p_k$ and $x_{k+1} = x_k + \alpha_k p_k$ here to obtain
$$(p_k)^T A^T A (x_k + \alpha_k p_k) = (p_k)^T A^T b,$$
so that
$$\alpha_k (p_k)^T A^T A p_k = (p_k)^T A^T (b - Ax_k) = (p_k)^T A^T r_k.$$
Since $\|p_k\|_A = 1$, the left side equals $\alpha_k$, so that $\alpha_k = (p_k)^T A^T r_k$.
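In code, the update step is one residual computation and one inner product. A minimal sketch (hypothetical helper; assumes $x_k$ and a valid direction $p_k$ with $\|Ap_k\|_2 = 1$ are given as NumPy arrays):

    import numpy as np

    def next_iterate(A, b, x_k, p_k):
        r_k = b - A @ x_k              # residual r_k = b - A x_k
        alpha_k = p_k @ A.T @ r_k      # alpha_k = (p_k)^T A^T r_k
        return x_k + alpha_k * p_k     # x_{k+1} = x_k + alpha_k p_k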


d) Assume that $A$ is symmetric, but not necessarily positive definite. Assume further that the vectors $p_{k-2}$, $p_{k-1}$, and $p_k$ are already known with properties as above. Show that
$$Ap_{k-1} \in \mathrm{span}(p_{k-2}, p_{k-1}, p_k).$$
Use this to suggest how the search vectors $p_k$ can be computed recursively.
Solution: It follows from the definitions of the spaces $W_k$ and the properties of the vectors $p_j$ that $Ap_{k-1} \in W_{k+1}$. As a consequence, since $\{p_j\}_{j=0}^k$ is an orthonormal basis for $W_{k+1}$ with respect to $\langle \cdot, \cdot \rangle_A$, we must have that
$$Ap_{k-1} = \sum_{j=0}^{k} \langle Ap_{k-1}, p_j \rangle_A \, p_j = \sum_{j=0}^{k} \langle p_{k-1}, Ap_j \rangle_A \, p_j,$$
where we have used the symmetry of $A$ with respect to $\langle \cdot, \cdot \rangle_A$. However, for $j < k - 2$ we have $Ap_j \in W_{k-1}$, and as a consequence $\langle p_{k-1}, Ap_j \rangle_A = 0$. We therefore obtain that
$$Ap_{k-1} = \sum_{j=k-2}^{k} \langle p_{k-1}, Ap_j \rangle_A \, p_j,$$
so that
$$\langle p_{k-1}, Ap_k \rangle_A \, p_k = Ap_{k-1} - \langle Ap_{k-1}, p_{k-1} \rangle_A \, p_{k-1} - \langle Ap_{k-1}, p_{k-2} \rangle_A \, p_{k-2}.$$
The right hand side thus gives a vector $q$ with the same direction as $p_k$. Finally we compute $\|q\|_A$, and define $p_k = q/\|q\|_A$.
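Putting c) and d) together gives a complete iteration. The following is a minimal sketch under stated assumptions ($A$ symmetric and nonsingular, no breakdown $q = 0$, and no reorthogonalization against rounding errors), not an optimized implementation:

    import numpy as np

    def inner_A(A, v, w):
        return (A @ v) @ (A @ w)       # <v, w>_A = v^T A^T A w

    def solve_sketch(A, b, m):
        x = np.zeros(len(b))
        p_prev, p = None, b / np.linalg.norm(A @ b)  # p_0, with ||p_0||_A = 1
        for k in range(m):
            r = b - A @ x
            x = x + (p @ A.T @ r) * p                # update step from c)
            q = A @ p - inner_A(A, A @ p, p) * p     # three-term recursion
            if p_prev is not None:                   # from d)
                q = q - inner_A(A, A @ p, p_prev) * p_prev
            nq = np.linalg.norm(A @ q)               # ||q||_A
            if nq < 1e-12:                           # breakdown / convergence
                break
            p_prev, p = p, q / nq
        return x

    rng = np.random.default_rng(4)
    n = 6
    M = rng.standard_normal((n, n))
    A = M + M.T + 2 * n * np.eye(n)    # symmetric example matrix
    b = rng.standard_normal(n)
    x = solve_sketch(A, b, n)
    print(np.linalg.norm(b - A @ x))   # small residual after n steps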

Good luck!
