
Linear Algebra and Geometry 3

Inner product spaces, quadratic forms, and more advanced problem solving

SVD and Fundamental Theorem of Linear Algebra

Hania Uscka-Wehlou, Ph.D. (2009, Uppsala University: Mathematics)


University teacher in mathematics, Sweden
Video 90, Part 2

Gilbert Strang, The Fundamental Theorem of Linear Algebra (7 pages)
Gilbert Strang, The Four Fundamental Subspaces: 4 Lines (6 pages)
Video 90, Parts 1 and 2 of the Theorem: about the 4 fundamental matrix spaces and their dimensions.

[Diagram: the four fundamental subspaces of an m×n matrix A.
In ℝⁿ: Row(A) = Col(Aᵀ) = all Aᵀy, with dim = r; Null(A) = all x such that Ax = 0, with dim = n − r.
In ℝᵐ: Col(A) = all Ax, with dim = r; Null(Aᵀ) = all y such that Aᵀy = 0, with dim = m − r.]
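As a quick numerical illustration of the dimension count (not part of the original slides; the concrete 3×4 matrix is an assumed example), a minimal numpy sketch:

```python
import numpy as np

# m = 3, n = 4; the third row is the sum of the first two, so rank r = 2.
A = np.array([[1., 2., 0., 1.],
              [2., 4., 1., 3.],
              [3., 6., 1., 4.]])

m, n = A.shape
r = np.linalg.matrix_rank(A)

print("dim Row(A) = dim Col(A) =", r)      # r = 2
print("dim Null(A)             =", n - r)  # n - r = 2
print("dim Null(A^T)           =", m - r)  # m - r = 1
```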
Video 90, Part 3 of the Theorem: about orthonormal bases in these spaces, about SVD, and about the pseudoinverse of A.

Since the uᵢ's and vᵢ's are columns of orthogonal matrices U and V, we have the Singular Value Decomposition A = UΣVᵀ:

AV = A[v₁ ⋯ vᵣ ⋯ vₙ] = [u₁ ⋯ uᵣ ⋯ uₘ] Σ = UΣ,

where Σ is the m×n "diagonal" matrix with σ₁, …, σᵣ in its first r diagonal entries and zeros elsewhere.

This gives a kind of orthogonal diagonalization for rectangular matrices and for non-diagonalizable matrices, i.e., it gives such ON bases in ℝⁿ and ℝᵐ that the matrix is "diagonal" with respect to them.

The vᵢ's are orthonormal eigenvectors of AᵀA, with eigenvalue σᵢ² ≥ 0. Then the eigenvector matrix V diagonalizes AᵀA = (VΣᵀUᵀ)(UΣVᵀ) = V(ΣᵀΣ)Vᵀ. Similarly, U diagonalizes AAᵀ. When matrices are not symmetric or square, it is AᵀA and AAᵀ that make things right.

In summary, for A = UΣVᵀ:
- the basis {u₁, …, uₘ} ⊂ ℝᵐ contains ON eigenvectors of AAᵀ;
- the basis {v₁, …, vₙ} ⊂ ℝⁿ contains ON eigenvectors of AᵀA;
- the diagonal entries σ₁, …, σᵣ are the non-zero singular values of A;
- from row space to column space: Avᵢ = σᵢuᵢ, i = 1, …, r;
- the other basis vectors are in the nullspaces of A and Aᵀ: Avᵢ = 0 and Aᵀuᵢ = 0 for i > r.

This summary is completed by one more matrix: the pseudoinverse A⁺ = VΣ⁺Uᵀ. This matrix inverts A where that is possible, from column space back to row space. It has the same nullspace as Aᵀ. It gives the shortest solution to Ax = b, because A⁺b is the particular solution in the row space: AA⁺b = b (when b lies in the column space of A). Every matrix is invertible from row space to column space, and A⁺ provides the inverse:

Pseudoinverse: A⁺uᵢ = vᵢ/σᵢ for i = 1, …, r.
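These claims can be checked numerically; a minimal sketch, assuming numpy and a random matrix of my own choosing (not from the course):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))              # generic 3x4 matrix, rank r = 3

U, s, Vt = np.linalg.svd(A)                  # A = U Sigma V^T; rows of Vt are the v_i
r = int(np.sum(s > 1e-12))

# V diagonalizes A^T A: V^T (A^T A) V is diagonal with sigma_i^2 on top.
D = Vt @ (A.T @ A) @ Vt.T
print(np.allclose(np.diag(D)[:r], s[:r] ** 2))          # True

# A v_i = sigma_i u_i  and  A^+ u_i = v_i / sigma_i  for i = 1, ..., r.
A_plus = np.linalg.pinv(A)
for i in range(r):
    print(np.allclose(A @ Vt[i], s[i] * U[:, i]),
          np.allclose(A_plus @ U[:, i], Vt[i] / s[i]))  # True True
```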
[Diagram: the four fundamental subspaces of the m×n matrix A, with orthonormal basis vectors v₁, …, v₄ of ℝⁿ and u₁, …, u₄ of ℝᵐ drawn in the respective subspaces; A = UΣVᵀ maps ℝⁿ to ℝᵐ, and A⁺ = VΣ⁺Uᵀ maps ℝᵐ back to ℝⁿ.]
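One way to read this picture in code: any x ∈ ℝⁿ splits into a row-space part plus a nullspace part, and A only sees the first. A minimal sketch, assuming a rank-2 example of my own:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
A[2] = A[0] + A[1]                        # force rank r = 2

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))

x = rng.standard_normal(4)
x_row  = Vt[:r].T @ (Vt[:r] @ x)          # projection onto Row(A) = span(v_1, ..., v_r)
x_null = x - x_row                        # remainder lies in Null(A)

print(np.allclose(A @ x, A @ x_row))      # True: A ignores the nullspace part
print(np.allclose(A @ x_null, 0))         # True
```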
A = UΣVᵀ:

AV = A[v₁ ⋯ vᵣ ⋯ vₙ] = [u₁ ⋯ uᵣ ⋯ uₘ] diag(σ₁, …, σᵣ, 0, …, 0) = UΣ,

i.e., Avᵢ = σᵢuᵢ for i = 1, …, r.

A⁺ = VΣ⁺Uᵀ:

A⁺U = A⁺[u₁ ⋯ uᵣ ⋯ uₘ] = [v₁ ⋯ vᵣ ⋯ vₙ] diag(1/σ₁, …, 1/σᵣ, 0, …, 0) = VΣ⁺,

i.e., A⁺uᵢ = vᵢ/σᵢ for i = 1, …, r.
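Both identities can be verified by building Σ and Σ⁺ explicitly; a sketch with a random example (my own, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))                   # m = 3, n = 5, generically rank r = 3

U, s, Vt = np.linalg.svd(A)                       # U: 3x3, Vt: 5x5
m, n = A.shape

Sigma = np.zeros((m, n))                          # m x n, sigma_i on the diagonal
Sigma[:len(s), :len(s)] = np.diag(s)

Sigma_plus = np.zeros((n, m))                     # n x m, 1/sigma_i on the diagonal
Sigma_plus[:len(s), :len(s)] = np.diag(1.0 / s)

A_plus = Vt.T @ Sigma_plus @ U.T                  # A^+ = V Sigma^+ U^T

print(np.allclose(A @ Vt.T, U @ Sigma))           # AV    = U Sigma
print(np.allclose(A_plus @ U, Vt.T @ Sigma_plus)) # A^+ U = V Sigma^+
print(np.allclose(A_plus, np.linalg.pinv(A)))     # agrees with numpy's pinv
```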

[Diagram: the four fundamental subspaces of the m×n matrix A = UΣVᵀ, with the SVD bases drawn in the respective subspaces and A⁺ = VΣ⁺Uᵀ mapping back.]

Avᵢ = σᵢuᵢ, i = 1, …, r (row space to column space)
A⁺uᵢ = vᵢ/σᵢ, i = 1, …, r (column space back to row space)
Avᵢ = 0, i = r + 1, …, n
A⁺uᵢ = 0 = Aᵀuᵢ, i = r + 1, …, m

When rank A = n (so that AᵀA is invertible), the pseudoinverse has the closed form A⁺ = (AᵀA)⁻¹Aᵀ.
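The closed form can be checked against numpy's pinv; a sketch assuming a tall matrix of full column rank:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))          # tall matrix, generically rank n = 3

lhs = np.linalg.pinv(A)                  # A^+ from the SVD
rhs = np.linalg.inv(A.T @ A) @ A.T       # (A^T A)^{-1} A^T

print(np.allclose(lhs, rhs))             # True
print(np.allclose(lhs @ A, np.eye(3)))   # here A^+ is a left inverse of A
```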
