
VIETNAM NATIONAL UNIVERSITY

HO CHI MINH CITY UNIVERSITY OF TECHNOLOGY


---o0o---

LINEAR ALGEBRA PROJECT REPORT

Instructors: Prof. Dau The Phiet


Prof. Nguyen Tien Dung
Group: 07
Members: Nguyen Hoang Tan Dat – 2052440 [100%]
Lam Duc Duy – 2153248 [0%]
Le Anh Kien – 2153493 [0%]
Tran Ngoc Anh Kiet – 2252408 [0%]
Nguyen Bao Khanh – 2252329 [0%]

Ho Chi Minh City, May 6th, 2023

TABLE OF CONTENTS

THE THEORY
    Introduction
    Definition of the SVD
    Matrix approximation
    Eckart-Young-Mirsky theorem
    Truncation
THE MATLAB CODE
    Full code
    Simple explanation of the code
THE RESULTS AND CONCLUSION
REFERENCES

THE THEORY
Introduction
The singular value decomposition (SVD) is among the most important matrix factorizations of the computational era, providing a foundation for many of the data-driven methods described in [1]. The SVD provides a numerically stable matrix decomposition that is guaranteed to exist and can be used for a variety of purposes. We will use the SVD to obtain low-rank approximations to matrices and to compute pseudo-inverses of non-square matrices in order to solve systems of equations $Ax = b$. Another important use of the SVD is as the underlying algorithm of principal component analysis (PCA), in which high-dimensional data are decomposed into their most statistically descriptive factors. SVD/PCA has been applied to a wide variety of problems in science and engineering.
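For example, when $A$ is tall-skinny, the system $Ax = b$ is overdetermined and the SVD yields its least-squares solution through the pseudo-inverse $A^{+} = V \Sigma^{-1} U^{T}$. A minimal MATLAB sketch (the matrix A and vector b below are illustrative, not data from this project):

% Least-squares solution of an overdetermined system Ax = b via the SVD.
A = randn(100,3);            % illustrative tall-skinny matrix
b = randn(100,1);            % illustrative right-hand side
[U,S,V] = svd(A,'econ');     % economy SVD: U is 100x3, S and V are 3x3
x = V*(S\(U'*b));            % pseudo-inverse solution x = V*inv(S)*U'*b
norm(x - pinv(A)*b)          % matches MATLAB's built-in pinv, up to round-off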

Definition of the SVD


Generally, we are interested in analyzing a large data set $X \in \mathbb{C}^{n \times m}$:

$$X = \begin{bmatrix} \vert & \vert & & \vert \\ x_1 & x_2 & \cdots & x_m \\ \vert & \vert & & \vert \end{bmatrix}$$
The columns $x_k \in \mathbb{C}^{n}$ may be measurements from simulations or experiments. For
example, columns may represent images that have been reshaped into column vectors
with as many elements as pixels in the image. The column vectors may also represent the
state of a physical system that is evolving in time, such as the fluid velocity at a set of
discrete points, a set of neural measurements, or the state of a weather simulation with
one square kilometer resolution.

The index $k$ is a label indicating the $k$th distinct set of measurements. In many applications, $X$ consists of a time series of data, with $x_k = x(k\Delta t)$. Often the state dimension $n$ is very large, on the order of millions or billions of degrees of freedom. The columns are often called snapshots, and $m$ is the number of snapshots in $X$. For many systems $n \gg m$, resulting in a tall-skinny matrix, as opposed to a short-fat matrix when $n \ll m$.

The SVD is a unique matrix decomposition that exists for every complex-valued matrix $X \in \mathbb{C}^{n \times m}$:

$$X = U \Sigma V^{*}$$

where $U \in \mathbb{C}^{n \times n}$ and $V \in \mathbb{C}^{m \times m}$ are unitary matrices with orthonormal columns, and $\Sigma \in \mathbb{R}^{n \times m}$ is a matrix with real, non-negative entries on the diagonal and zeros off the diagonal. Here $V^{*}$ denotes the complex-conjugate transpose of $V$; for real-valued $X$, such as the image data below, this is simply the transpose $V^{T}$.

When $n \ge m$, the matrix $\Sigma$ has at most $m$ non-zero elements on the diagonal and may be written as

$$\Sigma = \begin{bmatrix} \hat{\Sigma} \\ 0 \end{bmatrix}.$$

Therefore, it is possible to exactly represent $X$ using the economy SVD:

$$X = U \Sigma V^{*} = \begin{bmatrix} \hat{U} & \hat{U}^{\perp} \end{bmatrix} \begin{bmatrix} \hat{\Sigma} \\ 0 \end{bmatrix} V^{*} = \hat{U} \hat{\Sigma} V^{*}$$

The columns of $\hat{U}^{\perp}$ span a vector space that is complementary and orthogonal to that spanned by $\hat{U}$. The columns of $U$ are called left singular vectors of $X$ and the columns of $V$ are right singular vectors. The diagonal elements of $\hat{\Sigma} \in \mathbb{R}^{m \times m}$ are called singular values, and they are ordered from largest to smallest. The rank of $X$ is equal to the number of non-zero singular values.

Figure 1
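These definitions can be checked numerically. A minimal MATLAB sketch (the random matrix is an assumption chosen purely for illustration):

% Verify the full and economy SVD of a small tall-skinny matrix.
n = 6; m = 3;
X = randn(n,m);              % illustrative data matrix
[U,S,V] = svd(X);            % full SVD: U is n-by-n, S is n-by-m, V is m-by-m
[Uh,Sh,Vh] = svd(X,'econ');  % economy SVD: Uh is n-by-m, Sh is m-by-m
norm(X - U*S*V')             % both reconstructions are exact up to round-off
norm(X - Uh*Sh*Vh')
norm(U'*U - eye(n))          % unitary: the columns of U are orthonormal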
Matrix approximation
Perhaps the most useful and defining property of the SVD is that it provides an optimal low-
rank approximation to a matrix X. In fact, the SVD provides a hierarchy of low-rank
approximations, since a rank-r approximation is obtained by keeping the leading r singular
values and vectors, and discarding the rest.

Schmidt (of Gram-Schmidt) generalized the SVD to function spaces and developed an
approximation theorem, establishing truncated SVD as the optimal low-rank approximation of
the underlying matrix X. Schmidt’s approximation theorem was rediscovered by Eckart and
Young, and is sometimes referred to as the Eckart-Young theorem.

Eckart-Young-Mirsky theorem

The optimal rank-$r$ approximation to $X$, in a least-squares sense, is given by the rank-$r$ SVD truncation $\tilde{X}$:

$$\tilde{X} = \underset{\tilde{X} \,:\, \operatorname{rank}(\tilde{X}) = r}{\operatorname{argmin}} \lVert X - \tilde{X} \rVert_F = \tilde{U} \tilde{\Sigma} \tilde{V}^{T}$$

Here, $\tilde{U}$ and $\tilde{V}$ denote the first $r$ leading columns of $U$ and $V$, and $\tilde{\Sigma}$ contains the leading $r \times r$ sub-block of $\Sigma$; $\lVert \cdot \rVert_F$ denotes the Frobenius norm.
Here, we establish the notation that a truncated SVD basis (and the resulting approximated matrix $\tilde{X}$) will be denoted by $\tilde{X} = \tilde{U} \tilde{\Sigma} \tilde{V}^{T}$. Because $\Sigma$ is diagonal, the rank-$r$ SVD approximation is given by the sum of $r$ distinct rank-1 matrices:

$$\tilde{X} = \sum_{k=1}^{r} \sigma_k u_k v_k^{T} = \sigma_1 u_1 v_1^{T} + \sigma_2 u_2 v_2^{T} + \cdots + \sigma_r u_r v_r^{T}$$
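This sum can be checked against the sliced form of the truncation used later in the MATLAB code. A minimal sketch (the random matrix and the choice r = 5 are illustrative):

% The rank-r truncation as a sum of r rank-1 matrices.
X = randn(50,30);            % illustrative data matrix
[U,S,V] = svd(X,'econ');
r = 5;
Xr = zeros(size(X));
for k = 1:r
    Xr = Xr + S(k,k)*U(:,k)*V(:,k)';       % sigma_k * u_k * v_k'
end
norm(Xr - U(:,1:r)*S(1:r,1:r)*V(:,1:r)')   % identical, up to round-off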

Truncation
The truncated SVD is illustrated in Figure 2, with $\tilde{U}$, $\tilde{\Sigma}$, and $\tilde{V}$ denoting the truncated matrices. If $X$ does not have full rank, then some of the singular values in $\hat{\Sigma}$ may be zero, and the truncated SVD may still be exact. However, for truncation values $r$ that are smaller than the number of non-zero singular values (i.e., the rank of $X$), the truncated SVD only approximates $X$:

$$X \approx \tilde{U} \tilde{\Sigma} \tilde{V}^{T}$$

There are numerous choices for the truncation rank $r$. If we choose the truncation value to keep all non-zero singular values, then $X = \tilde{U} \tilde{\Sigma} \tilde{V}^{T}$ is exact.

Figure 2
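One common heuristic for choosing $r$ (only one of the numerous possible choices; this sketch assumes [U,S,V] = svd(X) has already been computed, as in the code below) is to keep enough singular values to capture a fixed fraction of the total energy:

% Choose r so the retained singular values capture 99% of the energy.
sv = diag(S);                          % singular values, largest first
energy = cumsum(sv.^2)/sum(sv.^2);     % cumulative fraction of energy
r = find(energy >= 0.99, 1);           % smallest r reaching the threshold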

THE MATLAB CODE
Full code:
clear all, close all, clc

% Load the image and convert it to a grayscale matrix of doubles.
A = imread('Elden Lord.jpg');
X = double(rgb2gray(A));
nx = size(X,1); ny = size(X,2);   % image dimensions (rows, columns)

% Economy SVD of the image matrix.
[U,S,V] = svd(X,'econ');

% Plot the original image and three rank-r approximations.
figure, subplot(2,2,1)
imagesc(X), axis off, colormap gray
title('Original')

plotind = 2;
for r = [10 50 100]
    Xapprox = U(:,1:r)*S(1:r,1:r)*V(:,1:r)';   % rank-r truncation
    subplot(2,2,plotind), plotind = plotind + 1;
    imagesc(Xapprox), axis off
    title(['r=',num2str(r,'%d')]);
end
set(gcf,'Position',[80 240 620 380])

% Plot the singular values on a logarithmic scale.
figure
semilogy(diag(S),'k','LineWidth',1.2), grid on
xlabel('r')
ylabel('Singular value, \sigma_r')
xlim([-50 1550])
set(gcf,'Position',[880 240 620 380])

Simple explanation of the code


Load image
A = imread('Elden Lord.jpg');

Convert the RGB image to grayscale in double precision and record the image dimensions.
X = double(rgb2gray(A));
nx = size(X,1); ny = size(X,2);

Compute the economy SVD; the 'econ' option returns only the first m columns of U.
[U,S,V] = svd(X, 'econ');

Display Elden Lord.jpg after grayscale conversion.


figure, subplot(2,2,1)
imagesc(X), axis off, colormap gray

Matrix approximation: multiply the first $r$ columns of $U$, the leading $r \times r$ block of $\Sigma$, and the transpose of the first $r$ columns of $V$. Display Elden Lord.jpg before and after compression.
plotind = 2;
for r = [10 50 100]
    Xapprox = U(:,1:r)*S(1:r,1:r)*V(:,1:r)';
    subplot(2,2,plotind), plotind = plotind + 1;
    imagesc(Xapprox), axis off
    title(['r=',num2str(r,'%d')]);
end
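This is also why the truncation compresses the image: storing $\tilde{U}$, $\tilde{\Sigma}$, and $\tilde{V}$ requires only $r(n + m + 1)$ numbers instead of the $n \cdot m$ pixels of the original. A minimal sketch using the variables already defined above (the choice r = 100 matches the last loop iteration):

% Storage cost of the rank-r factors versus the original grayscale image.
[n,m] = size(X);
r = 100;
original_storage = n*m;                % every pixel of the image
truncated_storage = r*(n + m + 1);     % U(:,1:r), V(:,1:r), and r singular values
compression_ratio = original_storage/truncated_storage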

Plot the singular values $\sigma_r$ on a logarithmic scale.


figure
semilogy(diag(S),'k','LineWidth',1.2), grid on
xlabel('r')
ylabel('Singular value, \sigma_r')
xlim([-50 1550])
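By the Eckart-Young-Mirsky theorem, the Frobenius error of the rank-$r$ truncation is the root-sum-square of the discarded singular values, so the same diagonal of S also yields the approximation error for every $r$ at once. A minimal sketch (this plot is an addition for illustration, not part of the original code):

% Relative Frobenius error of each rank-r truncation, from the singular values.
sv2 = diag(S).^2;
relerr = sqrt(max(0, 1 - cumsum(sv2)/sum(sv2)));   % ||X - X_r||_F / ||X||_F
figure
semilogy(relerr,'k','LineWidth',1.2), grid on
xlabel('r'), ylabel('Relative truncation error')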

THE RESULTS AND CONCLUSION

Conclusion:
One important aspect of image compression is whether it is lossy or lossless. A lossy compression loses some information but allows for a more effective compression; a lossless compression stores all of the original information, just in a compressed form. SVD truncation is lossy, since the discarded singular values cannot be recovered. With MATLAB we can compress input images in this way, a task that would be impractical to carry out by hand or with ordinary analytical methods.

REFERENCES
[1] S. L. Brunton and J. N. Kutz, Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control, Cambridge University Press.
[2] H. Anton and C. Rorres, Elementary Linear Algebra.
[3] Proof of the Eckart-Young-Mirsky theorem (for the Frobenius norm), https://en.wikipedia.org/wiki/Low-rank_approximation
