
By

Prof. (Dr.) Premanand P. Ghadekar

10/14/2016 1
Why dimensionality reduction?
 Some features may be irrelevant
 To visualize high-dimensional data
 The “intrinsic” dimensionality may be smaller than the number of features

What is PCA?

• Analysis of n-dimensional data
• Observes the correlation between different dimensions
• Determines the principal directions along which the variance of the data is high

Why PCA?

• Determines a (lower-dimensional) basis to represent the data
• Useful as a compression mechanism
• Useful for decreasing the dimensionality of high-dimensional data

PCA Algorithm

 PCA algorithm:
 X  create the N x d data matrix, with one row vector xn per data point
 Subtract the mean x̄ from each row vector xn in X
 Σ  find the covariance matrix of X
 Find the eigenvectors and eigenvalues of Σ
 PCs  the M eigenvectors with the largest eigenvalues

Steps in PCA: 1
Calculate the adjusted data set

Adjusted data set A = D − M, where D is the n-dimensional data set (one
column per data sample) and M holds the mean values: Mi is calculated by
taking the mean of the values in dimension i.

Steps in PCA: 2
Calculate the covariance matrix C from the adjusted data set A

Covariance matrix: C

Cij = cov(i, j)

Steps in PCA: 3
Calculate the eigenvectors and eigenvalues of C

C can be factored into its eigenvectors (the columns of a matrix E) and the
corresponding eigenvalues: C E = E Λ, with Λ the diagonal matrix of
eigenvalues.

If some eigenvalues are 0 or very small, we can essentially discard those
eigenvalues and the corresponding eigenvectors, hence reducing the
dimensionality of the new basis.

Principal Components
 Gives the best axis to project onto
 Minimum RMS error
 Principal vectors are orthogonal

[Scatter plot of 2-D points on axes from -5 to 5, with the 1st and 2nd
principal vectors drawn as orthogonal directions through the point cloud.]

Unitary Matrix
 What is a Unitary Matrix?
A matrix A is unitary if
A-1 = (A*)T

A* = conjugate of A
(A*)T = transpose of the conjugate of A (the conjugate transpose, also
written AH)
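As a quick check of the definition, the sketch below builds a small unitary matrix (a hypothetical 2x2 example, not from the slides) and verifies that its inverse equals its conjugate transpose:

```python
import numpy as np

# A simple 2x2 unitary matrix (hypothetical example)
A = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])

A_inv = np.linalg.inv(A)
A_H = A.conj().T                 # conjugate transpose (A*)^T

print(np.allclose(A_inv, A_H))   # True: A is unitary
print(np.allclose(A @ A_H, np.eye(2)))   # True: A A^H = I
```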

Singular Value Decomposition
 The SVD for square matrices was discovered independently by Beltrami in
1873 and Jordan in 1874, and extended to rectangular matrices by Eckart and
Young in the 1930s.
 The SVD of a rectangular matrix A is a decomposition of the form

A = U S VT (or A = U S VH in the complex case)

where A is an m x n matrix,
U and V are orthogonal/unitary matrices, and
S is a diagonal matrix comprised of the singular values of A.
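The decomposition and the orthogonality of U and V can be verified numerically; this is a small sketch on a random matrix, using NumPy's `numpy.linalg.svd`:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))              # an m x n rectangular matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
S = np.diag(s)

print(np.allclose(A, U @ S @ Vt))        # True: A = U S V^T
print(np.allclose(U.T @ U, np.eye(4)))   # True: columns of U are orthonormal
print(np.allclose(Vt @ Vt.T, np.eye(4))) # True: columns of V are orthonormal
print(np.all(np.diff(s) <= 0))           # True: singular values sorted decreasing
```

With `full_matrices=False`, NumPy returns the "economy" SVD, whose shapes match the rank-k picture used on the later slides.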
SVD - Properties

THEOREM: it is always possible to decompose a matrix A into A = U S VT, where
 U, S, V: unique (up to standard sign and ordering conventions)
 U, V: column-orthonormal (i.e., the columns are unit vectors, orthogonal
to each other)
 UTU = I; VTV = I (I: identity matrix)
 S: the singular values are non-negative, and sorted in decreasing order.

SVD Properties

A = U S VT can be expanded as a sum of rank-one terms,
A = σ1 u1 v1T + σ2 u2 v2T + … + σn un vnT,
where ui and vi are the i-th columns of U and V, respectively.
SVD - Properties

‘spectral decomposition’ of the matrix:

1 1 1 0 0
2 2 2 0 0
1 1 1 0 0
5 5 5 0 0   =   λ1 u1 v1T + λ2 u2 v2T
0 0 0 2 2
0 0 0 3 3
0 0 0 1 1

Only two singular values (λ1 and λ2) are nonzero, so two rank-one terms
reproduce the matrix exactly.
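The rank-one expansion can be checked numerically on this same example matrix; the sketch below recovers the two nonzero singular values, 9.64 and 5.29, used on the following slides:

```python
import numpy as np

A = np.array([[1, 1, 1, 0, 0],
              [2, 2, 2, 0, 0],
              [1, 1, 1, 0, 0],
              [5, 5, 5, 0, 0],
              [0, 0, 0, 2, 2],
              [0, 0, 0, 3, 3],
              [0, 0, 0, 1, 1]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Sum of rank-one terms: A = sum_i s_i * u_i * v_i^T
B = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))

print(np.allclose(A, B))      # True: the expansion reproduces A
print(np.round(s[:2], 2))     # [9.64 5.29], the two nonzero singular values
```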
Singular value decomposition
 SVD: A = U S VT, where A is m x n, U is m x k, S is k x k, and VT is
k x n (with k = rank R of A).

 Keeping the k largest singular values gives the best rank-k
approximation.

 PCA is an important application of SVD.
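The best rank-k approximation claim can be illustrated as follows (a sketch on random data; by the Eckart-Young theorem, the Frobenius error of the truncated SVD equals the energy of the discarded singular values):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # keep the k largest singular values

# Frobenius error of the truncation equals sqrt(sum of discarded s_i^2)
err = np.linalg.norm(A - A_k, 'fro')
print(np.isclose(err, np.sqrt(np.sum(s[k:] ** 2))))   # True
```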

SVD of a Matrix

M = U S VT

U and V are orthogonal matrices, and S is a diagonal matrix consisting of
singular values.

SVD - Example

 A = U S VT - example:

1 1 1 0 0       0.18 0
2 2 2 0 0       0.36 0
1 1 1 0 0       0.18 0
5 5 5 0 0   =   0.90 0      x   9.64 0      x   0.58 0.58 0.58 0    0
0 0 0 2 2       0    0.53       0    5.29       0    0    0    0.71 0.71
0 0 0 3 3       0    0.80
0 0 0 1 1       0    0.27
SVD – Dimensionality reduction

 Q: how exactly is dimensionality reduction done?

 A: set the smallest singular values to zero. In the example above, the
smaller singular value, 5.29, is zeroed out, so the corresponding column of
U and row of VT can be dropped:
SVD - Dimensionality reduction

1 1 1 0 0       0.18
2 2 2 0 0       0.36
1 1 1 0 0       0.18
5 5 5 0 0   ~   0.90   x   9.64   x   0.58 0.58 0.58 0 0
0 0 0 2 2       0
0 0 0 3 3       0
0 0 0 1 1       0

SVD - Dimensionality reduction

1 1 1 0 0       1 1 1 0 0
2 2 2 0 0       2 2 2 0 0
1 1 1 0 0       1 1 1 0 0
5 5 5 0 0   ~   5 5 5 0 0
0 0 0 2 2       0 0 0 0 0
0 0 0 3 3       0 0 0 0 0
0 0 0 1 1       0 0 0 0 0
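The rank-1 reconstruction above can be reproduced numerically; the sketch below keeps only the largest singular value of the example matrix:

```python
import numpy as np

A = np.array([[1, 1, 1, 0, 0],
              [2, 2, 2, 0, 0],
              [1, 1, 1, 0, 0],
              [5, 5, 5, 0, 0],
              [0, 0, 0, 2, 2],
              [0, 0, 0, 3, 3],
              [0, 0, 0, 1, 1]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])   # keep only the largest singular value

print(np.allclose(A1[:4], A[:4]))          # True: the first block is kept intact
print(np.allclose(A1[4:], 0, atol=1e-9))   # True: the second block collapses to zero
```

The two blocks of this matrix have disjoint supports, so the top singular triplet captures the first block exactly, which is why the approximation matches it perfectly.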

Applications of SVD in Image Processing
The SVD approach can be used in image compression.
SVD can be used in face recognition.
SVD can be used in watermarking.
SVD can be used for texture classification.
SVD can be used for dynamic texture synthesis.

Introduction

 Texture - the repetition of a specific structure, which is not limited
only to the visual domain.

[Example of a texture image (left) and a natural image (right).]

Texture Types
 Static texture - an image showing spatial stationarity.
 Dynamic texture - a sequence of images showing temporal stationarity.

[Examples: bark of a tree (static texture) and a flame (dynamic texture).]
Dynamic Texture Synthesis

Synthesis - definition and different approaches

 Physics-based approach - models the natural phenomenon; high quality,
very flexible, but specific to a particular texture and costly (rendering
technique).

 Image-based approach - two types:

 Non-parametric approach - extracts different video clips from the
original video and patches them together to obtain a new video; high
quality, but not flexible, and on-the-fly synthesis is not possible.
 Parametric approach - parametric techniques are based on a model of the
dynamic texture; compact representation, more flexible, and on-the-fly
synthesis is possible.

Comparison
Between different approaches for Dynamic Texture Synthesis
Parameter        Physics-based   Image-based: Patch   Image-based: Parametric
Analysis Cost    HIGH            LOW                  LOW
Synthesis Cost   HIGH            LOW                  LOW
Model Size       LOW             HIGH                 LOW
Specificity      HIGH            LOW                  LOW
Flexibility      HIGH            LOW                  HIGH

SVD-Soatto-Doretto Model

Soatto and Doretto’s model for the analysis and synthesis of dynamic
textures.
Analysis

The analysis consists of:

1) Finding an appropriate subspace to represent the trajectory.
2) Identifying the trajectory.

The analysis is done as follows.

First step -
A sequence of color images of size N × M is reshaped into a sequence of
column vectors of length m = N × M × 3 (the original colour encoding is
RGB). The column vectors are ordered in a matrix Y.

The temporal mean of each pixel is computed, stored in a vector d ∈ Rm, and
subtracted from each column of Y. A decomposition of the mean-subtracted
matrix is then obtained by using the SVD:

Y = U S VT

Analysis
The columns of U and V are defined as the left and right singular vectors,
respectively, and S is a diagonal matrix formed by the singular values of Y,
ordered by decreasing energy.

Dimension reduction is performed by retaining the first n singular values of
the decomposition. Indicating with Un and Vn the matrices collecting the
first n columns of U and V, and with Sn the diagonal matrix collecting the
first n diagonal elements of S, the matrix

Un Sn VnT

defines the best rank-n approximation of the matrix Y.

Assume that C = Un and X = Sn VnT.

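The first analysis step can be sketched as follows (synthetic random data stands in for a real video; the pixel count m, the frame count, and the model order n are arbitrary toy values, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
m, tau, n = 300, 40, 5                  # pixels per frame, frames, model order

Y = rng.normal(size=(m, tau))           # each column is one unrolled frame
d = Y.mean(axis=1, keepdims=True)       # temporal mean of each pixel
U, s, Vt = np.linalg.svd(Y - d, full_matrices=False)

C = U[:, :n]                            # spatial basis ("eigen-frames")
X = np.diag(s[:n]) @ Vt[:n, :]          # state trajectory, one column per frame

print(C.shape, X.shape)                 # (300, 5) (5, 40)
```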
Analysis
The generic column vector yk is thus

yk = C xk + e,

where xk is the k-th column of matrix X and e is the residual error.
Written differently, we see that

yk = x1k c1 + x2k c2 + … + xnk cn + e,

where ci, with i = 1, . . . , n, indicates the i-th column vector of the
matrix C. The above equation shows that yk is a linear combination of the
columns of matrix C, which contains the basis of the subspace onto which the
image vectors y are projected.
As an example, in the case of the video “Flame”, the columns of matrix C
contain the “eigen-flames” whose linear combination generates one flame
image.
Analysis
Matrix X fixes the weights of the linear combination, thus establishing how
frames change in time. In other words, it represents the dynamics of the
video sequence.
The second step is the identification of the system dynamics. This is done
starting from the matrix X, which is associated with the time evolution of
the video frames.
A MAR(1) (first-order multivariate autoregressive) model is used to
represent the evolution of xk:

xk+1 = A xk + B vk

where vk is a Gaussian random noise vector, white in time (i.e., the noise
is uncorrelated across frames).

Synthesis
Second step -
The final model is thus represented by the following system of equations:

xk+1 = A xk + B vk
zk = C xk + d

where matrix X fixes the weights of the linear combination, thus
establishing how frames change in time; A ∈ Rn×n and B ∈ Rn×nv are matrices
computed from X; and vk is a Gaussian random noise vector.

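A minimal sketch of identifying A and B from X and then synthesizing new frames is given below; the least-squares fit of A and the choice of B as a Cholesky factor of the residual covariance are common conventions assumed here, not spelled out on the slides, and C, d, and X are random stand-ins for the analysis-step outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau, m = 4, 50, 100
C = rng.normal(size=(m, n))             # spatial basis from the analysis step
d = rng.normal(size=(m, 1))             # temporal mean
X = rng.normal(size=(n, tau))           # state trajectory from the analysis step

# Least-squares fit of x_{k+1} = A x_k
A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
resid = X[:, 1:] - A @ X[:, :-1]
B = np.linalg.cholesky(np.cov(resid) + 1e-8 * np.eye(n))  # shapes the noise

# Synthesize new frames: x_{k+1} = A x_k + B v_k, z_k = C x_k + d
x = X[:, [0]]
frames = []
for _ in range(10):
    x = A @ x + B @ rng.normal(size=(n, 1))
    frames.append(C @ x + d)

print(len(frames), frames[0].shape)     # 10 (100, 1)
```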
Survey
I. Standard SVD model, by Soatto and Doretto.
It performs dimension reduction of the data by applying the SVD to the video
frames unfolded into column vectors. Drawback - this permits exploiting only
the temporal correlation.

II. The FFT approach, by Abraham.
Abraham computes the FFT of the single video frames before the analysis step
and then applies the linear model. This permits exploiting both the temporal
correlation and the spatial correlation among pixels.
Drawback - an inverse FFT operation must be done during synthesis for each
frame.

