Jonathan Monsalve.
Thursday 20th February, 2020
Universidad Industrial de Santander
Advisor: Ph.D. Henry Arguello
Agenda
Spectral imaging
F \in \mathbb{R}^{N \times N \times L}, with N \times N spatial dimensions and L spectral bands.
# Large data sizes and high storage costs.
# Time-consuming acquisition.
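As a rough illustration of these costs (the cube dimensions below are assumed for the example, not taken from a particular dataset), a single cube with N = 512 and L = 200 bands stored at 16 bits per voxel occupies

512 \times 512 \times 200 \times 2\ \text{bytes} \approx 105\ \text{MB},

so acquiring and storing many full cubes quickly becomes expensive.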
Compressive Sensing (CS)
Figure: Conventional acquisition pipeline (signal x → ADC → processing → result y) versus the compressive pipeline (signal x → compressive processing → ADC → result y).
Y = P^{T} X + N, \qquad (5)
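A minimal numerical sketch of the sensing model in (5), assuming X stacks the spectral signatures of the pixels as columns, P is a random sensing matrix with fewer rows than bands, and N is additive noise; the sizes and variable names below are illustrative, not taken from the slides.

import numpy as np

rng = np.random.default_rng(0)

L_bands, n_pixels, m = 200, 1024, 40            # bands, pixels, measurements per pixel (illustrative)
X = rng.random((L_bands, n_pixels))             # spectral signatures as columns (placeholder data)
P = rng.normal(size=(L_bands, m)) / np.sqrt(m)  # random sensing matrix P
N = 0.01 * rng.normal(size=(m, n_pixels))       # additive noise

Y = P.T @ X + N                                 # compressive measurements, Eq. (5)
print(Y.shape)                                  # (40, 1024): m values per pixel instead of L_bands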
Covariance matrix estimation
Specific objectives
1. To determine, from the state of the art in compressive sensing and random projections, the sensing/projection protocols best suited to hyperspectral imaging for use in the recovery of the sample statistics.
2. To design an algorithm based on the gradient descent
method to recover the first and second sample statistical
moments from low-dimensional random projections.
3. To test the performance of the proposed algorithm to recover
the sample statistics in hyperspectral imaging.
4. To adapt a state-of-the-art algorithm to estimate the
vegetation cover using sample statistics and random
low-dimensional projections of hyperspectral images based on
the proposed approach.
5. To verify the performance of the adapted algorithm.
Proposal
C_1 \approx C_2 \approx \cdots \approx C_p \approx \Sigma, \qquad (11)
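A small sketch of the assumption in (11): splitting the pixels into p disjoint subsets and computing each subset's sample covariance C_i should give matrices close to the full covariance Σ. The synthetic data, sizes, and partitioning below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

l, n_pixels, p = 50, 4096, 8                  # spectral bands, pixels, number of subsets (illustrative)
A = rng.normal(size=(l, l))
X = A @ rng.normal(size=(l, n_pixels))        # correlated synthetic spectra, one pixel per column

Sigma = np.cov(X)                             # full sample covariance
subsets = np.array_split(rng.permutation(n_pixels), p)
C = [np.cov(X[:, idx]) for idx in subsets]    # per-subset covariances C_1, ..., C_p

# relative distance of each C_i to Sigma; small values support C_1 ≈ ... ≈ C_p ≈ Σ as in (11)
print([float(np.linalg.norm(Ci - Sigma) / np.linalg.norm(Sigma)) for Ci in C])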
Proposed optimization problem
Projected gradient descent method
\Sigma^{*} = \arg\min_{\Sigma} \; \sum_{i=1}^{p} \left\| \tilde{\Sigma}_i - P_i^{T} \Sigma P_i \right\|_F^{2} + \tau \psi(\Sigma), \qquad (16)
subject to \Sigma \succeq 0.
Consider the vector form of the problem,
g(\Sigma) = \sum_{i=1}^{p} \left\| Q_i \operatorname{vec}(\Sigma) - \operatorname{vec}(\tilde{\Sigma}_i) \right\|_2^{2} + \tau d^{T} \operatorname{vec}(\Sigma), \qquad (17)
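The slides do not restate Q_i at this point; assuming it is simply the matrix that expresses the compressed covariance P_i^T \Sigma P_i in vector form, the standard identity \operatorname{vec}(AXB) = (B^{T} \otimes A)\operatorname{vec}(X) gives

\operatorname{vec}(P_i^{T} \Sigma P_i) = (P_i^{T} \otimes P_i^{T})\,\operatorname{vec}(\Sigma), \qquad \text{i.e., } Q_i = P_i^{T} \otimes P_i^{T}.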
Theorem
Suppose that g(Σ) and h(Σ) are proper, closed, and convex functions, that dom(h(Σ)) ⊆ int(dom(g(Σ))), and that g(Σ) is L-smooth. Let {Σ_k}_{k≥0} be the sequence of points generated by the projected gradient algorithm. Then, for any optimal point Σ* and k ≥ 0,
L-smooth function
A function g is said to be L-smooth if it is differentiable and there exists an L > 0 such that
\| \nabla g(x) - \nabla g(y) \|_2 \le L \| x - y \|_2, \qquad (20)
for all x, y ∈ E, where E is the domain of g. Thus, for the function g(Σ) defined in (17),
\nabla g(x) = \sum_{i=1}^{p} Q_i^{T} \left( Q_i x - \tilde{\sigma}_i \right) + \tau d, \qquad (21)
where \tilde{\sigma}_i = \operatorname{vec}(\tilde{\Sigma}_i).
Then, by substituting (21) into (20),
\left\| \left( \sum_{i=1}^{p} Q_i^{T} ( Q_i x - \tilde{\sigma}_i ) + \tau d \right) - \left( \sum_{i=1}^{p} Q_i^{T} ( Q_i y - \tilde{\sigma}_i ) + \tau d \right) \right\|_2, \qquad (22)
and, after some algebra,
\left\| \sum_{i=1}^{p} ( Q_i^{T} Q_i ) ( x - y ) \right\|_2 \le \left\| \sum_{i=1}^{p} Q_i^{T} Q_i \right\| \, \| x - y \|_2. \qquad (23)
Thus, L = \left\| \sum_{i=1}^{p} Q_i^{T} Q_i \right\|.
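A minimal Python sketch of the resulting projected gradient iteration for (16)-(17): the gradient follows (21), the step size is 1/L with L as in (23), and the constraint Σ ⪰ 0 is enforced by clipping negative eigenvalues. The function shape, the explicit Kronecker construction of Q_i, and the fixed iteration count are illustrative assumptions, not the slides' exact Algorithm 1.

import numpy as np

def projected_gradient(Sigma_tilde, P, tau, d, iters=200):
    # Sigma_tilde: list of compressed covariances Σ̃_i (m x m)
    # P:           list of sensing matrices P_i (l x m)
    # d:           vector of the linear regularizer term in (17), length l*l
    l = P[0].shape[0]
    Q = [np.kron(Pi.T, Pi.T) for Pi in P]              # Q_i = P_i^T ⊗ P_i^T
    L = np.linalg.norm(sum(Qi.T @ Qi for Qi in Q), 2)  # L = ||sum_i Q_i^T Q_i||, Eq. (23)
    x = np.zeros(l * l)                                # x = vec(Σ), initialized at zero
    for _ in range(iters):
        grad = sum(Qi.T @ (Qi @ x - St.ravel(order="F"))
                   for Qi, St in zip(Q, Sigma_tilde)) + tau * d   # gradient, Eq. (21)
        x = x - grad / L                               # gradient step with step size 1/L
        S = x.reshape(l, l, order="F")
        S = (S + S.T) / 2                              # symmetrize
        w, V = np.linalg.eigh(S)
        S = V @ np.diag(np.clip(w, 0, None)) @ V.T     # project onto the PSD cone Σ ⪰ 0
        x = S.ravel(order="F")
    return S

Forming Q_i explicitly is only feasible for a small number of bands l; for realistic hyperspectral covariances one would apply Σ ↦ P_i^T Σ P_i and its adjoint directly instead of building the Kronecker products.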
Bias of the estimator
We assume that
C_1 \approx C_2 \approx \cdots \approx C_p \approx \Sigma, \qquad (24)
this assumption adds a bias to the estimator. To characterize it, define
C_i = \Sigma + R_i, \qquad (25)
where R_i \in \mathbb{R}^{l \times l} is an error matrix. Thus, the bias is characterized by Lemma 2.
Lemma
The gradient step of the proposed Algorithm 1 is biased by
\operatorname{Bias}[\nabla \tilde{f}(\Sigma)] = - \sum_{i=1}^{p} P_i P_i^{T} R_i P_i P_i^{T}.
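To see where this term comes from (assuming \tilde{\Sigma}_i = P_i^{T} C_i P_i and taking the gradient of the data-fidelity term of (16) up to constant factors), substitute C_i = \Sigma + R_i; evaluated at the true covariance,

\tilde{\Sigma}_i - P_i^{T} \Sigma P_i = P_i^{T} (\Sigma + R_i) P_i - P_i^{T} \Sigma P_i = P_i^{T} R_i P_i,

so the gradient acquires the extra term

-\sum_{i=1}^{p} P_i \big( P_i^{T} R_i P_i \big) P_i^{T} = -\sum_{i=1}^{p} P_i P_i^{T} R_i P_i P_i^{T},

which is the bias stated in the lemma.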
Mitigating the Bias
Hyperspectral images used
Figure: Urban dataset. (Left) 2D spatial distribution of the 100th spectral band. (Right)
Three spectral signatures at different pixels of the image.
Figure: Pavia centre dataset. (Left) 2D spatial distribution of the 80th spectral band.
(Right) Three spectral signatures at different pixels of the image.
Sensing matrices
# Normal distribution
# Uniform distribution
# Bernoulli distribution
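A sketch of how the three kinds of sensing matrices above could be drawn (the column normalization and the example sizes are assumptions for illustration; the slides do not specify these details).

import numpy as np

def sensing_matrix(l, m, kind="normal", rng=None):
    # Draws an l x m sensing matrix P_i with entries from one of the distributions above.
    rng = rng if rng is not None else np.random.default_rng()
    if kind == "normal":
        P = rng.normal(size=(l, m))
    elif kind == "uniform":
        P = rng.uniform(-1.0, 1.0, size=(l, m))
    elif kind == "bernoulli":
        P = rng.choice([-1.0, 1.0], size=(l, m))      # symmetric Bernoulli (±1) entries
    else:
        raise ValueError(kind)
    return P / np.linalg.norm(P, axis=0)              # normalize each column

P1 = sensing_matrix(200, 40, kind="bernoulli")        # e.g. 200 bands compressed to 40 measurements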
Determining the number of subsets
Figure: Mean square error of the reconstructed covariance matrix for Pavia (left) and
Urban (right) images by varying the number of subsets.
Accuracy of the recovered eigenvectors of the covariance matrix
Figure: Mean MSE of the recovered covariance matrix as the compression ratio varies, with 32 subsets, for the Urban and Pavia images and the Binary, Gaussian, and Uniform sensing matrices.
Angle of reconstructed eigenvectors
Figure: Mean angle error between the first three eigenvectors (1 ev, 2 ev, 3 ev) and their reconstructions as the compression ratio varies, for the Pavia image with 32 subsets, using Binary, Gaussian, and Uniform sensing matrices.
Angle of reconstructed eigenvectors
Figure: Mean angle error between the first three eigenvectors (1 ev, 2 ev, 3 ev) and their reconstructions as the compression ratio varies, for the Urban image with 32 subsets, using Binary, Gaussian, and Uniform sensing matrices.
Bias and filtered gradient analysis
Figure: Comparison of the fourth eigenvector of the estimated covariance matrix and the bias term, with and without the filtering.
Conclusions
# We proposed an algorithm to recover the covariance matrix
from a set of compressive measurements using the
projected gradient strategy.
# Convergence of the algorithm is proven.
# The theoretical results show that a filtered gradient can reduce the bias.
# Experimental results show that the proposed method outperforms state-of-the-art methods.
Thanks
Questions?
Sensing matrices
How are the matrices built?
Figure: Codification snapshots (over λ) and their corresponding sensing matrices P_1, P_2, ..., P_8, applied to the partitions of the data X.
Eigenvectors comparison
Are the covariance matrices similar to each other?
Compressive projection principal component analysis (CPPCA)
CPPCA estimates both the PCA coefficients and the most
relevant eigenvectors of the covariance matrix from an
orthogonal random projection of the data.
Figure: Orthogonal projections P_1 x and P_2 x of the data.
Drawbacks of CPPCA