PROCEEDINGS OF SPIE

Blind hyperspectral sparse unmixing based on online dictionary learning

Xiaorui Song, Lingda Wu, Hongxing Hao, "Blind hyperspectral sparse unmixing based on online dictionary learning," Proc. SPIE 10789, Image and Signal Processing for Remote Sensing XXIV, 107890K (9 October 2018); doi: 10.1117/12.2325087

Event: SPIE Remote Sensing, 2018, Berlin, Germany

Downloaded From: https://www.spiedigitallibrary.org/conference-proceedings-of-spie on 08 Sep 2021. Terms of Use: https://www.spiedigitallibrary.org/terms-of-use


Blind hyperspectral sparse unmixing based on online dictionary
learning
Song Xiaorui*a, Wu Lingdaa, Hao Hongxinga
a
Science and Technology on Complex Electronic System Simulation Laboratory,
Space Engineering University, Beijing 101416, China

ABSTRACT

Blind hyperspectral unmixing (HU), which comprises the estimation of both endmembers and fractional abundances in hyperspectral images (HSIs), is one of the most prominent research topics in image and signal processing for hyperspectral remote sensing. In this paper, a method of blind HU based on online dictionary learning and sparse coding is proposed for the case in which the spectral signatures in the HSI are unknown. An online optimization algorithm based on stochastic approximations is used for dictionary learning, alternating between optimizing the sparse codes and the dictionary atoms. For the sparse coding, a fully constrained least squares (FCLS) problem is solved to respect the physical meaning of fractional abundances. To estimate the endmembers in the HSI, a clustering algorithm is applied to the atoms of a pruned dictionary obtained from statistics on the sparse codes. With the estimated endmembers, the final fractional abundances are obtained using a sparse unmixing algorithm based on variable splitting, augmented Lagrangian, and total variation. Experimental results on synthetic and real-world data illustrate the effectiveness of the proposed approach.
Keywords: hyperspectral images, blind sparse unmixing, online dictionary learning, sparse coding, endmember
estimation

1. INTRODUCTION
Hyperspectral remote sensing, also known as imaging spectrometry, contributes to many practical applications, such as geological exploration and environmental monitoring1. Research in this field is currently very active and has attracted much attention.
With high spectral resolution, HSIs usually contain hundreds of contiguous spectral bands covering the visible to shortwave-infrared range. However, compared with their spectral resolution, the spatial resolution of HSIs is relatively low. Owing to this low spatial resolution and the complexity of the actual distribution of materials, pixels containing multiple materials, termed mixed pixels, are common in HSIs. The presence of mixed pixels severely restricts the processing accuracy achievable with hyperspectral imagery and has become an important factor hindering the development of hyperspectral remote sensing technology. HU, proposed to address this problem, has become a research hotspot and one of the core developments in signal and image processing for HSIs.
Blind HU comprises the estimation of both endmembers and fractional abundances in HSIs. One main category of blind HU methods is based on the assumption that at least one pure pixel of each endmember is present in the data. Approaches in this category usually exploit geometric features of hyperspectral mixtures to determine the smallest simplex containing the data, such as the pixel purity index (PPI)2, vertex component analysis (VCA)3, and N-FINDR4. However, the pure-pixel assumption is a strong requirement that may not hold in many HSIs.
As compressed sensing and sparse regression have received enormous attention in various fields, the idea of sparse representation has also been introduced into HU. One advantage of this category of methods is that no pure materials need to be assumed in the HSI. In 2009, Bobin, Iordache, and Bioucas-Dias replaced the endmember set in the linear mixture model (LMM) with a spectral library and proposed an LMM based on sparse representation5. In 2011, Bioucas-Dias et al. proposed the SUnSAL algorithm, which uses alternating iterations to obtain a sparse solution for the abundance coefficients under the l1 norm via variable splitting and the augmented Lagrangian multiplier method6. On this basis, a variety of sparse unmixing algorithms such as SUnSAL-TV7, orthogonal matching pursuit (OMP)8, and subspace

*
sxrjmx@163.com; phone (+86)15210980596

Image and Signal Processing for Remote Sensing XXIV, edited by Lorenzo Bruzzone,
Francesca Bovolo, Proc. of SPIE Vol. 10789, 107890K · © 2018 SPIE
CCC code: 0277-786X/18/$18 · doi: 10.1117/12.2325087

matching pursuit (SMP)9 have been proposed. The basic idea of the above algorithms is to solve for a sparse representation of mixed pixels with a known spectral library. However, owing to the complex diversity of natural ground-cover spectra, there are always cases in which the spectra of some materials are unknown, and comparatively few works study sparse regression when the spectral library is unknown.
In this paper, a method of blind HU based on online dictionary learning and sparse coding is proposed for the case in which the spectral signatures in the HSI are unknown. First, we use a subset of the mixed pixels in the image to be processed as the training set to obtain an overcomplete dictionary. In this step, an online optimization algorithm based on stochastic approximations is used for dictionary learning, alternating between optimizing the sparse codes and the dictionary atoms. Then, we use the learned overcomplete dictionary as a standard spectral library to tackle the unmixing problem and obtain coefficients under the unpruned learned dictionary. On this basis, by counting how many times each atom in the learned dictionary is selected by this sparse coding step, we use the ratio between the selection counts of the top N most frequently selected atoms and the selection counts of all atoms to pick out the atoms most likely to be the unknown spectral signatures in the dataset. Similar selected atoms are then clustered, and the centroid of each class is used as the estimate of an endmember in the HSI. With the estimated endmembers, fractional abundances are obtained using a sparse unmixing algorithm based on variable splitting, augmented Lagrangian, and total variation (TV), which exploits the spatial-contextual information present in the image.
The paper is organized as follows. Section 2 presents the linear mixture model based on sparse representation in HSIs.
Section 3 is dedicated to the blind sparse unmixing based on online dictionary learning. Section 4 presents a series of
experimental results with synthetic and real-world data. Section 5 concludes the paper.

2. LINEAR MIXTURE MODEL BASED ON SPARSE REPRESENTATION


Spectral mixture models for HSIs can essentially be divided into the linear mixture model (LMM) and the nonlinear mixture model (NLMM). In most cases, a nonlinear mixture can be converted to a linear mixture by linearization; therefore, the linear mixture model is currently used by most HU algorithms. In this paper, we build the sparse unmixing model on the LMM.
The basic idea of sparse unmixing is to introduce a spectral library into the LMM so that mixed pixels are represented by pure signatures from the library. The LMM based on sparse representation is as follows:
x = Dα + n    (1)

where x ∈ R^{L×MN} is the pixel spectral matrix of the HSI, in which L is the number of bands, M is the number of samples per scan line, and N is the number of scan lines; D ∈ R^{L×k} is the spectral dictionary, in which k is the number of pure signatures in the library D; α ∈ R^{k×MN} is the fractional abundance matrix compatible with D, most of whose elements are zero; and n is an error term.
It should be emphasized that, because of its physical meaning, α represents the abundance coefficients in spectral unmixing and satisfies the non-negativity and sum-to-one constraints:

∑_{i=1}^{k} α_i = 1    (2)

0 ≤ α_i ≤ 1    (3)
[Figure: the HSI data cube (M scan lines × N × L bands) is expanded into the pixel spectral matrix x (L × M·N), which factors as the spectral dictionary D (L × k) times the codes α (k × M·N).]
Figure 1. The LMM based on sparse representation.


Due to the sparsity of α, Equation (1) can be recast as the following constrained optimization:

Proc. of SPIE Vol. 10789 107890K-2


Downloaded From: https://www.spiedigitallibrary.org/conference-proceedings-of-spie on 08 Sep 2021
Terms of Use: https://www.spiedigitallibrary.org/terms-of-use
min_α ||α||_0   s.t.   ||Dα − x||_2 ≤ δ    (4)

where ||α||_0 is the number of non-zero elements in the matrix α, usually termed the l0 norm, and δ is a parameter controlling the reconstruction error.
The basic idea of sparse unmixing algorithms can thus be understood as a linear decomposition of multidimensional data, and the sparse unmixing problem currently studied is equivalent to finding the simplest solution of Equation (4) given a known overcomplete spectral library.
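As an illustration of the model in Equations (1)-(3), the following Python sketch builds a toy pixel spectral matrix from a random dictionary and physically valid sparse abundances. The dimensions, sparsity level, and noise scale are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): L bands, k library signatures, MN pixels.
L, k, n_pix = 224, 40, 100
D = rng.random((L, k))              # toy spectral dictionary, columns = signatures

# Sparse, physically valid abundances: few non-zeros per pixel,
# non-negative (Eq. 3) and summing to one (Eq. 2).
alpha = np.zeros((k, n_pix))
for j in range(n_pix):
    support = rng.choice(k, size=3, replace=False)   # 3 active endmembers per pixel
    w = rng.random(3)
    alpha[support, j] = w / w.sum()

n = 0.001 * rng.standard_normal((L, n_pix))          # error term
x = D @ alpha + n                                    # Eq. (1): x = D·alpha + n

assert np.all(alpha >= 0) and np.allclose(alpha.sum(axis=0), 1.0)
```

Sparse unmixing then amounts to recovering an α of this form from x and D alone, as formalized in Equation (4).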

3. BLIND SPARSE UNMIXING BASED ON ONLINE DICTIONARY LEARNING


According to the sparse unmixing model presented in Section 2, the sparse codes α, i.e., the abundance coefficient vectors, can usually be obtained provided the spectral dictionary D ∈ R^{L×k} is known. However, in some practical applications the spectral signatures in dictionary D are unknown. Therefore, in blind sparse unmixing a "good" spectral dictionary must be acquired first before sparse coding. As the learned dictionary is always overcomplete, we can use the statistics of the abundance coefficients produced by sparse coding to prune the learned dictionary and obtain the pure signatures actually present on the ground.
An online optimization algorithm for dictionary learning10 based on stochastic approximations was proposed in 2009; it greatly improves the efficiency of dictionary learning and scales gracefully to large datasets with millions of training samples.
In this paper, we use a subset of the mixed pixels in the image to be processed as the training set to obtain an overcomplete spectral dictionary D by online dictionary learning. Then, we use the learned overcomplete dictionary as a standard spectral library to tackle the unmixing problem and obtain the sparse codes α. Finally, the endmembers in the image are estimated by statistical analysis of the sparse codes.
3.1 Online spectral dictionary learning algorithm
To adapt to HSI data, we use the pixel spectral matrix as the training set. This makes dictionary learning a large-scale optimization even for small images. To cope with this computational burden, the overcomplete spectral dictionary D is obtained through an online spectral dictionary learning algorithm. We formulate dictionary learning (DL) as a basis pursuit denoising (BPDN) problem11, in which the objective function is the sum of the quadratic norm of the representation error and a sparsity-promoting term, the l1 norm of the linear regression coefficients. A formulation of the dictionary learning problem is as follows:
min_{D ∈ R^{L×k}, α_1,…,α_{N_p}}  ∑_{i=1}^{N_p} [ (1/2)||x_i − Dα_i||_2^2 + λ||α_i||_1 ]    (5)

where N_p is the number of pixel spectral vectors, the l1 norm promotes sparse codes, the quadratic term accounts for the representation error, and the relative weight between the two terms is set by the regularization parameter λ > 0.
The optimization over all variables jointly is non-convex, but it is convex in each of the two variables, the dictionary D and the coefficients α_i, when the other is fixed. A direct approach is therefore to alternate between the two, keeping one fixed while minimizing over the other; the optimizations with respect to D and with respect to α_i are thus decoupled.

The optimization with respect to α_i is a BPDN problem, usually solved with Least Angle Regression (LARS). However, because of its physical meaning, α_i represents abundance coefficients in spectral unmixing and must satisfy the non-negativity and sum-to-one constraints. We therefore adopt constrained spectral unmixing by variable splitting and augmented Lagrangian to solve this BPDN problem, taking into account the continuity of abundance coefficients across adjacent pixels.
The optimization with respect to D is equivalent to minimizing the following function:

g_t(D) = (1/t) ∑_{j=1}^{t} [ (1/2)||x^j − Dα^j||_2^2 + λ||α^j||_1 ]    (6)

where t is the current iteration number and x^j is the training batch at iteration j, composed of spectral vectors drawn randomly from the original training set, i.e., the whole pixel spectral matrix.
A projected block-coordinate descent method is used to update the columns of the dictionary in the optimization with respect to D.
From the principle of online dictionary learning, the flow of the online spectral dictionary learning algorithm follows. First, the pixel spectral matrix expanded from the HSI is used as the original training set. Then a random contiguous sequence of pixel spectral vectors is selected from the original training set as the training batch for the current cycle and processed sequentially. For each new batch, the sparse codes are computed by solving the BPDN problem, and the current dictionary is then updated. Pseudocode for online spectral dictionary learning is given in Algorithm 1.
Algorithm 1: Online spectral dictionary learning.
Input: x_i ∈ R^{L×1}, i = 1,…,N_p (pixel spectral vectors); T ∈ N (iterations); η ∈ N (number of pixel spectral vectors per iteration); λ > 0 (BPDN regularization parameter); β_t (damping sequence); D_0 ∈ R^{L×k} (initial dictionary)
Output: D ∈ R^{L×k} (dictionary)
1  begin
2    Parameter initializations (A ← 0, B ← 0, D ← D_0)
3    for t = 1 to T do
4      Draw a random contiguous block x^t = [x_i^t, i = 1,…,η] from x
         /* Sparse coding (BPDN problem) */
5      α^t = argmin_{α ∈ R^{k×η}} (1/2)||x^t − Dα||_F^2 + λ||α||_1 + λ_TV·TV(α)
6      A ← β_t·A + ∑_{i=1}^{η} α_i^t (α_i^t)^H
7      B ← β_t·B + ∑_{i=1}^{η} x_i^t (α_i^t)^H
         /* Dictionary update (projected block-coordinate descent) */
8      repeat
9        for j = 1 to k do
10         u_j ← (1/A(j,j))·(b_j − D·a_j) + d_j ;  d_j ← u_j / max(||u_j||_2, 1)
11       end for
12     until convergence
13   end for
14 end
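A minimal Python sketch of the procedure in Algorithm 1 is given below. For brevity, the sparse coding step is a plain ISTA solver for the unconstrained BPDN problem rather than the constrained SUnSAL-TV solver used in the paper, the damping sequence β_t is fixed to 1, and the batch size, iteration counts, and data-column initialization are illustrative assumptions.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator for the l1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(D, x, lam, iters=100):
    """Plain ISTA for the BPDN step (the paper uses a constrained SUnSAL-TV solver)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz constant of the gradient
    a = np.zeros((D.shape[1], x.shape[1]))
    for _ in range(iters):
        a = soft(a - step * D.T @ (D @ a - x), step * lam)
    return a

def online_dictionary_learning(X, k, lam=0.1, T=50, eta=16, seed=0):
    """Online DL sketch: mini-batch sparse coding, sufficient statistics A and B,
    then projected block-coordinate descent on the dictionary columns."""
    rng = np.random.default_rng(seed)
    L, Np = X.shape
    D = X[:, rng.choice(Np, k, replace=False)].copy()   # init atoms from the data
    A = np.zeros((k, k))
    B = np.zeros((L, k))
    for t in range(1, T + 1):
        idx = int(rng.integers(0, Np - eta))
        xt = X[:, idx:idx + eta]                        # contiguous mini-batch
        at = sparse_code(D, xt, lam)
        A += at @ at.T                                  # beta_t fixed to 1 here
        B += xt @ at.T
        for j in range(k):                              # block-coordinate update
            if A[j, j] > 1e-12:
                u = (B[:, j] - D @ A[:, j]) / A[j, j] + D[:, j]
                D[:, j] = u / max(np.linalg.norm(u), 1.0)   # project onto unit ball
    return D
```

In practice one would loop the dictionary update to convergence as in Algorithm 1; a single pass per batch is kept here for clarity.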

3.2 Sparse coding


After the overcomplete spectral dictionary is obtained by the dictionary learning approach introduced in Algorithm 1, we
can compute sparse codes of the pixel spectral matrix of the HSI.
Due to noise, δ > 0 in sparse coding of the HSI, and learned spectral dictionaries are overcomplete, so optimization (4) is NP-hard and cannot be solved exactly in a straightforward way. To address this, we can replace the l0

norm with a convex approximation, most often the l1 norm. Since J(x) = ||x||_1 is convex, the optimal solution can be found; the l1 norm yields the least absolute shrinkage and selection operator (LASSO), which is equivalent to BPDN.

Because of its physical meaning, α_i represents abundance coefficients and satisfies the non-negativity and sum-to-one constraints; therefore, the SUnSAL algorithm can be used to solve the constrained optimization. To account for the relationship between each pixel and its neighbors in the unmixing process, the total variation (TV) regularization is introduced into the SUnSAL algorithm, so that sparse unmixing can be expressed as
min_α (1/2)||x − Dα||_2^2 + λ||α||_1 + λ_TV·TV(α)    (7)

where

TV(α) ≡ ∑_{{i,j} ∈ ε} ||α_i − α_j||_1    (8)

is a vector extension of the nonisotropic TV12 that promotes piecewise-smooth transitions in the fractional abundance of the same endmember among neighboring pixels, where ε denotes the set of horizontal and vertical neighbor pairs in the image.
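The nonisotropic TV term of Equation (8) can be computed directly from an abundance image; the following sketch assumes pixels are stored in row-major order in the columns of α, which is an implementation assumption.

```python
import numpy as np

def tv_aniso(A, H, W):
    """Nonisotropic TV of Eq. (8): sum of l1 differences between each pixel's
    abundance vector and its horizontal and vertical neighbors.
    A: (k, H*W) abundance matrix, pixels in row-major order."""
    k = A.shape[0]
    img = A.reshape(k, H, W)
    dh = np.abs(img[:, :, 1:] - img[:, :, :-1]).sum()   # horizontal neighbor pairs
    dv = np.abs(img[:, 1:, :] - img[:, :-1, :]).sum()   # vertical neighbor pairs
    return dh + dv
```

A spatially constant abundance image has TV equal to zero, while a step edge contributes the edge length times the l1 size of the jump, which is why this penalty favors piecewise-smooth abundance maps.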
3.3 Endmember estimation
By counting how many times each atom of the learned overcomplete spectral dictionary is selected by the sparse codes obtained in Section 3.2, we use the ratio between the selection counts of the top N_a most frequently selected atoms and the selection counts of all atoms to pick out the atoms most likely to be the unknown spectral signatures in the dataset. Specifically, we sort the atoms of the learned dictionary in descending order of selection count. Then the ratios for the top N_a (= 1, 2,…,k) atoms are computed successively until the current ratio exceeds the preset threshold. Mathematically,
min N_a   s.t.   ∑_{i=1}^{N_a} τ_i / ∑_{i=1}^{k} τ_i > ϕ    (9)

where τ_i is the selection count of the i-th most frequently selected atom and ϕ is the preset threshold.

These N_a selected atoms constitute the pruned dictionary. Since the pruned dictionary undergoes subsequent processing before the final endmember estimates are obtained, the threshold applied here has a relatively relaxed range of values; in this paper, ϕ is set to 0.5.
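The pruning rule of Equation (9) can be sketched as follows. Here the selection count of an atom is taken to be the number of pixels whose sparse code is non-zero (above a small tolerance) for that atom, which is an implementation assumption.

```python
import numpy as np

def prune_dictionary(alpha, phi=0.5, tol=1e-8):
    """Return indices of the smallest set of most frequently selected atoms
    whose share of all selections strictly exceeds phi (Eq. 9).
    alpha: (k, n_pixels) sparse codes."""
    counts = (np.abs(alpha) > tol).sum(axis=1)        # times each atom is chosen
    order = np.argsort(counts)[::-1]                  # most chosen first
    cum = np.cumsum(counts[order]) / counts.sum()     # cumulative selection ratio
    Na = int(np.searchsorted(cum, phi, side="right") + 1)  # first Na with ratio > phi
    return order[:Na]
```

The returned indices identify the atoms kept in the pruned dictionary, which is then clustered as described next.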

To obtain the pure spectral signatures, i.e., the endmembers in the image, a k-means algorithm is used to cluster the atoms in the pruned dictionary. Since the atoms represent spectral signatures, the spectral angle distance (SAD) is used as the distance metric in the clustering. The centroids of the resulting classes are the estimated endmembers of the HSI.
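A sketch of k-means clustering with SAD as the distance metric follows. The deterministic farthest-point initialization and the use of plain means as centroids are implementation choices not specified in the paper.

```python
import numpy as np

def sad(a, b):
    """Spectral angle distance between two spectra."""
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def kmeans_sad(atoms, n_clusters, iters=20):
    """k-means over dictionary atoms (columns) with SAD as the distance;
    the centroids are returned as the estimated endmembers."""
    k = atoms.shape[1]
    # Deterministic farthest-point initialization (implementation choice).
    centers = [atoms[:, 0].copy()]
    for _ in range(1, n_clusters):
        d = [min(sad(atoms[:, j], c) for c in centers) for j in range(k)]
        centers.append(atoms[:, int(np.argmax(d))].copy())
    centers = np.column_stack(centers)
    labels = np.zeros(k, dtype=int)
    for _ in range(iters):
        dist = np.array([[sad(atoms[:, j], centers[:, c])
                          for c in range(n_clusters)] for j in range(k)])
        labels = dist.argmin(axis=1)                  # assign by smallest angle
        for c in range(n_clusters):
            members = atoms[:, labels == c]
            if members.shape[1] > 0:
                centers[:, c] = members.mean(axis=1)  # centroid = mean spectrum
    return centers, labels
```

Because SAD depends only on the angle between spectra, atoms that differ mainly by a scale factor fall into the same cluster, which suits spectral signatures.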
With the estimated endmembers, the fractional abundances can be obtained by using the SUnSAL-TV algorithm again.

4. EXPERIMENTAL RESULTS
In this section, we present a series of experimental results using synthetic and real-world HSI data to illustrate the
performance of the proposed approach.
4.1 Results on synthetic data
With 75 × 75 pixels and 224 bands per pixel, the synthetic data were generated under a linear mixture model using five spectral signatures randomly selected from the USGS 1995 spectral library* as endmembers. The fractional abundances were generated as described in Iordache et al.7, and the synthetic data impose the abundance sum-to-one constraint (ASC). Figure 2 shows the five endmembers, the synthetic image, and the

*
http://speclab.cr.usgs.gov/spectral.lib.

true fractional abundances of each of the five endmembers. To evaluate the unmixing performance of the proposed algorithm, zero-mean Gaussian noise is added to the clean synthetic data, with the noise level controlled by the SNR.
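Adding zero-mean Gaussian noise at a prescribed SNR can be sketched as follows; defining the SNR as the ratio of mean signal power to noise power in dB is an assumption about the convention used.

```python
import numpy as np

def add_noise_at_snr(X, snr_db, seed=0):
    """Add zero-mean white Gaussian noise so that
    10*log10(signal power / noise power) = snr_db."""
    rng = np.random.default_rng(seed)
    sig_pow = np.mean(X ** 2)
    noise_pow = sig_pow / (10 ** (snr_db / 10))       # target noise power
    return X + rng.normal(0.0, np.sqrt(noise_pow), X.shape)
```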

[Figure: panel (a) plots reflectance vs. spectral band for the five signatures Jarosite GDS99 K,Sy 200C, Jarosite GDS101 Na,Sy 200, Anorthite HS349.3B, Calcite WS272, and Alunite GDS83 Na63; panels (b)-(f) show 75 × 75 abundance maps.]
Figure 2. True fractional abundances of the endmembers in the synthetic data. (a) Spectral signatures used to generate the synthetic data. Abundance map of (b) endmember 1, (c) endmember 2, (d) endmember 3, (e) endmember 4, (f) endmember 5.

Since blind HU comprises the estimation of both endmembers and fractional abundances, its performance must be evaluated from these two aspects. Figure 3 shows that the estimated endmember spectral signatures are highly similar to the reference signatures from the USGS library, especially at high SNR; as the SNR decreases, the estimates of endmembers 3 and 4 deteriorate significantly. The corresponding SAD values for the different noise levels are reported in Table 1, quantitatively evaluating the accuracy of endmember extraction. From these results we can also conclude that the proposed algorithm is sensitive to noise.

Table 1. SAD values for the different noise levels with the synthetic data.

Noise level   EM1      EM2      EM3      EM4      EM5
40 dB         0.0077   0.0087   0.0140   0.0059   0.0023
30 dB         0.0324   0.0271   0.0416   0.0278   0.0072
20 dB         0.0327   0.0398   0.0435   0.1585   0.0137

[Figure: a 3 × 5 grid of spectral plots comparing "ground truth" and "estimated" signatures for EM1-EM5 at SNR = 40 dB, 30 dB, and 20 dB.]
Figure 3. Comparison between the USGS library spectra and the endmember signatures extracted by the proposed algorithm on the synthetic data under different noise levels.

Regarding the second aspect, Figure 4 shows that the abundance maps obtained under different SNRs are highly accurate and exhibit good spatial consistency. The corresponding signal-to-reconstruction-error (SRE) and root-mean-square-error (RMSE) values for the different noise levels are reported in Table 2, quantitatively evaluating the accuracy of the proposed blind sparse unmixing. We therefore conclude that the proposed approach is capable of carrying out blind HU.
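For reference, the SRE and RMSE metrics used in Table 2 can be computed as follows, assuming SRE is defined as the ratio of the energy of the true abundances to the energy of the estimation error, in dB.

```python
import numpy as np

def sre_db(alpha_true, alpha_est):
    """Signal-to-reconstruction error in dB (assumed definition)."""
    err = alpha_true - alpha_est
    return 10 * np.log10(np.sum(alpha_true ** 2) / np.sum(err ** 2))

def rmse(alpha_true, alpha_est):
    """Root-mean-square error over all abundance entries."""
    return np.sqrt(np.mean((alpha_true - alpha_est) ** 2))
```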

[Figure: four 75 × 75 abundance maps of one endmember at decreasing SNR.]
Figure 4. Abundance maps obtained by the proposed algorithm under different SNRs. (a) The original abundance map without noise. Abundance maps obtained under an SNR of (b) 40 dB, (c) 30 dB, (d) 20 dB.

Table 2. SRE and RMSE values for the different noise levels with the synthetic data.

Index      40 dB    30 dB    20 dB
SRE (dB)   16.39    6.51     4.40
RMSE       0.0256   0.3531   0.3226

4.2 Results on real-world data
In this section, we apply the proposed algorithm to the Hyperspectral Digital Imagery Collection Experiment (HYDICE) Urban dataset*. The original image has 307 × 307 pixels and 210 spectral bands. In the experiment, water-absorption and noisy bands are removed before blind unmixing, leaving 162 bands in total.
Figure 5 presents grayscale abundance maps obtained by the proposed blind sparse unmixing, where a dark pixel denotes a low abundance of the corresponding endmember. The results qualitatively illustrate the effectiveness of the proposed approach.

[Figure: six grayscale abundance maps, panels (a)-(f).]
Figure 5. Abundance maps of the different endmembers obtained using the proposed algorithm on the Urban dataset. (a)
Concrete road. (b) Roof#1. (c) Roof#2. (d) Tree. (e) Grass. (f) Asphalt road.

5. CONCLUSION
In this paper, we propose a novel blind hyperspectral sparse unmixing method based on the theory of sparse coding extended to the spectral domain. Both online spectral dictionary learning and sparse coding are performed in the sparse domain; dictionary pruning and clustering are used for endmember estimation, and spatial information is incorporated when computing the fractional abundances. Experiments on both synthetic and real-world data verify the effectiveness of the approach. However, since the proposed method is sensitive to noise, its robustness still needs to be improved.

REFERENCES

[1] Pan, B., Shi, Z., and Xu, X., "MugNet: Deep learning for hyperspectral image classification using limited samples," ISPRS Journal of Photogrammetry & Remote Sensing (2017).
[2] Boardman, J. W., "Automating spectral unmixing of AVIRIS data using convex geometry concepts," JPL Airborne Geoscience Workshop, JPL Pub. 93-26(1), 11-14 (1993).
[3] Nascimento, J. M. P., and Dias, J. M. B., "Vertex component analysis: a fast algorithm to unmix hyperspectral data," IEEE Transactions on Geoscience & Remote Sensing 43(4), 898-910 (2005).
[4] Winter, M. E., "N-FINDR: an algorithm for fast autonomous spectral end-member determination in hyperspectral data," Proc. SPIE 3753, 266-275 (1999).
[5] Iordache, M. D., Bioucas-Dias, J. M., and Plaza, A., "Sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience & Remote Sensing 49(6), 2014-2039 (2011).
[6] Bioucas-Dias, J. M., and Figueiredo, M. A. T., "Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing," Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), IEEE, 1-4 (2010).
[7] Iordache, M. D., Bioucas-Dias, J. M., and Plaza, A., "Total variation spatial regularization for sparse hyperspectral unmixing," IEEE Transactions on Geoscience & Remote Sensing 50(11), 4484-4502 (2012).
[8] Bro, R., and De Jong, S., "A fast non-negativity-constrained least squares algorithm," Journal of Chemometrics 11(5), 393-401 (1997).
[9] Shi, Z., Tang, W., Duren, Z., and Jiang, Z., "Subspace matching pursuit for sparse unmixing of hyperspectral data," IEEE Transactions on Geoscience & Remote Sensing 52(6), 3256-3274 (2014).

*
http://lesun.weebly.com/hyperspectral-data-set.html.

[10] Mairal, J., Bach, F., Ponce, J., and Sapiro, G., "Online dictionary learning for sparse coding," Proc. ICML, 689-696 (2009).
[11] Chen, S. S., Donoho, D. L., and Saunders, M. A., "Atomic decomposition by basis pursuit," SIAM Review 43(1), 129-159 (2001).
[12] Guo, Z., and Wittman, T., "L1 unmixing and its application to hyperspectral image enhancement," Proc. SPIE 7334, 73341M (2009).
