Low-Rank and Sparse Representation for Hyperspectral Image Processing: A Review
BACKGROUND
HSI techniques integrate both imaging and spectroscopic techniques into one system.
ABBREVIATIONS
NLS: nonlocal sparse; FastHyDe: fast HS denoising; CHyDU: coupled HSI denoising and unmixing; RPCA: robust principal component analysis; LRCRD: low-rank collaborative representation; OSDL: online spectral dictionary learning; SNMF: sparse nonnegative matrix factorization; SNMF-TEMD: sparse nonnegative matrix factorization-thresholded ground distance; MTSNMF: multitask sparse nonnegative matrix factorization; JSSDSR: joint spectral–spatial distributed SR; SSASR: spectral–spatial adaptive SR; LRSNL: low-rank spectral nonlocal; SNLRSF: subspace-based nonlocal low-rank and sparse factorization; SSLR: spatial–spectral low rank; NAILRMA: noise-adjusted iterative low-rank matrix approximation; LRTV: total variation-regularized low rank; LLRSSTV: spatial–spectral total variation-regularized local low-rank; LRRSDS: low-rank constraint on the spectral difference; TWNNM: total variation-regularized weighted nuclear norm minimization; WNNTV: weighted nuclear norm and total variation regularization; LSSTV: low-rank constraint and spatial–spectral total variation; WSN: weighted Schatten p-norm; SLRR: subspace LRR; LRTA: low-rank tensor approximation; LRTDTV: total variation-regularized low-rank tensor decomposition; GKTD: genetic kernel Tucker decomposition; CPTD: CANDECOMP/PARAFAC tensor decomposition; R1TD: rank-1 tensor decomposition; STWNNM: structure tensor total variation-regularized weighted nuclear norm minimization; NLR-CPTD: nonlocal low-rank-regularized CANDECOMP/PARAFAC tensor decomposition; LRTR: low-rank tensor recovery; SSTV-LRTF: spatial–spectral total variation-regularized low-rank tensor factorization; GSLRTD: group sparse and low-rank tensor decomposition; LMM: linear mixing model; RGB: red, green, blue; CNMF: coupled nonnegative matrix factorization; NLTR: nonlocal tensor ring; CCJSR: correlation coefficient JSR; LRFF: low-rank factorization fusion; LRSSC: low-rank sparse subspace clustering; CSTF: coupled sparse tensor factorization; LSS: low-rank sparse subspace; KLRSSC: kernel low-rank sparse subspace clustering; LhalfLRR: ℓ1/2 regularization-based LRR; T-LGMR: tensor-based low-rank graph with multimanifold regularization; NCTCP: nonlocal coupled tensor canonical polyadic; SSGLRTD: spatial–spectral-graph-regularized low-rank tensor decomposition; LTTR: low tensor-train rank; WLRTR: weighted low-rank tensor recovery; LTMR: low tensor multirank; NLSTF: nonlocal sparse tensor factorization; MTSP: multitask sparsity pursuit; ISSC: improved sparse subspace clustering; SWLRSC: squaring weighted low-rank subspace clustering; FRSR: fast and robust self-representation; DWSSR: dissimilarity-weighted sparse self-representation; SSR: symmetric SR; LLRSC: Laplacian-regularized low-rank subspace clustering; FLLRSC: fast and latent low-rank subspace clustering; SLGDA: sparse and low-rank graph-based discriminant analysis; KSLGDA: kernel sparse and low-rank graph-based discriminant analysis; TSLGDA: tensor sparse and low-rank graph-based discriminant analysis; SLRNILE: sparse and low-rank near-isometric linear embedding; LRR_NP: LRR with neighborhood preserving; WLRR: weighted LRR; ADSpLRU: alternating direction sparse and low-rank unmixing; JSpBLRU: joint sparse blocks and low-rank unmixing; HURLR-TV: HS unmixing by reweighted low-rank and total variation; SCC-LRR: LRR with space-consistency constraint; GLrNMF: group low-rank, constrained nonnegative matrix factorization; SUnSAL-TV-LA: sparse unmixing via variable splitting augmented Lagrangian and total variation local abundance; J-LASU: joint local-abundance sparse unmixing; ALMSpLRU: alternating minimization sparse low-rank unmixing; RGBM: robust generalized bilinear model; RGBM-SS-LRR: robust generalized bilinear-based nonlinear unmixing method with SS and LRR; SULoRA: subspace unmixing with low-rank attribute; cDeUn: coupled denoising and unmixing; SRC: SR-based classification; SRC-TS: SR-based classification in the tangent space; SNDeUn: simultaneous nonconvex denoising and unmixing; cdSRC: class-dependent SRC; MSRC: multiobjective-based SR-based classification; SRNN: SR-based nearest-neighbor classification; SSSRC: spectral–spatial-combined SR-based classification; S-RBFKLN: sparse radial basis function kernel learning network; DWSRC: dissimilarity-weighted SR-based classification; SADL: spatial-aware dictionary learning; mlSRC: multilayer SR-based classification; NLWJSR: nonlocal-weighted JSR; NRJSR: nearest-regularized JSR; MASR: multiscale adaptive SR; MF-JSRC: multiple-feature JSR classification; MFASR: multiple-feature-based adaptive SR; LSGM: local sparsity graphical model; SRSTSD: SR based on the set-to-set distance; DKSVD: discriminative K-SVD; KJSR: kernel-based JSR; ASOMP: adaptive SOMP; MLEJSR: maximum-likelihood estimation-based JSR; RPCA-RX: robust principal component analysis Reed–Xiaoli detector; LRMD: low-rank matrix decomposition; LRRSTO: LRR sum-to-one; SLWs_LRRSTO: LRR sum-to-one with single local windows; MLWs_LRRSTO: LRR sum-to-one with multiple local windows; LRaSMD: low-rank and sparse matrix decomposition; LSMAD: LRaSMD-based Mahalanobis distance method for HS anomaly detection; LwOaW: LRaSMD with orthogonal subspace projection-based background suppression and adaptive weighting; SLAD: randomized subspace learning-based anomaly detector; GTVLRR: graph and total variation-regularized LRR; LTDD: low-rank tensor decomposition-based anomaly detection; SRC-CR: SR-based classification collaborative representation; Spa+Lr: sparse representation and low rank; GLF: global local factorization; FS2LRL: fast superpixel-based subspace low-rank learning; LRSR: low spatial resolution super-resolution; HySR-SpaSpecF: HS super-resolution based on spatial–spectral correlation fusion; NLSTF_SMBF: nonlocal sparse tensor factorization for semiblind fusion; NLRTATV: nonlocal low-rank tensor approximation and total variation; SBDSM: superpixel-based discriminative sparse model; MCCJSR: maximum correntropy criterion-based JSR; SAJSRC: shape-adaptive JSR classification; LAJSR: local adaptation JSR; SPJSR: self-paced JSR; SMTJSRC: superpixel-level multitask JSR classification; LSDM-MoG: low-rank and sparse decomposition model with mixture of Gaussians.
FIGURE 1. The denoising results of band 1. (a) The original image, (b) the simulated noisy image, (c) SRROLD, (d) TensorDL, and (e) LRTA [230].
FIGURE 2. The false-color images of the denoising results. (a) The original HS image, (b) the simulated noisy image, (c) SRROLD, (d) TensorDL, and (e) LRTA.
TABLE 2. A QUANTITATIVE EVALUATION OF THE DENOISING METHODS.

METHOD     SAM   ERGAS  RMSE   CC     TIME (s)
SRROLD     2.48  3.17   60     0.985  80.65
LRTA       1.35  1.42   33.34  0.996  98.24
TensorDL   0.72  0.81   20.28  0.998  156.32

preserving its spectral information. HS image superresolution is rooted in pansharpening [179], [180], typically referred to as PAN/MS fusion. To date, a number of HS image superresolution algorithms have been developed, and the methods based on sparse and low-rank characteristics have attracted ever-increasing attention in recent years [181]–[187].

OBSERVATION MODEL
Let the desired HRHS image be denoted as X, the LRHS image as Y, and the HR MS or PAN image as Z. The LRHS image can be regarded as the spatially degraded version of the HRHS image, and the HR MS or PAN image as its spectrally degraded version. The observation model is

Y = XBS + N_Y,   (8)
Z = CX + N_Z,   (9)

where S denotes the spatial downsampling operation, B is the spatial blurring operation, and C represents the spectral downsampling matrix, generally obtained from a spectral response function. N_Y and N_Z represent noise.

MATRIX-BASED METHODS

LINEAR SPECTRAL MIXING MODEL-BASED METHODS
The linear mixing model (LMM)-based methods are the most popular and most widely studied HS superresolution techniques [188]–[195]. It is assumed that mixed pixels can be represented as a linear combination of endmembers:

X = DA + N_X,   (10)

where D is the endmember matrix, A is the abundance matrix for each pixel of X, and N_X denotes the noise. Substituting (10) into the observation model defined by (8) and (9) results in

Y = DABS + N_Y ≈ DA_Y,   (11)
Z = CDA + N_Z ≈ D_Z A,   (12)

where A_Y = ABS and D_Z = CD are the spatially degraded abundance matrix and the spectrally degraded endmember matrix, respectively.

For the LMM-based methods, the idea of using unmixing for HS fusion was proposed early on. In [51], the LRHS image was first unmixed into endmembers and abundances, and the abundance maps were then fused with the high-resolution data. Although this approach did not focus on the estimation of HRHS data, the idea of using the LMM for data fusion is physically reasonable and effective for MS/HS fusion. To the best of our knowledge, Kawakami et al. [52] first recommended HS/MS [red-green-blue (RGB)] fusion via matrix factorization. The scheme was divided into two stages: the spectral basis was obtained based on the unmixing of the HS image and then combined with the RGB input to produce the desired HRHS image.

Yokoya et al. [53] advised the popular coupled NMF (CNMF) fusion method for HS and MS data, which iteratively estimates the endmembers of the LRHS image and the abundances of the high-spatial-resolution MS (HRMS) image. Bendoumi et al. [54] improved the CNMF-based method by dividing the whole image into several subimages. Lin et al. [55] proposed a convex optimization-based CNMF algorithm for HS/MS fusion by incorporating a sparsity-promoting regularization and the sum-of-squared-distances regularizer.
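To make the LMM-based fusion idea in (10)–(12) concrete, the following is a minimal NumPy sketch of a CNMF-style alternation: NMF on the LRHS image estimates the endmembers D, NMF on the HRMS image with the spectrally degraded endmembers D_Z = CD held fixed estimates the high-resolution abundances A, and the product DA is the fused estimate. The synthetic operators and sizes are illustrative assumptions, not the reference implementation of [53]:

```python
import numpy as np

def nmf_update(V, W, H, fix_W=False, iters=200, eps=1e-9):
    """Multiplicative updates for V ~= W @ H with nonnegativity."""
    for _ in range(iters):
        if not fix_W:
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ V) / (W.T @ W @ H + eps)
    return W, H

# Synthetic sizes: L HS bands, l MS bands, N HR pixels, n LR pixels, p endmembers.
L, l, p = 100, 4, 6
N, n = 64 * 64, 16 * 16
rng = np.random.default_rng(0)

C = rng.random((l, L)); C /= C.sum(axis=1, keepdims=True)  # assumed spectral response
S = np.kron(np.eye(n), np.ones((N // n, 1)) / (N // n))    # crude block averaging, stands in for BS

D_true = rng.random((L, p))
A_true = rng.dirichlet(np.ones(p), size=N).T               # abundances sum to one
X = D_true @ A_true                                        # HRHS via (10), noise-free here
Y = X @ S                                                  # LRHS via (8)
Z = C @ X                                                  # HRMS via (9)

# CNMF-style alternation (a sketch, not the authors' code):
D = rng.random((L, p)); A_lr = rng.random((p, n))
D, A_lr = nmf_update(Y, D, A_lr)                           # unmix LRHS -> endmembers D
A = rng.random((p, N))
_, A = nmf_update(Z, C @ D, A, fix_W=True)                 # unmix HRMS with D_Z = CD fixed
X_hat = D @ A                                              # fused HRHS estimate
print("relative error:", np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```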
FIGURE 3. The false-color images of the fusion results (638, 548, and 471 nm). (a) LRHS, (b) HRHS, (c) CNMF, (d) FUSE, and (e) NLSTF.
superresolution method where an LTTR prior was designed to learn the correlations among the spatial, spectral, and nonlocal modes of the nonlocal similar HRHS cubes. Chang et al. [70] advocated a unified low-rank tensor recovery model for HSI restoration, including denoising, deblurring, superresolution, and so forth; in this technique, a weighted low-rank tensor recovery model was recommended to further improve the capability and flexibility. Dian et al. [71] offered a subspace-based, low-tensor-multirank regularization method for the fusion, which fully exploits the spectral correlations and nonlocal similarities in the HRHS image. Wang et al. [72] extracted 4D tensors using nonlocal similar patches and imposed a low-rank constraint and 3D TV regularization on the reconstructed HRHS image. Li et al. [196] proposed a joint noise removal and superresolution method, where the low-multilinear-rank property of the tensor was employed to capture the high spatiospectral redundancy, and variational properties were used to model the differences between the desired HRHS and noisy images.

EXPERIMENTAL RESULTS AND ANALYSIS
The Chikusei data set, with 400 × 400 pixels and the 100 bands used in the denoising experiments, is employed for data fusion. Taking these data as the reference image (HRHS), we use the spectral response function of the Gaofen (GF)-1 sensor to spectrally downsample the HRHS image and obtain the HRMS image. The LRHS image was obtained by Gaussian blurring and downsampling with a factor of four.

We verify the performance of three representative schemes: CNMF [53], the fast fusion-based Sylvester equation (FUSE) [197], and nonlocal sparse tensor factorization (NLSTF) [198]. Among them, CNMF is a linear spectral unmixing-based method: it uses NMF to obtain the endmembers and abundances of the HS and MS images, and the fused image is obtained from the endmembers of the HS image and the abundances of the MS image. FUSE is a constrained optimization technique that constructs the fusion energy functional from the observation models and solves it efficiently via a Sylvester equation. NLSTF is a tensor-based approach that reformulates HSI superresolution as the estimation of a sparse core tensor and of dictionaries for each HS cube by exploiting nonlocal spatial self-similarities. The experimental results are presented in Figure 3 and Table 3, respectively.

TABLE 3. A QUANTITATIVE EVALUATION OF THE FUSION METHODS.

METHOD  SAM   ERGAS  RMSE    CC    TIME (s)
CNMF    2.62  3.99   97.78   0.99  56.24
FUSE    2.87  4.08   162.48  0.96  42.12
NLSTF   3.63  6.11   208.48  0.93  89.34

As depicted in Figure 3, the three fusion results all show good spectral fidelity. For spatial enhancement, however, CNMF and FUSE are more visually consistent with the HRHS image, while the fused image of NLSTF appears slightly blurry. The quantitative evaluation results listed in Table 3 indicate that CNMF performs best on the four quality indices, although not in computational efficiency; the FUSE method has the fastest computing time. The performance of NLSTF is slightly poor. It should be noted that, for NLSTF, the spectral relationship matrix, which reflects the spectral combination relationship between the fused HRHS image and the HRMS image, is not used as in the original paper because of the different experimental data; instead, the adaptive calculation of the spectral relation matrix from the popular CNMF method was introduced. This may be the main reason for its poorer performance.

LSASR FOR HSI DIMENSIONALITY REDUCTION
The high dimensionality of an HSI brings a large computational burden and also complicates the subsequent analysis.
SPARSE NMF METHODS
Band selection is used to choose a subset of informative bands, effectively reducing the amount of data while maintaining the analysis performance. Li et al. suggested a sparse NMF (SNMF) model for the band selection of HSIs [85], which can be presented as

min_{W,H} ||V − WH^T||_F^2 + η||W||_F^2 + β Σ_{j=1}^{n} ||H(j,·)||_1^2,
s.t. W ≥ 0, H ≥ 0,   (24)

where V is a 2D matrix reshaped from the 3D HSI, H is the coefficient matrix, W is the basis matrix, and H(j,·) is the jth row vector of H. The parameter β > 0 adjusts the sparseness of the rows of H, and η > 0 controls the size of the entries of W to prevent very large values, which may cause unstable results. SNMF does not utilize a distance metric between bands but instead introduces sparsity on the coefficient matrix; the bands can then be clustered through the largest entry in each column of the matrix.

where B is the HSI band matrix, Z is the factorization localizing matrix, and Tr(·) is the trace operation. FRSR incorporates structured random projections into a robust self-representation to reduce the computational burden. A dissimilarity-weighted sparse self-representation (DWSSR) algorithm [91] was proposed for HSI band selection and can be formulated as

argmin_Z λ||Z||_{1,2} + μ Tr(D^T Z) + (1/2)||B − BZ||_F^2,
s.t. Z ≥ 0, diag(Z) = 0, 1^T Z = 1^T,   (26)

where ||Z||_{1,2} is the sum of the ℓ2 norms of the rows of the coefficient matrix, and D is the dissimilarity-weighted matrix. DWSSR improves the traditional sparse self-representation model by incorporating an additional dissimilarity-weighted regularization term into the optimization model.

LSASR METHODS
Sun et al. offered a symmetric SR method for HS band selection [92], which transfers the band-selection issue into an …
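Stepping back to the SNMF model in (24): the following is a minimal NumPy sketch of SNMF-based band selection, assuming V is arranged pixels × bands and using multiplicative updates derived from the penalized objective (since H ≥ 0, the ℓ1 norm of a row is its sum). It is an illustration of the idea, not the authors' code:

```python
import numpy as np

def snmf_band_select(V, k, beta=0.1, eta=0.1, iters=300, eps=1e-9, seed=0):
    """Sketch of (24): V (pixels x bands) ~= W @ H.T with eta*||W||_F^2 and
    beta*sum_j ||H(j,:)||_1^2 penalties. Band j is assigned to the basis with
    the largest entry of H(j,:); the strongest band per column is selected."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((n, k)) + eps
    ones_k = np.ones((k, k))
    for _ in range(iters):
        W *= (V @ H) / (W @ (H.T @ H) + eta * W + eps)
        # the beta term's gradient replicates each row sum of H
        H *= (V.T @ W) / (H @ (W.T @ W) + beta * (H @ ones_k) + eps)
    # one representative band per column (duplicates collapse if two
    # columns peak at the same band)
    selected = np.unique(np.argmax(H, axis=0))
    return selected, W, H

# toy HSI: 32x32 pixels, 50 bands, pick 5 bands
cube = np.random.default_rng(1).random((32, 32, 50))
V = cube.reshape(-1, 50)
bands, _, _ = snmf_band_select(V, k=5)
print("selected bands:", bands)
```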
FIGURE 5. The classification performance of different dimension-reduction methods (TSLGDA, FLLRSC, ISSC, and KSLGDA) as the dimensionality varies from 5 to 50. (a) The overall accuracy (OA), (b) the average accuracy (AA), and (c) the kappa (κ) coefficient.

TABLE 5. THE COMPUTATIONAL TIME (IN SECONDS) OF DIFFERENT DIMENSION-REDUCTION METHODS.

DIMENSIONALITY  TSLGDA  ISSC   FLLRSC   KSLGDA
5               32.579  7.148  656.925  1.849
10              32.592  7.241  662.694  1.805
15              32.737  7.332  661.178  1.865
20              32.919  7.494  661.334  1.798
25              32.891  7.635  664.907  1.804
30              33.375  7.703  664.505  1.813
35              33.259  7.852  661.286  1.835
40              33.575  7.978  661.097  1.792
45              34.677  8.29   663.448  1.793
50              34.965  8.379  664.836  1.789

min_X ||X||_{1/2}.   (28)

On this basis, many extensions have been reported to further improve the unmixing performance by considering spatial structure [201], hidden information [202], robustness [203], [204], and so on. Nevertheless, ℓp-norm regularizers are noncontinuous and nondifferentiable for 0 < p < 1. Hence, an arctan function was used as a measure of sparsity in [205], which is given as

min_X Σ_{i,j} arctan(w X_{i,j}) / arctan(w),   (29)

where w is a parameter used to control the sparsity of the abundances, and X_{i,j} is the element in the ith row and jth column of the matrix X. Through w, this function allows for adjusting the sparsity level of the obtained abundances.

In addition, by carefully weighting the ℓ1-norm regularizer, the performance can also be greatly enhanced. Therefore, a reweighted sparse regularizer was introduced in [97] and [98] to obtain sparser abundances:

min_X ||W ⊙ X||_1,   (30)

where W is the weight matrix with W_{i,j}^{(k+1)} = 1/(|X_{i,j}^{(k)}| + ε) and k represents the kth iteration. The weights of this regularizer are adaptively updated relative to the abundance matrix. To enhance the sparsity of fractional abundances,
which are sparse among the rows and dense among the columns, collaborative sparsity [101], [102] was introduced to promote row sparsity (joint sparsity), given as

min_X ||X||_{2,1},   (32)

where ||X||_{2,1} = Σ_{i=1}^{m} ||X^i||_2 and X^i denotes the ith row of X. The ℓ2,1-norm regularizer encourages sparsity among the endmembers simultaneously (collaboratively) for all pixels, i.e., the collaborative sparsity of the abundance matrix. By doing so, only the true endmembers contribute to the estimated abundances. By applying the ℓ2,1-norm regularizer on similar local abundances (i.e., blocks), [103] proposed joint sparse-block unmixing via variable splitting augmented Lagrangian and TV. Specifically, a TV regularization is imposed, and sparse unmixing is used to promote the similarity of adjacent pixels, so that adjacent fractional abundances have similar structural sparsity. To this end, joint sparse blocks are utilized to encourage the pixels in each local block to share the same sparse structure. Furthermore, a weighted ℓ2,1-norm regularizer is used in [103] to enhance sparsity along the lines within each block.

To fully utilize the spatial information and sparse structure of an HSI, [104] incorporated a modified mixed-norm regularization by integrating the spatial group structure and sparsity of the abundances, i.e., the spatial group sparsity regularizer. In this method, the spatial groups (i.e., superpixels) were generated by an image-segmentation strategy. Assuming that there are S superpixels, the abundance matrix is divided into S groups as X = (X_1, …, X_S) ∈ R^{M×N}, in which X_s = [x_1, …, x_{n_s}] ∈ R^{M×n_s} denotes the abundance matrix of spatial group s. The spatial group sparsity regularizer is

min_X Σ_{s=1}^{S} Σ_{x_j ∈ X_s} c_j ||W^s x_j||_2,   (33)

where W^s = diag(w_1^s, …, w_M^s) ∈ R^{M×M} controls the collaborative (row) representation and is updated iteratively. Moreover, a pixelwise confidence index c_j = 1/D_{s_j} (D_{s_j} is …)

||x||_{G,p,q} = ( Σ_{i=1}^{P} ( Σ_{j=1}^{m_{G_i}} |x_{G_i,j}|^p )^{q/p} )^{1/q} = ( Σ_{i=1}^{P} ||x_{G_i}||_p^q )^{1/q},   (34)

in which G_i denotes the ith group (i = 1, …, P), which contains m_{G_i} signatures. When p = 2 and q = 1, the group least absolute shrinkage and selection operator (LASSO) regularizer with the ℓ_{G,2,1} norm enforces sparsity on the vector whose entries are the ||x_{G_i}||_2; that is, a whole group is discarded entirely when its entry is zero, because x_{G_i} then has zero norm. Within each group there is no sparsity, so most or all of the signatures in an active group are likely to be active. If only a few signatures in each group are selected for each pixel but nearly all the groups are active, i.e., sparsity within rather than across groups, it is suitable to utilize the elitist LASSO regularizer with the ℓ_{G,1,2} norm. Moreover, the fractional LASSO regularizer with an ℓ_{G,p,q} norm promotes sparsity both across and within groups.

Recently, some approaches were developed based on the ℓ0 norm. For example, an ℓ_{row,0} norm [106] was incorporated for sparse unmixing, which can be expressed as

min_X ||X||_{row,0}.   (35)

Gong et al. [206] utilized an ℓ0 norm as a sparsity regularizer and directly optimized the nonconvex ℓ0-norm problem.
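To illustrate how the regularizers in (30) and (32) act inside a typical proximal or ADMM solver, here is a small NumPy sketch of the corresponding proximal steps on synthetic data. The thresholds are arbitrary, and the nonnegativity and sum-to-one constraints of real unmixing are omitted:

```python
import numpy as np

def prox_l1_weighted(X, W, t):
    """Prox of t*||W o X||_1: elementwise soft-thresholding, cf. (30)."""
    return np.sign(X) * np.maximum(np.abs(X) - t * W, 0.0)

def prox_l21(X, t):
    """Prox of t*||X||_{2,1}: row-wise shrinkage, cf. (32). Rows whose l2 norm
    falls below t are zeroed, discarding that endmember for all pixels."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * X

def reweight(X, eps=1e-3):
    """Weight update W^(k+1) = 1 / (|X^(k)| + eps) from (30)."""
    return 1.0 / (np.abs(X) + eps)

# toy abundance matrix: 6 endmembers x 100 pixels, only rows 0 and 3 active
rng = np.random.default_rng(0)
X = np.zeros((6, 100))
X[[0, 3]] = rng.random((2, 100))
X_noisy = X + 0.05 * rng.standard_normal(X.shape)

X_collab = prox_l21(X_noisy, t=1.0)           # collaborative sparsity step
W = reweight(X_collab)                        # adapt weights to current estimate
X_rw = prox_l1_weighted(X_collab, W, t=1e-3)  # reweighted l1 step
print("active rows:", np.nonzero(np.linalg.norm(X_rw, axis=1))[0])
```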
FIGURE 6. (a) The subscene of the AVIRIS Cuprite image and (b) the United States Geological Survey (USGS) map (Tricorder 3.3 product) showing the locations of different minerals (alunites, kaolinites, muscovites, montmorillonites, calcite, buddingtonite, chalcedony, nontronite, jarosite, chlorite, pyrophyllite, dickite, and others) in the Cuprite mining district.
Mei et al. presented a robust generalized bilinear model (RGBM)-based nonlinear unmixing approach. To tackle spectral variability, unlike some unmixing methods that act directly in the original space, Hong et al. proposed subspace unmixing with low-rank attribute embedding (SULoRA) [118], a general subspace unmixing framework that jointly estimates the subspace projections and abundance maps to model a raw subspace with low-rank attribute embedding.

LLR can be used to exploit the local spatial correlation of an HSI. Because the rank of clean HSI data is much smaller than that of noisy HSIs, low-rank matrix decomposition (LRMD) has been successfully applied to noise removal. Hence, unmixing methods united with denoising were proposed, and the uniform framework can be simply formulated as

min_{G,S} ||Y − G − S||_F^2 + ||G − AX^T||_F^2,
s.t. rank(G) ≤ r, card(S) ≤ k, X ≥ 0,   (39)

where G denotes the low-rank clean data matrix, S the sparse noise, and A and X the endmember and abundance matrices, respectively.

In [108]–[111], the sparsity of abundances is also considered, promoting the low rank of the abundance matrix and better capturing the global structure. Besides, spectral information is exploited to enhance the performance in [112]–[115]. However, the aforementioned low-rank, constrained unmixing methods suffer from high complexity due to rank minimization with the nuclear norm. By using the upper bound of the low-rank-promoting trace norm as an alternative, the techniques in [116] and [207] successfully reduce the time cost. In addition, the unmixing methods united with denoising [13], [119], [120] effectively improve the robustness of sparse unmixing.
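A minimal sketch of how the framework in (39) can be alternated in practice, in the spirit of a GoDec-style low-rank-plus-sparse splitting: G is updated by truncated SVD, S by keeping the k largest-magnitude residual entries, and X by projected least squares against a fixed endmember matrix A. The helper names, toy data, and the simple alternation (without convergence guarantees) are ours, not a published implementation:

```python
import numpy as np

def unmix_denoise(Y, A, r=4, k=500, iters=30):
    """Alternating sketch of (39): Y ~ G + S with rank(G) <= r, card(S) <= k,
    and G ~ A @ X.T with X >= 0. A is a fixed (bands x p) endmember matrix."""
    G = Y.copy()
    S = np.zeros_like(Y)
    X = np.zeros((Y.shape[1], A.shape[1]))
    A_pinv = np.linalg.pinv(A)                  # cheap least-squares step
    for _ in range(iters):
        # low-rank step: truncated SVD of the de-sparsified data, pulled
        # toward the current LMM reconstruction A @ X.T
        target = 0.5 * (Y - S + A @ X.T)
        U, s, Vt = np.linalg.svd(target, full_matrices=False)
        G = (U[:, :r] * s[:r]) @ Vt[:r]
        # sparse step: keep the k largest-magnitude residual entries
        R = Y - G
        S = np.zeros_like(Y)
        idx = np.unravel_index(np.argsort(np.abs(R), axis=None)[-k:], R.shape)
        S[idx] = R[idx]
        # abundance step: least squares, clipped to X >= 0
        X = np.maximum((A_pinv @ G).T, 0.0)
    return G, S, X

# toy data: 50 bands, 400 pixels, 4 endmembers, sparse impulse noise
rng = np.random.default_rng(0)
A = rng.random((50, 4))
X_true = rng.dirichlet(np.ones(4), size=400)
Y = A @ X_true.T
Y.flat[rng.choice(Y.size, 500, replace=False)] += 2.0
G, S, X = unmix_denoise(Y, A)
print("abundance RMSE:", np.sqrt(np.mean((X - X_true) ** 2)))
```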
FIGURE 7. The fractional abundance maps estimated by different methods for six endmembers on the AVIRIS Cuprite subscene. From top to bottom: the Buddingtonite GDS85 D-206, Kaolin/Smect H89-FR-5 30 K, Kaolinite KGa-2, Montmorillonite+Illi CM37, Nontronite NG-1.a, and Sphene HS189.3B. From left to right: (a) SUnSAL, (b) CSUnSAL, (c) DRSU-TV, (d) VCA-FCLS, (e) ℓ1/2-NMF, (f) TV-RSNMF, (g) SGSNMF, and (h) RGBM-SS-LRR.
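Endmember estimates like those behind Figure 7 are commonly compared against library spectra with the spectral angle distance (SAD). A minimal sketch (the function name and toy spectra are illustrative):

```python
import numpy as np

def sad(a, b, eps=1e-12):
    """Spectral angle distance (radians) between two spectra."""
    cos = np.dot(a, b) / max(np.linalg.norm(a) * np.linalg.norm(b), eps)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# toy comparison of an estimated endmember against a reference spectrum
rng = np.random.default_rng(0)
ref = rng.random(200)                        # e.g., a USGS library spectrum
est = ref + 0.02 * rng.standard_normal(200)
print(f"SAD = {sad(est, ref):.4f} rad")
```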
TABLE 6. THE PER-CLASS CLASSIFICATION ACCURACIES (%), OA, AA, AND κ COEFFICIENT FOR THE INDIAN PINES DATA SET WITH THE DIFFERENT METHODS. BOLD DENOTES THE BEST RESULTS UNDER EACH CONDITION.
CLASS TRAIN TEST SVM SVM-CK SRC RSC OMP SADL JSR WJSR ASOMP MLEJSR MFASR KJSR WKJSR
1 5 49 66.47 73.76 59.77 76.19 62.68 92.52 83.09 91.55 89.8 93.88 97.56 84.28 96.53
2 143 1,291 82.47 92.99 77.28 81 64.5 96.15 93.88 94.98 95.49 95.84 96.42 95.35 96.31
3 83 751 74.78 94.41 69.6 75.19 62.03 97.91 91.9 92.83 94.88 95.38 99.01 95.56 97.52
4 23 211 69.4 90.18 64.59 77.73 42.79 90.52 92.82 94.65 82.26 97 90.99 94.64 96.07
5 50 447 93.48 95.43 92.17 92.47 89.29 96.72 93.13 94.12 93.93 93.06 94.1 94.67 96.8
6 75 672 96.64 99.02 96.83 97.47 94.77 99.16 98.79 99.68 97.66 97.27 99.76 99.12 99.05
7 3 23 83.85 79.5 61.49 86.96 83.85 100 63.97 85.71 93.79 78.26 98.4 43.04 94.35
8 49 440 97.92 97.27 99.19 98.86 97.73 100 99.9 100 99.19 99.92 100 100 99.89
9 2 18 64.29 90.48 68.25 59.26 53.17 100 2.38 7.94 0 9.26 90 1.11 63.89
10 97 871 79.65 91.45 74.77 77.23 73.86 96.79 89.16 90.5 92.57 94.76 96.54 92.09 94.48
11 247 2,221 86.12 95.18 86.28 86.85 77.86 98.09 97.06 98.17 97.17 99.08 98.42 98.5 98.64
12 61 553 83.21 94.63 78.38 78.42 54.82 92.41 88.53 93.46 88.07 93.18 98.31 97.03 95.57
13 21 191 98.28 99.25 99.03 99.3 97.38 98.95 97.08 99.03 99.48 93.19 99.46 99.42 99.16
14 129 1,165 95.25 97.72 96.69 96.28 94.2 99.6 99.25 99.72 99.88 99.26 99.84 99.7 99.85
15 38 342 57.69 89.89 56.14 54.09 42.4 96.25 98.37 96.53 98.25 98.54 97.69 97.63 96.37
16 10 85 90.25 95.46 93.11 87.06 90.08 96.88 94.45 99.66 97.98 88.63 98.07 91.65 97.88
OA 85.45 94.76 83.59 85.22 76.05 97.38 94.9 96.15 95.62 96.69 97.94 96.74 97.58
AA 82.48 92.29 79.8 82.77 73.84 97 86.49 89.91 88.77 89.16 97.16 86.55 95.15
κ coefficient 0.834 0.94 0.812 0.831 0.726 0.97 0.942 0.956 0.95 0.962 0.976 0.963 0.973
FIGURE 8. The classification maps for the Indian Pines data set, with the overall accuracy (%) in parentheses. (a) The ground truth, (b) SVM (85.45), (c) SVM composite kernels (94.76), (d) SRC (83.59), (e) RSC (85.22), (f) OMP (76.05), (g) SADL (97.38), (h) JSR (94.90), (i) WJSR (96.15), (j) ASOMP (95.62), (k) MLEJSR (96.69), (l) MFASR (97.94), (m) KJSR (96.74), and (n) WKJSR (97.58).
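The OA, AA, and κ scores reported above all follow from the confusion matrix. A small sketch of how they are computed (the function and variable names are ours):

```python
import numpy as np

def classification_scores(conf):
    """OA, AA, and Cohen's kappa from a confusion matrix
    (rows: true classes, columns: predicted classes)."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    oa = np.trace(conf) / total                            # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))         # average per-class accuracy
    pe = (conf.sum(axis=0) @ conf.sum(axis=1)) / total**2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, aa, kappa

# toy 3-class confusion matrix
conf = np.array([[48,  1,  1],
                 [ 2, 45,  3],
                 [ 0,  4, 46]])
oa, aa, kappa = classification_scores(conf)
print(f"OA={oa:.4f}, AA={aa:.4f}, kappa={kappa:.4f}")
```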