
Sparse-Adaptive Hypergraph Discriminant Analysis for Hyperspectral Image Classification

Fulin Luo, Liangpei Zhang, Fellow, IEEE, Xiaocheng Zhou, Tan Guo, Yanxiang Cheng, and Tailang Yin

Abstract— A hyperspectral image (HSI) contains complex multiple structures. Therefore, a key problem in analyzing the intrinsic properties of an HSI is how to represent its structure relationships effectively. A hypergraph is very effective for describing the intrinsic relationships of an HSI. In general, Euclidean distance is adopted to construct the hypergraph. However, this method cannot effectively represent the structure properties of high-dimensional data. To address this problem, we propose a sparse-adaptive hypergraph discriminant analysis (SAHDA) method to obtain the embedding features of the HSI in this letter. SAHDA uses sparse representation to reveal the structure relationships of the HSI adaptively. Then, an adaptive hypergraph is constructed by using the intraclass sparse coefficients. Finally, we develop an adaptive dimensionality reduction model to calculate the weights of the hyperedges and the projection matrix. SAHDA can adaptively reveal the intrinsic properties of the HSI and enhance the performance of the embedding features. Experiments on the Washington DC Mall hyperspectral data set demonstrate the effectiveness of the proposed SAHDA method: it achieves better classification accuracies than traditional graph learning methods.

Index Terms— Dimensionality reduction, hypergraph learning, hyperspectral image (HSI), sparse representation.

Manuscript received April 8, 2019; revised June 20, 2019; accepted August 19, 2019. Date of publication September 10, 2019; date of current version May 21, 2020. This work was supported in part by the National Science Foundation of China under Grant 41431175 and Grant 61801336, in part by the Science and Technology Research Program of the Chongqing Municipal Education Commission under Grant KJQN201800632, in part by the Open Research Fund of Key Laboratory of Spatial Data Mining and Information Sharing of Ministry of Education, Fuzhou University, under Grant 2019LSDMIS06, in part by the Open Research Fund of Hubei Key Laboratory of Applied Mathematics (Hubei University) under Grant HBAM201803, in part by the Open Research Fund of Key Laboratory of Digital Earth Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, under Grant 2017LDE002, in part by the National Postdoctoral Program for Innovative Talents under Grant BX201700182, and in part by the China Postdoctoral Science Foundation under Grant 2017M622521. (Corresponding authors: Yanxiang Cheng; Tailang Yin.)

F. Luo and L. Zhang are with the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China, and also with the Hubei Key Laboratory of Applied Mathematics, Faculty of Mathematics and Statistics, Hubei University, Wuhan 430062, China (e-mail: luoflyn@163.com; zlp62@whu.edu.cn).

X. Zhou is with the Key Laboratory of Spatial Data Mining and Information Sharing of Ministry of Education, Fuzhou University, Fuzhou 350116, China (e-mail: zhouxc@fzu.edu.cn).

T. Guo is with the School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China (e-mail: guot@cqupt.edu.cn).

Y. Cheng is with the Gynecology Department, Renmin Hospital of Wuhan University, Wuhan 430060, China (e-mail: yanxiangCheng@whu.edu.cn).

T. Yin is with the Reproductive Medicine Center, Renmin Hospital of Wuhan University, Wuhan 430060, China (e-mail: reproductive@whu.edu.cn).

Digital Object Identifier 10.1109/LGRS.2019.2936652

I. INTRODUCTION

HYPERSPECTRAL images (HSIs) contain hundreds of 2-D images that are captured under different electromagnetic spectra [1]–[3]. In an HSI, each pixel is a continuous spectral curve that has good discriminant performance for materials. Thus, HSIs have been widely used in the fields of target detection [4], anomaly detection [5], and land-cover classification [6]. However, a large number of bands will result in the Hughes phenomenon [7]. Therefore, a key problem is how to reduce the number of bands.

Dimensionality reduction is an effective means of transforming high-dimensional data into a low-dimensional space while preserving the significant information [8]. The extracted low-dimensional features possess better discriminant performance than the original features. Graph learning is an effective method to represent the intrinsic properties of data [9]. Some classic methods were proposed, such as locally linear embedding (LLE) [10] and Laplacian eigenmaps (LEs) [11]. Motivated by these methods, some novel methods were developed, such as regularized local discriminant embedding (RLDE) [8] and local geometric structure Fisher analysis (LGSFA) [12]. However, these algorithms adopt the K-nearest neighbor (NN) criterion to represent the structure relationships of data, which cannot accurately describe the intrinsic properties of high-dimensional data.

To better represent the similarity of the data, sparse representation was proposed to reveal the intrinsic relationships of the data adaptively [13], [14]. With sparse coding, many sparse graph methods were designed to reduce the dimension of data [15]. Representative methods include sparsity-preserving projection (SPP) [16], sparsity-preserving analysis (SPA) [17], and sparse graph-based discriminant analysis (SGDA) [18]. These methods are based on the simple graph, which denotes only binary relationships between data points, while real data contain multiple relationships in general [19].

To better reveal the intrinsic properties of real data, the hypergraph was developed to represent complex multiple relationships, where each edge contains more than two vertices. Representative hypergraph methods include the binary hypergraph (BH) [20] and the discriminant hyper-Laplacian projection (DHLP) [21]. However, researchers generally use Euclidean distance to construct the hypergraph. In fact, Euclidean distance is not an effective manner to represent the intrinsic structures of high-dimensional data, which results in an inaccurate hypergraph construction.

To enhance the accuracy of hypergraph construction, in this letter, we propose a sparse-adaptive hypergraph discriminant

analysis (SAHDA) method to implement the dimensionality reduction of the HSI. SAHDA first uses sparse representation to reveal the intrinsic structure relationships of the HSI adaptively. Then, we use the intraclass sparse coefficients to construct adaptive hyperedges. Finally, we construct a dimensionality-reduction model to compute the projection matrix and the hyperedge weights adaptively, which can be solved by the alternating direction method of multipliers (ADMM). SAHDA is more robust to the data and can better reveal the intrinsic properties of the HSI. Experiments on a hyperspectral data set show better performance than BH and SGDA.

The rest of this letter is organized as follows. Section II details our method. Experimental results are presented in Section III to demonstrate the effectiveness of the proposed method. Finally, Section IV draws some conclusions.

II. SAHDA

Fig. 1. Flowchart of the proposed SAHDA method.

To represent the intrinsic properties of the HSI adaptively, a novel dimensionality-reduction method, termed SAHDA, is proposed in this letter, as shown in Fig. 1. First, sparse representation is adopted to represent the intrinsic relationships of the HSI adaptively. Then, according to the intraclass sparse coefficients, we construct an adaptive hypergraph model. Finally, an adaptive dimensionality reduction model is designed to learn the weights of the hyperedges and the projection matrix.

Suppose a hyperspectral data set $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n] \in \mathbb{R}^{D \times n}$ contains $n$ pixels with $D$ spectral bands. $\ell(\mathbf{x}_i) \in \{1, 2, \ldots, c\}$ denotes the class label of $\mathbf{x}_i$, where $c$ is the number of land-cover classes. The low-dimensional embedding of $\mathbf{X}$ is denoted by $\mathbf{Y} = [\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_n] \in \mathbb{R}^{d \times n}$, where $d$ is the embedding dimension. We can get $\mathbf{Y} = \mathbf{V}^{T}\mathbf{X}$ with a projection matrix $\mathbf{V} \in \mathbb{R}^{D \times d}$, where $d \ll D$.

Sparse representation aims to represent a data point with a dictionary and to obtain representation coefficients that are as sparse as possible. Most of the representation coefficients are zero; only a few elements are nonzero, and their corresponding data points possess strong relevance. These nonzero coefficients can therefore reveal the intrinsic properties of the data. For a data point $\mathbf{x}_i$, its sparse coefficients can be obtained by solving

$$\min_{\mathbf{s}_i} \|\mathbf{s}_i\|_1 \quad \text{s.t.} \quad \|\mathbf{x}_i - \mathbf{X}\mathbf{s}_i\| < \varepsilon \tag{1}$$

where $\varepsilon > 0$ is the error tolerance and $\|\cdot\|_1$ is the $\ell_1$-norm that controls the sparsity of the coefficients. $\mathbf{s}_i = [s_{i,1}, s_{i,2}, \ldots, s_{i,i-1}, 0, s_{i,i+1}, \ldots, s_{i,n}]$ are the sparse coefficients.
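For illustration, the following Python sketch computes the coefficients in (1) for a single pixel. It is a sketch only: the letter solves the constrained problem with the SPGL1 toolbox, whereas here scikit-learn's Lasso solves the Lagrangian form, with the penalty weight `lam` standing in for the tolerance ε; the function name and defaults are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_coefficients(X, i, lam=1e-3):
    """Sparse self-representation of pixel x_i over the remaining pixels.

    Solves min ||x_i - X s||^2 + lam * ||s||_1 (the Lagrangian form of (1))
    with x_i excluded from its own dictionary. X is D x n; the returned
    vector has n entries with the i-th entry fixed to zero.
    """
    n = X.shape[1]
    mask = np.arange(n) != i                  # drop x_i from the dictionary
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
    lasso.fit(X[:, mask], X[:, i])
    s = np.zeros(n)
    s[mask] = lasso.coef_                     # re-insert the structural zero s_{i,i} = 0
    return s
```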

With the intraclass sparse coefficients, we construct a hypergraph $G = \{\mathbf{X}, E, \mathbf{W}\}$, where $\mathbf{X}$ is the vertex set, $E$ is the hyperedge set, and $\mathbf{W}$ is the weight matrix of the hyperedges; each hyperedge $e_i \in E$ has a weight $w_i$ that is adaptively calculated in this letter.

To represent the relationships between the vertices and the hyperedges, we construct an incidence matrix $\mathbf{H} = [h_{ij}] \in \mathbb{R}^{|V| \times |E|}$ with the intraclass sparse coefficients, where $h_{ij}$ denotes the relationship between $\mathbf{x}_i$ and $e_j$ and is defined as

$$h_{ij} = \begin{cases} s_{i,j}, & \text{if } \ell(\mathbf{x}_i) = \ell(\mathbf{x}_j) \text{ and } i \neq j \\ 0, & \text{otherwise.} \end{cases} \tag{2}$$

According to the incidence matrix, we can obtain the degrees of $\mathbf{x}_i$ and $e_j$, which are represented as

$$d_i^v = \sum_{j=1}^{n} w_j h_{ij}, \qquad d_j^e = |e_j| = \sum_{i=1}^{n} h_{ij}. \tag{3}$$
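A minimal sketch of the constructions in (2) and (3) follows. The assumption that the incidence entries are the magnitudes of the intraclass sparse coefficients is ours (the letter does not spell out a sign convention), as are the helper names and the layout of `S` (row i holds s_i).

```python
import numpy as np

def build_incidence(S, labels):
    """Incidence matrix H of (2): keep the coefficient linking x_i and e_j
    only when the two pixels share a class label; the diagonal stays zero.
    """
    same_class = labels[:, None] == labels[None, :]
    H = np.where(same_class, np.abs(S), 0.0)   # magnitudes: our assumption
    np.fill_diagonal(H, 0.0)
    return H

def degrees(H, w):
    """Vertex and hyperedge degrees of (3) for hyperedge weights w."""
    d_v = H @ w              # d_i^v = sum_j w_j h_ij
    d_e = H.sum(axis=0)      # d_j^e = sum_i h_ij
    return d_v, d_e
```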
In the low-dimensional space, we preserve the structures of the hypergraph and compact the homogeneous data as closely as possible. That is to say, the vertices on the same hyperedge should be close in the low-dimensional space, while the similarity of each hyperedge can be effectively calculated by an adaptive weight. Thus, the objective function can be denoted as

$$J(\mathbf{V}, \mathbf{W}) = \min \frac{1}{2} \sum_{e_i \in E} \frac{w_i}{d_i^e} \sum_{(\mathbf{x}_j, \mathbf{x}_k) \in e_i} \left\| \frac{\mathbf{V}^T \mathbf{x}_j}{\sqrt{d_j^v}} - \frac{\mathbf{V}^T \mathbf{x}_k}{\sqrt{d_k^v}} \right\|^2 + \alpha \sum_{i=1}^{n} w_i^2 \quad \text{s.t.} \; \operatorname{tr}(\mathbf{V}^T \mathbf{X}\mathbf{X}^T \mathbf{V}) = 1, \; \sum_{i=1}^{n} w_i = 1 \tag{4}$$

where $\mathbf{W} = [w_i]_n$ and $\alpha > 0$ is a balance parameter.

For the optimization problem (4), we construct an augmented Lagrangian function with Lagrangian multipliers $\delta$ and $\lambda$ as

$$L(\mathbf{V}, \mathbf{W}, \delta, \lambda) = \frac{1}{2} \sum_{e_i \in E} \frac{w_i}{d_i^e} \sum_{(\mathbf{x}_j, \mathbf{x}_k) \in e_i} \left\| \frac{\mathbf{V}^T \mathbf{x}_j}{\sqrt{d_j^v}} - \frac{\mathbf{V}^T \mathbf{x}_k}{\sqrt{d_k^v}} \right\|^2 + \alpha \sum_{i=1}^{n} w_i^2 + \delta \left( \sum_{i=1}^{n} w_i - 1 \right) + \lambda \left( 1 - \operatorname{tr}(\mathbf{V}^T \mathbf{X}\mathbf{X}^T \mathbf{V}) \right). \tag{5}$$

Then, we use the ADMM to update the solution of (5). We first fix $\mathbf{W}$ to update $\mathbf{V}$, and the objective function can be represented as

$$\begin{aligned} L(\mathbf{V}, \lambda) &= \frac{1}{2} \sum_{e_i \in E} \frac{w_i}{d_i^e} \sum_{(\mathbf{x}_j, \mathbf{x}_k) \in e_i} \left\| \frac{\mathbf{V}^T \mathbf{x}_j}{\sqrt{d_j^v}} - \frac{\mathbf{V}^T \mathbf{x}_k}{\sqrt{d_k^v}} \right\|^2 + \lambda \left( 1 - \operatorname{tr}(\mathbf{V}^T \mathbf{X}\mathbf{X}^T \mathbf{V}) \right) \\ &= \frac{1}{2} \sum_{e_i \in E} \sum_{(\mathbf{x}_j, \mathbf{x}_k) \in \mathbf{X}} \frac{h_{ij} h_{ik} w_i}{d_i^e} \left\| \frac{\mathbf{V}^T \mathbf{x}_j}{\sqrt{d_j^v}} - \frac{\mathbf{V}^T \mathbf{x}_k}{\sqrt{d_k^v}} \right\|^2 + \lambda \left( 1 - \operatorname{tr}(\mathbf{V}^T \mathbf{X}\mathbf{X}^T \mathbf{V}) \right) \\ &= \operatorname{tr}(\mathbf{V}^T \mathbf{X} \mathbf{L}_H \mathbf{X}^T \mathbf{V}) + \lambda \left( 1 - \operatorname{tr}(\mathbf{V}^T \mathbf{X}\mathbf{X}^T \mathbf{V}) \right) \end{aligned} \tag{6}$$

where $\mathbf{L}_H = \mathbf{I} - \mathbf{D}_v^{-1/2} \mathbf{H} \mathbf{W} \mathbf{D}_e^{-1} \mathbf{H}^T \mathbf{D}_v^{-1/2}$ is the hyper-Laplacian matrix, $\mathbf{I}$ is an identity matrix, and $\mathbf{D}_v = \operatorname{diag}([d_1^v, d_2^v, \ldots, d_n^v])$ and $\mathbf{D}_e = \operatorname{diag}([d_1^e, d_2^e, \ldots, d_n^e])$ are the vertex and hyperedge degree matrices, respectively.
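The hyper-Laplacian in (6) can be assembled directly from the incidence matrix and the hyperedge weights. A sketch, assuming all vertex and hyperedge degrees are strictly positive:

```python
import numpy as np

def hyper_laplacian(H, w):
    """L_H = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} from (6),
    where W = diag(w) holds the hyperedge weights.
    """
    n = H.shape[0]
    d_v = H @ w                                   # vertex degrees, eq. (3)
    d_e = H.sum(axis=0)                           # hyperedge degrees, eq. (3)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(d_v))
    theta = Dv_inv_sqrt @ H @ np.diag(w) @ np.diag(1.0 / d_e) @ H.T @ Dv_inv_sqrt
    return np.eye(n) - theta
```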


We can obtain the partial derivative of (6) with respect to $\mathbf{V}$, that is,

$$\frac{\partial L(\mathbf{V}, \lambda)}{\partial \mathbf{V}} = \mathbf{X} \mathbf{L}_H \mathbf{X}^T \mathbf{V} - \lambda \mathbf{X}\mathbf{X}^T \mathbf{V}. \tag{7}$$

Setting (7) to zero, the solution of $\mathbf{V}$ is given by the generalized eigenvalue problem

$$\mathbf{X} \mathbf{L}_H \mathbf{X}^T \mathbf{V} = \lambda \mathbf{X}\mathbf{X}^T \mathbf{V}. \tag{8}$$

Then, the optimal projection matrix $\mathbf{V} = [\mathbf{V}_1, \mathbf{V}_2, \ldots, \mathbf{V}_d]$ is formed from the eigenvectors corresponding to the $d$ smallest eigenvalues.
To update $\mathbf{W}$, we fix $\mathbf{V}$ and obtain the following objective function with respect to $\mathbf{W}$:

$$\begin{aligned} L(\mathbf{W}, \delta) &= \frac{1}{2} \sum_{e_i \in E} \frac{w_i}{d_i^e} \sum_{(\mathbf{x}_j, \mathbf{x}_k) \in e_i} \left\| \frac{\mathbf{V}^T \mathbf{x}_j}{\sqrt{d_j^v}} - \frac{\mathbf{V}^T \mathbf{x}_k}{\sqrt{d_k^v}} \right\|^2 + \alpha \sum_{i=1}^{n} w_i^2 + \delta \left( \sum_{i=1}^{n} w_i - 1 \right) \\ &= -\sum_{i=1}^{n} w_i q_i \|\mathbf{r}_i\|^2 + \alpha \sum_{i=1}^{n} w_i^2 + \delta \left( \sum_{i=1}^{n} w_i - 1 \right) \end{aligned} \tag{9}$$

where $q_i$ is the $i$th element of the diagonal of $\mathbf{D}_e^{-1}$ and $\mathbf{r}_i$ is the $i$th column of $\mathbf{V}^T \mathbf{X} \mathbf{D}_v^{-1/2} \mathbf{H}$.

For (9), we use the coordinate descent algorithm. In each iteration, we choose two elements to update while the other elements are fixed. For example, in an iteration, we update the two elements $w_a$ and $w_b$. Since $\sum_{i=1}^{n} w_i = 1$, the sum of $w_a$ and $w_b$ is a fixed value. Thus, with $t_a = -q_a \|\mathbf{r}_a\|^2$, the update of $w_a$ and $w_b$ can be denoted as

$$\begin{cases} w_a^{k+1} = 0, \; w_b^{k+1} = w_a^k + w_b^k, & \text{if } 2\alpha (w_a^k + w_b^k) + (t_b - t_a) \le 0 \\ w_a^{k+1} = w_a^k + w_b^k, \; w_b^{k+1} = 0, & \text{if } 2\alpha (w_a^k + w_b^k) + (t_a - t_b) \le 0 \\ w_a^{k+1} = \dfrac{2\alpha (w_a^k + w_b^k) + (t_b - t_a)}{4\alpha}, \; w_b^{k+1} = w_a^k + w_b^k - w_a^{k+1}, & \text{otherwise.} \end{cases} \tag{10}$$
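A sketch of this pairwise update; the sweep over adjacent index pairs is our assumption, since the letter does not specify a pair-selection rule:

```python
import numpy as np

def update_weights(V, X, H, w, alpha, n_sweeps=10):
    """Pairwise coordinate descent on the hyperedge weights, eq. (10).

    Each pair (w_a, w_b) is re-optimized with its sum held fixed, using
    t_i = -q_i * ||r_i||^2 from (9). Recomputing the degrees once per
    sweep is a simplification of ours.
    """
    n = len(w)
    for _ in range(n_sweeps):
        d_v = H @ w
        d_e = H.sum(axis=0)
        R = V.T @ X @ np.diag(1.0 / np.sqrt(d_v)) @ H   # r_i = columns of R
        t = -np.sum(R * R, axis=0) / d_e                # t_i = -q_i ||r_i||^2
        for a in range(n - 1):
            b = a + 1
            s = w[a] + w[b]                             # pair sum is fixed
            if 2 * alpha * s + (t[b] - t[a]) <= 0:
                w[a], w[b] = 0.0, s
            elif 2 * alpha * s + (t[a] - t[b]) <= 0:
                w[a], w[b] = s, 0.0
            else:
                w[a] = (2 * alpha * s + (t[b] - t[a])) / (4 * alpha)
                w[b] = s - w[a]
    return w
```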
By updating all the $w_i$, we can adaptively obtain the optimum weight matrix of the hypergraph. By alternately updating $\mathbf{W}$ and $\mathbf{V}$ until convergence, we achieve the optimal projection matrix and hyperedge weights. Since the hypergraph construction procedure is adaptive, SAHDA is more robust to the data and can better represent the intrinsic properties of the HSI. According to the projection matrix, the low-dimensional embedding of $\mathbf{x}_i$ is

$$\mathbf{y}_i = \mathbf{V}^T \mathbf{x}_i. \tag{11}$$
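Putting the pieces together, a hypothetical end-to-end sketch of the alternation described above, built on the helper sketches from the preceding paragraphs; the fixed iteration count stands in for the unspecified convergence test:

```python
import numpy as np

def sahda(X, labels, d, alpha=5.0, n_iter=20):
    """End-to-end sketch of SAHDA (Section II): sparse coding, adaptive
    hypergraph, then alternation between the projection V and weights w.
    """
    n = X.shape[1]
    S = np.stack([sparse_coefficients(X, i) for i in range(n)])  # eq. (1)
    H = build_incidence(S, labels)                               # eq. (2)
    w = np.full(n, 1.0 / n)                                      # uniform start
    for _ in range(n_iter):
        L_H = hyper_laplacian(H, w)                              # eq. (6)
        V = update_projection(X, L_H, d)                         # eq. (8)
        w = update_weights(V, X, H, w, alpha)                    # eq. (10)
    return V, V.T @ X                                            # eq. (11)
```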
one-to-one relationship, which is very difficult to reveal the
III. EXPERIMENTAL RESULTS

A. Data Set

To demonstrate the effectiveness of the proposed method, we conduct experiments on the Washington DC Mall data set, as shown in Fig. 2. This data set was captured by the airborne hyperspectral digital imagery collection experiment (HYDICE) sensor over the mall in Washington DC. The original size is 1208 × 307 pixels with a total of 210 bands in the visible and infrared spectra. In this letter, we use a 250 × 307 subimage with 191 bands after removing the water absorption bands.

Fig. 2. Washington DC Mall HSI including (Left) false color and (Right) ground truth. (Note that the number of samples for each class is shown in brackets.)

B. Experimental Setup

In the experiments, we select two corresponding methods, i.e., BH and SGDA, to compare with SAHDA. For BH, we set the neighbor size to 5. For SAHDA and SGDA, the error tolerance of sparse representation was set to 0.1. In this letter, we adopted the SPGL1 toolbox [22] to solve the sparse problem of SGDA and SAHDA. The parameter $\alpha$ was set to 5 for SAHDA. After obtaining the low-dimensional features, the NN classifier and the support-vector machine (SVM) were used to discriminate the class types of unknown data, and we also show the classification results of the raw spectrum ("RAW"). For the SVM, we adopted the LibSVM toolbox [23] with a radial basis function (RBF) kernel, and a grid search was used to select the penalty term $C$ and the RBF kernel width $\sigma$ in the set $\{2^{-10}, \ldots, 2^{10}\}$. The embedding dimension was set to 30 for all the methods. For the experimental results, we use the accuracy of each class, average accuracy (AA), overall accuracy (OA), and Kappa coefficient (KC) to evaluate each algorithm. To report the results robustly, we repeated each experiment ten times and show the AA with standard deviations (STDs).
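For reference, a sketch of this evaluation protocol with scikit-learn standing in for the LibSVM toolbox; the helper name and the 3-fold cross-validation inside the grid search are our assumptions:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score, confusion_matrix

def evaluate(Y_train, y_train, Y_test, y_test):
    """Grid-searched RBF-SVM on embedded features, reporting OA/AA/KC.

    Features are samples-as-rows, i.e., the transpose of Y = V^T X;
    C and gamma are searched over {2^-10, ..., 2^10} as in the letter.
    """
    grid = {"C": 2.0 ** np.arange(-10, 11), "gamma": 2.0 ** np.arange(-10, 11)}
    svm = GridSearchCV(SVC(kernel="rbf"), grid, cv=3).fit(Y_train, y_train)
    pred = svm.predict(Y_test)
    cm = confusion_matrix(y_test, pred)
    oa = np.trace(cm) / cm.sum()                  # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))    # average (per-class) accuracy
    kc = cohen_kappa_score(y_test, pred)          # Kappa coefficient
    return oa, aa, kc
```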


C. Classification Results

To evaluate the classification accuracy of each class, we randomly selected 30 samples from each class as the training set, and the remaining samples were used for testing. Table I shows the results of the different methods under the different classifiers.

TABLE I
CLASSIFICATION RESULTS OF EACH CLASS (%)

Fig. 3. Classification map with different methods, where the first and second rows show the results of SVM and NN, and the first to fourth columns show the results of RAW, BH, SGDA, and SAHDA.

According to Table I, under both the NN and SVM classifiers, the proposed method achieves better results than BH and SGDA for most classes. BH uses Euclidean distance to describe the structure of the HSI data, while Euclidean distance is, in general, inaccurate for representing complex high-dimensional data. SGDA can adaptively represent the intrinsic relationships of the data, but it considers only simple one-to-one relationships, which makes it very difficult to reveal the complex structures of the HSI. SAHDA inherits the merits of BH and SGDA: it not only adaptively represents the intrinsic structures of the HSI but also reveals its multiple properties. BH, SGDA, and SAHDA generate better accuracies than RAW because dimensionality reduction methods reduce the redundancy and preserve the valuable information, enhancing the discriminant performance of the HSI. Among all the compared methods, SAHDA possesses the best AA, OA, and KC. The visualized results are shown in Fig. 3. The proposed method achieves smoother regions than RAW, BH, and SGDA, especially in the areas of road and water, because SAHDA can adaptively represent the intrinsic multiple relationships and the similarity of the homogeneous samples.
D. Dimensionality Analysis

To analyze the effect of the embedding dimension, 30 samples were randomly selected from each class for training, and the other samples were used for testing. We repeated this ten times under each case, and Fig. 4 shows the results of each method under different embedding dimensions.

Fig. 4. Classification results with SVM (the first row) and NN (the second row) under different embedding dimensions.

According to Fig. 4, the classification accuracies improve and then reach a stable value as the embedding dimension increases, because a larger dimension provides more discriminative features. When the discriminant features are sufficient to represent the intrinsic properties of the data, the classification results reach a peak value and remain fixed even as the embedding dimension increases further. In addition, SAHDA generates better results than the compared methods; the reason is that the proposed method can adaptively represent the complex multiple properties of the HSI.

E. Results With Different Numbers of Training Samples

In this section, we analyze the results with different numbers of training samples. We randomly selected 5, 10, 15, 20, 25, and 30 samples from each class for training and repeated each experiment ten times under each condition. Table II shows the average OAs with STDs and the KCs.

In Table II, the results indicate that the accuracies improve as the number of training samples increases because more information can be used to construct the training model. Furthermore, the proposed method achieves better OAs and KCs than the other methods in each case.

F. Parameter Analysis

SAHDA has a parameter $\alpha$ to adjust the weights. To evaluate the influence of $\alpha$, we randomly selected 30 samples from each class as the training set, and the remaining samples were considered as the test set. We set $\alpha$ to 0.01, 0.1, 1, 5, 10, 15, 20, 25, and 30, and Fig. 5 shows the average OAs over ten repeated experiments under each condition.

Authorized licensed use limited to: Wuhan University. Downloaded on June 13,2020 at 03:40:32 UTC from IEEE Xplore. Restrictions apply.

TABLE II
CLASSIFICATION RESULTS WITH DIFFERENT NUMBERS OF TRAINING SAMPLES (OA±STD (KC) %)

Fig. 5. Classification results with respect to α.

From Fig. 5, the OAs first increase and then decrease as α increases. However, the change in the OAs is very small, which indicates that α works well over a large range and a suitable value is easy to select. In the experiments, we set α to 5.

IV. CONCLUSION

Graph learning has been widely used to represent the intrinsic properties of the HSI. In this letter, we proposed a novel method, termed SAHDA, to reduce the dimension of the HSI. SAHDA can adaptively reveal the intrinsic structure relationships of the HSI with sparse representation; then, an adaptive hypergraph is constructed by using the intraclass sparse coefficients. Finally, a dimensionality reduction model is designed to obtain the projection matrix and the hyperedge weights, which can adaptively reveal the similarity of each hyperedge. Experiments on a hyperspectral data set show that the proposed SAHDA method generates better classification accuracies than two traditional algorithms, which indicates that our method can effectively represent the complex multiple properties of the HSI. To further improve the proposed method, we will consider the spatial information of the HSI in future work.

REFERENCES

[1] Z. Wang, B. Du, L. Zhang, L. Zhang, and X. Jia, "A novel semisupervised active-learning algorithm for hyperspectral image classification," IEEE Trans. Geosci. Remote Sens., vol. 55, no. 6, pp. 3071–3083, Jun. 2017.
[2] H. Luo, C. Liu, C. Wu, and X. Guo, "Urban change detection based on Dempster–Shafer theory for multitemporal very high-resolution imagery," Remote Sens., vol. 10, no. 7, p. 980, 2018.
[3] Z. Lv, T. Liu, Y. Wan, J. A. Benediktsson, and X. Zhang, "Post-processing approach for refining raw land cover change detection of very high-resolution remote sensing images," Remote Sens., vol. 10, no. 3, p. 472, 2018.
[4] B. Du, Y. Zhang, L. Zhang, and D. Tao, "Beyond the sparsity-based target detector: A hybrid sparsity and statistics-based detector for hyperspectral images," IEEE Trans. Image Process., vol. 25, no. 11, pp. 5345–5357, Nov. 2016.
[5] F. Li, X. Zhang, L. Zhang, D. Jiang, and Y. Zhang, "Exploiting structured sparsity for hyperspectral anomaly detection," IEEE Trans. Geosci. Remote Sens., vol. 56, no. 7, pp. 4050–4064, Jul. 2018.
[6] J. Peng, W. Sun, and Q. Du, "Self-paced joint sparse representation for the classification of hyperspectral images," IEEE Trans. Geosci. Remote Sens., vol. 57, no. 2, pp. 1183–1194, Feb. 2019.
[7] L. Zhang, L. Zhang, B. Du, J. You, and D. Tao, "Hyperspectral image unsupervised classification by robust manifold matrix factorization," Inf. Sci., vol. 485, pp. 154–169, Jun. 2019.
[8] Y. Zhou, J. Peng, and C. L. P. Chen, "Dimension reduction using spatial and spectral regularized local discriminant embedding for hyperspectral image classification," IEEE Trans. Geosci. Remote Sens., vol. 53, no. 2, pp. 1082–1095, Feb. 2015.
[9] S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin, "Graph embedding and extensions: A general framework for dimensionality reduction," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 1, pp. 40–51, Jan. 2007.
[10] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, pp. 2323–2326, Dec. 2000.
[11] M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput., vol. 15, no. 6, pp. 1373–1396, 2003.
[12] F. Luo, H. Huang, Y. Duan, J. Liu, and Y. Liao, "Local geometric structure feature for dimensionality reduction of hyperspectral imagery," Remote Sens., vol. 9, no. 8, p. 790, Aug. 2017.
[13] N. Wang, X. Gao, L. Sun, and J. Li, "Anchored neighborhood index for face sketch synthesis," IEEE Trans. Circuits Syst. Video Technol., vol. 28, no. 9, pp. 2154–2163, Sep. 2018.
[14] W. Wu, Y. Zhang, Q. Wang, F. Liu, P. Chen, and H. Yu, "Low-dose spectral CT reconstruction using image gradient ℓ0-norm and tensor dictionary," Appl. Math. Model., vol. 63, pp. 538–557, Nov. 2018.
[15] B. Cheng, J. Yang, S. Yan, Y. Fu, and T. S. Huang, "Learning with ℓ1-graph for image analysis," IEEE Trans. Image Process., vol. 19, no. 4, pp. 858–866, Apr. 2010.
[16] L. Qiao, S. Chen, and X. Tan, "Sparsity preserving projections with applications to face recognition," Pattern Recognit., vol. 43, no. 1, pp. 331–341, 2010.
[17] F. Luo, H. Huang, J. Liu, and Z. Ma, "Fusion of graph embedding and sparse representation for feature extraction and classification of hyperspectral imagery," Photogramm. Eng. Remote Sens., vol. 83, no. 1, pp. 37–46, Jan. 2017.
[18] N. H. Ly, Q. Du, and J. E. Fowler, "Sparse graph-based discriminant analysis for hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 52, no. 7, pp. 3872–3884, Jul. 2014.
[19] F. Luo, B. Du, L. Zhang, L. Zhang, and D. Tao, "Feature learning using spatial-spectral hypergraph discriminant analysis for hyperspectral image," IEEE Trans. Cybern., vol. 49, no. 7, pp. 2406–2419, Jul. 2019.
[20] H. Yuan and Y. Y. Tang, "Learning with hypergraph for hyperspectral image feature extraction," IEEE Geosci. Remote Sens. Lett., vol. 12, no. 8, pp. 1695–1699, Aug. 2015.
[21] S. Huang, D. Yang, Y. Ge, and X. Zhang, "Discriminant hyper-Laplacian projections and its scalable extension for dimensionality reduction," Neurocomputing, vol. 173, pp. 145–153, Jan. 2016.
[22] Spectral Projected Gradient for L1 Minimization (SPGL1) Toolbox. [Online]. Available: https://www.cs.ubc.ca/mpf/spgl1/download.html
[23] LIBSVM Toolbox. [Online]. Available: https://www.csie.ntu.edu.tw/cjlin/libsvm/

