
Received March 16, 2020, accepted March 28, 2020, date of publication April 3, 2020, date of current version April 17, 2020.


Digital Object Identifier 10.1109/ACCESS.2020.2985596

An Improved Harris's Hawks Optimization for SAR Target Recognition and Stock Market Index Prediction

HONGPING HU, YAN AO, YANPING BAI, RONG CHENG, AND TING XU
School of Science, North University of China, Taiyuan 030051, China
Corresponding author: Hongping Hu (hhp92@163.com)
This work was supported in part by the Natural Science Foundation of Shanxi Province under Grant 201801D121026 and
Grant 201801D121008, in part by the National Natural Science Foundation of China under Grant 61774137, and in part by the Key
Research and Development Projects of Shanxi Province under Grant 201903D121156.

ABSTRACT The Harris's hawks optimization (HHO) algorithm proposed in 2019 is a novel population-based, nature-inspired optimization paradigm that imitates the cooperative behavior and chasing style of Harris's hawks in nature, called the surprise pounce. Inspired by the particle swarm optimization algorithm, a velocity term is added into the HHO algorithm in the exploration phase. The soft besiege with progressive rapid dives and the hard besiege with progressive rapid dives in the attacking stages of the HHO algorithm are improved by use of the crossover operator of the artificial tree algorithm. Thus the improved HHO algorithm, written as IHHO, is obtained. The effectiveness of the IHHO algorithm is tested on 23 benchmark problems by comparison with 11 other state-of-the-art meta-heuristic algorithms. The IHHO algorithm is then used to optimize the parameters of a support vector machine for synthetic aperture radar (SAR) target recognition and of an extreme learning machine for stock market index prediction by considering Google Trends. The comparative results show that the IHHO algorithm is very promising and has competitive potential.

INDEX TERMS Extreme learning machine, function optimization, Harris’s hawk optimization, stock
prediction, support vector machine, synthetic aperture radar target recognition.

The associate editor coordinating the review of this manuscript and approving it for publication was Xiaochun Cheng.

I. INTRODUCTION
In recent years, meta-heuristic algorithms have been popular in dealing with various matters, such as real engineering design problems [1], [2], signal denoising [3], [4], DOA estimation [5], air quality indices and mathematical optimization problems [5], [6], and fault diagnosis [7]. These meta-heuristic algorithms have advantages in their simplicity and easy implementation process, whose core operations do not rely on gradient information of the objective landscape or on the algorithms' mathematical traits [2], [8]. But these meta-heuristic algorithms have their drawbacks: a delicate sensitivity to the tuning of user-defined parameters, and convergence that may not always reach the global optimum [2], [8].

According to reference [2], meta-heuristic algorithms can be categorized into four main groups: Evolutionary Algorithms (EAs), Physics-based, Human-based, and Swarm Intelligence (SI) algorithms, shown in Figure 1. The meanings of the abbreviated algorithm names in Figure 1 can be found in reference [2].

The genetic algorithm (GA) [9], proposed in the early 1970s, simulated Darwin's evolutionary principles, and the particle swarm optimization (PSO) [10] algorithm, proposed by Kennedy and Eberhart in 1995, simulated birds' behavior in foraging for food; both have been improved, and various hybrid GAs and PSOs have been obtained. For example, the adaptive range genetic algorithm (ARGA) [11] is a new hybrid variant of GA, which uses the sample space reduction technique of the cohort intelligence algorithm. ARGA is tested on 50 standard test functions and is applied to the design and economic optimization problem of a shell and tube heat exchanger. A non-revisiting genetic algorithm based on a novel binary space partition tree [12] is applied to eight benchmark problems, the power system fault diagnosis problem, and the molecular signatures selection problem, and exhibits better overall performance. Chaotic particle swarm optimization with sigmoid-based acceleration coefficients [13] is proposed and validated on 28 optimization functions.


FIGURE 1. Classification of meta-heuristic techniques [2].

An improved exponential decreasing inertia weight-particle swarm optimization algorithm [14] is combined with a radial basis function neural network and applied to perform air quality index prediction.

Given their strong ability to solve optimization problems and their wide applications, swarm intelligence algorithms have been widely proposed and developed, among them the ant lion optimizer (ALO) [15], the differential evolution (DE) algorithm [16], the sine cosine algorithm (SCA) [17], the moth-flame optimization (MFO) algorithm [18], the whale optimization algorithm (WOA) [19], the dragonfly algorithm (DA) [20], the grey wolf optimizer (GWO) [21], the multi-objective ant lion optimizer (MOALO) [22], the multi-verse optimizer (MVO) [23], the artificial tree (AT) algorithm [24], the artificial bee colony (ABC) algorithm [25], the fruit fly optimization algorithm (FOA) [26], the bat algorithm (BA) [27], the invasive weed optimization (IWO) [28], the squirrel search algorithm (SSA) [5], [29], and the Harris's hawk optimization (HHO) [2].

In particular, the Harris's hawk optimization (HHO) proposed in 2019 [2] is a novel population-based, nature-inspired, gradient-free optimization technique. HHO has been applied to different problems, such as function optimization [2], predicting the factor of safety in the presence of rigid foundations [30], the design of microchannel heat sinks [31], and predicting the cooling load of residential buildings [32]. HHO has also been improved constantly, resulting in a novel hybrid optimization algorithm (H-HHONM) that combines the Nelder-Mead local search algorithm with the Harris's hawks optimization algorithm [33] for solving real-world optimization problems; the binary version of HHO (BHHO) to solve the feature selection problem in classification tasks [34]; two classification approaches (HHOSVM and HHO-kNN) that hybridize HHO with support vector machines (SVM) and the k-nearest neighbors (k-NN) for chemical descriptor selection and chemical compound activities [35]; a hybridization (HHO-DE) that combines HHO and DE for color image multilevel thresholding segmentation [36]; the combination DHHO/M of dynamic HHO with a mutation mechanism for a satellite image segmentation technique [37]; diversification-enriched HHO with chaotic drifts for parameter estimation of photovoltaic models [38]; and a modified version CMVHHO of MVO that uses chaos theory and HHO to solve engineering problems [39].

SVM and the extreme learning machine (ELM) are two kinds of important machine learning methods. SVM, proposed in 1995, is based on the VC dimension theory of statistical theory and the structural risk minimization principle [40], while ELM, proposed in 2006 [41], is a simple learning algorithm for a single-hidden-layer feedforward neural network. These two machine learning algorithms are combined with swarm intelligence algorithms to establish hybrid models for solving various problems. For instance, the combination of IWO and SSA is applied to optimize the penalty parameter C and the kernel function parameter γ of an SVM for performing the classification of the air quality index [5]. The improved DA is used to choose the optimal parameters of an SVM for predicting the wind power derived from the La Haute Borne wind farm in France [42]. In [43], WOA-ELM (an ELM whose parameters are optimized by WOA) is tested on several benchmark datasets, and an aging degree evaluation model of insulated gate bipolar transistor modules, intended to ensure their stability during operation, is proposed based on WOA-ELM. In [44], a novel hybrid model based on PSO and ELM is proposed to forecast the temperature in order to further optimize the use of energy. There also exist many models in which SVM and ELM are optimized by other swarm intelligence algorithms for a wide range of applications.

In this paper, the velocity is added into the HHO algorithm in the exploration phase, inspired by the particle swarm optimization algorithm, and the soft besiege with progressive rapid dives and the hard besiege with progressive rapid dives in the attacking stages of the HHO algorithm are improved by use of the crossover operator of the artificial tree algorithm. Thus the improved HHO algorithm, written as IHHO, is obtained. IHHO is tested on 23 benchmark functions for optimization, and the results show that IHHO is very competitive by comparison with 11 other state-of-the-art meta-heuristic algorithms. IHHO is combined with a support vector machine for synthetic aperture radar (SAR) target recognition and with an extreme learning machine for predicting the Standard & Poor's 500 (S&P 500) and Dow Jones Industrial Average (DJIA) indices by considering data in Google Trends. The results show that the proposed classification model for SAR target recognition and the proposed prediction model for stock index prediction are effective and feasible for classification and prediction problems.

This paper is organized as follows. In Section II, HHO is described. In Section III, the improved HHO (IHHO) algorithm is proposed. In Section IV, we test IHHO on 23 benchmark functions for optimization. In Section V, we combine IHHO with SVM and ELM to establish the hybrid model IHHO-SVM for SAR recognition and the hybrid model IHHO-ELM for predicting the stock market indices by considering Google Trends. In Section VI, result analyses are given. In Section VII, the discussion and conclusion are presented.




II. HARRIS'S HAWKS OPTIMIZATION
Similar to the other swarm intelligence algorithms, the Harris's hawk optimization (HHO) proposed in 2019 [2] also has exploratory and exploitative phases, a structure that is inspired by the hawks exploring a prey, the surprise pounce, and different attack styles. Figure 2 shows the exploratory and exploitative phases of HHO in detail.

FIGURE 2. Different phases of HHO [2].

A. EXPLORATION PHASE
Let q be an equal chance for each perching strategy in the exploration phase. If q < 0.5, the Harris's hawks perch based on the positions of other family members (to be close enough to them when attacking) and the rabbit. Otherwise, the Harris's hawks perch on random tall trees (random locations inside the group's home range). The updated position in the exploration phase is modeled in (1):

X_i(t+1) = X_rand(t) − r1 |X_rand(t) − 2 r2 X_i(t)|, if q ≥ 0.5,
X_i(t+1) = (X_rabbit(t) − X_m(t)) − r3 (LB + r4 (UB − LB)), if q < 0.5,   (1)

where X_i(t) and X_rabbit(t) denote the positions of the ith hawk and the rabbit at the tth iteration, respectively; r1, r2, r3, r4, and q are random numbers inside (0, 1) which are updated in each iteration; r3 is a scaling coefficient to further increase the random nature of the rule once r4 takes values close to 1 and similar distribution patterns may occur; LB and UB show the lower and upper bounds of the variables; X_rand(t) is a randomly selected hawk from the current population; and

X_m(t) = (1/N) Σ_{i=1}^{N} X_i(t)   (2)

is the average position of the current population of hawks, where N denotes the total number of hawks. The locations of the Harris's hawks are all within the group's home range (LB, UB).

B. TRANSITION FROM EXPLORATION TO EXPLOITATION
In HHO, the escaping energy E of the prey decreases and is defined as

E = 2 E0 (1 − t/T),   (3)

where T is the maximum number of iterations, and the initial state E0 ∈ (−1, 1) of the energy randomly changes at each iteration. Exploration happens when |E| ≥ 1, while exploitation happens in later steps when |E| < 1.

C. EXPLOITATION PHASE
Let r be the chance of a prey escaping before the surprise pounce. According to r and E, the four possible strategies in HHO to model the attacking stage are soft besiege, hard besiege, soft besiege with progressive rapid dives, and hard besiege with progressive rapid dives.

1) SOFT BESIEGE
When r ≥ 0.5 and |E| ≥ 0.5, the attacking stage in HHO is the soft besiege. The updated position vector of the Harris's hawk is defined by (4) and (5):

X(t+1) = ΔX(t) − E |J X_rabbit(t) − X(t)|,   (4)
ΔX(t) = X_rabbit(t) − X(t),   (5)

where ΔX(t) is the difference between the position vector of the rabbit and the current location at the tth iteration, J = 2(1 − r5) represents the random jump strength of the rabbit throughout the escaping procedure, and r5 is a random number inside (0, 1).

2) HARD BESIEGE
When r ≥ 0.5 and |E| < 0.5, the attacking stage in HHO is the hard besiege. The updated positions of the Harris's hawks are defined using (6):

X(t+1) = X_rabbit(t) − E |ΔX(t)|.   (6)
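To make the update rules in (1)-(6) concrete, the following minimal Python sketch implements the exploration step, the escaping energy, and the two plain besiege rules for a single hawk. This is our own illustration with our own function names, not the code used in the paper's experiments (which were run in MATLAB):

```python
import numpy as np

rng = np.random.default_rng(0)

def exploration_step(X_i, X_rand, X_rabbit, X_mean, lb, ub):
    """Eq. (1): perch near a random member (q >= 0.5) or relative to the rabbit."""
    r1, r2, r3, r4, q = rng.random(5)
    if q >= 0.5:
        return X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X_i)
    return (X_rabbit - X_mean) - r3 * (lb + r4 * (ub - lb))

def escaping_energy(E0, t, T):
    """Eq. (3): |E| >= 1 keeps exploring; |E| < 1 switches to exploitation."""
    return 2.0 * E0 * (1.0 - t / T)

def soft_besiege(X, X_rabbit, E):
    """Eqs. (4)-(5), with random jump strength J = 2(1 - r5)."""
    J = 2.0 * (1.0 - rng.random())
    return (X_rabbit - X) - E * np.abs(J * X_rabbit - X)

def hard_besiege(X, X_rabbit, E):
    """Eq. (6)."""
    return X_rabbit - E * np.abs(X_rabbit - X)

# One exploration move for hawk 0 in a population of 10 on a 30-dim problem
lb, ub = -100.0, 100.0
hawks = rng.uniform(lb, ub, (10, 30))
print(exploration_step(hawks[0], hawks[3], hawks[1], hawks.mean(axis=0), lb, ub)[:3])
```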




3) SOFT BESIEGE WITH PROGRESSIVE RAPID DIVES
When |E| ≥ 0.5 and r < 0.5, the attacking stage in HHO is the soft besiege with progressive rapid dives. In HHO, the levy flight (LF) is utilized to mimic the real zigzag deceptive motions of prey (particularly rabbits) during the escaping phase and the irregular, abrupt, and rapid team dives of the hawks around the escaping prey. The hawks can evaluate (decide) their next move to perform a soft besiege according to (7):

Y = X_rabbit(t) − E |J X_rabbit(t) − X(t)|.   (7)

The hawks will do the dives in terms of the LF-based patterns with (8):

Z = Y + S × LF(D),   (8)

where D is the dimension of the problem, S is a random vector of size 1 × D, and LF is the levy flight function, defined as (9) [45]:

LF(x) = (u × σ) / |v|^(1/β),   σ = ( Γ(1 + β) × sin(πβ/2) / ( Γ((1 + β)/2) × β × 2^((β−1)/2) ) )^(1/β),   (9)

where u, v ∈ (0, 1) are random values and β is a default constant set to 1.5. Hence, the positions of the hawks in the soft besiege phase can be performed by (10):

X(t+1) = Y, if F(Y) < F(X(t));   X(t+1) = Z, if F(Z) < F(X(t)),   (10)

where Y and Z are obtained using (7) and (8).

4) HARD BESIEGE WITH PROGRESSIVE RAPID DIVES
When |E| < 0.5 and r < 0.5, the attacking stage in HHO is the hard besiege with progressive rapid dives. The updated position vector of the Harris's hawk obeys (11) in the hard besiege condition:

X(t+1) = Y′, if F(Y′) < F(X(t));   X(t+1) = Z′, if F(Z′) < F(X(t)),   (11)

where Y′ and Z′ are obtained using the new rules in (12) and (13):

Y′ = X_rabbit(t) − E |J X_rabbit(t) − X_m(t)|,   (12)
Z′ = Y′ + S × LF(D).   (13)

III. THE PROPOSED ALGORITHM
A. THE IMPROVED HHO ALGORITHM
The flight of a Harris's hawk is characterized by the bird's velocity and its position in the air. Inspired by the PSO algorithm, the velocity (14)

v(t+1) = ω × v(t) + c1 × rand(0, 1) × (p(t) − X(t)) + c2 × rand(0, 1) × (X_rabbit(t) − X(t))   (14)

is added into the Harris's hawk optimization algorithm in the exploration phase, where ω is the inertia weight, c1 and c2 are the acceleration coefficients, v(t) is the velocity at the tth iteration, and p(t) is the local optimum hawk. The position vector of the hawks in the exploration phase is updated as (15):

X(t+1) = v(t+1) + X_rand(t) − r1 |X_rand(t) − 2 r2 X(t)|, if q ≥ 0.5;
X(t+1) = v(t+1) + (X_rabbit(t) − X_m(t)) − r3 (LB + r4 (UB − LB)), if q < 0.5.   (15)

The crossover operator of the AT algorithm is added into the soft besiege with progressive rapid dives and the hard besiege with progressive rapid dives in the attacking stages. In the stage of the soft besiege with progressive rapid dives, the hawks can evaluate (decide) their next move to perform a soft besiege according to (16):

Y = X_rabbit(t) − E |J X_rabbit(t) − X(t)|,   (16)

and they will do the dives in terms of the LF-based patterns with (17):

Z = Y + rand(0, 1) × LF(D).   (17)

Hence, the final strategy for updating the positions of the hawks in the soft besiege phase can be performed by (18):

X(t+1) = rand(0, 1) × Y + rand(0, 1) × X(t), if F(Y) < F(X(t));
X(t+1) = rand(0, 1) × Z + rand(0, 1) × X(t), if F(Z) < F(X(t)).   (18)

In the stage of the hard besiege with progressive rapid dives, the hawks can evaluate (decide) their next move to perform a hard besiege according to (19):

Y′ = X_rabbit(t) − E |J X_rabbit(t) − X_m(t)|,   (19)

and the hawks will do the dives in terms of the LF-based patterns with (20):

Z′ = Y′ + rand(0, 1) × LF(D).   (20)

Therefore, the updated position vector of the Harris's hawk obeys (21) in the hard besiege condition:

X(t+1) = rand(0, 1) × Y′ + rand(0, 1) × X(t), if F(Y′) < F(X(t));
X(t+1) = rand(0, 1) × Z′ + rand(0, 1) × X(t), if F(Z′) < F(X(t)).   (21)

Thus, the improved HHO (IHHO) is obtained. The flow chart of IHHO is shown in Figure 3.

B. COMPUTATIONAL COMPLEXITY
According to the comparison of HHO and IHHO, the computational complexities of these two algorithms mainly depend on three processes: initialization, fitness evaluation, and the updating of the hawks. Let N be the number of hawks in the population. In IHHO, the velocity is added into the initialization of HHO; thus the computational complexity of the initialization of IHHO is O(2N), that is, O(N), which is the same as that of HHO.

Although the updated position vectors in the attacking stages of IHHO are obtained from those of HHO in the soft besiege with progressive rapid dives and the hard besiege with progressive rapid dives by the crossover operator of the AT algorithm, the computational complexity of IHHO in the attacking stages remains unchanged. Thus, the computational complexity of the updating mechanism is O(T × N) + O(T × N × D), which comprises searching for the best location and updating the location vector of all hawks, where T is the maximum number of iterations and D is the dimension of the specific problem. Therefore, the computational complexity of IHHO is O(N × (T + TD + 1)), which is the same as that of HHO.
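As a concrete illustration of the LF-based dive in (8)-(9), reused with a scalar random factor in (17) and (20), the following sketch follows the paper's statement that u and v are random values in (0, 1) and that β = 1.5; the classical recipe in [45] instead draws u and v from normal distributions. Function names are ours:

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(dim, beta=1.5, rng=np.random.default_rng()):
    """Eq. (9): one LF(D) step; sigma depends only on beta.
    u, v are taken uniform in (0, 1) as stated in the text."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.random(dim) * sigma
    v = rng.random(dim)                  # almost surely nonzero
    return u / np.abs(v) ** (1 / beta)

def dive(Y, rng=np.random.default_rng()):
    """Eq. (8): Z = Y + S x LF(D), with S a random 1-by-D vector."""
    D = len(Y)
    S = rng.random(D)
    return Y + S * levy_flight(D, rng=rng)

print(dive(np.zeros(5)))
```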

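The two changes that turn HHO into IHHO can be sketched in the same style: the PSO-inspired velocity of (14) with the shifted perch rules of (15), and the AT-crossover recombination used in (18) and (21). Again this is an illustrative sketch, not the authors' MATLAB code; ω = 1 follows the remark in Section VII, while c1 = c2 = 2 and all names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def ihho_velocity(v, X, p_local, X_rabbit, w=1.0, c1=2.0, c2=2.0):
    """Eq. (14); p_local is the hawk's own best position so far.
    w = 1 per the paper; c1 = c2 = 2 is our assumed setting."""
    return (w * v
            + c1 * rng.random() * (p_local - X)
            + c2 * rng.random() * (X_rabbit - X))

def ihho_exploration(v_next, X, X_rand, X_rabbit, X_mean, lb, ub):
    """Eq. (15): the HHO perch rules of Eq. (1), shifted by the new velocity."""
    r1, r2, r3, r4, q = rng.random(5)
    if q >= 0.5:
        return v_next + X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X)
    return v_next + (X_rabbit - X_mean) - r3 * (lb + r4 * (ub - lb))

def at_crossover(candidate, X):
    """Eqs. (18)/(21): recombine an accepted dive point Y or Z (here
    `candidate`) with the hawk's old position X."""
    return rng.random() * candidate + rng.random() * X
```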



FIGURE 3. The flow chart of IHHO.




TABLE 1. Description of unimodal benchmark functions F1(x)−F7(x).

TABLE 2. Description of multimodal benchmark functions F8(x)−F13(x).

TABLE 3. Description of fixed-dimension benchmark functions F14(x)−F23(x).

C. RATE OF CONVERGENCE
The rate of convergence of an algorithm is defined as

lim_{k→∞} ||X^(k+1) − X*|| / ||X^(k) − X*||^β = σ ∈ (0, 1).   (22)

When β = 1, the algorithm has a linear rate of convergence and is regarded as the worst algorithm; when β = 2, the algorithm has a quadratic rate of convergence and is regarded as the best algorithm; when β ∈ (1, 2), the algorithm has a superlinear rate of convergence and is regarded as a better algorithm.

In this subsection, the rate of convergence of the proposed IHHO algorithm is calculated according to (22) in view of the 23 benchmark functions shown in Table 1, Table 2 and Table 3. The results are that β = 2 and σ ∈ (0, 1) for these 23 benchmark functions. Therefore, the proposed IHHO algorithm is the best algorithm and has a quadratic rate of convergence.

IV. EXPERIMENTAL RESULTS AND DISCUSSION
A. 23 BENCHMARK FUNCTIONS
In this paper, 23 benchmark functions are employed to verify the validity of the IHHO algorithm; they derive from references [2], [15], [24], [46], [47] and are shown in Table 1, Table 2 and Table 3. Table 1, Table 2 and Table 3 give the representations, the dimensions and the minimum values of the seven unimodal benchmark functions F1(x)−F7(x) with dimension n = 30, 100, 300, 500, the six multimodal benchmark functions F8(x)−F13(x) with dimension n = 30, 100, 300, 500, and the ten fixed-dimension benchmark functions F14(x)−F23(x), respectively.

The proposed IHHO is compared with 11 other well-established optimization techniques: the ant lion optimizer (ALO), dragonfly algorithm (DA), differential evolution algorithm (DE), genetic algorithm (GA), grey wolf optimization (GWO), moth-flame optimization (MFO), sine cosine algorithm (SCA), whale optimization algorithm (WOA), PSO, AT and HHO. These 12 algorithms are compared by the standard deviation (Std.) and the average value (Avg.).

B. EXPERIMENTAL SETUP
In this paper, all the implemented algorithms were run under Matlab R2014a on a computer with a Windows 10 64-bit Professional platform and 64 GB RAM.

For all 12 comparable algorithms, the population size is set to 30 and the maximum number of iterations is set to 500. Every comparable algorithm is run 30 times independently. The parameters of these 12 algorithms are set as shown in Table 4.

C. EXPERIMENTAL RESULTS ON 23 BENCHMARK FUNCTIONS
1) IHHO VERSUS AT, PSO AND HHO
In this subsection, the AT, PSO, and HHO algorithms are employed for comparison with the IHHO algorithm under the different conditions with dimensions 30, 100, 300, and 500 for the 13 benchmark functions F1(x)−F13(x), and with the fixed dimensions for the 10 benchmark functions F14(x)−F23(x). According to the above experimental setup, the corresponding results are obtained as shown in Table 5 and Table 6, which give the Avg. and Std. of the benchmark functions F1(x)−F13(x) and F14(x)−F23(x), respectively.
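Returning to the rate-of-convergence definition in (22), a worked numerical illustration is possible: β and σ can be fitted from successive distances to a known optimum by a log-log regression. This is our own sketch under our own assumptions, since the paper does not spell out its estimation procedure:

```python
import numpy as np

def convergence_order(errors):
    """Estimate beta and sigma in e_{k+1} ~ sigma * e_k**beta (Eq. (22))
    from a sequence of distances ||X(k) - X*||, via a log-log fit."""
    e = np.asarray(errors, dtype=float)
    x, y = np.log(e[:-1]), np.log(e[1:])
    beta, log_sigma = np.polyfit(x, y, 1)
    return beta, np.exp(log_sigma)

# Quadratically convergent toy sequence: e_{k+1} = 0.5 * e_k**2
errs = [1e-1]
for _ in range(5):
    errs.append(0.5 * errs[-1] ** 2)
print(convergence_order(errs))   # beta close to 2, sigma close to 0.5
```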




TABLE 4. Parameters in 12 comparable algorithms.

TABLE 5. Results of benchmark functions F1(x)−F13(x) with the different dimensions.

Table 5 shows that for every algorithm except the AT algorithm, the Avg. values of most of the benchmark functions F1(x)−F13(x) increase with increasing dimension, while the Avg. values of function F7(x) performed by AT decrease with increasing dimension. Under different dimensions, IHHO arrives at the minimum Avg. values on the benchmark functions F1(x), F3(x), F4(x), and F8(x)−F11(x), while for the benchmark functions F2(x), F5(x), F6(x), F12(x), and F13(x), AT arrives at the minimum Avg. values; according to the Avg. values, the order of these four algorithms is AT, IHHO, HHO and PSO. In particular, for function F7(x), IHHO arrives at the minimum Avg. value under dimensions 30 and 100, while AT arrives at the minimum Avg. values under dimensions 300 and 500. Therefore, the results show that IHHO is superior to AT, PSO, and HHO under different dimensions and is the most suitable for functions with many dimensions. From Table 6, we also observe that the Avg. values of the benchmark functions F14(x)−F18(x) and F21(x)−F23(x) performed by IHHO arrive at the optimum values. In addition, the Avg. values of functions F16(x)−F19(x) performed by HHO reach the optimum values, and the Avg. values of functions F19(x)−F20(x) performed by PSO reach the optimum value and the minimum value, respectively. Therefore, it is shown that for the fixed dimensions, IHHO has the best performance on function optimization. Figure 4 and Figure 5 show the convergence curves of functions F1(x)−F23(x) performed by PSO, AT, HHO and IHHO, where the dimensions of functions F1(x)−F13(x) are 30, and the convergence curves of function F8(x) are obtained by performing only AT, HHO and IHHO owing to the large difference between the Avg. value performed by PSO and the actual value. From Figure 4 and Figure 5, it is also shown that IHHO outperforms PSO, AT, and HHO.




FIGURE 4. The convergence curves of function F1 (x) − F13 (x).




FIGURE 5. The convergence curves of function F14 (x) − F23 (x).

2) IHHO COMPARED WITH THE OTHER 11 COMPARABLE ALGORITHMS
According to the above experimental setup, the Avg. values and the Std. of the benchmark functions F1(x)−F13(x) with dimensions 30, 100, 300, and 500 and of F14(x)−F23(x) with the fixed dimensions, obtained by performing the ALO, DA, DE, GA, GWO, MFO, SCA, WOA, PSO, AT, HHO and IHHO algorithms, are shown in Table 7, Table 8, Table 9, Table 10 and Table 11, respectively.

We observe from Table 7-Table 10 that the Avg. values of functions F1(x), F3(x), F4(x), and F8(x)−F11(x) performed by IHHO are the nearest to the optimum values, and the Avg. values of functions F2(x), F5(x), F6(x), and F12(x)−F13(x) performed by AT are the nearest to the optimum values. From Table 11, we also observe that the Avg. values of functions F14(x)−F18(x) and F21(x)−F23(x) performed by IHHO are the nearest to the optimum values.




TABLE 6. Results of benchmark functions F14(x)−F23(x) with the fixed dimensions.

TABLE 7. Results of benchmark functions F1(x)−F13(x) with the dimension 30.

As can be seen in Table 7-Table 11, the results show that the proposed IHHO can produce excellent results in all dimensions, and its performance remains consistently superior to the other 11 algorithms when solving cases with many variables. Therefore, it is shown that the proposed IHHO outperforms the other 11 algorithms and is suitable for function optimization.

3) WILCOXON TEST
In this subsection, a Wilcoxon test on rank sums [48] of IHHO vs. each of the other 11 algorithms is applied to further verify the validity of the proposed IHHO; the results are shown in Table 12, where

R+ = Σ_{d_i>0} rank(d_i) + (1/2) Σ_{d_i=0} rank(d_i),   (23)
R− = Σ_{d_i<0} rank(d_i) + (1/2) Σ_{d_i=0} rank(d_i),   (24)
T = max{R+, R−},   (25)

where d_i is the difference between the performance scores of the two algorithms on the ith of the N functions, and R+ and R− denote the sums of ranks for the functions on which the second algorithm outperformed the first and on which the first algorithm outperformed the second, respectively. Then, if T is less than or equal to the critical value of the Wilcoxon distribution for N degrees of freedom (Table B.12 [49]), the null hypothesis of equality of means is rejected.

For the significance level α = 0.05, we can observe from Table 12 that the values of T are all larger than T_0.05(1),13 = 21 and T_0.05(2),13 = 17 obtained from Table B.12 [49] for functions F1(x)−F13(x) with dimension 30, than T_0.05(1),10 = 10 and T_0.05(2),10 = 8 obtained from Table B.12 [49] for functions F14(x)−F23(x), and than T_0.05(1),23 = 83 and T_0.05(2),23 = 73 obtained from Table B.12 [49] for F1(x)−F23(x), respectively, showing that the null hypothesis of equality of means is rejected. We also observe that R+ is larger than R− only for IHHO vs. AT on functions F1(x)−F13(x); under the remaining conditions, R− is larger than R+, which illustrates that IHHO outperforms the other 11 comparable algorithms and has the stronger competitiveness.
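The rank bookkeeping of (23)-(25) can be reproduced in a few lines (our sketch; the score vectors below are hypothetical Avg. values, and SciPy's `wilcoxon` with `zero_method="zsplit"` is shown only as a cross-check of the same convention of splitting zero differences between R+ and R−):

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

def signed_rank_sums(scores_a, scores_b):
    """Eqs. (23)-(25): rank |d_i|; ties at d_i = 0 split between R+ and R-."""
    d = np.asarray(scores_a) - np.asarray(scores_b)
    ranks = rankdata(np.abs(d))           # average ranks of the |d_i|
    r_plus = ranks[d > 0].sum() + 0.5 * ranks[d == 0].sum()
    r_minus = ranks[d < 0].sum() + 0.5 * ranks[d == 0].sum()
    return r_plus, r_minus, max(r_plus, r_minus)

a = [0.0, 1.2e-3, 4.5e-2, 3.1, 0.7]       # hypothetical Avg. values, algorithm 1
b = [0.0, 2.3e-3, 4.1e-2, 3.9, 1.1]       # hypothetical Avg. values, algorithm 2
print(signed_rank_sums(a, b))
print(wilcoxon(a, b, zero_method="zsplit"))
```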




TABLE 8. Results of benchmark functions F1(x)−F13(x) with the dimension 100.

TABLE 9. Results of benchmark functions F1(x)−F13(x) with the dimension 300.

D. EFFECT ON 12 SHIFTED FUNCTIONS PERFORMED BY THE IHHO ALGORITHM
In this subsection, we discuss 12 shifted functions obtained from the 12 benchmark functions F1(x)−F7(x) and F9(x)−F13(x) by shifting each variable by a unit a, named SF1(x)−SF12(x), respectively. That is, the benchmark functions F1(x)−F7(x) are changed to the shifted functions SF1(x)−SF7(x), and the benchmark functions F9(x)−F13(x) are changed to the shifted functions SF8(x)−SF12(x). The global optima of these 12 shifted functions are all (a, a, ..., a), located away from the origin position (0, 0, ..., 0). For example, the global optimum of F1(x) = Σ_{i=1}^{n} x_i² is (0, 0, ..., 0), where x_i ∈ [−100, 100] (i = 1, 2, ..., n), while the global optimum of SF1(x) = Σ_{i=1}^{n} (x_i − a)² is (a, a, ..., a), where x_i ∈ [−100 + a, 100 + a] (i = 1, 2, ..., n).

According to the above experimental setup, we also run the proposed IHHO algorithm on these 12 shifted functions 30 times independently. When the shift unit a is changed from −2 to 2 with the step 0.2, the Avg. values of these 12 shifted functions are obtained and shown in Figure 6. From Figure 6, we observe that although the 12 shifted functions SF1(x)−SF12(x) are obtained from the 12 benchmark functions F1(x)−F7(x) and F9(x)−F13(x) by simply shifting every variable by a unit, the Avg. values of the shifted functions SF_i(x) (i = 1, 2, ..., 12) change with the shift of the variables. Further, the results obtained from Figure 6 show that the proposed IHHO algorithm is affected when performed on the shifted functions.
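As a small illustration of this shifting scheme (our own code; `a` denotes the shift unit):

```python
import numpy as np

def F1(x):
    """Sphere function: global optimum at the origin."""
    return np.sum(np.asarray(x) ** 2)

def make_shifted(f, a):
    """SF(x) = f(x - a): moves the global optimum to (a, a, ..., a)."""
    return lambda x: f(np.asarray(x) - a)

SF1 = make_shifted(F1, 0.6)
print(F1(np.zeros(30)), SF1(0.6 * np.ones(30)))   # both print 0.0
```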




FIGURE 6. The Avg. values of shifted functions SF1 (x) − SF12 (x).

V. APPLICATIONS
According to the optimization of the 23 benchmark functions above, IHHO is superior to the other 11 comparable algorithms. In particular, IHHO, AT, and HHO also outperform the other 9 comparable algorithms, and IHHO is obtained by inspiration from AT and PSO on the basis of HHO. Therefore, in this section, the selected PSO, AT, HHO, and IHHO are combined with SVM for synthetic aperture radar (SAR) target recognition and with the extreme learning machine (ELM) for predicting two kinds of stock indices, the Standard & Poor's 500 (S&P 500) and the Dow Jones Industrial Average (DJIA), by considering Google Trends.

A. SAR TARGET RECOGNITION
1) DATA SOURCE
In this subsection, the adopted SAR dataset is derived from the public MSTAR database, which was co-funded by the Defense Advanced Research Projects Agency (DARPA) and the U.S. Air Force Research Laboratory (AFRL). The MSTAR database consists of three classes of targets: BMP2, BTR70, and T72, shown in Figure 7.




TABLE 10. Results of benchmark functions F1(x)−F13(x) with the dimension 500.

TABLE 11. Results of benchmark functions F14(x)−F23(x).

The versions of BMP2 are SN_9563, SN_9566, and SN_C21; the only version of BTR70 is SN_C71; and the versions of T72 are SN_132, SN_812, and SN_S7. The SAR images obtained at a 17° angle are regarded as the training set, and those obtained at a 15° angle are regarded as the test set. The concrete MSTAR database is shown in Table 13.

2) DATA PREPROCESSING
The 64 × 64 SAR images, obtained by cutting out the central regions of size 64 × 64 from the SAR images, are stretched into vectors of dimension 4096, and the classification labels of BMP2, BTR70, and T72 are assigned as 1, 2, and 3, respectively. Thus 1622 items of training data and 1365 items of testing data are obtained. Then principal component analysis (PCA) is used to reduce the dimension of the training data and the testing data. Given an accumulative contribution ratio of 70%, 191 principal components are obtained as the new data of dimension 191 for performing SAR target recognition.
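A sketch of this preprocessing pipeline is given below (illustrative only: scikit-learn's PCA stands in for whatever routine the authors used, and the placeholder arrays merely mimic the dataset sizes):

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder SAR chips already cropped to 64 x 64 (real data: MSTAR)
train_imgs = np.random.rand(1622, 64, 64)
test_imgs = np.random.rand(1365, 64, 64)

X_train = train_imgs.reshape(len(train_imgs), -1)   # 4096-dim vectors
X_test = test_imgs.reshape(len(test_imgs), -1)

# Keep enough components for a 70% cumulative contribution ratio
# (191 components on the real MSTAR data, per the paper).
pca = PCA(n_components=0.70).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)
print(Z_train.shape, Z_test.shape)
```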




FIGURE 7. Three types of targets in MSTAR dataset.

TABLE 12. Wilcoxon tests of IHHO vs. the other 11 algorithms.

TABLE 13. The concrete MSTAR database.

3) CLASSIFIER BASED ON IHHO AND SVM
Support vector machine (SVM), a well-known data-driven technique, is used to perform the classification of linear and nonlinear data. The basic theory of SVM is described in [5], [6].

In this paper, the penalty parameter C and the kernel function parameter γ of the radial basis function (RBF) are the two parameters of SVM that need to be optimized. Every individual in the population of the IHHO algorithm is composed of these two parameters, the penalty parameter C and the kernel function parameter γ of the RBF, and the accuracy rate (AR) of classification is regarded as the fitness function of IHHO, defined by

AR = NSCC / NTS,   (26)

where NSCC denotes the number of samples with the correct classification and NTS denotes the total number of samples.

By optimizing with IHHO, the optimal C and γ of the SVM are obtained, and then the obtained SVM performs the SAR target classification. Thus the hybrid model for SAR target classification based on IHHO and SVM, IHHO-SVM, is established.

4) EXPERIMENTAL RESULTS
For this part of the investigation, the experimental tool for SAR target classification is the MATLAB R2014a simulation software. Here, PSO, AT, and HHO are combined with SVM to establish the hybrid models PSO-SVM, AT-SVM, and HHO-SVM, respectively, for comparison with IHHO-SVM. For convenience, model 1, model 2, model 3, model 4, and model 5 represent the five comparable models (SVM, PSO-SVM, AT-SVM, HHO-SVM, and IHHO-SVM) for SAR target recognition, respectively. The selections of the parameters in these five models are shown in Table 14.

TABLE 14. Parameters in the PSO-SVM, AT-SVM, HHO-SVM and IHHO-SVM models.

These five models are performed 10 times independently. The whole average accuracy rates on the tested data and the average correct classification numbers of the three classes in the tested data are shown in Table 15 and Table 16 and in Figure 8 and Figure 9, where Num. denotes the number of correctly classified SAR images and model 0 denotes the raw tested data.
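A minimal sketch of the fitness evaluation in (26) for one individual (C, γ), as used in subsection 3) above, might look as follows (our illustration using scikit-learn's RBF-kernel SVC on synthetic three-class data; the authors' MATLAB SVM toolchain is not specified at this level of detail):

```python
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def fitness(individual):
    """Eq. (26): AR = NSCC / NTS for the SVM encoded by (C, gamma)."""
    C, gamma = individual
    clf = SVC(C=C, gamma=gamma, kernel="rbf").fit(X_tr, y_tr)
    return clf.score(X_te, y_te)    # accuracy rate on the tested data

print(fitness((10.0, 0.01)))        # score one candidate hawk position
```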




FIGURE 8. The accuracy rates of three classes and the totality.

TABLE 15. The accuracy rates of five models.

TABLE 16. The number of the correct classified samples of five models.

From Table 15 and Table 16, we observe that among these five models, the total average accuracy rate of model 5 rises to 94.84% (1294.5/1365) at its highest; the summits are 98.69% (574.4/582) and 76.94% (150.8/196) for the SAR recognition of the T72 and BMP2 types, respectively; and the minimum value is 96.98% (569.3/587) for the SAR recognition of the BTR70 type. For the SAR recognition of the BTR70 type, the average accuracy rate of model 2 is the highest at 98.48% (578.1/587), but the differences in the average accuracy rate are not very great among these five models. The average accuracy rates for the SAR recognition of BMP2 of model 1-model 4 are all less than 43%. In addition, from Figure 8 and Figure 9, we can more easily observe the same results. For the SAR images of BTR70, the accuracy rate and the number of total correctly classified SAR images change little, and the room for increasing the accuracy rate and the number of correctly classified SAR images lies in the BMP2 subclass of SAR images. Therefore, model 5, IHHO-SVM, outperforms models 1 through 4 and is suitable for SAR target recognition.

B. STOCK MARKET PREDICTION
1) DATA SOURCE
In this subsection, we discuss two major financial indicators in the USA, the Standard & Poor's 500 (S&P 500) and Dow Jones Industrial Average (DJIA) indices, whose data were downloaded from Yahoo Finance from April 23, 2014 to April 22, 2019. Every dataset establishes a matrix of size 1258 × 5, where the number of rows denotes the 1258 trading days and the columns denote the features: the opening price, the maximum price, the minimum price, the closing price and the trading volume of the current trading day. We also obtain the Google Trends data on ''BUY'' and ''SELL'' over the corresponding period by use of the specific keywords ''S&P 500'' and ''DJIA'' for predicting the opening prices of the S&P 500 and DJIA.

We use two methods to predict the opening prices of the DJIA and S&P 500 indices, respectively. One method is to use the opening price, the maximum price, the minimum price, the closing price, and the trading volume of the current trading day to predict the opening price of the next day, which is designated Type I. The other method is to use the opening price, the maximum price, the minimum price, the closing price, the trading volume, the stock BUY and the stock SELL to predict the opening price of the next day, which is designated Type II.
VOLUME 8, 2020 65905



FIGURE 9. Number of the correct classified SAR images of three classes and the totality.

2) PREDICTED MODEL
The extreme learning machine (ELM) proposed in 2006 [41] is a feed-forward neural network method with rapid learning and good generalization ability, whose structure is composed of three layers: the input layer, one hidden layer, and the output layer, with full connections between the input layer and the hidden layer and between the hidden layer and the output layer.

There are n neural nodes in the input layer, l neural nodes in the hidden layer and m neural nodes in the output layer. The connection weight matrix is W = (w_ji)_{l×n} between the input layer and the hidden layer and β = (β_jk)_{l×m} between the hidden layer and the output layer, where w_ji denotes the connection weight from the ith neural node in the input layer to the jth neural node in the hidden layer, and β_jk denotes the connection weight from the jth neural node in the hidden layer to the kth neural node in the output layer. The threshold value in the hidden layer is the matrix b = (b_j1)_{l×1}.

Let X = (x_is)_{n×Q} and T = (t_ks)_{m×Q} be the input matrix and the actual output matrix, where Q is the number of training samples, and let g(x) be the activation function in the hidden layer. In the ELM neural network, the connection weight matrix W and the threshold value b are randomly generated, while the connection matrix β satisfies the equation H′β = T′, where T′ is the transpose of the matrix T, H = g(WX + b) is the output matrix of the hidden layer in ELM, and the connection weight

β = (H′)⁺ T′   (27)

is obtained by way of the least squares solution of

min_β ||H′β − T′||,   (28)

where (H′)⁺ is the Moore-Penrose generalized inverse of H′.

In this subsection, IHHO is used to optimize the connection weight W and the threshold value b of ELM for predicting the opening prices of the S&P 500 and DJIA indices. Thus the prediction model, named IHHO-ELM, is established. The concrete steps of IHHO-ELM are as follows:

Step 1. Initialize the parameters: the population, the maximum number of iterations, and the number of neural nodes in the hidden layer. Give the termination condition of IHHO-ELM.

Step 2. Map every individual of IHHO into the connection weight W and the threshold value b. Then, for the training input matrix X and the actual corresponding output matrix T, the trained output matrix Y = (y_1s)_{1×Q} is obtained from ELM. The function

o = (1/Q) Σ_{s=1}^{Q} (y_1s − t_1s)²   (29)

is regarded as the fitness function of IHHO.

Step 3. Judge whether the termination condition is satisfied or the maximum number of iterations is reached. If yes, the prey rabbit is obtained and Step 4 is executed. Otherwise, IHHO is used to update the individuals, the new population is created, and Step 2 is repeated.

Step 4. Map the prey rabbit of IHHO into the connection weight W and the threshold value b. Thus the prediction model IHHO-ELM is established.

Five evaluating indicators are introduced to assess the prediction performance of the models: the mean square error (MSE), the mean absolute error (MAE), the root mean square error (RMSE), the mean absolute percentage error (MAPE), and the coefficient of determination R².
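The ELM training in (27)-(28) reduces to a single pseudo-inverse solve; the following numpy sketch keeps the paper's notation, while the sigmoid activation and the dimensions are our illustrative choices. In IHHO-ELM, the hawk positions would supply W and b instead of the random draw, and (29) would score the result:

```python
import numpy as np

def train_elm(X, T, l, rng=np.random.default_rng(0)):
    """X: n x Q inputs, T: m x Q targets, l hidden nodes.
    W and b are random; beta solves H' beta = T' via Eq. (27)."""
    n = X.shape[0]
    W = rng.uniform(-1, 1, (l, n))          # input-to-hidden weights
    b = rng.uniform(-1, 1, (l, 1))          # hidden thresholds
    H = 1.0 / (1.0 + np.exp(-(W @ X + b)))  # g(WX + b), sigmoid activation
    beta = np.linalg.pinv(H.T) @ T.T        # Eq. (27): (H')^+ T'
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(W @ X + b)))
    return (H.T @ beta).T                   # m x Q predicted outputs

X = np.random.rand(5, 200); T = np.random.rand(1, 200)
W, b, beta = train_elm(X, T, l=10)
print(predict_elm(X, W, b, beta).shape)     # (1, 200)
```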




These are defined as follows:

MSE = (1/Q) Σ_{s=1}^{Q} (y_s − t_s)²,   (30)

MAE = (1/Q) Σ_{s=1}^{Q} |y_s − t_s|,   (31)

RMSE = sqrt( (1/Q) Σ_{s=1}^{Q} (y_s − t_s)² ),   (32)

MAPE = (1/Q) Σ_{s=1}^{Q} |(y_s − t_s) / t_s|,   (33)

R² = ( Q Σ_{s=1}^{Q} y_s t_s − (Σ_{s=1}^{Q} y_s)(Σ_{s=1}^{Q} t_s) )² / ( ( Q Σ_{s=1}^{Q} t_s² − (Σ_{s=1}^{Q} t_s)² )( Q Σ_{s=1}^{Q} y_s² − (Σ_{s=1}^{Q} y_s)² ) ),   (34)

where y_s and t_s denote the predicted value and the actual value of the corresponding output of the sth sample, respectively.

3) EXPERIMENTAL RESULTS
In this subsection, we discuss the two stock indices, the S&P 500 and DJIA, for predicting the opening price. We choose 880 data points from April 23, 2014 to October 17, 2017 from the S&P 500 or DJIA databases as the training data sets and the remaining 337 data points from October 18, 2017 to April 22, 2019 as the test data sets for Type I; and we choose 880 data points from April 23, 2014 to October 17, 2017 from the S&P 500 or DJIA databases together with the corresponding Google Trends ''BUY'' and ''SELL'' as the training data sets and the remaining 337 data points from October 18, 2017 to April 22, 2019 as the test data sets for Type II.

The training data sets and the test data sets have five features for Type I: the opening price, maximum price, minimum price, closing price and trading volume of the current trading day; the training data sets and the test data sets have seven features for Type II: the opening price, maximum price, minimum price, closing price, trading volume, the BUY on Google Trends, and the SELL on Google Trends of the current trading day.

Before performing the prediction, the training data sets and the test data sets are all preprocessed to lie in the interval [1, 2] according to the following formula:

y = y_min + (y_max − y_min) (x − x_min) / (x_max − x_min),   (35)

where y_min = 1 and y_max = 2; x_min and x_max denote the minimum value and the maximum value of the original data, respectively; x is the original data before normalization; and y is the data after normalization.

ELM, PSO-ELM, AT-ELM, and HHO-ELM are also employed for comparison with IHHO-ELM. The parameters are set to be the same as those in Table 14. In addition, the numbers of the neural nodes in the hidden layers of these five models are set to be twice the number of the neural nodes in the input layer. Thus the structures of these five models are 5-10-1 for Type I and 7-14-1 for Type II.

We run these five models 20 times, and the corresponding results on the average MSE, MAE, RMSE, MAPE, and R² of the two stock indices S&P 500 and DJIA for Type I and Type II are obtained, as shown in Table 17 and Table 18, respectively.

TABLE 17. The average evaluating indicators of five models for DJIA.
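For reference, the indicators (30)-(34) and the scaling (35) translate directly into code (our sketch; `y` holds predicted values and `t` actual values):

```python
import numpy as np

def indicators(y, t):
    """Eqs. (30)-(34); R2 is computed in the squared-correlation form of (34)."""
    y, t = np.asarray(y, float), np.asarray(t, float)
    mse = np.mean((y - t) ** 2)
    mae = np.mean(np.abs(y - t))
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs((y - t) / t))
    r2 = np.corrcoef(y, t)[0, 1] ** 2
    return mse, mae, rmse, mape, r2

def minmax_scale(x, y_min=1.0, y_max=2.0):
    """Eq. (35): map the data into [1, 2] before training."""
    x = np.asarray(x, float)
    return y_min + (y_max - y_min) * (x - x.min()) / (x.max() - x.min())
```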




TABLE 18. The average evaluating indicators of five models for S&P 500.

Table 17 and Table 18 show the average evaluating indicators of the five models for predicting the opening prices of the DJIA and S&P 500 indices, respectively. For DJIA and S&P 500, the average MSE, MAE, RMSE, and MAPE of the IHHO-ELM model all reach the minimum values for both Type I and Type II, and the average R² of IHHO-ELM reaches the maximum value for both Type I and Type II, which makes clear that the IHHO-ELM model outperforms the other four models for stock market index prediction. Moreover, the average MSE, MAE, RMSE, and MAPE of the IHHO-ELM model for Type II are less than those for Type I, and the average R² of the IHHO-ELM model for Type II is larger than that for Type I. In addition, the average MSE, MAE, RMSE, and MAPE of ELM are all larger than those of the other four comparable models. For the AT-ELM and HHO-ELM models, the average MSE, MAE, RMSE and MAPE for Type II are all less than those for Type I, and the average R² values for Type II are all larger than those for Type I. The results show that the proposed model IHHO-ELM outperforms the other four comparable models and that the Google Trends ''BUY'' and ''SELL'' data have some influence on the stock prediction. Therefore, the swarm intelligence algorithms have advantages in optimizing the parameters of ELM for performing stock prediction.

VI. RESULT ANALYSES
According to the above experimental results, we can draw the following conclusions:

(1) For the 23 benchmark functions, the proposed IHHO mostly has the minimum average function value. IHHO is first compared with PSO, AT and HHO for the 13 functions F1(x)−F13(x) with four different dimensions, and then IHHO is compared with ALO, DA, DE, GA, GWO, MFO, SCA, WOA, AT, and HHO for the 23 benchmark functions. The experimental results show that IHHO is the best algorithm for optimization among these comparable algorithms.

(2) For SAR recognition, the proposed model IHHO-SVM has the highest total accuracy rate of 94.84% and the highest accuracy rates of 98.69% and 76.94% for the T72 and BMP2 types; the accuracy rate for the BTR70 type is 96.98%, and the differences are within ±1.5%. The results denote that IHHO-SVM is valid for SAR recognition.

(3) For the stock predictions, the proposed model IHHO-ELM has the minimum errors MSE, RMSE, MAE and MAPE and the maximum R² among the comparable models for the two kinds of stock indices with Type I and Type II. The errors for Type II are less than those for Type I, and the R² for Type II is larger than that for Type I by use of the IHHO algorithm. The results indicate that IHHO-ELM is useful for stock prediction and that Google Trends has some influence on stock prediction.

In this paper, the improved Harris's hawks algorithm is obtained from two aspects: one is that the velocity is added into the HHO algorithm in the exploration phase, inspired by the PSO algorithm; the other is that the crossover operator of the AT algorithm is added into the soft besiege with progressive rapid dives and the hard besiege with progressive rapid dives in the attacking stages of the HHO algorithm. The experimental results show that the IHHO algorithm increases the multiplicity of individuals in the population and speeds up the convergence rate on the objective function. The introduction of the velocity and the crossover operator in the IHHO algorithm adjusts the diversity of individuals and makes the individuals apt to trend towards the optimal solution, which allows the functions to find their optimal solutions by use of the IHHO algorithm.

Based on these analyses, the proposed IHHO algorithm is capable of performing function optimization and of the applications in combination with SVM and ELM for SAR recognition and stock prediction.




VII. DISCUSSION AND CONCLUSION
In this work, the position vectors of the Harris's hawks in the exploration phase of the HHO algorithm are improved by the particle swarm optimization algorithm, and those vectors in the soft besiege with progressive rapid dives and the hard besiege with progressive rapid dives in the attacking stages of the HHO algorithm are improved by the crossover operator of the artificial tree algorithm. Thus the improved HHO (now called IHHO) is obtained. The IHHO algorithm is tested on 23 benchmark functions. The experimental results show that the IHHO algorithm is capable of finding excellent solutions compared to the 11 other comparable algorithms. Specifically, IHHO is compared with the other three algorithms for 13 benchmark functions with different dimensions. Additionally, the results of synthetic aperture radar target recognition reveal that the classification model based on IHHO and support vector machine gives superior results by comparison with the other classification models. The results of stock market index prediction also reveal that the prediction model based on IHHO and extreme learning machine gives the best evaluating indicators by comparison with the other prediction models. The smaller average accuracy rates for synthetic aperture radar target recognition of the BMP2 type and the adjusted numbers of the neural nodes in the hidden layer of IHHO-ELM influence the results. Like HHO, IHHO is as simple as possible, with few exploratory and exploitative mechanisms.

But in PSO and IHHO, we take the inertia weight ω to be the constant 1, which may cause the minimum average function value to be a little high. If a different inertia weight ω from PSO and its variants is adopted in IHHO, IHHO will have a different minimum average function value and convergence speed. Therefore, an appropriate inertia weight ω helps IHHO to reach the minimum values of the functions, and may improve the accuracy rate of SAR target recognition and decrease the prediction errors of the stock indices. In addition, the numbers of the neural nodes in the hidden layers of ELM, PSO-ELM, AT-ELM, HHO-ELM and IHHO-ELM are adjusted parameters, which affect the prediction results.

ACKNOWLEDGMENT
The authors would like to thank the editors and anonymous reviewers for their valuable comments and suggestions, which greatly improved the presentation of the article.

REFERENCES
[1] C. Blum, J. Puchinger, G. R. Raidl, and A. Roli, "Hybrid metaheuristics in combinatorial optimization: A survey," Appl. Soft Comput., vol. 11, no. 6, pp. 4135-4151, Sep. 2011. [Online]. Available: http://hal.inria.fr/hal-01224683
[2] A. A. Heidari, S. Mirjalili, H. Faris, I. Aljarah, M. Mafarja, and H. Chen, "Harris hawks optimization: Algorithm and applications," Future Gener. Comput. Syst., vol. 97, pp. 849-872, Aug. 2019, doi: 10.1016/j.future.2019.02.028.
[3] H. Yan, T. Xu, P. Wang, L. Zhang, H. Hu, and Y. Bai, "MEMS hydrophone signal denoising and baseline drift removal algorithm based on parameter-optimized variational mode decomposition and correlation coefficient," Sensors, vol. 19, no. 21, p. 4622, Oct. 2019, doi: 10.3390/s19214622.
[4] H. Hu, L. Zhang, H. Yan, Y. Bai, and P. Wang, "Denoising and baseline drift removal method of MEMS hydrophone signal based on VMD and wavelet threshold processing," IEEE Access, vol. 7, pp. 59913-59922, 2019, doi: 10.1109/ACCESS.2019.2915612.
[5] H. Hu, L. Zhang, Y. Bai, P. Wang, and X. Tan, "A hybrid algorithm based on squirrel search algorithm and invasive weed optimization for optimization," IEEE Access, vol. 7, pp. 105652-105668, 2019, doi: 10.1109/ACCESS.2019.2932198.
[6] H. Xue, Y. Bai, H. Hu, T. Xu, and H. Liang, "A novel hybrid model based on TVIW-PSO-GSA algorithm and support vector machine for classification problems," IEEE Access, vol. 7, pp. 27789-27801, 2019, doi: 10.1109/ACCESS.2019.2897644.
[7] Z. Wang, G. He, W. Du, J. Zhou, X. Han, J. Wang, H. He, X. Guo, J. Wang, and Y. Kou, "Application of parameter optimized variational mode decomposition method in fault diagnosis of gearbox," IEEE Access, vol. 7, pp. 44871-44882, 2019, doi: 10.1109/ACCESS.2019.2909300.
[8] J. Dreo, A. Petrowski, P. Siarry, and E. Taillard, "Metaheuristics for hard optimization: Methods and case studies," Math. Meth. Oper. Res., vol. 66, pp. 557-558, Sep. 2007, doi: 10.1007/s00186-007-0180-y.
[9] D. E. Goldberg and J. H. Holland, "Genetic algorithms and machine learning," Mach. Learn., vol. 3, nos. 2-3, pp. 95-99, 1988. [Online]. Available: https://link.springer.com/article/10.1023%2FA%3A1022602019183
[10] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. IEEE Int. Conf. Neural Netw., vol. 4, Nov. 1995, pp. 1942-1948, doi: 10.1109/ICNN.1995.488968.
[11] V. H. Iyer, S. Mahesh, R. Malpani, M. Sapre, and A. J. Kulkarni, "Adaptive range genetic algorithm: A hybrid optimization approach and its application in the design and economic optimization of shell-and-tube heat exchanger," Eng. Appl. Artif. Intell., vol. 85, pp. 444-461, Oct. 2019, doi: 10.1016/j.engappai.2019.07.001.
[12] Y. Su, N. Guo, Y. Tian, and X. Zhang, "A non-revisiting genetic algorithm based on a novel binary space partition tree," Inf. Sci., vol. 512, pp. 661-674, Feb. 2020, doi: 10.1016/j.ins.2019.10.016.
[13] D. Tian, X. Zhao, and Z. Shi, "Chaotic particle swarm optimization with sigmoid-based acceleration coefficients for numerical function optimization," Swarm Evol. Comput., vol. 51, Dec. 2019, Art. no. 100573, doi: 10.1016/j.swevo.2019.100573.
[14] J. N. Lu, H. P. Hu, and Y. P. Bai, "Radial basis function neural network based on an improved exponential decreasing inertia weight-particle swarm optimization algorithm for AQI prediction," Abstract Appl. Anal., vol. 2014, Jul. 2014, Art. no. 178313, doi: 10.1155/2014/178313.
[15] S. Mirjalili, "The ant lion optimizer," Adv. Eng. Softw., vol. 83, pp. 80-98, May 2015, doi: 10.1016/j.advengsoft.2015.01.010.
[16] E. G. Martinez-Soltero and J. Hernandez-Barragan, "Robot navigation based on differential evolution," IFAC-PapersOnLine, vol. 51, no. 13, pp. 350-354, 2018, doi: 10.1016/j.ifacol.2018.07.303.
[17] S. Mirjalili, "SCA: A sine cosine algorithm for solving optimization problems," Knowl.-Based Syst., vol. 96, pp. 120-133, Mar. 2016, doi: 10.1016/j.knosys.2015.12.022.
[18] S. Mirjalili, "Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm," Knowl.-Based Syst., vol. 89, pp. 228-249, Nov. 2015, doi: 10.1016/j.knosys.2015.07.006.
[19] S. Mirjalili and A. Lewis, "The whale optimization algorithm," Adv. Eng. Softw., vol. 95, pp. 51-67, May 2016, doi: 10.1016/j.advengsoft.2016.01.008.
[20] S. Mirjalili, "Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems," Neural Comput. Appl., vol. 27, no. 4, pp. 1053-1073, 2016, doi: 10.1007/s00521-015-1920-1.
[21] S. Mirjalili, S. M. Mirjalili, and A. Lewis, "Grey wolf optimizer," Adv. Eng. Softw., vol. 69, pp. 46-61, Mar. 2014, doi: 10.1016/j.advengsoft.2013.12.007.
[22] S. Mirjalili, P. Jangir, and S. Saremi, "Multi-objective ant lion optimizer: A multi-objective optimization algorithm for solving engineering problems," Appl. Intell., vol. 46, no. 1, pp. 79-95, Jan. 2017, doi: 10.1007/s10489-016-0825-8.
[23] S. Mirjalili, S. M. Mirjalili, and A. Hatamlou, "Multi-verse optimizer: A nature-inspired algorithm for global optimization," Neural Comput. Appl., vol. 27, no. 2, pp. 495-513, Feb. 2016, doi: 10.1007/s00521-015-1870-7.
[24] Q. Q. Li, K. Song, Z. C. He, E. Li, A. G. Cheng, and T. Chen, "The artificial tree (AT) algorithm," Eng. Appl. Artif. Intell., vol. 65, pp. 99-110, Oct. 2017, doi: 10.1016/j.engappai.2017.07.025.
[25] D. Karaboga, "Artificial bee colony algorithm," Scholarpedia, vol. 5, no. 3, p. 6915, 2010, doi: 10.4249/scholarpedia.6915.




[26] W.-T. Pan, "A new fruit fly optimization algorithm: Taking the financial distress model as an example," Knowl.-Based Syst., vol. 26, pp. 69-74, Feb. 2012, doi: 10.1016/j.knosys.2011.07.001.
[27] Q. Liu, L. Wu, W. Xiao, F. Wang, and L. Zhang, "A novel hybrid bat algorithm for solving continuous optimization problems," Appl. Soft Comput., vol. 73, pp. 67-82, Dec. 2018, doi: 10.1016/j.asoc.2018.08.012.
[28] A. R. Mehrabian and C. Lucas, "A novel numerical optimization algorithm inspired from weed colonization," Ecol. Informat., vol. 1, no. 4, pp. 355-366, Dec. 2006, doi: 10.1016/j.ecoinf.2006.07.003.
[29] M. Jain, V. Singh, and A. Rani, "A novel nature-inspired algorithm for optimization: Squirrel search algorithm," Swarm Evol. Comput., vol. 44, pp. 148-175, Feb. 2019, doi: 10.1016/j.swevo.2018.02.013.
[30] H. Moayedi, A. Osouli, H. Nguyen, and A. S. A. Rashid, "A novel Harris hawks' optimization and k-fold cross-validation predicting slope stability," Eng. Comput., pp. 1-11, Jul. 2019, doi: 10.1007/s00366-019-00828-8.
[31] A. Abbasi, B. Firouzi, and P. Sendur, "On the application of Harris hawks optimization (HHO) algorithm to the design of microchannel heat sinks," Eng. Comput., pp. 1-20, Dec. 2019, doi: 10.1007/s00366-019-00892-0.
[32] H. Moayedi, M. A. Mu'azu, and L. K. Foong, "Novel swarm-based approach for predicting the cooling load of residential buildings based on social behavior of elephant herds," Energy Buildings, vol. 206, Jan. 2020, Art. no. 109579, doi: 10.1016/j.enbuild.2019.109579.
[33] A. R. Yildiz, B. S. Yildiz, S. M. Sait, S. Bureerat, and N. Pholdee, "A new hybrid Harris hawks-Nelder-Mead optimization algorithm for solving design and manufacturing problems," Mater. Test., vol. 61, no. 8, pp. 735-743, Aug. 2019. [Online]. Available: https://www.researchgate.net/publication/334193865
[34] J. Too, A. R. Abdullah, and N. M. Saad, "A new quadratic binary Harris hawk optimization for feature selection," Electronics, vol. 8, no. 10, p. 1130, Oct. 2019, doi: 10.3390/electronics8101130.
[35] E. H. Houssein, M. E. Hosney, D. Oliva, W. M. Mohamed, and M. Hassaballah, "A novel hybrid Harris hawks optimization and support vector machines for drug design and discovery," Comput. Chem. Eng., vol. 133, Feb. 2020, Art. no. 106656, doi: 10.1016/j.compchemeng.2019.106656.
[36] X. Bao, H. Jia, and C. Lang, "A novel hybrid Harris hawks optimization for color image multilevel thresholding segmentation," IEEE Access, vol. 7, pp. 76529-76546, 2019, doi: 10.1109/ACCESS.2019.2921545.
[37] H. Jia, C. Lang, D. Oliva, W. Song, and X. Peng, "Dynamic Harris hawks optimization with mutation mechanism for satellite image segmentation," Remote Sens., vol. 11, no. 12, p. 1421, Jun. 2019, doi: 10.3390/rs11121421.
[38] H. Chen, S. Jiao, M. Wang, A. A. Heidari, and X. Zhao, "Parameters identification of photovoltaic cells and modules using diversification-enriched Harris hawks optimization with chaotic drifts," J. Cleaner Prod., vol. 244, Jan. 2020, Art. no. 118778, doi: 10.1016/j.jclepro.2019.118778.
[39] A. A. Ewees and M. A. Elaziz, "Performance analysis of chaotic multi-verse Harris hawks optimization: A case study on solving engineering problems," Eng. Appl. Artif. Intell., vol. 88, Feb. 2020, Art. no. 103370, doi: 10.1016/j.engappai.2019.103370.
[40] X. Xie, "Improvement on projection twin support vector machine," Neural Comput. Appl., vol. 30, no. 2, pp. 371-387, Jul. 2018, doi: 10.1007/s00521-017-3237-8.
[41] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew, "Extreme learning machine: Theory and applications," Neurocomputing, vol. 70, nos. 1-3, pp. 489-501, Dec. 2006, doi: 10.1016/J.NEUCOM.2005.12.126.
[42] L.-L. Li, X. Zhao, M.-L. Tseng, and R. R. Tan, "Short-term wind power forecasting based on support vector machine with improved dragonfly algorithm," J. Cleaner Prod., vol. 242, Jan. 2020, Art. no. 118447, doi: 10.1016/j.jclepro.2019.118447.
[43] L.-L. Li, J. Sun, M.-L. Tseng, and Z.-G. Li, "Extreme learning machine optimized by whale optimization algorithm using insulated gate bipolar transistor module aging degree evaluation," Expert Syst. Appl., vol. 127, pp. 58-67, Aug. 2019, doi: 10.1016/j.eswa.2019.03.002.
[44] S. Kumar, S. K. Pal, and R. Singh, "A novel hybrid model based on particle swarm optimisation and extreme learning machine for short-term temperature prediction using ambient sensors," Sustain. Cities Soc., vol. 49, Aug. 2019, Art. no. 101601, doi: 10.1016/j.scs.2019.101601.
[45] X. S. Yang, Nature-Inspired Metaheuristic Algorithms, 2nd ed. Cambridge, U.K.: Cambridge Univ. Press, 2010.
[46] X. Yao, Y. Liu, and G. Lin, "Evolutionary programming made faster," IEEE Trans. Evol. Comput., vol. 3, no. 2, pp. 82-102, Jul. 1999, doi: 10.1109/4235.771163.
[47] J. G. Digalakis and K. G. Margaritis, "On benchmarking functions for genetic algorithms," Int. J. Comput. Math., vol. 77, no. 4, pp. 481-506, Jan. 2001, doi: 10.1080/00207160108805080.
[48] S. Garcia, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: A case study on the CEC'2005 special session on real parameter optimization," J. Heuristics, vol. 15, no. 6, pp. 617-644, Dec. 2009, doi: 10.1007/s10732-008-9080-4.
[49] J. H. Zar, Biostatistical Analysis. Upper Saddle River, NJ, USA: Prentice-Hall, 1999.

HONGPING HU received the Ph.D. degree from the North University of China, China, in 2009. She is currently an Associate Professor and a Master Tutor with the Department of Mathematics, North University of China. Her research interests include combinatorial mathematics, artificial intelligence, and image processing.

YAN AO graduated from Taiyuan Normal University in 2018. She is currently pursuing the master's degree with the North University of China. Her major research interest is signal processing.

YANPING BAI received the Ph.D. degree from the North University of China, China, in 2005. She is currently a Professor and a Ph.D. Tutor with the School of Information and Communication Engineering, North University of China. Her research interests include optimization algorithms, artificial intelligence, signal processing, image processing, and MEMS reliability research.

RONG CHENG received the M.S. degree from Shaanxi Normal University in 2003 and the Ph.D. degree from the North University of China, China, in 2015. She is currently a Lecturer with the North University of China. Her main research interests include machine learning, optimization algorithms, pattern recognition, and prediction.

TING XU received the M.S. degree from the School of Science, North University of China, in 2011. She is currently pursuing the Ph.D. degree with the School of Information and Communication Engineering, North University of China. She is a Lecturer with the North University of China. Her major research interests include SAR target recognition and signal processing.

