Axioms | An Open Access Journal from MDPI (https://www.mdpi.com/journal/axioms), ISSN: 2075-1680
Latest Articles
Open Access Article (/2075-1680/11/7/304/pdf?version=1655974522)

Several Double Inequalities for Integer Powers of the Sinc and Sinhc Functions with Applications to the Neuman–Sándor Mean and the
First Seiffert Mean (/2075-1680/11/7/304)
by Wen-Hui Li (https://sciprofiles.com/profile/2224162),
Qi-Xia Shen (https://sciprofiles.com/profile/author/a0RQbTJPeVgyK3U0NWYxVTlhV1FOeVQ4cWFrRjREaWZ2NEdDT09Ub0piZz0=) and
Bai-Ni Guo (https://sciprofiles.com/profile/594091)
Axioms 2022, 11(7), 304; https://doi.org/10.3390/axioms11070304 (registering DOI) - 23 Jun 2022
Abstract In the paper, the authors establish a general inequality for the hyperbolic functions, extend the newly-established inequality to
trigonometric functions, obtain some new inequalities involving the inverse sine and inverse hyperbolic sine functions, and apply these inequalities
to the Neuman–Sándor mean and the [...] Read more.
(This article belongs to the Special Issue A Themed Issue on Mathematical Inequalities, Analytic Combinatorics and Related Topics in Honor of Professor Feng Qi (/journal/axioms/special_issues/math_inequalities))

Open Access Article (/2075-1680/11/7/303/pdf?version=1655967298)

A Simultaneous Estimation of the Baseline Intensity and Parameters for Modulated Renewal Processes (/2075-1680/11/7/303)
by Jiancang Zhuang (https://sciprofiles.com/profile/1167044) and
Hai-Yen Siew (https://sciprofiles.com/profile/author/dkNIUWZtNG5SblBnei9NZ0VTbWtmMEFPaGY3RzNyOGNnb3dTeHlxU1JvMD0=)
Axioms 2022, 11(7), 303; https://doi.org/10.3390/axioms11070303 (registering DOI) - 23 Jun 2022
Abstract This paper proposes a semiparametric solution to estimate the intensity (hazard) function of modulated renewal processes: a
nonparametric estimate for the baseline intensity function together with a parametric estimate of the model parameters of the covariate processes.
Based on the martingale property associated [...] Read more.
(This article belongs to the Special Issue Stochastic Modelling and Statistical Methods in Earth and Environmental Sciences (/journal/axioms/special_issues/earth_and_environmental_sciences))


Open Access Article (/2075-1680/11/7/302/pdf?version=1655817900)

Information Measures Based on T-Spherical Fuzzy Sets and Their Applications in Decision Making and Pattern Recognition (/2075-1680/11/7/302)
by Xiaomin Shen (https://sciprofiles.com/profile/2273304),
Sidra Sakhi (https://sciprofiles.com/profile/author/RmlsWnVhZXBMcmRMejJSb2M0VjFLWDhOVXJiM0ZJWExhdFRXRnVrUCt0WT0=),
Kifayat Ullah (https://sciprofiles.com/profile/324588), Muhammad Nabeel Abid (https://sciprofiles.com/profile/2221808) and
Yun Jin (https://sciprofiles.com/profile/989840)
Axioms 2022, 11(7), 302; https://doi.org/10.3390/axioms11070302 (https://doi.org/10.3390/axioms11070302) - 21 Jun 2022
Abstract The T-spherical fuzzy set (TSFS) is a modification of the fuzzy set (FS), intuitionistic fuzzy set (IFS), Pythagorean fuzzy set (PyFS), q-rung
orthopair fuzzy set (q-ROFS), and picture fuzzy set (PFS), with three characteristic functions: the membership degree (MD) denoted by , the [...]
Read more.
(This article belongs to the Special Issue Soft Computing with Applications to Decision Making and Data Mining (/journal/axioms/special_issues/Soft_Computing_Decision_Making))

Open Access Article (/2075-1680/11/7/301/pdf?version=1655815191)

New Theorems in Solving Families of Improper Integrals (/2075-1680/11/7/301)


by Mohammad Abu Ghuwaleh (https://sciprofiles.com/profile/2171782), Rania Saadeh (https://sciprofiles.com/profile/844879) and
Aliaa Burqan (https://sciprofiles.com/profile/1923409)
Axioms 2022, 11(7), 301; https://doi.org/10.3390/axioms11070301 (https://doi.org/10.3390/axioms11070301) - 21 Jun 2022
Abstract Many improper integrals appear in the classical table of integrals by I. S. Gradshteyn and I. M. Ryzhik. It is a challenge for some
researchers to determine the method in which these integrations are formed or solved. In this article, we present some [...] Read more.

Open Access Article (/2075-1680/11/7/300/pdf?version=1655804743)

Smooth Versions of the Mann–Whitney–Wilcoxon Statistics (/2075-1680/11/7/300)
by Netti Herawati (https://sciprofiles.com/profile/1676375) and
Ibrahim A. Ahmad (https://sciprofiles.com/profile/author/T2hYVU5qU0dkZkwvNzlCQkdaNW1lZ3RwR0xhdFlVclNBMm9nM2M3aE9jcz0=)
Axioms 2022, 11(7), 300; https://doi.org/10.3390/axioms11070300 (https://doi.org/10.3390/axioms11070300) - 21 Jun 2022
Abstract The well-known Mann–Whitney–Wilcoxon (MWW) statistic is based on empirical distribution estimates. However, the data are often drawn
from smooth populations. Therefore, the smoothness characteristic is not preserved. In addition, several authors have pointed out that empirical
distribution is often an inadmissible estimate. Thus, [...] Read more.
(This article belongs to the Section Mathematical Analysis (/journal/axioms/sections/mathematical_analysis))

Open Access Article (/2075-1680/11/6/299/pdf?version=1655974215)

A New Survey of Measures of Noncompactness and Their Applications (/2075-1680/11/6/299)


by Moosa Gabeleh (https://sciprofiles.com/profile/author/VzZ1blZMQ2tQcVowdDB1ak01TlVqcTlQbzk4eWZvMGlRbmc5SHpJR3crbz0=),
Eberhard Malkowsky (https://sciprofiles.com/profile/1596279), Mohammad Mursaleen (https://sciprofiles.com/profile/550068) and
Vladimir Rakočević (https://sciprofiles.com/profile/746653)
Axioms 2022, 11(6), 299; https://doi.org/10.3390/axioms11060299 (https://doi.org/10.3390/axioms11060299) - 20 Jun 2022
Abstract We present a survey of the theory of measures of noncompactness and discuss some fixed point theorems of Darbo’s type. We apply the
technique of measures of noncompactness to the characterization of classes of compact operators between certain sequence spaces, in solving
infinite [...] Read more.
(This article belongs to the Special Issue Operator Theory and Its Applications (/journal/axioms/special_issues/operator_theory))

Open Access Article (/2075-1680/11/6/298/pdf?version=1655725882)

Dynamic Behaviors of an Obligate Commensal Symbiosis Model with Crowley–Martin Functional Responses (/2075-1680/11/6/298)
by Lili Xu (https://sciprofiles.com/profile/2236861),
Yalong Xue (https://sciprofiles.com/profile/author/MElPN1VFclJxUW9PY0QyZHgrYUozZTJvWk1OQ1RYT04zT3lJZ1Y2cFF1OD0=),
Xiangdong Xie (https://sciprofiles.com/profile/author/Z1FRejUxcUp1WWdNK1oyUHgzTk9QT2dZOGViZVE3aWt5dnVmVjg4QXlQND0=)
and
Qifa Lin (https://sciprofiles.com/profile/author/SDg0TVVtbmlWUExkNDlOUlJPcmJVUGY1WmpyS2Z0eS80WVZLUlRoOFZ6TT0=)
Axioms 2022, 11(6), 298; https://doi.org/10.3390/axioms11060298 (https://doi.org/10.3390/axioms11060298) - 20 Jun 2022
Abstract A two species obligate commensal symbiosis model with Crowley–Martin functional response was proposed and studied in this paper. For
an autonomous case, local and global dynamic behaviors of the system were investigated, respectively. The conditions that ensure the existence of
the positive equilibrium [...] Read more.
(This article belongs to the Special Issue Advances in Applied Mathematical Modelling (/journal/axioms/special_issues/applied_mathematical_modelling))


Open Access Article (/2075-1680/11/6/297/pdf?version=1655805344)

Context-Free Grammars for Several Triangular Arrays (/2075-1680/11/6/297)


by Roberta Rui Zhou (https://sciprofiles.com/profile/2188220),
Jean Yeh (https://sciprofiles.com/profile/author/RGRUUWFML2xFdGlSSWRtVWZEdFdTSHpTNGUyNzBpWEdjSk5ROE9VK1Vkcz0=) and
Fuquan Ren (https://sciprofiles.com/profile/1640624)
Axioms 2022, 11(6), 297; https://doi.org/10.3390/axioms11060297 (https://doi.org/10.3390/axioms11060297) - 20 Jun 2022
Abstract In this paper, we present a unified grammatical interpretation of the numbers that satisfy a kind of four-term recurrence relation, including
the Bell triangle, the coefficients of modified Hermite polynomials, and the Bessel polynomials. Additionally, as an application, a criterion for real
zeros [...] Read more.
(This article belongs to the Special Issue A Themed Issue on Mathematical Inequalities, Analytic Combinatorics and Related Topics in Honor of Professor Feng Qi (/journal/axioms/special_issues/math_inequalities))

Open Access Article (/2075-1680/11/6/296/pdf?version=1655804722)

Public Opinion Spread and Guidance Strategy under COVID-19: A SIS Model Analysis (/2075-1680/11/6/296)
by Ge You (https://sciprofiles.com/profile/1287472),
Shangqian Gan (https://sciprofiles.com/profile/author/KzJoeG8yd1lSZWhNakNqZWt2ZlphTDRxcU9LVUtxN0F3ZkVYZldjVW4yYz0=),
Hao Guo (https://sciprofiles.com/profile/author/clVreGZuNlJhNExVbklQMzFZaVNzQT09) and
Abd Alwahed Dagestani (https://sciprofiles.com/profile/703319)
Axioms 2022, 11(6), 296; https://doi.org/10.3390/axioms11060296 (https://doi.org/10.3390/axioms11060296) - 17 Jun 2022
Abstract Both the suddenness and seriousness of COVID-19 have caused a variety of public opinions on social media, which becomes the focus of
social attention. This paper aims to analyze the strategies regarding the prevention and guidance of public opinion spread under COVID-19 in [...]
Read more.
(This article belongs to the Special Issue Soft Computing with Applications to Decision Making and Data Mining (/journal/axioms/special_issues/Soft_Computing_Decision_Making))


Open Access Article (/2075-1680/11/6/295/pdf?version=1655458598)

Existence Results for a Multipoint Fractional Boundary Value Problem in the Fractional Derivative Banach Space (/2075-1680/11/6/295)
by Djalal Boucenna (https://sciprofiles.com/profile/2278953), Amar Chidouh (https://sciprofiles.com/profile/1535392) and
Delfim F. M. Torres (https://sciprofiles.com/profile/555136)
Axioms 2022, 11(6), 295; https://doi.org/10.3390/axioms11060295 (https://doi.org/10.3390/axioms11060295) - 16 Jun 2022
Abstract We study a class of nonlinear implicit fractional differential equations subject to nonlocal boundary conditions expressed in terms of
nonlinear integro-differential equations. Using the Krasnosel’skii fixed-point theorem we prove, via the Kolmogorov–Riesz criteria, the existence of
solutions. The existence results are established in [...] Read more.
(This article belongs to the Special Issue Calculus of Variations, Optimal Control, and Mathematical Biology: A Themed Issue Dedicated to Professor Delfim F. M. Torres on the Occasion of His 50th Birthday (/journal/axioms/special_issues/delfim))

Open Access Article (/2075-1680/11/6/294/pdf?version=1655363485)

RbfDeSolver: A Software Tool to Approximate Differential Equations Using Radial Basis Functions (/2075-1680/11/6/294)
by Ioannis G. Tsoulos (https://sciprofiles.com/profile/1582137), Alexandros Tzallas (https://sciprofiles.com/profile/72475) and
Evangelos Karvounis (https://sciprofiles.com/profile/2076427)
Axioms 2022, 11(6), 294; https://doi.org/10.3390/axioms11060294 (https://doi.org/10.3390/axioms11060294) - 16 Jun 2022
Abstract A new method for solving differential equations is presented in this work. The solution of the differential equations is done by adapting an
artificial neural network, RBF, to the function under study. The adaptation of the parameters of the network is done with [...] Read more.
(This article belongs to the Special Issue Differential Equations and Genetic Algorithms (/journal/axioms/special_issues/equations_algorithms))


More Articles... (/search?q=&journal=axioms&sort=pubdate&page_count=50)

axioms
Article
PSO with Dynamic Adaptation of Parameters for Optimization in Neural Networks with Interval Type-2 Fuzzy Number Weights
Fernando Gaxiola 1 , Patricia Melin 2 , Fevrier Valdez 2, * , Juan R. Castro 3 and
Alain Manzo-Martínez 1
1 Engineering Faculty, University Autonomous of Chihuahua, Chihuahua 31125, Mexico;
lgaxiola@uach.mx (F.G.); amanzo@uach.mx (A.M.-M.)
2 Division of Graduate Studies, Tijuana Institute of Technology, Tijuana 22414, Mexico; pmelin@tectijuana.mx
3 Chemical Faculty of Sciences and Engineering, University Autonomous of Baja California, Tijuana 14418,
Mexico; jrcastror@uabc.edu.mx
* Correspondence: fevrier@tectijuana.mx

Received: 4 December 2018; Accepted: 9 January 2019; Published: 24 January 2019

Abstract: A dynamic adjustment of parameters for the particle swarm optimization (PSO) utilizing
an interval type-2 fuzzy inference system is proposed in this work. A fuzzy neural network with
interval type-2 fuzzy number weights using S-norm and T-norm is optimized with the proposed
method. A dynamic adjustment of the PSO allows the algorithm to behave better in the search for
optimal results because the dynamic adjustment provides good synchrony between the exploration
and exploitation of the algorithm. Results of experiments and a comparison between traditional neural networks and the fuzzy neural networks with interval type-2 fuzzy number weights using T-norms and S-norms are given to prove the performance of the proposed approach. For testing the performance of the proposed approach, some cases of time series prediction are applied, including the stock exchanges of Germany, Mexico, Dow-Jones, London, Nasdaq, Shanghai, and Taiwan.

Keywords: particle swarm optimization; fuzzy numbers; type-2 fuzzy weights; neural networks;
backpropagation; time series prediction

1. Introduction
In time series prediction problems, the main objective is to obtain results that approximate the
real data as closely as possible with minimum error. Intelligent systems are widely utilized for working with time series problems [1,2]. For example, Zhou et al. [3] used a dendritic mechanism
in a neural model and phase space reconstruction (PSR) for the prediction of a time series, and
Hrasko et al. [4] presented a prediction of a time series with a Gaussian-Bernoulli restricted Boltzmann
machine hybridized with the backpropagation algorithm.
For optimization problems, the principal objective is to explore a search space to find an optimal choice among a set of potential solutions. In many cases, the search space is too wide, which means that the time required to obtain an optimal solution is very long. For this reason, a set of computational intelligence methods has been the focus of some recent work on improving the solving of optimization and search problems [5–7]. These methods can achieve competitive results, but not necessarily the best solutions. Furthermore, heuristic algorithms can be used as alternative methods. Although these algorithms are not guaranteed to find the best solution, they are able to obtain a near-optimal solution in a reasonable time.
The main contribution is the dynamic adaptation of the parameters for particle swarm
optimization (PSO) used to optimize parameters for a neural network with interval type-2 fuzzy

Axioms 2019, 8, 14; doi:10.3390/axioms8010014 www.mdpi.com/journal/axioms



number weights with different T-norms and S-norms, proposed in Gaxiola et al. [8] and used to perform the prediction of financial time series [9–11].
The adaptation of parameters is performed with a Type-2 fuzzy inference system (T2-FIS) to adjust
the c1 and c2 parameters for the particle swarm optimization. This adaptation is based on the work of
Olivas et al. [12], who used interval type-2 fuzzy logic to improve the convergence and diversity of
PSO to optimize the minimum for several mathematical functions. In this work, control of the c1 and
c2 parameters is performed using fuzzy rules to achieve a good dynamic adaptation [13,14]. The use
of type-2 fuzzy logic gives a better performance in our work and this is justified by recent research
from Wu and Tan [15] and Sepulveda et al. [16], who demonstrated that interval type-2 fuzzy systems
perform better than type-1 fuzzy systems.
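As a rough sketch of this idea, the following code runs a basic PSO in which c1 and c2 are recomputed every iteration from the iteration progress. The linear schedule is only a simplified stand-in for the interval type-2 fuzzy inference system used in this work, and the sphere function, swarm size, velocity clamping, and parameter ranges are illustrative assumptions, not the settings of the paper:

```python
import random

# Sketch: PSO with dynamic adaptation of c1 and c2 over the iterations.
# The linear schedule below stands in for the interval type-2 fuzzy
# inference system of the paper; test function, swarm size, velocity
# clamping, and parameter ranges are illustrative assumptions.

def adapt_parameters(progress, c_min=0.5, c_max=2.5):
    # Early iterations: c1 > c2 (exploration); late: c1 < c2 (exploitation).
    c1 = c_max - (c_max - c_min) * progress
    c2 = c_min + (c_max - c_min) * progress
    return c1, c2

def pso(fitness, dim=2, particles=20, iterations=200, vmax=1.0):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]                 # personal bests (y_ij)
    gbest = min(pbest, key=fitness)[:]          # global best (y^_j)
    for t in range(iterations):
        c1, c2 = adapt_parameters(t / iterations)
        for i in range(particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                v = (vel[i][j]
                     + c1 * r1 * (pbest[i][j] - pos[i][j])
                     + c2 * r2 * (gbest[j] - pos[i][j]))
                vel[i][j] = max(-vmax, min(vmax, v))  # clamping (assumption)
                pos[i][j] += vel[i][j]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=fitness)[:]
    return gbest

best = pso(lambda x: sum(v * v for v in x))  # minimize the sphere function
```

The schedule starts with c1 > c2 (exploration) and ends with c1 < c2 (exploitation), which is the synchrony between exploration and exploitation described above.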
The neural network with interval type-2 fuzzy number weights optimized with the proposed approach is different from other works using the adjustment of weights in neural networks [17,18], such as James and Donald [19,20].
A comparison between the traditional neural network and the proposed optimization for the fuzzy neural network with interval type-2 fuzzy number weights is performed in this paper. In this case, a prediction for a financial time series is used to verify the efficiency of the proposed method.
The use of T-norms and S-norms of sum-product can be seen in Hamacher [21,22]. Olivas [23] developed a new method of parameter adaptation for PSO using fuzzy logic. In their work, the authors
improved the performance of the traditional PSO. We decided to use the parameter adaptation because
in previous works, we have had good results with other metaheuristic methods. For instance, in [24],
we proposed a gravitational search algorithm (GSA) with parameter adaptation to improve the original
GSA metaheuristic algorithm. The next section states the theoretical approach to the work in the paper.
Section 3 presents the background research performed to adjust the parameters in PSO, as well as
research performed in the neural network area with fuzzy numbers. Section 4 explains the proposed
approach and a description of the problem to be solved in the paper. Section 5 presents the simulation
results for the proposed approach. Section 6 presents a discussion of the results of the experiments.
Finally, in Section 7, conclusions are shown.
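For reference, the T-norm and S-norm pairs named above (sum-product, Hamacher, and Frank) can be written out as follows. The formulas are the standard ones from the fuzzy-logic literature, with the conventional Hamacher parameter gamma = 0 and an illustrative Frank parameter s = 2 assumed; they are not taken from this paper:

```python
import math

# Standard T-norm/S-norm pairs (assumed forms: Hamacher with gamma = 0,
# Frank with parameter s = 2; not the paper's exact configuration).

def t_product(a, b):                 # algebraic product T-norm
    return a * b

def s_prob_sum(a, b):                # probabilistic sum S-norm (dual of product)
    return a + b - a * b

def t_hamacher(a, b):                # Hamacher T-norm (gamma = 0)
    return 0.0 if a == b == 0 else (a * b) / (a + b - a * b)

def s_hamacher(a, b):                # Hamacher S-norm (gamma = 0)
    return 1.0 if a * b == 1 else (a + b - 2 * a * b) / (1 - a * b)

def t_frank(a, b, s=2.0):            # Frank T-norm, parameter s > 0, s != 1
    return math.log(1 + (s**a - 1) * (s**b - 1) / (s - 1), s)

def s_frank(a, b, s=2.0):            # Frank S-norm via duality S(a,b) = 1 - T(1-a, 1-b)
    return 1 - t_frank(1 - a, 1 - b, s)
```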

2. Theoretical Background

2.1. Particle Swarm Optimization (PSO)


Kennedy and Eberhart [25] introduced a bio-inspired algorithm based on the behavior of a swarm of particles "flying" (moving around) through a multidimensional search space, where the potential solutions to the problem are represented as particles [26]. The dynamics of the algorithm are performed
by adjusting the position of each particle based on the experience acquired for each particle and its
neighbors [27,28]. The search in PSO is defined by the equations to update the position (Equation (1))
and velocity (Equation (2)) of each particle, respectively:

xi(t + 1) = xi(t) + vi(t + 1)    (1)

where:
xi is the particle i.
t is the current iteration.
vi is the velocity of the particle i.
vij(t + 1) = vij(t) + c1r1(t)[yij(t) − xij(t)] + c2r2(t)[ŷj(t) − xij(t)]    (2)

where:
vij is the velocity of the particle i in the dimension j.
c1 is the cognitive factor (importance of the best previous position of the particle).
c2 is the social factor (importance of the best global position of the swarm).
r1, r2 are random values in the range of [0, 1].
yij is the best position of the particle i in the dimension j.
xij is the current position of the particle i in the dimension j.
ŷj is the best global position of the swarm in the dimension j.

In Figure 1, the movements of a particle in the search space according to Equations (1) and (2) are presented.

Figure 1. Movement of the particle in the search space.

The triangle and rhombus points represent the movement of the particle with a change in the c1 and c2 parameters. The triangle point is the result for the case when c1 > c2; in this case, the swarm performs exploration and "flies" through the search space to find a better area that allows obtaining the global best position, so the exploration allows the particles to perform long movements, travelling across the whole search space. The rhombus point is the result for the case when c1 < c2; in this case, the swarm uses exploitation and "flies" within the best area of the search space with short movements to perform an exhaustive search in that area.

2.2. Type-2 Fuzzy Systems

Zadeh [29] introduced the approach of type-2 fuzzy sets as an extension of the type-1 fuzzy sets. Later, Mendel and John [30] defined the type-2 fuzzy set as follows:

Ã = {((x, u), μÃ(x, u)) | ∀x ∈ X, ∀u ∈ Jx ⊆ [0, 1]}    (3)

where:
Ã is a type-2 fuzzy set.
μÃ(x, u) is a type-2 membership function.
Jx is called the primary membership function of Ã.

The type-2 fuzzy set (T2FS) consists of generating a bounded region of uncertainty in the primary membership, called the footprint of uncertainty (FOU) [31]. The interval type-2 fuzzy sets are a particular case of T2FS in which the FOU is generated by the union of two type-1 membership functions (MF), a lower MF (LMF) and an upper MF (UMF) [32]. For IT2FS, the defuzzification is performed by the calculation of the centroid of the FOU as founded in Karnik and Mendel [33]. The use of a T2FS allows the handling of more uncertainty and provides fuzzy systems with more robustness than type-1 fuzzy sets [34–36].
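As a small illustration of these definitions, an interval type-2 membership value at a point x can be represented by the pair [LMF(x), UMF(x)] bounding the FOU. The triangular shapes and the 0.8 scaling of the lower function below are illustrative assumptions, not parameters from this work:

```python
# Sketch of an interval type-2 fuzzy set: the FOU is bounded by an upper
# and a lower type-1 triangular membership function (UMF and LMF).
# Triangle parameters and the 0.8 scaling are illustrative assumptions.

def triangular(x, a, b, c):
    """Type-1 triangular MF with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x):
    """Return the membership interval [LMF(x), UMF(x)] inside the FOU."""
    upper = triangular(x, 0.0, 5.0, 10.0)        # UMF: wider triangle
    lower = 0.8 * triangular(x, 1.0, 5.0, 9.0)   # LMF: narrower, scaled down
    return lower, upper

lo, hi = it2_membership(5.0)   # at the shared peak: lo = 0.8, hi = 1.0
```

The gap between the two curves is the FOU, which is what lets an interval type-2 set represent more uncertainty than a single type-1 membership function.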

2.3. Fuzzy Neural Network


The area of fuzzy neural networks consists of developing hybrid algorithms that combine neural network architectures with fuzzy logic theory [17,18,37,38].
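To make the hybrid idea concrete, the following sketch shows a single neuron whose weights are intervals [w_lo, w_hi]. The interval arithmetic, the sigmoid activation, and the midpoint reduction are simplifying assumptions for illustration; the networks in this paper use interval type-2 fuzzy number weights with T-norm/S-norm operations and a more elaborate type reduction:

```python
import math

# Sketch of a neuron with interval weights: each weight is [w_lo, w_hi],
# the net input is computed with interval arithmetic, and the interval
# output is reduced to a crisp value by its midpoint. The sigmoid and the
# midpoint reduction are simplifying assumptions, not the paper's method.

def interval_neuron(inputs, weights):
    lo = sum(min(w_lo * x, w_hi * x) for x, (w_lo, w_hi) in zip(inputs, weights))
    hi = sum(max(w_lo * x, w_hi * x) for x, (w_lo, w_hi) in zip(inputs, weights))
    act = lambda s: 1.0 / (1.0 + math.exp(-s))   # sigmoid activation
    return (act(lo) + act(hi)) / 2.0             # midpoint type reduction

y = interval_neuron([1.0, -0.5], [(0.2, 0.4), (0.1, 0.3)])
```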

3. Background Research
The PSO bio-inspired algorithm has received many performance improvements from researchers. The most important modifications made to PSO increase the diversity of the swarm and enhance its convergence [39,40].
Olivas et al. [12] proposed a method to dynamically adjust the parameters of particle swarm
optimization using type-2 and type-1 fuzzy logic.
Muthukaruppan and Er [41] presented a hybrid algorithm of particle swarm optimization using a
fuzzy expert system in the diagnosis of coronary artery disease.
Taher et al. [42] developed a fuzzy system to perform an adjustment of the parameters, c1 , c2 , and
w (inertia weight), for the particle swarm optimization.
Wang et al. [43] presented a PSO variant utilizing a fuzzy system to implement changes in the
velocity of the particle according to the distance between all particles. If the distance is small, the
velocity of some particles is modified drastically.
Hongbo and Abraham [44] proposed a new parameter, called the minimum velocity threshold, in the equation used to calculate the velocity of the particle. In this approach, the new parameter controls the velocity of the particles, and fuzzy logic is applied to make the new parameter dynamically adaptive.
Shi and Eberhart [45] presented a dynamic adaptive PSO to adjust the inertia weight utilizing a
fuzzy system. The fuzzy system works with two input variables, the current inertia weight and the
current best performance evaluation, and the change of the inertia weight is the output variable.
In neural networks, the use of fuzzy logic theory to obtain hybrid algorithms has improved the
results for several problems [46–48]. In this area, some significant papers of works with fuzzy numbers
are presented [49,50]:
Dunyak et al. [51] proposed a fuzzy neural network that works with fuzzy numbers to obtain new weights (inputs and outputs) during training. Coroianu et al. [52] presented
the use of the inverse F-transform to obtain the optimal fuzzy numbers, preserving the support and
the convergence of the core.
Li Z. et al. [53] proposed a fuzzy neural network that works with two fuzzy numbers to calculate
the results with operations, like subtraction, addition, division, and multiplication. Fard et al. [54]
presented a fuzzy neural network utilizing sum and product operations for two interval type-2
triangular fuzzy numbers in combination with the Stone-Weierstrass theorem.
Molinari [55] proposed a comparison of generalized triangular fuzzy numbers with other fuzzy numbers. Asady [56] presented a comparison of a new method for approximating trapezoidal fuzzy numbers with approximation methods from the literature.
Figueroa-García et al. [57] presented a comparison for different interval type-2 fuzzy numbers
performing distance measurement. Requena et al. [58] proposed a fuzzy neural network working
with trapezoidal fuzzy numbers to obtain the distance between the numbers and utilized a proposed
decision personal index (DPI).
Valdez et al. [59] proposed a novel approach applied to PSO and ACO algorithms. They used
fuzzy systems to dynamically update the parameters for the two algorithms. In addition, as another
important work, we can mention Fang et al. [60], who proposed a hybridized model of a phase space reconstruction (PSR) algorithm with the bi-square kernel (BSK) regression model for short-term load forecasting. Also worth mentioning is the interesting work of Dong et al. [61], who proposed a hybridization of the support vector regression (SVR) model with a cuckoo search (CS) algorithm for short-term electric load forecasting.
Axioms 2019, 8, 14 5 of 21

4. Proposed Method and Problem Description


Axioms2018, 7, x FOR PEER REVIEW 5 of 21
In this work, the optimization for interval type-2 fuzzy number weights neural
Axioms2018, 7, x FOR PEER REVIEW 5 of 21
networks
(IT2FNWNN) using Method
4. Proposed particleand swarm
Problem optimization
Description (PSO) is proposed. The PSO is dynamically adapted
4. Proposed
utilizing interval
In this Method
type-2 andoptimization
work, fuzzy
the Problem
In this work, the optimization for interval type-2 fuzzy number weights in the neural networks (IT2FNWNN) using particle swarm optimization (PSO) is proposed. The PSO is dynamically adapted utilizing interval type-2 fuzzy systems. A comparison of the traditional neural network and the IT2FNWNN optimized with PSO is performed. The operations in the neurons are obtained with the calculation of T-norms and S-norms of the sum-product, Hamacher and Frank [62–64].
For testing the proposed method, the prediction of the financial time series of the stock exchange markets of Germany, Mexico, Dow-Jones, London, Nasdaq, Shanghai, and Taiwan is performed. The databases consisted of 1250 data points for each time series (2011 to 2015).
The architecture of the traditional neural network used in this paper (see Figure 2) consists of 16 neurons for the hidden layer and one neuron for the output layer.

Figure 2. Scheme of the architecture of the traditional neural network.
The architecture of the interval type-2 fuzzy number weight neural network (IT2FNWNN) has the same structure as the traditional neural network, with 16, 39, and 19 neurons in the hidden layer for the sum-product, Hamacher and Frank, respectively, and one neuron for the output layer (see Figure 3) [8].

Figure 3. Scheme of the architecture of the interval type-2 fuzzy number weight neural network.
The IT2FNWNN works with fuzzy number weights in the connections between the layers, and these weights are represented as follows [8]:

$$\widetilde{w} = \left[\underline{w}, \overline{w}\right] \qquad (4)$$
where $\underline{w}$ and $\overline{w}$ are obtained using the Nguyen-Widrow algorithm [65] in the initial execution of the network.
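As an illustration, the interval weights can be obtained by widening a crisp Nguyen-Widrow initialization into lower and upper bounds; this is a minimal sketch, in which the `spread` used to build the interval around each crisp weight is an assumption and not a value taken from the paper:

```python
import numpy as np

def nguyen_widrow_interval(n_inputs, n_hidden, spread=0.1, seed=0):
    """Sketch: crisp Nguyen-Widrow initialization [65], widened into
    interval weights [w_lower, w_upper]. The `spread` is illustrative."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)   # Nguyen-Widrow scale factor
    w = beta * w / np.linalg.norm(w, axis=1, keepdims=True)
    return w - spread, w + spread

# e.g., 16 hidden neurons with 5 inputs, as in the traditional network
w_lower, w_upper = nguyen_widrow_interval(5, 16)
```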
Axioms 2019, 8, 14 6 of 21

The linear function as the activation function for the output neuron and the hyperbolic secant for the hidden neurons are applied. The S-Norm and T-Norm for obtaining the lower and upper outputs of the neurons are applied, respectively: for the lower output, the S-Norm is used, and for the upper output, the T-Norm is utilized. The T-Norms and S-Norms used in this work are described as follows:

Sum-product:

$$\mathrm{TNorm}\left(\underline{Net}, \overline{Net}\right) = \underline{Net} \times \overline{Net} \qquad (5)$$

$$\mathrm{SNorm}\left(\underline{Net}, \overline{Net}\right) = \underline{Net} + \overline{Net} - \mathrm{TNorm}\left(\underline{Net}, \overline{Net}\right) \qquad (6)$$

Hamacher, for $\gamma > 0$:

$$\mathrm{TNormH}\left(\underline{Net}, \overline{Net}, \gamma\right) = \frac{\underline{Net} \times \overline{Net}}{\gamma + (1 - \gamma)\left(\underline{Net} + \overline{Net} - \underline{Net} \times \overline{Net}\right)} \qquad (7)$$

$$\mathrm{SNormH}\left(\underline{Net}, \overline{Net}, \gamma\right) = \frac{\underline{Net} + \overline{Net} + (\gamma - 2)\,\underline{Net} \times \overline{Net}}{1 + (\gamma - 1)\,\underline{Net} \times \overline{Net}} \qquad (8)$$

Frank, for $s > 0$ and $s \neq 1$:

$$\mathrm{TNormF}\left(\underline{Net}, \overline{Net}, s\right) = \log_s\left(1 + \frac{\left(s^{\underline{Net}} - 1\right)\left(s^{\overline{Net}} - 1\right)}{s - 1}\right) \qquad (9)$$

$$\mathrm{SNormF}\left(\underline{Net}, \overline{Net}, s\right) = 1 - \log_s\left(1 + \frac{\left(s^{1 - \underline{Net}} - 1\right)\left(s^{1 - \overline{Net}} - 1\right)}{s - 1}\right) \qquad (10)$$
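Equations (5)–(10) can be sketched in Python as follows; this is a minimal illustration operating on scalar activations in [0, 1], and the function and argument names are illustrative, not taken from the paper:

```python
import math

def tnorm_prod(a, b):                 # Equation (5): sum-product T-norm
    return a * b

def snorm_prod(a, b):                 # Equation (6): sum-product S-norm
    return a + b - a * b

def tnorm_hamacher(a, b, g):          # Equation (7), gamma > 0
    return (a * b) / (g + (1 - g) * (a + b - a * b))

def snorm_hamacher(a, b, g):          # Equation (8)
    return (a + b + (g - 2) * a * b) / (1 + (g - 1) * a * b)

def tnorm_frank(a, b, s):             # Equation (9), s > 0 and s != 1
    return math.log(1 + (s**a - 1) * (s**b - 1) / (s - 1), s)

def snorm_frank(a, b, s):             # Equation (10)
    return 1 - math.log(1 + (s**(1 - a) - 1) * (s**(1 - b) - 1) / (s - 1), s)
```

A quick sanity check is that with γ = 1 the Hamacher pair reduces exactly to the sum-product pair.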

The back-propagation algorithm using gradient descent and an adaptive learning rate is
implemented for the neural networks.
A modification of the back-propagation algorithm that allows operations with interval type-2
fuzzy numbers to obtain new weights for the forthcoming epochs of the fuzzy neural network is
proposed [8].
The particle swarm optimization (PSO) is used to optimize the number of neurons for the
IT2FNWNN for operations with S-norm and T-norm of Hamacher and Frank. Also, for Hamacher, the
parameter, γ, is optimized, and for Frank, the parameter, s, is optimized.
The parameters used in the PSO to optimize the IT2FNWNN are presented in Table 1. The same
parameters and design are applied to optimize all the IT2FNWNN for all the financial time series.

Table 1. Parameters of PSO to optimize IT2FNWNN.

Parameters Values
Particles 50
Dimensions 2
Iterations 50
Inertia Weight 0.1
Constriction Coefficient (C) 1
R1, R2 Random in [0, 1]
c1 , c2 Dynamic adjust IT2FIS

In the literature [27], the recommended values for the c1 and c2 parameters are in the interval of 0.5 to 2.5; also, changing these parameters over the iterations of the PSO can generate better results.
In the PSO used to optimize the IT2FNWNN, the parameters c1 and c2 from Equation (2) are selected for dynamic adjustment utilizing an interval type-2 fuzzy inference system (IT2FIS). This selection is based on the fact that these parameters allow the particles to generate movements for performing exploration or exploitation in the search space. The structure of the IT2FIS utilized in the PSO is presented in Figure 4.

Figure 4. Structure of type-2 fuzzy inference system for the PSO.

Based on the literature [12,66,67], the inputs selected for the IT2FIS are the diversity of the swarm and the percentage of iterations. Also, in [68], a problem of online adaptation of radial basis function (RBF) neural networks is presented. The authors present a new adaptive training, which is able to modify both the structure of the network (the number of nodes in the hidden layer) and the output weights.
The diversity input is defined by Equation (11), in which the degree of dispersion of the swarm is calculated. This means that for less diversity the particles are closer together, and for high diversity the particles are more separated. The diversity equation can be taken as the average of the Euclidean distances between each particle and the best particle:

$$\mathrm{Diversity}(t) = \frac{1}{n_s}\sum_{i=1}^{n_s}\sqrt{\sum_{j=1}^{n_x}\left(x_{ij}(t) - \bar{x}_j(t)\right)^2} \qquad (11)$$
where $x_{ij}(t)$ represents the particle in iteration $t$, and $\bar{x}_j(t)$ represents the best particle in iteration $t$.
For the diversity variable, normalization is performed based on Equation (12), taking values in the range of 0 to 1:

$$\mathrm{DiversityNorm} = \begin{cases} 0, & \text{if } minDiver = maxDiver \\ DiverNorm, & \text{if } minDiver \neq maxDiver \end{cases} \qquad (12)$$
where minDiver is the minimum Euclidean distance for the particle, maxDiver is the maximum Euclidean distance for the particle, and DiverNorm is the value obtained with Equation (13).
$$\mathrm{DiverNorm} = \frac{\mathrm{Diversity} - minDiver}{maxDiver - minDiver} \qquad (13)$$
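Equations (11)–(13) can be sketched as follows; this is a minimal illustration in which the variable names are assumptions:

```python
import numpy as np

def diversity(swarm, best):
    """Equation (11): average Euclidean distance between each particle
    and the best particle; `swarm` has shape (ns, nx)."""
    return float(np.mean(np.linalg.norm(swarm - best, axis=1)))

def diversity_norm(div, min_diver, max_diver):
    """Equations (12)-(13): normalize the diversity to [0, 1]."""
    if min_diver == max_diver:
        return 0.0
    return (div - min_diver) / (max_diver - min_diver)
```

For a two-particle swarm at distances 0 and 5 from the best particle, the diversity is 2.5.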

The membership functions for the input diversity are presented in Figure 5.

Figure 5. Membership function for the input diversity.
The iteration input is considered as a percentage of iterations during the execution of the PSO; the values for this input are in the range from 0 to 1. At the start of the algorithm, the iteration is considered as 0% and is increased until it reaches 100% at the end of the execution. The values are obtained as follows:

$$\mathrm{Iteration} = \frac{\mathrm{Current\ iteration}}{\mathrm{Total\ of\ iterations}} \qquad (14)$$

The membership functions for the input iteration are presented in Figure 6.

Figure 6. Membership function for the input iteration.
The membership functions for the outputs, c1 and c2 , are presented in Figures 7 and 8, respectively.
The values for the two outputs are in the range from 0 to 3.

Figure 7. Membership function for the output, c1.

Figure 8. Membership function for the output, c2.

Based on the literature on the behavior of the c1 and c2 parameters described in Section 2, for the dynamic adjustment of the c1 and c2 parameters in each iteration of the PSO, a fuzzy rule set with nine rules is designed as follows:
1. If (Iteration is Low) and (Diversity is Low) then (c1 is VeryHigh) (c2 is VeryLow).
2. If (Iteration is Low) and (Diversity is Medium) then (c1 is High) (c2 is Medium).
3. If (Iteration is Low) and (Diversity is High) then (c1 is High) (c2 is Low).
4. If (Iteration is Medium) and (Diversity is Low) then (c1 is High) (c2 is Low).
5. If (Iteration is Medium) and (Diversity is Medium) then (c1 is Medium) (c2 is Medium).
6. If (Iteration is Medium) and (Diversity is High) then (c1 is Low) (c2 is High).
7. If (Iteration is High) and (Diversity is Low) then (c1 is Medium) (c2 is VeryHigh).
8. If (Iteration is High) and (Diversity is Medium) then (c1 is Low) (c2 is High).
9. If (Iteration is High) and (Diversity is High) then (c1 is VeryLow) (c2 is VeryHigh).
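The nine rules above can be summarized as a lookup from the two linguistic inputs to the linguistic outputs for c1 and c2. The sketch below is a crisp simplification of the IT2FIS: the numeric levels attached to each linguistic term over the [0, 3] output range are illustrative assumptions, not the paper's interval type-2 membership functions:

```python
# Crisp stand-in for the nine IT2FIS rules: each (iteration, diversity)
# pair maps to linguistic outputs for c1 and c2. The numeric values for
# the linguistic terms are assumed for illustration only.
LEVELS = {"VeryLow": 0.3, "Low": 0.9, "Medium": 1.5, "High": 2.1, "VeryHigh": 2.7}

RULES = {
    ("Low", "Low"):       ("VeryHigh", "VeryLow"),   # rule 1
    ("Low", "Medium"):    ("High", "Medium"),        # rule 2
    ("Low", "High"):      ("High", "Low"),           # rule 3
    ("Medium", "Low"):    ("High", "Low"),           # rule 4
    ("Medium", "Medium"): ("Medium", "Medium"),      # rule 5
    ("Medium", "High"):   ("Low", "High"),           # rule 6
    ("High", "Low"):      ("Medium", "VeryHigh"),    # rule 7
    ("High", "Medium"):   ("Low", "High"),           # rule 8
    ("High", "High"):     ("VeryLow", "VeryHigh"),   # rule 9
}

def adjust_c1_c2(iteration_term, diversity_term):
    """Return illustrative (c1, c2) values for the matched rule."""
    c1_term, c2_term = RULES[(iteration_term, diversity_term)]
    return LEVELS[c1_term], LEVELS[c2_term]
```

The pattern visible in the table, c1 decreasing and c2 increasing as the search progresses, shifts the particles from exploration toward exploitation.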
In Figure 9, a block diagram of the procedure for the proposed approach is presented.

Figure 9. Block diagram of the proposed approach.

5. Simulation Results
We performed the experiments for the financial time series of the stock exchange markets of Germany, Mexico, Dow-Jones, London, Nasdaq, Shanghai, and Taiwan, and for all experiments, we used 1250 data points. In this case, 750 data points are considered for the training stage and 500 data points for the testing stage.
In Table 2, the results of the experiments for the financial time series of the stock exchange markets for the traditional neural network with 16 neurons in the hidden layer are presented.
The results of the experiments are obtained with the mean absolute error (MAE) and the root mean square error (RMSE). Thirty experiments with the same parameters and conditions were performed to obtain the average error, but only the best result and the average of the 30 experiments are presented.

Table 2. Results of the traditional neural network for the financial time series.

Error Results Germany Mexican Dow-Jones London Nasdaq Shanghai Taiwan
MAE Best 1089.19 733.50 920.89 167.05 468.81 361.95 383.00
MAE Average 1138.48 755.34 956.03 168.76 470.43 367.90 412.52
RMSE Best 1264.68 917.93 1075.70 200.13 550.95 561.29 458.28
RMSE Average 1332.28 935.58 1109.73 203.29 558.95 580.00 480.11
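The two error measures reported in the result tables can be computed as follows; this is a standard sketch, not code from the paper:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def rmse(y_true, y_pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))
```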

In Table 3, the results of the experiments for the financial time series of the stock exchange for
the IT2FNWNN with 16 neurons in the hidden layer for the S-norm and the T-norm of the sum of the
product are presented (IT2FNWNNSp).

Table 3. Results of the IT2FNWNN with the S-norm and T-norm of the sum of the product for the
financial time series.

Error Results Germany Mexican Dow-Jones London Nasdaq Shanghai Taiwan
MAE Best 1038.84 739.26 736.26 155.51 585.70 391.21 321.86
MAE Average 1148.27 760.49 937.09 159.20 725.07 403.97 339.18
RMSE Best 1338.21 915.45 840.21 192.14 786.84 458.75 499.62
RMSE Average 1414.93 939.70 1098.61 196.65 884.07 474.99 615.84

In Table 4, the results of the experiments for the financial time series of the stock exchange markets
for the IT2FNWNN with 39 neurons in the hidden layer for the S-norm and T-norm of Hamacher
are presented (IT2FNWNNH). The value for the parameter, γ, in the Hamacher IT2FNWNN without
optimization is 1.

Table 4. Results of the IT2FNWNN with the S-norm and T-norm of Hamacher for the financial
time series.

Error Results Germany Mexican Dow-Jones London Nasdaq Shanghai Taiwan
MAE Best 1055.06 730.13 686.05 151.76 463.77 314.71 349.78
MAE Average 1093.54 756.95 787.61 158.96 711.63 334.81 392.70
RMSE Best 1216.24 912.65 800.68 188.46 570.36 569.32 449.41
RMSE Average 1334.42 945.19 925.43 198.98 854.65 626.49 472.24

In Table 5, the results of the experiments for the financial time series of the stock exchange for the
IT2FNWNN with 19 neurons in the hidden layer for the S-norm and T-norm of Frank are presented
(IT2FNWNNF). The value for the parameter, s, in the Frank IT2FNWNN without optimization is 2.8.

Table 5. Results of the IT2FNWNN with the S-norm and T-norm of Frank for the financial time series.

Error Results Germany Mexican Dow-Jones London Nasdaq Shanghai Taiwan
MAE Best 1058.87 730.96 853.69 153.35 663.02 354.07 362.56
MAE Average 4439.15 756.19 944.12 159.58 741.04 336.64 399.85
RMSE Best 1256.37 915.84 972.34 186.68 730.97 607.68 448.20
RMSE Average 1402.82 936.95 1101.39 197.71 902.34 640.26 470.09

We performed the optimization of the Hamacher IT2FNWNN and Frank IT2FNWNN for the
number of neurons in the hidden layer and the parameters, γ and s, respectively, with PSO using the
dynamic adjustment for the c1 and c2 parameters of Equation (2), applying the interval type-2 fuzzy
inference system of Figure 4 and the parameters of Table 1.
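Equation (2) is not reproduced in this excerpt; as a hedged sketch, the update below uses the standard PSO velocity form with the inertia weight and constriction coefficient values from Table 1, and takes the dynamically adjusted c1 and c2 as inputs. Whether this matches the paper's exact Equation (2) is an assumption:

```python
import numpy as np

def pso_velocity(v, x, pbest, gbest, c1, c2, w=0.1, C=1.0, rng=None):
    """One velocity update per particle: inertia + cognitive + social
    terms, scaled by the constriction coefficient C. Here c1 and c2 are
    the outputs of the IT2FIS at the current iteration."""
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.random(x.shape)  # R1: random in [0, 1], per Table 1
    r2 = rng.random(x.shape)  # R2: random in [0, 1], per Table 1
    return C * (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
```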
In Table 6, the optimization for each financial time series for the Hamacher IT2FNWNN is presented. We present the best result of the five experiments performed.

Table 6. Results of the optimization with Dynamic PSO for Hamacher IT2FNWNN for the financial
time series.

Financial Time Series Prediction Error No. Neurons Parameter γ


Germany 417.33 39 1.6471
Mexican 744.34 21 0.9997
Dow-Jones 517.58 15 1.9270
London 155.72 22 1.0016
Nasdaq 227.53 56 0.6822
Shanghai 211.40 99 1.7397
Taiwan 236.15 28 1.1997

In Table 7, the results of the experiments for the financial time series of the stock exchange for
the IT2FNWNN for the Hamacher S-norm and T-norm optimized with dynamic PSO are presented
(IT2FNWNNH-PSO). The value for the number of neurons in the hidden layer and the parameter, γ,
in the Hamacher IT2FNWNN are taken from Table 6.

Table 7. Results of the IT2FNWNN with the S-norm and T-norm of Hamacher optimized with dynamic
PSO for the financial time series.

Error Results Germany Mexican Dow-Jones London Nasdaq Shanghai Taiwan
MAE Best 417.33 728.61 517.58 154.87 227.53 211.40 236.15
MAE Average 813.21 751.48 881.97 160.47 342.14 313.86 401.49
RMSE Best 507.43 909.39 769.15 192.81 283.92 443.85 331.32
RMSE Average 1172.56 930.47 1290.34 197.99 530.34 543.39 472.44

In Table 8, the optimization for each financial time series for the Frank IT2FNWNN is presented.
We show only the best result of the five experiments performed.

Table 8. Results of the optimization with dynamic PSO for Frank IT2FNWNN for the financial
time series.

Financial Time Series Prediction Error No. Neurons Parameter s


Germany 935.77 89 1.7796
Mexican 726.84 22 1.4607
Dow-Jones 768.81 119 3.1526
London 153.60 32 1.2669
Nasdaq 302.19 73 1.2910
Shanghai 277.72 79 1.9962
Taiwan 359.21 124 1.6347

In Table 9, the results of the experiments for the financial time series of the stock exchange
for the IT2FNWNN for Frank S-norm and T-norm optimized with dynamic PSO are presented
(IT2FNWNNF-PSO). The value for the number of neurons in the hidden layer and the parameter, s,
in the Frank IT2FNWNN are taken from Table 8.

Table 9. Results of the IT2FNWNN with the S-norm and T-norm of Frank optimized with dynamic
PSO for the financial time series.

Error Results Germany Mexican Dow-Jones London Nasdaq Shanghai Taiwan
MAE Best 935.77 726.84 768.81 153.60 288.70 277.72 359.21
MAE Average 1050.87 773.60 863.79 163.12 391.39 373.38 399.28
RMSE Best 1226.07 911.63 979.26 188.63 349.02 491.98 439.93
RMSE Average 1302.99 963.74 1082.81 201.38 469.52 597.62 481.62

In Table 10, a comparison of the best results for the prediction for all financial time series of the
stock exchange markets with the traditional neural network (TNN), the interval type-2 fuzzy numbers
weights neural network with the sum of the product (IT2FNWNNSp), Hamacher (IT2FNWNNH)
and Frank (IT2FNWNNF) without optimization, and Hamacher (IT2FNWNNH-PSO) and Frank
(IT2FNWNNF-PSO) optimized with dynamic PSO is presented.

Table 10. Comparison of the best results of financial time series prediction with all neural networks.

Financial Time Series Error TNN IT2FNWNNSp IT2FNWNNH IT2FNWNNF IT2FNWNNH-PSO IT2FNWNNF-PSO
Germany MAE 1089.19 1038.84 1055.06 1058.87 417.33 935.77
Germany RMSE 1264.68 1338.21 1216.24 1256.37 507.43 1226.07
Mexican MAE 733.50 739.26 730.13 730.96 728.61 726.84
Mexican RMSE 917.93 915.45 912.65 915.84 909.39 911.63
Dow-Jones MAE 920.89 736.26 686.05 853.69 517.58 768.81
Dow-Jones RMSE 1075.70 840.21 800.68 972.34 769.15 979.26
London MAE 167.05 155.51 151.76 153.35 154.87 153.60
London RMSE 200.13 192.14 188.46 186.68 192.81 188.63
Nasdaq MAE 468.81 585.70 463.77 663.02 227.53 288.70
Nasdaq RMSE 550.95 786.84 570.36 730.97 283.92 349.02
Shanghai MAE 361.95 391.21 314.71 354.07 211.40 277.72
Shanghai RMSE 561.29 458.75 569.32 607.68 443.85 491.98
Taiwan MAE 383.00 321.86 349.78 362.56 236.15 359.21
Taiwan RMSE 458.28 499.62 449.41 448.20 331.32 439.93

In Figures 10–16, the plots of the real data of each stock exchange time series against the predicted data of the best results for the interval type-2 fuzzy number weights neural network taken from the comparison in Table 10 are presented.

Figure 10. Illustration of the real data against the prediction data of the Germany stock exchange time series for the fuzzy neural network using Hamacher optimized with dynamic PSO.

Figure 11. Illustration of the real data against the prediction data of the Mexican stock exchange time series for the fuzzy neural network using Frank optimized with dynamic PSO.

Figure 12. Illustration of the real data against the prediction data of the Dow-Jones stock exchange time series for the fuzzy neural network using Hamacher optimized with dynamic PSO.

Figure 13. Illustration of the real data against the prediction data of the London stock exchange time series for the fuzzy neural network using Hamacher without optimization.

Figure 14. Illustration of the real data against the prediction data of the Nasdaq stock exchange time series for the fuzzy neural network using Hamacher optimized with dynamic PSO.

Figure 15. Illustration of the real data against the prediction data of the Shanghai stock exchange time series for the fuzzy neural network using Hamacher optimized with dynamic PSO.

Figure 16. Illustration of the real data against the prediction data of the Taiwan stock exchange time series for the fuzzy neural network using Hamacher optimized with dynamic PSO.
A statistical Student's t-test is applied to verify the accuracy of the results. In Table 11, the best results for each financial time series are compared with the TNN. The null hypothesis is H1 = H2 and the alternative hypothesis is H1 > H2 (AVG = average, SD = standard deviation, SEM = standard error of the mean, ED = estimation of the difference, LLD 95% = lower limit 95% of the difference, GL = degrees of freedom) [69].

Table 11. Results of the Student's t-test comparing the best results for each financial time series against the traditional neural network.

Germany: H1: TNN (N = 30, AVG = 1332.3, SD = 22.8, SEM = 4.2); H2: IT2FNWNNH-PSO (N = 30, AVG = 1173, SD = 301, SEM = 55); ED = 159.7, LLD 95% = 66, T = 2.9, P = 0.004, GL = 29.
Mexican: H1: TNN (N = 30, AVG = 935.6, SD = 12.5, SEM = 2.3); H2: IT2FNWNNF-PSO (N = 30, AVG = 963.7, SD = 23.3, SEM = 4.3); ED = −28.15, LLD 95% = −35.27, T = −5.83, P = 1, GL = 44.
Dow-Jones: H1: TNN (N = 30, AVG = 1082.8, SD = 47.1, SEM = 8.6); H2: IT2FNWNNH-PSO (N = 30, AVG = 1290, SD = 912, SEM = 167); ED = −208, LLD 95% = −491, T = −1.24, P = 0.888, GL = 29.
London: H1: TNN (N = 30, AVG = 203.29, SD = 2.48, SEM = 0.45); H2: IT2FNWNNH (N = 30, AVG = 198.98, SD = 3.76, SEM = 0.69); ED = 4.304, LLD 95% = 2.927, T = 5.24, P = 0.0004, GL = 50.
Nasdaq: H1: TNN (N = 30, AVG = 558.95, SD = 4.19, SEM = 0.77); H2: IT2FNWNNH-PSO (N = 30, AVG = 530, SD = 176, SEM = 32); ED = 28.6, LLD 95% = −26, T = 0.89, P = 0.190, GL = 29.
Shanghai: H1: TNN (N = 30, AVG = 580, SD = 7.48, SEM = 1.4); H2: IT2FNWNNH-PSO (N = 30, AVG = 543.4, SD = 43.5, SEM = 7.9); ED = 36.61, LLD 95% = 22.93, T = 4.54, P = 0.0004, GL = 30.
Taiwan: H1: TNN (N = 30, AVG = 480.11, SD = 7.31, SEM = 1.3); H2: IT2FNWNNH-PSO (N = 30, AVG = 481.6, SD = 12.2, SEM = 2.2); ED = −1.51, LLD 95% = −5.88, T = −0.58, P = 0.717, GL = 47.

6. Discussion of Results
For the best results of the financial time series, Table 10 shows that the interval type-2 fuzzy number weights neural networks with Hamacher (IT2FNWNNH-PSO) and Frank (IT2FNWNNF-PSO) S-Norms and T-Norms, optimized using particle swarm optimization (PSO) with dynamic adjustment, obtain better results than the neural networks without optimization; only for the London time series is the best result obtained by the interval type-2 fuzzy number weights neural network with Hamacher (IT2FNWNNH).
The data of the statistical Student's t-test in Table 11 demonstrate that the IT2FNWNNH-PSO and IT2FNWNNF-PSO have acceptable accuracy for almost all the financial time series against the traditional neural network; only in the Nasdaq and Shanghai time series is the T value too small to indicate a meaningful difference.

7. Conclusions
Based on the experiments, we have reached the conclusion that the interval type-2 fuzzy
numbers weights neural network with S-Norms and T-norms Hamacher (IT2FNWNNH) and Frank
(IT2FNWNNF) optimized using particle swarm optimization (PSO) with dynamic adjustment achieves
better results than the traditional neural network and the interval type-2 fuzzy numbers weights neural
network without optimization, in almost all the cases of study, for the stock exchanges time series
used in this work. This assertion is based on the comparison of prediction errors for all stock exchange
time series shown in Table 10 and the data of the statistical test of T-student for best results against the
traditional neural network (TNN).
The proposed approach of dynamic adjustment in PSO for optimization of interval type-2 fuzzy
numbers weights neural network was compared with the traditional neural network to demonstrate
the effectiveness of the proposed method.
The proposed optimization, applying an interval type-2 fuzzy inference system to perform dynamic adjustment of the c1 and c2 parameters in PSO, allows the algorithm to find optimal results for the testing problems. In almost all the test cases, the results obtained with the optimized IT2FNWNN were the better ones; only in the London stock exchange time series did the IT2FNWNNH obtain better results.
These results are good considering that the number of iterations and the number of particles for the executions of the PSO are relatively small, only 50 iterations with 50 particles. We believe that performing experiments with more iterations and particles would allow the method to find better results.
Author Contributions: F.G. worked on the conceptualization and proposal of the methodology, P.M. on the
formal analysis, writing and review of the paper, F.V. on the investigation and validation of results, J.R.C. on the
software and validation, and A.M.-M. on methodology and visualization.
Funding: This research was funded by CONACYT grant number 178539.
Acknowledgments: The authors would like to thank CONACYT and Tijuana Institute of Technology for the
support during this research work.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Wang, X.; Han, M. Online sequential extreme learning machine with kernels for nonstationary time series
prediction. Neurocomputing 2014, 145, 90–97. [CrossRef]
2. Xue, J.; Zhou, S.H.; Liu, Q.; Liu, X.; Yin, J. Financial time series prediction using ℓ2,1RF-ELM. Neurocomputing
2018, 227, 176–186. [CrossRef]
3. Zhou, T.; Gao, S.; Wang, J.; Chu, C.; Todo, Y.; Tang, Z. Financial time series prediction using a dendritic
neuron model. Knowl.-Based Syst. 2016, 105, 214–224. [CrossRef]
4. Hrasko, R.; Pacheco, A.G.C.; Krohling, R.A. Time Series Prediction using Restricted Boltzmann Machines
and Backpropagation. Procedia Comput. Sci. 2015, 55, 990–999. [CrossRef]
5. Baykasoglu, A.; Özbakır, L.; Tapkan, P. Artificial bee colony algorithm and its application to generalized
assignment problem. In Swarm Intelligence: Focus on Ant and Particle Swarm Optimization; Felix, T.S.C.,
Manoj, K.T., Eds.; Itech Education and Publishing: Vienna, Austria, 2007; pp. 113–143.
6. Qin, A.K.; Huang, V.L.; Suganthan, P.N. Differential evolution algorithm with strategy adaptation for global
numerical optimization. IEEE Trans. Evol. Comput. 2009, 13, 398–417. [CrossRef]
7. Vesterstrom, J.; Thomsen, R. A comparative study of differential evolution, particle swarm optimization, and
evolutionary algorithms on numerical benchmark problems. In Proceedings of the CEC2004 IEEE Congress
on Evolutionary Computation 2004, Portland, OR, USA, 19–23 June 2004; Volume 2, pp. 1980–1987.
8. Gaxiola, F.; Melin, P.; Valdez, F.; Castillo, O.; Castro, J.R. Comparison of T-Norms and S-Norms for Interval
Type-2 Fuzzy Numbers in Weight Adjustment for Neural Networks. Information 2017, 8, 114. [CrossRef]
9. Kim, K.; Han, I. Genetic algorithms approach to feature discretization in artificial neural networks for the
prediction of stock price index. Expert Syst. Appl. 2000, 19, 125–132. [CrossRef]
10. Dash, R.; Dash, P.K. Prediction of Financial Time Series Data using Hybrid Evolutionary Legendre Neural
Network: Evolutionary LENN. Int. J. Appl. Evol. Comput. 2016, 7, 16–32. [CrossRef]
11. Kim, K. Financial time series forecasting using support vector machines. Neurocomputing 2003, 55, 307–319.
[CrossRef]
12. Olivas, F.; Valdez, F.; Castillo, O.; Melin, P. Dynamic parameter adaptation in particle swarm optimization
using interval type-2 fuzzy logic. Soft Comput. 2014, 20, 1057–1070. [CrossRef]
13. Abdelbar, A.; Abdelshahid, S.; Wunsch, D. Fuzzy PSO: A generalization of particle swarm optimization. In
Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, Montreal, QC, Canada, 31
July–4 August 2005; Volume 2, pp. 1086–1091.
14. Valdez, F.; Melin, P.; Castillo, O. An improved evolutionary method with fuzzy logic for combining particle
swarm optimization and genetic algorithms. Appl. Soft Comput. 2011, 11, 2625–2632. [CrossRef]
15. Wu, D.; Tan, W.W. Interval type-2 fuzzy PI controllers: Why they are more robust. In Proceedings of the 2010
IEEE International Conference on Granular Computing, San Jose, CA, USA, 14–16 August 2010; pp. 802–807.
16. Sepulveda, R.; Castillo, O.; Melin, P.; Rodriguez-Diaz, A.; Montiel, O. Experimental study of intelligent
controllers under uncertainty using type-1 and type-2 fuzzy logic. Inf. Sci. 2007, 177, 2023–2048. [CrossRef]
17. Gaxiola, F.; Melin, P.; Valdez, F.; Castillo, O. Interval Type-2 Fuzzy Weight Adjustment for Backpropagation
Neural Networks with Application in Time Series Prediction. Inf. Sci. 2014, 260, 1–14. [CrossRef]
18. Gaxiola, F.; Melin, P.; Valdez, F.; Castillo, O. Generalized Type-2 Fuzzy Weight Adjustment for
Backpropagation Neural Networks in Time Series Prediction. Inf. Sci. 2015, 325, 159–174. [CrossRef]
19. Dunyak, J.P.; Wunsch, D. Fuzzy regression by fuzzy number neural networks. Fuzzy Sets Syst. 2000, 112,
371–380. [CrossRef]
Axioms 2019, 8, 14 19 of 21

20. Ding, S.; Li, H.; Su, C.; Yu, J.; Jin, F. Evolutionary Artificial Neural Networks: A Review. Artif. Intell. Rev.
2013, 39, 251–260. [CrossRef]
21. Weber, S. A general concept of fuzzy connectives, negations and implications based on t-norms and t-conorms.
Fuzzy Sets Syst. 1983, 11, 115–134. [CrossRef]
22. Hamacher, H. Über logische Verknüpfungen unscharfer Aussagen und deren zugehörige Bewertungsfunktionen.
In Progress in Cybernetics and Systems Research, III.; Trappl, R., Klir, G.J., Ricciardi, L., Eds.; Hemisphere: New
York, NY, USA, 1975; pp. 276–288.
23. Melin, P.; Olivas, F.; Castillo, O.; Valdez, F.; Soria, J.; Valdez, M. Optimal design of fuzzy classification systems
using PSO with dynamic parameter adaptation through fuzzy logic. Expert Syst. Appl. 2013, 40, 3196–3206.
[CrossRef]
24. Olivas, F.; Valdez, F.; Melin, P.; Sombra, A.; Castillo, O. Interval type-2 fuzzy logic for dynamic parameter
adaptation in a modified gravitational search algorithm. Inf. Sci. 2019, 476, 159–175. [CrossRef]
25. Kennedy, J.; Eberhart, R. Swarm Intelligence; Morgan Kaufmann: San Francisco, CA, USA, 2001.
26. Jang, J.; Sun, C.; Mizutani, E. Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and
Machine Intelligence; Prentice-Hall: Upper Saddle River, NJ, USA, 1997.
27. Engelbrecht, A. Fundamentals of Computational Swarm Intelligence; University of Pretoria: Pretoria, South
Africa, 2005.
28. Bingül, Z.; Karahan, O. A fuzzy logic controller tuned with PSO for 2 DOF robot trajectory control. Expert
Syst. Appl. 2011, 38, 1017–1031. [CrossRef]
29. Zadeh, L. The concept of a linguistic variable and its application to approximate reasoning. Inf. Sci. 1975, 8,
199–249. [CrossRef]
30. Mendel, J.; John, R. Type-2 fuzzy sets made simple. IEEE Trans. Fuzzy Syst. 2002, 10, 117–127. [CrossRef]
31. Liang, Q.; Mendel, J. Interval type-2 fuzzy logic systems: Theory and design. IEEE Trans. Fuzzy Syst. 2000, 8,
535–550. [CrossRef]
32. Shill, P.C.; Amin, M.F.; Akhand, M.A.H.; Murase, K. Optimization of interval type-2 fuzzy logic
controller using quantum genetic algorithms. In Proceedings of the IEEE World Congress Computational
Intelligence, Brisbane, Australia, 10–15 June 2012; pp. 1027–1034.
33. Karnik, N.N.; Mendel, J.M. Centroid of a type-2 fuzzy set. Inf. Sci. 2001, 132, 195–220. [CrossRef]
34. Biglarbegian, M.; Melek, W.W.; Mendel, J.M. On the robustness of Type-1 and Interval Type-2 fuzzy logic
systems in modeling. Inf. Sci. 2011, 181, 1325–1347. [CrossRef]
35. Mendel, J.M. Uncertain Rule-Based Fuzzy Logic Systems: Introduction and New Directions; Prentice Hall PTR:
Upper Saddle River, NJ, USA, 2001.
36. Wu, D.; Mendel, J.M. On the continuity of type-1 and interval type-2 fuzzy logic systems. IEEE Trans. Fuzzy
Syst. 2011, 19, 179–192.
37. Ishibuchi, H.; Nii, M. Numerical Analysis of the Learning of Fuzzified Neural Networks from Fuzzy If–Then
Rules. Fuzzy Sets Syst. 2001, 120, 281–307. [CrossRef]
38. Feuring, T. Learning in Fuzzy Neural Networks. In Proceedings of the IEEE International Conference on
Neural Networks, Washington, DC, USA, 3–6 June 1996; Volume 2, pp. 1061–1066.
39. Valdez, F.; Melin, P.; Castillo, O. A survey on nature-inspired optimization algorithms with fuzzy logic for
dynamic parameter adaptation. Expert Syst. Appl. 2014, 41, 6459–6466. [CrossRef]
40. Li, C.; Wu, T. Adaptive fuzzy approach to function approximation with PSO and RLSE. Expert Syst. Appl.
2011, 38, 13266–13273. [CrossRef]
41. Muthukaruppan, S.; Er, M.J. A hybrid particle swarm optimization based fuzzy expert system for the
diagnosis of coronary artery disease. Expert Syst. Appl. 2012, 39, 11657–11665. [CrossRef]
42. Taher, N.; Ehsan, A.; Masoud, J. A new hybrid evolutionary algorithm based on new fuzzy adaptive PSO
and NM algorithms for distribution feeder reconfiguration. Energy Convers. Manag. 2012, 54, 7–16.
43. Wang, B.; Liang, G.; Wang, C.; Dong, Y. A new kind of fuzzy particle swarm optimization (fuzzy PSO)
algorithm. In Proceedings of the 1st International Symposium on Systems and Control in Aerospace and
Astronautics, ISSCAA, Harbin, China, 19–21 January 2006; pp. 309–311.
44. Liu, H.; Abraham, A. Fuzzy Adaptive Turbulent Particle Swarm Optimization. In Proceedings of the
IEEE Fifth International Conference on Hybrid Intelligent Systems (HIS’05), Rio de Janeiro, Brazil, 6–9
November 2005; pp. 445–450.
45. Shi, Y.; Eberhart, R. Fuzzy Adaptive Particle Swarm Optimization. In Proceedings of the IEEE International
Conference on Evolutionary Computation, Seoul, Korea, 27–30 May 2001; IEEE Service Center: Piscataway,
NJ, USA, 2001; pp. 101–106.
46. Yang, D.; Li, Z.; Liu, Y.; Zhang, H.; Wu, W. A Modified Learning Algorithm for Interval Perceptrons with
Interval Weights. Neural Process Lett. 2015, 42, 381–396. [CrossRef]
47. Kuo, R.J.; Chen, J.A. A Decision Support System for Order Selection in Electronic Commerce based on
Fuzzy Neural Network Supported by Real-Coded Genetic Algorithm. Expert Syst. Appl. 2004, 26, 141–154.
[CrossRef]
48. Karnik, N.N.; Mendel, J. Operations on type-2 fuzzy sets. Fuzzy Sets Syst. 2001, 122, 327–348. [CrossRef]
49. Chai, Y.; Zhang, D. A Representation of Fuzzy Numbers. Fuzzy Sets Syst. 2016, 295, 1–18. [CrossRef]
50. Chu, T.C.; Tsao, T.C. Ranking Fuzzy Numbers with an Area between the Centroid Point and Original Point.
Comput. Math. Appl. 2002, 43, 111–117. [CrossRef]
51. Dunyak, J.; Wunsch, D. Fuzzy Number Neural Networks. Fuzzy Sets Syst. 1999, 108, 49–58. [CrossRef]
52. Coroianu, L.; Stefanini, L. General Approximation of Fuzzy Numbers by F-Transform. Fuzzy Sets Syst. 2016,
288, 46–74. [CrossRef]
53. Li, Z.; Kecman, V.; Ichikawa, A. Fuzzified Neural Network based on fuzzy number operations. Fuzzy Sets
Syst. 2002, 130, 291–304. [CrossRef]
54. Fard, S.; Zainuddin, Z. Interval Type-2 Fuzzy Neural Networks Version of the Stone–Weierstrass Theorem.
Neurocomputing 2011, 74, 2336–2343. [CrossRef]
55. Molinari, F. A New Criterion of Choice between Generalized Triangular Fuzzy Numbers. Fuzzy Sets Syst.
2016, 296, 51–69. [CrossRef]
56. Asady, B. Trapezoidal Approximation of a Fuzzy Number Preserving the Expected Interval and Including
the Core. Am. J. Oper. Res. 2013, 3, 299–306. [CrossRef]
57. Figueroa-García, J.C.; Chalco-Cano, Y.; Roman-Flores, H. Distance Measures for Interval Type-2 Fuzzy
Numbers. Discret. Appl. Math. 2015, 197, 93–102. [CrossRef]
58. Requena, I.; Blanco, A.; Delgado, M.; Verdegay, J. A Decision Personal Index of Fuzzy Numbers based on
Neural Networks. Fuzzy Sets Syst. 1995, 73, 185–199. [CrossRef]
59. Valdez, F.; Vazquez, J.C.; Gaxiola, F. Fuzzy Dynamic Parameter Adaptation in ACO and PSO for Designing
Fuzzy Controllers: The Cases of Water Level and Temperature Control. Adv. Fuzzy Syst. 2018, 2018, 1274969.
[CrossRef]
60. Fan, G.F.; Peng, L.L.; Hong, W.C. Short term load forecasting based on phase space reconstruction algorithm
and bi-square kernel regression model. Appl. Energy 2018, 224, 13–33. [CrossRef]
61. Dong, Y.; Zhang, Z.; Hong, W.-C. A hybrid seasonal mechanism with a chaotic cuckoo search algorithm with
a support vector regression model for electric load forecasting. Energies 2018, 11, 1009. [CrossRef]
62. Tung, S.W.; Quek, C.; Guan, C. eT2FIS: An Evolving Type-2 Neural Fuzzy Inference System. Inf. Sci. 2013,
220, 124–148. [CrossRef]
63. Castillo, O.; Melin, P. A review on the design and optimization of interval type-2 fuzzy controllers. Appl. Soft
Comput. 2012, 12, 1267–1278. [CrossRef]
64. Zarandi, M.H.F.; Torshizi, A.D.; Turksen, I.B.; Rezaee, B. A new indirect approach to the type-2 fuzzy systems
modeling and design. Inf. Sci. 2013, 232, 346–365. [CrossRef]
65. Nguyen, D.; Widrow, B. Improving the Learning Speed of 2-Layer Neural Networks by choosing Initial
Values of the Adaptive Weights. In Proceedings of the International Joint Conference on Neural Networks,
San Diego, CA, USA, 17–21 June 1990; Volume 3, pp. 21–26.
66. Caraveo, C.; Valdez, F.; Castillo, O. Optimization of fuzzy controller design using a new bee colony algorithm
with fuzzy dynamic parameter adaptation. Appl. Soft Comput. 2016, 43, 131–142. [CrossRef]
67. Castillo, O.; Neyoy, H.; Soria, J.; Melin, P.; Valdez, F. A new approach for dynamic fuzzy logic parameter
tuning in Ant Colony Optimization and its application in fuzzy control of a mobile robot. Appl. Soft Comput.
2015, 28, 150–159. [CrossRef]
68. Alexandridis, A.; Sarimveis, H.; Bafas, G. A new algorithm for online structure and parameter adaptation of
RBF networks. Neural Netw. 2003, 16, 1003–1017. [CrossRef]
69. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests
as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput.
2011, 1, 3–18. [CrossRef]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).