Powder Technology
ARTICLE INFO

Article history:
Received 15 October 2018
Received in revised form 21 January 2019
Accepted 25 January 2019
Available online 1 March 2019

Keywords:
Cyclone separator
Grade efficiency
Support vector regression algorithm
Particle swarm optimization
Principal component analysis

ABSTRACT

Accurate prediction of the complicated nonlinear relationship among the grade efficiency, geometrical dimensions, and operating parameters based on limited experimental data is the most effective way to design a high-efficiency cyclone separator. Herein, a hybrid PCA-PSO-SVR model is proposed to predict the grade efficiency of cyclone separators from the operating parameters, based on 217 sets of experimental data reported in the literature. The experimental data are first preprocessed using the random sampling technique together with the normalization method and principal component analysis (PCA); subsequently, the particle swarm optimization (PSO) algorithm is incorporated to optimize the parameters of the support vector regression (SVR), namely the penalty factor C, the kernel function parameter g, and the insensitive loss ε. Finally, the SVR model with the optimized parameters is trained with 80% of the preprocessed data, and the generalization ability of the model is tested with the remaining 20%. The mean squared error on the test set is 6.948 × 10−4, with a correlation coefficient of 0.982. The comparison results show that the PCA-PSO-SVR model has higher accuracy, better generalization ability, and stronger robustness than the existing models for predicting cyclone separator efficiency when only a few experimental data are available.

© 2019 Elsevier B.V. All rights reserved.
https://doi.org/10.1016/j.powtec.2019.01.070
W. Zhang et al. / Powder Technology 347 (2019) 114–124 115
dust particle size distribution σ. However, the effect of the mean square error of the dust particle size distribution on the separation performance can be neglected from both physical and mathematical standpoints. In sum, there are a total of eight input variables. The grade efficiency of particles, ηi, is selected as the output variable. Table 2 summarizes the input and output variables of the SVR and lists some experimental data.
Table 2
Input and output variables of the SVR model with corresponding experimental data.

x1   x2   x3    x4   x5   x6   x7   x8   y1
D    Ka   d̃r    vi   Ci   δ    ρp   dm   ηi
Fig. 3. Schematic of SVR model.

Solving the minimization problem of Eq. (1) is then transformed into solving the quadratic programming problem of Eq. (3) after introducing the relaxation variables ξi and ξi*:

$$\min J = \frac{1}{2}\|\omega\|^2 + C\sum_{i=1}^{m}\left(\xi_i + \xi_i^{*}\right) \quad (3)$$

$$\text{s.t.}\quad \begin{cases} y_i - \left(\omega \cdot \phi(x_i)\right) - u \le \varepsilon + \xi_i \\ \left(\omega \cdot \phi(x_i)\right) + u - y_i \le \varepsilon + \xi_i^{*} \\ \xi_i,\ \xi_i^{*} \ge 0 \end{cases}$$

where ω is the weight vector and ½‖ω‖² represents the model complexity. C is the penalty factor, which keeps a balance between the complexity and the empirical risk [30]. Increasing C means more attention is paid to the empirical risk, which raises the possibility of over-fitting; conversely, under-fitting easily occurs. Choosing a suitable value of C is therefore crucial for establishing a favorable SVR model.

The Lagrangian multiplier method and the KKT conditions can be used to transform the quadratic programming problem of Eq. (3) into the dual optimization problem of Eq. (4):

$$\max J(\alpha) = \max\left\{ -\frac{1}{2}\sum_{i=1}^{m}\sum_{j=1}^{m}\left(\alpha_i - \alpha_i^{*}\right)\left(\alpha_j - \alpha_j^{*}\right)K(x_i, x_j) - \varepsilon\sum_{i=1}^{m}\left(\alpha_i + \alpha_i^{*}\right) + \sum_{i=1}^{m} y_i\left(\alpha_i - \alpha_i^{*}\right) \right\}$$

$$\text{s.t.}\quad \begin{cases} \sum_{i=1}^{m}\left(\alpha_i - \alpha_i^{*}\right) = 0 \\ 0 \le \alpha_i \le C \\ 0 \le \alpha_i^{*} \le C \end{cases} \quad (4)$$

where αi, αi*, αj and αj* are Lagrangian multipliers, and K(xi, xj) is the kernel function, with which the input space of the data can be transformed into a nonlinear, high-dimensional space. According to the αi and αi* calculated from Eq. (4), the support vectors xi (for which αi and αi* are not both 0) and the standard support vectors xi (for which 0 < αi < C or 0 < αi* < C) can be determined, and then the threshold value u can be calculated according to Eq. (5):

$$u = \frac{1}{N_{NSV}}\left\{ \sum_{0<\alpha_i<C}\left[ y_i - \sum_{j=1}^{l}\left(\alpha_j - \alpha_j^{*}\right)K(x_j, x_i) - \varepsilon \right] + \sum_{0<\alpha_i^{*}<C}\left[ y_i - \sum_{j=1}^{l}\left(\alpha_j - \alpha_j^{*}\right)K(x_j, x_i) + \varepsilon \right] \right\} \quad (5)$$

where N_NSV is the number of standard support vectors and l is the number of support vectors. The resulting approximation function can then be written as Eq. (6):

$$f(x) = \sum_{i=1}^{l}\left(\alpha_i - \alpha_i^{*}\right)K(x_i, x) + u \quad (6)$$

The radial basis kernel function is adopted here:

$$K(x, x_i) = \exp\left(-g\|x - x_i\|^2\right) \quad (7)$$

where g is the kernel function parameter. Changing the value of g indirectly changes the nonlinear mapping function, which directly determines the complexity and performance of the model.

In this study, the purpose of SVR model training is to find an appropriate correspondence satisfying Eq. (8) after the input and output variables are settled:

$$\eta_i = f\left(D,\ K_a,\ \tilde{d}_r,\ v_i,\ C_i,\ \delta,\ \rho_p,\ d_m\right) \quad (8)$$

The 217 sets of experimental data from the literature [14–16], as shown in Table 2, are used to train and test the SVR model. The range of each input parameter is shown in Table 3.

According to the random sampling technique, 80% of the data are randomly selected as the training set of the SVR, and the remaining 20% are used as the test set to verify the generalization ability of the model. Before training, the input data need to be normalized so that each variable is converted into a number between 0 and 1. The output results after training should be reversely normalized.

2.3. Dimensionality reduction based on PCA

When modeling multivariate data, the model complexity and computation time can be increased by a large number of variables. To solve this problem, principal component analysis (PCA) is adopted to reduce the dimension of the dataset. PCA is one of the most commonly used dimensionality reduction algorithms and can well overcome the computational complexity resulting from too many dependent variables. The idea of PCA is to map the n-dimensional features to k dimensions (k < n) according to the maximum variance theory. The k-dimensional feature matrix is called the master element, and it is a linear combination of the original features. The new k features are independent and reflect most of the information of the sample space.

Deciding the reduced number of dimensions is a critical step in PCA. If the number of dimensions is only a few, some
Table 3
Range of input parameters.

D (mm)   Ka   d̃r   vi (m/s)   Ci (g/m³)   δ (μm)   ρp (kg/m³)   dm (μm)
information could be lost by the dimension-reduced matrix. Conversely, if the number of dimensions remains high, the complexity of the regression model also becomes too high. In both cases, the generalization ability of the regression model is low. In this study, the original SVR model has an eight-dimensional input space, which could be reduced to as few as three dimensions by PCA. The performance parameters of the model on the test set after dimension reduction, together with the corresponding SVR parameters, are listed in Table 4. These performance parameters include the information retention ratio as well as the mean square error and correlation coefficient of the test set. It is observed directly from Fig. 4 that the information retention ratio decreases as the number of dimensions decreases; the greatest information loss occurs when the dimension is reduced from five to four. After the dimensionality-reduced models are tested one by one on the test set, the correlation coefficient is found to be the largest and the root mean square error the smallest when the dimensions are reduced to five. Thus, the reduction from eight to five dimensions is the best choice.

In this study, the dimension reduction matrix W is obtained by the following steps.

Step 1: Normalize the training set.
Step 2: Centralize the training set.
Step 3: Calculate the covariance matrix of the training set.
Step 4: Calculate the eigenvalues of the covariance matrix and the corresponding eigenvectors.
Step 5: Sort the eigenvalues from large to small and take the eigenvectors corresponding to the first five eigenvalues; these five eigenvectors form the dimension reduction matrix W shown in Eq. (9).

According to Eq. (10), the input matrix A consisting of the eight input variables is reduced to a five-dimensional feature matrix N by the 8 × 5 dimension reduction matrix W. The newly generated matrix N is composed of five independent variables N1, N2, N3, N4 and N5.

$$W = \begin{bmatrix} -0.0068 & 0.0547 & 0.1711 & 0.3651 & 0.9095 \\ 0.6800 & -0.2824 & -0.4245 & 0.3984 & -0.0366 \\ 0.1585 & 0.8006 & -0.3241 & 0.1317 & -0.0345 \\ 0.5024 & -0.2846 & 0.2136 & -0.2094 & 0.0256 \\ -0.3822 & -0.3918 & -0.0870 & 0.2790 & -0.0314 \\ -0.2003 & -0.0450 & -0.1815 & 0.5427 & -0.2155 \\ -0.2123 & -0.0941 & -0.1701 & 0.2292 & -0.1155 \\ 0.1696 & 0.1758 & 0.7553 & 0.4711 & -0.3300 \end{bmatrix} \quad (9)$$

$$N = A\,W \quad (10)$$

where the columns of N are the new features N1, N2, N3, N4, N5, and A = [δ  D  ρp  dm  vi  Ci  Ka  d̃r] is the input matrix.

2.4. Parameter optimization of SVR by PSO

The generalization capacity of SVR depends greatly on the hyper-parameters, i.e., the penalty factor C, the kernel function parameter g, and the insensitive loss ε. However, it is difficult to determine proper values of these parameters from prior knowledge, and tuning them manually is time-consuming. Furthermore, the effect of these three parameters on the model performance is still uncertain. Thus, the particle swarm optimization (PSO) is adopted for the parameter optimization.

The PSO algorithm was first proposed by Kennedy and Eberhart [33], inspired by the foraging behavior of bird flocks. In the optimization process, each particle has its own speed, location, and fitness value determined by the target function. In each iteration, a particle updates its speed and position based on the best historical position it has passed through (individual best) and the best position found by all particles (global best). The formulas for updating position and speed are as follows:

$$x_{id}(t+1) = x_{id}(t) + v_{id}(t+1) \quad (11)$$

$$v_{id}(t+1) = \omega\, v_{id}(t) + C_1 r_1\left(P_{id} - x_{id}(t)\right) + C_2 r_2\left(G_{id} - x_{id}(t)\right) \quad (12)$$

where i denotes the ith particle, d the dimension, and t the iteration number; C1 and C2 are learning factors; r1 and r2 are random numbers between 0 and 1; ω is the linearly decreasing inertia weight; Pid is the individual extreme value of the ith particle on dimension d; and Gid is the global extreme value of all particles.

The 5-fold cross-validation is used to evaluate the fitness of each particle, maintaining a balance between computation cost and the effectiveness of parameter optimization. The training set is randomly divided into five non-intersecting subsets with a roughly equal number of data patterns. For every set of SVR parameters {C, g, ε} extracted from the corresponding particle, four subsets are selected to establish the SVR model, and the performance of this SVR model is measured by calculating the RMSE on the remaining subset according to Eq. (13):

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - f(x_i)\right)^2} \quad (13)$$

where n is the number of samples, yi is the true value, and f(xi) is the predicted value of the model. This process is repeated five times until each of the five subsets has been used exactly once as the testing subset.
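Steps 1–5 above can be sketched in NumPy. This is an illustrative reconstruction, not the authors' code; the random matrix stands in for the normalized training inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((173, 8))                 # normalized training inputs (Step 1)

A_c = A - A.mean(axis=0)                 # Step 2: centralize
cov = np.cov(A_c, rowvar=False)          # Step 3: 8 x 8 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # Step 4: eigenvalues and eigenvectors

order = np.argsort(eigvals)[::-1]        # Step 5: sort eigenvalues, large to small
W = eigvecs[:, order[:5]]                # 8 x 5 reduction matrix, as in Eq. (9)

N = A_c @ W                              # Eq. (10): five-dimensional feature matrix

# Fraction of total variance kept by the first five components
# (the "information retention ratio" discussed in the text).
retention = eigvals[order[:5]].sum() / eigvals.sum()
```

Because the columns of W are eigenvectors of the covariance matrix, the covariance of N is diagonal, i.e., the five new features are uncorrelated, as stated in the text.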
Table 4
Performance parameters of the SVR model after dimension reduction.

Dimension   Information retention ratio   Mean square error of test set   Correlation coefficient of test set   Parameters of SVR
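The velocity and position updates of Eqs. (11) and (12), applied in the optimization procedure below, can be sketched with a minimal PSO loop. This is an illustrative NumPy implementation, not the authors' code; the fitness function is a placeholder standing in for the 5-fold cross-validated RMSE of an SVR.

```python
import numpy as np

rng = np.random.default_rng(2)
lo = np.array([0.1, 0.1, 0.0])      # search-range minima for (C, g, eps)
hi = np.array([800.0, 10.0, 1.0])   # search-range maxima

def fitness(p):
    # Placeholder objective; in the paper this would be the CV-RMSE of an SVR.
    target = np.array([660.0, 0.673, 0.026])
    return float(np.sum(((p - target) / (hi - lo)) ** 2))

n, iters, c1, c2 = 50, 50, 2.0, 2.0
x = lo + rng.random((n, 3)) * (hi - lo)   # random initial positions
v = np.zeros((n, 3))                      # initial velocities
pbest = x.copy()
pbest_f = np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for t in range(iters):
    w = 0.9 - 0.5 * t / iters             # linearly decreasing inertia weight
    r1, r2 = rng.random((n, 3)), rng.random((n, 3))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (12)
    x = np.clip(x + v, lo, hi)                                  # Eq. (11), kept in bounds
    f = np.array([fitness(p) for p in x])
    better = f < pbest_f                  # update individual bests
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()   # update global best
```

With 50 particles and 50 iterations, this performs 2500 fitness evaluations, matching the count quoted in the CPU-time comparison later in the text.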
Step 1: The PSO parameters are set and the particle swarm is initialized as shown in Table 5. The parameters include the swarm size, the maximum iterations, the acceleration coefficients c1 and c2, the inertia weight, the penalty factor C ∈ [0.1, 800], the RBF kernel parameter g ∈ [0.1, 10], and the ε-insensitive loss function parameter ε ∈ [0, 1]. Then a population of initial particles is generated with random positions and velocities.

Step 2: For the training set, a five-fold cross-validation is used to calculate the fitness value of each parameter combination, and the calculated result is taken as the initial individual best pbest of each particle. The best pbest in the swarm is set as the initial gbest.

Step 3: The speed and position of each particle are updated according to Eqs. (11) and (12), and the fitness value is calculated before pbest and gbest are updated.

Step 4: Step 3 is repeated until the end condition is met, and the optimal parameters are finally obtained.

Table 5
PSO parameter settings.

Particle swarm size: 50
Maximum iterations: 50
(C, g, ε) search range: Min = (0.1, 0.1, 0), Max = (800, 10, 1)
Initial position of the particle swarm: randomly generated
Initial velocity of the particle swarm: randomly generated

Fig. 5 shows the optimization result varying with the number of iterations, illustrating the changing trend of the best population fitness during the evolution process. The fitness decreases with increasing generation number and converges at about generation 25. After 50 iterations, the RMSE obtained on the training set through the five-fold cross-validation is 3.123 × 10−4, and the final optimized values of {C, g, ε} are {660, 0.673, 0.026}.

The SVR model configured with the optimal {C, g, ε} obtained by the particle swarm optimization is trained on the randomly selected training data until it meets the convergence conditions.

For evaluating the performance of the model for the grade efficiency prediction, the normalized mean squared error MSE and the correlation coefficient R are defined as

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - f(x_i)\right)^2 \quad (14)$$

$$R^2 = \left(\frac{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)\left(f(x_i) - \bar{f}\right)}{\sqrt{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2 \sum_{i=1}^{n}\left(f(x_i) - \bar{f}\right)^2}}\right)^2 \quad (15)$$

where n is the number of samples; yi is the true value; f(xi) is the predicted value of the model; ȳ is the average of the true values; and f̄ is the average of the predicted values. The smaller the mean squared error MSE, the higher the accuracy of the model prediction. Likewise, the greater the correlation coefficient, the stronger the correlation between the experimental data and the predicted values; R² = 1 indicates that the predicted values are completely (linearly) correlated with the experimental data. In addition, the simulation time (CPU time) t is also considered below to evaluate the computational efficiency.

3. Comparison and discussion

3.1. Comparison between the prediction results of PCA-PSO-SVR and experimental data

Fig. 6 compares the predicted results of the PCA-PSO-SVR model with the experimental data for the grade efficiency of cyclone separators. The abscissa represents the experimental grade efficiency as reported in the literature [14–16], and the ordinate represents the grade efficiency predicted by the PCA-PSO-SVR model. The red balls illustrate the predictions of the PCA-PSO-SVR model for the training samples, and the green triangles are the grade efficiency values predicted for the test samples. They are all concentrated near the x = y line, indicating that the predicted results are consistent with the experimental data. The normalized mean squared error MSE and the correlation coefficient R of the training samples and testing samples
Table 6
Evaluation parameters and hyper-parameters of the SVR models hybridized with PSO and PCA.
Fig. 11. Comparison of the PCA-PSO-SVR model with the BP, RBF and GRNN models for the
grade efficiency.
Table 7
Evaluation parameters of different models.
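The two evaluation metrics used throughout this comparison, the mean squared error of Eq. (14) and the squared correlation coefficient of Eq. (15), can be computed as follows. This is an illustrative NumPy sketch with made-up values, not the authors' code.

```python
import numpy as np

def mse(y, f):
    # Eq. (14): mean squared error between true and predicted values.
    return float(np.mean((y - f) ** 2))

def r_squared(y, f):
    # Eq. (15): squared Pearson correlation between true and predicted values.
    num = np.sum((y - y.mean()) * (f - f.mean()))
    den = np.sqrt(np.sum((y - y.mean()) ** 2) * np.sum((f - f.mean()) ** 2))
    return float((num / den) ** 2)

# Hypothetical grade-efficiency values, for illustration only.
y_true = np.array([0.42, 0.58, 0.71, 0.86, 0.95])
y_pred = np.array([0.40, 0.60, 0.70, 0.88, 0.93])
```

A smaller `mse` means higher prediction accuracy, and `r_squared` equal to 1 means the predictions are perfectly linearly correlated with the experimental data.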
reduces the dimensionality of the feature space and improves the generalization ability of the model. The mean squared error of the PSO-SVR method is 1.010 × 10−3, lower than those of the PCA-SVR and SVR models, which means that the particle swarm optimization improves the modeling accuracy of the SVM. Table 6 lists the mean squared error MSE and the correlation coefficient R for evaluating the performance of the models, together with the SVR hyper-parameters {C, g, ε}, for the grade efficiency prediction.

Fig. 10 shows the CPU time consumed by SVR and PCA-SVR with the standard grid method (t = 145.07 s and 3508.85 s, respectively) and by PSO-SVR and PCA-PSO-SVR with the PSO algorithm (t = 25.63 s and 502.65 s, respectively). The time required by the PSO algorithm is far less than that of the standard grid method: with 50 particles and 50 iterations, the particle swarm optimization needs only 2500 fitness evaluations, each confirmed by 5-fold cross-validation, whereas the standard grid search needs 125,000 evaluations with the 5-fold cross-validation when each of the three optimization parameters is set to 50 levels. In summary, as an advanced evolutionary algorithm, the particle swarm optimization can replace the standard grid search method to find better model parameters and to improve the optimization speed and accuracy.

Notation

a    inlet height, mm
b    inlet width, mm
H1   cylinder height, mm
H2   cone height, mm
S    length of vortex finder, mm
B    particle exit diameter, mm
C    penalty factor
Ci   concentration of inlet particles, g/m³
D    cyclone diameter, mm
dr   cyclone gas outlet diameter, mm
d̃r   ratio of the vortex finder diameter to the cyclone diameter (dimensionless), d̃r = dr/D
g    kernel function parameter
Ka   ratio of the cyclone cross-sectional area to the inlet cross-sectional area (dimensionless), Ka = πD²/(4ab)
vi   gas velocity at the cyclone inlet, m/s
δ    particle diameter, μm
dm   median particle size, μm
ε    insensitive loss
η    overall efficiency, %
ηi   grade efficiency of particles, %
ρp   particle density, kg/m³
ω    weight vector
u    threshold value
3.5. Comparison among PCA-PSO-SVR, BP, RBF and GRNN models

To test the validity of the PCA-PSO-SVR model, three types of ANN (artificial neural network) models are adopted to model the cyclone grade efficiency, namely, back propagation (BP), radial basis function (RBF), and general regression neural network (GRNN). The BP neural network adopts a single hidden layer with 10 neurons; the spread of the radial basis function in the RBF network is 7.5, and the spread of the probabilistic neural network within the GRNN is set to 0.1. Most of the prediction results of the PCA-PSO-SVR model cluster near the x = y line in Fig. 11, which means that the accuracy of the PCA-PSO-SVR model is superior to the other three neural networks. Some values predicted by RBF are lower than the experimental data, while some values predicted by GRNN are higher. This phenomenon is especially noticeable where the data are few (grade efficiency below 80%). Table 7 lists the evaluation parameters of the BP, RBF, GRNN, and PCA-PSO-SVR models; it shows that the PCA-PSO-SVR model achieves the minimum mean square error and high correlation compared with the other three ANN models.

4. Conclusions

The PCA-PSO-SVR modeling method, which combines principal component analysis, particle swarm optimization, and the support vector regression algorithm, is proposed to model the cyclone efficiency using experimental data. The simulation results show that PCA, as an unsupervised dimensionality reduction algorithm, can effectively reduce the dimensionality of the feature space, eliminate part of the noise in the data, reduce the complexity of the model, and improve its generalization ability. As an optimization algorithm, PSO shows excellent ability to find proper parameters for the SVR model. With the optimized parameters, SVR is successfully used to predict the grade efficiency of cyclone separators. The prediction results show that the PCA-PSO-SVR model has strong predictive ability, high stability, and high generalization ability and robustness compared with the classical theoretical models as well as the PSO-SVR, SVR, and PCA-SVR models and several types of ANN models. As a future extension of this work, the development of higher-performance artificial intelligence models and advanced optimal search algorithms is necessary to predict the grade efficiency of cyclone separators more accurately and to guide their optimization design.

Acknowledgments

The authors acknowledge support from the National Key Research and Development Program of China (2018YFB0604603-03), the National Natural Science Foundation of China (No. 21506139), the NSFC-Shanxi Joint Fund for Coal-Based Low-Carbon Technology (No. U1710101), and the Special Talent Program of Shanxi Province (No. 201605D211005).

References

[1] B. Zhao, Development of a dimensionless logistic model for predicting cyclone separation efficiency, Aerosol Sci. Technol. 44 (12) (2010) 1105–1112, https://doi.org/10.1080/02786826.2010.512027.
[2] B. Zhao, Prediction of gas-particle separation efficiency for cyclones: a time-of-flight model, Sep. Purif. Technol. 85 (2012) 171–177, https://doi.org/10.1016/j.seppur.2011.10.006.
[3] G. Lidén, A. Gudmundsson, Semi-empirical modelling to generalise the dependence of cyclone collection efficiency on operating conditions and cyclone design, J. Aerosol Sci. 28 (5) (1997) 853–874, https://doi.org/10.1016/S0021-8502(96)00479-X.
[4] Y.F. Qiu, B.Q. Deng, N.K. Chang, Numerical study of the flow field and separation efficiency of a divergent cyclone, Powder Technol. 217 (2012) 231–237, https://doi.org/10.1016/j.powtec.2011.10.031.
[5] G.G. Sun, J.Y. Chen, M.X. Shi, Optimization and applications of reverse-flow cyclones, China Particuology 3 (2005) 43–46, https://doi.org/10.1016/S1672-2515(07)60162-6.
[6] J.X. Yang, G.G. Sun, M.S. Zhan, Prediction of the maximum-efficiency inlet velocity in cyclones, Powder Technol. 286 (2015) 124–131, https://doi.org/10.1016/j.powtec.2015.07.024.
[7] W. Barth, Design and layout of the cyclone separator on the basis of new investigations, Brennstoff-Wärme-Kraft 8 (1956) 1–9.
[8] P.W. Dietz, Collection efficiency of cyclone separators, AIChE J. 27 (1981) 888–892.
[9] D. Leith, W. Licht, The collection efficiency of cyclone type particle collectors: a new theoretical approach, AIChE Symp. Ser. 68 (1972) 196–206.
[10] S.E. Rafiee, M.M. Sadeghiazad, Efficiency evaluation of vortex tube cyclone separator, Appl. Therm. Eng. 114 (2017) 300–327, https://doi.org/10.1016/j.applthermaleng.2016.11.110.
[11] Y. Zhu, K.W. Lee, Experimental study on small cyclones operating at high flowrates, J. Aerosol Sci. 30 (10) (1999) 1303–1315, https://doi.org/10.1016/S0021-8502(99)00024-5.
[12] J.Y. Chen, M.X. Shi, Analysis on cyclone collection efficiencies at high temperatures, China Particuology 1 (2003) 20–26, https://doi.org/10.1016/S1672-2515(07)60095-5.
[13] K.S. Lim, H.S. Kim, K.W. Lee, Characteristics of the collection efficiency for a cyclone with different vortex finder shapes, J. Aerosol Sci. 35 (2004) 743–754, https://doi.org/10.1016/j.jaerosci.2003.12.002.
[14] X.L. Luo, J.Y. Chen, Research on the effect of the particle concentration in gas upon the performance of cyclone separators, J. Eng. Thermophys. 13 (3) (1992) 282–285, http://jetp.iet.cn/EN/Y1992/V13/I3/282.
[15] Y.H. Jin, J.Y. Chen, Computation method of PV cyclone performance, Acta Pet. Sin. 2 (1995) 93–99, http://lib.cqvip.com/qk/81668X/200001/1878380.html.
[16] Y.H. Jin, M.X. Shi, Experimental studies on scale-up of cyclone separator, J. China Univ. Pet. Ed. Nat. Sci. 5 (1990) 46–55, http://qikan.cqvip.com/article/detail.aspx?id=353292.
[17] X. Sun, Y.Y. Joon, Multi-objective optimization of a gas cyclone separator using genetic algorithm and computational fluid dynamics, Powder Technol. 325 (2018) 347–360, https://doi.org/10.1016/j.powtec.2017.11.012.
[18] M. Francesco, R. Francesco, N.G. Carlo, Separation efficiency and heat exchange optimization in a cyclone, Sep. Purif. Technol. 179 (2017) 393–402, https://doi.org/10.1016/j.seppur.2017.02.024.
[19] A.N. Huang, I. Keiya, F. Tomonori, F. Kunihiro, K. Hsiu-Po, Effects of particle mass loading on the hydrodynamics and separation efficiency of a cyclone separator, J. Taiwan Inst. Chem. E. 90 (2018) 61–67, https://doi.org/10.1016/j.jtice.2017.12.016.
[20] D. Misiulia, A.G. Andersson, T.S. Lundström, Effects of the inlet angle on the collection efficiency of a cyclone with helical-roof inlet, Powder Technol. 305 (2017) 48–55, https://doi.org/10.1016/j.powtec.2016.09.050.
[21] W.I. Mazyan, A. Ahmadi, J. Brinkerhoff, H. Ahmed, M. Hoorfar, Enhancement of cyclone solid particle separation performance based on geometrical modification: numerical analysis, Sep. Purif. Technol. 191 (2018) 276–285, https://doi.org/10.1016/j.seppur.2017.09.040.
[22] F. Zhou, G.G. Sun, Y. Zhang, H. Ci, Q. Wei, Experimental and CFD study on the effects of surface roughness on cyclone performance, Sep. Purif. Technol. 193 (2018) 175–183, https://doi.org/10.1016/j.seppur.2017.11.017.
[23] K. Elsayed, C. Lacor, CFD modeling and multi-objective optimization of cyclone geometry using desirability function, artificial neural networks and genetic algorithms, Appl. Math. Model. 37 (8) (2013) 5680–5704, https://doi.org/10.1016/j.apm.2012.11.010.
[24] B. Zhao, Modeling pressure drop coefficient for cyclone separators: a support vector machine approach, Chem. Eng. Sci. 64 (2009) 4131–4136, https://doi.org/10.1016/j.ces.2009.06.017.
[25] K. Elsayed, C. Lacor, Modeling and Pareto optimization of gas cyclone separator performance using RBF type artificial neural networks and genetic algorithms, Powder Technol. 217 (2) (2012) 84–99, https://doi.org/10.1016/j.powtec.2011.10.015.
[26] K. Yetilmezsoy, Determination of optimum body diameter of air cyclones using a new empirical model and a neural network approach, Environ. Eng. Sci. 23 (4) (2006) 680–690, https://doi.org/10.1089/ees.2006.23.680.
[27] A. Khalkhali, H. Safikhani, Pareto based multi-objective optimization of a cyclone vortex finder using CFD, GMDH type neural networks and genetic algorithms, Eng. Optim. 44 (1) (2012) 105–118, https://doi.org/10.1080/0305215X.2011.564619.
[28] G.G. Sun, M.X. Shi, The proper design and application of PV cyclone, Pet. Refin. Eng. 32 (9) (2002) 4–7 (in Chinese), https://doi.org/10.3969/j.issn.1002-106X.2002.09.002.
[29] M.P. Wang, Q. Tian, Dynamic heat supply prediction using support vector regression optimized by particle swarm optimization algorithm, Math. Probl. Eng. 1 (2016) 1–10, https://doi.org/10.1155/2016/3968324.
[30] R. Dash, P.K. Sa, B. Majhi, Particle swarm optimization based support vector regression for blind image restoration, J. Comput. Sci. Technol. 27 (5) (2012) 989–995, https://doi.org/10.1007/s11390-012-1279-z.
[31] Y.Y. Chen, Q.F. Xiong, Support Vector Machine Method and Application Course, Beijing, 2011 (in Chinese).
[32] Y. Yajima, H. Ohi, M. Mori, Extracting feature subspace for kernel based linear programming support vector machines, J. Oper. Res. Soc. Jpn. 46 (4) (2003) 395–408, https://doi.org/10.15807/jorsj.46.395.
[33] J. Kennedy, R. Eberhart, Particle swarm optimization, Proc. IEEE Int. Conf. Neural Networks 4 (1995) 1942–1948, https://doi.org/10.1109/ICNN.1995.488968.
[34] Z. Zhong, D. Pi, Forecasting satellite attitude volatility using support vector regression with particle swarm optimization, IAENG Int. J. Comput. Sci. 41 (3) (2014) 153–162, http://www.iaeng.org/IJCS/issues_v41/issue_3/IJCS_41_3_01.pdf.