
Hybrid Neural Network Extreme Learning Machine and Flower Pollination Algorithm to Predict Fire Extensions on Kalimantan Island
N Nalaratih1,a), A Damayanti2,b) and E Winarko3,c)
1,2,3Department of Mathematics, Faculty of Science and Technology, Universitas Airlangga, Surabaya, Indonesia
b)Corresponding author: aulid@fst.unair.ac.id

Abstract. Forest and land fires are among the environmental problems that frequently occur in several regions of Indonesia, including Kalimantan Island. Ecologically, forest fires result in loss of nutrients, low soil infiltration, and high erosion. Therefore, rehabilitation needs to be carried out to restore and increase land productivity after a forest fire. One effort that can be made in preparing for the rehabilitation process is to predict the extent of forest fires. The purpose of this study is to predict the extent of fires on the island of Kalimantan using a hybrid artificial neural network method combining the extreme learning machine (ELM) and the flower pollination algorithm (FPA). The flower pollination algorithm is an algorithm used for optimization problems. In the extreme learning machine training process, this algorithm plays a role in optimizing the weights so that optimal weights are obtained. The optimal weights used for prediction are the best flower value (g*) obtained by the flower pollination algorithm. The stages of predicting the fire area on the island of Kalimantan using the flower pollination algorithm and extreme learning machine include data normalization, the training process, the validation test process, data denormalization, and error calculation using the mean square error (MSE). Based on the training process, the smallest MSE obtained was 0.016753377, with a validation test MSE of 0.017989928.

INTRODUCTION
Forest and land fires are among the environmental problems that frequently occur in several regions of Indonesia, one of which is the island of Kalimantan. In 2018, the recorded area of forest and land burnt on the island of Kalimantan reached 235,701.75 ha, making it the largest contributor to fires compared to the other islands of Indonesia [1]. Ecologically, forest fires result in loss of nutrients, low soil infiltration, and high erosion. Therefore, rehabilitation needs to be carried out to restore and increase land productivity after a forest fire. Several stages of the rehabilitation process include the procurement of plant seeds, land preparation, fertilizer application, planting, and maintenance. To support the success of the land rehabilitation process, careful preparation and planning are needed [2]. One effort that can be made is to predict the extent of forest fires over a certain period of time.
Prediction is a process of systematically estimating something that is most likely to occur in the future based on information from the past and present. There are various prediction methods; one method that has been developed to recognize patterns and make predictions is the artificial neural network. The network architecture model in the artificial neural network method makes it possible to build better prediction models [3].
In artificial neural networks, there are various learning methods that can be used for prediction, one of which is the extreme learning machine (ELM). The advantages of the ELM method are its very fast learning speed and better generalization performance than traditional learning methods [4]. However, in the ELM method, the weights and biases of each iteration are not saved, so it takes a long time to obtain the best weights and biases for better prediction results. Thus, in this study, the ELM method is hybridized with the flower pollination algorithm (FPA) to compute quickly with better prediction results.
METHODOLOGY
Extreme Learning Machine (ELM) is a supervised training method for Artificial Neural Networks (ANN). In the ELM method, the initial weights and biases are determined randomly. After that, the output matrix of the hidden layer is used in the final weight calculation. The final weight is obtained computationally using the Moore-Penrose Generalized Inverse [5]. The ELM architecture is presented in Figure 1.

FIGURE 1. ELM Architecture [6]


Based on Figure 1, there are three layers in the ELM architecture, namely the input layer, the hidden layer, and the output layer. The weights between the nodes in the input layer and the hidden layer are denoted by a, the biases in the hidden layer are denoted by b, and the weights between the nodes in the hidden layer and the output layer are denoted by β. The steps in the prediction process with the hybrid FPA-ELM are presented in Figure 2.

FIGURE 2. Flowchart of the Prediction process
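As an illustration of the ELM computation described above (formalized in Eqs. 2-6 below), the following minimal NumPy sketch shows how the hidden-layer output matrix and the output weights can be obtained with the Moore-Penrose pseudoinverse. The function names (`elm_fit`, `elm_predict`) and the uniform weight initialization are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    # Binary sigmoid activation function (see Eq. 3 below)
    return 1.0 / (1.0 + np.exp(-x))

def elm_fit(X, T, n_hidden, seed=0):
    """Basic ELM training sketch: random input weights and biases, then
    output weights via the Moore-Penrose pseudoinverse.
    X has shape (P, N) and T has shape (P, 1)."""
    rng = np.random.default_rng(seed)
    n_inputs = X.shape[1]
    a = rng.uniform(-1.0, 1.0, size=(n_inputs, n_hidden))  # input-to-hidden weights
    b = rng.uniform(0.0, 1.0, size=n_hidden)                # hidden-layer biases
    H = sigmoid(X @ a + b)                                  # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                            # hidden-to-output weights
    return a, b, beta

def elm_predict(X, a, b, beta):
    # Network output for new inputs
    return sigmoid(X @ a + b) @ beta
```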

The Data Normalization

Data normalization is carried out to transform the data values into the interval between 0 and 1, which can be done using the following equation [7].

x = \frac{0.8\,(x_p - \min(x_p))}{\max(x_p) - \min(x_p)} + 0.1 \qquad (1)
where:
x = data value after normalization
x_p = original data before normalization, where p = 1, 2, ..., number of data
min(x_p) = the minimum value of the original data before normalization
max(x_p) = the maximum value of the original data before normalization
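As a small illustration, the following Python function (a sketch, assuming the data is stored in a NumPy array) applies Eq. (1); note that the formula maps the data into [0.1, 0.9].

```python
import numpy as np

def normalize(x_p):
    """Min-max normalization of Eq. (1); maps the data into [0.1, 0.9]."""
    x_min, x_max = x_p.min(), x_p.max()
    return 0.8 * (x_p - x_min) / (x_max - x_min) + 0.1, x_min, x_max

# Keep x_min and x_max so the outputs can later be denormalized with Eq. (12).
burned_area = np.array([21726.0, 13836.0, 3083.0, 769.0])  # sample target values from Table 3
x_norm, x_min, x_max = normalize(burned_area)
```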

The Training Process


Before conducting the prediction process, the training process is carried out first to obtain the best weight and bias values. The hybrid ELM-FPA process is as follows (a code sketch is given after this list).
1. Input the normalized training data patterns.
2. Specify the hybrid ELM-FPA parameters. The FPA parameters include the number of flowers, the step controller (α), the switch probability (p), and λ, while the ELM parameters include the MSE limit and the maximum iteration.
3. Generate real numbers randomly between 0 and 1 as the initial population of flowers.
4. Convert the initial population of flowers to ELM weights and biases.
5. Calculate the value of the objective function (MSE) with the following steps [5].
a) Calculate the output value from the input layer to the hidden layer, G(a_j^*, b_j, x^*), using Eq. (2):

G(a_j^*, b_j, x^*) = g\!\left(\sum_{i=1}^{N} a_{ij} x_i + b_j\right), \quad j = 1, 2, \ldots, L \qquad (2)
b) Activate the output values of the hidden layer using the binary sigmoid function in Eq. (3) [8]:

Y = f(x) = \frac{1}{1 + \exp(-x)} \qquad (3)
c) Calculate the final weights from the hidden layer to the output layer (β) using Eq. (4), where H^{\dagger} is the Moore-Penrose generalized inverse of the hidden-layer output matrix H:

\beta = H^{\dagger} T \qquad (4)
d) Calculate all network output values (Y) using Eq. (5):

Y = \sum_{j=1}^{L} \beta_j\, G(a_j^*, b_j, x^*) \qquad (5)

where L is the number of nodes in the hidden layer.
e) Calculate the MSE value of each flower using Eq. (6) [9]:

\mathrm{MSE} = \frac{1}{P} \sum_{i=1}^{P} (Y_i - T_i)^2 \qquad (6)

where P is the number of training data.


6. Check the stopping condition. If the MSE value ≤ MSE limit or the iteration ≥ maximum iteration, then the best temporary flower (g*) is used as the weights and biases in the validation test process. If not, the FPA process is carried out.
7. The FPA process is carried out with the following steps [10].
a) Enter the initial population of flowers.
b) Generate random real numbers between 0 and 1 (rand_i), one for each flower.
c) Compare the switch probability p with rand_i:
i. If rand_i < p, then the elements of the flower will be updated with global pollination. Updating the elements/weights of the selected flower with global pollination is carried out with the following steps:
1) Determine the values of u and v by generating random real numbers with the normal distribution N(0, 1) for each of the flower's elements.
2) Determine the value of σ using Eq. (7) [11]:

\sigma = \left[ \frac{\Gamma(1+\lambda)\,\sin(\pi\lambda/2)}{\Gamma\!\left(\frac{1+\lambda}{2}\right)\lambda\, 2^{(\lambda-1)/2}} \right]^{1/\lambda} \qquad (7)
3) Calculate the step length (s) based on Eq. (8) [12]:

s = \frac{u}{|v|^{1/\lambda}} \qquad (8)
4) Calculate the value of L using Eq. (9) [13]:

L \sim \frac{\lambda\,\Gamma(\lambda)\,\sin(\pi\lambda/2)}{\pi}\,\frac{1}{s^{1+\lambda}} \qquad (9)
5) Determine the new flower value x_i^{t+1} using the global pollination equation in Eq. (10):

x_i^{t+1} = x_i^{t} + L(\lambda)\,(g^* - x_i^{t}) \qquad (10)

6) Determine the new flower, flower_i.
ii. If rand_i ≥ p, then the elements of the flower will be updated with local pollination. Updating the elements or weights of the selected flower with local pollination is done with the following steps.
1) Select two flowers at random, namely flower_j and flower_k.
2) Determine the value of ε by generating random real numbers with a uniform distribution, one for each element of the flower.
3) Determine the new flower value flower_i using the local pollination equation in Eq. (11):

x_i^{t+1} = x_i^{t} + \varepsilon\,(x_j^{t} - x_k^{t}) \qquad (11)

4) Determine the new flower, flower_i.


d) Calculate the ELM objective function for each flower.
e) Update the flowers by comparing the current flower's objective function (F_j) with the new flower's objective function (F_i). If F_i ≤ F_j, then replace the current flower with the new flower.
f) Sort the flowers based on the value of the objective function.
g) Determine and save the best flower g*.
8. Repeat steps 5 to 7.
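The following Python sketch illustrates how the FPA loop above can drive the ELM objective function. It is a minimal illustration under the paper's parameter names (number of flowers, p, λ, α, MSE limit, maximum iteration), not the authors' code; `objective` stands in for step 5, i.e., decoding a flower into ELM weights and biases and returning the training MSE.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, lam, rng):
    """Levy-flight step lengths via Mantegna's algorithm (Eqs. 7-8)."""
    sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
             (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / lam)

def fpa_optimize(objective, dim, n_flowers=20, p=0.6, lam=1.5, alpha=0.1,
                 max_iter=500, mse_limit=0.001, seed=0):
    """Minimize `objective` (the ELM training MSE) over flower positions."""
    rng = np.random.default_rng(seed)
    flowers = rng.uniform(0.0, 1.0, size=(n_flowers, dim))       # step 3
    fitness = np.array([objective(f) for f in flowers])
    g_best = flowers[np.argmin(fitness)].copy()                  # best temporary flower g*

    for _ in range(max_iter):
        if fitness.min() <= mse_limit:                           # stopping condition (step 6)
            break
        for i in range(n_flowers):
            if rng.random() < p:                                 # global pollination (Eq. 10)
                candidate = flowers[i] + alpha * levy_step(dim, lam, rng) * (g_best - flowers[i])
            else:                                                # local pollination (Eq. 11)
                j, k = rng.choice(n_flowers, size=2, replace=False)
                candidate = flowers[i] + rng.random() * (flowers[j] - flowers[k])
            f_new = objective(candidate)
            if f_new <= fitness[i]:                              # greedy replacement (step 7e)
                flowers[i], fitness[i] = candidate, f_new
        g_best = flowers[np.argmin(fitness)].copy()              # step 7g
    return g_best, fitness.min()
```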

The Validation Test Process

In the training process, the weights, biases, and β values are obtained, which are then used in the validation test process. The stages of the validation test process are almost the same as those of the training process, but the data used are the normalized validation test data. The steps in the validation test process are as follows:
1. Enter the normalized validation test data.
2. Enter the weights and biases obtained from the training process.
3. Calculate the value of the objective function (MSE) as in step 5 of the training process.
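A short sketch of this step, assuming `a`, `b`, and `beta` are the input weights, biases, and output weights returned by the training process:

```python
import numpy as np

def validation_mse(X_val, T_val, a, b, beta):
    """Objective function (MSE, Eq. 6) on normalized validation data."""
    H = 1.0 / (1.0 + np.exp(-(X_val @ a + b)))   # hidden-layer output (Eqs. 2-3)
    Y = H @ beta                                  # network output (Eq. 5)
    return float(np.mean((Y - T_val) ** 2))
```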

The Data Denormalization

The denormalization process aims to return the network's forecast values to the same scale as the original data. Denormalizing the data from the interval (0, 1) back to the original scale can be done using the following equation [7]:
x = \frac{(x_p - 0.1)\,(\max(x_p) - \min(x_p))}{0.8} + \min(x_p) \qquad (12)
where:
x = data value after denormalization
x_p = output data before denormalization
min(x_p) = the minimum value of the original data before normalization
max(x_p) = the maximum value of the original data before normalization
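A one-line sketch of Eq. (12), assuming the minimum and maximum of the original data were stored during normalization:

```python
def denormalize(x, x_min, x_max):
    """Inverse of Eq. (1): map values in [0.1, 0.9] back to the original scale (Eq. 12)."""
    return (x - 0.1) * (x_max - x_min) / 0.8 + x_min
```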

RESULTS AND DISCUSSION

The prediction of the extent of forest and land fires is one of the efforts made in the preparation and planning of the rehabilitation process. Rehabilitation is important to restore and increase land productivity after a forest fire. In this study, the data used are data on forest fires on the island of Kalimantan, totaling 216 records from January 1998 to December 2015. The data were obtained from the LIPI Informatics Research Center.
The hybrid ELM-FPA program begins with the input of the required parameters, including the MSE limit, the maximum iteration, the number of flowers, the step controller (α), the switch probability (p), and λ. In the training process, 10 experiments were carried out for each parameter variation, namely the maximum iteration, the number of flowers, the step-size control value (α), and λ, to obtain the smallest MSE value. The results of the program implementation in the training process and validation test with an MSE limit of 0.001 and a switch probability of 0.6 can be seen in Table 1.
TABLE 1. Program Implementation

Iteration   Number of Flowers   λ     α     Training MSE     Validation Test MSE
100         5                   2     0.1   0.020572512      0.289970763
100         5                   2     0.5   0.018607124      0.32388647
100         5                   2     1     0.017662536      0.357802176
100         5                   1.5   0.1   0.017827552      0.269970763
100         5                   1.5   0.5   0.018041452      0.178011098
100         5                   1.5   1     0.020712376      0.229970763
100         5                   1     0.1   0.01721661       0.298045689
100         5                   1     0.5   0.020812357      0.175615591
100         5                   1     1     0.01963452       0.234567443
100         10                  2     0.1   0.01791652       0.166813345
100         10                  2     0.5   0.018001576      0.162412221
100         10                  2     1     0.020743625      0.158011098
100         10                  1.5   0.1   0.022407635      0.153609974
100         10                  1.5   0.5   0.018256744      0.149208851
100         10                  1.5   1     0.018341801      0.144807728
100         10                  1     0.1   0.019859698      0.140406604
100         10                  1     0.5   0.020505621      0.136005481
100         10                  1     1     0.018596969      0.131604357
100         20                  2     0.1   0.017301378      0.640235827
100         20                  2     0.5   0.017021458      0.043520327
100         20                  2     1     0.018278308      0.144293283
100         20                  1.5   0.1   0.017289449      0.018545659
100         20                  1.5   0.5   0.01713469       0.112489726
100         20                  1.5   1     0.017147283      0.023187406
100         20                  1     0.1   0.016753377      0.017989928
100         20                  1     0.5   0.017415605      0.121032148
100         20                  1     1     0.019079615      0.578076351
500         5                   2     0.1   0.020572512      0.289970763
500         5                   2     0.5   0.018607124      0.32388647
500         5                   2     1     0.017827552      0.269970763
500         5                   1.5   0.1   0.018001452      0.178011098
500         5                   1.5   0.5   0.020012476      0.229970763
500         5                   1.5   1     0.019712376      0.229970763
500         5                   1     0.1   0.01721661       0.298045689
500         5                   1     0.5   0.020812357      0.175615591
500         5                   1     1     0.01963452       0.234567443
500         10                  2     0.1   0.017775123      0.166813345
500         10                  2     0.5   0.018001576      0.162412221
500         10                  2     1     0.020743625      0.158011098
500         10                  1.5   0.1   0.018407635      0.153609974
500         10                  1.5   0.5   0.018256744      0.149208851
500         10                  1.5   1     0.019342801      0.144807728
500         10                  1     0.1   0.017025304      0.268316112
500         10                  1     0.5   0.018367455      0.130654667
500         10                  1     1     0.018897855      0.131604357
500         20                  2     0.1   0.017256781      0.140235827
500         20                  2     0.5   0.017200511      0.043520327
500         20                  2     1     0.018232308      0.144005283
500         20                  1.5   0.1   0.017023449      0.01899659
500         20                  1.5   0.5   0.017111469      0.112489726
500         20                  1.5   1     0.01713483       0.023187406
500         20                  1     0.1   0.016631514      0.0179728
500         20                  1     0.5   0.016893341      0.0179999
500         20                  1     1     0.017003278      0.01983258

Based on Table 1, it is known that the smallest MSE value in the training process is 0.016631514. This value is achieved with a maximum iteration of 500, number of flowers = 20, switch probability = 0.6, λ = 1, and step size (α) = 0.1. The best weights and biases from the training results are used for the validation test process. The best weights and biases from the program implementation are presented in Table 2.
TABLE 2. The Best Weights and Biases
Input Weights (x1, x2, x3, x4, x5, ⋯, x12)                    Bias
-0.36 1.65 0.8 0.96 -0.30 ⋯ 1.26 0.31
-0.45 0.75 0.22 1.02 1.14 ⋯ 0.84 0.67
0.66 0.89 -0.68 0.13 0.70 ⋯ 0.26 -0.21
0.78 0.46 0.36 0.81 0.08 ⋯ 0.56 0.29
0.82 1.02 0.52 0.34 0.51 ⋯ 0.64 0.06
0.14 0.31 -0.39 -0.32 1.13 ⋯ 0.13 -0.06
-0.06 1.13 0.82 -0.79 0.21 ⋯ 0.18 0.70
0.56 0.82 1.35 0.64 1.28 ⋯ 1.35 0.93
-0.55 0.54 0.65 0.42 0.68 ⋯ 0.17 -0.36
0.25 0.81 0.71 1.12 0.73 ⋯ 0.33 1.00
0.88 0.92 0.09 -0.14 0.74 ⋯ 0.89 1.25
-0.73 0.22 0.83 1.70 -0.01 ⋯ -0.04 1.03

Based on these weights and biases, the output weights β = [33.70, 920.27, 0.12, -28.92, -0.82, -0.13, -374.68, 155445.00, -2.17, -155881.20, 124017.98, -126166.50]^T are obtained. The prediction values in the validation test process are presented in Table 3.

TABLE 3. Prediction Value of Validation Test

Pattern Forecasting results Denormalization Target Error


1 0.737839014 22009.067 21726 0.013029
2 0.498637299 13802.169 13836 0.002445
3 0.183519563 2990.627 3083 0.029962
4 0.118081218 745.468 769 0.030601
5 0.9 27572.734 27012 0.020759
6 0.1 125.110 112 0.117054
7 0.104407813 276.340 269 0.027286
8 0.105093978 299.882 314 0.044962
9 0.105061567 298.770 525 0.430914
10 0.1545846 1997.882 1427 0.400057
11 0.176962901 2765.671 2288 0.208772
12 0.160212716 2190.980 2093 0.046813
13 0.357411276 8956.770 9502 0.057381
14 0.379044478 9698.995 9970 0.027182
15 0.37546658 9576.239 9267 0.03337
16 0.110884527 498.553 594 0.160685
17 0.108251731 408.223 366 0.115363
18 0.11672273 698.859 766 0.087651
19 0.108361962 412.005 393 0.048359
20 0.116422376 688.554 647 0.064226
21 0.107344723 377.104 357 0.056314
22 0.105826952 325.030 311 0.045113
23 0.122573524 899.597 904 0.004871
24 0.118669944 765.667 618 0.238943
25 0.533653589 15003.561 12070 0.243046
26 0.397316311 10325.893 10943 0.056393
27 0.39459934 10232.675 10113 0.011834
28 0.140108972 1501.230 989 0.517927
29 0.154999733 2012.125 1909 0.05402
30 0.121878761 875.760 919 0.047051
31 0.191581551 3267.230 4528 0.278439
32 0.13423422 1299.670 1547 0.159877
33 0.119240164 785.231 710 0.105959
34 0.119822481 805.210 789 0.020545
35 0.121986311 879.450 973 0.096146
36 0.318303542 7615.002 6184 0.231404
37 0.329695945 8005.870 8101 0.011743
38 0.776009107 23318.665 22779 0.023691
39 0.876492541 26766.204 26760 0.000232
40 0.864901757 26368.530 26654 0.01071
41 0.103716194 252.611 220 0.148231
42 0.110590119 488.452 423 0.154733
43 0.102504552 211.040 146 0.445479
44 0.154351371 1989.880 2202 0.096331
45 0.140118299 1501.550 1329 0.129834
46 0.136798551 1387.651 1376 0.008467
47 0.136536204 1378.650 1366 0.009261
48 0.358544781 8995.660 9302 0.032933
49 0.181370394 2916.890 2830 0.030703
50 0.166379224 2402.550 1845 0.302195
Average error 0.110786

Next, a graph comparing the predicted fire area from the validation test results with the actual fire area is presented in Figure 3.

FIGURE 3. Validation test prediction results (denormalized prediction values and target values for the 50 validation patterns)


Based on the graph, the average error obtained is quite good, so it can be said that the hybrid ELM-FPA is able to produce fire-area predictions that are close to the true values.
Next, the area of forest fires on the island of Kalimantan is predicted for the first month of the following year, January 2016. The prediction is made using the best weights and biases obtained through the training process (with a training MSE of 0.016631514). The predicted output value is presented in Table 4.
TABLE 4. Value of the Output Prediction Result
Month Prediction Results
January 2016 56865 km2

CONCLUSIONS

Based on the results and discussion, it can be concluded that the hybrid neural network combining the extreme learning machine (ELM) and the flower pollination algorithm (FPA) can be used to predict the extent of fires on Kalimantan Island. With a maximum iteration of 500, number of flowers = 20, switch probability = 0.6, λ = 1, and step size (α) = 0.1, the smallest MSE obtained in the training process is 0.016631514, the validation test MSE is 0.0179728, and the average error of the prediction values is 0.110786. It can be said that the hybrid ELM-FPA is capable of producing fire-area predictions that are close to the true values.

ACKNOWLEDGMENTS

The authors would like to thank the head of the Department of Mathematics, Faculty of Science and Technology, Universitas Airlangga, Surabaya, Indonesia, for permitting us to use the laboratory, and the Faculty of Science and Technology for providing funding for the publication of this paper.

REFERENCES

1. SIPONGI, 2019, http://sipongi.menlhk.go.id/hotspot/luaskebakaran [accessed March 30, 2019].
2. Wibisono, I.T., et al., 2019, Rehabilitasi Hutan/Lahan Rawa Gambut Bekas Terbakar, CCFPI, Wetlands International – Indonesia Programme.
3. Herawati, S. and Djunaidy, A., 2014, Peramalan Harga Minyak Mentah menggunakan Gabungan Metode Ensemble Empirical Mode Decomposition (EEMD) dan Jaringan Syaraf Tiruan, Jurnal SimanteC, Vol. 4, 60.
4. Huang, G.B., 2004, Extreme Learning Machine: A New Learning Scheme of Feedforward Neural Networks, Proceedings of the International Joint Conference on Neural Networks, Budapest.
5. Huang, G.B., Zhu, Q.Y., and Siew, C.K., 2006, Extreme Learning Machine: Theory and Applications, Neurocomputing, Vol. 70(1), 489-501.
6. Siang, J.J., 2005, Jaringan Syaraf Tiruan dan Pemrogramannya menggunakan Matlab, Andi Publisher, Yogyakarta.
7. Kusumadewi, S. and Hartati, S., 2010, Neuro Fuzzy: Integrasi Sistem Fuzzy & Jaringan Syaraf, Edisi 2, Graha Ilmu, Yogyakarta.
8. Sun, Z.L., Choi, T.M., Au, K.F., and Yu, Y., 2008, Sales Forecasting using Extreme Learning Machine with Applications in Fashion Retailing, Decision Support Systems, 46, 411-419.
9. Yang, X.S., 2012, Flower Pollination Algorithm for Global Optimization, in Unconventional Computation and Natural Computation, Lecture Notes in Computer Science, Vol. 7445, 240-249.
10. Vasant, P., Weber, G.W., and Dieu, V.N., 2016, Handbook of Research on Modern Optimization Algorithms and Applications in Engineering and Economics, IGI Global, USA.
11. Mantegna, R.N., 1994, Fast, Accurate Algorithm for Numerical Simulation of Levy Stable Stochastic Processes, Phys. Rev. E, 49, 4677-4683.
12. Kaveh, A. and Bakhshpoori, T., 2013, Optimum Design of Steel Frames Using Cuckoo Search Algorithm with Levy Flights, Struct. Des. Tall Spec. Build., 22, 1023-1036.
13. Pavlyukevich, I., 2007, Levy Flights, Non-local Search and Simulated Annealing, Journal of Computational Physics, 226, 1830-1844.
