
Soft Computing Based Analysis for Crop Yield Prediction using Spiking Neural Network with GATE-Neural Network for Industry Revolution 5.0

Abstract
Crop yield is determined by several components, such as genotype, climate, and management actions. Understanding the dependence between yield and these variables, and producing accurate predictions, requires large datasets and well-developed algorithms. In this study, a machine learning-based model is developed to predict crop yield. The paper proposes a machine learning algorithm, the GATE-Neural Network, implemented using three feature vectors: the Standardized Precipitation Index (SPI), the Vegetation Condition Index (VCI), and the Normalized Difference Vegetation Index (NDVI). The model is compared with the Spiking Neural Network and achieves a prediction accuracy of 95.005%, which is considerably better than other state-of-the-art algorithms. Beyond accuracy, the proposed model is evaluated using MBE (Mean Bias Error), MAE (Mean Absolute Error), RMSE (Root Mean Square Error), and execution time. The GATE-Neural Network achieves better results than other existing models on these evaluation parameters.

Keywords: Neural Network, RMSE, Winner-Take-All network, NDVI, training time



1. Introduction

Crop yield determination is crucial to food production around the world.

Policymakers rely on reliable food surveys to estimate trade volumes. Breeders
need to forecast the performance of new varieties they want to introduce under
different conditions in order to develop better varieties [1]. With yield prediction, farmers can make


Preprint submitted to Journal of Information Processing and Management


management and financial decisions. However, crop yield determination is
demanding because many factors are involved. Genotype information is typically represented by
high-dimensional marker data, containing several thousand markers per
individual. The effects of these markers must be estimated, and they are influenced by various
environmental variables, including the management strategies used in the field.
Many studies have focused on how the genetic structure of an organism
affects its physical characteristics. One simple and common approach
concentrates on the additive effects of genotype (G) and environment (E) while treating their
interactions as noise [2]. A common way to examine the effects of mega-environments is to
look at aggregate outcomes rather than concentrate on more granular environmental factors.
Several studies have proposed grouping environmental sites based on their relationships
with aggregate data. Burgueño et al. [3] proposed a strategy to cluster environments using
factor analytic techniques and linear mixed models, which can boost agency decision-
making by about 6% when there are complex genotype-by-environment (G × E) trends in the data [4]. Both linear
and nonlinear models have been used to analyze the additive and interacting
effects of specific genes and environments on development.
Artificial intelligence can be used to improve crop yield forecasting. Machine
learning, a branch of artificial intelligence, is a practical methodology that can better
predict performance based on several characteristics. Machine learning can
extract knowledge from a data set by identifying patterns and relationships. Therefore,
the models need to be trained on datasets with known past outcomes. The
predictor model consists of various characteristics, and its parameters are determined from historical
data during the training period. During the test phase, historical information
not used for training is used for performance assessment.
Machine learning models treat the performance as an implicit function of the applied
data. A multi-layer neural network has been employed to predict corn yield using soil
characteristics, weather conditions, and management inputs. In 2003, a model was built with
multiple linear regression, projection pursuit regression, and neural networks [5]; the
neural network demonstrated the greatest predictive capacity. Marko et
al. [6] used stem weight histograms to estimate the genetic output of different soybean
strains.
Here, a deep neural network is used to analyze yield and assess yield differences
between hybrids and locations. Deep neural networks are computational models
designed to find underlying representations without hand-crafted features. DNNs
have several stacked layers, and each layer transforms the data into a higher, more
abstract representation [7]. NLP, as a science, gains better insights from such representations to achieve higher
accuracy. With suitable training and testing datasets, deep neural networks can
approximate essentially any function.
In this paper, SNN was first implemented as in [8] and evaluated on different performance
parameters using one spatial feature, NDVI. The results were obtained
for various evaluation parameters and then compared with
GATE-NN. To improve the performance of SNN, it was also implemented using
three feature vectors, NDVI, VCI, and SPI, which were passed as
the input features for training the SNN [9].
The majority of SNN training algorithms are based on back-propagation and focus on feed-forward
networks. SNNs are an important class of models because of their ability to process temporal signals
such as data series and speech data.
Crop yield prediction is one of the significant issues faced by farmers
and the agriculture industry; this issue is the
motivating factor for writing this paper.
This paper aims to build a machine learning model that can be used to predict crop
yield. The model will help the crop industry make beneficial decisions.
The first section covers the introduction, and the rest of the paper is structured as follows:
Section 2 reviews the related work, Section 3 presents the proposed scheme
followed by the results, and, lastly, the conclusion and future work are discussed
in Section 4.

2. Literature Survey

Predicting crop yield is extremely relevant to farmers and to local authorities at the
national and regional levels. A crop yield analysis model can dramatically improve
agricultural decisions. There are various methods for forecasting crop yields; this
section explores the use of ML in crop yield analysis. Chlingaryan et al. [10] wrote
an article on machine learning methods for estimating nitrogen status. The article
concludes that automation and machine learning can create new technical solutions that
will be cost-effective for agriculture. Elavarasan et al. [11] conducted a publication
survey of ML models for crop yield analysis correlated with environmental
parameters. The paper suggests creating a broader scope for reviewing yield. A
survey exploring the application of ML in agricultural industries was released
recently; the study analyzed the literature on crops, animals, water, and soil
conservation.
Li et al. [12] have carried out comprehensive research on ripeness indicators to
determine the optimal harvest time and predict yield. Mayuri and Priya [13]
discussed agricultural applications and the difficulties faced by image processing
and machine learning in the management of plant diseases. Somvanshi and Mishra [14]
explored various machine learning techniques and their plant biology applications.
Gandhi and Armstrong carried out an analysis of data mining applications
in agricultural sectors. They concluded that more research needs to be done before data mining
becomes practical for handling sizeable agricultural data sets [15]. Beulah found that
the problem of crop yield prediction could be solved using data mining [16].
Chen and Cournède [17] focus on determining how best to predict maize yield
from the weather. Liakos et al. [18] describe the use of machine learning in
agriculture; artificial neural networks were applied in that work. The study showed that
machine learning models are used in several fields, mainly in crop production and
management decisions.
Priya et al. [19] discuss using the random forest algorithm to predict and
improve crop yields. For yield prediction, a random forest was trained on a
dataset with four characteristics, and the rules learned from the training set were
applied to the remainder of the data. The results demonstrated that the random
forest algorithm achieves a precise crop yield prediction with low error among
crop yield models, making it suitable for large-scale crop yield prediction in
agricultural planning.
Table 1. Comprehensive table of literature survey

Author | Year | Methodology | Limitations
J. Burgueño, J. Crossa, P. L. Cornelius, and R. C. Yang [3] | 2008 | Genotypes without crossover | The discussion covers only linear regression; no method is given for multiple variables
P. J. Ramos, F. A. Prieto, E. C. Montoya, C. E. Oliveros [20] | 2017 | Computer vision is used to count the fruits on coffee branches automatically | A machine learning algorithm is implemented, but it could be improved using deep learning methods
E. Khosla, R. Dharavath, and R. Priya [21] | 2020 | Crop yield prediction using a support vector regression model and aggregated rainfall-based modular artificial neural networks is demonstrated | The method discussed in the paper is based on regression
P. Nevavuori, N. Narra, and T. Lipping [22] | 2019 | Advances in deep convolutional neural networks for the prediction of crop production | That paper used convolutional neural networks, whereas the present paper uses a hybrid deep learning model for crop prediction

3. Proposed Scheme

This section focuses on the proposed model, the GATE Neural Network. It also
covers the spiking neural network and the comparison of the spiking neural
network with the GATE neural network.

3.1 Spiking neural networks

Pritam Bose and his students developed the first SNN model for crop yield
estimation [23]. The study used remote sensing (Landsat) imagery to monitor the crops
and land use changes taking place during the study period and produced a
report on its results.
3.1.1. Spiking Neuron Model
In the overwhelming majority of spiking neuron models, it is assumed that information
is carried by the timing of spikes rather than by their shape. Spiking neurons and efficient
neuronal synapses are thus thought to underlie thought, choice, and learning.
The basic unit of the SNN is the neuron, which is designed to resemble its
biological counterpart. The neuron processes the information arriving from pre-synaptic
neurons (PrSyN) and sends a train of spikes (a sequence of spikes) to post-synaptic
neurons (PoSyN) through the axon. The probability that the neuron will fire increases
with the membrane potential: a spike is
generated only if the membrane potential reaches a threshold (called the spike threshold)
[24].
Figure 1: Representation of an action potential
Figure 1 shows an approximate plot of a neuron's spike.
Depolarization occurs when the input stimulus is high enough to reach the spike
threshold. When this happens, the action potential rises and, until repolarization
ends, the neuron is in the absolute refractory period, during which it cannot fire
again. When repolarization crosses below the resting potential,
hyperpolarization starts, during which it is more difficult, though still possible, for the neuron to fire.
This period is called the relative refractory period. In the end, the cell returns to the
resting potential.
Some of the neuron models are as mentioned below:
1. Hodgkin-Huxley (HH) [25]
2. Leaky integrate-and-fire (LIF) [26]
3. Spike response (SRM)
4. Izhikevich neuron (IZK)
In this work, we use the LIF neuron model, as in the previous work [23].
3.1.2 Spiking Neural Network Components
Leaky-Integrate-and-Fire neurons and plastic synapses are the fundamental units of
SNNs and are also biologically plausible computational elements. The dynamics of an
active LIF neuron are formulated as follows:

τ_m (dV_mem / dt) = −V_mem + I(t) → Eq. (1)

Here, V_mem is the post-neuronal membrane potential and τ_m is the time constant for
membrane potential decay. The input current, I(t), is a weighted summation of
pre-spikes at each time step and is given by:

I(t) = Σ_{i=1}^{n_l} ( w_i Σ_k θ_i(t − t_k) ) → Eq. (2)

Here, n_l is the number of pre-synaptic weights, w_i is the synaptic weight connecting
the ith pre-neuron to the post-neuron, and θ_i(t − t_k) is the spike event from the ith
pre-neuron at time t_k. This function can be written as:

θ(t − t_k) = 1 if t = t_k, and 0 otherwise → Eq. (3)

where t_k is the time at which the kth spike occurred.


Fig. 2 shows the LIF neuronal dynamics. To deliver a current influx to the post-neuron,
the effect of each pre-spike θ_i(t − t_k) is scaled by the associated synaptic
weight (w_i); note that these quantities are expressed in arbitrary units. The input
current is integrated into the post-neuronal membrane potential (V_mem), which leaks
exponentially over time with the time constant τ_m. If the membrane potential reaches
the threshold (V_th), the neuron produces a spike and resets its membrane potential
to its previous (resting) state.

Figure 2: Spiking Neuron Model
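The LIF dynamics of Eqs. (1)-(3) can be sketched in a few lines of Python (the original work used MATLAB; the time constant, threshold, and spike times below are illustrative placeholders, not values from the paper):

```python
def simulate_lif(spike_times, weights, tau_m=20.0, v_th=1.0, dt=1.0, t_max=50.0):
    """Euler integration of tau_m * dVmem/dt = -Vmem + I(t) with threshold reset.

    spike_times: one list of firing times t_k per pre-neuron (theta in Eq. 3).
    weights:     synaptic weight w_i for each pre-neuron (Eq. 2).
    Returns the list of post-neuron (output) spike times.
    """
    v = 0.0
    out_spikes = []
    for step in range(int(t_max / dt)):
        t = step * dt
        # I(t) = sum_i w_i * sum_k theta_i(t - t_k); theta is 1 only when t == t_k
        current = sum(w for w, times in zip(weights, spike_times) if t in times)
        v = v * (1.0 - dt / tau_m) + current   # discrete form of Eq. (1): leak + integrate
        if v >= v_th:                          # membrane potential crossed Vth
            out_spikes.append(t)
            v = 0.0                            # reset after firing
    return out_spikes

# Two coincident weighted pre-spikes push Vmem over the threshold; one alone does not.
print(simulate_lif([[10.0], [10.0]], [0.6, 0.6]))  # -> [10.0]
print(simulate_lif([[10.0]], [0.6]))               # -> []
```

The reset-to-rest behaviour after firing corresponds to the absolute refractory period described for Figure 1.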

Table 2. Meaning of notations used in equations

Notation | Meaning
θ | Spike event
x | Sum of pre-spike events over time
w | Synaptic weight
V_mem | Membrane potential
V_th | Neuronal firing threshold
I | Input current at each time step
net | Total current influx over time
a | Activation of spiking neuron
E | Loss function
δ | Error gradient
3.2 SNN for Crop Yield prediction

These models successfully identify and forecast crop yield, and the training and
testing algorithms work efficiently. The approach is a three-step solution.
The first step transforms the digital image into vectors, so that each row of
pixels becomes a candidate vector for the image. In the second step, the processing
chain is split into several parts to reduce the image size: each element of an array
encodes a small portion of the image. The training layer
includes learning, output spike firing, and a winner-take-all network competition
implemented through inhibitory neurons. Moreover, a mathematical model is applied to deal with
ambiguity arising from related circumstances.
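A minimal sketch of these three steps, assuming latency coding for the spike generator and simple block averaging for the size reduction (both are illustrative choices, not the paper's exact procedures):

```python
def image_to_latencies(image, max_t=100.0):
    """Step 1: each pixel row becomes a vector; brighter pixels (0..1) fire earlier."""
    return [[max_t * (1.0 - p) for p in row] for row in image]

def pool_segments(vec, k):
    """Step 2: reduce the image size by averaging blocks of k elements."""
    return [sum(vec[i:i + k]) / len(vec[i:i + k]) for i in range(0, len(vec), k)]

def winner_take_all(potentials):
    """Step 3: only the most active output neuron fires; the rest are inhibited."""
    winner = max(range(len(potentials)), key=lambda i: potentials[i])
    return [1 if i == winner else 0 for i in range(len(potentials))]

print(image_to_latencies([[1.0, 0.5]]))   # -> [[0.0, 50.0]]
print(pool_segments([1, 2, 3, 4], 2))     # -> [1.5, 3.5]
print(winner_take_all([0.2, 0.9, 0.1]))   # -> [0, 1, 0]
```

The winner-take-all step mirrors the inhibitory competition of the training layer: the firing neuron suppresses its neighbours.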

3.3 Diagrammatic Explanation of SNN [27]

The architecture of the SNN is shown in Fig. 3. It includes three
components: a neural spike generator, image segmentation, and the learning phase
with output pattern generation. The implementation process is explained further below.
Figure 3: Supervised network (SNN), integrating a spike-transcription layer, spike-train
classification, STDP learning, pattern firing, and an inhibitory neuron. The
pre-processing stage (first and second layers) converts the N input pixels into spike
trains and pools them into N/K groups; the third layer applies STDP learning with
inhibition to produce the class label. R is the column number, K is the number of
neighbours of a single neuron, and P is the element number. The outputs of all neurons
are marked as groups by the black circle blocks.

3.4 Training and testing for wheat crop yield prediction in the selected region using SNN

3.4.1 Method for implementation of the Artificial Spiking Neural Network (ASNN) for training and testing of the data:

Initialization of the pre- and post-synaptic values of the SNN:

Each neuron connects to the next through a junction termed a synapse. Incoming
signals first pass through the pre-synaptic area, and the post-synaptic junction then
modifies the input signal. These values were evaluated by Eqs. 4 and 5.

P_rs(t) = ∫₀ K_syn(t − t_f) e^(−t/τ) → Eq. (4)

Here, P_rs(t) is the pre-synaptic value, K_syn controls the peak conductance value,
t_f is a firing instant within the time range t, and τ is the synaptic time constant.
In this work, the integration range corresponds to the input vectors used for
training. The formula above detects autocorrelation over time. The post-synaptic
current is obtained from Eq. (5):

P_os(t) = ∫₀^∞ P_os(t) − V(t) ∫₀^∞ P_os(t) → Eq. (5)

The weights of the neurons are then initialized by Eq. 6, where each neuron is
considered a vertex and each synapse an edge between them; the inputs are the
spike trains of the network:

w_ij(t) = a₀ + a₁ P_rs(t) + a₂ P_os(t) + a₃ P_rs(t) P_os(t) + a₄ P_rs(t) P_os(t) → Eq. (6)

In Eq. 6, w_ij(t) is the weight between the ith and jth layers, and a₀, a₁, a₂, a₃, and a₄
are coefficient constants that manage the changing rate of the synapses.
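As a rough sketch, Eq. (4) can be read as a sum of exponentially decaying kernels over the firing instants, which then feeds the weight polynomial of Eq. (6); the kernel shape, τ, and coefficient values below are illustrative assumptions, not the paper's settings:

```python
import math

def presynaptic_trace(t, firing_times, tau=10.0):
    """Discrete reading of Eq. (4): a decaying exponential kernel summed over spikes."""
    return sum(math.exp(-(t - tf) / tau) for tf in firing_times if tf <= t)

def initial_weight(prs, pos, a=(0.1, 0.2, 0.2, 0.05, 0.05)):
    """Eq. (6): weight as a polynomial in the pre- (Prs) and post-synaptic (Pos) values.
    a0..a4 are the coefficient constants managing the rate of synaptic change.
    (The last two product terms are written identically in Eq. 6 as printed.)"""
    a0, a1, a2, a3, a4 = a
    return a0 + a1 * prs + a2 * pos + a3 * prs * pos + a4 * prs * pos

print(presynaptic_trace(10.0, [10.0]))   # kernel is exactly 1.0 at the spike itself
print(initial_weight(0.0, 0.0))          # only a0 survives when both traces are zero
```

With both traces at 1.0 the weight is simply the sum of all five coefficients, which makes the role of each aᵢ easy to inspect.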
Steps for finding the desired output:
a. Assume a 3-layer neural network.
b. Let node i lie in the input layer and node j in the hidden layer; node k, fed by
the hidden layer, lies in the output layer.
c. w_ij is the weight between the nodes of two consecutive layers.
d. The output X_j of node j is given by Eq. 7:
X_j = Σ_{i=1}^{n} x_i w_ij + b_j → Eq. (7)
where n is the number of inputs x_i to node j, and b_j is the bias for node j.
e. The desired output D is used to check the learning process, and the error is found
by Eq. 8:
E = D − Σ X_j → Eq. (8)
f. The weight adjustment is then done by Eq. 9:
w_ij = w_ij · rand() → Eq. (9)
By repeating the above steps and adjusting for the error values, the final output
is obtained.
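Steps a-f above can be sketched directly; the network sizes and the deterministic stand-in for rand() are illustrative choices for the example only:

```python
import random

def forward(x, w, b):
    """Eq. (7): X_j = sum_i x_i * w_ij + b_j for every hidden node j."""
    return [sum(x[i] * w[i][j] for i in range(len(x))) + b[j] for j in range(len(b))]

def training_step(x, w, b, desired, rng=random.random):
    """Eq. (8) computes the error against the desired output D;
    Eq. (9) then rescales every weight by a random factor."""
    hidden = forward(x, w, b)
    error = desired - sum(hidden)                         # E = D - sum(X)
    w_new = [[wij * rng() for wij in row] for row in w]   # w_ij = w_ij * rand()
    return error, w_new

x, w, b = [1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], [0.5, 0.5]
print(forward(x, w, b))                                  # -> [1.5, 2.5]
err, w_new = training_step(x, w, b, desired=10.0, rng=lambda: 0.5)
print(err)                                               # -> 6.0
```

Repeating `training_step` and keeping the weight sets whose error shrinks corresponds to the iterative adjustment described in the final sentence above.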
3.5 GATE Neural Network

The study proposes a hybrid model termed the GATE Neural Network, which is a
combination of a Genetic Algorithm, the Teaching-Learning-Based Optimization (TLBO)
algorithm, and the Error Back-Propagation algorithm.
The block diagram of the proposed model is shown in Figure 4. The model
consists of several steps, namely generating the population, iterating, the teacher and
learner phases, cluster representation, and finally the neural network. These
steps are discussed in detail below:
1. Generate population: The population is generated using the features contained
in the dataset.
2. Iterate: The iteration process runs over the generated population.
3. Teaching-learning-based optimization: The sampled population is passed through
the teaching-learning-based optimization algorithm, which has two important phases:
the teacher phase and the learner phase.
(i) Teacher Phase
Suppose there are two classes of curves, curve 1 and curve 2, and let M_A and M_B
represent the mean marks obtained by the learners for curve 1 and curve 2, respectively.
A good teacher raises the class average from M_A toward M_B; that is, a good teacher is
someone who brings the students up to his or her own level of knowledge. In practice,
however, this is not fully possible, and a teacher can move the mean of a class only
to a certain extent, depending on the capacity of the class [28]. This happens somewhat
at random, depending on a number of factors.
Let A_x be the average and T_x the teacher at any iteration x. T_x tries to pull
A_x toward its own level. The solution is updated according to the difference between
the new average A_new and the existing one:
Mean_Difference_x = r_i (A_new − T_f A_x)
The solution is then updated by:
P_new,x = P_old,x + Mean_Difference_x

(ii) Learner Phase


Learners improve their understanding in two ways: through the teacher's input
and through interaction with one another. A student interacts randomly with other
students through group discussions, talks, formal communications, and so on. A pupil
learns something new if another pupil has more knowledge than him or her.
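A compact sketch of both TLBO phases under the updates given above (the population values, the TF factor, and the fixed random draw are illustrative; a real run would use random partners and random r):

```python
import random

def teacher_phase(pop, fitness, tf=1.0, rng=random.random):
    """Pull each learner toward the teacher (best solution) via
    Mean_Difference = r * (A_new - TF * A_x); keep a move only if it improves."""
    dims = len(pop[0])
    teacher = max(pop, key=fitness)
    mean = [sum(p[d] for p in pop) / len(pop) for d in range(dims)]
    out = []
    for p in pop:
        cand = [p[d] + rng() * (teacher[d] - tf * mean[d]) for d in range(dims)]
        out.append(cand if fitness(cand) > fitness(p) else p)
    return out

def learner_phase(pop, fitness, rng=random.random):
    """Each learner interacts with another and moves toward the more knowledgeable one."""
    dims = len(pop[0])
    out = []
    for i, p in enumerate(pop):
        q = pop[(i + 1) % len(pop)]        # partner (chosen at random in practice)
        sign = 1.0 if fitness(q) > fitness(p) else -1.0
        cand = [p[d] + sign * rng() * (q[d] - p[d]) for d in range(dims)]
        out.append(cand if fitness(cand) > fitness(p) else p)
    return out

fit = lambda p: -(p[0] - 3.0) ** 2          # toy objective: the best solution is x = 3
pop = teacher_phase([[0.0], [5.0]], fit, rng=lambda: 0.5)
print(pop)                                   # -> [[1.25], [5.0]]
print(learner_phase(pop, fit, rng=lambda: 0.5))   # -> [[1.25], [3.125]]
```

Keeping a candidate only when its fitness improves is the standard TLBO acceptance rule; both phases move the population toward the teacher without any algorithm-specific tuning parameters.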
(iii) Error Back-Propagation Algorithm: For this study, the selected artificial neural
network (ANN) algorithm was Error Back-Propagation (EBP). Back-propagation (BP)
is a supervised learning technique for neural network training; it is a gradient
descent method that reduces the error criterion and is widely used for approximating
complicated nonlinear functions. The structure of the neural network
in this study was a three-level learning network with an input layer, a hidden layer,
and an output layer. The training data are used to train the network by reducing the
error, and cross-validation data are used to monitor training, prevent overfitting, and
assess network performance.
The test data are then used to validate the overall performance of the
network. Cross-validation divides the training data into two parts: training and
validation. As an indicator for the completion of training, an early stopping
procedure was used, based on running time and average weight change.
Cross-validation is therefore required to track the point at which the
validation error is minimal, since learning may continue until the validation set
error begins to increase, at which point training stops.
During training, the BP algorithm performs two propagation steps to
calculate all gradients (forward and backward propagation). In the forward pass, the input
activation pattern is propagated through the network to produce the output. Each input x_i
has an adjustable weight w_ij, which connects it to a
processing element (PE) in the next layer.
The network's activation function was the tan-sigmoid, with output bounded to [−1, 1].
During forward propagation, the output is calculated layer by layer from input to
output, and the error with respect to the desired output is computed.
The backward pass uses error back-propagation to adjust the
weight of each interconnection during training. By propagating the error value back
through the hidden layers to the input layer, the total error at
the output layer is reduced.
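The forward and backward passes described above can be sketched for a single-hidden-layer network with the tan-sigmoid activation (the layer sizes, learning rate, and starting weights below are illustrative, not the study's configuration):

```python
import math

def ebp_step(x, w1, w2, target, lr=0.1):
    """One forward/backward pass with gradient descent on a scalar output.
    x: inputs; w1[j]: input->hidden weight rows; w2[j]: hidden->output weights."""
    # forward pass, layer by layer, with tan-sigmoid bounded to [-1, 1]
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    y = sum(wj * hj for wj, hj in zip(w2, h))
    err = target - y                                     # desired-output error
    # backward pass: propagate the output error back to every connection
    w2_new = [wj + lr * err * hj for wj, hj in zip(w2, h)]
    w1_new = [[wij + lr * err * w2[j] * (1 - h[j] ** 2) * xi
               for wij, xi in zip(row, x)]
              for j, row in enumerate(w1)]
    return err, w1_new, w2_new

# Repeated steps shrink the error toward the target.
w1, w2 = [[0.5]], [0.5]
e1, w1, w2 = ebp_step([1.0], w1, w2, target=1.0)
e2, w1, w2 = ebp_step([1.0], w1, w2, target=1.0)
print(abs(e2) < abs(e1))   # -> True
```

The `(1 - h**2)` factor is the derivative of tanh, which is what carries the output error back through the hidden layer.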

Figure 4: Flowchart of the GATE Neural Network


4. Algorithm: GATE Neural Network

Step 1: Collect the crop dataset samples.
Step 2: Generate a population from the samples.
Step 3: Perform the iteration over all the selected features.
Step 4: Find the best features using the teaching-learning-based optimization method.
Step 5: Finally, predict the crop yield through the error back-propagation algorithm.
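The five steps can be wired together as a small skeleton; every callable here is a placeholder for the corresponding component described above, not the paper's actual implementation:

```python
def gate_nn(samples, n_iters, tlbo_select, ebp_train, ebp_predict):
    """Skeleton of the GATE-NN algorithm (Steps 1-5)."""
    population = [s["features"] for s in samples]        # Steps 1-2: population from samples
    for _ in range(n_iters):                             # Step 3: iterate over the features
        population = tlbo_select(population)             # Step 4: TLBO feature selection
    targets = [s["yield"] for s in samples]
    model = ebp_train(population, targets)               # Step 5: EBP training
    return [ebp_predict(model, f) for f in population]   # crop yield predictions

# Identity placeholders show the data flow end to end.
samples = [{"features": [1.0], "yield": 2.0}, {"features": [3.0], "yield": 6.0}]
preds = gate_nn(samples, 3,
                tlbo_select=lambda pop: pop,
                ebp_train=lambda X, y: 2.0,              # pretend "model": a slope of 2
                ebp_predict=lambda m, f: m * f[0])
print(preds)   # -> [2.0, 6.0]
```

In a full run, `tlbo_select` would be the teacher/learner phases of Section 3.5 and `ebp_train` the back-propagation training just described.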

3.6 Experimental results and analysis based on various evaluation parameters

The SNN was simulated using MATLAB, and the results were obtained for various
parameters as in [23]. MATLAB was used for the implementation because its
toolboxes offer rich support for deep learning models. The results obtained
were then compared with GATE-NN.
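The evaluation parameters reported in the tables follow standard definitions; accuracy is taken here as 100 − RE, which matches the pattern in the reported values (e.g., SNN 2013: RE 23.997, accuracy 76.003). The per-year figures in the tables appear to be computed over sub-samples, so this sketch shows only the definitions:

```python
import math

def evaluation_metrics(truth, preds):
    """RMSE, MAE, MBE, relative error (%) and accuracy = 100 - RE."""
    errors = [p - t for p, t in zip(preds, truth)]
    n = len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / n)         # Root Mean Square Error
    mae = sum(abs(e) for e in errors) / n                    # Mean Absolute Error
    mbe = sum(errors) / n                                    # Mean Bias Error (signed)
    re = 100.0 * sum(abs(e) for e in errors) / sum(truth)    # relative error, percent
    return {"RMSE": rmse, "MAE": mae, "MBE": mbe, "RE": re, "Accuracy": 100.0 - re}

m = evaluation_metrics(truth=[100.0], preds=[110.0])
print(m["RMSE"], m["MAE"], m["MBE"], m["Accuracy"])   # -> 10.0 10.0 10.0 90.0
```

MBE keeps the sign of the error (over- vs under-prediction), which is why it appears with negative values in several table rows while MAE does not.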
Table 3. Experimental results (single feature, NDVI)

Years | 2013 | 2014 | 2015 | 2016 | 2017
Field truth value (metric tonnes) | 12937.0 | 17103.90 | 17688.67 | 17939.33 | 15910.79

SNN:
Yield | 16041.57 | 17958.22 | 19843.72 | 10564.3 | 14693.68
RMSE | 4531.126 | 3862.979 | 2881.659 | 8868.786 | 3129.08
MAE | 3104.552 | 854.3246 | 2155.048 | 7375.031 | 1217.111
RE | 23.99743 | 4.994911 | 12.18321 | 41.11097 | 7.649597
R | 7.01E-16 | -9.7E-16 | -2.2E-15 | 1.34E-16 | 0
MBE | -3104.55 | -854.325 | -2155.05 | 7375.031 | 1217.111
Accuracy | 76.00257 | 95.00509 | 87.81679 | 58.88903 | 92.3504
Time | 0.001379 | 0.001356 | 0.001295 | 0.001226 | 0.001239

GATE-NN:
Yield | 14378.27273 | 14183.69048 | 14413.95556 | 14084.82222 | 14307.84444
RMSE | 6895.615117 | 3629.348516 | 2844.935907 | 14627.81955 | 10765.43452
MAE | 2589.333333 | 301.0941111 | 2689.944444 | 13089.88972 | 8326.216889
RE | 19.71563596 | 2.327384366 | 15.72708239 | 74.00155903 | 46.41320393
R | 4.47E-16 | 1.12E-16 | 6.98E-16 | 2.19E-16 | 1.50E-16
MBE | -2589.333333 | 301.0941111 | 2689.944444 | 13089.88972 | 8326.216889
Accuracy | 80.28436404 | 97.67261563 | 84.27291761 | 25.99844097 | 53.58679607
Time | 0.1967603 | 0.2502229 | 0.2617267 | 0.0919764 | 0.1792881

3.6.1. Comparison of results of SNN with GATE-NN obtained using a single feature value (NDVI):
i. The comparison between SNN and GATE-NN in terms of wheat crop yield
prediction (in metric tonnes) is given in Table 4.

Table 4: Comparison between SNN and GATE-NN in terms of wheat crop yield prediction (metric tonnes)

Years | Truth Ground Value | SNN | GATE-NN
2013 | 12937 | 16041.57 | 14378.27
2014 | 17103.9 | 17958.22 | 14183.69
2015 | 17688.7 | 19843.72 | 14413.96
2016 | 17939.33 | 10564.30 | 14084.82
2017 | 15910.8 | 14693.68 | 14307.84


Figure 5: Comparative graph between SNN and GATE-NN in terms of wheat crop
yield prediction.
Table 5: Comparative table between SNN and GATE-NN in terms of RMSE

Year | SNN | GATE-NN
2013 | 4531.126423 | 6895.62
2014 | 3862.979352 | 3629.35
2015 | 2881.659039 | 2844.94
2016 | 8868.785924 | 14627.8
2017 | 3129.080252 | 10765.4



Figure 6: Comparative graph between SNN and GATE-NN in terms of RMSE

Table 6: Comparative table between SNN and GATE-NN in terms of RE

Year | SNN | GATE-NN
2013 | 23.99743 | 19.71564
2014 | 4.994911 | 2.327384
2015 | 12.18321 | 15.72708
2016 | 41.11097 | 74.00156
2017 | 7.649597 | 46.41320


Figure 7: Comparative graph between SNN and GATE-NN in terms of RE

Table 7: Comparative table between SNN and GATE-NN in terms of training time

Year | SNN | GATE-NN | Improvement comparison (%)
2013 | 0.0013785 | 0.19676 | 99%
2014 | 0.0013558 | 0.25022 | 99%
2015 | 0.0012949 | 0.26173 | 99%
2016 | 0.001226 | 0.09198 | 100%
2017 | 0.0012394 | 0.17929 | 99%


Table 8. Experimental results using 3 features (NDVI, VCI, SPI)

SNN:
Years | 2013 | 2014 | 2015 | 2016 | 2017
Yield | 21219.6 | 21593.9 | 25413.5 | 20126.5 | 20951.7
RMSE | 8310.566 | 4564.206 | 7840.188 | 2438.459 | 5079.672
MAE | 8282.606 | 4490.027 | 7724.856 | 2187.159 | 5040.868
RE | 64.02253 | 26.25148 | 43.67122 | 12.19198 | 31.68208
R | -4.01E-15 | -1.11E-15 | -2.72E-15 | 1.69E-15 |
MBE | -8282.61 | -4490.03 | -7724.86 | -2187.16 | -5040.87
Accuracy | 35.97747 | 73.74852 | 56.32878 | 87.80802 | 68.31792
Time | 0.011232 | 0.006858 | 0.007009 | 0.006954 | 0.006294

GATE-NN:
Years | 2013 | 2014 | 2015 | 2016 | 2017
Yield | 14351.97 | 16089.13 | 16767.22 | 15289.82 | 14136.16
RMSE | 248.983 | 297.9 | 1184.668 | 4752.328 | 2826.788
MAE | 248.983 | 297.9 | 1184.668 | 4752.328 | 2826.788
RE | 1.924578 | 1.741708 | 6.697325 | 26.49111 | 17.76649
R | 4.68E-17 | 1.09E-16 | -1.33E-16 | 9.01E-17 | -5.24E-17
MBE | -248.983 | 297.9 | 1184.668 | 4752.328 | 2826.788
Accuracy | 98.07542 | 98.25829 | 93.30268 | 73.50889 | 82.23351
Time | 0.183717 | 0.240022 | 0.237696 | 0.082106 | 0.161405
Figure 8: Comparative graph between SNN and GATE-NN in terms of training time

Hence, using the single feature vector value (NDVI), GATE-NN performed better
than SNN in terms of execution time and wheat crop yield prediction.

3.6.2 Training and testing using three features (NDVI, VCI, and SPI), evaluated
on R, RE, RMSE, MAE, MBE, time taken, wheat crop yield prediction, and
efficiency (%).
3.6.2.1 Comparison of results of SNN with GATE-NN obtained using three
feature values (NDVI, VCI, SPI):
i. Comparison between SNN and GATE-NN in terms of yield
prediction

Table 9: Comparison between SNN and GATE-NN in terms of yield prediction (metric tonnes)

Years | Truth Ground Value | SNN | GATE-NN
2013 | 12937 | 21219.6 | 14378.27
2014 | 17103.9 | 21593.9 | 14183.69
2015 | 17688.7 | 25413.5 | 14413.96
2016 | 17939.33 | 20126.5 | 14084.82
2017 | 15910.8 | 20951.7 | 14307.84

Figure 9: Comparative graph between SNN and GATE-NN in terms of yield prediction
Table 10: Comparative table between SNN and GATE-NN in terms of RMSE

Year | SNN | GATE-NN
2013 | 248.983 | 6895.62
2014 | 297.9 | 3629.35
2015 | 815.6675 | 2844.94
2016 | 4752.328 | 14627.8
2017 | 2826.788 | 10765.4

Figure 10: Comparative graph between SNN and GATE-NN in terms of RMSE

Table 11: Comparative table between SNN and GATE-NN in terms of RE

Year | SNN | GATE-NN
2013 | 64.02253 | 1.924578
2014 | 26.25148 | 1.741708
2015 | 43.67122 | 6.697325
2016 | 12.19198 | 26.49111
2017 | 31.68208 | 17.76649

Figure 11: Comparative graph between SNN and GATE-NN in terms of RE
Table 12: Comparative table between SNN and GATE-NN in terms of training time

Year | SNN | GATE-NN | Improvement comparison (%)
2013 | 0.0112318 | 0.183717 | 99%
2014 | 0.0068575 | 0.240022 | 99%
2015 | 0.0070085 | 0.237696 | 99%
2016 | 0.006954 | 0.082106 | 100%
2017 | 0.0062937 | 0.161405 | 99%

Figure 12: Comparative graph between SNN and GATE-NN in terms of training time
4. Discussion

This study has proposed a method for crop yield prediction based on deep
learning models. Neural networks with different layers, and neurons with
different weights, are implemented. Three different datasets were used while
building the model, and neural networks, namely the Spiking Neural Network and
the GATE Neural Network, were applied to each dataset to perform the analysis.
Furthermore, different parameters were used to evaluate the models, such as
training and testing time, accuracy, and precision.

5. Conclusion and Future Work

In this paper, SNN for crop yield prediction was explained step by step with
diagrams. The results obtained from SNN, first using NDVI alone and then using
NDVI, VCI, and SPI values, were compared with GATE-NN in terms of the evaluation
parameters. From the results shown in the tables and analytical graphs, it is
concluded that the proposed GATE-NN improves on all evaluation parameters, from
crop yield prediction to execution time, especially when the three feature vector
values are used as input. The results indicate that the RMSE and RE values of
GATE-NN are lower than those of SNN, except for the year 2016. Hence, the
proposed GATE-NN performed better than SNN.
Nowadays, capsule convolutional neural networks are also used for analysis.
In future work, crop prediction could also be done with the help of a capsule
network, which might yield better prediction accuracy than the GATE Neural
Network.
It has been observed from Table 4 that the proposed GATE-NN, developed for crop
yield prediction using three features, works well compared to the previous SNN
method. From Table 4, the RMSE value of GATE-NN is found to be lower for every
test year. Similarly, from Table 6, the RE value of GATE-NN is also found to be
lower.
Next, with the three feature-vector values (NDVI, VCI and SPI) as input, the
proposed GATE-NN performed better than the SNN on almost all the evaluation
parameters, and the predictions closest to the ground-truth crop production
values were obtained using GATE-NN. The execution time of GATE-NN was also
lower.
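The relative error (RE) used in this comparison can be sketched as follows. This is a minimal illustration assuming RE = |predicted − truth| / truth; the paper's exact definition may differ, and the production values shown are hypothetical:

```python
def relative_error(truth, predicted):
    """Relative error of a predicted yield against the ground-truth value.

    Assumes RE = |predicted - truth| / truth; the paper's exact
    definition may differ.
    """
    return abs(predicted - truth) / truth

# hypothetical per-year crop production values
print(relative_error(100.0, 95.0))  # 0.05
```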
Also, the crop yield values predicted by GATE-NN using a single feature were
compared with those obtained using three features, and the proposed GATE-NN
performed better when implemented with the three feature values. In future work,
further experiments will be carried out on crop image datasets using different
types of Convolutional Neural Network algorithms. Since images are now widely
used for analysis, we will also focus on image analysis through deep learning
models.

Acknowledgements

We are very thankful to RGPV and Vellore Institute of Technology for their
continuous support and encouragement.
References

[1] S. Khaki and L. Wang, “Crop yield prediction using deep neural networks,”
Frontiers in Plant Science, 10: 621, 2019, doi: 10.3389/fpls.2019.00621.
[2] N. Heslot, D. Akdemir, M. E. Sorrells, and J. L. Jannink, “Integrating
environmental covariates and crop modeling into the genomic selection
framework to predict genotype by environment interactions,” Theoretical and
Applied Genetics, 127(2):463-80, 2014, doi: 10.1007/s00122-013-2231-5.
[3] J. Burgueño, J. Crossa, P. L. Cornelius, and R. C. Yang, “Using factor analytic
models for joining environments and genotypes without crossover genotype x
environment interaction,” Crop Science, 48(4): 1291-1305, 2008, doi:
10.2135/cropsci2007.11.0632.
[4] J. Burgueño, J. Crossa, J. M. Cotes, F. S. Vicente, and B. Das, “Prediction
assessment of linear mixed models for multienvironment trials,” Crop Science,
51(3): 944-954, 2011, doi: 10.2135/cropsci2010.07.0403.
[5] A. Braun, M. Kohler, and A. Krzyzak, “Analysis of the rate of convergence of
neural network regression estimates which are easy to implement,” arXiv. 2019.
[6] O. Marko et al., “Portfolio optimization for seed selection in diverse weather
scenarios,” PLoS ONE, 12(9): e0184198, 2017, doi:
10.1371/journal.pone.0184198.
[7] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, 521: 436–444,
2015, doi: 10.1038/nature14539.
[8] G. M. Gandhi, S. Parthiban, N. Thummalu, and A. Christy, “Ndvi: Vegetation
Change Detection Using Remote Sensing and Gis - A Case Study of Vellore
District,” Procedia Computer Science, 57: 1199-1210, 2015, doi:
10.1016/j.procs.2015.07.415.
[9] E. Stromatias, M. Soto, T. Serrano-Gotarredona, and B. Linares-Barranco, “An
event-driven classifier for spiking neural networks fed with synthetic or
dynamic vision sensor data,” Frontiers in Neuroscience, 2017, doi:
10.3389/fnins.2017.00350.
[10] A. Chlingaryan, S. Sukkarieh, and B. Whelan, “Machine learning approaches
for crop yield prediction and nitrogen status estimation in precision agriculture:
A review,” Computers and Electronics in Agriculture. 151: 61-69, 2018. doi:
10.1016/j.compag.2018.05.012.
[11] D. Elavarasan, D. R. Vincent, V. Sharma, A. Y. Zomaya, and K. Srinivasan,
“Forecasting yield by integrating agrarian factors and machine learning models:
A survey,” Computers and Electronics in Agriculture, 155: 257-282, 2018. doi:
10.1016/j.compag.2018.10.024.
[12] B. Li, J. Lecourt, and G. Bishop, “Advances in non-destructive early
assessment of fruit ripeness towards defining optimal time of harvest and yield
prediction—a review,” Plants. 7(1):3, 2018. doi: 10.3390/plants7010003.
[13] M. K. P and V. C. Priya, “Role of Image Processing and Machine Learning
Techniques in Disease Recognition, Diagnosis and Yield Prediction of Crops:
a Review,” International Journal of Advanced Research in Computer Science,
9(2), 2018.
[14] P. Somvanshi and B. N. Mishra, “Machine learning techniques in plant
biology,” in PlantOmics: The Omics of Plant Science, 2015. doi: 10.1007/978-
81-322-2172-2_26.
[15] N. Gandhi and L. Armstrong, "Applying data mining techniques to predict yield
of rice in humid subtropical climatic zone of India," 2016 3rd International
Conference on Computing for Sustainable Global Development (INDIACom),
2016, pp. 1901-190.
[16] R. Beulah, “A Survey on Different Data Mining Techniques for Crop Yield
Prediction,” International Journal of Computer Sciences and Engineering,
7(1), 2019, doi: 10.26438/ijcse/v7i1.738744.
[17] X. Chen and P.-H. Cournède, “Model-Driven and Data-Driven Approaches for
Crop Yield Prediction: Analysis and Comparison,” International Journal of
Mathematical and Computational sciences, 11(7), 2017.
[18] K. G. Liakos, P. Busato, D. Moshou, S. Pearson, and D. Bochtis, “Machine
learning in agriculture: A review,” Sensors (Switzerland)., 18(8), 2674, 2018.
doi: 10.3390/s18082674.
[19] Priya, Muthaiah, and Balamurugan, “Predicting yield of the crop using machine
learning algorithms,” International Journal of Engineering Sciences and
Research Technology, 7(4), 2018.
[20] P. J. Ramos, F. A. Prieto, E. C. Montoya, and C. E. Oliveros, “Automatic fruit
count on coffee branches using computer vision,” Computers and Electronics
in Agriculture, 137, 2017, doi: 10.1016/j.compag.2017.03.010.
[21] E. Khosla, R. Dharavath, and R. Priya, “Crop yield prediction using aggregated
rainfall-based modular artificial neural networks and support vector
regression,” Environment, Development and Sustainability, 22(6), 2020, doi:
10.1007/s10668-019-00445-x.
[22] P. Nevavuori, N. Narra, and T. Lipping, “Crop yield prediction with deep
convolutional neural networks,” Computers and Electronics in Agriculture,
163, 2019, doi: 10.1016/j.compag.2019.104859.
[23] P. Bose, N. K. Kasabov, L. Bruzzone and R. N. Hartono, "Spiking Neural
Networks for Crop Yield Estimation Based on Spatiotemporal Analysis of
Image Time Series," in IEEE Transactions on Geoscience and Remote Sensing,
54(11): 6563-6573, Nov. 2016, doi: 10.1109/TGRS.2016.2586602.
[24] L. Wang, H. Wang, L. Yu, and Y. Chen, “Spike-threshold variability originated
from separatrix-crossing in neuronal dynamics,” Scientific Reports, 6, 31719,
2016, doi: 10.1038/srep31719.
[25] P. Vázquez-Guerrero, J. F. Gómez-Aguilar, F. Santamaria, and R. F. Escobar-
Jiménez, “Design of a high-gain observer for the synchronization of chimera
states in neurons coupled with fractional dynamics,” Physica A: Statistical
Mechanics and its Applications, 539:122896, 2020, doi:
10.1016/j.physa.2019.122896.
[26] H. M. Huang et al., “Quasi-Hodgkin–Huxley Neurons with Leaky Integrate-
and-Fire Functions Physically Realized with Memristive Devices,” Advanced
Materials, 31(3), 2019, doi: 10.1002/adma.201803849.
[27] W. Bian and X. Chen, “Smoothing neural network for constrained non-lipschitz
optimization with applications,” IEEE Transactions on Neural Networks and
Learning Systems, 23(3): 13, 2012, doi: 10.1109/TNNLS.2011.2181867.
