
Eur. Phys. J. Plus (2019) 134: 122
DOI 10.1140/epjp/i2019-12530-5

THE EUROPEAN PHYSICAL JOURNAL PLUS
Regular Article

Design of cascade artificial neural networks optimized with the memetic computing paradigm for solving the nonlinear Bratu system

Aisarul Hassan(1,a), Siraj-ul-Islam Ahmad(1,b), Muhammad Kamran(1,c), Ahsan Illahi(1,d), and Raja Muhammad Asif Zahoor(2,e)

1 Research in Modeling and Simulation (RIMS) Group, Department of Physics, COMSATS University Islamabad, Islamabad, Pakistan
2 Department of Electrical and Computer Engineering, COMSATS University Islamabad, Attock Campus, Attock, Pakistan

Received: 20 November 2018 / Revised: 20 January 2019


Published online: 26 March 2019

© Società Italiana di Fisica / Springer-Verlag GmbH Germany, part of Springer Nature, 2019

Abstract. A nature-inspired, integrated computational heuristic paradigm is developed for the piecewise solution of the nonlinear Bratu problem, arising in the fuel ignition model, electrically conducting solids and related fields, by exploiting the strength of Cascade Artificial Neural Network (CANN) modeling optimized with a memetic computing procedure based on the global search efficacy of genetic algorithms (GAs) aided with the efficient local search of teaching-learning-based optimization (TLBO). The proposed technique incorporates the log-sigmoid activation function in the CANN model, trained by GAs hybridized with TLBO, i.e., CANN-GA-TLBO. As a first application of CANN-GA-TLBO, the 1D nonlinear Bratu system, represented by a boundary value problem of a second-order ordinary differential equation, has been solved; this problem is a benchmark for testing new algorithms. Comparison of the results with the exact solution and previously reported solutions, including the Adomian decomposition method, the Laplace transform decomposition method, the B-spline method and artificial neural network solutions, confirms the superiority of the designed stochastic solver CANN-GA-TLBO in terms of accuracy and convergence measures.

1 Introduction

An infant perceives knowledge about things by observing them; he constructs a version of reality from the actions of his surroundings. This is a simple example of modeling something using the language of instruction(s). Ask an adult about something he knows, and the answer will have a similar meaning to the answer given by any other adult living in a different part of the world. The ability of a small child, who does not yet understand a thing but keeps learning about it by simply going through it again and again, with a new observation on every single attempt, contributes to his version of reality. The system involved in this learning process is composed of a huge number of elements connected with each other. These elements are called neurons, and the system is termed the biological nervous system. An Artificial Neural Network (ANN) mimics this biological nervous system to solve complex problems. It was in 1943 that Pitts and McCulloch published a paper demonstrating, using electrical circuits, how these neurons might work [1]. ANNs have since been applied aggressively in almost all fields. Some recent applications include nanotechnology [2], atomic physics [3], astrophysics [4], plasma physics [5], nonlinear optics [6], thermodynamics [7], electromagnetics [8], combustion theory [9], fluid mechanics [10], bioinformatics [11] and finance [12]. Additionally, ANN-based methodologies are exploited for the solution of complex systems governed by fractional-order differential equations [13,14].
a e-mail: aisarulh@gmail.com
b e-mail: sirajisl@yahoo.co.uk (corresponding author)
c e-mail: kamrankhattak@comsats.edu.pk
d e-mail: ahsanilahi@gmail.com
e e-mail: Muhammad.asif@ciit-attock.edu.pk

Genetic Algorithms (GAs) imitate the biological evolution process to reach the fittest of all possible solutions. For example, picture a situation in which a flock of birds reaches an island while migrating and, for some reason, is stranded there. For survival in a completely different scenario, they somehow learn about a fruit that can be eaten but is covered in a hard outer shell. With a small beak, suitable only for collecting tiny items or for preening, some of them will not survive. The rest, adapting to these circumstances over time, generation by generation, may reach a point where they are equipped for any peril on the island. This evolution process, viewed through the lens of computation, is termed a GA. Since their introduction, GAs have been broadly applied to tackle several complex nonlinear optimization problems, including crystal structure prediction [15], numerical optimization problems [16], determination of mechanical characteristics [17], optimization of the core load pattern of a nuclear reactor [18] and nonlinear Riccati systems [19]. These facts inspired the authors to investigate stochastic optimization mechanisms for the analysis of the nonlinear Bratu system.

1.1 Related works

To whet the reader's appetite for digging deeper into the matter of solving nonlinear differential equations, there exists a happy hunting ground populated with several efforts involving approximations, numerical techniques or combinations of both. Broadly used deterministic numerical solvers for nonlinear systems include the Inverse Scattering Method [20], Chebyshev Polynomial Approximations [21], the Variational Iteration Method [22], the Adomian Decomposition Method [23], the Padé Approximation Technique [24], Cubic B-Spline Scaling Functions and Chebyshev Cardinal Functions [25], the Homotopy Perturbation Method [26], the Finite Element Method [27], the Finite Difference Method [28], the Reproducing Kernel Hilbert Space Method (RKHSM) [29], RKHSM within the Atangana-Baleanu fractional approach [30] and the Gram-Schmidt Orthogonalization Process based Hilbert Space [31]. The numerical and analytical approaches used for Bratu problems include the Successive Differentiation Method [32], the Truncation Method with point transformation [33], the Differential Transform Method [34], the Taylor Wavelets Method [35], the Homotopy Analysis Method based wavelet approach [36], the Iterative Differential Quadrature Method [37], the Iterative Finite Difference Method [38], the Iterative Reproducing Kernel Algorithm [39] and the Variational Iteration Technique [40].
Determinism, expensive computation and the need for a precursor analytical methodology are three main features of the methods mentioned above, and they raise the three main questions on which the future of these techniques hinges. Through the eyes of determinism, possibility and actuality look identical: classical deterministic algorithms follow the same path to the final result on each run, meaning that if we run the same algorithm more than once, we get the same values at each iteration every time and thus the same final result. All these deterministic numerical procedures have their own significance, applications and limitations in solving nonlinear differential equations and their systems.

1.2 Significance and innovative contributions

In contrast to deterministic solvers, nature-inspired techniques tend to follow a path that is divided into sub-paths to reach the final result; therefore, all possibilities in a search space are considered using a random process. Since the process is random (with negligible difference in the outcome), running the same algorithm over and over again will usually produce different iterates. Moreover, the output is often better than what one can expect from a deterministic approach. Nature itself does not work deterministically (otherwise it would be a lot easier to predict the future), which is why methods motivated by nature work more efficiently and effectively. Several differential equations arising in broad fields of applied sciences have been solved using such nature-inspired soft computing techniques [41-46], but cascaded ANNs (CANN) have not yet been introduced for the solution of these systems. The aim of the present study is to utilize the competency of CANN for the solution of the nonlinear BVP of Bratu's equation. The innovative contributions of the proposed scheme are listed as follows:
– A novel design of an evolutionary computing paradigm is presented for the nonlinear Bratu system by exploiting the strength of cascaded artificial neural network (CANN) modelling optimized with the global search brilliance of genetic algorithms (GAs) aided with the speedy local convergence of teaching-learning-based optimization (TLBO).
– The proposed CANN-GA-TLBO scheme is applied effectively to variants of the Bratu system for stiff and non-stiff scenarios based on the value of the critical parameter.
– Comparison of the approximate solutions of CANN-GA-TLBO with the available exact solutions as well as reported results of state-of-the-art algorithms based on Adomian decomposition, Laplace transform decomposition and B-spline methods confirms the superiority of CANN-GA-TLBO in terms of accuracy and convergence measures.
– Besides providing continuous solutions for the Bratu equation, CANN-GA-TLBO can be exploited as an alternative, accurate and reliable computing mechanism for nonlinear systems based on the Troesch, Thomas-Fermi, Lane-Emden, Riccati and Bagley-Torvik equations.

1.3 Organization

The rest of the paper is organized as follows: the nonlinear boundary value problem of the Bratu system is presented in sect. 2, the description of the proposed CANN-GA-TLBO methodology is given in sect. 3, results of the application of CANN-GA-TLBO to the nonlinear Bratu system are presented in sect. 4, and the concluding remarks are given in sect. 5.

2 System model: Nonlinear Bratu equation


The Bratu equation represents a boundary value problem (BVP), given by

    y''(x) + λ e^{y(x)} = 0,   0 ≤ x ≤ 1,
    y(0) = y(1) = 0,                                                     (1)

where λ is the dimensionless Frank-Kamenetskii parameter. The analytical solution of eq. (1) is given by

    y(x) = −2 ln[ cosh((x − 1/2)(θ/2)) / cosh(θ/4) ],
    θ = √(2λ) cosh(θ/4).                                                 (2)

A solution exists only for λ ≤ λc, where λc is the critical value of λ, determined by

    4 = √(2λc) sinh(θ/4),

while a unique solution exists for λ = λc (λc = 3.513830719) [47].
The Bratu equation is used as a benchmark for testing newly developed numerical techniques, as its analytical solution is available [48,49]. Numerous physical processes, including a model for electrospinning [50], phenomena in electrostatics [51], electrically conducting solids [52], the fuel ignition model [53], statistical mechanics [54] and the elaboration of electrospun organic nanofibers [55], are represented by Bratu-type equations. Keeping in view the paramount importance of Bratu equations, stochastic numerical procedures have also been exploited recently for the solution of such nonlinear systems represented by differential equations [56-59].
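The exact solution in eq. (2) is implicit in θ, which must first be obtained from the transcendental relation θ = √(2λ) cosh(θ/4). A minimal reader-side sketch of evaluating the reference solution (not part of the paper; the lower-branch root is bracketed by the stationary point of θ − √(2λ) cosh(θ/4) and found by bisection):

```python
import math

def theta_lower(lmbda, tol=1e-12):
    """Lower-branch root of theta = sqrt(2*lambda)*cosh(theta/4).

    h(theta) = theta - sqrt(2*lambda)*cosh(theta/4) satisfies h(0) < 0, and
    its maximum (where h' = 0) separates the two roots for lambda < lambda_c,
    so [0, argmax h] brackets the lower root."""
    s = math.sqrt(2.0 * lmbda)
    lo = 0.0
    hi = 4.0 * math.asinh(4.0 / s)          # stationary point of h
    h = lambda t: t - s * math.cosh(t / 4.0)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def y_exact(x, lmbda):
    """Reference solution of eq. (2)."""
    th = theta_lower(lmbda)
    return -2.0 * math.log(math.cosh((x - 0.5) * th / 2.0) / math.cosh(th / 4.0))

# Peak of the solution for lambda = 1 sits at x = 0.5.
print(y_exact(0.5, 1.0))   # about 0.1405
```

For λ = 1 this reproduces the well-known peak value y(0.5) ≈ 0.1405 used as the reference in sect. 4.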

3 Mathematical formulation: Cascaded ANN


The solution of a differential equation and its n-th derivative have been modelled using an ANN by the following continuous mapping:

    ŷ(x) = Σ_{i=1}^{j} α_i f(ω_i x + β_i),                               (3)

    (d^n/dx^n) ŷ(x) = Σ_{i=1}^{j} α_i (d^n/dx^n) f(ω_i x + β_i).         (4)

Here α_i, β_i and ω_i are real-valued, adaptive and bounded weights, j is the number of neurons in the network and f is the activation function. The log-sigmoid has been taken as the activation function, given by

    f(υ) = 1 / (1 + e^{−υ}).                                             (5)
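The derivatives of the log-sigmoid that enter eq. (4) have the closed forms that reappear in eqs. (7)-(8) below. A small sketch verifying them against finite differences (a reader-side check, not from the paper):

```python
import math

def f(v):
    """Log-sigmoid activation, eq. (5)."""
    return 1.0 / (1.0 + math.exp(-v))

def df(v):
    """First derivative: e^{-v} / (1 + e^{-v})^2."""
    e = math.exp(-v)
    return e / (1.0 + e) ** 2

def d2f(v):
    """Second derivative: 2e^{-2v}/(1+e^{-v})^3 - e^{-v}/(1+e^{-v})^2."""
    e = math.exp(-v)
    return 2.0 * e * e / (1.0 + e) ** 3 - e / (1.0 + e) ** 2

# Central finite-difference check at an arbitrary point.
v, h = 0.3, 1e-4
assert abs(df(v) - (f(v + h) - f(v - h)) / (2 * h)) < 1e-7
assert abs(d2f(v) - (f(v + h) - 2 * f(v) + f(v - h)) / h ** 2) < 1e-6
```

The smooth, infinitely differentiable activation is what makes ŷ'' in the residual of eq. (9) available in closed form.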
Moreover, x ∈ (0, 1) has been divided into N steps with step size h as

    x ∈ (x_0 = 0, x_1, x_2, x_3, . . . , x_N = 1).

Suppose ŷ(x) satisfies the differential equation in the interval x ∈ (0, 1) with h = 0.1. Then, for the first cascade,

    x_b ∈ (x_0 = 0, x_1, x_2, . . . , x_k = 0.5),

and, for the second cascade,

    x_d ∈ (x_0 = 0.5, x_1, x_2, . . . , x_l = 1).

At the node, the two cascades are matched by

    ŷ(x_b) = ŷ(x_d),
    ŷ'(x_b) = ŷ'(x_d),                                                   (6)
    ŷ''(x_b) = ŷ''(x_d).
This piecewise solution satisfies the Bratu equation with the boundary conditions in the required interval. Using (3), (4) and (5), the Neural Network (NN) model can be constructed for the first cascade as

    ŷ(x_b)   = Σ_{i=1}^{j} α_i / (1 + e^{−(ω_i x_b + β_i)}),
    ŷ'(x_b)  = Σ_{i=1}^{j} α_i ω_i e^{−(ω_i x_b + β_i)} / (1 + e^{−(ω_i x_b + β_i)})²,        (7)
    ŷ''(x_b) = Σ_{i=1}^{j} α_i ω_i² [ 2 e^{−2(ω_i x_b + β_i)} / (1 + e^{−(ω_i x_b + β_i)})³ − e^{−(ω_i x_b + β_i)} / (1 + e^{−(ω_i x_b + β_i)})² ].

For the second cascade, we obtain

    ŷ(x_d)   = Σ_{i=j+1}^{2j} p_i / (1 + e^{−(z_i x_d + q_i)}),
    ŷ'(x_d)  = Σ_{i=j+1}^{2j} p_i z_i e^{−(z_i x_d + q_i)} / (1 + e^{−(z_i x_d + q_i)})²,     (8)
    ŷ''(x_d) = Σ_{i=j+1}^{2j} p_i z_i² [ 2 e^{−2(z_i x_d + q_i)} / (1 + e^{−(z_i x_d + q_i)})³ − e^{−(z_i x_d + q_i)} / (1 + e^{−(z_i x_d + q_i)})² ].

Using (7) and (8), Bratu's equation can be modelled for the two cascades as

    ŷ''(x_b) + λ e^{ŷ(x_b)} = obj_b,
    ŷ''(x_d) + λ e^{ŷ(x_d)} = obj_d.                                     (9)

Equations (7)-(8) are illustrated in fig. 1. Both cascades have the same number of neurons forming the NN, as is evident from (7). The number of neurons, as well as that of the cascades, depends upon the physical system at hand. The fitness function E (sum of mean squared errors) is evaluated as

    E = (E_1 + E_2 + E_3 + E_4)|_i ,   i = 1, 2, 3, . . . , M,           (10)
where i is the number of iterations and E_1, E_2, E_3 and E_4 are evaluated as

    E_1 = (1/k) Σ_{b=0}^{k} (obj_b)²,
    E_2 = (1/l) Σ_{d=0}^{l} (obj_d)²,                                    (11)
    E_3 = (ŷ(x_{l=1}) − ŷ(x_{k=6}))² + (ŷ'(x_{l=1}) − ŷ'(x_{k=6}))².

E_3 is evaluated at the intersection of the two cascades, while E_4 is used to incorporate the boundary conditions:

    E_4 = (1/2) [ (ŷ(x_{k=1}))² + (ŷ(x_{l=6}))² ].                       (12)
2
Furthermore, the weights α_i, β_i, ω_i, p_i, q_i and z_i have been adjusted by using the competency of the memetic combination of the global search of GAs hybridized with TLBO for rapid local refinement, in such a way that the total error E is minimized; accordingly, the approximate solution of the Bratu equation is determined by the CANN-GA-TLBO scheme.
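The fitness evaluation of eqs. (10)-(12) can be sketched as follows (a NumPy sketch under stated assumptions: grid spacing h = 0.1, mean-square convention for E1/E2, node matching of value and slope only as in eq. (11), and a flat weight vector laid out as α, β, ω, p, q, z; none of these conventions beyond the equations themselves is specified in the paper):

```python
import numpy as np

def sig(v):
    """Log-sigmoid of eq. (5) with its first two derivatives."""
    s = 1.0 / (1.0 + np.exp(-v))
    return s, s * (1.0 - s), s * (1.0 - s) * (1.0 - 2.0 * s)

def cascade(x, a, w, b):
    """y_hat, y_hat', y_hat'' of one cascade at the grid points x (eqs. (7)/(8))."""
    v = np.outer(x, w) + b                  # shape (len(x), neurons)
    s, s1, s2 = sig(v)
    return s @ a, s1 @ (a * w), s2 @ (a * w * w)

def fitness(weights, lmbda=1.0, h=0.1):
    """Total error E = E1 + E2 + E3 + E4 of eqs. (10)-(12) for two cascades."""
    a, b, w, p, q, z = np.split(np.asarray(weights, dtype=float), 6)
    xb = np.arange(0.0, 0.5 + h / 2.0, h)   # first-cascade grid on [0, 0.5]
    xd = np.arange(0.5, 1.0 + h / 2.0, h)   # second-cascade grid on [0.5, 1]
    yb, y1b, y2b = cascade(xb, a, w, b)
    yd, y1d, y2d = cascade(xd, p, z, q)
    E1 = np.mean((y2b + lmbda * np.exp(yb)) ** 2)          # residual, first cascade
    E2 = np.mean((y2d + lmbda * np.exp(yd)) ** 2)          # residual, second cascade
    E3 = (yd[0] - yb[-1]) ** 2 + (y1d[0] - y1b[-1]) ** 2   # continuity at the node x = 0.5
    E4 = 0.5 * (yb[0] ** 2 + yd[-1] ** 2)                  # boundary conditions y(0) = y(1) = 0
    return E1 + E2 + E3 + E4

# With all 60 weights zero, y_hat vanishes and the residual is lambda*e^0 = 1
# at every grid point, so E = 1 + 1 + 0 + 0 = 2 for lambda = 1.
print(fitness(np.zeros(60)))   # 2.0
```

Any optimizer that drives this scalar toward zero produces a piecewise trial solution of the BVP; the paper does this with the GA-TLBO hybrid.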

Fig. 1. Neural network architecture for proposed technique.

4 Simulations and results


We now test the proposed formalism CANN-GA-TLBO for different values of the critical parameter λ (1, 2 and 3.51) in the nonlinear Bratu system. Each cascade has 10 neurons in the hidden layer, resulting in 30 adaptive weights per cascade. The adaptive weights have been tuned by the GA-TLBO hybrid optimization mechanism in such a way that the total error E is minimized. One hundred independent runs of CANN-GA-TLBO have been executed to analyze the accuracy and convergence of the proposed model.
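The paper does not spell out the GA-TLBO operator settings, so the memetic loop below is a minimal, generic sketch: standard real-coded GA steps (rank selection, uniform crossover, Gaussian mutation) followed by one TLBO teacher/learner pass per generation, demonstrated on a simple sphere objective standing in for the fitness E of eq. (10). All operator choices, rates and population sizes here are assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def tlbo_step(pop, fit, objective):
    """One TLBO pass: teacher phase then learner phase, with greedy acceptance."""
    n, d = pop.shape
    teacher = pop[np.argmin(fit)]
    tf = rng.integers(1, 3)                            # teaching factor: 1 or 2
    cand = pop + rng.random((n, d)) * (teacher - tf * pop.mean(axis=0))
    cfit = np.array([objective(x) for x in cand])
    better = cfit < fit
    pop[better], fit[better] = cand[better], cfit[better]
    partners = rng.permutation(n)                      # learner phase: random partner
    step = np.where((fit < fit[partners])[:, None],
                    pop - pop[partners], pop[partners] - pop)
    cand = pop + rng.random((n, d)) * step
    cfit = np.array([objective(x) for x in cand])
    better = cfit < fit
    pop[better], fit[better] = cand[better], cfit[better]
    return pop, fit

def ga_tlbo(objective, d=30, n=40, gens=60, lo=-1.0, hi=1.0):
    """Memetic loop: GA global search with a TLBO local-refinement pass."""
    pop = rng.uniform(lo, hi, (n, d))
    for _ in range(gens):
        fit = np.array([objective(x) for x in pop])
        parents = pop[np.argsort(fit)[: n // 2]]       # rank-based (elitist) selection
        kids = []
        for _ in range(n - len(parents)):
            p1, p2 = parents[rng.integers(len(parents), size=2)]
            c = np.where(rng.random(d) < 0.5, p1, p2)  # uniform crossover
            c += rng.normal(0.0, 0.05, d) * (rng.random(d) < 0.1)  # sparse mutation
            kids.append(c)
        pop = np.vstack([parents, np.array(kids)])
        fit = np.array([objective(x) for x in pop])
        pop, fit = tlbo_step(pop, fit, objective)      # local refinement
    return pop[np.argmin(fit)], fit.min()

# Sanity check on a 5-D sphere objective.
best, e = ga_tlbo(lambda x: float(np.sum(x * x)), d=5, n=30, gens=40)
```

Because the top half of the population is carried over and the TLBO moves are accepted greedily, the best fitness is non-increasing over generations, mirroring the monotone convergence reported in figs. 3, 5 and 7.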
Additionally, we compare our solution with previously reported numerical solutions, namely the B-spline Method (BSM) [60], the Adomian Decomposition Method (ADM) [61,62], the Laplace Transform Decomposition Method (LTDM) [63] and the Artificial Neural Network (ANN) method [64], as well as with the exact solution given in eq. (2), to assess the worth of the proposed CANN-GA-TLBO method.

4.1 Bratu equation for λ = 1

For λ = 1, the Bratu equation can be written for the first cascade as

    ŷ''(x_b) + e^{ŷ(x_b)} = 0,   0 ≤ x_b ≤ 0.5,
    ŷ(0) = 0,                                                            (13)

Table 1. Weights obtained for the first cascade, for λ = 1.

    i     α_i             β_i             ω_i
    1     0.540972376     1.265041808     3.871147263
    2     −9.556304939    −7.853220278    3.215412522
    3     0.121637021     2.302118643     0.398136503
    4     −1.597536839    6.724914938     8.106257331
    5     0.110099371     −1.351330138    0.443000666
    6     −1.294612784    0.322508743     −2.375571085
    7     1.906745107     0.824050239     −1.343767519
    8     0.494648956     −1.062144887    −0.424821078
    9     0.00663977      1.07077525      −0.460342927
    10    0.342552957     4.225388691     8.189153664

Table 2. Weights obtained for the second cascade, for λ = 1.

    i     p_i             q_i             z_i
    1     1.146462247     13.90586559     18.75696808
    2     1.280146324     −0.137769192    −1.364213922
    3     4.484231257     −5.012995133    −0.408902528
    4     1.20529602      13.12765562     17.92620348
    5     −2.381127414    −0.93388657     0.845601999
    6     −1.188362251    4.499750485     0.504935366
    7     2.042890537     −0.461944975    0.243791607
    8     −7.319727237    −3.160532778    −1.586465424
    9     −2.787621464    −0.054577706    −1.591381045
    10    −3.928584737    −2.618501942    1.059966094

and, for the second cascade, as

    ŷ''(x_d) + e^{ŷ(x_d)} = 0,   0.5 ≤ x_d ≤ 1,
    ŷ(1) = 0.                                                            (14)

The weights obtained with fitness value 1.374827541 × 10^{−10} are given in table 1, for the first cascade, and in table 2, for the second cascade.
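The tabulated weights can be checked directly by the reader: substituting the (α_i, β_i, ω_i) triples of table 1 into the first line of eq. (7) should reproduce the boundary condition y(0) = 0 to roughly the reported fitness level. A verification sketch (not part of the paper):

```python
import math

# (alpha_i, beta_i, omega_i) from table 1, first cascade, lambda = 1
W = [( 0.540972376,  1.265041808,  3.871147263),
     (-9.556304939, -7.853220278,  3.215412522),
     ( 0.121637021,  2.302118643,  0.398136503),
     (-1.597536839,  6.724914938,  8.106257331),
     ( 0.110099371, -1.351330138,  0.443000666),
     (-1.294612784,  0.322508743, -2.375571085),
     ( 1.906745107,  0.824050239, -1.343767519),
     ( 0.494648956, -1.062144887, -0.424821078),
     ( 0.00663977,   1.07077525,  -0.460342927),
     ( 0.342552957,  4.225388691,  8.189153664)]

def y_hat(x):
    """First-cascade CANN output of eq. (7) with the trained weights of table 1."""
    return sum(a / (1.0 + math.exp(-(w * x + b))) for a, b, w in W)

print(abs(y_hat(0.0)))   # close to 0: boundary condition y(0) = 0
```

The same evaluation at the interior grid points yields the approximate solution compared against eq. (2) in fig. 2.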
The approximate solutions are calculated using these weights in eqs. (7) and (8), and the results are presented graphically in fig. 2 along with the reference analytical solution of eq. (2). The results of both ANN cascades consistently overlap with the reference solution, which shows the accuracy of the CANN-GA-TLBO algorithm.
In order to assess the level of accuracy achieved by CANN-GA-TLBO, we have estimated the absolute error from the reference exact solution; the results are listed in table 3 along with the absolute errors reported previously for ADM, LTDM, BSM and ANN. The proposed technique attains 6 to 7 decimal places of accuracy.
The proposed CANN-GA-TLBO was executed for 100 runs, and the fitness values obtained (on a semi-logarithmic scale) are plotted in fig. 3. It can be seen that all of the fitness values lie below 10^{−8}, indicating the good convergence of the proposed technique.

Fig. 2. Comparison of CANN-GA-TLBO results with the analytical solutions for λ = 1.

Table 3. Comparison of the absolute errors for Bratu's equation with value of the critical parameter λ = 1.

    x      ADM        LTDM       BSM        ANN        Proposed solution
    0.1    2.69e-3    1.98e-6    2.98e-6    2.75e-4    6.70e-7
    0.2    2.02e-3    3.94e-6    5.47e-6    3.29e-4    9.65e-7
    0.3    1.52e-4    5.85e-6    7.33e-6    2.13e-3    1.36e-6
    0.4    2.20e-3    7.70e-6    8.50e-6    1.32e-3    1.73e-6
    0.5    3.01e-3    9.47e-6    8.89e-6    3.75e-4    1.91e-6
    0.6    2.20e-3    1.11e-5    8.50e-6    8.63e-4    1.60e-6
    0.7    1.52e-4    1.26e-5    7.34e-6    3.20e-3    1.14e-6
    0.8    2.02e-3    1.35e-5    5.47e-6    1.29e-3    8.09e-7
    0.9    2.68e-3    1.20e-5    2.98e-6    4.66e-6    4.66e-7

Fig. 3. Fitness values obtained by the formalism CANN-GA-TLBO for 100 runs for λ = 1.

Table 4. Weights obtained for the first cascade, for λ = 2.

i αi βi ωi
1 0.470583862 −0.586862051 1.390410832
2 0.940124081 3.976742397 −4.115302278
3 0.354795114 2.456520683 6.920149346
4 −0.260225174 9.127217670 0.061687208
5 −1.139117544 14.30435256 0.080939649
6 −0.577448156 15.96364307 0.747397729
7 0.008571955 1.012372426 −4.430064399
8 0.582041300 0.905470276 4.311962042
9 0.741413927 1.126325078 −1.310778413
10 −0.740601475 0.279063321 −3.645569079

Table 5. Weights obtained for the second cascade, for λ = 2.

i pi qi zi
1 0.761811099 1.321998283 3.829531559
2 0.708971299 3.512382527 −2.302301156
3 −0.005475041 3.439087833 8.23971504
4 0.855662392 0.333902608 −0.825466895
5 1.39142883 0.513219651 −0.570800938
6 −0.639159517 1.206622585 6.056087182
7 −0.799373436 1.528951652 −3.84668085
8 −0.627017822 0.669129406 −6.271624189
9 −1.93738574 −0.035640686 0.742757856
10 −1.555559828 −3.848239826 2.36734327

4.2 Bratu equation for λ = 2

For λ = 2, the Bratu equation for the first cascade is given by

    ŷ''(x_b) + 2 e^{ŷ(x_b)} = 0,   0 ≤ x_b ≤ 0.5,
    ŷ(0) = 0,                                                            (15)

whereas, for the second cascade, the Bratu equation is given by

    ŷ''(x_d) + 2 e^{ŷ(x_d)} = 0,   0.5 ≤ x_d ≤ 1,
    ŷ(1) = 0.                                                            (16)

The weights obtained with fitness value 4.477207126 × 10^{−11} are given in table 4, for the first cascade, and in table 5, for the second cascade.
The approximate solutions are calculated on the same pattern as in the λ = 1 case. The approximate solution is then compared with the analytic solution of eq. (2) in fig. 4, while the values of the absolute error are given in table 6 along with the absolute errors of the reported solvers. It is seen that an accuracy of the order of 10^{−5} is achieved by CANN-GA-TLBO.
The fitness values obtained for 100 runs (on a semi-logarithmic scale) are plotted in fig. 5. It can be seen that all of the values lie below 10^{−7}, indicating the good convergence of the proposed technique for λ = 2.

Fig. 4. Comparison of CANN-GA-TLBO results with the analytical solutions for λ = 2.

Table 6. Comparison of the absolute errors for Bratu's equation with value of the critical parameter λ = 2.

    x      ADM        LTDM       BSM        ANN        Proposed solution
    0.1    1.52e-2    2.13e-3    1.72e-5    2.35e-3    1.13e-5
    0.2    1.47e-2    4.21e-3    3.26e-5    1.56e-3    1.96e-5
    0.3    5.89e-3    6.19e-3    4.49e-5    3.52e-3    2.78e-5
    0.4    3.25e-3    8.00e-3    5.29e-5    4.95e-3    3.55e-5
    0.5    6.99e-3    9.60e-3    5.56e-5    4.09e-3    4.07e-5
    0.6    3.25e-3    1.09e-2    5.29e-5    5.13e-3    3.58e-5
    0.7    5.89e-3    1.19e-2    4.49e-5    3.77e-3    2.75e-5
    0.8    1.47e-2    1.24e-2    3.26e-5    1.70e-3    1.88e-5
    0.9    1.52e-2    1.09e-2    1.72e-5    1.28e-3    9.66e-6

Fig. 5. Fitness values obtained by the formalism CANN-GA-TLBO, for 100 runs for λ = 2.

Table 7. Weights obtained for the first cascade, for λ = 3.51.

i αi βi ωi
1 0.484577184 4.44508067 −6.41245199
2 −1.544028527 1.961041847 0.113031645
3 −5.561923853 −3.246109463 2.753135667
4 1.975447108 −0.722689147 3.807464927
5 5.676793438 −3.907377343 0.214497413
6 1.443447381 1.716705439 5.811269981
7 −5.976584195 −9.517730639 4.776985059
8 −1.610886884 0.198138871 −3.869180237
9 3.42605565 −4.970203295 1.551786855
10 −2.512351564 −4.222599908 −3.071800087

Table 8. Weights obtained for the second cascade, for λ = 3.51.

i pi qi zi
1 0.4809252 7.695978973 −5.409487597
2 8.720300503 −4.890953494 −0.107904754
3 −1.890253912 0.273646911 −3.828351195
4 −2.225685556 −0.810456 −8.750464892
5 −0.341544914 31.23486985 22.34679878
6 15.18066837 −4.213100303 −0.217818743
7 −3.568592829 1.261649001 −4.215820405
8 4.578108437 2.095894254 −1.155625636
9 −7.838450036 −1.38106691 1.833633251
10 1.396253779 8.969260679 4.493165981

4.3 Bratu equation for λ = 3.51

For λ = 3.51, the nonlinear Bratu equation can be written, for the first cascade, as

    ŷ''(x_b) + 3.51 e^{ŷ(x_b)} = 0,   0 ≤ x_b ≤ 0.5,
    ŷ(0) = 0,                                                            (17)

and, for the second cascade, as

    ŷ''(x_d) + 3.51 e^{ŷ(x_d)} = 0,   0.5 ≤ x_d ≤ 1,
    ŷ(1) = 0.                                                            (18)

The weights obtained with fitness value 4.924197559 × 10^{−8} are given in tables 7 and 8 for ANN cascades 1 and 2, respectively.
Our approximate CANN-GA-TLBO solutions for λ = 3.51 are compared with the analytical solution in fig. 6.
The absolute error from the reference exact solution of eq. (2) is calculated for λ = 3.51, and the results are provided in table 9 along with the reported results of state-of-the-art solvers. The proposed CANN-GA-TLBO algorithm is applicable to the stiff scenario of the Bratu equation with an accuracy of the order of 10^{−4}.
The fitness values obtained for 100 runs are plotted in fig. 7. It can be seen that all of the values lie below 10^{−6}, indicating the good convergence of the proposed technique for λ = 3.51.

Fig. 6. Comparison of our result with the analytical solution for λ = 3.51.

Table 9. Comparison of the absolute errors for Bratu's equation with value of the critical parameter λ = 3.51.

    x      BSM        ANN        Proposed solution
    0.1    3.84e-2    2.98e-4    1.51e-4
    0.2    7.48e-2    6.88e-3    2.37e-4
    0.3    1.06e-1    2.72e-3    3.05e-4
    0.4    1.27e-1    1.76e-2    3.45e-4
    0.5    1.35e-1    1.04e-2    3.56e-4
    0.6    1.27e-1    1.37e-2    3.48e-4
    0.7    1.06e-1    4.32e-3    3.09e-4
    0.8    7.48e-2    6.68e-3    2.43e-4
    0.9    3.84e-2    1.66e-3    1.59e-4

Fig. 7. Fitness values obtained by the formalism CANN-GA-TLBO, for 100 runs for λ = 3.51.

5 Conclusions

To summarize, we have developed a new heuristic computing formalism, CANN-GA-TLBO, to solve nonlinear differential equations by exploiting cascaded feedforward ANN modelling together with the GA and TLBO algorithms. We have applied the CANN-GA-TLBO formalism effectively to the nonlinear Bratu equation for different scenarios based on the value of the critical parameter. Comparison of the proposed approximate solutions of the nonlinear Bratu equation with the analytical solution reveals that the fitness values lie below 10^{−8}, 10^{−7} and 10^{−6} for λ = 1, 2 and 3.51, respectively, which confirms the reasonable accuracy of the proposed CANN-GA-TLBO. The error ranges of the proposed CANN-GA-TLBO method are found to be relatively better than the previously reported solutions of state-of-the-art solvers based on ADM, LTDM, BSM and ANN. Moreover, statistical analysis of the results over 100 independent runs reveals the efficiency, accuracy and consistent convergence of the CANN-GA-TLBO algorithm.
In the future, the proposed method can be used to solve other highly nonlinear differential equations [65-71], including systems based on the Troesch, Thomas-Fermi, Lane-Emden, Riccati, Schrödinger and Bagley-Torvik equations. One may also explore improved performance of CANN-GA-TLBO by increasing the number of cascades or by varying the number of neurons in each cascade.

Publisher’s Note The EPJ Publishers remain neutral with regard to jurisdictional claims in published maps and institutional
affiliations.

References
1. W.S. McCulloch, W. Pitts, Bull. Math. Biophys. 5, 115 (1943).
2. V. Papadopoulos, G. Soimiris, D.G. Giovanis, M. Papadrakakis, Comput. Methods Appl. Mech. Eng. 328, 411 (2018).
3. Z. Sabir et al., Appl. Soft Comput. 65, 152 (2018).
4. I. Ahmad et al., Springer Plus 5, 1866 (2016).
5. M.A.Z. Raja, M.A. Manzar, F.H. Shah, F.H. Shah, Appl. Soft Comput. 62, 359 (2018).
6. I. Ahmad et al., Eur. Phys. J. Plus 133, 184 (2018).
7. I. Ahmad et al., Neural Comput. Appl. 28, 929 (2017).
8. J.A. Khan et al., Connect. Sci. 27, 377 (2015).
9. M.A.Z. Raja, Connect. Sci. 26, 195 (2014).
10. A. Mehmood et al., J. Taiwan Inst. Chem. Eng. 91, 57 (2018).
11. M.A.Z. Raja, M. Umar, Z. Sabir, J.A. Khan, D. Baleanu, Eur. Phys. J. Plus 133, 364 (2018).
12. A. Ara et al., Adv. Differ. Equ. 2018, 8 (2018).
13. C.J. Zúñiga-Aguilar, A. Coronel-Escamilla, J.F. Gómez-Aguilar, V.M. Alvarado-Martínez, H.M. Romero-Ugalde, Eur. Phys. J. Plus 133, 75 (2018).
14. M.A.Z. Raja, M.A. Manzar, R. Samar, Appl. Math. Model. 39, 3075 (2015).
15. S.Y. Chen, F. Zheng, S.Q. Wu, Z.Z. Zhu, Curr. Appl. Phys. 17, 454 (2017).
16. W. Zang, L. Ren, W. Zhang, X. Liu, Fut. Gen. Comput. Syst. 81, 465 (2018).
17. A.R. Hosseinzadeh, A.H. Mahmoudi, Mech. Mater. 114, 57 (2017).
18. N. Shaukat, A. Majeed, N. Ahmad, B. Mohsin, Nucl. Eng. Design 240, 2831 (2010).
19. M.A.Z. Raja, Z. Shah, M.A. Manzar, I. Ahmad, M. Awais, D. Baleanu, Eur. Phys. J. Plus 133, 254 (2018).
20. H.H. Chen, Y.C. Lee, C.S. Liu, Phys. Scr. 20, 490 (1979).
21. B. Chen, R. García-Bolós, L. Jódar, M.D. Roselló, Nonlinear Anal. 63, e629 (2005).
22. J.H. He, H.Y. Kong, R.X. Chen, M.S. Hu, Q.L. Chen, Carbohydr. Polym. 105, 229 (2014).
23. A. Hasseine, H.J. Bart, Appl. Math. Model. 39, 1975 (2015).
24. M.C. Devi, L. Rajendran, A.B. Yousaf, C. Fernandez, Electrochim. Acta 243, 1 (2017).
25. M. Lakestani, M. Dehghan, Comput. Phys. Commun. 181, 957 (2010).
26. A. Heydari, M. Mirparizi, F. Shakeriaski, F.S. Samani, M. Keshavarzi, Propulsion Power Res. 6, 223 (2017).
27. A. Bouharguane, J. Comput. Appl. Math. 328, 497 (2018).
28. B. Sepehrian, M.K. Radpoor, Appl. Math. Comput. 262, 187 (2015).
29. M. Al-Smadi, O.A. Arqub, Appl. Math. Comput. 342, 280 (2019).
30. O.A. Arqub, M. Al-Smadi, Chaos, Solitons Fractals 117, 161 (2018).
31. O.A. Arqub, Z. Odibat, M. Al-Smadi, Nonlinear Dyn. 94, 1819 (2018).
32. A.M. Wazwaz, Rom. J. Phys. 61, 774 (2016).
33. R. Saleh, S.M. Mabrouk, M. Kassem, Comput. Math. Appl. 76, 1219 (2018).
34. M. Grover, A.K. Tomer, Global J. Pure Appl. Math. 13, 5813 (2017).
35. E. Keshavarz, Y. Ordokhani, M. Razzaghi, Appl. Numer. Math. 128, 205 (2018).
36. Z. Yang, S. Liao, Commun. Nonlinear Sci. Numer. Simul. 53, 249 (2017).
37. O. Ragb, L.F. Seddek, M.S. Matbuly, Comput. Math. Appl. 74, 249 (2017).
38. H. Temimi, M. Ben-Romdhane, J. Comput. Appl. Math. 292, 76 (2016).

39. Z. Altawallbeh, M. Al-Smadi, I. Komashynska, A. Ateiwi, Ukr. Math. J. 70, 687 (2018).
40. N. Das, R. Singh, A.M. Wazwaz, J. Kumar, J. Math. Chem. 54, 527 (2016).
41. M.A.Z. Raja, Z. Shah, M.A. Manzar, I. Ahmad, M. Awais, D. Baleanu, Eur. Phys. J. Plus 133, 254 (2018).
42. R.G. Peyvandi, S.I. Rad, Eur. Phys. J. Plus 132, 511 (2017).
43. A. Mehmood et al., Appl. Soft Comput. 67, 8 (2018).
44. K. Majeed et al., Appl. Soft Comput. 56, 420 (2017).
45. I. Ahmad et al., Neural Comput. Appl. 29, 449 (2018).
46. M.A.Z. Raja, F.H. Shah, M.I. Syam, Neural Comput. Appl. 30, 3651 (2018).
47. I.A.H. Hassan, V.S. Ertürk, Int. J. Contemp. Math. Sci. 2, 1493 (2007).
48. M.R. Ali, A.R. Hadhoud, Results Phys. 12, 525 (2019).
49. P. Roul, K. Thula, Int. J. Comput. Math. 96, 85 (2019).
50. J.H. He, H.Y. Kong, R.X. Chen, M.S. Hu, Q.L. Chen, Carbohydr. Polym. 105, 229 (2014).
51. S. Hichar, A. Guerfi, S. Douis, M.T. Meftah, Rep. Math. Phys. 76, 283 (2015).
52. M.A.Z. Raja, R. Samar, E.S. Alaidarous, E. Shivanian, Appl. Math. Model. 40, 5964 (2016).
53. Z. Masood et al., Neurocomputing 221, 1 (2017).
54. S. Chanillo, M. Kiessling, Commun. Math. Phys. 160, 217 (1994).
55. M.M. Mousa, Brit. J. Math. Comput. Sci. 5, 515 (2015).
56. M.A.Z. Raja, S.I. Ahman, R. Samar, Neural Comput. Appl. 25, 1723 (2014).
57. M.A.Z. Raja, R. Samar, M.M. Rashidi, Neural Comput. Appl. 25, 1585 (2014).
58. Z. Abo-Hammour, O. Abu Arqub, S. Momani, N. Shawagfeh, Discr. Dyn. Nat. Soc. 2014, 401696 (2014).
59. M.A.Z. Raja, Neural Comput. Appl. 24, 549 (2014).
60. H. Caglar, N. Caglar, M. Özer, A. Valarıstos, A.N. Anagnostopoulos, Int. J. Comput. Math. 87, 1885 (2010).
61. A.M. Wazwaz, Appl. Math. Comput. 166, 652 (2005).
62. E. Deeba, S.A. Khuri, S. Xie, J. Comput. Phys. 159, 125 (2000).
63. S.A. Khuri, Appl. Math. Comput. 147, 131 (2004).
64. M. Kumar, N. Yadav, Natl. Acad. Sci. Lett. 38, 425 (2015).
65. A. Başhan, Y. Uçar, N.M. Yağmurlu, A. Esen, Eur. Phys. J. Plus 133, 12 (2018).
66. N. Ahmed, S. Bibi, U. Khan, S.T. Mohyud-Din, Eur. Phys. J. Plus 133, 45 (2018).
67. E. Fendzi-Donfack, J.P. Nguenang, L. Nana, Eur. Phys. J. Plus 133, 32 (2018).
68. M. Inc, A.I. Aliyu, A. Yusuf, D. Baleanu, J. Mod. Opt. 66, 647 (2019).
69. A. Yusuf, S. Qureshi, M. Inc, A.I. Aliyu, D. Baleanu, A.A. Shaikh, Chaos 28, 123121 (2018).
70. A.I. Aliyu, A. Yusuf, D. Baleanu, Commun. Theor. Phys. 70, 511 (2018).
71. H.I. Abdel-Gawad, M. Tantawy, M. Inc, A. Yusuf, Mod. Phys. Lett. B 32, 1850353 (2018).
