
Recent Applied Research in Mathematical, Statistical and Computational Sciences

Editors
Dr. Dinesh Kumar Singh Mr. Gaurav Goel
Dr. Shavej Ali Siddiqui Dr. Aarthi Elangovan
Dr. Devesh Katiyar Dr. Susheel Kumar Singh
Recent Applied Research in Mathematical Statistical and
Computational Sciences

Editors

Dr. Dinesh Kumar Singh


Assistant Professor
Department of IT,
Dr. Shakuntala Misra National Rehabilitation University, Lucknow, UP

Dr. Shavej Ali Siddiqui


Assistant Professor (Mathematics)
Department of Applied Sciences & Humanities, Faculty of Engineering & Technology,
Khwaja Moinuddin Chishti Language University, Lucknow, U.P.

Dr. Devesh Katiyar


Assistant Professor
Department of Computer Science
Dr. Shakuntala Misra National Rehabilitation University, Lucknow, UP

Mr. Gaurav Goel


Assistant Professor
Department of Computer Science,
Dr. Shakuntala Misra National Rehabilitation University, Lucknow, UP

Dr. Aarthi Elangovan


Assistant Professor
Department of Computer Science
Faculty of Science and Technology
SRM Institute of Science and Technology
Kattankulathur, Tamilnadu

Dr. Susheel Kumar Singh


Assistant Professor
Department of Physics
DSMNRU, Lucknow, UP

MKSES Publisher (India)

Publisher Address: Head Office: 1st Floor, Building No. 85A, Nanak Arcade, near Sani Mandir, Parag Road, LDA Colony, Kanpur Road, Lucknow-226012

Mobile No: +91 9838298016, +91 8299547952 Office Land line No: +91 5223587193

E-mail: mkespublication@gmail.com

Website: www.mksespublications.com

Copyright © MKSES Publisher, Lucknow, India

First Published: December 2023

ISBN: 978-93-91248-89-5
Page No. 1-123

Disclaimer: The views expressed by the authors are their own. The editors and publishers do
not own any legal responsibility or liability for the views of the authors, any omission or
inadvertent errors.

© All rights reserved. No part of this publication may be reproduced, stored in a retrieval
system, or transmitted in any form or by any means without the prior written permission of the
publishers.
Preface
This edited book, "Recent Applied Research in Mathematical, Statistical and Computational Sciences", is based on the proceedings of the 2nd International Conference on Research Methodology (ICRM-2023), jointly organized by Dr. Shakuntala Misra National Rehabilitation University, Mohaan Road, Lucknow, U.P. and the Science-Tech Institute run by Manraj Kuwar Singh Educational Society, Lucknow, U.P., during October 28-30, 2023. The aim of this conference was to bring together young as well as experienced researchers on one platform to discuss recent findings in the aforesaid areas of mathematical, statistical and computational sciences. After the peer-review process, the relevant research papers were included in this book as chapters.

The present volume is based on contributions made by various authors on different important topics of "Recent Applied Research in Mathematical, Statistical and Computational Sciences" and introduces the subject through the following topics: A New Single Sampling Plan based on Truncated Lifetest for Percentile Lifetime using Neutrosophic Pareto distribution; Hopf bifurcation analysis of a model with a prey, predator and ammensal; Forecasting of rice area, production, productivity of Tamil Nadu and correlation with climate data; An Innovative Approach for Detecting and Instance Segmentation of Printed Electric Components using Mask RCNN; and Harvesting policies in prey-predator models.
We must place on record our sincere gratitude to the authors, not only for their effort in preparing the papers for the present volume, but also for their patience in waiting to see their work in print. Finally, we are also thankful to our publisher, Mrs. Shweta Singh, M/S MKSES Publishers, Lucknow, for taking all the effort to bring out this volume in a short span of time.

Editors
Content

Chapter No. Chapter Name Page No.

1. A New Single Sampling Plan based on Truncated Lifetest for Percentile Lifetime using Neutrosophic Pareto distribution 1-10
S. Jayalakshmi and S. Vijilamery
2. Hopf bifurcation analysis of a model with a prey, predator and ammensal 11-21
Paparao A. V. and T.S.N. Murthy
3. Forecasting of Rice area, production, productivity of Tamil Nadu and correlation with climate data 22-36
Dr. P. Sujatha, Dr. B. Sivasankari and Dr. S. Anandhi
4. An Innovative Approach for Detecting and Instance segmentation of Printed Electric Components using Mask RCNN 37-63
Dayana Vincent
5. Harvesting policies in prey-predator model 64-76
Paparao A. V. and T.S.N. Murthy
6. A Survey of Evolutionary Algorithms for Breast Cancer Prediction 77-89
Dr. Aarthi E
7. Cryptography and Image Processing 90-101
Deep Singh, Rajwinder Singh and Amit Paul
8. Introduction to Search Engine Optimization 102-108
Dr. Dinesh Kumar Singh, Atma Prakash Singh
9. Study on Edge Computing for IOT and Big Data 109-116
Dr. Dinesh Kumar Singh, Dr. Vineet Kumar Singh
10. Active Passive Classifier Algorithm in Machine Learning 117-123
Dr. Devesh Katiyar, Mr. Gaurav Goel

Chapter: 1
A New Single Sampling Plan based on Truncated Lifetest for Percentile Lifetime using
Neutrosophic Pareto distribution
S. Jayalakshmi¹* and S. Vijilamery²
¹Assistant Professor, Department of Statistics, Bharathiar University, Coimbatore-46.
²Research Scholar, Department of Statistics, Bharathiar University, Coimbatore-46.
*E-mail: statjayalakshmi16@gmail.com, vijilamerysrsj@gmail.com

Abstract: Neutrosophic statistics is a generalization of classical statistics and is based on data that contain indeterminate information. Traditional statistical methods deal with determinate data and provide probabilities of acceptance and rejection based on the given data. In contrast, neutrosophic statistics is concerned with indeterminate data, and its theory takes into account the probability of acceptance, the probability of rejection, and the probability of indeterminacy. This paper focuses on a new single sampling plan based on a truncated life test and percentile lifetime using the Neutrosophic Pareto distribution. The plan handles indeterminate data from truncated life tests for percentile lifetime, using the Neutrosophic Statistical Interval Method to calculate the values of the operating characteristic function. Tables and results are given and explained with suitable illustrations.
Keywords: Neutrosophic Pareto distribution, Truncated Lifetest, Neutrosophic Single
Sampling Plan, Percentiles, Neutrosophic Statistical Interval Method.

1. Introduction

Quality inspection plays an important role in checking, maintaining and assuring product quality. Inspection helps to maintain high product quality, and proper inspection planning is needed to use the techniques and instruments effectively. The acceptance sampling plan is one of the important traditional methodologies used for testing or inspecting product quality. A reliability acceptance sampling plan is a sampling inspection that subjects the sampled products to a life test. The truncated life test is one of the methods of reliability sampling plans, and it is time-saving and cost-effective. A reliability acceptance sampling plan based on a truncated life test helps to determine the outcome of a trial in which the product's success or failure is decided before the experiment time. In particular, the classical binomial distribution considers only success and failure cases, whereas a sampling plan does not only decide acceptance or rejection but may also lead to indeterminacy, which can be managed by neutrosophic statistics. In a single sampling plan, samples are selected at random from the submitted lot. The lot is accepted if the number of defectives is less than the specified acceptance number, and rejected otherwise. The neutrosophic single sampling plan, by contrast, deals with the probability of acceptance, the probability of rejection and the probability of indeterminacy. The main goal of this study is to address the situation in which inspection results lead to indeterminacy, and to provide a proper solution that satisfies the needs of both the producer and the consumer.

Many research articles focused on Single sampling plan based on truncated lifetest: by Epstein
(1954) for exponential, by Kantam et al. (2001) for log-logistic, by Baklizi (2003) for Pareto
model of the second kind, by Baklizi and El Masri (2004) for Birnbaum-Saunders, by Baklizi
et al. (2005) for Rayleigh, by Balakrishnan et al. (2007) for generalized Birnbaum-Saunders,
by Aslam and Shabaz (2007) for generalized exponential, by Gui and Zhang (2014) for
Gompertz, by Al-Nasser and Al-Omari (2013) for exponentiated Frechet distribution, by Al-
Omari (2014, 2015,2016a, 2016b, 2018) respectively for Kappa distribution with three
parameters, generalized inverted exponential, transmuted inverse Rayleigh, generalized
inverted Weibull and Sushila distributions, by Al-Masri (2018) for inverse gamma, by Aslam
et al. (2010) for generalized exponential distribution.

Due to the significance of Neutrosophic statistics, several authors have used it in various fields, including the following: Smarandache (2014) has introduced the new area of Neutrosophic Statistics.
Aslam (2018) has introduced Neutrosophic statistics into the area of acceptance sampling
plans; Aslam (2018) has proposed the sampling plan for the process loss function using the
Neutrosophic statistics; Aslam and Raza (2018) have proposed sampling plans for multiple
manufacturing lines under the Neutrosophic statistical interval approach; Aslam and Arif
(2018) have discussed sudden death testing using the Neutrosophic statistics. Aslam (2019) has
designed the sampling plan for exponential distribution using Neutrosophic Statistics Interval
method. Aslam (2019) has proposed a new method to analyse rock joint roughness coefficients
based on Neutrosophic statistics. Chen et al. (2017) have developed an expression of the rock
joint roughness coefficient using Neutrosophic interval statistical numbers. Jeyadurga and
Balamurali (2020) have developed the new attribute sampling plan based on Weibull
distributed for percentile life using Neutrosophic Statistical Interval method. Zahid Khan
(2022) has developed the Neutrosophic Pareto Distribution.


2. Neutrosophic Pareto Distribution:

The cumulative distribution function of the Neutrosophic Pareto distribution is

$$F_N(x) = 1 - \left(\frac{\rho_N}{x}\right)^{\theta_N} \qquad (1)$$

The probability density function of the Neutrosophic Pareto distribution is

$$f_N(x) = \frac{\theta_N \rho_N^{\theta_N}}{x^{\theta_N + 1}}, \qquad \theta_N > 0,\ \rho_N > 0 \qquad (2)$$

where $\theta_N \in [\theta_l, \theta_u]$ is the neutrosophic shape parameter and $\rho_N \in [\rho_l, \rho_u]$ is the neutrosophic scale parameter.

Survival function:

$$S_N(x) = \left(\frac{\rho_N}{x}\right)^{\theta_N} \qquad (3)$$

Hazard function:

$$h_N(x) = \frac{\theta_N}{x} \qquad (4)$$

Quantile function:

$$Q_N = \rho_N (1 - p_N)^{-1/\theta_N} \qquad (5)$$
3. Designing of single sampling plan based on Truncated lifetest using Neutrosophic
Statistical Interval Method:
The percentile estimator for the Neutrosophic Pareto distribution follows from

$$\Pr(T \le \vartheta_q) = q, \qquad \vartheta_q = \rho_N (1 - p_n)^{-1/\theta_N} \qquad (6)$$

so the unknown scale parameter can be expressed as

$$\rho_N = \frac{\vartheta_q}{(1 - p_n)^{-1/\theta_N}} \qquad (7)$$

The cumulative distribution function can be written as

$$F_N(t_0) = 1 - \left(\frac{\rho_N}{t_0}\right)^{\theta_N} \qquad (8)$$

A convenient approach is to express the experiment time $t_0$ as a multiple of the specified $q$th percentile life $\vartheta_0$, i.e., $t_0 = a\vartheta_0$ for a constant $a$ (the experiment time termination ratio). Substituting the values of $t_0$ and $\rho_N$ in equation (1), we get

$$F_N(t_0) = 1 - \left(\frac{\vartheta_q}{a\vartheta_0 (1 - p_n)^{-1/\theta_N}}\right)^{\theta_N} = 1 - \frac{1}{a^{\theta_N}(1 - p_n)^{-1}}\left(\frac{\vartheta_q}{\vartheta_0}\right)^{\theta_N} \qquad (9)$$

$$p = 1 - \frac{r_F^{\theta_N}(1 - p_n)}{a^{\theta_N}} \qquad (10)$$

where $r_F = \vartheta_q / \vartheta_0$ represents the ratio between the true percentile lifetime and the specified percentile lifetime.
According to neutrosophic statistics, the design of sampling plans for percentile lifetime requires taking into account the probability of failure if the ratio $\vartheta_q/\vartheta_0 < 1$, the probability of non-failure if $\vartheta_q/\vartheta_0 > 1$, and the probability of indeterminacy if $\vartheta_q/\vartheta_0 = 1$. Let T denote the number of trials among n whose outcome is indeterminate, and let $I_T$ be the indeterminacy threshold. If $T > I_T$, the cases fall in the indeterminate part, and if $T \le I_T$, they fall in the determinate part. The probabilities of the three cases form the Neutrosophic Interval Probability $(P_F, P_{NF}, P_I)$, with total probability $P_F + P_{NF} + P_I \ge 1$. For fixed known values of a, p and θ, the values of $p_F$, $p_{NF}$ and $p_I$ are computed using the following equations.
$$p_F = 1 - \frac{1}{a^{\theta_N}(1 - p_n)^{-1}}\left(\frac{\vartheta_q}{\vartheta_0}\right)^{\theta_N} = 1 - \frac{r_F^{\theta_N}(1 - p)}{a^{\theta_N}}, \qquad \text{where } r_F = \frac{\vartheta_q}{\vartheta_0} < 1 \qquad (11)$$

$$p_{NF} = 1 - \frac{r_{NF}^{\theta_N}(1 - p)}{a^{\theta_N}}, \qquad \text{where } r_{NF} = \frac{\vartheta_q}{\vartheta_0} > 1 \qquad (12)$$

$$p_I = 1 - \frac{r_I^{\theta_N}(1 - p)}{a^{\theta_N}}, \qquad \text{where } r_I = \frac{\vartheta_q}{\vartheta_0} = 1 \qquad (13)$$
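A minimal Python sketch of equations (11)-(13) follows; the function simply evaluates $1 - r^{\theta_N}(1-q)/a^{\theta_N}$ for a given ratio r, using the ratios and percentile level of Table 1.

```python
def case_probability(r, theta, a, q=0.5):
    """Eqs. (11)-(13): 1 - r^theta * (1 - q) / a^theta for a ratio
    r = true percentile life / specified percentile life, termination
    ratio a and percentile level q (q = 0.5 for the 50th percentile)."""
    return 1.0 - (r ** theta) * (1.0 - q) / (a ** theta)

# Ratios used for Table 1: theta = 1, r_F = 0.4, r_I = 0.8, r_NF = 0.8, a = 0.5
p_F  = case_probability(0.4, theta=1.0, a=0.5)   # failure case
p_I  = case_probability(0.8, theta=1.0, a=0.5)   # indeterminacy case
p_NF = case_probability(0.8, theta=1.0, a=0.5)   # non-failure case
print(p_F, p_I, p_NF)   # 0.6, 0.2, 0.2
```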

3.1 Operating procedure for Single Sampling plan based on Truncated lifetest using
Neutrosophic Statistical Interval method:
The operating procedure for the single sampling plan based on a truncated life test using the proposed method is as follows:


Step 1:
Decide the values of $\vartheta_0$ (specified percentile life), $t_0$ (pre-defined testing time) and the indeterminacy threshold $I_T$. A criterion is applied in which the number of outcomes that result in an indeterminacy case (i.e., T) is compared with $I_T$, the sample falling in the determinate part when $T \le I_T$. From the submitted lot, a random sample of size n is drawn, and each sampled item is subjected to a life test for the pre-defined period of time $t_0$. On the basis of these two factors, the successes and failures in the sample can be determined:
(i) a sampled item can be classified as a failure if its lifetime is less than $t_0$ when $\vartheta_0 = t_0$;
(ii) a sampled item can be classified as a failure if its lifetime is greater than $t_0$ when $\vartheta_0 > t_0$.
Step 2:
If d ≤ c, accept the lot when the cases in the lot fall in the determinate part ($T \le I_T$); otherwise the lot is rejected. Here d denotes the number of failures that occur. The failure time of a lot is exactly equal to the true lifetime of the lot ($\vartheta_q$); a product that does not fail before the specified percentile lifetime is classified as a success (non-defective) item. The single sampling plan is characterized by the parameters n and c. The probabilities of acceptance, rejection and indeterminacy under the Neutrosophic Statistical Interval Method are represented by $\langle P_A, P_I, P_R \rangle$ for the NIP $\langle p_F, p_I, p_{NF} \rangle$. The values of the probabilities $P_A$, $P_I$ and $P_R$ can be computed from the following equations:
equations:
$$P_A = \sum_{d=0}^{c} \frac{n!}{d!(n-d)!} P_F^{\,d} \sum_{k=0}^{T} \binom{n-d}{k} P_I^{\,k} P_{NF}^{\,n-d-k} \qquad (14)$$

$$P_R = \sum_{d=c+1}^{n} \frac{n!}{d!(n-d)!} P_F^{\,d} \sum_{k=0}^{T} \binom{n-d}{k} P_I^{\,k} P_{NF}^{\,n-d-k} \qquad (15)$$

$$P_I = \sum_{z=T+1}^{n} \frac{n!}{z!(n-z)!} P_I^{\,z} \sum_{k=0}^{n-z} \binom{n-z}{k} P_F^{\,k} P_{NF}^{\,n-z-k} \qquad (16)$$
According to the proposed plan, if the number of failures is less than or equal to c, the lot will be accepted when the number of indeterminate outcomes T does not exceed the indeterminacy threshold $I_T$ (i.e., the determinate part, $T \le I_T$). Table 1 reports the probabilities of acceptance, indeterminacy and rejection $\langle P_A, P_I, P_R \rangle$ computed for fixed values of $\theta, a, n, c, T, r_F, r_I$ and $r_{NF}$.
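Equations (14)-(16) can be implemented directly as nested binomial sums. The sketch below follows the printed formulas; since the exact convention used to obtain $p_F$, $p_I$, $p_{NF}$ for each value of a is not fully spelled out in the text, it is not guaranteed to reproduce every entry of Table 1.

```python
from math import comb

def plan_probabilities(n, c, T, pF, pI, pNF):
    """Eqs. (14)-(16): probabilities of acceptance (PA), indeterminacy (PI)
    and rejection (PR) of the neutrosophic single sampling plan."""
    def indet_part(m):
        # sum over k indeterminate outcomes among m non-failed items, k <= T
        return sum(comb(m, k) * pI**k * pNF**(m - k) for k in range(min(T, m) + 1))
    PA = sum(comb(n, d) * pF**d * indet_part(n - d) for d in range(c + 1))
    PR = sum(comb(n, d) * pF**d * indet_part(n - d) for d in range(c + 1, n + 1))
    PI = sum(comb(n, z) * pI**z *
             sum(comb(n - z, k) * pF**k * pNF**(n - z - k) for k in range(n - z + 1))
             for z in range(T + 1, n + 1))
    return PA, PI, PR

print(plan_probabilities(n=5, c=0, T=1, pF=0.6, pI=0.2, pNF=0.2))
```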
Table 1: The probabilities (𝑷𝑨 , 𝑷𝑰 , 𝑷𝑹 ) for assuring 50th percentile life
when 𝜽 = 𝟏, 𝒓𝑭 = 𝟎. 𝟒, 𝒓𝑰 = 𝟎. 𝟖, 𝒓𝑵𝑭 = 𝟎. 𝟖.
n a = 0.5, 𝐼𝑇 = 1
c=0 c=1 c=2
5 (0.00013,0.36718,0.63268) (0.00371,0.36718, 0.62910) (0.03962,0.36718,0.59319)

10 (0.000000002, 0.755974769, (0.000000155, 0.755974769, (0.000004147, 0.755974769,


0.244025228) 0.244025076) 0.244021377)
𝐼𝑇 = 2
5 (0.00076,0.10351,0.89573) (0.01652,0.10351,0.879963) (0.13165,0.10351,0.76484)

10 (0.000000030, 0.474407196, (0.000001645, 0.474407196, (0.000038910, 0.474407196,


0.52559278) 0.525591170) 0.525553894)
a = 1.0, 𝐼𝑇 = 1
5 (0.00192, 0.26272, 0.73536) (0.02592, 0.26272, 0.71136) (0.14112, 0.26272, 0.59616)
10 ( 0.000001,0.6241903, (0.0000318, 0.6241903, (0.0004050, 0.6241903,
0.3758086) 0.37577779) 0.3754047)
𝐼𝑇 = 2
5 (0.005120, 0.057920, (0.057920, 0.057920, (0.259520, 0.057920,
0.936960) 0.884160) 0.68256)
10 ( 0.000005, 0.322200, (0.000147, 0.322200, (0.001681, 0.322200,
0.677795) 0.677653) 0.676119)

From the observations in Table 1, it can be concluded that the plan under consideration provides a high probability of acceptance when the probability of failure is high, and a high probability of rejecting the lot when the failure probability is low.
4. Application of the proposed plan
We consider real-life data recorded in Lawless (2003). The study deals with failure times for polyethylene cable insulation obtained by an accelerated life test. Ten specimens were tested, and the ordered failure times, in hours, are given below: 5.1, 9.2, 9.3, 11.8, 17.7, 19.4, 22.1, 26.7, 37.3, 60.0. To check the normality of the given data, the Kolmogorov-Smirnov test, Shapiro-Wilk test, P-P plot and Q-Q plot are used. The value of the Kolmogorov-Smirnov statistic is 0.999 and the Shapiro-Wilk statistic is 0.85981.


To estimate the unknown parameters of the distribution, maximum likelihood estimation is used. Table 2 shows the summary results for the given data. To assess the fit, several criteria are used, including the Anderson-Darling test, Cramér-von Mises test, Akaike information criterion (AIC), consistent Akaike information criterion (CAIC), Bayesian information criterion (BIC), and Hannan-Quinn information criterion (HQIC); the values are given below.
Table 2: Summary Results

Anderson-Darling (AD): 0.60446
Cramér-von Mises (CM): 0.99318
Akaike Information Criterion (AIC): 95.78024
Bayesian Information Criterion (BIC): 198.46825
Maximized Log-Likelihood (MLL): -47.89012
Consistent Akaike Information Criterion (CAIC): 98.00261
Hannan-Quinn Information Criterion (HQIC): 97.4483
Maximum Likelihood Estimates: θ̂_N = 0.82057, ρ̂_N = 6.48978

The maximum likelihood estimate of the shape parameter for the data in this study is $\hat{\theta} = 0.82057 \approx 1.0$; i.e., $\hat{\theta}$ is taken as 1. The analysis shows that, out of the 10 cable insulations, 2 failed, 7 did not fail, and one belongs to the indeterminacy group. In this case, the lot acceptance, indeterminacy and rejection probabilities are $\langle P_A, P_I, P_R \rangle$ = (0.0000318, 0.6241903, 0.37577779). Hence, the chance of accepting the lot of cable insulation product is about 0.003%.
5. Conclusion:
In the present paper, we discuss a new single sampling plan based on a truncated life test using the Neutrosophic Pareto distribution. The proposed sampling plan is significant to producers, since samples are not rejected immediately, and at the same time it protects consumers at the limiting quality level. Considering the indeterminacy case in the proposed plan gives the producer another opportunity before a lot is rejected. Because sampling is often conducted under uncertain situations, the proposed sampling plan is more effective and appropriate when designed using neutrosophic statistics. For practical use, tables have been provided, along with the formulas necessary for determining the probabilities in the determinate and indeterminate cases and the acceptance of the product.


References:

1. Al-Masri, A.-Q. Acceptance sampling plans based on truncated life tests in the inverse-gamma

model. Electron. J. Appl. Stat. Anal. (2018)11(2):397–404.

2. Al-Nasser, A. D. and Al-Omari, A. I. Acceptance sampling plan based on truncated life tests

for exponentiated frechet distribution. Int. j. stat. manag. syst., (2013). 16(1):13–24.

3. Al-Omari, A. I. Acceptance sampling plan based on truncated life tests for three parameter

kappa distribution. Econ. Qual. Control, (2014).29(1):53–62.

4. Al-Omari, A. I. Time truncated acceptance sampling plans for generalized inverted exponential

distribution. Electron. J. Appl. Stat. Anal. (2015).8(1):1–12.

5. Al-Omari, A. I. Acceptance sampling plans based on truncated lifetime tests for transmuted

inverse rayleigh distribution. Econ. Qual. Control, (2016a). 31(2):85–91.

6. Al-Omari, A. I. Time truncated acceptance sampling plans for generalized inverse weibull

distribution. J Stat Manag Syst, (2016b).19(1):1–19.

7. Al-Omari, A. I. "Acceptance sampling plans based on truncated life tests for Sushila

distribution", J. Math. Fundam. Sci., vol. 50, no. 1, pp. 72-83, Mar. 2018.

8. Aslam M, Kundu. D, Ahmad, M, "Time truncated acceptance sampling plans for generalized

exponential distribution", J. Appl. Statist., vol. 37, no. 4, pp. 555-566, Apr. 2010.

9. Aslam, M, Arif, O. Testing of grouped product for the Weibull distribution using neutrosophic

statistics. Symmetry (2018) 10(9), 403

10. Aslam M. Design of sampling plan for exponential distribution under neutrosophic statistical

interval method. (2018), IEEE Access, 6, 64153-64158.

11. Aslam M. A new attribute sampling plan using neutrosophic statistical interval method.

Complex & Intelligent Systems, (2019), 5, 365-370.

12. Aslam M. A new method to analyze rock joint roughness coefficient based on neutrosophic

statistics. Measurement, (2019), 146, 65-71.

13. Aslam, M., Raza, M.A. Design of new sampling plans for multiple manufacturing lines under

uncertainty. Int. J. Fuzzy Syst. (2018) 38, 1–15


14. Baklizi, A. Acceptance sampling based on truncated life tests in the pareto distribution of the

second kind. Adv Appl Stat, (2003).3(1):33–48.

15. Baklizi, A. and El Masri, A. E. Q. Acceptance sampling based on truncated life tests in the

birnbaum saunders model. Risk Anal., (2004). 24(6):1453–1457.

16. Baklizi, A, El-Masri, A.-Q, and AL-Nasser, A. Acceptance sampling plans in the rayleigh

model. Commun. Stat. Appl. Methods, (2005).12(1):11–18.


Chapter: 2
Hopf bifurcation analysis of a model with a prey, predator and ammensal
Paparao A. V.¹ and T.S.N. Murthy²
¹Department of Mathematics, JNTU-GV College of Engineering, Vizianagaram, A.P., India
²Department of ECE, JNTU-GV College of Engineering, Vizianagaram, A.P., India
E-mail: paparao.alla@gmail.com; tsnmurthy.ece@jntukucev.ac.in

Abstract: In this paper, we explore the dynamics of a three species model with a prey, a predator and an ammensal: the first species is the prey, and the second species is a predator which also exerts an ammensal (harming) effect on the third species. A discrete time lag is induced in the interaction of the prey and predator species, and the mathematical model is described by a system of delay differential equations. The co-existing state is identified and the local dynamics at this point are studied. The sufficient condition for a Hopf bifurcation is also derived. The discrete time lag (τ) is taken as the bifurcation parameter, and the critical value of the system is identified in support of numerical simulation. The system undergoes a Hopf bifurcation, which leads to instability: for τ = 0 the system is locally asymptotically stable, while for certain values τ > 0 the system exhibits an unstable nature.
Keywords: Prey, predator, co-existing state, local stability, Hopf bifurcation
AMS classification: 34 DXX
1. Introduction
Mathematical analysis of ecological and biological models is inevitable. Predictive analysis of ecological models using mathematical tools such as differential equations was initiated by Lotka [1] and Volterra [2]. Kapur [3, 4] discussed a wide range of mathematical models in ecology and epidemiology. The qualitative approach to ecological models has gained importance in the recent era; the authors of [5-7] provide an adequate mathematical approach to ecological interactions pertaining to qualitative analysis. Stability analysis of ecological phenomena through differential equations is elaborated in Braun [9] and Simmons [10]. In any ecological phenomenon, time delays are inevitable; they are mainly classified as discrete, continuous and distributed. The qualitative analysis of time delays is widely discussed in [11-13].
The nature of the delay argument can cause unbounded growth or extinction of populations, leading to an instability tendency of the models; such lags may change a stable equilibrium to an unstable one or vice-versa. The most popular models in nature are of prey-predator type, and many researchers have studied prey-predator models with Holling type functional responses and harvesting efforts [22-26]. Ammensalism is a relationship in which one species harms another without itself receiving any benefit or loss. The interaction of a prey and predator along with an ammensal was dealt with by Kondala Rao [20]. Time delays are very prominent in ecological models, and three species models with different interactions among the species and distributed time lags are widely studied by Paparao [14-19, 21]. Motivated by these models, we propose a three species model with a prey, predator and ammensal, with a time delay in the interaction of the prey and predator. The dynamics of the model and the instability criteria are discussed using Hopf bifurcation analysis. The paper is organized as follows: in Section 2 we frame the model; in Section 3 the equilibrium point is obtained; Section 4 presents the local stability analysis; in Section 5 the sufficient condition for the existence of a Hopf bifurcation is derived; and finally, in Section 6, numerical simulation is carried out in support of the mathematical analysis.
2. Model Equations:
The mathematical model for the proposed system, with a prey, a predator that is ammensal to the third species, and a discrete time lag (τ) induced in the prey-predator interaction, is given by the following delay differential equations:

$$\frac{dx}{dt} = a_1 x\left(1 - \frac{x}{k_1}\right) - a_{12}\, x(t-\tau)\, y(t-\tau)$$

$$\frac{dy}{dt} = a_2 y\left(1 - \frac{y}{k_2}\right) + a_{21}\, x(t-\tau)\, y(t-\tau) \qquad (2.1)$$

$$\frac{dz}{dt} = a_3 z\left(1 - \frac{z}{k_3}\right) - a_{32}\, y z$$

with initial conditions $x(0) = x_0$, $y(0) = y_0$ and $z(0) = z_0$.


The following notations are adopted: x: prey population; y: predator population; z: ammensal population; $a_i$: growth rates of the three populations; $a_{12}$, $a_{21}$: prey-predator interaction rates; $a_{32}$: interaction coefficient of the predator and ammensal species; $k_i$: carrying capacities of the three species.

Assumptions: (i) the predator is of generalist type; (ii) all interaction coefficients are taken positive and lie between 0 and 1. Growth rates, carrying capacities and population strengths are positive and do not necessarily lie between 0 and 1.
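The paper's simulations (Section 6) were presumably produced with MATLAB; as a minimal illustration of how the delayed system (2.1) can be integrated numerically, the following Python sketch uses a fixed-step Euler scheme with a history buffer. The function name `simulate` and the default parameter values (taken from Example 6.1 below) are this sketch's own conventions, not the authors' code.

```python
import numpy as np

def simulate(tau, a=(5, 1, 3), a12=0.5, a21=0.5, a32=0.04,
             k=(50, 50, 50), init=(3, 2, 2), t_end=50.0, h=1e-3):
    """Fixed-step Euler integration of the delay system (2.1).
    x(t - tau) and y(t - tau) are read from the stored trajectory;
    a constant history equal to the initial state is assumed for t <= 0."""
    a1, a2, a3 = a
    k1, k2, k3 = k
    n = int(t_end / h)
    lag = int(round(tau / h))
    X = np.empty((n + 1, 3))
    X[0] = init
    for i in range(n):
        x, y, z = X[i]
        xd, yd = X[max(i - lag, 0)][:2]      # delayed prey and predator values
        dx = a1 * x * (1 - x / k1) - a12 * xd * yd
        dy = a2 * y * (1 - y / k2) + a21 * xd * yd
        dz = a3 * z * (1 - z / k3) - a32 * y * z
        X[i + 1] = X[i] + h * np.array([dx, dy, dz])
    return X

print(simulate(tau=0.056)[-1])   # tau below the reported critical value 0.057
```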


3. Equilibrium Points
Equating the right-hand sides of the system (2.1) to zero, the co-existing state is found to be $E(\bar{x}, \bar{y}, \bar{z})$ with

$$\bar{x} = \frac{k_1 a_2 (a_1 - a_{12} k_2)}{a_1 a_2 + a_{12} a_{21} k_1 k_2}, \qquad \bar{y} = \frac{k_2 a_1 (a_2 + a_{21} k_1)}{a_1 a_2 + a_{12} a_{21} k_1 k_2}, \qquad \bar{z} = \frac{k_3 (a_3 - a_{32}\bar{y})}{a_3} \qquad (3.1)$$

The co-existing state exists (all components positive) if the following conditions are satisfied:

(i) $a_1 > a_{12} k_2$;  (ii) $a_3 > a_{32}\bar{y}$.  (3.2)
4. Local Stability at Co-existing State
Theorem 4.1: The co-existing state is locally asymptotically stable
Proof: The variational matrix for the system (2.1) is

$$J = \begin{pmatrix} a_1 - \dfrac{2a_1 x}{k_1} - a_{12}\, y e^{-\lambda\tau} & -a_{12}\, x e^{-\lambda\tau} & 0 \\[2mm] a_{21}\, y e^{-\lambda\tau} & a_2 - \dfrac{2a_2 y}{k_2} + a_{21}\, x e^{-\lambda\tau} & 0 \\[2mm] 0 & -a_{32} z & a_3 - \dfrac{2a_3 z}{k_3} - a_{32} y \end{pmatrix} \qquad (4.1.1)$$

The characteristic equation of (4.1.1) is given by

$$\psi(\lambda, \tau) = \lambda^3 + p_1\lambda^2 + p_2\lambda + p_3 + e^{-\lambda\tau}\left(q_1\lambda^2 + q_2\lambda + q_3\right) = 0 \qquad (4.1.2)$$

where

$$p_1 = \frac{2a_1 x}{k_1} + \frac{2a_2 y}{k_2} + \frac{2a_3 z}{k_3} - (a_1 + a_2 + a_3), \qquad q_1 = a_{12} y - a_{21} x,$$

and $p_2$, $p_3$, $q_2$ and $q_3$ are the corresponding lengthy polynomial expressions in the growth rates, interaction coefficients, carrying capacities and the coordinates of the co-existing state, obtained by expanding the characteristic determinant of (4.1.1).
This can be written as $\psi(\lambda, \tau) = P(\lambda) + Q(\lambda)e^{-\lambda\tau}$.
Case (i): For τ = 0.

The characteristic equation obtained from (4.1.2) by putting τ = 0 is

$$\psi(\lambda, 0) = -\left(\frac{a_3 z}{k_3} + \lambda\right)\left[\lambda^2 + \lambda\left(\frac{a_1 x}{k_1} + \frac{a_2 y}{k_2}\right) + \left(\frac{a_1 a_2 x y}{k_1 k_2} + a_{12} a_{21} x y\right)\right] = 0$$

The first factor gives one negative root, $\lambda = -\dfrac{a_3 z}{k_3} < 0$. The remaining two roots come from

$$\lambda^2 + \lambda\left(\frac{a_1 x}{k_1} + \frac{a_2 y}{k_2}\right) + \left(\frac{a_1 a_2 x y}{k_1 k_2} + a_{12} a_{21} x y\right) = 0 \qquad (4.1.3)$$

and both have negative real parts if the trace of (4.1.3) is negative and its determinant is positive. Here

$$\text{trace} = -\left(\frac{a_1 x}{k_1} + \frac{a_2 y}{k_2}\right) < 0, \qquad \text{determinant} = \frac{(a_1 a_2 + a_{12} a_{21} k_1 k_2)\, x y}{k_1 k_2} > 0.$$

Therefore, the system (2.1) is locally asymptotically stable at the co-existing state when τ = 0.
Case (ii): For τ > 0, the stability can be studied by substituting λ = ±iω in (4.1.2), which gives

$$\psi(i\omega, \tau) = (i\omega)^3 + p_1 (i\omega)^2 + p_2 (i\omega) + p_3 + \left[\cos(\omega\tau) - i\sin(\omega\tau)\right]\left[q_1 (i\omega)^2 + q_2 (i\omega) + q_3\right] = 0$$

$$-\omega^2 p_1 + p_3 - q_1\omega^2\cos\omega\tau + q_3\cos\omega\tau + q_2\omega\sin\omega\tau + i\left[-\omega^3 + \omega p_2 + q_2\omega\cos\omega\tau + q_1\omega^2\sin\omega\tau - q_3\sin\omega\tau\right] = 0$$

Separating real and imaginary parts, we get

$$(q_3 - q_1\omega^2)\cos\omega\tau + q_2\omega\sin\omega\tau = \omega^2 p_1 - p_3 \qquad (4.1.4)$$

$$q_2\omega\cos\omega\tau - (q_3 - q_1\omega^2)\sin\omega\tau = \omega^3 - \omega p_2 \qquad (4.1.5)$$

Squaring and adding (4.1.4) and (4.1.5) gives

$$(q_3 - q_1\omega^2)^2 + (q_2\omega)^2 = (\omega^2 p_1 - p_3)^2 + (\omega^3 - \omega p_2)^2$$

$$\omega^6 + \omega^4\left(p_1^2 - 2p_2 - q_1^2\right) + \omega^2\left(p_2^2 - 2p_1 p_3 - q_2^2 + 2q_1 q_3\right) + p_3^2 - q_3^2 = 0$$

Setting $p = \omega^2$, let

$$\psi(p) = p^3 + N_1 p^2 + N_2 p + N_3 = 0$$

where $N_1 = p_1^2 - 2p_2 - q_1^2$, $N_2 = p_2^2 - 2p_1 p_3 - q_2^2 + 2q_1 q_3$ and $N_3 = p_3^2 - q_3^2$. If $N_1 > 0$, $N_2 > 0$ and $N_3 > 0$, then $\psi(p) = 0$ has no positive real roots, so there is no ω such that iω is an eigenvalue of the characteristic equation $\psi(\lambda, \tau) = 0$. Thus λ = iω can never be a purely imaginary root of $\psi(\lambda, \tau) = 0$, and the real parts of all eigenvalues of $\psi(\lambda, \tau) = 0$ are negative for all τ ≥ 0. Summarizing the above analysis, we have the following theorem.
Theorem 4.2: The system (2.1) is locally asymptotically stable at the co-existing state for all τ ≥ 0 if the following conditions hold:

(i) $(p_1 + q_1) > 0$, $(p_2 + q_2) > 0$, $(p_3 + q_3) > 0$;
(ii) $N_1 > 0$, $N_2 > 0$, $N_3 > 0$.

Proof: If any one of $N_1$, $N_2$, $N_3$ is negative, then $\psi(p) = 0$ has a positive root $\omega_0^2$. Eliminating $\sin\omega\tau$ from equations (4.1.4) and (4.1.5), we get

$$\cos\omega\tau = \frac{\omega^4 (q_2 - p_1 q_1) + \omega^2 (p_1 q_3 + q_1 p_3 - p_2 q_2) - p_3 q_3}{q_1^2\omega^4 + (q_2^2 - 2q_1 q_3)\omega^2 + q_3^2}$$

$$\tau_k = \frac{1}{\omega_0}\cos^{-1}\left[\frac{\omega_0^4 (q_2 - p_1 q_1) + \omega_0^2 (p_1 q_3 + q_1 p_3 - p_2 q_2) - p_3 q_3}{q_1^2\omega_0^4 + (q_2^2 - 2q_1 q_3)\omega_0^2 + q_3^2}\right] + \frac{2k\pi}{\omega_0}, \qquad k = 0, 1, 2, \ldots \qquad (4.2.1)$$
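Given numerical values of the coefficients, the critical delay can be computed from the cubic $\psi(p) = 0$ and equation (4.2.1). The following Python sketch (the function name and the test coefficients are illustrative, not taken from the paper) returns the k = 0 branch $\tau_0$ when a positive root $\omega_0^2$ exists:

```python
import numpy as np

def critical_delay(p, q):
    """Smallest positive root w0 of psi(w^2) = 0 and the k = 0 delay of (4.2.1).
    p = (p1, p2, p3), q = (q1, q2, q3)."""
    p1, p2, p3 = p
    q1, q2, q3 = q
    N1 = p1**2 - 2*p2 - q1**2
    N2 = p2**2 - 2*p1*p3 - q2**2 + 2*q1*q3
    N3 = p3**2 - q3**2
    roots = np.roots([1.0, N1, N2, N3])                 # cubic in  p = w^2
    pos = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    if not pos:
        return None        # no purely imaginary eigenvalue: stable for all tau
    w0 = np.sqrt(min(pos))
    num = (q2 - p1*q1)*w0**4 + (p1*q3 + q1*p3 - p2*q2)*w0**2 - p3*q3
    den = q1**2*w0**4 + (q2**2 - 2*q1*q3)*w0**2 + q3**2
    return float(np.arccos(np.clip(num / den, -1.0, 1.0)) / w0)

print(critical_delay((1.0, 1.0, 0.5), (0.5, 0.5, 0.8)))  # illustrative numbers only
```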

5. Hopf Bifurcation
Theorem 5.1: The system (2.1) admits a Hopf bifurcation at the co-existing state E if τ > τ₀, and E is locally asymptotically stable if 0 ≤ τ < τ₀.
Proof: A Hopf bifurcation occurs when the real part of λ(τ) becomes positive as τ passes through the critical value τ₀, so that the steady state becomes unstable for τ > τ₀. To check this, we differentiate equation (4.1.2) with respect to τ:

$$\left[3\lambda^2 + 2p_1\lambda + p_2 + e^{-\lambda\tau}(2q_1\lambda + q_2) - \tau e^{-\lambda\tau}(q_1\lambda^2 + q_2\lambda + q_3)\right]\frac{d\lambda}{d\tau} = \lambda e^{-\lambda\tau}(q_1\lambda^2 + q_2\lambda + q_3)$$

$$\left[\frac{d\lambda}{d\tau}\right]^{-1} = \frac{3\lambda^2 + 2p_1\lambda + p_2}{\lambda e^{-\lambda\tau}(q_1\lambda^2 + q_2\lambda + q_3)} + \frac{2q_1\lambda + q_2}{\lambda(q_1\lambda^2 + q_2\lambda + q_3)} - \frac{\tau}{\lambda}$$

Using $e^{-\lambda\tau}(q_1\lambda^2 + q_2\lambda + q_3) = -(\lambda^3 + p_1\lambda^2 + p_2\lambda + p_3)$, this becomes

$$\left[\frac{d\lambda}{d\tau}\right]^{-1} = \frac{3\lambda^2 + 2p_1\lambda + p_2}{-\lambda(\lambda^3 + p_1\lambda^2 + p_2\lambda + p_3)} + \frac{2q_1\lambda + q_2}{\lambda(q_1\lambda^2 + q_2\lambda + q_3)} - \frac{\tau}{\lambda}$$

Substituting λ = iω₀, taking the real part and using the relation

$$(-\omega_0^3 + p_2\omega_0)^2 + (p_1\omega_0^2 - p_3)^2 = (q_2\omega_0)^2 + (q_3 - q_1\omega_0^2)^2,$$

we obtain

$$\left[\frac{d}{d\tau}\operatorname{Re}(\lambda)\right]_{\lambda = i\omega_0} = \left[\operatorname{Re}\left(\frac{d\lambda}{d\tau}\right)^{-1}\right]_{\lambda = i\omega_0} = \frac{3\omega_0^4 + \omega_0^2\left(2p_1^2 - 4p_2 - 2q_1^2\right) + p_2^2 - 2p_1 p_3 + 2q_1 q_3 - q_2^2}{(q_2\omega_0)^2 + (q_3 - q_1\omega_0^2)^2}$$

By the conditions $N_1 > 0$, $N_2 > 0$, $N_3 > 0$ we have $\left[\frac{d}{d\tau}\operatorname{Re}(\lambda)\right]_{\lambda = i\omega_0} > 0$.

Therefore, the Hopf bifurcation occurs as τ crosses the critical value τ₀, i.e., the system loses stability for τ > τ₀.


6. Numerical Simulation
Hopf bifurcation analysis is illustrated for the model equations (2.1), with the time lag (τ) as the bifurcation parameter, for the parametric values chosen in the following examples.
Example 6.1: Let us choose the following parameters for examination: a1 = 5, a2 = 1, a3 = 3, a12 = 0.5, a21 = 0.5, a32 = 0.04, k1 = 50, k2 = 50, k3 = 50, x = 3, y = 2, z = 2.

Fig. 6.1 (A) Fig. 6.1 (B)


The system (2.1) is unstable, with unbounded oscillations in the prey and predator populations, for τ = 0.057.

Fig. 6.1 (C) Fig. 6.1 (D)


The system (2.1) is stable for τ = 0.056
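As a quick numerical cross-check, the hypothetical `simulate` helper sketched after the model equations (2.1) in Section 2 can be run on either side of the reported critical value; whether the blow-up is visible depends on the step size and integration horizon chosen there.

```python
import numpy as np  # assumes simulate() from the sketch in Section 2 is in scope

for tau in (0.056, 0.057):
    traj = simulate(tau)              # defaults are the Example 6.1 parameters
    print(tau, "finite trajectory:", bool(np.isfinite(traj).all()))
```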
Example 6.2: Let us choose the following parameters for examination: a1 = 1, a2 = 1, a3 = 1, a12 = 0.5, a21 = 0.5, a32 = 0.4, k1 = 50, k2 = 50, k3 = 50, x = 3, y = 2, z = 2.


Fig. 6.2 (A) Fig. 6.2 (B)


The system (2.1) is unstable, with unbounded oscillations in the prey and predator populations, for τ = 0.067.

Fig. 6.2 (C) Fig. 6.2 (D)


The system (2.1) is stable due to bounded solutions when τ = 0.066

Example 6.3: Let us choose the following parameters for examination: a1 = 1, a2 = 0.5, a3 = 0.25, a12 = 0.5, a21 = 0.5, a32 = 0.5, k1 = 50, k2 = 50, k3 = 50, x = 3, y = 2, z = 2.

Fig. 6.3 (A) Fig. 6.3 (B)


The system (2.1) is unstable, with unbounded oscillations in the prey and predator populations, for τ = 0.071.

Fig. 6.3 (C) Fig. 6.3 (D)


The system (2.1) is stable due to bounded solutions when τ = 0.07


Example 6.4: Let us choose the following parameters for examination: a1 = 3, a2 = 2.5, a3 = 3, a12 = 0.7, a21 = 0.5, a32 = 0.4, k1 = 50, k2 = 50, k3 = 50, x = 3, y = 2, z = 2.

Fig. 6.4 (A) Fig. 6.4 (B)

The system (2.1) shows unbounded periodic solutions when τ = 0.043.

Fig. 6.4 (C) Fig. 6.4 (D)


The system (2.1) becomes stable due to the bounded solutions when τ = 0.042
7. Conclusion
Hopf bifurcation analysis is carried out for the proposed three species model consisting of a prey, a predator and an ammensal, with a discrete time lag (τ) induced in the interaction of the prey and predator species. The system is locally asymptotically stable when τ = 0. For τ > τ₀ the model admits an instability tendency, where the stable equilibrium becomes unstable; the sufficient conditions for the existence of this unstable nature were derived in Theorem 5.1. Numerical simulation was carried out using MATLAB to identify the critical value of the bifurcation parameter (τ) at which the switch-over occurs. The critical values of τ for the four numerical examples are placed below as evidence of the existence of the Hopf bifurcation.

S. No Example Hopf bifurcation value

1 Example 6.1 τ > 0.056
2 Example 6.2 τ > 0.066
3 Example 6.3 τ > 0.07
4 Example 6.4 τ > 0.042

Values of τ beyond these critical values destabilize the system and cause the bifurcation.

References
1. Lotka. A.J (1925). Elements of physical biology, Williams and Wilkins, Baltimore
2. Volterra, V (1931). Leconssen la theoriemathematique dela leitte pou Lavie, Gauthier-Villars,
Paris,
3. Kapur, J.N (1988). Mathematical Modelling, Wiley-Eastern.
4. Kapur, J.N (1985). Mathematical Models in Biology and Medicine, Affiliated Wiley-Eastern.
5. May, R.M (1973). Stability and complexity in model Eco-Systems, Princeton University press,
Princeton.
6. Murray, J. D (2002). Mathematical Biology-I: an Introduction, Third edition, Springer.
7. Freedman, H. I. (1980). Deterministic mathematical models in population ecology, Marcel Dekker, New York.
8. Paul Colin Vaux (1986). Ecology, John Wiley and Sons Inc., New York.
9. Braun. (1978). Differential equations and their applications- Applied Mathematical Sciences,
(15) Springer, New York.
10. George F. Simmons.1974. Differential Equations with Applications and Historical notes, Tata
Mc. Graw-Hill, New Delhi.
11. V. Sree Hari Rao and P. Raja Sekhara Rao. 2009. Dynamic Models and Control of Biological
Systems, Springer Dordrecht Heidelberg London New York.
12. Gopalsamy, K. 1992. Stability and Oscillations in Delay Differential Equations of Population Dynamics (Mathematics and Its Applications), Kluwer Academic Publishers.
13. Kuang, Y. 1993. Delay Differential Equations with Applications in Population Dynamics, Academic Press.
14. Papa Rao A.V., Lakshmi Narayan K. 2015. Dynamics of Three Species Ecological Model with
Time-Delay in Prey and Predator, Journal of Calcutta Mathematical society, vol 11, No 2,
Pp.111-136.
15. Papa Rao A.V., Lakshmi Narayan K. 2017.A prey, predator and a competitor to the predator
model with time delay, International Journal of Research in Science & Engineering, Special
Issue March Pp 27-38.
16. Papa Rao A.V., Lakshmi Narayan K. 2017.Dynamics of prey predator and competitor model
with time delay, International Journal of Ecology& Development, Vol 32, Issue No. 1 Pp 75-
86.


17. Papa Rao A.V., Kalesh vali., Apparao D.2022. Dynamics of SIRS Epidemic Model under
saturated Treatment, International Journal of Ecological Economics & Statistics (IJEES),
Volume 43, issue 3, Pp 106-119.
18. Papa Rao A.V., N.V.S.R.C. Murty Gamini. 2018. Dynamical Behaviour of Prey Predators
Model with Time Delay “International Journal of Mathematics And its Applications. Vol 6
issue 3 Pp: 27-37.
19. Papa Rao A.V., N.V.S.R.C. Murty Gamini. 2010. Stability Analysis of a Time Delay Three
Species Ecological Model “International Journal of Recent Technology and Engineering
(IJRTE). Vol7 Issue-6S2, PP:839-845.
20. Papa Rao. A. V, Lakshmi Narayan. K, Kondala Rao. K.2019.Amensalism Model: A
Mathematical Study, International Journal of Ecological Economics & Statistics (IJEES)Vol
40, issue 3, Pp 75-87.
21. Paparao A. V., G. A. L. Satyavathi, K. Sobhan Babu. 2021. Three species ammensalism model with time delay, International Journal of Tomography and Simulation, Vol 34, Issue 01, Pp 66-78.
22. Chen, H., Zhang, C., 2022, Analysis of the dynamics of a predator-prey model with holling
functional response. J. Nonl. Mod. Anal. 4, 310–324.
23. Liu, W., Jiang, Y.L., 2018, Bifurcation of a delayed gauge predator-prey model with Michaelis
Menten type harvesting. J. Theor. Biol. 438, 116–132.
24. Paparao A.V., G A L satyavathi., K. Sobhan Babu., Dynamics of Prey- Predator Model with
constant effort Harvesting of Prey., International Journal of Ecological Economics & Statistics
(IJEES) vol 44 issue 01, pp:40-50 2023.
25. Paparao A.V., G A L Satya vathi., K. Sobhan Babu Dynamics of Prey- Predator Model with
Michaelis-Menten Type Prey Harvesting; International Journal of Ecology & Development vol
38 issue 02 pp :67-77 2023
26. Ranjith Kumar G, Kalyan Das, Lakshmi Narayan Ravindra Reddy B., 2019, Crowding effects
and depletion mechanisms for population regulation in prey-predator intra-specific competition
model. Computational Ecology &software ISSN 2220-721Xvol 9 no 1 pp 19-36.


Chapter: 3
Forecasting of Rice area, production, productivity of Tamil Nadu and correlation with climate data
Dr. P. Sujatha¹, Dr. B. Sivasankari², Dr. S. Anandhi³
¹Assoc. Prof. (Maths), ²Prof. (Maths), ³Asst. Prof. (Maths)

Abstract: Crop yield forecasting and crop acreage estimation are crucial components of proper planning and policy making in the agriculture sector of the country. This research is a study of forecasting the area, production and productivity of rice in Tamil Nadu. Data for the period 2000-01 to 2022-23 were analysed by time series methods. The Auto Correlation Function (ACF) and Partial Auto Correlation Function (PACF) were calculated for the data, and an appropriate Box-Jenkins Auto Regressive Integrated Moving Average (ARIMA) model was fitted. The validity of the model was tested using standard statistical techniques. An ARIMA (0, 1, 1) model was used to forecast area, production and productivity for the next five years. We also correlated climate data, viz. temperature and rainfall, with production. The results showed the area forecast for the year 2023 to be about 1906086.73 hectares with lower and upper limits 1507149.18 and 2305024.28 hectares respectively, the production forecast to be about 6115107.56 tonnes with lower and upper limits 3498028.08 and 8732187.05 tonnes respectively, and the productivity forecast to be about 3.336 tonnes per ha with lower and upper limits 2.413549 and 4.258816 tonnes per ha respectively. Temperature was negatively correlated with production, whereas rainfall was positively correlated with production.
Keywords: ARIMA, Correlation, ACF, PACF

Introduction

Food production needs to be enhanced to provide food and nutritional security for the growing population, and crop yield forecasting is necessary for proper planning and policy making in the agriculture sector. The main objectives of our research are:

i) to predict and forecast the area, production and productivity of paddy along with rainfall data;
ii) to carry out a time series analysis using data from the past 20 years and provide forecasts for the next five years (2023-2027);
iii) to find the relation between climate data and production.


The gross cropped area in Tamil Nadu is around 58.43 lakh hectares, of which the gross irrigated area is 33.09 lakh hectares (57%); the balance 43% of the area is under rainfed cultivation. Tamil Nadu achieved a record coverage of paddy in the financial year 2022-23, with the total area standing at 22.05 lakh hectares. D. Balanagammal et al. (2000) utilised the ARIMA model to forecast the cultivable area, production and productivity of several crops in the Indian state of Tamil Nadu for subsequent years, based on data from 1956 to 1994. Zahra N. et al. used the linear, quadratic and exponential models to analyze the trends in rice area and yield in Punjab, Pakistan. A forecast of the area, production and productivity of rice in Thanjavur, Tamil Nadu was also made by M. Hemavathi et al. (2018) with an ARIMA model by analyzing time series data.

Rice scenario

Tamil Nadu is among the top states in rice productivity in India. Of the total gross cropped area of the state (58.97 lakh ha), paddy alone is cultivated in 22.05 lakh ha (37%), and it ranks first in area and production among the cereal crops cultivated in Tamil Nadu (source: Policy Note 2022-23, Govt. of TN), because it is the main staple food of the state with regard to human nutrition and caloric intake. Rice is also the staple food of the state and has played a pivotal role in its politics. Tamil Nadu ranks sixth in the production of rice among the states of India; in fact, it ranks first in the country in productivity of rice. The triennium average productivity of rice in Tamil Nadu is 3,494 kg/ha, which is 79% higher than the triennium average productivity (1,947 kg/ha) of the country.

Materials and Methods

The secondary data on rice area, production and productivity were collected for the period from 2001 to 2022 from various sources such as the Directorate of Economics and Statistics, the Season and Crop Report, and the Tamil Nadu state website.

Forecasting of area, production and productivity using ARIMA Model

George Box and Gwilym Jenkins studied ARIMA models extensively around 1968; these models can be used for time series analysis and forecasting. ARIMA has three components, namely autoregression (AR), integration (I) and moving average (MA). The ARIMA model is expressed as (p, d, q), where p is the number of autoregressive terms, d is the degree of differencing, and q is the number of lagged forecast errors. For an optimum forecast with an ARIMA model, the time series should be stationary.

The main stages in setting up a Box-Jenkins forecasting model are as follows:

 Identification
 Estimating the parameters
 Diagnostic checking and
 Forecasting.

This process is referred to as ARIMA (p, d, q), where p and q refer to the number of AR and MA terms, and d refers to the order of differencing required to make the series stationary.

The accuracy of an ARIMA model is measured using the mean square error (MSE) and the mean absolute percentage error (MAPE) (Makridakis and Hibon, 1979).
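The analysis in this paper was carried out in Gretl and SPSS. A minimal Python sketch of the same ARIMA (0, 1, 1) fit, using the production series of Table 10 with statsmodels, is given below; the exact AIC/SBC and forecast values will differ somewhat from the paper's, since software defaults differ.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Rice production of Tamil Nadu, 2001-2022 (tonnes), from Table 10
production = pd.Series(
    [6583630, 3577108, 3222776, 5061622, 5209433, 6610607, 5039954, 5183385,
     5665258, 5792400, 7458000, 4050000, 5350000, 5728000, 7374681, 3554113,
     6638450, 6131550, 7265161, 6881725, 8020000, 8067000],
    index=pd.period_range("2001", periods=22, freq="Y"))

fit = ARIMA(production, order=(0, 1, 1)).fit()   # no AR terms, d = 1, one MA term
print(fit.aic, fit.bic)
print(fit.get_forecast(steps=5).summary_frame())  # 2023-2027 with 95% intervals
```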

Forecasting of Area, Production, Productivity of Rice

Results and Discussion

Time series data for rice crop cultivated area, production and yield for the period 2000-01 to 2022-23 have been employed in the Auto Regressive Integrated Moving Average (ARIMA) model in the Gretl software. As stated earlier, the development of an ARIMA model for any variable involves four steps: identification, estimation, verification and forecasting. Each of these four steps is now explained for rice crop cultivated area, production and yield.

Model identification

The ARIMA model can only be estimated on a stationary series, so it is important to check stationarity. In a stationary series, the values vary over time only around a constant mean and with a constant variance. The common method used to check stationarity is to examine the graph or time plot of the data; the graphs (Figures 1, 2 and 3) show that the data are stationary. The next step is to identify the values of p and q. For this, the autocorrelation and partial autocorrelation coefficients of various orders of Xt are computed (Tables 1, 2 and 3). The Auto Correlation Function (ACF) and Partial Auto Correlation Function (PACF) (Figures 4, 5 and 6) indicate that the orders p and q are small; accordingly, eight tentative ARIMA models with p, q ≤ 2 were entertained, and the model with minimum AIC (Akaike Information Criterion) and SBC (Schwartz Bayesian Criterion) was chosen. The models and the corresponding AIC and SBC values are given in Tables 4, 5 and 6. The most suitable model, with the lowest AIC and SBC values, is ARIMA (0, 1, 1) for rice area, rice production and rice productivity.
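Continuing the statsmodels sketch above, the ACF and PACF of the first-differenced series (the d = 1 step) can be inspected directly; this is the computation summarized in Tables 1-3.

```python
from statsmodels.tsa.stattools import acf, pacf

diffed = production.diff().dropna()     # d = 1: first differences of the series
print(acf(diffed, nlags=8))
print(pacf(diffed, nlags=8))
```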

Figure 1: Time series plot for rice area

Figure 2: Time series plot for rice production

Figure 3: Time series plot for rice productivity


Table 1: Auto correlations and partial auto correlations for rice area

Lag ACF PACF Q value P value


1 0.1445 0.1445
2 -0.0785 -0.1016
3 -0.3301 -0.3126 3.7156 0.054
4 -0.0886 -0.0062 3.9458 0.139
5 0.0313 0.0019 3.9763 0.264
6 -0.0987 -0.2411 4.2977 0.367
7 -0.0704 -0.0722 4.4722 0.484
8 -0.0340 -0.0266 4.5157 0.607
9 0.0339 -0.0980 4.5624 0.713
10 0.0573 -0.0223 4.7071 0.788
11 0.0066 -0.0277 4.7092 0.859
12 -0.1254 -0.2051 5.5390 0.852
13 0.0681 0.1018 5.8113 0.886
14 0.0307 -0.0311 5.8734 0.922
15 0.1376 0.0187 7.3021 0.886
16 -0.1394 -0.1442 9.0115 0.830
17 -0.1176 -0.0738 10.4725 0.789
18 -0.1233 -0.1512 12.4804 0.710
19 -0.0010 -0.0910 12.4805 0.770
20 0.1092 0.0034 15.6265 0.619
21 0.0876 -0.0139 19.6824 0.414

Table 2: Auto correlations and partial auto correlations for rice production

lag ACF PACF Q P value


1 -0.0049 -0.0049
2 0.0452 0.0452
3 -0.2299 -0.2300 1.5235 0.217
4 0.2141 0.2247 2.8686 0.238
5 -0.0907 -0.0931 3.1242 0.373


6 0.1038 0.0505 3.4799 0.481


7 -0.1211 -0.0291 3.9958 0.550
8 -0.0976 -0.2017 4.3549 0.629
9 0.0263 0.1465 4.3829 0.735
10 -0.1392 0.0617 5.2360 0.732
11 0.0648 0.0358 5.4373 0.795
12 -0.2022 -0.1608 7.5969 0.668
13 0.0096 0.0282 7.6024 0.748
14 0.0584 0.0988 7.8275 0.789
15 -0.0333 -0.2022 7.9112 0.849
16 -0.2121 -0.1405 11.8715 0.617
17 -0.1123 0.1023 13.2028 0.587
18 -0.1455 -0.1536 15.9959 0.453
19 -0.0559 -0.0947 16.5466 0.485
20 0.0639 0.0019 17.6253 0.481
21 0.0801 0.0902 21.0168 0.336

Table 3: Auto correlations and partial auto correlations for rice productivity

LAG ACF PACF Q-stat. [p-value]


1 -0.025 -0.025
2 0.1063 0.1057
3 -0.126 -0.122 0.7549 [0.385]
4 0.22 0.2104 2.1753 [0.337]
5 -0.073 -0.052 2.3418 [0.505]
6 0.071 0.0195 2.5081 [0.643]
7 -0.072 -0.012 2.6887 [0.748]
8 0.0049 -0.066 2.6896 [0.847]
9 -0.043 0.0017 2.7657 [0.906]
10 -0.017 -0.048 2.7786 [0.947]
11 0.0984 0.1297 3.2435 [0.954]
12 -0.128 -0.142 4.1111 [0.942]
13 -0.012 -0.017 4.1193 [0.966]


14 -0.027 0.0342 4.1663 [0.980]


15 -0.093 -0.198 4.8175 [0.979]
16 -0.249 -0.189 10.266 [0.742]
17 -0.088 -0.104 11.089 [0.746]
18 -0.049 -0.032 11.401 [0.784]
19 -0.029 -0.033 11.554 [0.826]
20 0.0001 0.068 11.554 [0.869]
21 0.0305 0.0688 12.043 [0.884]

Table 4: AIC and SBC values for tentative ARIMA models for Rice area

Model (p, d, q) AIC SBC

011 584.0759 587.2095

110 608.3337 612.6978

111 584.9655 589.1436

012 584.59 588.7681

210 589.2816 593.4596

112 586.5891 591.8118

211 585.5054 590.728

212 592.3834 586.1163

Table 5: AIC and SBC values for tentative ARIMA models for Rice production

Model (p, d, q) AIC SBC

011 659.3333 662.4668

110 664.4606 667.5942

111 661.3317 665.5098

012 661.3313 665.5094

210 666.2974 670.4755

112 662.9191 668.1417


211 663.0536 668.2762

212 664.581 670.8482

Table 6: AIC and SBC values for tentative ARIMA models for Rice productivity

Model (p, d, q) AIC SBC

011 33.28695 36.42052

110 42.29405 39.16049

111 35.04189 39.21998

012 33.24096 37.41905

210 44.83376 40.65567

112 41.86891 36.6463

211 36.7283 41.95091

212 38.63983 44.90696

Figure 4: Auto correlations and partial auto correlations for rice area


Figure 5: Auto correlations and partial auto correlations for rice production

Figure 6: Auto correlations and partial auto correlations for rice productivity


Forecasting and verification:

Rice cultivated area, production and productivity, with upper and lower confidence limits, are forecasted using the SPSS package. The results of estimation are reported in Tables 7, 8 and 9. The area forecast for the year 2023 is about 1906086.73 hectares with lower and upper limits 1507149.18 and 2305024.28 hectares respectively, the production forecast is about 6115107.56 tonnes with lower and upper limits 3498028.08 and 8732187.05 tonnes respectively, and the productivity forecast is about 3.336 tonnes per ha with lower and upper limits 2.413549 and 4.258816 tonnes per ha respectively. The graphs of the predicted values (Figures 7, 8 and 9) show that, as the years go on, there is a continuous decreasing trend in rice area, due to which production and yield both show a decreasing trend. The Mean Absolute Percentage Error (MAPE) for rice cultivated area, production and productivity is found to be 10.78, 15.38 and 12.6 respectively. This measure indicates that the forecasting inaccuracy is low.

Table 7: Forecasted values of rice cultivated area with 95% Confidence Level (CL)

Year Area Prediction (Ha) 95% Interval

2023 1906086.73 1507149.18 – 2305024.28
2024 1836366.34 1426330.12 – 2246402.56
2025 1840873.33 1430791.36 – 2250955.30
2026 1840581.98 1430499.82 – 2250664.14
2027 1840600.81 1430518.65 – 2250682.97


Figure 7: Forecasted values of rice cultivated area with 95% Confidence Level (CL)

Table 8: Forecasted values of rice production with 95% Confidence Level (CL)

Year Production Predicted 95%Interval


2023 6115107.56 3498028.08 - 8732187.05
2024 5886437.84 3249134.01 - 8523741.66
2025 5843835.25 3205832.22 - 8481838.28
2026 5835898.12 3197870.83 - 8473925.42
2027 5834419.39 3196391.25 - 8472447.53


Figure 8: Forecasted values of rice production with 95% Confidence Level (CL)

Table 9: Forecasted values of rice productivity with 95% Confidence Level (CL)

Year Productivity Prediction (tonnes/ha) 95% Interval


2023 3.3361 2.413549 - 4.258816
2024 3.296 2.360349 - 4.232463
2025 3.625 2.321483 - 4.209584
2026 3.241 2.292739 - 4.190406
2027 3.229 2.271272 - 4.174679


Figure 9: Forecasted values of rice productivity with 95% Confidence Level (CL)

Correlation of yield with climate data:

Table 10 shows the year-wise climate data and the area, production and productivity of rice in Tamil Nadu. The correlation values are given in Table 11. If the correlation coefficient is greater than 0, the variables are positively correlated, and if it is less than 0, they are negatively correlated. Among the climatic parameters, rainfall and temperature are taken into account for correlation. Temperature and production are negatively correlated, whereas rainfall and production are positively correlated.

Table 10: Paddy and Year-Wise Climate Data of Tamil Nadu

Year    Area (ha)    Production (tonnes)    Productivity (tonnes/ha)    Mean Temp (°C)    Rainfall (mm)
2001    2059878      6583630                3.1961                      30.3              927.3
2002    1516537      3577108                2.3587                      30.1              931.4
2003    1396651      3222776                2.3075                      29.8              941.3
2004    1872822      5061622                2.7027                      29.9              948.5
2005    2050455      5209433                2.5406                      29.9              960
2006    1931397      6610607                3.4227                      30.9              1304.1
2007    1789170      5039954                2.8169                      30.1              859.7
2008    1931603      5183385                2.6835                      30                1164.8
2009    1845553      5665258                3.0697                      29.9              1023.1
2010    1905000      5792400                3.0406                      30.1              938
2011    1903000      7458000                3.9191                      29.9              1165.1
2012    1493000      4050000                2.7127                      30.4              937.1
2013    1725000      5350000                3.1014                      30.2              743.1
2014    1795000      5728000                3.1911                      30.6              790.6
2015    2000212      7374681                3.6869                      29.8              987.9
2016    1442841      3554113                2.4633                      29.5              1118
2017    1828919      6638450                3.6297                      30.7              598.1
2018    1721265      6131550                3.5622                      30.4              1017.2
2019    1907407      7265161                3.8089                      30                960
2020    2036239      6881725                3.3796                      29.6              985.8
2021    2165000      8020000                3.7044                      30.6              1232.8
2022    2205470      8067000                3.6577                      29.8              1236.1


Table 11: Correlation values

              Rainfall     Temp         Production
Rainfall      1
Temp          -0.142297    1
Production    0.4108169    -0.137102    1
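As a check, the correlation matrix in Table 11 can be reproduced from the Table 10 columns with a few lines of pandas. This is a minimal sketch, not the authors' computation.

# Minimal sketch: Pearson correlations among rainfall, temperature and
# production, using the year-wise values of Table 10.
import pandas as pd

df = pd.DataFrame({
    "Rainfall": [927.3, 931.4, 941.3, 948.5, 960, 1304.1, 859.7, 1164.8,
                 1023.1, 938, 1165.1, 937.1, 743.1, 790.6, 987.9, 1118,
                 598.1, 1017.2, 960, 985.8, 1232.8, 1236.1],          # mm
    "Temp": [30.3, 30.1, 29.8, 29.9, 29.9, 30.9, 30.1, 30, 29.9, 30.1,
             29.9, 30.4, 30.2, 30.6, 29.8, 29.5, 30.7, 30.4, 30, 29.6,
             30.6, 29.8],                                             # °C
    "Production": [6583630, 3577108, 3222776, 5061622, 5209433, 6610607,
                   5039954, 5183385, 5665258, 5792400, 7458000, 4050000,
                   5350000, 5728000, 7374681, 3554113, 6638450, 6131550,
                   7265161, 6881725, 8020000, 8067000],               # tonnes
})
print(df.corr())   # should reproduce the values reported in Table 11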

Conclusion: Time series data for rice cultivated area, production and yield for the period 2000-01 to 2022-23 were fitted with the Auto Regressive Integrated Moving Average (ARIMA) model in the Gretl software. The results show the area forecast for the year 2023 to be about 1906086.73 hectares, with lower and upper limits of 1507149.18 and 2305024.28 hectares respectively; the production forecast to be about 6115107.56 tonnes, with lower and upper limits of 3498028.08 and 8732187.05 tonnes respectively; and the productivity forecast to be about 3.336 tonnes per ha, with lower and upper limits of 2.413549 and 4.258816 tonnes per ha respectively.

References:


 Box, G.E., Jenkins, G.M., Reinsel, G.C., Ljung, G.M. (2015). Time Series Analysis: Forecasting and Control. John Wiley & Sons.
 Balanagammal, D., Ranganathan, C.R., Sundaresan, R. (2000). Forecasting of agricultural scenario in Tamil Nadu: a time series analysis. J. Indian Soc. Agricult. Stat. 53(3): 273–286.
 Zahra, N., Akmal, N., Siddiqui, S., Raza, I., Habib, N., Naheed, S. (2015). Trend analysis of rice area and yield in Punjab, Pakistan. Pakistan J. Agric. Res. 28(4).
 Makridakis, S., Hibon, M. (1979). Accuracy of forecasting: an empirical investigation. J. Roy. Stat. Soc. A 41(2): 97–145.
 Hemavathi, M., Prabakaran, K. (2018). ARIMA model for forecasting of area, production and productivity of rice and its growth status in Thanjavur district of Tamil Nadu, India.
 Tamil Nadu Agricultural University, Season and Crop Report. https://agritech.tnau.ac.in
 Tamil Nadu Government Portal, Season and Crop Report 2019-20. https://www.tn.gov.in
 Agricultural Statistics at a Glance 2014 & Annual Report 2015-2016, Government of India, Department of Agriculture, Cooperation and Farmers Welfare, Ministry of Agriculture and Farmers Welfare.
 Agricultural Statistics 2015, Government of India, Department of Agriculture, Cooperation and Farmers Welfare, Ministry of Agriculture and Farmers Welfare.


Chapter: 4
An Innovative Approach for Detecting and Instance segmentation of Printed Electric
Components using Mask RCNN
Dayana Vincent
Department of Data Science, NMKRV College for Women, Bengaluru
E-mail: dayanavincent2000@gmail.com

Abstract: A Printed Circuit Board (PCB) acts as the backbone of any electronic device, integrating numerous semiconductor components such as ICs, capacitors, resistors, connectors, transistors and many more. The performance of the PCB significantly impacts the overall functionality of the device; the motherboard, for instance, acts as the brain of a computer. The proper functioning of the PCB is therefore essential to the working of any electronic device. Manual inspection is difficult and prone to human error, while deep learning techniques provide a way to detect components on a PCB automatically. This project relies on an enhanced and efficient deep learning model, Mask RCNN, an instance segmentation and object detection model, which provides an innovative way of detecting the main components on a PCB and segmenting them. The core objective of this study is the successful implementation of Mask R-CNN for the detection and instance segmentation of key components on PCBs. The results show an mAP of 0.91 for the detection and instance segmentation of components such as capacitors, transistors, ICs, and connectors. Automating detection with Mask RCNN supports PCB assurance and quality inspection and improves organizational productivity.

Keywords: Object Detection, Instance Segmentation, Mask RCNN, Printed Electric Circuit,
Quality Inspection, PCB assurances.

Introduction

Printed Circuit Boards (PCBs) are the main components of any electronic device. These often green boards play an important role in every electronic device we use daily, from smartphones and laptops to cars and industrial machinery. PCBs are interconnected networks of conductive pathways that allow electronic components to communicate, work together, and perform their designated functions. Understanding the importance of PCBs, and of accurately detecting and segmenting their components, matters for several reasons. PCBs serve as the central nervous system of electronic


devices. They provide a platform for connecting various electronic components such as microprocessors, memory chips, sensors, and connectors; without PCBs, the electronics in these devices would be just a collection of wires. As consumer demand for smaller, lighter, and more powerful devices increases, PCBs must continuously evolve to accommodate these requirements, and detecting and segmenting components accurately on these compact boards is essential for ensuring that each device functions flawlessly. In industry, maintaining high levels of quality control is paramount: PCBs must be inspected very carefully to detect any defects or inconsistencies in their components, as even a small error can result in device malfunction. Precise detection and segmentation of PCB components streamline the quality control process, minimizing production errors and ensuring product reliability. Accurate PCB component detection also supports repair and maintenance tasks; technicians rely on these techniques to diagnose and fix issues in electronic devices efficiently, and without them the process would be time-consuming, costly, and prone to human error. As the world becomes more environmentally conscious, the importance of recycling electronic devices grows, and efficient PCB component detection and segmentation are crucial for the responsible disposal and recycling of electronic waste, allowing the recovery of valuable materials and the safe disposal of hazardous ones. Advancements in technology are often underpinned by breakthroughs in PCB design and functionality, and detection and segmentation techniques contribute to the development of innovative devices with improved performance, energy efficiency, and functionality. The importance of PCBs in the world of electronics cannot be overstated: they form the backbone of modern technology, enabling the devices and systems that have become indispensable in our lives. The accurate detection and segmentation of PCB components are pivotal for ensuring the quality, reliability, and sustainability of electronic devices, from their manufacturing to their end-of-life stages. As technology continues to evolve, so too will the methods and


technologies used to understand and manipulate these intricate circuit boards, pushing the
boundaries of what is possible in the world of electronics. While the importance of Printed
Electric Circuit Boards (PCBs) in the realm of electronics is undeniable, the task of detecting
and accurately segmenting their components is not without its challenges. These challenges
stem from the intricacies of PCB design, the ever-increasing complexity of electronic devices,
and the need for precision in various stages of the electronics lifecycle. Recognizing and
addressing these challenges is essential to ensuring the seamless functioning of electronic
devices and the efficiency of electronics-related processes.

Printed Circuit Boards are engineered with intricate layouts to accommodate numerous components within a limited space. These complex designs make it challenging to identify and isolate individual components accurately: detection algorithms must contend with densely packed components, traces, and vias while minimizing false positives and negatives. PCBs host a wide range of component sizes, from tiny surface-mount resistors and capacitors to larger microprocessors and connectors, so detecting and segmenting components of varying sizes, especially miniature ones, demands highly accurate and scalable algorithms capable of discerning minute details. On densely populated PCBs, components often overlap, making it difficult to distinguish individual parts; detecting overlapped components and accurately segmenting them is a significant challenge, as it requires algorithms to parse the interplay of shapes and colours effectively. Components may appear at different orientations and scales due to variations in manufacturing processes or design choices, and detection models must be robust enough to handle such variations, ensuring that components are correctly identified regardless of their orientation or size. Environmental conditions, such as lighting and background clutter, can also impact the accuracy of detection and segmentation, and developing algorithms that are robust to these factors is essential for consistent results in real-world scenarios. In some cases the components on PCBs lack distinctive features, making them challenging to identify; addressing this requires the development of advanced feature extraction and pattern recognition techniques. In manufacturing and quality control applications, real-time or near-real-time detection and segmentation are essential for maintaining production efficiency, so algorithms must be optimized for speed. Finally, as technology evolves, PCB designs continue to change, and detection and segmentation methods need to be scalable and adaptable to accommodate the evolving landscape of electronic devices and PCB layouts. Mask RCNN, an efficient deep learning algorithm built as a lightweight extension of Faster R-CNN, is used to detect the components in


PCB. Deep learning techniques such as the YOLO models, Detectron2 and many others have proved efficient for classifying and segmenting electric components on PCBs. Using Mask RCNN for segmentation helps to segment the PCB components effectively while preserving their boundaries. Mask RCNN has proved its efficiency in human pose estimation and is well known for shadow detection. Building a PCB component detector with this model therefore allows us to understand segmentation models, how they work, and how to tune a model for effective segmentation of the different components of a Printed Circuit Board. The project aims to build a model that detects the components of a PCB, which can be useful for small-scale manufacturers while also advancing the area of PCB component detection.

Challenges in Detecting PCB Components

PCB (Printed Circuit Board) component detection is a complex task that presents several significant challenges for both researchers and practitioners. One challenge is the limited availability of high-quality datasets. Unlike some other computer vision tasks, creating comprehensive PCB datasets requires manual annotation of individual components on the boards. This process is time-consuming; the annotation tool used in this research is the VGG annotation tool, which produces polygon shape attributes. Moreover, the scarcity of publicly available PCB datasets hampers progress in this field, making the development and evaluation of accurate detection models difficult.

Another challenge stems from the diversity of PCBs. These boards can accommodate a vast array of electronic components, each with its unique shape, size, and colour, and these components are densely packed on the PCB. Some components are tiny, like resistors or capacitors, while others are comparatively large, such as connectors or integrated circuits. Components can assume various shapes, including rectangles, circles, or irregular forms, and may come in different colours, which introduces yet another layer of complexity. Consequently, designing a single model capable of accurately detecting such a wide variety of components on diverse PCBs is a difficult task.

Selecting an optimized model architecture and fine-tuning its hyperparameters is a critical aspect of any PCB detection system, but it is not without its own set of challenges. The ideal model should balance complexity and performance, which requires a deep understanding of deep learning techniques. Moreover, hyperparameter tuning is a resource-consuming process that demands significant computational power and expertise in machine learning. The vast landscape of model choices, from Faster R-CNN and YOLO to custom architectures, further complicates this decision-making process. Finally, the challenges associated with PCB detection extend beyond these technical aspects.

Literature Review

Since the proposed PCB electronic component detection network is implemented on Mask RCNN, the following introduces the relevant research work.

[1]. Automatic visual inspection for printed circuit board via novel Mask R-CNN in smart city applications. Jian Lian, Letian Wang, Tianyu Liu, Xia Ding, Zhiguo Yu. This research introduces a deep learning-based approach for instance segmentation in printed circuit board images. By adding a geometric attention-guided mask branch to the fully convolutional one-stage object detector under the framework of Mask R-CNN, it can produce a segmentation mask for each bounding box to enhance identification accuracy. To evaluate the proposed approach, comparison experiments were conducted between state-of-the-art techniques and Mask RCNN. The experimental results demonstrate that the presented work outperformed the state of the art in precision, sensitivity, and accuracy, both for small devices like resistors and capacitors and for integrated circuits.

[2]. Application Research of Improved YOLO V3 Algorithm in PCB Electronic Component Detection. Jing Li, Jinan Gu, Zedong Huang and Jia Wen. The detection of electronic components on PCBs has historically relied on computer vision techniques and traditional machine learning methods; recent advancements in deep learning, however, have revolutionized this field. YOLO (You Only Look Once) V3, a state-of-the-art object detection algorithm, has gained prominence for its real-time detection capabilities and high accuracy. The work underscores the importance of data augmentation, anchor boxes, and dimensionality reduction in improving detection accuracy, and highlights the practical significance of achieving high mAP scores in real-world manufacturing applications. This YOLOv3 achieved an mAP of 0.93.
[3]. PCBDet: An Efficient Deep Neural Network Object Detection Architecture for Automatic PCB Component Detection on the Edge. Brian Li, Steven Palayew, Francis Li, Saad Abbasi, Saeejith Nair, and Alexander Wong. PCBDet was developed to deliver state-of-the-art inference throughput while outperforming other state-of-the-art efficient architectures in PCB component detection. Experimental results demonstrate PCBDet's ability to achieve a remarkable 2× inference speed-up on an ARM Cortex A72 processor, surpassing an EfficientNet-based design. Moreover, PCBDet shows an approximate 2-4% improvement in mean Average Precision (mAP) on the FICS-PCB benchmark dataset compared to its counterparts. The paper underscores the significance of efficient deep neural network architectures, such as PCBDet, in addressing the computational challenges associated with automatic PCB component detection on the edge.

[4]. Solder Joint Defect Detection in the Connectors Using Improved Faster-RCNN Algorithm. Kaihua Zhang and Haikuo Shen. This paper introduces a dataset comprising samples of connector solder joints; through data augmentation, the number of image samples was expanded to over three times the original dataset size. Anchor boxes are generated through clustering, and transfer learning is applied using the powerful ResNet-101 architecture. These elements are integrated into an improved Faster Region-based Convolutional Neural Network (Faster R-CNN) algorithm. Experimental results confirm the enhancements achieved by the proposed algorithm over the original one. The average detection accuracy of the method reaches an impressive 94%, and for certain defect types the detection rate even reaches 100%. These results meet the stringent requirements of the industry, offering a robust solution for connector solder joint defect detection.

[5]. A PCB Electronic Components Detection Network Design Based on Effective Receptive Field Size and Anchor Size Matching. Jing Li, Weiye Li, Yingqian Chen and Jinan Gu. This paper presents a real-time electronic component detection network achieved through the effective matching of receptive field size and anchor size within the YOLOv3 framework. It encompasses three key aspects: (1) calculating and visualizing the effective receptive field size for various depth layers within the convolutional neural network (CNN) through gradient backpropagation; (2) a modular YOLOv3 composition strategy that allows easy addition and removal of components; and (3) a lightweight and efficient detection network achieved through an innovative algorithm for matching effective receptive field size and anchor size. In comparison with other approaches, including Faster-RCNN, SSD (single-shot multibox detectors), and the original YOLOv3, the method boasts the highest detection mean average precision (mAP) of 95.03% on a PCB electronic component dataset. Furthermore, it exhibits the smallest memory footprint, representing approximately one-third of the original YOLOv3's parameter size. Additionally, the approach ranks second in terms of floating-point operations (FLOPs), underlining its efficiency and effectiveness in real-world electronic component detection applications.

Methodology

Objectives

The objective of this research project is to explore the application of Mask R-CNN, a deep
learning technique, for quality inspection in various industries. Specifically, the project aims
to achieve the following objectives:

1. Develop a robust and accurate Mask R-CNN model for detection and segmentation of
components in Printed Electric Circuit Board.

2. Train the Mask R-CNN model using a dataset of Printed Electric Circuit Board and the
annotation JSON file.

3. Improve the model through learning rate scheduling, reducing the number of neurons, and experimenting with different gradient descent algorithms.

4. Evaluate the performance of the trained model using metrics such as precision, recall, and
F1 score to measure its effectiveness in detecting components accurately.

5. Comparison of the performance of the Mask R-CNN model with traditional manual
inspection methods in terms of efficiency, accuracy, and cost-effectiveness.

6. Enhancing the productivity and making use of advanced techniques for hardware assurance
and quality control.

7. Deploy Mask RCNN for the detection and instance segmentation of PCB components using a Flask application.

Proposed Methodology

Mask Region-based Convolutional Network (Mask RCNN) is employed for the detection and instance segmentation of PCB components. Mask R-CNN is able to detect and segment multiple objects in an image, including overlapping and occluded objects. Instance segmentation is a big challenge in computer vision, and Mask RCNN needs more training time than simpler detectors but produces accurate results. Mask RCNN is a two-stage object detection and instance segmentation model which adds a mask head for the detected objects; it is an extension of Faster RCNN. It consists of: (i) a feature extraction backbone, ResNet101 or ResNet50, which extracts feature maps of various sizes; (ii) a Region Proposal Network (RPN) built on these feature maps; and (iii) an RCNN head that maps all region proposals to the same size and feeds them into a fully connected network.

Figure 3.2.1: Flowchart of the Proposed Methodology

MODEL

Another property of Mask RCNN is that it uses ROIAlign pooling, a method with no information loss; this differs from Fast RCNN's ROI pooling, whose quantization leads to information loss. Mask RCNN also uses a fully connected network to predict the classes, and the masks are produced in such a way that each object is treated as a separate instance.


Figure 3.3.1: Architecture of Mask R-CNN for PCB Components Detection

Figure 3.3.2: ROI Align


The model is implemented using a dataset of PCBs fed to Mask RCNN. It is given annotations as well as images as input; the annotations are in the form of region attributes, and the model is trained and then tested on validation data [7].

There are three losses in Mask RCNN, which can be written as follows.

Notation: let I represent the set of N input PCB images, each denoted In. The set of PCB components present in image In is Cn, containing Mn individual components cn,m.

We formulate the instance segmentation task using the Mask R-CNN architecture. Given an input image In, the Mask R-CNN model consists of a backbone network f_backbone, a region proposal network (RPN) f_RPN, and a segmentation head f_segmentation. The entire model is the composition of these functions [8]:

f_MaskR-CNN(In) = f_segmentation(f_RPN(f_backbone(In)))

The loss function used for training the Mask R-CNN model is composed of three components: the classification loss L_cls, the bounding box regression loss L_bbox, and the mask segmentation loss L_mask. The total loss L_total is the sum of these individual losses [6, 7]:

L_total = L_cls + L_bbox + L_mask
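As a small illustration of how these three losses combine, the sketch below sums precomputed loss tensors with the all-1.0 loss weights used later in the training configuration; the tensor names are placeholders, not the implementation's variables.

# Minimal sketch: combining the three Mask R-CNN losses with equal weights.
import tensorflow as tf

loss_weights = {"cls": 1.0, "bbox": 1.0, "mask": 1.0}   # as used in training

def total_loss(l_cls, l_bbox, l_mask):
    # L_total = w_cls*L_cls + w_bbox*L_bbox + w_mask*L_mask
    return (loss_weights["cls"] * l_cls
            + loss_weights["bbox"] * l_bbox
            + loss_weights["mask"] * l_mask)

# hypothetical per-batch loss values, for illustration only
print(total_loss(tf.constant(0.6), tf.constant(0.4), tf.constant(0.5)))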

DATASET

PCB data is collected from Roboflow and from pcb-component-detection-chiawen-kuo [7], a public dataset of PCB images. The dataset covers different types of PCBs, from mobile phones to laptops, and contains a large number of components including resistors, capacitors, inductors, transistors, and integrated circuits. Detection is performed on a subset of the data: 1959 images for training and 717 images for validation. The annotation of the images is done using the VGG Annotator tool [8], which records region attributes, i.e. the polygon extents (width and height) of the components.


Figure 4.1: Annotation of PCB image using VGG annotation tool

Identification of PCB components is difficult even for an electrical engineer, as there are many varieties of texture and shape among the different components on a PCB. Before annotation, the identification of PCB components was aided by the findchips [9] website, which gives an overview of how PCB components vary. The dataset is of high resolution, yet on its own it is not sufficient to train Mask RCNN, and the model may overfit. Keeping this in mind, data augmentation techniques are employed before model training starts; such techniques are useful when the data is not diverse. Rotation, flipping, image enhancement, conversion to grayscale and center cropping are applied to make the model generalize better. Including the augmented images, the sample size becomes 1959 for training data and 717 for validation data.
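The augmentations described above map directly onto TensorFlow's tf.image operations. The sketch below is an assumed illustration, not the authors' exact script; note that geometric operations (flip, rotation, crop) must also be applied to the corresponding masks and annotations.

# Minimal sketch: PCB image augmentation with tf.image, for an image
# tensor `img` with float values in [0, 1].
import tensorflow as tf

def augment(img):
    img = tf.image.random_flip_left_right(img)            # flipping
    img = tf.image.rot90(img, k=1)                        # 90-degree rotation
    img = tf.image.random_brightness(img, max_delta=0.2)  # brightness
    img = tf.image.random_contrast(img, 0.8, 1.2)         # contrast
    img = tf.image.random_saturation(img, 0.8, 1.2)       # saturation
    img = tf.image.central_crop(img, central_fraction=0.9)  # center crop
    return img

gray = tf.image.rgb_to_grayscale   # grayscale conversion, applied separately

img = tf.random.uniform([800, 800, 3])   # stand-in for a PCB image
aug = augment(img)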


Fig.4.2 Data Augmentation for PCB Images using TensorFlow


Another outcome of this project is the effect of data augmentation on training: once augmentation was introduced, the model trained more efficiently and the mAP increased.

EXPERIMENTAL SETUP
5.1 SYSTEM REQUIREMENT

IDE : Google Colab

GPU: NVIDIA Tesla T4 with 16GB

OS: Windows 10

Web application: Flask

5.2 LIBRARIES

1. h5py 3.1.0


h5py is a Python library that provides an interface to the HDF5 (Hierarchical Data Format
version 5) file format. It is commonly used for storing and manipulating large datasets.

Usage in Mask RCNN: h5py can be used to save and load model weights and other data
required during the training and inference of the MASK RCNN model.

2. ipykernel 6.15.2

ipykernel is part of the IPython ecosystem and provides the IPython kernel for Jupyter
notebooks and other interactive computing environments. It allows you to run Python code in
Jupyter notebooks.

Usage in Mask RCNN: It is used to run and execute code within Jupyter notebooks, which can
be useful for training and evaluating the MASK RCNN model in a more interactive manner.

3. keras-nightly 2.5.0.dev2021032900

Keras is a high-level neural networks API that is now integrated into TensorFlow as tf.keras. The "nightly" designation indicates a pre-release build of Keras.

Usage in Mask RCNN: Keras is used as the high-level deep learning framework for building
and training the MASK RCNN model.

4. matplotlib 3.5.3

Matplotlib is a popular Python library for creating static, animated, and interactive
visualizations in Python. It provides a wide range of plotting functions and capabilities.

Usage in Mask RCNN: Matplotlib is often used for visualizing the results of object detection,
showing images with detected components, plotting loss curves during training, and other
visualizations.

5. numpy 1.19.5

NumPy is a fundamental package for scientific computing in Python. It provides support for
large, multi-dimensional arrays and matrices, along with mathematical functions to operate on
these arrays.

Usage in Mask RCNN: NumPy is used for data manipulation and processing, especially in
tasks involving image data, such as preprocessing images before feeding them into the MASK
RCNN model.


6. pandas 1.4.4

Pandas is a powerful data manipulation and analysis library for Python. It provides data
structures like DataFrames for efficient data handling and analysis.

Usage in Mask RCNN: Pandas can be used for organizing and processing data related to the
PCB components and annotations, making it easier to manage and analyze the dataset used for
training and evaluation.

7. scikit-image 0.16.2

Scikit-image is a collection of algorithms for image processing. It is part of the wider scientific Python ecosystem and provides tools for tasks such as image segmentation, filtering, and feature extraction.

Usage in Mask RCNN: Scikit-image may be used for certain image processing tasks, possibly
as part of the data preprocessing or post-processing steps in the MASK RCNN pipeline.

8. tensorflow 2.5.0

TensorFlow is an open-source deep learning framework developed by Google. It is widely used


for building and training deep neural networks.

Usage in Mask RCNN: TensorFlow is the core framework used to implement the MASK
RCNN model. It provides the necessary tools and functions for building and training neural
networks for object detection.

9. tensorflow-estimator 2.5.0

TensorFlow Estimator is a high-level TensorFlow API that simplifies the process of defining,
training, and evaluating machine learning models.

Usage in Mask RCNN: TensorFlow Estimator might be used for creating custom training loops
or managing the training process of the MASK RCNN model.

Implementation

Data Preparation

Data collected from various sources is augmented using the TensorFlow library: rotation, center cropping, flipping, grayscale conversion, and image enhancement methods such as saturation, brightness and contrast. The initial dataset is 47 images; after data augmentation about 700 images were obtained, and the remaining images were collected from Roboflow. Data annotation is done using the VGG annotation tool, which accepts images as input and records polygon annotations. In total, 20790 regions were obtained for the 1959 training images (IC: 5594, Capacitor: 9302, Connector: 4837, Transistor: 105 regions) and 13506 region attributes for the validation data (IC: 3855, Capacitor: 6231, Connector: 2771, Transistor: 649 regions).
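The per-class region counts summarised in Figs. 5.1.1 and 5.1.2 can be obtained from the VIA (VGG Image Annotator) JSON export with a few lines of Python. The file name and the "label" attribute key below are assumptions about the export layout.

# Minimal sketch: count region attributes per component class in a VIA export.
import json
from collections import Counter

with open("train_annotations.json") as f:   # hypothetical path
    via = json.load(f)

counts = Counter()
for entry in via.values():                  # one entry per annotated image
    regions = entry["regions"]
    if isinstance(regions, dict):           # older VIA exports use a dict
        regions = regions.values()
    for region in regions:
        counts[region["region_attributes"]["label"]] += 1

print(counts)   # e.g. Counter({'Capacitor': ..., 'IC': ..., ...})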

Fig.5.1.1. Total number of PCB components in train JSON file.


Fig.5.1.2. Total number of PCB components in validation JSON file.


MODEL TRAINING
Mask RCNN is implemented using TensorFlow 2 [10]. The chosen backbone network is ResNet101, which serves as the foundation for feature extraction; it is favoured for its depth and its ability to capture intricate image features. The batch size is set to 1, implying that each training step processes a single image, and a single GPU is utilized for model training to optimize computational efficiency. Images are resized for uniformity, with a maximum dimension of 1024 pixels and a minimum dimension of 800 pixels to guarantee adequate feature preservation; they are resized to a square shape for consistency using the 'square' resize mode. Initially, no data augmentation is applied, resulting in an mAP of 0.45. Subsequently, data augmentation is introduced, and annotations are added to the augmented images.

For region proposal generation, we choose RPN anchor ratios of [0.5, 1, 2]. RPN anchor scales are set to (32, 64, 128, 256, 512) pixels. The RPN anchor stride is fixed at 1, determining the grid spacing for anchor placement. The RPN NMS threshold is set to 0.7 to regulate the suppression of overlapping proposals. During training, we use 256 RPN training anchors per image, which determines the number of proposals used. The FPN classifier FC layer size is set to 1024, enabling more expressive feature mapping. Masks are generated with a mask pool size of 14 and a mask shape of [28, 28]. The learning rate is set to 0.01 to control the optimization step size, while a learning momentum of 0.9 influences parameter updates. Weight decay is set to 0.0001 to encourage model regularization. Gradient clipping is applied with a norm of 5.0 to prevent gradient explosion. Loss weights for the different components are all set to 1.0, ensuring equal importance during training. The detection confidence threshold is set at 0.75, affecting the acceptance of detected objects. For inference, the NMS threshold is set to 0.3, controlling post-processing for non-maximum suppression [15].
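These settings map naturally onto a configuration class. The sketch below assumes the Matterport Mask R-CNN implementation, whose Config class exposes these attribute names; it is illustrative rather than the authors' exact file, and the NAME is hypothetical.

# Minimal sketch: the training configuration above expressed as a
# Matterport-style Config subclass (values taken from the text).
from mrcnn.config import Config

class PCBConfig(Config):
    NAME = "pcb"                        # hypothetical experiment name
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1                  # batch size 1
    NUM_CLASSES = 1 + 4                 # background + IC, capacitor, connector, transistor
    BACKBONE = "resnet101"
    IMAGE_RESIZE_MODE = "square"
    IMAGE_MIN_DIM = 800
    IMAGE_MAX_DIM = 1024
    RPN_ANCHOR_RATIOS = [0.5, 1, 2]
    RPN_ANCHOR_SCALES = (32, 64, 128, 256, 512)
    RPN_ANCHOR_STRIDE = 1
    RPN_NMS_THRESHOLD = 0.7
    RPN_TRAIN_ANCHORS_PER_IMAGE = 256
    FPN_CLASSIF_FC_LAYERS_SIZE = 1024
    MASK_POOL_SIZE = 14
    MASK_SHAPE = [28, 28]
    LEARNING_RATE = 0.01
    LEARNING_MOMENTUM = 0.9
    WEIGHT_DECAY = 0.0001
    GRADIENT_CLIP_NORM = 5.0
    DETECTION_MIN_CONFIDENCE = 0.75
    DETECTION_NMS_THRESHOLD = 0.3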

Initially, both ResNet50 and ResNet101 were considered as backbones. Although ResNet101 performed well, it exhibited fluctuations during training, likely because the model's complexity relative to the training data allowed it to fit noise. To address these fluctuations, we improved the model by transitioning from Stochastic Gradient Descent (SGD) to the Adam optimizer and introducing a learning rate scheduler; these adjustments significantly enhanced the model's performance, as demonstrated in Table 1. The model has 67,966,670 total parameters, of which 21,112,142 are trainable and 46,854,528 are non-trainable. We conducted experiments with different learning rates to converge to the optimal solution, as presented in Table 1.

Learning rate scheduling typically starts with a larger learning rate and gradually reduces it; this helps the model converge to a better solution and can improve training stability. Thus, the improved Mask RCNN increased its efficiency using learning rate scheduling [11] and different optimizers, resulting in better performance metrics and more stable training.
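The two adjustments can be expressed in generic Keras terms as below. The decay factor and step are illustrative, and wiring a custom optimizer into a particular Mask R-CNN codebase is implementation-specific.

# Minimal sketch: Adam optimizer plus a stepwise learning-rate schedule.
import tensorflow as tf

initial_lr = 0.01

def schedule(epoch, lr):
    # start with a large learning rate and halve it every 10 epochs
    return initial_lr * (0.5 ** (epoch // 10))

optimizer = tf.keras.optimizers.Adam(learning_rate=initial_lr)
lr_callback = tf.keras.callbacks.LearningRateScheduler(schedule, verbose=1)
# model.compile(optimizer=optimizer, ...)
# model.fit(..., callbacks=[lr_callback])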


HYPERPARAMETERS OF MODEL


Table 2: Tuned Hyperparameters of the Model

EVALUATION METRICS

In this study, several evaluation metrics were chosen for measuring the performance of the compared algorithms [1]:

Sn = TP / (TP + FN),  Sp = TN / (TN + FP),  Acc = (TP + TN) / (TP + FP + TN + FN),

where Sn and Sp represent sensitivity and specificity, and TP, FP, TN, and FN denote true positives, false positives, true negatives, and false negatives, respectively. Sn and Sp reveal the accuracy with which the defective and non-defective cases are correctly identified, respectively. Accuracy (Acc) represents the overall performance, and the Area under the ROC curve (AUC) is used as the average of Sn and Sp [1].
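A small helper computing these metrics from raw counts is shown below; the example counts are hypothetical.

# Minimal sketch: sensitivity, specificity, accuracy and the Sn/Sp average.
def metrics(tp, fp, tn, fn):
    sn = tp / (tp + fn)                     # sensitivity
    sp = tn / (tn + fp)                     # specificity
    acc = (tp + tn) / (tp + fp + tn + fn)   # overall accuracy
    auc = (sn + sp) / 2                     # average of Sn and Sp, as in [1]
    return sn, sp, acc, auc

print(metrics(tp=90, fp=10, tn=85, fn=15))  # hypothetical counts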


MODEL DEPLOYMENT USING FLASK

This section describes the technical details of the Flask application, its architecture and features, and demonstrates its usability through user instructions and illustrative screenshots.

Prediction Page

Fig.6.6.1: PCB Detection Application using Flask
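A minimal sketch of how such a Flask prediction endpoint can be structured is shown below; the route, helper names and response format are assumptions, not the deployed application's code.

# Minimal sketch: a Flask endpoint that accepts a PCB image upload and
# returns the detection results as JSON.
from flask import Flask, request, jsonify
from PIL import Image
import numpy as np

app = Flask(__name__)

def load_image(file_storage):
    # decode the uploaded file into an RGB array
    return np.array(Image.open(file_storage.stream).convert("RGB"))

def detect_components(image):
    # hypothetical stub: the real application runs Mask RCNN inference here,
    # e.g. model.detect([image]) with the trained weights loaded at startup
    return {"detections": []}

@app.route("/predict", methods=["POST"])
def predict():
    image = load_image(request.files["image"])   # multipart field "image"
    return jsonify(detect_components(image))     # classes, boxes, masks

if __name__ == "__main__":
    app.run(debug=True)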


Results
Mask RCNN is used for predicting components such as ICs, connectors, capacitors and transistors on the PCB. Component detection with Mask RCNN is difficult, as it requires a large amount of training data to predict the objects on the PCB accurately. The model was run for 50 epochs with a learning rate of 0.01. Due to the small amount of training and testing data, the model was unable to generalize to some components, and it also ignored some components when making predictions. Tracking training with TensorBoard, the model error was 3.5 at the beginning, and as the epochs progressed the classification loss, validation loss, bbox loss and mask loss all became very small. The model was able to predict the classes of some IC, transistor, capacitor and connector instances correctly for some images.


The PCB components are detected using Mask RCNN; in the images above, many components such as capacitors are detected properly, while other components are not detected. The main disadvantage is that the model could not explore much training data, because of the small batch size and the limited capacity of the GPU.


Fig 7.2. Detection on collected PCB Images


An mAP of 0.91 means that, on average, the model's predictions are correct for 91% of the PCB components in the test dataset. This is a strong indicator of the model's ability to detect and classify PCB components accurately. Achieving an mAP of 0.91 is particularly valuable in real-world applications, especially in quality control and manufacturing processes where accurate detection of PCB components is critical.

CONCLUSION
In conclusion, the application of Mask R-CNN for predicting components on PCBs presented both successes and challenges. The model exhibited promising results in predicting certain classes, such as ICs, capacitors, and connectors, for some images; however, it struggled with generalization and faced difficulties in accurately detecting components. This issue can be attributed to the limited amount of training and testing data available. During the training process, the model's performance gradually improved over the 50 epochs, after which the model appeared to overfit. The learning rate of 0.01 seemed appropriate, as evidenced by the reduction in the loss values for classification, validation, bounding box (bbox) regression, and masks as the epochs progressed. The initial error of 3.5 decreased, indicating that the model was learning from the data and becoming more capable of making accurate predictions.


To enhance the model's generalization capabilities, several strategies are suggested. Increasing the amount of training and validation data could expose the model to a wider variety of examples, allowing it to learn more robust features and patterns. Fine-tuning hyperparameters also emerged as a crucial consideration: extending the number of training epochs might enable the model to converge better, potentially enhancing its ability to detect challenging components accurately, and defining appropriate anchor sizes is essential for capturing objects of various scales and aspect ratios, further improving detection accuracy.
LIMITATION OF THE STUDY

1. Lack of dataset diversity: The dataset used for this study is not diverse enough. Collecting a limited number of images, augmenting them, and relying on a few images from the internet can result in a dataset that does not fully represent the variability of real-world PCB components. This can reduce the robustness of the model's performance when faced with unseen variations.

2. Limited batch size and GPU resources: Setting a batch size of 1 and having only one available GPU severely limits the training process. A smaller batch size can lead to slower convergence and increased training time, and a single GPU restricts the scale of experimentation and model complexity. The observation of rising and falling losses during training suggests that the model might be learning noise from the training data; this instability in the loss curve indicates the model's difficulty in converging to an optimal solution given the dataset's limitations.

3. Hyperparameter tuning challenges: Tuning the hyperparameters of Mask R-CNN can be a complex and resource-intensive task. With limited resources, it becomes challenging to thoroughly explore the hyperparameter space and find the best configuration for optimal performance.

While the study acknowledges these limitations, it is important to emphasize that these
limitations can be addressed in future research. By expanding the dataset, acquiring more
GPUs, and fine-tuning hyperparameters, there is the potential to significantly improve the
model's performance and robustness.


Chapter: 5
Harvesting policies in prey-predator model
Paparao A. V.¹ and T.S.N. Murthy²
¹Department of Mathematics, JNTU-GV College of Engineering Vizianagaram, Vizianagaram, A.P., India
²Department of ECE, JNTU-GV College of Engineering Vizianagaram, Vizianagaram, A.P., India
E-mail: paparao.alla@gmail.com; tsnmurthyece.jntuk@ieee.org

Abstract: In this paper we investigate constant-effort harvesting of the prey and predator species in a prey-predator model with Holling type II functional response. The system is described by a system of ordinary differential equations. The boundedness properties, the long-term behaviour of the system, and the equilibrium points are identified. Local stability analysis is discussed at each of the equilibrium points, and global stability is studied by constructing a suitable Lyapunov function. We prove that the system is both locally and globally asymptotically stable. Further, numerical simulation is performed in support of the analytical study.
Keywords: prey-predator, local stability, global stability, simulation.
Mathematics Subject Classification: 34DXX

1. Introduction

Nature provides wide opportunities, and ecological relations play a vital role in its stability. Prey-predator relations are prominent in nature, and the identification of prey by a predator is a common phenomenon. It is a significant relationship in ecology, and this interaction has captured the attention of many researchers. The Lotka [8] model, the most common in mathematical ecology, is fundamental to prey-predator population models. Later on, the dynamics of prey-predator models were explored by Carlos [2], Edward [3], Freedman [4], Kot [6], Lakshmi Narayan [7], May R.M. [11], Murray [12,13], Lima [10], Ranjith Kumar [16] and Sita Rambabu [21].

In theoretical ecology there are two types of functional responses, viz. prey-dependent and predator-dependent functional responses. The first postulates that the per capita rate of predation depends on the prey numbers.

Models with different Holling type functional responses are also included in the study. A detailed discussion of Holling type functional responses is given by the authors of [17,18,19]. Prey-predator dynamics with Holling type-II response was dealt with by Chen et al. [1].

The exploitation of biological resources and the harvesting of populations are commonly practiced in fisheries, forestry, and wildlife management. Harvesting can be classified into constant, constant-effort, linear and non-linear harvesting, and the list of harvesting studies with Holling type functional responses is wide. Gupta et al. [5] and Liu, W. [9] studied Michaelis-Menten type harvesting in prey-predator models and its bifurcation analysis. T.K. Kar [20] proposed the maximum sustainable yield in a prey-predator model. Xiao [22] analysed prey-predator dynamics with a constant harvesting rate. Recently Paparao [14,15] studied the dynamics of a prey-predator model with Holling type II response with Michaelis-Menten type and constant-effort harvesting of the prey species.

In the light of the above models, we propose constant-effort harvesting of both the prey and predator species with a Holling type response with saturation factor 1/(1 + kNi), and study the dynamics of the model, including the boundedness properties, the long-term behaviour of the system, and the stability analysis of the coexistence state. Finally, the analytical results are supported by numerical simulation.

2. Formation of Model

The system of equations for the proposed model, with constant-effort harvesting of both the prey and the predator species, is

dN1/dt = a1N1 − α11N1² − α12N1N2/(1 + kN1) − q1E1N1
dN2/dt = a2N2 − α22N2² + α21N1N2/(1 + kN2) − q2E2N2    (2.1)

with initial conditions N1(0) = N10 > 0 and N2(0) = N20 > 0.    (2.2)

The following notations are adopted:

N1, N2 : prey and predator populations
a1, a2 : growth rates of the prey and predator populations
α11 : internal competition of the prey species
α22 : internal competition among the predator species
α12 : interaction rate of the prey on to the predator
α21 : interaction rate of the predator on to the prey
q1, E1 : catchability coefficient and harvesting effort of the prey
q2, E2 : catchability coefficient and harvesting effort of the predator

Assumptions: The predator species is of generalist type, so it can survive in the absence of the prey species.


All the interaction coefficients are assumed to lie in the interval [0, 1], all constants in the model are assumed to be positive, and the environment is not closed.

2.1. Positivity and Boundedness of the Solutions

In this section we prove the positivity and boundedness of the solutions of the system (2.1) with the initial conditions (2.2). To prove the results, we use the following two lemmas.

Lemma 1: If a, b > 0 and dx/dt ≤ (≥) x(t)(a − bx(t)) with x(0) > 0, then

lim sup(t→∞) x(t) ≤ a/b  (respectively lim inf(t→∞) x(t) ≥ a/b).

Lemma 2: If a, b > 0 and dx/dt ≤ x(t)(a − bx(t)) with x(0) > 0, then for all t ≥ 0

x(t) ≤ a/(b − c e^(−at)), where c = b − a/x(0); in particular x(t) ≤ max{x(0), a/b} for all t ≥ 0.

Theorem 2.1.1: All the solutions N1(t), N2(t) of the system (2.1) with initial conditions (2.2) are positive, i.e., N1(t) > 0 and N2(t) > 0.

Proof: From the system (2.1), the prey equation is

dN1/dt = a1N1 − α11N1² − α12N1N2/(1 + kN1) − q1E1N1,

so N1(t) = 0 is an invariant set. Dividing by N1,

(1/N1) dN1/dt = a1 − α11N1 − α12N2/(1 + kN1) − q1E1,

whence

N1(t) = exp[∫ (a1 − α11N1 − α12N2/(1 + kN1) − q1E1) dt] > 0,

which implies N1(t) > 0 for all t ≥ 0.

We apply a similar argument to the predator equation dN2/dt = a2N2 − α22N2² + α21N1N2/(1 + kN2) − q2E2N2: N2(t) = 0 is an invariant set, and dividing by N2 gives

(1/N2) dN2/dt = a2 − α22N2 + α21N1/(1 + kN2) − q2E2,

so that

N2(t) = exp[∫ (a2 − α22N2 + α21N1/(1 + kN2) − q2E2) dt] > 0,

and hence N2(t) > 0 for all t ≥ 0.

Thus no trajectory can cross the coordinate axes of R²₊, and all the solutions N1(t), N2(t) are positive.

Theorem 2.1.2: All the solutions N1(t), N2(t) of the system (2.1) with initial conditions (2.2) are bounded for all t ≥ 0.

Proof: From the system (2.1), the prey equation gives

dN1/dt = a1N1 − α11N1² − α12N1N2/(1 + kN1) − q1E1N1 ≤ N1(a1 − α11N1).

Using Lemma 2 with a1, α11 > 0,

N1(t) ≤ a1/(α11 − c e^(−a1 t)), where c = α11 − a1/N10;

in particular N1(t) ≤ max{N10, a1/α11} = M1 for all t ≥ 0.    (2.1.2.1)

From the predator equation of (2.1) we have

dN2/dt = a2N2 − α22N2² + α21N1N2/(1 + kN2) − q2E2N2 ≤ N2(a2 + α21N1/(1 + kN2) − α22N2).    (2.1.2.2)

Again using Lemma 2, with N1 ≤ M1 from (2.1.2.1),

N2(a2 + α21N1/(1 + kN2) − α22N2) ≤ N2(a2 + α21M1/(1 + kN2) − α22N2),

and in particular

N2(t) ≤ max{N20, [a2(1 + kN20) + α21M1] / [α22(1 + kN20)]} = M2 for all t ≥ 0.    (2.1.2.3)

Hence the system (2.1) possesses bounded solutions.

2.2. Permanence:

Permanence concerns the long-term behaviour of the dynamical system, in particular whether the positive solutions stay away from the boundary of the positive orthant. For a system of two species the positive solutions are considered relative to the boundary of the positive quadrant (two-dimensional space), and for three species relative to the boundary of the positive octant (three-dimensional space).

The system is permanent if there exist two positive constants η1 and η2 such that each positive solution N1(t), N2(t) with initial conditions N10, N20 in R²₊ satisfies

min{lim inf(t→∞) N1(t; N10, N20), lim inf(t→∞) N2(t; N10, N20)} ≥ η1 and
max{lim sup(t→∞) N1(t; N10, N20), lim sup(t→∞) N2(t; N10, N20)} ≤ η2.

Theorem 2.2.1: The system (2.1) with initial conditions (2.2) is permanent if a2α22(1 + kM1) > α12[a2(1 + kM1) + α12M1].

Proof: From the system (2.1), the prey equation gives

dN1/dt = a1N1 − α11N1² − α12N1N2/(1 + kN1) − q1E1N1
       ≥ N1(a1 − q1E1 − α11N1 − α12N2)
       ≥ N1(a1 − q1E1 − α11N1 − α12[a2(1 + kN20) + α21M1]/[α22(1 + kN20)]),

using the boundedness property (2.1.2.3). Hence

dN1/dt ≥ N1(L1 − α11N1), where L1 = a1 − q1E1 − α12[a2(1 + kN20) + α21M1]/[α22(1 + kN20)].

Using Lemma 1, lim inf(t→∞) N1(t) ≥ L1/α11, and lim sup(t→∞) N1(t) ≤ a1/α11 from (2.1.2.1).

Similarly, the predator equation of (2.1) gives

dN2/dt = a2N2 − α22N2² + α21N1N2/(1 + kN2) − q2E2N2 ≥ N2(a2 + α21N1 − α22N2) ≥ N2(a2 + α21M1 − α22N2).

Using Lemma 1, lim inf(t→∞) N2(t) ≥ L2/α22, where L2 = a2 + α21M1, and

lim sup(t→∞) N2(t) ≤ [a2(1 + kN20) + α21M1]/[α22(1 + kN20)] from (2.1.2.3).

Now choose η1 = min(L1/α11, L2/α22) and η2 = max(a1/α11, [a2(1 + kN20) + α21M1]/[α22(1 + kN20)]); this establishes the permanence of the system (2.1).

3. Equilibrium states:

Equating the system of equations (2.1) to zero and solving, we get the following equilibrium points.

I. The extinct state E1: N̄1 = 0, N̄2 = 0.    (3.1)

II. Semi-extinct states, in which one species is extinct and the other survives:

Case A: the prey species exists and the predator species is extinct,

E2: N̄1 = (a1 − q1E1)/α11, N̄2 = 0.    (3.2)

Case B: the prey species is extinct and the predator species exists,

E3: N̄1 = 0, N̄2 = (a2 − q2E2)/α22.    (3.3)

III. Both species survive:

Solving the system (2.1) at a positive steady state gives a quadratic equation in N̄2,

aλ² + bλ + c = 0,    (3.4)

where

a = −α11α22k + (a1 − q1E1)k²α22,
b = −(a1 − q1E1)a2k² − (a1 − q1E1)α22k + α11α22 + α12α21 + kα11,
c = −(a1 − q1E1)α21 − k(a1 − q1E1)a2 + (a2 − q2E2)α11.

Solving equation (3.4) gives two roots for N̄2:

N2L = [−b − √(b² − 4ac)] / 2a,    (3.5)
N2H = [−b + √(b² − 4ac)] / 2a.    (3.6)

These equilibrium points exist only if b² > 4ac. The corresponding prey value is obtained from

N̄1 = [(a1 − q1E1) − α12N̄2] / [k(a1 − q1E1) − α11].    (3.7)

Hence the possible equilibrium points are (N2L, N̄1) and (N2H, N̄1).


The value of N̄1 is obtained from equation (3.7) by substituting the values of N2L and N2H respectively from equations (3.5) and (3.6).
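As a numerical check of the coexistence state, the sketch below locates it by solving the two per-capita growth equations of (2.1) directly with SciPy, using the parameter values of Example 6.1 below with q1 = E1 = q2 = E2 = 0.5 (an assumption chosen to match the case of Fig 6.1.1(C)).

# Minimal sketch: solve the nullcline equations of (2.1) for the
# coexistence equilibrium E4 with scipy.optimize.fsolve.
from scipy.optimize import fsolve

a1, a2, k = 1.0, 1.0, 50.0
a11, a12, a21, a22 = 0.1, 0.1, 0.05, 0.5
q1, E1, q2, E2 = 0.5, 0.5, 0.5, 0.5

def nullclines(v):
    N1, N2 = v
    return [a1 - a11*N1 - a12*N2/(1 + k*N1) - q1*E1,   # prey per-capita rate
            a2 - a22*N2 + a21*N1/(1 + k*N2) - q2*E2]   # predator per-capita rate

E4 = fsolve(nullclines, x0=[7.0, 2.0])   # initial guess near the reported E(7, 2)
print(E4)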

4. Local stability analysis:

The Jacobian matrix of the system (2.1) is

J = [ ∂f1/∂N1   ∂f1/∂N2
      ∂f2/∂N1   ∂f2/∂N2 ],    (4.1)

where

f1(N1, N2) = a1N1 − α11N1² − α12N1N2/(1 + kN1) − q1E1N1,
f2(N1, N2) = a2N2 − α22N2² + α21N1N2/(1 + kN2) − q2E2N2.    (4.2)

At an equilibrium point the Jacobian becomes

J = [ −α11N1 + kα12N2/(1 + kN1)²      −α12N1/(1 + kN1)
      α21N2/(1 + kN2)                 −α22N2 − kα21N1/(1 + kN2)² ].    (4.3)

The characteristic equation is det(J − λI) = 0; we calculate the eigenvalues at each of the equilibrium points as follows.

Case (i): E1(0, 0) is unstable.

Case (ii) A: The characteristic equation at E2: N̄1 = (a1 − q1E1)/α11, N̄2 = 0 is

λ² + (α11 + kα21)N̄1λ = 0,

with roots λ = 0 and λ = −(α11 + kα21)N̄1; hence the system is neutrally stable.

Case (ii) B: The characteristic equation at E3: N̄1 = 0, N̄2 = (a2 − q2E2)/α22 is

λ² + (α22 − kα12)N̄2λ − kα12α22N̄2² = 0.

The system is unstable, since the product of the roots, −kα12α22N̄2², is negative.

Case (iii): The coexistence state E4:

Recent Applied Research in Mathematical, Statistical and Computational Sciences


ISBN: 978-93-91248-89-5 71

The characteristic equation is aλ² + bλ + c = 0,    (4.4)

where a = 1,

b = −kα12N2/(1 + kN1)² + kα21N1/(1 + kN2)² + α11N1 + α22N2,

c = k²α12α21N1N2/[(1 + kN1)²(1 + kN2)²] − kα12α22N2²/(1 + kN1)² + α11α22N1N2 + kα21α11N1²/(1 + kN2)² + α12α21N1N2/[(1 + kN1)(1 + kN2)].    (4.5)

The system is stable if the sum of the roots is negative and the product of the roots is positive, i.e. if

α11N1 + α22N2 + kα21N1/(1 + kN2)² > kα12N2/(1 + kN1)²

and

α11α22N1N2 + kα21α11N1²/(1 + kN2)² + α12α21N1N2/[(1 + kN1)(1 + kN2)] + k²α12α21N1N2/[(1 + kN1)²(1 + kN2)²] > kα12α22N2²/(1 + kN1)².    (4.6)

E4 is locally asymptotically stable if condition (4.6) is satisfied, and unstable otherwise.

5. Global stability:

Theorem 5.1: The coexistence state E4(N̄1, N̄2) is globally asymptotically stable.

Proof: Let the Lyapunov function be

V(N1, N2) = l1[(N1 − N̄1) − N̄1 log(N1/N̄1)] + l2[(N2 − N̄2) − N̄2 log(N2/N̄2)].    (5.1)

The time derivative of V along the solutions of (2.1) is

dV/dt = l1[1 − N̄1/N1] dN1/dt + l2[1 − N̄2/N2] dN2/dt    (5.2)

      = l1[N1 − N̄1][a1 − α11N1 − α12N2/(1 + kN1) − q1E1]
        + l2[N2 − N̄2][a2 − α22N2 + α21N1/(1 + kN2) − q2E2].    (5.3)

By the proper choice of

a1 = α11N̄1 + α12N̄2/(1 + kN̄1) + q1E1,  a2 = α22N̄2 − α21N̄1/(1 + kN̄2) + q2E2,

and l1 = (1 + kN̄1)/α12, l2 = (1 + kN̄2)/α21, we get

dV/dt = −[α11(1 + kN̄1)/α12](N1 − N̄1)² − [α22(1 + kN̄2)/α21](N2 − N̄2)².    (5.4)

Clearly dV/dt < 0; hence the system is globally stable at the positive equilibrium point E4(N̄1, N̄2). Therefore, the system (2.1) is globally asymptotically stable at E4(N̄1, N̄2).

6. NUMERICAL EXAMPLES:

Example 6.1: Let a1 = 1, a2 = 1, k = 50, α11 = 0.1, α12 = 0.1, α21 = 0.05, α22 = 0.5, N1 = 25, N2 = 10.

Fig 6.1.1(A): with q1 = 10, E1 = 0.1, q2 = 10, E2 = 0.1 and with q1 = 1, E1 = 1, q2 = 1, E2 = 1, converging to E(0.2, 0.2).
Fig 6.1.1(B): with q1 = 10, E1 = 5, q2 = 10, E2 = 5, converging to E(0, 0).
Fig 6.1.1(C): with q1 = 0.5, E1 = 0.5, q2 = 0.5, E2 = 0.5, converging to E(7, 2).
Fig 6.1.1(D): with q1 = 0.05, E1 = 0.05, q2 = 0.05, E2 = 0.05, converging to E(9, 3).
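The trajectories of Example 6.1 can be reproduced with a standard ODE solver; the sketch below integrates system (2.1) with SciPy (the simulation tool used by the authors is not stated) for the case of Fig 6.1.1(C).

# Minimal sketch: integrate system (2.1) for Example 6.1 with
# q1 = E1 = q2 = E2 = 0.5 (the case of Fig 6.1.1(C)).
from scipy.integrate import solve_ivp

a1, a2, k = 1.0, 1.0, 50.0
a11, a12, a21, a22 = 0.1, 0.1, 0.05, 0.5
q1, E1, q2, E2 = 0.5, 0.5, 0.5, 0.5

def rhs(t, N):
    N1, N2 = N
    dN1 = a1*N1 - a11*N1**2 - a12*N1*N2/(1 + k*N1) - q1*E1*N1
    dN2 = a2*N2 - a22*N2**2 + a21*N1*N2/(1 + k*N2) - q2*E2*N2
    return [dN1, dN2]

sol = solve_ivp(rhs, (0, 200), [25, 10])   # initial state N1 = 25, N2 = 10
print(sol.y[:, -1])   # approaches the coexistence point, reported as E(7, 2)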

Example 6.2: Let a1=1.5, a2=2, k =5 α11=0.1, α12=0.2, α21=0.03, α22=0.1, N1 =25, N2 =10

Recent Applied Research in Mathematical, Statistical and Computational Sciences


ISBN: 978-93-91248-89-5 73

[Fig 6.2.1(A)-(D): simulated trajectories of system (2.1) for Example 6.2 under four harvesting settings.]

Fig 6.2.1(A): with q1 = 10, E1 = 0.1, q2 = 10, E2 = 0.1 and with q1 = 1, E1 = 1, q2 = 1, E2 = 1, converging to E(4, 10)

Fig 6.2.1(B): with q1 = 10, E1 = 5, q2 = 10, E2 = 5, converging to E(0, 0)

Fig 6.2.1(C): with q1 = 0.5, E1 = 0.5, q2 = 0.5, E2 = 0.5, converging to E(12, 18)

Fig 6.2.1(D): with q1 = 0.05, E1 = 0.05, q2 = 0.05, E2 = 0.05, converging to E(15, 20)

7. Conclusion

We consider a two-species ecological model of prey-predator interactions in which both species are harvested with constant effort. The mathematical model of the prey-predator dynamics was studied, and we proved that the ecosystem is stable in the co-existing state. The properties of the model, namely positivity, boundedness, and permanence of the system, were established. The stability of the model was discussed at the possible equilibrium points, and the global stability of the co-existing state was addressed by choosing a proper Lyapunov function. Numerical simulation was performed in support of the analytical results. The stability analysis at the four equilibrium points and the nature of the system under harvesting are summarised below.


Equilibrium point                   Nature of the system with linear-type harvesting

$E_1: N_1 = 0,\ N_2 = 0$            Unstable

$E_2: N_1 \neq 0,\ N_2 = 0$         Neutrally stable

$E_3: N_1 = 0,\ N_2 \neq 0$         Unstable

$E_4: N_1 \neq 0,\ N_2 \neq 0$      Asymptotically stable if condition (4.6) holds, i.e. if

$$\alpha_{11}N_1 + \alpha_{22}N_2 + \frac{k\alpha_{21}N_1}{(1+kN_2)^2} > \frac{k\alpha_{12}N_2}{(1+kN_1)^2} \quad\text{and}$$

$$\alpha_{11}\alpha_{22}N_1N_2 + \frac{k\alpha_{21}\alpha_{11}N_1^2}{(1+kN_2)^2} + \frac{\alpha_{12}\alpha_{21}N_1N_2}{(1+kN_1)(1+kN_2)} + \frac{k^2\alpha_{12}\alpha_{21}N_1N_2}{(1+kN_1)^2(1+kN_2)^2} > \frac{k\alpha_{12}\alpha_{22}N_2^2}{(1+kN_1)^2}$$

Table 7.1

The global stability of the co-existing state was addressed by choosing a proper Lyapunov function, and we proved that the system is globally asymptotically stable. Further, numerical simulation was performed in support of the analytical results. The harvesting-effort results are summarised below.

Example          Result with linear-type harvesting

Example 6.1      Prey and predator populations become extinct with q1 = 10, E1 = 5, q2 = 10, E2 = 5

Example 6.2      Prey and predator populations become extinct with q1 = 10, E1 = 5, q2 = 10, E2 = 5

Table 7.2

The extinction of both populations at the harvesting efforts q1 = 10, E1 = 5, q2 = 10, E2 = 5 indicates overharvesting. Using numerical simulation, the admissible ranges of the harvesting effort and the catchability coefficient were identified as q ∈ [0.05, 0.5] and E ∈ [0.05, 0.5]. The choice of harvesting effort is therefore significant: if the efforts are not chosen within this range, the system may undergo overharvesting.




Chapter-6

A Survey of Evolutionary Algorithms for Breast Cancer Prediction

Dr. Aarthi E

Assistant Professor, Department of Computer Science, Faculty of Science and Humanities, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, India

Abstract

In recent years, there has been growing interest in utilizing evolutionary algorithms for the diagnosis of breast cancer (1). This literature review aims to explore the impact of evolutionary algorithms on breast cancer diagnosis. Through an extensive analysis of various sources, it has been found that evolutionary algorithms have shown promise in improving the accuracy and efficiency of breast cancer diagnosis models. Furthermore, several studies have demonstrated the effectiveness of using evolutionary algorithms in conjunction with other data mining and machine learning techniques for breast cancer diagnosis (2). This literature review provides an overview of the use of evolutionary algorithms in breast cancer diagnosis and highlights their impact on improving accuracy and efficiency. The review also reveals that the proposed models using evolutionary algorithms have achieved high true accuracy rates, reaching up to 98.60%. Additionally, the review identifies a variety of applications of evolutionary algorithms in breast cancer diagnosis, including the evolution of neural network structures for pattern recognition and classification, feature selection, and the production of association rules. (3) applied data mining techniques, including evolutionary algorithms, to obtain the best classifiers for breast cancer diagnosis. (4) proposed an ensemble of a fuzzy system and an evolutionary algorithm, which demonstrated the ability to evaluate confidence levels and clarify the working mechanism of the system in deriving outputs for breast cancer diagnosis. (5) constructed a hybrid SVM-based strategy with feature selection for breast cancer diagnosis, highlighting the importance of evolutionary algorithms in identifying important risk factors. Furthermore, researchers have explored the potential of evolutionary algorithms to revolutionize personalized medicine by enabling more precise predictions and tailored treatment plans for patients.

Keywords: Evolutionary Algorithms, Genetic Algorithm, Particle Swarm Optimization, Ant Colony Optimization.


1. Introduction

Breast cancer is a significant global health issue, affecting millions of women worldwide. According to the statistics mentioned in the sources, breast cancer is one of the most prevalent types of cancer and a leading cause of mortality among women. Early detection and accurate diagnosis of breast cancer are crucial for successful treatment and improved patient outcomes (6). One of the evolving areas in breast cancer prediction and recurrence is the use of evolutionary algorithms together with machine learning and deep learning algorithms (7). Evolutionary algorithms have the potential to improve the accuracy and efficiency of breast cancer diagnosis by aiding in feature selection, classification, and predictive modeling.

Some studies have focused on the use of evolutionary algorithms for feature selection,
finding that these algorithms can effectively identify relevant features from large datasets and
improve the accuracy of breast cancer diagnosis models. Other studies have utilized
evolutionary algorithms for classification, showing that these algorithms can effectively
distinguish between benign and malignant breast lesions. Additionally, evolutionary
algorithms have been used for predictive modeling, enabling the development of personalized
treatment strategies based on individual patient characteristics. (4) proposed an ensemble of
fuzzy systems and evolutionary algorithms that not only provide accurate diagnoses but also
evaluate the confidence level and working mechanism of the system (5). The impact of
evolutionary algorithms in breast cancer diagnosis extends beyond accuracy and efficiency.
These algorithms have the potential to revolutionize breast cancer diagnosis by enabling earlier
detection of cancer recurrence and facilitating more personalized treatment strategies.

Evolutionary algorithms are population-based metaheuristics inspired by natural evolution in


biological populations (8). Historically, the design of EAs was motivated by observations about
natural evolution, such as Darwin's theory of natural selection. Evolutionary algorithms have
been widely used in various fields, including optimization and machine learning. They are
known for their ability to explore a wide search space and find optimal solutions, making them
suitable for solving complex problems.

Generic Evolutionary Algorithm Representation: A generic evolutionary algorithm representation is a framework that outlines the key components and processes involved in an evolutionary algorithm. These components typically include a representation of the problem solution (the chromosome), the initialization of a population, a fitness function to evaluate the quality of solutions, selection mechanisms to choose individuals for reproduction, reproduction operators such as crossover and mutation to create offspring, stopping conditions that determine when the algorithm should terminate, and a comparison between evolutionary computation and classical optimization methods; a minimal sketch of this loop is given below. In the context of breast cancer diagnosis, evolutionary algorithms can be applied to optimize the process of determining the presence of breast cancer and its severity.
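For illustration only (this is our own minimal sketch, not a model from any surveyed study, and the fitness function, rates, and population size are placeholder choices), the generic loop described above can be written in a few lines of Python:

```python
import random

def evolve(fitness, n_bits=20, pop_size=20, generations=50,
           cx_rate=0.8, mut_rate=0.02):
    # Initialization: a population of random bit-string chromosomes
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):                      # stopping condition
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size:               # reproduction
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)         # one-point crossover
            child = p1[:cut] + p2[cut:] if random.random() < cx_rate else p1[:]
            child = [b ^ (random.random() < mut_rate) for b in child]  # mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# Toy usage: maximize the number of ones in the chromosome
print(evolve(fitness=sum))
```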

The Impact of Evolutionary Algorithms in Breast Cancer Diagnosis: Evolutionary algorithms have shown great potential in improving the accuracy and efficiency of breast cancer diagnosis. Several studies have demonstrated the effectiveness of using evolutionary algorithms for various aspects of breast cancer diagnosis, such as feature selection, classification, and predictive modeling. For instance, research by (9) introduced modified differential evolution algorithms that improve the searching ability for breast cancer diagnosis. These algorithms merged different mechanisms, such as quadratic approximation, Gaussian disturbance, immune theory, and differential evolution, to enhance the accuracy of the diagnostic process. In another study, (5) applied a genetic algorithm to identify optimal feature subsets for breast cancer diagnosis. They found that the genetic algorithm was able to select a small subset of features that achieved high classification accuracy, reducing the computational complexity and improving the interpretability of the diagnosis model. Furthermore, the use of evolutionary algorithms in breast cancer diagnosis has shown promise in predictive modeling. For example, research by (10) utilized an evolutionary algorithm to develop a predictive model for breast cancer recurrence.

The model was able to accurately predict the likelihood of recurrence, providing valuable information for personalized treatment planning. Overall, the impact of evolutionary algorithms in breast cancer diagnosis has been significant. These algorithms have improved the accuracy and efficiency of various aspects of breast cancer diagnosis, such as feature selection, classification, and predictive modeling, and have shown potential in optimizing the diagnostic process, reducing computational complexity, and enhancing interpretability.


2. Role of Evolutionary Algorithms in Medical Diagnosis

Evolutionary algorithms have shown great potential in the field of medical diagnosis, offering a comprehensive and efficient approach to addressing the complexity of diagnosing various diseases. The use of evolutionary algorithms in medical diagnosis has been proven successful in previous studies, particularly in the feature selection stage of pattern recognition (11). However, there is still a limited application of evolutionary algorithms in the wider scope of medical diagnosis (12). To explore their broader use, researchers have conducted promising studies that focus on utilizing algorithms such as CGP and EGWO-GA for the accurate and improved diagnosis of conditions such as malignant mammograms and CXR images (13). These studies have demonstrated that evolutionary algorithms can enhance the accuracy and efficiency of medical diagnosis, paving the way for further advancements in this field. In conclusion, evolutionary algorithms offer a promising approach to improving the accuracy and efficiency of medical diagnosis, particularly in the feature selection stage of pattern recognition (12).

2.1. Impact of Evolutionary Algorithms in Breast Cancer Diagnosis

The use of evolutionary algorithms in breast cancer diagnosis has had a significant impact on improving the accuracy and efficiency of the diagnostic process (13). These algorithms have been proposed as a means to evolve neural network structures for pattern recognition and classification of breast tumors as either benign or malignant cancer cells. These systems not only provide accurate diagnoses but also evaluate the confidence level of the system's responses and clarify how the outputs are derived (5). Researchers have also proposed the use of ensemble systems that combine fuzzy systems and evolutionary algorithms for breast cancer diagnosis (14). Furthermore, the use of evolutionary algorithms in breast cancer diagnosis has shown potential in revolutionizing personalized medicine. Additionally, it has the potential to greatly reduce computational complexity, which makes it a preferred choice for researchers in data mining who may not have domain-specific expertise in medicine.

3. Literature Review: Evolutionary Algorithms for Breast Cancer Diagnosis

Several studies have explored the use of evolutionary algorithms in breast cancer diagnosis. (4) proposed an ensemble of a fuzzy system and an evolutionary algorithm for breast cancer diagnosis,


which can evaluate the confidence level to which the system responds and clarifies the working
mechanism of how it derives its outputs (5). This approach not only improves the accuracy of
breast cancer diagnosis but also provides insight into how the system reaches its conclusions.
(17) constructed a hybrid SVM-based strategy with feature selection to find the important risk
factors for breast cancer. This hybrid strategy not only improves the efficiency of breast cancer
diagnosis but also identifies the key risk factors for the disease. Other methods, such as fuzzy
systems and isotonic separation, have also been used in breast cancer diagnosis. However,
evolutionary algorithms stand out for their ability to perform search and determine the most
suitable fuzzy systems (15).

3.1 Comparison of Evolutionary Algorithm Models in Breast Cancer Detection

There are several studies that have compared different evolutionary algorithm models for breast cancer detection. Sethi compared evolutionary algorithms and machine learning in predicting breast cancer using two different datasets, and found that evolutionary algorithms outperformed machine learning techniques in terms of accuracy, with an overall true accuracy of 98.60%. (16) presented a hybrid machine learning framework for the diagnosis of breast cancer and diabetes, using feature selection and classification techniques, and demonstrated the effectiveness of their approach by achieving high accuracy rates in both breast cancer and diabetes diagnosis. Furthermore, (4) proposed an ensemble of a fuzzy system and an evolutionary algorithm for breast cancer diagnosis; this ensemble approach not only evaluates the confidence level of the system's responses but also provides clarity on how the outputs are derived. (17) also used a hybrid SVM-based strategy with feature selection to identify important risk factors for breast cancer. By comparing and analyzing different evolutionary algorithm models, researchers have been able to identify their strengths and limitations in breast cancer detection (5). These studies highlight the potential of evolutionary algorithms in improving the accuracy and efficiency of breast cancer diagnosis. These algorithms have been successful in achieving high classification accuracy and reducing computational complexity, making them valuable tools for researchers without domain-specific expertise in medicine.

These studies have examined the performance of various evolutionary algorithms such as genetic algorithms, particle swarm optimization, and ant colony optimization in terms of accuracy, sensitivity, specificity, and computational efficiency. The results show that different evolutionary algorithm models can perform differently in breast cancer detection. For example, a study comparing genetic algorithms and particle swarm optimization found that both algorithms were effective in feature selection for breast cancer diagnosis, but genetic algorithms outperformed particle swarm optimization in terms of accuracy and computational efficiency. Another study comparing genetic algorithms and ant colony optimization revealed that both algorithms yielded similar results in terms of accuracy and computational efficiency; however, ant colony optimization had slightly higher sensitivity and specificity values than genetic algorithms.

Evolutionary Algorithm Models for Breast Cancer Diagnosis: In the literature, several evolutionary algorithm models have been proposed for breast cancer diagnosis. These models utilize different evolutionary algorithms such as genetic algorithms, particle swarm optimization, and ant colony optimization to improve the accuracy and efficiency of breast cancer diagnosis. Some of the key evolutionary algorithm models for breast cancer diagnosis include:

1. (5) Genetic algorithms for feature selection and classification: Genetic algorithms have
been widely used for feature selection in breast cancer diagnosis. These algorithms aim to
identify the most relevant and informative features from a large set of variables, such as
gene expressions or clinical biomarkers. By selecting the most relevant features, genetic
algorithms can improve the accuracy and efficiency of breast cancer diagnosis by reducing
the dimensionality of the data and removing irrelevant or redundant features. These genetic
algorithm models have been shown to achieve high accuracy and computational efficiency
in breast cancer diagnosis.

2. (18) Particle swarm optimization for feature selection and classification: Particle swarm
optimization is another popular evolutionary algorithm used for feature selection in breast
cancer diagnosis. This algorithm works by simulating the behavior of a swarm of particles,
where each particle represents a potential solution. The particles move through the solution
space, adjusting their positions based on their own previous best position and the best
position found so far by any particle in the swarm. This approach allows particle swarm
optimization to search for the optimal set of features that maximize classification accuracy
in breast cancer diagnosis. The use of particle swarm optimization has been shown to
effectively improve the accuracy and efficiency of breast cancer diagnosis by selecting the
most relevant features.

3. (19) Ant colony optimization for feature selection and classification: Ant colony optimization is another evolutionary algorithm that has been applied to breast cancer diagnosis. This algorithm is inspired by the foraging behavior of ants, where individual ants deposit pheromone trails to mark the paths to food sources. These pheromone trails attract other ants, leading them to follow the path with the highest concentration of pheromones. Algorithms of this kind mimic that behavior by using a trail of pheromones to guide the search for the optimal set of features for breast cancer diagnosis. By using ant colony optimization, researchers have been able to effectively select the most relevant features for breast cancer diagnosis, leading to improved accuracy and efficiency in classification.

4. In a study conducted by (20), the evolutionary algorithm NSGA III was applied to initialize a deep neural network and optimize its hyperparameters for the prognosis of breast cancer. The results obtained using evolutionary algorithm models in breast cancer detection have shown great potential in improving the accuracy and efficiency of diagnosis.

5. (21) developed a three-dimensional direct numerical model of a female breast with a tumor, utilizing stochastic methods for solving the inverse problem. They demonstrated the effectiveness of using evolutionary algorithms in accurately simulating breast tumors.

In summary, the use of evolutionary algorithms such as genetic algorithms, particle swarm optimization, and ant colony optimization in breast cancer diagnosis has shown promise in improving accuracy and efficiency by selecting the most relevant features (18, 19); a feature-selection sketch in this spirit follows this list.
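As a concrete illustration of evolutionary feature selection, the sketch below runs a binary particle swarm optimization over feature masks on scikit-learn's built-in WDBC breast cancer dataset. This is our own minimal example (not the implementation of any surveyed study), and it assumes scikit-learn and NumPy are installed:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)   # 569 samples, 30 features
rng = np.random.default_rng(0)
n_particles, n_feats, iters = 10, X.shape[1], 15

def fitness(mask):
    """Cross-validated accuracy of a simple classifier on the masked features."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=5000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pos = rng.integers(0, 2, (n_particles, n_feats)).astype(bool)   # feature masks
vel = np.zeros((n_particles, n_feats))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_feats))
    # velocity update pulls each particle toward its own and the global best
    vel = 0.7 * vel + 1.5 * r1 * (pbest.astype(float) - pos) \
                    + 1.5 * r2 * (gbest.astype(float) - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))          # sigmoid -> bit probabilities
    pos = rng.random((n_particles, n_feats)) < prob
    fits = np.array([fitness(p) for p in pos])
    improved = fits > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
    gbest = pbest[pbest_fit.argmax()]

print(f"{gbest.sum()} features selected, CV accuracy {pbest_fit.max():.3f}")
```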


4. Results and discussion

4.1 Data sets

The Digital Database for Screening Mammography (DDSM) dataset employed in this
research can be accessed at [18]. Dataset-1 denotes this dataset throughout this text. It
provides a large database of mammograms, both normal and abnormal.
An additional dataset is considered to emphasize the effectiveness of the proposed
methodology. This dataset is publicly available on Kaggle [19] and is denoted by Dataset-
2 throughout this text. The dataset available at the provided link is a collection of
mammograms and breast cancer images. It is a valuable resource for training and
evaluating a proposed optimized convolutional neural network (CNN) for classification
purposes.

4.2 Comparison of various machine learning models with evolutionary algorithms for breast
cancer diagnosis

The performance metrics of the models discussed for breast cancer diagnosis are tabulated below. (5) implements the UKDB model, an ensemble of machine learning algorithms with a GA evolutionary algorithm. (17) implements the SSD model, a combination of a deep learning algorithm with particle swarm optimization. (18) identifies breast cancer using the CoFA model, which applies ant colony optimization for feature selection and a CNN for breast cancer prediction. (19) applies the evolutionary algorithm NSGA III (the NSGA model) to initialize a deep neural network. (20), the Ng model, uses a stochastic process with a deep learning algorithm for the prediction of breast cancer.

Table 1: Performance Metrics of Models using Dataset-1

Dataset-1        Accuracy    Precision    F-Score
UKDB model       0.892       0.902        0.946
SSD model        0.967       0.922        0.932
CoFA model       0.986       0.927        0.936
NSGA model       0.924       0.929        0.945
Ng model         0.943       0.933        0.952

Table 2: Performance Metrics of Models using Dataset-2


Dataset-2        Accuracy    Precision    F-Score
UKDB model       0.908       0.917        0.940
SSD model        0.965       0.910        0.941
CoFA model       0.983       0.923        0.937
NSGA model       0.910       0.925        0.950
Ng model         0.927       0.938        0.957
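For reference, the accuracy, precision, and F-score reported in Tables 1 and 2 can be computed from a model's confusion-matrix counts; the counts below are hypothetical, purely to show the formulas:

```python
# tp, fp, fn, tn are hypothetical confusion-matrix entries, not taken from
# any of the surveyed models.
def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, precision, f_score

print(metrics(tp=90, fp=8, fn=4, tn=98))
```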


Figure 1: Comparison of Accuracy of EvoAlgo Models in Breast Cancer Detection with Dataset-1.

Figure 1 displays a comparison of the accuracy of different evolutionary algorithm models in breast cancer detection. The x-axis represents the different models used, while the y-axis represents the accuracy achieved by each model. Each bar represents the accuracy obtained by a specific evolutionary algorithm model, and the height of each bar allows a visual comparison of the different models. Figure 1 clearly shows the superiority of the CoFA model: the combination of transfer learning and a deep learning model with ant colony optimization for feature selection achieved the highest accuracy among all the models. Additionally, the SSD method, based on the differential evolution algorithm and local search, also performed well in terms of accuracy.



Figure 2: Comparison of Accuracy of EvoAlgo Models in Breast Cancer Detection with Dataset-2.

Figure 2 displays a comparison of the performance of the different evolutionary algorithm models in breast cancer detection on Dataset-2. The height of each bar indicates the corresponding measure, allowing a visual comparison of the different models. Figure 2 clearly shows that the evolutionary algorithm model proposed by (18) exhibits high efficiency in breast cancer detection: (18) achieved a high classification accuracy of 98.30% on benign cases and 96.70% on malignant cases using SVM with a PSO optimization algorithm. Based on the information provided, it can also be concluded that the model of (10), combining transfer learning and deep learning, achieved the highest accuracy among the compared models. Overall, the graphs provide a clear comparison of both the accuracy and efficiency of the different evolutionary algorithm models and show that they have been used successfully in breast cancer detection with varying levels of accuracy and efficiency.


5. Conclusion: Future Prospects of Evolutionary Algorithms in Healthcare

The ability of evolutionary algorithms to handle high-dimensional data and identify complex
patterns makes them a valuable tool for improving the accuracy and efficiency of breast cancer
diagnosis. In addition, with advancements in computational power and algorithms, there is a
potential for the development of more sophisticated evolutionary models specifically tailored
for breast cancer diagnosis. These models could incorporate deep learning techniques,
combining the power of evolutionary algorithms with neural networks to further enhance the
accuracy and efficiency of breast cancer diagnosis.

In conclusion, the use of evolutionary algorithms, such as particle swarm optimization and ant colony optimization, in breast cancer diagnosis has shown promise in improving the accuracy and efficiency of the process by selecting the most relevant features. These algorithms have the potential to aid in early detection and improved treatment outcomes for breast cancer patients (18).

6. References

1. Dong, R. (2020, January 1). Explore the Characteristics of Age, BMI and Blood
Composition of Breast Cancer Patients Based on Multivariate Statistical Analysis.
https://scite.ai/reports/10.11648/j.acm.20200904.15

2. Başçiftçi, F., & Ünal, H T. (2019, December 31). An Empirical Comparison of Machine
Learning Algorithms for Predicting Breast Cancer.

3. Wahde, M., & Szallasi, Z. (2005, April 27). Improving the prediction of the clinical
outcome of breast cancer using evolutionary algorithms.

4. Andres F, Ismaila N, Henry NL, Somerfield MR, Bast RC, Barlow W, Collyar DE,
Hammond ME, Kuderer NM, Liu MC, Van Poznak C, Wolff AC, Stearns V. Use of
Biomarkers to Guide Decisions on Adjuvant Systemic Therapy for Women With Early-
Stage Invasive Breast Cancer: ASCO Clinical Practice Guideline Update-Integration of
Results From TAILORx. J Clin Oncol. 2019 Aug 1;37(22):1956-1964. doi:
10.1200/JCO.19.00945. Epub 2019 May 31. PMID: 31150316.


5. Wang, L., Liu, Y., Mammadov, M., Sun, M., & Qi, S. (2019, May 13). Discriminative
Structure Learning of Bayesian Network Classifiers from Training Dataset and Testing
Instance. https://scite.ai/reports/10.3390/e21050489

6. Gündoğdu S. Improving breast cancer prediction using a pattern recognition network


with optimal feature subsets. Croat Med J. 2021 Oct 31;62(5):480-487. doi:
10.3325/cmj.2021.62.480. PMID: 34730888; PMCID: PMC8596469.

7. Hwa HL, Kuo WH, Chang LY, Wang MY, Tung TH, Chang KJ, et al. Prediction of breast cancer and lymph node metastatic status with tumour markers using logistic regression models. J Eval Clin Pract. 2008;14:275-80. doi: 10.1111/j.1365-2753.2007.00849.x; Petrowski, A., & Hamida, S. B. (2011, August 29). Evolutionary Algorithms. https://arxiv.org/pdf/1805.11014v1.pdf.

8. Zhou S., Qian S., Chang W., Xiao Y., Cheng Y. (2018). A novel bearing multi-fault diagnosis approach based on weighted permutation entropy and an improved SVM ensemble classifier. Sensors 18:1934. doi: 10.3390/s18061934.

9. Yu, W M., & Li, Y. (2022, December 15). Breast cancer classification from
histopathological images using transformers.

10. Ajlouni, N., Özyavaş, A., Takaoğlu, M., Takaoğlu, F., & Ajlouni, F. (2023, June 15).
Medical Image Diagnosis Based on Adaptive Hybrid Quantum CNN.
https://scite.ai/reports/10.21203/rs.3.rs-3037666/v1.

11. Gopatoti, A., & Vijayalakshmi, P. (2022, August 1). CXGNet: A tri-phase chest X-ray
image classification for COVID-19 diagnosis using deep CNN with enhanced grey-
wolf optimizer. https://scite.ai/reports/10.1016/j.bspc.2022.103860.

12. Miller, J F., Smith, S L., & Zhang, Y. (2010, July 7). Detection of microcalcifications
in mammograms using multi-chromosome Cartesian genetic programming.
https://scite.ai/reports/10.1145/1830761.1830827.

13. Tewolde, G., & Hanna, D M. (2007, May 1). Particle Swarm Optimization for
classification of breast cancer data using single and multisurface methods of data
separation. https://scite.ai/reports/10.1109/eit.2007.4374523.

14. You, H., & Rumbe, G. (2010, January 1). Comparative Study of Classification
Techniques on Breast Cancer FNA Biopsy Data.


15. Siddiqui, S Y., Naseer, I., Khan, M A., Mushtaq, M F., Naqvi, R A., Hussain, D., &
Haider, A. (2021, January 1). Intelligent Breast Cancer Prediction Empowered with
Fusion and Deep Learning. https://doi.org/10.32604/cmc.2021.013952.

16. Dou, Y., & Meng, W. (2021, July 5). An Optimization Algorithm for Computer-Aided
Diagnosis of Breast Cancer Based on Support Vector Machine.
https://scite.ai/reports/10.3389/fbioe.2021.698390

17. Pramanik, P., Mukhopadhyay, S., Mirjalili, S., & Sarkar, R. (2022, November 5). Deep
feature selection using local search embedded social ski-driver optimization algorithm
for breast cancer detection in mammograms. https://scite.ai/reports/10.1007/s00521-
022-07895-x.

18. Peng, H., Zhu, W., Deng, C., Yu, K., & Wu, Z. (2020, September 28). Composite firefly
algorithm for breast cancer recognition. https://scite.ai/reports/10.1002/cpe.6032

19. Abdikenov B., Iklassov Z., Sharipov A., Hussain S., Jamwal P. K. (2019). Analytics of heterogeneous breast cancer data using neuroevolution. IEEE Access 7:18050-18060. doi: 10.1109/access.2019.2897078.

20. Bahador, M., Keshtkar, M M., & Zariee, A. (2018, July 20). Numerical and
experimental investigation on the breast cancer tumour parameters by inverse heat
transfer method using genetic algorithm and image processing.
https://scite.ai/reports/10.1007/s12046-018-0900-4.


Chapter-7

Cryptography and Image Processing

Deep Singh(1), Rajwinder Singh(2) and Amit Paul(3)*

(1) Department of Mathematics and Statistics, Central University of Punjab, Bathinda, India
(2),(3) Department of Mathematics, Guru Nanak Dev University, Amritsar, Punjab, India
Email: deepsinghspn@gmail.com, rajwindermaths221@gmail.com, amitpaulcuj@gmail.com
* Corresponding author

1. Introduction

Cryptography is the art of hiding or encoding data so that only the right person can decipher it. It is a technology that has been used to encode messages for thousands of years; today, bank cards, computer passwords, and e-commerce all rely on it. Modern encryption methods include techniques that allow the encoding and decoding of data using encryption keys. Information communicated on the open channel of the internet can be of any form, such as text, image, audio, or video. Images are frequently used for sharing information these days, so the protection of image data from various attacks is crucial. Encryption methods such as the Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) are considered virtually unbreakable; however, these methods are suitable only for text encryption. For the protection of image data, various schemes that use chaotic maps are emerging. Cryptology, another name for this cyber security technique, blends mathematics, engineering, and computer science to produce intricate codes that conceal a message's actual meaning. Although its roots can be found in ancient Egyptian hieroglyphics, cryptography [1] is still essential for protecting data while it is in transit and keeping shady parties from reading it. To safeguard credit card transactions, email, web browsing, data privacy, and digital signatures, it employs mathematical ideas and algorithms to convert messages into hard-to-decipher codes.

1.1 Importance of cryptography

Cryptography continues to be essential for protecting data, preserving user confidentiality, and
preventing attackers from getting personal data. The common applications of cryptography are
given by:


(i) Privacy and Confidentiality

Every day, people and firms use cryptography as a method to safeguard the confidentiality and safety of their data. Secrecy is ensured by cryptography, which encrypts the data using a key. A popular illustration of this is the messaging app WhatsApp, which encrypts user conversations to prevent hacking or interception. By using public and private shared keys, asymmetric encryption, and encrypted tunnels, virtual private networks (VPNs) employ cryptography to further secure web browsing.

(ii) Authentication

Cryptography can be used to verify the identity of the sender as well as the source of the data.
Authentication (verification) is the process of confirming the originality of a user or piece of
information. User authentication is the process of verifying a user's identity when they log into
a computer system.

(iii) Integrity

Just as cryptography is able to verify whether a message is legitimate, it can also confirm the
integrity of data that is transmitted and received. Thanks to cryptography, data cannot be altered
while being stored or transmitted between the person who sent it and the recipient. Digital
signatures, for instance, can identify fraud or manipulation in money transfers and distribution
of software.

(iv) Non-repudiation

Having generated or sent data, the sender cannot later deny doing so, because cryptography confirms their responsibility and accountability. Digital signatures, which ensure that the sender of a message, contract, or other document cannot subsequently claim it was bogus, are one efficient way to achieve this. In the case of email, non-repudiation through email tracking also makes sure that neither the sender nor the recipient can dispute the delivery or receipt of the e-mail.

1.2 Types of cryptography


Cryptography is classified into two categories: (i) public key cryptography and (ii) secret key
cryptography.

Secret key cryptography

Secret key cryptography uses only one key for both the encoding and decoding of the data [2], and is therefore commonly known as symmetric cryptography. Using the key, the sender encodes the original message and sends it to the receiver, who decrypts it and retrieves the initial original message employing the same key. Stream ciphers and block ciphers are examples of secret key cryptography. Stream ciphers operate on just one byte or bit at a time and continuously change the key using feedback methods. By keeping track of where it is in the bit key stream, a self-synchronizing stream cipher makes sure the decryption and encryption processes stay aligned. In a synchronous stream cipher, the keystream is generated at both the sender's end and the receiver's end, with no connection to the message stream. Block ciphers encrypt one fixed-size block of data at a time: when the same key is used, encrypting the same plaintext block always yields the same ciphertext. The Feistel cipher is a good illustration of this, as it makes extensive use of permutation, key expansion, and substitution to generate diffusion and confusion within the cipher. Because the encoding and decoding stages are comparable, if not the same, reversing the key minimizes the amount of code and circuitry needed to implement the encryption method in a piece of hardware or software.
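As a minimal sketch of secret key cryptography in practice (assuming the Python cryptography package is installed; Fernet is its AES-based symmetric recipe):

```python
from cryptography.fernet import Fernet

# Secret key cryptography: one shared key both encrypts and decrypts.
key = Fernet.generate_key()                # the shared secret
cipher = Fernet(key)

token = cipher.encrypt(b"meet at noon")    # sender encodes the message
plain = cipher.decrypt(token)              # receiver recovers it with the same key
print(plain)                               # b'meet at noon'
```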

Public key cryptography

Public key cryptography, also known as asymmetric cryptography [2], generates codes that are incredibly challenging to decipher by using mathematical operations. It eliminates the need for a shared secret key and permits secure communication over an insecure communications channel. Proxy re-encryption, for instance, allows a proxy entity to re-encrypt data from one public key to another without needing access to the private keys or plaintext. The multiplication of two large prime numbers to produce a huge result that is difficult to factor is the basis of a common type of public key cryptography, known as factorization vs. multiplication. Exponentiation versus logarithms, as in 256-bit encryption, is another type of public key cryptography; it provides so much security that even a computer searching trillions of combinations per second cannot break it.


Generic forms of public key cryptography use two keys that are mathematically related but do not allow one to be determined from the other. To put it simply, a sender can use their private key to encipher an original message, and the person who receives it can use the sender's public key to decipher the ciphertext. The public key cryptosystem known as the RSA (Rivest, Shamir, Adleman) cryptosystem is an example. RSA was the first and remains the most common PKC implementation. The algorithm, which is utilized in key exchanges, digital signatures, and data encryption, bears the names of Ronald Rivest, Adi Shamir, and Leonard Adleman, the three mathematicians at MIT who devised the method. It multiplies two chosen prime numbers to get a big number, and RSA is particularly secure because an attacker cannot recover the prime factors.
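A toy RSA sketch with tiny primes, for illustration only (real deployments use moduli of 2048 bits or more together with randomized padding):

```python
# Toy RSA with tiny primes -- illustrative only; requires Python 3.8+ for
# pow(e, -1, phi), the modular inverse.
p, q = 61, 53                 # two chosen primes
n = p * q                     # public modulus: 3233
phi = (p - 1) * (q - 1)       # Euler's totient: 3120
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: inverse of e modulo phi

m = 65                        # a message encoded as an integer < n
c = pow(m, e, n)              # encryption with the public key (e, n)
print(pow(c, d, n))           # decryption with the private key (d, n) -> 65
```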

2. Digital Image Encryption

Today's web revolves around multimedia data, the majority of which is made up of images. However, as multimedia applications continue to expand, security becomes more and more crucial for image storage and communication. Encryption is one way to guarantee security in these situations. Image encryption techniques [3, 4] aim to transform the original image into a more difficult-to-understand version while maintaining user privacy; in other words, it is crucial that no one can access the content without the decryption key. Applications of image encryption can be found in multimedia systems, telemedicine, medical imaging, internet communication, military communication, and more. In multiparty information management, the ability to retrieve information from encrypted databases is a crucial technological capability for protecting privacy.

2.1 Digital image processing

Digital image processing [2] is a method of using an electronic device to analyse digital images. Given that vision is the most extensively developed sense in humans, it should come as no surprise that images are the most significant input to the human senses. Imaging devices, on the other hand, cover nearly the whole of the electromagnetic spectrum, from radio frequencies to gamma rays, in contrast to humans, who are restricted to the visible band. They can work with pictures produced by sources that people are not used to associating with images, including computer-generated images, electron microscopy, and ultrasonography. Consequently, a broad range of purposes can be found in the field of digital image processing.
The term "digital image processing" refers to operations whose inputs and outputs are images


as well as operations that extract characteristics from images, including the identification of specific objects. Take the field of automated text analysis as a straightforward example to help understand these ideas. The steps of acquiring a picture of the text-containing region, preprocessing it, extracting (segmenting) the individual characters, describing the characters in a format appropriate for computer processing, and identifying those individual characters all fall within the scope of digital image processing. Digital image analysis methods are primarily driven by two key application areas: improving pictorial detail for human interpretation, and processing image data for storage, transmission, and representation for autonomous machines.

2.2 What is a digital image?


A 2-D function, 𝑓(𝑥, 𝑦), where x and y are plane coordinates, can be used to define an image.
The intensity of the image at that position (𝑥, 𝑦) is denoted by the value 𝑓(𝑥, 𝑦). We refer to
an image as digital when every value of function f is discrete and finite.

2.3 Pixel or Image element


There are only a few constituents that make up a digital image [1], and each one has a distinct
value and position. Pixel and image elements are the terms used to describe these components.
The word most frequently employed to describe the components of a digital image is pixel.
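In code, a grey-scale digital image is just such a finite grid of pixel values. A small sketch (assuming the Pillow and NumPy packages are installed; the file name is a placeholder):

```python
import numpy as np
from PIL import Image

# The image becomes a 2-D array whose entry img[y, x] is the grey-level
# intensity f(x, y). "picture.png" is a placeholder path.
img = np.array(Image.open("picture.png").convert("L"))
print(img.shape)     # (rows, columns)
print(img[0, 0])     # intensity of the top-left pixel, in 0..255
```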

2.4 The origin of digital image processing


When images were initially transmitted from London to New York via submarine cable, the publishing sector was one of the first to use digital images. With the advent of the Bartlane cable picture transmission system in the early 1920s, the time it took to transport an image across the Atlantic decreased from a week to less than three hours. Pictures were coded by specialized machinery for cable transfer and were reassembled for printing upon receipt. The choice of printing equipment and the distribution of intensity levels posed some of the first challenges in enhancing the visual appeal of these early digital images. In actuality, the development of digital computers and the associated technologies for data transfer, storage, and display has been essential to the advancement of digital image processing, since digital images require such large amounts of computing power and storage.

In the late 1960s and early 1970s, methods for processing digital images were developed for use in astrophysics, remote Earth asset assessment, and healthcare imaging. The development of computerized tomography (CT) in the early 1970s is one of the most important of these. In CT, a ray travels through the object and is picked up by the matching ring detection devices at the other end, and this process keeps occurring while the source of energy rotates; tomography then uses algorithms to create a picture showing a cut-through of the object from the data collected around it. Computer algorithms are also used to create images for use in business, to make X-rays and biological and medical research images simpler to comprehend, and to code intensity levels into colour. Image restoration and enhancement techniques are applied to damaged images of irretrievable items or to the outcomes of experiments that are too expensive to reproduce. Distorted photos that were the only known copies of items destroyed or misplaced after being captured on camera have been effectively restored thanks to image processing techniques.

2.5 Tasks of digital image processing

(i) Image enhancement


Improving images, or increasing their quality, constitutes one of the most typical image-processing jobs, and reducing the amount of noise in an image is the most popular such application. The difference in brightness between an image's lightest and darkest areas is called contrast; enhancing contrast allows for a general brightness increase in an image, improving visibility. An image's brightness is its total amount of light or darkness, and increasing it can make the image appear lighter and more readable. In addition to offering automatic contrast and brightness adjustments, the majority of image editing programs also let you make adjustments manually.

(ii) Image restoration


There are several reasons why image quality can decline, especially for older pictures that were shot before cloud storage became popular. For instance, scanned photos from printed copies captured using antiquated instant cameras frequently develop cracks. The restoration of images is a particularly interesting field because it holds the potential to repair historical documents that have been damaged; entire pages that are missing from torn documents may be reconstructed using strong deep-learning-based image restoration techniques. Restoration involves the process of filling in missing pixels in an image, known as image inpainting. For example, this can be accomplished by using a texture synthesis technique, which creates fresh textures to cover the missing pixels. Models based on deep learning, however, are the preferred choice since they can identify patterns.

(iii) Segmentation of images

Segmentation of images is the method of breaking an image up into distinct areas or parts, and it is often used as a first step before object identification; each area of the picture then represents a different object. While thresholding is one of the most popular approaches, numerous other algorithms can be used to segment images. An image can be made binary, that is, with every pixel either black or white, by applying a technique called binary thresholding: a threshold value is selected, pixels with brightness levels below the threshold turn black, and those with levels above it turn white. This in turn causes the objects in the image to become divided and appear as distinct black-and-white areas.
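A minimal binary-thresholding sketch with NumPy (the random array stands in for a real grey-level image):

```python
import numpy as np

# Pixels below the threshold become 0 (black), the rest become 255 (white).
img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)  # stand-in image
threshold = 128
binary = np.where(img < threshold, 0, 255).astype(np.uint8)
print(binary)
```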

(iv) Object identification

Identification of objects is the process of identifying objects in an image, and it is widely used in surveillance and safety applications. While there are numerous techniques available for identifying objects, the most popular approach is Convolutional Neural Networks (CNNs), a kind of deep learning model developed specifically to process images. Instead of processing one pixel at a time, the convolution operation, which is the foundation of CNNs, enables the computer to see parts of an image simultaneously. When CNNs are trained to identify objects, they generate a bounding box that shows the object's class label and location within the image.


(v) Compression of image


Compression of image is the process of lowering the file size of a photograph while attempting
to maintain its quality. This is done to preserve storage space or to lower the bandwidth needed
to send the image, particularly when applying image processing algorithms on mobile and edge
devices. Lossy compression techniques are used in conventional methods to reduce file size by
marginally sacrificing image quality. For example, the Discrete Cosine Transform is used by
the JPEG file format to compress images.
(vi) Manipulation of image

Manipulation of an image is the process of changing it to give it a different look. This could be wanted for several reasons, such as adding an object that is not in the image or removing something that is not wanted there; this is how most graphic artists create posters, movies, and other media. One example of image manipulation is Neural Style Transfer, a method that modifies an image's appearance by using deep learning models.

(vii) Image generation

A crucial task in image processing is the generation of fresh images, particularly for deep learning algorithms that need a lot of labelled data to learn. A further novel neural network architecture, the Generative Adversarial Network (GAN), is frequently employed in image generation. A GAN consists of two independent models: the generator, which generates artificial images, and the discriminator, which attempts to distinguish artificial images from original ones. The generator tries to produce artificial images that look realistic in the hope of fooling the discriminator, while the discriminator gains expertise in differentiating between real and fake images. After a number of iterations, this competitive game enables the generator to generate realistic images that can be used to train further deep learning models.

(viii) Translation from image to image

One subclass of vision and graphics problems is image-to-image translation, where the objective is to learn the mapping between a source image and the final image using a training set of matched image pairs. To create an output that is a realistic drawing of the object a sketch represents, for example, one can use a free-hand sketch as the input. For general-purpose image-to-image translation, pix2pix is a popular model in this field that uses a conditional GAN. Such a network can address various image processing problems, including semantic segmentation, sketch-to-image translation, and image colorization.

2.6 Image Security Parameters

(i) Large keyspace


Every combination that could be used by an attacker as a key to decode the hidden image is
gathered in the key space. The larger the key set, the strongest the encoding technique.

(ii) Key sensitivity


The encryption algorithm should be sensitive to the key. This means that If we modify slightly
the encryption key and then use it to decrypt that data, then it produces a different result.

(iii) Uniform Histogram of the Image


The image's histogram tells us how many pixels there are in the image at each intensity level.
For an encrypted image to be safe from a known plain-text attack, its histogram must be
consistent.
(iv) Information entropy

The entropy of image data measures the randomness in the data. The encrypted data should be
random, so that an attacker cannot easily establish a relation to decrypt it. Therefore, an
encryption technique must exhibit randomness and a uniform distribution in its output. The
formula is:

H(m) = − Σ_{i=0}^{2^N − 1} P(m_i) log₂ P(m_i)

where P(m_i) is the probability of symbol m_i and N is the number of bits per symbol. If N = 8
and every level is equally likely, then P(m_i) = 1/2^8, and the entropy attains its ideal value
H(m) = 8.
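
A minimal sketch of this measure for an 8-bit grayscale image (NumPy assumed; the random array is a stand-in for a real cipher image):

```python
import numpy as np

def shannon_entropy(img: np.ndarray) -> float:
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()          # P(m_i) for each intensity level
    p = p[p > 0]                   # skip zero-probability levels
    return float(-(p * np.log2(p)).sum())

cipher = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(shannon_entropy(cipher))     # close to the ideal value of 8
```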

(v) Correlation analysis


In an ordinary image, neighbouring pixels are strongly related to each other. The correlation among the
pixels is measured in terms of the correlation coefficient. The correlation coefficient
should be close to its maximum in the plain image and as low as possible (ideally close to
zero) in the ciphered image. The correlation coefficient for two adjacent pixels in an image,
x_i and y_i, can be computed as follows. Assume we select N adjacent pixel pairs. The
following formulas can then be used to calculate the averages:

E(x) = (1/N) Σ_{i=0}^{N−1} x_i

E(y) = (1/N) Σ_{i=0}^{N−1} y_i

The variances can then be found using the equations:

V(x) = (1/N) Σ_{i=0}^{N−1} (x_i − E(x))²

V(y) = (1/N) Σ_{i=0}^{N−1} (y_i − E(y))²

Then, we find the covariance using the equation:

Cov(x, y) = (1/N) Σ_{i=0}^{N−1} (x_i − E(x)) (y_i − E(y))

Then, the coefficient of correlation is given by:

r(x, y) = Cov(x, y) / (√V(x) √V(y))

Based on these parameters, we assess the robustness of an encryption technique.
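
A minimal sketch of this test (NumPy assumed; the two arrays are stand-ins for a plain image and a cipher image):

```python
import numpy as np

def adjacent_correlation(img: np.ndarray, n: int = 5000) -> float:
    """Correlation coefficient r(x, y) of n random horizontally adjacent pixel pairs."""
    h, w = img.shape
    rows = np.random.randint(0, h, n)
    cols = np.random.randint(0, w - 1, n)
    x = img[rows, cols].astype(float)        # pixels x_i
    y = img[rows, cols + 1].astype(float)    # right-hand neighbours y_i
    cov = ((x - x.mean()) * (y - y.mean())).mean()
    return cov / (x.std() * y.std())

plain = np.tile(np.arange(256, dtype=np.uint8), (256, 1))         # smooth "plain" image
cipher = np.random.randint(0, 256, (256, 256), dtype=np.uint8)    # random "cipher" image
print(adjacent_correlation(plain), adjacent_correlation(cipher))  # near 1.0 vs near 0.0
```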

Concluding Remarks: Cryptography is one of the best methods for protecting information and
communication networks. Information security can be readily achieved through the use of
cryptographic techniques; through encryption, cryptography helps protect sensitive data.
Digital image processing is now a highly sought-after area of study and employment, offering
cost-effective solutions for a range of real-world applications. This chapter has included some
introductory notes on cryptography, digital image encryption, and digital image processing,
its history, and its tasks. We hope it helps spark the community's interest in digital image
processing and cryptography.


Chapter-8
Introduction to Search Engine Optimization
1Dr Dinesh Kumar Singh, 2Atma Prakash Singh
1Assistant Professor, Dept. of IT, DSMNR University Lucknow, UP, India.
2Assistant Professor, Dept. of CSE, BBDNITM, Lucknow, UP, India
Email: 1dineshsingh025@gmail.com & 2talk2aps@gmail.com

About SEO

SEO means Search Engine Optimization and is the process used to optimize a website's
technical configuration, content relevance and link popularity so its pages can become easily
findable, more relevant and popular towards user search queries, and as a consequence, search
engines rank them better.
Search engines recommend SEO efforts that benefit both the user search experience and the page's
ranking, by featuring content that fulfills user search needs. This includes the use of relevant
keywords in titles, meta descriptions, and headlines (H1), featuring descriptive URLs with
keywords rather than strings of numbers, and schema markup to specify the page's content
meaning, among other SEO best practices.
Search engine optimization (SEO) refers to techniques that help your website rank higher in
organic (or “natural”) search results, thus making your website more visible to people who are
looking for your product or service via search engines. SEO is part of the broader topic of
Search Engine Marketing (SEM), a term used to describe all marketing strategies for search.
SEM entails both organic and paid search. With paid search, you can pay to list your website
on a search engine so that your website shows up when someone types in a specific keyword
or phrase. Organic and paid listings both appear on the search engine, but they are displayed in
different locations on the page.

Google Replaces the Phone Book

Outbound marketing as we know it is dead. It used to be that a majority of a local company's
marketing budget went to yellow pages, newspaper, and radio advertisements. In order for you
to get any business, you had to put your offers and advertisements in your prospect's face.
Well, not anymore. The age of the Internet has made it so that consumers are now in control.
It has never been easier for consumers to tune out the plethora of advertisements and
commercials they hear each day. Since you can no longer get their attention with outbound
marketing, you have to switch your approach to inbound marketing and make sure you’re easy
to find when consumers are looking for you. When was the last time you used a phone book?
Google is the new phone book. If your website is not indexed and optimized to show for
keywords and phrases that are relevant to what you have to offer, all of that potential traffic is
going to your competitors.

How Search Engines Work

Search engines perform several activities in order to deliver search results:

1. Crawling - The process of fetching all the web pages linked to a website. This task is performed
by software called a crawler or a spider (or Googlebot, in the case of Google).

2. Indexing - The process of creating an index for all the fetched web pages and keeping them in a
giant database from where they can later be retrieved. Essentially, indexing is
identifying the words and expressions that best describe the page and assigning the page to
particular keywords.

3. Processing - When a search request comes, the search engine processes it, i.e., it compares
the search string in the search request with the indexed pages in the database.

4. Calculating Relevancy - It is likely that more than one page contains the search string, so
the search engine starts calculating the relevancy of each of the pages in its index to the search
string.

5. Retrieving Results - The last step in search engine activities is retrieving the best-matched
results. Basically, it is nothing more than simply displaying them in the browser.

How do they do it?

Every search engine has what are referred to as bots, or crawlers, that constantly scan the web,
indexing websites for content and following links on each webpage to other webpages. If your
website has not been indexed, it is impossible for your website to appear in the search results.
Unless you are running a shady online business or trying to cheat your way to the top of the
search engine results page (SERP), chances are your website has already been indexed.

What it Takes to Rank

It is not difficult to get your website to index and even rank on the search engines. However,
getting your website to rank for specific keywords can be tricky. There are essentially 3
elements that a search engine considers when determining where to list a website on the SERP:
rank, authority, and relevance.

Rank is the position that your website physically falls in on the SERP when a specific search
query is entered. If you are the first website in the organic section of the SERP (don't be
confused by the paid ads at the very top), then your rank is 1. If your website is in the second
position, your rank is 2, and so on. As discussed previously in How Search Engines Work, your
rank is an indicator of how relevant and authoritative your website is in the eyes of the search
engine, as it relates to the search query entered.

Content is King

We’ve all heard it - when it comes to SEO, content is king. Without rich content, you will find
it difficult to rank for specific keywords and drive traffic to your website. Additionally, if your
content does not provide value or engage users, you will be far less likely to drive leads and
customers. It is impossible to predict how people will search for content and exactly what
keywords they are going to use. The only way to combat this is to generate content and lots of
it. The more content and webpages you publish, the more chances you have at ranking on the
search engines. Lottery tickets are a good analogy here. The more lottery tickets you have, the
higher the odds are that you will win. Imagine that every webpage you create is a lottery ticket.

Here are a few examples:

Homepage: Use your homepage to cover your overall value proposition and high-level
messaging. If there was ever a place to optimize for more generic keywords, it is your
homepage.

Product/Service Pages: If you offer products and/or services, create a unique webpage for
each one of them.

Resource Center: Provide a webpage that offers links to other places on your
website that cover education, advice, and tips.

Blog: Blogging is an incredible way to stay current and fresh while making it easy to generate
tons of content. Blogging on a regular basis (once per week is ideal) can have a dramatic impact
on SEO because every blog post is a new webpage.


On-Page SEO

On-page SEO refers to optimizations that you can perform on your website to improve ranking.
The main concept to keep in mind about On-page SEO is that it consists of improvements that
website owners should be able to implement directly on their website to improve ranking. It
includes providing good content, good keyword selection, putting keywords in the correct places,
and giving an appropriate title to every page.

Website Content

As mentioned in the Content is King section, you want to write content that your audience will
find valuable and engaging. Aside from the topical nature of the content, the way you format
your webpages can have an impact on how the search engine bots digest your content. Every
webpage you create should have a thought-provoking headline to grab the reader’s attention,
and should also include the keyword or phrase that the webpage covers. Other body formatting,
such as bolding certain keywords or phrases, can help stress the importance of phrases you are
optimizing for.

URL Structure

The actual structure of your website URL can have an impact on the search engines' ability to
index and understand your website's content. Opting for a more organized URL structure will
have the greatest impact. Some website creation software auto-generates URLs from strings of
numbers; although this may be optimal for the software, it serves no other purpose. If you can
edit the URL to include the title of your webpage, you should do so. In fact, some website
creation software, like HubSpot, will automatically create URLs based on your webpage
content in order to eliminate this issue.

Title Tags & Meta Tags

Besides an actual text headline on your page, every webpage you create has a title tag. This is
the text snippet that appears in the upper left corner or on the tabs of your web browser. Also,
the title tag is the blue link that the search engines show when they list your webpage on the
SERP. Title tags max out at 75 characters, so choose your words wisely. Meta tags are snippets
of code you can include within your webpage’s HTML. The meta tags are usually located near
the title tag code in the head of your HTML. There are two meta tags – meta description and
meta keywords. The meta description is a text snippet that describes what your specific
webpage is about. Meta descriptions are usually the first place a search engine will look to find
text to put under your blue link when they list your website on the SERP. If you do not have a
meta description, the search engines will usually select a random piece of content from the
page they are linking to. The meta description is limited to 150 characters. Meta keywords
consist of an additional text snippet in the HTML that allows you to list a few different
keywords that relate to your webpage. Back in the day, search engines used this field to
determine what keywords to rank your webpage for.

Now, most search engines claim they do not even use meta keywords when indexing content.
Some small or niche search engines may still use it though. As a best practice, it is
recommended to put 5-7 keywords in the meta keywords, but don’t spend too much time
thinking about it.

Headline Tags

When the search engine bots scan your webpages, they look for clues to determine exactly
what your webpage is about. Keywords that are treated differently than most others on the page
show the search engines that they are more important than other keywords on the page. This is
why the use of headline tags within your page is so important. By using various headline tags
(each tag will produce a different size headline), you not only make your webpage easier to
digest from a reader’s standpoint, but you will also give the search engines definitive clues as
to what is important on the page.

Internal Linking

Up until this point, we have only referenced inbound links, or those links coming to your
website from other websites. When creating content for your website on your blog or on
specific webpages, you may want to reference other pages on your website. You can reference
these other pages by inserting a link to another webpage within a specific webpage’s content.
The use of anchor text is recommended when linking to another webpage or even another
website. When anchor text is used, it implies that the page you are linking to is about the
keyword or phrase you use as your anchor. This is yet another way you can help out the search
engines.

Off-Page SEO

"Off-Page SEO" refers to all the activities that you do directly off your site to raise the ranking of
a page in search engines. It includes link building, increasing link popularity by submitting to
open directories and search engines, link exchange, etc.


Who’s Linking to You?

Do you know? As discussed in the What it Takes to Rank section of this book, you can use free
tools to determine what websites are already linking to you, something the search engines are
very concerned about. Although twenty inbound links from your friends' websites may be a
good start to link building, garnering one link from a major publication or educational website
(with a .edu address) could be worth more than the power of those twenty links combined. Since
the Internet is essentially an inter-linking network of pages and websites that make up the
World Wide Web, not every link is created equal. Links from major publications and blogs
usually provide more link juice because they are visited by millions of people each day.
Therefore, they have an incredible impact on a webpage's ability to go viral.

How are they Linking to You?

A common practice in link building is link trading, or "I will put a link to your website on
my website if you put a link to mine on yours." These types of links are referred to as
reciprocal links. Since all link juice is good link juice, reciprocal links are not prohibited, but
their value is certainly not as good as a one-way link to your website. There was most likely a
time when reciprocal links were just as good as any other, but the search engines are always
getting smarter in determining how much juice a link should receive. Just like any other aspect
of SEO, throwing money at link building is bad. Paying others to link to you is strictly
prohibited by the search engines. In fact, all paid links must include a tag, called a no-follow
tag, which tells the search engines not to give those links credit; links to your website from
advertisements are likewise not counted as inbound links. If the search engines discover paid
link relationships that are not tagged or classified as advertisements, both the linker and the
linkee risk having their websites suspended from being listed on the SERP, or even blacklisted
if the instance is deemed severe enough.

Using social media to Spread Content

Use of social networks like Facebook, Google+, Twitter, and LinkedIn has exploded over the
last few years. In fact, the latest figures from ComScore suggest that 16% of all time spent
online is spent on a social network. With hundreds of millions of users across these social
networks sharing content they find online with their friends and followers, search engines have
begun to take notice. According to SEOmoz, the amount of social activity that a webpage has
on social networks (shares, recommendations, likes, links, +1's, etc.) is an important factor in
that page's ability to rank on the SERP. Simply put, search engines have realized that content
shared on social networks is extremely influential, and should therefore rank higher. Beyond
using social networks to engage new prospects, drive leads, and build brand awareness,
businesses should consider all of the SEO benefits they miss out on by not having a brand
presence.

Using Email to Spread Content

Almost any business these days uses email to nurture relationships with its current leads and
customers, and utilizes promotional email blasts to attract new ones. It is no surprise that with
the death of direct mail over the past few years, email marketing has exploded. It has never
been easier to set up an email program, upload your leads, and send them communication.
Obviously, the extreme rate at which businesses have adopted email has deteriorated its
effectiveness industry-wide. There is so much noise out there that you need to make every
email sent count. Just as you need to make the content on your website easy to share on social
media, you need to do the same for email. Aside from having clear calls-to-action in your emails
to nurture your list, drive leads, and convert them to customers, you should also make it easy
for your email readers to share the content with friends and post it to social networks. This will
increase the reach of your website content and make it easier for you to get inbound links for
SEO.

Off-page SEO includes activities done off of a website in an effort to increase the site’s search
engine rankings. Common off-page SEO actions include building backlinks, encouraging
branded searches, and increasing engagement and shares on social media.


Chapter-9
Study on Edge Computing for IOT and Big Data
1Dr Dinesh Kumar Singh, 2Dr Vineet Kumar Singh
1Assistant Professor, Dept. of IT, DSMNR University Lucknow, UP, India.
2Assistant Professor, Dept. of IT, IET - Dr. RMLA University Ayodhya, UP, India
Email: 1dineshsingh025@gmail.com & 2cn.vineet@gmail.com

INTRODUCTION: Due to the widespread use of smart devices, multi-access edge computing (MEC) has
emerged as the preferred technique for managing huge amounts of data in the heterogeneous Internet of Things
(H-IoT). The growth of connected IoT devices poses a significant data challenge for AI-based solutions. There
are several security problems with cloud computing infrastructure. Data security, stability, and privacy are a few
of them, along with the defence of infrastructure against outside dangers. Contemporary cloud computing
infrastructures may be classified as either centralised or decentralised. The effective and scalable processing of
data provided by Internet of Things (IoT) devices has been made possible by edge computing, which has emerged
as a viable strategy. This paradigm places data processing and analysis near to the data source, reducing latency,
boosting security, and conserving bandwidth. A robust foundation for managing the enormous volumes of data
created by IoT devices is provided by the combination of edge computing and big data. The present status of edge
computing for IoT and big data, as well as its important features, difficulties, and prospects, are examined in this
paper. We examine the current edge computing literature and pinpoint the major research axes that may progress
the state-of-the-art in this area.

EDGE COMPUTING: Edge computing is a distributed computing paradigm that moves computation and data
storage closer to the devices and sensors that produce the data. Using this method, data processing and analysis
for Internet of Things (IoT) devices become more effective, security is improved, and latency is decreased. In
contrast to centralised cloud data centres, edge computing performs computation and data storage on devices or
servers situated at the network's edge. With this method, data may be processed more quickly, network traffic is
reduced, and IoT applications perform better overall.
Traditional cloud computing approaches are facing new difficulties as a result of the expansion of IoT devices
and the rising amount of data they produce. Edge computing eliminates the requirement for data to be transferred
to a central location for processing by allowing data processing and analysis to be carried out locally on devices.
This lowers latency, boosts security, and conserves bandwidth. Real-time data processing and decision-making
skills, which are essential for applications like autonomous cars, healthcare monitoring, and industrial
automation, may also be offered via edge computing. A network of linked gadgets and sensors that can interact
with one another in real time is necessary for the deployment of edge computing. Technologies like 5G, Wi-Fi,
and Bluetooth may all be used to build this network. Edge computing also needs specific hardware and software
that is capable of distributed and decentralised processing and data storage. For the purpose of providing effective
and scalable processing of data produced by IoT devices, edge computing is a promising strategy. It has
ramifications for a broad variety of applications, including smart cities, healthcare, transportation, and more, and
has the potential to revolutionise the way we handle and interpret data.

Background: As it allows for the remote connection and control of items and devices, the Internet of Things
(IoT) has attracted a lot of interest in recent years. IoT devices generate enormous volumes of data, which calls
for processing and analysis that is both efficient and scalable. Because of the latency, bandwidth,
and security issues with traditional cloud computing models, edge computing has become a viable option. Edge
computing decreases latency, improves security, and increases the effectiveness of data processing and analysis
by moving computation and data storage closer to the devices and sensors that produce them. A strong framework
for managing the enormous volumes of data created by IoT devices is offered by edge computing paired with big
data.

Essential Features and Advantages of Edge Computing for IoT: Edge computing is a distributed computing
paradigm that moves data processing and storage closer to the gadgets and sensors that produce it. With this
method, data may be processed more quickly, network traffic is reduced, and IoT applications perform better
overall. For the Internet of Things, the following are the main features and advantages of edge computing:

• Reduced Latency: Edge computing decreases latency by processing data closer to the source, enabling real-
time data analysis and decision-making capabilities. This is especially crucial for applications like driverless cars,
industrial automation, and healthcare monitoring that call for quick answers.
• Scalability: Edge computing makes it simpler to scale up or down in accordance with the needs of the
application, since it provides distributed and decentralised processing and data storage. This approach is very
helpful for applications that produce a lot of data and need flexible and scalable processing.
• Security: By eliminating the need to send sensitive data over the network, edge computing enhances the security
of IoT applications. At the network's edge, data may be handled and analysed locally, lowering the possibility of
data breaches and cyber-attacks.
• Bandwidth Savings: Edge computing lowers network traffic by processing data locally, which conserves
bandwidth and lowers the cost of data transmission. This is especially important for applications that produce a
lot of data, such as industrial sensors and video surveillance systems.


• Reliability: Edge computing improves the reliability of IoT applications by enabling on-site data processing and
storage. As a result, there is less chance of data loss, and you can be confident that crucial applications will
still run even if your network goes down.
• Real-time Analytics: Edge computing makes it possible to make decisions and analyse data in real time, which
is essential for applications that call for quick responses. Applications requiring predictive maintenance, including
industrial automation and smart grids, benefit the most from this methodology.
In summary, edge computing for IoT provides a variety of vital traits and advantages, such as low
latency, scalability, security, bandwidth savings, reliability, and real-time analytics. These advantages make edge
computing a viable strategy for enabling effective and scalable processing of the data produced by IoT devices.
This has implications for a variety of applications, such as smart cities, healthcare,
transportation, and more.
Edge Computing Implementation in Big Data Applications: Challenges and Possibilities:
Edge computing is an emerging model of computing which allows for distributed processing as well as information
storage close to the instruments and sensors that generate the data. Edge computing facilitates faster data
processing and analysis, shrinks latency, and boosts the overall efficiency of big data applications by moving
computing capacity and storage to the edge of the network. It is not without difficulties, nonetheless, to deploy
edge computing in large data applications. One of the key challenges of incorporating edge computing in massive
data processing is the complexity of the infrastructure. Edge computing requires a complex network of
servers and devices, and maintaining this network may be difficult. Additionally, deployment and
maintenance costs for the physical processing and storage required for edge computing may be high. Data
security and privacy are two additional issues that arise from using edge computing in massive data processing.
Edge computing, a strategy that processes and stores data at the network's edge, raises concerns about the
privacy of user data and the security of that data. It may be difficult for enterprises to guarantee that the data is
safe against attacks and breaches when data is processed and stored at the edge of the network. Ultimately,
integrating edge computing into big data applications has both advantages and disadvantages. Although there
are difficulties due to the complexity of the infrastructure and concerns about data privacy and security, the
potential advantages, such as reduced latency, bandwidth savings, scalability, and reliability, make it a
feasible strategy for developing big data applications. New possibilities and difficulties will surely present
themselves as edge computing develops, but it is evident that this technology has the power to fundamentally
change how we manage and analyse voluminous quantities of data.


• An emerging technology, edge computing puts computing power and storage
closer to the gadgets and sensors that produce data.
• Edge computing in big data applications brings both potential and constraints.
• Edge computing improves the scalability and adaptability of big data systems, enabling them to meet
shifting demands.
• Edge computing also increases the dependability and accessibility of applications by enabling local data
processing and storage.
• New possibilities and difficulties will undoubtedly present themselves as edge computing develops, but
it is apparent that this technology has the power to fundamentally alter how we handle and analyse large
amounts of data.

Architectures and technology for Edge Computing –

A kind of distributed computing termed "edge computing" brings computing resources and data storage facilities
closer to the devices and sensors that generate data. With this technology, data processing and analysis may be
done more quickly, which minimises latency and improves application performance. Edge computing still requires
cutting-edge hardware, a complex network of devices, and a complex software architecture. This section will look
at the infrastructures and technologies that are widely used in edge computing.
Edge Computing Architectures –
Edge computing may be supported by a variety of edge computing designs. The following are some of the most
typical:
1. Cloud-based edge computing – In this design, the edge devices and sensors are linked to a cloud server, which
offers the computing power and data storage required for data processing and analysis.
2. Fog computing –
In order to build a distributed computing infrastructure that can enable real-time data processing and analysis,
this design entails placing several edge devices and sensors closer to the source of the data.
3. Mobile edge computing –
This design places computing tools and services at the cellular network's edge, allowing mobile apps to process
and analyse data more quickly.
4. Hybrid edge computing –
Depending on the particular application needs and use cases, this architecture combines cloud-based, fog,
and mobile edge computing.
Edge Computing Technologies –
Along with the various architectures, there are a number of technologies that are often used in edge computing.
The following are some of the most typical:
1. Virtualization - This technology enables the production of virtual computing resources, allowing for a more
effective use of hardware resources and permitting the deployment of numerous programmes on a single physical
device.
2. Containerization - This technique enables the development of small, portable software packages that may be
used on a variety of hardware platforms, increasing the adaptability and scalability of edge computing settings.
3. Artificial intelligence (AI) and machine learning (ML) –
These technologies are used in edge computing to offer real-time data analysis and decision-making capabilities,
enabling applications to react more rapidly and effectively to changes in data.

4. 5G networks –
Next-generation 5G networks provide quicker and more dependable connections, allowing edge
computing applications to analyse and transport data more quickly.
[Figure: the edge computing hierarchy, from edge devices and local edge servers at the network edge, through the edge data centre, to the cloud.]

Edge computing architectures and technologies are vital for facilitating the adoption of edge computing
environments. The choice of architecture and technology will depend on the unique application requirements and
use cases. While there are many other solutions available, cloud-based, fog, mobile, and hybrid architectures are
some of the most regularly used, and virtualization, containerization, AI/ML, and 5G networks are some of the
most extensively adopted technologies. As edge computing continues to expand, we can expect to see new
architectures and technologies emerge, but the key to successful implementation will be picking the proper mix
of architecture and technology to satisfy the particular needs of each application.

Security & Privacy in Edge Computing-

Edge computing facilitates the processing and evaluation of data closer to the source, hence boosting the speed
as well as effectiveness of applications. However, with the advent of edge computing, there are also rising worries
about security and privacy issues. In this part, we will cover some of the main security and privacy problems in
edge computing and explore possible solutions.

Security Challenges in Edge Computing –

1.Device Security –

Edge devices, such as sensors and gateways, are commonly installed in distant and insecure areas, rendering them
susceptible to physical and cyber-attacks. To overcome this difficulty, devices need to be protected with effective
authentication and encryption systems to prevent unwanted access.

2. Network Security –

In edge computing, data is transferred across wireless networks, which are sensitive to eavesdropping and
modification. This danger may be addressed by installing secure communication protocols, such as Transport
Layer Security (TLS) and Secure Shell (SSH).

3. Data Security –

Edge computing creates massive amounts of sensitive data that might be exposed to theft or manipulation. To
protect data, it must be encrypted during transmission and storage, and access control methods must be put in
place to guarantee that only authorized users may access it.

Privacy Challenges in Edge Computing –

1. Data Privacy - Edge computing creates personal and sensitive data, such as biometric and health data, which
needs rigorous privacy protection. To guarantee privacy, data must be anonymized or pseudonymized, and
permission processes must be put in place to ensure that users have control over their data.

2. Location Privacy - Edge devices are usually mobile and may monitor user whereabouts, creating issues about
location privacy. To overcome this difficulty, location data should be acquired only when required and should be
anonymized or encrypted to prevent unwanted access.


3. Policy Compliance –

Edge computing must comply with multiple data protection requirements, such as GDPR, HIPAA, and CCPA.
Compliance may be assured by using privacy-by-design principles and executing frequent audits to discover and
rectify privacy infractions.

Solutions for Security and Privacy Challenges in Edge Computing –

1. Security-by-design –

Edge computing systems and devices should be designed with security in mind from the beginning, incorporating
secure hardware and software components, strong authentication and encryption techniques, and secure
communication protocols.

2. Access Control –

Access control systems, such as identity and access management (IAM) and role-based access control (RBAC),
should be established to guarantee that only authorized users may access sensitive data.

3. Encryption –

Data should be encrypted during transmission and storage to prevent unwanted access.
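
As one hedged illustration of encrypting data at rest (the third-party Python "cryptography" package is assumed to be installed; the key handling shown is deliberately simplified):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()                   # in practice, store in a secure key store
f = Fernet(key)
token = f.encrypt(b"sensor reading: 21.7 C")  # ciphertext safe to transmit or store
print(f.decrypt(token))                       # plaintext recovered only with the key
```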

Security and privacy are significant problems in edge computing, and they require careful analysis and attention
from designers and developers. Solutions such as security-by-design, access control, anonymization and
pseudonymization, and encryption can help address these difficulties. However, as edge computing continues
to expand, new security and privacy vulnerabilities may emerge, and solutions must be regularly updated to stay
ahead of potential attacks.

Conclusion –

Edge computing has emerged as a viable alternative for processing and analyzing data closer to the source,
enabling faster and more efficient applications. The integration of IoT and big data has further raised the demand
for edge computing, as it gives real-time insights and enables intelligent decision-making. Numerous deployment
options, including fog computing, cloudlets, and mobile edge computing, are now available thanks to the fast
evolution of the architectures and technologies used in edge computing. The best architecture to employ will
depend on the particular use case and needs; each architecture has benefits and limits of its own. Edge computing
has many advantages, but it still faces severe privacy and security challenges. Because they are often deployed in
insecure, distant situations, edge devices are susceptible to both physical and digital threats. Edge computing
generates sensitive data, which needs strict security and privacy measures to safeguard. Solutions like access
control, anonymization and pseudonymization, encryption, security-by-design, and other techniques may be used
to deal with these problems. To keep up with emerging dangers and adhere to data protection laws, these solutions
must be regularly updated. All things considered, edge computing for IoT and big data has enormous potential to
improve the effectiveness, speed, and intelligence of applications. The unique use case, architecture, and security
and privacy safeguards must all be carefully taken into account for the implementation to be effective. Further
study and development in this field will be required to realise edge computing's full potential as demand for it
grows.


Chapter -10
Active Passive Classifier Algorithm in Machine Learning
1Dr Devesh Katiyar, 2Mr Gaurav Goel
1Assistant Professor, Dept. of CS, DSMNR University Lucknow, UP, India.
2Assistant Professor, Dept. of CS, DSMNR University Lucknow, UP, India
Email: 1katiyardevesh@gmail.com & 2goyals24@gmail.com

Introduction: In this modern era, when the Internet seems to be everywhere at once, everyone
relies on a variety of online sources for news. With the rapid growth of social networks such
as Facebook, Twitter, Instagram, etc., news quickly spreads to millions of users in a very short
time. The spread of false news has consequences, including creating bias and affecting election
outcomes in favour of certain candidates. Spammers also use interesting news headlines to
monetize their clickbait ads. This chapter aims to perform a binary classification of various
news articles available on the internet, considering concepts related to artificial intelligence,
natural language processing, and machine learning. It aims to give users the ability to check
whether an article is false or real, and the authenticity of the site that spreads the news. As
more and more people devote their lives to interacting online through social media platforms,
more and more people tend to search for and consume news on social networks instead of
traditional news outlets. The explanation for this shift in consumer behaviour lies in the nature
of these social media platforms:

• Consuming news on social networks is often faster and cheaper than in traditional
media, such as newspapers or television.
• It is easier to share, discuss, and exchange news with friends or other readers on social
networks.

It is also seen that social media has now surpassed television as the primary source of news.
Despite the advantages offered by social networks, the quality of stories on social networks is
lower than in traditional news organizations. However, because providing news online is
inexpensive and much faster and easier to spread through social media, a huge amount of false
news, i.e. articles that intentionally contain wrong facts, is produced online for various
purposes, such as financial and political gain. It is estimated that over a million tweets were
related to the "Pizzagate" false news during the presidential election. Given the prevalence of
this new phenomenon, "fake news" was even named the word of the year by the Macquarie
Dictionary in 2016. The mass dissemination of false news can have
significant negative effects on individuals and society. Firstly, false news can disrupt the
balance of authenticity in the news ecosystem. For example, the most popular false news was
even more widely spread on Facebook than the most popular mainstream news during the 2016
US presidential election. Secondly, false news intentionally tries to convince consumers to
accept biased or misleading beliefs. False news is often manipulated by propagandists to
convey political messages or influence; for example, some reports say that Russia designed
and implemented false accounts and social bots to spread false stories. Thirdly, false news
changes the way people interpret and react to real news. For example, some false news is
designed only to trigger people's distrust and confuse them, hindering their ability to
distinguish what is true from what is not. To help mitigate the negative effects caused by false
news (both for the public's benefit and for the news ecosystem), it is crucial that we develop
methods to automatically detect incorrect information spreading on social networks. The
Internet and social media have made access to news much easier and more comfortable.
Internet users can often follow the events of their interest online, and the growing number of
mobile devices makes this process easier. But with great opportunity comes great challenge.
Mass media has a huge influence on society, and because of that, someone often wants to take
advantage of it. Sometimes, to achieve certain goals, the mass media can manipulate
information in a number of ways. This leads to the production of articles that are not entirely
correct or may be completely incorrect. There are even various websites that produce false
news almost exclusively. They intentionally post hoaxes, half-truths, propaganda, and
misleading information pretending to be real news, often using social media to generate web
traffic and amplify their effect. The main goal of false news websites is to influence public
opinion on certain issues (mainly politics). Examples of such sites can be found in Ukraine,
the United States of America, Germany, China, and various other countries. Therefore, false
news is both a global problem and a global challenge. Scientists believe that the problem of
false news can be solved by machine learning and AI technologies. There is a reason for this:
recently, AI algorithms have started to perform better on many kinds of classification problems
(image recognition, speech detection, etc.) as data sets grow larger. There are several
influential articles on automatic deception detection. They gather data and information by
asking people directly for correct or incorrect information on certain topics, their activities,
and friendships.

Goal and Motivation: The main goals behind this chapter are the following:

• Be careful and alert while forwarding a potentially false article.
• Search for and find the correct story behind a particular article.
• Avoid wrong news during crisis events.
• Stay updated and informed.

Machine learning (ML) is a type of artificial intelligence (AI) that allows software applications
to predict results more accurately, even if they are not explicitly programmed to do so. A
machine learning algorithm uses historical data as input to predict new output values. The
spread of false news can have a significant negative impact on individuals and society, as it
can disrupt the reliability balance of the news ecosystem. Understanding the truth of the news,
supported by automatic message recognition, can therefore have a positive impact on society.
Existing System: We can obtain news online from various sources, such as social media sites,
search engines, the homepages of news agency websites, or fact-checking sites. On the internet,
there are several public datasets for classifying false news, like BuzzFeed News, LIAR, and
BS Detector. These datasets have been widely used in various research papers to determine
the veracity of news. In the following sections, I briefly discuss the various elements of the
dataset used in this work. This existing data can help us train the model using machine
learning techniques.


Requirement of New System: Nowadays, many people use the Internet as the central platform
to find information about what is happening in the world, and this trend will continue. Users
of this system can also see the main news or keyword updates. Most false news and messages
spread in graphical form, and everyone wants to know how to stop this, so we give some
important tips to stop false news from spreading rumours around the world.
Issue identification: This system is a web application that helps users detect false news. It
provides a text area that allows the user to paste the text of a message or the URL link of the
message. The system then reports whether the message appears genuine. All data that the user
gives to the detector can be saved for future updates of the model's state and further use in
data analysis. The system also assists users by providing guidance on how to recognise such
false stories and prevent their spread.

Methods Used

Logistic Regression: Another name for the logistic function is the sigmoid function, which
was developed by statisticians to describe the characteristics of ecologically driven, rapidly
increasing population growth that peaks at the carrying capacity of the environment. It is an
S-shaped curve that can take any real-valued number and map it to a value between 0 and 1,
but never exactly to those limits. Logistic regression is a classifier, not a regression algorithm.
It is used to estimate discrete values (binary values like 0/1, yes/no, and correct/wrong) based
on a given set of independent variables. Simply put, it predicts the probability of an event
occurring by fitting the data to a logistic function; hence the name "logistic regression". It
therefore outputs a probability, with values ranging from 0 to 1:

sigmoid(z) = 1 / (1 + e^(−z))

Hypothesis: z = W·x + b

h_θ(x) = sigmoid(z)

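
A minimal sketch of logistic regression on news text (scikit-learn assumed; the four toy articles and labels are purely illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["celebrity endorses miracle cure", "parliament passes budget bill",
         "aliens spotted voting", "court upholds election result"]
labels = [0, 1, 0, 1]                        # 0 = false news, 1 = correct news

X = TfidfVectorizer().fit_transform(texts)   # turn text into TF-IDF features
clf = LogisticRegression().fit(X, labels)
print(clf.predict_proba(X[:1]))              # [P(false), P(correct)] for the first article
```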

Decision Trees Classification: Decision trees are a supervised learning technique that can be
used for both classification problems and regression problems, but they are generally preferred
for solving classification problems. A decision tree is a tree-structured classifier, where the
internal nodes represent the features of the dataset, the branches represent the decision rules,
and every leaf node represents the outcome.

Gradient Boosting Classifier: Gradient boosting is a popular boosting algorithm. In gradient
boosting, every predictor corrects the errors of its predecessor. Unlike AdaBoost, the weights
of the training instances are not adjusted; instead, every predictor is trained using the residual
errors of its predecessor as labels.
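
A minimal sketch contrasting a single decision tree with gradient boosting (scikit-learn assumed; the synthetic dataset and model settings are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=600, random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)

tree = DecisionTreeClassifier(max_depth=3).fit(Xtr, ytr)          # one tree of if-then rules
gbm = GradientBoostingClassifier(n_estimators=200).fit(Xtr, ytr)  # trees fit to residual errors
print(tree.score(Xte, yte), gbm.score(Xte, yte))                  # boosting usually scores higher
```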

Random Forest Classifier: Random forest is a well-known name for an ensemble of decision
trees. In a random forest, we have a set of decision trees (also known as the "forest"). To
classify a new object based on its attributes, every tree gives a classification, and we say that
the tree "votes" for this class. The forest selects the classification with the most votes (out of
the total number of trees in the forest). A random forest is thus a classification algorithm
consisting of many decision trees. It uses bagging and feature randomness when building the
individual trees, trying to create a forest of uncorrelated trees whose committee prediction is
more accurate than that of any individual tree. Every individual tree in the random forest makes
a class prediction, and the class with the most votes becomes the model's prediction. The
reason the random forest model works so well is that a large number of relatively uncorrelated
models (trees) acting as a committee will outperform any of the individual constituent models.
So how does a random forest ensure that the behaviour of every individual tree is not too
correlated with the behaviour of the other trees in the model? It uses the following two methods,
illustrated in the sketch after this section:

Bagging (bootstrap aggregation): Decision trees are very sensitive to the data they are trained
on; small changes to the training set can lead to significantly different tree structures. Random
forest takes advantage of this by allowing every individual tree to randomly sample from the
dataset with replacement, resulting in different trees. This process is called bagging or
bootstrap aggregation.

Feature randomness: In a normal decision tree, when it is time to split a node, we consider
every possible feature and choose the one that produces the greatest separation between the
observations in the left node and those in the right node. In contrast, every tree in a random
forest can only choose from a random subset of features.
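
A minimal sketch showing both sources of randomness described above (scikit-learn assumed; all parameters are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
forest = RandomForestClassifier(
    n_estimators=100,      # number of trees that "vote"
    bootstrap=True,        # bagging: each tree trains on a bootstrap sample
    max_features="sqrt",   # feature randomness: random feature subset at every split
    random_state=0,
).fit(X, y)
print(forest.score(X, y))  # majority vote of the hundred trees
```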

Active Passive Classifier Algorithms: Active-passive algorithms, more widely known as
passive-aggressive algorithms, are commonly used for large-scale learning. They are among
the few "online learning" algorithms. In online machine learning algorithms, the input data
arrives in sequential order and the machine learning model is updated step by step, unlike
batch learning, where the entire dataset is used at once. This is useful in situations where there
is a huge amount of data and it is computationally infeasible to train on the entire dataset
because of its sheer size. We can simply say that the online learning algorithm will receive a
training example, update the classifier, and then discard that example; a sketch of this loop
follows.
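
A minimal sketch of that online loop (scikit-learn assumed, where this method is implemented as PassiveAggressiveClassifier; the random features stand in for vectorised articles):

```python
import numpy as np
from sklearn.linear_model import PassiveAggressiveClassifier

clf = PassiveAggressiveClassifier(random_state=0)
classes = np.array([0, 1])                     # 0 = false news, 1 = correct news
rng = np.random.default_rng(0)

for _ in range(1000):                          # examples arrive one at a time
    x = rng.normal(size=(1, 10))               # stand-in for one article's features
    y = np.array([int(x[0, 0] > 0)])           # stand-in label
    clf.partial_fit(x, y, classes=classes)     # update on this example, then discard it

print(clf.predict(rng.normal(size=(1, 10))))   # classify a new incoming example
```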
Calculation and formulation of metrics: To measure the performance of algorithms for
detecting false news, different metrics are used. In this subsection, we look at some of the most
widely used metrics for false-news detection. Most existing methods treat the false news
problem as a classification problem, predicting whether a news story is false or not:

Correct Positive (CP): false news that is predicted, detected, and classified as false news;

Correct Negative (CN): correct news that is predicted and classified as correct news;

Wrong Negative (WN): false news that is wrongly predicted and classified as correct news;

Wrong Positive (WP): correct news that is wrongly predicted and classified as false news.
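
From these four counts, the standard metrics follow directly; a minimal sketch (plain Python; the counts are illustrative):

```python
def metrics(cp: int, cn: int, wp: int, wn: int):
    accuracy = (cp + cn) / (cp + cn + wp + wn)
    precision = cp / (cp + wp)   # of items flagged as false, how many really were
    recall = cp / (cp + wn)      # of truly false items, how many were caught
    return accuracy, precision, recall

print(metrics(cp=80, cn=90, wp=10, wn=20))   # (0.85, 0.888..., 0.8)
```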


Conclusion: In the 21st century, most tasks are performed online. Printed newspapers, once
preferred, are now being replaced by applications such as Facebook and Twitter for reading
news articles online, and WhatsApp forwarding is also an important channel. The growing
problem of false news only complicates things, as it tries to change or interfere with people's
opinions and attitudes through digital technology. When a person is fooled by false news
presented as real, their perception of a particular subject is distorted while they believe it to
be true. Therefore, in order to suppress this phenomenon, we have developed a false news
detection system that receives input from the user and classifies it as true or false. To
implement this, we used different NLP and machine learning techniques. The model is trained
with an appropriate dataset, and performance calculations are also done with various
performance measurements. The best model, i.e. the model with the highest accuracy, is used
to classify headlines or articles. As we saw in the static search above, the best model was
logistic regression with 65% accuracy. Therefore, we used grid-search parameter optimization
to improve the performance of the logistic regression, which resulted in 75% accuracy.
Therefore, if a user feeds a particular news article or headline to the model, there is a 75%
chance that it will be classified according to its true nature. Users can check news articles or
keywords online, and can also verify the authenticity of the website. The accuracy of the
dynamic system is 93%, which improves with every iteration. We follow the latest news and
build our own dataset to keep it up to date. All live news and up-to-date data are stored in a
database using a web crawler and an online database.



About The Editors
Dr. Dinesh Kumar Singh is working as Assistant Professor in the Department of IT, Dr Shakuntala Misra National
Rehabilitation University, Lucknow. He obtained his B.Tech, M.Tech and Ph.D in Computer Science & Engineering
from different reputed universities of India. He has been teaching Computer Organization & Architecture, Object
Oriented Programming, Data Mining, Artificial Intelligence, E-commerce, and Cyber Crime to UG and PG students for
more than 20 years. He has published more than 40 research articles in different reputed journals.

Dr. Shavej Ali Siddiqui obtained his Ph.D. from Dr. Bhimrao Ambedkar University, Agra, in the year 2013. He has more than 17 years of experience in teaching Mathematics at Bright Girls Degree College, Lucknow; Sagar Institute of Technology & Management, Barabanki; and KMC Language University, Lucknow. He has taught all branches of B.Tech, M.Tech, BCA, MCA, Polytechnic, and Pharmacy students. Currently, he is working as an Assistant Professor (Mathematics) in the Department of Applied Sciences and Humanities at Khwaja Moinuddin Chishti Language University, Lucknow. He has published more than 15 research articles in reputed national and international journals, has authored books and book chapters, and has presented research articles in seminars and conferences.

Dr. Devesh Katiyar is working as Assistant Professor in the Department of Computer Science at Dr. Shakuntala Misra National Rehabilitation University, Lucknow, and has been associated with the IT industry, research, and academics for the last fourteen years. He has rich and diverse experience in academia and has been a Visiting Professor at various universities and colleges. To date he has guided numerous projects. He has many publications in national and international journals and has presented papers in international and national conferences. He has been providing consultancy to various government and non-government organisations. He is a member of the editorial board and a reviewer of various national and international journals.

Mr. Gaurav Goel is working as Assistant Professor in the Department of Computer Science, Dr. Shakuntala Misra National Rehabilitation University, Lucknow. He obtained his B.Tech and M.Tech in Computer Science & Engineering from different reputed institutions of India. He has been teaching Computer Organization & Architecture, Digital Image Processing, Machine Learning, Soft Computing, Data Mining, and Artificial Intelligence to UG and PG students for more than 8 years. He has published more than 15 research articles in different reputed journals.

Dr. Aarthi Elangovan serves as an Assistant Professor in the Department of Computer Science, Faculty of Science and Humanities, at SRM Institute of Science and Technology, Kattankulathur, Tamilnadu, India, a position she has held since 2006. Possessing a Ph.D. in Computer Science and a master's degree in Computer Applications, she exhibits a keen interest in both the Internet of Things (IoT) and social media data analytics. Her scholarly contributions include the publication of over 10 research articles in indexed journals, with a primary focus on leveraging deep learning and machine learning algorithms for the analysis of specialized systems related to disaster management and health monitoring applications. Her research methodology predominantly involves the application of data mining, advanced machine learning, IoT, and statistics.
Dr. Susheel Kumar Singh obtained his B.Sc. from the University of Allahabad, Allahabad. He completed his M.Sc. in Electronics as well as M.Sc. in Physics from the University of Lucknow, Lucknow, and was awarded his doctorate in Physics from the University of Lucknow. Currently he is serving in the Department of Physics at DSMNRU, Lucknow. He has authored five textbooks for UG and PG students and published 15 edited books. He has published more than 30 research papers in prestigious international journals. He has organised more than 50 conferences, FDPs, and workshops in collaboration with national and international institutes. Dr. Singh is General Secretary of the prestigious MKSES Educational Society, Lucknow, UP, India. He has great research passion in the area of Material Science. Dr. Singh has participated in and presented his research work at many national and international conferences.
