
Hindawi

Journal of Healthcare Engineering


Volume 2022, Article ID 2693621, 18 pages
https://doi.org/10.1155/2022/2693621

Research Article
Brain Tumor Detection and Classification by MRI Using
Biologically Inspired Orthogonal Wavelet Transform and Deep
Learning Techniques

Muhammad Arif,1 F. Ajesh,2 Shermin Shamsudheen,3 Oana Geman,4,5 Diana Izdrui,5 and Dragos Vicoveanu5

1 Department of Computer Science and Information Technology, University of Lahore, Lahore, Pakistan
2 Department of Computer Science and Engineering, Sree Buddha College of Engineering, Alappuzha, Kerala, India
3 College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
4 Neuroaesthetics Lab, Stefan cel Mare University of Suceava, Suceava, Romania
5 Faculty of Electrical Engineering and Computer Science, Stefan cel Mare University of Suceava, Suceava, Romania

Correspondence should be addressed to Muhammad Arif; arifmuhammad36@hotmail.com and Oana Geman; oana.geman@usm.ro

Received 15 November 2021; Revised 4 December 2021; Accepted 9 December 2021; Published 10 January 2022

Academic Editor: Liaqat Ali

Copyright © 2022 Muhammad Arif et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is
properly cited.
Radiology is a broad subject that requires deep knowledge and understanding of medical science to identify tumors accurately. A tumor detection program thus helps overcome the shortage of qualified radiologists. Using magnetic resonance imaging, biomedical image processing makes it easier to detect and locate brain tumors. In this study, a segmentation and detection method for brain tumors was developed using images from an MRI sequence as input to identify the tumor area. This task is difficult due to the wide variety of tumor tissues across patients and, in most cases, the similarity of tumor tissue to normal tissue. The main goal is to classify a brain as tumor-bearing or healthy. The proposed system is based on Berkeley's wavelet transformation (BWT) and a deep learning classifier to improve performance and simplify the process of medical image segmentation. Significant features are extracted from each segmented tissue using the gray-level co-occurrence matrix (GLCM) method, followed by feature optimization using a genetic algorithm. The final results of the implemented approach were assessed based on accuracy, sensitivity, specificity, dice coefficient, Jaccard's coefficient, spatial overlap, AVME, and FoM.

1. Introduction

Every year, more than 190,000 people in the world are diagnosed with primary or metastatic (secondary) brain tumors. Although the causes of brain tumors are not certain, there are many trends among the people who develop them, and any human being, whether a child or an adult, may be affected. Early identification of the tumor region reduces the risk of mortality [1]. As a result, the radiology department has gained prominence in the study of brain tumors using imaging methods. Many studies have looked at the causes of brain tumors, but the results have not been conclusive. In [2], an effective partitioning strategy was presented using the k-means clustering method integrated with the FCM technique; this approach benefits from the minimal computation time of k-means clustering, while FCM helps to increase accuracy. Amato et al. [3] structured PC-assisted recognition using mathematical morphological reconstruction (MMR) for the initial analysis of brain tumors. Test results show the high accuracy of the segmented images while significantly reducing the time of calculation. In [4], a deep learning neural classification system was proposed
for the identification of brain tumors. Discrete wavelet transformation (DWT), an excellent feature extraction method, and principal component analysis (PCA) were applied to the classifier, and the performance evaluation was highly acceptable across all measurements.

In [5], a new classifier system was developed for brain tumor detection; the proposed system achieved 92.31% accuracy. In [6], a method was suggested for classifying brain MRI images using an advanced machine learning approach and brain structure analytics; this technique provides greater accuracy in identifying the separated brain regions and in finding the ROI of the affected area. Researchers in [7] proposed a strategy to recognize MR brain tumors using a hybrid approach incorporating the DWT for feature extraction, a genetic algorithm to reduce the number of features, and a support vector machine (SVM) for classification [8]. The results of this study show that the hybrid strategy offers better output in a similar sense and that the RMS error is state-of-the-art. Specific segmentation concepts [9-33] include region-based segmentation [10], edge-based techniques [11], and thresholding techniques [12] for distinguishing cancer cells from normal cells. Common classification methods are based on the Neural Network Classifier [13], SVM Classifier [14], and Decision Classifier [15]. In [32], a brain tumor detection method was developed using the GMDSWS-MEC model; the results show high accuracy and less time to detect tumors.

1.1. Research Gap Identified. From the research analysis, we have identified that traditional algorithms are very sensitive to the initial cluster size and cluster centers. If these clusters vary with different initial inputs, problems arise in classifying pixels. In the existing popular fuzzy c-means algorithm, the cluster centroid value is taken randomly, which increases the time needed to reach the desired solution. Manual segmentation and evaluation of MRI brain images carried out by radiologists are tedious, and segmentation by existing machine learning techniques offers limited accuracy and computation speed. Many neural network algorithms have been used for the classification and detection of tumors, but their accuracy is low. Detection accuracy depends on the segmentation and detection algorithms used. So far, in existing systems, both the accuracy and the quality of the image are low.

1.2. Contribution of the Proposed Research. The proposed technique is an effective technique to detect tumours from MRI images; in it, different classifiers are used. The proposed system should be capable of processing multislice MRI sequences and accurately bounding the tumor area in the preprocessed image via skull stripping and morphological operations. The region should be segmented by Berkeley's wavelet transformation, and texture features should be extracted using the ABCD, FOS, and GLCM features. Classifiers such as Naïve Bayes, SVM-based BoVW, and the CNN algorithm should compare the classified results and must identify the tumor region with high precision and accuracy. Finally, based on the classifier result, the tumor region is classified as malignant or benign.

The rest of the article is organized as follows: Section 1 presents the background to brain tumors and related work; Section 2 presents the construction techniques and the measures used throughout the method; Section 3 describes the results, analysis, and a comparative study; and, finally, Section 4 presents the conclusions and upcoming work.

2. Methodology

In this research work, an image database is first collected; the obtained images are enhanced by thresholding, morphological operations, and region filling. After preprocessing, the tumor region is segmented using the BWT algorithm. The features are extracted using the GLCM algorithm, and the genetic algorithm is used for selecting the features. Finally, the Naïve Bayes, BoV-based SVM, and CNN classifiers classify the image. The flow for identifying brain tumors is portrayed in Figure 1.

2.1. Data Acquisition. The data collected are grouped into two kinds: healthy brain images and unhealthy brain images. Among the 66 patients, 22 patients have normal MRI brain images and the remaining 44 fall into the abnormal MRI brain image category, collected from the Harvard Medical School website (http://med.harvard.edu/AANLIB/) [16]. The MRI brain images obtained from the database were axial-plane, T2-weighted, 256 × 256 pixel images. These images are scrutinized, and preprocessing is done before the algorithms are applied.

2.2. Preprocessing. The preprocessing step focuses on removing the redundancy present in the captured image without affecting the subtleties that play a key role in the overall procedure. It is done to improve the visual appearance and characteristics of an image. MRI images [34] are often affected by impulse noise, such as salt-and-pepper noise, which degrades the performance of the tumor segmentation system; skull stripping and morphological operations are therefore proposed to mitigate it.

The key activities in preprocessing are thresholding, morphological operations, and region filling. In the input image, the mean is first calculated and a thresholding operation is performed. To remove holes from the input image, a region filling operation is done, followed by morphological operations, so that noise as well as small objects can be eliminated from the background. Preprocessing is normally assessed visually or quantitatively.

2.3. Segmentation: Berkeley's Wavelet Transformation. Segmentation is used to identify the tumor-infected area in the MR images. The Berkeley wavelet transformation uses a two-dimensional triadic wavelet transformation and a complete, orthonormal basis, and hence it is very well suited to identifying the area of interest in the MR images [17].
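The preprocessing chain of Section 2.2 (mean-based thresholding, region filling, then a morphological operation to remove small objects) can be sketched as follows. This is a minimal illustration with SciPy, not the authors' exact implementation; the 8-bit test image is synthetic.

```python
import numpy as np
from scipy import ndimage

def preprocess(img):
    """Mean-threshold an 8-bit slice, fill holes, and open to drop small objects."""
    mask = img > img.mean()                            # thresholding at the image mean
    mask = ndimage.binary_fill_holes(mask)             # region filling
    mask = ndimage.binary_opening(mask, iterations=2)  # morphological noise removal
    return mask

# Synthetic 64x64 "slice": a bright blob with a hole inside it, plus salt noise.
img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 20:40] = 200          # bright region
img[28:32, 28:32] = 0            # hole inside it
img[5, 5] = 255                  # isolated speck (noise)

mask = preprocess(img)
print(mask[30, 30], mask[5, 5])  # hole filled -> True, speck removed -> False
```

The same three steps generalize directly to real MRI slices once the skull-stripping mask is applied.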
Figure 1: Basic block diagram of the planned work (image collection, preprocessing, segmentation by BWT, texture feature extraction (GLCM), feature selection (GA), and a bag-of-visual-features-based SVM classifier).
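As a stand-in for the BWT of Section 2.3 (which is not available in common libraries), the idea of a multilevel two-dimensional wavelet decomposition can be illustrated with a plain one-level Haar transform in NumPy; in the actual method, the triadic, orientation-tuned BWT basis replaces these dyadic averaging/differencing filters.

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2D Haar decomposition: approximation + 3 detail subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation (pass to next level)
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar2d_level(img)
print(ll.shape)  # (2, 2): each level halves both dimensions
```

Repeating the call on `ll` gives the "one level to n levels" recursion described below; the detail subbands carry the fine structure used to delineate the region of interest.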

The BWT converges repetitively from one level to n levels and decomposes the remaining part of the image at a very fast rate.

The partitioning of the highly contagious MR brain areas is done as follows:

(i) In the initial stage, the enhanced brain MRI image is transformed into a binary image with a cut-off level of 117. Pixels with values larger than the defined level are converted to white, whereas the remaining pixels are marked as black, resulting in the development of two distinct regions around the infected tumor tissues.

(ii) In the second stage, the morphological erosion procedure is used to remove white pixels. Finally, the region is divided into eroded and identical areas, and the area of the omitted black pixels of the erosion process is counted as the MR image mask for the brain.

The present work deals with Berkeley's wavelet transformation, which is used to effectively section the brain MR image. The Berkeley wavelet transform (BWT) is defined as a two-dimensional triadic wavelet transformation, which can be used to analyze a signal or image. The BWT captures features such as spatial position, band-pass frequency, band-pass orientation tuning, and quadrature phase. As with the mother wavelet and other wavelet transform families, the BWT algorithm allows for an efficient transition from the spatial form to the temporal frequency domain. The BWT is an important method of image transformation and forms a complete orthonormal basis.

The BWT consists of eight mother wavelets grouped into four pairs, each pair having a different orientation: 0, 45, 90, and 135 degrees. Inside each pair, one wavelet has odd symmetry, while the other has even symmetry. The BWT is an exact, orthonormal basis and is therefore useful for reducing computational cost. Here, the Berkeley wavelet transformation is used for efficient partitioning [18]. Wavelet analysis is an efficient approach capable of revealing aspects of the data that other signal-analysis techniques miss. By considering the images at numerous scales, the process can extract the finer details from them and in effect enhance the image quality. Alternatively, wavelet analysis can compress or denoise a signal without significant degradation. The BWT algorithm steps are defined as follows:

(1) Initially calculate the scaling and translation process:

    ψ_{S,τ}(T) = (1/√S) ψ((T − τ)/S).  (1)

(2) Convert the data from the spatial form to the temporal frequency domain.

(3) Simplify the image conversion; the calculation of the mother wavelet transformation is partly fixed:

    β^φ_θ(τ, S) = (1/S²) β^φ_θ(3^S (x − i), 3^S (y − j)),
    β_0 = (1/√9) μ(x/3, y/3).  (2)

(4) Apply the morphological technique.

(5) Reschedule the corresponding pixel values; this process applies only to binary images.

(6) Remove pixels from or toward the edge area of the artifacts, relying only on the streamlining element of the selected image.

2.4. Feature Extraction. The feature extraction procedure is utilized to automatically locate the lesion. Three procedures are considered for the extraction of features: FOS, ABC, and GLCM.

2.4.1. ABC Parameter. The features extracted based on ABC are given as follows:

(i) Asymmetry index:

    AI = (A1 + A2)/(2Ar).  (3)

(ii) Four different border irregularity measures:

(a) Compactness index:
    CI = P_L² / (4A_L).  (4)

(b) Fractal index: The set of fractals is calculated using box counting. The box count is portrayed by N(S) in the fractal dimension plot, where N symbolizes a sequence of boxes; the box count is described for the Hausdorff dimension plot with log N(S).

(c) Edge abruptness:

    EI = ((max · min) % 6 + 2)/100.  (5)

(d) Gray transition: It calculates the position and velocity of the image's gradient.

(iii) Diameter: The diameter of the lesion field is calculated from the smaller axis length by using the 'regionprops' function. The resulting value is converted to nanometers, and that value is assigned as the diameter.

2.4.2. GLCM Features. This strategy obeys two measures to synthesize characteristics of the medical images. In the initial process, the GLCM is computed, and in the next process, the texture characteristics depending upon the GLCM are determined. The metric formulas for a few of the features are depicted below. Feature extraction is done using the gray-level co-occurrence matrix [20]. Several texture features are available, but this study uses only four: energy, contrast, correlation, and homogeneity.

    Energy = Σ_{i=0}^{G−1} Σ_{j=0}^{G−1} [p(i, j)]²,

    Homogeneity = Σ_{i=0}^{G−1} Σ_{j=0}^{G−1} p(i, j)/(1 + (i − j)²),

    Correlation = Σ_{i=0}^{G−1} Σ_{j=0}^{G−1} (i · j · p(i, j) − μ_x μ_y)/(σ_x σ_y),

    Contrast = Σ_{i=0}^{G−1} Σ_{j=0}^{G−1} (i − j)² p(i, j).  (6)

2.4.3. Statistical Features. The Color Moment (CM) descriptor is used to derive the functions. The first-order statistical (FOS) feature equations are given below:

    Mean: μ = Σ_{i=0}^{G−1} i · p(i),

    Variance: σ² = Σ_{i=0}^{G−1} (i − μ)² p(i),

    Skewness: μ₃ = σ⁻³ Σ_{i=0}^{G−1} (i − μ)³ p(i),

    Kurtosis: μ₄ = σ⁻⁴ Σ_{i=0}^{G−1} (i − μ)⁴ p(i),

    Entropy: H = −Σ_{i=0}^{G−1} Σ_{j=0}^{G−1} p(i, j) log₂ [p(i, j)].  (7)

Figure 2 shows the 13 features selected as inputs to the classification system. The network consists of 4 hidden neurons, and 1 output neuron is used for the final examination of the network. At that point, feature selection rules are utilized to reduce the number of input features.

2.5. Feature Selection Using a Genetic Algorithm. Not all features contribute to the classification process. Feature selection is carried out to identify the most suitable feature set. We use the genetic algorithm to optimize the selected feature values for the proposed method because it offers significant advantages over typical optimization techniques, such as linear programming, heuristics, depth-first and breadth-first search, and Praxis [21]. The feature selection process is presented in Figure 3, and the procedure of the GA is given in Algorithm 1.

2.5.1. Feature Selection: Genetic Algorithm. Genetic algorithms play an important role in reducing the dimensionality of the feature space, thus helping to improve the performance of the classifier. In the genetic algorithm, the major stages are fitness evaluation, chromosome encoding, the selection technique, the genetic operators, and the stopping condition. In binary search space, the genetic algorithm considers chromosomes to be bit strings. Figure 4 shows the matrix representation of the chromosome bit values in the genetic algorithm.

Initially, a primary population is formed arbitrarily and evaluated using the fitness function. In the testing, a chromosome bit value of "1" represents the specific feature indexed by that position. The ranking determines the accuracy on previously tested classification data. The chromosomes with the highest fitness are selected according to the ranking. The remaining chromosomes undergo mutation and crossover to produce new chromosomes with a greater likelihood of fitness; this process is repeated until the fitness criterion is attained [22].
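The four GLCM texture measures of Section 2.4.2 can be computed from scratch as a short sketch. This is pure NumPy with a single horizontal neighbor offset; the original work may use different offsets or gray-level binning.

```python
import numpy as np

def glcm(img, levels):
    """Normalized co-occurrence counts for the horizontal neighbor offset (0, 1)."""
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def glcm_features(p):
    """Energy, contrast, homogeneity, and correlation of a normalized GLCM p."""
    i, j = np.indices(p.shape)
    energy = (p ** 2).sum()
    contrast = ((i - j) ** 2 * p).sum()
    homogeneity = (p / (1.0 + (i - j) ** 2)).sum()
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    si = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sj = np.sqrt(((j - mu_j) ** 2 * p).sum())
    correlation = (((i - mu_i) * (j - mu_j) * p).sum()) / (si * sj)
    return energy, contrast, homogeneity, correlation

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
print(glcm_features(p))
```

For production use, scikit-image's `graycomatrix`/`graycoprops` compute the same quantities over multiple offsets and angles.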
Figure 2: Features selected for the CNN technique (the 13 inputs, namely asymmetry, border, color, diameter, contrast, correlation, energy, homogeneity, mean, standard deviation, skewness, kurtosis, and entropy, feed a hidden layer and a single 0/1 output).

Stage 1: Establish a randomly generated population.
Stage 2: For each chromosome, calculate its fitness function.
Stage 3: Continue the steps below until 'n' offspring are produced:
(1) Choose parent chromosomes from the current population.
(2) With probability Pc, cross over the pair at a randomly selected point to form two offspring.
(3) Mutate the offspring with probability Pm.
Stage 4: Replace the current population with the new population.
Stage 5: Repeat from Stage 2.

ALGORITHM 1: Genetic algorithm.
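Algorithm 1 can be sketched in a few lines of Python. The bit-string encoding, crossover probability Pc, and mutation probability Pm follow the text; the fitness function here is a toy stand-in (it rewards features belonging to a hypothetical "useful" set and penalizes subset size), since the real fitness is the classification error of Eq. (9).

```python
import random

random.seed(1)
N_FEATURES, POP, PC, PM = 13, 20, 0.8, 0.05
USEFUL = {0, 2, 5, 7, 11}  # hypothetical informative features, for illustration only

def fitness(chrom):
    # Toy stand-in: reward useful features, penalize chromosome size.
    chosen = {i for i, bit in enumerate(chrom) if bit}
    return len(chosen & USEFUL) - 0.1 * len(chosen)

def evolve(generations=30):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(POP)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = pop[:2]                            # elite children pass through unchanged
        while len(nxt) < POP:
            p1, p2 = random.sample(pop[:10], 2)  # parents from the better half
            if random.random() < PC:             # single-point crossover
                cut = random.randrange(1, N_FEATURES)
                p1 = p1[:cut] + p2[cut:]
            nxt.append([b ^ (random.random() < PM) for b in p1])  # bit-flip mutation
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print([i for i, bit in enumerate(best) if bit])  # indices of the selected features
```

Because the two elite chromosomes are copied without mutation, the best fitness in the population never decreases between generations.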

(1) Initial Population Selection. The initial population matrix is given by m × n, where m and n represent the population size and the chromosome length. In the present function, the number of features derived from the picture block equals the length of the chromosome, and the number of chromosomes indicates the size of the population. Each bit of a chromosome in the feature matrix indicates the position of a feature; a "1" in the chromosome selects the equivalent feature from the matrix.

2.5.2. Fitness Evaluation. A fitness function has the original goal of assessing the discriminating capacity of each subset. Evaluating a reasonable number of the test data against the feature space of the training sets solves the classification problem:

    D(x_test, x_i) = √( Σ_{m=1}^{M} (x_test − x_i)² ).  (8)

Since the GA iterates depending upon the classification error, the distinct chromosomes in the present population are assessed and ranked by fitness. Iteration is mainly done to decrease the error rate and to find the smallest and best fitness value:

    fit = α/N_f + exp(−1/N_f).  (9)

Here, N_f is the cardinality of the selected features, and α represents the classification error.

(1) New Population for Generation of Children. A new generation is established after evaluating the fitness function. It mainly depends on the genetic operators, crossover and mutation. The new population is grouped into elite, crossover, and mutation children.

(a) Elite Children. Since these are the finest children, with the highest fitness values, they are pushed automatically to the successive generation. The population size bounds the total number of elite chromosomes; this implies EliteCount ≤ populationSize.
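Equations (8) and (9) can be written down directly; the values of N_f and the classification error α below are illustrative only.

```python
import math

def distance(x_test, x_i):
    """Eq. (8): Euclidean distance between a test sample and a training sample."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x_test, x_i)))

def fitness(alpha, n_f):
    """Eq. (9): classification error alpha over N_f features, plus exp(-1/N_f)."""
    return alpha / n_f + math.exp(-1.0 / n_f)

print(distance([3.0, 4.0], [0.0, 0.0]))   # 5.0
print(round(fitness(alpha=0.12, n_f=6), 4))
```

Note that the exp(−1/N_f) term grows toward 1 as more features are kept, so minimizing Eq. (9) favors small feature subsets as well as low error.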
Figure 3: GA-based feature selection (initialize population, evaluate fitness, select parent chromosomes, form elite/crossover/mutation children into a new population, and repeat until the termination condition yields the best chromosome).

Figure 4: Matrix representation showing the bit values of the chromosomes in the genetic algorithm (an input pattern matrix is mapped through the chromosome population G1 ... Gn to a transformed pattern, whose classifier accuracy scores the chromosomes).

If the elite count is 2, the genetic algorithm chooses the two finest chromosomes and pushes them to the subsequent generation.

(b) Crossover Children. The crossover fraction is the ratio of the following generation, besides the elite children, that is created by crossover. There are no mutation children when the fraction is fixed to one.

(c) New Population: Mutation Children. Lastly, the number of mutation children is Mcount = 100 − EliteCount − COcount = 100 − 2 − 78 = 20. This implies EliteCount + COcount + Mcount = 100.

(2) Tournament. In the genetic algorithm, the goal of the selection technique is to ensure the population is persistently enhanced in overall fitness. This technique aids the GA in eliminating poor designs and keeping only the finest individuals. There are many selection techniques, for instance, stochastic uniform selection. In this research, tournament selection with a tournament size of 2 is used because of its simplicity, speed, and efficacy. Tournament selection always ensures that the poorest individual does not pass to the subsequent generation. To carry out tournament selection, two groups are required (i.e., the players (parents) and the winners). In tournament selection, two chromosomes are drawn from the elite children's population, and the finer of the two is kept; this is carried out iteratively to fill up a freshly produced population.
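The size-2 tournament and the single-point crossover with Pc = 0.8 described here can be sketched as follows; the bit-count fitness is a hypothetical stand-in.

```python
import random

random.seed(7)

def tournament(pop, fitness, size=2):
    """Pick `size` contenders at random; the fitter one wins (size-2 tournament)."""
    contenders = random.sample(pop, size)
    return max(contenders, key=fitness)

def xover(p1, p2, pc=0.8):
    """Single-point crossover with probability pc; otherwise copy the parents."""
    if random.random() < pc:
        cut = random.randrange(1, len(p1))
        return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
    return p1[:], p2[:]

pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(6)]
fit = lambda c: sum(c)        # hypothetical fitness: count of 1-bits
w1, w2 = tournament(pop, fit), tournament(pop, fit)
c1, c2 = xover(w1, w2)
print(fit(w1) >= min(fit(c) for c in pop))  # True: a tournament winner is never the worst pick
```

The children preserve, position by position, exactly the bits of the two winning parents, which is what makes single-point crossover a recombination rather than a mutation.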
Figure 5: Block diagram showing the steps for support vector machine training and testing (training phase: preprocessing, segmentation by BWT, texture feature extraction and selection, then the BoV and SVM stage producing the classified output; testing phase: preprocessing, texture feature extraction, and the optimized features of the test image).
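The bag-of-visual-words pipeline of Figure 5 (collect local descriptors, cluster them into a visual vocabulary, histogram each image against the vocabulary, and feed the histograms to a classifier) can be sketched as follows. The patch descriptors here are random stand-ins for real image patches, k-means comes from SciPy, and the final SVM step is only indicated.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

np.random.seed(0)  # kmeans2 draws its initial centroids from NumPy's global RNG

k = 5
# Levels 1-2: stand-in 8-dim patch descriptors from 10 "images" (30 patches each),
# clustered into a visual vocabulary of k words.
descriptors = [np.random.normal(size=(30, 8)) for _ in range(10)]
vocab, _ = kmeans2(np.vstack(descriptors), k, minit='points')

# Level 3: each image becomes a k-bin histogram of its nearest visual words.
def bovw_histogram(desc):
    words, _ = vq(desc, vocab)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

X = np.array([bovw_histogram(d) for d in descriptors])
print(X.shape)  # (10, 5): one normalized word histogram per image
# Level 4 would train an SVM (e.g., sklearn.svm.SVC) on X and the image labels.
```

The histogram step is what "converts the image into a feature vector", as described in Section 2.6.2; the SVM then operates on fixed-length vectors regardless of how many patches each image produced.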

(3) Crossover Function. In the GA, to produce children for the next generation, the crossover operator genetically combines two individuals called parents. After the removal of the elite children, the number of new children formed for the new population is given by the crossover children, and it is evaluated. The crossover fraction normally has a value between 0 and 1; in the present work, the crossover value is 0.8. The two parent chromosomes have binary values, and an XOR operation is carried out on these chromosomes.

2.6. Classification. Within the area of computer vision, classification is a significant chore. The labeling of images into one of several defined groupings is called image classification. The classification scheme contains a database that holds predefined patterns, against which a perceived item is compared to organize it into a suitable class. Here, various techniques, such as Naïve Bayes, SVM-based BoVW, and the CNN algorithm, are used.

2.6.1. Naïve Bayes Classifier. The Naïve Bayes classifier has played a major role in the extraction of medical data. It shows superior accuracy and performance because the attributes are assumed independent of each other. Missing values arise continuously in clinical data [35]; the classifier naturally treats them as if missing by mistake. The algorithm replaces zeros for limited numerical data and zero vectors for categorical data. In nested columns, missing values are interpreted as being sparse; missing values in columns with simple data types are interpreted as missing at random. Generally, when choosing to manage our data preparation, Naïve Bayes requires binning, since it relies on estimating likelihoods from accumulated counts [23]. To reduce the cardinality, columns should be discarded as appropriate.

In principle, it has the lowest error rate in comparison with every other classifier. It gives theoretical justification to other classifiers that do not explicitly use the Naïve Bayes theorem; for example, it can be shown, under specific assumptions, that many neural systems and curve-fitting calculations yield the maximum posterior hypothesis, as does the naïve Bayesian classifier. This classification is completed using a probabilistic methodology that calculates the class probabilities and predicts the most likely classes. A Naïve Bayes classifier applies Bayesian statistics with strong independence assumptions on the features that drive the classification procedure. It is a basic grouping scheme that approximates the class-conditional likelihood by assuming that the attributes are conditionally independent, given the class label c. The conditional probability can finally be expressed as follows:

    P(A | C = c) = Π_{i=1}^{n} P(A_i | C = c),  (10)

where the attribute set A = {A1, A2, ..., An} contains n attributes. With the conditional independence assumption, instead of calculating the class-conditional probability for every grouping of A, we just estimate the conditional probability of each A_i given C. The latter method is increasingly useful, as it does not entail an incredibly huge collection of training data to obtain a reasonable estimate of the likelihood.
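Equations (10)-(12) amount to multiplying the per-attribute likelihoods by the class prior and taking the argmax. A minimal discrete version, with hypothetical binary attributes and frequency estimates smoothed by Laplace's rule, looks like this:

```python
import numpy as np

def train_nb(X, y, n_classes=2):
    """Estimate P(C) and P(A_i = 1 | C) from binary data with Laplace smoothing."""
    priors = np.array([(y == c).mean() for c in range(n_classes)])
    likelihood = np.array([(X[y == c].sum(axis=0) + 1) / ((y == c).sum() + 2)
                           for c in range(n_classes)])
    return priors, likelihood

def predict_nb(x, priors, likelihood):
    """Eq. (12): argmax_c P(C = c) * prod_i P(A_i = x_i | C = c)."""
    probs = likelihood * x + (1 - likelihood) * (1 - x)   # P(A_i = x_i | C)
    scores = priors * probs.prod(axis=1)
    return int(scores.argmax())

# Hypothetical data: feature 0 tracks the class, feature 1 is noise.
X = np.array([[1, 0], [1, 1], [1, 0], [0, 1], [0, 0], [0, 1]])
y = np.array([1, 1, 1, 0, 0, 0])
priors, likelihood = train_nb(X, y)
print(predict_nb(np.array([1, 1]), priors, likelihood))  # 1
```

Dropping the shared denominator P(A) of Eq. (11), as done in `predict_nb`, is exactly the simplification of Eq. (12).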
Figure 6: Basic diagram of the working of the CNN (four convolution + batch normalization + max-pooling blocks, followed by a fully connected layer and a softmax output giving benign or malignant).
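The convolution and pooling operations inside each block of Figure 6 can be illustrated with a minimal NumPy forward pass; this is a sketch of one valid convolution plus 2 × 2 max-pooling, not the authors' trained network, and the ReLU activation is an assumption (the text only says a nonlinear activation is applied).

```python
import numpy as np

def conv2d_valid(x, w):
    """Valid 2D convolution of a single map: output is (H - k + 1, W - k + 1)."""
    k = w.shape[0]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (x[r:r + k, c:c + k] * w).sum()
    return out

def maxpool2(x):
    """2x2 non-overlapping max-pooling (keeps the largest value per tile)."""
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

x = np.random.default_rng(0).normal(size=(28, 28))   # one input feature map
w = np.ones((7, 7)) / 49.0                           # a 7x7 kernel, as in the text
h = np.maximum(conv2d_valid(x, w), 0.0)              # assumed ReLU activation
p = maxpool2(h)
print(h.shape, p.shape)  # (22, 22) (11, 11)
```

The shapes follow the map-size relation of Eq. (14) with no skipping (S = 0): 28 − 7 + 1 = 22 after the convolution, halved to 11 by the pooling.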

Each classifier determines the posterior likelihood for each class C to be used to classify a test sample:

    P(C | A) = P(C) Π_{i=1}^{n} P(A_i | C) / P(A).  (11)

Since P(A) is fixed for each A, it is adequate to choose the class that maximizes the numerator term:

    P(C) Π_{i=1}^{n} P(A_i | C).  (12)

This classifier has a few benefits. It is easy to use among classification approaches, and only one scan of the training data is required. The naïve Bayesian classifier can also simply handle missing attribute values by omitting the corresponding probability when computing the probabilities of membership in each class.

2.6.2. SVM Classification Based on BoV. Bag of Visual Words (BoVW) is a supervised learning model and an extension of the bag-of-words idea from NLP. Bag of Words is used for the classification of images and is used quite widely aside from CNNs. In essence, BoV provides a vocabulary that can best describe the image in terms of extrapolated properties. By generating a bag of visual words, the Computer Vision Toolbox functions can be used to define the image categories. The method produces histograms of occurrences of visual words, and such histograms are used to classify the images. The steps of the support vector machine are presented in Figure 5.

The steps below explain how to set up the images, develop a bag of visual words, and then train and apply a classifier for the image type [24].

It follows 4 simple steps:
(i) Acquire image features of a defined label.
(ii) Develop a visual vocabulary by clustering, accompanied by frequency analysis.
(iii) Classify images based on the generated vocabulary.
(iv) Obtain the best class for the query image.

The algorithm for the bag-of-visual-words classifier based on SVM is as follows:
Level 1: Set up image category sets.
Level 2: Generate the bag of features.
Level 3: Train the image classifier with BoV words.
Level 4: Classify using the SVM classifier.

Based on the operation of the BoV classifier, the images are processed for examination; specifically, the image set is divided into two parts: (1) training and (2) testing subjects. After that, the support vector machine stage produces a visual vocabulary from each package's representative images. In the training process, characteristics are extracted from each training image. Using the nearest-neighbor algorithm, an image histogram function is constructed; the histogram converts the image into a feature vector. Classification is finally done with the assistance of the SVM classifier.

2.6.3. Convolutional Neural Network. The deep learning methodology is used for arranging the information into four target factors and checking its precision for each point. Very few investigations have been carried out using different classifications of this data collection. Deep learning is a kind of AI that manages calculations inspired by brain
structure and function, called artificial neural networks. What separates deep learning from plain neural systems is the large-scale learning that is utilized to drive the advancement of new techniques in an ANN, to process bigger amounts of information, and to improve the precision of data classification [36]. The Convolutional Neural Network (CNN) is an exceptional sort of neural system for processing information in image, text, and sound structures and has worked effectively in practice [25]. The expression "Convolutional Neural Network" comes from a mathematical operation called convolution that these networks employ; the convolution operation is a dot product between the process's input matrices. The working of the CNN is presented in Figure 6, and the basic architecture is presented in Figure 7.

A Convolutional Neural Network relies upon connecting the previous layer's local territory to the following layer. Spatially, the CNN sets up a neighborhood relationship by applying a progressive pattern of communication between neurons of adjoining layers. The units of one layer connect to subunits of the former layer, and the preceding layer's unit number forms the map width. The subsequent version takes motivation from LeNet-5. This standard of enforcing local connectivity is portrayed in Figure 8. As in customary multilayer systems, the CNN shares weights, so the number of trainable parameters does not expand considerably with the input dimensions. The main elements of the CNN two-layer classifier are explained below.

(1) The Convolutional Layer. It behaves as 2D filtering between the x input images and a filter bank w, producing a further assortment of h images. Input-output correspondences are given by a connection table CT; filter responses are combined linearly from the inputs associated with the same output image. A connection of the form (input image, filter, output image) is framed through the entries of the table. This layer performs the mapping

    h_j = Σ_{i,k ∈ CT_{i,k,j}} x_i ∗ w_k,  (13)

where ∗ denotes true 2D convolution. A given layer's filters w_k have the same size, and x_i determines the size of the output image h_j alongside the input value. As with standard multilayer systems, a nonlinear activation function is then applied.

(2) Pooling Layer. The pooling layer is meant not only to lessen the computational complexity but also to perform a direct selection of features. The input images are tiled into uncorrelated subareas, from each of which just one output value is recovered. The maximum or the average are the mainstream choices, ordinarily named max-pooling and avg-pooling. The feature vector obtained at some stage, or the collected outcomes, are rearranged into a 1D structure. The base layer is directly connected to the classification process, with the same number of neurons as classes. With a softmax activation function, the outputs are normalized and in this way estimate the posterior class probabilities. Various measuring techniques, such as the size and number of feature maps, kernel sizes, skipping factors, and the connection table, are used in the convolution layer.

In the proposed system, the kernel size utilized by the connected layer is 7 × 7, the number of feature maps is 6 for the main convolution layer, and the second convolution layer is equivalent to 12. The skipping component in the network decides the horizontal and vertical pixel number by which the kernel will skip between ensuing convolutions. For each layer with map sizes (M_x, M_y), kernel sizes (K_x, K_y), and skipping factors (S_x, S_y), the size relationship of the output maps as per the above parameters is

    M_x^n = (M_x^{n−1} − K_x^n)/(S_x^n + 1) + 1,
    M_y^n = (M_y^{n−1} − K_y^n)/(S_y^n + 1) + 1,  (14)

where the layer is denoted by the index n. Each map in layer L_n is associated with at most M_{n−1} maps in layer L_{n−1}. Neurons of a given map share their weights yet have distinct receptive fields.

The input layer has a feature map corresponding to the size of the standardized character image. In this exploration, the weights are arbitrarily picked from a uniform distribution inside the range [−1/fan-in, 1/fan-in], where fan-in gives the number of contributions to a hidden unit; for the CNN, this relies upon the number of input feature maps and the size of the receptive field. Likewise, in the CNN case, max-pooling is a nonlinear downsampling: the input image is partitioned into a collection of nonoverlapping rectangles, and the outputs are determined as the highest value of each such subregion [26].

For two reasons, max-pooling is useful in vision: (a) it reduces the numerical sophistication of the upper strata, and (b) it provides different types of translation invariance.

The second convolution layer comprises six feature maps which are obtained by applying kernels of size 5 × 5 to the input feature map. The subsampling layer is shown as the third layer, and it is employed in the downsampling of the image; for downsampling, the max-pooling method is employed over a 2 × 2 region of the input image. The convolution layer, which is the third layer, converts the image to a partition size of 5 × 5 based on the feedback from the past
The max-pooling is normally profitable given that it adds a layer. At this stage, the number of feature maps acquired
slight invariance to interpretation and distortion, which in equivalents 12. Again, the subinspecting is acted in the
turn prompts quicker assembly and better speculation. fourth layer with the procedure max-pooling in the area of
size 4 × 4. The fifth layer is a fully associated layer that drives
(3) Fully Connected Layer. In the layered system, a fully a classifier to give a feedforward yield.
connected layer acts as a base layer. Either the system switches The training part considers the elements in layers 1–4 are
convolutional and max-pooling layers with the 1D feature the extraction part of the trainable element, and the classifier

[Architecture schematic: input 28 × 28 → convolution, 6 feature maps @24 × 24 → subsampling, @12 × 12 → convolution, 12 feature maps @8 × 8 → subsampling, @4 × 4 → convolution → output 10@1 × 1.]
Figure 7: The Convolutional Neural Network basic architecture.
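The feature-map sizes of this architecture follow directly from equation (14) together with 2 × 2 max-pooling. The short sketch below is an illustration only (the paper's implementation was in MATLAB; the 5 × 5 kernels and stride 1 are read off Figure 7 and the surrounding text):

```python
import numpy as np

def conv_out(m, k, s=1):
    # Output map size from equation (14): M_n = (M_{n-1} - K_n) / S_n + 1
    return (m - k) // s + 1

def max_pool2d(x, p):
    # Nonoverlapping p x p max-pooling of a 2D feature map
    h, w = x.shape
    return x.reshape(h // p, p, w // p, p).max(axis=(1, 3))

# Feature-map size chain of Figure 7: 28 -> 24 -> 12 -> 8 -> 4
s = conv_out(28, 5)                            # 5 x 5 convolution: 24
s = max_pool2d(np.zeros((s, s)), 2).shape[0]   # 2 x 2 max-pooling: 12
s = conv_out(s, 5)                             # 5 x 5 convolution: 8
s = max_pool2d(np.zeros((s, s)), 2).shape[0]   # 2 x 2 max-pooling: 4
print(s)  # 4
```

Each pooling step keeps only the maximum of every 2 × 2 block, which is what gives the slight translation invariance described above.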

is the subsequent layer. Images are partitioned into a test set and a training set. Every digitized image was size-normalized to a fixed 28 × 28-pixel scale. In the original dataset, each pixel of the image is represented by a value somewhere within the range of 0 to 255, where 0 is black, 255 is white, and everything in between is a different shade of gray. An image is represented as a 1-dimensional array of 784 (28 × 28) float values from 0 to 1 (0 means black, 1 means white). When we use the dataset, we split the training and test arrays into mini-batches.

2.6.4. Performance Analysis. The following performance metrics are obtained from the segmentation and classification results.

Detection part:

Jaccard index: J(A, B) = S(A ∩ B) / S(A ∪ B),

Dice overlap index (DOI): D(A, B) = 2 × (A ∩ B) / (A + B),

Similarity index: SI = 2 × TP / (2 × TP + FP + FN),    (15)

Absolute volume measurement error: AVME = (V_automatic / V_manual − 1) × 100%,

Figure of merit: ε = 1 − |V_manual − V_automatic| / V_manual.
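For binary segmentation masks these overlap measures reduce to simple set counts (and the similarity index coincides with Dice). A small numpy sketch with invented masks, not the paper's data:

```python
import numpy as np

def detection_metrics(auto_mask, manual_mask):
    """Segmentation-overlap metrics of equation (15) for two binary masks."""
    a = auto_mask.astype(bool)
    b = manual_mask.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    jaccard = inter / union
    dice = 2 * inter / (a.sum() + b.sum())        # equals SI for binary masks
    v_auto, v_manual = a.sum(), b.sum()
    avme = (v_auto / v_manual - 1) * 100          # absolute volume measurement error, %
    fom = 1 - abs(v_manual - v_auto) / v_manual   # figure of merit
    return jaccard, dice, avme, fom

auto = np.array([[1, 1, 0], [0, 1, 0]])     # automatic segmentation (illustrative)
manual = np.array([[1, 1, 0], [0, 1, 1]])   # manual (radiologist) segmentation
j, d, avme, fom = detection_metrics(auto, manual)
# intersection = 3, union = 4 -> Jaccard 0.75, Dice 6/7, AVME -25%, FOM 0.75
```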

Classification part:

Sensitivity = TP / (TP + FN) × 100%,

Specificity = TN / (TN + FP) × 100%,    (16)

Accuracy = (TP + TN) / (TP + FP + FN + TN) × 100%.

3. Results and Discussion

The suggested strategy was implemented in MATLAB on a Core 2 Duo configuration. Initially, the preprocessing technique is applied to enhance the image. Next, segmentation is applied to extract the boundary region of the tumor. The result is shown in Figures 9 and 10.

In our approach, after applying the fitness function of the GA, the optimized features considered are mean, variance, skewness, kurtosis, and energy. The result is shown in Table 1. The classification cycle is divided into two parts (i.e., the training part and the evaluation part). First, known data (i.e., 24 features × 46 images) are given to the training classifier in the training part. Second, unknown data are provided to the classifier in the testing phase, and the classification is carried out after the training part. The classification's accuracy rate and error rate depend on the quality of the training.

Figures 11–13 show the results of the GA, and Figure 14 shows the visual word occurrence of the BoVW classifier. Here, the x-axis represents the visual word index, and the y-axis represents the frequency of occurrence in the corresponding images. The obtained accuracy and confusion matrices are shown in Figures 15 and 16. The obtained accuracy

[Schematic: each unit in layer m + 1 connects to a local region of layer m, whose units in turn connect to local regions of layer m − 1.]

Figure 8: Local connections pattern representation.
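The saving from local connectivity and weight sharing (Figure 8) can be made concrete with a quick weight count for a 28 × 28 input mapped to a 24 × 24 feature map through 5 × 5 receptive fields (sizes taken from the architecture of Figure 7; biases ignored):

```python
# Weights needed to map a 28 x 28 input to one 24 x 24 feature map (no biases)
in_units, out_units, kernel = 28 * 28, 24 * 24, 5 * 5

fully_connected = in_units * out_units   # every output unit sees every input pixel
locally_connected = out_units * kernel   # each output unit sees only one 5 x 5 patch
shared_kernel = kernel                   # CNN: a single 5 x 5 kernel shared by all units

print(fully_connected, locally_connected, shared_kernel)  # 451584 14400 25
```

This is why the number of trainable parameters of a CNN does not grow considerably with the input dimensions.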

Figure 9: Preprocessing results.

of the proposed classifiers is 90% and 97.3%. Table 1 shows the performance values of the suggested classifier, Table 2 presents the statistical analysis of the brain images, and Table 3 presents the performance comparisons.

Further, the findings achieved an average Dice similarity coefficient of 0.82, suggesting close agreement between the computerized (machine-extracted) tumor areas and the tumor regions extracted manually by radiologists. The results of our current technique demonstrated the importance of the quality parameters and accuracy compared to state-of-the-art techniques.

In CNN, the ResNet-50 model consists of 5 stages, each with a convolution block and an identity block. Each convolution block has three layers, and every identity block has three convolution layers too. ResNet-50 has more than 23 million trainable parameters. Figures 17 and 18 show the first section of ResNet and the first convolution layer weights. A confusion matrix is a table that is used to describe the output of a classification model over a set of test data with known true values; it enables the output of an algorithm to be visualized. Each row of the matrix represents the instances of a predicted class, while each column represents the instances of a real class (or vice versa). Figure 19 shows the matrices obtained for precision and uncertainty. The accuracy obtained is 98.5 percent. Figures 20 and 21 show the performance plots of the different classifiers.
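The percentage metrics of equation (16) follow directly from confusion-matrix counts. A minimal sketch with illustrative counts (hypothetical numbers, not the paper's experimental data):

```python
def classification_metrics(tp, fn, fp, tn):
    """Percentage metrics of equation (16) from confusion-matrix counts."""
    sensitivity = 100 * tp / (tp + fn)
    specificity = 100 * tn / (tn + fp)
    precision = 100 * tp / (tp + fp)
    accuracy = 100 * (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, precision, accuracy

# Illustrative counts only
sens, spec, prec, acc = classification_metrics(tp=65, fn=10, fp=5, tn=70)
print(round(sens, 1), round(spec, 1), round(prec, 1), round(acc, 1))
# 86.7 93.3 92.9 90.0
```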

Figure 10: Result of the proposed segmentation in FLAIR and T2 images.



Table 1: Optimized features extracted after applying the genetic algorithm (first-order statistics (FOS) features).

| Image no. | Mean  | Standard deviation | Skewness | Kurtosis | Entropy |
|-----------|-------|--------------------|----------|----------|---------|
| 1         | 0.011 | 0.019105           | 2.274177 | 7.10785  | 2.68715 |
| 2         | 0.006 | 0.010301           | 1.605176 | 4.28244  | 2.05357 |
| 3         | 0.008 | 0.010716           | 1.578003 | 4.13867  | 2.57759 |
| 4         | 0.010 | 0.01203            | 0.826578 | 2.47775  | 2.77483 |
| 5         | 0.011 | 0.01429            | 0.945757 | 2.65592  | 2.23901 |
| 6         | 0.010 | 0.008143           | 0.419944 | 2.28365  | 2.81441 |
| 7         | 0.014 | 0.01945            | 2.495946 | 12.0050  | 3.11189 |
| 8         | 0.012 | 0.01388            | 1.159288 | 3.16243  | 3.16398 |
| 9         | 0.014 | 0.02124            | 2.216729 | 9.24762  | 2.65219 |
| 10        | 0.008 | 0.010971           | 1.308686 | 4.27185  | 2.4252  |
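The first-order statistics in Table 1 can be computed directly from the gray levels of a region. A numpy-only sketch (the 32-bin histogram used for the entropy is an assumption, since the paper does not state its binning, and the random region is synthetic):

```python
import numpy as np

def first_order_statistics(region, bins=32):
    """Mean, standard deviation, skewness, kurtosis, and entropy of a gray-level region."""
    x = region.astype(float).ravel()
    mu = x.mean()
    sigma = x.std()
    skewness = ((x - mu) ** 3).mean() / sigma ** 3
    kurtosis = ((x - mu) ** 4).mean() / sigma ** 4
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                          # drop empty bins before taking logs
    entropy = -(p * np.log2(p)).sum()     # Shannon entropy of the gray-level histogram
    return mu, sigma, skewness, kurtosis, entropy

rng = np.random.default_rng(0)
stats = first_order_statistics(rng.random((64, 64)))   # synthetic 64 x 64 region
```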

[Plot: mean score versus generation (0–200); legend: estimate, external, internal.]
Figure 11: Feature selection result using GA.

[Surface plot of the fitness function, with the minimum marked.]
Figure 12: Fitness function using GA.

[Plots: average distance between individuals versus generation, and best, worst, and mean scores versus generation (1–200).]
Figure 13: Crossover and mutation result.

[Plot: frequency of occurrence versus visual word index (0–600).]
Figure 14: Visual word occurrence of the BoVW.
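A histogram like the one in Figure 14 comes from assigning each local descriptor of an image to its nearest visual word. A minimal numpy sketch of that assignment step (the 2D descriptors and the two-word codebook are invented for illustration; in practice the codebook is typically obtained by clustering training descriptors):

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return the normalized word-occurrence histogram."""
    # Squared Euclidean distance between every descriptor and every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])                     # hypothetical 2-word codebook
descs = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.8], [0.2, -0.1]])
h = bovw_histogram(descs, codebook)   # two descriptors per word -> [0.5, 0.5]
```

The resulting fixed-length histogram is what the SVM classifier consumes, regardless of how many descriptors each image produced.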

| Output class \ Target class | 0          | 1          | Row summary    |
|-----------------------------|------------|------------|----------------|
| 0                           | 65 (43.3%) | 10 (6.7%)  | 86.6%          |
| 1                           | 5 (3.3%)   | 70 (46.7%) | 84.0%          |
| Column summary              | 92.85%     | 93.3%      | 90.0% (overall)|
Figure 15: Confusion matrix of Naı̈ve Bayes.

| Output class \ Target class | 0          | 1          | Row summary    |
|-----------------------------|------------|------------|----------------|
| 0                           | 72 (48.0%) | 3 (2.0%)   | 98.60%         |
| 1                           | 1 (0.7%)   | 74 (49.3%) | 96.10%         |
| Column summary              | 97.1%      | 96.0%      | 97.3% (overall)|
Figure 16: Confusion matrix of BOVW based SVM classifier.

Table 2: Statistical analysis of the brain images.

| Image | Dice coefficient | Jaccard's coefficient | Spatial overlap | AVI     | Spatial overlap | VOI of segmented ROI | Accuracy | Specificity | Time elapsed | Figure of merit |
|-------|------------------|-----------------------|-----------------|---------|-----------------|----------------------|----------|-------------|--------------|-----------------|
| 1     | 0.85             | 0.81                  | 0.891           | 24.349  | 0.892           | 859                  | 98.58    | 99.5838     | 0.687        | 0.826           |
| 2     | 0.89             | 0.79                  | 0.833           | −28.549 | 0.834           | 532                  | 98.91    | 99.6829     | 0.7011       | 0.799           |
| 3     | 0.87             | 0.88                  | 0.912           | −19.46  | 0.97            | 577                  | 98.88    | 99.6765     | 0.6835       | 0.978           |
| 4     | 0.79             | 0.81                  | 0.888           | −22.163 | 0.884           | 723                  | 99.02    | 99.6969     | 0.6524       | 0.787           |
| 5     | 0.73             | 0.89                  | 0.967           | −7.889  | 0.961           | 785                  | 99.18    | 99.8522     | 0.9601       | 0.232           |
| 6     | 0.85             | 0.82                  | 0.878           | 29.862  | 0.875           | 935                  | 99.16    | 99.8343     | 0.980        | 0.612           |
| 7     | 0.87             | 0.88                  | 0.914           | 20.134  | 0.917           | 925                  | 99.11    | 99.7590     | 0.7404       | 0.939           |
| 8     | 0.89             | 0.71                  | 0.762           | −39.225 | 0.768           | 444                  | 99.06    | 99.8163     | 0.8194       | 0.967           |
| 9     | 0.85             | 0.89                  | 0.938           | −13.315 | 0.939           | 794                  | 98.88    | 99.5867     | 0.677        | 0.653           |
| 10    | 0.87             | 0.88                  | 0.913           | −16.892 | 0.913           | 775                  | 98.73    | 99.7162     | 0.699        | 0.477           |

Table 3: Performance comparison.

| Classifier  | Naïve Bayes (%) | BoVW-based SVM (%) | CNN (%) |
|-------------|-----------------|--------------------|---------|
| Accuracy    | 90              | 97.3               | 98.5    |
| Error       | 10              | 2.7                | 1.5     |
| Sensitivity | 86.6            | 98.6               | 98.6    |
| Specificity | 84              | 96.1               | 97      |
| Precision   | 93.3            | 96                 | 98      |

[Layer graph of the first section of ResNet-50: input → conv1 → bn_conv1 → relu → max pooling → res2a branch layers (branch2a, branch2b, branch2c with batch normalization and relu, plus shortcut branch1) → add → relu → res2b branch layers.]
Figure 17: First section of ResNet.
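The identity shortcut that defines these blocks can be sketched without any deep-learning framework. The toy block below only illustrates the principle y = relu(F(x) + x) using 1 × 1 channel mixings as stand-ins; a real ResNet-50 bottleneck uses 1 × 1, 3 × 3, and 1 × 1 convolutions with batch normalization, so treat this purely as a shape-level sketch:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def bottleneck_sketch(x, w1, w2):
    """Identity-shortcut sketch: y = relu(F(x) + x), where F squeezes
    and re-expands the channel dimension (a stand-in for the three
    convolution layers of a real ResNet-50 bottleneck block)."""
    f = relu(x @ w1) @ w2    # residual branch F(x)
    return relu(f + x)       # identity skip connection

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 8))           # 4 "pixels", 8 channels
w1 = rng.standard_normal((8, 2)) * 0.1    # squeeze to 2 channels
w2 = rng.standard_normal((2, 8)) * 0.1    # expand back to 8 channels
y = bottleneck_sketch(x, w1, w2)          # same shape as x
```

With zero weights the branch vanishes and the block reduces to relu(x), which is the property that makes very deep stacks of such blocks trainable.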

Figure 18: First convolution layer weights.

| Output class \ Target class | Benign         | Malignant         |
|-----------------------------|----------------|-------------------|
| Benign                      | 99.5% (0.995)  | 0.5% (0.0555)     |
| Malignant                   | 2.5% (0.025553)| 97.5% (0.97556)   |

Figure 19: The confusion matrix of proposed CNN.

[Bar plot: accuracy and error (%) for the SVM, Naive Bayes, BoVW-SVM, and CNN classifiers.]
Figure 20: Accuracy and error plot.

[Bar plot: sensitivity, specificity, and precision (%) for the SVM, Naive Bayes, BoVW-SVM, and CNN classifiers.]
Figure 21: Accuracy, sensitivity, and precision plot.

4. Conclusion

Medical image segmentation is a challenging issue due to the complexity of the images, as well as the lack of anatomical models that fully capture the potential deformations in each structure. The proposed method works very effectively with respect to the initial cluster size and cluster centers. The segmentation is done by using BWT techniques, keeping the computational cost low. This work recommends a system that requires negligible human intervention to partition the brain tissue. The main aim of the recommended system is to aid human experts or neurosurgeons in identifying patients in minimal time. The experimental results show 98.5% accuracy compared to the state-of-the-art technologies. The computational time, system complexity, and memory space required for executing the algorithms could be further reduced. The same approach can also be used to detect and analyze different pathologies found in other parts of the body (kidney, liver, lungs, etc.). Different classifiers with optimization methodologies can be used in future research to improve accuracy by integrating more effective segmentation and extraction techniques with real-time images and clinical cases, using a wider data set covering various scenarios.

Data Availability

Data will be available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the publication of this paper.

References

[1] M. A. Dorairangaswamy, "A novel invisible and blind watermarking scheme for copyright protection of digital images," IJCSNS International Journal of Computer Science and Network Security, vol. 9, no. 4, 2009.
[2] W.-J. Kim, J. K. Lee, J.-H. Kim, and K.-R. Kwon, "Block-based watermarking using random position key," IJCSNS International Journal of Computer Science and Network Security, vol. 9, no. 2, 2009.
[3] F. Amato, A. López, E. M. Peña-Méndez, P. Vaňhara, A. Hampl, and J. Havel, "Artificial neural networks in medical diagnosis," Journal of Applied Biomedicine, vol. 11, no. 2, pp. 47–58, 2013.
[4] A. Demirhan, M. Toru, and I. Guler, "Segmentation of tumor and edema along with healthy tissues of brain using wavelets and neural networks," IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 4, pp. 1451–1458, 2015.
[5] S. Madhukumar and N. Santhiyakumari, "Evaluation of k-Means and fuzzy C-means segmentation on MR images of brain," The Egyptian Journal of Radiology and Nuclear Medicine, vol. 46, no. 2, pp. 475–479, 2015.
[6] M. T. El-Melegy and H. M. Mokhtar, "Tumor segmentation in brain MRI using a fuzzy approach with class center priors," EURASIP Journal on Image and Video Processing, vol. 2014, 21 pages, 2014.
[7] G. Coatrieux, H. Huang, H. Shu, L. Luo, and C. Roux, "A watermarking-based medical image integrity control system and an image moment signature for tampering characterization," IEEE Journal of Biomedical and Health Informatics, vol. 17, no. 6, pp. 1057–1067, 2013.
[8] M. Arif and G. Wang, "Fast curvelet transform through genetic algorithm for multimodal medical image fusion," Soft Computing, vol. 24, pp. 1815–1836, 2020.
[9] S. Lal and M. Chandra, "Efficient algorithm for contrast enhancement of natural images," The International Arab Journal of Information Technology, vol. 11, no. 1, pp. 95–102, 2014.
[10] B. Willmore, R. J. Prenger, M. C.-K. Wu, and J. L. Gallant, "The Berkeley wavelet transform: a biologically inspired orthogonal wavelet transform," Neural Computation, vol. 20, no. 6, pp. 1537–1564, 2008.
[11] X.-S. Yang, "A new metaheuristic bat-inspired algorithm," in Nature Inspired Cooperative Strategies for Optimization (NISCO 2010), pp. 65–74, Springer, Heidelberg, Germany, 2010.
[12] I. Ali, C. Direkoglu, and M. Sah, "Review of MRI-based brain tumor image segmentation using deep learning methods," in Proceedings of the 12th International Conference on Application of Fuzzy Systems and Soft Computing, pp. 29-30, Vienna, Austria, August 2016.
[13] E. Abdel-Maksoud, M. Elmogy, and R. Al-Awadi, "Brain tumor segmentation based on a hybrid clustering technique," Egyptian Informatics Journal, vol. 16, no. 1, pp. 71–81, 2015.
[14] G. Litjens, T. Kooi, B. E. Bejnordi et al., "A survey on deep learning in medical image analysis," Medical Image Analysis, vol. 42, pp. 60–88, 2017.
[15] B. Devkota, A. Alsadoon, P. W. C. Prasad, A. K. Singh, and A. Elchouemi, "Image segmentation for early stage brain tumor detection using mathematical morphological reconstruction," in Proceedings of the 6th International Conference on Smart Computing and Communications, ICSCC, Kurukshetra, India, December 2017.
[16] D. G. Glan and S. S. Kumar, "Brain tumor detection and segmentation using a wrapper based genetic algorithm for optimized feature set," Cluster Computing, vol. 22, no. 1, pp. 13369–13380, 2018.
[17] B. Willmore, R. J. Prenger, M. C.-K. Wu, and J. L. Gallant, "The Berkeley wavelet transform: a biologically inspired orthogonal wavelet transform," Neural Computation, vol. 20, no. 6, pp. 1537–1564, 2008.
[18] R. Anjali and S. Priya, "An efficient classifier for brain tumor classification," IJCSMC, vol. 6, no. 8, pp. 40–48, 2017, http://www.ijcsmc.com.
[19] M. Angulakshmi and G. G. Lakshmi Priya, "Brain tumor segmentation from MRI using superpixels based spectral clustering," Journal of King Saud University–Computer and Information Sciences, vol. 32, no. 10, pp. 1182–1193, 2018.
[20] D. Zahras and Z. Rustam, "Cervical cancer risk classification based on deep convolutional neural network," in Proceedings of the 2018 International Conference on Applied Information Technology and Innovation (ICAITI), pp. 149–153, IEEE, Padang, Indonesia, September 2018.
[21] D. G. Glan and S. S. Kumar, "An improved tumor segmentation algorithm from T2 and FLAIR multimodality MRI brain images by support vector machine and genetic algorithm," Cogent Engineering, vol. 5, no. 1, Article ID 1470915, 2018.
[22] H. Mohsen, E.-S. A. El-Dahshan, E.-S. M. El-Horbaty, and A.-B. M. Salem, "Classification using deep learning neural networks for brain tumors," Future Computing and Informatics Journal, vol. 3, no. 1, pp. 68–71, 2018.
[23] R. K. Vijila and S. Joseph Jawhar, "Novel technology for lung tumor detection using nano image," IETE Journal of Research, vol. 67, no. 5, pp. 1–15, 2021.
[24] A. C. Jinisha and T. S. S. Rani, "Brain tumor classification using SVM and bag of visual word classifier," in Proceedings of the 2019 International Conference on Recent Advances in Energy-efficient Computing and Communication (ICRAECC), pp. 1–6, IEEE, Nagercoil, India, March 2019.
[25] A. Veeramuthu, S. Meenakshi, and V. Priyadarsini, "Brain image classification using learning machine approach and brain structure analysis," Procedia Computer Science, vol. 50, pp. 388–394, 2015.
[26] S. Kumar, C. Dabas, and S. Godara, "Classification of brain MRI tumor images: a hybrid approach," Information Technology and Quantitative Management (ITQM 2017), Procedia Computer Science, vol. 122, pp. 510–517, 2017.
[27] M. Khalil, H. Ayad, and A. Adib, "Performance evaluation of feature extraction techniques in MR-brain image classification system," Procedia Computer Science, vol. 127, pp. 218–225, 2018.
[28] D. G. Glan and S. S. Kumar, "An optimized approach for SVM based segmentation of MR images of brain tumors," Journal of Advanced Research in Dynamical and Control Systems, vol. 9, no. 8, pp. 80–87, 2017.
[29] S. González-Villà, A. Oliver, Y. Huo, X. Lladó, and B. A. Landman, "Brain structure segmentation in the presence of multiple sclerosis lesions," NeuroImage: Clinical, vol. 22, Article ID 101709, 2019.
[30] H. Mohsen, E.-S. A. El-Dahshan, E.-S. M. El-Horbaty, and A.-B. M. Salem, "Classification using deep learning neural networks for brain tumors," Future Computing and Informatics Journal, vol. 3, no. 1, pp. 68–71, 2018.
[31] J. Masci, U. Meier, D. Ciresan, J. Schmidhuber, and F. Gabriel, "Steel defect classification with max-pooling convolutional neural networks," in Proceedings of the 2012 International Joint Conference on Neural Networks (IJCNN), pp. 1–6, IEEE, Brisbane, QLD, Australia, June 2012.
[32] S. M. Kurian, S. J. Devaraj, and V. P. Vijayan, "Brain tumour detection by gamma DeNoised wavelet segmented entropy classifier," CMC-Computers, Materials & Continua, vol. 69, no. 2, pp. 2093–2109, 2021.
[33] Q. Javaid, M. Arif, and S. Talpur, "Segmentation and classification of calcification and hemorrhage in the brain using fuzzy C-mean and adaptive neuro-fuzzy inference system," Research Journal of Engineering Science, Technology and Innovation, vol. 15, no. 1, pp. 50–63, 2016.
[34] M. Arif, G. Wang, O. Geman, and J. Chen, "Medical image segmentation by combining adaptive artificial bee colony and wavelet packet decomposition," in Dependability in Sensor, Cloud, and Big Data Systems and Applications. DependSys 2019, G. Wang, M. Z. A. Bhuiyan, S. De Capitani di Vimercati, and Y. Ren, Eds., vol. 1123, Springer, Singapore, 2019.
[35] M. Arif, G. Wang, V. E. Balas, and S. Chen, "Band segmentation and detection of DNA by using fast fuzzy C-mean and neuro adaptive fuzzy inference system," in Smart City and Informatization. iSCI 2019, G. Wang, A. El Saddik, X. Lai, G. Martinez Perez, and K.-K. Choo, Eds., vol. 1122, Springer, Singapore, 2019.
[36] S. Albahli, H. T. Rauf, M. Arif, M. T. Nafis, and A. Algosaibi, "Identification of thoracic diseases by exploiting deep neural networks," Computers, Materials & Continua, vol. 66, no. 3, pp. 3139–3149, 2021.
