
Journal of Applied Biology & Biotechnology Vol. 10(01), pp. 213-220, January, 2022

Available online at http://www.jabonline.in
DOI: 10.7324/JABB.2021.100126

An automated segmentation and classification model for banana leaf disease detection

V. Gokula Krishnan1*, J. Deepa2, Pinagadi Venkateswara Rao3, V. Divya4, S. Kaviarasan5

1 Associate Professor, CSIT Department, CVR College of Engineering, Hyderabad, India.
2 Assistant Professor, CSE Department, Easwari Engineering College, Chennai, India.
3 Associate Professor, CSE Department, ACE Engineering College, Hyderabad, India.
4 Assistant Professor, EEE Department, CVR College of Engineering, Hyderabad, India.
5 Assistant Professor, CSE Department, Panimalar Institute of Technology, Chennai, India.

ARTICLE INFO

Article history:
Received on: July 30, 2021
Accepted on: October 31, 2021
Available Online: January 07, 2022

Key words:
Agriculture, banana leaf, classification, disease detection, deep learning, segmentation

ABSTRACT

Productivity in agriculture is a major factor in the economy, so disease detection in plants plays a significant role in agriculture. If sufficient care is not taken in this regard, diseases can have major impacts on plants by affecting the quality, quantity, or productivity of the respective product or service. In addition to reducing the amount of labor required to monitor huge farms of crops, automatic disease detection detects symptoms at an early stage, i.e., when they first develop on plant leaves. A method for image segmentation is presented in this study, which is utilized for the automatic categorization of banana leaf diseases. The images are used to detect and classify diseases in banana plants. This is a cost-effective and efficient way for farmers to monitor the plant's health. The images must be segmented in order to evaluate and extract information from them. This module of image processing isolates the object of interest from the rest of the image, allowing for more detailed analysis. As a result, the success of higher-level image processing modules is determined by the precision with which the image segmentation modules are carried out. For segmentation and classification, a hybrid fuzzy C-means procedure is used. Additionally, in order to identify banana plant illnesses, the color, shape, and texture characteristics were extracted. To compare the suggested method with existing deep learning methods on diseases such as black sigatoka, yellow sigatoka, dried/old leaves, and banana bacterial wilt, along with healthy plants, several quantitative metrics were investigated.

1. INTRODUCTION

As a major source of food and livelihood for millions of people, agriculture is a vital industry. As a major food crop, bananas have the potential to generate substantial cash for growers and the country's economy [1]. Farmers are not able to take full advantage of the banana crop's potential because of a variety of risks that reduce production. Pests and illnesses pose a major threat to banana production, reducing yields and resulting in substantial financial losses for growers. When plant diseases are correctly identified early on, growers can better control disease severity [2]. Insects and diseases have a wide range of symptomologies. In certain crops, diseases are visible at an early stage, while in others, they will only be apparent at a later stage, by which time it will be too late to save the crop from destruction.

Recently developed sensors for remote sensing have improved the ability to identify diseases in plants. Hyperspectral imaging at close range is one of the new research techniques available to biologists [4]. In recent years, numerous researchers have described their work on the design and deployment of hyperspectral imaging schemes to gather reflectance information from plant leaves [5,6]. The sensor camera is moved over a piece of a leaf, which records the reflection of light. It is highly problematic and expensive to obtain sufficiently labeled illustrations of the earlier illness-infected stage for categorization, due to the subsystem of the hyperspectral camera as well as the non-apparent signs of banana infection [7].
*Corresponding Author
V. Gokula Krishnan, Associate Professor, CSIT Department, CVR College of Engineering, Hyderabad, India. E-mail: gokul_kris143@yahoo.com
© 2022 Krishnan et al. This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License (http://creativecommons.org/licenses/by-nc-sa/3.0/).

Artificial intelligence (AI) [8] helps to recognize plant illnesses based on the plant's presence and visual symptoms that imitate human

behavior are deliberated [9]. This could help reduce the spread of pests and diseases by alerting farmers and speeding up disease detection [10]. The Internet of things, robotics, and satellites are among the cutting-edge technologies transforming agriculture and helping farmers anticipate the future. Studies on wheat [11] and cassava [12] have verified AI-based detection of agricultural diseases. Computerized image-based illness recognition has shown promising results [14]. However, extracting characteristics is computationally demanding and requires expert knowledge for robust depiction [15,16], and only a few substantial, curated crop disease image databases exist. Currently, the PlantVillage data set has over 50,000 photos of diverse crops and illnesses. Convolutional neural networks (CNNs) learned on these images, but did not perform well when using real field images [17]. Until now, crop disease-finding models have typically focused on leaf symptoms, which are the easiest to identify.

Therefore, in this research study, hybrid segmentation is used to segment the affected areas of input leaves and a deep learning classifier is used for classification. The detailed explanation of the research work is given in the following sections. The related study of the existing techniques on banana leaves is given in section 2, whereas the study of the proposed technique is presented in section 3. The validation of the proposed technique against existing techniques in terms of various metrics is given in section 4. Lastly, the conclusion of the research study is provided in section 5.

2. RELATED WORKS

With regard to CNN models, Ferentinos [18] detected plant diseases using simple photos of healthy and diseased leaves. The AlexNetOWTBn and VGG models were utilized in the CNN model to classify leaf diseases. An open database including 87,848 photos of 25 diverse plants in 58 separate classes of plant–illness pairs was used to train the models. Both in-lab and field photos were acquired. There are 58 separate classes of combinations that include particular healthy plants (HP), covering 25 plant species. To produce the best results, the CNN model's parameters are fine-tuned to optimize training. The VGG CNN reached a classification accuracy of 99.53%, with an error rate of only 0.47%. Image segmentation, which affects classification accuracy, was excluded from that investigation.

A banana disease and pest detection scheme utilizing an AI-based DCNN has been proposed by Selvaraj et al. [19] in order to benefit banana growers. Novel and rapid ways of detecting pests and diseases at the right moment will make it easier to monitor and create control measures with greater effectiveness. Just-in-time crop disease detection is a new use of DCNNs and transfer learning. In this experiment, 18,000 photos of banana fields were used. The data set is separated into training (Ttr), validation (Tv), and testing (Tt) sets, with 70%, 20%, and 10%, respectively. A simple random sampling technique was chosen due to its ease of implementation. The accuracy of the different models examined ranged from 70% to 99%. There are a number of ways in which this system can assist in the early detection and management of pests and diseases.

It was postulated that local textural traits might be used for the classification of three significant banana foliar diseases. As part of the pre-processing stage, the resized images are given a contrast enhancement using the contrast limited adaptive histogram equalization technique. Image enhancement and color segmentation are used to recognize the disease-affected regions. Three image transformations are used to convert segmented images to the transform domain (DWT, DTCWT, and Ranklet transform). LBP and its derivatives are used to extract feature vectors from images in the transform domain (ELBP, MeanELBP, and MedianELBP). A 10-fold cross-validation method is used to compare the performance of 5 prominent image classifiers. Tests with ELBP features taken from the DTCWT domain revealed the highest accuracy, precision, sensitivity, and specificity (96.4% and 95.4%, respectively).

For the classification of wheat seeds, Eldem [21] has presented a novel DNN model. More than 100,000 synthetic wheat seed data are generated using a fuzzy C-means (FCM) algorithm. In the overall data set, 70% of the data is training data and 30% is test data. The UCI wheat seed and synthetically created data sets were used to evaluate the proposed model, and a 100% classification success rate was attained.

By imputing missing pixels, Balasubramaniam and Ananthi [22] have introduced a new technique for segmenting partial, nutrient-deficient cropped pictures [23]. In most cases, each image has pixels that convey information about its intensity. An image is considered incomplete when there are missing pixels. Segmentation is not immediately applicable to incomplete images, and attempting to segment nutrient-deficient areas when there are missing pixels can lead to errors. Due to intrinsic faults in imaging equipment or environmental conditions, crop pictures with nutritional deficit may contain missing pixels. Nutrient insufficiency in crop photos is segmented using FCM color clustering.

3. PROPOSED SYSTEM

In this section, an explanation of the proposed scheme is given, wherein Figure 1 shows the working procedure of banana leaf disease classification. Initially, the International Center for Tropical Agriculture (CIAT) banana images were taken as input data and given to the pre-processing technique. A hybrid segmentation called total generalized variation fuzzy C-means (TGVFCMS) is used for segmenting the affected area on the leaves. After segmentation, the data is given to a CNN for final classification.

3.1. Data Set

We have a database of over 18,000 real field photographs of bananas in CIAT's image library. The data sets cover dried/old leaves (DOL), HP, and a balanced 700 images from 5 major diseases such as Fusarium wilt of banana (FWB), black sigatoka (BS), Xanthomonas wilt of banana or banana bacterial wilt (BBW), yellow sigatoka (YS), and banana bunchy top disease. The data set consists of entire plant, leaf, pseudo-stem, fruit bunch, cut fruit

Figure 1: Working procedure of the proposed methodology.

Figure 2: Some of the sample data set images.

Figure 3: (a) Input image. (b) Hue component. (c) Saturation component. (d) Intensity component of the HSI color model.

and corm. But in this proposed research work, we have only used the leaf images to detect the diseases BS, BBW, YS, and DOL in HP. In total, we have only used 9,000 images from the data set. Some of the sample data set images are shown in Figure 2.

3.2. Pre-Processing

Images of the banana leaf will commonly be acquired in the RGB color space model. Color is an important feature, as color value shows variations based on disease infection in the plant. Other color space models like hue saturation intensity (HSI), CMYK, L*a*b, and hue saturation value (HSV) can also be used for analysis. Acquired images are manipulated directly in the RGB color space model, and color transformation is carried out over the image to standardize the color model. HSI and HSV color space models are commonly used in leaf disease identification as they are similar to human perception. In the HSI color model, the hue and intensity of a color are observed, and in the HSV color model, the hue and value of a color are considered for manipulation. Hue is an important color component taken into consideration as it shows the perceived color of an object, which is shown in Figure 3.
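As a rough illustration of this color transformation step, the sketch below converts a single RGB pixel to HSV using Python's standard colorsys module and reads off the hue component; the pixel values are hypothetical, not taken from the CIAT data set.

```python
# Illustrative sketch (not the paper's implementation): converting an RGB
# pixel to HSV so the hue component can be inspected for disease analysis.
import colorsys

# Hypothetical yellowish-brown pixel, RGB normalized to [0, 1].
r, g, b = 0.55, 0.45, 0.10

h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(f"hue={h:.3f} saturation={s:.3f} value={v:.3f}")
# colorsys returns hue on [0, 1); multiply by 360 for degrees.
```

In practice the same conversion would be applied per pixel over the whole image (for example, with an image-processing library) before the hue channel is analyzed.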

During image pre-processing, an image is altered to provide more information about the affected area of a leaf. With good visual interpretation, it presents the affected leaf image area effectively. This procedure does not modify the leaf image's inherent information, but it does change the dynamic range of specific aspects, allowing for a more accurate localization. Image resizing and filtering are the common pre-processing techniques used in banana leaf disease diagnosis. The captured leaf images of different resolution sizes are standardized to a fixed resolution size using image resizing. Image filtering is used to remove unwanted dust regions in leaf images, as there are more possibilities to have dust particles or dew drops on the leaf. The filtering process is carried out using either a low pass or a high pass filter. A low pass filter lessens the amplitude of high frequencies and leaves low frequencies unchanged. A high pass filter retains high frequencies and smooths the amplitude of low frequencies. The median filter and average filter are commonly used filtering techniques to reduce noise in an image. The pre-processed images are given to the segmentation, which is described in the section below.

3.3. Segmentation using TGVFCMS

After pre-processing, the segmentation is carried out. The TGVFCMS technique is robust to noise and edge-preserving. Regularization of TGV images up to a specific order of separation is particularly beneficial for measuring image attributes like noise removal. Here, the second-order TGV is as follows:

TGV²_α(u) = sup { ∫_Ω u div²v dx | v ∈ C²_c(Ω, S^{d×d}), ‖v‖_∞ ≤ α₀, ‖div v‖_∞ ≤ α₁ }    (5)

where C²_c(Ω, S^{d×d}) signifies the vector space of compactly supported matrix fields maintained under the set of symmetric matrices S^{d×d}, and the supremum in Eq. (5) is taken over all such fields on Ω. Calculation of the divergences and norms can be carried out as follows:

(div v)ᵢ = Σ_{j=1}^{d} ∂v_{ij}/∂x_j,    div²v = Σ_{i=1}^{d} ∂²v_{ii}/∂x_i² + 2 Σ_{i<j} ∂²v_{ij}/∂x_i∂x_j    (6)

And

‖v‖_∞ = sup_{x∈Ω} ( Σ_{i=1}^{d} |v_{ii}(x)|² + 2 Σ_{i<j} |v_{ij}(x)|² )^{1/2}

‖div v‖_∞ = sup_{x∈Ω} ( Σ_{i=1}^{d} | Σ_{j=1}^{d} ∂v_{ij}(x)/∂x_j |² )^{1/2}    (7)

Here, ε(v) = (∇v + ∇vᵀ)/2 is the symmetrized derivative.
derivative.
7 derivative.
illustrates
illustrates 7illustrat
illustra
7 7illustrates
T
T T T T T T
/is 2symmetrized
is symmetrized derivative.
derivative. 7 illustrates
7 illustrates
and
andandedge
edge
and
and edge
edge
edge sharpness.
and sharpness.
and sharpness.
sharpness.
sharpness.
and
edge
edge andand edge Our
sharpness.Our
edgeedge
sharpness. Our
Our
Our TGVFCMS
TGVFCMS
sharpness. TGVFCMS
sharpness.
TGVFCMS
sharpness.
TGVFCMS
OurOurTGVFCMS Our was
Our
TGVFCMSwas
Our was
was developed
was
TGVFCMS developed
TGVFCMS
TGVFCMS developed
developed
developed
was to
was
wasdeveloped to
was include
was include
to
developed todeveloped
to include
include
developed
include
developed totoinclude that
to toto
that
include include
include
the
include the smooth
smooth regions
regions areare moremore likely
likely toto bebe created
created by by 2 2 than
than byby 1,
1,
that
that
that the
the
thethat smooth
smooth
smooth
that the that
the that that
smooth regions
regions
regions
the
smooth the thesmooth smooth
regions are
smooth
are
are
regions more
more
more
regionsregions
regions
areare likely
likely
likely
more areare
more to
are
toto
more be
likelymore
more
be
be created
created
likely likely
created
likely
likely
to be to be by
by
by
to
to be
created to 22
be
2
created than
be
than
than by
created
created
created by
by
by 2 than 1,
1,
by1, by
by2by
2 than 22 than
than
than1,
by 1,by
by by
1
TGV
TGV TGV
TGV
TGV regularization
regularization
regularization
regularization
regularization TGVTGV inregularization
inregularization
the the
in in the
regularization
in smoothing
smoothing
the
the smoothing
smoothing
smoothing term
in term
insmoothing the
the term
term
term in in order
order
in
smoothing
in
smoothing
in order
ordertoto
order minimize
minimize
totoorder
term
term
to minimize
minimize
inin
minimize order
toorder toto to
when minimize
minimize
when 22isis22when
added v v
TGVTGVTGV regularization
regularization in inthe the insmoothing
the smoothing termterm term
in in order in orderto minimize
minimize minimize when
when
when 2added
is
is
is added
added
when added
when
2 to to
when
is
2 the
when the
to to
added
is to 2 smooth
the
the
added smooth
the
2
is 2 is
addedissmooth
smoothadded
smooth
toadded the
to regions.
regions.
to
the regions.
to to
regions. the
regions.
the
smooth the
smooth IfIf
smooth
smooth =
If
smooth
If
If
regions.= 0,
v
vv0,
==
=then
then
0,
regions.
0,
0,
regions.
regions. If then
v minimization
then
regions. minimization
then
If = v If
0, v
minimization
If v
minimization
= minimization
If
v =
then
0, = =
0,
then 0,
0, then
then
then
minimization minimizat
minimizatio
minimization
minimization
undesirable
undesirable
undesirable
undesirable
undesirable noise
noise noise
noise
noise andandand
undesirable
undesirable andartifacts
andartifacts artifacts
artifacts
artifacts noise from
noise from from
from
from
and FCM-based
and FCM-based FCM-based
artifacts
artifacts
FCM-based
FCM-based approaches.
fromapproaches.
from approaches.
FCM-based
approaches.
approaches.
FCM-based The
The The
The
The approaches.
approaches. could
could bebe The
Theachieved
achieved at atthe the edge edge neighbors
neighbors where
where 2uuwhere
2neighbors
undesirable
undesirable undesirable noise
noise noise
and and and
artifacts
artifacts artifacts from from
fromFCM-based FCM-based FCM-based approaches. approaches.
approaches. The The could
could
couldThe be
be achieved
becould
could achieved
achieved becould
could could
be achieved be at
at
achieved atbe bethe
the
the
achieved achieved edge
achieved
edge
atedge the
at neighbors
at
neighbors
neighbors
at
the atthe
edge the
the
edge edge edge
edge
neighbors where
where
where 22is
neighbors
neighbors
neighbors 2isuuwhere
ularger
larger
is
isiswhere u2than
larger
where
2larger
where
larger than than
than
than
isu2larger
is isuislarger
2u2ularger isthan
larger
larger thth
than
than
following
following isis thethe framework
framework of of TGV:TGV:
following
following
following following is
is
followingis the
the
the following
following
following framework
framework
framework
isisthe theis is
frameworkis
frameworkthe the
the framework of
ofof TGV:
framework
framework
TGV:
TGV: ofofTGV: TGV: of
of TGV: of TGV:
TGV: u.u. Therefore,
Therefore,
u. Therefore, thethe theratioratio ratio of of ofpositive
positive
positive weights
weights
weights αα αα00weights
and
and α α
α110ααα1and
cancan
can α
be
αcan α0and
bebe
used αα α1can
0α α
u.
u. Therefore,
Therefore,
u. u.Therefore,u.u.the
Therefore, u.
the
Therefore, Therefore,
Therefore,ratio
ratio thethe of
of
ratio the
ratio the
positive
positive
the ofratio
ratio ratio
of of
positive ofof
weights
weights positive
positive
positive
positive weights
00weights weights
0 weights
andand can 01be
beand
andcan 1 be
can
1 can be
011 and 1 can be
TGV
TGVTGVa a( (
kk
TGV uu)=aTGV
k k)=uusup
aa ((TGV
k
)=sup
)= akTGV
sup(∫TGV
u∫TGV
ksup
a (Ω
{{ {{ {{ { {{((
udiv
udiv
u)=Ω)=a∫∫Ω(sup
k k
aΩ
kkk
udiv()vdx
uudiv
Ωasup
=u(vdxu)=)=sup
k kk
∫ vdx
∫ΩvdxΩ
vεsup
| vudiv
|sup
udiv
ε||v kk
k kc Ω
∫Ωvdx
∫vΩεc∫vdx
εudiv Ω
udiv
 udiv
k k
(( ((( ( ()())( )()()( ( ))())( (})})))}})
,|cSym
k
v, Ω
k
Ωcc | v ε 
Sym
vdx
k k
εΩ,vdx

kk
vdx
,Sym
Symk k
|cvc|εΩv
k
dk d
k
ε,εSym
| v
Ω 

k
, ,Ω
d
kdddiv
k
cc Ω
c, Sym
kdiv
,,Sym

II
vSym
k,,divvSym

d d ≤≤aa
,div I
∞∞vv
k I
I k k
,∞
,∞div

l≤
d l
d
div
d
used
v∞, ,div
≤aIav,l llIdiv ≤div

I
used
I I
to
used
used
vThese
v≤l avThese
aThese
These ∞ lThese } } } }}
totoachieve
achieve
achieve
toto
used achieve
achieve
used
aalweights
≤weights
≤ a≤weights
∞∞ l l These
weights
weights
These
to aaused
used
have
These
to
have
balance
These
weights
balance
balance
used
achievea
have
These
have
have
weights
ato
achieve
been
balance
balance
been
to to
weights
achieve
achieve
been
a
weights
weights
been
been
have
among
among
achieve
set
balance
a
setset
have
among
among
to
a
balance
tohave
set
set
have
been
the
a the
a
0.10
the
0.10
to
have
totobeen
been
first
among first
the
balance
balance
balance the first
among
and
0.10
0.10
0.10
set and
been
been
set
and
and
first
first the
0.15,
and
toandand
set
second
and
among
among
among and
the
0.15,
0.10
to set
set
second
and
first
0.15,
second
second
the
second
the the
and
first
derivatives.
first
respectively,
to
to0.15,
0.15,
0.10toand
derivatives.
respectively,
0.100.10
and
first
first
and
respectively,
0.10 and
respectively,
derivatives.
second
respectively,
and
0.15, and
0.15,
derivatives.
and
derivatives.
and and
second
0.15, for
second
second
second
for
0.15,
0.15,
respectively, for
for
derivativ
derivative
derivatives
derivatives.
derivatives.
respectively,
respectively,
for
respectively,
respectively, for forforf
practical
practical
practical
practical
practical reasons.
reasons.
reasons.
reasons.
practicalreasons. TheThe
practical
practical
practical reasons. The
The Thesuggested
suggested
reasons. suggested
reasons.
suggested
reasons.
suggested
The TGVFCMS
TGVFCMS
The The
suggested TGVFCMS
The
TGVFCMS
TGVFCMS suggested
suggested
suggested can
TGVFCMS can produce
can
can produce
can
TGVFCMS
TGVFCMS produce
TGVFCMS
produce
produce
cancan results
results
results
can
results
canresults
producecan produce
produce
produce
results resu
resu
results
(1) (1)
(1)(1) (1) that (1)(1) (1) (1)
(1) practical reasons. The suggested TGVFCMS produce results
that are
that
that
thatare more
aremore
are
are
that more
more
moreresilient
resilient
arethat that resilient
that
resilient
resilient
more are are to
are to
more noise
more
resilient noise
to
more
toto noise
noise
noiseandand
resilient
resilientto to anddetail-preserving
and
resilientanddetail-preserving
noise detail-preserving
to noise
detail-preserving
to
detail-preserving
to noise noise
andand and and
and through
through through
detail-preserving
detail-preserving
through
detail-preserving
detail-preserving through the
the the
the
the
through through
through
throughthe thethet
that are more resilient noise detail-preserving through
where
where l l= = 0,1,
0,1, … … , ,k k − − 1, 1, andand
,0,1, k k ∈ ∈ 
k
,k1,
k,∈  ∈ k 
designates
,k−and designates anan 
order
order
∈kdesignates
kkdesignates
∈∈an ofof TGV,
TGV, definition
definition
definition
definition
definition of of second-order
second-order
of
ofof second-order
definition
second-order
second-order
definition TGV.
TGV.
of of TGV. After
After
second-order
TGV.
second-order
TGV. After
After segmentation,
After segmentation,
segmentation,
TGV.
segmentation,
segmentation,
TGV. After
After classification
classification
classification
segmentation,
segmentation,
classification
classification classificat
classificatio
where
wherewhere
where l = = 0,1,
lwhere 0,1,
where =…
lwhere
lwhere =… 0,1, ,l,kk=l l…
0,1, = 1,1,
−= −0,1,
… 0,1, and
and
k,and k… −… …
−1, and −1,−k1,k1,
∈and
designates
designates
∈and
designates
and anan order
order
designates
order
designates
designates anan ofof
of
order TGV,
TGV,
TGV,
anan
order an
oforder
order order
TGV,
of ofof
TGV,
techniques
ofTGV,
TGV,
techniques
techniques
definition
are
areare
definition
used
are used
of
used for forfor
of
second-order
TGV,definition of second-order TGV. After segmentation, classification
prediction.
forprediction.
second-order
prediction. The
TGV.
The
TGV.
After
segmented
The segmented
After
segmented
segmentation,
segmentation,
sample
sample sample image
image image
classification
classification
==
and
and== aa aa=
and
and
and a
=
and a0
and
(( ((
0, ,
a
aa=a
a
and
1 ,
,
= …
,,
=a
0000 111a
1and …
a
and ,
1aa
,
, … ,a
a
…(( ( ()() ))
a
a,k a
,
k,
− ,1a
a
− a1, …
ka
,
k
signifies
a
,

−1 1a0, ,
signifies
a
, aa… 1, ,

signifies
signifies
signifies …
, ) ) ) ))
athe
, ,
the
a a positive
positive
the
the
signifiesthe
signifies
k 1 positive weight
signifies
positive
positive
signifies
signifiesthe the
weight weight
weight
the
positive
positivethe
weight
the toto TGV.
TGV.
positive
positive to
positive
toto
weight TGV.
TGV.
weight TGV.
weightto weight
weight TGV.
to is isto
TGV.
techniques
techniques
to
shown to
TGV.
shown
is TGV.
TGV.
shown
techniques
in in
techniques
Figure
Figure
in
are
techniques
Figure
techniques
techniques
used
used
4. 4.
are
4.
arefor
used are
used are
prediction.
prediction.
are used
for used
used
for for
prediction. for
forThe
The
prediction. prediction.
prediction.
segmented
prediction.segmented
TheThe The The
The
segmented sample
sample
segmented segmented
segmented
segmented image
image
samplesample sample
sample
sample
image image ima
ima
image

(( (()) ))(( )()(( ) ))


0 0 1 10 k k −−−101 1
1 k −k1−1 k −1 k − −
1 isisshown
shown is in
in
shown Figure
is Figure
is is
shown
is shown in Figure 4. shown
in shown 4.
Figure 4. in in in
Figure Figure
Figure
4. 4. 4.4.
Sym
kk kkkk dd dddd
Sym
SymSym   signifies
ksignifies
kSymSym dk kkthe the
 the
d dthe space
dspace of of symmetric
symmetric k-tensors.
k-tensors. For
ForFor
Sym
Sym Sym   d signifies
signifies
signifies
signifies signifiesthe space
signifies
signifies
space
space
signifies the thespace of
of
ofthe symmetric
the
the
spacesymmetric
symmetric space
space
space
of of of
symmetric
symmetric k-tensors.
ofof symmetric
k-tensors.
k-tensors.
symmetric
symmetric k-tensors. For
For
k-tensors.
k-tensors. k-tensors.
k-tensors.
For For For For
For
each
each component,
component, η ηεcomponent,
εηηM M εεkM k−M 1,k,kk−ε 1,M
,η η ηεl-divergence
εl-divergence
M Mthe , of
1−1,l-divergence
each
each
each
k-tensor
k-tensor
component,
component,
component,
each
field
each
eachcomponent,
field isis
each
each
component,
assumed
assumed
component,
component,
as as
1−η
η−the
follows:
follows:
the
−11ε M
l-divergence
the
thel-divergence

the −k1−M ,l-divergence
1, k −
the k1k−,l-divergence
the of
the
the the
the
of
of
l-divergencesymmetric
symmetric
l-divergence
the
l-divergence
of the
the ofsymmetric
symmetric
symmetric
ofthe the ofof
ofsymmetric
the the
symmetric the3.4.3.4.
symmetric Classification
3.4.Classification
3.4.
symmetric
symmetric
3.4. Classification
Classification
Classification
3.4. 3.4. 3.4. 3.4.
Classification 3.4.using
Classification
using
Classification using Deep
Classification
Classification
using
using Deep Deep
Deep
Deep
using Learning
Learning
using
Learning
using
Deep using
Learning
Learning
using
Deep
Technique
DeepTechnique
Deep
Deep
Learning Technique
TechniqueLearning
Technique
Learning
Learning
Learning Technique
Technique Technique
Technique
Technique
k-tensor
k-tensor
k-tensor field
field
k-tensorfield is
k-tensor k-tensor
k-tensor
isisfieldassumed
assumed
assumed
k-tensor field is assumed as follows: is field field
field
assumed as
as
as
is isis
follows:
follows:assumed
follows:
assumed
assumed as follows: as asas follows:
follows:
follows: The The CNN
The CNN CNN isis a ais matrix
matrix
a matrix input-based
input-based input-based classifier;
classifier;
classifier; so,so, wewe
so, convert
weconvert convert the
the the
The
The CNN CNN
The The isis
CNNThe aThe
CNN aThe matrix
matrix
CNNisCNN CNN
a matrix
is agrayis isaisamatrix
input-based
input-based
matrix amatrix
matrix
input-based input-based
classifier;
classifier;
input-based
input-based
input-based so,
classifier;so, classifier;
classifier;
weweso,
classifier;
classifier; convert
convert
weso,
so, we so,
so,wethe
the
convert we
we
convert convert
convert
convertthe thethet
l l l ll segmented
segmented
segmented image
image image intointo
segmented into a a gray a grayscale
imagescale scaleimage
image
into image
a asas
gray a a
as3232a
scale × ×
32 3232×
image matrix,
matrix,
32
32 matrix,
as a 32 × 32 mat
ll!!∂∂vvη∂∂η++γvvlγηηη!+++γ∂γγ vllη!l+l!∂γ!∂v∂ηv+vηγη++γ γ l l segmented
segmented segmented image
image segmented
segmented into
image into a a
image image
gray
into gray a scale
scale
into into
gray a image
a
image
graygray
scale as
as
scale
scale
image aa 32
32
image
asimage
××a 32
32as as
matrix,
matrix,
a
× a
32 32 × ×32
matrix, 32 matri
matrix
(( (( )) (()) ∑∑(∑
l l
l l l ll!!
segmented image into aCNN gray scale image The as aThe 32layer× 32 matrix,
()() ∑)) ∑∑∑
)∑ l ! ∂ v which
which isisgiven
given as as an an inputinput image
image for
for CNN classification.
classification. The layer
layer
l l l ll which is given as an input image for CNN classification.
div
divdivvv vv=div
div ∑ = == div vv == (2)
(2) which
which which an is is
input
input given
(2)which is given as an input image for CNN classification. The layerlay
isis given
given whichas as an given image
image asas ananfor
for input
inputCNN
CNN image
image for
classification.
classification.
for CNNCNN classification.
classification.
The
The layer
layer The
The la
vεrεηMdiv
ll
vrdiv =rγvε= which whichis given is given
as an as
input an input
image image
for CNN for CNN
classification.classification. The The
layer layer
γMM!11η1!1γγη=∂rη!∂ε!rxMεxγM1∂γ∂1γxxrγε!γγMrεr1εM∂Mγ1x1γ!γγ!γ !∂x∂∂γxxγ γ
ηη ηdiv M
η + γ (2) (2)
(2) (2)(2) (2)(2)
ηη 1rr1ε εM definition
definition
definition
definition
definition ofofthe the
of
of
definition proposed
the
ofdefinition
the
theproposed
definition proposed
definition
proposed
of proposed the of CNNCNN ofCNN
ofthe
proposed CNN
the
CNNtheisisproposed
defined
proposed defined
is
proposed
is defined
isdefined
CNN defined below.
CNN
is below.
CNN
CNN below.
below.
below.
is
defined isdefined
defined
isdefined
below. below.
below.
below.
η γ ! ∂x definition of the proposed CNN is defined below.
Mk
MkMk
is
isthe
Mk
Mk the
is
is multi-index
multi-index
the
isthe
the multi-index
Mk
multi-index
multi-index
Mk of
isof
isthe order
theorder
of
the
of kkof
given
ordergiven
multi-index
ofmulti-index
order
order ofasof
kkkgiven
given
givenasorder
kfollows:
of follows:
as
order
asas kgiven
follows:
order kas
follows:given
follows:
k given asfollows:
follows:
asasfollows:
MkMkisisMk
theis
the multi-indexmulti-index
multi-index oforder
order kgiven
given asfollows:
follows:
M
MkM==kkk ηε
kM ηε
==
M d=d
ηε
ηε
k

M ∑
| |M{{ {{ {{ { {{ }} }} }} } }}

ddd
M
ηε
k
||
k∑
=
ik∑

i
==η=1ηddd
d ==kk
ηε
i diηε
ηη
M k = ηε  | ∑i =i1=1ηii ==1η
1
= ηε
i i
=i |
=
1
=11∑
i i
i== | ∑
d dddd
 kk
η | ∑
|
i ∑
d dd
= kη
dd
ηi i =k=kk (3)
i 1=
i =i 1=k
(3)
(3)(3)
(3) (3)
(3)(3) (3)(3)

The
TheThe
∞∞-norm
The
The -norm-norm
∞-norm
-norm forfor symmetric
symmetric
for
forThe
for Thesymmetric
symmetric
symmetric-norm
∞∞-norm k-vector
k-vectorfor k-vector
for
k-vector
k-vector field
symmetric
symmetric field isisk-vector
field
field
field given
given
is
k-vector
isis asasfield
k-vector
given
given
given follows:
follows:
as
asfield
follows:
field
as isgiven
follows:
follows:given
is isgiven asfollows:
follows:
asasfollows:
symmetrick-vector
The ∞The -norm for symmetric
The ∞-norm -normfor forsymmetric k-vectorfield fieldisisgiven
givenasasfollows:
∞∞
∞ ∞
follows:
sup
supsup    kk!!  2 2   2 22
∑
 =∑  ∑ vvηkkη!∑ xkx)k!)22!2k !kk2!!2vv 
vv∞∞v=v= == vsup supsup !(x∑
sup
 
ηε ∑

sup
ηε
v
v ∞ ∞=ηεv M M=sup
=
 ∑
( 
 ηηε v x
v∑
) )
(( vM( x )vη (ηxη()(xx))(4)  (4)
(4)(4)
(4) (4) (4)(4) (4)
∞xεεΩxΩ
∞x
∞ xεvε∞Ω=

∞  
ηε
M Mη
xε Ω∞xkεkxηxεΩεk!kΩ!Ω
ηε η
ηε ηM !!M
η ηε
kη ηεη
M

M
!kηvηkηk(η!xη!)!    (4)

 xε Ω   k η !           
 
This
This
Thisis
Thisis
This isbecause
because
is
is because
because
This because This
is the
Thisthe
This first
thefirst
the
the
is
because isis gradient
first gradient
because
first
because
first
because thegradient gradient
gradient
thethe
first and
the and
first
gradient the
first
and
andthe
first
and high-order
high-order
gradient
the
gradient
the
the
gradient andhigh-order and
high-order
high-order
and
the and gradients
gradients
the
the
the high-order
gradients
gradients
gradients
high-order
high-order
high-order gradients
gradients
gradients
gradients
generated
generated
generated
generated
generated This
in
in (4)(4)
in
in is
are
are
(4) because
both
both
generated
ingenerated
(4)
(4) are
generated
are
are(4) restricted
both
both
both the
restricted
in
restricted
in(4)
restricted (4)
restricted
(4) firstto
are to gradient
be be
both
to sparse.
sparse.
be restricted
sparse. and the
tobe high-order
be sparse. gradients Figure
Figure4:4:Sample
Sample segmentation
segmentation image.
image.
generated in in
are both areare both
restricted bothto
torestricted
be
be
restricted
sparse.
sparse. totobe
to be sparse. sparse.
sparse. Figure
Figure
Figure 4:
4: Sample
4:Sample
Figure Figure 4:
segmentation
4:Figure
Sample 4: 4: Sample
segmentation
segmentation
Figure
Sample image.
Sample
Sample segmentation
image.
image.
segmentation
segmentation
segmentation image.
image.
image. image.
generated in (4) are both restricted to be sparse. Figure 4: Sample segmentation image.
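Before detailing the CNN layers, the clustering core of the TGVFCMS segmentation above can be sketched in a few lines of Python. This is plain fuzzy C-means (FCM) on gray-level values only — the TGV regularization term is deliberately omitted, and the toy pixel array, cluster count c = 2, and fuzzifier m = 2 are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def fcm_segment(pixels, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means on a 1-D array of gray-level values.

    Returns (memberships u of shape (c, N), centroids v of shape (c,)).
    The TGV smoothing term of TGVFCMS is omitted in this sketch.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(pixels, dtype=float)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                      # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        v = (um @ x) / um.sum(axis=1)       # weighted centroid update
        d = np.abs(x[None, :] - v[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))         # standard FCM membership update
        u /= u.sum(axis=0)
    return u, v

# Toy "image": dark leaf pixels vs. bright lesion pixels.
img = [10, 12, 11, 200, 205, 198, 9, 202]
u, v = fcm_segment(img, c=2)
labels = u.argmax(axis=0)                   # crisp segmentation label per pixel
```

With well-separated intensities the memberships become nearly crisp, so `labels` splits the pixels into a dark and a bright region; TGVFCMS adds the TGV term on top of this objective to keep that split smooth and edge-preserving under noise.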
Krishnan et al.: An automated segmentation and classification model 2022;10(01):213-220 217

Figure 5: Typical architecture of deep CNN.

3.4.1. CNN classification

It consists of different sorts of layers, such as a convolution layer, a ReLU layer, pooling layers, and output levels with fully connected output layers. An image's borders and forms are recognized by the CNN.

3.4.2. Convolutional layer

In CNN construction, the initial layer is always a convolutional layer. The CNN accepts M × N × 1 as an input layer; a two-dimensional image with a single channel has a two-dimensional size of M × N. The filter has the same depth as the input image and is convolved with the image. As a result, the input image is convolved with this curve or form, resulting in the final image. During convolution, a shape that closely approaches the curve in the input image and is signified by the filter ends up with higher values. Equation (8) can be used to represent the convolution process as follows:

s(t) = (x ∗ w)(t)   (8)

3.4.3. Pooling layer

To minimize the size of the data, a pooling layer is used. It involves dividing the matrix data into segments and replacing each segment with a single value. Figure 5 shows that the segmented matrices are replaced by the extreme or average of all values within the current segment.

3.4.4. Fully connected layer

Dimensional changes are made in a fully connected layer in order to accommodate the network layer architecture. Each dimension of the input and output of a fully connected layer is connected to every other. All activations from a previous layer are passed on to the next layer, as in a typical ANN.

3.4.5. Softmax layer

When the Softmax function is called, the input from the preceding layers is translated into probabilities for the classes that total to 1. As a result, this layer plays a significant part in the anticipated output, as the output is the class with the largest probability for the given input. It is given as follows:

f_j(z) = e^{z_j} / Σ_k e^{z_k}   (9)

where e^{z_j} is the standard exponential function for the output vector and k runs over the number of classes in the CNN classifier.

Images may be classified using deep neural networks. These networks are pre-trained to categorize different images, but they may be adapted to our classification problem via transfer learning by modifying essential parameters.

For all the networks, the same hyper-parameter training setup has been maintained. Multiple epochs (a maximum of 25) were used, and the data were divided into mini-batches. As the name suggests, the mini-batch size is the number of samples that are used to update a model's parameters. The training mini-batch size was retained at 7 and the initial learning rate was fixed at 0.0001.

4. RESULT AND DISCUSSION

The proposed system validations are carried out to test the effectiveness of each objective described in the sub-sections below.

4.1. Evaluation Metrics

For classification, the assessment criteria comprise sensitivity (SE), specificity (SP), and accuracy (AC). The performance criteria are defined as follows:

SE = tp / (tp + fn)   (10)

AC = (tp + tn) / (tp + fp + tn + fn)   (11)

SP = tn / (tn + fp)   (12)

where tp, tn, fp, and fn signify the number of true positives, true negatives, false positives, and false negatives.

4.2. Performance Analysis of Segmentation and Classification

Table 1 shows the initial performance analysis of the proposed TGVFCMS with CNN for the overall banana disease data set in terms of SE, SP, and ACC. Here, we compared the proposed method by training on the whole data set without pre-processing and segmentation.
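The convolution, ReLU, pooling, and Softmax operations described above can be sketched with plain NumPy. The 4 × 4 input, the all-ones 2 × 2 filter, and the 2 × 2 pooling window are illustrative choices, not the paper's 32 × 32 network; note that, as is customary in CNNs, the kernel is applied unflipped (cross-correlation):

```python
import numpy as np

def conv2d_valid(x, w):
    """Valid 2-D convolution of image x with filter w, as in Equation (8)
    (kernel applied unflipped, i.e. cross-correlation)."""
    kh, kw = w.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Divide the matrix into size x size segments and keep the extreme of each."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def softmax(z):
    """Equation (9): exponentials normalized so the class scores total 1."""
    e = np.exp(z - z.max())                  # shift for numerical stability
    return e / e.sum()

x = np.arange(16.0).reshape(4, 4)            # toy single-channel image
w = np.ones((2, 2))                          # toy filter
feat = max_pool(relu(conv2d_valid(x, w)))    # conv -> ReLU -> pool
probs = softmax(np.array([2.0, 2.0]))        # two equally scored classes
```

`conv2d_valid` responds most strongly where the image patch matches the filter, which is the "higher values" behavior described in Section 3.4.2.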

Figure 6: Graphical representation of proposed CNN with TGVFCMS in terms of accuracy.

Table 1: Performance evaluation of the proposed CNN with TGVFCMS.

Method                                    | SE (%) | SP (%) | ACC (%)
Without pre-processing                    | 67.26  | 78.04  | 63.09
Without segmentation                      | 68.15  | 78.27  | 78.62
With pre-processing and FCM segmentation  | 69.62  | 79.67  | 89.69
Proposed CNN with TGVFCMS                 | 89.04  | 96.38  | 93.45

Table 2: Performance evaluation of CNN.

Method                                    | SE (%) | SP (%) | ACC (%)
Random forest (RF)                        | 51.33  | 57.33  | 60.00
Decision tree (DT)                        | 59.41  | 60.12  | 62.54
Support vector machine (SVM)              | 63.51  | 65.20  | 60.39
ANN                                       | 63.76  | 71.43  | 73.54
K-nearest neighbor (KNN)                  | 62.60  | 72.70  | 76.32
Recurrent neural network (RNN)            | 78.24  | 77.45  | 81.05
Long short-term memory (LSTM)             | 66.67  | 79.62  | 82.45
Auto-encoder                              | 78.03  | 82.57  | 85.05
Proposed CNN                              | 89.04  | 96.38  | 93.45

After that, the data set is trained with pre-processing and the existing FCM segmentation, which is compared with our proposed TGVFCMS with CNN. Figure 6 shows the accuracy of the proposed method.

From Table 1 and Figure 6, it is clear that the CNN without pre-processing achieved only 67.26% SE, 78.04% SP, and 63.09% ACC. The same CNN method gained about 2% in SE and SP when trained without segmentation. However, the CNN trained with FCM segmentation and pre-processing achieved nearly 69% SE, 79% SP, and 89.69% ACC. The SE, SP, and ACC increased further (by nearly 13%–17%) when the CNN classifier was trained with the TGVFCMS segmentation technique. This proves that our proposed hybrid segmentation technique achieved better performance than the existing FCM. Table 2 shows the performance analysis of the CNN against other existing machine learning and deep learning practices; Figure 7 shows the graphical representation of the CNN in terms of accuracy.

Here, the existing techniques, including LSTM, RF, ANN, DT, SVM, KNN, RNN, and auto-encoder, are compared with the CNN technique. Among these techniques, RF and DT achieved very low performance in terms of SE, SP, and ACC. The SVM, KNN, and ANN achieved nearly 63% SE, 65%–70% SP, and 71% ACC, whereas RNN achieved better performance than LSTM in terms of SE. The auto-encoder achieved higher performance than RNN and LSTM in all metrics. However, the proposed CNN technique achieved 89.04% SE, 96.38% SP, and 93.45% ACC. This is because the CNN is trained with the TGVFCMS segmentation technique, which has better performance than the other existing techniques.

5. CONCLUSION

Automatic identification and classification of illnesses in banana leaves are more accurate using image processing techniques. For disease diagnosis, these technologies reduce the amount of time and money required. Various leaf diseases and their symptoms are described in this paper. MATLAB is used to build a system for detecting banana plant diseases. The image acquisition, pre-processing, and segmentation phases are followed by the classification phase, which is then used to further classify the diseases. An affected area is segmented using TGVFCMS, and then the classification of diseases on the segmented images is carried out using the CNN technique. The experiments are carried out on the CIAT banana image library and validated against existing techniques in terms of SE, SP, and ACC. From the trial results, it is proved that the CNN method achieved 93.45% accuracy, whereas the existing techniques achieved nearly 75%–85% accuracy. In addition to replacing manual methods for identifying banana illnesses, this approach is more accurate than manual methods and will prove to be a valuable tool for farmers and plant pathologists in determining the disease

Figure 7: Graphical representation of proposed CNN in terms of accuracy.

and its control measures. As a result of this method, the production of crops will grow. The algorithm will be optimized for improved outcomes and greater accuracy in the future. It is necessary to study and execute prevention techniques in order to increase the growth and output of banana plantations. In addition, as a future work, the algorithm should be improved to form an early warning system for banana diseases by collecting data on climatic conditions, such as temperature, humidity, water level, soil conditions, etc.

AUTHOR CONTRIBUTIONS

All authors made substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data; took part in drafting the article or revising it critically for important intellectual content; agreed to submit to the current journal; gave final approval of the version to be published; and agree to be accountable for all aspects of the work. All the authors are eligible to be an author as per the International Committee of Medical Journal Editors (ICMJE) requirements/guidelines.

FUNDING

There is no funding to report.

CONFLICTS OF INTEREST

The authors report no financial or any other conflicts of interest in this work.

ETHICAL APPROVALS

This study does not involve experiments on animals or human subjects.

PUBLISHER'S NOTE

This journal remains neutral with regard to jurisdictional claims in published institutional affiliations.
