
CHAPTER 5: LANE DETECTION USING ENTROPY-BASED FUSION MODEL (EBFM) FOR RBIS AND OPTIMIZED DEEP CNN

The third method introduced for lane detection is an entropy-based fusion model for preventing road accidents. The entropy function model combines the outputs acquired by an EW-CSA based deep CNN and the RBIS segmentation technique to detect multiple lanes. In the EW-CSA based deep CNN, an image transformation is first performed to acquire a bird's-eye view picture. After that, the EW-CSA based deep CNN performs lane detection. EW-CSA is built by combining the Earthworm Algorithm (EWA) and the Crow Search Algorithm (CSA), and serves as the training algorithm of the deep CNN.

In the second approach, the RBIS segmentation method is used for multilane detection. Entropy is an essential factor that measures the information content of a picture and can differentiate information from noise. As a result, the entropy function is used to combine the results of the EW-CSA based deep CNN and the RBIS segmentation method, detecting multiple lanes while preserving information successfully.

5.1. THE PROPOSED ENTROPY-BASED FUSION MODEL (EBFM)

The proposed EBFM is developed for multilane detection. Here, two techniques, namely the EW-CSA based deep CNN and the RBIS segmentation method, together with an entropy function, are employed to detect multiple lanes. The lane detection model using the proposed EBFM is illustrated in figure 5.1. The proposed EBFM is based on the entropy measure.

The purpose of the lane detection scheme is to determine lanes from multilane road images to provide secure driving. The EBFM employs the entropy function for multilane detection. The fusion approach is newly developed as a combination of the EW-CSA based deep CNN and RBIS segmentation: it combines the outputs generated in the first and second stages to achieve multilane detection.

Figure 5.1: Lane detection model using EBFM

The input image is depicted in figure 5.2.

Figure 5.2: Sample of the input image

5.2. DETERMINING LANES USING EW-CSA BASED DEEP CNN

Primarily, the EW-CSA based deep CNN technique is used for multiple lane detection. At first, the input image is transformed from one domain to another for processing. The image transformation is performed using inverse perspective mapping (IPM), which transforms the input image sample into a bird's-eye view (BEV). The BEV image is then provided to the deep CNN classifier as input for detecting lanes. The EW-CSA algorithm, a combination of the Earthworm Algorithm and the Crow Search Algorithm, is used for choosing the best weights, which are then used for training the deep CNN. The integration of EWA and CSA in EW-CSA solves both design and optimization problems and avoids the local minima problem. The deep CNN has three types of layers, namely convolutional, pooling, and fully connected layers, and each layer executes a particular task. Training the deep CNN is the first step, where EW-CSA performs the weight initialization. The updates are carried out based on the contribution of the weights towards the minimum error value, and these updates regulate the optimization process. The altered Cauchy mutation equation for EW-CSA is given below:

 k  W k ,m − 1   k  W k ,m  h l ,m 
C k ,l =  K  R − 
 k  W k , m  1 −  k  W k ,m 
l
 (5.1)

where $\delta_k$ denotes a random number in the range [0, 1], $W_{k,m}$ denotes the flight length, $K_l$ is the weight vector, $R$ is a Cauchy-distributed random number, and $h_{l,m}$ is the memory of the best solution in the $m$-th iteration.
Hence, the weights of the deep CNN are updated using EW-CSA, and the weights yielding the minimum error value are retained when the algorithm reaches the maximum iteration. Figure 5.3 depicts the image detected by the EW-CSA based deep CNN classifier.

Figure 5.3: The image detected by EW-CSA based DEEP CNN classifier
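As an illustration of the update in equation (5.1), the following minimal Python sketch applies one Cauchy-mutation step to a weight vector. The function name, the NumPy representation, and the small epsilon guarding against division by zero are assumptions for illustration, not part of the original formulation:

```python
import numpy as np

def ewcsa_cauchy_update(K, W_prev, W_curr, h, rng=np.random.default_rng(), eps=1e-12):
    """One Cauchy-mutation weight update following Eq. (5.1).

    K      -- weight vector K_l of the deep CNN
    W_prev -- flight length at iteration m-1 (W_{k,m-1})
    W_curr -- flight length at iteration m (W_{k,m})
    h      -- memory of the best solution at iteration m (h_{l,m})
    """
    delta = rng.uniform(0.0, 1.0)     # random number delta_k in [0, 1]
    R = rng.standard_cauchy()         # Cauchy-distributed random number R
    num = delta * W_prev + delta * W_curr * h
    den = delta * W_curr * (1.0 - delta * W_curr)
    return K * R - num / (den + eps)  # candidate weights C_{k,l}
```

In a full training loop, this update would be applied at every iteration, and the weights yielding the minimum error would be retained once the maximum iteration is reached, as described above.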

5.3. MULTILANE DETECTION BASED ON RBIS SEGMENTATION METHOD

The segmentation method, called the RBIS segmentation method, is employed on the input image sample for multilane detection; the process is illustrated in this section. In the multilane detection process, the input samples first undergo segmentation, which is performed using the sparking method [49]. The sparking method improves the effectiveness of the image by using a probable threshold value, and applying it to the input image samples yields the segmented image. After that, RBIS segmentation is applied for detecting lanes: the images are separated into grids, and random target selection is performed. Next, the distance between the grids and the targets is computed using the Bhattacharyya distance, and the grids having the least distance are chosen; the chosen grids belong to specific targets. The Bhattacharyya distance determines the relative nearness of two samples and also measures class separability, and it is a more reliable distance measure than other distance measures. The average of the grid points belonging to matching targets is then taken, and this process is repeated until the ultimate targets are determined. Afterwards, the grid points are grouped to split the lanes from the street by using the nearest neighbor distance.
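To make the grid-to-target assignment concrete, the sketch below computes the Bhattacharyya distance between two histograms and assigns each grid to the target with the least distance. Representing a grid by an intensity histogram, and all function names, are illustrative assumptions:

```python
import numpy as np

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance between two histograms (lower = more similar)."""
    p = np.asarray(p, dtype=float) / (np.sum(p) + eps)
    q = np.asarray(q, dtype=float) / (np.sum(q) + eps)
    bc = np.sum(np.sqrt(p * q))       # Bhattacharyya coefficient
    return -np.log(bc + eps)

def assign_grids_to_targets(grid_hists, target_hists):
    """Assign each grid to the target with the least Bhattacharyya distance."""
    return [int(np.argmin([bhattacharyya_distance(g, t) for t in target_hists]))
            for g in grid_hists]
```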

Figure 5.4: The image detected by the RBIS method

The nearest neighbor (NN) distance is easy to employ and can be executed in a small amount of time. When the final targets are evaluated, the NN distance measure gives accurate results. Consequently, the RBIS segmentation method detects road patterns and lanes based on the most frequently occurring instances. Figure 5.4 shows the image detected by the RBIS method.
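A minimal sketch of the nearest-neighbor grouping of grid points is given below; the greedy single-pass grouping and the distance threshold max_gap are assumed illustrative choices, not values from the source:

```python
import numpy as np

def group_by_nn_distance(points, max_gap=5.0):
    """Group grid points whose nearest-neighbor distance is below max_gap."""
    groups = []
    for p in np.asarray(points, dtype=float):
        for g in groups:
            # join an existing group if p lies within max_gap of any member
            if np.min(np.linalg.norm(np.asarray(g) - p, axis=1)) < max_gap:
                g.append(p.tolist())
                break
        else:
            groups.append([p.tolist()])
    return groups
```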

5.4. EBFM FOR LANE DETECTION

The proposed entropy function for multilane detection is elaborated in this section. The fused image contains the information of the source images, and entropy evaluates this information. The output of the EW-CSA based deep CNN and the output of the RBIS segmentation method are fused using the entropy function for successful multilane detection. Considering the earlier stages, the output falls into two categories: each pixel is either lane or road, depending on the NN distance.

The first and second stage outputs are provided to the fusion approach as inputs. The lane and road regions are presented in numerical form as 1 and 2: pixels belonging to the lane are represented by 1, and pixels belonging to the road are represented by 2. The lane pixels are then extracted, and the probability of occurrence of the lane pixels from the first and second stages is computed and presented in matrix form as P1.

The neighborhood probability of occurrence is estimated individually for the outcomes of the first and second stages and represented in matrix form as P2 and P3. Lastly, the fusion is performed by taking the average of the acquired probabilities; that is, the average of P1, P2, and P3 is stored as P. Next, the entropy of the acquired probabilities is estimated by fusing the first and second stage outputs to detect the lanes. The operation is executed by applying pixel-wise processing at all pixel positions for output generation. Then, the position of the next pixel is evaluated, and the sampling process is iterated until all the pixels are processed. The steps of the proposed entropy fusion method are described below:

Step 1: Input

The output images produced by the EW-CSA based deep CNN and the RBIS segmentation technique are given as input to the fusion algorithm. The lanes and the roads in the image are presented in numerical form to separate them into lane pixels and road pixels. Figures 5.5 and 5.6 depict the outputs of the EW-CSA based deep CNN and the RBIS segmentation technique, respectively.

Figure 5.5: Output image generated from EW-CSA based deep CNN

Figure 5.6: Output image generated from the RBIS segmentation method

Step 2: Frame the contextual features based on multilane pixels

Contextual information helps the algorithm choose the relevant contextual features for detecting lane markings. The lane pixels are taken from the output images produced by the EW-CSA based deep CNN and the RBIS segmentation technique: the pixels having value 1 (lane) are considered. The image composed after acquiring the lane information based on the multilane pixels, using the outputs generated by the EW-CSA based deep CNN and RBIS segmentation, is depicted in figure 5.7.

Figure 5.7: Image after acquiring multilane pixels

Step 3: Determine the probability of occurrence


The probability of occurrence is estimated based on the acquired lane information. The two acquired inputs are used for calculating the probability of occurrence, denoted by P1. The image based on the probability of occurrence is depicted in figure 5.8. If a pixel is identified as lane by both approaches, the result is 2/2, and if a pixel is identified as lane by only one of the approaches, the result is 1/2.
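Under this voting convention, a minimal sketch of the computation of P1 might look as follows; the array-based representation of the two stage outputs is an assumption:

```python
import numpy as np

def occurrence_probability(stage1, stage2):
    """P1: fraction of the two stage outputs labeling each pixel as lane (1)."""
    lane_votes = (np.asarray(stage1) == 1).astype(float) \
               + (np.asarray(stage2) == 1).astype(float)
    return lane_votes / 2.0           # 2/2 if both agree on lane, 1/2 if only one
```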

Figure 5.8: Matrix P1 based on the probability of occurrence using the outputs of EW-CSA based deep CNN and RBIS

Step 4: Determine the probability of occurrence based on neighborhood


The pixels belonging to lane marks are labeled as lane pixels. These pixels detect the points at lane marks and are responsible for improving robustness under different illumination conditions. The next step is the estimation of the probability of occurrence based on the neighborhood pixels; it uses the outputs obtained from stages 1 and 2, denoted as P2 and P3, respectively. The results acquired after determining the neighborhood-based probability of occurrence of the lane images are depicted in figures 5.9 and 5.10.
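A possible sketch of this neighborhood estimation is shown below; the choice of a 3×3 mean filter as the neighborhood operation is an assumption for illustration:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_probability(stage_output, size=3):
    """P2 or P3: mean of the lane labels within each pixel's size x size window."""
    lane = (np.asarray(stage_output) == 1).astype(float)
    return uniform_filter(lane, size=size)
```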

Figure 5.9: The matrix P2 representing the probability of occurrence based on the neighborhood using EW-CSA based deep CNN

Figure 5.10: Matrix P3 representing the probability of occurrence based on the neighborhood using the RBIS segmentation method

Step 5: Apply the fusion process by estimating the average of probabilities

The image acquired by taking the average of the probabilities is depicted in figure 5.11. The fusion is performed based on the probabilities acquired from the earlier stages, and their average is estimated using the formula given below:

$$P = \frac{P_1 + P_2 + P_3}{3} \qquad (5.2)$$

Figure 5.11: Image P obtained after taking the average of the probabilities

Step 6: Implement the EBFM

The next step is the computation of entropy for fusing the outputs of both techniques. Entropy is a standard measure [31] of the uncertainty in any data and is used to maximize mutual information in various operations. Several variants of entropy are available, and their properties motivate the preference for a specific operation. The entropy of an image helps to capture the difference between pixel groups or neighboring pixels. Additionally, entropy can be described in terms of the corresponding intensity-level states that an individual pixel can adopt. The entropy of the image pixels is used in quantitative analysis; it is also used to evaluate the details of an image and gives a better comparison between image details. The image acquired after implementing the fusion-based model is depicted in figure 5.12. The entropy of the obtained probabilities is computed using the following formula:

$$F = -P \log(P) \qquad (5.3)$$

where P denotes the probability distribution of pixels in an image.
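Combining equations (5.2) and (5.3), a short sketch of the fusion and entropy computation is given below; the epsilon term guarding log(0) is an added assumption:

```python
import numpy as np

def entropy_fusion(P1, P2, P3, eps=1e-12):
    """Fuse the probability maps (Eq. 5.2) and compute F = -P log(P) (Eq. 5.3)."""
    P = (P1 + P2 + P3) / 3.0          # average of the acquired probabilities
    return -P * np.log(P + eps)       # per-pixel entropy of the fused map
```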

Figure 5.12: Representation of EBFM

Step 7: Detect lanes and roads based on a predefined threshold

This is the last step. A predefined threshold is used for lane detection. The aim of applying the predefined threshold is to differentiate lane pixels from road pixels, and its value is set to 0.145. If the value of a pixel is below the threshold, it represents a lane; if the value of a pixel is higher than the threshold, it represents the road. The main idea behind thresholding is to separate the image pixels into two parts.
Thresholding is a convenient way of performing processing that depends on the various intensities of image regions. It is also used to evaluate image regions whose pixels lie within a particular range. Additionally, thresholding is used in image preprocessing to extract interesting points from the image structure. In this step, the lane pixels are separated from the road pixels by setting the threshold value to 0.145.
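A minimal sketch of this thresholding step, using the lane = 1 and road = 2 convention from the text, is:

```python
import numpy as np

def classify_pixels(F, threshold=0.145):
    """Label pixels by entropy: below the threshold -> lane (1), else road (2)."""
    return np.where(F < threshold, 1, 2)
```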
The lane-detected image based on entropy fusion using the threshold value is depicted in figure 5.13. In the figure, the lane area is represented by 1, and the road area is represented by 2. The final acquired matrix is given below:

Figure 5.13: Resulting output based on the threshold

Figure 5.13 shows the pixel-wise lane detection according to the pixel categories and positions. The advantage of pixel-wise lane detection is that the shape limitation of a lane is avoided. The detection result is enhanced by using the information of the neighboring pixels, which is also used to decide whether a given pixel lies on a boundary or not. Figure 5.14 shows the image detected by the EBFM.

Figure 5.14: An image detected by the EBFM

5.5. RESULTS AND DISCUSSION

The results obtained by the proposed entropy-based lane detection model are presented in this section. The performance of the proposed system is evaluated against the existing methods for a varying number of iterations.

The outcomes of the experiments carried out with the proposed entropy fusion model are depicted in figure 5.15. The input image samples taken from the KITTI database are depicted in figure 5.15(a). Figure 5.15(b) depicts the images detected using the EW-CSA based deep CNN. Correspondingly, figure 5.15(c) depicts the images detected by the RBIS segmentation method. The experimental results acquired by the entropy function for the input samples are shown in figure 5.15(d).


Figure 5.15: Experimental results of EBFM: (a) the original input image, (b) the image detected by the EW-CSA based deep CNN classifier, (c) the image detected by the RBIS method, (d) the image detected by the EBFM

5.5.1. EXPERIMENTAL SETUP

The experiments of the proposed system are carried out in MATLAB on a computer with the Windows 10 operating system and 2 GB of RAM.

5.5.2. DATASET DESCRIPTION

The KITTI vision benchmark dataset is used here. This dataset has already been discussed in chapter 3, section 3.5.2.

5.5.3. PERFORMANCE MEASURES

The performance of the techniques is evaluated using measures such as detection accuracy, specificity, and sensitivity. These measures are described below:

a) Detection accuracy

Detection accuracy is the proportion of lanes that are identified correctly.

$$DA = \frac{TP + TN}{TP + TN + FP + FN} \qquad (5.4)$$

where DA is the detection accuracy, and TP, TN, FP, and FN denote the numbers of true positives, true negatives, false positives, and false negatives, respectively.

b) Sensitivity

The sensitivity measure is used to recognize correctly detected lanes. It is also referred to as the true positive rate (TPR).

$$Sensitivity = \frac{TP}{TP + FN} \qquad (5.5)$$

c) Specificity

The specificity measure is used to recognize the lanes that are rejected correctly. This measure is also known as the true negative rate (TNR).

$$Specificity = \frac{TN}{TN + FP} \qquad (5.6)$$
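For reference, a small sketch computing the three measures of equations (5.4) to (5.6) from raw confusion counts is given below:

```python
def detection_metrics(tp, tn, fp, fn):
    """Detection accuracy, sensitivity (TPR), and specificity (TNR)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # Eq. (5.4)
    sensitivity = tp / (tp + fn)                 # Eq. (5.5)
    specificity = tn / (tn + fp)                 # Eq. (5.6)
    return accuracy, sensitivity, specificity
```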

5.5.4. COMPARATIVE TECHNIQUES

The comparative analysis of the proposed system is carried out against Dense Vanishing Point Estimation (DVPE), Modified Min-Between-Max Thresholding (MMBMT), deep CNN (DCNN), RBIS, and EW-CSA based deep CNN.

5.6. PERFORMANCE ANALYSIS

The performance analysis of the proposed entropy fusion model for varying image sizes and varying numbers of grids is presented in this section. The performance metrics described above are used for the analysis.

5.6.1. ANALYSIS BASED ON VARYING IMAGE SIZE

The performance analysis of the proposed model for varying image sizes, in terms of detection accuracy, is depicted in figure 5.16. Here, with the number of iterations set to 40, the detection accuracy of the proposed method is 0.921 for image size 64×64, 0.890 for 128×128, 0.941 for 192×192, and 0.941 for 256×256.

Figure 5.16: Performance analysis of EBFM based on image size for Detection Accuracy

Similarly, at the 160th iteration, the detection accuracy estimated by the proposed method is 0.924 for image size 64×64, 0.940 for 128×128, 0.959 for 192×192, and 0.969 for 256×256.

Figure 5.17: Performance analysis of EBFM based on image size for Sensitivity

The sensitivity analysis of the proposed model for varying image sizes is depicted in figure 5.17. Here, with the number of iterations set to 40, the sensitivity of the proposed method is 0.937 for image size 64×64, 0.881 for 128×128, 0.923 for 192×192, and 0.959 for 256×256.

Similarly, at the 160th iteration, the sensitivity estimated by the proposed method is 0.946 for image size 64×64, 0.957 for 128×128, 0.971 for 192×192, and 0.976 for 256×256.

Figure 5.18: Performance analysis of EBFM based on image size for Specificity

The specificity analysis of the proposed model for varying image sizes is depicted in figure 5.18. Here, with the number of iterations set to 40, the specificity of the proposed method is 0.774 for image size 64×64, 0.966 for 128×128, 0.956 for 192×192, and 0.948 for 256×256. Similarly, at the 160th iteration, the specificity estimated by the proposed method is 0.771 for image size 64×64, 0.690 for 128×128, 0.700 for 192×192, and 0.707 for 256×256.

Table 5.1: Performance analysis based on image size

No. of iterations | Size of the image | Detection Accuracy | Sensitivity | Specificity
40  | 64×64   | 0.921 | 0.937 | 0.774
40  | 128×128 | 0.890 | 0.881 | 0.966
40  | 192×192 | 0.941 | 0.923 | 0.956
40  | 256×256 | 0.941 | 0.959 | 0.948
160 | 64×64   | 0.924 | 0.946 | 0.771
160 | 128×128 | 0.940 | 0.957 | 0.690
160 | 192×192 | 0.959 | 0.971 | 0.700
160 | 256×256 | 0.969 | 0.976 | 0.707

5.6.2. ANALYSIS BASED ON THE NUMBER OF GRIDS

The performance of the proposed entropy fusion method in terms of detection accuracy is presented for a varying number of grids as well as a varying number of iterations. The detection accuracy is depicted in figure 5.19. When the number of iterations is 40, the resultant detection accuracy of the proposed entropy fusion model is 0.886 for grid size 3, 0.909 for grid size 4, 0.922 for grid size 5, and 0.927 for grid size 6.

Similarly, at the 160th iteration, the detection accuracy computed by the proposed entropy fusion model is 0.935 for grid size 3, 0.935 for grid size 4, 0.961 for grid size 5, and 0.954 for grid size 6.

The sensitivity analysis of the proposed model for a varying number of iterations is depicted in figure 5.20. When the number of iterations is 40, the corresponding sensitivity of the proposed entropy fusion model is 0.952 for grid size 3, 0.860 for grid size 4, 0.915 for grid size 5, and 0.922 for grid size 6.

Similarly, for 160 iterations, the corresponding sensitivity is 0.972 for grid size 3, 0.903 for grid size 4, 0.963 for grid size 5, and 0.955 for grid size 6.

Figure 5.19: Performance analysis of EBFM based on the number of grids for Detection Accuracy

The analysis of EBFM based on specificity for a varying number of iterations is depicted in figure 5.21. When the number of iterations is 40, the corresponding specificity of the EBFM is 0.707 for grid size 3, 0.651 for grid size 4, 0.759 for grid size 5, and 0.773 for grid size 6.

Figure 5.20: Performance analysis of EBFM based on the number of grids for
sensitivity

When the number of iterations is increased to 160, the corresponding specificity calculated by the proposed model is 0.947 for grid size 3, 0.933 for grid size 4, 0.933 for grid size 5, and 0.865 for grid size 6.

Figure 5.21: Performance analysis of EBFM based on the number of grids for Specificity

Table 5.2: Performance analysis of EBFM based on the number of grids

No. of iterations | Grid size | Detection Accuracy | Sensitivity | Specificity
40  | 3 | 0.886 | 0.952 | 0.707
40  | 4 | 0.909 | 0.860 | 0.651
40  | 5 | 0.922 | 0.915 | 0.759
40  | 6 | 0.927 | 0.922 | 0.773
160 | 3 | 0.935 | 0.972 | 0.947
160 | 4 | 0.935 | 0.903 | 0.933
160 | 5 | 0.961 | 0.963 | 0.786
160 | 6 | 0.954 | 0.955 | 0.865

5.7. COMPARATIVE ANALYSIS

The comparative analysis of the techniques in terms of detection accuracy for a varying number of iterations is depicted in figure 5.22. For 40 iterations, the detection accuracy is 0.910 for DVPE, 0.823 for MMBMT, 0.947 for DCNN, 0.937 for EW-CSA based deep CNN, 0.961 for RBIS segmentation, and 0.984 for the proposed technique.

When the number of iterations is 160, the detection accuracy is 0.914 for DVPE, 0.910 for MMBMT, 0.983 for DCNN, 0.978 for EW-CSA based deep CNN, 0.988 for RBIS segmentation, and 0.991 for the proposed technique.

Figure 5.22: Comparative analysis of the techniques based on the accuracy

The sensitivity-based analysis of the proposed system is depicted in figure 5.23. For 40 iterations, the sensitivity is 0.871 for DVPE, 0.877 for MMBMT, 0.964 for DCNN, 0.957 for EW-CSA based deep CNN, 0.987 for RBIS segmentation, and 0.990 for the proposed technique. For 160 iterations, the sensitivity is 0.871 for DVPE, 0.941 for MMBMT, 0.976 for DCNN, 0.978 for EW-CSA based deep CNN, 0.991 for RBIS segmentation, and 0.992 for the proposed technique.

Figure 5.23: Comparative analysis of the techniques based on Sensitivity

The specificity-based analysis of the proposed system is depicted in figure 5.24. For 40 iterations, the specificity is 0.74 for DVPE, 0.608 for MMBMT, 0.5 for DCNN, 0.667 for EW-CSA based deep CNN, 0.781 for RBIS segmentation, and 0.869 for the proposed technique. For 160 iterations, the specificity is 0.74 for DVPE, 0.744 for MMBMT, 0.896 for DCNN, 0.765 for EW-CSA based deep CNN, 0.886 for RBIS segmentation, and 0.887 for the proposed technique.

Figure 5.24: Comparative analysis of the techniques based on Specificity

Table 5.3: Comparative analysis

Method                | No. of iterations | Detection accuracy | Sensitivity | Specificity
DVPE                  | 40  | 0.910 | 0.871 | 0.74
DVPE                  | 160 | 0.914 | 0.871 | 0.74
MMBMT                 | 40  | 0.823 | 0.877 | 0.608
MMBMT                 | 160 | 0.910 | 0.941 | 0.744
DCNN                  | 40  | 0.947 | 0.964 | 0.5
DCNN                  | 160 | 0.983 | 0.976 | 0.896
EW-CSA based deep CNN | 40  | 0.937 | 0.957 | 0.667
EW-CSA based deep CNN | 160 | 0.978 | 0.978 | 0.765
RBIS                  | 40  | 0.961 | 0.987 | 0.781
RBIS                  | 160 | 0.988 | 0.991 | 0.886
Proposed EBFM method  | 40  | 0.984 | 0.990 | 0.869
Proposed EBFM method  | 160 | 0.991 | 0.992 | 0.887

5.8. COMPARATIVE DISCUSSION

A comparative discussion of the techniques based on measures such as sensitivity, specificity, and detection accuracy is presented in this section. The maximum performance obtained by the methods across the iteration changes is presented in Table 5.3.

Table 5.4 presents the comparative discussion of the techniques. The analysis is done in terms of specificity, sensitivity, and detection accuracy for a varying number of iterations. The detection accuracy achieved is 0.914 for DVPE, 0.910 for MMBMT, 0.983 for DCNN, 0.978 for EW-CSA based deep CNN, and 0.988 for RBIS segmentation, while the proposed system achieves a detection accuracy of 0.991. The sensitivity is 0.871 for DVPE, 0.941 for MMBMT, 0.976 for DCNN, 0.978 for EW-CSA based deep CNN, and 0.991 for the RBIS segmentation method, while the proposed method's sensitivity is 0.992. The specificity is 0.74 for DVPE, 0.744 for MMBMT, 0.886 for DCNN, 0.765 for EW-CSA based deep CNN, and 0.886 for the RBIS segmentation method, while the proposed method's specificity is 0.887. Hence, the proposed method achieves the highest detection accuracy, specificity, and sensitivity.

Table 5.4: Comparative discussion

Comparative techniques        | Detection accuracy | Sensitivity | Specificity
DVPE                          | 0.914 | 0.871 | 0.74
MMBMT                         | 0.910 | 0.941 | 0.744
DCNN                          | 0.983 | 0.976 | 0.886
EW-CSA based DCNN             | 0.978 | 0.978 | 0.765
RBIS                          | 0.988 | 0.991 | 0.886
Proposed entropy-based fusion | 0.991 | 0.992 | 0.887

Summary

In this chapter, a method for helping the driver prevent road accidents is proposed. An entropy-based fusion model (EBFM) is used for detecting the lanes on the road. The EBFM combines the outcomes acquired from two stages to identify multiple lanes. In the first stage, multilane detection is performed using the EW-CSA based deep CNN, where EW-CSA is used to train the deep CNN. In the second stage, multilane detection is carried out using the RBIS segmentation technique, in which the sparking strategy is employed for segmentation. Finally, the proposed EBFM makes the ultimate decision by combining the results of both methods based on the entropy measure. The EBFM yields superior results, with a specificity of 0.887, a sensitivity of 0.992, and a detection accuracy of 0.991.
