Chapter 5
The third method introduced for lane detection is an entropy function model for preventing road accidents. The entropy function model combines the outputs of an EW-CSA based deep CNN and the RBIS segmentation technique to detect multiple lanes. In the EW-CSA based deep CNN stage, the input image is first transformed to obtain a bird's-eye view (BEV) picture; the EW-CSA based deep CNN then performs lane detection. EW-CSA is built by combining the Earthworm Algorithm (EWA) and the Crow Search Algorithm (CSA), and serves as the training algorithm of the deep CNN. In the second stage, the RBIS segmentation method is used for multilane detection. Entropy is an essential factor: it measures the information content of a picture and can differentiate information from noise. The entropy function is therefore used to combine the results of the EW-CSA based deep CNN and the RBIS segmentation method, detecting multiple lanes while preserving information.
The proposed EBFM is developed for multilane detection. Two techniques, namely the EW-CSA based deep CNN and the RBIS segmentation method, are employed together with the entropy function to detect multiple lanes. The lane detection model using the proposed EBFM is illustrated in figure 5.1. The EBFM is based on the entropy measure. The purpose of the lane detection scheme is to determine lanes from multi-lane road images to enable secure driving. The EBFM employs the entropy function for multilane detection: the fusion approach is newly developed as a combination of the EW-CSA based deep CNN and RBIS segmentation, and it combines the outputs generated by the first and second stages to achieve multilane detection.
Figure 5.1: Lane detection model using EBFM
Figure 5.2: Sample of the input image
Primarily, the EW-CSA based deep CNN technique is used for multiple lane detection. First, the input image is transformed from one domain to another for processing. The image transformation is performed using inverse perspective mapping (IPM), which transforms the input image into a bird's-eye view (BEV). The BEV image is then provided as input to the deep CNN classifier for detecting lanes. The EW-CSA algorithm, a combination of the earthworm and crow search algorithms, is used for choosing the best weights, and the selected best weights are used for training the deep CNN. The combination of EWA and CSA is used to solve both the design and the optimization problem in EW-CSA, and this integration also addresses the local minima problem. The deep CNN has three types of layers, namely convolutional, pooling, and fully connected layers, each executing a particular task. Training the deep CNN is the first step, where EW-CSA performs the weight initialization. The updates are carried out based on the contribution of the weights towards the minimum error value, and the optimization process is regulated using these updates. Thus, the altered Cauchy mutation equation for EW-CSA is given below:
C_{k,l} = K R × [ (k W_{k,m−1} − k W_{k,m} h_{l,m}) / (k W_{k,m} (1 − k W_{k,m})) ]     (5.1)
where k denotes a random number in the range [0, 1].
Figure 5.3: The image detected by EW-CSA based DEEP CNN classifier
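The IPM transformation used at the start of this stage maps road-plane pixels from the camera view to the bird's-eye view through a 3×3 homography. The sketch below illustrates the mapping on two lane-marking pixels; the matrix H and the helper name `ipm_points` are made up for illustration, since the actual homography depends on the camera calibration.

```python
import numpy as np

# Made-up homography for illustration; in practice H is calibrated
# from four road-plane correspondences between the camera view and BEV.
H = np.array([[1.0, -0.4,   120.0],
              [0.0,  0.6,    40.0],
              [0.0, -0.002,   1.0]])

def ipm_points(H, pts):
    """Apply a homography to N (x, y) image points (inverse perspective mapping)."""
    pts_h = np.column_stack([pts, np.ones(len(pts))])   # homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]               # de-homogenize

pts = np.array([[300.0, 200.0], [340.0, 200.0]])        # two lane-marking pixels
bev = ipm_points(H, pts)
print(bev)
```

With this flat-road homography, points on the same image row land on the same BEV row, which is what makes lane widths roughly constant in the bird's-eye view.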
The RBIS segmentation method for multilane detection is illustrated in this section. In the multilane detection process, the input samples first undergo a segmentation process, performed using the sparking method [49]. The sparking method improves the effectiveness of the image by using a probable threshold value, and it is applied to the input image samples to obtain the segmented image. After that, RBIS segmentation is applied for detecting lanes: the images are separated into grids, and random target selection is performed. Next, the distance between the grids and the targets is computed using the Bhattacharyya distance, and the grids with the least distance are chosen; the chosen grids belong to specific targets. The Bhattacharyya distance determines the relative nearness of two samples and also measures class separability, which makes it a reliable distance measure compared to other distance measures. Here, the average of the grid points belonging to matching targets is taken. This process is repeated until the ultimate targets are determined. Afterwards, the grid points are grouped to split the lanes from the street by using the nearest neighbor distance.
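The grid-and-target procedure above resembles an iterative clustering loop; a minimal sketch under that reading is shown below. The histogram descriptors, the helper names `bhattacharyya` and `rbis_step`, and the fixed iteration count are assumptions of this sketch, not the thesis implementation.

```python
import numpy as np

def bhattacharyya(p, q):
    # Bhattacharyya distance between two normalized histograms.
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    return -np.log(max(bc, 1e-12))       # small epsilon avoids log(0)

def rbis_step(grids, targets):
    """One pass: assign each grid to its nearest target (Bhattacharyya
    distance), then move each target to the mean of its assigned grids."""
    labels = np.array([
        np.argmin([bhattacharyya(g, t) for t in targets]) for g in grids
    ])
    new_targets = np.array([
        grids[labels == k].mean(axis=0) if np.any(labels == k) else targets[k]
        for k in range(len(targets))
    ])
    return labels, new_targets

# Toy example: 6 grid histograms, 2 randomly chosen targets.
rng = np.random.default_rng(0)
grids = rng.random((6, 8))
grids /= grids.sum(axis=1, keepdims=True)     # normalize each histogram
targets = grids[rng.choice(6, size=2, replace=False)]
for _ in range(10):                            # repeat until targets settle
    labels, targets = rbis_step(grids, targets)
print(labels)
```

Each pass mirrors the description above: nearest-target assignment by Bhattacharyya distance, then averaging the grids that belong to matching targets.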
Finally, lanes are identified based on the most frequently occurring instances. Figure 5.4 shows the image detected by the RBIS method.
The proposed entropy function for multilane detection is elaborated in this section. The fused image contains the information of the source images, and entropy evaluates this information. The output of the EW-CSA based deep CNN and the output of the RBIS segmentation method are fused using the entropy function for successful multilane detection. Considering the earlier stages, the output falls into two categories: each pixel is either lane or road, depending on the nearest neighbor (NN) distance. The first and second stage outputs are provided to the fusion approach as input. Lane and road regions are presented in numerical form as 1 and 2: pixels belonging to a lane are represented by 1, and pixels belonging to the road are represented by 2. Now the lane pixels are obtained, and the probability of lane pixels occurring in the first and second stages is computed and presented in matrix form as P1.
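Assuming each stage's output is a label map with 1 = lane and 2 = road, as described above, the occurrence-probability matrix P1 can be sketched as follows; the two 4×4 label maps are made-up values for illustration.

```python
import numpy as np

# Hypothetical 4x4 label maps from the two stages: 1 = lane, 2 = road.
stage1 = np.array([[1, 2, 2, 1],
                   [1, 2, 2, 1],
                   [1, 2, 2, 1],
                   [1, 2, 2, 1]])
stage2 = np.array([[1, 2, 2, 1],
                   [1, 1, 2, 1],
                   [1, 2, 2, 2],
                   [1, 2, 2, 1]])

# P1: per-pixel probability of a lane occurring across the stage outputs.
outputs = np.stack([stage1, stage2])
P1 = (outputs == 1).mean(axis=0)
print(P1)
```

Each entry of P1 is the fraction of stage outputs in which that pixel was labeled as lane, so pixels where the two stages agree get probability 0.0 or 1.0 and disagreements get 0.5.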
Step 1: Input
The output images produced by the EW-CSA based deep CNN and the RBIS segmentation technique are given as input to the fusion algorithm. The lanes and roads in the images are presented in numerical form to separate them into lane pixels and road pixels. Figures 5.5 and 5.6 depict the outputs of the EW-CSA based deep CNN and the RBIS segmentation technique, respectively.
Figure 5.5: Output image generated from EW-CSA based deep CNN
Figure 5.6: Output image generated from the RBIS segmentation method
Contextual information is modeled to help the algorithm choose the relevant contextual features for detecting lane markings. The lane pixels are taken from the output images produced by the EW-CSA based deep CNN and the RBIS segmentation technique: only pixels with value 1 (lane) are considered. The image composed from the lane information, based on the multilane pixels of the outputs generated by the EW-CSA based deep CNN and RBIS segmentation, is depicted in figure 5.7.
Figure 5.7: Image after acquiring multilane pixels
Figure 5.8: Matrix P1 based on the probability of occurrence using the outputs of EW-CSA based deep CNN and RBIS
Figure 5.9: The matrix P2 representing the probability of occurrence based on the neighborhood using the EW-CSA based deep CNN
Step 5: Apply the fusion process by estimating the average of the probabilities

The image acquired by taking the average of the probabilities is depicted in figure 5.11. The fusion is performed based on the probabilities acquired from the earlier stages, and the average of these probabilities is estimated using the formula given below:

P = (P1 + P2 + P3) / 3     (5.2)
Figure 5.11: Image P obtained after taking the average of the probabilities
The next step is the computation of entropy for fusing the outputs of both techniques. Entropy is a standard measure [31] of the uncertainty in any data, and it is used to maximize mutual information in various operations. Many variations of entropy are available, and they motivate the choice of a suitable form for a specific operation. The entropy of an image helps to target the difference between a pixel group and its neighboring pixels. Additionally, entropy can be described in terms of the corresponding states of intensity level that an individual pixel can adopt. In quantitative analysis, the entropy of image pixels is used to evaluate the details of an image and to give a better comparison between image details. The image acquired after implementing the fusion-based model is depicted in figure 5.12. The entropy of the obtained probabilities is computed using the following formula:

F = −P log(P)     (5.3)
This is the last step. A predefined threshold is used for lane detection. The aim of applying the predefined threshold is to differentiate lane pixels from road pixels, and its value is set to 0.145. If the value of a pixel is below the threshold, it represents a lane; if it is higher than the threshold, it represents the road. The main idea behind thresholding is to separate the image pixels into two parts.

Thresholding is a convenient way of processing image regions based on their different intensities. It is also used to evaluate image regions whose pixels fall within a particular range. Additionally, thresholding is used in image preprocessing to extract the interesting points from the image structure. In this step, the lane pixels are separated from the road pixels by setting the threshold value to 0.145.
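Equations (5.2) and (5.3), followed by the 0.145 threshold, can be sketched as below. The matrices P1, P2, and P3 here are random stand-ins for the probabilities produced in the earlier steps, and the clamp that avoids log(0) is an addition of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for the probability matrices from the earlier steps.
P1, P2, P3 = rng.random((3, 4, 4))

P = (P1 + P2 + P3) / 3                      # equation (5.2): fusion by averaging
F = -P * np.log(np.clip(P, 1e-12, None))    # equation (5.3): per-pixel entropy

threshold = 0.145
# Below the threshold -> lane (1); otherwise -> road (2).
result = np.where(F < threshold, 1, 2)
print(result)
```

Pixels whose entropy falls below 0.145 are kept as lane (1) and the rest as road (2), mirroring the thresholding rule above.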
The lane detected image based on the entropy fusion using the threshold value is depicted in figure 5.13. In the figure, the lane area is represented by 1, and the road area is represented by 2. The final acquired matrix is given below:
Figure 5.13 shows pixel-wise lane detection according to the device categories and positions. The advantage of pixel-wise lane detection is that the shape limitation of a lane is avoided. The detection result is enhanced by using the information of the neighboring pixels, which is also used to decide whether a given pixel lies on a boundary or not. Figure 5.14 shows the image detected by the EBFM.
The results obtained by the proposed entropy-based lane detection model are demonstrated in this section. The performance of the proposed system is evaluated against the existing methods for a varying number of iterations.

The outcomes of the experiments performed with the proposed entropy fusion model are depicted in figure 5.15. The input image samples taken from the KITTI database are depicted in figure 5.15(a), and figure 5.15(b) depicts the images detected using the EW-CSA based deep CNN. Correspondingly, figure 5.15(c) depicts the output of the RBIS segmentation method, and the experimental results acquired by the entropy function for the input samples are shown in figure 5.15(d).
Figure 5.15: Experimental results of EBFM: (a) the original input image, (b) the image detected by the EW-CSA based deep CNN classifier, (c) the image detected by the RBIS method, (d) the image detected by EBFM
The experiments of the proposed system are carried out in MATLAB. The computer requires the Windows 10 operating system and 2 GB of RAM. The KITTI vision benchmark dataset is used here; this dataset has already been discussed in chapter 3, section 3.5.2.

The performance of the techniques is evaluated using measures such as detection accuracy, specificity, and sensitivity. These measures are described below:
a) Detection accuracy

Detection accuracy is the proportion of lanes that are identified correctly.

DA = (TruePositive + TrueNegative) / (TruePositive + TrueNegative + FalsePositive + FalseNegative)     (5.4)
b) Sensitivity

Sensitivity is used to recognize correctly detected lanes. It is also referred to as the true positive rate (TPR).

Sensitivity = TruePositive (TP) / (TruePositive (TP) + FalseNegative (FN))     (5.5)
c) Specificity

Specificity is used to recognize the lanes that are rejected correctly. This measure is also known as the true negative rate (TNR).

Specificity = TrueNegative (TN) / (TrueNegative (TN) + FalsePositive (FP))     (5.6)
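Equations (5.4)-(5.6) follow directly from the confusion counts; the counts below are made-up values for illustration.

```python
# Illustrative confusion counts (made-up values).
tp, tn, fp, fn = 90, 80, 10, 5

detection_accuracy = (tp + tn) / (tp + tn + fp + fn)   # equation (5.4)
sensitivity = tp / (tp + fn)                           # equation (5.5): TPR
specificity = tn / (tn + fp)                           # equation (5.6): TNR

print(round(detection_accuracy, 3), round(sensitivity, 3), round(specificity, 3))
# -> 0.919 0.947 0.889
```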
The comparative analysis of the proposed system is carried out against Dense Vanishing Point Estimation (DVPE), Modified Min-Between-Max Thresholding (MMBMT), deep CNN, RBIS, and the EW-CSA based deep CNN.
5.6. PERFORMANCE ANALYSIS
The performance analysis of the proposed entropy fusion model for varying image sizes and varying numbers of grids is presented in this section. The performance metrics described above are used for the analysis.
The detection accuracy analysis of the proposed model for varying image sizes is depicted in figure 5.16. When the number of iterations is 40, the detection accuracy of the proposed method is 0.921 for image size 64×64, 0.890 for 128×128, 0.941 for 192×192, and 0.941 for 256×256.
Figure 5.16: Performance analysis of EBFM based on image size for Detection Accuracy
In the same way, at the 160th iteration, the detection accuracy of the proposed method is 0.924 for image size 64×64, 0.940 for 128×128, 0.959 for 192×192, and 0.969 for 256×256.
Figure 5.17: Performance analysis of EBFM based on image size for Sensitivity
The sensitivity analysis of the proposed model for varying image sizes is depicted in figure 5.17. When the number of iterations is 40, the sensitivity of the proposed method is 0.937 for image size 64×64, 0.881 for 128×128, 0.923 for 192×192, and 0.959 for 256×256. In the same way, at the 160th iteration, the sensitivity is 0.946 for 64×64, 0.957 for 128×128, 0.971 for 192×192, and 0.976 for 256×256.
Figure 5.18: Performance analysis of EBFM based on image size for Specificity
The specificity analysis of the proposed model for varying image sizes is depicted in figure 5.18. When the number of iterations is 40, the specificity of the proposed method is 0.774 for image size 64×64, 0.966 for 128×128, 0.956 for 192×192, and 0.948 for 256×256. In the same way, at the 160th iteration, the specificity is 0.771 for 64×64, 0.690 for 128×128, 0.700 for 192×192, and 0.707 for 256×256.
Table 5.1: Performance analysis based on image size

No. of iterations | Image size | Detection Accuracy | Sensitivity | Specificity
40  | 64×64   | 0.921 | 0.937 | 0.774
40  | 128×128 | 0.890 | 0.881 | 0.966
40  | 192×192 | 0.941 | 0.923 | 0.956
40  | 256×256 | 0.941 | 0.959 | 0.948
160 | 64×64   | 0.924 | 0.946 | 0.771
160 | 128×128 | 0.940 | 0.957 | 0.690
160 | 192×192 | 0.959 | 0.971 | 0.700
160 | 256×256 | 0.969 | 0.976 | 0.707
The performance of the proposed entropy fusion method is presented in terms of detection accuracy for a varying number of grids as well as a varying number of iterations; the detection accuracy analysis is depicted in figure 5.19. When the number of iterations is 40, the resulting detection accuracy of the proposed entropy fusion model is 0.886 with grid size 3, 0.909 with grid size 4, 0.922 with grid size 5, and 0.927 with grid size 6. Similarly, at the 160th iteration, the detection accuracy is 0.935 with grid size 3, 0.935 with grid size 4, 0.961 with grid size 5, and 0.954 with grid size 6.
The sensitivity analysis of the proposed model for a varying number of iterations is depicted in figure 5.20. When the number of iterations is 40, the corresponding sensitivity of the proposed entropy fusion model is 0.952 with grid size 3, 0.860 with grid size 4, 0.915 with grid size 5, and 0.922 with grid size 6. Similarly, at 160 iterations, the sensitivity is 0.972 with grid size 3, 0.903 with grid size 4, 0.963 with grid size 5, and 0.955 with grid size 6.
The specificity analysis of EBFM for a varying number of iterations is depicted in figure 5.21. When the number of iterations is 40, the corresponding specificity is 0.707 with grid size 3, 0.651 with grid size 4, 0.759 with grid size 5, and 0.773 with grid size 6.
Figure 5.20: Performance analysis of EBFM based on the number of grids for Sensitivity
When the number of iterations is increased to 160, the corresponding specificity calculated by the proposed model is 0.947 with grid size 3, 0.933 with grid size 4, 0.933 with grid size 5, and 0.865 with grid size 6.
Figure 5.21: Performance analysis of EBFM based on the number of grids for Specificity
5.7 COMPARATIVE ANALYSIS
The comparative analysis of the techniques in terms of detection accuracy for a varying number of iterations is depicted in figure 5.22. For 40 iterations, the detection accuracy is 0.910 for DVPE, 0.823 for MMBMT, 0.947 for deep CNN, 0.937 for the EW-CSA based deep CNN, 0.961 for RBIS segmentation, and 0.984 for the proposed technique. When the number of iterations is 160, the detection accuracy is 0.914 for DVPE, 0.910 for MMBMT, 0.983 for DCNN, 0.978 for the EW-CSA based deep CNN, 0.988 for RBIS segmentation, and 0.991 for the proposed technique.
The sensitivity-based analysis of the proposed system is depicted in figure 5.23. For 40 iterations, the sensitivity is 0.871 for DVPE, 0.877 for MMBMT, 0.964 for DCNN, 0.957 for the EW-CSA based deep CNN, 0.987 for RBIS segmentation, and 0.990 for the proposed technique. For 160 iterations, the sensitivity is 0.871 for DVPE, 0.941 for MMBMT, 0.976 for DCNN, 0.978 for the EW-CSA based deep CNN, 0.991 for RBIS segmentation, and 0.992 for the proposed technique.
The specificity-based analysis of the proposed system is depicted in figure 5.24. For 40 iterations, the specificity is 0.74 for DVPE, 0.608 for MMBMT, 0.5 for deep CNN, 0.667 for the EW-CSA based deep CNN, 0.781 for RBIS segmentation, and 0.869 for the proposed technique. For 160 iterations, the specificity is 0.74 for DVPE, 0.744 for MMBMT, 0.886 for DCNN, 0.765 for the EW-CSA based deep CNN, 0.886 for RBIS segmentation, and 0.887 for the proposed technique.
Table 5.3: Comparative analysis
Table 5.4 presents the comparative discussion of the techniques. The analysis is carried out in terms of specificity, sensitivity, and detection accuracy for a varying number of iterations. The detection accuracy achieved is 0.914 by DVPE, 0.910 by MMBMT, 0.983 by DCNN, 0.978 by the EW-CSA based deep CNN, and 0.988 by RBIS segmentation, while the proposed system achieves a detection accuracy of 0.991. The sensitivity values are 0.871 for DVPE, 0.941 for MMBMT, 0.976 for DCNN, 0.978 for the EW-CSA based deep CNN, and 0.991 for the RBIS segmentation method, and the proposed method's sensitivity is 0.992. The specificity values are 0.74 for DVPE, 0.744 for MMBMT, 0.886 for DCNN, 0.765 for the EW-CSA based deep CNN, and 0.886 for the RBIS segmentation method, and the proposed method's specificity is 0.887. Thus, the proposed method achieves the highest detection accuracy, specificity, and sensitivity.
Table 5.4: Comparative discussion

Method | Detection Accuracy | Sensitivity | Specificity
DVPE | 0.914 | 0.871 | 0.740
MMBMT | 0.910 | 0.941 | 0.744
DCNN | 0.983 | 0.976 | 0.886
EW-CSA based DCNN | 0.978 | 0.978 | 0.765
RBIS segmentation | 0.988 | 0.991 | 0.886
Proposed entropy based fusion | 0.991 | 0.992 | 0.887
Summary

In this chapter, a method for helping the driver to prevent mishaps is proposed. An entropy-based fusion model (EBFM) is used for detecting the lanes on the road. The EBFM incorporates the outcomes acquired from two stages to identify multiple lanes. In the first stage, multilane detection is performed using the EW-CSA based deep CNN, where EW-CSA is utilized to train the deep CNN. In the second stage, multilane recognition is carried out using the RBIS segmentation technique, in which the sparking strategy is employed for the segmentation of the input image samples.