Authorized licensed use limited to: UNIVERSIDADE FEDERAL DA PARAIBA. Downloaded on August 14,2023 at 18:36:07 UTC from IEEE Xplore. Restrictions apply.
and a GPS module have also been connected to the Raspberry Pi. These three parts serve as the inputs of the car authentication system. For output, a relay module connects the Raspberry Pi to the self-starter motor, which controls the starting system of the car. A mobile app is used to control the whole authentication system. For face detection on the Raspberry Pi, the modified OpenCV YOLOv5 model and a modified COCO-trained model have been used.

This setup can detect faces within one second. After launching, the app shows the face of the driver who is trying to start the car. If the driver's face is known to the owner, the owner may press the Allow button; the previously unknown face is then saved on the server, and the driver is given access to the car. In the case of an unknown driver's face, the owner may press the Decline button, which deletes the face data of that driver and leaves the driver unauthorized.

IV. METHODOLOGY AND EXPERIMENTAL SETUP

This system has two parts: object detection, and face recognition with app-based vehicle authentication. In both parts, customized versions of the OpenCV YOLOv3 and YOLOv5 models have been used for object detection. Thermal imaging is the art of converting infrared radiation into a radiometric form. The thermal imaging camera uses a focal plane array sensor with a resolution of 80×60 pixels, where each pixel works as a sensory element. The pattern of infrared (IR) radiation, known as a thermogram, is captured by a thermal imager. IR is not visible to humans, and it ranges from 400 nm to 1400 nm [15]. Because the thermal camera module produces images purely from thermal energy emission, it has been used for object detection in poor weather conditions such as fog and heavy rain, as well as wind-blown snow, dust, and smoke. In all these conditions, the thermal camera provided significant output and can overcome the shortcomings of conventional systems. For the detection task, a modified YOLOv3 model has been used. Out of four types of neural networks, Faster R-CNN was chosen, since the Faster R-CNN method decreases the time required for real-time object detection. In this system, customized YOLO models combined with thermal imaging are emphasized for the detection of humans and vehicles in different weather conditions. All developed models have been successfully implemented to detect objects. According to Alexander et al. [16], matching the detection resolution to its processing time is a flexible option. A standard YOLOv3 was trained on the COCO data file (a model trained on various objects found on the road) and achieved 97% accuracy in segmenting objects. The FLIR Lepton 2.5 thermal imaging camera module [17] was used to evaluate the performance.

The vehicle authentication system is sub-categorized into three portions: the API service, facial biometrics using the modified YOLOv5, and owner identification. The modified YOLOv5 operates on a convolutional neural network (CNN); three elements make up the YOLOv5 architecture [18]. To create a dataset for face recognition training, images were collected from the Raspberry Pi camera module and then labeled using LabelImg, a process called annotation [19]; the annotation results were converted into XML format. These XML files were then combined for processing the facial image data. For the training process, a custom YOLOv5 was used for face detection, which later served as the custom detection model. After that, videos and images were tested. The face data captured by the camera provides the input. During the detection procedure, the trained model is loaded and uses bounding boxes to produce the classification, the prediction, and the confidence level. The detection outcomes take the form of a class in the report database, after which the information is displayed on the report page; the report page has been developed to present the data in the app as an image. If the face data matches, no involvement from the app and database is required: the circuit is completed and passes a signal to the relay module, which in turn passes electricity to the starter motor. For unmatched face data, a notification is sent to the user through the app; if the face is known to the owner, the user accepts it as recognized data, and if not, the user simply deletes it, which leaves the starter motor in a non-starting condition.

V. RESULTS AND ANALYSIS

Image analysis has been done in FLIR ResearchIR Max software, and only for the thermal images, since the Raspberry Pi camera module provides proper images in which face detection is much easier than object detection. A thermal feed that detected objects in various conditions such as glare, dark, and fog was recorded; snapshots of this feed were then loaded into ResearchIR and analyzed. The image analysis was done for four situations: nominal, dark, foggy, and glare. In Fig. 2, the nominal condition is explained with the help of a histogram and an image profile. The conventional image shows the user's view, and the thermal image shows the object detection. The histogram shows the pixel activation rate as a function of the infrared radiation: the greater the radiation, the greater the pixel activation rate. The image profile, on the other hand, shows how many pixels were activated at a time for a certain range of infrared radiation. In Fig. 3, for the glare condition, the performance of the thermal imaging camera is moderate: only around 1406 pixels were partially activated, due to the heat produced in the glare condition, which is about 121 degrees Fahrenheit. As the headlamps of a car produce bright light, they act as a heat source that interferes with objects in front of them. The image profile is also dull in this situation, and the object detection accuracy is moderate. Fig. 4 and Fig. 5 show the image quality in dark and fog. In both cases, this system provided excellent outcomes: the histogram and image profile charts closely match those of the nominal condition. As a result, it can be said that this system may provide promising output in dark and foggy situations and can remove the shortcomings of existing solutions, although existing systems give better results in glare conditions than the proposed system. The AP measure has been used to evaluate model performance. The "intersection over union" (IoU) threshold between the detection boxes and the corresponding ground-truth bounding boxes has been fixed at 50%; after detection, the findings are compared to the ground truth, and a detection whose IoU reaches this threshold is evaluated as a true positive. For the COCO dataset, the maximum AP threshold value is given as 0.95 and
[Figure images omitted. Panels in each figure: (a) traditional perception, (b) the IR photo, (c) histogram, (d) image profile.]
Fig. 2. Normal condition (detection labels: Truck 1.00, Car 1.00)
Fig. 3. Glare condition
Fig. 4. Dark condition (detection labels: Person 1.00, Car 1.00)
Fig. 5. Foggy condition (detection label: Person 0.58)
The minimum value is 0.50, and the interval has been set to 0.05. So, for the COCO dataset, a detection counts as a true positive (TP) when IoU ≥ 0.5, as a false positive (FP) when IoU < 0.5, and as a false negative (FN) when there is no detection. Only one detection is deemed a genuine positive even when the same object has been picked up more than once. The average precision (AP) has been determined for each class, but only the single class Person has been considered in this system. The threshold on the detector's confidence matrix has been varied, and precision-recall curves for the desired classes have been produced. With the help of the confusion matrix [20], the recall and precision curves have been computed.
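The matching rule above can be sketched in plain Python. This is an illustrative sketch, not the authors' implementation: the greedy best-IoU matching strategy and the (x1, y1, x2, y2) box format are assumptions.

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection over union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_detections(detections, ground_truth, threshold=0.5):
    # Each ground-truth box may yield at most one TP, so a duplicate
    # detection of an already-matched object is counted as an FP.
    tp, fp, matched = 0, 0, set()
    for det in detections:
        best_iou, best_gt = 0.0, None
        for gi, gt in enumerate(ground_truth):
            v = iou(det, gt)
            if v > best_iou:
                best_iou, best_gt = v, gi
        if best_iou >= threshold and best_gt not in matched:
            tp += 1
            matched.add(best_gt)
        else:
            fp += 1
    fn = len(ground_truth) - len(matched)  # undetected objects
    return tp, fp, fn
```

Counting TP, FP, and FN this way makes a repeated detection of the same object count against precision instead of inflating it, which matches the single-TP-per-object rule stated above.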
Recall = True(P) / (True(P) + False(N))          (1)

Precision = True(P) / (True(P) + False(P))       (2)

IoU = Area of overlap / Area of union            (3)

mAP = (1/N) Σ_{i=1}^{N} AP_i                     (4)

here, N = number of classes and AP_i = the average precision of class "i".

Fig. 6. For a stock YOLO model, the precision-recall curve
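Equations (1), (2), and (4), together with the COCO-style threshold sweep described above, can be sketched in plain Python. This is a minimal illustration; the counts and AP values fed in below are example inputs, not results from the paper.

```python
def recall(tp, fn):
    # Eq. (1): fraction of ground-truth objects that were detected.
    return tp / (tp + fn) if tp + fn else 0.0

def precision(tp, fp):
    # Eq. (2): fraction of detections that were correct.
    return tp / (tp + fp) if tp + fp else 0.0

def mean_average_precision(ap_per_class):
    # Eq. (4): mAP is the mean of the per-class average precisions;
    # with a single class (Person), mAP equals that class's AP.
    return sum(ap_per_class) / len(ap_per_class)

# COCO-style IoU thresholds: 0.50 to 0.95 in steps of 0.05.
iou_thresholds = [round(0.50 + 0.05 * k, 2) for k in range(10)]

print(recall(tp=20, fn=80))    # 0.2
print(precision(tp=20, fp=5))  # 0.8
print(iou_thresholds[0], iou_thresholds[-1])  # 0.5 0.95
```

Sweeping the detector's confidence threshold and recomputing (1) and (2) at each setting is what traces out the precision-recall curves shown in Fig. 6 and Fig. 7.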
Fig. 7. For a custom-trained YOLO model, the precision-recall curve

Fig. 6 represents the AP scores of the stock YOLOv3, which is untrained on the custom data, and Fig. 7 corresponds to the AP scores obtained with the YOLOv3 model further trained on the custom dataset. The 29% AP score achieved by the custom-trained YOLOv3 is higher than the 7% AP score achieved by the stock YOLOv3. The stock YOLOv3 achieves 97% accuracy with 6% recall, while the custom-trained YOLOv3 model achieves similar accuracy with a significantly greater recall of about 20%. TABLE I gives a comparative analysis of the stock and custom-trained YOLO models.

TABLE I. OBJECT DETECTION RESULTS OF STOCK AND CUSTOM-TRAINED ALGORITHMS

Algorithm                    | YOLOv3 stock | YOLOv3 custom-trained | YOLOv5 custom-trained | YOLOv5 stock
Mean average precision (mAP) | 0.803        | 0.822                 | 0.821                 | 0.747
Recall                       | 0.229        | 0.069                 | 0.20                  | 0.110
Accuracy                     | 0.97         | 0.97                  | 0.95                  | 0.89
Speed (FPS)                  | 41           | 30                    | 30                    | 50

VI. CONCLUSION

A significant number of road accidents are caused by vision issues or the inability to recognize obstacles, typically because of low visibility under unsuitable weather conditions. We have created this project to help the driver better understand his surroundings. Its goal is to show the driver a thermal, night-vision image of the road ahead and to classify the objects in it accordingly. Life forms are easier to detect in hazardous weather using thermal vision. In addition, a smart anti-theft system is incorporated to stop theft from happening. It works by sending pictures of the driver to the owner's smartphone, where the owner can allow or reject the person from starting the car. If our idea were deployed in a significant number of automobiles, it could prevent numerous accidents and save countless lives.

REFERENCES

[1] Y.-H. Ren, "Analysis of Road Traffic Accident Caused by Human Error," ICTIS 2013, Jun. 2013, doi: 10.1061/9780784413036.073.
[2] "Association of reduced visibility with crash outcomes," IATSS Research, vol. 42, no. 3, pp. 143–151, Oct. 2018, doi: 10.1016/j.iatssr.2017.10.003.
[3] M. P. Levy and C. Tartaro, "Auto Theft: A site-survey and analysis of environmental crime factors in Atlantic City, NJ," Security Journal, vol. 23, no. 2, pp. 75–94, Sep. 2008, doi: 10.1057/palgrave.sj.8350088.
[4] S. Mehra, "A Review on Enhancement of Face Recognition Technology in Biometrics," International Journal of Mechanical Engineering and Information Technology, Aug. 2016, doi: 10.18535/ijmeit/v4i8.05.
[5] "Review of Pedestrian Detection by Video Processing in Automotive Night Vision System," International Journal of Science and Research (IJSR), vol. 5, no. 4, pp. 2310–2312, Apr. 2016, doi: 10.21275/v5i4.nov163154.
[6] J. Kobbert, K. Kosmas, and T. Khanh, "Glare-free high beam optimization based on country road traffic simulation," Lighting Research & Technology, vol. 51, no. 6, pp. 922–936, Nov. 2018, doi: 10.1177/1477153518810997.
[7] P. Teichmann, "Night vision 2 — The first night-vision system to recognize pedestrians and cyclists," ATZ worldwide, vol. 109, no. 10, pp. 10–12, Oct. 2007, doi: 10.1007/bf03224960.
[8] M. Vollrath, S. Schleicher, and C. Gelau, "The influence of Cruise Control and Adaptive Cruise Control on driving behaviour – A driving simulator study," Accident Analysis & Prevention, vol. 43, no. 3, pp. 1134–1139, May 2011, doi: 10.1016/j.aap.2010.12.023.
[9] "Car alarm at the FBI," Network Security, vol. 2016, no. 4, p. 2, Apr. 2016, doi: 10.1016/s1353-4858(16)30032-0.
[10] J. C. van Ours and B. Vollaard, "The Engine Immobilizer: A Non-Starter for Car Thieves," SSRN Electronic Journal, 2013, doi: 10.2139/ssrn.2214895.
[11] "MegaMatcher Automated Biometric Identification System for national-scale projects," www.neurotechnology.com. https://www.neurotechnology.com/megamatcher-abis.html (accessed Sep. 11, 2022).
[12] N. Kiruthiga, L. Latha, and S. Thangasamy, "Real Time Biometrics Based Vehicle Security System with GPS and GSM Technology," Procedia Computer Science, vol. 47, pp. 471–479, 2015, doi: 10.1016/j.procs.2015.03.231.
[13] I. Journal, "Advanced Car Security System Using GSM," www.academia.edu. Accessed: Sep. 14, 2022. [Online]. Available: https://www.academia.edu/10175076/Advanced_Car_Security_System_Using_GSM
[14] B. Almadani, S. Khan, T. R. Sheltami, E. M. Shakshuki, M. Musaddiq, and B. Saeed, "Automatic Vehicle Location and Monitoring System based on Data Distribution Service," Procedia Computer Science, vol. 37, pp. 127–134, 2014, doi: 10.1016/j.procs.2014.08.021.
[15] M. Kristo and M. Ivasic-Kos, "An Overview of Thermal Face Recognition Methods," in 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2018.
[16] A. Alexander and M. M. Dharmana, "Object detection algorithm for segregating similar colored objects and database formation," in 2017 International Conference on Circuit, Power and Computing Technologies (ICCPCT), Apr. 2017, pp. 1–5.
[17] M. G. Acorsi, L. M. Gimenez, and M. Martello, "Assessing the Performance of a Low-Cost Thermal Camera in Proximal and Aerial Conditions," Remote Sensing, vol. 12, no. 21, p. 3591, Nov. 2020, doi: 10.3390/rs12213591.
[18] Z. Li, "Road Aerial Object Detection Based on Improved YOLOv5," Journal of Physics: Conference Series, vol. 2171, no. 1, p. 012039, Jan. 2022, doi: 10.1088/1742-6596/2171/1/012039.
[19] S. Cai and Q. Yu, "Optimization and application of connected component labeling algorithm based on run-length encoding," Journal of Computer Applications, vol. 28, no. 12, pp. 3150–3153, Feb. 2009, doi: 10.3724/sp.j.1087.2008.03150.
[20] "Confusion Matrix-based Feature Selection," ResearchGate. https://www.researchgate.net/publication/220833270_Confusion_Matrix-based_Feature_Selection