
LIFE-DETECTION RADAR BASED ON

WIDEBAND CHAOTIC SIGNAL

A SEMINAR REPORT
Submitted by

MUHAMMED KHALEEFA ZAYED


(MES19CS064)

to
the APJ Abdul Kalam Technological University
in partial fulfillment of the requirements for the award of the
Degree of
Bachelor of Technology
in
Computer Science and Engineering

Department of Computer Science and Engineering
[B.Tech. Programme accredited by NBA]
MES College of Engineering, Kuttippuram
Thrikkanapuram P.O., Malappuram Dt., Kerala, India, 679582
2022-2023
DECLARATION

I hereby declare that the seminar report "Life-Detection Radar Based on Wideband Chaotic Signal", submitted for partial fulfillment of the requirements for the award of the degree of Bachelor of Technology of the APJ Abdul Kalam Technological University, Kerala, is a bonafide work done under the supervision of Mr. Badharudheen P, Assistant Professor, Computer Science and Engineering. This submission represents my ideas in my own words and, where ideas or words of others have been included, I have adequately and accurately cited and referenced the original sources. I also declare that I have adhered to the ethics of academic honesty and integrity and have not misrepresented or fabricated any data, idea, fact or source in my submission. I understand that any violation of the above will be cause for disciplinary action by the institute and/or the University and can also evoke penal action from the sources which have thus not been properly cited or from whom proper permission has not been obtained. This report has not previously formed the basis for the award of any degree, diploma or similar title of any other University.

Place: Kuttippuram
Date: 09-12-2022 Muhammed Khaleefa Zayed

DEPARTMENT OF COMPUTER SCIENCE AND


ENGINEERING
MES COLLEGE OF ENGINEERING, KUTTIPPURAM
CERTIFICATE

This is to certify that the report entitled "Life-Detection Radar Based on Wideband Chaotic Signal", submitted by Muhammed Khaleefa Zayed to the APJ Abdul Kalam Technological University in partial fulfillment of the requirements for the award of the Degree of Bachelor of Technology in Computer Science and Engineering, is a bonafide record of the seminar work carried out under my guidance and supervision. This report, in any form, has not been submitted to any other University or Institute for any purpose.

Seminar Guide:
Mr. Badharudheen P
Assistant Professor
Dept. of Computer Science and Engineering
MES College of Engineering

Head of the Department:
Dr. Anil K Jacob
Professor & Head
Dept. of Computer Science and Engineering
MES College of Engineering
ACKNOWLEDGEMENT
I am grateful to Dr. Rahumathunza I, Principal, MES College of Engineering, Kuttippuram, for providing the right ambiance to carry out this work. I would like to extend my sincere gratitude to Dr. Anil K Jacob, Head of the Department, CSE, MES College of Engineering, Kuttippuram.

I am deeply indebted to the seminar coordinator, Dr. Asha G, Associate Professor, Department of Computer Science and Engineering, for her continued support. It is with great pleasure that I express a deep sense of gratitude to my seminar guide, Mr. Badharudheen P, Assistant Professor, Department of Computer Science and Engineering, for his guidance, supervision, encouragement and valuable advice in each and every phase.

I would like to thank all other faculty members and fellow students of MES College of Engineering, Kuttippuram, for their warm friendship, support and help.

MUHAMMED KHALEEFA ZAYED

ABSTRACT
We propose a novel life-detection radar that uses a wideband Boolean-chaos signal as the probe signal. The range between the radar and the human target can be obtained by correlating the echo signal from the target with its delayed duplicate. This range is modulated periodically along the recording-time axis by the displacement of the human chest, and the modulation frequency is the same as the respiratory frequency. Therefore, the authors propose a life-detection algorithm based on the correlation method to detect the human target's respiratory frequency and range. Experimental results demonstrate that the authors' radar can simultaneously detect the respiratory frequency and range of a human target behind a 20-cm-thick wall. In addition, the high range resolution and excellent anti-jamming property of the chaotic radar have been demonstrated in their previous studies, which make it perform superbly in complex electromagnetic environments.

CONTENTS

Contents                                                      Page No.

ACKNOWLEDGEMENT                                               i
ABSTRACT                                                      ii
LIST OF FIGURES                                               v
Chapter 1. INTRODUCTION                                       1
Chapter 2. LITERATURE REVIEW                                  2
Chapter 3. METHODOLOGY                                        4
Chapter 4. ARCHITECTURE                                       5
    4.1 Faster R-CNN                                          5
    4.2 SSD (Single Shot Detector)                            6
Chapter 5. IMPLEMENTATION                                     7
    5.1 Resources or components used for implementation       7
    5.2 Dataset Specifications                                7
        5.2.1 Case I: Video specifications                    7
        5.2.2 Case II: Image specifications                   8
    5.3 Assumptions and Constraints made for implementation   8
    5.4 Dataset Creation                                      9
    5.5 Training                                              10
        5.5.1 Training of Faster R-CNN                        11
        5.5.2 Training of SSD                                 11
Chapter 6. RESULTS AND ANALYSIS                               12
    6.1 Detection of weapons using SSD algorithm              12
        6.1.1 Case I: Using pre-labelled dataset              12
        6.1.2 Case 2: Using self-created dataset              12
    6.2 Detection of weapons using Faster R-CNN algorithm     13
        6.2.1 Case I: Using pre-labelled dataset              13
        6.2.2 Case 2: Using self-created dataset              13
    6.3 Performance Analysis                                  14
        6.3.1 Faster R-CNN                                    14
        6.3.2 SSD                                             14
Chapter 7. CONCLUSION                                         16
REFERENCES

LIST OF FIGURES

No.   Title                                                   Page No.

3.1   Methodology                                             4
4.1   Detection and Tracking                                  5
4.2   Detection and Tracking                                  6
5.1   Image along with its label                              9
5.2   Pseudo code of Faster R-CNN                             11
5.3   Pseudo code of SSD                                      11
6.1   Detection of AK47 gun                                   12
6.2   Detection of COLT M1911 and SW Model 10                 12
6.3   Detection of AK47 using Faster R-CNN                    13
6.4   Detection of COLT M1911 and SW Model 10                 13
6.5   Performance analysis: Faster R-CNN algorithm            14
6.6   Performance analysis: SSD algorithm                     14
CHAPTER 1
INTRODUCTION

Violence committed with guns imposes a significant public-health, psychological, and economic cost. Many people die each year from gun-related violence. Psychological trauma is frequent among children who are exposed to high levels of violence in their communities or through the media. Children exposed to gun-related violence, whether they are victims, perpetrators, or witnesses, can experience negative psychological effects over the short and long term. A number of studies show that the handheld gun is the primary weapon used for crimes such as break-ins, robbery, shoplifting, and rape.

Weapon or anomaly detection is the identification of irregular, unexpected, unpredictable, or unusual events or items that are not considered normally occurring events or regular items in a pattern or dataset, and thus differ from existing patterns. An anomaly is a pattern that differs from a set of standard patterns; anomalies therefore depend on the phenomenon of interest [3] [4]. Object detection uses feature extraction and learning algorithms or models to recognize instances of various categories of objects [6]. The proposed implementation focuses on accurate gun detection and classification. It is equally concerned with accuracy, since a false alarm could result in adverse responses [11] [12]. Choosing the right approach requires a proper trade-off between accuracy and speed.


CHAPTER 2
LITERATURE REVIEW

A. Warsi, M. Abdullah, M. N. Husen and M. Yahya, "Automatic Handgun and Knife Detection Algorithms: A Review," 2020 14th International Conference on Ubiquitous Information Management and Communication (IMCOM), 2020.

This paper reviews and categorizes the various algorithms that have been used for the detection of handguns and knives, along with their strengths and weaknesses.

N. Dwivedi, D. K. Singh and D. S. Kushwaha, "Weapon Classification using Deep Convolutional Neural Network," 2019 IEEE Conference on Information and Communication Technology (CICT), 2019.

This paper presents a novel approach for weapon classification using a Deep Convolutional Neural Network (DCNN) based on the VGG Net architecture. VGG Net is a widely recognized CNN architecture that earned its place in the ImageNet 2014 competition for image classification. The weights of a pre-trained VGG16 model are therefore taken as the initial weights of the convolutional layers of the proposed architecture, and three classes (knife, gun and no-weapon) are used to train the classifier. To fine-tune the weights of the proposed DCNN, it is trained on images of these classes downloaded from the internet and others captured in the lab for weapon classification.

Verma, Gyanendra K. and Anamika Dhillon, "A Handheld Gun Detection using Faster R-CNN Deep Learning," International Conference on Computing and Convergence Technology, 2017.

This paper presents an automatic handheld gun detection system using deep learning, in particular a CNN model. Gun detection is a very challenging problem because of the various subtleties associated with it. One of the most important challenges in gun detection is occlusion of the gun, which arises frequently. There are two types of gun occlusion, namely gun-to-object and gun-to-site/scene occlusion. Normally, occlusions in gun detection arise under three conditions: self-occlusion, inter-object occlusion, or occlusion by the background site/scene structure. Self-occlusion arises when one portion of the gun is occluded by another.
CHAPTER 3
METHODOLOGY

Figure 3.1 shows the methodology of weapon detection using deep learning. Frames are extracted from the input video, a frame-differencing algorithm is applied, and a bounding box is created before object detection [7] [8] [14]. A minimal sketch of this pre-processing step is given at the end of this chapter.

Figure 3.1: Methodology

For object detection and tracking, a dataset is created, trained, and fed to the object detection algorithm. Based on the application, a suitable detection algorithm (SSD or Faster R-CNN) is chosen for gun detection. The approach addresses the detection problem using machine learning models such as the Region-based Convolutional Neural Network (R-CNN) and the Single Shot Detector (SSD) [2] [9] [15].
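The pre-processing described above (frame extraction, frame differencing, and drawing a candidate bounding box around moving regions) can be sketched as follows. This is a minimal illustration written against OpenCV 3.4, not the authors' exact pipeline; the input file name and the contour-area threshold are assumptions.

import cv2

cap = cv2.VideoCapture("input.mov")               # illustrative file name
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Frame differencing: keep pixels that changed between consecutive frames
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)

    # Draw a bounding box around each sufficiently large moving region;
    # OpenCV 3.x findContours returns (image, contours, hierarchy)
    _, contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:              # assumed noise threshold
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    # The boxed frame would then be passed to the detector (SSD / Faster R-CNN)
    prev_gray = gray

cap.release()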

CHAPTER 4
ARCHITECTURE
4.1 Faster R-CNN

Figure 4.1: Detection and Tracking

The Faster R-CNN architecture is depicted in Figure 4.1. It consists of two networks: a Region Proposal Network (RPN) that generates region proposals, and a detection network that classifies them. Unlike earlier R-CNN variants, which relied on a selective-search approach to generate region proposals, Faster R-CNN generates them with the RPN, which ranks anchors (region boxes) and proposes those most likely to contain objects.
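To make the anchor mechanism concrete, the sketch below enumerates the anchor boxes generated at a single feature-map location. The scales, aspect ratios and stride follow the defaults of the original Faster R-CNN paper; they are assumptions for illustration, not values taken from this report.

import numpy as np

def anchors_at(cx, cy, scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Return (x1, y1, x2, y2) anchor boxes centred at one feature-map location."""
    boxes = []
    for s in scales:
        for r in ratios:
            w = s * np.sqrt(r)   # area stays close to s*s while the aspect ratio w/h = r
            h = s / np.sqrt(r)
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)

# 9 anchors per location; with a stride-16 backbone, a 600 x 800 image yields
# roughly (600 / 16) * (800 / 16) * 9, about 17,000 anchors, which the RPN
# scores and ranks before a small subset is passed on as region proposals.
print(anchors_at(cx=8, cy=8).shape)   # (9, 4)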

4.2 SSD (Single Shot Detector)
Figure 4.2: Detection and Tracking

The SSD algorithm reached new milestones in terms of detection precision and performance. SSD speeds up the process by eliminating the need for a region proposal network. To compensate for the resulting drop in accuracy, SSD introduces a few techniques, including default boxes and multi-scale feature maps. These improvements allow SSD to match Faster R-CNN's accuracy using lower-resolution images, which pushes its speed even higher. SSD scores around 74% mAP at 59 fps on the PASCAL VOC 2007 test set. Figure 4.2 shows the SSD VGG-16 architecture [1] [5] [10].
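As an illustration of the multi-scale default boxes mentioned above, the short sketch below computes the box scale assigned to each feature map using the linear scale rule from the SSD paper (s_min = 0.2, s_max = 0.9, with m = 6 feature maps for SSD300). These constants come from the SSD paper and are stated here as assumptions, not as settings used in this report.

def default_box_scales(m=6, s_min=0.2, s_max=0.9):
    # Scale of the default boxes on the k-th feature map, k = 1..m,
    # expressed as a fraction of the input image size
    return [s_min + (s_max - s_min) * (k - 1) / (m - 1) for k in range(1, m + 1)]

print([round(s, 2) for s in default_box_scales()])
# [0.2, 0.34, 0.48, 0.62, 0.76, 0.9]: small boxes on the early, high-resolution
# maps catch small objects, large boxes on the late, coarse maps catch large ones.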

CHAPTER 5
IMPLEMENTATION
5.1 Resources or components used for implementation

• OpenCV 3.4 - Open-source computer vision library, version 3.4.

• Python 3.5 - High-level programming language used for various image-processing applications.

• COCO Dataset - Dataset consisting of common objects with respective labels.

• Anaconda and TensorFlow 1.1

• NVIDIA GeForce 820M GPU - GeForce is a brand of graphics processing units designed by NVIDIA.

5.2 Dataset Specifications

5.2.1 Case I: Video specifications

• System Configuration- Intel i5 7th Generation (4 Cores)

• Clock Speed- 2.5 GHz

• GPU- NVIDIA GeForce 820M

• Input Frames per Second- 29.97 fps

• Output Frames per Second- 0.20 fps

• Video Format- .mov

• Video Size- 4.14 MB

• COCO and self-created image dataset


5.2.2 Case II: Image specifications

• System Configuration- Intel i5 7th Generation (4 Cores)

• Clock Speed- 2.5 GHz

• GPU- NVIDIA GeForce 820M

• Input Image Size- 200-300 KB

• Training Time- 0.6 seconds (SSD), 1.7 seconds (Faster R-CNN)

• Image Format - .JPG

• COCO and self-created image dataset

• Number of classes trained for- 5

5.3 Assumptions and Constraints made for implementation

• The gun is in the line of sight of the camera and fully/partially exposed to the camera.

• There is enough background light to detect the ammunition.

• A GPU with high-end computation power was used to remove lag in the ammunition detection.

• This is not a completely automated system. Every gun-detection warning will be verified by a person in charge.

5.4 Dataset Creation

Images are downloaded in bulk using Fatkun Batch Image Downloader

(chrome extension) which can download multiple Google Images at once. Then
the down loaded images are labelled. 80% of total images used for training and

20% images for testing. XML data is converted into CSV file by executing the

following com mand in Anaconda Prompt: python xml_to_csv.py
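For clarity, the sketch below shows roughly what such an xml_to_csv.py script does: each Pascal-VOC-style XML label file produced during annotation is flattened into one CSV row per bounding box. The folder layout, output file name and column names are assumptions based on the usual TensorFlow object-detection tutorial workflow, not details given in this report.

import glob
import xml.etree.ElementTree as ET

import pandas as pd

rows = []
for xml_file in glob.glob("images/train/*.xml"):      # assumed annotation folder
    root = ET.parse(xml_file).getroot()
    for obj in root.findall("object"):
        box = obj.find("bndbox")
        rows.append({
            "filename": root.find("filename").text,
            "width": int(root.find("size/width").text),
            "height": int(root.find("size/height").text),
            "class": obj.find("name").text,            # e.g. "ak47"
            "xmin": int(box.find("xmin").text),
            "ymin": int(box.find("ymin").text),
            "xmax": int(box.find("xmax").text),
            "ymax": int(box.find("ymax").text),
        })

pd.DataFrame(rows).to_csv("train_labels.csv", index=False)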

Figure 5.1: Image along with its label

5.5 Training

The ground-truth box and scale are represented by:


• To make sure the object is detected, changes are made in the label map and the tf_record file.

• The label map is the file that stores the set of object classes that will be detected. AK47 is added to the label map.

• The images are converted into the tf_record format in TensorFlow so that they can be processed in batches. AK47 is added in the tf_record data as well.
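As an illustration of the label-map step, the snippet below writes a minimal label_map.pbtxt that maps each gun class to an integer id, with the new AK47 class included. The exact class names and the file path are assumptions; the report only states that AK47 is added and that five classes are trained.

# Assumed class names for the five gun classes used later in this report
classes = ["ak47", "colt_m1911", "sw_model_10", "uzi", "remington"]

with open("training/label_map.pbtxt", "w") as f:       # assumed path
    for idx, name in enumerate(classes, start=1):
        f.write("item {\n")
        f.write("  id: %d\n" % idx)    # ids must start at 1; 0 is reserved for background
        f.write("  name: '%s'\n" % name)
        f.write("}\n\n")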

5.5.1 Training of Faster R-CNN

The Faster R-CNN model is trained until the loss falls below 0.15.


Figure 5.2: Pseudo code of faster RCNN

5.5.2 Training of SSD

The SSD model is trained until the training loss reaches 0.05.

Figure 5.3: Pseudo code of SSD
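As a concrete illustration of the training step summarized in the pseudo code above, a typical training invocation for the TensorFlow 1.x Object Detection API might look like the sketch below. The script location, directories and config file name are assumptions, and training is stopped once the reported loss falls below the targets mentioned above (about 0.15 for Faster R-CNN and 0.05 for SSD).

import subprocess

# train.py comes from the TensorFlow models/research/object_detection repository;
# the same command is reused with a faster_rcnn_inception_v2 config for Faster R-CNN.
subprocess.run([
    "python", "train.py",
    "--logtostderr",
    "--train_dir=training/",                                    # checkpoints and loss logs
    "--pipeline_config_path=training/ssd_inception_v2.config",  # assumed config name
], check=True)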

CHAPTER 6
RESULTS AND ANALYSIS

6.1 Detection of weapons using SSD algorithm

6.1.1 Case I: Using pre-labelled dataset

Figure 6.1: Detection of AK47 gun

Figure 6.1 shows the detection of an AK47 gun using the SSD algorithm. The detection accuracy is 79%. Accuracy can be increased further by increasing the number of training samples.
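A detection such as the one in Figure 6.1 is obtained by running the exported model on a test image. The sketch below assumes a TensorFlow 1.x frozen inference graph and the tensor names used by the TensorFlow Object Detection API; the file paths are illustrative, not taken from this report.

import cv2
import numpy as np
import tensorflow as tf

# Load the exported frozen graph (assumed path)
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("inference_graph/frozen_inference_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

# The detector expects a batch of RGB uint8 images
image = cv2.imread("test_images/ak47.jpg")                 # illustrative file name
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

with tf.Session(graph=graph) as sess:
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": np.expand_dims(image, axis=0)})

# scores[0][0] is the confidence of the best box, e.g. roughly 0.79 for Figure 6.1
print(scores[0][:3], classes[0][:3])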

6.1.2 Case 2: Using self-created dataset

Figure 6.2: Detection of COLT M1911 and SW Model 10

Figure 6.2 shows the detection of a COLT M1911 and a Smith & Wesson Model 10 gun with accuracies of 72% and 67%, respectively.

6.2 Detection of weapons using Faster R-CNN algorithm

6.2.1 Case I: Using pre-labelled dataset


Figure 6.3: Detection of AK47 using Faster R-CNN

Figure 6.3 shows the detection of an AK47 gun using Faster R-CNN with an accuracy of 99%, which shows that Faster R-CNN provides superior accuracy.

6.2.2 Case 2: Using self-created dataset

Figure 6.4: Detection of COLT M1911 and SW Model 10

Figure 6.4 shows the detection of a Colt M1911 gun and a Smith & Wesson Model 10 gun using the Faster R-CNN algorithm with accuracies of 74% and 91%, respectively.

6.3 Performance Analysis

6.3.1 Faster R-CNN


Figure 6.5: Performance analysis: Faster R-CNN algorithm

Figure 6.5 shows that the highest average accuracy is obtained for the pre-labelled dataset (i.e. AK47), while the Colt M1911, Smith & Wesson Model 10, UZI Model and Remington Model obtained accuracies in the range of 76% to 91%. Faster R-CNN achieves an average accuracy of 84.6% at an average speed of 1.606 s/frame. This shows that the pre-labelled dataset provides better accuracy, because it has been trained on millions of images, in comparison to the self-created dataset.

6.3.2 SSD

Figure 6.6: Performance analysis: SSD algorithm

Figure 6.6 shows the performance analysis for the SSD algorithm. SSD obtains an average accuracy of 73.8% at an average speed of 0.736 s/frame. The model was trained for 5 classes of guns, namely AK47, Smith & Wesson Model 10, Colt M1911, UZI Model and Remington Model, and the maximum confidence level was obtained for the AK47 gun. SSD and Faster R-CNN Inception V2 models were used for training on the gun classes. SSD took 12 more hours to train than the Faster R-CNN model but provided lower accuracy. Faster R-CNN achieved 10.8% higher accuracy than the SSD algorithm, while SSD is faster than Faster R-CNN by roughly 0.87 s per frame. A pre-labelled dataset such as the AK47 set provides higher accuracy in both the SSD and Faster R-CNN models compared to the self-created image dataset.

CHAPTER 7
CONCLUSION

The SSD and Faster R-CNN algorithms were simulated on pre-labelled and self-created image datasets for weapon (gun) detection. Both algorithms are efficient and give good results, but their use in real time depends on a trade-off between speed and accuracy. In terms of speed, the SSD algorithm is better, at 0.736 s/frame, whereas Faster R-CNN runs at 1.606 s/frame, which is poor compared to SSD. With respect to accuracy, Faster R-CNN gives a better accuracy of 84.6%, whereas SSD gives an accuracy of 73.8%, which is poor compared to Faster R-CNN. SSD provides close to real-time detection due to its faster speed, but Faster R-CNN provides superior accuracy. Further, the approach can be extended to larger datasets by training on GPUs and high-end DSP and FPGA kits [16] [17].

REFERENCES

[1] Wei Liu et al., "SSD: Single Shot MultiBox Detector," European Conference on Computer Vision, Volume 169, pp. 20-31, Sep. 2017.

[2] D. Erhan et al., "Scalable Object Detection Using Deep Neural Networks," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.

[3] Ruben J. Franklin et al., "Anomaly Detection in Videos for Video Surveillance Applications Using Neural Networks," International Conference on Inventive Systems and Control, 2020.

[4] H. R. Rohit et al., "A Review of Artificial Intelligence Methods for Data Science and Data Analytics: Applications and Research Challenges," 2018 2nd International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud), 2018.

[5] Abhiraj Biswas et al., "Classification of Objects in Video Records using Neural Network Framework," International Conference on Smart Systems and Inventive Technology, 2018.

[6] Pallavi Raj et al., "Simulation and Performance Analysis of Feature Extraction and Matching Algorithms for Image Processing Applications," IEEE International Conference on Intelligent Sustainable Systems, 2019.

[7] Mohana et al., "Simulation of Object Detection Algorithms for Video Surveillance Applications," International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud), 2018.

[8] Yojan Chitkara et al., "Background Modelling Techniques for Foreground Detection and Tracking using Gaussian Mixture Model," International Conference on Computing Methodologies and Communication, 2019.

[9] Rubner et al., "A Metric for Distributions with Applications to Image Databases," International Conference on Computer Vision, 2016.

[10] N. Jain et al., "Performance Analysis of Object Detection and Tracking Algorithms for Traffic Surveillance Applications using Neural Networks," 2019 Third International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud), 2019.

[11] A. Glowacz et al., "Visual Detection of Knives in Security Applications using Active Appearance Model," Multimedia Tools and Applications, 2015.

[12] S. Pankanti et al., "Robust Abandoned Object Detection using Region-Level Analysis," International Conference on Image Processing, 2011.

[13] Ayush Jain et al., "Survey on Edge Computing - Key Technology in Retail Industry," International Conference on Intelligent Computing and Control Systems, 2019.

[14] Mohana et al., "Performance Evaluation of Background Modeling Methods for Object Detection and Tracking," International Conference on Inventive Systems and Control, 2020.

[15] J. Wang et al., "Detecting Static Objects in Busy Scenes," Technical Report TR99-1730, Department of Computer Science, Cornell University, 2014.

[16] V. P. Korakoppa et al., "An Area Efficient FPGA Implementation of Moving Object Detection and Face Detection using Adaptive Threshold Method," International Conference on Recent Trends in Electronics, Information & Communication Technology, 2017.

[17] S. K. Mankani et al., "Real-Time Implementation of Object Detection and Tracking on DSP for Video Surveillance Applications," International Conference on Recent Trends in Electronics, Information & Communication Technology, 2016.
