
CONTENTS

CHAPTER TITLE

CERTIFICATE
DECLARATION
ACKNOWLEDGEMENT
CONTENTS
SYNOPSIS

1 INTRODUCTION
2 SYSTEM STUDY AND ANALYSIS

2.1 EXISTING SYSTEM


2.1.1 DISADVANTAGES
2.2 PROPOSED SYSTEM
2.2.1 ADVANTAGES

3 DEVELOPMENT ENVIRONMENT
3.1 HARDWARE SPECIFICATION
3.2 SOFTWARE SPECIFICATION
3.2.1 PROGRAMMING ENVIRONMENT

4 SYSTEM DESIGN
4.1 ARCHITECTURE
4.2 INPUT DESIGN
4.3 OUTPUT DESIGN
4.4 DATABASE DESIGN

5 MODULES

6 SYSTEM TESTING AND IMPLEMENTATION


7 CONCLUSION
BIBLIOGRAPHY
APPENDICES
A. SOURCE CODE
B. SCREENSHOTS
SYNOPSIS

In this work, an image-processing-based forest fire detection method using the YCbCr colour model is proposed. The method adopts a rule-based colour model for its low complexity and effectiveness. The YCbCr colour space separates luminance from chrominance more effectively than colour spaces such as RGB and rgb (normalized RGB). The proposed method separates not only fire-flame pixels but also high-temperature fire-centre pixels, by taking into account statistical parameters of the fire image in the YCbCr colour space, namely the mean and standard deviation. Four rules are formed to isolate the true fire region: two rules segment the fire region, and the other two segment the high-temperature fire-centre region. The results obtained are compared with other methods in the literature and show a higher true-fire detection rate and a lower false detection rate. The proposed method can be used for real-time forest fire detection with a moving camera.
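The four-rule segmentation described above can be sketched in code. The exact rules and thresholds are not spelled out in this synopsis, so the formulation below is an assumption modelled on commonly used YCbCr fire-detection rules (flame pixels are bright and red-dominant; centre pixels exceed the scene's channel means), not the report's exact rules:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB image array (H, W, 3), values in [0, 255], to Y, Cb, Cr."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def fire_masks(img):
    """Four-rule segmentation sketch: two rules for the flame region and two
    mean-based rules for the high-temperature fire-centre region."""
    y, cb, cr = rgb_to_ycbcr(img.astype(np.float64))
    flame = (y > cb) & (cr > cb)                  # rules 1-2: flame pixels
    centre = (y > y.mean()) & (cr > cr.mean())    # rules 3-4: hot centre pixels
    return flame, flame & centre
```

A bright orange patch on a dark background is classified as both flame and centre, while the background is rejected by rules 1-2.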

CHAPTER-1
INTRODUCTION

Forests are the protectors of Earth's ecological balance. Unfortunately, a forest fire is usually observed only after it has already spread over a large area, making its control and stoppage arduous and at times impossible. The result is devastating loss and irreparable damage to the environment and atmosphere (an estimated 30% of the carbon dioxide (CO2) in the atmosphere comes from forest fires), as well as to the ecology, through the huge amounts of smoke and CO2 released. Among the other terrible consequences of forest fires are long-term disastrous effects such as impacts on local weather patterns, global warming, and the extinction of rare species of flora and fauna.

The problem with forest fires is that forests are usually remote, abandoned or unmanaged areas filled with trees, dry and parched wood, leaves, and so forth that act as a fuel source. These elements form highly combustible material and provide the perfect context for initial fire ignition, as well as fuel for later stages of the fire. Ignition may be caused by human actions, such as smoking or barbecue parties, or by natural causes, such as high temperature on a hot summer day or a piece of broken glass acting as a collecting lens that focuses sunlight on a small spot long enough to start a fire. Once ignition starts, the surrounding combustible material feeds the fire's central spot, which then grows bigger and wider. This initial stage is normally referred to as the "surface fire" stage. The fire may then spread to adjoining trees, the flames climbing higher and higher until it becomes a "crown fire." At this stage the fire usually becomes uncontrollable, and damage to the landscape may become excessive and last for a very long time, depending on the prevailing weather conditions and the terrain.

Millions of hectares of forest are destroyed by fire every year. The areas destroyed by these fires are large, and the fires produce more carbon monoxide than the overall automobile traffic. Monitoring potential risk areas and detecting a fire early can significantly shorten the reaction time and reduce both the potential damage and the cost of fire fighting. A well-known rule of thumb applies here: at 1 minute a fire can be put out with 1 cup of water, at 2 minutes with 100 litres of water, and at 10 minutes with 1,000 litres of water. The objective is therefore to detect the fire as fast as possible; its exact localization and early notification to the fire units are vital. This is the deficiency the present work attempts to remedy: detecting a forest fire at a very early stage, so as to improve the chance of putting it out before it grows beyond control or causes significant damage.

There are a number of detection and monitoring systems used by authorities. These include observers in the form of patrols or monitoring towers, aerial and satellite monitoring, increasingly promoted detection and monitoring systems based on optical camera sensors, and different types of detection sensors or combinations thereof.

The following part presents a brief overview of automatic and semiautomatic fire detection and monitoring systems around the world, practical experience with these systems, and their evaluation in terms of efficiency, accuracy, versatility, and other key attributes.

Fire accidents pose a serious threat to industries, crowded events, social gatherings, and the densely populated areas found across India. Such incidents may damage property and the environment, and pose a threat to human and animal life. According to a recent National Risk Survey Report, fire stood in third position, overtaking corruption, terrorism, and insurgency, thus posing a significant risk to the country's economy and citizens. The recent forest fires in Australia reminded the world of the destructive capability of fire and the impending ecological disaster, claiming millions of lives and resulting in billions of dollars in damage. Early detection of fire accidents can save innumerable lives, as well as saving property from permanent infrastructure damage and the consequent financial losses.

To achieve high accuracy and robustness in dense urban areas, detection through local surveillance is both necessary and effective. Traditional opto-electronic fire detection systems have major disadvantages: they require separate and often redundant systems, rely on fault-prone hardware, need regular maintenance, and raise false alarms. Using such sensors in hot, dusty industrial conditions is also not possible. Detecting fires through a surveillance video stream is therefore one of the most feasible and cost-effective solutions, suitable for replacing existing systems without large infrastructure installation or investment. Existing video-based machine learning models rely heavily on domain knowledge and feature engineering to achieve detection, and therefore have to be updated to meet new threats.

We aim to develop a classification model using deep learning and transfer learning to recognise fires in images and video frames, ensuring early detection and reducing manual work. This model can be used to detect fires in surveillance videos. Unlike existing systems, it requires neither special infrastructure for setup, as hardware-based solutions do, nor domain knowledge and prohibitive computation for development.

LITERATURE REVIEW

Among the different computer-based approaches to fire detection, the prominent approaches we found use artificial neural networks, deep learning, transfer learning, and convolutional neural networks. The artificial neural network approach in paper [2] uses the Levenberg-Marquardt training algorithm for a fast solution. The accuracy of the algorithm ranged from 61% to 92%, and false positives ranged from 8% to 51%. This approach can yield high accuracy, but it requires immense domain knowledge.

In paper [3], the authors state that present hardware-based detection systems offer low accuracy along with a high occurrence of false alarms, making it more likely that actual fires are misclassified. They are also not suitable for detecting fires breaking out over large areas such as forests, warehouses, fields, buildings, or oil reservoirs. The authors used a simplified YOLO (You Only Look Once) model with 12 layers. Image augmentation techniques such as rotation, contrast adjustment, zooming in/out, and changes in saturation and aspect ratio were used to create multiple samples of each image, forming 1,720 samples in total. The model draws a bounding box around the flame region. It outperformed existing models when the colour features of the flames varied from those in the training set.

Paper [4] provides two approaches. The first is to perform training on the dataset using transfer learning and later fine-tune it. The second is to extract flame features, fuse them, and classify them using a machine learning classifier. The transfer learning models used were Xception, Inception V3, and ResNet-50, pretrained on ImageNet. The first approach achieved accuracy of up to 96%; the second, stacking XGBoost and LightGBM, achieved an AUC of 0.996. Transfer learning greatly reduces the training time required for a model and needs a comparatively smaller dataset, and neither approach requires any domain knowledge. In works [7] and [8], a deep CNN approach was taken to the detection and localization of fires. The accuracy obtained was between 90% and 97% in both papers. This approach is time-consuming: training was performed on an Nvidia GTX Titan X with 12 GB of onboard memory.
Fire detection performance depends critically on the flame pixel classifier, which generates the seed areas on which the rest of the system operates. The flame pixel classifier is thus required to have a very high detection rate and preferably a low false alarm rate. Few algorithms in the literature deal directly with flame pixel classification. Flame pixel classification can be considered in both grayscale and colour images; most works on flame pixel classification in colour images or video sequences are rule based.

Krull et al. [6] used low-cost CCD cameras to detect fires in the cargo bay of long-range passenger aircraft. The method uses statistical features based on grayscale video frames, including mean pixel intensity, standard deviation, and second-order moments, along with non-image features such as humidity and temperature, to detect fire in the cargo compartment. The system is used commercially in parallel with standard smoke detectors to reduce the false alarms the smoke detectors cause. It also provides a visual inspection capability that helps the aircraft crew confirm the presence or absence of fire. However, the statistical image features are not intended to be used as part of a standalone fire detection system.

Marbach et al. [10] used the YUV colour model for the representation of video data, where the time derivative of the luminance component Y was used to declare candidate fire pixels, and the chrominance components U and V were used to classify whether a candidate pixel lies in the fire sector. In addition to luminance and chrominance, they incorporated motion into their work. They report that their algorithm raises fewer than one false alarm per week; however, they do not mention the number of tests conducted.

Celik et al. [12] used normalized RGB (rgb) values for a generic colour model of flame. Normalized RGB is proposed in order to alleviate the effects of changing illumination. The generic model is obtained through statistical analysis carried out in the r–g, r–b, and g–b planes. Owing to the distribution of the sample fire pixels in each plane, three lines are used to specify a triangular region representing the region of interest for fire pixels. A pixel is declared a fire pixel if it falls into all three triangular regions in the r–g, r–b, and g–b planes.
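The normalization step itself is simple to compute; a minimal sketch is shown below. The triangular decision regions are fitted statistically in [12] and are not reproduced here, so only the illumination-normalizing transform is shown:

```python
import numpy as np

def normalized_rgb(img):
    """Convert an RGB image (H, W, 3) to illumination-normalized rgb chromaticity."""
    img = img.astype(np.float64)
    s = img.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0   # avoid dividing by zero on pure-black pixels
    return img / s    # each pixel now satisfies r + g + b = 1
```

Because each channel is divided by the pixel's total intensity, uniformly brightening or dimming a pixel leaves its (r, g, b) chromaticity unchanged, which is what makes the model robust to changing illumination.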
Celik et al. [13] proposed a model for fire and smoke detection using an image processing approach. For fire detection the method uses the RGB and YCbCr colour spaces. A few rules are used to identify fire pixels, which are then given to a Fuzzy Inference System (FIS); a rule table determines, based on the probability value, whether a pixel is considered fire. They report 99% accuracy, but the method cannot be used for real-time monitoring. For smoke detection they give some threshold values, but this may fail because the texture of smoke varies depending on the materials being burned.
CHAPTER-2
SYSTEM STUDY AND ANALYSIS
2.1 EXISTING SYSTEM

Forest Fire Finder. This optical system takes an entirely different approach: it is based on intelligent analysis of the atmosphere rather than detection of smoke or fire glow. Forest Fire Finder tracks the way the atmosphere absorbs sunlight, which depends on its chemical composition. Different compositions have different absorption behaviour, so Forest Fire Finder can recognise organic smoke.

2.1.1 DISADVANTAGES

 Less flexible
 Security is the main problem

2.2 PROPOSED SYSTEM

The proposed system is a classification model built with deep learning and transfer learning to recognise fire in images and video frames taken from surveillance streams. Instead of creating and training a deep neural network from scratch, pre-trained models (ResNet-50, InceptionV3 and Inception-ResNet-V2, trained on ImageNet) are used as feature extractors by removing their final fully connected layers, and machine learning classifiers such as SVM, Logistic Regression, Naive Bayes and Decision Tree are trained on the extracted features to detect fire. Unlike hardware-based solutions, this requires neither special infrastructure for setup nor prohibitive computation for development, and it can operate on existing surveillance video streams.

2.2.1 ADVANTAGES
 Communication overhead reduced.
 Less energy
 More flexible
CHAPTER-3

DEVELOPMENT ENVIRONMENT

3.1 HARDWARE SPECIFICATION

The hardware configuration used for this project is:

Processor : Any modern processor
RAM : Min 4 GB
Hard Disk : Min 100 GB

3.2 SOFTWARE SPECIFICATION

The software requirements to develop the project are

Operating System : Windows 10

Technology : Python 3.6

IDE : Anaconda, Spyder

3.2.1 PROGRAMMING ENVIRONMENT

PYTHON

Python is an easy-to-learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms.

The Python interpreter and the extensive standard library are freely available in source or
binary form for all major platforms from the Python Web site, https://www.python.org/, and
may be freely distributed. The same site also contains distributions of and pointers to many
free third party Python modules, programs and tools, and additional documentation.

The Python interpreter is easily extended with new functions and data types implemented in
C or C++ (or other languages callable from C). Python is also suitable as an extension
language for customizable applications.

This tutorial introduces the reader informally to the basic concepts and features of the Python
language and system. It helps to have a Python interpreter handy for hands-on experience, but
all examples are self-contained, so the tutorial can be read off-line as well.

For a description of standard objects and modules, see The Python Standard Library. The
Python Language Reference gives a more formal definition of the language. To write
extensions in C or C++, read Extending and Embedding the Python Interpreter and Python/C
API Reference Manual. There are also several books covering Python in depth.

Python is a powerful programming language ideal for scripting and rapid application development. It is used in areas ranging from web development (e.g. Django and Bottle) and scientific and mathematical computing (Orange, SymPy, NumPy) to desktop graphical user interfaces (Pygame, Panda3D).


Python is an interpreted language. Interpreted languages do not need to be compiled to run: a program called an interpreter runs Python code on almost any kind of computer. This means that a programmer can change the code and quickly see the results. It also means Python is slower than a compiled language like C, because it is not running machine code directly.

Python is a good programming language for beginners. It is a high-level language, which means a programmer can focus on what to do instead of how to do it. Writing programs in Python takes less time than in some other languages.

Python drew inspiration from other programming languages like C, C++, Java, Perl, and
Lisp.

Python's developers strive to avoid premature optimization, and they reject patches to non-critical parts of the CPython reference implementation that would provide marginal increases in speed. When speed is important, a Python programmer can move time-critical functions to extension modules written in languages such as C, or use PyPy, a just-in-time compiler. Cython is also available: it translates a Python script into C and makes direct C-level API calls into the Python interpreter.

Keeping Python fun to use is an important goal of its developers. This is reflected in the language's name, a tribute to the British comedy group Monty Python, and in occasionally playful approaches to tutorials and reference materials, such as referring to spam and eggs instead of the standard foo and bar.

Python is used by hundreds of thousands of programmers in many places. Sometimes only Python code is used for a program, but more often Python handles the simple jobs while another programming language does the more complicated tasks.

Its standard library is made up of many functions that come with Python when it is installed. Many other libraries available on the Internet make it possible for Python to do even more. These libraries make it a powerful language that can do many different things.

Some things that Python is often used for are:

 Web development
 Scientific programming
 Desktop GUI applications
 Network programming
 Game programming.

Syntax

Python has a very easy-to-read syntax. Some of Python's syntax comes from C, because that is the language the reference implementation is written in. But Python uses whitespace to delimit code: spaces or tabs are used to organize code into groups. This is different from C, where a semicolon ends each statement and curly braces ({}) group code. Using whitespace to delimit code makes Python a very easy-to-read language.
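The contrast described above can be seen in a short example: the indented lines form the loop body, and dedenting ends it, with no braces or semicolons required (the function name here is illustrative):

```python
# In C, braces {} group the body of a loop; in Python, indentation does.
def count_down(n):
    while n > 0:        # the indented lines below form the loop body
        print(n)
        n = n - 1
    return "done"       # dedenting ends the loop body

count_down(3)           # prints 3, 2, 1
```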
CHAPTER-4

SYSTEM DESIGN

4.1 ARCHITECTURE
4.2 INPUT DESIGN
Input design is one of the most important phases of system design. It is the process in which the inputs received by the system are planned and designed so as to obtain the necessary information from the user while eliminating information that is not required. The aim of input design is to ensure the highest possible level of accuracy and to ensure that the input is accessible to, and understood by, the user.

Input design is the part of overall system design that requires very careful attention. If the data going into the system is incorrect, then the processing and output will magnify the errors.

The objectives considered during input design are:

 Nature of input processing.
 Flexibility and thoroughness of validation rules.
 Handling of properties within the input documents.
 Screen design to ensure accuracy and efficiency of the input, and its relationship with files.
 Careful design of the input also involves attention to error handling, controls, batching and validation procedures.

Input design features can ensure the reliability of the system and produce results from accurate data, or they can result in the production of erroneous information.

4.3 OUTPUT DESIGN

Computer output is the most important and direct source of information for the user. Efficient, intelligible output design improves the system's relationship with the user and helps in decision making. A major form of output is hard copy from the printer. The output devices to consider depend on factors such as compatibility of the device with the system, response time requirements, expected print quality, and the number of copies needed.


A systems flowchart specifies master files, transaction files, and computer programs. Input data are collected and organized into groups of similar data. Once identified, appropriate input media are selected for processing.
CHAPTER-5

MODULES

The model is divided into two parts

1. Data Collection and Pre-processing.

2. Building fire detection model by Transfer Learning

Data Collection and Pre-processing

The first step is to gather video frames for the problem statement. The dataset has two classes: fire and non-fire. Positive samples consist of images with real fire; negative samples consist of images containing objects that look like fire but are not (likely false positives), and are easier to collect. We therefore need to collect diverse video frames to support better fire detection. The collected dataset is divided into train and test video frames. The dataset currently has 1,678 fire images/video frames and 1,368 non-fire frames, sourced from Google, since no standard dataset is available.
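The train/test division of the collected frames might be done as follows. This is a sketch: the file names are hypothetical placeholders, and the 80/20 split ratio is an assumption, since the report does not state one:

```python
from sklearn.model_selection import train_test_split

def split_frames(fire_paths, non_fire_paths, test_size=0.2, seed=42):
    """Split labelled frame paths into train/test sets (1 = fire, 0 = non-fire)."""
    paths = list(fire_paths) + list(non_fire_paths)
    labels = [1] * len(fire_paths) + [0] * len(non_fire_paths)
    # Stratify so both splits keep the same fire/non-fire ratio.
    return train_test_split(paths, labels, test_size=test_size,
                            stratify=labels, random_state=seed)

# Hypothetical file names; real frames would be collected as described above.
fire = [f"fire_{i}.jpg" for i in range(10)]
non_fire = [f"nonfire_{i}.jpg" for i in range(10)]
X_train, X_test, y_train, y_test = split_frames(fire, non_fire)
```

Stratifying matters here because the two classes are of slightly different sizes (1,678 vs 1,368); without it, a random split could skew the class balance of the test set.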

Building fire detection model by Transfer Learning

The second step is to use the various pretrained models available in Keras to extract features from the video frames. These pre-trained models were trained on very large-scale image classification problems. The convolutional layers act as feature extractors and the fully connected layers act as classifiers. Since these models are very large and have seen a huge number of images, they tend to learn very good, discriminative features.

To extract the video-frame features, we remove the last (fully connected) layer. This provides us with a feature vector, whose size differs from model to model. The central concept of transfer learning is to use a more complex but successful pre-trained DNN model to transfer its learning to our simpler problem. Instead of creating and training deep neural nets from scratch (which takes significant time and computing resources), we use the pre-trained weights of these deep neural net architectures (trained on ImageNet) for our own dataset.
We have used the ResNet-50, InceptionV3 and InceptionResNetV2 models to extract the features, and various ML algorithms (SVM, Logistic Regression, Naive Bayes and Decision Tree) on the extracted features to detect fire in video frames.
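The pipeline above can be sketched as follows. The extractor uses the standard Keras ResNet-50 API (with its top classifier removed, as described above); the feature vectors fed to the SVM below are synthetic stand-ins, not real extracted features, and the function is not the report's exact code:

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(frames):
    """Feature-extractor sketch: ResNet-50 with its top (classifier) removed.

    `frames` is an array of shape (N, 224, 224, 3). Requires TensorFlow/Keras;
    imported lazily so the classifier stage below can run without it.
    """
    from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
    model = ResNet50(weights="imagenet", include_top=False, pooling="avg")
    return model.predict(preprocess_input(frames))  # (N, 2048) feature vectors

# Synthetic stand-in features; in the real pipeline they come from extract_features().
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.0, size=(50, 2048)),    # "fire" frames
               rng.normal(-1.0, size=(50, 2048))])  # "non-fire" frames
y = np.array([1] * 50 + [0] * 50)

clf = SVC(kernel="linear").fit(X, y)  # any of SVM/LogReg/NB/DT could slot in here
```

Because the heavy network is used only once per frame to produce a 2048-dimensional vector, the downstream classifier trains in seconds even on a CPU, which is the practical appeal of this transfer-learning design.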
CHAPTER-6

SYSTEM TESTING AND IMPLEMENTATION


SYSTEM TESTING

Testing is a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work together to verify that all system elements have been properly integrated and perform their allocated functions. Testing is the process of checking whether the developed system works according to the actual requirements and objectives of the system.

The philosophy behind testing is to find errors. A good test is one that has a high probability of finding an undiscovered error; a successful test is one that uncovers such an error. Test cases are devised with this purpose in mind. A test case is a set of data that the system will process as input. The data are created with the intent of determining whether the system will process them correctly, without any errors, to produce the required output.

Types of Testing:

 Unit testing
 Integration testing
 Validation testing
 Output testing
 User acceptance testing
Unit Testing

All modules were tested individually as soon as they were completed and were checked for correct functionality.

Integration Testing

The entire project was split into small programs, each of which produces a frame as output. These programs were tested individually; finally, all of them were combined by creating another program in which all the constructors were used. This initially caused a number of problems by not functioning in an integrated manner.

User interface testing is important, since the user has to confirm that the arrangement of the frames is convenient and satisfactory. When the frames were given for testing, the end users gave suggestions; based on these, the frames were modified and put into practice.

Validation Testing

At the culmination of black box testing, the software is completely assembled as a package. Interfacing errors have been uncovered and corrected, and a final series of tests is performed. Validation succeeds when the software functions in a manner that can be reasonably accepted by the customer.

Output Testing

After validation testing, the next step is output testing of the proposed system, since the system is not useful if it does not produce the required output. Asking the user about the format in which the output is required tests the output displayed or generated by the system under consideration. The output format is considered in two ways: on screen and in printed form. The on-screen output format was found to be correct, as it was designed in the system design phase according to user needs; the hard-copy output likewise follows the specifications requested by the user.

User Acceptance Testing

An acceptance test has the objective of selling the user on the validity and reliability of the system. It verifies that the procedures operate to system specification and that the integrity of vital data is maintained.

Performance Testing

This project is a application based project, and the modules are interdependent with
the other modules, so the testing cannot be done module by module. So the unit testing is not
possible in the case of this driver. So this system is checked only with their performance to
check their quality.

IMPLEMENTATION

The purpose of system implementation can be summarized as follows: making the new system available to a prepared set of users (the deployment), and positioning on-going support and maintenance of the system within the Performing Organization (the transition). At a finer level of detail, deploying the system consists of executing all steps necessary to educate the Consumers in the use of the new system, placing the newly developed system into production, confirming that all data required at the start of operations are available and accurate, and validating that the business functions that interact with the system are functioning properly. Transitioning the system support responsibilities involves changing from a system development mode to a system support and maintenance mode of operation, with ownership of the new system moving from the Project Team to the Performing Organization.

System implementation is the important stage of the project in which the theoretical design is turned into a practical system. The main stages in implementation are as follows:

 Planning
 Training
 System testing and
 Changeover Planning
Planning is the first task in system implementation. Planning means deciding on the method and the time scale to be adopted. At the time of implementation of any system, people from different departments and systems analysts are involved; they are confronted with the practical problem of controlling the activities of people outside their own data processing departments. The line managers are controlled through an implementation coordinating committee. The committee considers the ideas, problems and complaints of the user departments; it must also consider:

 The implications of the system environment
 Staff selection and allocation of implementation tasks
 Consultation with unions and the resources available
 Standby facilities and channels of communication
The following roles are involved in carrying out the processes of this phase. Detailed
descriptions of these roles can be found in the Introductions to Sections I and III.

 Project Manager
 Project Sponsor
 Business Analyst
 Data/Process Modeler
 Technical Lead/Architect
 Application Developers
 Software Quality Assurance (SQA) Lead
 Technical Services (HW/SW, LAN/WAN, TelCom)
 Information Security Officer (ISO)
 Technical Support (Help Desk, Documentation, Trainers)
 Customer Decision-Maker
 Customer Representative
 Consumer

The purpose of Prepare for System Implementation is to take all possible steps to
ensure that the upcoming system deployment and transition occur smoothly, efficiently, and
flawlessly. In the implementation of any new system, it is necessary to ensure that the
Consumer community is best positioned to utilize the system once deployment efforts have
been validated. Therefore, all necessary training activities must be scheduled and
coordinated. As this training is often the first exposure to the system for many individuals, it
should be conducted as professionally and competently as possible. A positive training
experience is a great first step towards Customer acceptance of the system.

During System Implementation it is essential that everyone involved be absolutely synchronized with the deployment plan and with each other. Often the performance of deployment efforts impacts many of the Performing Organization's normal business operations. Examples of these impacts include:

 Consumers may experience a period of time in which the systems that they depend on to perform their jobs are temporarily unavailable. They may be asked to maintain detailed manual records or logs of the business functions they perform, to be entered into the new system once it is operational.
 Technical Services personnel may be required to assume significant implementation responsibilities while continuing current levels of service on other critical business systems.
 Technical Support personnel may experience unusually high volumes of support requests due to the possible disruption of day-to-day processing.

Because of these and other impacts, communicating planned deployment activities to all parties involved in the project is critical. A smooth deployment requires strong leadership, planning, and communications. By this point in the project lifecycle, the team will have spent countless hours devising and refining the steps to be followed. During this preparation process the Project Manager must verify that all conditions that must be met prior to initiating deployment activities have been met, and that the final 'green light' is on for the team to proceed. The final process within the System Development Lifecycle is to transition ownership of the system support responsibilities to the Performing Organization. For the transition to be efficient and effective, the Project Manager should make sure that all involved parties are aware of the transition plan, the timing of the various transition activities, and their role in its execution.

Due to the number of project participants in this phase of the SDLC, many of the necessary
conditions and activities may be beyond the direct control of the Project Manager.
Consequently, all Project Team members with roles in the implementation efforts must
understand the plan, acknowledge their responsibilities, recognize the extent to which other
implementation efforts are dependent upon them, and confirm their commitment.

CHAPTER 7

CONCLUSION

The present decade is marked by huge strides in processing power, computation and
algorithms. This has enabled great progress in many fields, including the processing of
surveillance video streams to recognize abnormal or unusual events and actions. Fire
accidents have caused death and destruction all over the world, consuming countless lives
and causing billions in damages. This makes developing an accurate, early, and affordable
fire-detection system imperative. Therefore, we have proposed a fire detection model for
videos/video frames using transfer learning for deep learning. The models use ResNet-50,
InceptionV3 and Inception-ResNet-V2 to extract features, and ML algorithms such as SVM,
Logistic Regression, Naive Bayes and Decision Tree on the extracted features to detect fire
in video frames. Looking at the results, ResNet-50 with SVM works best for our problem
statement. As a whole, the application works in real time, can issue alerts, and offers a
user-friendly graphical interface. It is cost-effective, reliable, robust, and accurate compared
to existing opto-electronic hardware and software-based systems in the market.
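As an illustration of the transfer-learning head described above, the sketch below trains a linear SVM on frozen deep-feature vectors. The report's actual ResNet-50 extractor and fire dataset are not reproduced here: random, well-separated 2048-dimensional vectors stand in for the fire / no-fire features, so the numbers and shapes are assumptions, not results.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for 2048-D ResNet-50 feature vectors of video frames.
# In the real pipeline these would come from the network's pooled output;
# random clusters are used here so the sketch runs anywhere.
rng = np.random.default_rng(0)
fire_feats = rng.normal(loc=1.0, scale=0.5, size=(100, 2048))
nofire_feats = rng.normal(loc=-1.0, scale=0.5, size=(100, 2048))

X = np.vstack([fire_feats, nofire_feats])
y = np.array([1] * 100 + [0] * 100)   # 1 = fire, 0 = no fire

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# Linear SVM on top of the frozen deep features (the transfer-learning head)
clf = SVC(kernel='linear')
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

In the real system, only this lightweight classifier is trained; the convolutional backbone stays fixed, which is what makes the approach feasible on modest hardware.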

FUTURE SCOPE

The application can be enhanced by training the model on a larger dataset
of fires at various stages and scales. With more GPU memory, two deep learning models
could be used for feature extraction, their output feature vectors concatenated and then
classified, for greater robustness. An R-CNN model can be used to add fire localization
alongside classification. We can also expect better deep learning architectures to emerge in
the future, offering better feature extraction. The application will also perform considerably
better on machines with more processing power than the one on which it was developed.
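The two-extractor fusion suggested above can be sketched in a few lines. The 2048-dimensional sizes match the pooled outputs of ResNet-50 and InceptionV3, but the vectors here are random placeholders, not real network features.

```python
import numpy as np

# Placeholder feature vectors for one frame from two backbones,
# e.g. ResNet-50 (2048-D) and InceptionV3 (2048-D). Real vectors would
# come from each network's global-average-pooling layer.
rng = np.random.default_rng(42)
feat_resnet = rng.normal(size=2048)
feat_inception = rng.normal(size=2048)

# Fused descriptor: concatenate before handing to the classifier head
fused = np.concatenate([feat_resnet, feat_inception])
print(fused.shape)  # (4096,)
```

The classifier then trains on the 4096-D fused descriptor instead of a single backbone's output, which is why the extra GPU memory is needed.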
BIBLIOGRAPHY

Books referred

 Python: The Complete Reference, Paperback, 20 March 2018, by Martin C. Brown

 Python Programming: Using Problem Solving Approach, Paperback, 10 June 2017, by Reema Thareja

Papers referred

[1]. National Risk Survey Report - Pinkerton, FICCI (2018).

[2]. Janku P., Kominkova Oplatkova Z., Dulik T., Snopek P. and Liba J. 2018. “Fire
Detection in Video Stream by Using Simple Artificial Neural Network”. MENDEL. 24, 2
(Dec. 2018), 55–60.

[3]. Shen, D., Chen, X., Nguyen, M., & Yan, W. Q. (2018). “Flame detection using deep
learning”. 2018 4th International Conference on Control, Automation and Robotics (ICCAR).

[4]. Li, C., & Bai, Y. (2018). “Fire Flame Image Detection Based on Transfer Learning”.
2018 5th IEEE International Conference on Cloud Computing and Intelligence Systems
(CCIS).

[5]. K. Muhammad, J. Ahmad, I. Mehmood, S. Rho and S. W. Baik, “Convolutional Neural
Networks Based Fire Detection in Surveillance Videos”, in IEEE Access, vol. 6, pp. 18174-
18183, 2018.

[6]. S. J. Pan and Q. Yang, "A Survey on Transfer Learning," in IEEE Transactions on
Knowledge and Data Engineering, vol. 22, no. 10, pp. 1345-1359, Oct. 2010.
[7]. Zhang, Qingjie & Xu, Jiaolong & Xu, Liang & Guo, Haifeng. (2016). “Deep
Convolutional Neural Networks for Forest Fire Detection”.
[8]. K. Muhammad, J. Ahmad, Z. Lv, P. Bellavista, P. Yang and S. W. Baik, “Efficient Deep
CNN-Based Fire Detection and Localization in Video Surveillance Applications,” in IEEE
Transactions on Systems, Man, and Cybernetics: Systems, vol. 49, no. 7, pp. 1419-1434, July
2019.

[9]. Z. Jiao et al., “A Deep Learning Based Forest Fire Detection Approach Using UAV and
YOLOv3,” 2019 1st International Conference on Industrial Artificial Intelligence (IAI),
Shenyang, China, 2019.

[10]. Faming Gong, Chuantao Li, Wenjuan Gong, Xin Li, Xiangbing Yuan, Yuhui Ma
and Tao Song, “A Real-Time Fire Detection Method from Video with Multi-Feature Fusion”.
Hindawi, Computational Intelligence and Neuroscience, Volume 2019.

[11].L. Shao, F. Zhu and X. Li, “Transfer Learning for Visual Categorization: A Survey,” in
IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 5, pp. 1019-1034,
May 2015.

[12]. S. Mohd Razmi, N. Saad and V. S. Asirvadam, “Vision-based flame detection: Motion
detection & fire analysis”, 2010 IEEE Student Conference on Research and Development
(SCOReD), Putrajaya, 2010, pp. 187-191.

[13]. T. Qiu, Y. Yan and G. Lu, “A new edge detection algorithm for flame image
processing”, 2011 IEEE International Instrumentation and Measurement Technology
Conference, Binjiang, 2011, pp. 1-4.

[14]. M. Ligang, C. Yanjun and W. Aizhong, “Flame region detection using color and motion
features in video sequences”, The 26th Chinese Control and Decision Conference (2014
CCDC), Changsha, 2014, pp. 3005-3009.

[15]. Toulouse, Tom & Rossi, Lucile & Celik, Turgay & Akhloufi, Moulay. (2015).
“Automatic fire pixel detection using image processing: A comparative analysis of rule-based
and machine learning-based methods”. Signal, Image and Video Processing.
[16]. K. He, X. Zhang, S. Ren and J. Sun, “Deep Residual Learning for Image Recognition”,
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas,
NV, 2016, pp. 770-778.

[17]. S. Wilson, S. P. Varghese, G. A. Nikhil, I. Manolekshmi and P. G. Raji, “A
Comprehensive Study on Fire Detection”, 2018 Conference on Emerging Devices and Smart
Systems (ICEDSS), Tiruchengode, 2018, pp. 242-246.

[18]. X. Wu, X. Lu and H. Leung, “An adaptive threshold deep learning method for fire and
smoke detection”, 2017 IEEE International Conference on Systems, Man, and Cybernetics
(SMC), Banff, AB, 2017, pp. 1954-1959.

[19]. Chenebert, Audrey & Breckon, Toby & Gaszczak, Anna. (2011). “A non-temporal
texture driven approach to real-time fire detection”.

[20]. Seebamrungsat, Jareerat et al. “Fire detection in the buildings using image processing”,
2014 Third ICT International Student Project Conference (ICTISPC) (2014): 95-98.

Websites referred
 https://www.learnpython.org/
 https://www.w3schools.com/python/
APPENDICES

SOURCE CODE
import cv2
import numpy as np

def matching(img):
    """Search a frame for the fire template and annotate any match."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    template = cv2.imread('t.png', 0)
    w, h = template.shape[::-1]

    # All six template-matching methods; only TM_CCORR is used here
    methods = ['TM_CCOEFF', 'TM_CCOEFF_NORMED', 'TM_CCORR',
               'TM_CCORR_NORMED', 'TM_SQDIFF', 'TM_SQDIFF_NORMED']

    for meth in methods[2:3]:
        method = getattr(cv2, meth)

        # Apply template matching
        res = cv2.matchTemplate(gray, template, method)
        thr = np.argmax(res)   # flat index of the strongest response

        if thr < 65000:
            min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)

            # For TM_SQDIFF / TM_SQDIFF_NORMED the best match is the minimum
            if method in [cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]:
                top_left = min_loc
            else:
                top_left = max_loc

            bottom_right = (top_left[0] + w, top_left[1] + h)
            cv2.rectangle(img, top_left, bottom_right, 255, 2)

            # Label the detection: blue text, scale 1, thickness 2 px
            cv2.putText(img, 'Fire detected', top_left,
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2,
                        cv2.LINE_AA)

    return img

cap = cv2.VideoCapture('forest_fire_video.mp4')
cnt = 0

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    if not ret:          # stop when the video ends
        break

    # Display the annotated frame
    cv2.imshow('frame', matching(frame))

    k = cv2.waitKey(80)
    if k == ord('q'):
        break
    if k == ord('c'):
        # Save the current frame as a candidate template image
        cv2.imwrite('template_' + str(cnt) + '.jpg', frame)
        cnt += 1

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()

SCREENSHOTS
