
Article
A Study on a Complex Flame and Smoke Detection Method
Using Computer Vision Detection and Convolutional
Neural Network
Jinkyu Ryu and Dongkurl Kwak *

Graduate School of Disaster Prevention, Kangwon National University, Samcheok 25913, Korea;
jkryu@kangwon.ac.kr
* Correspondence: dkkwak@kangwon.ac.kr; Tel.: +82-33-570-6823

Abstract: This study sought an effective detection method not only for the flame but also for the smoke generated in the event of a fire. To this end, the flame region was pre-processed using color conversion and a corner detection method, and the smoke region was detected using the dark channel prior and optical flow. This eliminates unnecessary background regions and allows the selection of fire-related regions. For each pre-processed region of interest, inference was conducted using a deep-learning-based convolutional neural network (CNN) to accurately determine whether it was a flame or smoke. Through this approach, the detection accuracy improved by 5.5% for flame and 6% for smoke compared to detecting a fire through the object detection model without separate pre-processing.

Keywords: fire safety system; computer vision; image processing; deep learning

Citation: Ryu, J.; Kwak, D. A Study on a Complex Flame and Smoke Detection Method Using Computer Vision Detection and Convolutional Neural Network. Fire 2022, 5, 108. https://doi.org/10.3390/fire5040108

Academic Editor: James A. Lutz

Received: 24 June 2022; Accepted: 26 July 2022; Published: 27 July 2022

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction
In a fire, engineering approaches to reducing the spread of smoke and flames involve compartmentalization, dilution, airflow, and pressurization [1]. These methods are very important for extinguishing a fire in its early stages, but they tend to take effect only after the fire has already begun to grow or has fully developed. Therefore, to solve this problem, this study attempts an image-based fire detection method. In particular, it aims to respond effectively to a fire by detecting both the flame and the smoke that may occur in its early stages. It is very important to detect smoke, not just flames, particularly considering that smoke damage to human health generally occurs more often than direct damage caused by flames. Smoke generated in a fire can affect the human body through high temperatures, lack of oxygen, and carbon monoxide. In addition to these direct factors, reduced visibility and the resulting psychological anxiety may adversely affect evacuation behavior [2,3].

To this end, many studies on fire detection based on artificial intelligence have recently been conducted. Existing deep-learning computer-vision-based flame detection studies include a method proposed by Shen et al. [4], in which flames were detected using a "you only look once" (YOLO) model based on Tensorflow, without separate filtering of input images. In this case, if additional image pre-processing were added, the accuracy would improve, as the unnecessary background area would be removed in advance, significantly reducing false negatives. Another study was a fire detection method proposed by Muhammad et al. [5], which classified images as fire or non-fire in order to detect fire efficiently in resource-constrained environments. However, in this case, the fire is judged not only from the flame but also from the entire, unnecessary image area. Nguyen et al. [6] achieved 92.7% accuracy in detecting fire using a UAV-based object detection algorithm, while Jeon et al. [7] achieved a 97.9% F1 score by detecting fire using a CNN based on a multi-scale prediction framework, but both of these previous studies detected only flame.

Fire 2022, 5, 108. https://doi.org/10.3390/fire5040108 https://www.mdpi.com/journal/fire




However, there is a problem of a high error detection rate because there is no separate filtering method. Therefore, in this study, the disadvantages of the existing image-based fire detection methods were supplemented through color and dynamic characteristics. In particular, when detection is difficult because the shape is not constant, such as with smoke, detection is facilitated through an appropriate pre-processing method. Therefore, to supplement these fire detection methods, this study proposes a method capable of detecting flame and smoke in combination.
Therefore, as shown in Figure 1, an effective image pre-processing method was designed for the flame as well as for the smoke in the input image, so that objects that are not related to the fire can be filtered out in advance. In the case of a flame, when a combustible gas generated by pyrolysis of a solid combustible material such as wood is mixed with air and combusted, a flame may be generated. In addition, a flame may be generated by evaporative combustion, in which a combustible liquid evaporates and burns. When such flaming occurs, pre-processing was attempted in order to detect the flame through its appearance-related characteristics. For this purpose, hue, saturation, value (HSV) color conversion and corner detection were used during image pre-processing. First, in the case of HSV color conversion, a color region where a flame is likely to exist is detected. In addition, among the objects that remain after HSV color conversion, the flame is found to possess the characteristic of a sharp texture, resulting in a large number of corners [8,9]. Based on this fact, if the Harris corner detector is applied to the HSV color-converted image, corners are generated intensively only in the flame, so that filtering can be performed more precisely. Therefore, following the color conversion and filtering, the region where the corners are gathered is detected as a flame candidate region.

Figure 1. Flow chart of the proposed fire detection.

In this study, both the dark channel prior and optical flow based on the Lucas–Kanade method were used to effectively pre-process smoke. The dark channel prior was originally proposed by He et al. [10] as an algorithm designed to remove haze from an image. However, in this study, the smoke region in the image was detected using these haze detection characteristics. The characteristic used is that in pixels where haze or smoke does not exist, at least one color channel among R, G, and B has a value close to 0; such a pixel is defined as a dark pixel. The smoke region detected through these features was additionally filtered using optical flow, based on the Lucas–Kanade method. This allows the smoke to be effectively detected by filtering out the background through dynamic characteristics, in which the smoke moves in the upward direction. Optical flow is an important technique for analyzing the motion of an object in computer vision, and includes a differential method, a matching method, and a phase-based method. Although there are various optical flow techniques, in this study the smoke motion characteristics were detected using the Lucas–Kanade method [11].
Finally, a CNN was used to detect fire with higher accuracy and reliability for the pre-processed flame and smoke candidate regions. Among the CNN models, the Inception-V3 model was used for inference, and images related to flames and smoke were collected and configured as a training dataset.
2. Image Pre-Processing for Fire Detection
2.1. Flame Detection
The first image pre-processing step employed for flame detection in this study was HSV color conversion. The HSV color model can be used to identify the color of objects in various applications other than image pre-processing. The hue and saturation components are very useful because they are similar to how humans perceive color, which makes them well suited to developing image-processing algorithms. Hue represents the distribution of colors based on red, and saturation represents the degree to which white light is included in the color. Additionally, value is used to control the intensity of light. Because value is an independent component whose range can be controlled separately, algorithms built on it can be made robust to lighting changes [12,13].
In Equation (1), it changes
should be [12,13].
noted that when each pixel value is 1, it indicates a region
In Equation (1),toitashould
corresponding be noted
color space that when
in which a flameeach pixel
can value
exist at anisimage
1, it indicates
location,a region
and pixels
corresponding to a color space
in the corresponding rangeinare which a flame
extracted as acan exist at an
candidate imageAlocation,
region. pixel valueandof pixels
0 means
in the corresponding range are extracted
that a pixel is classified as a non-flame. as a candidate region. A pixel value of 0 means
that a pixel is classified as a non-flame.
(20 < H ( x, y) < 40) and
 (20 < 𝐻(𝑥, 𝑦) < 40) and


1, (50 < S( x, y) < 255) and
RoI ( x,y1, (50 < 𝑆(𝑥, 𝑦) < 255) and (1)
𝑅𝑜𝐼 HSV ( , )
)
 (50 < V ( x, y) < 255) and (1)
 (50 < 𝑉(𝑥, 𝑦) < 255) and

0, otherwise
0, otherwise
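As a minimal sketch (not the authors' code), Equation (1) can be implemented as a NumPy boolean mask. It assumes the frame has already been converted to OpenCV-style HSV, where H ranges over 0–179 and S, V over 0–255, and the function name is illustrative:

```python
import numpy as np

def flame_roi_hsv(hsv):
    """Binary flame RoI per Eq. (1): 1 where 20 < H < 40, 50 < S < 255, 50 < V < 255."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    mask = (h > 20) & (h < 40) & (s > 50) & (s < 255) & (v > 50) & (v < 255)
    return mask.astype(np.uint8)
```

A pixel such as (H, S, V) = (30, 100, 200) falls inside the flame-colored range and is kept, while desaturated or dark pixels are dropped.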
Figure 2 shows this HSV color conversion: Figure 2a is the original flame image, and Figure 2b is the result of applying the HSV color conversion.

Figure 2. HSV color conversion of flame image. (a) original images; (b) HSV color conversion in the specified range.

Even after HSV color conversion, objects other than the flame, including light-yellow ones, remain. To filter these further, the Harris corner detector was used as the second image pre-processing step. Among the remaining objects following HSV color conversion, the flame has a sharp texture, which results in a large number of corners. Therefore, a region where corners are generated intensively is highly likely to be a flame, and such a region is detected as a candidate region.
First, when there is a reference point $(x, y)$ in the image and the window is shifted by $(u, v)$ from the reference point, the change can be expressed as Equation (2). $I$ represents brightness, and $(x_i, y_i)$ represents the points inside the Gaussian window $W$.

$$E(u,v) = \sum_{W} \left[ I(x_i, y_i) - I(x_i + u, y_i + v) \right]^2 \quad (2)$$

The region moved by $(u, v)$ can be expanded as shown in Equation (3) below, using the Taylor series.

$$I(x_i + u, y_i + v) \approx I(x_i, y_i) + \begin{bmatrix} I_x(x_i, y_i) & I_y(x_i, y_i) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} \quad (3)$$
The first-order derivatives in the $x$ and $y$ directions, $I_x$ and $I_y$, can be obtained via convolution with $S_x$, the Sobel $x$ kernel, and $S_y$, the Sobel $y$ kernel, as shown in Figure 3. If Equation (3) is substituted into Equation (2), it can be expressed as Equation (4).

$$E(u,v) = \begin{bmatrix} u & v \end{bmatrix} \begin{bmatrix} \sum_W I_x(x_i,y_i)^2 & \sum_W I_x(x_i,y_i)\,I_y(x_i,y_i) \\ \sum_W I_x(x_i,y_i)\,I_y(x_i,y_i) & \sum_W I_y(x_i,y_i)^2 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix} \quad (4)$$

Figure 3. Mask used by the Sobel filter.
If $M$ is defined as $M = \begin{bmatrix} A & C \\ C & B \end{bmatrix}$, properties such as Equations (5) and (6) are satisfied. Finally, Equation (7) allows us to classify each pixel as edge, corner, or flat. $k$ is an empirical constant, and a value of 0.04 was used in this paper.

$$\det(M) = AB - C^2 = \lambda_1 \lambda_2 \quad (5)$$

$$\mathrm{trace}(M) = A + B = \lambda_1 + \lambda_2 \quad (6)$$

$$R(x,y) = \det(M) - k\,(\mathrm{trace}(M))^2 \quad (7)$$
Each pixel location will have a different value, and the final calculated $R(x,y)$ is compared against the following conditions to distinguish between edge, corner, and flat [14–17].
• When |R| is small, which happens when $\lambda_1$ and $\lambda_2$ are both small, the point belongs to a flat region;
• When R < 0, which happens when one eigenvalue of $\lambda_1$ and $\lambda_2$ is much bigger than the other, the region belongs to an edge;
• If R has a large value, the region is a corner.
In these conditions, a corner corresponds to a large value of $R$, and Figure 4 is the result of visualizing the pixels that satisfy this condition.
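As a rough, self-contained illustration of Equations (2)–(7) (not the authors' implementation), the Harris response can be computed in plain NumPy. The 3×3 box window standing in for the Gaussian window $W$ and all function names are assumptions of this sketch:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # Sobel y kernel (Figure 3)

def filter3(img, k):
    """3x3 cross-correlation with zero padding."""
    p = np.pad(img.astype(float), 1)
    out = np.zeros(img.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def window_sum(a, win=3):
    """Box-window sum (a simple stand-in for the Gaussian window W)."""
    p = np.pad(a, win // 2)
    s = np.zeros_like(a)
    for dy in range(win):
        for dx in range(win):
            s += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return s

def harris_response(img, k=0.04):
    ix = filter3(img, SOBEL_X)   # I_x
    iy = filter3(img, SOBEL_Y)   # I_y
    A = window_sum(ix * ix)      # sum of I_x^2 over W
    B = window_sum(iy * iy)      # sum of I_y^2 over W
    C = window_sum(ix * iy)      # sum of I_x * I_y over W
    return (A * B - C * C) - k * (A + B) ** 2   # Eq. (7)
```

On a white square over a black background, the response is large and positive at the square's corners, negative along its edges, and zero in flat areas, matching the three conditions above.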

Figure 4. Corner detection images. (a) the result of detecting the corner of the flame image; (b) the result of detecting the corner of the non-flame image.

Figure 4a is the pre-processing result of the image without a flame, and Figure 4b is the pre-processing result of the image where the flame exists; the pixels that satisfy the corner condition in the HSV color conversion image are marked with green dots. In the case of the non-flame image, many pixels remain unfiltered even after HSV color conversion, but when corner detection is performed, it can be confirmed that almost no corners exist. In the case of the flame image, corners are detected intensively in the region where the flame exists. Therefore, through this result, when various objects exist in the image, only the flame region can be effectively pre-processed. In addition, the region where these corners are clustered is used as a candidate region for inference by a deep-learning-based CNN.
2.2. Smoke Detection
If smoke occurs in a fire, it can cause negative physiological effects, such as poisoning or asphyxiation, leading to problems in evacuation or extinguishing activities. In addition, when smoke is generated, visibility is poor, the range of action for evacuation is narrowed, and adverse effects such as the malfunction of fire alarm equipment can occur. Therefore, detecting smoke early in the event of a fire is important. To this end, in this study the smoke region was detected using the dark channel prior and optical flow.

The dark channel refers to the case in which, when an image has no haze, at least one of the R, G, and B color channels has a low intensity value. The dark channel prior, as proposed by He et al., is an algorithm that removes haze based on this characteristic. When haze or smoke exists in the atmosphere, some of the light reflected from an object is lost in the process of being transmitted to the observer or camera, causing the object to appear blurred. This can be expressed as Equation (8), based on pixel $x$ [10].

$$I(x) = J(x)t(x) + A(1 - t(x)) \quad (8)$$
$J(x)$ represents the undistorted pixel, $I(x)$ represents the pixel that actually reaches the camera, and $t(x)$ is the medium transmission, which has a value of 1 when the light reaches the camera completely, without haze or smoke. $A$ is the air light, and it can be assumed that all pixels in the image share the same value. The dark channel at pixel $x$ in the image can be expressed as Equation (9).

$$J^{dark}(x) = \min_{C \in \{r,g,b\}} \left( \min_{y \in \Omega(x)} J^{C}(y) \right) \quad (9)$$
Here, $\Omega(x)$ is the kernel centered on pixel $x$, and $C \in \{r, g, b\}$ represents each channel. Therefore, the case where the brightness value of at least one channel among the R, G, and B values within $\Omega(x)$ is very low is defined as $J^{dark}$ [10]. Using these dark channel characteristics, it is possible not only to effectively remove haze or fog from the image, but also to pre-process smoke regions that may occur during a fire. Figure 5 shows the result of thresholding through the dark channel characteristics that exist in each pixel.
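Equation (9) can be sketched in plain NumPy as follows; the patch size, edge padding, and function name are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def dark_channel(img, patch=15):
    """J^dark per Eq. (9): per-pixel min over the color channels,
    then a min over the patch Omega(x) centered on each pixel."""
    per_pixel_min = img.min(axis=2)            # min over C in {r, g, b}
    pad = patch // 2
    p = np.pad(per_pixel_min, pad, mode='edge')
    h, w = per_pixel_min.shape
    out = np.empty_like(per_pixel_min)
    for y in range(h):
        for x in range(w):
            out[y, x] = p[y:y + patch, x:x + patch].min()
    return out
```

A saturated, haze-free pixel (e.g., RGB near (200, 30, 0)) drives the dark channel toward 0, while grayish smoke (e.g., (180, 180, 180)) keeps it high, which is what the thresholding shown in Figure 5 exploits.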

Figure 5. Smoke image thresholded through a dark channel.

The smoke region detected through the dark channel prior is filtered once more through the dynamic characteristics of the smoke. Combustion of solid fuels in a fire usually entails heat in the adjacent material that burns or in the fuel itself. As a result, hot volatile or flammable vapors are emitted, and when a fire column and the gas accompanying hot smoke are generated, they rise above the surrounding cold air due to the lowered gas density [18–20]. Therefore, in order to pre-process the image using the smoke flow characteristics, the motion of the smoke rising upward was detected using the optical flow algorithm. Estimating the motion of an object through optical flow uses the change in contrast between two adjacent images with a time difference [21,22].
Optical flow based on the Lucas–Kanade method makes appropriate assumptions within a range that does not deviate significantly from reality. Among them, brightness constancy is the most important assumption for the optical flow estimation algorithm. According to brightness constancy, the same part of two scenes separated by a short time difference in the video has the same, or almost the same, contrast values. This assumption is not always correct in reality, but it is based on the principle that the change in contrast of an object can be assumed to be small over the short time difference between image frames [23,24].
If the time difference between two adjacent images is sufficiently small, Equation (10) is established according to the Taylor series.

$$f(y + dy, x + dx, t + dt) = f(y, x, t) + \frac{\partial f}{\partial y}dy + \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial t}dt + \cdots \quad (10)$$
Assuming that $dt$ is small, $dy$ and $dx$, which represent the movement of an object, are also small, so there is no significant error even if the quadratic and higher-order terms are ignored. As mentioned earlier, according to the brightness constancy assumption, the new point $f(y + dy, x + dx, t + dt)$ is formed by moving $(dx, dy)$ during the time $dt$, so $f(y + dy, x + dx, t + dt)$ of the new point is the same as $f(y, x, t)$ of the original point, with $\frac{dy}{dt} = v$ and $\frac{dx}{dt} = u$. Therefore, Equation (10) can be written as Equation (11).

$$\frac{\partial f}{\partial y}v + \frac{\partial f}{\partial x}u + \frac{\partial f}{\partial t} = 0 \quad (11)$$
This equation is a differential equation called the optical flow constraint equation, or gradient constraint equation. Although the motion of the object can be estimated through this equation, the result cannot be determined directly because there are two unknowns, $v$ and $u$. To solve for the two vector components that cannot be obtained from the constraint alone, the Lucas–Kanade algorithm, a local computation method, is used. The Lucas–Kanade method solves the equation using the least squares method, as shown in Equation (12) below [25].
$$\begin{bmatrix} v \\ u \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n} w_i \left(\frac{\partial f(y_i,x_i)}{\partial y}\right)^2 & \sum_{i=1}^{n} w_i \frac{\partial f(y_i,x_i)}{\partial y}\frac{\partial f(y_i,x_i)}{\partial x} \\ \sum_{i=1}^{n} w_i \frac{\partial f(y_i,x_i)}{\partial y}\frac{\partial f(y_i,x_i)}{\partial x} & \sum_{i=1}^{n} w_i \left(\frac{\partial f(y_i,x_i)}{\partial x}\right)^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum_{i=1}^{n} w_i \frac{\partial f(y_i,x_i)}{\partial y}\frac{\partial f(y_i,x_i)}{\partial t} \\ -\sum_{i=1}^{n} w_i \frac{\partial f(y_i,x_i)}{\partial x}\frac{\partial f(y_i,x_i)}{\partial t} \end{bmatrix} \quad (12)$$
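Equation (12) amounts to a 2×2 weighted least-squares solve per window. The sketch below estimates $(v, u)$ for a single window, with uniform weights $w_i$ and NumPy's `np.gradient` standing in for the derivative filters; both are assumptions of this sketch, not the authors' implementation:

```python
import numpy as np

def lucas_kanade_window(f0, f1, w=None):
    """Solve Eq. (12) for one window: returns (v, u) = (dy/dt, dx/dt)."""
    fy, fx = np.gradient((f0 + f1) / 2.0)   # spatial derivatives df/dy, df/dx
    ft = f1 - f0                            # temporal derivative df/dt
    if w is None:
        w = np.ones_like(f0)                # uniform weights w_i
    A = np.array([[np.sum(w * fy * fy), np.sum(w * fy * fx)],
                  [np.sum(w * fy * fx), np.sum(w * fx * fx)]])
    b = -np.array([np.sum(w * fy * ft), np.sum(w * fx * ft)])
    v, u = np.linalg.solve(A, b)            # singular A means the aperture problem
    return v, u
```

For a textured window the 2×2 matrix is well conditioned; in flat or single-edge regions it becomes singular, which is the aperture problem that windowing mitigates.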

i corresponds to the coordinate values of all pixels, and the optical flow is calculated based on the derivative value computed at each pixel. The change in the direction of the optical flow can distinguish the smoke area by manually setting the threshold T.
Figure 6 depicts a scene where the smoke flow is detected using optical flow from the smoke-generated images. Using the optical flow algorithm, the vector change of the object is calculated only for the area where the smoke extracted through the dark channel feature is expected to exist, not for the entire input image.

Figure 6. Optical flow detection result of smoke videos.

The information obtained through this includes θᵢ, which is the angle, as shown in Equation (13).

θᵢ(X, Y) = arctan((Y_{t+1} − Y_t)/(X_{t+1} − X_t))    (13)

Candidate region = 1, if 45° < θᵢ(X, Y) < 135°;  0, otherwise    (14)
Considering that θᵢ(X, Y) is the direction of the optical flow vector of the pixel at position (X, Y) of the i-th frame according to Equation (13), dx and dy are the motion flow vectors of the row and column, respectively. Among the values of the vectors obtained here, the area that moved from 45 degrees to 135 degrees, as shown in Equation (14), can be filtered. Through these pre-processing steps, a region with a high probability of containing smoke can be used as a candidate region, and predictions can be made through a deep-learning-based CNN.
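A compact sketch of the angle filter in Equations (13) and (14) is shown below. It is illustrative only: arctan2 is used instead of arctan to recover the full quadrant, and in a real pipeline the sign convention for dy (which points downward in image coordinates) would need to match the flow output.

```python
import numpy as np

def smoke_candidate_mask(dx, dy):
    """Equations (13)-(14): compute the flow angle of each pixel and
    keep those whose motion direction lies between 45 and 135 degrees."""
    theta = np.degrees(np.arctan2(dy, dx))   # per-pixel flow angle
    return (theta > 45) & (theta < 135)      # rising-smoke candidate region
```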
2.3. Inference Using Inception-V3
In order to detect fire with higher accuracy in the region of interest obtained during image pre-processing, in this study, a CNN was constructed as the last step toward finally determining whether a fire occurred. CNNs are used in similar computer vision studies, such as image classification, object detection and recognition, and image matching. In addition, from the simple neural networks of the past, complex and deep network-type models are now being developed.
When training through deep learning, it is common to obtain high precision when using deep layers and wide nodes. However, in this case, the number of parameters increases, the amount of computation increases considerably, and an over-fitting problem or a vanishing gradient problem occurs. Therefore, we made the connections between nodes sparse and the matrix operations dense. Reflecting this, the inception structure in Figure 7 makes the overall network deep but not difficult to operate.

Figure 7. Schematic diagram of Inception-V3.

Therefore, the Inception-v3 model has the advantage of having deeper layers than other CNN models without a relatively large number of parameters. Table 1 shows the configuration of the CNN layers constructed using Inception modules. The size of the input image was set to 299 × 299, and a reduction layer was added between the inception modules [26–28]. As with most CNNs, pooling layers are used, but the network is constructed to avoid the representational bottleneck problem. Finally, softmax was used as the activation function of the final layer, since the task is a classification problem over flame, smoke, and non-fire.
Table 1. Inception-V3 CNN parameter.

Layer Kernel Size Input Size
Convolution 3 × 3 299 × 299 × 3
Convolution 3 × 3 149 × 149 × 32
Convolution 3 × 3 147 × 147 × 32
Max pooling 3 × 3 147 × 147 × 64
Convolution 3 × 3 73 × 73 × 64
Convolution 3 × 3 73 × 73 × 80
Max pooling 3 × 3 71 × 71 × 192
Inception module - 35 × 35 × 192
Reduction - 35 × 35 × 228
Inception module - 17 × 17 × 768
Reduction - 17 × 17 × 768
Inception module - 8 × 8 × 1280
Average pooling - 8 × 8 × 2048
Fully connected - 1 × 2048
Softmax - 3
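The final softmax row of Table 1 maps the three output units to class probabilities. A minimal NumPy sketch of that step is shown below; the label order and the logit values in the usage note are hypothetical, not taken from the study.

```python
import numpy as np

LABELS = ["flame", "smoke", "non-fire"]  # hypothetical class ordering

def classify(logits):
    """Softmax over the three final-layer outputs of Table 1,
    returning the predicted label and the probability vector."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()                      # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return LABELS[int(p.argmax())], p
```

For example, classify([0.2, 3.0, 0.1]) yields the label "smoke" with probabilities that sum to one.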

In addition, the dataset used for training this CNN model is shown in Table 2. Here, the train dataset used for training and the test dataset for evaluating and reflecting the learning understanding of the intermediate model were divided in a ratio of about 8 to 2, and training was conducted. Since accuracy and loss did not change significantly, learning was ended at 5000 steps, where they converged. The train and test image datasets used were obtained from Kaggle and CVonline as public materials for use in research.
Table 2. Composition of datasets for fire detection.

Classes of Image Datasets
Flame Smoke Non-Fire
Train set 6200 6200 6200
Test set 1800 1800 1800

3. Experimental Results
Images of flame and smoke that may occur in fires were pre-processed using appearance characteristics and classified through an Inception-V3 model based on a CNN. Figure 8 visualizes the final detection of the flame region from the test images.

Figure 8. Flame detection results of input videos.

If the candidate region detected through pre-processing is judged to be a flame, it is visualized as a red bounding box, and if it is an object not related to fire, it is visualized as a green bounding box. In addition, Figure 9 visualizes the detection of smoke from the input images; similarly, a red bounding box indicates a region inferred to be smoke, while a green bounding box indicates an object unrelated to fire.
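The red/green color convention can be sketched with a few NumPy slice assignments. This is illustrative only; in practice this is a single cv2.rectangle call, and RGB channel order is assumed here.

```python
import numpy as np

RED, GREEN = (255, 0, 0), (0, 255, 0)   # RGB order assumed

def draw_box(img, x0, y0, x1, y1, is_fire):
    """Draw a red box for fire-related detections and a green box
    otherwise, mirroring the visualization in Figures 8 and 9."""
    color = RED if is_fire else GREEN
    img[y0, x0:x1 + 1] = color   # top edge
    img[y1, x0:x1 + 1] = color   # bottom edge
    img[y0:y1 + 1, x0] = color   # left edge
    img[y0:y1 + 1, x1] = color   # right edge
    return img
```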
Accuracy, precision, recall, and F1 score were calculated to determine the objective
performance of the experimental results obtained through this study, where TP is the
number of true positives, FP the number of false positives, FN the number of false negatives
and TN the number of true negatives. The relationships among them are listed as shown
below. First, accuracy and precision were obtained via Equations (15) and (16), and recall
was obtained using Equation (17), while F1 score, which is the harmonic mean of the
precision and detection rate, was obtained using Equation (18).

Accuracy = (TP + TN)/(TP + TN + FP + FN)    (15)

Precision = TP/(TP + FP)    (16)

Recall = TP/(TP + FN)    (17)

F1 Score = 2 × (Precision × Recall)/(Precision + Recall)    (18)
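Equations (15)–(18) can be computed directly from confusion-matrix counts. The sketch below uses hypothetical counts chosen only to illustrate the arithmetic, not the study's actual tallies.

```python
def detection_metrics(tp, tn, fp, fn):
    """Equations (15)-(18): accuracy, precision, recall, and F1 score
    computed from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1
```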

Figure 9. Smoke detection results of input videos.

The performance evaluation was conducted using five videos featuring flames, five videos with smoke, and five videos not related to fire. When carrying out the detection method using optical flow, the performance should be evaluated through continuous images, that is, by using videos rather than a single still image. Therefore, 50 frames in which inference was performed on the object were extracted from each test video, and the result was calculated. In addition, a performance evaluation was performed in the same way on the flame and non-fire videos. Moreover, to compare the results with the model presented in this study, the deep-learning-based object detection models Single Shot Multibox Detector (SSD) [29] and Faster R-CNN (Region Proposal Convolutional Neural Network) [30] were used.

The flame detection results are shown in Table 3, and for the model presented in this study, the accuracy was 96.0%, precision was 94.2%, and recall and F1 score were 98.0% and 96.1%, respectively. In the case of SSD, the accuracy was 89.0% and that of Faster R-CNN was 92.0%. The accuracy of the flame detection algorithm presented in this study was relatively high. The frequency of false positives, in which non-flame objects are mis-detected as flames, was decreased by more than 10% compared to other studies, which greatly affected the overall increase in precision.

Table 3. Evaluation of flame detection results from each model.

Evaluation Indicator
Accuracy Precision Recall F1 Score
Our proposed model 96.0 94.2 98.0 96.1
SSD 89.0 86.8 92.0 89.3
Faster R-CNN 92.0 88.9 96.0 92.3
The smoke detection results were similar, and in the model presented in this study, the accuracy was 93.0%, precision was 93.9%, and the detection rate and F1 score were 92.0% and 92.9%, respectively. In the case of SSD, the accuracy was 85.0% and that of Faster R-CNN was 89.0%, as shown in Table 4.

Table 4. Evaluation of smoke detection results from each model.

Evaluation Indicator
Accuracy Precision Recall F1 Score
Our proposed model 93.0 93.9 92.0 92.9
SSD 85.0 84.3 86.0 85.1
Faster R-CNN 89.0 89.8 88.0 88.9

4. Conclusions
In this study, an appropriate pre-processing method was presented to detect both flames and smoke that may occur during a fire in its early stages. To this end, color-based and optical flow methods were used, and in order to make a precise judgment on the detected candidate region, inferences were made using a deep-learning-based CNN. Through this approach, it was possible to reduce false detections due to unnecessary background regions while improving accuracy when detecting fire. Our tests of the proposed flame detection method found that the accuracy was improved by 5.5% compared to object detection models without separate pre-processing. For the smoke detection method proposed in this study, dark channel features and optical flow were utilized, and accuracy was improved by 6% compared to other object detection models. In future studies, a CNN that can accurately detect objects with irregular shapes, such as flames and smoke, will be developed or improved for future applications. In addition, research will pursue the development of an intelligent fire detector that can be applied to low-specification systems and that can easily perform real-time detection by supplementing the pre-processing methods. We will also study a method that can accurately detect fire even from small characteristics in images and develop a fire detection model with higher reliability than can be achieved by human observation.

Author Contributions: Conceptualization, J.R. and D.K.; Methodology, J.R.; Software, J.R.; Supervi-
sion, D.K.; Validation, D.K.; Writing—original draft preparation, J.R.; Writing—review and editing,
D.K. All authors have read and agreed to the published version of the manuscript.
Funding: This research was supported by a grant (20010162) of Regional Customized Disaster-Safety
R&D Program funded by Ministry of Interior and Safety (MOIS, Korea).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Dubinin, D.; Cherkashyn, O.; Maksymov, A.; Beliuchenko, D.; Hovalenkov, S.; Shevchenko, S.; Avetisyan, V. Investigation of the
effect of carbon monoxide on people in case of fire in a building. Sigurnost 2020, 62, 347–357. [CrossRef]
2. Hadano, H.; Nagawa, Y.; Doi, T.; Mizuno, M. Study of effectiveness of CO and Smoke Alarm in smoldering fire. ECS Trans. 2020,
98, 75–79. [CrossRef]
3. Gałaj, J.; Saleta, D. Impact of apartment tightness on the concentrations of toxic gases emitted during a fire. Sustainability 2019,
12, 223. [CrossRef]
4. Shen, D.; Chen, X.; Nguyen, M.; Yan, W. Flame detection using deep learning. In Proceedings of the 2018 4th International
Conference on Control, Automation and Robotics (ICCAR), Auckland, New Zealand, 20–23 April 2018; pp. 416–420.
5. Muhammad, K.; Khan, S.; Elhoseny, M.; Ahmed, S.; Baik, S. Efficient Fire Detection for Uncertain Surveillance Environment. IEEE
Trans. Ind. Inform. 2019, 15, 3113–3122. [CrossRef]
6. Nguyen, A.Q.; Nguyen, H.T.; Tran, V.C.; Pham, H.X.; Pestana, J. A visual real-time fire detection using single shot MultiBox
detector for UAV-based fire surveillance. In Proceedings of the 2020 IEEE Eighth International Conference on Communications
and Electronics (ICCE), Phu Quoc Island, Vietnam, 13–15 January 2021.
7. Jeon, M.; Choi, H.-S.; Lee, J.; Kang, M. Multi-scale prediction for fire detection using Convolutional Neural Network. Fire Technol.
2021, 57, 2533–2551. [CrossRef]
Fire 2022, 5, 108 12 of 12

8. Lai, T.Y.; Kuo, J.Y.; Fanjiang, Y.-Y.; Ma, S.-P.; Liao, Y.H. Robust little flame detection on real-time video surveillance system. In
Proceedings of the 2012 Third International Conference on Innovations in Bio-Inspired Computing and Applications, Kaohsiung,
Taiwan, 26–28 September 2012.
9. Ryu, J.; Kwak, D. Flame detection using appearance-based pre-processing and Convolutional Neural Network. Appl. Sci. 2021,
11, 5138. [CrossRef]
10. Kaiming, H.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel prior. In Proceedings of the 2009 IEEE Conference
on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009.
11. Kwak, D.-K.; Ryu, J.-K. A study on the dynamic image-based dark channel prior and smoke detection using Deep Learning. J.
Electr. Eng. Technol. 2021, 17, 581–589. [CrossRef]
12. Kang, H.-C.; Han, H.-N.; Bae, H.-C.; Kim, M.-G.; Son, J.-Y.; Kim, Y.-K. HSV color-space-based automated object localization for
robot grasping without prior knowledge. Appl. Sci. 2021, 11, 7593. [CrossRef]
13. Chen, W.; Chen, S.; Guo, H.; Ni, X. Welding flame detection based on color recognition and progressive probabilistic Hough
Transform. Concurr. Comput. Pract. Exp. 2020, 32, e5815. [CrossRef]
14. Gao, Q.-J.; Xu, P.; Yang, L. Breakage detection for grid images based on improved Harris Corner. J. Comput. Appl. 2013,
32, 766–769. [CrossRef]
15. Chen, L.; Lu, W.; Ni, J.; Sun, W.; Huang, J. Region duplication detection based on Harris Corner Points and step sector statistics. J.
Vis. Commun. Image Represent. 2013, 24, 244–254. [CrossRef]
16. Sánchez, J.; Monzón, N.; Salgado, A. An analysis and implementation of the Harris Corner Detector. Image Process. Line 2018,
8, 305–328. [CrossRef]
17. Semma, A.; Hannad, Y.; Siddiqi, I.; Djeddi, C.; El Youssfi El Kettani, M. Writer identification using deep learning with fast
keypoints and Harris Corner Detector. Expert Syst. Appl. 2021, 184, 115473. [CrossRef]
18. Appana, D.K.; Islam, R.; Khan, S.A.; Kim, J.-M. A video-based smoke detection using smoke flow pattern and spatial-temporal
energy analyses for alarm systems. Inf. Sci. 2017, 418–419, 91–101. [CrossRef]
19. Wang, Y.; Wu, A.; Zhang, J.; Zhao, M.; Li, W.; Dong, N. Fire smoke detection based on texture features and optical flow vector
of contour. In Proceedings of the 2016 12th World Congress on Intelligent Control and Automation (WCICA), Guilin, China,
12–15 June 2016.
20. Bilyaz, S.; Buffington, T.; Ezekoye, O.A. The effect of fire location and the reverse stack on fire smoke transport in high-rise
buildings. Fire Saf. J. 2021, 126, 103446. [CrossRef]
21. Plyer, A.; Le Besnerais, G.; Champagnat, F. Massively parallel lucas kanade optical flow for real-time video processing applications.
J. Real-Time Image Process. 2014, 11, 713–730. [CrossRef]
22. Sharmin, N.; Brad, R. Optimal filter estimation for Lucas-Kanade Optical Flow. Sensors 2012, 12, 12694–12709. [CrossRef]
23. Liu, Y.; Xi, D.-G.; Li, Z.-L.; Hong, Y. A new methodology for pixel-quantitative precipitation nowcasting using a pyramid Lucas
Kanade Optical Flow Approach. J. Hydrol. 2015, 529, 354–364. [CrossRef]
24. Douini, Y.; Riffi, J.; Mahraz, M.A.; Tairi, H. Solving sub-pixel image registration problems using phase correlation and Lucas-
Kanade Optical Flow Method. In Proceedings of the 2017 Intelligent Systems and Computer Vision (ISCV), Fez, Morocco,
17–19 April 2017.
25. Hambali, R.; Legono, D.; Jayadi, R. The application of pyramid Lucas-Kanade Optical Flow Method for tracking rain motion
using high-resolution radar images. J. Teknol. 2020, 83, 105–115. [CrossRef]
26. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings
of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
27. Demir, A.; Yilmaz, F.; Kose, O. Early detection of skin cancer using deep learning architectures: Resnet-101 and inception-V3. In
Proceedings of the 2019 Medical Technologies Congress (TIPTEKNO), Izmir, Turkey, 3–5 October 2019.
28. Kristiani, E.; Yang, C.-T.; Huang, C.-Y. ISEC: An optimized deep learning model for image classification on Edge Computing.
IEEE Access 2020, 8, 27267–27276. [CrossRef]
29. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single shot multibox detector. In Computer
Vision—ECCV 2016; Springer: Cham, Switzerland, 2016; pp. 21–37.
30. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans.
Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [CrossRef] [PubMed]
