Intelligent Fire Monitor for Fire Robot Based on Infrared Image Feedback Control
Jinsong Zhu, Wei Li*, Da Lin, Hengyu Cheng and Ge Zhao, School of
Mechatronic Engineering, China University of Mining and Technology,
Xuzhou 221116, People’s Republic of China
Abstract. This paper proposes an infrared image-based feedback control system for an intelligent fire monitor, which aims to realize automatic aiming at the fire site and continuous fire tracking during extinguishing by adjusting the yaw angle of the fire monitor. First, an infrared camera is used to capture images of the fire site, in which the high-temperature fire is easily observed. An improved adaptive image threshold segmentation scheme is developed for background segmentation and fire acquisition. The fire region in the captured image is taken as the region of interest (ROI), and its geometric center is calculated. Owing to the advantageous structural design of the proposed system, the geometric center of the ROI, representing the actual fire, indicates the direction at which the fire monitor should aim. Furthermore, a fuzzy control scheme simulating the operating mode of firefighters is proposed, with the ROI center position deviation and deviation rate as the inputs of the control system. Experimental results show that, after the fire is detected by the system, the fire monitor can be yawed to the direction of the fire within 2000 ms, with an image position deviation of less than 1.6 pixels. The proposed system also adapts well to changes in fire position caused by combustion: within no more than three frames of image iteration, the controller aims the fire monitor at the fire, with a final deviation of less than 0.1 pixels. In general, the proposed infrared image feedback control system for horizontal fire aiming of an intelligent fire monitor has great potential for intelligent fire extinguishing by firefighting robots.
Keywords: Infrared image, Adaptive threshold segmentation, Servo control, Fire monitor, Fire robots
1. Introduction
Firefighting operations are dangerous, especially when the fire site is full of smoke or other toxic gases [1]. The fire accident in Tianjin, China, in August 2015 resulted in the deaths of 99 firefighters [2]. The accident investigation showed that the explosion of a dangerous-goods warehouse was the main cause of the large number of deaths during the fire extinguishing process. Fire robots have long been considered an alternative to firefighters entering the fireground to perform fire-fighting
Fire Technology 2020
operations. In recent years, this domain has attracted increasing attention. In China, national-level scientific research projects pay particular attention to firefighting and rescue equipment. An intelligent fire robot usually needs to detect and locate the fire after approaching it, and then to spray water accurately onto the fire site. Furthermore, the fire robot is required to track the fire trend and adjust the spray angle of the fire monitor in time according to changes in the fire position during extinguishing. Equipment with a similar work process is already widely used in the fire protection of large-space buildings, where it is called an automatic fire suppression system. Typically, such a system is installed in the ceiling of a building, and fire protection for the large space is achieved by a number of such devices. However, after a fire robot enters the fire region, the action of its fire monitor still relies on remote operation by a firefighter during the fire-fighting process. Figure 1 shows a wheel-type fire-fighting robot manufactured by a Chinese company, on which infrared and vision cameras are mounted. The fire images acquired by the cameras are merely used to help the firefighter assess the fire situation, while the action of the fire monitor still depends on manual operation. Manual remote control usually cannot achieve fast fire-fighting operations, and it places high demands on firefighters' professional skills.
Many types of fire robots have been proposed and used in indoor and outdoor fire environments. Su [3] designed a fire-fighting robot with one extinguisher and three flame sensors for intelligent buildings, and proposed an adaptive fusion method for fire detection. Kim et al. [4] proposed a fire detection algorithm for a smart fire robot based on a long-wave infrared camera, an ultraviolet radiation sensor, and laser radar. Experimental results show that the robot could immediately calculate its path towards the fire and return to its starting place once the fire was out. Rangan et al. [5] developed a fire-fighting robot with a modular design concept covering fire detection, path directing, and extinguishing operations. A computer-vision-based fire detection algorithm was used to extract the non-static characteristics of fire, while temperature and UV sensors, together with depth mapping, confirmed the presence of fire.
Nowadays, firefighting robots are primarily used to assist firefighters in fire positioning or fire-fighting. Many intelligent fire robots and fire-fighting solutions based on multi-sensor fusion have been proposed, but most remain at the conceptual stage [6–8]. For instance, China CITIC Group [9] designed a small crawler fire robot that can accurately detect fire temperature and dangerous objects through temperature and vision sensors. Prabha et al. [10] proposed a dual-robot cooperative fire-fighting scheme, in which sensor information is shared among the robots and the fire-fighting operation is performed by the robot closest to the fire. Gao et al. [11] developed a fire robot based on vision and infrared sensors, which carries the water nozzle on a mechanical arm; its control system realizes the detection, tracking, and extinguishing of fire. A fire-fighting robot based on vision and ultrasonic sensors was proposed by Siregar et al. [12]; the vision sensor was used for fire detection, and the ultrasonic sensor kept the robot from colliding with obstacles. Moreover, other fire-fighting robot projects that have entered the research phase have been proposed, based on infrared stereo vision, laser radar, monocular vision, and so on.
McNeil [13] used a pair of infrared cameras to detect and locate the fire, and designed a feedback controller to adjust the yaw and pitch angles of the nozzle. Experimental results show that the scheme works well for fire extinguishing in close, small scenes. Furthermore, McNeil [14, 15] combined infrared stereo vision with an ultraviolet sensor for fire extinguishing by an automatic fire suppression system under low visibility, realizing servo control of the nozzle through segmentation and positioning of fire targets. Similarly, Kim [16, 17] added laser radar to infrared stereo vision: the infrared stereo vision mainly provides three-dimensional information about the fire, while the laser radar provides more accurate distances to objects in the field of view. Research shows that this scheme can realize the positioning and tracking of fire targets based on a scene matching algorithm, and adapts well to both smoky and smoke-free scenes. Similarly, Starr [18] applied the fused data of a 3D laser and a long-wave infrared stereo vision system to ranging operations in a fire environment. The fused sensors produced satisfactory measurements under different concentrations of smoke and can provide more accurate localization for fire-fighting robots entering the fire environment. In general, the intelligent operation of a fire-fighting robot can be divided into several processes: fire detection, fire positioning, fire aiming, and tracking fire extinguishing.
The focus of this paper is to realize horizontal aiming and continuous tracking of the fire site by the fire monitor, based on infrared images, during the fire-fighting process. The main contributions are as follows. First, owing to the unique system structure, the horizontal geometric center of the infrared image represents the jet direction of the jet trajectory. Second, the improved maximum-entropy image segmentation algorithm achieves accurate extraction of the fire ROI, and comparison with other segmentation methods further demonstrates the superiority of the proposed method. Finally, the improved fuzzy control rule simulates the manual operation of a professional firefighter and is therefore closer to the actual fire suppression process.
The paper is organized as follows: Sect. 2 introduces the hardware structure of the intelligent fire monitor system and explains the basic flow of the control scheme. Section 3 describes the improved adaptive threshold segmentation method for extracting fire regions from infrared images, together with some segmentation results. The fuzzy controller and its simulation results are given in Sect. 4. Section 5 describes the experiment and analyzes the experimental results. Finally, conclusions are drawn.
2. System Architecture
The infrared image vision system for the intelligent fire monitor proposed in this paper includes a MAGNITY14min long-wave infrared camera, an electric fire monitor, and a visual servo controller.
A visible-light camera is also mounted, which is used by the fire monitor to detect the jet trajectory and thereby adjust the range of the falling points; this function is beyond the scope of this article (dotted boxes in Figs. 2 and 3, respectively). In the intelligent fire monitor system we designed, the optical axis direction of the infrared camera is consistent with the head of the fire monitor. Owing to this special structural design, the horizontal geometric center of the infrared image represents the jet direction of the jet trajectory. Therefore, to aim the jet trajectory at the fire, it is only necessary that the fire location in the IR image coincide with the horizontal geometric center of the image. The two cameras are mounted on the fire monitor through a specific bracket. The yaw and pitch motors included in the fire monitor adjust the yaw and pitch angles of the nozzle, respectively. In this paper, the infrared visual servo system is designed to control the yaw motor so that the fire monitor aims horizontally at the fire. In particular, the mounting point of the bracket is between the yaw and pitch motors of the fire monitor, so the bracket and the two cameras follow the horizontal rotation of the fire monitor but not its pitch rotation.
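Given this alignment, the quantity the yaw controller must drive to zero is simply the horizontal offset between the fire ROI center and the image's horizontal geometric center. A minimal sketch (the function name is an assumption; the 160-pixel width matches the camera resolution stated later in the paper):

```python
# Sketch of the horizontal aiming deviation implied by the structure above:
# the yaw controller only needs the offset between the fire ROI center
# abscissa and the image's horizontal center. Name and default width are
# illustrative assumptions, not the paper's implementation.
def horizontal_deviation(roi_center_x, image_width=160):
    """Positive: fire lies right of the jet direction; negative: left."""
    return roi_center_x - image_width / 2.0
```

For a fire center abscissa of 63 in a 160-pixel-wide image, this gives an offset of magnitude 17 pixels, matching the initial deviation reported in Sect. 4.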
$$g(x,y)=\frac{1}{n^{2}}\sum_{i=-\lfloor n/2\rfloor}^{\lfloor n/2\rfloor}\ \sum_{j=-\lfloor n/2\rfloor}^{\lfloor n/2\rfloor} f(x+i,\;y+j) \qquad (1)$$
where $\lfloor n/2\rfloor$ denotes the integer part of n/2, and n is the size of the neighborhood window.
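Eq. (1) is an ordinary box-filter mean. A pure-Python sketch, assuming an odd window size n and clamped image borders (the border policy is not specified in the paper):

```python
# Sketch of Eq. (1): neighborhood gray-level mean of an image over an
# n x n window (n odd). Border pixels are handled by clamping the window
# to the image, which is an assumption of this sketch.
def neighborhood_mean(img, n=3):
    """img: 2-D list of gray values; returns g(x, y) per Eq. (1)."""
    h, w = len(img), len(img[0])
    half = n // 2  # the floor(n/2) summation limit in Eq. (1)
    g = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for j in range(-half, half + 1):
                for i in range(-half, half + 1):
                    yy = min(max(y + j, 0), h - 1)  # clamp at borders
                    xx = min(max(x + i, 0), w - 1)
                    total += img[yy][xx]
            g[y][x] = total / (n * n)
    return g
```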
Thus, a new image F(x, y) is constructed from the original pixel gray value f(x, y) and its neighborhood gray-level mean g(x, y). A two-dimensional threshold is used for fire segmentation of the new image F(x, y), expressed as:
$$F_{s,t}(x,y)=\begin{cases}b_{0}, & f(x,y)\le s\ \text{and}\ g(x,y)\le t\\ b_{1}, & f(x,y)> s\ \text{and}\ g(x,y)> t\end{cases} \qquad (2)$$

where $0\le b_{0},\,s,\,t,\,b_{1}\le L-1$. For the image f(x, y), let $P_{i,j}$ denote the probability that a pixel gray level i occurs together with a neighborhood gray mean j, counted over the image F(i, j). The two-dimensional histogram of the image is then represented as:
$$P_{i,j}=\frac{F_{s,t}(i,j)}{W\times H} \qquad (3)$$

where W and H denote the width and height of the image.
Within the fire region and within the background region, the gray levels of pixels are evenly distributed, so the gray value of a pixel and its neighborhood gray mean are very close. Conversely, in the boundary regions between target and background, the gray level of a pixel and its neighborhood gray mean differ. This causes the two-dimensional histogram of the image to exhibit bimodal characteristics. The discrete two-dimensional entropy of the image is expressed as:
$$H=-\sum_{i}\sum_{j}P_{i,j}\,\lg P_{i,j} \qquad (4)$$
Furthermore, the entropy sum of the target and background in the image is
expressed as:
$$\phi(s,t)=-\left\{\sum_{i=0}^{s-1}\sum_{j=0}^{t-1}P_{i,j}\,\lg P_{i,j}+\sum_{i=s}^{L-1}\sum_{j=t}^{L-1}P_{i,j}\,\lg P_{i,j}\right\} \qquad (5)$$
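The threshold search implied by Eqs. (2)–(5) can be sketched in pure Python. Note that Eq. (5) appears above in simplified form; the sketch below uses the common normalized variant of the two-dimensional maximum-entropy criterion, in which each class's probabilities are renormalized by the class mass before taking the entropy. All function and variable names are illustrative assumptions, not the paper's implementation:

```python
import math

# Sketch of 2-D maximum-entropy threshold selection in the spirit of
# Eqs. (2)-(5), using the normalized-entropy variant (an assumption).
def max_entropy_threshold_2d(f, g, L=256):
    """f: 2-D list of pixel grays, g: matching neighborhood means.
    Returns the threshold pair (s, t) maximizing the summed class entropy."""
    h, w = len(f), len(f[0])
    # Eq. (3): joint histogram of (pixel gray, neighborhood mean)
    P = [[0.0] * L for _ in range(L)]
    for y in range(h):
        for x in range(w):
            P[int(f[y][x])][int(round(g[y][x]))] += 1.0 / (w * h)
    # 2-D prefix sums of P and of -P*lg(P), so each candidate (s, t) is O(1)
    S = [[0.0] * (L + 1) for _ in range(L + 1)]
    E = [[0.0] * (L + 1) for _ in range(L + 1)]
    for i in range(L):
        for j in range(L):
            p = P[i][j]
            e = -p * math.log10(p) if p > 0.0 else 0.0
            S[i + 1][j + 1] = p + S[i][j + 1] + S[i + 1][j] - S[i][j]
            E[i + 1][j + 1] = e + E[i][j + 1] + E[i + 1][j] - E[i][j]

    def block(M, i0, i1, j0, j1):
        # sum over histogram rows i0..i1-1 and columns j0..j1-1
        return M[i1][j1] - M[i0][j1] - M[i1][j0] + M[i0][j0]

    best, best_st = -1.0, (1, 1)
    for s in range(1, L):
        for t in range(1, L):
            pa = block(S, 0, s, 0, t)   # background mass (Eq. 2: <= s, <= t)
            pb = block(S, s, L, t, L)   # fire-region mass (Eq. 2: > s, > t)
            if pa <= 0.0 or pb <= 0.0:
                continue
            # normalized class entropies: H'_A/P_A + lg P_A, likewise for B
            phi = (block(E, 0, s, 0, t) / pa + math.log10(pa)
                   + block(E, s, L, t, L) / pb + math.log10(pb))
            if phi > best:
                best, best_st = phi, (s, t)
    return best_st
```

On a synthetic bimodal image (cool background near gray 10, hot fire region near 200), the returned pair (s, t) falls between the two modes, which is the behavior the fire segmentation relies on.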
The fire center point position is (63, 90) based on the segmentation result. In our work, the abscissa of the geometric center of the fire region in the image is used as the horizontal aiming direction of the fire monitor.
In contrast, when the traditional one-dimensional maximum-entropy segmentation method is applied to the fire image in Fig. 4a, the results are as shown in Fig. 4d. It is not difficult to see that false detections remain in the segmented image. The main cause of the detection deviation is interference from other high-temperature regions: a one-dimensional entropy based only on single-pixel gray values cannot reflect the difference between target and background. Furthermore, several commonly used adaptive threshold segmentation methods were applied to Fig. 4a, with the results shown in Fig. 5. The segmentation result of the Yen method is similar to that of the one-dimensional maximum-entropy method, but still not accurate; its calculated segmentation threshold is 176. The results of the average-grayscale, Otsu, percentage-threshold, threshold-iteration, and bimodal-threshold methods are very poor, with matched segmentation thresholds of 103, 87, 88, 87, and 67, respectively. These thresholds are too small, so a large number of false detections occur.
It can be seen that images acquired by the infrared camera are usually affected by smoke and other high-temperature regions, resulting in a low signal-to-noise ratio, so it is difficult to segment the fire accurately with conventional methods. Fortunately, the two-dimensional maximum-entropy method, which considers both the gray value of each pixel and its neighborhood gray mean, obtains satisfactory fire detection results. Experimental results show that the method is well suited to fire detection and extraction of the geometric center position.
where a and b represent the lengths of the triangle base, and c represents the height. Based on the operating mode of firefighters, the fuzzy control rules of the controller are designed as shown in Table 1. The rule table is stored in the controller, and the controller output is obtained by looking up the table according to the deviation e and the deviation rate ec. The weighted-average method is used to defuzzify the controller output from the rule table, expressed as:
$$u=\frac{\sum_{i=1}^{n}\mu_{A_{i}}(x)\,\mu_{B_{i}}(y)\,Z_{i}}{\sum_{i=1}^{n}\mu_{A_{i}}(x)\,\mu_{B_{i}}(y)} \qquad (7)$$
where u represents the output of the controller, and Zi represents the abscissa of
the triangle membership function corresponding to u. In our work, the maximum
resolution of the fire image captured by the infrared camera is 160 × 120. Therefore, the maximum deviation e and deviation change rate ec are set to 80 and 160, respectively.

Table 1
Fuzzy Controller Rule Table (cell entries are the output U)

E \ EC   NB   NM   NS   Z    PS   PM   PB
NB       NB   NB   NB   NB   NM   Z    Z
NM       NB   NB   NB   NB   NM   Z    Z
NS       NM   NM   NM   NM   Z    PS   PS
Z        NM   NM   NS   Z    PS   PM   PM
PS       NS   NS   Z    PM   PM   PM   PM
PM       Z    Z    PM   PB   PB   PB   PB
PB       Z    Z    PM   PB   PB   PB   PB

The maximum rotation time of the selected firefighting yaw motor in the
view field of the infrared camera is taken as the output maximum of the con-
troller, which is 10,000 ms. Taking the fire position deviation in Fig. 4f as an
example, the simulation results of the developed controller are shown in Fig. 6.
As can be seen from Fig. 6, the initial deviation is 17 pixels and is reduced to 8.1 pixels after one action of the controller. After 5 actions, the deviation is reduced to less than 4 pixels, and after 11 actions, to less than 2 pixels. The results show that the controller responds quickly to the fire position deviation, ensuring that the fire robot can quickly adjust the yaw angle of the fire monitor so that the sprayed water aims at the fire position. Furthermore, the controller's steady-state error is 0.2 pixels, which is below the pixel resolution of the infrared camera and meets the precision requirements of the fire monitor for fire aiming.
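The controller described above can be sketched end to end: triangular membership functions, the Table 1 rule base, and the weighted-average defuzzification of Eq. (7). The evenly spaced membership breakpoints and all names are assumptions of this sketch; the paper does not give its exact partitioning of e, ec, and u:

```python
# Sketch of the anthropomorphic fuzzy controller: triangular memberships,
# the Table 1 rule base, and Eq. (7) defuzzification. Breakpoint spacing
# and all names are illustrative assumptions.
TERMS = ["NB", "NM", "NS", "Z", "PS", "PM", "PB"]

RULES = {  # RULES[e_term][ec_index] -> output term, transcribed from Table 1
    "NB": ["NB", "NB", "NB", "NB", "NM", "Z", "Z"],
    "NM": ["NB", "NB", "NB", "NB", "NM", "Z", "Z"],
    "NS": ["NM", "NM", "NM", "NM", "Z", "PS", "PS"],
    "Z":  ["NM", "NM", "NS", "Z", "PS", "PM", "PM"],
    "PS": ["NS", "NS", "Z", "PM", "PM", "PM", "PM"],
    "PM": ["Z", "Z", "PM", "PB", "PB", "PB", "PB"],
    "PB": ["Z", "Z", "PM", "PB", "PB", "PB", "PB"],
}

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def memberships(x, span):
    """Degree of x in each of the 7 terms; peaks evenly spaced on [-span, span]."""
    step = span / 3.0
    return {term: tri(x, -span + k * step - step, -span + k * step,
                      -span + k * step + step)
            for k, term in enumerate(TERMS)}

def fuzzy_output(e, ec, e_span=80.0, ec_span=160.0, u_span=10000.0):
    """Weighted-average defuzzification per Eq. (7); u in milliseconds."""
    mu_e, mu_ec = memberships(e, e_span), memberships(ec, ec_span)
    step = u_span / 3.0
    num = den = 0.0
    for te in TERMS:
        for j, tec in enumerate(TERMS):
            w = mu_e[te] * mu_ec[tec]  # rule firing strength
            if w > 0.0:
                z = -u_span + TERMS.index(RULES[te][j]) * step  # output peak Z_i
                num += w * z
                den += w
    return num / den if den else 0.0
```

With zero deviation and zero deviation rate the output is zero, and with the deviation saturated at 80 pixels (and ec at 160) the output saturates at the 10,000 ms motor maximum, as the rule table intends.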
position is 61 pixels. Figure 10 shows the response curve of the controller after the first fire frame is acquired; the controller performs 5 actions within the 2000 ms response time. The controller contains an inactive region, since the DC motor is insensitive to operating times of less than 40 ms. After five actions of the controller, the fire position deviation was reduced to 0.6 pixels. Figure 9c shows the second fire frame collected by the infrared camera. The adjusted fire horizontal center pixel is 81, and the final deviation is 1.6 pixels. Because the fire changes constantly during combustion, such error results are satisfactory. In general, the horizontal deviation of the fire position in the infrared image is small when the fire monitor is used in a fire-fighting mission, since the firefighter roughly knows the position and direction of the fire. The experimental results therefore show that the proposed control system can quickly adjust the fire monitor nozzle towards the fire after detection, with a small error.
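The inactive region mentioned above can be sketched as a simple dead zone on the controller output; the 40 ms threshold follows the text, while the function name is an assumption:

```python
# Sketch of the controller's inactive region: the yaw DC motor does not
# respond to drive pulses shorter than 40 ms, so such commands are
# suppressed rather than sent. Threshold from the text; name assumed.
def apply_dead_zone(u_ms, threshold_ms=40.0):
    """Return 0 for commands the motor cannot execute, else pass through."""
    return 0.0 if abs(u_ms) < threshold_ms else u_ms
```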
Figures 11 and 12 show the fire images and controller response curves for the second part of the experiment. Figure 11a–c are three consecutive fire frames acquired by the proposed system, with the fire position on the left of the infrared image's vertical center line. The fire position was artificially moved in the field of view: the fire shown in Fig. 11b was moved vertically closer to the fire monitor relative to Fig. 11a, and the fire shown in Fig. 11c was moved horizontally away from the fire monitor relative to Fig. 11b. The bottom left of Fig. 12 shows the response curve of the proposed controller after obtaining the three consecutive frames. Although the fire location was artificially changed, the controller's position adjustment output for the fire monitor remains reliable. After the response to the third frame ends, the remaining fire deviation is 0.1 pixels. This shows that the control system is very robust and can resist the interference caused by changes in the fire position.
A similar operation was performed with the fire located on the right side of the infrared image's vertical center line. The fire shown in Fig. 11e was moved horizontally away from the fire monitor relative to Fig. 11d, and the fire shown in Fig. 11f was moved vertically closer to the fire monitor relative to Fig. 11e. It is not difficult to see that fire deviations on the right side of the infrared image field are also corrected in time. After the response to the third frame ends, the fire deviation is 0.1 pixels. For the above experiments, the output response of the anthropomorphic fuzzy controller to each frame is recorded in detail in Table 2.

Table 2
Output Response of the Anthropomorphic Fuzzy Controller with Each Frame Image
It can be seen from Table 2 that, when the fire position deviation is large, the controller cannot reduce the deviation to a satisfactory level within the 2000 ms time interval. In Fig. 11a, d, e, the horizontal deviation exceeds 35 pixels, and the remaining deviation after one cycle exceeds 15 pixels. Because the state changes dramatically as the fire burns, the sampling frequency of the controller is set to 0.5 Hz. This long sampling interval may make the controller output inaccurate, and may even let the fire target escape from the view field of the infrared camera and be missed. Nevertheless, under manual interference, the proposed controller's output responses to the images in each of the two groups achieved good deviation correction. This means that the proposed controller can accurately aim the fire monitor nozzle at the fire position within 6000 ms, with a deviation of no more than 0.1 pixels. In particular, the maximum fire position deviation in the experiment was set to 50 pixels in order to verify controller performance. In practice, fire monitors would only encounter such large positional deviations at the beginning of a fire-fighting operation; during burning, it is almost impossible for the fire position deviation to exceed 60% of the camera view field in a short time.
6. Conclusion
In this paper, an infrared image feedback control system for fire aiming by a fire monitor was proposed and experimentally verified. First, the infrared camera is installed vertically above the fire monitor, so that the center position of the fire region in the infrared image can be used as the horizontal aiming target of the fire monitor. Then, an improved maximum-entropy adaptive fire segmentation algorithm is proposed and applied to determine the fire position in infrared images under complex fire environments. Finally, a fuzzy controller that simulates firefighters' professional operations is built and verified; the experimental results and action curves confirm the performance of the controller based on the anthropomorphic fuzzy rules.
After acquiring the fire target, the proposed controller enables the fire monitor to aim accurately at the fire in less than 6000 ms. Furthermore, even if the fire position changes greatly during the fire-fighting operation, the proposed controller can still track and aim at the fire. In future work, we will extend the system to match the jet trajectory range of the fire monitor to the fire distance, and improve the structural design of the intelligent fire monitor so that it can be mounted on a fire robot for fire-fighting operations in real fire environments.
Acknowledgements
This work is supported by National Key R&D Program of China
(2016YFC0802900) and Xu-gong Construction Machinery Group (XCMG)
Research Institute and a Project Funded by the Priority Academic Program
Development of Jiangsu Higher Education Institutions, Top-notch Academic Pro-
grams Project of Jiangsu Higher Education Institutions.
References
1. China Fire Protection Association (2018) Statistical report of the 17th international
fire equipment and technology exchange exhibition. Fire Sci Technol 15:23–34
2. Min X (2015) On-the-spot characteristics of extra serious explosion accident in Tianjin
port on August 12. City Disaster Reduct 5:9–12
3. Su KL (2006) Automatic fire detection system using adaptive fusion algorithm for fire fighting robot. In: IEEE international conference on systems
4. Kim JH, Keller B, Lattimer BY (2013) Sensor fusion based seek-and-find fire algorithm
for intelligent firefighting robot. In: IEEE/ASME international conference on advanced
intelligent mechatronics. https://doi.org/10.1109/AIM.2013.6584304
5. Sandeep GSP (2014) A computer vision based approach for detection of fire and direc-
tion control for enhanced operation of fire fighting robot. In: International conference
on control. https://doi.org/10.1109/CARE.2013.6733740
6. Bose J, Mehrez SC (2017) Development and designing of fire fighter robotics using
cyber security. In: International conference on anti-cyber crimes, pp 118–122. https://doi.org/10.1109/Anti-Cybercrime.2017.7905275
7. Qian J, Zeng JJ, Yang RQ, Weng XH (2006) A fire scout robot with accurate returning
for urban environment. Robotica 25(3):351–358. https://doi.org/10.1017/
S0263574706003146
8. Cai L, Zhang R (2013) Design and research of intelligent fire-fighting robot. Adv Mater
Res 823:358–362. https://doi.org/10.4028/www.scientific.net/AMR.823.358
9. Jia YZ, Li JS, Guo N, Jia QS, Du BF, Chen CY (2018) Design and research of small
crawler fire fighting robot. Chin Autom Congress 823:4120–4123
10. Prabha PS, Shivaanivarsha N (2017) A design of firefighting and supervising self-suffi-
cient robots. In: 3rd IEEE international conference on science technology engineering
and management
11. Gao S, Zhang ZY, Zhao ZH, Jamali MM (2018) Vision and infra-red sensor based fire
fighting robot. In: 61st IEEE international midwest symposium on circuits and systems.
https://doi.org/10.1109/MWSCAS.2018.8624080
12. Siregar B, Purba HA, Efendi S, Fahmi F (2017) Fire extinguisher robot using ultrasonic
camera and wifi network controlled with android smartphone. In: Materials science and
engineering conference series
13. McNeil JG, Lattimer BY (2017) Robotic fire suppression through autonomous feed-
back control. Fire Technol 53(3):1171–1199. https://doi.org/10.1007/s10694-016-0623-1
14. McNeil JG, Lattimer BY (2016) Autonomous fire suppression system for use in high
and low visibility environments by visual servoing. Fire Technol 52(5):1343–1368.
https://doi.org/10.1007/s10694-016-0564-8
15. McNeil JG, Lattimer BY (2015) Real-time classification of water spray and leaks for
robotic firefighting. IGI Glob 1:1–26
16. Kim JH, Starr JW, Lattimer BY (2015) Firefighting robot stereo infrared vision and
radar sensor fusion for imaging through smoke. Fire Technol 51(4):823–845. https://
doi.org/10.1007/s10694-014-0413-6
17. Kim JH, Lattimer BY (2015) Real-time probabilistic classification of fire and smoke
using thermal imagery for intelligent firefighting robot. Fire Saf J 72:40–49. https://
doi.org/10.1016/j.firesaf.2015.02.007
18. Starr JW, Lattimer BY (2013) Application of thermal infrared stereo vision in fire envi-
ronments. In: IEEE/ASME international conference on advanced intelligent mechatron-
ics. https://doi.org/10.1109/AIM.2013.6584337
19. Xu X, Xu S, Jin L, Song E (2011) Characteristic analysis of Otsu threshold and its
applications. Pattern Recogn Lett 32(7):956–961. https://doi.org/10.1016/
j.patrec.2011.01.021
20. Gull SF, Skilling J (1984) Maximum entropy method in image processing. Commun
Radar Signal Proces IEE Proc F 131(6):646–659. https://doi.org/10.1049/ip-f-1:19840099
21. Stuller JA (1995) A two-dimensional image model based on occlusion and maximum
entropy. In: Conference on signals, systems and computers. https://doi.org/10.1109/ACSSC.1995.540935
22. Chen HK, Yang XL (2012) Improved two-dimensional maximum entropy segmentation
algorithm for infrared images based on fractal theory. Infrared 33(8):27–31. https://
doi.org/10.3969/j.issn.1672-8785.2012.08.005
23. Santis AD, Siciliano B, Villani L (2008) A unified fuzzy logic approach to trajectory
planning and inverse kinematics for a fire fighting robot operating in tunnels. Intel Serv
Robot 1:41–49. https://doi.org/10.1007/s11370-007-0003-2
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published
maps and institutional affiliations.