
2.2 Image Processing for Detection of Intruder, Fire, and Smoke


A robot can use image processing to detect intruders, fire, and smoke in captured images. A series of
preprocessing operations, such as graying based on color components and filtering for denoising, is first
carried out to improve image quality. The suspicious area identified from the color-component difference
image is then processed with threshold segmentation and edge detection; image extraction in this
pipeline relies heavily on pattern recognition technology [1]. Moreover, during image acquisition and
processing, color segmentation can identify and extract specific colors from an image: using the color
spectrum, thresholds are set on hue, saturation, and lightness, so that in a detection system the camera
responds only to the desired target color [2]. In addition, the images provided by a video image
recognition system are all color images, and the color attributes play an indispensable role, which is
especially vital in the field of image identification. Color is a powerful descriptor in human vision and
photography; it carries rich visual information and is essential for distinguishing targets from the
surrounding area. However, algorithms that operate directly on RGB data are comparatively complex, so
grayscale representations are frequently used for extracting descriptors instead of operating on color
images, simplifying the algorithm and reducing the computational requirements [3].
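
As an illustration of this preprocessing pipeline, the following Python/OpenCV sketch chains the steps
described above: graying, denoising, threshold segmentation, edge detection, and color segmentation in
HSV space. The file name and all numeric thresholds are illustrative assumptions, not parameters taken
from [1]-[3].

import cv2

# Load one frame from the surveillance camera (the path is illustrative).
frame = cv2.imread("frame.jpg")

# 1. Graying based on the color components.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# 2. Filtering/denoising with a Gaussian blur.
denoised = cv2.GaussianBlur(gray, (5, 5), 0)

# 3. Threshold segmentation (Otsu selects the threshold automatically).
_, mask = cv2.threshold(denoised, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 4. Edge detection on the segmented area (Canny thresholds are assumed).
edges = cv2.Canny(mask, 100, 200)

# 5. Color segmentation: threshold hue, saturation, and value so that only
#    fire-like colors remain (the bounds below are illustrative).
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
fire_mask = cv2.inRange(hsv, (0, 120, 150), (25, 255, 255))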

Another approach is a laser-based machine vision system for real-time image recognition. The method
detects 3D position using an infrared laser ranging sensor and, combined with the proposed image
analysis algorithm, can be applied in the detection field. An active triangulation-based ranging system
composed of several independent laser modules is proposed, where each module generates a light-
scattering sheet that is projected onto the detected object [4].
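
The geometry behind active triangulation can be made concrete with a short, hypothetical Python sketch.
Assuming the laser is mounted parallel to the camera's optical axis at a known baseline, similar triangles
relate the laser spot's pixel offset to depth; this is a textbook illustration of the principle, not the
specific system of [4].

def triangulation_depth(pixel_offset, focal_px, baseline_m):
    # Similar triangles give z = f * b / u, where f is the focal length in
    # pixels, b the laser-camera baseline in metres, and u the laser spot's
    # pixel offset from the principal point.
    if pixel_offset <= 0:
        raise ValueError("laser spot must be offset from the principal point")
    return focal_px * baseline_m / pixel_offset

# Example: f = 800 px, b = 0.10 m, spot 40 px off-centre -> depth of 2.0 m.
print(triangulation_depth(40.0, 800.0, 0.10))
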
Another study, by Roy et al., proposed a mobile-robot development and path-planning scheme that uses
image processing and Q-learning for indoor navigation. It plans the shortest path from the current state
to the goal state using images captured from the ceiling of the indoor environment. In the system,
template matching is used to locate the mobile robot in the captured image, which is then processed in
MATLAB. MATLAB detects the robot's position and any obstacles present within the map; the robot's goal
position is fed into the MATLAB environment, and the software then generates the Q-learning path from
the processed image [4].
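
A minimal Python sketch of the Q-learning step is given below, assuming the processed ceiling image has
already been reduced to an occupancy grid and the robot's start cell has been found (e.g., by template
matching). The grid, rewards, and hyperparameters are illustrative assumptions; the original work runs in
MATLAB.

import numpy as np

# Occupancy grid from the processed image: 0 = free cell, 1 = obstacle.
grid = np.array([[0, 0, 0, 1],
                 [1, 1, 0, 1],
                 [0, 0, 0, 0],
                 [0, 1, 1, 0]])
goal = (3, 3)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

Q = np.zeros((*grid.shape, len(actions)))
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration
rng = np.random.default_rng(0)

for episode in range(500):
    state = (0, 0)  # robot position, e.g. located by template matching
    while state != goal:
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[state]))
        r, c = state[0] + actions[a][0], state[1] + actions[a][1]
        # Off-grid or obstacle moves are penalised and leave the robot in place.
        if not (0 <= r < grid.shape[0] and 0 <= c < grid.shape[1]) or grid[r, c]:
            nxt, reward = state, -5.0
        else:
            nxt, reward = (r, c), (10.0 if (r, c) == goal else -1.0)
        Q[state][a] += alpha * (reward + gamma * Q[nxt].max() - Q[state][a])
        state = nxt

# After training, following argmax(Q) greedily from the start cell traces the
# planned shortest path to the goal.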

2.4 Auto-Pilot and Manual Controls of Robot


Azeta et al. created a cost-effective, durable surveillance robot using an Arduino microcontroller, a
motor shield, and a smartphone running the Android operating system. The system was implemented on
the Arduino microcontroller, and this model controls the robot through a Robot Link V4.0 Wi-Fi module,
an Arduino motor shield driver, and geared DC motors. The robot is remotely controlled and also offers
an autonomous mode; in this mode, after the initial loading of the code, no user intervention is required
throughout operation. Depending on the client's preference, the robot can be controlled from any Android
device: the remote operator sends control signals to the smartphone, which forwards them to the
microcontroller, which in turn drives the robot in the desired direction [5].
2.5 Live Visual and Camera Systems

The same work describes a cost-effective and robust surveillance robot constructed with a
microcontroller, a motor shield, and a mobile platform running the Android operating system. The robot
is equipped with a video camera and a Wi-Fi robot link, and the operator can control the robot's
movement through the mobile robot control platform. At the same time, the smartphone camera streams
video feedback to the remote operator over the internet, allowing the operator to navigate the robot from
a distance [5].
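
The video-feedback path can likewise be sketched in Python, assuming frames are captured with OpenCV,
JPEG-compressed, and sent length-prefixed over a TCP connection to the operator. The endpoint address,
camera index, and JPEG quality below are illustrative assumptions.

import socket
import struct
import cv2

# Grab frames from the robot/phone camera (device index 0 is an assumption).
cap = cv2.VideoCapture(0)

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(("operator.example.org", 6000))  # hypothetical operator endpoint

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # JPEG-compress each frame and send it length-prefixed over TCP.
    _, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 70])
    sock.sendall(struct.pack(">I", len(buf)) + buf.tobytes())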

Kroeger et al. presented a method for automated and precise calibration to enable vision-based robot
control with a multi-camera setup, comprising three components: intrinsic calibration of each individual
camera, extrinsic calibration of each individual camera, and determination of the camera-to-robot
relationship. In general, camera calibration entails determining the camera's intrinsic parameters and
distortion coefficients; unless the camera lens is changed, these parameters remain constant. In
multi-camera systems, the extrinsic parameters include the relative rotation and translation between the
cameras, which are required in depth-estimation applications [6].
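
The intrinsic-calibration component can be sketched with OpenCV's standard checkerboard procedure,
shown below; the pattern size and image folder are assumptions. The recovered camera matrix and
distortion coefficients remain constant unless the lens is changed, and the extrinsic rotation and
translation between camera pairs can subsequently be estimated with cv2.stereoCalibrate.

import glob
import cv2
import numpy as np

# Checkerboard with 9 x 6 inner corners (an assumed pattern size).
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in glob.glob("calib/*.jpg"):  # illustrative image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K holds the intrinsics; dist the distortion coefficients. Assumes at least
# one checkerboard view was detected above.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)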

[1] W. Xiong, “Research on Fire Detection and Image Information Processing System Based on Image
Processing,” Proceedings - 2020 International Conference on Advance in Ambient Computing and
Intelligence, ICAACI 2020, pp. 106–109, Sep. 2020, doi: 10.1109/ICAACI50733.2020.00027.

[2] C.-Y. Lu, C.-C. Kao, Y.-H. Lu, and J.-G. Juang, “Application of Path Planning and Image Processing for Rescue
Robots,” Sensors and Materials, vol. 34, no. 1, pp. 65–80, 2022, doi: 10.18494/SAM.2022.3546.

[3] E. Liu, “Research on video smoke recognition based on dynamic image segmentation detection
technology,” Proceedings - 2019 12th International Conference on Intelligent Computation Technology
and Automation, ICICTA 2019, pp. 240–243, Oct. 2019, doi: 10.1109/ICICTA49267.2019.00058.

[4] N. Roy, A. Mukherjee, A. Bhuiya, and R. Chattopadhay, “Implementation of Image Processing and
Reinforcement Learning in Path Planning of Mobile Robots,” International Journal of Engineering Science
and Computing, 2017. Accessed: Oct. 07, 2022. [Online]. Available: http://ijesc.org/

[5] J. Azeta et al., “An Android Based Mobile Robot for Monitoring and Surveillance,” Procedia Manuf, vol.
35, pp. 1129–1134, Jan. 2019, doi: 10.1016/J.PROMFG.2019.06.066.

[6] O. Kroeger, J. Huegle, and C. A. Niebuhr, “An automatic calibration approach for a multi-camera-robot
system,” IEEE International Conference on Emerging Technologies and Factory Automation, ETFA, vol.
2019-September, pp. 1515–1518, Sep. 2019, doi: 10.1109/ETFA.2019.8869522.
