N Madhavi
Indira Institute of Engineering and Technology,
Thiruvallur Taluk, Pandur, India
madhavinalandula@gmail.com
Abstract—We present a wireless vision-based scheme for driving a wireless mobile robot to track a moving object. Objects are identified by their unique color. These visual cues may be removed in the near future, so new vision algorithms are needed to cope with this. In this paper we present a method for detecting and tracking a ball with the help of the color information obtained from the colored object. Our method relies on a two-level approach. On the lower level, the wireless mobile robot platform carrying the wireless camera is controlled so as to keep the target at the center of the image plane. On the higher level, the robot operates under the assumption that the camera system achieves perfect tracking. In particular, the relative position of the object is retrieved from the image coordinates through simple geometry and used to compute a control law driving the robot to the target. Various possible choices are discussed for the high-level robot controller. Control of the wireless robot is based on the perceptual color information of the colored object acquired from the wireless camera.

Keywords—Object Tracking; Spatial Coordinates; Wireless Vision; Mobile Robot; Color Space

I. INTRODUCTION

Object tracking has been challenging for many researchers for more than a decade due to its complex analysis under non-ideal environments. One of the major problems in tracking an object is its illumination under various lighting conditions; these parameters can be analyzed by means of a color space. Various methods of differencing a frame against a clear background, or between adjacent frames, are suited for object detection. In this paper we consider the problem of simultaneously tracking a moving object that appears in the path of a moving robot and controlling the robot through the pixel information of the image acquired from a real-time vision sensor. The moving object is assumed to be a point object and is projected onto an image plane to form a geometrical representation that provides the position of the object based on the active color information of the image. The aims of the system are: 1) to realize the task of moving-object detection; 2) to determine the position of each moving object in a non-ideal environment; 3) to maintain a correct association between moving objects and their tracks over time through a perceptual color space.

Environments where mobile robots operate are subject to dynamic changes: learning models in dynamic worlds is an important aspect of autonomous robot navigation [1]. One of the keys to success is a good vision system, so the question is: how can a robot be made to see like a human? The only way is to approximate human sight with a vision sensor used as a perspective device for vision applications.

In the proposed system, the tracking application is realized through a wireless pinhole camera mounted on a mobile robot. The system takes the RGB (Red, Green, Blue) values of the tracked object and follows it on the screen. The RGB image obtained from the camera is transformed to the HSV (Hue, Saturation, Value) color space for better tracking of the object, and the motion of the robot is controlled to keep the object centered in the camera view. The goal is to make a fast MATLAB program for ball localization that can capture images from a video device and calculate the position of an object, for example a ball, in front of the robot. To enhance tracking, we introduce a new combination of color-space-based image segmentation for location detection that preserves object location information across frames. The system is able to locate and track a moving colored object based on the information obtained from the vision sensor. To track an object, a wireless camera system was installed on the mobile robot and the image processing algorithm was implemented on a server computer on the network. This paper mainly describes the visual object tracking method in an ideal environment.

Section II describes related work. Section III describes the perceptual color space and its transformation. Section IV discusses the algorithm for object detection, positioning, and threshold value settings based on the color space transformation. Section V describes the interfacing details together with the system model implementation, and the results are discussed in Section VI.

II. RELATED WORK

Mobile robots have been used in several application fields because of their high workability, especially with autonomous
an image size of 320x240. Each pixel value f(x, y) represents the brightness and color distortion; this information is obtained from the mean color of f(x, y). If the luminance changes a lot, the result is not so good. Francois et al. [21] presented an HSV color space based background subtraction technique. The color transformation produces a different noise level, which can be analyzed from each dimension of the HSV color space and depends on the (R, G, B) values ranging from 0 to 255.

All acquisition devices are based on the RGB color model for analyzing image information; hence that model is tied to the hardware implementation. The HSV color model is closer to human perception of color. The RGB to HSV conversion is obtained from equations 1, 2 and 3:

V = max(R, G, B) / 255                                            (1)

S = { 0,              if max = 0
    { 1 - min/max,    otherwise                                   (2)

H = { 0,                                if max = min
    { (G - B) / (6(max - min)),         if max = R and G >= B
    { (G - B) / (6(max - min)) + 1,     if max = R and G < B
    { 1/3 + (B - R) / (6(max - min)),   if max = G
    { 2/3 + (R - G) / (6(max - min)),   if max = B                (3)

where max and min are respectively the maximum and minimum of the (R, G, B) components of the image.

The individual (R, G, B) components for a pink object and their equivalent HSV transformation values are displayed in Fig. 3 and Fig. 4 respectively, and the corresponding values are given in Table I. Of all this image information, Value (V) plays the most vital role for tracking an object under varying illumination.

Figure 3. Individual HSV components for the pink colored object (image size 320x240)

Figure 4. HSV image for the pink colored object

TABLE I. RGB TO HSV COLOR TRANSFORMATION FOR VARIOUS COLORED OBJECTS

Color    R    G    B    H     S     V
Pink     125  42   68   0.94  0.64  0.48
Green    95   220  143  0.44  0.57  0.87
Yellow   92   116  64   0.25  0.40  0.45
Orange   105  45   25   0.03  0.75  0.44

The RGB to HSV transformation provides less noise and high accuracy, and separates brightness from chromaticity. Its major advantage is the removal of shadows from the background for better detection and tracking of the object.

IV. COORDINATE SYSTEMS AND ALGORITHM

Any 2D image is represented as a function f(x, y) over rows and columns for an image size of 320x240, as discussed in Section III, where 'f' is the amplitude function representing the intensity or gray level of the image at that point and denotes the pixel information attained from the spatial coordinates.

In order to obtain the color information of the object, threshold values are defined for each (R, G, B) component, and these components are transformed to '0' and '1': once the conditions are satisfied, the pixel value of the image is set to '1'; otherwise it is fixed to '0'. After transforming the (R, G, B) components to (H, S, V), we find the region of the object; unwanted white regions in the image, considered to be noise, have to be removed. Before identifying the object at the centre of the image, the noisy parts of the image have to be eliminated through image morphology.

The morphological operation removes the unwanted noise, smoothes object contours, and removes thin protrusions. The resultant image tends to have smooth object contours, with narrow breaks joined and holes smaller than the structuring element filled. The filtered image finally contains only the white region of interest. Since the moving object has to be tracked continuously, some computational errors may occur. In order to avoid such difficulties, the filtering process has to be applied to the entire acquired image before calculating the centre of mass of all objects.
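As an illustration, the RGB to HSV conversion of equations 1-3 can be sketched in a few lines (shown here in Python rather than the MATLAB used in our implementation; the function name is ours):

```python
def rgb_to_hsv(r, g, b):
    """Convert 8-bit (R, G, B) values to (H, S, V) per equations 1-3.

    All outputs lie in [0, 1]; negative hues wrap around the hue circle.
    """
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx / 255.0                          # equation (1)
    s = 0.0 if mx == 0 else 1.0 - mn / mx   # equation (2)
    if mx == mn:                            # equation (3), case by case
        h = 0.0
    elif mx == r:
        h = (g - b) / (6.0 * (mx - mn))
        if g < b:
            h += 1.0                        # wrap negative hue into [0, 1]
    elif mx == g:
        h = 1.0 / 3.0 + (b - r) / (6.0 * (mx - mn))
    else:                                   # mx == b
        h = 2.0 / 3.0 + (r - g) / (6.0 * (mx - mn))
    return h, s, v

# Pink object from Table I: (125, 42, 68) matches the tabulated
# (0.94, 0.64, 0.48) up to the rounding used in the table.
print(rgb_to_hsv(125, 42, 68))
```

This is the same normalization used by common HSV implementations, so the per-pixel conversion can be vectorized directly over a 320x240 frame.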
A. Position of the Object
The position of the object is obtained as shown in Fig. 5: the object is extracted from the image captured through the wireless camera based on the pixel information of the 'row' and 'column' values. The position of the object depends on the points prescribed as 'x1' and 'x2' (rows) on the X-axis and the points 'y1' and 'y2' (columns) on the Y-axis.
The position of the object is derived from equation 4:

X1 = x/2 - numx;    X2 = x/2 + numx
Y1 = y/2 - numy;    Y2 = y/2 + numy                               (4)
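The centre-of-mass computation and the central window of equation 4 can be sketched as follows (a Python rendering; the toy mask and the margins `numx`, `numy` are illustrative assumptions):

```python
def object_position(mask):
    """Centre of mass (row, col) of the '1' pixels in a binary mask."""
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v]
    if not pts:
        return None  # no object detected in this frame
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def centre_window(x, y, numx, numy):
    """Bounds (X1, X2, Y1, Y2) around the frame centre, per equation 4."""
    return (x / 2 - numx, x / 2 + numx,
            y / 2 - numy, y / 2 + numy)

# Toy 5x5 mask with a 2x2 object in the lower-right corner
mask = [[0] * 5 for _ in range(5)]
mask[3][3] = mask[3][4] = mask[4][3] = mask[4][4] = 1
print(object_position(mask))   # -> (3.5, 3.5)
```

In the real pipeline the mask is the morphologically filtered 320x240 binary image, and the window margins set how tightly the object must be centred before the robot holds position.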
Figure 9. Robot Backward Direction

For controlling the wireless mobile robot based on the spatial coordinates, we need to consider the RGB-to-HSV transformation values obtained from the real-time image acquired through the wireless surveillance camera via a PCI-based image grabber, i.e., a FRONT TECH TV tuner card. Once the object is detected, we need to find its position within the frame, as discussed in Section IV.

To control the wireless mobile robot, serial data are transferred from the PC by running the MATLAB code integrated with serial data transfer, as discussed in Table III. In order to obtain the exact position of the object, we divide the frame into four equally spaced coordinate regions. Based on where the object is detected relative to the X1, X2, Y1, Y2 coordinate values, the robot is controlled through wireless RF modules.

VI. RESULT

Figure 13. Robot Stop

The decision-making rule is implemented through a simple switch case for robot movement based on the different positions at which the object is located. The output results for various ball positions in the image coordinates are shown in Figs. 9, 10, 11, 12 and 13 consecutively, which clearly show the robot movements based on the spatial coordinates obtained by executing the MATLAB code. The wireless mobile robot module and the wireless receiver module are displayed in Fig. 14 and Fig. 15 respectively.
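The switch-case decision rule above can be sketched as follows (a Python rendering of the logic; the command names and window bounds are illustrative assumptions, not the exact MATLAB code):

```python
def robot_command(cx, cy, X1, X2, Y1, Y2):
    """Map the object's centre (cx, cy) in image coordinates to a
    movement command, relative to the central window [X1, X2] x [Y1, Y2]."""
    if cx is None:
        return "STOP"       # no object in view: halt the robot
    if cx < X1:
        return "LEFT"       # object left of the central window
    if cx > X2:
        return "RIGHT"      # object right of the central window
    if cy < Y1:
        return "FORWARD"    # object above the window: move toward it
    if cy > Y2:
        return "BACKWARD"   # object below the window: back away
    return "STOP"           # object centred: hold position

# Frame 320x240 with a central window of +/-40 columns and +/-30 rows
X1, X2, Y1, Y2 = 120, 200, 90, 150
print(robot_command(60, 120, X1, X2, Y1, Y2))    # -> LEFT
print(robot_command(160, 120, X1, X2, Y1, Y2))   # -> STOP
```

The returned command would then be written out over the serial link to the wireless RF module on each processed frame.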
VII. CONCLUSION

Object detection and tracking through a vision sensor has been challenging for many researchers. With the advancement of technology, wireless vision sensors are preferred for many such applications. In this paper we interface a low-cost PCI-based image grabber card for visual robot control from a remote location through wireless means. Moreover, object tracking through the perceptual HSV color space is advantageous over other color space techniques, since noisy regions are adjusted dynamically through the transformation. Machine vision applications such as image-processing-based inspection systems could be built with such PCI-based image grabber cards.

VIII. FUTURE WORK

Further research can extend this work with a fuzzy inference engine for tracking multiple colored objects. A vision-based obstacle-avoiding autonomous robot could detect objects by background subtraction using appropriate thresholds. Two cameras could be employed for stereovision-based applications to assist in 3D reconstruction of the object, enabling tracking with both front and rear views.

ACKNOWLEDGMENT

The authors gratefully acknowledge the following individuals: Prof. V. Bhaskaran, Department of Mathematics, and Mrs. I. Manju Jackin, Assistant Professor, Department of Electronics and Communication, of Velammal Engineering College, and our colleagues, for their support, valuable guidance, and for devoting their precious time and sharing their knowledge.

REFERENCES

[1] R. Araujo, G. Gouveia, and N. Santos, "Learning self organizing maps for navigation in dynamic worlds," Proc. IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 1312-1317, 2003.
[2] K. Daniilidis and C. Krauss, "Real-time tracking of moving objects with an active camera," Real-Time Imaging, Academic Press Limited, 1998.
[3] S. M. LaValle and R. Sharma, "On motion planning in changing, partially predictable environments," International Journal of Robotics Research, vol. 16, no. 6, pp. 705-805, December 1997.
[4] H. W. Sorenson, "Kalman filtering techniques," Advances in Control Systems Theory and Applications, vol. 3, pp. 219-292, 1996.
[5] J. W. Park and J. M. Lee, "Robust map building and navigation for a mobile robot using active camera," Proc. of ICMT, pp. 99-104, October 1999.
[6] R. A. Brooks, "A robust layered control system for a mobile robot," IEEE Journal of Robotics and Automation, vol. RA-2, no. 1, pp. 14-23, April 1986.
[7] M. Selsis, C. Vieren, and F. Cabestaing, "Automatic tracking and 3D localization of moving objects using active contour models," Proc. of the IEEE International Symposium on Intelligent Vehicles, pp. 96-100, 1995.
[8] S. Segvic and S. Ribaric, "Determining the absolute orientation in a corridor using projective geometry and active vision," IEEE Trans. on Industrial Electronics, vol. 48, no. 3, pp. 696-710, June 2001.
[9] A. Adam, E. Rivlin, and I. Shimshoni, "Computing the sensory uncertainty field of a vision-based localization sensor," Proc. of the IEEE International Conference on Robotics & Automation, pp. 2993-2999, April 2000.
[10] V. Caglioti, "An entropic criterion for minimum uncertainty sensing in recognition and localization part II - A case study on directional distance measurements," IEEE Trans. on Systems, Man, and Cybernetics, vol. 31, no. 2, pp. 197-214, April 2001.
[11] C. F. Olson, "Probabilistic self-localization for mobile robots," IEEE Trans. on Robotics and Automation, vol. 16, no. 1, pp. 55-66, February 2000.
[12] H. Zhou and S. Sakane, "Sensor planning for mobile robot localization based on probabilistic inference using Bayesian network," Proc. of the 4th IEEE International Symposium on Assembly and Task Planning, pp. 7-12, May 2001.
[13] C. Colombo, A. D. Bimbo, and A. Valli, "Visual capture and understanding of hand pointing actions in a 3-D environment," IEEE Transactions on Systems, Man and Cybernetics - Part B, vol. 33, pp. 677-686, 2003.
[14] T. Darrell, D. Demirdjian, N. Checka, and P. Felzenszwalb, "Plan-view trajectory estimation with dense stereo background models," Eighth IEEE International Conference on Computer Vision (ICCV 2001), vol. 2, pp. 628-635, 2001.
[15] G. Bradski, "Computer vision face tracking as a component of a perceptual user interface," Workshop on Applications of Computer Vision, Princeton, NJ, pp. 214-219, 1998.
[16] B. Menser and M. Brunig, "Face detection and tracking for video coding applications," Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers, pp. 49-53, 2000.
[17] W. E. Vieux, K. Schwerdt, and J. L. Crowley, "Face-tracking and coding for video compression," International Conference on Computer Vision Systems, pp. 151-160, 1999.
[18] B. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," International Joint Conference on Artificial Intelligence, pp. 674-679, 1981.
[19] M. Isard and A. Blake, "Contour tracking by stochastic propagation of conditional density," European Conference on Computer Vision, pp. 343-356, 1996.
[20] E. Marchand, P. Bouthemy, F. Chaumette, and V. Moreau, "Robust real-time visual tracking using a 2D-3D model based approach," Proc. of the Seventh IEEE International Conference on Computer Vision, vol. 1, pp. 262-268, 1999.
[21] A. Francois and G. Medioni, "Adaptive color background modeling for real-time segmentation of video streams," Proc. of the International Conference on Imaging Science, Systems, and Technology, pp. 227-232, 1999.