
Wireless Vision based Moving Object Tracking Robot through Perceptual Color Space


M Manigandan
Indira Institute of Engineering and Technology, Thiruvallur Taluk, Pandur, India
mani.mit2005@gmail.com

G Malathi
Indira Institute of Engineering and Technology, Thiruvallur Taluk, Pandur, India
malathi_itwin@yahoo.co.in

N Madhavi
Indira Institute of Engineering and Technology, Thiruvallur Taluk, Pandur, India
madhavinalandula@gmail.com

Abstract—We present a wireless vision-based scheme for driving a wireless mobile robot to track a moving object. Objects are identified by their unique color. Such artificial visual cues are expected to be removed in the near future, so new vision algorithms will be needed to cope with their absence. In this paper we present a method for detecting and tracking a ball with the help of the color information obtained from the colored object. Our method relies on a two-level approach. On the lower level, to detect the colored object, the wireless mobile robot platform carrying the wireless camera is controlled so as to keep the target at the center of the image plane. On the higher level, the robot operates under the assumption that the camera system achieves perfect tracking. In particular, the relative position of the object is retrieved from the image coordinates through simple geometry and used to compute a control law driving the robot to the target. Various possible choices are discussed for the high-level robot controller. Control of the wireless robot is based on the perceptual color information of the colored object acquired from the wireless camera.

Keywords: Object Tracking; Spatial Coordinates; Wireless Vision; Mobile Robot; Color Space

I. INTRODUCTION

Object tracking has been challenging for more than a decade for many researchers due to its complex analysis under non-ideal environments. One of the major problems in tracking an object is its illumination under various lighting conditions; these parameters can be analyzed by means of a color space. Various methods of differencing a frame against a clear background, or between adjacent frames, are suited to object detection. In this paper we consider the problem of simultaneously tracking a moving object that appears in the path of a moving robot and controlling the robot through the pixel information of the image acquired from a real-time vision sensor. The moving object is assumed to be a point object and is projected onto an image plane to form a geometrical representation that provides the position of the object based on the active color information of the image. The aims of the system are: 1) to realize the task of moving object detection; 2) to determine the position of each moving object in a non-ideal environment; and 3) to maintain a correct association between moving objects and tracks over time through perceptual color space.

Environments where mobile robots operate are subject to dynamic changes: learning models in dynamic worlds is an important aspect of autonomous robot navigation [1]. One of the keys to success is a good vision system, so the question is: how can a robot be made to see like a human? The only way is to approximate human sight with a vision sensor used as a perspective device for vision applications.

In the proposed system the tracking application is done via a wireless pinhole camera mounted on a mobile robot. It works by taking the RGB (Red, Green, and Blue) values of the tracked object and following it on the screen. The RGB image obtained from the camera is transformed to the HSV (Hue, Saturation, Value) color space for better tracking of the object and to control the motion of the robot so as to keep the object centered in the camera view. The goal is to build a fast program for ball localization in MATLAB that can capture images from a video device and calculate the position of an object, for example a ball, in front of the robot. To enhance tracking, we introduce a new combination of color-space-based image segmentation for location detection that preserves object location information in the frames. The system is able to locate and track a moving colored object based on the information attained from the vision sensor. To track an object, a wireless camera system was installed on the mobile robot and the image processing algorithm was implemented on the server computer on the network. This paper mainly describes the visual object tracking method in an ideal environment.

Section II describes the related work. Section III describes the perceptual color space and its transformation. Section IV discusses the algorithm for object detection, positioning, and threshold settings based on the color space transformation. Section V describes the interfacing details together with the system model implementation, and the results are discussed in Section VI.

II. RELATED WORK



Mobile robots have been used in several application fields because of their high workability, especially with autonomous capabilities [2-6]. Therefore, the ability of a mobile robot to detect and process moving targets, and to avoid them, is essential. Applying an active camera system to navigation and to the tracking of moving objects has many advantages [7-8]. There are several approaches [9-12] that can be used to overcome the uncertainties in measuring the locations of the mobile robot or of other objects. Applications such as human-computer interaction [13], ambient intelligent systems [14], robotics [15], and video compression [16-17] are the reason for the growing interest in object tracking. Generally, several methods have been proposed for solving the object tracking problem [18], i.e., locating the object in subsequent frames. In [19], the edges of the object are utilized as features for tracking, based on the object size in each frame. In visual object tracking, handling changes of viewpoint or object aspect is still challenging [20]. In our object tracking method, a group of candidate objects is extracted using the color distribution and the motion information, and the object regions are decided using a decision-making algorithm. The normalized RGB color distribution and the moving color information are merged for separating the object from its background. The objects are then segmented and extracted irrespective of their shape, while the background noise in the frame is removed for better tracking. Finally we show one application of a mobile robot tracking a moving object. Based on the estimated velocity of the object, the active camera locates its position from the acquired images. The steps for extracting the object from its background are: first, the RGB color distribution and the moving color information are combined; second, the objects are segmented and extracted based on their shape variations; finally, the object is detected in the noisy image acquired from the wireless video device.

A decision-making rule-based model over the color space is used for tracking the moving object and controlling the robot. This color model discriminates between the background and the object by highlighting the dominant color elements of the object in the scene. Moreover, a method for updating the color model is proposed to deal with changes in aspect and structured lighting conditions, especially through the HSV parameters.

III. PERCEPTUAL COLOR SPACE

A. Color spaces

For tracking any moving object we need to concentrate on the luminance of the image acquired under a non-ideal environment. This requires accurate color representation with a proper mathematical model. Such colour models are called colour spaces, and numerous such spaces are available.

B. RGB color space

Almost every image acquired from a digital device is in the RGB color space, whose color combinations are illustrated in Fig. 1. Most colors are obtained from combinations of these components (R; G; B). These component values alone are not sufficient to track the object efficiently; for efficient tracking the entire (R; G; B) image has to be transformed to the HSV color space. The HSV color space overcomes some of the disadvantages of RGB, such as device dependency, for better tracking. The next section gives a brief description of how the RGB color space can be transformed to the HSV color space and of what makes the HSV model so suitable for image-processing applications in which noise must be removed from the frames.

Figure 1. RGB color space

C. HSV colour space

The HSV color space is a nonlinear transformation of the RGB color space, defined by the individual components taken together. The transformation maps a color in the RGB color space (R; G; B), represented in Fig. 2 and defined by its red, green and blue components in the range 0 to 1, into the HSV color space.

Figure 2. HSV color system

In the HSV representation, the Saturation (S) of a color denotes its non-whiteness, measured from the central axis of the cone. Most colors have either high or low saturation values; low saturation corresponds to the gray tones of the image in the HSV color space. The Value (V) component describes the intensity or brightness of the transformed color: 'V' distinguishes dark colors from bright ones and can be adjusted to compensate for shadings and shadows caused by strong lighting on the object under various environmental conditions.
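As a quick numerical illustration of these components (this snippet is ours, not part of the original system), MATLAB's built-in rgb2hsv can be applied to a single RGB triple scaled to [0, 1]; the pink tone below anticipates Table I in the next section, which lists slightly differently rounded values:

```matlab
% Convert one RGB pixel (0-255 per channel) to HSV.
hsvPink = rgb2hsv([125 42 68] / 255)   % ~ [0.95 0.66 0.49]:
                                       % reddish hue, moderate saturation,
                                       % mid-low brightness
```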

While tracking an object, the mean and variance of each pixel color are calculated for an image f(x, y) of size 320x240, where 'x' and 'y' denote the row and column pixel indices respectively. Each pixel value f(x, y) carries brightness and color-distortion information, which is obtained from the mean color of f(x, y); if the luminance changes a lot, the result is not good. Francois et al. [21] presented an HSV color space based background subtraction technique. The color transformation produces a different noise level in each dimension of the HSV color space, which depends on the (R; G; B) values ranging from 0 to 255.

Almost all acquisition devices are based on the RGB color model for analyzing image information; hence that model is tied to the hardware implementation. The HSV color model is closer to human perception of color. The RGB to HSV conversion is given by equations (1), (2) and (3):

$$V = \frac{\max(R, G, B)}{255} \qquad (1)$$

$$S = \begin{cases} 0, & \text{if } \max = 0 \\ 1 - \dfrac{\min}{\max}, & \text{otherwise} \end{cases} \qquad (2)$$

$$H = \begin{cases} 0, & \text{if } \max = \min \\ \dfrac{G - B}{6(\max - \min)}, & \text{if } \max = R \text{ and } G \ge B \\ \dfrac{G - B}{6(\max - \min)} + 1, & \text{if } \max = R \text{ and } G < B \\ \dfrac{1}{3} + \dfrac{B - R}{6(\max - \min)}, & \text{if } \max = G \\ \dfrac{2}{3} + \dfrac{R - G}{6(\max - \min)}, & \text{if } \max = B \end{cases} \qquad (3)$$

where max and min are respectively the maximum and minimum of the (R; G; B) components of the image.
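Written out in MATLAB, equations (1)-(3) take the form below. This is an illustrative sketch rather than the code used in the system (the function name is ours); MATLAB's built-in rgb2hsv performs an equivalent conversion on whole images.

```matlab
function [h, s, v] = rgb2hsv_manual(r, g, b)
% RGB (scalars, 0-255) -> HSV per equations (1)-(3).
mx = max([r g b]);
mn = min([r g b]);
v  = mx / 255;                            % equation (1)
if mx == 0
    s = 0;                                % equation (2): black pixel
else
    s = 1 - mn / mx;
end
if mx == mn                               % equation (3): achromatic
    h = 0;
elseif mx == r && g >= b
    h = (g - b) / (6 * (mx - mn));
elseif mx == r                            % max = R, G < B: wrap around
    h = (g - b) / (6 * (mx - mn)) + 1;
elseif mx == g
    h = 1/3 + (b - r) / (6 * (mx - mn));
else                                      % max = B
    h = 2/3 + (r - g) / (6 * (mx - mn));
end
end
```

For the pink object of Table I, [h, s, v] = rgb2hsv_manual(125, 42, 68) returns approximately (0.95, 0.66, 0.49), matching the tabulated row up to rounding.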
Figure 3. HSV individual components for the pink colored object, image size 320x240

The individual (R; G; B) components for a pink object and their equivalent HSV transformation are displayed in Fig. 3 and Fig. 4 respectively, with the corresponding values listed in Table I. Of all this image information, Value (V) plays the most vital role for tracking an object under varying illumination.

TABLE I. COLOR TRANSFORMATION RGB TO HSV FOR VARIOUS COLORED OBJECTS

Color    R    G    B    H     S     V
Pink     125  42   68   0.94  0.64  0.48
Green    95   220  143  0.44  0.57  0.87
Yellow   92   116  64   0.25  0.40  0.45
Orange   105  45   25   0.03  0.75  0.44

Figure 4. HSV image for the pink colored object

The RGB to HSV transformation yields less noise and higher accuracy, and it separates brightness from chromaticity. A major advantage is the removal of shadows from the background, giving better detection and tracking of the object.

IV. COORDINATE SYSTEMS AND ALGORITHM

Any 2D image is represented as a function f(x, y) over rows and columns, here of size 320x240 as discussed in Section III, where 'f' is the amplitude representing the intensity or gray level of the image at that point and denotes the pixel information attained from the spatial coordinates.

In order to obtain the color information of the object, threshold values are defined for each (R; G; B) component and the components are mapped to '0' and '1': when the conditions are satisfied, the pixel value is set to '1', otherwise it is fixed to '0'. After transforming the (R; G; B) components to (H; S; V), we find the region of the object; unwanted white regions in the image are considered noise and have to be removed. Before identifying the object at the centre of the image, the noisy parts of the image are eliminated through morphological image processing.

The morphological operations remove the unwanted noise, smooth the object contours, and remove thin protrusions. The resulting image has smoothed object contours, with narrow breaks joined and gaps and holes smaller than the structuring element filled, so that the filtered image finally contains only the white region of interest. Since the moving object has to be tracked continuously, computational errors may occur; to avoid such difficulties, the filtering process has to be applied to the entire acquired image before calculating the centre of mass of all objects.
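A minimal MATLAB sketch of this segmentation stage follows. The hue/saturation/value band is an assumption chosen for a pink object (compare Table I), and the structuring-element radius and 50-pixel area filter are likewise illustrative values, not ones prescribed by the system.

```matlab
% rgbFrame: an RGB frame from the camera (uint8, 240x320x3).
hsvImg = rgb2hsv(im2double(rgbFrame));
h = hsvImg(:,:,1);  s = hsvImg(:,:,2);  v = hsvImg(:,:,3);

% Binary mask: 1 where the pixel falls inside the assumed pink band.
mask = (h > 0.90 | h < 0.02) & (s > 0.4) & (v > 0.2);

% Morphological clean-up: opening removes specks and thin protrusions,
% closing joins narrow breaks and fills holes smaller than the element.
se   = strel('disk', 3);
mask = imclose(imopen(mask, se), se);
mask = bwareaopen(mask, 50);                 % drop blobs under 50 pixels

% Centre of mass of the remaining white region(s).
stats = regionprops(mask, 'Centroid', 'Area');
if ~isempty(stats)
    [~, k]   = max([stats.Area]);            % keep the largest blob
    centroid = stats(k).Centroid;            % [cx cy] = [column row]
end
```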
A. Position of the object

The position of the object is obtained as shown in Fig. 5: the object is extracted from the image captured through the wireless camera based on the pixel information of its 'row' and 'column' values. The position of the object depends on the points prescribed as 'x1' and 'x2' (rows) on the X-axis and the points 'y1' and 'y2' (columns) on the Y-axis. These bounds are derived from equation (4):

$$X_1 = \frac{x}{2} - \mathit{numx}; \quad X_2 = \frac{x}{2} + \mathit{numx}; \quad Y_1 = \frac{y}{2} - \mathit{numy}; \quad Y_2 = \frac{y}{2} + \mathit{numy} \qquad (4)$$

where 'numx' and 'numy' are arbitrary numbers that depend on the size of the object; the colored object considered here is a pink ball. The coordinates of the centre of the frame are simply (x/2, y/2).

Figure 5. Object position on coordinate system f(x, y) for image size 320x240
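With the 320x240 frame used here, and taking numx = 120 and numy = 30 (half-widths inferred from Table II below rather than stated in the text), equation (4) yields the window bounds:

```matlab
x = 320;  y = 240;            % frame size f(x, y)
numx = 120;  numy = 30;       % half-widths inferred from Table II
X1 = x/2 - numx;              % 40
X2 = x/2 + numx;              % 280
Y1 = y/2 - numy;              % 90
Y2 = y/2 + numy;              % 150
```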
Based on the object position in the frame relative to these bounds, the appropriate data is transmitted to control the robot. The decision table derived from the above conditions is given in Table II, and the entire flow for detecting the object and determining its position is shown in Fig. 6; a sketch of the resulting decision rule follows Table II.

Figure 6. Flow chart for object detection, positioning and the decision-making rule for robot control

TABLE II. ROBOT CONTROL BASED ON X1, X2, Y1, Y2 FOR IMAGE f(x, y)

Image size (x, y)   Coordinate                       Row/column value   Robot control action
320 x 240           X1                               40                 Left
                    X2                               280                Right
                    Y1                               90                 Backward
                    Y2                               150                Forward
                    Other values of X1, X2, Y1, Y2   -                  Stop
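The decision rule of Table II can be sketched as the MATLAB fragment below. The centroid variables and the ordering of the tests (horizontal before vertical) are our assumptions; the single-character commands anticipate Table III in Section VI.

```matlab
% cx, cy: object centroid (column, row) from the segmentation stage.
if cx < X1
    cmd = 'L';      % object in the left band   -> turn left
elseif cx > X2
    cmd = 'R';      % object in the right band  -> turn right
elseif cy < Y1
    cmd = 'B';      % object above the window   -> move backward
elseif cy > Y2
    cmd = 'F';      % object below the window   -> move forward
else
    cmd = 'S';      % object centred            -> stop
end
```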
V. SYSTEM DESCRIPTION

This paper focuses on the implementation of wireless mobile robot control with coordinate-based object detection, processing the images in MATLAB. To transfer data to the mobile robot, a serial COM port is used, sending the data byte by byte, so that wireless control of the robot is performed from MATLAB based on the position of the object detected in the image. Wireless control eliminates the distance constraint between the colored object and the robot, allowing tracking from a remote location. The frameworks of the wireless robot module and of the system module are shown in Fig. 7 and Fig. 8 respectively.

A PCI-based TV tuner card is used to grab images from the remote location, to track the object based on its color and spatial coordinates, and to monitor activities from a remote place; the MATLAB implementation is built around the decision-making algorithm coded on the system. Since the wireless camera is placed on the mobile robot, a PCI or USB based TV tuner card is necessary for acquiring the remote images from the wireless camera. The PCI TV tuner is interfaced with the MATLAB Image Acquisition Toolbox via the vcap2.dll file, which acts as a bridge between MATLAB and the PCI TV tuner card for grabbing images from the camera. These images are fed to the algorithms for further processing to control the robot's actions.

The serial binary data is received from the computer and transferred to the microcontroller. Depending upon the data received, the controller transmits 4-bit data to the encoder, and the transmitter RWS 434 transmits the data at 434 MHz.

The receiver RWS 434 receives the serially transmitted data and passes it to the decoder IC. The decoder converts the serial data into parallel data, which is transmitted to the microcontroller; depending upon the received data, the controller drives the motors to perform the corresponding function. The computer processes the image of the pink ball and sends out different control data through its serial COM port. This data depends on the location of the ball (upper, lower, left, right or centre of the frame) as captured by the camera. A wireless surveillance camera is mounted in front of the robot so that it acts as a visual sensor.

Figure 7. Wireless robot module framework

Figure 8. System module for wireless data transfer for mobile robot control

VI. RESULT
To control the wireless mobile robot based on spatial coordinates, we consider the RGB-to-HSV transformation values obtained from the real-time image acquired through the wireless surveillance camera via a PCI-based image grabber, i.e., a FRONT TECH TV tuner card. Once the object is detected, we find its position within the frame as discussed in Section IV.

To control the wireless mobile robot, serial data is transferred from the PC by running the MATLAB code integrated with serial data transfer, as summarized in Table III. To obtain the exact position of the object, the frame is divided into four equally spaced coordinate regions; based on the region in which the object is detected, given the X1, X2, Y1, Y2 coordinate values, the robot is controlled through the wireless RF modules.

TABLE III. SERIAL DATA COMMANDS FOR ROBOT ACTION BASED ON OBJECT POSITION DATA FROM TABLE II

Wireless serial data (ASCII)   Robot direction
'L'                            Left
'R'                            Right
'B'                            Backward
'F'                            Forward
'S'                            Stop

The decision-making rule is implemented through a simple switch-case on the position of the located object. The resulting ball positions in image coordinates are shown in Figs. 9, 10, 11, 12 and 13, which clearly show the corresponding robot movements produced by executing the MATLAB code. The wireless mobile robot module and the wireless receiver module are displayed in Fig. 14 and Fig. 15 respectively.

Figure 9. Robot backward direction
Figure 10. Robot forward direction
Figure 11. Robot left direction
Figure 12. Robot right movement
Figure 13. Robot stop
Figure 14. Wireless wheeled mobile robot
Figure 15. Wireless transmitter section
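Putting the pieces together, an end-to-end acquisition-and-control loop in MATLAB might look as follows. This is a reconstruction under the same assumptions as the earlier fragments ('winvideo' adaptor, COM1 link, pink thresholds from Table I); the actual system reaches the TV tuner through vcap2.dll, which is not reproduced here.

```matlab
vid = videoinput('winvideo', 1, 'RGB24_320x240');  % assumed adaptor/format
sp  = serial('COM1', 'BaudRate', 9600);
fopen(sp);
X1 = 40;  X2 = 280;  Y1 = 90;  Y2 = 150;           % window bounds, Table II
while true
    hsvImg = rgb2hsv(im2double(getsnapshot(vid)));
    h = hsvImg(:,:,1);  s = hsvImg(:,:,2);  v = hsvImg(:,:,3);
    mask  = bwareaopen((h > 0.90 | h < 0.02) & s > 0.4 & v > 0.2, 50);
    stats = regionprops(mask, 'Centroid', 'Area');
    if isempty(stats)
        cmd = 'S';                                 % nothing found: stop
    else
        [~, k] = max([stats.Area]);                % track the largest blob
        c = stats(k).Centroid;                     % [cx cy]
        if     c(1) < X1, cmd = 'L';
        elseif c(1) > X2, cmd = 'R';
        elseif c(2) < Y1, cmd = 'B';
        elseif c(2) > Y2, cmd = 'F';
        else              cmd = 'S';
        end
    end
    fwrite(sp, cmd);                               % Table III command byte
end
```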

VII. CONCLUSION

Object detection and tracking through vision sensors has been challenging for many researchers. With advances in technology, wireless vision sensors are preferred for many such applications. In this paper we interfaced a low-cost PCI-based image grabber card for visual robot control from a remote location through wireless means. Moreover, object tracking through the perceptual HSV color space is more advantageous than other color space techniques, since noisy regions are adjusted dynamically through the transformation. Machine vision applications such as image-processing-based inspection systems could be built with such PCI-based image grabber cards.

VIII. FUTURE WORK

Further research can extend this work with a fuzzy inference engine for tracking multiple colored objects. A vision-based obstacle-avoiding autonomous robot could detect objects by background subtraction using appropriate thresholds. Two cameras can be employed for stereovision applications to assist 3D reconstruction of the object, enabling tracking with both front and rear vision.

ACKNOWLEDGMENT

The authors gratefully acknowledge the following individuals: Prof. V. Bhaskaran, Department of Mathematics, and Mrs. I. Manju Jackin, Assistant Professor, Department of Electronics and Communication, Velammal Engineering College, as well as our colleagues, for their support, valuable guidance, precious time and shared knowledge.

REFERENCES

[1] R. Araujo, G. Gouveia, and N. Santos, "Learning self organizing maps for navigation in dynamic worlds," Proc. IEEE Int. Conf. on Robotics and Automation (ICRA), pp. 1312-1317, 2003.
[2] K. Daniilidis and C. Krauss, "Real-time tracking of moving objects with an active camera," Real-Time Imaging, Academic Press Limited, 1998.
[3] S. M. Lavalle and R. Sharma, "On motion planning in changing partially predictable environments," International Journal of Robotics Research, vol. 16, no. 6, pp. 705-805, December 1997.
[4] H. W. Sorenson, "Kalman filtering techniques," Advances in Control Systems Theory and Applications, vol. 3, pp. 219-292, 1996.
[5] J. W. Park and J. M. Lee, "Robust map building and navigation for a mobile robot using active camera," Proc. of ICMT, pp. 99-104, October 1999.
[6] R. A. Brooks, "A robust layered control system for a mobile robot," IEEE Journal of Robotics and Automation, vol. RA-2, no. 1, pp. 14-23, April 1986.
[7] M. Selsis, C. Vieren, and F. Cabestaing, "Automatic tracking and 3D localization of moving objects using active contour models," Proc. of the IEEE International Symposium on Intelligent Vehicles, pp. 96-100, 1995.
[8] S. Segvic and S. Ribaric, "Determining the absolute orientation in a corridor using projective geometry and active vision," IEEE Trans. on Industrial Electronics, vol. 48, no. 3, pp. 696-710, June 2001.
[9] A. Adam, E. Rivlin, and I. Shimshoni, "Computing the sensory uncertainty field of a vision-based localization sensor," Proc. of the IEEE International Conference on Robotics & Automation, pp. 2993-2999, April 2000.
[10] V. Caglioti, "An entropic criterion for minimum uncertainty sensing in recognition and localization part II: A case study on directional distance measurements," IEEE Trans. on Systems, Man, and Cybernetics, vol. 31, no. 2, pp. 197-214, April 2001.
[11] C. F. Olson, "Probabilistic self-localization for mobile robots," IEEE Trans. on Robotics and Automation, vol. 16, no. 1, pp. 55-66, February 2000.
[12] H. Zhou and S. Sakane, "Sensor planning for mobile robot localization based on probabilistic inference using Bayesian network," Proc. of the 4th IEEE International Symposium on Assembly and Task Planning, pp. 7-12, May 2001.
[13] C. Colombo, A. D. Bimbo, and A. Valli, "Visual capture and understanding of hand pointing actions in a 3-D environment," IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 33, pp. 677-686, 2003.
[14] T. Darrell, D. Demirdjian, N. Checka, and P. Felzenszwalb, "Plan-view trajectory estimation with dense stereo background models," Eighth IEEE International Conference on Computer Vision (ICCV 2001), vol. 2, pp. 628-635, 2001.
[15] G. Bradski, "Computer vision face tracking as a component of a perceptual user interface," Workshop on Applications of Computer Vision, Princeton, NJ, pp. 214-219, 1998.
[16] B. Menser and M. Brunig, "Face detection and tracking for video coding applications," Conference Record of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers, pp. 49-53, 2000.
[17] W. E. Vieux, K. Schwerdt, and J. L. Crowley, "Face-tracking and coding for video compression," International Conference on Computer Vision Systems, pp. 151-160, 1999.
[18] B. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," International Joint Conference on Artificial Intelligence, pp. 674-679, 1981.
[19] M. Isard and A. Blake, "Contour tracking by stochastic propagation of conditional density," European Conference on Computer Vision, pp. 343-356, 1996.
[20] E. Marchand, P. Bouthemy, F. Chaumette, and V. Moreau, "Robust real-time visual tracking using a 2D-3D model based approach," Proc. of the Seventh IEEE International Conference on Computer Vision, vol. 1, pp. 262-268, 1999.
[21] A. Francois and G. Medioni, "Adaptive color background modeling for real-time segmentation of video streams," Proc. of the International Conference on Imaging Science, Systems, and Technology, pp. 227-232, 1999.
