
Journal of the Chinese Institute of Engineers, Vol. 28, No. 6, pp. 907-914 (2005)

DESIGN AND DEVELOPMENT OF AUTOMATED 3D VISUAL TRACKING SYSTEM

Nazim Mir-Nasiri*, Nur Azah binti Hamzaid, and Abdul Basid bin Shahul Hameed

ABSTRACT
Camera-based systems are frequently used to track moving objects within their field of view. This paper describes the design and development of a camera-based visual system that can continuously track a moving object without the need to calibrate the camera in real-world coordinates. This reduces system complexity and processing time by avoiding unnecessary conversions and calibrations. The system consists of a two-motor pan-tilt camera driving mechanism, a PCI image acquisition board, and a PWM-based DC-motor driver board. It uses image processing techniques to identify and locate the object in the 3D scene and motion control algorithms to direct the camera towards the object. Thus the objective of the project is to develop a vision system and control algorithms that can lock onto a moving object within the field of view. The developed software and related interface hardware monitor and control the motors in such a way that the moving object is always located right at the center of the camera image plane. The system, in general, simulates the 3D motion of the human eye, which always tends to focus on a moving object within the range of its view; it imitates the tracking ability of the human eye.

Key Words: image processing, object recognition, motion control.

I. INTRODUCTION

Many developed visual systems are used to track moving objects, and there are many methods for implementing visual servoing or visual tracking. Some methods use training, a known model, or initialization, whereas other methods involve a signature vector or motion prediction. These methods generally require a grid and calibration for position estimation. One of the approaches to visual tracking is to use Active Appearance Models (AAM). However, it is limited to having all points of the model visible in all frames.

Based on an awarded paper presented at Automation 2005, the 8th international conference on automation technology, Taichung, Taiwan, R.O.C. during May 5-6, 2005. *Corresponding author. (Email: nazim@iiu.edu.my) The authors are with the Department of Mechatronics Engineering, International Islamic University Malaysia, Jalan Gombak, 53100, KL, Malaysia.

Birkbeck et al. (2004) have introduced a notion of visibility uncertainty for the points in the AAM, removing the above limitation and therefore allowing the object to contain self-occlusions. The visibility uncertainty is easily integrated into the existing AAM framework, keeping model initialization time to a minimum. Mikhalsky et al. (2004) have proposed an algorithm based on extracting a signature vector from a target image and subsequently detecting and tracking its location in the vicinity of the origin. The process comprises three main phases: signal formation, extraction of signatures, and matching. Leonard et al. (2004) have implemented visual servoing by learning to perform tasks such as centering. Their system uses function approximation from reinforcement learning to learn the visuomotor function of a task, which relates actions to perceptual variations. Sim et al. (2002) proposed the Modified Smith Predictor-DeMenthon-Horaud (MSP-DH) visual servoing system. The DH pose estimation algorithm has the accuracy and convergence rate needed to make visual tracking possible, and a simple and elegant structure is retained for both the pose estimation and the visual servo controller.


The research done by Denzler et al. (1994) describes a two-stage active vision system for tracking a moving object: the object is detected in an overview image of the scene, and a close-up view is then taken by changing the frame grabber's parameters and by a positional change of the camera mounted on a robot's hand. For object tracking they used active contour models, where the active contour is interactively initialized on the first image of the sequence. However, errors may occur if there are strong background edges near the object, or if the ROI only partially covers the moving object. Another method, by Carter et al. (2003), tracks an object using a robust algorithm for arbitrary object tracking in long image sequences. This technique extends the dynamic Hough transform to detect arbitrary shapes undergoing affine motion; the proposed tracking algorithm requires the whole image sequence to be processed globally. Crowley et al. (1995) have conducted research to compare kinematic and visual servoing techniques. Kinematic servoing is shown to be efficient if there is a sufficiently precise model of the kinematic chain, and large errors in the kinematic model can cause the system to oscillate. In contrast, visual servoing is extremely robust with respect to errors in the kinematic model, but requires a much larger number of cycles to converge. Xiao et al. (2003) used a state vector for visual servoing: a simple image Jacobian matrix is taken from the state equation and leads to a simple adaptive controller which can drive a camera to the ideal position; however, the system needs to be calibrated. Kragic et al. (2001) have discussed integration methods based on weak or model-free approaches, in particular voting and fuzzy logic; their results show that integration using weak methods enables a significant increase in robustness.

In this paper, the main objective is to present a simple but effective camera-based vision system which acquires gray images of the scene in continuous mode (25 frames/second), detects the presence of a particular shaped object in a 3D scene, determines the object's center of mass in pixel coordinates within the image plane for every frame, and automatically guides the camera driving mechanism in order to align the view axis of the camera with the line of sight of the object on a per-frame basis. In other words, the system forces the mass center of the object's image right into the center of the image plane. Guidance of the camera motors has been accomplished by a quadrant approach. Taking into consideration the high speed of the PCI acquisition board, the simple and effective binary image processing and measurement algorithms which do not require calibration of the camera in real-world coordinates, and the fast interface between the motor IC driver chip and the computer, an acceptable level of system time response has been achieved. The response time is mainly determined by the drive ratio of the pan-tilt mechanism.

Fig. 1 Acquired image

II. CONCEPTUAL DESIGN OF THE VISUAL TRACKING PROCESS

The developed system enables continuous tracking of a moving object and simulates the motion of the human eye. For our experiments a spherical black ball has been selected in order to simplify the recognition process. A solid black object has been selected to provide sufficient contrast between the object image and the surrounding background, and a spherical object has been selected to provide a constant circular boundary projection of the object onto the image plane regardless of its 3D orientation and distance from the camera.

1. Image Enhancement and Processing

The first step in the tracking system operation after system initialization is image acquisition, followed by enhancement. The enhancement is essential to ensure that the images have sufficient contrast with the background and fewer shades of gray within the object's image. Since the selected object is close to black, i.e. a value of 0 on the gray scale, the nonlinear power function with exponent 1/γ, i.e. inverse Gamma correction, has been selected to increase the contrast in dark areas at the expense of the contrast in bright areas of the image. This correction tool also increases the overall brightness of the image. The exponent 1/γ has been selected to be 0.53. The original acquired image is shown in Fig. 1 while the Gamma-corrected image is shown in Fig. 2. The next tool used for image enhancement is a linear correction function. This tool makes dark objects darker and light objects lighter, so it makes two corrections to the image: firstly, it reduces the variation of pixel gray values within the image of the ball and, secondly, it increases the contrast between black ball pixels and the pixels of other objects in the background. The resulting image after the application of this tool with a line slope of 53 is shown in Fig. 3.
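The two enhancement steps can be sketched as lookup-table operations on an 8-bit grayscale frame. The sketch below is illustrative, not the original implementation (which used NI vision tools): the gamma step uses the stated exponent of 0.53, while the exact mapping behind the "line slope 53" correction is an assumption, modeled here as a steep linear stretch about mid-gray.

```python
import numpy as np

def enhance(frame_gray, inv_gamma=0.53, slope=53.0):
    """Inverse-gamma correction followed by a linear contrast lookup (sketch)."""
    levels = np.arange(256, dtype=np.float64)

    # Inverse Gamma correction: out = 255 * (in/255) ** 0.53. With an exponent below 1,
    # dark gray levels are spread apart (more contrast around the black ball) and the
    # overall brightness of the image increases.
    lut_gamma = np.clip(255.0 * (levels / 255.0) ** inv_gamma, 0, 255).astype(np.uint8)

    # Linear correction: a steep line through mid-gray pushes dark pixels toward 0 and
    # bright pixels toward 255 ("dark objects darker, light objects lighter").
    # The interpretation of the slope value is an assumption of this sketch.
    lut_linear = np.clip(slope * (levels - 128.0) + 128.0, 0, 255).astype(np.uint8)

    return lut_linear[lut_gamma[frame_gray]]
```

Applied to each acquired frame, this leaves the black ball as a nearly uniform dark region against a much lighter background, which is what the subsequent thresholding step relies on.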


Fig. 2 Gamma-corrected image

Fig. 4 Binary image of the scene

Fig. 3 Linear-corrected image

2. Object Identification

Identification and recognition of the object within the gray image of the entire scene is implemented first by converting the acquired gray image into a binary one and then by shape recognition of the object. An appropriate threshold for the 8-bit gray image, for the given amount of ambient illumination, has been selected as the range between 2 and 211 on the gray scale. The result of applying this threshold to the image of the scene is shown in Fig. 4. The identification or recognition of the circular object within the binary image still remains a problem, although the image has been enhanced before binarization. Blob analysis is therefore implemented on the binary image, using several binary filters to isolate the object of interest from other unwanted objects in the scene. The system first labels all blobs (or particles) in the binary image, removes border objects and noise particles from the image, and then calculates and matches critical circularity parameters of the remaining objects to isolate the object of interest. By removing border objects we always assume that the object of interest is well within the plane of projection and does not touch its borders.

Removing noise particles from the image by using well-known particle area filters is not a simple task, because the system may consider the object of interest to be noise if it is located far from the camera. The experiments with the selected object size show that the best choice for the particle area filter is the range between 50 and 32000 pixels; all particles outside this range must be removed by the filter. According to measurements, the object appears to be 32000 pixels in area when it is 0.3 m from the camera, and 50 pixels in area when it is 6 m from the camera. According to observation, if the object is less than 0.3 m from the camera, the corresponding image of the object within the image plane shows a lack of contrast with the background, which causes problems in segmenting the image. Thus the value of 0.3 m is selected as the minimum allowed distance from the object to the camera. On the other hand, if the object is more than 6 m from the camera it appears very small in the image plane and this, in turn, causes difficulties in differentiating between the object and the noise that is always present in the image. Thus the value of 6 m is selected as the maximum allowed distance from the camera to the object. Of course, these limits can vary depending on the resolution of the CCD sensor and the conditions of scene illumination; this range of distances was selected based on experiments done in the lab with a particular type of monochrome camera and fixed lighting conditions. However, the particle area filter alone is not sufficient to recognize the object, since other objects with similar areas but different shapes may be present in the image. An object of interest should have certain shape criteria to differentiate it from the surroundings. It should be identifiable and always distinct from the environment to ensure that the system always receives proper input for continuous recognition. In this work the main concern was not to use a complex method for object recognition, but rather to ensure continuous success of the recognition process for the subsequent control of the camera motors that follow the moving object.
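A minimal sketch of the binarization and particle filtering described above, assuming an enhanced 8-bit frame as input: the threshold band (2-211), the border-particle removal, and the 50-32000 pixel area filter follow the values in the text, while the OpenCV connected-components call is an illustrative substitute for the NI blob-analysis tools actually used.

```python
import numpy as np
import cv2

def candidate_blobs(frame_gray, t_low=2, t_high=211, a_min=50, a_max=32000):
    """Binarize the enhanced frame and keep particles that could be the ball (sketch)."""
    # Pixels whose gray value falls inside the selected threshold band become object pixels.
    binary = ((frame_gray >= t_low) & (frame_gray <= t_high)).astype(np.uint8)

    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)
    h, w = binary.shape
    candidates = []
    for i in range(1, n):                            # label 0 is the background
        x, y, bw, bh, area = stats[i]
        # Border-object removal: the object of interest is assumed not to touch the frame edge.
        if x == 0 or y == 0 or x + bw == w or y + bh == h:
            continue
        # Particle area filter: rejects noise (< 50 px) and blobs larger than the ball can appear (> 32000 px).
        if not (a_min <= area <= a_max):
            continue
        candidates.append((labels == i, tuple(centroids[i])))   # (particle mask, centroid in pixels)
    return candidates
```

The surviving particles are then passed to the shape test described next.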


Fig. 5 Recognized object

Fig. 6 Quadrants of the image plane (the 768 × 576 pixel field of view is divided into four quadrants by the signs of dx and dy)

Thus, the strategy in selecting the shape was to choose a 3D object which has a simple and unchanging 2D projection in the image plane regardless of its orientation and position in 3D space. A spherical object always has a circular projection in the image plane regardless of its orientation, and circularity is a simple and reliable criterion for recognition. The experiments show that the best tool to isolate circles from other shapes is the elongation factor filter. The elongation factor is the ratio of the maximum diameter of a binary object to the short side of a rectangle with the same area encompassing the object. The more elongated the shape of a particle, the higher its elongation factor; this factor is able to identify a binary circular object because a circle has a small elongation factor. Based on the experiments done in the lab (Fig. 1), all blobs or particles with an elongation factor greater than 1.4 are eliminated. The main advantage of the elongation factor compared to, for example, the compactness factor is that it is able to differentiate between a circle and a square having the same pixel area. The result of applying the border-object removal, area, and elongation factor filters to the binary image is shown in Fig. 5.
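A sketch of the elongation-factor test under the definition quoted above (maximum particle diameter divided by the short side of an equal-area rectangle built on that diameter, i.e. approximately d_max²/area); the exact formula used by the original NI filter is not given in the paper, so this approximation is an assumption.

```python
import numpy as np
import cv2

def elongation_factor(particle_mask):
    """Approximate elongation factor of one binary particle (sketch)."""
    contours, _ = cv2.findContours(particle_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = np.vstack([c.reshape(-1, 2) for c in contours]).astype(np.float32)
    hull = cv2.convexHull(points).reshape(-1, 2)
    # Maximum (Feret) diameter: the largest distance between any two hull points.
    d = np.sqrt(((hull[:, None, :] - hull[None, :, :]) ** 2).sum(axis=-1))
    d_max = float(d.max())
    area = float(particle_mask.sum())
    # A rectangle with the same area and d_max as its long side has short side area/d_max,
    # so the ratio "max diameter / short side" reduces to d_max**2 / area.
    return d_max ** 2 / area

def looks_like_the_ball(particle_mask, max_elongation=1.4):
    # A circle scores about 4/pi (~1.27) under this approximation, a square about 2,
    # so the 1.4 cut-off from the text separates the two.
    return elongation_factor(particle_mask) <= max_elongation
```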

3. Tracking Strategy: Quadrant Approach

Once the spherical object has been identified, the final step is to apply an appropriate strategy for extracting this moving object from every image frame acquired by the camera in continuous mode. The strategy developed in this work is based on calculating the object's blob centroid and applying the quadrant approach. Just as the human eye tracks moving objects by changing its orientation in order to keep the object within its view, the developed system changes the orientation of its camera in order to keep the object's centroid right at the center of the image plane. To keep the object's centroid always at the center of the image plane, the system must constantly check the error in the location of the centroid with respect to the image plane's center in every acquired image frame and take corrective measures to compensate for it. Since the camera acquires 25 frames/second, the system's response to these errors can be quite fast. The quadrant approach to estimating the current position error of the object with respect to the center of the image plane is illustrated in Fig. 6. The field of view of the camera is divided into four quadrants. The error between the object's centroid (x, y) and the center of the image plane (0, 0), i.e. the offsets from the center in the form of the two components dx and dy, is calculated. The four possible sign combinations of dx and dy uniquely determine the quadrant in which the object falls. The system then chooses one of the four possible combinations of the two motor rotation senses (but not magnitudes) to compensate for the current position errors dx and dy respectively, as sketched below. A pan-motion motor runs at an appropriate speed to compensate for the dx error and a tilt-motion motor does the same to compensate for the dy error. If the system successfully compensates for both errors the object appears right at the center of the image plane and the motors cease running immediately; thus, there is no need to calculate the amount of shaft rotation in this system. If the system applies corrections for every frame of the successively acquired images (one every 1/25 second), then it is able to track moving objects at reasonable speeds. Objects moving in front of the camera can be tracked only if their speed does not exceed the corresponding speed of the ready-made camera driving mechanism.
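A minimal sketch of the quadrant decision, assuming the image-plane center of a 768 × 576 frame; the sign convention of the returned direction values is an assumption, since the actual motor polarity depends on the wiring of the pan-tilt mechanism.

```python
def quadrant_command(cx, cy, width=768, height=576):
    """Map the object's centroid (cx, cy), in pixels, to motor rotation senses (sketch).

    Only the signs of dx and dy choose the rotation directions; their magnitudes
    are used separately to set the PWM duty cycles (see Eqs. (1) and (2) below).
    """
    dx = cx - width / 2.0          # horizontal offset from the image-plane center
    dy = cy - height / 2.0         # vertical offset from the image-plane center

    # +1 / -1 select the two opposite rotation senses of each motor; 0 stops the
    # motor once its error component has been compensated.
    pan_dir = 0 if dx == 0 else (1 if dx > 0 else -1)
    tilt_dir = 0 if dy == 0 else (1 if dy > 0 else -1)
    return dx, dy, pan_dir, tilt_dir
```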


Testing of the developed system shows that in a drastically changing environment the system may either recognize more than one object of interest or lose the object from its view. In the first case more than one object may satisfy the recognition criteria set for the system, i.e. pass the elongation factor filter. In the second case the moving object of interest may temporarily overlap with another object with similar shades of gray, making the object unrecognizable by the system. In order to stabilize the performance of the system in such situations, a temporary halt has been incorporated: when the system encounters such a situation it does not react to the ambiguous image frames and waits for subsequent image frames with the expected results.

III. CAMERA MOTION CONTROL

The camera motion is controlled via speed control of the dc motors driving the camera. As mentioned earlier, the tracking strategy is to drive the motors continuously as long as there is an error in the positioning of the object in the image plane. According to this strategy the amount of camera rotation is not important, but the speed of rotation is. The system should be able to track fast-moving objects at the full speed of the motors and slow-moving objects at reduced motor speed.

1. Motor Speed Control Strategies

The dc motor speed is varied by varying the supplied voltage. The voltage is supplied in pulses of variable width, i.e. using Pulse Width Modulation (PWM). Therefore, the motor speed is controlled by adjusting the duty cycle of the PWM function and can vary from zero to the nominal value. The direction of motor rotation is controlled by the polarity of the supplied voltage. The amount of voltage supplied by the system to the motors depends on the absolute values of the errors dx and dy. If the instantaneous magnitude of the error is large or growing, because the object is moving faster than the camera, the motor should accelerate to catch up with the object. On the other hand, if the magnitude of the error is small or shrinking, because the object is moving slower than the camera, the motor should decelerate accordingly in order to avoid excessive oscillations when the camera reaches the target, i.e. when the camera's center is close to the object's centroid. Numerous tests of the system, which comprises a ready-made pan-tilt surveillance mechanism with built-in dc motors, show that the following cycloidal and ramp functions for the duty cycle (in percent) of the PWM signal help to smooth the motion of the tracking camera and reduce its response time:

$$
y(x)=
\begin{cases}
\dfrac{13-5}{2}\,x+5, & x \le 2\\[6pt]
(100-13)\left[\dfrac{x-2}{50-2}-\dfrac{1}{2\pi}\sin\dfrac{2\pi(x-2)}{50-2}\right]+13, & 2 \le x \le 50\\[6pt]
100, & x \ge 50
\end{cases}
\tag{1}
$$

$$
y(x)=
\begin{cases}
(100-25)\left[\dfrac{x}{15}-\dfrac{1}{2\pi}\sin\dfrac{2\pi x}{15}\right]+25, & x \le 15\\[6pt]
100, & x \ge 15
\end{cases}
\tag{2}
$$
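A sketch of the two duty-cycle profiles written directly from Eqs. (1) and (2); the function and argument names are illustrative, since the original implementation was built in LabVIEW.

```python
import math

def pan_duty(err_px):
    """PWM duty cycle (%) of the pan motor as a function of |dx| in pixels, per Eq. (1)."""
    x = abs(err_px)
    if x <= 2:                      # ramp from 5% (smallest duty that moves the pan stage) to 13%
        return (13 - 5) / 2.0 * x + 5
    if x <= 50:                     # cycloidal rise from 13% to 100%
        u = (x - 2) / (50 - 2)
        return (100 - 13) * (u - math.sin(2 * math.pi * u) / (2 * math.pi)) + 13
    return 100.0                    # full speed beyond 50 pixels of error

def tilt_duty(err_px):
    """PWM duty cycle (%) of the tilt motor as a function of |dy| in pixels, per Eq. (2)."""
    x = abs(err_px)
    if x <= 15:                     # cycloidal rise from 25% (smallest duty that moves the tilt stage) to 100%
        u = x / 15.0
        return (100 - 25) * (u - math.sin(2 * math.pi * u) / (2 * math.pi)) + 25
    return 100.0                    # full speed beyond 15 pixels of error
```

The cycloidal term has zero slope at both ends of its interval, so each profile joins its neighbouring segments without a jump in slope, which is what gives the smooth acceleration and deceleration described below.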

Function (1) is used to control the speed of the pan-motion motor and function (2) is used to control the speed of the tilt-motion motor. These two functions are different because of the difference in the velocity ratios of the two gear transmissions for the pan and tilt motions of the camera. The reduction ratio of the built-in tilt mechanism is greater than that of the pan mechanism, which makes the camera move more slowly in the vertical direction than in the horizontal direction for the same input motor voltage.

The difference in the speeds of the two motors becomes more pronounced when the camera needs to be driven at very low speed just before stopping. That is the reason why an additional ramp function has been introduced into the control of the pan motion, to further reduce the panning speed when the object is very close to the center of the image plane. The main purpose of the two cycloidal functions in (1) and (2) is to provide smooth transitions between full-speed rotation of the motors when the object is far from the center of the image plane, low-speed rotation when the object is close to the center, and zero speed when the object has actually reached the center.


Fig. 7 Speed control functions for pan motor

Fig. 8 Speed control functions for tilt motor

Fig. 9 Pan motor response

The advantage of the cycloidal speed control function is that it has zero acceleration (the first derivative of the velocity function) at its boundaries. Any discontinuity between the motion functions at the boundary conditions may destabilize the system and cause unnecessary fluctuation of the object about the center of the image plane. Fig. 7 shows the graph of the three speed control functions for the pan-motion motor according to (1). The graph plots the PWM signal value against the object positioning error (in pixels) with respect to the center of the image plane. The ramp function is effective when the object is closer than 2 pixels to the center of the image plane and acts over the range of input voltages from 5% to 13% of the nominal voltage of the motor; any voltage below 5% of the nominal value is unable to turn the motor shaft coupled to the pan mechanism. The cycloidal function is effective when the object is between 2 and 50 pixels from the center of the image plane and acts over the range of voltages from 13% to 100% of the nominal value. Finally, constant full motor speed is applied when the object is more than 50 pixels away from the center of the image plane. These boundary values for the three functions have been selected based on the experimental results obtained in the lab and set to achieve two objectives. The first objective is a fast response of the system if the object is moving at a speed higher than that of the camera. The second objective is a reduced speed of response if the camera is moving at a speed higher than that of the object, i.e. the object is getting closer to the center of the image plane. The coefficients of the ramp and cycloidal functions in (1) are selected to satisfy all boundary conditions accordingly. Figure 8 shows the graph of the two functions (2) used to control the speed of the tilt-motion motor, with the boundary condition values optimized based on experiments with the equipment. The figure shows that the optimal boundary between full-speed motion of the motor and motion with reduced speed is an error of 15 pixels from the center of the image plane. The figure also shows that any voltage below 25% of the nominal value is unable to turn the motor shaft coupled to the tilt mechanism. The coefficients of the cycloidal function have been selected to match the boundary conditions for the speed of the motor and the position errors of the object in the image plane.

2. Performance Criteria of the Tracking System

The time responses of the system to step input errors of dx = -350 pixels (pan motion) and dy = -250 pixels (tilt motion) are shown in Figs. 9 and 10, respectively. The data were acquired by National Instruments vision interface hardware and calculated and recorded by NI LabVIEW software. In this experiment the camera was set 1.5 m in front of the object, and the field of view was 1.25 × 1.0 m².



Table 1 Performance criteria of the system

Performance           Pan motion   Tilt motion
Rise time, Tr         4.4 s        7 s
Peak time, Tp         5 s          8 s
Maximum overshoot     45 pixels    20 pixels
Settling time, Ts     8 s          10 s
Fig. 10 Tilt motor response

Fig. 11 Block diagram of the system (visual part: CCIR camera, image acquisition with the National Instruments PCI-1409 frame grabber, image processing and blob analysis, estimated object position in the image plane; motion part: center position error, trajectory planning in the LabVIEW environment, BNC connector block, control circuit board, DC motors)

All major performance criteria of the system are presented in Table 1. The results obtained from the test runs of the system, as well as visual observation of its performance, show that the response of the system is satisfactory and the quality of its performance is acceptable. In this experiment the camera driving mechanism was capable of tracking an object moving at a speed of 0.1 m/s.

IV. COMPONENT SETUP OF THE VISUAL TRACKING SYSTEM

The visual tracking system developed in this work is specifically designed to track a black, spherical object. The entire system consists of three major subsystems. The first is the vision subsystem, which contains a monochrome analog camera with a PCI vision acquisition card. The second is the mechanical subsystem, a pan-tilt mechanism driven by two dc motors and their speed control circuits. The last is the LabVIEW computer software that implements all the control strategies for camera motion control. The analog camera is the main component of the tracking system. It acquires the image of the scene and passes it to the computer for processing. The computer extracts the object of interest from the entire image of the scene and measures the offset, in pixels, from the object's center to the center of the image plane.

By using the quadrant approach, the computer drives the two camera motors to align the camera with the object so that the centers of the camera plane and the object coincide. This sequence is executed repeatedly; thus the camera acts as a feedback sensor that corrects its own position in 3D space through the pan-tilt mechanism. Fig. 11 shows the functional interactions between the various components of the system. A Graphical User Interface has been developed to control and monitor the performance of the system. In this system a monochrome analog CS8320BC camera with the CCIR standard and a resolution of 768 × 576 is used. To cover a large field of view in 3D space, a lens with a short focal length of 8 mm has been used. The sensible working distance is selected to be between 0.3 m and 6 m. Precise focusing on the object of interest is not a crucial issue because the system manipulates the calculated centroid of the object image, which needs only an identifiable circular contour with sufficient contrast to the background. The light source was a fluorescent lamp, as it provides relatively even illumination over a large area of the room. The data acquisition device is the NI PCI-1409 image acquisition board, a high-accuracy monochrome device equipped with a 10-bit analog-to-digital converter and its own onboard memory. To control the two DC motors of the pan-tilt mechanism, the L293D motor driver chip is used. It is a push-pull four-channel driver with diodes, and this single chip can control two DC motors with nominal voltages of up to 36 V. The chip requires one PWM input for motor speed control and two TTL logic input lines for motor direction control (DIR). The TTL logic signals for PWM and DIR are produced by the developed software and forwarded to the chip through the NI PCI-6014 DAQ card and the NI BNC-2110 connector block. The PCI-6014 has eight digital I/O lines that are compatible with 5 V TTL levels. The computer program generates only one logic signal for the motor direction control.


Fig. 12 Driving circuit of the motors (L293D driver, 74LS04 inverter, LM7805 and LM7812 voltage regulators)

The second, opposite-polarity logic signal required by the driver chip is generated within the motor control circuit by a 74LS04 TTL inverter chip. The motor speed control circuit is shown in Fig. 12. The motor driver chip requires two input voltages: VCC1 = 5 V for the chip itself and VCC2 = 12 V for the motors. Both voltages are supplied through two voltage regulators, an LM7805 for 5 V and an LM7812 for 12 V.

V. CONCLUSIONS

This project introduces a new and effective approach to an object tracking system that uses a camera as a feedback sensor. The system has been designed to identify a black spherical object in 3D space, track its location within the field of view, and turn the camera with two motors right towards the object. The system, in general, simulates human eye behavior, staring at objects of interest before deciding to manipulate them. The developed system uniquely integrates vision and image processing techniques for object recognition and perception with the camera actuation system through the designed computer control programs. The advantage of this system is that it does not require camera calibration and manipulates all measurable parameters only in pixel values. The experimental results show that the system is stable and performs well. The system is intended to be an intelligent eye sensor for an object-tracking mobile robot in 3D space.

REFERENCES

Birkbeck, N., and Jagersand, M., 2004, Visual Tracking Using Active Appearance Models, Proceedings of the 1st Canadian Conference on Computer and Robot Vision, pp. 2-9.

Mikhalsky, M., and Sitte, J., 2004, Real-Time Motion Tracker for a Robotic Vision System, Proceedings of the 1st Canadian Conference on Computer and Robot Vision, pp. 30-34.

Leonard, S., and Jagersand, M., 2004, Approximating the Visuomotor Function for Visual Servoing, Proceedings of the 1st Canadian Conference on Computer and Robot Vision, pp. 112-119.

Sim, T. P., Hong, G. S., and Lim, K. B., 2002, A Pragmatic 3D Visual Servoing System, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '02), Vol. 4, pp. 4185-4190.

Denzler, J., and Paulus, D. W. R., 1994, Active Motion Detection and Object Tracking, Proceedings of the IEEE International Conference on Image Processing (ICIP-94), Vol. 3, pp. 635-639.

Carter, J. N., Lappas, P., and Damper, R. I., 2003, Evidence-Based Object Tracking via Global Energy Maximization, Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), Vol. 3, pp. 501-504.

Crowley, J. L., Mesrabi, M., and Chaumette, F., 1995, Comparison of Kinematic and Visual Servoing for Fixation, Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '95), Vol. 1, pp. 335-341.

Shen, X. J., and Pan, J. M., 2003, A Simple Adaptive Control for Visual Servoing, Proceedings of the 2003 International Conference on Machine Learning and Cybernetics, Vol. 2, pp. 976-979.

Kragic, D., and Christensen, H. I., 2001, Cue Integration for Visual Servoing, IEEE Transactions on Robotics and Automation, Vol. 17, No. 1, pp. 18-27.

Manuscript Received: Jun. 17, 2005; Accepted: Jul. 05, 2005.
