
Soccer Ball Speed Estimation using Optical Flow

for Humanoid Soccer Player


Eric Hernández Castillo, Zizilia Zamudio Beltrán, Juan Manuel Ibarra Zannatha
Departamento de Control Automático
Centro de Investigación y de Estudios Avanzados del IPN
Av. IPN No. 2508, San Pedro Zacatenco, 07360, México D. F.
{ehernandezc, zzamudio, jibarra } @ctrl.cinvestav.mx

Abstract—In this paper we present an implementation of an efficient algorithm to calculate the optical flow in real time, with the aim of obtaining the speed of the orange ball used to play soccer in the KidSize category of the RoboCup league. This gives the robot the ability to estimate the direction of movement of the ball and its speed. As a first version, the vision system consists of a Logitech QuickCam Pro 9000 webcam, with 2 megapixels and autofocus, which provides the image or images to be processed in "real time". The algorithm implemented is based on the Lucas-Kanade method, as this is easily applied to a subset of image points; in addition, it is computationally faster than other techniques and it can be adapted to a stereoscopic vision system. Finally, we present experimental results that support the good performance of the algorithm.

Index Terms—Optical Flow; Computer Vision; Mobile Robots; Humanoid Soccer.

I. INTRODUCTION

The performance of tasks by robotic systems in structured environments, with the presence of objects whose position and orientation are well known, is a problem currently being studied. However, performing tasks in dynamic environments presents numerous difficulties that are not yet resolved. Vision systems are able to provide extremely useful information in these changing environments; they provide information about the objects in the work scene. Moreover, the use of vision in the field of robotics is justified by the wide availability of cheap cameras. Currently, the results provided by vision as a sensor are low in noise, high in information content and low in cost. However, the extraction of relevant information in real time from a single image or a set of images is a hard task, which remains a major obstacle to the implementation of vision in mobile robots [1].

Current trends in robotics research anticipate the application of robots in public environments that help human beings in their daily tasks and their entertainment. Within the scope of entertainment we have the RoboCup league, which fosters collaboration and research to make improvements in the fields of artificial intelligence, mobile robots, artificial vision, etc. The estimation of an object's speed by artificial vision is a difficult task, since the vision system has to be very flexible to identify the required object, in order not to misclassify similar objects inside and outside the field.

Optical flow is a useful piece of information that can be extracted from the images captured by a mobile robot to determine the relative motion between an object in a scene and a camera from a 2D image sequence [2], [3], [4], [5]. As the robot moves, distant objects towards the center of the image tend to flow more slowly than nearby objects. The optical flow calculations allow recovery of the distance to objects in the image and of their speed. In this way, the robot can detect and avoid obstacles while moving around them. One of the first steps is to select a suitable algorithm to be programmed, with respect to the reliability of the optical flow data generated, the number of stored image frames and the amount of calculation. One of the most popular algorithms for obtaining the optical flow is Lucas-Kanade. This is a widely used differential method which assumes that the flow is constant in a local neighborhood of the pixel under consideration, and solves the basic equations of optical flow for the pixels within that neighborhood by the least-squares criterion.

Since moving objects in three-dimensional space are projected onto an image plane, their real speed becomes a two-dimensional field, called the motion field. The optical flow is a velocity field in the image which can transform one image into the next in a sequence; as such, it is not determined uniquely, and additional restrictions are necessary to obtain a "particular" optical flow. The motion field, despite the loss of information in the passage from three to two dimensions, is purely geometric, without ambiguity. Therefore, the aim is to obtain an optical flow field similar enough to the motion field to allow taking measurements from the images. Currently, optical flow can provide an estimate of the range of motion accurate enough for some real applications.

II. OPTICAL FLOW

Optical flow (or optic flow) is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer (an eye or a camera) and the scene. It plays an important role in movement estimation and description, and is commonly used in detection, segmentation and tracking of moving objects in a scene from a set of images. Formally, the optical flow field is the apparent motion of brightness patterns between two (or several) frames in an image sequence. It is assumed that illumination does not change and that changes are due to relative motion between the scene and the camera, for which there are three possibilities:

camera still, moving scene.
moving camera, still scene.
moving camera, moving scene.

The motion field assigns a velocity vector to each pixel in the image. These velocities are induced by the relative motion between the camera and the 3D scene; the motion field is the projection of the 3D velocities onto the image plane. In Figure 1 we can see:

a) Translation perpendicular to a surface.
b) Rotation about an axis perpendicular to the image plane.
c) Translation parallel to a surface at a constant distance.
d) Translation parallel to an obstacle in front of a more distant background.

Figure 1: Examples of Motion Fields

Recall that optical flow is the apparent motion of brightness patterns; equating the optical flow field with the motion field frequently works, but not always, for example:

a) A smooth sphere is rotating under constant illumination. Thus the optical flow field is zero, but the motion field is not. (See Figure 2 (a))
b) A fixed sphere is illuminated by a moving source, so the shading of the image changes. Thus the motion field is zero, but the optical flow field is not. (See Figure 2 (b))

In this paper we selected an algorithm for optical flow based on the Lucas-Kanade method, which allows us to estimate the soccer ball speed. This algorithm was chosen mainly because it can be implemented easily and its computational requirements are low.

The following describes the background of the optical flow technique which was implemented. It is noteworthy that a video is considered as a sequence of gray-level images.

II-A. Mathematical Foundations

Optical flow is an approximation of the local image motion based on local derivatives in a given sequence of images. It is assumed that all temporal intensity changes are due to motion only.

Assume I(x, y, t) is the center pixel of an n × n neighbourhood and moves by δx, δy in time δt to I(x + δx, y + δy, t + δt). Since I(x, y, t) and I(x + δx, y + δy, t + δt) are the images of the same point, we have:

I(x, y, t) ≈ I(x + δx, y + δy, t + δt)   (1)

where (δx, δy) corresponds to the displacement of the region at (x, y, t) after some time δt. We can perform a first-order Taylor series expansion about I(x, y, t) in equation (1) to obtain:

I(x + δx, y + δy, t + δt) = I(x, y, t) + ∇I · (δx, δy) + δt It + Rn   (2)

where ∇I = (Ix, Iy) and It are the first-order partial derivatives and Rn are the second-order and higher terms, which we assume small and can safely be ignored. Using equations (1) and (2) we obtain:

(Ix, Iy) · (u, v) + It = 0   (3)

where ∇I = (Ix, Iy) is the spatial intensity gradient and ~v = (u, v) is the image velocity or optical flow at pixel (x, y) at time t, with u = dx/dt and v = dy/dt. The relation ∇I · ~v = −It is called the 2D Motion Constraint Equation. From this equation alone it is not possible to determine ~v = (u, v)^T, since the problem is ill conditioned. This is a consequence of the aperture problem: there is usually insufficient local image intensity structure to measure the full image velocity, but sufficient structure to measure the component normal to the local intensity structure. The problem of computing the full image velocity then becomes one of finding an additional constraint that yields a second, different equation in the same unknowns [6].
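The motion constraint equation can be checked numerically on a synthetic translating pattern. The sketch below is illustrative and not from the paper: the intensity pattern f, the probe point, and the finite-difference step are arbitrary choices; a pattern that only translates with velocity (u, v) satisfies Ix·u + Iy·v + It = 0.

```python
# Numerical check of the 2D motion constraint  Ix*u + Iy*v + It = 0  for a
# pattern translating with velocity (u, v) = (1.0, 2.0) pixels per frame:
# I(x, y, t) = f(x - u*t, y - v*t).  All concrete values are illustrative;
# derivatives are taken with central differences.

u, v = 1.0, 2.0

def f(x, y):
    # Smooth intensity pattern (any differentiable function works).
    return 0.5 * x * x + 0.3 * x * y + 0.2 * y * y

def I(x, y, t):
    # Brightness constancy: the pattern only translates over time.
    return f(x - u * t, y - v * t)

x0, y0, t0, h = 5.0, 7.0, 0.0, 1e-3
Ix = (I(x0 + h, y0, t0) - I(x0 - h, y0, t0)) / (2 * h)
Iy = (I(x0, y0 + h, t0) - I(x0, y0 - h, t0)) / (2 * h)
It = (I(x0, y0, t0 + h) - I(x0, y0, t0 - h)) / (2 * h)

residual = Ix * u + Iy * v + It
print(residual)  # ≈ 0: the constraint holds
```

Note that the single scalar constraint cannot separate u from v; that is exactly the aperture problem discussed above, and what the Lucas-Kanade neighborhood assumption resolves.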
Figure 2: Smooth sphere rotating under constant illumination

II-B. Lucas-Kanade Method

One of the most popular algorithms currently known for computing optical flow is the one developed by Lucas and Kanade [7]. This detector is a gradient-based displacement detector and is the basis of many image-pairing algorithms. It does not give an exact solution but a relatively good approximation. Its aim is to find the best points, those at which the correlation matrix of the second derivatives has the highest eigenvalues, "like the corners"; such points have the characteristic of being found more easily. Lucas and Kanade assumed a locally constant flow in a small spatial neighborhood Ω and proposed a solution for equation (3) using weighted least squares.
The basic idea of the LK algorithm rests on four assumptions:

A constant speed model in a small neighborhood Ω.
Brightness constancy. A pixel from the image of an object in the scene does not change in appearance as it (possibly) moves from frame to frame. For grayscale images (LK can also be done in color), this means we assume that the brightness of a pixel does not change as it is tracked from frame to frame (see Figure 3 (a)).
Small movements. The image motion of a surface patch changes slowly in time. In practice, this means the temporal increments are fast enough relative to the scale of motion in the image that the object does not move much from frame to frame (see Figure 3 (b)).
Spatial coherence. Neighboring points in a scene belong to the same surface, have similar motion, and project to nearby points on the image plane (see Figure 3 (c)).

Figure 3: Assumptions behind Lucas-Kanade optical flow

The solution is obtained by minimizing the following term over the selected neighborhood Ω:

Σ_{x∈Ω} W²(x) [∇I(x) · ~v + It(x)]²   (4)

where W(x) is a weighting function. Using the notation

A = [∇I(x₁), ..., ∇I(x_N)]^T
W = diag[W(x₁), ..., W(x_N)]
b = −[It(x₁), ..., It(x_N)]^T

equation (4) can be rewritten in compact form as

A^T W² A ~v = A^T W² b

If A^T W² A is not singular, a solution can be obtained by using the Moore-Penrose pseudoinverse, which is a solution in the least-squares sense. The matrix A^T W² A is in fact singular when the gradient is constant in one or more directions over the neighborhood Ω; this situation can be interpreted as another instance of the aperture problem. The matrix A^T W² A is 2 × 2:

A^T W² A = [ Σ W²(x, y) Ix²(x, y)         Σ W²(x, y) Ix(x, y) Iy(x, y) ]
           [ Σ W²(x, y) Iy(x, y) Ix(x, y)   Σ W²(x, y) Iy²(x, y)       ]

where all sums are taken over the pixels (x, y) in the spatial neighborhood Ω.

III. OPENCV IMPLEMENTATION

As mentioned, the optical flow computation is not so simple to implement in practice; that is why we chose the algorithm proposed by Lucas and Kanade, since in general it is one of the simplest. The disadvantage of this method is that it uses small windows to delimit the area around a pixel, and when there are large movements, points can fall outside the window, so the algorithm cannot locate them in the scene. To resolve this and obtain a more efficient algorithm, we opted for the pyramidal Lucas-Kanade algorithm, which estimates the optical flow using Gaussian pyramids in an iterative manner, trying to minimize the optical flow error; the flow is estimated only on a set of points in the scene [8]. This selection criterion tends to choose regions of the image which show corners and isolated points. The algorithm input is a video that meets the contrast and overlap conditions for moving objects, in our case an orange soccer ball. Using this information we propose the algorithm shown in Figure 4, whose description is given later.

Figure 4: Block diagram of the algorithm (Video → Image processing → Characteristic points → Pyramidal Lucas-Kanade → Optical flow lines → Velocities obtained (Vx, Vy))

We used the OpenCV library to capture video [9], [10], with the programming environment Visual Studio C++ 2008 (see Figure 5).
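As a minimal numerical sketch of the weighted least-squares step of Section II-B (not the authors' code): the sketch below builds the 2 × 2 normal equations A^T W² A ~v = A^T W² b from synthetic per-pixel gradients for a hypothetical 3 × 3 neighborhood and solves them by Cramer's rule. The gradient values, uniform weights, and the underlying flow (0.5, −0.25) are all illustrative.

```python
# Solve  A^T W^2 A v = A^T W^2 b  over a small neighborhood Ω, using
# synthetic spatial gradients (Ix, Iy); the temporal derivatives It are
# generated from a known flow v_true via  It = -(Ix*u + Iy*v), so the
# recovered flow should match v_true.  Pure Python, uniform weights.

v_true = (0.5, -0.25)

# Per-pixel gradients for a hypothetical 3x3 neighborhood, with varied
# directions so A^T W^2 A is well conditioned (no aperture problem).
grads = [(1.0, 0.2), (0.3, 1.1), (-0.8, 0.5), (0.6, -0.9),
         (1.2, 0.4), (-0.5, -1.0), (0.9, 0.8), (0.1, -0.7), (0.7, 0.6)]
weights = [1.0] * len(grads)

# Motion constraint: Ix*u + Iy*v + It = 0  =>  It = -(Ix*u + Iy*v).
its = [-(ix * v_true[0] + iy * v_true[1]) for ix, iy in grads]

# Entries of A^T W^2 A and of A^T W^2 b (recall b = -[It(x1),...,It(xN)]^T).
sxx = sum(w * w * ix * ix for (ix, iy), w in zip(grads, weights))
sxy = sum(w * w * ix * iy for (ix, iy), w in zip(grads, weights))
syy = sum(w * w * iy * iy for (ix, iy), w in zip(grads, weights))
bx = sum(-w * w * ix * it for (ix, _iy), w, it in zip(grads, weights, its))
by = sum(-w * w * iy * it for (_ix, iy), w, it in zip(grads, weights, its))

# Solve the 2x2 system by Cramer's rule; det == 0 is the aperture problem.
det = sxx * syy - sxy * sxy
u = (bx * syy - sxy * by) / det
v = (sxx * by - sxy * bx) / det
print(u, v)  # recovers approximately (0.5, -0.25)
```

When the gradients all point in one direction, det collapses to zero and the system has no unique solution, which is the singular case discussed above.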
Figure 5: Outline of the system software (Computer → Visual Studio → OpenCV → Image processing → Lucas-Kanade algorithm → final velocity computation (Vx, Vy))

Figure 6: Ball velocity

In the algorithm, the first step is to capture video; we take two frames from that video and create an image for each frame. Each of the initial frames is converted to a single-channel (grayscale) image using the OpenCV functions allocateOnDemand and cvConvertImage. Then we use the function cvGoodFeaturesToTrack, which implements the corner-extraction algorithm of Shi and Tomasi; it computes the second derivatives (using the Sobel operator) and from them obtains the required eigenvalues. This function returns a list of points that are optimal to track. With all the features obtained, it is possible to apply the function cvCalcOpticalFlowPyrLK, which implements the Lucas-Kanade algorithm in a pyramidal way, as it expands the local window between images to locate the point to track. When the necessary measurements are obtained, we plot the optical flow direction and its magnitude, and prepare those measurements for display on the computer through the function cvLine. Finally, we get an approximation of the speed and direction in x and y by mathematical calculation of the obtained parameters.

Figure 7: Ball direction

IV. RESULTS

In this section we present the implementation results of the pyramidal Lucas-Kanade algorithm used to detect the speed and movement of an orange soccer ball used in RoboCup. It is important to mention that for these tests the camera was positioned on top of the scene, looking down. Another important point is how to know the sense of the movement; for this, the algorithm uses the flags "1" for the positive sense and "0" for the negative sense.

Figure 6 shows the ball speed in the x-axis and y-axis; as we can see, all those values are zero because the ball is not moving. In Figure 7 we can see the image corresponding to Figure 6, which presents the ball with zero velocity. In Figures 8 and 9 the ball has positive movement in the x-axis and negative in the y-axis; we can appreciate the lines which indicate the sense of the optical flow. Figures 10 and 11 show the displacement of the ball with negative movement in the x-axis and negative in the y-axis.

Figure 8: Ball velocity

Next, we present results for different ball movements:

Negative movement in the x-axis and positive in the y-axis (see Figure 12).
Negative movement in the x-axis and negative in the y-axis (see Figure 13).
Negative movement in the x-axis and negative in the y-axis with a bigger angle (see Figure 14).
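The sense flags and speed described above can be sketched as a small conversion step from the per-frame flow displacement. The frame rate and pixels-to-meters scale below are hypothetical values, not reported in the paper.

```python
# Convert a per-frame optical-flow displacement (in pixels) into a speed
# plus the per-axis sense flags used in the paper ("1" = positive sense,
# "0" = negative).  fps and meters_per_pixel are hypothetical values.
import math

def ball_velocity(dx_px, dy_px, fps=30.0, meters_per_pixel=0.004):
    vx = dx_px * fps * meters_per_pixel   # m/s along x
    vy = dy_px * fps * meters_per_pixel   # m/s along y
    speed = math.hypot(vx, vy)            # magnitude of the velocity
    sense_x = 1 if vx >= 0 else 0
    sense_y = 1 if vy >= 0 else 0
    return speed, (vx, vy), (sense_x, sense_y)

# Example: ball moved +3 px in x and -2 px in y between frames.
speed, (vx, vy), senses = ball_velocity(3.0, -2.0)
print(round(speed, 4), senses)
```

With these assumed constants, a displacement of (+3, −2) pixels per frame yields senses (1, 0), matching the flag convention of the paper.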
Figure 9: Ball direction

Figure 10: Ball velocity

Figure 12: Results, x negative & y positive.
Figure 11: Ball direction

V. CONCLUSION

We presented a vision system which processes an image in real time and computes the velocity and direction of an orange ball by an optical flow algorithm; this system will be implemented in a humanoid soccer robot. A webcam captured the video which was processed, and we used the pyramidal Lucas-Kanade algorithm with the help of OpenCV, which has different functions implemented for optical flow. This algorithm allows a fast velocity and direction calculation, so the algorithm is efficient, and we obtained a good number of characteristic points with which to calculate the optical flow using the function "goodfeatures".

Future work is to implement this algorithm on an embedded system and to use an industrial camera. Moreover, we will develop this algorithm on real images which represent a soccer field for humanoid robots.

ACKNOWLEDGMENT

E. Hernández Castillo and Z. Zamudio Beltrán thank CONACyT-México for doctoral fellowships (203647, 203585).

REFERENCES

[1] C. M. Soria and R. Carelli, "Control de un robot móvil utilizando el flujo óptico obtenido a través de un sistema omnidireccional catadióptrico," in Memorias Jornadas de Investigación, 2009.
[2] B. K. P. Horn and B. G. Schunck, "Determining optical flow," Artificial Intelligence, vol. 17, pp. 185–203, 1981.
[3] J. L. Barron, D. J. Fleet, and S. S. Beauchemin, "Performance of optical flow techniques," International Journal of Computer Vision, vol. 12, pp. 43–77, 1994.
[4] D. J. Fleet and A. D. Jepson, "Computation of component image velocity from local phase information," International Journal of Computer Vision, vol. 5, pp. 77–104, 1990.
[5] S. H. Lai and B. C. Vemuri, "Robust and efficient computation of optical flow," 1995.
[6] N. Goddard, "The interpretation of visual motion: recognizing moving light displays," in Workshop on Visual Motion, 1989, pp. 212–220.
[7] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Workshop on Imaging Understanding, 1981, pp. 674–679.
[8] J. Shi and C. Tomasi, "Good features to track," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1994, pp. 593–600.
[9] G. Bradski and A. Kaehler, Learning OpenCV: Computer Vision with the OpenCV Library, M. Loukides, Ed. O'Reilly Media, 2008.
[10] vpisarev. [Online]. Available: http://opencv.willowgarage.com/wiki/
Figure 13: Results, x & y both negative.

Figure 14: Results, x & y both negative, with more slope.
