Fig.1: 2-D image of the pixel location at times t and t + ∆t. Fig.2: Various motion fields.
Fig.1 shows the 2-D displacement of the pixel located at point p from the frame at time t to the frame at time t + ∆t. The motion field can almost always be unambiguously related to the translational and rotational velocities of rigid surfaces [1]. The equations for translation and rotation of a point P with translational velocity T and rotation R are:

Translation: P′ = P + T

Rotation: P′ = R·P, where for a small rotation R ≈ I + Ω. Here I is the identity matrix and Ω is the skew-symmetric matrix of the rotational velocity.

Rotation + Translation: P′ = R·P + T, or, for small rotations, P′ ≈ (I + Ω)·P + T.

2. OPTICAL FLOW
Optical flow or optic flow is the pattern of apparent motion of objects, edges and surfaces in a visual scene caused by the relative motion between an observer (an eye or a camera) and the scene [3]. Fig.2 illustrates the observer together with the 3-D and 2-D representations of the optical flow.
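The rigid-motion model of translation and rotation sketched above can be illustrated numerically. The snippet below is a minimal sketch, assuming the standard rigid-motion form P′ = R·P + T; the point coordinates, rotation angle and translation are made-up example values. It shows that the small-angle form (I + Ω)·P + T closely approximates the exact rotation for small angles.

```python
import numpy as np

# Made-up example values: a small rotation about the z-axis plus a translation.
theta = 0.01                      # rotation angle in radians
T = np.array([0.5, -0.2, 0.1])    # translational velocity
P = np.array([1.0, 2.0, 5.0])     # 3-D point

# Exact rotation matrix R and the skew-symmetric matrix Omega of the rotation.
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
Omega = np.array([[0.0,   -theta, 0.0],
                  [theta,  0.0,   0.0],
                  [0.0,    0.0,   0.0]])

P_exact = R @ P + T                       # rotation + translation
P_approx = (np.eye(3) + Omega) @ P + T    # small-angle approximation R ~ I + Omega

# For a small angle the two displaced points agree to order theta^2.
print(np.max(np.abs(P_exact - P_approx)))
```

For theta = 0.01 the approximation error is on the order of 1e-4, which is why the linearized form is commonly used when relating the motion field to rigid-surface velocities.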
International Journal of Computer Applications (0975 – 8887)
Volume 61– No.10, January 2013
Fig.3 [3] shows the direction and magnitude of the optic flow at each location, encoded by the direction and length of each arrow. Fig.4 [4] shows the relation between the motion field and the optical flow field:

• Motion field = real-world 3D motion.
• Optical flow field = projection of the motion field onto the 2D image.

Fig.4: The relation between motion field & optical flow field.

J.L. Barron and N.A. Thacker [5] suggested that the 2D motion field can be defined as the 2D velocities of all visible points, and the optical flow field as an estimate of the 2D motion field. Optical flow is an approximation of the local image motion based upon local derivatives in a given sequence of images. Optical flow techniques such as motion detection, object segmentation, time-to-collision and expansion calculations, motion encoding, and stereo disparity measurement utilize this motion of the objects' surfaces and edges.

The 2D image sequences used here are formed under perspective projection via the motion of a camera and scene objects. The 3D volume sequences used here were formed under orthographic projection for a moving object. The optical flow is tightly linked to the recovery of the 2D projected motion derived from acquisition and projection on the image plane of the 3D motion of objects in the real world [5]. All the general considerations made on 3D motion projection are still valid when dealing with optical flow.

The optical flow constraint on the images according to [6] is derived as follows (Fig.5 shows the 2D motion of a pixel):

0 = I(x + u, y + v, t + 1) − I(x, y, t)
  ≈ I(x, y, t + 1) + Ix·u + Iy·v − I(x, y, t)
  ≈ It + Ix·u + Iy·v
  ≈ It + ∇I ∙ (u, v)

In the limit as u and v go to zero, this expression becomes exact.

Fig.5: 2D motion of a pixel.

The 2D and 3D motion estimation of optical flow according to J.L. Barron and N.A. Thacker [5] is as follows.

2D motion equation: Assume I(x, y, t) is the center pixel in a neighbourhood and moves by δx, δy in time δt to I(x + δx, y + δy, t + δt). Since I(x, y, t) and I(x + δx, y + δy, t + δt) are the images of the same point (and are therefore the same) we have [5]:

I(x, y, t) = I(x + δx, y + δy, t + δt).    (1)

A 1st order Taylor series expansion of the right-hand side gives:

I(x + δx, y + δy, t + δt) = I(x, y, t) + (∂I/∂x)δx + (∂I/∂y)δy + (∂I/∂t)δt + H.O.T.    (2)

where H.O.T. are the higher order terms, which we assume are small and can safely be ignored. Using the above two equations we obtain:

(∂I/∂x)δx + (∂I/∂y)δy + (∂I/∂t)δt = 0,

or, dividing by δt,

(∂I/∂x)(δx/δt) + (∂I/∂y)(δy/δt) + (∂I/∂t) = 0,

and finally we get

Ix·u + Iy·v + It = 0.

Here u = δx/δt and v = δy/δt are the x and y components of the image velocity or optical flow, and Ix, Iy and It are the image intensity derivatives at (x, y, t). We normally write these partial derivatives as:

Ix = ∂I/∂x,  Iy = ∂I/∂y  and  It = ∂I/∂t.

This equation can be rewritten more compactly as:

∇I ∙ v⃗ = −It,

where ∇I = (Ix, Iy) is the spatial intensity gradient and v⃗ = (u, v) is the image velocity or optical flow at pixel (x, y) at time t. The equation ∇I ∙ v⃗ = −It is called the 2D motion constraint equation.

3D motion equation: The 3D motion equation is a simple extension of the 2D motion equation. Consider a 3D block I(x, y, z, t) at (x, y, z) at time t moving by (δx, δy, δz) to (x + δx, y + δy, z + δz) over time δt. Since I(x, y, z, t) = I(x + δx, y + δy, z + δz, t + δt), we can perform a 1st order Taylor series expansion and obtain [5]:

Ix·u + Iy·v + Iz·w = −It,

where v⃗ = (u, v, w) = (δx/δt, δy/δt, δz/δt) is the 3D volume velocity or 3D optical flow and Ix, Iy, Iz and It are the 3D spatio-temporal derivatives. This equation can be written more compactly as:

∇I ∙ v⃗ = −It,

where ∇I = (Ix, Iy, Iz) is the 3D spatial intensity gradient and v⃗ is the 3D velocity.
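The 2D motion constraint Ix·u + Iy·v + It = 0 can be checked numerically. The sketch below uses a made-up synthetic sequence (a smooth pattern translating by one pixel per frame in x, so the true flow is (u, v) = (1, 0)) and evaluates the constraint with central-difference derivatives.

```python
import numpy as np

# Synthetic sequence: a smooth pattern translating 1 pixel/frame in x,
# so the true optical flow is (u, v) = (1, 0).
y, x = np.mgrid[0:64, 0:64].astype(float)
frame = lambda t: np.sin(0.3 * (x - t)) + np.cos(0.2 * y)
I0, I1, I2 = frame(0), frame(1), frame(2)

# Central-difference estimates of the spatio-temporal derivatives at t = 1.
Ix = (np.roll(I1, -1, axis=1) - np.roll(I1, 1, axis=1)) / 2.0
Iy = (np.roll(I1, -1, axis=0) - np.roll(I1, 1, axis=0)) / 2.0
It = (I2 - I0) / 2.0

u, v = 1.0, 0.0                    # true image velocity
residual = Ix * u + Iy * v + It    # 2D motion constraint residual

# Away from the image borders (where np.roll wraps around) the constraint
# holds to machine precision, because central differences commute with
# an integer translation of the pattern.
interior = residual[2:-2, 2:-2]
print(np.max(np.abs(interior)))
```

For non-integer or spatially varying motion the residual would only be approximately zero, which is exactly the H.O.T. error discussed in the Taylor expansion above.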
3. OPTICAL FLOW BASED MOTION ESTIMATION
There are several methods for motion estimation [2]:

• Feature matching methods;
• Pel-recursive methods;
• Deterministic model based methods;
• Stochastic model based methods;
• Optical flow based methods.

Several techniques have been developed to approach the computation of optical flow, such as [2]:

• Gradient based approach;
• Correlation-based approach;
• Spatio-temporal energy based approach;
• Phase-based approach.

A. Gradient based approach
The first method proposed in this approach is that of Horn and Schunck, which initially defines the moving object and the sensor. The second method is that of Lucas and Kanade. Here the iteration is not done in a strict pixel-by-pixel way, but over a small neighbourhood of the pixel [2]. A minimization rule is therefore used, corresponding to a least squares fit of the brightness invariance constraint.

B. Correlation-based approach
This method is very close to block matching. The frame is divided into fixed-size blocks, which are matched against a searching window. The displacement to the block that shows the highest likelihood, or the lowest dissimilarity, is taken as the measure of the translational motion vector of the reference block [2]. The first method for this approach is Anandan's method, in which a full search is performed over a resolution pyramid, starting at the highest level (at the lowest resolution). The passage from a higher level, L+1, to a lower one, L, is done by interpolation and then subtraction from the upper level, L+1, which yields the error image at level L. The image at level L can thus be recovered by adding the error at level L to the estimate interpolated from level L+1 [2].

Singh's method divides the information into two categories: conservation information and neighbourhood information. Conservation information: the estimate propagates into the neighbouring area and the motion vector is then iteratively updated. Neighbourhood information: this is collected from areas surrounding the pixel of interest which have sufficient intensity variations.

When these assumptions can be assured to be fulfilled, optical flow based techniques are generally the better choice. In other cases, when large displacements of small structures are expected, correlation-based approaches usually perform better.

Section 4 explains one optical flow technique, the Lucas-Kanade technique, and its calculation for video.

4. LUCAS KANADE METHOD FOR OPTICAL FLOW MEASUREMENT
The Lucas-Kanade method is a widely used differential method for optical flow estimation in computer vision [9]. It assumes that the flow is essentially constant in a local neighbourhood of the pixel under consideration, and solves the basic optical flow equation for all the pixels in that neighbourhood by the least squares criterion. It is a purely local method, so it cannot provide flow information in the interior of uniform regions of the image [9].

The advantages and disadvantages of the Lucas-Kanade method are as follows. Advantages: the method is simple compared with other methods, the calculation is very fast, and the time derivatives are accurate. Disadvantage: errors occur on the boundaries of a moving object [10].

The Lucas-Kanade method assumes that the displacement of the image contents between two nearby frames is small and approximately constant within a neighbourhood of the point p [9]. Thus the optical flow equation can be assumed to hold for all pixels within a window centered at p; namely, the local image velocity (u, v) must satisfy:

Ix(p1)·u + Iy(p1)·v = −It(p1)
Ix(p2)·u + Iy(p2)·v = −It(p2)
⋮
Ix(pn)·u + Iy(pn)·v = −It(pn)

where p1, p2, …, pn are the pixels inside the window and Ix(pi), Iy(pi), It(pi) are the partial derivatives of the image I with respect to position x, y and time t, evaluated at the point pi and at the current time. These equations can be written in matrix form A·v⃗ = b, where the rows of A are (Ix(pi), Iy(pi)), v⃗ = (u, v), and the entries of b are −It(pi).
The least squares solution is obtained by multiplying both sides by the transpose of the matrix A:

AᵀA·v⃗ = Aᵀb,

so we can write

v⃗ = (AᵀA)⁻¹Aᵀb,

where Aᵀ is the transpose of A. Problems arise when a pixel does not move in the same way as its neighbours:

• the window size may be too large;
• what is the ideal window size?
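The closed-form solution v⃗ = (AᵀA)⁻¹Aᵀb can be sketched in a few lines of NumPy. The example below is a minimal illustration, not the paper's implementation: the sequence is a made-up smooth pattern translating by one pixel per frame in x (true flow (1, 0)), and the function name lucas_kanade_at is chosen here for illustration.

```python
import numpy as np

# Synthetic sequence: smooth pattern moving 1 pixel/frame in x (true flow (1, 0)).
yy, xx = np.mgrid[0:64, 0:64].astype(float)
frame = lambda t: np.sin(0.3 * (xx - t)) + np.cos(0.2 * yy)
I0, I1, I2 = frame(0), frame(1), frame(2)

# Central-difference spatio-temporal derivatives at t = 1.
Ix = (np.roll(I1, -1, axis=1) - np.roll(I1, 1, axis=1)) / 2.0
Iy = (np.roll(I1, -1, axis=0) - np.roll(I1, 1, axis=0)) / 2.0
It = (I2 - I0) / 2.0

def lucas_kanade_at(r, c, half=7):
    """Estimate the flow (u, v) from a (2*half+1)^2 window centered at (r, c)."""
    win = (slice(r - half, r + half + 1), slice(c - half, c + half + 1))
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)  # n x 2 matrix
    b = -It[win].ravel()                                      # right-hand side
    # v = (A^T A)^-1 A^T b, computed via a numerically stable least squares solve.
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v

u, v = lucas_kanade_at(32, 32)
print(u, v)   # close to the true flow (1.0, 0.0)
```

Using np.linalg.lstsq rather than explicitly inverting AᵀA avoids numerical trouble when the window lies in a low-texture region, where AᵀA is nearly singular and, as noted above, a purely local method cannot recover the flow.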
[2] Chiara Ercole, "LPA-ICI approximation: applications to motion estimation and video denoising", University of Roma TRE, 2003-2004.

[3] Huston S.J., Krapp H.G. and Kurtz R., "Visuomotor Transformation in the Fly Gaze Stabilization System", Bielefeld University, Germany, 2008.

[4] Lihi Zelnik-Manor, "Motion Estimation", Cambridge University, 2008.

[5] J.L. Barron and N.A. Thacker, "Computing 2D and 3D Optical Flow", University of Manchester, 2005.

[6] Ce Liu, "Motion Estimation", Microsoft Research New England, 2011.

[7] R. Mester, "Some steps towards a unified motion estimation procedure", in Proceedings of the 45th IEEE Midwest Symposium on Circuits and Systems, 2002.

[8] Viral Borisagar and Hiral Raveshiya, "Motion Estimation Using Optical Flow Concepts", Gujarat University.

[9] Bruce D. Lucas, "Image Matching by the Method of Differences", Carnegie Mellon University, 1984.

[10] Mariya Zhariy, "Introduction to Optical Flow", Uttendorf, 2005.

[11] Justin Graham, "Target Tracking with Lucas-Kanade Optical Flow", University of Texas, 2010.