
Lecture # 32: Motion Analysis (Cont…)

Muhammad Rzi Abbas
Lecturer, Department of Mechatronics and Control Engineering
University of Engineering and Technology, Lahore
muhammadrziabbas@uet.edu.pk
• Simple subtraction of images acquired at different instants in time
makes motion detection possible, assuming a stationary camera
position and constant illumination.
• A difference image d(i, j) is a binary image in which non-zero values represent image areas with motion, that is, areas where there was a substantial difference between the gray levels of consecutive images f1 and f2.
• Noise can be suppressed by thresholding the amount of difference,
but this may prevent the detection of slow motion and small object
motion.
• Results of this approach are highly dependent on object-background contrast.
• On the other hand, we can be sure that all the resulting regions in the
difference images result from motion.
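A minimal sketch of such a thresholded difference image, assuming both frames are already available as grayscale NumPy arrays of equal size; the threshold eps is an illustrative value, not one given in the slides:

```python
import numpy as np

def difference_image(f1, f2, eps=25):
    """Binary difference image: 1 where the gray levels of two
    consecutive frames differ substantially, 0 elsewhere."""
    f1 = f1.astype(np.int32)  # widen from uint8 to avoid wrap-around
    f2 = f2.astype(np.int32)
    return (np.abs(f1 - f2) > eps).astype(np.uint8)
```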
• Trajectories detected using differential image motion analysis may not
reveal the direction of the motion.
• If direction is needed, construction of a cumulative difference image
can solve this problem.
• Cumulative difference images contain information about motion direction and other time-related motion properties, as well as about slow motion and small-object motion.
• The cumulative difference image is constructed from a sequence of n
images, with the first image (f1) being considered a reference image.
• A static image is taken as a reference, and the differences between it and every subsequent image are accumulated to form the cumulative difference image.
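A sketch of this construction, assuming frames is a list of equal-sized grayscale arrays with frames[0] as the reference; weighting each change mask by its frame index k is one illustrative choice that makes more recent motion count more:

```python
import numpy as np

def cumulative_difference(frames, eps=25):
    """Accumulate weighted change masks against the first (reference)
    frame; larger values mark pixels that changed more, and more recently."""
    ref = frames[0].astype(np.int32)
    d_cum = np.zeros_like(ref)
    for k, f in enumerate(frames[1:], start=1):
        changed = np.abs(ref - f.astype(np.int32)) > eps  # binary change mask
        d_cum += k * changed                              # weight a_k = k
    return d_cum
```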
• A problem with this approach may be the impossibility of getting an image of a static reference scene if the motion never ends; in that case, a learning stage must construct the reference image.
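One common learning strategy (an assumption here, not prescribed by the slides) is the pixel-wise median over many frames: each pixel shows background most of the time, so moving objects are discarded as outliers:

```python
import numpy as np

def learn_reference(frames):
    """Estimate a static reference image as the per-pixel median of a
    frame stack; transient moving objects are suppressed as outliers."""
    stack = np.stack(frames).astype(np.float32)
    return np.median(stack, axis=0)
```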
• Subsequent analysis usually determines motion trajectories; often only the center-of-gravity trajectory is needed.
• A practical problem is the prediction of the motion trajectory when the object position in several previous images is known.
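A sketch of center-of-gravity tracking with a simple predictor; the constant-velocity extrapolation is one illustrative choice among many:

```python
import numpy as np

def center_of_gravity(mask):
    """Centroid (row, col) of the non-zero pixels of a binary motion mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def predict_next(trajectory):
    """Constant-velocity extrapolation from the last two known centroids."""
    (r1, c1), (r2, c2) = trajectory[-2], trajectory[-1]
    return 2 * r2 - r1, 2 * c2 - c1
```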
• Optical flow reflects the image changes due to motion during a time interval dt, and the optical flow field is the velocity field that represents the three-dimensional motion of object points across a two-dimensional image.
• It should represent only those motion-related intensity changes in the
image that are required in further processing, and all other image
changes reflected in the optical flow should be considered errors of
flow detection.
• For example, optical flow should not be sensitive to illumination
changes and motion of unimportant objects (e.g., shadows).
• Optical flow computation is based on two assumptions:
1. The observed brightness of any object point is constant over time.
2. Nearby points in the image plane move in a similar manner (the velocity
smoothness constraint).
• For image brightness f(x, y, t), the partial spatial derivatives are f_x = ∂f/∂x and f_y = ∂f/∂y, and the partial temporal derivative is f_t = ∂f/∂t.
• Combined with the brightness constancy assumption, these give the optical flow constraint equation f_x u + f_y v + f_t = 0, where (u, v) is the flow velocity; this is a single equation in two unknowns, so additional constraints are needed.
• Horn and Schunck resolve the ambiguity by adding a global smoothness constraint on the flow field.
• Lucas and Kanade instead assume the flow is constant within a small neighborhood and solve for (u, v) by least squares; a minimal sketch follows.
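A minimal Lucas–Kanade sketch over a single window, assuming two grayscale frames and an interior point; the window half-size is an illustrative parameter:

```python
import numpy as np

def lucas_kanade(f1, f2, row, col, half=7):
    """Estimate flow (u, v) at one interior point by least-squares
    solution of fx*u + fy*v + ft = 0 over a (2*half+1)^2 window."""
    f1 = f1.astype(np.float64)
    f2 = f2.astype(np.float64)
    fy, fx = np.gradient(f1)        # np.gradient: axis 0 is rows (y)
    ft = f2 - f1
    win = (slice(row - half, row + half + 1),
           slice(col - half, col + half + 1))
    A = np.stack([fx[win].ravel(), fy[win].ravel()], axis=1)  # N x 2
    b = -ft[win].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```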
• Motion, as it appears in dynamic images, is usually some combination
of four basic elements:
• Translation at constant distance from the observer.
• Translation in depth relative to the observer.
• Rotation at constant distance about the view axis.
• Rotation of a planar object perpendicular to the view axis.
• Motion form recognition is based on the following facts:
• Translation at constant distance is represented as a set of parallel motion
vectors.
• Translation in depth forms a set of vectors having a common focus of
expansion.
• Rotation at constant distance results in a set of concentric motion vectors.
• Rotation perpendicular to the view axis forms one or more sets of vectors
starting from straight line segments.
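For translation in depth, the focus of expansion can be located where the flow lines intersect; a least-squares sketch, assuming points and flows are given as N x 2 arrays in (x, y) order and the motion is purely translational:

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares intersection of the lines through each point
    along its flow vector; returns the (x, y) focus of expansion."""
    # Each flow line has normal n = (-v, u); the FOE q satisfies n . (q - p) = 0.
    n = np.stack([-flows[:, 1], flows[:, 0]], axis=1)  # N x 2 normals
    b = np.einsum('ij,ij->i', n, points)               # n . p for each line
    q, *_ = np.linalg.lstsq(n, b, rcond=None)
    return q
```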
• Milan Sonka, Vaclav Hlavac, and Roger Boyle, Image Processing, Analysis, and Machine Vision, 3rd Edition, 2008.
• Chapter 16 (Sections 16.1, 16.2, 16.2.1, and 16.2.4).
