
Kapitel 12*

Tracking Fundamentals

- Object representation
- Object detection
- Object tracking


A. Yilmaz, O. Javed, and M. Shah: Object tracking: A survey. ACM Computing Surveys, Vol. 38, No. 4, 1-45, 2006

Kapitel 12* Tracking p. 1

Fundamentals (1)


Fundamentals (2)
Applications of object tracking:
- motion-based recognition: human identification based on gait, automatic object detection, etc.
- automated surveillance: monitoring a scene to detect suspicious activities or unlikely events
- video indexing: automatic annotation and retrieval of videos in multimedia databases
- human-computer interaction: gesture recognition, eye gaze tracking for data input to computers, etc.
- traffic monitoring: real-time gathering of traffic statistics to direct traffic flow
- vehicle navigation: video-based path planning and obstacle avoidance capabilities


Fundamentals (3)
Tracking task:
In its simplest form, tracking can be defined as the problem of estimating the trajectory of an object in the image plane as it moves around a scene. In other words, a tracker assigns consistent labels to the tracked objects in different frames of a video. Additionally, depending on the tracking domain, a tracker can also provide object-centric information, such as the orientation, area, or shape of an object.

Two subtasks:
- Build some model of what you want to track
- Use what you know about where the object was in the previous frame(s) to make predictions about the current frame and restrict the search

Repeat the two subtasks, possibly updating the model.
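The predict-search-update loop above can be sketched in a few lines. The snippet below is a minimal illustration, not any particular tracker: it follows the brightest pixel of a synthetic sequence, using the previous position as the prediction and restricting the search to a small window around it (the function name and all parameters are made up for illustration).

```python
import numpy as np

def track(frames, init_pos, search_radius=5):
    """Minimal tracking loop: predict from the previous position,
    search only a small window around it, then update the estimate."""
    positions = [init_pos]
    for frame in frames:
        r, c = positions[-1]                     # prediction = previous position
        r0, r1 = max(r - search_radius, 0), r + search_radius + 1
        c0, c1 = max(c - search_radius, 0), c + search_radius + 1
        window = frame[r0:r1, c0:c1]             # restricted search area
        dr, dc = np.unravel_index(np.argmax(window), window.shape)
        positions.append((r0 + dr, c0 + dc))     # update with the best match
    return positions[1:]

# Synthetic sequence: a single bright pixel drifting one column per frame.
frames = []
for t in range(5):
    f = np.zeros((20, 20))
    f[10, 5 + t] = 1.0
    frames.append(f)

print(track(frames, init_pos=(10, 5)))
```

The window around the previous position is what keeps the search cheap; a real tracker would replace the argmax with a model-based similarity measure.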


Fundamentals (4)
Tracking objects can be complex due to:

- loss of information caused by projection of the 3D world onto a 2D image
- noise in images
- complex object shapes / motion
- nonrigid or articulated nature of objects
- partial and full object occlusions
- scene illumination changes
- real-time processing requirements

Simplify tracking by imposing constraints:
- Almost all tracking algorithms assume that the object motion is smooth with no abrupt changes
- The object motion is often assumed to be of constant velocity
- Prior knowledge about the number and size of objects, or about the object appearance and shape

Object Representation (1)


Object representation = Shape + Appearance

Shape representations:
- Points. The object is represented by a point (i.e. the centroid) or by a set of points; suitable for tracking objects that occupy small regions in an image
- Primitive geometric shapes. Object shape is represented by a rectangle, ellipse, etc. Object motion for such representations is usually modeled by a translation, affine, or projective transformation. Though primitive geometric shapes are more suitable for representing simple rigid objects, they are also used for tracking nonrigid objects.

Object Representation (2)


- Object silhouette and contour. Contour = boundary of an object; region inside the contour = silhouette. Silhouette and contour representations are suitable for tracking complex nonrigid shapes.
- Articulated shape models. Articulated objects are composed of body parts (modeled by cylinders or ellipses) that are held together by joints. Example: the human body is an articulated object with torso, legs, hands, head, and feet connected by joints. The relationships between the parts are governed by kinematic motion models, e.g. joint angles.
- Skeletal models. The object skeleton can be extracted by applying a medial axis transform to the object silhouette. Skeleton representations can be used to model both articulated and rigid objects.


Object Representation (3)


Object representations. (a) Centroid, (b) multiple points, (c) rectangular patch, (d) elliptical patch, (e) part-based multiple patches, (f) object skeleton, (g) control points on object contour, (h) complete object contour, (i) object silhouette


Object Representation (4)


Appearance representations:
Templates. Formed using simple geometric shapes or silhouettes. Suitable for tracking objects whose poses do not vary considerably during the course of tracking. Self-adaptation of templates during tracking is possible.

http://www.cs.toronto.edu/vis/projects/dudekfaceSequence.html

Object Representation (5)


Probability densities of object appearance can be either parametric (Gaussian or mixture of Gaussians) or nonparametric (histograms). They characterize an image region by its statistics; if these statistics differ from those of the background, they enable tracking.

nonparametric: histogram (grayscale or color)
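As a sketch of the nonparametric case, a grayscale histogram of an object region can be computed and normalized with NumPy (the function name and bin count are illustrative choices, not from the slides):

```python
import numpy as np

def appearance_histogram(region, bins=16):
    """Nonparametric appearance model: normalized grayscale histogram
    of the pixel values inside an object region (values in [0, 255])."""
    hist, _ = np.histogram(region, bins=bins, range=(0, 256))
    return hist / hist.sum()                     # normalize to a probability distribution

# A dark region and a bright region produce clearly different histograms.
dark   = np.full((8, 8), 30)
bright = np.full((8, 8), 220)
h_dark, h_bright = appearance_histogram(dark), appearance_histogram(bright)
print(np.argmax(h_dark), np.argmax(h_bright))   # modes fall in different bins
```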


Object Representation (6)


parametric: 1D Gaussian distribution


Object Representation (7)


parametric: n-D Gaussian distribution

Centered at (1,3) with a standard deviation of 3 in roughly the (0.878, 0.478) direction and of 1 in the orthogonal direction
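The covariance matrix implied by this description can be reconstructed from its principal directions. The short NumPy sketch below builds Σ = U diag(3², 1²) Uᵀ from the slide's numbers and recovers the standard deviations from the eigenvalues:

```python
import numpy as np

# 2-D Gaussian of the slide: mean (1, 3), standard deviation 3 along the
# unit direction u1 ~ (0.878, 0.478) and 1 along the orthogonal direction u2.
mu = np.array([1.0, 3.0])
u1 = np.array([0.878, 0.478]); u1 /= np.linalg.norm(u1)
u2 = np.array([-u1[1], u1[0]])                   # orthogonal unit direction
U = np.column_stack([u1, u2])
cov = U @ np.diag([3.0**2, 1.0**2]) @ U.T        # covariance = U diag(sigma^2) U^T

# Recover the standard deviations as square roots of the eigenvalues.
print(np.sqrt(np.linalg.eigvalsh(cov)))          # ~ [1. 3.]
```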


Object Representation (8)


parametric: Gaussian Mixture Models (GMM)


Object Representation (9)


Example: mixture of three Gaussians in 2D space. (a) Contours of constant density for each mixture component. (b) Contours of constant density of the mixture distribution p(x). (c) Surface plot of p(x).
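A mixture density p(x) = Σₖ wₖ N(x | μₖ, Σₖ) is straightforward to evaluate directly. The sketch below uses three made-up 2D components; the weights, means, and covariances are illustrative, not those of the figure:

```python
import numpy as np

def gmm_pdf(x, weights, means, covs):
    """Density of a Gaussian mixture p(x) = sum_k w_k N(x | mu_k, Sigma_k)."""
    x = np.asarray(x, dtype=float)
    d = x.shape[0]
    total = 0.0
    for w, mu, cov in zip(weights, means, covs):
        diff = x - mu
        norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
        total += w * np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / norm
    return total

# Mixture of three 2-D Gaussians with illustrative parameters.
weights = [0.5, 0.3, 0.2]
means = [np.array([0.0, 0.0]), np.array([3.0, 3.0]), np.array([-3.0, 2.0])]
covs = [np.eye(2), np.eye(2) * 2.0, np.eye(2) * 0.5]
print(gmm_pdf([0.0, 0.0], weights, means, covs))
```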


Object Representation (10)


Object representations are chosen according to the application:
- Point representations are appropriate for tracking objects that appear very small in an image (e.g. distant birds)
- For objects whose shapes can be approximated by rectangles or ellipses, primitive geometric shape representations are more appropriate (e.g. faces)
- For tracking objects with complex shapes, for example humans, a contour- or silhouette-based representation is appropriate (surveillance applications)


Object Representation (11)


Feature selection for tracking:
- In general, the most desirable property of a visual feature is its uniqueness, so that objects can be easily distinguished in feature space
- Color: RGB, Luv, Lab, HSV, etc. There is no last word on which color space is more effective; a variety of color spaces have been used
- Edges: less sensitive to illumination changes than color features. Algorithms that track the object boundary usually use edges as features. Because of its simplicity and accuracy, the most popular edge detection approach is the Canny edge detector.
- Texture: a measure of the intensity variation of a surface, quantifying properties such as smoothness and regularity

Object Detection (1)


Object detection mechanism: required by every tracking method, either at the beginning or when an object first appears in the video.
- Point detectors: find interest points in images which have an expressive texture in their respective localities
- Segmentation: partition the image into perceptually similar regions



Object Detection (2)


Background subtraction: Object detection can be achieved by building a representation of the scene called the background model and then finding deviations from the model for each incoming frame. Any significant change in an image region from the background model signifies a moving object. The pixels constituting the regions undergoing change are marked for further processing. Usually, a connected component algorithm is applied to obtain connected regions corresponding to the objects.
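A minimal sketch of this pipeline: thresholded differencing against the background model, followed by a simple 4-connected component labeling (the function names and the threshold value are illustrative, not from the slides):

```python
import numpy as np
from collections import deque

def detect_objects(frame, background, threshold=25):
    """Background subtraction: mark pixels deviating from the background
    model, then group them into connected regions (4-connectivity BFS)."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > threshold
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for r, c in zip(*np.nonzero(mask)):
        if labels[r, c]:
            continue
        current += 1                              # start a new connected region
        queue = deque([(r, c)])
        labels[r, c] = current
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

# Static background with two separate blobs appearing in the current frame.
bg = np.zeros((10, 10), dtype=np.uint8)
frame = bg.copy()
frame[1:3, 1:3] = 200                            # first object
frame[6:9, 6:9] = 180                            # second object
labels, n = detect_objects(frame, bg)
print(n)                                         # 2 connected regions
```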


Object Detection (3)


Frame differencing of temporally adjacent frames:
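Frame differencing itself is essentially a one-liner: threshold the absolute difference of two adjacent frames (the threshold value here is arbitrary):

```python
import numpy as np

def frame_difference(prev, curr, threshold=20):
    """Change mask from differencing two temporally adjacent frames:
    |I_t - I_{t-1}| > threshold marks pixels of moving objects."""
    return np.abs(curr.astype(int) - prev.astype(int)) > threshold

prev = np.zeros((6, 6), dtype=np.uint8)
curr = prev.copy()
curr[2:4, 2:4] = 255                             # object appears between frames
mask = frame_difference(prev, curr)
print(mask.sum())                                # 4 changed pixels
```

Note that for a moving object this mask responds both where the object was and where it now is, which is the double-image effect of variant 1.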


Object Detection (4)


Image sequence: 5 frames/s


Object Detection (5)


Image subtraction: variant 1

Weakness: double image of a vehicle (one response from the previous and one from the current frame); a region of constant intensity is split up

Object Detection (6)


Image subtraction: variant 2

Reference image fr(r, c): average over a long sequence of frames


Object Detection (7)


Object Detection (8)


Statistical modeling of the background:
Learn gradual changes over time with a per-pixel Gaussian, I(x, y) ~ N(μ(x, y), Σ(x, y)), estimated from the color observations in several consecutive frames. Once the background model is derived, for every pixel (x, y) in the input frame the likelihood of its color coming from N(μ(x, y), Σ(x, y)) is computed.
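A per-pixel Gaussian background model of this kind might be sketched as follows, here with a scalar grayscale Gaussian N(μ(x, y), σ(x, y)) per pixel instead of a full color covariance, and a 3σ test as the likelihood decision (all names and constants are illustrative):

```python
import numpy as np

def fit_background(frames):
    """Per-pixel Gaussian background model N(mu(x,y), sigma(x,y))
    learned from a stack of consecutive frames."""
    stack = np.stack(frames).astype(float)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6

def foreground_mask(frame, mu, sigma, k=3.0):
    """A pixel is foreground if its value is an unlikely draw from the
    background Gaussian (more than k standard deviations from the mean)."""
    return np.abs(frame.astype(float) - mu) > k * sigma

rng = np.random.default_rng(0)
frames = [100 + rng.normal(0, 2, (8, 8)) for _ in range(50)]
mu, sigma = fit_background(frames)
frame = frames[0].copy()
frame[2:4, 2:4] = 200                            # intruding object
print(foreground_mask(frame, mu, sigma)[2:4, 2:4].all())
```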


Object Tracking (1)

A. Yilmaz, O. Javed, and M. Shah: Object tracking: A survey. ACM Computing Surveys, Vol. 38, No. 4, 1-45, 2006


Object Tracking (2)

- (a) Point tracking. Objects detected in consecutive frames are represented by points, and point matching is done. This approach requires an external mechanism to detect the objects in every frame.
- (b) Kernel tracking. Kernel = object shape and appearance, e.g. a rectangular template or an elliptical shape with an associated histogram. Objects are tracked by computing the motion (a parametric transformation such as translation, rotation, or affine) of the kernel in consecutive frames.
- (c)+(d) Silhouette tracking. Such methods use the information encoded inside the object region (appearance density and shape models). Given the object models, silhouettes are tracked by either shape matching (c) or contour evolution (d). The latter can be considered object segmentation applied in the temporal domain, using priors generated from the previous frames.

Object Tracking (3)


Template matching: a brute-force method for tracking single objects
- Define a search area
- Place the template defined from the previous frame at each position of the search area and compute a similarity measure between the template and the candidate
- Select the best candidate, i.e. the one with the maximal similarity measure

The similarity measure can be a direct template comparison or a statistical measure between two probability densities.

Limitation of template matching: high computational cost due to the brute-force search. Remedy: limit the object search to the vicinity of the object's previous position (position prediction).
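A brute-force template matcher restricted to the vicinity of the previous position might look like this sketch, using the sum of squared differences as the (inverse) similarity measure; the function name and parameters are made up:

```python
import numpy as np

def match_template(frame, template, center, search_radius):
    """Brute-force template matching restricted to a search area around
    the object's previous position; best match = minimal SSD."""
    th, tw = template.shape
    best, best_pos = None, None
    r_prev, c_prev = center
    for r in range(max(r_prev - search_radius, 0),
                   min(r_prev + search_radius, frame.shape[0] - th) + 1):
        for c in range(max(c_prev - search_radius, 0),
                       min(c_prev + search_radius, frame.shape[1] - tw) + 1):
            ssd = np.sum((frame[r:r+th, c:c+tw].astype(float) - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

frame = np.zeros((20, 20))
frame[7:10, 11:14] = 1.0                         # object moved to (7, 11)
template = np.ones((3, 3))                       # template from the previous frame
print(match_template(frame, template, center=(8, 10), search_radius=4))
```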


Object Tracking (4)


Direct comparison: between template t(i,j) and candidate g(i,j)

Bhattacharyya coefficient between two distributions:
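For two normalized histograms p and q the Bhattacharyya coefficient is ρ(p, q) = Σᵤ √(pᵤ qᵤ); a direct NumPy sketch:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient rho(p, q) = sum_u sqrt(p_u * q_u) between
    two normalized histograms; 1 for identical, 0 for disjoint distributions."""
    return float(np.sum(np.sqrt(np.asarray(p) * np.asarray(q))))

p = np.array([0.5, 0.3, 0.2])
print(bhattacharyya(p, p))                       # identical -> 1.0
print(bhattacharyya([1.0, 0.0], [0.0, 1.0]))     # disjoint  -> 0.0
```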


Object Tracking (5)


Example: Eye tracking (direct grayvalue comparison)


Object Tracking (6)


Example: Elliptical head tracking using intensity gradients and color histograms

http://robotics.stanford.edu/~birch/headtracker/


Object Tracking (7)


D. Comaniciu, V. Ramesh, and P. Meer: Kernel-based object tracking. IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 25, 564-575, 2003

Mean-shift tracking (instead of a brute-force search). (a) Estimated object location at time t-1. (b) Frame at time t with the initial location estimate based on the previous object position. (c), (d), (e) Location update using mean-shift iterations. (f) Final object position at time t.
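The location update can be illustrated with a bare-bones mean-shift iteration on a precomputed weight image. In Comaniciu et al. the weights come from histogram similarity; here the weight image, window radius, and blob are synthetic stand-ins:

```python
import numpy as np

def mean_shift(weights, start, radius=6, max_iter=20):
    """Mean-shift iterations: repeatedly move the window center to the
    weighted centroid of the weights inside the current window."""
    r, c = start
    for _ in range(max_iter):
        r0, r1 = max(r - radius, 0), r + radius + 1
        c0, c1 = max(c - radius, 0), c + radius + 1
        w = weights[r0:r1, c0:c1]
        if w.sum() == 0:
            break                                # no support in the window
        ys, xs = np.mgrid[r0:r1, c0:c1]
        nr = int(round((ys * w).sum() / w.sum()))
        nc = int(round((xs * w).sum() / w.sum()))
        if (nr, nc) == (r, c):                   # converged
            break
        r, c = nr, nc
    return r, c

# Weight image with a single blob; mean-shift climbs from (11, 12) to it.
weights = np.zeros((30, 30))
weights[14:17, 15:18] = 1.0                      # blob centered at (15, 16)
print(mean_shift(weights, start=(11, 12)))       # (15, 16)
```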