
International Journal of Advent Research in Computer & Electronics, Vol. 1, No. 5, August 2014
E-ISSN: 2348-5523

Moving Object Tracking and Detection in Videos using MATLAB: A Review
Prof. Abhijeet A. Chincholkar
M.E. Digital Electronics, JCOET, Yavatmal.
chincholkarabhijeet@gmail.com

Ms. Sonali A. Bhoyar
B.E. EXTC Engg. Dept., JCOET, Yavatmal.
sonalibhoyar11@gmail.com

Abstract- Object tracking is the process of locating moving objects in consecutive video frames. Real-time object tracking is a challenging problem in computer vision, motion-based recognition, automated surveillance, traffic monitoring, augmented reality, object-based video compression, etc. This paper reviews state-of-the-art tracking methods, classifies them into categories, and identifies new trends. Difficulties in tracking objects can arise from abrupt object motion, changing appearance patterns of both the object and the scene, non-rigid object structures, object-to-object and object-to-scene occlusions, and camera motion. Tracking is usually performed in the context of higher-level applications that require the location and/or shape of the object in every frame, and assumptions are typically made to constrain the tracking problem for a particular application. We categorize tracking methods on the basis of the object and motion representations used, provide detailed descriptions of representative methods in each category, and examine their pros and cons. We also discuss important issues in tracking, including the use of appropriate image features, the selection of motion models, and the detection of objects. For video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning has been proposed: a discriminative dictionary combining positive, negative, and trivial patches is designed to sparsely represent the overlapped target patches, and the models are trained to adapt to target appearance variation in a timely manner. Qualitative and quantitative evaluations on challenging image sequences, compared with state-of-the-art algorithms, demonstrate that such tracking algorithms achieve more favorable performance. Particle filtering algorithms have likewise been extended or modified for various target tracking estimation and data association applications, including passive ranging problems where only angle information is available. While numerous algorithms have been proposed for object tracking with demonstrated success, it remains challenging for a tracker to handle large changes in scale, motion, and shape deformation under occlusion.

Ms. Snehal N. Dagwar
B.E. EXTC Engg. Dept., JCOET, Yavatmal.
Snehaldagwar06@gmail.com

Keywords- Object detection, object representation, point tracking, shape tracking, structured dictionary learning, visual sensor networks.
1. INTRODUCTION
Object tracking is an important task within the field of computer vision. The proliferation of high-powered computers, the availability of high-quality and inexpensive video cameras, and the increasing need for automated video analysis have generated a great deal of interest in object tracking algorithms. There are three key steps in video analysis: detection of interesting moving objects, tracking of such objects from frame to frame, and analysis of object tracks to recognize their behavior. In its simplest form, tracking can be defined as the problem of estimating the trajectory of an object in the image plane as it moves around a scene. In other words, a tracker assigns consistent labels to the tracked objects in different frames of a video. Additionally, depending on the tracking domain, a tracker can also provide object-centric information, such as the orientation, area, or shape of an object. Object tracking in video is an important subject and has long been investigated in the computer vision community. An object, or a target, refers to a region in the video frame detected or labeled for specific purposes. Visual trackers proposed in the early years typically kept the appearance model fixed throughout an image sequence. More recently, methods have been proposed that track targets while evolving the appearance model in an online manner, called online visual tracking. An online visual tracking method typically follows the Bayesian inference framework and mainly consists of three components: an object representation scheme, which captures the appearance and uniqueness of the target; a dynamical model (or state transition model), which describes the states of the target and their inter-frame relationship over time; and an observation model, which evaluates the likelihood of an observed image candidate (associated with a state) belonging to the object class. Although visual tracking has been intensively investigated, there are still many challenges, such as occlusions, appearance changes, significant motions, background clutter, etc. Target tracking systems rely heavily upon statistical state estimation theory. For target or object tracking applications, modern systems are capable of handling

multiple targets. This leads to data association issues. The tracking system can be located on a moving vehicle, such as an aircraft, missile or submarine, or located on the ground for surveillance purposes. One approach aims to solve partial occlusion with a representation based on histograms of local patches; the tracking task is carried out by combining votes of matching local patches using a template. The visual tracking decomposition (VTD) approach effectively extends the conventional particle filter framework with multiple motion and observation models to account for appearance variation caused by changes of pose, lighting and scale as well as partial occlusion. As a result of the adopted generative representation scheme, this tracker is not equipped to distinguish target and background patches.
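The Bayesian predict-weight-resample loop underlying such particle-filter trackers can be sketched in one dimension as follows (an illustration of ours, with made-up parameter values, not any reviewed paper's implementation):

```python
# A minimal particle-filter step: predict particles with a motion model,
# weight them with an observation likelihood, then resample.
import math
import random

random.seed(0)

def particle_filter_step(particles, measurement, motion_std=1.0, obs_std=2.0):
    # Predict: diffuse each particle with the dynamical (state transition) model.
    particles = [p + random.gauss(0.0, motion_std) for p in particles]
    # Weight: observation model = Gaussian likelihood of the measurement.
    weights = [math.exp(-0.5 * ((p - measurement) / obs_std) ** 2)
               for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportionally to the weights.
    return random.choices(particles, weights=weights, k=len(particles))

particles = [random.uniform(0.0, 50.0) for _ in range(500)]
for z in (10.0, 12.0, 14.0, 16.0):     # object drifting to the right
    particles = particle_filter_step(particles, z)

estimate = sum(particles) / len(particles)
print(round(estimate))  # near the last measurement, 16
```

Richer trackers such as VTD keep several such motion and observation models in parallel; the loop above is the common core.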
2. PRESENT THEORY AND PRACTICALS


Kalpesh R. Jadav, M. A. Lokhandwala, and A. P. Gharge proposed vision-based moving object detection and tracking [1]. Moving object detection and tracking is often the first step in applications such as video surveillance. A moving object detection and tracking system with a static camera has been developed to estimate velocity and distance parameters. The system performs general vision-based moving object detection and tracking using an image-difference algorithm. It focuses on detecting moving objects in a scene, for example moving people meeting each other, and on tracking the detected people as long as they stay in the scene. This is done with an image-difference algorithm in MATLAB, from which distance, frames per unit time, and velocity can be calculated. The paper estimates both the position and the velocity of moving people, and describes an algorithm to estimate moving-object velocity using image processing techniques, the camera calibration parameters, and MATLAB.
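The image-difference idea can be sketched as follows (a minimal illustration of ours, not the authors' code; function names and the threshold are our own):

```python
# Frame differencing: a pixel is flagged as "moving" when the absolute
# intensity change between two consecutive grayscale frames exceeds a threshold.

def difference_mask(prev_frame, curr_frame, threshold=25):
    """Return a binary mask: 1 where |curr - prev| > threshold, else 0."""
    return [
        [1 if abs(c - p) > threshold else 0 for p, c in zip(prow, crow)]
        for prow, crow in zip(prev_frame, curr_frame)
    ]

def bounding_box(mask):
    """Smallest (row_min, col_min, row_max, col_max) box covering all 1s."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), min(cols), max(rows), max(cols)

# Toy 5x5 frames: a bright 2x2 "object" shifts one pixel to the right.
prev = [[0] * 5 for _ in range(5)]
curr = [[0] * 5 for _ in range(5)]
for r in (1, 2):
    prev[r][1] = prev[r][2] = 200
    curr[r][2] = curr[r][3] = 200

mask = difference_mask(prev, curr)
print(bounding_box(mask))  # -> (1, 1, 2, 3)
```

Tracking the box centroid across frames then gives displacement per frame, from which velocity follows once the camera calibration fixes the pixel-to-metre scale.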
Michalis K. Titsias proposed a MATLAB toolbox for learning object models from video [2]. It provides information for using the toolbox to learn layered object models from a publicly available video. The theoretical foundations of the algorithm used in the software, as well as a complete description of the algorithm, can be found in the cited references. The toolbox consists of a set of MATLAB functions. To run this software you need a recent version of MATLAB (version 6.0 or later) together with the Image Processing Toolbox. The background object in each video frame is assumed to follow a translational motion across all frames, while the foreground objects can undergo similarity transformations.
G. L. Foresti, C. Micheloni and C. Piciarelli proposed detecting moving people in video streams [3]. The detection of moving people is an important task for video surveillance systems. The paper presents a motion segmentation algorithm for detecting people moving in indoor environments. The algorithm works with mobile cameras and is composed of two main parts. In the first part, a frame-by-frame procedure is applied to compute the difference image, and a neural network is used to classify whether the resulting image represents a static scene or a scene containing mobile objects. The second part tries to reduce the detection errors in terms of both false and missed alarms. A finite state automaton has been designed to give a robust classification and to reduce the number of false or missed blobs. Finally, a bounding ellipse is computed for each detected blob in order to isolate moving people.
Arnab Roy, Sanket Shinde and Kyoung-Don Kang proposed an approach for efficient real-time moving object detection [4]. Moving object detection is essential for real-time surveillance; however, it is challenging to support moving object detection in a timely fashion due to its compute-intensive nature. They tackle the challenge by developing new techniques to substantially expedite moving object detection, implemented using a low-end webcam in a commodity laptop with no special hardware for high-speed image processing. Compared with the well-known background modeling technique, their approaches reduce the average delay for moving object detection by up to 45.5% and decrease memory consumption by up to approximately 14%, while supporting equally accurate detection.
G. Shrikanth and Kaushik Subramanian proposed an implementation of an FPGA-based object tracking algorithm [5]. The work uses image processing algorithms for object recognition and tracking and implements them on an FPGA. In today's world most sensing applications require some form of digital signal processing, and these are implemented primarily on serial processors. While the required output is achievable, it can be beneficial to take advantage of the parallelism, low cost, and low power consumption offered by FPGAs. A Field Programmable Gate Array (FPGA) contains logic components that can be programmed to perform complex mathematical functions, making FPGAs highly suitable for the implementation of matrix algorithms. The individual frames acquired from the target video are fed into the FPGA and are then subject to segmentation, thresholding and filtering stages. The object is tracked by comparing the background frame with the processed updated frame containing the new location of the target. The results of the FPGA implementation in tracking a moving object were found to be positive and suitable for object tracking.
Zvi Figov, Yoram Tal and Moshe Koppel proposed detecting and removing shadows [6]. The paper presents a method for the detection and removal of shadows with hard borders in RGB images. The method begins with a segmentation of the color image; whether a segment is a shadow is then decided by examining its neighboring segments. The shadows are removed by zeroing the shadow borders in an edge representation of the image and then reintegrating the edges using the method introduced by Weiss. This is done for all of the color channels, thus leaving a shadow-free color image. Unlike previous methods, the present method requires neither a calibrated camera nor multiple images, and it is complementary to current illumination correction algorithms. Examination of a number of examples indicates that this method yields a significant improvement over previous methods.
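The gradient-editing idea behind this family of methods can be illustrated in one dimension (a sketch of ours; the actual method works on 2D edge representations and uses Weiss's reintegration, not a simple cumulative sum):

```python
# 1D illustration of shadow removal by gradient editing: zero the derivative
# at the detected shadow borders, then reintegrate the edited derivative.

def gradient(signal):
    return [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]

def reintegrate(start, grad):
    out = [start]
    for g in grad:
        out.append(out[-1] + g)
    return out

# A flat surface of brightness 100 with a shadow (drop to 40) in the middle.
signal = [100, 100, 100, 40, 40, 40, 100, 100]
grad = gradient(signal)

# Suppose the shadow borders (indices 2 and 5 in the gradient) were detected;
# zeroing them removes the illumination step but keeps all other detail.
for border in (2, 5):
    grad[border] = 0

print(reintegrate(signal[0], grad))  # -> eight values of 100, shadow gone
```

Because only the border gradients are touched, texture inside the former shadow region would survive the reintegration.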
Jianwei Zhou and Kefeng Lu (TAMU) proposed real-time optical-flow-based motion tracking [7]. The tracking of the object is based on optical flow among video frames, in contrast to image background-based detection. The proposed optical flow method is straightforward, easier to implement and, the authors assert, has better performance. The project consists of two parts: software simulation in Simulink and hardware implementation on a TI TMS320DM6437 DSP board. The idea of the project is derived from the tracking section of the demos listed on the MATLAB computer vision toolbox website.
Volker Grabe, Heinrich H. Bulthoff, and Paolo Robuffo Giordano proposed robust optical-flow-based self-motion estimation for a quadrotor UAV [8]. Robotic vision has become an important field of research for micro aerial vehicles in recent years. While many approaches for autonomous visual control of such vehicles rely on powerful ground stations, the increasing availability of small and light hardware allows for the design of more independent systems. In this context, they present a robust algorithm able to recover the UAV ego-motion using a monocular camera and on-board hardware. The method exploits the continuous homography constraint so as to discriminate among the observed feature points and classify those belonging to the dominant plane in the scene. Extensive experiments on a real quadrotor UAV demonstrate that the estimation of the scaled linear velocity in a cluttered environment improved by about 25% compared to previous approaches.
Bhavana C. Bendale and Anil R. Karwankar proposed moving object tracking in video using MATLAB [9]. A method is described for tracking moving objects from a sequence of video frames. The method is implemented using optical flow (Horn-Schunck) in MATLAB Simulink. It has a variety of uses, some of which are: human-computer interaction, security and surveillance, video communication and compression, augmented reality, traffic control, medical imaging and video editing.
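The brightness-constancy assumption at the core of Horn-Schunck and Lucas-Kanade style flow can be shown in a minimal 1D sketch (our illustration, not the Simulink model): for a small shift d, I2(x) ≈ I1(x - d) ≈ I1(x) - d * dI1/dx, so a least-squares estimate over a window is d = -Σ(Ix·It)/Σ(Ix²).

```python
# Estimate a 1D translation between two "frames" from spatial and temporal
# derivatives, the 1D core of brightness-constancy optical flow.

def estimate_shift(f1, f2):
    num = 0.0
    den = 0.0
    for i in range(1, len(f1) - 1):
        ix = (f1[i + 1] - f1[i - 1]) / 2.0   # spatial derivative (central diff)
        it = f2[i] - f1[i]                   # temporal derivative
        num += ix * it
        den += ix * ix
    return -num / den

# A smooth ramp translated by exactly +1 sample between frames.
f1 = [0, 1, 2, 3, 4, 5, 6, 7]
f2 = [-1, 0, 1, 2, 3, 4, 5, 6]   # f2(x) = f1(x - 1)

print(estimate_shift(f1, f2))  # -> 1.0
```

Horn-Schunck extends this to dense 2D flow by adding a smoothness term that couples neighboring flow vectors, which is what resolves the aperture ambiguity in textureless regions.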
Yi Liu and Yuan F. Zheng proposed video object segmentation and tracking using learning classification [10]. As a requisite of the emerging content-based multimedia technologies, video object (VO) extraction is of great importance. The paper presents a novel semiautomatic segmentation and tracking method for single VO extraction. Unlike traditional approaches, the proposed method formulates the separation of the VO from the background as a classification problem. Each frame is divided into small blocks of uniform size, which are called object blocks if their center pixels belong to the object, or background blocks otherwise. After a manual segmentation of the first frame, the blocks of this frame are used as the training samples for the object-background classifier. A newly developed learning tool called -learning is employed to train the classifier, which outperforms conventional Support Vector Machines in linearly nonseparable cases. To deal with large and complex objects, a multilayer approach constructing a so-called hyperplane tree is proposed. Each node of the tree represents a hyperplane responsible for classifying only a subset of the training samples; multiple hyperplanes are thus needed to classify the entire set.
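The block-labeling idea can be illustrated with a toy stand-in classifier (a nearest-class-mean rule of our own; the paper trains a different, stronger classifier on richer block features):

```python
# Divide a frame into non-overlapping blocks and label each block
# 'object' or 'background' by the nearer class mean intensity.

def block_means(frame, size):
    """Mean intensity of each non-overlapping size x size block."""
    means = []
    for r in range(0, len(frame), size):
        for c in range(0, len(frame[0]), size):
            vals = [frame[r + i][c + j] for i in range(size) for j in range(size)]
            means.append(sum(vals) / len(vals))
    return means

def classify(means, obj_mean, bg_mean):
    """Label each block by whichever class mean it is closer to."""
    return ['object' if abs(m - obj_mean) < abs(m - bg_mean) else 'background'
            for m in means]

# 4x4 frame, 2x2 blocks: a bright object occupies the top-left block.
frame = [
    [200, 200, 10, 10],
    [200, 200, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
labels = classify(block_means(frame, 2), obj_mean=200.0, bg_mean=10.0)
print(labels)  # -> ['object', 'background', 'background', 'background']
```

In the paper, the class statistics come from the manually segmented first frame, and the decision boundary is a learned hyperplane (or a tree of them) rather than a single mean comparison.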
Richard Roberts, Christian Potthast and Frank Dellaert proposed learning general optical flow subspaces for ego-motion estimation and detection of motion anomalies [11]. The work deals with estimation of dense optical flow and ego-motion in a generalized imaging system by exploiting probabilistic linear subspace constraints on the flow. It considers the extended motion of the imaging system through an environment that is assumed to have some degree of statistical regularity. For example, in autonomous ground vehicles the structure of the environment around the vehicle is far from arbitrary, and the depth at each pixel is often approximately constant. The subspace constraints hold not only for perspective cameras but in fact for a very general class of imaging systems, including catadioptric and multiple-view systems. To identify and cope with image regions that violate the subspace constraints, such as moving objects, objects that violate the depth regularity, or gross flow estimation errors, a per-pixel Gaussian mixture outlier process is employed.
Aurélie Bugeau and Patrick Pérez proposed Track and Cut: simultaneous tracking and segmentation of multiple objects with graph cuts [12]. This is a new method to both track and segment multiple objects in videos using min-cut/max-flow optimizations. They introduce objective functions that combine low-level pixel-wise measures (color, motion), high-level observations obtained via an independent detection module, motion prediction, and contrast-sensitive contextual regularization. One novelty is that external observations are used without adding any association step. The observations are image regions (pixel sets) that can be output by any kind of detector. The minimization of these cost functions simultaneously allows "detect-before-track" tracking (track-to-observation assignment and automatic initialization of new tracks) and segmentation of tracked objects. When several tracked objects get mixed up by the detection module (e.g.,
a single foreground detection mask for objects close to each other), a second stage of minimization allows the proper tracking and segmentation of these individual entities despite the observation confusion. Experiments on different types of sequences demonstrate the ability of the method to detect, track and precisely segment persons as they enter and traverse the field of view, even in cases of partial occlusion, temporary grouping and frame dropping.
Fuxin Li, Taeyoung Kim, Ahmad Humayun, David Tsai and James M. Rehg proposed video segmentation by tracking many figure-ground segments [13]. This is an unsupervised video segmentation approach that simultaneously tracks multiple holistic figure-ground segments. Segment tracks are initialized from a pool of segment proposals generated by a figure-ground segmentation algorithm. Then, online non-local appearance models are trained incrementally for each track using a multi-output regularized least squares formulation. By using the same set of training examples for all segment tracks, a computational trick allows hundreds of segment tracks to be tracked efficiently, with optimal online updates performed in closed form. Besides, a new composite statistical inference approach is proposed for refining the obtained segment tracks; it breaks down the initial segment proposals and recombines them into better ones by utilizing high-order statistic estimates from the appearance model and enforcing temporal consistency. For evaluating the algorithm, a dataset, SegTrack v2, was collected with about 1,000 frames with pixel-level annotations. The proposed framework outperforms state-of-the-art approaches on the dataset, showing its efficiency and robustness to challenges in different video sequences.
V. Purandhar Reddy proposed object tracking based on pattern matching [14]. A novel algorithm for object tracking in video pictures, based on edge detection, object extraction and pattern matching, is proposed. With edge detection, all objects in images can be detected no matter whether they are moving or not. Using the edge detection results of successive frames, pattern matching in a simple feature space is exploited for tracking the objects. Consequently, the proposed algorithm can be applied to multiple moving and still objects, even in the case of a moving camera. The paper describes the algorithm in detail and performs simulation experiments on object tracking which verify the tracking algorithm's efficiency.
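The matching step of such an approach can be sketched in a 1D feature space (an illustration of ours, not the author's implementation): the best match is the offset that minimizes the sum of squared differences (SSD) against the object's signature.

```python
# SSD template matching: slide the template over the signal and keep
# the offset with the minimum sum of squared differences.

def ssd(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_match(signal, template):
    """Offset in `signal` where `template` matches with minimum SSD."""
    scores = [ssd(signal[i:i + len(template)], template)
              for i in range(len(signal) - len(template) + 1)]
    return min(range(len(scores)), key=scores.__getitem__)

edge_profile = [0, 0, 5, 9, 5, 0, 0, 0]   # e.g. an edge-detection response
template = [5, 9, 5]                       # the object's edge signature
print(best_match(edge_profile, template))  # -> 2
```

Repeating the search in each new frame, seeded near the previous match, yields the object's trajectory.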
Abhishek Kumar Chauhan and Prashant Krishan proposed moving object tracking using a Gaussian Mixture Model and optical flow [15]. This is a new tracking method that uses a Gaussian Mixture Model (GMM) and an optical flow approach for object tracking. The GMM approach uses three different Gaussian distributions, each described by a mean, a standard deviation and a weight. There are two important steps: establishing the background model, and the background updates which separate the foreground from the background. The paper combines GMM and optical flow for object tracking. The advantage of optical flow is quick calculation, while its disadvantage is a lack of complete object tracking; GMM produces complete results, but its disadvantages include long computation times and more noise. The two methods can complement each other, and together with image filtering this results in the successful tracking of objects. The approach has a variety of uses such as video communication and compression, traffic control, medical imaging and video editing.
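The per-pixel background-modeling idea can be sketched with a single running Gaussian (a simplified stand-in of ours for the three-component mixture; the threshold, learning rate and initial variance are made-up values):

```python
# Each pixel keeps a running mean and variance of its background intensity;
# an observation is foreground when it deviates by more than k sigmas.
import math

class RunningGaussian:
    def __init__(self, init, alpha=0.05, k=2.5):
        self.mean = float(init)
        self.var = 20.0        # initial variance (assumed)
        self.alpha = alpha     # learning rate
        self.k = k             # deviation threshold in sigmas

    def update(self, value):
        """Return True if `value` is foreground, then adapt the model."""
        d = value - self.mean
        foreground = abs(d) > self.k * math.sqrt(self.var)
        if not foreground:  # only background observations update the model
            self.mean += self.alpha * d
            self.var = (1 - self.alpha) * self.var + self.alpha * d * d
        return foreground

model = RunningGaussian(init=100)
print([model.update(v) for v in (101, 99, 100, 180, 102)])
# -> [False, False, False, True, False]
```

A full GMM keeps several such Gaussians per pixel with weights, which lets the background itself be multimodal (e.g. swaying leaves alternating with sky).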
Abel Mendes, Luis Conde Bento and Urbano Nunes proposed multi-target detection and tracking with a laser scanner [16]. This is a method of detection and tracking of moving objects (DATMO) using a Laser Range Finder (LRF). The DATMO system is able to classify several kinds of objects and can be easily expanded to detect new ones. It is composed of three modules: scan segmentation; object classification using a suitable voting scheme over several object properties; and object tracking using a Kalman filter that takes the object type into account to increase the tracking performance. The goal is the design of a collision avoidance algorithm to be integrated in a Cybercar vehicle, which uses the computed time-to-collision for each moving obstacle validated by the DATMO system.
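A minimal 1D constant-velocity Kalman filter conveys the kind of tracking filter used here (an illustrative sketch of ours with made-up noise parameters, not the authors' type-aware filter):

```python
# State = (position, velocity); predict with a constant-velocity model,
# update with a noisy position measurement.

def kalman_track(measurements, dt=1.0, q=0.01, r=1.0):
    x, v = measurements[0], 0.0          # initial state
    p = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    out = []
    for z in measurements[1:]:
        # Predict: x' = x + v*dt, P' = F P F^T + Q (Q = diag(q, q)).
        x, v = x + v * dt, v
        p = [[p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + q,
              p[0][1] + dt * p[1][1]],
             [p[1][0] + dt * p[1][1], p[1][1] + q]]
        # Update with the position measurement z (H = [1, 0]).
        s = p[0][0] + r
        k0, k1 = p[0][0] / s, p[1][0] / s
        y = z - x
        x, v = x + k0 * y, v + k1 * y
        p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
             [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        out.append(x)
    return out

# Object moving ~2 units/frame with slightly noisy position measurements.
zs = [0.0, 2.1, 3.9, 6.0, 8.1, 9.9, 12.0]
est = kalman_track(zs)
print(round(est[-1], 1))  # close to the last measurement, 12
```

Making the process noise q depend on the classified object type (pedestrian vs. vehicle, say) is the kind of refinement the DATMO system applies on top of this basic filter.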
M. Aprile, A. Colombari, A. Fusiello and V. Murino proposed segmentation and tracking of multiple objects in video sequences [17]. This is a system that produces an object-based representation of a video shot, composed of a background (still) mosaic and moving objects. Segmentation of moving objects is based on ego-motion compensation and on background modeling using tools from robust statistics. Region matching is carried out by an algorithm that operates on the Mahalanobis distance between region descriptors in two subsequent frames and uses Singular Value Decomposition to compute a set of correspondences satisfying both the principle of proximity and the principle of exclusion. The sequence is represented as a layered graph, and specific techniques are introduced to cope with crossings and occlusions.
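The Mahalanobis distance between region descriptors can be illustrated as follows (our sketch with a diagonal covariance and made-up descriptor features; the paper's descriptors and covariance are richer):

```python
# Mahalanobis distance with per-feature variances: features with large
# natural variability contribute less to the distance.
import math

def mahalanobis_diag(a, b, variances):
    """Distance between descriptors a and b under a diagonal covariance."""
    return math.sqrt(sum((x - y) ** 2 / v
                         for x, y, v in zip(a, b, variances)))

# Toy descriptors: (mean gray level, area in pixels). Area fluctuates much
# more than gray level, so it is down-weighted by its larger variance.
var = (25.0, 10000.0)
region_prev = (120.0, 400.0)
region_curr = (125.0, 450.0)
print(round(mahalanobis_diag(region_prev, region_curr, var), 3))  # -> 1.118
```

Normalizing by variance is what lets heterogeneous features (intensity, area, position) be compared on one scale before the SVD-based correspondence step.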
Kay Ch. Fuerstenberg, Klaus C. J. Dietmayer and Stephan Eisenlauer proposed a multilayer laserscanner for robust object tracking and classification in urban traffic scenes [18]. The latest laserscanner development of IBEO combines several innovations for automotive use. The sensor is a multilayer scanner which measures both distances and reflectivities simultaneously in 4 horizontal scan planes; this can be used to compensate for pitching of the vehicle. A multi-target capability is also integrated, which allows the sensor to detect two distances with a single measurement, e.g. to be robust against rain. A system architecture for detection and modelling of dynamic traffic scenes is used to give a general idea of the different tasks involved in reaching the goal of a complete environmental model using a sensor for a wide range of applications.
Nicholas McKinley Johnson proposed simultaneous localization, mapping and object tracking in an urban environment using multiple 2D laser scanners [19]. Robotics is a rapidly developing field. Initially used primarily on the assembly line since the introduction of Unimate at General Motors in 1961, robots have grown to take on many different roles. Today's robots can be seen not only on the assembly line but also cleaning people's homes, mowing lawns, and protecting soldiers on the battlefield. One of the areas currently getting a lot of attention is unmanned vehicles. These vehicles are being used to accomplish missions in dangerous situations where human life would be at risk. Many of the unmanned vehicles currently deployed on the battlefield are operated remotely or with human supervision. However, as unmanned vehicles continue to evolve, there is an increasing desire to have these vehicles work independently of human interaction. The research presented in this document aims to assist in reaching that goal: objects are tracked using 2D laser scanners.
Carlo Tomasi and Takeo Kanade proposed detection and tracking of point features [20]. The factorization method described in this series of reports requires an algorithm to track the motion of features in an image stream. Given the small inter-frame displacement made possible by the factorization approach, the best tracking method turns out to be the one proposed by Lucas and Kanade in 1981. The method defines the measure of match between fixed-size feature windows in the past and current frames as the sum of squared intensity differences over the windows. The displacement is then defined as the one that minimizes this sum. For small motions, a linearization of the image intensities leads to a Newton-Raphson style minimization. In this report, after rederiving the method in a physically intuitive way, we
answer the crucial question of how to choose the feature windows that are best suited for tracking. Our selection criterion is based directly on the definition of the tracking algorithm and expresses how well a feature can be tracked; as a result, the criterion is optimal by construction. We show by experiment that the performance of both the selection and the tracking algorithms is adequate for our factorization method, and we address the issue of how to detect occlusions. In the conclusion, we point out specific open questions for future research.

Dr. M. Hemalatha and S. Kavitha proposed a system for dissecting video for tracing multiple humans in multifaceted situations [21]. Segmenting and tracking multiple humans is a challenging problem in complex situations in which extended occlusion, shadow and/or reflection exist. The problem is tackled with a 3D model-based approach comprising two stages, segmentation (detection) and tracking. Human hypotheses are generated by shape analysis of the foreground blobs using a human shape model. The segmented human hypotheses are tracked with a Kalman filter with explicit handling of occlusion. Hypotheses are verified while they are tracked for the first second or so; the verification is done by walking recognition using an articulated human walking model, with a new method to recognize walking using a motion template and temporal integration.

Table 1. Comparison of the reviewed methods.

[1] Vision based moving object detection and tracking. K. R. Jadav, M. A. Lokhandwala, A. P. Gharge (2011).
Technique: The concepts of dynamic template matching and frame differencing have been used to implement a robust automated single-object tracking system.
Merits: Using MATLAB, the image-difference algorithm is easy to implement.
Demerits: It requires a high-resolution camera and a supporting frame-grabber card, so it becomes costly.
Future scope: A moving object detection and tracking system with a static camera has been developed to estimate velocity and distance parameters.

[2] Matlab toolbox for learning object models from video. M. K. Titsias (2004).
Technique: A toolbox for learning layered object models from a publicly available video.
Merits: To run the software it is only necessary to have a recent version of MATLAB installed.
Demerits: As the number of objects increases, there is a combinatorial explosion in the number of configurations.
Future scope: The software should automatically transform RGB images to grayscale image format.

[3] Detecting Moving People in Video Streams. G. L. Foresti, C. Micheloni and C. Piciarelli (2005).
Technique: It presents a motion segmentation algorithm for detecting people moving in indoor environments.
Merits: The computed threshold was big enough to allow the detection of the objects.
Demerits: The computed threshold highlights the acquisition noise.
Future scope: The system allows a mobile robot to follow a single moving person; it is currently triggered to handle only scenarios with a single moving object.

[4] An Approach for Efficient Real Time Moving Object Detection. A. Roy, S. Shinde and K. D. Kang (2009).
Technique: The technique directly subtracts two consecutive frames to extract the difference image.
Merits: The approach uses a low-end webcam in a commodity laptop with no special hardware for high-speed image processing.
Demerits: Real-time surveillance and visual tracking is computationally expensive and resource-hungry.
Future scope: Improvement of moving object detection algorithms.

[5] Implementation of FPGA-based object tracking algorithm. G. Shrikanth and K. Subramanian (2008).
Technique: It uses image processing algorithms for object recognition and tracking and implements them on an FPGA (Field Programmable Gate Array).
Merits: In offline processing there is no timing constraint.
Demerits: Image processing is difficult to achieve on a serial processor.
Future scope: To work in an unstructured environment which has no artificial blue/green screen.

[6] Detecting and Removing Shadows. Z. Figov, R. Gan, Y. Tal and M. Koppel (2003).
Technique: The image is successfully segmented into segments based on colours alone.
Merits: It requires neither a calibrated camera nor multiple images.
Demerits: The original image must not be overly compressed.
Future scope: To classify the x, y gradients of a shadow area into shadow and reflectance using shape information in addition to color information.

[7] Real-time Optical Flow-Based Motion Tracking. J. Zhou and K. Lu, TAMU (2005).
Technique: It identifies and tracks a moving object within a video sequence.
Merits: It corresponds to a moving object with minimum velocity magnitude.
Demerits: The DSP board only supports two output video windows at the same time.
Future scope: A more powerful DSP board is required to implement the model.

[8] Robust Optical-Flow Based Self-Motion Estimation for a Quadrotor UAV. V. Grabe, H. Bulthoff and P. R. Giordano (2012).
Technique: A robust algorithm is used to recover the UAV ego-motion using a monocular camera and on-board hardware.
Merits: Low computational power is required for estimating self-motion.
Demerits: High frame rates are needed for good quality of the computed optic flow.
Future scope: In optical flow tracking, the features are not necessarily distributed evenly across the image plane.


[9] Moving Object Tracking in Video Using MATLAB, B. C. Bendale and A. R. Karwankar, 2009.
Description: Describes tracking of moving objects from a sequence of video frames.
Advantages: The optical-flow method is straightforward and easy to implement, and the approach has lower computational complexity than many spatial-domain approaches.
Disadvantages: A moving object may receive many small bounding boxes, because optical flow is detected separately on different parts of the object.
Future scope: Can be modified to differentiate objects of different classes in real-time video.

[10] Video Object Segmentation and Tracking Using Learning Classification, Yi Liu and Yuan F. Zheng, 2005.
Description: A novel semi-automatic approach that handles single video-object extraction as a binary classification problem.
Advantages: Can yield a relatively accurate object boundary.

[11] Learning General Optical Flow Subspaces for Ego-motion Estimation and Detection of Motion Anomalies, R. Roberts, C. Potthast and F. Dellaert, 2009.
Description: Deals with estimation of dense optical flow and ego-motion in a generalized imaging system by exploiting probabilistic linear subspace constraints on the flow.
Disadvantages: Optical-flow estimation in general suffers from the aperture problem; the system is heavy, expensive and requires much power.
Future scope: A collection of learned subspaces could describe the depth in multiple typical environments.

[12] Simultaneous Tracking and Segmentation of Multiple Objects with Graph Cuts, A. Bugeau and P. Pérez, 2007.
Description: Tracks and segments multiple objects in videos using min-cut/max-flow optimizations, jointly manipulating pixel labels and track-to-detection assignment labels.
Advantages: Results on several types of sequences demonstrate the ability of the method to detect, track and precisely segment persons as they enter and traverse the field of view.
Disadvantages: Does not deal with the entrance of new objects into the scene and does not give a complete segmentation of the objects.

[13] Video Segmentation by Tracking Many Figure-Ground Segments, F. Li, T. Kim, A. Humayun, D. Tsai and J. M. Rehg, 2013.
Description: Tracks a pool of holistic figure-ground segments on each frame, generated by a multiple figure-ground segmentation algorithm.
Advantages: Allows hundreds of segment tracks to be tracked efficiently.
Disadvantages: Struggles when, as with the multiple adjacent moving objects in the Frog sequence, the movement is very small.
Future scope: To improve the segment-generation and feature-computation steps.

[14] Object Tracking Based on Pattern Matching, V. P. Reddy, 2012.
Description: Proposes a novel algorithm for object tracking in video pictures based on edge detection, object extraction and pattern matching.
Advantages: Realizes a shifting operation suited to hardware implementation.
Disadvantages: Has problems in separating object information from the background.
Future scope: To represent the object's colour features for tracking purposes.

[15] Moving Object Tracking using Gaussian Mixture Model and Optical Flow, A. K. Chauhan and P. Krishan, 2013.
Description: A tracking method that uses a Gaussian Mixture Model together with an optical-flow approach.
Advantages: Optical flow is quick to compute.
Disadvantages: There is a lack of complete object tracking.
Future scope: Can be modified to differentiate objects of different classes in real-time video, and extended to multiple-object tracking.
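The Gaussian-mixture approach of [15] models each pixel's background statistically and flags large deviations as foreground. A deliberately simplified sketch, keeping a single running Gaussian per pixel instead of a full mixture (the names, learning rate and threshold below are illustrative assumptions, not values from the paper):

```python
# Illustrative per-pixel background model: one running Gaussian per pixel.
# A full MOG keeps several weighted Gaussians per pixel.
ALPHA = 0.05  # learning rate (assumed)
K = 2.5       # foreground threshold, in standard deviations (assumed)

def segment_and_update(model, frame):
    """model: list of [mean, variance] per pixel. Returns a foreground mask."""
    mask = []
    for i, p in enumerate(frame):
        mu, var = model[i]
        d = p - mu
        is_fg = d * d > (K * K) * var
        mask.append(is_fg)
        if not is_fg:  # adapt the model only where the pixel looks like background
            model[i][0] = mu + ALPHA * d
            model[i][1] = (1.0 - ALPHA) * var + ALPHA * d * d
    return mask

# A 6-pixel "image" with background level ~50; an object of intensity 200
# appears at pixel 3 in the second frame.
model = [[50.0, 25.0] for _ in range(6)]
m1 = segment_and_update(model, [50, 51, 49, 50, 50, 52])
m2 = segment_and_update(model, [50, 51, 49, 200, 50, 52])
```

Freezing the update on foreground pixels is what lets a stationary background survive a passing object; the mixture version additionally absorbs backgrounds that are themselves multimodal (e.g. swaying trees).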


[16] Multi-target Detection and Tracking with a Laser-scanner, A. Mendes, L. C. Bento and U. Nunes, 2004.
Description: A method of detection and tracking of moving objects using a laser range finder.
Advantages: The system proved efficient at tracking multiple objects over time, giving good velocity estimates.
Disadvantages: An object cannot be assigned a high confidence immediately at the first scan in which it appears.
Future scope: To find solutions to the critical issues by using range data together with visual data.

[17] Segmentation and Tracking of Multiple Objects in Video Sequences, M. Aprile, A. Colombari, A. Fusiello and V. Murino, 2003.
Description: A system that produces an object-based representation of a video shot, composed of a background and moving objects.
Advantages: Achieves a high compression rate in transmission of the sequence.
Disadvantages: Digital video does not give any explicit description of its content.
Future scope: An additional alpha channel in the image representation, for a more realistic blending of the object with the background.

[18] Multilayer Laserscanner for Robust Object Tracking and Classification in Urban Traffic Scenes, K. C. Fuerstenberg, K. C. J. Dietmayer and S. Eisenlauer, 2002.
Description: Object recognition around the vehicle to detect dangerous situations.
Advantages: Reliable detection and tracking of the surrounding objects can be performed.
Disadvantages: It needs low-speed tracking and detection.
Future scope: To set up a complete knowledge-based environmental model around a driving passenger car; robust detection, tracking and classification algorithms are essential for this.

[19] Simultaneous Localization, Mapping and Object Tracking in an Urban Environment Using Multiple 2D Laser Scanners, N. M. Johnson, 2010.
Description: Tracking results are shared with the other components of the autonomous system.
Advantages: A high-level planner can use the contextual data to modify its plan.
Disadvantages: As task complexity increases, the need for safety also increases.
Future scope: To generate singular representations for detected objects.
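Before any of the laser-scanner trackers above can estimate velocities, detections in each new scan must be associated with existing tracks. A minimal greedy nearest-neighbour sketch with a gating distance (the gate value, coordinates and function names are illustrative assumptions):

```python
GATE = 2.0  # maximum association distance in metres (assumed)

def associate(tracks, detections):
    """Greedily match each track to its nearest detection within the gate.
    tracks, detections: lists of (x, y) positions.
    Returns {track_index: detection_index}; unmatched items are omitted."""
    assignment = {}
    used = set()
    for ti, (tx, ty) in enumerate(tracks):
        best, best_d2 = None, GATE * GATE
        for di, (dx, dy) in enumerate(detections):
            if di in used:
                continue  # each detection feeds at most one track
            d2 = (dx - tx) ** 2 + (dy - ty) ** 2
            if d2 < best_d2:
                best, best_d2 = di, d2
        if best is not None:
            assignment[ti] = best
            used.add(best)
    return assignment

tracks = [(0.0, 0.0), (10.0, 5.0)]
detections = [(9.5, 5.2), (0.4, -0.1), (30.0, 30.0)]
a = associate(tracks, detections)  # detection 2 falls outside every gate
```

Real scanners replace the greedy loop with globally optimal assignment and predict each track forward before gating, but the gate-then-match structure is the same.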

[20] Detection and Tracking of Point Features, C. Tomasi and T. Kanade, 1991.
Description: Tracks the motion of features in an image stream.
Disadvantages: There are intensity differences between a past and a current window.
Future scope: To develop an inexpensive and automatic window-size selection algorithm, and to use windows with high standard deviations in the spatial intensity profile.

[21] A System for Dissecting the Video for Tracing Multiple Humans in Multifaceted Situation, M. Hemalatha and S. Kavitha, 2013.
Description: Recognizes walking using a motion template and temporal integration; the motion-template responses are integrated over time to achieve walking recognition.
Advantages: A simple blob tracker is used to keep track of the paths of, and changes to, the moving blobs.
Future scope: To automatically decide the time to perform human detection, when a human or a group of humans entirely enters the image.
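Tomasi and Kanade's point tracker depends on choosing windows whose spatial intensity profile varies enough to be trackable: a flat window cannot be localized. A small illustrative selector that ranks candidate windows by the standard deviation of their intensities (the toy image, window size and function names are made up for this sketch):

```python
def window_std(img, x, y, half=1):
    """Standard deviation of intensities in the (2*half+1)^2 window at (x, y)."""
    vals = [img[j][i]
            for j in range(y - half, y + half + 1)
            for i in range(x - half, x + half + 1)]
    mean = sum(vals) / len(vals)
    return (sum((v - mean) ** 2 for v in vals) / len(vals)) ** 0.5

def select_windows(img, k, half=1):
    """Return centres of the k windows with the highest intensity std."""
    h, w = len(img), len(img[0])
    cands = [(window_std(img, x, y, half), (x, y))
             for y in range(half, h - half)
             for x in range(half, w - half)]
    cands.sort(reverse=True)  # highest standard deviation first
    return [centre for _, centre in cands[:k]]

# Flat 8x8 image except a bright square in the lower-right corner; only
# windows straddling the square's edge carry usable texture.
img = [[10] * 8 for _ in range(8)]
for y in range(5, 8):
    for x in range(5, 8):
        img[y][x] = 200
best = select_windows(img, 3)  # all selected centres hug the square's border
```

The published criterion uses the eigenvalues of the window's gradient matrix rather than raw intensity variance, which additionally rejects one-dimensional edges; the variance ranking above is the cheaper heuristic the future-scope entry describes.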
3. CONCLUSION
This work analyzed an extensive set of object tracking methods and also gave a brief review of related topics. We divided the object tracking methods into three categories based on the object representations used: methods that establish point correspondence, methods that use primitive geometric models, and methods based on object contours. Object detection is required at some point in each of them; for instance, point trackers require detection in every frame, whereas geometric-region or contour-based trackers require detection only when the object first appears in the scene. Recognizing the importance of object detection for tracking systems, we included a short discussion on popular object-detection methods. We provided detailed summaries of object trackers, including discussion of the object representations, motion models and parameter-estimation schemes employed by the tracking algorithms. We believe that a review of object tracking with rich detail can give valuable insight into this research topic and encourage new research.

REFERENCES
[1] Vision Based Moving Object Detection and Tracking, Kalpesh R. Jadav, M. A. Lokhandwala and A. P. Gharge, EC Dept., GTU University, Parul Institute of Engg. & Tech., Limda, Vadodara, India, May 2011.

[2] Matlab Toolbox for Learning Object Models from Video, Michalis K. Titsias, University of Edinburgh, EH1 2QL, UK.
[3] Detecting Moving People in Video Streams, G. L. Foresti, C. Micheloni and C. Piciarelli, Department of Mathematics and Computer Science, University of Udine, Via delle Scienze 206, 33100 Udine, Italy.
[4] An Approach for Efficient Real Time Moving Object Detection, Arnab Roy, Sanket Shinde and Kyoung-Don Kang, State University of New York at Binghamton, 2009.
[5] Implementation of FPGA-Based Object Tracking Algorithm, G. Shrikanth and Kaushik Subramanian, Anna University, Chennai 600 025, April 2008.
[6] Detecting and Removing Shadows, Zvi Figov, Ramat-Gan, Israel; Yoram Tal, 32 Shimkin St., Haifa 34750; and Moshe Koppel, Dept. of Computer Science, Bar-Ilan University, Ramat-Gan, Israel, October 2003.
[7] Real-time Optical Flow-Based Motion Tracking, Jianwei Zhou and Kefeng Lu, TAMU, course instructor Professor Deepa Kundur, 2005.
[8] Robust Optical-Flow Based Self-Motion Estimation for a Quadrotor UAV, Volker Grabe, Heinrich H. Bülthoff and Paolo Robuffo Giordano, 2012.
[9] Moving Object Tracking in Video Using MATLAB, Bhavana C. Bendale and Anil R. Karwankar.
[10] Video Object Segmentation and Tracking Using Learning Classification, Yi Liu and Yuan F. Zheng, July 2005.
[11] Learning General Optical Flow Subspaces for Ego-motion Estimation and Detection of Motion Anomalies, Richard Roberts, Christian Potthast and Frank Dellaert, Georgia Institute of Technology, Atlanta, GA 30332, 2009.
[12] Track and Cut: Simultaneous Tracking and Segmentation of Multiple Objects with Graph Cuts, Aurélie Bugeau and Patrick Pérez, 2007.
[13] Video Segmentation by Tracking Many Figure-Ground Segments, Fuxin Li, Taeyoung Kim, Ahmad Humayun, David Tsai and James M. Rehg, School of Interactive Computing, Georgia Institute of Technology.
[14] Object Tracking Based on Pattern Matching, V. Purandhar Reddy, Associate Professor, Dept. of ECE, S V College of Engineering, Tirupati 517501, February 2012.
[15] Moving Object Tracking using Gaussian Mixture Model and Optical Flow, Abhishek Kumar Chauhan and Prashant Krishan, Institute of Technology, Dehradun, India, April 2013.
[16] Multi-target Detection and Tracking with a Laser-scanner, Abel Mendes, Luis Conde Bento and Urbano Nunes, June 2004.
[17] Segmentation and Tracking of Multiple Objects in Video Sequences, M. Aprile, A. Colombari, A. Fusiello and V. Murino, Dipartimento di Informatica, University of Verona, Strada Le Grazie 15, 37134 Verona, Italy, 2003.
[18] Multilayer Laserscanner for Robust Object Tracking and Classification in Urban Traffic Scenes, Kay Ch. Fuerstenberg, Klaus C. J. Dietmayer and Stephan Eisenlauer, University of Ulm, Department of Measurement, Control and Microtechnology, Albert-Einstein-Allee 41, 89081 Ulm, Germany, 2002.
[19] Simultaneous Localization, Mapping and Object Tracking in an Urban Environment Using Multiple 2D Laser Scanners, Nicholas McKinley Johnson, December 2010.
[20] Detection and Tracking of Point Features, Carlo Tomasi and Takeo Kanade, April 1991.
[21] A System for Dissecting the Video for Tracing Multiple Humans in Multifaceted Situation, Dr. M. Hemalatha and S. Kavitha, Karpagam University, Coimbatore; S.I.V.E.T. College, Gowrivakkam, India, Vol. 1, Issue 4, September 2013.
