Project Report Image Processing
CHAPTER 1
INTRODUCTION
1.1 INTRODUCTION
In the last decade, many terrorist attacks took place at railway stations, hotels, malls, cinema halls, Parliament House, and other crowded places, and many people lost their lives in them. To fight such anti-social activities, governments adopted multiple strategies, video surveillance being one of them. The problem with CCTV systems, however, is that they are fully controlled and monitored by humans. Surveys on CCTV show that it is mainly used for investigating cases after the fact rather than for monitoring live footage, which is the primary reason it has not helped to prevent terrorist attacks. Another reason is the limited human capacity to respond to highly sensitive situations: human eyes cannot process complex real-time video and are not effective at watching multiple displays at a time. To enhance the processing of real-time CCTV video streams, abandoned object detection systems were therefore introduced.
Many algorithms and techniques have been suggested by researchers and scientists for designing an abandoned object detection system, but most of them are complex and expensive to implement in practice. Some use physical components such as highly expensive filters, lasers, sonic waves, etc., which are often impractical to install because of the cost factor.
The proposed system is simple and effective. It avoids the use of the above expensive physical components and works on a background segmentation scheme. As an additional feature, the threat (the human who left the object) is discovered using a face detection algorithm.
1|Page
Department Of ECE, GECI
Abandoned Object Detection
The system keeps track of each object for a particular time; if an object remains still for that duration, it may be an abandoned object, so it is checked further. After detection, the object's size is checked; if it is greater than a decided size, an alarm is generated and the object is highlighted. The police and fire services can also be informed through a message.
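The stillness and size checks described above can be sketched in Python (the project itself is implemented in MATLAB; the threshold values below are illustrative assumptions, not values from the report):

```python
# Hypothetical thresholds for illustration only.
STILL_FRAMES = 150   # e.g. about 5 s at 30 fps before an object counts as abandoned
MIN_AREA = 400       # ignore blobs smaller than this many pixels

def check_abandoned(still_count, area):
    """Return True when a static object has been still long enough
    and is large enough to warrant an alarm."""
    return still_count >= STILL_FRAMES and area >= MIN_AREA
```

In a real system, `still_count` would be incremented each frame in which the object's centroid does not move, and reset whenever it moves.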
1.2 CURRENTLY EXISTING TECHNOLOGIES
Most existing techniques of abandoned (and removed) object detection employ
a modular approach with several independent steps where the output of each step serves as the
input for the next one. Many efficient algorithms exist for carrying out each of these steps and
any single complete AOD (Abandoned Object Detection) system has to address the problem of
finding a suitable combination of algorithms to suit a specific scenario.
Following is a brief description of these steps and related methods, in the order they are carried
out:
1.2.1 Background Modeling and Subtraction (BGS): This stage creates a dynamic model of
the scene background and subtracts it from each incoming frame to detect the current foreground
regions. The output of this stage is usually a mask depicting pixels in the current frame that do
not match the current background model. Some popular background modeling techniques
include adaptive medians, running averages, mixture of Gaussians, kernel density estimators,
Eigen-backgrounds and mean-shift based estimation. There also exist methods that employ dual
backgrounds or dual foregrounds for this purpose. The BGS step often utilizes feedback from the
object tracking stage to improve its performance.
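As an illustration of the simplest of these techniques, a running-average background model and the resulting foreground mask can be sketched as follows (a Python sketch with an assumed learning rate and threshold; real systems operate on camera frames rather than small lists):

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background model, applied per pixel to
    grayscale frames stored as 2-D lists: B <- (1 - alpha)*B + alpha*F."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Pixels differing from the background by more than thresh
    are marked as foreground (1), others as background (0)."""
    return [[1 if abs(f - b) > thresh else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]
```

A slowly adapting model (small `alpha`) absorbs gradual lighting drift while keeping newly placed objects in the foreground.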
1.2.2 Foreground Analysis: The BGS step is often unable to adapt to sudden changes in the
scene (of lighting, etc.) since the background model is typically updated slowly. It might also
confuse parts of a foreground object as background if their appearance happens to be similar to
the corresponding background, thus causing a single object to be split into multiple foreground
blobs. In addition, certain foreground areas, while being detected correctly, are not of interest for
further processing. The above factors necessitate an additional refinement stage to remove both
false foreground regions, caused by factors like background state changes and lighting variations,
as well as correct but uninteresting foreground areas like shadows.
Several methods exist for detecting sudden lighting changes, ranging from simple gradient and
texture based approaches to those that utilize complex lighting invariant features combined with
binary classifiers like support vector machines. Shadow detection is usually carried out by
performing a pixel-by-pixel comparison between the current frame and the background image to
evaluate some measure of similarity between them. These measures include normalized cross
correlation, edge-width information and illumination ratio.
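A pixel-by-pixel similarity measure such as normalized cross correlation can be sketched as follows (an illustrative Python version for small grayscale patches; since a shadow is roughly a dimmed copy of the background, its NCC with the background patch stays close to 1):

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equal-sized grayscale
    patches given as flat lists; values near 1 indicate the same
    underlying structure (e.g. a shadowed background region)."""
    ma = sum(patch_a) / len(patch_a)
    mb = sum(patch_b) / len(patch_b)
    num = sum((a - ma) * (b - mb) for a, b in zip(patch_a, patch_b))
    da = math.sqrt(sum((a - ma) ** 2 for a in patch_a))
    db = math.sqrt(sum((b - mb) ** 2 for b in patch_b))
    return num / (da * db) if da and db else 0.0
```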
1.2.3 Blob Extraction: This stage applies a connected component algorithm to the foreground mask to detect the foreground blobs, optionally discarding blobs that are too small and were likely created by noise. Most existing methods use an efficient linear-time algorithm, whose popularity is owing to the fact that it requires only a single pass over the image to identify and label all the connected components therein, as opposed to most other methods that require two passes.
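For illustration, connected components can be extracted with a simple flood fill (a Python sketch of the idea, not the optimized single-pass labeling algorithm mentioned above):

```python
from collections import deque

def label_blobs(mask, min_size=1):
    """Collect 4-connected foreground blobs from a binary mask
    (list of lists), discarding blobs smaller than min_size pixels."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    blobs, next_label = [], 1
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                q, pixels = deque([(y, x)]), []
                labels[y][x] = next_label
                while q:                      # breadth-first flood fill
                    cy, cx = q.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                if len(pixels) >= min_size:   # drop tiny noise blobs
                    blobs.append(pixels)
                next_label += 1
    return blobs
```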
1.2.4 Blob Tracking: This is often the most critical step in the AOD process and is concerned
with finding a correspondence between the current foreground blobs and the existing tracked
blobs from the previous frame (if any). The results of this step are sometimes used as feedback to
improve the results of background modeling. Many methods exist for carrying out this task,
including finite state machines, color histogram ratios, Markov chain Monte Carlo model,
Bayesian inference, Hidden Markov models and Kalman filters.
1.2.5 Abandonment Analysis: This step classifies a static blob detected by the tracking step as
either abandoned or removed object or even a very still person. An alarm is raised if a detected
abandoned/removed object remains in the scene for a certain amount of time, as specified by the
user. The task of distinguishing between removed and abandoned objects is generally carried out by calculating the degree of agreement between the current frame and the background frame around the object's edges, under the assumption that the image without any object would show better agreement with the immediate surroundings. There exist several ways to calculate this degree of agreement; two of the popular methods are based on edge energy and region growing. There also exist methods that use human tracking to look for the object's owner and evaluate the owner's activities around the dropping point to decide whether the object is abandoned or removed.
Other approaches are based on tracking. One tracking-based approach comprises three levels of processing: background modeling at the lowest level using feedback from higher levels, person and object tracking at the middle level, and finally person-object split detection at the highest level to classify an object as abandoned. Another proposed system considers the abandonment of an object to comprise four sub-events, from the arrival of the owner with the object to his departure without it. Whenever the system detects an unattended object, it traces back in time to identify the person who brought it into the scene and thus identifies the owner. Tracking and detection of carried objects can also be performed using histograms, where the missing colors in the ratio histogram between the frames with and without the object are used to identify the abandoned object. One such method performs tracking through a trans-dimensional Markov chain Monte Carlo model suitable for tracking generic blobs and thus incapable of distinguishing between humans and other objects as the subject of tracking; the output of this tracking system therefore needs to be subjected to further processing before luggage can be identified and labeled as abandoned.
The system should be able to detect the face of the person who left the object or luggage at a particular place, i.e., a railway station, an airport, or any other crowded place.
CHAPTER 2
LITERATURE SURVEY
2.1 INTRODUCTION
In this section, the various analyses and research efforts made in the field of abandoned object detection, along with the results already published, are discussed, taking into account the various parameters and the extent of the project.
Visual surveillance is an important computer vision research problem. As more and more
surveillance cameras appear around us, the demand for automatic methods for video analysis is
increasing. Such methods have broad applications including surveillance for safety in public
transportation, public areas, and in schools and hospitals. Automatic surveillance is also essential
in the fight against terrorism, and has hence become a very popular subject, projected remarkably in various areas with a large scope. Researchers have carried out numerous laboratory experiments and field observations to illuminate this field.
The abandoned object detection system is useful in multiple environments because of its portability, and its simple mathematical processing makes the system available for real-time use along with post-processing.
An abandoned object detection system based on dual background segmentation [2]: here the detection is based on a simple mathematical model and works efficiently at the QVGA resolution at which most CCTV cameras operate. The pre-processing involves a dual-time background subtraction algorithm which dynamically updates two sets of background, one after a very short interval (less than half a second) and the other after a relatively longer duration. The framework of this algorithm is based on the Approximate Median model. An algorithm for tracking abandoned objects even under occlusion is also proposed. Results show that the system is robust to variations in lighting conditions and in the number of people in the scene. In addition, the system is simple and computationally less intensive, as it avoids the use of expensive filters while achieving better detection results.
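The idea behind the dual-background test can be sketched for a single pixel as follows (an illustrative Python version with an assumed threshold; the actual system in [2] builds its backgrounds with the Approximate Median model at QVGA resolution):

```python
def classify_pixel(frame_px, short_bg_px, long_bg_px, thresh=30):
    """Dual-background classification of one grayscale pixel:
    a pixel that matches the frequently updated short-term background
    but differs from the long-term background belongs to something that
    recently stopped moving, i.e. a candidate abandoned object."""
    in_short = abs(frame_px - short_bg_px) <= thresh   # static w.r.t. recent frames
    in_long = abs(frame_px - long_bg_px) <= thresh     # part of the old scene
    if in_short and not in_long:
        return 'candidate_abandoned'
    if not in_short:
        return 'moving'
    return 'background'
```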
Robust unattended and stolen object detection by fusing simple algorithms [3]: a new approach for detecting unattended or stolen objects in surveillance video, based on the fusion of evidence provided by three simple detectors. As a first step, the moving regions in the scene are detected and tracked. These regions are then classified as static or dynamic and as human or non-human objects. Finally, objects detected as static and non-human are analyzed by each detector, and the data from the detectors are fused to select the best detection hypotheses. Experimental results show that the fusion-based approach increases detection reliability compared to the individual detectors and performs considerably well across a variety of scenarios in real time.
In Detection of Abandoned Objects in Crowded Environments, the system keeps a lookout for the owner, whose presence in or disappearance from the scene defines the status of the bag, and decides the appropriate course of action. The system was successfully tested on the i-LIDS dataset.
Today, video surveillance is commonly used in security systems but requires more intelligent and more robust technical approaches. Such systems are used at airports, railway stations, and other public places, where precise and accurate detection of left luggage or abandoned objects is very essential. In this context, a system is used in which the scene is observed by a multi-camera setup and involves multiple actors. The object is first detected by background subtraction. The system consists of four blocks: video conversion, blob detection, object tracking, and alarm. Initially, the live video stream is segmented into frames, i.e., images. The generated image matrix contains the value of each pixel; by taking the pixel values and performing various operations on them, the abandoned object can be detected, then tracked, and an alarm raised. The problem of determining whether an object has been left unattended is solved by analyzing the output of the tracking system in a detection process.
Extraction of stable foreground image regions for unattended luggage detection [6] is a novel approach to the detection of stationary objects in a video stream. Stationary objects are those separated from the static background but remaining motionless for a prolonged time; extracting them from images is useful for automatic detection of unattended luggage. The algorithm is based on detecting image regions containing foreground pixels with values that are stable in time and checking their correspondence with the detected moving objects. In the first stage of the algorithm, the stability of individual pixels belonging to moving objects is tested using a model constructed from vectors. Next, clusters of pixels with stable color and brightness are extracted from the image and related to the contours of the detected moving objects. In this way, stationary (previously moving) objects are detected. False contours of objects removed from the background are also found and discarded from the analysis. The results of the
algorithm may be analyzed further by the classifier, separating luggage from other objects, and
the decision system for unattended luggage detection. The main focus of the paper is on the
algorithm for extraction of stable image regions. However, a complete framework for unattended
luggage detection is also presented in order to show that this approach provides data for
successful event detection. The results of experiments in which this algorithm was validated using both standard datasets and video recordings from a real airport security system are also reported.
The experiments performed using the real airport recordings confirmed that this
algorithm works with satisfactory accuracy and is suitable for implementation in a working
unattended luggage detection system. The most important drawback of this approach is that if the
input data coming from the BS procedure is inaccurate, false results may be obtained.
Robust abandoned object detection using region-level analysis [7] presents a robust abandoned object detection algorithm for real-time video surveillance. Different from conventional approaches that mostly rely on pixel-level processing, it performs region-level analysis in both background maintenance and static foreground object detection. In background maintenance, region-level information is fed back to adaptively control the learning rate; in static foreground object detection, region-level analysis double-checks the validity of candidate abandoned blobs. Owing to such analysis, the algorithm is robust against illumination changes, “ghosts” left by removed objects, distractions from partially static objects, and occlusions. Experiments on nearly 130,000 frames of the i-LIDS dataset show the superior performance of this approach.
Unattended and Stolen Object Detection Based on Relocating of Existing Object [9] proposes a new design and implementation method supporting a smart surveillance system that can automatically detect abandoned and stolen objects in public places such as bus stations, train stations, or airports. The system is implemented using image processing
techniques. When suspicious events (i.e., left-unattended or stolen objects) are detected, the system alerts the people responsible, such as security guards or security staff. The detection process consists of four major components: 1) video acquisition, 2) video processing, 3) event detection, and 4) result presentation. Experiments were conducted to assess the following qualities: 1) usability, verifying that the system can detect the object and recognize the events, and 2) correctness, measuring the accuracy of the system. The correctness of the system on the tested data sets is reported as a percentage: the correctness of object classification is approximately 76%, and that of event classification 83%.
An abandoned object detection system working on real-time video is useful for live processing as well as post-processing of the video stream, and works at a resolution of 320x240 (QVGA). It is implemented using dual background segmentation, where simple mathematical processing makes the system available for real-time use along with post-processing; the dual background segmentation scheme is based on the Approximate Median model and avoids the use of expensive filters while achieving better detection results. In Robust unattended and stolen object detection by fusing simple algorithms, real-time and robust unattended or stolen object detection is presented; the fusion-based approach increases detection reliability compared to the individual detectors and performs considerably well across a variety of scenarios in real time. In Detection of Abandoned Objects in Crowded Environments, the system keeps a lookout for the owner, whose presence in or disappearance from the scene defines the status of the bag, and decides the appropriate course of action; that system was successfully tested on the i-LIDS dataset.
2.4 CONCLUSION
In the above chapter, we discussed several research papers in the area of abandoned object detection. The research already done in this area shows that different types of algorithms exist for abandoned object detection, but they do not address how to find the owner of the luggage.
CHAPTER 3
PROJECT DESCRIPTION
3.1 INTRODUCTION
1. Intrusion detection
In this project, the Viola-Jones face detection algorithm (in MATLAB) is used to detect the owner; the intruder here is the person who left the bag or luggage in the respective scenario. When the object is detected as abandoned, either an alarm is raised or a message is sent to the authorities.
Baggage can be detected not only when it is carried into the scene by a person but also when it is dropped or thrown into the scene from off-camera; it can even be detected if it appears while the scene is temporarily blocked. The detection of abandoned objects offers three levels to match the requirements of the scene: unattended detection in sterile areas, detection in semi-crowded areas, and detection of stationary baggage in crowded scenarios. This is ideal for identifying the appearance of potential baggage bombs, rail-line obstruction hazards, traffic lane/runway debris, dumping/littering, fallen rocks, etc.
3.3 CONCLUSION
In this chapter, the project description for owner detection and abandoned object detection, along with the tools used in this project, has been presented.
CHAPTER 4
SOFTWARE
4.1 INTRODUCTION
The software used in this project is MATLAB R2015a.
You can trust the results of your computations. MATLAB, which has strong roots in the
numerical analysis research community, is known for its impeccable numerics. A MathWorks
team of 350 engineers continuously verifies quality by running millions of tests on the MATLAB
code base every day.
MATLAB does the hard work to ensure your code runs quickly. Math operations are
distributed across multiple cores on your computer, library calls are heavily optimized, and all
code is just-in-time compiled. You can run your algorithms in parallel by changing for-loops into
parallel for-loops or by changing standard arrays into GPU or distributed arrays. Run parallel
algorithms in infinitely scalable public or private clouds with no code changes.
MATLAB provides a desktop environment tuned for iterative engineering and scientific
workflows. Integrated tools support simultaneous exploration of data and programs, letting you
evaluate more ideas in less time.
You can interactively preview, select, and preprocess the data you want to import.
2D and 3D plotting functions enable you to visualize and understand your data and
communicate results.
The integrated editing and debugging tools let you quickly explore multiple options,
refine your analysis, and iterate to an optimal solution.
MATLAB and add-on toolboxes are integrated with each other and designed to work
together. They offer professionally developed, rigorously tested, field-hardened, and fully
documented functionality specifically for scientific and engineering applications.
Major engineering and scientific challenges require broad coordination to take ideas to
implementation. Every handoff along the way adds errors and delays.
MATLAB automates the entire path from research through production. You can:
Build and package custom MATLAB apps and toolboxes to share with other MATLAB
users.
Create standalone executables to share with others who do not have MATLAB.
Integrate with C/C++, Java, .NET, and Python. Call those languages directly from
MATLAB, or package MATLAB algorithms and applications for deployment within web,
enterprise, and production systems.
Convert MATLAB algorithms to C, HDL, and PLC code to run on embedded devices.
MATLAB is also a key part of Model-Based Design, which is used for multidomain simulation,
physical and discrete-event simulation, and verification and code generation.
Figure 4.1: A MATLAB window in which the code for the project is written
4.3 CONCLUSION
The software part uses MATLAB, i.e., the proposed system is tested using MATLAB. The different functions and features of MATLAB are discussed in this section.
CHAPTER 5
The figure above shows the block diagram (flow chart) of the proposed system. The problem is divided into different modules.
First, the video input is given to the system; in a real-time deployment this comes from a CCTV camera, but here the tests are done using recorded videos of real situations. The videos given as input have the same resolution as video from a CCTV camera.
In the second stage, the video is extracted into frames so it can be analyzed frame by frame; a single frame is analyzed at a time.
In the third stage, we check for any faces in the incoming frames; if any are found, that frame is saved to a memory location and the coordinates of the faces are kept in an array.
The next stage checks whether the frame is the first frame. If it is, it is stored as the background image after conversion from RGB to grayscale. The subsequent frames are also converted to grayscale and compared with the stored background image. If any change is detected, that is, a new object (moving or stationary) appears in a later frame, it is saved as the foreground image. If there is no change, the result of background subtraction will be zero, i.e., we get a black image.
After change detection, we have to ignore the moving objects and concentrate on stationary objects that were not in the scene before, so we remove the motion changes using a thresholding method and find the stationary objects. Stationary objects of very small area are ignored, because of the chance that they are noise, and only objects within a certain range of area are kept. After this operation the resulting image is a binary image showing detected objects as white and everything else as black.
Now we calculate the centroids of the detected objects, as well as of the faces that were saved earlier to a particular location.
Then we track the stationary objects in order to detect when one is abandoned. We keep an eye on each object; if an object remains alone, without the presence of its owner, for a particular time delay, the object is termed abandoned. When an object is detected as abandoned, we find the face centroid at minimum distance from the object centroid in order to find the face of the owner.
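The owner-matching step can be sketched as a nearest-centroid search (an illustrative Python version; centroids are assumed to be (x, y) tuples saved during the face detection stage):

```python
import math

def nearest_face(object_centroid, face_centroids):
    """Return the stored face centroid closest to an abandoned object's
    centroid, assumed here to belong to the owner."""
    return min(face_centroids,
               key=lambda face: math.dist(object_centroid, face))
```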
Finally, we raise an alarm or pass a message to the authority upon detection of the abandoned object, and display the frame from the video in which the face of the owner is marked inside a box.
If no object is detected as abandoned, the process is repeated, looking for any change in the incoming frames, i.e., looking for a foreground image. This process repeats.
The system architecture is shown in figure (5.1). It represents the process of the system.
The system obtains a video from a video surveillance camera (e.g. CCTV) or a video file, and then detects objects using image processing techniques. The output of the system is an event classification result acquired from a decision-making process. The result can be viewed via a user interface, which is a TV screen or a computer screen.
A. Video Acquisition
This processing unit imports the video from a video stream and captures it into a sequence of frames.
1) Video Stream – This method receives a streaming video from a file or a CCTV camera. Currently, the following video file formats are supported: mp4, avi, bmp, and others. At present, the system works with only one video from one camera, or one video file, at a time.
2) Sequence Frame – After the program reads the video file, it takes and processes each
image by querying frames from the video file.
3) Capture Image Displaying – A window is created in which the captured images from the camera are shown.
This section saves the faces in each frame, along with the face coordinates. The System object used to detect faces is the vision.CascadeObjectDetector System object, explained in detail below.
vision.CascadeObjectDetector:-
Detect objects using the Viola-Jones algorithm
Construction
detector = vision.CascadeObjectDetector creates a System object, detector, that detects
objects using the Viola-Jones algorithm. The ClassificationModel property controls the type of
object to detect. By default, the detector is configured to detect faces.
To detect a feature:
1. Define and set up your cascade object detector using the constructor.
2. Call the step method with the input image, I, the cascade object detector object, detector, and any optional properties. See the syntax below for using the step method.
Use the step syntax with input image, I, the selected Cascade object detector object, and any
optional properties to perform detection.
BBOX = step(detector, I) returns BBOX, an M-by-4 matrix defining M bounding boxes containing the detected objects. Each row of BBOX is a four-element vector, [x y width height], that specifies in pixels the upper-left corner and size of a bounding box. The input image I must be a grayscale or truecolor (RGB) image.
BBOX = step (detector, I, roi) detects objects within the rectangular search region specified
by roi. You must specify roi as a 4-element vector, [x y width height], that defines a rectangular
region of interest within image I. Set the 'UseROI' property to true to use this syntax.
The characteristics of Viola–Jones algorithm which make it a good detection algorithm are:
Robust – very high detection rate (true-positive rate) and consistently very low false-positive rate.
Real time – For practical applications at least 2 frames per second must be processed.
Face detection only (not recognition) - The goal is to distinguish faces from non-faces
(detection is the first step in the recognition process).
The features sought by the detection framework universally involve the sums of image
pixels within rectangular areas. As such, they bear some resemblance to Haar basis functions,
which have been used previously in the realm of image-based object detection. However, since
the features used by Viola and Jones all rely on more than one rectangular area, they are
generally more complex. Four different types of features are used in the framework. The value of any given feature is the sum of the pixels within clear rectangles subtracted from the sum of the pixels within shaded rectangles. Rectangular features
of this sort are primitive when compared to alternatives such as steerable filters. Although they
are sensitive to vertical and horizontal features, their feedback is considerably coarser.
1. Haar Features – All human faces share some similar properties. These regularities may be
matched using Haar Features.
The four features matched by this algorithm are then sought in the image of a face.
Rectangle features:
The integral image at location (x, y) is the sum of the pixels above and to the left of (x, y), inclusive: ii(x, y) = Σ_{x' ≤ x, y' ≤ y} i(x', y').
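This definition can be sketched directly, together with the constant-time rectangle sum it enables (an illustrative Python version; `rect_sum` takes inclusive pixel coordinates):

```python
def integral_image(img):
    """ii[y][x] = sum of img pixels above and to the left of (x, y), inclusive."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]                       # running sum of this row
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum over any rectangle using only four table lookups."""
    a = ii[top - 1][left - 1] if top > 0 and left > 0 else 0
    b = ii[top - 1][right] if top > 0 else 0
    c = ii[bottom][left - 1] if left > 0 else 0
    return ii[bottom][right] + a - b - c
```

This four-lookup rectangle sum is what makes evaluating the rectangular Haar-like features fast regardless of their size.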
Learning Algorithm
The speed with which features may be evaluated does not adequately compensate for their number, however. For example, in a standard 24x24 pixel sub-window, there are a total of more than 160,000 possible rectangle features, so a learning algorithm (a variant of AdaBoost) is used to select a small set of the best features, each forming a weak classifier. The threshold value θ_j and the polarity s_j of each weak classifier are determined in the training, as well as the coefficients α_j.
Input: Set of N positive and negative training images with their labels (x_i, y_i). If image i is a face, y_i = 1; if not, y_i = −1.
1. Normalize the weights w_i of the training images.
2. Apply each feature to each image in the training set, then find the optimal threshold and polarity that minimize the weighted classification error.
3. Assign a weight α_j to the chosen weak classifier that is inversely proportional to its error rate; in this way the best classifiers are considered more.
4. The weights for the next iteration, w_{i,j+1}, are reduced for the images that were correctly classified.
Cascade architecture
The stages of the cascade are evaluated in sequence, forming a degenerate tree. In the case of faces, the first classifier in the cascade, called the attentional operator, uses only two features to achieve a false negative rate of approximately 0% and a false positive rate of 40%. The effect of this single classifier is to reduce by roughly half the number of times the entire cascade is evaluated.
In cascading, each stage consists of a strong classifier. So all the features are grouped into
several stages where each stage has certain number of features.
The job of each stage is to determine whether a given sub-window is definitely not a face or may
be a face. A given sub-window is immediately discarded as not a face if it fails in any of the
stages.
The cascade architecture has interesting implications for the performance of the individual classifiers. Because the activation of each classifier depends entirely on the behavior of its predecessor, the false positive rate for an entire cascade of K stages is the product of the per-stage rates, F = f_1 × f_2 × ... × f_K, and likewise the overall detection rate is D = d_1 × d_2 × ... × d_K.
Thus, to match the false positive rates typically achieved by other detectors, each
classifier can get away with having surprisingly poor performance. For example, for a 32-stage
cascade to achieve a false positive rate of about 10^-6, each classifier need only achieve a false positive
rate of about 65%, since 0.65^32 ≈ 10^-6. At the same time, however, each classifier needs to be exceptionally capable
if it is to achieve adequate detection rates: to achieve a detection rate of about 90%,
each classifier in the aforementioned cascade needs to achieve a detection rate of approximately
99.7%, since 0.997^32 ≈ 0.91.
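The compound rates quoted above follow directly from multiplying the per-stage rates; a quick numerical check (shown in Python for convenience, since the per-stage arithmetic is language-independent):

```python
# Per-stage rates compound multiplicatively through a 32-stage cascade.
stages = 32
per_stage_fp = 0.65      # per-stage false positive rate
per_stage_det = 0.997    # per-stage detection rate

cascade_fp = per_stage_fp ** stages    # overall false positive rate
cascade_det = per_stage_det ** stages  # overall detection rate

print(f"cascade false positive rate ~ {cascade_fp:.1e}")   # ~ 1.0e-06
print(f"cascade detection rate      ~ {cascade_det:.2f}")  # ~ 0.91
```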
faceDetector = vision.CascadeObjectDetector;   % Viola-Jones face detector
bboxes = step(faceDetector, Img);              % Img: input image frame
figure, imshow(Img), title('Detected faces'); hold on
for i = 1:size(bboxes,1)
rectangle('Position',bboxes(i,:),'LineWidth',2,'EdgeColor','y');
end
hold off
C. Object Detection:
Object detection is the process of finding the area of interest as per the user's
requirement. Here we propose an algorithm for object detection using the frame-difference
method (one of the background subtraction algorithms). The steps are:
a) Read all the image frames generated from the video, which are stored in a variable or on a storage
medium.
b) Convert each coloured frame into a greyscale image using rgb2gray().
c) Store the first frame as the background image.
d) Calculate the difference |frame[i] - background frame|.
e) If the difference is greater than a threshold (rth), the pixel is considered part of the
foreground; otherwise it is background (no change is detected).
f) Update the value of i by incrementing it by one.
g) Repeat steps d) to f) up to the last image frame.
h) End the process.
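The difference and threshold steps amount to a per-pixel thresholded absolute difference. A minimal sketch of that computation (written in Python for convenience, with tiny hand-made frames standing in for real video data; frame_difference is an illustrative helper, not from the report's code):

```python
# Frame differencing: a pixel is foreground if it differs from the
# background frame by more than a threshold rth.
def frame_difference(frame, background, rth):
    """Return a binary mask: 1 = foreground, 0 = background."""
    return [[1 if abs(f - b) > rth else 0
             for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[10, 10, 10],
              [10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 10, 10],
              [10, 90, 95],   # a bright object has appeared here
              [10, 92, 10]]

mask = frame_difference(frame, background, rth=30)
print(mask)  # [[0, 0, 0], [0, 1, 1], [0, 1, 0]]
```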
D. Post Processing:
The object detected in the previous phase may suffer from connectivity problems, and it
may also contain holes that are useless for object representation. Therefore some post-processing
is needed to reduce the problems of holes and of pixel connectivity within the object region.
Mathematical morphological analysis is one such post-processing approach: it enhances the
segmented image in order to improve the result. In the proposed method we apply erosion and
dilation iteratively, so that the object appears clearly in the foreground while the remaining
useless blobs are removed. Morphological operations are useful for obtaining meaningful
components from the image, such as the object boundary, region, shape and skeleton.
Dilation: Dilation is an increasing transform, used to fill small holes and narrow chasms in the
objects.
Erosion: Erosion, as a morphological transformation, can be used to find the contours of
objects; it shrinks or reduces object regions.
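The effect of combining dilation and erosion can be illustrated on a tiny binary blob. The sketch below (Python for convenience, using a 3×3 square structuring element and illustrative helper names; the report itself uses MATLAB's imdilate and friends) fills a one-pixel hole by dilating and then eroding:

```python
# Binary dilation and erosion with a 3x3 square structuring element.
def neighbours(img, r, c):
    rows, cols = len(img), len(img[0])
    return [img[i][j]
            for i in range(max(r - 1, 0), min(r + 2, rows))
            for j in range(max(c - 1, 0), min(c + 2, cols))]

def dilate(img):
    # A pixel becomes 1 if any pixel in its 3x3 neighbourhood is 1.
    return [[1 if any(neighbours(img, r, c)) else 0
             for c in range(len(img[0]))] for r in range(len(img))]

def erode(img):
    # A pixel stays 1 only if its whole 3x3 neighbourhood is 1.
    return [[1 if all(neighbours(img, r, c)) else 0
             for c in range(len(img[0]))] for r in range(len(img))]

blob = [[1, 1, 1],
        [1, 0, 1],   # a one-pixel hole
        [1, 1, 1]]

closed = erode(dilate(blob))   # closing = dilation then erosion: fills the hole
print(closed)  # [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```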
E. Feature Selection:
Features such as the centroid of an object and its height and width are selected so that
it is easy to plot the location of non-rigid bodies/objects from frame to frame. The proposed
method evaluates the centroid of the detected object in each frame. It is assumed that after the
morphological operations there will not be any false objects. The centroid of the object in a
two-dimensional frame is then calculated as the average of the x and y coordinates of the pixels
belonging to the object.
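The centroid computation described above, as an average of pixel coordinates (a Python sketch with an illustrative helper name):

```python
# Centroid of a blob = average of the x and y coordinates of its pixels.
def centroid(pixels):
    """pixels: list of (x, y) coordinates belonging to the object."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    return (cx, cy)

blob = [(2, 3), (3, 3), (2, 4), (3, 4)]  # a 2x2 block of object pixels
print(centroid(blob))  # (2.5, 3.5)
```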
F. Object Representation:
Here we use the centroid and a rectangle covering the object boundary to
represent the object. After calculating the centroid, the width Wi and height Hi of the
object are found by extracting the positions Pxmax and Pxmin of the pixels that have the maximum and
minimum X coordinates belonging to the object. Similarly, for the Y coordinates, calculate
Pymax and Pymin.
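The width and height then follow directly from the extreme coordinates; a sketch of the computation (Python, returning a MATLAB-style [x y w h] box; bounding_box is an illustrative name):

```python
# Bounding box of a blob from the extreme pixel coordinates.
def bounding_box(pixels):
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    px_min, px_max = min(xs), max(xs)
    py_min, py_max = min(ys), max(ys)
    width = px_max - px_min + 1    # +1: inclusive of both edge pixels
    height = py_max - py_min + 1
    return (px_min, py_min, width, height)   # MATLAB-style [x y w h]

blob = [(2, 3), (5, 3), (2, 7), (4, 5)]
print(bounding_box(blob))  # (2, 3, 4, 5)
```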
G. Trajectory Plot:
After object detection using the frame-differencing method, the detected
components are given as input to the tracking process to plot the trajectory. The frame-differencing
algorithm gives all the pixel values of the detected object, and the centroid of the
object is calculated using equations 1 and 2. Here the input is the pixel values of the
object and the output is the rectangular area around the object. This process calculates
the centroid, height and width of the object for the purpose of trajectory plotting.
H. Segmentation:
Segmentation is the process of detecting changes and extracting the relevant changes for
further analysis and quantification. Pixels that have changed from their previous positions are referred to as
"foreground pixels"; those that do not change are called "background pixels". The segmentation
method used here is background subtraction: the image pixels remaining after the background has
been subtracted are the foreground pixels. The key factor used to identify foreground
pixels is the "degree of change", which can vary depending on the
application.
The segmentation result is one or more foreground blobs. A blob is simply a collection of
connected pixels.
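Extracting blobs as sets of connected pixels is a connected-component labelling problem, sketched below with a simple flood fill (Python for convenience, 4-connectivity; label_blobs is an illustrative helper, the report's code uses MATLAB's bwlabel):

```python
# Connected-component labelling: find blobs in a binary mask (4-connectivity).
def label_blobs(mask):
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not labels[r][c]:
                current += 1                      # start a new blob
                stack = [(r, c)]
                while stack:                      # flood fill the blob
                    i, j = stack.pop()
                    if (0 <= i < rows and 0 <= j < cols
                            and mask[i][j] and not labels[i][j]):
                        labels[i][j] = current
                        stack += [(i-1, j), (i+1, j), (i, j-1), (i, j+1)]
    return labels, current

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
labels, n = label_blobs(mask)
print(n)       # 2 blobs
print(labels)  # [[1, 1, 0, 0], [0, 1, 0, 2], [0, 0, 0, 2]]
```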
I. Tracking
The next process in object detection is tracking the different blobs so as to find which
blobs correspond to abandoned objects. The first step in this process is to create a set, Track,
whose elements have three variables: blobProperties, hitCount and missCount. The next step is
to analyse the incoming image for all its blobs. If the change in area and the change in centroid
position, compared to any element of the Track set, are below a threshold value, we
increment that element's hitCount and reset its missCount to zero; otherwise we create a new element in
the Track set, initialising the blobProperties variable with the properties of the incoming blob and
setting hitCount and missCount to zero. We then run a loop through all the elements of the
set: if hitCount goes above a user-defined threshold value, an alarm is triggered; if the
missCount goes above a threshold, we delete the element from the set. These two steps are
repeated until there are no incoming images.
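The Track-set bookkeeping described above can be sketched as follows (Python for convenience; blob properties are reduced to area and centroid, and all threshold values are illustrative, not the report's actual parameters):

```python
# Each track keeps blob properties plus hit/miss counters, as described above.
def update_tracks(tracks, blobs, area_thr=50, dist_thr=10, alarm_thr=10):
    """tracks, blobs: lists of dicts with keys 'area', 'cx', 'cy' (+ counters)."""
    alarms = []
    for blob in blobs:
        for t in tracks:
            close = (abs(t['area'] - blob['area']) < area_thr and
                     abs(t['cx'] - blob['cx']) < dist_thr and
                     abs(t['cy'] - blob['cy']) < dist_thr)
            if close:
                t['hit'] += 1           # same stationary blob seen again
                t['miss'] = 0
                break
        else:
            blob.update(hit=0, miss=0)  # unseen blob: create a new track
            tracks.append(blob)
    for t in tracks:
        if t['hit'] > alarm_thr:        # stationary long enough: raise alarm
            alarms.append(t)
    tracks[:] = [t for t in tracks if t['miss'] <= 3]  # drop stale tracks
    return alarms

tracks = []
blob = {'area': 900, 'cx': 40, 'cy': 60}
for _ in range(12):                     # the same blob stays put for 12 frames
    alarms = update_tracks(tracks, [dict(blob)])
print(len(alarms))  # 1 -> alarm raised for the stationary object
```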
We use the Raise-alarm flag from the previous units and highlight the part of the video for
which the alarm has been raised. We also display the face of the person who abandoned the
luggage in that particular location. The face of the person is captured by the following procedure:
we save the faces and their coordinates from each and every frame and calculate their centroids
along with the object centroid. When an object is detected as abandoned, we find the face
centroid at the minimum distance from the object centroid, display that particular frame from
the video, and indicate the face of the person inside a box.
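The minimum-distance matching between the object centroid and the stored face centroids can be sketched as (Python for convenience; nearest_face is an illustrative helper name):

```python
# Owner identification: pick the stored face whose centroid is closest
# to the abandoned object's centroid (Euclidean distance).
import math

def nearest_face(object_centroid, face_centroids):
    ox, oy = object_centroid
    return min(range(len(face_centroids)),
               key=lambda i: math.hypot(face_centroids[i][0] - ox,
                                        face_centroids[i][1] - oy))

faces = [(10, 12), (80, 75), (200, 40)]   # face centroids saved per frame
owner = nearest_face((85, 70), faces)
print(owner)  # 1 -> the face at (80, 75) is nearest the object
```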
5.3 CONCLUSION
An object tracking algorithm for video sequences, based on image segmentation and pattern
matching of the segmented objects between frames in a simple feature space, has been proposed.
CHAPTER 6
SIMULATION AND RESULTS
6.1 INTRODUCTION
The simulation of owner detection and abandoned object detection is done using
MATLAB R2015a, considering the background image in a video of 300 frames. The
simulation results for all the techniques are explained below. Initially the moving objects in the video
images are tracked based on image segmentation, background subtraction and object detection
techniques. Abandoned object detection: sample frames from the input video sequence are shown
in Figure 6.1. The abandoned objects that are detected are shown in Figure 6.2a; the abandoned
object found is marked in red. The results obtained for owner detection are shown in Figure 6.2b;
the face of the owner is marked with a green box.
Figure 6.1b: Sample frames from the video showing face detection
Figure 6.2b: Identified owner face (the owner's face is bounded by a green box)
6.2 ADVANTAGES
Produces few false alarms and missed detections
Provides real-time security
No special sensors are required
It can also identify the owner's face
6.3 DISADVANTAGES
In a high-density scenario, the object may be hidden from the camera view for most of the
time, leading to a failure in detection.
6.4 CONCLUSION
Simulation of the proposed system using MATLAB R2015a has been carried out and the results are
satisfactory. The simulation results using a video are shown in the figures above, which implies that
the system works well on different video streams in practical situations. A real-time implementation
was also attempted, but the owner of the luggage was sometimes missed because of
variations in some parameters; object detection itself worked perfectly. This system can be considered
a foundation for a truly robust framework that requires only a bit of calibration to perform well
in practically any scenario.
CHAPTER 7
FUTURE SCOPE & CONCLUSION
Owing to the modular nature of this system, it is quite easy to add more sophisticated
methods to any of its modules. The relative simplicity of the tracking algorithm suggests that a
DSP implementation is possible.
We are glad to have completed this project work successfully and satisfactorily. One of the
shortcomings of the system was occlusion, since this project uses a single camera view from a fixed
point; occlusion can be avoided by using multiple camera views and a robust algorithm for
object detection. There were also some problems in face detection due to environmental
changes, but the testing results gave a satisfactory output. We therefore conclude that this
system can be considered a foundation for a truly robust framework that requires only a bit of
calibration to perform well in practically any scenario.
APPENDIX
MATLAB code for proposed system
clc;clear all;close all;
videoFReader = vision.VideoFileReader('Train.avi');
videoPlayer = vision.VideoPlayer;
count=1;cor_mat=[];centroids={};count_cent=1;prev_cent=[];flag_check=1;flag=0;
faceDetector = vision.CascadeObjectDetector();perm_flag=0;got_count=600;
bb=[];face_frame=[];face_count=1;
permanent=[];
while ~isDone(videoFReader) % main loop over video frames (closed by the 'end' before release)
frame = step(videoFReader);
bbox = step(faceDetector,frame);
if ~isempty(bbox) && count<got_count
bb=[bb ;bbox];face_frame{face_count}=frame;
face_count=face_count+1;
end
videoOut = insertObjectAnnotation(frame,'rectangle',bbox,'Face');
step(videoPlayer, videoOut);
if count==150
clc;
end
% figure(2),imshow(frame)
if count==1
prev_frame=frame;
bcknd=frame;
% save bcknd bcknd
count=count+1;
continue;
end
stat_change=abs(frame-bcknd);
bw_stat_change=im2bw(stat_change,.1);
fnl_chnge=abs(stat_change-chnge);
fnl_chnge(pos_zeros)=0;
% figure(2),imshow(fnl_chnge)
bw=im2bw(chnge,.1);
[L,noc]=bwlabel(mask);
props=regionprops(L);
cent=cat(1,props.Centroid);
bound_box=cat(1,props.BoundingBox);
x=cent(:,1);
y=cent(:,2);
% hold on
% plot(x,y,'r*')
% hold off
% pause(.2);
bw2=im2bw(fnl_chnge,.1);
mask2 = imdilate(bw2, strel('rectangle', [17,3]));%was 17 instead 27
mask2 = imopen(mask2, strel('rectangle', [5,5]));
mask2 = imfill(mask2, 'holes');
% figure(1),imshow(mask2)
[L_stat,noc_stat]=bwlabel(mask2);
figure(2),imshow(frame)
if noc_stat==0
pause(.1);
prev_frame=frame;
count=count+1;
continue;
end
%figure(1),imshow(label2rgb(L))
props2=regionprops(L_stat);
area2=cat(1,props2.Area);
cent2=cat(1,props2.Centroid);
bound_box2=cat(1,props2.BoundingBox);
x2=cent2(:,1);
y2=cent2(:,2);
%fnl_mask=abs(mask2-mask);
%figure(1),imshow(fnl_mask)
%pause(.3);
D = pdist2(cent,cent2,'euclidean');
[r,c]=find(D<40);
val=unique(c);
for i=1:length(val)
L_stat(L_stat==val(i))=0;
end
L_stat=imclearborder(L_stat);
[L_stat,noc_stat]=bwlabel(L_stat);
props_stat=regionprops(L_stat);
area_stat=cat(1,props_stat.Area);
pos=area_stat<800;
pos=find(pos);
for i=1:length(pos)
L_stat(L_stat==pos(i))=0;
end
%
pos_gr=area_stat>900;
pos_gr=find(pos_gr);
for i=1:length(pos_gr)
L_stat(L_stat==pos_gr(i))=0;
end
%
[L_stat,noc_last]=bwlabel(L_stat);
L_stat2=L_stat;
if noc_last>0
dup=rgb2gray(frame);
dupback=rgb2gray(bcknd);
props_L=regionprops(L_stat);
bound_L=cat(1,props_L.BoundingBox);
for i=1:noc_last
z=zeros(size(L_stat));
c=bound_L(i,1);r=bound_L(i,2);c_last=c+bound_L(i,3);r_last=r+bound_L(i,4);
z(floor(r):floor(r_last),floor(c):floor(c_last))=1;
dup(~z)=0;
dupback(~z)=0;
b_diff=dup-dupback;
b_diff=im2bw(b_diff,.1);
pr=regionprops(b_diff,'Area');
if ~isempty(pr)
ar=pr.Area;
if ar<29
pos_L=find(L_stat==i);
L_stat2(pos_L)=0;
end
end
end
hold on
props_last=regionprops(L_stat2);
bound=cat(1,props_last.BoundingBox);
cent_last=cat(1,props_last.Centroid);
if ~isempty(bound)
bound_one=bound(1,:);
for i=1:size(bound,1)
x=bound(i,1);y=bound(i,2);w=bound(i,3);h=bound(i,4);
line([x,x+w],[y,y]);
line([x,x],[y,y+h]);
line([x,x+w],[y+h,y+h]);
line([x+w,x+w],[y,y+h]);
end
plot(cent_last(:,1),cent_last(:,2),'r*')
centroids=cent_last(1,:);
if flag_check==1
prev_send=centroids;D=100;flag_check=0;
else
D = pdist2(prev_send,centroids,'euclidean');
prev_send=centroids;
end
if D<5
flag=flag+1;
else
flag=0;
end
if flag>=10
permanent=bound_one;got_count=count-10;
x=floor(permanent(1,1));y=floor(permanent(1,2));w=floor(permanent(1,3));h=floor(permanent(1,4));
perm_reg=frame(y:y+h,x:x+w,:);
bw_perm=im2bw(perm_reg,.6);
% figure(1),imshow(perm_reg)
flag=0; perm_flag=1;
% line([x,x+w],[y,y]);
% line([x,x],[y,y+h]);
% line([x,x+w],[y+h,y+h]);
% line([x+w,x+w],[y,y+h]);
end
% if centroids{}
% end
count_cent=count_cent+1;
hold off
end
pause(.01);
% pause(.3);
% hold on
% plot(x2,y2,'r*')
% hold off
% pause(.2);
% figure(2),imshow(fnl_chnge)
% pause(.0001)
% step(videoPlayer,frame);
% % figure(1),imshow(frame)
% % pause(.00001);
end
hold on
if ~isempty(permanent)
x=floor(permanent(1,1));y=floor(permanent(1,2));w=floor(permanent(1,3));h=floor(permanent(1,4));
cur_reg=frame(y:y+h,x:x+w,:);
bw_cur=im2bw(cur_reg,.6);
% figure(2),imshow(cur_reg)
if corr2(cur_reg(:,:,1),perm_reg(:,:,1))>.5
line([x,x+w],[y,y],'Color',[1 0 0]);
line([x,x],[y,y+h],'Color',[1 0 0]);
line([x,x+w],[y+h,y+h],'Color',[1 0 0]);
line([x+w,x+w],[y,y+h],'Color',[1 0 0]);
pause(.2);
end
hold off
end
prev_frame=frame;
count=count+1;
end
release(videoFReader);
release(videoPlayer);
if perm_flag==1
dist_values=sqrt((bb(:,1)-x).^2 + (bb(:,2)-y).^2);
pos=find(dist_values==min(dist_values));
attacker_cord=bb(pos,:);
face=face_frame{1,pos};
figure,imshow(face),title('Identified Owner Face')
x=attacker_cord(1,1); y=attacker_cord(1,2); w=attacker_cord(1,3); h=attacker_cord(1,4);
line([x,x+w],[y,y],'Color',[0 1 0]);
line([x,x],[y,y+h],'Color',[0 1 0]);
line([x,x+w],[y+h,y+h],'Color',[0 1 0]);
line([x+w,x+w],[y,y+h],'Color',[0 1 0]);
%%%%%%%%ALARM
N=10000;
s=zeros(N,1);
for a=1:N
s(a)=tan(a); %*sin(-a/10);
end
Fs=2000; %increase value to speed up the sound, decrease to slow it down
soundsc(s,Fs)
sound(s)
end
%% Real-time implementation using a webcam ('vid' is an Image Acquisition Toolbox videoinput object)
set(vid,'ReturnedColorSpace','grayscale');
% manual trigger
triggerconfig(vid, 'manual');
start(vid);
preview(vid);
pause();
stoppreview(vid);face_count=1;perm_flag=0;
bb=[];bbox=[];
for count=1:50
frame =getsnapshot(vid);
curr_frame=double(frame);
figure(1),imshow(uint8(curr_frame))
if perm_flag==0
%returns coordinates of face
bbox = step(faceDetector,frame);
end
if ~isempty(bbox)
hold on
x=bbox(1,1);y=bbox(1,2);w=bbox(1,3);h=bbox(1,4);
%saving coordinates of faces in each iteration
bb=[bb;bbox];
%saving the frame in which face is detected in each iteration
face_frame{face_count}=frame;
face_count=face_count+1;
%drawing rectangle around face
line([x,x+w],[y,y],'Color',[0 1 0]);
line([x,x],[y,y+h],'Color',[0 1 0]);
line([x,x+w],[y+h,y+h],'Color',[0 1 0]);
line([x+w,x+w],[y,y+h],'Color',[0 1 0]);
pause(.3);
hold off
end
motion_chng=abs(curr_frame-prev_frame);
tmp=max(motion_chng(:));
if tmp>70
thresh_motion=(tmp/2)-20;
else
thresh_motion=tmp;
end
motion_img=motion_chng>thresh_motion;
motion_img=imfill(motion_img,'holes');
% figure(3),imshow(motion_img)
[L,noc]=bwlabel(motion_img);
fnl_chng=stat_chnge;
if noc>0
%returning some properties of each white area
props=regionprops(L);
%accessing bounding boxes of objects
bound=cat(1,props.BoundingBox);
for bound_count=1:noc
x=floor(bound(bound_count,1));y=floor(bound(bound_count,2));x_limit=floor(bound(bound_count,3))+x;y_limit=floor(bound(bound_count,4))+y;
if x<1
x=1;
end
if x_limit>size(L,2)
x_limit=size(L,2);
end
if y<1
y=1;
end
if y_limit>size(L,1)
y_limit=size(L,1);
end
fnl_chng(y:y_limit,x:x_limit)=0;
end
end
figure(4),imshow(fnl_chng)
fnl_bw=bwareaopen(fnl_chng,2000);
fnl_bw=imfill(fnl_bw,'holes');
prop_final=regionprops(fnl_bw);
fnl_bound=cat(1,prop_final.BoundingBox);
if isempty(fnl_bound)
prev_frame=curr_frame;
continue
end
xf=fnl_bound(:,1);yf=fnl_bound(:,2);w=fnl_bound(:,3);h=fnl_bound(:,4);
if count>6
perm_flag=1;
figure(5),imshow(uint8(curr_frame))
pause(.3);
hold on
line([xf,xf+w],[yf,yf],'Color',[1 0 0]);
line([xf,xf],[yf,yf+h],'Color',[1 0 0]);
line([xf,xf+w],[yf+h,yf+h],'Color',[1 0 0]);
line([xf+w,xf+w],[yf,yf+h],'Color',[1 0 0]);
hold off
cent=cat(1,prop_final.Centroid);
cent=cent(:,1);
%break;%comment if error
end
prev_frame=curr_frame;
end
stop(vid);
delete(vid);
if perm_flag==1
x=cent(1,1);y=cent(1,2);
dist_values=sqrt((bb(:,1)-x).^2 + (bb(:,2)-y).^2);
pos=find(dist_values==min(dist_values));
attacker_cord=bb(pos,:);
face=face_frame{1,pos};
figure,imshow(face),title('Identified Owner Face')
hold on
x=attacker_cord(1,1); y=attacker_cord(1,2); w=attacker_cord(1,3); h=attacker_cord(1,4);
line([x,x+w],[y,y],'Color',[1 0 0]);
line([x,x],[y,y+h],'Color',[1 0 0]);
line([x,x+w],[y+h,y+h],'Color',[1 0 0]);
line([x+w,x+w],[y,y+h],'Color',[1 0 0]);
hold off
fprintf(ser_obj,'%s','AT');
fprintf(ser_obj,13);
pause(15);
fprintf(ser_obj,'%s','AT+CMGF=1');
fprintf(ser_obj,13);
pause(15);
fprintf(ser_obj,'%s','AT+CMGS="9562421292"');
fprintf(ser_obj,13);
pause(15);
fprintf(ser_obj,26); % Ctrl-Z (character 26) terminates and sends the SMS
fclose(ser_obj);
delete(ser_obj);
end