
MOVING OBJECT RECOGNITION, TRACKING AND DESTRUCTION

CHAPTER 1.

INTRODUCTION

This project is proposed for intrusion detection and for tracking and destroying the intruding
object. The system will be mounted at a suitable place from which a complete and clear view of
the area under surveillance can be captured with a camera. The image will thus be captured and
processed, and the desired action will be performed. Object segmentation separates regions of
interest in image data that identify real-world objects. Segmenting and tracking regions of
arbitrary size within a scene allows the application to focus complex tasks like object
recognition on a smaller spatial domain of the entire scene, which reduces the processing time
required to identify the object of interest. Reducing the spatial domain of the image decreases
the computational resources necessary for the detailed analyses required for object recognition.

The system is provided with a high-resolution camera, image processing hardware, a
microcontroller, two servo motors and other supplementary hardware and mechanisms [1]. The
image processing hardware acquires images captured by the camera at some predefined interval
of time and processes every captured image to detect intrusion. If an intrusion is detected, the
image processing hardware extracts the features of the intruding object and compares them with
the features of objects stored in a database. A database has been collected of the objects that are
to be destroyed. If a match is found between the intruding object and one of the objects from the
database, the object is said to be recognized. The system then tracks that object to calculate its
velocity of motion. This velocity information is needed to decide the angle and the time instant
at which a projectile is to be launched at the intruding object to destroy it. The position of the
intruding object in the form of x-y co-ordinates is found and sent to the microcontroller, which
controls the angle of rotation of the two servo motors to position the cannon aiming at the
intruding object. Finally, the cannon is fired.

Object tracking is an important task within the field of computer vision. The proliferation of
high-powered computers, the availability of high-quality and inexpensive video cameras, and the
increasing need for automated video analysis have generated a great deal of interest in object
tracking algorithms. In its simplest form, tracking can be defined as the problem of estimating
the trajectory of an object in the image plane as it moves around a scene. In other words, a
tracker assigns consistent labels to the tracked objects in different frames of a video [2].

Additionally, depending on the tracking domain, a tracker can also provide object centric
information, such as orientation, area, or shape of an object.

For object tracking and classification, the basic step is to obtain an initial mask for a moving
object and to preprocess that mask. Normally the mask is affected by "salt-and-pepper" noise.
We apply morphological filters based on combinations of dilation and erosion to reduce the
influence of noise, followed by a connected component analysis for labeling each moving object
region. Very small regions are discarded. At this stage we calculate the following features for
each moving object region:
 Bounding rectangle: the smallest isothetic rectangle that contains the object region. We keep
a record of the coordinates of the upper-left and lower-right positions, which also provides size
information (width and height of each rectangle).
 Color: the mean R, G, B values of the moving object.
 Center: we use the center of the bounding box as a simple approximation of the centroid of a
moving object region.
 Velocity: defined as the movement in pixels/second in both the horizontal and vertical
directions.
In order to track moving objects accurately, especially when objects are partially occluded and
the position of the camera is not restricted to any predefined viewing angle, these features alone
are insufficient. Further features have to be added that are robust and that can be extracted even
when partial occlusion occurs.
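As a minimal MATLAB sketch of this stage (not the authors' exact code), assuming the binary
motion mask 'mask', the color frame 'frameRGB', and the tuning values 'minArea', 'prevCenter'
and 'fps' are supplied by the surrounding application:

% Clean the motion mask with morphological opening (erosion followed by
% dilation), label the connected components, and compute the features above.
mask = imdilate(imerode(mask, strel('square', 3)), strel('square', 3));
[labels, n] = bwlabel(mask);                   % connected component analysis
stats = regionprops(labels, 'Area', 'BoundingBox');
R = frameRGB(:, :, 1); G = frameRGB(:, :, 2); B = frameRGB(:, :, 3);
for k = 1:n
    if stats(k).Area < minArea, continue; end  % discard very small regions
    bb = stats(k).BoundingBox;                 % [x y width height]
    region = (labels == k);
    color = [mean(double(R(region))), mean(double(G(region))), ...
             mean(double(B(region)))];         % mean R, G, B of the region
    center = [bb(1) + bb(3)/2, bb(2) + bb(4)/2]; % bounding-box center
    velocity = (center - prevCenter) * fps;    % pixels/second, x and y
end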

One can simplify tracking by imposing constraints on the motion and/or appearance of objects.
For example, almost all tracking algorithms assume that the object motion is smooth with no
abrupt changes. One can further constrain the object motion to be of constant velocity or constant
acceleration based on a priori information. Prior knowledge about the number and the size of
objects, or the object appearance and shape, can also be used to simplify the problem.

CHAPTER 2.

LITERATURE SURVEY

Motion detection and object tracking form a very rich research area in computer vision. The
main issues that make this research area difficult are:

1. Computational Expense: If an algorithm for detecting motion and tracking objects is to be
applied to real-time applications, then it needs to be computationally inexpensive enough that a
modern PC has the power to run it. Yet many algorithms in this research area are very
computationally expensive, since they require computing values for each pixel in each image
frame.

2. Moving Background Rejection: The algorithm also needs to be able to reject moving
background, such as a swaying tree branch, and not mistakenly recognize it as a moving object
of interest. Misclassification can easily occur if the area of moving background is large
compared to the objects of interest, or if the speed of the moving objects is as slow as that of the
background.

3. Tracking Through Occlusion: Many algorithms have devised ways of becoming robust
against small occlusions of objects of interest, but most algorithms still fail to track an object if
it is occluded for a long period of time.

4. Modeling Targets of Interest: Many algorithms use a reasonably detailed model of the targets
in object detection and consequently require a large number of pixels on target in order to detect
and track them properly. This is a problem for real-world applications, where it is frequently
impossible to obtain a large number of pixels on target.

5. Adapting to Illumination Variation: Real-world applications will inevitably have variation in
scene illumination that a motion detection algorithm needs to cope with. Yet if an algorithm is a
purely intensity-based method, it will fail under illumination variation.

6. Analyzing Object Motion: After objects have been correctly classified and tracked, an
algorithm may want to analyze the object motions, such as the gait of a moving human or the
speed of a car (is it speeding or not?). This can be difficult, especially for non-rigid objects such
as humans, if the object view is not in the right perspective for the algorithm.

7. Adapting to Camera Motion: Detecting moving entities from mobile camera video streams
still remains a challenge in this research area.
The rest of this section describes the contributions made by recently published papers and how
they tackle some of the above problems.

Automatic Object Recognition, Tracking and Destruction for Military Purpose, by Amit
Kenjale [1]

In this paper, the system is provided with a high-resolution camera, image processing
hardware, a microcontroller, servo motors and other supplementary hardware and mechanisms.
The image processing hardware keeps capturing images through the camera at some predefined
interval of time, then processes every captured image to detect intrusion.

If an intrusion is detected, the image processing hardware extracts the features of the
intruding object and compares them with the features of objects stored in a database collected of
the objects that are to be destroyed. If a match is found between the intruding object and one of
the objects from the database, the system keeps tracking that object to calculate its velocity of
motion.

This velocity information is needed to decide the angle and the time instant at which a
projectile is to be launched at the intruding object to destroy it. After these calculations, the
position of the intruding object in the form of x-y co-ordinates is found, and the cannon is aimed
at that object accordingly with the help of two servo motors driven by the microcontroller.

Object Classification and Tracking in Video Surveillance, by Qi Zang and Reinhard Klette [2]
In this paper, the authors review previous research on moving object tracking techniques,
analyze some experimental results, and finally provide conclusions for improved performance of
traffic surveillance systems. One stationary camera has been used. Many applications have been
developed for monitoring public areas such as offices, shopping malls or traffic highways. In
order to monitor normal activities in these areas, the tracking of pedestrians and vehicles plays
the key role in video surveillance systems. These tracking techniques are classified into four
categories:
1. Tracking based on a moving object region
2. Tracking based on an active contour of a moving object
3. Tracking based on a moving object model
4. Tracking based on selected features of moving objects
Besides these four main categories, there are also other approaches to object tracking, for
example a tracking method based on wavelet analysis.

Moving Object Detection, Tracking and Classification for Smart Video Surveillance, by
Yiğithan Dedeoğlu [3]
This paper presents a smart visual surveillance system with real-time moving object detection,
classification and tracking capabilities. The system operates on both color and gray-scale video
imagery from a stationary camera. In the proposed system, moving object detection is handled
by the use of an adaptive background subtraction scheme which works reliably in indoor and
outdoor environments. Two other object detection schemes, temporal differencing and adaptive
background mixture models, are also presented for comparison of performance and detection
quality. The proposed system is able to distinguish transient and stopped foreground objects;
classify detected objects into different groups such as human, human group and vehicle; track
objects and generate trajectory information even in multi-occlusion cases; and detect fire in
video imagery.

The system is assumed to work in real time as part of a video-based surveillance system. The
computational complexity, and even the constant factors, of the algorithms used are important
for real-time performance. Hence, the decisions on selecting computer vision algorithms for the
various problems are affected by their computational run-time performance as well as their
quality. Furthermore, the system's use is limited to stationary cameras; video inputs from
pan/tilt/zoom cameras, where the view frustum may change arbitrarily, are not supported. The
system is initialized by feeding it video imagery from a static camera monitoring a site. Most of
the methods are able to work on both color and monochrome video imagery.

CHAPTER 3.
SYSTEM IMPLEMENTATION

3.1 SYSTEM OVERVIEW

This system is proposed for detecting intrusions, tracking intruding object and destroying it. The
system will be fixed at some suitable place, from which complete and clear view of the area
under surveillance can be captured with camera. The system is provided with a high resolution
camera, image processing hardware, microcontroller, two servo motors and other supplementary
hardware and mechanisms.

3.2 BLOCK DIAGRAM

Fig 1. Block Diagram for Complete System

The image processing algorithm acquires images captured by the camera at some
predefined interval of time and processes every captured image to detect intrusion. If an
intrusion is detected, the image processing hardware extracts the features of the intruding object
and compares them with the features of objects stored in the database of objects that are to be
destroyed. If a match is found between the intruding object and one of the objects from the
database, the object is said to be recognized. The system then tracks that object to calculate its
velocity of motion. This velocity information is needed to decide the angle and the time instant
at which a projectile is to be launched at the intruding object to destroy it. The position of the
intruding object in the form of x-y co-ordinates is found and sent to the microcontroller, which
controls the angle of rotation of the two servo motors to position the cannon aiming at the
intruding object. Finally, the cannon is fired.

3.2.1 CAMERA

The camera used for experimentation is the "iBall Face2Face", a USB webcam with 640 × 480
resolution and 64M color depth. A night-vision camera, or a camera with different resolution
and color depth, can be used depending upon the requirements of the application. The camera is
fixed on the system and should not move from its place once the background is captured;
otherwise it will adversely affect the accuracy of the system, as subtraction is used to detect the
intruding object.

Fig 2. iBall Face2Face C1.3 MP

Description

The iBall Face2Face C1.3 web camera, with interpolated 1.3 MP resolution and a 5G wide-
angle lens, provides smooth video and lets you enjoy the clarity of web video.

Features of iBall Face2Face C1.3

 1.3 Mega Pixel Interpolated Resolution


 High quality 5G wide angle lens
 6 Auto Lighting LEDs for night vision
 Snapshot button for still image capture
 Video Resolution: 800 × 600 pixels
 Image Resolution: 640 × 480, 800 × 600, 1280 × 1024 pixels
 Interface: USB 1.1, compliant to USB 2.0

3.2.2 IMAGE PROCESSING SOFTWARE

3.2.2.1 COLOR SPACE

A color space is a method by which we can specify, create and visualize color. As
humans, we may define a color by its attributes of brightness, hue and colorfulness. A computer
may describe a color using the amounts of red, green and blue phosphor emission required to
match a color.

A printing press may produce a specific color in terms of the reflectance and absorbance
of cyan, magenta, yellow and black inks on the printing paper. A color is thus usually specified
using three co-ordinates, or parameters. These parameters describe the position of the color
within the color space being used. Different color spaces are better for different applications, for
example some equipment has limiting factors that dictate the size and type of color space that
can be used.

3.2.2.2 HSL (HUE SATURATION AND LIGHTNESS)

This represents a wealth of similar colour spaces; alternative names include HSI
(intensity), HSV (value), HCI (chroma / colorfulness), HVC, TSD (hue saturation and darkness)
etc. Most of these colour spaces are linear transforms from RGB and are therefore device
dependent and non–linear. Their advantage lies in the extremely intuitive manner of specifying
colour. It is very easy to select a desired hue and to then modify it slightly by adjustment of its
saturation and intensity.
The supposed separation of the luminance component from chrominance (colour)
information is stated to have advantages in applications such as image processing. However the
exact conversion of RGB to hue, saturation and lightness information depends entirely on the
equipment characteristics. Failure to understand this may account for the sheer numbers of
related but different transforms of RGB to HSL, each claimed to be better for specific
applications than the others.

The HSV color space transforms the standard RGB (red, green, blue) color space into a new
color space composed of hue, saturation and intensity (value).

a. The Hue component can be thought of as the actual color of the object; that is, if you
looked at the object, what color would it seem to you?
b. Saturation is a measure of purity. Whereas hue would say that an object is green,
saturation tells us how green it actually is.
c. Intensity, more accurately referred to as value, tells us how light the color is.

Fig 3. The standard representation of HSV color space as a cone. As you move outward, S is larger and
colors are more pure. As you move down, colors get darker. Moving around the cone changes the hue along
the rainbow.

 RGB is converted to HSV in a simple way

1. First compute the base H, S and V, with Max = max(R, G, B) and Min = min(R, G, B):
a. H (hue), where Color1 is the maximum of the three channels and Color2, Color3
are the other two:
H = (Color2 - Color3) / (Max - Min)
b. S (saturation):
S = (Max - Min) / Max
c. V (value):
V = Max, i.e. the maximum of R, G and B
2. Normalize the values.

In general, H, S and V have the ranges:

 0 ≤ Hue ≤ 360
 0 ≤ Sat ≤ 100
 0 ≤ Val ≤ 255

3. H is offset based on which color is dominant, then scaled to degrees:
a. If RED is dominant: H *= 60
b. If GREEN is dominant: H += 2, then H *= 60
c. If BLUE is dominant: H += 4, then H *= 60
4. In essence this produces the recognizable 0-360 value seen for hue.
5. S may be scaled as S *= 100; in some cases it is kept from 0 to 1.
6. V is scaled as V *= 255.

There are some other bells and whistles to take care of divide-by-zero conditions and similar
singularities. For instance, if Max = Min then the hue is undefined and must be fudged.

Note: In this example, NORM is set to true for normal HSV conversions, which makes the
values use the normal HSV ranges. Setting NORM to false forces H, S and V to range between
0 and 1, which is useful if you will next convert to the H2SV color space. R, G and B are
assumed to be between 0 and 255.
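The following is a minimal per-pixel MATLAB sketch of steps 1-6 above; MATLAB's built-in
rgb2hsv performs the equivalent conversion on whole images (with H, S and V all normalized to
0-1). The function name is illustrative.

function hsv = rgb2hsv_pixel(r, g, b)        % r, g, b assumed to be in 0..255
    rgb = [r g b] / 255;
    mx = max(rgb); mn = min(rgb); d = mx - mn;
    if d == 0
        h = 0;                               % Max = Min: hue is fudged to 0
    elseif mx == rgb(1)                      % red is dominant
        h = mod((rgb(2) - rgb(3)) / d, 6);
    elseif mx == rgb(2)                      % green is dominant
        h = (rgb(3) - rgb(1)) / d + 2;
    else                                     % blue is dominant
        h = (rgb(1) - rgb(2)) / d + 4;
    end
    h = h * 60;                              % 0 <= Hue <= 360
    if mx == 0, s = 0; else s = d / mx; end  % avoid divide-by-zero for black
    hsv = [h, s * 100, mx * 255];            % Sat in 0..100, Val in 0..255
end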

Fig 4. HSI Color Space



 HSV is converted to RGB

1. Get some easy work done first:
a. If value V = 0, then we are done; the color is black, so set R, G, B to 0.
b. If saturation S = 0, no color is dominant; set R, G, B to a gray level equal to V.
2. Hue is valued from 0 to 360; we chunk this space into 60-degree increments, and at each
60 degrees we use a slightly different formula. In general, R, G and B are assigned as
follows:
a. Set the most dominant color:
i. If H is 300 to 60, set R = V
ii. If H is 60 to 180, set G = V
iii. If H is 180 to 300, set B = V
b. The least dominant color is set as: pv = Value * (1 - Saturation)
c. The last remaining color is set as either:
i. qv = Value * (1 - Saturation * ((Hue/60) - floor(Hue/60)))
ii. tv = Value * (1 - Saturation * (1 - ((Hue/60) - floor(Hue/60))))
3. Clean up: here we allow the sector index to be -1 or 6, just in case there is a very small
floating-point error; otherwise we have an undefined input.
4. Normalize R, G and B to 0 to 255.

Note: S and V are normalized between 0 and 1 in this example; H is in its normal 0 to 360 range.
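A companion per-pixel sketch of this inverse conversion is given below (MATLAB's built-in
hsv2rgb is the standard equivalent); h is in 0-360, s and v in 0-1, and the output is in 0-255. The
function name is again illustrative.

function rgb = hsv2rgb_pixel(h, s, v)
    if s == 0
        rgb = round([v v v] * 255);          % no dominant color: gray
        return;
    end
    i = mod(floor(h / 60), 6);               % 60-degree sector index
    f = h / 60 - floor(h / 60);              % position within the sector
    pv = v * (1 - s);                        % least dominant component
    qv = v * (1 - s * f);
    tv = v * (1 - s * (1 - f));
    switch i                                 % most dominant component gets v
        case 0, rgb = [v  tv pv];
        case 1, rgb = [qv v  pv];
        case 2, rgb = [pv v  tv];
        case 3, rgb = [pv qv v ];
        case 4, rgb = [tv pv v ];
        case 5, rgb = [v  pv qv];
    end
    rgb = round(rgb * 255);                  % normalize R, G, B to 0..255
end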

3.2.3 MICROCONTROLLER
The microcontroller used is an NXP P89V51RD2. It is used to drive the servo motors
so as to track the moving object. The microcontroller's PWM feature is used to control the servo
motors: the PWM hardware automatically generates the signals that hold the servo position,
leaving the CPU free to do other tasks.

3.2.4 SERVO MOTOR


The output of the controller is given as input to the motors. The servo motors are used
for tracking the moving object.

3.3 STEPS INVOLVED IN IMAGE PROCESSING

When an object is observed from a far distance, its peripheral shape and its color are the
prominent features which distinguish it from the background and other objects. So while
extracting features of the object for identification, its peripheral shape and color are considered,
instead of other features like texture and other minute details. The complete algorithm for this
prototype system is implemented in MATLAB 7.2 using the Image Processing Toolbox.

Fig 5. BLOCK DIAGRAM OF IMAGE PROCESSING SOFTWARE.



3.3.1. PREPROCESSING
The background image has to be captured after installing the camera at its place, and care must
be taken that the camera doesn't move once the background is captured. Subtraction between the
background image and the current image obtained from the camera is used to detect intrusion. If
there is no intrusion, then the previously captured background image and an image taken from
the camera at any later time will have no difference, and the result of subtraction will be zero (a
completely black image). But if some object has intruded into the scene, then the difference
between those two images will be the object itself, and the result is not zero as in the previous
case. This subtraction process is shown in Fig. 6.

Fig 6. Subtraction of the image captured by the camera from the background image to segment/detect the
intruding object
The result of subtraction is not suitable for feature extraction, as the outer edges of the intruding
object are not clearly visible. This problem can be seen in Fig. 7, which shows the image
obtained by inverting the result of subtraction and prominently indicates the above-mentioned
problem. So the result of subtraction has to be preprocessed before feature extraction.

Fig 7. Complement of subtracted image



Preprocessing involves the following steps. As the subtraction is performed on color
images, the result of subtraction is also a color image. It is converted into a binary image as
shown in Fig. 8.a. The final aim of image preprocessing is to obtain the image shown in Fig. 8.f,
which is suitable for feature extraction. The Canny edge detection function is applied to the
binary image to detect the edges of the intruding object, with the threshold chosen adaptively; a
correct choice of threshold leads to proper edge detection, which increases overall accuracy. The
problem of unclear and broken boundaries depicted in Fig. 7 is still present at this stage. To
remove this problem, the image is dilated with a suitable mask to join the broken edges. The
output of dilation is shown in Fig. 8.c. Holes in binary images are black portions of the image
surrounded by a white boundary. These holes are filled with a filling operation to obtain a
number of different unconnected white areas; these white areas relate to the different objects in
the image. At this stage, we get various unwanted white regions other than the white region
related to the intruding object. These unwanted white regions arise from noise while capturing
images, camera imperfections, etc. They have the inherent property that their area is less than
that of the intruding object, so only the white region with maximum area is kept, which
corresponds to the intruding object, as shown in Fig. 8.e. The edge of this image is then detected
using the Canny edge detection method. This edge corresponds to the boundary of the intruding
object, with some error introduced by the dilation operation. An error of less than 13% has
proved acceptable, so the mask for the dilation operation must be chosen adaptively depending
upon the size of the objects. This edge-detected image is suitable for shape detection.
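A compact MATLAB sketch of this chain, assuming 'background' and 'current' are RGB frames
and that the threshold and structuring element are tuned per installation:

gray   = rgb2gray(imabsdiff(current, background)); % result of subtraction
bw     = im2bw(gray, graythresh(gray));            % Fig 8.a: binary image
bwEdge = edge(double(bw), 'canny');                % Fig 8.b: broken boundary
bwDil  = imdilate(bwEdge, strel('disk', 3));       % Fig 8.c: join broken edges
bwFill = imfill(bwDil, 'holes');                   % Fig 8.d: fill holes
[lab, n] = bwlabel(bwFill);                        % label white regions
stats = regionprops(lab, 'Area');
[maxA, imax] = max([stats.Area]);                  % keep maximum-area region
objMask = (lab == imax);                           % Fig 8.e: intruding object
objEdge = edge(double(objMask), 'canny');          % Fig 8.f: object boundary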

8.a Binary Image 8.b Edge Detected

Fig 8. Various steps involved in Image preprocessing and shape detection



8.c Dilated 8.d Filling Holes

8.e Object with Maximum Area 8.f Edge Detection of Previous Image

8.g Centroid Obtained and Located



8.h Finding distance between centroid and points of edge located at various angles

3.3.2 FEATURE EXTRACTION

3.3.2.1 SHAPE DESCRIPTION: The shape of an object is essentially the set of distances of all
the points on its boundary from some reference point. This reference point can be the centroid
(center of mass) of the object; the centroid does not change when the object is rotated. The
center of a circle is its centroid, and the distances of all boundary points from the centroid are
equal; for a square the distances vary with angle. Similarly, if we measure the distances of
points on the boundary of an object as shown in Fig. 8.h, we obtain shape descriptors.
In our case we have considered those points on the boundary of the object which are
separated by an angle of 10 degrees. All the angles are measured from the centroid of the object.
Thus we have calculated 36 distances corresponding to 36 different angles separated by 10
degrees. This angle separation can be reduced in order to increase accuracy, but along with a
reduction in angle separation the number of readings will increase, which increases computation
time. So there is a trade-off between the ability of the system to work in real time and its
accuracy. Normalization of the data obtained above is done in order to achieve scale invariance:
objects viewed from various distances will not differ in their shape, but they will differ in their
size.
Normalization enables comparison between objects present at various distances from the
camera. It is done by dividing all 36 distance readings by the largest distance reading, which
brings all 36 shape-descriptor readings into the range 0 to 1.

The shape descriptors obtained above can also be made invariant to rotation of the object. To
make them invariant to the starting point, or to rotation, circular shifting of the readings (as used
with chain codes) can be implemented. This enables comparison of objects having different
rotational orientations. Thus the system is made rotation invariant and scale invariant, and the
shape descriptor of the object is obtained.
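An illustrative MATLAB sketch of this descriptor, assuming 'objMask' is the maximum-area
object image of Fig. 8.e and 'dbDesc' is a stored 36-element database descriptor (both names are
for illustration):

stats = regionprops(bwlabel(objMask), 'Centroid');
c = stats(1).Centroid;                          % [x y] centroid of the object
[by, bx] = find(edge(double(objMask), 'canny')); % boundary pixel coordinates
ang  = atan2(by - c(2), bx - c(1)) * 180 / pi;  % angle of each boundary point
dist = sqrt((bx - c(1)).^2 + (by - c(2)).^2);   % distance to the centroid
desc = zeros(1, 36);
for k = 1:36                                    % one reading per 10 degrees
    sel = abs(mod(ang - (k - 1) * 10 + 180, 360) - 180) < 5;
    if any(sel), desc(k) = max(dist(sel)); end
end
desc = desc / max(desc);                        % scale invariance: 0 to 1
% Rotation invariance: compare against all 36 circular shifts of the stored
% descriptor (as with chain codes) and keep the best match.
best = min(arrayfun(@(sh) sum(abs(desc - circshift(dbDesc, [0 sh]))), 0:35));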

3.3.2.2 COLOR DETECTION: If we observe an object from a far distance, we consider its
gross features instead of fine details. If the object has different colors on its different parts, then
the color occupying the maximum area of the object is considered, and the object is said to be of
that color. For example, if an object has green covering most of its parts, with blue and black
covering only small portions, the color of that object is considered to be green. To find the color
of the object, the color image obtained from the camera is logically ANDed with the
preprocessed image shown in Fig. 8.e. The result of ANDing is shown in Fig. 9.

Fig 9. a) object b) HSV transformation c) only Hue plane

This resulting image is then converted into an HSV image as shown in Fig. 9.b. The Hue plane
of the HSV image contains only color information, and all values of the Hue plane lie between 0
and 1. Depending upon these values, the color is detected. For example, red, green and blue can
be distinguished in the Hue plane: red pixels have values >0.8 or <0.15, green pixels have values
>0.15 but <0.48, and blue pixels have values >0.48 but <0.8. Thus the second gross feature of
the object, its color, is detected.
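A sketch of this dominant-color test on the Hue plane, reusing 'objMask' and 'frameRGB' from
the earlier sketches and the thresholds quoted above:

masked = frameRGB;
masked(repmat(~objMask, [1 1 3])) = 0;         % AND color image with the mask
hsv = rgb2hsv(im2double(masked));
hue = hsv(:, :, 1);
h   = hue(objMask);                            % hue values inside the object
nRed   = sum(h > 0.80 | h < 0.15);             % red wraps around the hue circle
nGreen = sum(h >= 0.15 & h < 0.48);
nBlue  = sum(h >= 0.48 & h <= 0.80);
names = {'red', 'green', 'blue'};
[mx, idx] = max([nRed nGreen nBlue]);          % color occupying maximum area
objectColor = names{idx};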
Color and shape detection are also performed on the database images. If a match between any of
the database objects and the intruding object is found, then that object is said to be recognized,
and it has to be destroyed. Fig. 10 shows the result of preprocessing and feature extraction on a
sample database image.

Fig 10. Preprocessing and feature extraction

3.3.2.3 OBJECT TRACKING

The goal of the tracking system is to control the camera pan and tilt such that a detected
object remains projected at the center of the image. The camera tracking system hardware
includes a microcontroller-based servo controller that interfaces to the software running on the
computer.
The servo controller adjusts the viewing field of the camera by applying the adjustment
output as a pulse-width modulated signal to the servo motors. The adjustment output is
calculated by the software in the acquisition feedback loop to center the object found within the
specified user constraints. A camera mount is fabricated for the camera, as well as housing for
the servos; this allows a range of motion for tracking moving objects.
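A hedged MATLAB sketch of this feedback loop; the two-byte serial protocol (pan, tilt), the
proportional gain and the detection routine detectObjectCenter are assumptions for illustration:

s = serial('COM1', 'BaudRate', 9600);          % link to the servo controller
fopen(s);
pan = 90; tilt = 90;                           % start at neutral (degrees)
while tracking                                 % 'tracking' flag set elsewhere
    frame  = getsnapshot(vid);                 % 'vid': videoinput object
    center = detectObjectCenter(frame);        % assumed detection routine
    err    = center - [size(frame, 2) size(frame, 1)] / 2; % offset from center
    pan  = min(max(pan  - 0.05 * err(1), 0), 180); % proportional correction
    tilt = min(max(tilt - 0.05 * err(2), 0), 180);
    fwrite(s, uint8([pan tilt]));              % microcontroller maps to PWM
end
fclose(s);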

3.4 IMAGE PROCESSING HARDWARE

A computer with an Intel 1.6 GHz processor and 512 MB RAM was used as the image
processing hardware. The camera is connected to this PC via a USB port. The image acquired by
the camera is processed by this hardware, and the result of the processing is sent to the
microcontroller, which controls the angle of rotation of the servo motors to position the cannon
accordingly. When an object is observed from a far distance, its peripheral shape and its color
are the prominent features which distinguish it from the background and other objects. So while
extracting features of the object for identification, its peripheral shape and color are considered,
instead of other features like texture and other minute details.

3.4.1 MICROCONTROLLER

The information about the intruding object tracked by the image processing block is given to the
microcontroller. The microcontroller then carries out its programmed activity and forwards the
required output to the servo motors.

Features:-
 80C51 Central Processing Unit
 Operating frequency from 0 MHz to 40 MHz
 16/32/64 kB of on-chip Flash user code memory with ISP (In-System Programming) and
IAP (In-Application Programming)
 Supports 12-clock (default) or 6-clock mode selection via software or ISP
 SPI (Serial Peripheral Interface) and enhanced UART
 PCA (Programmable Counter Array) with PWM and Capture/Compare functions
 Four 8-bit I/O ports with three high-current Port 1 pins (16 mA each)
 Three 16-bit timers/counters
 Programmable watchdog timer
 Eight interrupt sources with four priority levels
 Second DPTR register
 Low EMI mode (ALE inhibit)
 TTL- and CMOS-compatible logic levels

3.4.2 SERVO MOTOR

WHAT IS A SERVO?

A Servo is a small device that has an output shaft. This shaft can be positioned to specific
angular positions by sending the servo a coded signal. As long as the coded signal exists on the
input line, the servo will maintain the angular position of the shaft. As the coded signal changes,
the angular position of the shaft changes. In practice, servos are used in radio controlled
airplanes to position control surfaces like the elevators and rudders. They are also used in radio
controlled cars, puppets, and of course, robots.

Fig 11. VTS-08A Servo

Servos are extremely useful in robotics. The motors are small, as you can see in the picture
above, have built-in control circuitry, and are extremely powerful for their size.

So, how does a servo work? The servo motor has some control circuits and a potentiometer (a
variable resistor, aka pot) that is connected to the output shaft. In the picture above, the pot can
be seen on the right side of the circuit board. This pot allows the control circuitry to monitor the
current angle of the servo motor. If the shaft is at the correct angle, the motor shuts off. If the
circuit finds that the angle is not correct, it turns the motor in the correct direction until the angle
is correct. The output shaft of the servo is capable of traveling somewhere around 180 degrees;
usually it is somewhere in the 210-degree range, but this varies by manufacturer. A normal servo
is used to control an angular motion of between 0 and 180 degrees, and is mechanically not
capable of turning any farther due to a mechanical stop built onto the main output gear.

The amount of power applied to the motor is proportional to the distance it needs to travel. So, if
the shaft needs to turn a large distance, the motor will run at full speed. If it needs to turn only a
small amount, the motor will run at a slower speed. This is called proportional control.

How do you communicate the angle at which the servo should turn? The control wire is used to
communicate the angle, which is determined by the duration of a pulse applied to the control
wire. This is called Pulse Coded Modulation. The servo expects to see a pulse every 20
milliseconds (0.02 seconds), and the length of the pulse determines how far the motor turns. A
1.5 millisecond pulse, for example, will make the motor turn to the 90-degree position (often
called the neutral position). If the pulse is shorter than 1.5 ms, the motor turns the shaft closer to
0 degrees; if the pulse is longer than 1.5 ms, the shaft turns closer to 180 degrees.

Fig 12. Duration of the Pulse Giving The Angle Of Output Shaft
As you can see in the picture, the duration of the pulse dictates the angle of the output shaft
(shown as the green circle with the arrow). Note that the times here are illustrative, and the actual
timings depend on the motor manufacturer. The principle, however, is the same.
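As a small worked sketch, assuming the nominal 1.0-2.0 ms pulse range mapped linearly onto
0-180 degrees (actual limits vary by manufacturer), the angle-to-pulse arithmetic is:

angle     = 135;                        % desired shaft angle in degrees
pulseMs   = 1.0 + (angle / 180) * 1.0;  % 1.0 ms at 0 deg ... 2.0 ms at 180 deg
periodMs  = 20;                         % one pulse every 20 ms (50 Hz)
dutyCycle = pulseMs / periodMs;         % fraction loaded into a PWM generator
% angle = 90 gives pulseMs = 1.5, the neutral position described above.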

3.4.3 RS232

RS232 is used for serial communication between the circuit and the PC. It uses only one or two
data lines to transfer data and is generally used for long-distance communication. In serial
communication the data is sent one bit at a time. This section describes the interfacing of the
8051 microcontroller with a computer via the serial port using RS232. Serial communication is
commonly used in applications such as industrial automation systems, scientific analysis and
certain consumer products.

3.5 CIRCUIT DIAGRAM

Fig 13. Circuit Diagram.



CHAPTER 4.
SOFTWARE IMPLEMENTATION

4.1 Algorithm for Object Recognition

1. Start
2. Initialize the camera and computer
3. Wait for the trigger
4. Capture the background image
5. Provide some delay
6. Capture the test image
7. Provide some delay
8. Capture the object image to be tested
9. Decide the shape and color for the same
10. Display the result
11. Stop
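A sketch of how steps 2-8 could be realized with MATLAB's Image Acquisition Toolbox; the
adaptor name, device index and delay values are illustrative:

vid = videoinput('winvideo', 1);        % step 2: initialize the camera
background = getsnapshot(vid);          % step 4: capture the background image
pause(2);                               % step 5: delay
testImg = getsnapshot(vid);             % step 6: capture the test image
pause(2);                               % step 7: delay
objImg = getsnapshot(vid);              % step 8: object image to be tested
% Steps 9-10: shape and color decisions reuse the preprocessing and
% feature-extraction sketches from Chapter 3, then display the result.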

Flowchart for Object Recognition

Fig 14. Flowchart for Object Recognition



4.2 Algorithm for Object Tracking

1. Start
2. Initialize the camera and hardware
3. Send adjustment to the servo motor
4. Track the moving object
5. Repeat steps 3 to 4

Flowchart for Object Tracking

[Flowchart: START → INITIALIZE CAMERA AND SERVO MOTOR → SIMULATE THE
CODE → SEND ADJUSTMENT TO SERVO CONTROLLER → ACQUIRE IMAGE → STOP]

Fig 15. Flowchart for Object Tracking



CHAPTER 5.
SIMULATION RESULTS

RESULTS

After running the recognition code, it will ask for the BACKGROUND image. After a few
seconds it will ask for the TEST image, and again with the same delay it will ask for the
OBJECT image. After capturing these images, MATLAB will calculate the results as below.

CASE A: when COLOR and SHAPE both are not MATCHED

CASE B: when COLOR and SHAPE both are MATCHED

CASE C: when COLOR is MATCHED but SHAPE is not MATCHED

CASE D: when COLOR is not MATCHED but SHAPE is MATCHED

5.1 Simulation Results when COLOR and SHAPE both are not MATCHED

INPUT:

a. Background Image b. Test Image

c. Object Image

Fig 16.

OUTPUT:

Fig 17. Calculation and comparison results.



5.2 Simulation Results when both COLOR and SHAPE are MATCHED

INPUT:

a. Background Image b. Test Image

c. Object Image

Fig 18.

OUTPUT:

Fig 19. Calculation and comparison results.



5.3 Simulation Results when COLOR is MATCHED but SHAPE is not MATCHED

INPUT:

a. Background Image b. Test Image

c. Object Image

Fig 20.

OUTPUT:

Fig 21. Calculation and comparison results.



5.4 Simulation Results when COLOR is not MATCHED but SHAPE is MATCHED

INPUT:

a. Background Image b. Test Image

c. Object Image

Fig 22.

OUTPUT:

Fig 23. Calculation and comparison results.



5.5 Tracking Result
The tracking system controls the camera pan and tilt such that a detected object remains
projected at the center of the image. The camera tracking system hardware includes a
microcontroller-based servo controller that interfaces to the software running on the computer.
The servo controller adjusts the viewing field of the camera by applying the adjustment output
as a pulse-width modulated signal to the servo motors. The adjustment output is calculated by
the software in the acquisition feedback loop to center the object found within the specified user
constraints. A camera mount is fabricated for the camera, as well as housing for the servos; this
allows a range of motion for tracking moving objects.
As the object moves, the pointer connected to the servo motor points towards the object.
Whenever the object goes outside the range of the camera, the pointer remains at the last
position and starts tracking again when the object comes back into range.

CHAPTER 6.
CONCLUSION AND FUTURE WORK

6.1 CONCLUSION
In this project, an automated surveillance system is described which includes the
following main building blocks: moving object detection, object tracking and event recognition.
In this report we presented a set of methods and tools for a "smart" visual surveillance system.
We implemented three different object detection algorithms and compared their detection
quality and time performance. The adaptive background subtraction scheme gives the most
promising results in terms of detection quality and computational complexity for use in a real-
time surveillance system with more than a dozen cameras. However, no object detection
algorithm is perfect, and neither is our method, since it needs improvements in handling darker
shadows, sudden illumination changes and object occlusions. Higher-level semantic extraction
steps could be used to support the object detection step, to enhance its results and eliminate
inaccurate segmentation.
The proposed whole-body object tracking algorithm successfully tracks objects in
consecutive frames. Our tests in sample applications show that the nearest-neighbor matching
scheme gives promising results, and no complicated methods are necessary for whole-body
tracking of objects. However, due to the nature of the heuristic we use, our occlusion handling
algorithm would fail to distinguish occluding objects if they are of the same size and color. Also,
in crowded scenes handling occlusions becomes infeasible with such an approach, so a pixel-
based method such as optical flow is required to identify object segments accurately.
We proposed a novel object classification algorithm based on object shape similarity.
The method is generic and can be applied to different classification problems as well. Although
this algorithm gives promising results in categorizing object types, it has two drawbacks: (a) the
method requires effort to create a labeled template object database, and (b) the method is view
dependent. If we could eliminate (b), the first problem would automatically disappear, since one
global template database would suffice to classify objects. One way to achieve this may be to
generate a template database for all possible silhouettes of the different classes. This would
increase the computational time, but may help to overcome the need for creating a separate
template database for each camera position.
MOVING OBJECT RECOGNIZATION, TRACKING AND DESTRUCTION

6.2 ADVANTAGES
1. Fully automated system.

2. Military-oriented security system via air and ground.

3. Extended features, unlike other security systems, such as image extraction and recognition.

4. Better image quality while capturing via a camera of resolution 640 × 480.

5. Low power consumption, as per the design of the circuit.

6.3 APPLICATIONS

 Military purposes.
 RADAR system.
 Robotic Vision.
 Security Cameras.
 Video Editing.

6.4 FUTURE SCOPE

A database can be created of the objects that are to be destroyed. If a match between the
intruding object and one of the objects from the database is found, the system keeps tracking
that object to calculate its velocity of motion. This velocity information is needed to decide the
angle and the time instant at which a projectile is to be launched at the intruding object to
destroy it. After these calculations, the position of the intruding object in the form of x-y
co-ordinates is found, and the cannon is aimed at that object accordingly with the help of the two
servo motors driven by the microcontroller; finally the destruction of the intruder can be carried
out.

CHAPTER 7.

REFERENCES

[1] Amit Kenjale, "Automatic Object Recognition, Tracking and Destruction for Military
Purpose".

[2] Qi Zang and Reinhard Klette, "Object Classification and Tracking in Video Surveillance".

[3] Yiğithan Dedeoğlu, "Moving Object Detection, Tracking and Classification for Smart Video
Surveillance".

[4] R. Gonzalez and R. Woods, "Digital Image Processing", 2nd Edition, Prentice Hall, 2002.

[5] P. Ramesh Babu, "Digital Image Processing", 1st Edition.

[6] M. A. Mazidi, "The 8051 Microcontroller and Embedded Systems", Prentice Hall.
