
TARGET DESTRUCTION BASED ON SHAPE AND COLOUR


Apurva Pawar
Elec. & Telecommunication Department
Bhivrabai Sawant College of Engineering & Research, Pune, India
pawarapurva@ymail.com

Mr. P. P. Kulkarni
Elec. & Telecommunication Department
Bhivrabai Sawant College of Engineering & Research, Pune, India
ppkmicro@gmail.com

Abstract- In this paper we have developed a system which will automatically recognize, track and destroy an intruding object captured by a camera. This system can be used in military applications where it is hard for soldiers to fight. The system will be installed at a particular place and will have a camera which captures the area under surveillance. The system will then detect the intrusion and recognize the object: the original image, taken without the intruding object, is compared with the image that contains the intruding object, and if after comparison the features of the object match the features of an object in the database, the intruding object will be tracked and destroyed by bullets and bombs. This will help to provide very tight security without the involvement of human resources.

Index Terms—object tracking, object recognition, intrusion object.

I. INTRODUCTION

The project is done for the purpose of intrusion detection, and to track and destroy the intruding object. The system will be mounted at a suitable place from where the whole area under surveillance can be clearly seen. The image will be captured by the camera, then processed, and the desired action will take place. The system is provided with a high-resolution camera, image processing hardware, a microcontroller and a servomotor. The image processing hardware will acquire the image captured by the camera. After processing, the intruding object is identified; the image processing hardware will extract the features of that intruding object and compare them with the features stored in the database. If a match between the intruding object and an object in the database is found, the system will track the object and calculate its velocity of motion. This velocity information is needed to decide the angle and the time instant at which the projectile is to be launched at the intruding object to destroy it. The position of the intruding object in the form of x-y co-ordinates is found and sent to the microcontroller. The microcontroller will control the angle of rotation of two servo motors to position the cannon aiming at the intruding object. At last the cannon will get fired.

II. SYSTEM OVERVIEW

This system is proposed for detecting an intrusion, tracking the intruding object and destroying it. The system will be fixed at a suitable place, from which a complete and clear view of the area under surveillance can be captured by the camera. The system is provided with a high-resolution camera, image processing hardware, a microcontroller, a servomotor and other supplementary hardware and mechanisms.

Fig.1 Basic block diagram of complete system

The image processing algorithm will acquire images captured by the camera after some predefined interval of time. It will then process every captured image to detect an intrusion. If an intrusion is detected, the image processing hardware will extract the features of the intruding object and compare them with the features of objects stored in the database. We have collected a database of the objects that are to be destroyed. If a match between the intruding object and one of the objects from the database is found, the object is said to be recognized. The system will track that object to calculate its velocity, which is used to decide the angle and time instant at which the projectile is to be launched to destroy it. The position of the intruding object in X-Y co-ordinates is sent to the microcontroller, which controls the angle of rotation of the servomotor to position the cannon aiming at the intruding object. At last the cannon will get fired.
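The paper gives no formulas for the velocity-to-launch-angle step described above. As a rough sketch only (in Python rather than the paper's MATLAB, with all function and variable names ours, and under the simplifying assumptions of a constant-velocity target and a constant-speed projectile in the plane), the aim point can be found by solving |P + Vt| = st for the earliest positive intercept time t:

```python
import math

def intercept(px, py, vx, vy, s):
    """Return (time, angle_rad) to hit a target currently at (px, py),
    moving with constant velocity (vx, vy), using a projectile of constant
    speed s fired from the origin. Solves |P + V*t| = s*t for the earliest
    positive t; returns None if no intercept exists."""
    a = vx * vx + vy * vy - s * s
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py
    if abs(a) < 1e-12:                      # target speed equals projectile speed
        if abs(b) < 1e-12:
            return None
        t = -c / b
        if t <= 0:
            return None
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return None                     # target cannot be intercepted
        r = math.sqrt(disc)
        ts = [t for t in ((-b - r) / (2 * a), (-b + r) / (2 * a)) if t > 0]
        if not ts:
            return None
        t = min(ts)
    # Aim the cannon at the predicted target position at time t.
    angle = math.atan2(py + vy * t, px + vx * t)
    return t, angle
```

For a stationary target at (3, 4) and projectile speed 5, this gives an intercept time of 1 and an aim angle of atan2(4, 3); a head-on target is led to its predicted position in the same way.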

A. Camera
The camera used for this project is an i-ball face2face, a USB webcam with 640 by 480 resolution. It is a night vision camera supporting different resolutions. The camera is fixed on the system and should not move from its place once the background is captured, otherwise the accuracy will be adversely affected.

B. Image processing software
The steps involved in the image processing software are shown in fig.2.

Fig.2 Image processing software

III. IMAGE PROCESSING ALGORITHM

When the object is observed from a far distance, its shape and colour are the features that separate it from the background, so while extracting features of the object its shape and colour are considered. The complete algorithm for this prototypic system is implemented in Matlab 7.2 software using the image processing toolbox.

A. Preprocessing
The background image has to be captured after installing the camera at its place. Subtraction between the background image and the current image obtained from the camera is used to detect the intrusion. If there is no intruding object in the current image, then the previously taken background image and the current image will be the same and the difference between them will be zero. But if the object is present, then the difference will be some non-zero value.

Fig.3

The result of subtraction is not suitable for feature extraction, as the outer edges of the intruding object are not clearly visible. This can be seen from fig.4; this image is obtained by inverting the result of subtraction, which shows the above mentioned problems very clearly.

Fig.4 Extraction of outer edges

Here the Canny edge detection method is used to detect the edges of the intruding object. A correct choice of threshold leads to proper edge detection, which will increase the overall accuracy. After Canny edge detection some unclear and broken boundaries are still seen, so to remove this problem the image is dilated with a suitable mask to join these broken edges. The output of dilation is shown in fig.5.

Fig. 5 Dilation
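The background-subtraction test described in the Preprocessing section can be sketched as follows. This is a minimal Python illustration, not the paper's MATLAB code; the frames are plain lists of greyscale rows, and the threshold value is our own assumption:

```python
def detect_intrusion(background, current, threshold=30):
    """Absolute per-pixel difference of two equal-sized greyscale frames
    (lists of rows of 0-255 ints), thresholded to a binary intrusion mask.
    With no intruder the difference is zero everywhere; any pixel whose
    difference exceeds the threshold marks a possible intrusion.
    Returns (mask, intrusion_found)."""
    mask = [[1 if abs(c - b) > threshold else 0
             for b, c in zip(brow, crow)]
            for brow, crow in zip(background, current)]
    found = any(any(row) for row in mask)
    return mask, found
```

With identical background and current frames the mask is all zeros and no intrusion is reported; a sufficiently changed pixel flips the corresponding mask entry to 1.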

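The dilation step described above, which joins broken edge fragments, amounts to binary dilation with a small structuring element. A minimal Python sketch (our own illustration, assuming a 3x3 square mask on a 0/1 grid):

```python
def dilate(mask):
    """Binary dilation of a 0/1 grid with a 3x3 square structuring element:
    a pixel becomes 1 if any pixel in its 3x3 neighbourhood is 1.
    This is the operation that joins broken edges after edge detection."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                        out[y][x] = 1
    return out
```

A one-pixel gap in an edge, such as the row [1, 0, 1], is closed to [1, 1, 1] by a single dilation pass.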
In the dilated image the boundaries are white, but some holes inside the image are still black. These holes are filled with a filling operation, as shown in fig.6, to get a number of different unconnected white areas; these white areas correspond to different objects in the image. At this stage we get various unwanted white regions other than the intruding object. These unwanted regions have the inherent property that their area is smaller than that of the intruding object, so only the region with maximum area is kept, as shown in fig.7.

Fig.6 Holes filled

Fig.7 Object with maximum area

B. Feature Extraction

The shape of an object is nothing but the distance of all the points on its boundary from some reference point. This reference point can be the centroid of the object, as shown in fig.8; the centroid of an object does not change even if the object is rotated. In our case we have considered those points on the boundary of the object which are separated by an angle of 10 degrees, with all the angles measured from the centroid of the object. Thus in fig.9 we have calculated 36 distances corresponding to 36 different angles separated by 10 degrees. This angle separation can be reduced in order to increase the accuracy, but a reduced angle will increase the computation time. Now, if the object is viewed by the camera from various distances, then the shape of the object will not change but its size will change. For that, normalization is done by dividing all 36 distance readings by the largest distance reading. This brings all 36 shape descriptor readings into the range 0 to 1.
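The 36-distance shape descriptor described above can be sketched in a few lines. This is a Python illustration with names of our own choosing, not the paper's MATLAB code; where the paper samples one boundary point per 10-degree ray, we make the simple choice of keeping the farthest boundary point falling in each 10-degree sector:

```python
import math

def shape_descriptor(boundary, bins=36):
    """Centroid-based shape descriptor: distances from the centroid to
    boundary points, grouped into `bins` angular sectors (10 degrees each
    for 36 bins) and normalized by the largest distance, so every entry
    lies in [0, 1]. `boundary` is a list of (x, y) points."""
    cx = sum(x for x, _ in boundary) / len(boundary)
    cy = sum(y for _, y in boundary) / len(boundary)
    dists = [0.0] * bins
    for x, y in boundary:
        ang = math.atan2(y - cy, x - cx) % (2 * math.pi)
        k = min(int(ang / (2 * math.pi) * bins), bins - 1)
        d = math.hypot(x - cx, y - cy)
        dists[k] = max(dists[k], d)   # farthest boundary point per sector
    m = max(dists)
    return [d / m for d in dists] if m else dists
```

Because every distance is divided by the largest one, scaling the boundary (viewing the object from a different distance) leaves the descriptor unchanged, which is exactly the size invariance the normalization step is meant to provide.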
Fig.8 Center of object

Fig.9 Distance of object from centroid

C. Colour Detection

If we observe some object from a far distance, we consider its gross features instead of fine details. If that object has different colours on its different parts, then the colour occupying the maximum area of that object is considered, and the object is said to be of that colour; e.g. if an object has green covering most of its parts and blue and black covering only some portion, the colour of that object is considered to be green only.

To find the colour of the object, the colour image obtained from the camera is logically ANDed with the preprocessed image shown in fig.7; the result of ANDing is shown in fig.10. This resulting image is then converted into an HSV image, as shown in fig.10. The hue plane of the HSV image contains only colour information, and all the values of the hue plane lie between 0 and 1. Depending upon their values, the colour is detected: a red pixel has a hue value > 0.8 or < 0.15, a green pixel has a value > 0.15 and < 0.48, and a blue pixel has a value > 0.48 and < 0.8. Colour and shape detection is also performed on the database image. If the match is found, the object is said to be recognized.

Fig.10 a) object b) HSV transformation c) only hue plane

IV. IMAGE PROCESSING HARDWARE

A computer with an Intel 1.6 GHz processor and 512 MB RAM was used as the image processing hardware. The camera is connected to this PC via a USB port; the image acquired by the camera is processed by this hardware, and the result of the processing is sent to the microcontroller. The microcontroller will control the angle of rotation of the servomotor to position the cannon accordingly.

V. EXPERIMENTAL RESULTS AND DISCUSSION

After running the recognition code, it will ask for the BACKGROUND image. After a few seconds it will again ask for the TEST image, and again with the same delay it will ask for the OBJECT image. After capturing these images, MATLAB will calculate the results for the cases below.

CASE A: when COLOR and SHAPE are both not MATCHED
CASE B: when COLOR and SHAPE are both MATCHED
CASE C: when COLOR is MATCHED but SHAPE isn't MATCHED
CASE D: when COLOR isn't MATCHED but SHAPE is MATCHED

CASE A. Simulation Results when COLOR and SHAPE both not MATCHED
CASE B. Simulation Results when both COLOR and SHAPE MATCHED

CASE C. Simulation Results when COLOR is MATCHED but SHAPE isn't MATCHED

CASE D. Simulation Results when COLOR isn't MATCHED but SHAPE is MATCHED

VI. CONCLUSION

We implemented three different object detection algorithms and compared their detection quality and time performance. The adaptive background subtraction scheme gives the most promising results in terms of detection quality and computational complexity for use in a real-time surveillance system with more than a dozen cameras. However, no object detection algorithm is perfect, and neither is our method, since it needs improvements in handling darker shadows, sudden illumination changes and object occlusions. Higher level semantic extraction steps could be used to support the object detection step to enhance its results and eliminate inaccurate segmentation.

VII. REFERENCES

[1]. Kouji Murakami, Kazuya Matsuo, Tsutomu Hasegawa, Ryo Kurazume, "Position Tracking and Recognition of Everyday Objects by using Sensors Embedded in an Environment and Mounted on Mobile Robots", pp. 2210-2216.
[2]. Kwak, Jae Chang, "Implementation of object recognition and tracking algorithm on real-time basis", pp. 2000-2003.
[3]. Pankaj Bongale, "Implementation of 3D Object Recognition and Tracking", pp. 77-79.
[4]. Kuk Cho, "Real-time 3D Multiple Occluded Object Detection and Tracking".
[5]. Amit Kenjale, "Recognition, Tracking and Destruction for Military Purpose".
[6]. Qi Zang and Reinhard Klette, "Object Classification and Tracking in Video Surveillance".
[7]. Yiğithan Dedeoğlu, "Moving Object Detection, Tracking and Classification for Smart Video Surveillance".
[8]. Gonzalez, R. and Woods, R., "Digital Image Processing", 2nd Edition, Prentice Hall, (2002).
[9]. P. Ramesh Babu, "Digital Image Processing", 1st Edition.
[10]. Mazidi, "Microcontroller and Embedded Systems", Prentice Hall.
