3 authors:
Hyunchul Shin
Chongshin University
1 Introduction
Fig. 1. Edge map generated by canny edge detection for rear view of vehicles.
4 Bag-of-Feature Algorithm
5 Methodology
In this paper, the main methodology involves two steps: initial candidate generation using an edge-based method, and verification using the BoF algorithm, as shown in Fig. 2.
Fig. 3. (a) Region of interest (RoI) generation; (b) Canny edge map of the RoI image; (c) result of horizontal edge filtering on an image.
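The horizontal edge filtering illustrated in Fig. 3(c) can be sketched as follows. The paper does not give the exact filter or threshold, so a Sobel-style vertical-gradient kernel (which responds to horizontal edges) and an illustrative threshold are assumed here:

```python
import numpy as np

def horizontal_edge_map(gray, thresh=50):
    """Keep mostly horizontal edges: convolve with a Sobel-style
    vertical-gradient kernel and threshold the absolute response.
    (Kernel and threshold are assumptions, not the paper's values.)"""
    ky = np.array([[-1, -2, -1],
                   [ 0,  0,  0],
                   [ 1,  2,  1]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = abs(np.sum(ky * gray[i-1:i+2, j-1:j+2]))
    return (out > thresh).astype(np.uint8)

# Toy image with a horizontal intensity step: rows 0-3 dark, rows 4-7 bright.
img = np.zeros((8, 8))
img[4:, :] = 100.0
edges = horizontal_edge_map(img)
```

On this toy image the response fires only along the horizontal step (rows 3 and 4), which is the behaviour the candidate-generation stage relies on: vehicle rears are rich in such horizontal structure.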
The BoF algorithm has three main steps: visual vocabulary generation, training class generation, and testing.
In visual vocabulary generation, keypoints and descriptors are generated using the Harris corner detector and Scale-Invariant Feature Transform (SIFT) descriptors [20], respectively. Each descriptor represents a single patch. After patches are formed from all images, k-means clustering [21] is used to group similar patches together. Once k (k = 500) clusters are generated, the centre of each cluster becomes a visual word. Visual vocabularies for the vehicle and non-vehicle classes are generated separately. We use 500 vehicle images from the Caltech database and 500 images from random Google links as the non-vehicle database for vocabulary generation.
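The vocabulary-generation step above can be sketched as follows. In the paper the descriptors are 128-D SIFT vectors at Harris keypoints and k = 500; here a minimal NumPy k-means on toy 4-D descriptors stands in for both:

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20):
    """Cluster descriptors with k-means; each cluster centre
    becomes one visual word of the vocabulary."""
    # Deterministic init: pick k descriptors spread evenly through the data.
    idx = np.linspace(0, len(descriptors) - 1, k).astype(int)
    centres = descriptors[idx].astype(float)
    for _ in range(iters):
        # Assign each descriptor to its nearest centre...
        dists = np.linalg.norm(descriptors[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ...then move each centre to the mean of its members.
        for c in range(k):
            members = descriptors[labels == c]
            if len(members):
                centres[c] = members.mean(axis=0)
    return centres

# Toy stand-in for SIFT descriptors: two well-separated patch types in 4-D.
rng = np.random.default_rng(1)
desc = np.vstack([rng.normal(0.0, 0.1, (50, 4)),   # patches of one appearance
                  rng.normal(5.0, 0.1, (50, 4))])  # patches of another
vocab = build_vocabulary(desc, k=2)                # 2 visual words (paper: 500)
```

The returned centres play the role of the 500 visual words; the same routine would be run once on vehicle patches and once on non-vehicle patches to get the two separate vocabularies.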
In the second step, training class generation is performed to produce training histograms for the vehicle and non-vehicle classes. The training images follow the same algorithm as vocabulary generation, except that the final clustering stage is replaced by the generation of training histograms. In the training cycle, 500 images from the Caltech vehicle dataset [22] are used to train the vehicle class, and about 500 random images from different Google links are used for the non-vehicle class.
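Replacing the final clustering stage with histogram generation amounts to quantising each image's descriptors against the visual words and counting. A minimal sketch, with toy 2-D descriptors and a 3-word vocabulary standing in for SIFT and k = 500:

```python
import numpy as np

def bof_histogram(descriptors, vocabulary):
    """Quantise one image's descriptors against the vocabulary and
    return a normalised visual-word histogram."""
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)          # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()              # normalise: descriptor counts vary per image

# Toy vocabulary of 3 visual words in 2-D; a toy "image" whose
# descriptors fall near words 0 and 2.
vocab = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
desc = np.array([[0.1, 0.0], [0.0, 0.2], [9.8, 0.1], [10.1, -0.2]])
h = bof_histogram(desc, vocab)
# h → [0.5, 0.0, 0.5]
```

Running this over the 500 Caltech vehicle images and the 500 non-vehicle images yields the two sets of training histograms used in the next step.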
In the third step, testing is performed on the labelled images extracted from the edge response during initial candidate detection. Histograms for the test images are generated with the same algorithm used for training class generation. The k-nearest-neighbour algorithm is then used to match the training and test image histograms. If the neighbour distance between the training and test histograms is less than a threshold, the image is accepted as a vehicle; otherwise it is assigned to the non-vehicle class. A threshold value of 1.50 is used when comparing neighbour distances between histograms; this value was selected by trial and error. The non-vehicle class histogram is then matched against the test image histogram, and if the difference is less than the threshold, the image is confirmed as a non-vehicle. Verification of vehicles double-checks the results from the edge response and hence improves detection by removing false initial detections.
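The threshold-based nearest-neighbour verification can be sketched as below. The paper does not specify the histogram distance metric, so L1 distance is assumed, and the toy histograms are illustrative only; the 1.50 threshold is the paper's value:

```python
import numpy as np

def classify(test_hist, vehicle_hists, nonvehicle_hists, thresh=1.50):
    """Check a test histogram against the vehicle class first, then the
    non-vehicle class, using nearest-neighbour distance and the paper's
    threshold of 1.50. (L1 distance is an assumption.)"""
    d_veh = min(np.abs(test_hist - h).sum() for h in vehicle_hists)
    if d_veh < thresh:
        return "vehicle"
    d_non = min(np.abs(test_hist - h).sum() for h in nonvehicle_hists)
    if d_non < thresh:
        return "non-vehicle"
    return "unknown"

# Toy normalised histograms over 4 visual words.
veh_train = [np.array([0.7, 0.1, 0.1, 0.1])]   # toy vehicle-class histogram
non_train = [np.array([0.1, 0.1, 0.1, 0.7])]   # toy non-vehicle-class histogram
label = classify(np.array([0.6, 0.2, 0.1, 0.1]), veh_train, non_train)
```

Checking the vehicle class first mirrors the order described above: a candidate is accepted as a vehicle when it is close enough to a vehicle training histogram, and only otherwise compared against the non-vehicle class.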
6 Experimental Results
7 Conclusion
In this paper, edge-based initial candidate detection is used together with bag-of-features-based verification. The combination of the two methods has helped to detect all vehicles using the edge-based method and to remove false detections by verification through the BoF algorithm. Thus, two well-known techniques are combined to improve the vehicle detection rate. Our results show around 98% detection of vehicles on highways with low traffic and 96% detection on city roads with high traffic.
In this work we have focused on daytime sunny and cloudy weather conditions. In future, this work will be extended to other weather conditions.
Fig. 4. (a) Detection of vehicles on a highway; (b) detection of vehicles on city roads; (c) wrong detection in a city road image; (d) wrong detection in a highway image; (e) undetected vehicle in a city road image; (f) missed detection of a vehicle at a large distance.
Acknowledgment
This work was supported by the Ministry of Knowledge Economy (MKE) and the IDEC Platform Center (IPC) at Hanyang University. This research work is sponsored by the 'Higher Education Commission (HEC), Govt. of Pakistan' under the scholarship program titled: MS Level Training in Korean Universities/Industry.
References
1. World Health Org., “World Report on Road Traffic Injury Prevention,” [Online]. Available:
http://www.who.int/violence_injury_prevention/publications/road_traffic/world_report/factsheets/en/index.html
2. Z. Sun, G. Bebis, and R. Miller, “On-Road Vehicle Detection Using Evolutionary Gabor
Filter Optimization,” IEEE Transactions on Intelligent Transportation Systems, vol. 6, pp.
125-137, 2005.
3. Z. Sun, G. Bebis, and R. Miller, "On-Road Vehicle Detection: A Review," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, pp. 694-711, 2006.
4. M. M. Trivedi, T. Gandhi, and J. McCall, “Looking-in and looking-out of a vehicle:
Computer-vision-based enhanced vehicle safety,” IEEE Trans. Intell. Transp. Syst., vol. 8,
no. 1, pp. 108–120, Mar. 2007.
5. J. McCall and M. M. Trivedi, “Video-based lane estimation and tracking for driver
assistance: Survey, system, and evaluation,” IEEE Trans. Intell. Transp. Syst., vol. 7, no. 1,
pp. 20–37, Mar. 2006.
6. T. Gandhi and M. M. Trivedi, “Computer vision and machine learning for enhancing
pedestrian safety,” in Computational Intelligence in Automotive Applications. Berlin,
Germany: Springer-Verlag, May 2008, pp. 59–77.
7. T. Gandhi and M. M. Trivedi, “Vehicle surround capture: Survey of techniques and a novel
Omni video based approach for dynamic panoramic surround maps,” IEEE Trans. Intell.
Transp. Syst., vol. 7, no. 3, pp. 293–308, Sep. 2006.
8. Z. Hongna and W. Qi, "Image measuring technique and its applications," Electrical Measurement & Instrumentation, 2003.
9. J. F. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, pp. 679-714, 1986.
10. J. C. Russ, The Image Processing Handbook, CRC Press, 2002.
11. S. M. Smith and J. M. Brady, "SUSAN: A New Approach to Low Level Image Processing," International Journal of Computer Vision, 23(1), pp. 45-78, 1997.
12. Z. Musoromy, S. Ramalingam, and N. Bekooy, "Edge Detection Comparison for License Plate Detection," IEEE International Conference on Control, Automation, Robotics and Vision (ICARCV), pp. 1133-1138, Dec. 2010.
13. C. Fan and Y. Ren, "Study on the Edge Detection Algorithms of Road Image," Third International Symposium on Information Processing, pp. 217-220, 2010.
14. S. Brin and L. Page, "The anatomy of a large-scale hypertextual Web search engine," in
Proc. International WWW Conf. Computer Networks and ISDN Systems, pp. 107-117,
1998.
15. Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection using optical sensors: a review," IEEE International Conference on Intelligent Transportation Systems, Washington, DC, pp. 585-590, Oct. 2004.
16. N. Matthews, P. An, D. Charnley, and C. Harris, “Vehicle Detection and Recognition in
Greyscale Imagery,” Control Eng. Practice, vol. 4, pp. 473-479, 1996.
17. U. Handmann, T. Kalinke, C. Tzomakas, M. Werner, and W. Seelen, “An Image Processing
System for Driver Assistance,” Image and Vision Computing, vol. 18, no. 5, 2000.
18. C. Tzomakas and W. Seelen, "Vehicle Detection in Traffic Scenes Using Shadows," Technical Report 98-06, Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany, 1998.
19. N. Srinivasa, “A Vision-Based Vehicle Detection and Tracking Method for Forward
Collision Warning,” Proc. IEEE Intelligent Vehicle Symp., pp. 626-631, 2002.
20. D.G. Lowe, “Distinctive image features from scale invariant keypoints,” International
Journal of Computer Vision, vol. 60, pp. 91- 110, 2004.
21. D. J. C. MacKay, Information Theory, Inference, and Learning Algorithms, Cambridge University Press, 2003.
22. Caltech Cars Dataset, http://www.robots.ox.ac.uk/~vgg/data3.html.