
1. Describing the Problem with background knowledge

In our paper, the object feature vector is used as the input to detect and classify moving objects, which are further split into pedestrians, automobiles, and so on. Since the basic support vector machine handles only binary problems, we incorporated a multi-class classification scheme.

A) Image processing

Here, the primary goal was to carry out basic image processing and detect the objects before moving on to object recognition. For that, the images needed to be converted to grayscale. The candidate methods are:

Average Gray = (R + G + B) / 3          (1)
Maximum Gray = Max(R, G, B)             (2)
Weighted Average = (xR + yG + zB) / 3   (3)

where x, y, and z represent the grayscale weights of R, G, and B.

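As a concrete illustration, the following sketch applies the three grayscale formulas (1)-(3) to a NumPy RGB image. The weight values x, y, z are not specified in the paper, so the defaults used here are placeholders.

```python
import numpy as np

def to_gray(rgb, method="weighted", weights=(1.0, 1.0, 1.0)):
    """Convert an H x W x 3 RGB image to grayscale.

    "average"  -> (R + G + B) / 3          (Eq. 1)
    "maximum"  -> max(R, G, B) per pixel   (Eq. 2)
    "weighted" -> (x*R + y*G + z*B) / 3    (Eq. 3)
    """
    rgb = np.asarray(rgb, dtype=np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if method == "average":
        gray = (r + g + b) / 3.0
    elif method == "maximum":
        gray = np.maximum(np.maximum(r, g), b)
    else:
        x, y, z = weights                  # grayscale weights of R, G, B
        gray = (x * r + y * g + z * b) / 3.0
    return np.clip(gray, 0, 255).astype(np.uint8)
```
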
B) Moving Object Detection

Because of the "ghost" problem that the traditional Gaussian model suffers from when a detection error occurs, we incorporated an updated mixture-of-Gaussians model to detect moving objects. This model uses the three-frame difference method, where x_{t-1}, x_t and x_{t+1} are three consecutive image frames and t-1, t and t+1 are the corresponding moments.

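The excerpt does not give the internals of the updated mixture-of-Gaussians model, so the sketch below stands in with OpenCV's built-in MOG2 background subtractor and fuses it with the three-frame difference of x_{t-1}, x_t and x_{t+1}. The thresholds and the OR-style fusion are assumptions.

```python
import cv2

# Stand-in for the paper's updated mixture-of-Gaussians model:
# OpenCV's built-in MOG2 background subtractor.
mog = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25,
                                         detectShadows=False)

def moving_mask(frame_prev, frame_curr, frame_next, thresh=25):
    """Fuse the three-frame difference of x_{t-1}, x_t, x_{t+1}
    with the Gaussian-mixture foreground mask of x_t (grayscale frames)."""
    d1 = cv2.absdiff(frame_curr, frame_prev)
    d2 = cv2.absdiff(frame_next, frame_curr)
    _, b1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
    _, b2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)
    frame_diff = cv2.bitwise_and(b1, b2)   # three-frame difference
    fg = mog.apply(frame_curr)             # mixture-of-Gaussians foreground
    return cv2.bitwise_or(frame_diff, fg)  # fused moving-object mask
```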

C) The moving object's shape features

The shapes of pedestrians and automobiles are vastly different. When a scene is monitored in real time, the moving object displayed on the screen may appear and disappear, and a video frame often contains only part of the moving object. Because of this, the shape method implemented here divides the object image at 1/3 of the total object size. This ratio method is used to make a clear distinction between the lower and upper portions of the object in question.

Fig. 1: Custom shape feature calculation sketch

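The exact feature definitions are not given in this excerpt, so the sketch below assumes the division ratio is the pixel count above the 1/3 split line divided by the count below it, and that compactness (used together with the ratio later as the feature vector) is perimeter squared over area of the binary object mask.

```python
import cv2
import numpy as np

def shape_features(obj_mask):
    """Division ratio and compactness for one binary object mask."""
    ys, xs = np.nonzero(obj_mask)
    top, bottom = ys.min(), ys.max()
    split = top + (bottom - top) // 3                   # 1/3 of the object size
    upper = np.count_nonzero(obj_mask[top:split + 1, :])
    lower = np.count_nonzero(obj_mask[split + 1:bottom + 1, :])
    division_ratio = upper / max(lower, 1)

    contours, _ = cv2.findContours(obj_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    compactness = perimeter ** 2 / max(area, 1.0)       # assumed definition
    return np.array([division_ratio, compactness], dtype=np.float32)
```
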
Because the support vector machine (SVM) introduces the structural risk minimization principle and transforms the problem into a quadratic optimization problem, it assisted us with the linear two-class classification.

2. Design of object recognition algorithm based on support vector machine

A support vector machine (SVM) is combined with a binary decision tree to form a multi-class classifier. The two main risks involved are the empirical risk and the confidence risk. The empirical risk represents the error of the classifier, while the confidence risk represents the degree to which the classifier's performance can be relied upon. The generalization error bound formula is as follows:

R(w) ≤ Remp(w) + Φ(n/h)

where R(w) is the real risk, Remp(w) is the empirical risk, and Φ(n/h) is the confidence risk. Consequently, the goal of statistical learning changes from minimizing the empirical risk alone to minimizing both the empirical risk and the confidence risk; we use the SVM to minimize this structural risk.

Let us assume that we are working with a training sample of size N that contains two categories. Samples of the first category are treated as positive, while any sample that does not belong to the first category is labeled as negative. To separate the two classes, machine learning formulates a discriminant function. Training samples come in two types, linear and nonlinear; we are using a linear sample set. The positive and negative samples are separated by the linear hyperplane f(x) = wx + b = 0, which is called the optimal separating hyperplane when it has the largest classification interval.

Fig. 2: Optimal separating hyperplane

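As a minimal illustration of fitting such a separating hyperplane f(x) = wx + b = 0, the sketch below trains a linear SVM with scikit-learn; the two-dimensional feature vectors are randomly generated placeholders, not the paper's data.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder data: N x 2 feature vectors with labels +1 / -1.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

svm = SVC(kernel="linear", C=1.0)
svm.fit(X, y)

w, b = svm.coef_[0], svm.intercept_[0]   # hyperplane f(x) = w.x + b = 0
margins = y * (X @ w + b)                # support vectors lie at y_i(w.x_i + b) = 1
print("support vectors:", svm.support_.size, "min margin:", round(float(margins.min()), 3))
```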

The SVM optimization is a convex problem, and the Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient conditions for convex problems. When these conditions are satisfied, the product of the Lagrange multipliers (a strategy for finding the local maxima and minima of a function) and the constraints is equal to 0, so any standard support vector must satisfy the following condition:

y_i(wx_i + b) = 1

We have further trained an SVM classifier at each decision node of the decision-tree-based method; the design is shown below. In practice, category 1 is the easiest to discern, so the first of the multiple classifiers distinguishes category 1 from all of the other categories present, and the SVM structure is laid out as a binary decision tree. The training sequence is SVM1 ~ SVM2.

Fig. 3: SVM multi-class classifier

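A minimal sketch of this decision-tree-based multi-class SVM, assuming each node's binary SVM separates one category from the remaining ones and is trained in the sequence SVM1, SVM2, and so on; the class names and the linear kernel are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

class DecisionTreeSVM:
    """Multi-class classifier built from a chain of binary SVMs:
    node k separates class k from all the remaining classes."""

    def __init__(self, classes):
        self.classes = list(classes)             # e.g. ["pedestrian", "automobile", "other"]
        self.nodes = []

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        for cls in self.classes[:-1]:            # SVM1, SVM2, ... one per internal node
            svm = SVC(kernel="linear")
            svm.fit(X, (y == cls).astype(int))   # current class vs. the rest
            self.nodes.append((cls, svm))
            keep = y != cls                      # descend with the remaining samples
            X, y = X[keep], y[keep]
        return self

    def predict_one(self, x):
        for cls, svm in self.nodes:
            if svm.predict([x])[0] == 1:
                return cls
        return self.classes[-1]                  # leaf: the last remaining class
```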

The moving object detection program first detects the moving object and generates a feature vector from the division ratio and compactness of the object in question, which is taken as the classification sample. After the SVM is trained with these samples, it finds the support vector samples and establishes the optimal classification hyperplane. Once the SVM training is over, real-time video is fed into the SVM as test samples and the recognition result is obtained.

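The glue sketch below follows this flow end to end; the helper names (moving_mask, shape_features, DecisionTreeSVM) come from the earlier sketches, and the video file name, class list, and blob-size threshold are placeholders.

```python
import cv2
import numpy as np

classifier = DecisionTreeSVM(["pedestrian", "automobile", "other"])
# classifier.fit(train_features, train_labels)   # assumed to be trained offline

cap = cv2.VideoCapture("traffic.avi")            # placeholder video source
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    frames = frames[-3:]                         # keep x_{t-1}, x_t, x_{t+1}
    if len(frames) < 3:
        continue
    mask = moving_mask(*frames)                  # detect moving pixels
    n, labels = cv2.connectedComponents(mask)
    for i in range(1, n):                        # each moving object blob
        obj = (labels == i).astype(np.uint8)
        if obj.sum() < 200:                      # skip tiny blobs (assumed threshold)
            continue
        feat = shape_features(obj)
        print("object", i, "->", classifier.predict_one(feat))
cap.release()
```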

The flow chart of moving object recognition and classification is shown in Fig. 4.

Fig. 4: Object classification flow chart

3. Conclusion
To conclude, an SVM-based algorithm is used for object recognition. The presented data is based on moving objects, which are divided into pedestrians and automobiles. We use the shape ratio and compactness as the feature vector to train the SVM model. To form a multi-class classifier, the SVM is combined with a binary decision tree, which takes the target feature vector and classifies the objects.
