Pattern Recognition
E-mail: hogijung@hanyang.ac.kr
http://web.yonsei.ac.kr/hgjung
▷ Introduction to Pattern Recognition System
▷ k Nearest Neighbor
▷ Statistical Clustering
Introduction
Machine Perception [2]
• Build a machine that can recognize patterns:
– Speech recognition
– Fingerprint identification
Components of Pattern Classification System [6]
Types of Prediction Problems [6]
Feature and Pattern [6]
Classifier [6]
Pattern Recognition Approaches [6]
Machine Perception [2]
(Example)
“Sorting incoming fish on a conveyor according to species (sea bass or salmon) using optical sensing”
Machine Perception [2]
Problem analysis: set up a camera and take some sample images to extract features.
Feature Selection [2]
The length of the fish as a possible feature for discrimination
Feature Selection [2]
The lightness of the fish as a possible feature for discrimination
Feature Selection [2]
• Adopt the lightness and add the width of the fish
Generalization [2]
Generalization [3]
Polynomial Curve Fitting
Generalization: Model Selection [3]
Polynomial Curve Fitting
0th-order polynomial vs. 1st-order polynomial fits
Generalization: Model Selection [3]
Polynomial Curve Fitting, Over-fitting
Root-mean-square (RMS) error: \( E_{\mathrm{RMS}} = \sqrt{2 E(\mathbf{w}^{*}) / N} \)
Generalization: Sample Size [3]
Polynomial Curve Fitting
9th-order polynomial, N = 15
Generalization: Sample Size [3]
Polynomial Curve Fitting
9th-order polynomial, N = 100
Generalization: Regularization [3]
Polynomial Curve Fitting
Regularization: Penalize large coefficient values
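As an illustrative sketch (not taken from the slides), the regularized fit can be computed by ridge-penalized least squares; the toy sin(2πx) data, the λ values, and the function names below are assumptions. Example (Python):

# Illustrative sketch: least-squares polynomial fit with an optional ridge
# penalty on the coefficients, as in the regularized curve-fitting example.
import numpy as np

def fit_polynomial(x, t, order, lam=0.0):
    """Return coefficients w minimizing ||Xw - t||^2 + lam * ||w||^2."""
    X = np.vander(x, order + 1, increasing=True)   # design matrix [1, x, x^2, ...]
    A = X.T @ X + lam * np.eye(order + 1)          # ridge-regularized normal equations
    return np.linalg.solve(A, X.T @ t)

def rms_error(x, t, w):
    """Root-mean-square error of the fitted polynomial on (x, t)."""
    X = np.vander(x, len(w), increasing=True)
    return np.sqrt(np.mean((X @ w - t) ** 2))

# Toy data: noisy samples of sin(2*pi*x), as in the usual curve-fitting demo.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
t = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(x.size)

w_overfit = fit_polynomial(x, t, order=9, lam=0.0)   # 9th order, no penalty
w_reg = fit_polynomial(x, t, order=9, lam=1e-3)      # 9th order, penalized
print(rms_error(x, t, w_overfit), rms_error(x, t, w_reg))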
Learning and Adaptation [2]
Linear Discriminant Functions [6]
Efficient Feature Extraction: Haar-like Features and the Integral Image
Haar-like Feature [7]
The simple features used are reminiscent of Haar basis functions which
have been used by Papageorgiou et al. (1998).
Three kinds of features: two-rectangle feature, three-rectangle feature, and
four-rectangle feature
Given that the base resolution of the detector is 24×24, the exhaustive set of rectangle features is quite large: 160,000.
Haar-like Feature: Integral Image [7]
Rectangle features can be computed very rapidly using an intermediate representation for the image which we call the integral image.
The integral image at location (x, y) contains the sum of the pixels above and to the left of (x, y), inclusive:
\[ ii(x, y) = \sum_{x' \le x,\; y' \le y} i(x', y') \]
where ii(x, y) is the integral image and i(x, y) is the original image (see Fig. 2). Using the following pair of recurrences:
\[ s(x, y) = s(x, y - 1) + i(x, y), \qquad ii(x, y) = ii(x - 1, y) + s(x, y) \]
(where s(x, y) is the cumulative row sum, s(x, −1) = 0, and ii(−1, y) = 0) the integral image can be computed in one pass over the original image.
Haar-like Feature: Integral Image [7]
Using the integral image, any rectangular sum can be computed in four array references (see Fig. 3).
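A minimal Python sketch of the integral-image computation and the four-reference rectangle sum described above (array layout and function names are my own, not from [7]):

# Minimal sketch of the integral-image idea; row/column indexing is an assumption.
import numpy as np

def integral_image(img):
    """ii(x, y) = sum of img over all pixels above and to the left, inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] using four references into the integral image."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(24 * 24, dtype=np.int64).reshape(24, 24)   # toy 24x24 "detector window"
ii = integral_image(img)
# A two-rectangle (horizontal) Haar-like feature: left half minus right half.
left = rect_sum(ii, 0, 0, 23, 11)
right = rect_sum(ii, 0, 12, 23, 23)
print(left - right)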
Dimension Reduction: PCA
Abstract [1]
Principal component analysis (PCA) is a technique that is useful for the compression and
classification of data. The purpose is to reduce the dimensionality of a data set (sample)
by finding a new set of variables, smaller than the original set of variables, that
nonetheless retains most of the sample's information.
By information we mean the variation present in the sample, given by the correlations
between the original variables. The new variables, called principal components (PCs),
are uncorrelated, and are ordered by the fraction of the total information each retains.
Geometric Picture of Principal Components [1]
Goal: to account for the variation in a sample in as few variables as possible, to some accuracy.
Geometric Picture of Principal Components [1]
PCs are a series of linear least squares fits to a sample, each orthogonal
to all the previous.
Usage of PCA: Data Compression [1]
Because the kth PC retains the kth greatest fraction of the variation, we can approximate each observation by truncating the sum at the first m < p PCs.
Derivation of PCA using the Covariance Method [8]
Let X be a d-dimensional random vector expressed as a column vector. Without loss of generality, assume X has zero mean. We want to find an orthonormal transformation matrix P such that PX has a diagonal covariance matrix, i.e., the components of PX are pairwise uncorrelated.
Derivation of PCA using the Covariance Method [8]
We now have cov(PX) = E[(PX)(PX)^T] = P cov(X) P^T. Requiring cov(PX) to be diagonal, and using the orthonormality of P (P^{-1} = P^T), gives cov(X) P^T = P^T D for some diagonal matrix D. Writing P^T = [P_1, P_2, …, P_d] column by column, this reads cov(X) P_i = λ_i P_i: each P_i is an eigenvector of the covariance matrix of X. Therefore, by finding the eigenvectors of the covariance matrix of X, we find a projection matrix P that satisfies the original constraints.
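A minimal Python sketch of PCA by this covariance method (variable names and the toy data are assumptions, not from [8]):

# Sketch of PCA via eigendecomposition of the covariance matrix.
# Rows of X are observations, columns are variables.
import numpy as np

def pca(X, m):
    """Return the first m principal directions and the projected data."""
    Xc = X - X.mean(axis=0)                      # zero-mean the sample
    C = np.cov(Xc, rowvar=False)                 # d x d covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)         # eigh: symmetric matrix
    order = np.argsort(eigvals)[::-1]            # sort by decreasing variance
    P = eigvecs[:, order[:m]]                    # d x m projection matrix
    Z = Xc @ P                                   # principal component scores
    return P, Z

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 5))  # correlated toy data
P, Z = pca(X, m=2)
print(P.shape, Z.shape)   # (5, 2) (100, 2)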
Bayesian Decision Theory
State of Nature [2]
We let ω denote the state of nature, with ω = ω1 for sea bass and ω = ω2 for salmon.
More generally, we assume that there is some a priori probability (or simply prior) P(ω1) that the next fish is sea bass, and some prior probability P(ω2) that it is salmon.
P(ω1) + P(ω2) = 1 (exclusivity and exhaustivity)
Decision rule with only the prior information:
Decide ω1 if P(ω1) > P(ω2); otherwise decide ω2
Class-Conditional Probability Density [2]
Posterior, likelihood, evidence [2]
Suppose that we know both the prior probabilities P(ωj) and the conditional densities p(x | ωj) for j = 1, 2.
Suppose further that we measure the lightness of a fish and discover that
its value is x.
Posterior, likelihood, evidence [2]
Bayes formula:
\[ P(\omega_j \mid x) = \frac{p(x \mid \omega_j)\, P(\omega_j)}{p(x)} \]
Then, since there are two categories,
\[ p(x) = \sum_{j=1}^{2} p(x \mid \omega_j)\, P(\omega_j) \]
so that
posterior = (likelihood × prior) / evidence
Posterior, likelihood, evidence [2]
Posterior probabilities for the particular priors P(ω1) = 2/3 and P(ω2)= 1/3
for the class-conditional probability densities shown in Fig. 2.1.
Thus in this case, given that a pattern is measured to have feature value x
= 14, the probability it is in category ω2 is roughly 0.08, and that it is in ω1
is 0.92.
At every x, the posteriors sum to 1.0.
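A small illustrative computation of posteriors with Bayes' formula; the Gaussian class-conditional densities below are assumed stand-ins, not the densities of Fig. 2.1, so the numbers will not match the 0.08/0.92 values above. Example (Python):

# Illustrative posterior computation with Bayes' formula (assumed densities).
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (math.sqrt(2 * math.pi) * sigma)

priors = {"w1": 2 / 3, "w2": 1 / 3}
likelihood = {"w1": lambda x: normal_pdf(x, 11.0, 1.5),   # assumed p(x | w1)
              "w2": lambda x: normal_pdf(x, 13.0, 1.5)}   # assumed p(x | w2)

x = 14.0
evidence = sum(likelihood[w](x) * priors[w] for w in priors)            # p(x)
posteriors = {w: likelihood[w](x) * priors[w] / evidence for w in priors}
print(posteriors)   # posteriors sum to 1 at every x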
Decision given the Posterior Probabilities [2]
Decide ω1 if P(ω1 | x) > P(ω2 | x); otherwise decide ω2.
Whenever we observe a particular x, the probability of error is:
P(error | x) = P(ω1 | x) if we decide ω2
P(error | x) = P(ω2 | x) if we decide ω1
Therefore:
P(error | x) = min [ P(ω1 | x), P(ω2 | x) ]  (Bayes decision)
Bayesian Decision Theory : Risk Minimization [2]
Generalization of the preceding ideas
Risk Minimization: Loss Function [2]
Formally, the loss function states how costly each action is, and it is used to convert a probability determination into a decision.
Let {ω1, ω2, …, ωc} be the set of c states of nature (or “categories”).
Let λ(αi | ωj) be the loss incurred for taking action αi when the state of nature is ωj.
Conditional risk:
\[ R(\alpha_i \mid x) = \sum_{j=1}^{c} \lambda(\alpha_i \mid \omega_j)\, P(\omega_j \mid x) \]
Overall risk, for a decision rule α(x):
\[ R = \int R(\alpha(x) \mid x)\, p(x)\, dx \]
The overall risk is minimized by choosing, for every x, the action αi that minimizes the conditional risk R(αi | x).
Risk Minimization [2]
Two-category classification:
α1: deciding ω1
α2: deciding ω2
λij = λ(αi | ωj)
Conditional risk:
\[ R(\alpha_1 \mid x) = \lambda_{11} P(\omega_1 \mid x) + \lambda_{12} P(\omega_2 \mid x) \]
\[ R(\alpha_2 \mid x) = \lambda_{21} P(\omega_1 \mid x) + \lambda_{22} P(\omega_2 \mid x) \]
Risk Minimization [2]
Decide ω1 if
\[ (\lambda_{21} - \lambda_{11})\, p(x \mid \omega_1)\, P(\omega_1) > (\lambda_{12} - \lambda_{22})\, p(x \mid \omega_2)\, P(\omega_2) \]
and decide ω2 otherwise.
Risk Minimization [2]
Likelihood ratio: if
\[ \frac{p(x \mid \omega_1)}{p(x \mid \omega_2)} > \frac{\lambda_{12} - \lambda_{22}}{\lambda_{21} - \lambda_{11}} \cdot \frac{P(\omega_2)}{P(\omega_1)} \]
then take action α1 (decide ω1); otherwise take action α2 (decide ω2).
Minimum Error Rate Classification [2]
Actions are decisions on classes.
If action αi is taken and the true state of nature is ωj, then the decision is correct if i = j and in error if i ≠ j.
Seek a decision rule that minimizes the probability of error, which is the error rate.
Zero-one loss function:
\[ \lambda(\alpha_i, \omega_j) = \begin{cases} 0 & i = j \\ 1 & i \ne j \end{cases} \qquad i, j = 1, \ldots, c \]
Therefore, the conditional risk is:
\[ R(\alpha_i \mid x) = \sum_{j=1}^{c} \lambda(\alpha_i \mid \omega_j)\, P(\omega_j \mid x) = \sum_{j \ne i} P(\omega_j \mid x) = 1 - P(\omega_i \mid x) \]
Minimizing the risk therefore amounts to choosing the class with the maximum posterior P(ωi | x).
Classifier, Discriminant Functions, and Decision Surface [2]
Classifier, Discriminant Functions, and Decision Surface [2]
The multi-category case: assign feature vector x to class ωi if gi(x) > gj(x) for all j ≠ i; for minimum error rate, gi(x) = P(ωi | x).
Classifier, Discriminant Functions, and Decision Surface [2]
The two-category case:
\[ g(x) \equiv g_1(x) - g_2(x) = P(\omega_1 \mid x) - P(\omega_2 \mid x) \]
or, equivalently,
\[ g(x) = \ln \frac{p(x \mid \omega_1)}{p(x \mid \omega_2)} + \ln \frac{P(\omega_1)}{P(\omega_2)} \]
Decide ω1 if g(x) > 0; otherwise decide ω2.
The Normal Density [2]
Univariate density:
\[ p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[ -\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^{2} \right] \]
where:
μ = mean (or expected value) of x
σ² = expected squared deviation, or variance
The Normal Density [2]
A univariate normal distribution has roughly 95% of its area in the range |x − μ| ≤ 2σ, as shown. The peak of the distribution has value \( p(\mu) = 1 / \sqrt{2\pi\sigma^{2}} \).
The Normal Density [2]
Multivariate density:
\[ p(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}\, \lvert \Sigma \rvert^{1/2}} \exp\left[ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^{t}\, \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right] \]
where:
x = (x1, x2, …, xd)^t (t stands for the transpose)
μ = (μ1, μ2, …, μd)^t, the mean vector
Σ = d×d covariance matrix
|Σ| and Σ⁻¹ are its determinant and inverse, respectively
Discriminant Function for the Normal Density [2][6]
Covariance Matrix [6]
Discriminant Function for the Normal Density [6]
Discriminant Functions for the Normal Density [2]
If the covariance matrices for two distributions are equal and proportional
to the identity matrix, then the distributions are spherical in d dimensions,
and the boundary is a generalized hyperplane of d −1 dimensions,
perpendicular to the line separating the means.
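For reference, in this case (Σi = σ²I for all classes) the discriminant functions take the familiar form from [2]:
\[ g_i(\mathbf{x}) = -\frac{\lVert \mathbf{x} - \boldsymbol{\mu}_i \rVert^{2}}{2\sigma^{2}} + \ln P(\omega_i) \]
which becomes linear in x after dropping the quadratic term that is common to all classes.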
Linear Discriminant Analysis
LDA, Two-Classes [6]
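For reference, the standard two-class Fisher criterion underlying this topic (stated generically; the notation may differ from [6]):
\[ J(\mathbf{w}) = \frac{\mathbf{w}^{T} S_{B}\, \mathbf{w}}{\mathbf{w}^{T} S_{W}\, \mathbf{w}}, \qquad S_{B} = (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)^{T}, \qquad S_{W} = \sum_{i=1,2} \sum_{\mathbf{x} \in \mathcal{D}_i} (\mathbf{x} - \boldsymbol{\mu}_i)(\mathbf{x} - \boldsymbol{\mu}_i)^{T} \]
It is maximized (up to scale) by \( \mathbf{w}^{*} \propto S_{W}^{-1}(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2) \).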
LDA, Multi-Classes [6]
LDA Vs. PCA [6]
Limitations of LDA [6]
Linear Discriminant Functions
Linear Discriminant Functions [6]
Gradient Descent [6]
Perceptron Learning [6]
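A minimal sketch of the perceptron learning rule for a two-class linear discriminant (a generic implementation under standard assumptions, not the exact listing in [6]; the data and learning rate are made up). Example (Python):

# Perceptron rule: whenever a sample is misclassified, move the hyperplane toward it.
import numpy as np

def perceptron_train(X, y, eta=1.0, max_epochs=100):
    """X: (n, d) samples, y: labels in {-1, +1}. Returns augmented weights."""
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias component
    w = np.zeros(Xa.shape[1])
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(Xa, y):
            if yi * (w @ xi) <= 0:        # misclassified (or on the boundary)
                w += eta * yi * xi        # update toward the sample
                errors += 1
        if errors == 0:                   # converged: data are linearly separable
            break
    return w

X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = perceptron_train(X, y)
print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))   # should reproduce y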
Minimum Squared Error Solution [6]
The Pseudo-Inverse Solution [6]
Least-Mean-Squares Solution [6]
Summary: Perceptron vs. MSE Procedures [6]
The Ho-Kashyap Procedure [6]
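For reference, the minimum-squared-error criterion and its pseudo-inverse solution in the usual notation (a standard result, stated generically rather than quoted from [6]):
\[ J_{s}(\mathbf{a}) = \lVert Y\mathbf{a} - \mathbf{b} \rVert^{2}, \qquad \mathbf{a}^{*} = Y^{\dagger}\mathbf{b} = (Y^{T} Y)^{-1} Y^{T} \mathbf{b} \]
where Y stacks the augmented training samples and b is a vector of target margins.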
Support Vector Machine
Optimal Separating Hyperplanes [6]
Distance between a plane and a point
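For reference, the distance from a point x to the hyperplane w·x + b = 0, and the resulting maximum-margin objective (standard formulation, stated generically):
\[ d(\mathbf{x}) = \frac{\lvert \mathbf{w} \cdot \mathbf{x} + b \rvert}{\lVert \mathbf{w} \rVert} \]
so, for canonical hyperplanes with margin 2/‖w‖, maximizing the margin is equivalent to minimizing ½‖w‖².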
Lagrange Multipliers [9]
Figure: contour map. The red line shows the constraint g(x, y) = c; the blue lines are contours of f(x, y) = d. The point where the constraint line tangentially touches a contour of f is the solution.
Lagrange Multipliers [9]
To incorporate these conditions into one equation, we introduce an auxiliary function (the Lagrangian)
\[ \Lambda(x, y, \lambda) = f(x, y) + \lambda\, \big( g(x, y) - c \big) \]
and solve
\[ \nabla_{x, y, \lambda}\, \Lambda(x, y, \lambda) = 0 . \]
Kuhn-Tucker Theorem [6]
The Lagrangian Dual Problem [6]
Dual Problem [10]
Primal problem: minimize (in w, b)
\[ \frac{1}{2} \lVert \mathbf{w} \rVert^{2} \]
subject to, for i = 1, …, n,
\[ y_i \, (\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1 . \]
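The corresponding Lagrangian dual (a standard result, stated generically rather than quoted from [10]): maximize over α
\[ \sum_{i=1}^{n} \alpha_i - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j \, (\mathbf{x}_i \cdot \mathbf{x}_j) \qquad \text{subject to} \quad \alpha_i \ge 0, \quad \sum_{i=1}^{n} \alpha_i y_i = 0 \]
with the optimal weight vector recovered as \( \mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i \).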
Non-separable Case [6]
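For reference, the usual soft-margin formulation with slack variables ξi (standard form, stated generically rather than quoted from [6]):
\[ \min_{\mathbf{w}, b, \boldsymbol{\xi}} \; \frac{1}{2}\lVert \mathbf{w} \rVert^{2} + C \sum_{i=1}^{n} \xi_i \qquad \text{s.t.} \quad y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0 \]
where C trades margin width against training errors.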
Non-linear SVMs [6]
Implicit Mappings: An Example [6]
Kernel Methods [6]
Kernel Functions
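Commonly used kernel functions (typical examples; not necessarily the exact list on the slide):
\[ K(\mathbf{x}, \mathbf{y}) = \mathbf{x} \cdot \mathbf{y}, \qquad K(\mathbf{x}, \mathbf{y}) = (\mathbf{x} \cdot \mathbf{y} + 1)^{p}, \qquad K(\mathbf{x}, \mathbf{y}) = \exp\!\left( -\frac{\lVert \mathbf{x} - \mathbf{y} \rVert^{2}}{2\sigma^{2}} \right), \qquad K(\mathbf{x}, \mathbf{y}) = \tanh(\kappa\, \mathbf{x} \cdot \mathbf{y} + \theta) \]
(linear, polynomial of degree p, Gaussian RBF, sigmoid).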
Architecture of an SVM [6]
Case Study: XOR [6]
k Nearest Neighbor
The k Nearest Neighbor Classification Rule [6]
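The rule assigns a sample to the class most common among its k nearest training samples. A minimal Python sketch (a generic implementation with Euclidean distance and majority vote; the data and function names are made up, not from [6]):

# k-nearest-neighbor classification by majority vote over the k closest samples.
import numpy as np
from collections import Counter

def knn_classify(X_train, y_train, x, k=3):
    """Assign x to the class most common among its k nearest training samples."""
    d = np.linalg.norm(X_train - x, axis=1)       # distances to all training points
    nearest = np.argsort(d)[:k]                   # indices of the k closest samples
    votes = Counter(y_train[nearest].tolist())
    return votes.most_common(1)[0][0]

X_train = np.array([[1.0, 1.0], [1.2, 0.8], [-1.0, -1.0], [-0.8, -1.2]])
y_train = np.array([0, 0, 1, 1])
print(knn_classify(X_train, y_train, np.array([0.9, 1.1]), k=3))   # -> 0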
Statistical Clustering
Non-parametric Unsupervised Learning [6]
Proximity Measures [6]
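A typical family of dissimilarity measures for real-valued features is the Minkowski metric (given here as an example; not necessarily the full list in [6]):
\[ d_{p}(\mathbf{x}, \mathbf{y}) = \left( \sum_{i=1}^{d} \lvert x_i - y_i \rvert^{p} \right)^{1/p} \]
with p = 2 the Euclidean distance and p = 1 the Manhattan (city-block) distance.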
Criterion Function for Clustering [6]
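A common choice is the sum-of-squared-error criterion (a standard example, stated generically):
\[ J_{e} = \sum_{i=1}^{k} \sum_{\mathbf{x} \in C_i} \lVert \mathbf{x} - \boldsymbol{\mu}_i \rVert^{2}, \qquad \boldsymbol{\mu}_i = \frac{1}{\lvert C_i \rvert} \sum_{\mathbf{x} \in C_i} \mathbf{x} \]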
Cluster Validity [6]
Iterative Optimization [6]
The k-means Algorithm [6][4]
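A minimal Python sketch of the k-means iteration (a generic implementation, not the exact listing from [6] or [4]; the toy data and seed are assumptions):

# k-means: alternate between assigning samples to the nearest mean and
# recomputing the means, until the assignments stop changing.
import numpy as np

def k_means(X, k, max_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    means = X[rng.choice(len(X), size=k, replace=False)]   # initialize with k samples
    for _ in range(max_iters):
        d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)  # (n, k) distances
        labels = d.argmin(axis=1)                                       # nearest-mean assignment
        # Recompute means (assumes no cluster becomes empty in this sketch).
        new_means = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_means, means):
            break
        means = new_means
    return means, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
means, labels = k_means(X, k=2)
print(means)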
References
1. Frank Masci, “An Introduction to Principal Component Analysis,” http://web.ipac.caltech.edu/staff/fmasci/home/statistics_refs/PrincipalComponentAnalysis.pdf
2. Richard O. Duda, Peter E. Hart, David G. Stork, Pattern Classification,
second edition, John Wiley & Sons, Inc., 2001.
3. Christopher M. Bishop, Pattern Recognition and Machine Learning,
Springer, 2007.
4. Sergios Theodoridis, Konstantinos Koutroumbas, Pattern Recognition,
Academic Press, 2006.
5. Ho Gi Jung, Yun Hee Lee, Pal Joo Yoon, In Yong Hwang, and Jaihie Kim, “Sensor Fusion Based Obstacle Detection/Classification for Active Pedestrian Protection System,” Lecture Notes in Computer Science, Vol. 4292, pp. 294-305.
6. Ricardo Gutierrez-Osuna, “Pattern Recognition, Lecture Notes,” available
at http://research.cs.tamu.edu/prism/lectures.htm
7. Paul Viola, Michael Jones, “Robust Real-Time Face Detection,” International Journal of Computer Vision, 57(2), 2004, pp. 137-154.
8. Wikipedia, “Principal component analysis,” available at
http://en.wikipedia.org/wiki/Principal_component_analysis
References
9. Wikipedia, “Lagrange multipliers,” http://en.wikipedia.org/wiki/Lagrange_multipliers
10. Wikipedia, “Support vector machine,” http://en.wikipedia.org/wiki/Support_vector_machine