Lectures 39-43 (27-10-09, 29-10-09, 30-10-09, 02-11-09, 03-11-09)
What is Cluster Analysis?
• Finding groups of objects such that the objects in a group will be similar (or related)
to one another and different from (or unrelated to) the objects in other groups
– Intra-cluster distances are minimized; inter-cluster distances are maximized.
• Clustering is unsupervised classification: no predefined classes
What is not Cluster Analysis?
• Supervised classification
– Have class label information
• Simple segmentation
– Dividing students into different registration groups
alphabetically, by last name
• Results of a query
– Groupings are a result of an external specification
3
Notion of a Cluster can be Ambiguous
4
Types of Clusterings
• A clustering is a set of clusters
• Important distinction between hierarchical and
partitional sets of clusters
• Partitional Clustering
– A division of data objects into non-overlapping subsets
(clusters) such that each data object is in exactly one subset
• Hierarchical clustering
– A set of nested clusters organized as a hierarchical tree
5
Partitional Clustering
6
Hierarchical Clustering
[Figure: nested clusters over points p1-p4 and the corresponding dendrogram.]
7
Agglomerative and Divisive
8
Different Types of Clusters
• Exclusive versus non-exclusive
(overlapping)
– The clusters shown on slide 4 are all exclusive.
– In non-exclusive clusters, points may belong to
multiple clusters. E.g. a person enrolled as a
student in a university can also be an employee of
the university.
10
Types of Clusters
• 1. Well-separated clusters
• 2. Center-based (prototype-based) clusters
• 3. Contiguity-based clusters
• 4. Density-based clusters
• 5. Property or conceptual clusters
11
Types of Clusters: Well-Separated
• Well-Separated Clusters:
– A cluster is a set of points such that any point in a
cluster is closer (or more similar) to every other
point in the cluster than to any point not in the
cluster.
– The distance between any two points in different
groups is larger than the distance between any two
points within a group.
3 well-separated clusters
12
Types of Clusters: Center-Based
• Center-based (prototype-based)
– A cluster is a set of objects such that an object in a
cluster is closer (more similar) to the “center” of a
cluster, than to the center of any other cluster
– The center of a cluster is often a centroid, the
average of all the points in the cluster, or a medoid,
the most “representative” point of a cluster
4 center-based clusters
Each point in a cluster is closer to the center of its cluster
than to the center of any other cluster.
13
Contiguity-based Clusters (Nearest Neighbor or Graph-Based)
– If the data is represented as a graph, where the nodes are objects and the links represent connections among objects, then a cluster can be defined as a connected component.
– Here a cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point in another cluster.
8 contiguous clusters
14
Types of Clusters: Density-Based
• Density-based
– A cluster is a dense region of points, which is
separated by low-density regions, from other
regions of high density.
– Used when the clusters are irregular or intertwined,
and when noise and outliers are present.
6 density-based clusters
2 Overlapping Circles
16
Characteristics of the Input Data Are
Important
• Type of proximity or density measure
– This is a derived measure, but central to clustering
• Sparseness
– Dictates type of similarity
– Adds to efficiency
• Attribute type
– Dictates type of similarity
• Type of Data
– Dictates type of similarity
– Other characteristics, e.g., autocorrelation
• Dimensionality
• Noise and Outliers
• Type of Distribution
17
Clustering Algorithms
• K-means and its variants
• Hierarchical clustering
• Density-based clustering
18
K-means Clustering – Details
• This is a prototype-based clustering algorithm.
• Initial centroids are often chosen randomly.
– The clusters produced can vary from one run to another.
• The centroid is (typically) the mean of the points in the cluster.
– A centroid almost never corresponds to an actual data point.
• ‘Closeness’ is measured by Euclidean distance, cosine similarity, correlation, etc.
• K-means will converge for the common similarity measures mentioned above.
– Most of the convergence happens in the first few iterations.
19
K-means Clustering
20
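The basic algorithm (shown only as a figure in the original slides) can be sketched as follows. This is a minimal NumPy illustration of the standard iteration (assign each point to its nearest centroid, then recompute each centroid as the mean of its points), not the slide's exact pseudocode.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal K-means sketch: X is an (n, d) array, k the number of clusters."""
    rng = np.random.default_rng(seed)
    # Step 1: choose k initial centroids (here: k random data points).
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 2: assign every point to its closest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute each centroid as the mean of the points assigned to it.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        # Step 4: stop when the centroids no longer change.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```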
Two different K-means Clusterings
[Figure: a set of original points and two different K-means clusterings of the same data.]
22
Importance of Choosing Initial
Centroids
[Figure: K-means clusterings over successive iterations from a given initialization, showing how the centroids move.]
23
Evaluating K-means Clusters
24
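The SSE (sum of squared error) used to evaluate a K-means clustering throughout the remaining slides is the standard objective; the equation on the original slide did not survive extraction, so the standard definition is given here:

SSE = \sum_{i=1}^{K} \sum_{x \in C_i} \mathrm{dist}(x, m_i)^2

where m_i is the centroid of cluster C_i.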
– The centroid of a cluster containing the 6 data objects (1,2), (2,3), (3,2), (5,3), (5,7) and (6,7) is
( (1+2+3+5+5+6)/6, (2+3+2+3+7+7)/6 ) = (22/6, 24/6) = (3.67, 4).
– So (3.67, 4) is the centroid of this set of points.
25
• K-means clustering for document data
• K-means clustering is not restricted to spatial data; it can also be applied to other kinds of data, such as text.
• The proximity measure, however, should not be Euclidean distance. One common measure for text data is the cosine measure.
• Suppose the document data is represented by a document-term matrix (one row per document, one column per term).
26
• Our aim is to maximize the similarity of the documents in a cluster to the centroid of the cluster. For text data this quantity is termed the cohesion of the cluster:

Total cohesion = \sum_{i=1}^{k} \sum_{x \in C_i} \mathrm{cosine}(x, m_i)
27
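As a concrete illustration of this objective, the sketch below computes the total cohesion of a clustering of a small, made-up document-term matrix (the actual matrix from the original slide is not available) using cosine similarity to the cluster centroids.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two term-frequency vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical document-term matrix: 4 documents x 5 terms.
docs = np.array([
    [2, 1, 0, 0, 1],
    [1, 2, 0, 1, 0],
    [0, 0, 3, 2, 1],
    [0, 1, 2, 3, 0],
], dtype=float)

labels = np.array([0, 0, 1, 1])          # an assumed 2-cluster assignment
total_cohesion = 0.0
for i in range(labels.max() + 1):
    cluster = docs[labels == i]
    m_i = cluster.mean(axis=0)            # centroid of cluster C_i
    total_cohesion += sum(cosine(x, m_i) for x in cluster)

print(total_cohesion)                     # larger values mean tighter clusters
```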
Importance of Choosing Initial
Centroids …
[Figure: K-means over successive iterations from a different choice of initial centroids.]
28
Problems with Selecting Initial Points
• If there are K ‘real’ clusters, then the chance of selecting one initial centroid from each cluster is small.
• Sometimes the initial centroids will readjust themselves in the ‘right’ way, and sometimes they will not.
• Consider an example of five pairs of clusters.
29
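To make "small" concrete: assuming the K real clusters are of equal size and the K initial centroids are drawn at random from the data (an assumption, for illustration), the probability that each cluster receives exactly one initial centroid is roughly

P \approx \frac{K!}{K^K}, \qquad \text{e.g. for } K = 10:\ \frac{10!}{10^{10}} \approx 0.00036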
10 Clusters Example
[Figure: the 10-cluster (five pairs) example after iteration 4.]
Starting with two initial centroids in one cluster of each pair of clusters
33
10 Clusters Example
[Figure: the 10-cluster example at iterations 1-4.]
Starting with two initial centroids in one cluster of each pair of clusters
34
10 Clusters Example
[Figure: the 10-cluster example after iteration 4 for this initialization.]
Starting with some pairs of clusters having three initial centroids, while others have only one.
10 Clusters Example
[Figure: the 10-cluster example at iterations 1-4 for this initialization.]
Starting with some pairs of clusters having three initial centroids, while others have only one.
Solutions to the Selection of Initial Centroids
• Multiple runs
– Helps, but repeated runs may not completely overcome the problems associated with the random initial values of the K centroids.
37
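A sketch of the "multiple runs" strategy: run K-means several times from different random initializations and keep the clustering with the lowest SSE. It reuses the `kmeans` function sketched earlier, which is an assumption of this illustration.

```python
import numpy as np

def sse(X, labels, centroids):
    """Sum of squared distances of each point to its cluster centroid."""
    return sum(np.sum((X[labels == j] - c) ** 2) for j, c in enumerate(centroids))

def kmeans_multiple_runs(X, k, n_runs=10):
    best = None
    for seed in range(n_runs):
        labels, centroids = kmeans(X, k, seed=seed)   # kmeans() as sketched above
        err = sse(X, labels, centroids)
        if best is None or err < best[0]:
            best = (err, labels, centroids)
    return best  # (lowest SSE, its labels, its centroids)
```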
• Select more than k initial centroids and then select among these initial centroids
– Select the most widely separated ones
– However, such an approach can select outliers as initial centroids, rather than points in the dense regions.
38
Additional Issues of K-means
Handling Empty Clusters
• The basic K-means algorithm can yield empty clusters if no points are allocated to a cluster during the assignment step.
• Several replacement strategies:
– Choose the point that contributes most to the SSE, i.e., the point that is farthest from any current centroid, as the new centroid.
– Alternatively, choose a replacement point from the cluster with the highest SSE. This will split that cluster and reduce the overall SSE of the clustering.
– If there are several empty clusters, the above can be repeated several times.
39
Additional issues contd…
• Outliers:
– Outliers can unduly influence the clusters when the SSE criterion is used.
– If outliers are present, the resulting cluster centroids may not be as representative of the clusters as they would be otherwise.
– The SSE will also be larger.
– So it is often better to eliminate outliers before applying the K-means algorithm.
40
Reducing SSE with Postprocessing
• SSE can be reduced by increasing the number of clusters, i.e., by increasing the value of K.
• There are also strategies that reduce SSE while keeping the number of clusters fixed.
• The strategy is to focus on individual clusters, since the total SSE is simply the sum of the SSEs contributed by each cluster.
• The total SSE can then be changed by splitting or merging clusters.
41
• Two strategies that decrease the total SSE by increasing the number of clusters:
– Split a cluster: the cluster with the largest SSE is split.
– Introduce a new cluster centroid: the point that is farthest from any cluster center is chosen.
• Two strategies that decrease the number of clusters while minimizing the increase in total SSE:
– Disperse a cluster: remove the centroid of the cluster that contributes least to the total SSE and reassign its points to other clusters.
– Merge two clusters: the two clusters with the closest centroids are chosen and merged.
42
Updating Centers Incrementally
• In the basic K-means algorithm, centroids are
updated after all points are assigned to a centroid
• An alternative is to update the centroids after each
assignment (incremental approach)
– This requires either 0 or 2 updates to the cluster centroids at each step, since a point either moves to a new cluster (2 updates) or stays in its current cluster (0 updates).
43
– We never get an empty cluster, since all clusters start with a single data point.
– ‘Weights’ can be used to change the impact of a point.
– One drawback of incremental updates is order dependency: the clusters produced may depend on the order in which the points are processed.
44
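A minimal sketch of an incremental centroid update: when a point moves from one cluster to another, only the two affected centroids (stored as running means) need adjusting; if it stays, nothing changes, which is the "0 or 2 updates" noted above. The function names and bookkeeping are illustrative assumptions.

```python
import numpy as np

def move_point(x, src, dst, centroids, counts):
    """Move point x from cluster src to cluster dst, updating running means.
    Assumes the source cluster keeps at least one point."""
    if src == dst:
        return  # point stays in its cluster: 0 centroid updates
    # Remove x's contribution from the source centroid (1st update).
    centroids[src] = (centroids[src] * counts[src] - x) / (counts[src] - 1)
    counts[src] -= 1
    # Add x's contribution to the destination centroid (2nd update).
    centroids[dst] = (centroids[dst] * counts[dst] + x) / (counts[dst] + 1)
    counts[dst] += 1
```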
Pre-processing and Post-processing
• Pre-processing
– Normalize the data
– Eliminate outliers
• Post-processing
– Eliminate small clusters that may represent outliers
– Split ‘loose’ clusters, i.e., clusters with relatively high
SSE
– Merge clusters that are ‘close’ and that have relatively
low SSE
45
Bisecting K-means
46
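The bisecting K-means procedure itself is not spelled out in the surviving text; the sketch below follows the commonly described scheme (repeatedly pick a cluster, e.g. the one with the largest SSE, and split it with 2-means), reusing the `kmeans` helper sketched earlier. All names here are illustrative.

```python
import numpy as np

def bisecting_kmeans(X, k):
    """Split clusters one at a time with 2-means until k clusters remain."""
    clusters = [np.arange(len(X))]            # start with one all-inclusive cluster
    while len(clusters) < k:
        def cluster_sse(idx):
            pts = X[idx]
            return np.sum((pts - pts.mean(axis=0)) ** 2)
        # Pick the cluster with the largest SSE to split (one common choice).
        worst = max(range(len(clusters)), key=lambda i: cluster_sse(clusters[i]))
        idx = clusters.pop(worst)
        labels, _ = kmeans(X[idx], 2)          # bisect with ordinary 2-means
        clusters.append(idx[labels == 0])
        clusters.append(idx[labels == 1])
    return clusters                            # list of index arrays, one per cluster
```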
Bisecting K-means Example
47
Limitations of K-means
• K-means has problems when clusters are of
differing
– Sizes
– Densities
– Non-globular shapes
48
Limitations of K-means: Differing
Sizes
49
Limitations of K-means: Differing
Density
50
Limitations of K-means: Non-globular
Shapes
51
Overcoming K-means Limitations
53
Overcoming K-means Limitations
54
Hierarchical Clustering
• Produces a set of nested clusters
organized as a hierarchical tree
• Can be visualized as a dendrogram
– A tree like diagram that records the
sequences of merges or splits
[Figure: a nested clustering of six points and the corresponding dendrogram recording the merges.]
Hierarchical Clustering
• Do not have to assume any particular
number of clusters
– Any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level
56
Hierarchical Clustering
• Two main types of hierarchical clustering
– Agglomerative:
• Start with the points as individual clusters
• At each step, merge the closest pair of clusters until only one cluster (or k clusters) left
– Divisive:
• Start with one, all-inclusive cluster
• At each step, split a cluster until each cluster contains a point (or there are k clusters)
57
Agglomerative Clustering Algorithm
• Basic algorithm is straightforward
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4. Merge the two closest clusters
5. Update the proximity matrix
6. Until only a single cluster remains
58
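In practice this whole loop is available off the shelf; below is a minimal SciPy sketch (on made-up points) that computes the linkage and plots the dendrogram. The choice of points and the plotting are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
import matplotlib.pyplot as plt

X = np.random.default_rng(0).random((6, 2))   # six random 2-D points

# 'single' = MIN, 'complete' = MAX, 'average' = group average, 'ward' = Ward's method
Z = linkage(X, method="single")

dendrogram(Z)            # visualize the sequence of merges
plt.show()
```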
Starting Situation
• Start with clusters of individual points and a proximity matrix
[Figure: proximity matrix with one row and one column per point (p1, p2, p3, ...).]
59
Intermediate Situation
• After some merging steps, we have some clusters
[Figure: five clusters C1-C5 and the corresponding 5 x 5 proximity matrix.]
60
Intermediate Situation
• We want to merge the two closest clusters (C2 and
C5) and update the proximity matrix.
[Figure: the same five clusters, with C2 and C5 about to be merged, and the proximity matrix.]
After Merging
• The question is: “How do we update the proximity matrix?”
[Figure: proximity matrix after merging C2 and C5; the entries involving the new cluster C2 ∪ C5 are marked ‘?’.]
How to Define Inter-Cluster Similarity
• MIN (single link)
• MAX (complete link)
• Group Average
• Distance Between Centroids
• Other methods driven by an objective function
– Ward’s Method uses squared error
63
How to Define Inter-Cluster Similarity: MIN (Single Link)
64
Single-linkage
In single-linkage clustering, the distance between any two clusters C1 and C2 is the minimum of all distances from an object in cluster C1 to an object in cluster C2.
65
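Expressed as a formula (a standard formulation, not taken from the slide):

d_{\min}(C_1, C_2) = \min_{x \in C_1,\, y \in C_2} d(x, y)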
Strength of MIN
66
Limitations of MIN
67
Hierarchical Clustering: MIN
[Figure: nested clusters and dendrogram produced by MIN (single link) on six points.]
68
How to Define Inter-Cluster Similarity: MAX (Complete Link)
70
Complete-linkage
In complete-linkage clustering, the distance between two clusters C1 and C2 is the maximum of all distances from an object in cluster C1 to an object in cluster C2. It tends to generate compact, closely packed clusters.
71
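Expressed as a formula (standard formulation):

d_{\max}(C_1, C_2) = \max_{x \in C_1,\, y \in C_2} d(x, y)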
Strength of MAX
72
Limitations of MAX
How to Define Inter-Cluster Similarity: Group Average
74
Average-linkage
In average-linkage clustering, the distance between two clusters C1 and C2 is the average of all pairwise distances from an object in cluster C1 to an object in cluster C2.
75
Hierarchical Clustering: Group
Average
• Compromise between Single and
Complete Link
• Strengths
– Less susceptible to noise and outliers
• Limitations
– Biased towards globular clusters
76
How to Define Inter-Cluster Similarity: Distance Between Centroids
77
Cluster Similarity: MIN or Single
Link
• Similarity of two clusters is based on the
two most similar (closest) points in the
different clusters
– Determined by one pair of points, i.e., by one
link in the proximity graph.
      I1    I2    I3    I4    I5
I1   1.00  0.90  0.10  0.65  0.20
I2   0.90  1.00  0.70  0.60  0.50
I3   0.10  0.70  1.00  0.40  0.30
I4   0.65  0.60  0.40  1.00  0.80
I5   0.20  0.50  0.30  0.80  1.00
Hierarchical Clustering: MAX
[Figure: nested clusters and dendrogram produced by MAX (complete link) on six points.]
80
Cluster Similarity: Group Average
• Proximity of two clusters is the average of the pairwise proximities between points in the two clusters:

proximity(Cluster_i, Cluster_j) = \frac{\sum_{p_i \in Cluster_i} \sum_{p_j \in Cluster_j} proximity(p_i, p_j)}{|Cluster_i| \cdot |Cluster_j|}

(using the same similarity matrix over I1-I5 shown earlier)
Hierarchical Clustering: Group Average
[Figure: nested clusters and dendrogram produced by group average on six points.]
82
Cluster Similarity: Ward’s Method
• The proximity between two clusters is based on the
increase in squared error that results when two clusters are
merged
– Similar to group average if distance between points is distance
squared
83
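Written out (a standard formulation, not taken from the slide), the Ward proximity of clusters A and B is the increase in SSE caused by merging them:

\Delta(A, B) = \sum_{x \in A \cup B} \lVert x - m_{A \cup B} \rVert^2 - \sum_{x \in A} \lVert x - m_A \rVert^2 - \sum_{x \in B} \lVert x - m_B \rVert^2

where m_A, m_B and m_{A ∪ B} are the respective centroids.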
Hierarchical Clustering: Time and Space Requirements
• O(N^2) space, since it uses the proximity matrix.
– N is the number of points.
• O(N^3) time in the straightforward implementation: there are N merge steps, and at each step the proximity matrix must be searched and updated.
84
Hierarchical Clustering: Comparison
[Figure: comparison of the clusterings and dendrograms produced by MIN, MAX, Group Average, and Ward's Method on the same six points.]
90
Hierarchical Clustering: Problems and
Limitations
• Once a decision is made to combine two clusters, it
cannot be undone
91
MST: Divisive Hierarchical
Clustering
• Build MST (Minimum Spanning Tree)
– Start with a tree that consists of any point
– In successive steps, look for the closest pair of points
(p, q) such that one point (p) is in the current tree but
the other (q) is not
– Add q to the tree and put an edge between p and q
92
MST: Divisive Hierarchical
Clustering
• Use MST for constructing hierarchy of
clusters
93
DBSCAN
• DBSCAN is a density-based algorithm.
94
DBSCAN: Core, Border, and Noise Points
– A core point has at least MinPts points within a distance Eps of it (counting the point itself).
– A border point is not a core point, but falls within Eps of a core point.
– A noise point is any point that is neither a core point nor a border point.
[Figures: core, border, and noise points illustrated for (Eps = 1, MinPts = 4) and (Eps = 1, MinPts = 5).]
DBSCAN Algorithm
• 1: Label all points as core, border, or noise points.
• 2: Eliminate the noise points.
• 3: Put an edge between all pairs of core points that are within Eps of each other.
• 4: Make each group of connected core points into a separate cluster.
• 5: Assign each border point to one of the clusters of its associated core points.
97
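A minimal scikit-learn sketch of running DBSCAN with the parameters used in the earlier illustration (Eps = 1, MinPts = 4); the data here is made up. Points labelled -1 are noise.

```python
import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.default_rng(0).random((200, 2)) * 10   # made-up 2-D data

db = DBSCAN(eps=1.0, min_samples=4).fit(X)
labels = db.labels_                          # cluster index per point; -1 means noise
core_mask = np.zeros(len(X), dtype=bool)
core_mask[db.core_sample_indices_] = True    # True for core points

print("clusters:", len(set(labels) - {-1}), "noise points:", np.sum(labels == -1))
```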
DBSCAN: Core, Border and Noise
Points
Original Points
Clusters
• Because DBSCAN uses a density-based definition of a cluster, it is relatively resistant to noise.
• It can handle clusters of different shapes and sizes.
• DBSCAN can find many clusters that could not be found using K-means, such as those shown above.
99
When DBSCAN Does NOT Work Well
Original Points
(MinPts=4, Eps=9.75).
• DBSCAN has trouble when clusters have varying densities.
• It also has trouble when the data is high-dimensional, because density is difficult to define for such data.
• DBSCAN can be expensive when computing the nearest neighbors requires computing all pairwise proximities, as for high-dimensional data. (MinPts=4, Eps=9.92)
100
DBSCAN: Determining EPS and MinPts
• The basic approach is to look at the behavior of the distance (k-dist) from a point to its k-th nearest neighbor.
• The idea is that for points in a cluster, their k-th nearest neighbors are at roughly the same distance.
• Noise points have their k-th nearest neighbor at a farther distance.
• So, plot the sorted distance of every point to its k-th nearest neighbor.
101
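A sketch of the k-dist plot described above (here with k = MinPts = 4, using scikit-learn's nearest-neighbor search on made-up data): a sharp knee in the sorted curve suggests a value for Eps.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import NearestNeighbors

X = np.random.default_rng(0).random((300, 2))   # made-up data
k = 4

nn = NearestNeighbors(n_neighbors=k + 1).fit(X)  # +1 because each point is its own 0-th neighbor
dists, _ = nn.kneighbors(X)
k_dist = np.sort(dists[:, k])                    # distance of every point to its k-th neighbor

plt.plot(k_dist)
plt.ylabel(f"distance to {k}-th nearest neighbor")
plt.xlabel("points sorted by that distance")
plt.show()
```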
Cluster Validity
• For supervised classification we have a variety of measures to
evaluate how good our model is
– Accuracy, precision, recall
102
Clusters found in Random Data
[Figure: clusterings found in random data by different clustering algorithms.]
103
Different Aspects of Cluster Validation
1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data.
2. Determining the ‘correct’ number of clusters.
3. Evaluating how well the results of a cluster analysis fit the data without reference to external information.
– Use only the data
4. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels.
5. Comparing the results of two different sets of cluster analyses to determine which is better.
104
Measures of Cluster Validity
• Internal (unsupervised)
• External (supervised)
• Relative
105
Measures of Cluster Validity
• Numerical measures that are applied to judge various
aspects of cluster validity, are classified into the
following three types.
– External Index (supervised): Used to measure the extent to which
cluster labels match externally supplied class labels.
• Entropy
– Internal Index (unsupervised): Used to measure the goodness of a clustering structure without respect to external information.
• Sum of Squared Error (SSE)
– Relative Index: Used to compare two different clusterings or
clusters.
• Often an external or internal index is used for this function,
e.g., SSE or entropy
• Sometimes these are referred to as criteria instead of
indices
– However, sometimes criterion is the general strategy and index is
the numerical measure that implements the criterion.
106
• Internal (unsupervised):
– Used to measure the goodness of a clustering structure using only the information present in the data, without reference to external information.
– E.g. Sum of Squared Error (SSE)
– Unsupervised measures are again divided into two
classes:
• Cluster cohesion: checks for cluster tightness and
compactness
• Cluster separation: checks how well separated or distinct
clusters are.
– These measures use only information provided in the
data so they are called internal measures.
107
• External (supervised):
– Used to measure the extent to which cluster
labels match externally supplied class labels.
– E.g. entropy, which measures how well cluster
labels match externally supplied labels.
– These measures are called external measures because the information used for the evaluation is not present in the data itself.
108
• Relative:
– Used to compare two different clusterings or
clusters.
– This kind of evaluation measure can be either supervised or unsupervised.
– E.g. K-means clusterings can be compared
using either SSE or entropy.
109
Unsupervised evaluation using proximity
matrix
• Two matrices
– Proximity Matrix
– “Incidence” Matrix
• One row and one column for each data point
• An entry is 1 if the associated pair of points belong to the same cluster
• An entry is 0 if the associated pair of points belongs to different clusters
110
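A sketch of this idea: build the incidence matrix from the cluster labels, build the proximity (here: distance) matrix from the data, and correlate their entries. The helper name and the use of distances rather than similarities are assumptions; with distances, a strongly negative correlation indicates that same-cluster pairs are close together.

```python
import numpy as np
from scipy.spatial.distance import squareform, pdist
from scipy.stats import pearsonr

def cluster_label_correlation(X, labels):
    """Correlation between the incidence matrix and the distance matrix.
    X: (n, d) data array; labels: integer cluster label per point."""
    dist = pdist(X)                                   # condensed pairwise distances
    same = (labels[:, None] == labels[None, :]).astype(float)
    incidence = squareform(same, checks=False)        # condensed incidence entries
    r, _ = pearsonr(incidence, dist)
    return r   # strongly negative when same-cluster pairs have small distances
```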
Unsupervised: Measuring Cluster
Validity Via Correlation
• The correlation between the ideal and actual similarity matrices is calculated for the K-means clusterings of the following two data sets.
[Figure: the two data sets, one with well-separated clusters and one random.]
Corr = 0.9235 (data with well-separated clusters)
Corr = 0.5810 (random data)
This reflects the expected result: the clusters found by K-means in the random data are worse than the clusters found by K-means in the data with well-separated clusters.
Using Similarity Matrix for Cluster
Validation
• Order the similarity matrix with respect to
cluster labels and inspect visually.
[Figure: data points and the corresponding similarity matrix, with points ordered by cluster label.]
112
Using Similarity Matrix for Cluster
Validation
• Clusters in random data are not so crisp
[Figure: similarity matrix ordered by the cluster labels found by DBSCAN in random data.]
113
Using Similarity Matrix for Cluster
Validation
• Clusters in random data are not so crisp
[Figure: similarity matrix ordered by the cluster labels found by K-means in random data.]
114
Using Similarity Matrix for Cluster
Validation
• Clusters in random data are not so crisp
[Figure: similarity matrix ordered by the cluster labels found by complete link in random data.]
115
Using Similarity Matrix for Cluster
Validation
[Figure: similarity matrix ordered by the cluster labels found by DBSCAN on a more complicated data set (clusters 1-7).]
116
Internal Measures: SSE
• Clusters in more complicated figures aren’t well separated.
• Internal Index: Used to measure the goodness of a clustering structure without respect to external information
– SSE
• SSE is good for comparing two clusterings or two clusters.
[Figure: an example data set and the corresponding plot of SSE versus the number of clusters K.]
Internal Measures: SSE
• SSE curve for a more complicated data set
[Figure: the data set and its SSE-versus-K curve.]
118
Framework for Cluster Validity
• Need a framework to interpret any measure.
– For example, if our measure of evaluation has the value, 10, is that
good, fair, or poor?
• Statistics provide a framework for cluster validity
– The more “atypical” a clustering result is, the more likely it represents
valid structure in the data
– Can compare the values of an index that result from random data or
clusterings to those of a clustering result.
• If the value of the index is unlikely, then the cluster results are valid
– These approaches are more complicated and harder to understand.
• For comparing the results of two different sets of
cluster analyses, a framework is less necessary.
– However, there is the question of whether the difference between two
index values is significant
119
Statistical Framework for SSE
• Example
– Compare SSE of 0.005 against three clusters in
random data
– Histogram shows SSE of three clusters in 500 sets of
random data points of size 100 distributed over the
range 0.2 – 0.8 for x and y values
[Figure: the 100-point data set and a histogram of the SSE values of the three clusters found in the 500 random data sets; the histogram ranges roughly from 0.016 to 0.034, so an SSE of 0.005 is highly unusual.]
120
Statistical Framework for Correlation
• Correlation of incidence and proximity matrices
for the K-means clusterings of the following
two data sets.
[Figure: the two data sets (well-separated clusters and random data) used for the correlation comparison.]
121
Internal Measures: Cohesion and
Separation
• Cluster Cohesion: measures how closely related the objects in a cluster are
– Example: SSE
• Cluster Separation: measures how distinct or well-separated a cluster is from other clusters
• Example: Squared Error
– Cohesion is measured by the within-cluster sum of squares (SSE):
WSS = \sum_{i} \sum_{x \in C_i} (x - m_i)^2
– Separation is measured by the between-cluster sum of squares:
BSS = \sum_{i} |C_i| (m - m_i)^2
– where |C_i| is the size of cluster i, m_i is its centroid, and m is the overall mean of the data.
122
Internal Measures: Cohesion and
Separation
• Example: SSE
– BSS + WSS = constant
[Figure: points on a line with overall centroid m and cluster centroids m1, m2, illustrating cohesion (within clusters) and separation (between clusters).]
124
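A worked illustration of the BSS + WSS = constant identity, assuming (as the figure appears to show) the four one-dimensional points 1, 2, 4, 5 with overall mean m = 3; the exact points are an assumption, since only fragments of the figure survive.

K = 1 (one cluster, centroid 3):
WSS = (1-3)^2 + (2-3)^2 + (4-3)^2 + (5-3)^2 = 10, \quad BSS = 4 \cdot (3-3)^2 = 0, \quad Total = 10

K = 2 (clusters {1, 2} with m_1 = 1.5 and {4, 5} with m_2 = 4.5):
WSS = (1-1.5)^2 + (2-1.5)^2 + (4-4.5)^2 + (5-4.5)^2 = 1, \quad BSS = 2 \cdot (3-1.5)^2 + 2 \cdot (4.5-3)^2 = 9, \quad Total = 10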
Internal Measures: Silhouette Coefficient
• The silhouette coefficient combines ideas of both cohesion and separation, but for individual points, as well as for clusters and clusterings.
• For an individual point i:
– Calculate a = average distance of i to the points in its own cluster
– Calculate b = min (average distance of i to the points in another cluster), taken over the other clusters
– The silhouette coefficient for the point is then s = (b - a) / max(a, b); it is typically between 0 and 1, and the closer to 1 the better.
126
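A minimal scikit-learn sketch on made-up data: `silhouette_samples` gives the coefficient for each individual point, and `silhouette_score` averages it over the whole clustering.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score

X = np.random.default_rng(0).random((200, 2))
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

per_point = silhouette_samples(X, labels)   # s = (b - a) / max(a, b) for each point
overall = silhouette_score(X, labels)       # mean silhouette over all points
print(overall)
```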
Final Comment on Cluster Validity
“The validation of clustering structures is
the most difficult and frustrating part of
cluster analysis.
Without a strong effort in this direction,
cluster analysis will remain a black art
accessible only to those true believers who
have experience and great courage.”