Many of the clustering algorithms were motivated by a certain problem domain. Accordingly, the requirements of each algorithm vary, including data representation, cluster model, similarity measures, and running time. Each of these requirements has a significant effect on the usability of an algorithm. Moreover, this makes it difficult to compare algorithms designed for different problem domains. The following section addresses some of these requirements.
2. Properties of Clustering Algorithms
Before we can analyze and compare different algorithms, we have to define some of the properties of such algorithms, and find out which problem domains impose which kinds of properties. An analysis of different document clustering methods will be presented in section 3.
2.1. Data Model
Most clustering algorithms expect the data set to be clustered in the form of a set of vectors $X = \{x_1, x_2, \ldots, x_N\}$, where the vector $x_i$, $i = 1, \ldots, N$, corresponds to a single object in the data set and is called the feature vector. Extracting the proper features to represent an object through the feature vector is highly dependent on the problem domain. The dimensionality of the feature vector is a crucial factor in the running time of the algorithm and hence its scalability. However, some problem domains by default impose a high dimension. There exist methods to reduce the problem dimension, such as principal component analysis (PCA). Krishnapuram et al. were able to reduce a 500-dimensional feature vector to 10 dimensions using this method; however, its validity was not justified. We now turn our focus to document data representation and how to extract the proper features.
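The kind of reduction reported above can be sketched with a plain SVD-based PCA. This is a minimal illustration, not the authors' implementation; the data here is random and purely hypothetical, and the sizes (500 features reduced to 10) simply mirror the figures quoted in the text.

```python
import numpy as np

# Hypothetical data set: 100 objects, each a 500-dimensional feature vector.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 500))

def pca_reduce(X, k):
    """Project the rows of X onto their top-k principal components."""
    Xc = X - X.mean(axis=0)                       # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # coordinates in the k-dim subspace

X10 = pca_reduce(X, 10)
print(X10.shape)  # (100, 10)
```

The projection keeps the directions of greatest variance, which is why it can shrink the feature space drastically; whether the clusters survive the projection is exactly the validity question the text raises.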
Document Data Model
Most document clustering methods use the vector space model to represent document objects. Each document is represented by a vector $d = (tf_1, tf_2, \ldots, tf_n)$ in the term space, where $tf_i$, $i = 1, \ldots, n$, is the frequency of the term $t_i$ in the document. To represent every document with the same set of terms, we have to extract all the terms found in the documents and use them as our feature vector. Sometimes another method is used, which combines the term frequency with the inverse document frequency (TF-IDF). The document frequency $df_i$ is the number of documents in a collection of $N$ documents in which the term $t_i$ occurs. A typical inverse document frequency ($idf$) factor of this type is given by $\log(N/df_i)$. The weight of a term $t_i$ in a document is then given by $w_i = tf_i \cdot \log(N/df_i)$.
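The vector space model and the TF-IDF weighting just described can be sketched as follows. The three-document corpus is a toy example invented for illustration; the weighting itself is the standard $w_i = tf_i \cdot \log(N/df_i)$ from the text.

```python
import math
from collections import Counter

# Toy corpus (hypothetical).
docs = [
    "web document clustering methods",
    "clustering algorithms for web mining",
    "feature vector representation of a document",
]

tokenized = [d.split() for d in docs]
N = len(tokenized)

# Feature vector: every term found anywhere in the collection.
terms = sorted(set(t for doc in tokenized for t in doc))

# Document frequency df_i: number of documents containing term t_i.
df = {t: sum(1 for doc in tokenized if t in doc) for t in terms}

def tfidf_vector(doc):
    """TF-IDF weight for each term of the feature vector: tf_i * log(N / df_i)."""
    tf = Counter(doc)
    return [tf[t] * math.log(N / df[t]) for t in terms]

vectors = [tfidf_vector(doc) for doc in tokenized]
```

Note how a term occurring in every document would get weight $\log(N/N) = 0$, i.e. TF-IDF suppresses terms that do not discriminate between documents.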
. To keep the
Obviously the dimensionality of the feature vector is always very high, in the range of hundreds and sometimes thousands.