Data Mining:
Concepts and Techniques
— Chapter 6 —
Jiawei Han
Department of Computer Science
University of Illinois at Urbana-Champaign
www.cs.uiuc.edu/~hanj
©2006 Jiawei Han and Micheline Kamber, All rights reserved
Chapter 6. Classification and Prediction
What is classification? What is prediction?
Issues regarding classification and prediction
Classification by decision tree induction
Bayesian classification
Rule-based classification
Classification by backpropagation
Support Vector Machines (SVM)
Associative classification
Lazy learners (or learning from your neighbors)
Other classification methods
Prediction
Accuracy and error measures
Ensemble methods
Model selection
Summary
Classification vs. Prediction
Classification
predicts categorical class labels (discrete or nominal)
constructs a model based on the training set and the values (class labels) of a classifying attribute, and uses it to classify new data
Prediction
models continuous-valued functions, i.e., predicts unknown or missing values
Typical applications
Credit approval
Target marketing
Medical diagnosis
Fraud detection
Classification—A Two-Step Process
Model construction: describing a set of predetermined classes
Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute
The set of tuples used for model construction is the training set
The model is represented as classification rules, decision trees, or mathematical formulae
Model usage: classifying future or unknown objects
Estimate the accuracy of the model
The known label of each test sample is compared with the model's prediction
Accuracy rate is the percentage of test set samples that are correctly classified by the model
The test set is independent of the training set; otherwise overfitting will occur
If the accuracy is acceptable, use the model to classify data tuples whose class labels are not known
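As a concrete illustration of the two-step process, here is a minimal sketch using scikit-learn (an assumption; the slides name no library): a decision tree is constructed from a training set, its accuracy is estimated on an independent test set, and only then is it applied to unseen tuples.

```python
# Minimal sketch of the two-step process (assumes scikit-learn is available).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Step 1: model construction on the training set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier().fit(X_train, y_train)

# Step 2: estimate accuracy on an independent test set ...
acc = accuracy_score(y_test, model.predict(X_test))
print(f"test accuracy: {acc:.2f}")

# ... and, if acceptable, classify unseen tuples.
if acc >= 0.9:
    print(model.predict(X_test[:1]))
```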
Process (1): Model Construction
Training Data:
NAME  RANK            YEARS  TENURED
Mike  Assistant Prof  3      no
Mary  Assistant Prof  7      yes
Bill  Professor       2      yes
Jim   Associate Prof  7      yes
Dave  Assistant Prof  6      no
Anne  Associate Prof  3      no
A classification algorithm produces the classifier (model), e.g.:
IF rank = 'professor' OR years > 6 THEN tenured = 'yes'
Process (2): Using the Model in Prediction
Testing Data:
NAME     RANK            YEARS  TENURED
Tom      Assistant Prof  2      no
Merlisa  Associate Prof  7      no
George   Professor       5      yes
Joseph   Assistant Prof  7      yes
The classifier is evaluated on the testing data, then applied to unseen data, e.g., (Jeff, Professor, 4): Tenured?
Supervised vs. Unsupervised Learning
Supervised learning (classification)
Supervision: The training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations
New data is classified based on the training set
Unsupervised learning (clustering)
The class labels of the training data are unknown
Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data
Issues: Data Preparation
Data cleaning
Preprocess data in order to reduce noise and handle
missing values
Relevance analysis (feature selection)
Remove the irrelevant or redundant attributes
Data transformation
Generalize and/or normalize data
Issues: Evaluating Classification Methods
Accuracy
classifier accuracy: predicting class labels
predictor accuracy: estimating the value of the predicted attribute
Speed
time to construct the model (training time)
time to use the model (classification/prediction time)
Robustness: handling noise and missing values
Scalability: efficiency for disk-resident databases
Interpretability
understanding and insight provided by the model
Other measures, e.g., goodness of rules, such as decision tree size or compactness of classification rules
Decision Tree Induction: Training Dataset
age income student credit_rating buys_computer
<=30 high no fair no
<=30 high no excellent no
31…40 high no fair yes
>40 medium no fair yes
>40 low yes fair yes
>40 low yes excellent no
31…40 low yes excellent yes
<=30 medium no fair no
<=30 low yes fair yes
>40 medium yes fair yes
<=30 medium yes excellent yes
31…40 medium no excellent yes
31…40 high yes fair yes
>40 medium no excellent no
This follows an example from Quinlan's ID3 (Playing Tennis)
Output: A Decision Tree for "buys_computer"
age?
  <=30:   student?
            no:  no
            yes: yes
  31..40: yes
  >40:    credit rating?
            excellent: no
            fair:      yes
Algorithm for Decision Tree Induction
Basic algorithm (a greedy algorithm)
Tree is constructed in a top-down recursive divide-and-conquer manner
At start, all the training examples are at the root
Attributes are categorical (if continuous-valued, they are discretized in advance)
Examples are partitioned recursively based on selected attributes
Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)
Conditions for stopping partitioning
All samples for a given node belong to the same class
There are no remaining attributes for further partitioning (majority voting is employed for classifying the leaf)
There are no samples left
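The following is a minimal, self-contained sketch of this greedy procedure in Python (my own illustration, not the book's code): tuples are dicts of categorical attributes, the splitting attribute is chosen by information gain, and the stopping conditions above are checked in order.

```python
import math
from collections import Counter

def entropy(labels):
    """Expected information Info(D) = -sum p_i log2 p_i."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Gain(attr) = Info(D) - sum_j |D_j|/|D| * Info(D_j)."""
    n = len(labels)
    split = 0.0
    for value in set(r[attr] for r in rows):
        subset = [l for r, l in zip(rows, labels) if r[attr] == value]
        split += len(subset) / n * entropy(subset)
    return entropy(labels) - split

def build_tree(rows, labels, attrs):
    # Stop: all samples belong to the same class.
    if len(set(labels)) == 1:
        return labels[0]
    # Stop: no remaining attributes -> majority vote.
    if not attrs:
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: info_gain(rows, labels, a))
    node = {}
    for value in set(r[best] for r in rows):
        part = [(r, l) for r, l in zip(rows, labels) if r[best] == value]
        node[(best, value)] = build_tree([r for r, _ in part],
                                         [l for _, l in part],
                                         [a for a in attrs if a != best])
    return node

rows = [{"outlook": "sunny"}, {"outlook": "rain"}, {"outlook": "sunny"}]
print(build_tree(rows, ["no", "yes", "no"], ["outlook"]))
# maps ('outlook', value) branches to leaf labels
```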
Attribute Selection Measure: Information Gain (ID3/C4.5)
Select the attribute with the highest information gain
Let p_i be the probability that an arbitrary tuple in D belongs to class C_i, estimated by |C_i,D| / |D|
Expected information (entropy) needed to classify a tuple in D:
    Info(D) = - Σ_{i=1}^{m} p_i log2(p_i)
Information needed (after using A to split D into v partitions) to classify D:
    Info_A(D) = Σ_{j=1}^{v} (|D_j| / |D|) × Info(D_j)
Information gained by branching on attribute A:
    Gain(A) = Info(D) - Info_A(D)
Attribute Selection: Information Gain
Class P: buys_computer = "yes"
Class N: buys_computer = "no"
    Info(D) = I(9,5) = -(9/14) log2(9/14) - (5/14) log2(5/14) = 0.940
age      p_i  n_i  I(p_i, n_i)
<=30     2    3    0.971
31...40  4    0    0
>40      3    2    0.971
The term (5/14) I(2,3) means "age <=30" has 5 out of 14 samples, with 2 yes'es and 3 no's. Hence
    Info_age(D) = (5/14) I(2,3) + (4/14) I(4,0) + (5/14) I(3,2) = 0.694
    Gain(age) = Info(D) - Info_age(D) = 0.246
Similarly,
    Gain(income) = 0.029
    Gain(student) = 0.151
    Gain(credit_rating) = 0.048
(The training tuples are the same 14-tuple "buys_computer" table shown earlier.)
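These numbers can be checked with a short, self-contained Python script (my own illustration) over the 14-tuple table:

```python
import math
from collections import Counter

table = [  # (age, income, student, credit_rating, buys_computer)
    ("<=30", "high", "no", "fair", "no"), ("<=30", "high", "no", "excellent", "no"),
    ("31...40", "high", "no", "fair", "yes"), (">40", "medium", "no", "fair", "yes"),
    (">40", "low", "yes", "fair", "yes"), (">40", "low", "yes", "excellent", "no"),
    ("31...40", "low", "yes", "excellent", "yes"), ("<=30", "medium", "no", "fair", "no"),
    ("<=30", "low", "yes", "fair", "yes"), (">40", "medium", "yes", "fair", "yes"),
    ("<=30", "medium", "yes", "excellent", "yes"), ("31...40", "medium", "no", "excellent", "yes"),
    ("31...40", "high", "yes", "fair", "yes"), (">40", "medium", "no", "excellent", "no"),
]
labels = [row[-1] for row in table]

def entropy(ls):
    n = len(ls)
    return -sum(c / n * math.log2(c / n) for c in Counter(ls).values())

def gain(col):
    n = len(table)
    rem = sum(len(sub) / n * entropy(sub)
              for v in set(r[col] for r in table)
              for sub in [[l for r, l in zip(table, labels) if r[col] == v]])
    return entropy(labels) - rem

print(round(entropy(labels), 3))  # Info(D) = 0.940
print(round(gain(0), 3))          # Gain(age) = 0.246
print(round(gain(1), 3))          # Gain(income) = 0.029
```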
Computing Information Gain for Continuous-Valued Attributes
Let attribute A be a continuous-valued attribute
Must determine the best split point for A
Sort the values of A in increasing order
Typically, the midpoint between each pair of adjacent values is considered as a possible split point
(a_i + a_{i+1})/2 is the midpoint between the values of a_i and a_{i+1}
The point with the minimum expected information requirement for A is selected as the split-point for A
Split:
D1 is the set of tuples in D satisfying A ≤ split-point, and D2 is the set of tuples in D satisfying A > split-point
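A minimal sketch (my own illustration) of choosing a binary split point for a numeric attribute by scanning candidate midpoints:

```python
import math
from collections import Counter

def entropy(ls):
    n = len(ls)
    return -sum(c / n * math.log2(c / n) for c in Counter(ls).values())

def best_split_point(values, labels):
    """Scan midpoints of adjacent sorted values; pick the one with
    minimum expected information (i.e., maximum gain)."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = (float("inf"), None)
    for i in range(n - 1):
        if pairs[i][0] == pairs[i + 1][0]:
            continue  # no midpoint between equal values
        mid = (pairs[i][0] + pairs[i + 1][0]) / 2
        left = [l for v, l in pairs if v <= mid]
        right = [l for v, l in pairs if v > mid]
        info = len(left) / n * entropy(left) + len(right) / n * entropy(right)
        best = min(best, (info, mid))
    return best[1]

print(best_split_point([25, 32, 45, 51, 28], ["no", "yes", "yes", "no", "no"]))  # 30.0
```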
Gain Ratio for Attribute Selection (C4.5)
The information gain measure is biased towards attributes with a large number of values
C4.5 (a successor of ID3) uses gain ratio to overcome the problem (a normalization of information gain):
    SplitInfo_A(D) = - Σ_{j=1}^{v} (|D_j| / |D|) × log2(|D_j| / |D|)
    GainRatio(A) = Gain(A) / SplitInfo_A(D)
Ex. For income (4 "high", 6 "medium", and 4 "low" tuples):
    SplitInfo_income(D) = -(4/14) log2(4/14) - (6/14) log2(6/14) - (4/14) log2(4/14) = 1.557
    gain_ratio(income) = 0.029 / 1.557 = 0.019
The attribute with the maximum gain ratio is selected as the splitting attribute
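A quick numeric check (my own sketch):

```python
import math

def split_info(sizes):
    """SplitInfo over partition sizes |D_1|..|D_v|."""
    n = sum(sizes)
    return -sum(s / n * math.log2(s / n) for s in sizes)

si = split_info([4, 6, 4])                  # income: high/medium/low
print(round(si, 3), round(0.029 / si, 3))   # 1.557 0.019
```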
Gini Index (CART, IBM IntelligentMiner)
If a data set D contains examples from n classes, the gini index gini(D) is defined as
    gini(D) = 1 - Σ_{j=1}^{n} p_j²
where p_j is the relative frequency of class j in D
If a data set D is split on A into two subsets D1 and D2, the gini index gini_A(D) is defined as
    gini_A(D) = (|D1|/|D|) gini(D1) + (|D2|/|D|) gini(D2)
Reduction in impurity:
    Δgini(A) = gini(D) - gini_A(D)
The attribute that provides the smallest gini_split(D) (or the largest reduction in impurity) is chosen to split the node (need to enumerate all the possible splitting points for each attribute)
Gini Index (CART, IBM IntelligentMiner): An Example
Ex. D has 9 tuples in buys_computer = "yes" and 5 in "no":
    gini(D) = 1 - (9/14)² - (5/14)² = 0.459
Suppose the attribute income partitions D into 10 tuples in D1: {low, medium} and 4 in D2: {high}:
    gini_{income ∈ {low,medium}}(D) = (10/14) gini(D1) + (4/14) gini(D2) = 0.443
The splits {low,high} and {medium,high} give 0.458 and 0.450, so {low,medium} is the lowest and thus the best
All attributes are assumed continuous-valued
May need other tools, e.g., clustering, to get the possible split values
Can be modified for categorical attributes
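These values can be reproduced with a few lines of Python (my own sketch):

```python
def gini(counts):
    """gini(D) = 1 - sum_j p_j^2, given per-class counts."""
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts)

print(round(gini([9, 5]), 3))  # gini(D) = 0.459
# income in {low, medium}: D1 has 7 yes / 3 no; D2 (= {high}) has 2 yes / 2 no
g = 10 / 14 * gini([7, 3]) + 4 / 14 * gini([2, 2])
print(round(g, 3))             # 0.443
```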
Comparing Attribute Selection Measures
The three measures, in general, return good results but
Information gain:
biased towards multivalued attributes
Gain ratio:
tends to prefer unbalanced splits in which one partition is much smaller than the others
Gini index:
biased towards multivalued attributes
has difficulty when the number of classes is large
tends to favor tests that result in equal-sized partitions and purity in both partitions
Other Attribute Selection Measures
CHAID: a popular decision tree algorithm; measure based on the χ² test for independence
C-SEP: performs better than information gain and gini index in certain cases
G-statistic: has a close approximation to the χ² distribution
MDL (Minimal Description Length) principle (i.e., the simplest solution is preferred):
The best tree is the one that requires the fewest bits to both (1) encode the tree, and (2) encode the exceptions to the tree
Multivariate splits (partition based on multiple variable combinations)
CART: finds multivariate splits based on a linear combination of attributes
Which attribute selection measure is the best?
Most give good results; none is significantly superior to the others
Overfitting and Tree Pruning
Overfitting: An induced tree may overfit the training data
Too many branches, some may reflect anomalies due to noise or outliers
Poor accuracy for unseen samples
Two approaches to avoid overfitting
Prepruning: Halt tree construction early; do not split a node if this would result in the goodness measure falling below a threshold
Difficult to choose an appropriate threshold
Postpruning: Remove branches from a "fully grown" tree to get a sequence of progressively pruned trees
Use a set of data different from the training data to decide which is the "best pruned tree"
Enhancements to Basic Decision Tree Induction
Allow for continuous-valued attributes
Dynamically define new discrete-valued attributes that partition the continuous attribute value into a discrete set of intervals
Handle missing attribute values
Assign the most common value of the attribute
Assign probability to each of the possible values
Attribute construction
Create new attributes based on existing ones that are sparsely represented
This reduces fragmentation, repetition, and replication
Classification in Large Databases
Classification: a classical problem extensively studied by statisticians and machine learning researchers
Scalability: Classifying data sets with millions of examples and hundreds of attributes with reasonable speed
Why decision tree induction in data mining?
relatively fast learning speed (compared with other classification methods)
convertible to simple and easy-to-understand classification rules
can use SQL queries for accessing databases
classification accuracy comparable with other methods
Scalable Decision Tree Induction Methods
SLIQ (EDBT'96, Mehta et al.)
Builds an index for each attribute; only the class list and the current attribute list reside in memory
SPRINT (VLDB'96, J. Shafer et al.)
Constructs an attribute-list data structure
PUBLIC (VLDB'98, Rastogi & Shim)
Integrates tree splitting and tree pruning: stop growing the tree earlier
RainForest (VLDB'98, Gehrke, Ramakrishnan & Ganti)
Builds an AVC-list (attribute, value, class label)
BOAT (PODS'99, Gehrke, Ganti, Ramakrishnan & Loh)
Uses bootstrapping to create several small samples
Scalability Framework for RainForest
Separates the scalability aspects from the criteria that determine the quality of the tree
Builds an AVC-list: AVC (Attribute, Value, Class_label)
AVC-set (of an attribute X)
Projection of the training dataset onto the attribute X and class label, where counts of the individual class labels are aggregated
AVC-group (of a node n)
Set of AVC-sets of all predictor attributes at the node n
RainForest: Training Set and Its AVC Sets
Training examples: the same 14-tuple "buys_computer" table shown earlier (age, income, student, credit_rating, buys_computer)
AVC-set on Age:
Age     Buy_Computer=yes  Buy_Computer=no
<=30    3                 2
31..40  4                 0
>40     3                 2
AVC-set on income:
income  Buy_Computer=yes  Buy_Computer=no
high    2                 2
medium  4                 2
low     3                 1
AVC-set on Student:
student  Buy_Computer=yes  Buy_Computer=no
yes      6                 1
no       3                 4
AVC-set on credit_rating:
Credit rating  Buy_Computer=yes  Buy_Computer=no
fair           6                 2
excellent      3                 3
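Building an AVC-set is just an aggregation; a minimal sketch (my own illustration):

```python
from collections import Counter

def avc_set(rows, labels, attr):
    """Counts of (attribute value, class label) pairs: the AVC-set of attr."""
    return Counter((r[attr], l) for r, l in zip(rows, labels))

rows = [{"student": "yes"}, {"student": "no"}, {"student": "yes"}]
labels = ["yes", "no", "no"]
print(avc_set(rows, labels, "student"))
# Counter({('yes', 'yes'): 1, ('no', 'no'): 1, ('yes', 'no'): 1})
```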
Data Cube-Based Decision-Tree Induction
Integration of generalization with decision-tree induction (Kamber et al. '97)
Classification at primitive concept levels
E.g., precise temperature, humidity, outlook, etc.
Low-level concepts, scattered classes, bushy classification trees
Semantic interpretation problems
Cube-based multi-level classification
Relevance analysis at multiple levels
Information-gain analysis with dimension + level
BOAT (Bootstrapped Optimistic Algorithm for Tree Construction)
Uses a statistical technique called bootstrapping to create several smaller samples (subsets), each of which fits in memory
Each subset is used to create a tree, resulting in several trees
These trees are examined and used to construct a new tree T'
It turns out that T' is very close to the tree that would be generated using the whole data set together
Adv: requires only two scans of the DB; an incremental algorithm
Presentation of Classification Results
[Figure: screenshot]
Visualization of a Decision Tree in SGI/MineSet 3.0
[Figure: screenshot]
Interactive Visual Mining by Perception-Based Classification (PBC)
[Figure: screenshot]
Bayesian Classification: Why?
A statistical classifier: performs probabilistic prediction, i.e., predicts class membership probabilities
Foundation: based on Bayes' Theorem
Performance: a simple Bayesian classifier, the naïve Bayesian classifier, has performance comparable with decision tree and selected neural network classifiers
Incremental: each training example can incrementally increase/decrease the probability that a hypothesis is correct; prior knowledge can be combined with observed data
Standard: even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured
Bayesian Theorem: Basics
Let X be a data sample ("evidence"): class label is unknown
Let H be a hypothesis that X belongs to class C
Classification is to determine P(H|X), the probability that the hypothesis holds given the observed data sample X
P(H) (prior probability): the initial probability
E.g., X will buy a computer, regardless of age, income, ...
P(X): probability that the sample data is observed
P(X|H) (likelihood): the probability of observing the sample X, given that the hypothesis holds
E.g., given that X will buy a computer, the probability that X is 31..40 with medium income
Bayesian Theorem
Given training data X, the posteriori probability of a hypothesis H, P(H|X), follows the Bayes theorem:
    P(H|X) = P(X|H) P(H) / P(X)
Informally, this can be written as
    posteriori = likelihood × prior / evidence
Predicts that X belongs to C_i iff the probability P(C_i|X) is the highest among all the P(C_k|X) for all the k classes
Practical difficulty: requires initial knowledge of many probabilities, significant computational cost
Towards Naïve Bayesian Classifier
Let D be a training set of tuples and their associated class labels, where each tuple is represented by an n-D attribute vector X = (x_1, x_2, ..., x_n)
Suppose there are m classes C_1, C_2, ..., C_m
Classification is to derive the maximum posteriori, i.e., the maximal P(C_i|X)
This can be derived from Bayes' theorem:
    P(C_i|X) = P(X|C_i) P(C_i) / P(X)
Since P(X) is constant for all classes, only
    P(X|C_i) P(C_i)
needs to be maximized
Derivation of Naïve Bayes Classifier
A simplified assumption: attributes are conditionally independent (i.e., no dependence relation between attributes):
    P(X|C_i) = Π_{k=1}^{n} P(x_k|C_i) = P(x_1|C_i) × P(x_2|C_i) × ... × P(x_n|C_i)
This greatly reduces the computation cost: only counts the class distribution
If A_k is categorical, P(x_k|C_i) is the number of tuples in C_i having value x_k for A_k, divided by |C_i,D| (the number of tuples of C_i in D)
If A_k is continuous-valued, P(x_k|C_i) is usually computed based on a Gaussian distribution with mean μ and standard deviation σ:
    g(x, μ, σ) = (1 / (√(2π) σ)) e^{-(x - μ)² / (2σ²)}
and P(x_k|C_i) is
    P(x_k|C_i) = g(x_k, μ_{C_i}, σ_{C_i})
Naïve Bayesian Classifier: Training Dataset
Classes:
C1: buys_computer = 'yes'
C2: buys_computer = 'no'
Data sample to classify:
X = (age <= 30, income = medium, student = yes, credit_rating = fair)
Training tuples: the same 14-tuple "buys_computer" table shown earlier
Naïve Bayesian Classifier: An Example
P(C_i): P(buys_computer = "yes") = 9/14 = 0.643
        P(buys_computer = "no") = 5/14 = 0.357
Compute P(X|C_i) for each class:
P(age = "<=30" | buys_computer = "yes") = 2/9 = 0.222
P(age = "<=30" | buys_computer = "no") = 3/5 = 0.6
P(income = "medium" | buys_computer = "yes") = 4/9 = 0.444
P(income = "medium" | buys_computer = "no") = 2/5 = 0.4
P(student = "yes" | buys_computer = "yes") = 6/9 = 0.667
P(student = "yes" | buys_computer = "no") = 1/5 = 0.2
P(credit_rating = "fair" | buys_computer = "yes") = 6/9 = 0.667
P(credit_rating = "fair" | buys_computer = "no") = 2/5 = 0.4
X = (age <= 30, income = medium, student = yes, credit_rating = fair)
P(X|C_i): P(X | buys_computer = "yes") = 0.222 × 0.444 × 0.667 × 0.667 = 0.044
          P(X | buys_computer = "no") = 0.6 × 0.4 × 0.2 × 0.4 = 0.019
P(X|C_i) × P(C_i): P(X | "yes") × P("yes") = 0.028
                   P(X | "no") × P("no") = 0.007
Therefore, X belongs to class ("buys_computer = yes")
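The whole example can be reproduced mechanically; a minimal count-based sketch (my own illustration, no smoothing):

```python
from collections import Counter, defaultdict

data = [  # (age, income, student, credit_rating) -> buys_computer
    (("<=30", "high", "no", "fair"), "no"), (("<=30", "high", "no", "excellent"), "no"),
    (("31...40", "high", "no", "fair"), "yes"), ((">40", "medium", "no", "fair"), "yes"),
    ((">40", "low", "yes", "fair"), "yes"), ((">40", "low", "yes", "excellent"), "no"),
    (("31...40", "low", "yes", "excellent"), "yes"), (("<=30", "medium", "no", "fair"), "no"),
    (("<=30", "low", "yes", "fair"), "yes"), ((">40", "medium", "yes", "fair"), "yes"),
    (("<=30", "medium", "yes", "excellent"), "yes"), (("31...40", "medium", "no", "excellent"), "yes"),
    (("31...40", "high", "yes", "fair"), "yes"), ((">40", "medium", "no", "excellent"), "no"),
]
prior = Counter(label for _, label in data)
cond = defaultdict(Counter)  # cond[label][(attr_index, value)] = count
for x, label in data:
    for k, v in enumerate(x):
        cond[label][(k, v)] += 1

def score(x, label):
    """P(X|C_i) * P(C_i) under the conditional independence assumption."""
    p = prior[label] / len(data)
    for k, v in enumerate(x):
        p *= cond[label][(k, v)] / prior[label]
    return p

x = ("<=30", "medium", "yes", "fair")
print({c: round(score(x, c), 3) for c in prior})  # {'no': 0.007, 'yes': 0.028}
```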
Avoiding the 0-Probability Problem
Naïve Bayesian prediction requires each conditional probability to be non-zero; otherwise, the predicted probability will be zero:
    P(X|C_i) = Π_{k=1}^{n} P(x_k|C_i)
Ex. Suppose a dataset with 1000 tuples: income = low (0), income = medium (990), and income = high (10)
Use the Laplacian correction (or Laplacian estimator)
Adding 1 to each case:
Prob(income = low) = 1/1003
Prob(income = medium) = 991/1003
Prob(income = high) = 11/1003
The "corrected" probability estimates are close to their "uncorrected" counterparts
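A one-line version of the correction (my own sketch): add 1 to every count and the number of distinct values to the denominator.

```python
counts = {"low": 0, "medium": 990, "high": 10}
n = sum(counts.values())
# Laplacian correction: (count + 1) / (n + number of distinct values)
laplace = {v: (c + 1) / (n + len(counts)) for v, c in counts.items()}
print(laplace)  # {'low': 1/1003, 'medium': 991/1003, 'high': 11/1003}
```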
Naïve Bayesian Classifier: Comments
Advantages
Easy to implement
Good results obtained in most of the cases
Disadvantages
Assumption: class conditional independence, therefore loss of accuracy
Practically, dependencies exist among variables
E.g., hospital patients: profile (age, family history, etc.), symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.)
Dependencies among these cannot be modeled by a naïve Bayesian classifier
How to deal with these dependencies? Bayesian belief networks
Bayesian Belief Networks
A Bayesian belief network allows a subset of the variables to be conditionally independent
A graphical model of causal relationships
Represents dependency among the variables
Gives a specification of the joint probability distribution
Example graph with nodes X, Y, Z, P:
Nodes: random variables
Links: dependency
X and Y are the parents of Z, and Y is the parent of P
No dependency between Z and P
Has no loops or cycles
Bayesian Belief Network: An Example
Variables: FamilyHistory, Smoker, LungCancer, Emphysema, PositiveXRay, Dyspnea; FamilyHistory (FH) and Smoker (S) are the parents of LungCancer (LC)
The conditional probability table (CPT) for variable LungCancer:
      (FH, S)  (FH, ~S)  (~FH, S)  (~FH, ~S)
LC    0.8      0.5       0.7       0.1
~LC   0.2      0.5       0.3       0.9
The CPT shows the conditional probability for each possible combination of its parents
Derivation of the probability of a particular combination of values of X, from the CPT:
    P(x_1, ..., x_n) = Π_{i=1}^{n} P(x_i | Parents(Y_i))
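A minimal sketch (my own illustration) of reading the LungCancer CPT and applying the factored joint probability for just the three variables shown; the parent priors are hypothetical, since the slide does not give them:

```python
# CPT for LungCancer given its parents (FamilyHistory, Smoker), from the slide.
p_lc = {(True, True): 0.8, (True, False): 0.5,
        (False, True): 0.7, (False, False): 0.1}
# Hypothetical priors for the parent nodes (not given on the slide).
p_fh, p_s = 0.1, 0.3

def joint(fh, s, lc):
    """P(fh, s, lc) = P(fh) P(s) P(lc | fh, s) -- the factored form."""
    p = (p_fh if fh else 1 - p_fh) * (p_s if s else 1 - p_s)
    p_cond = p_lc[(fh, s)]
    return p * (p_cond if lc else 1 - p_cond)

print(round(joint(True, True, True), 4))  # 0.1 * 0.3 * 0.8 = 0.024
```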
Training Bayesian Networks
Several scenarios:
Given both the network structure and all variables observable: learn only the CPTs
Network structure known, some hidden variables: gradient descent (greedy hill-climbing) method, analogous to neural network learning
Network structure unknown, all variables observable: search through the model space to reconstruct the network topology
Unknown structure, all hidden variables: no good algorithms known for this purpose
Ref.: D. Heckerman, Bayesian networks for data mining
Using IF-THEN Rules for Classification
Represent the knowledge in the form of IF-THEN rules
R: IF age = youth AND student = yes THEN buys_computer = yes
Rule antecedent/precondition vs. rule consequent
Assessment of a rule: coverage and accuracy
n_covers = number of tuples covered by R
n_correct = number of tuples correctly classified by R
coverage(R) = n_covers / |D|   (D: training data set)
accuracy(R) = n_correct / n_covers
If more than one rule is triggered, we need conflict resolution
Size ordering: assign the highest priority to the triggering rule that has the "toughest" requirement (i.e., with the most attribute tests)
Class-based ordering: decreasing order of prevalence or misclassification cost per class
Rule-based ordering (decision list): rules are organized into one long priority list, according to some measure of rule quality or by experts
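Coverage and accuracy are simple ratios; a sketch (my own illustration) with rules as predicate functions:

```python
def coverage_and_accuracy(rule, consequent, data):
    """rule: tuple -> bool; consequent: the class the rule predicts."""
    covered = [(x, y) for x, y in data if rule(x)]
    correct = sum(1 for x, y in covered if y == consequent)
    return len(covered) / len(data), correct / len(covered)

data = [({"age": "youth", "student": "yes"}, "yes"),
        ({"age": "youth", "student": "no"}, "no"),
        ({"age": "senior", "student": "yes"}, "yes")]
r = lambda x: x["age"] == "youth" and x["student"] == "yes"
print(coverage_and_accuracy(r, "yes", data))  # (0.333..., 1.0)
```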
Rule Extraction from a Decision Tree
Rules are easier to understand than large trees
One rule is created for each path from the root to a leaf
Each attribute-value pair along a path forms a conjunction; the leaf holds the class prediction
Rules are mutually exclusive and exhaustive
Example: rule extraction from our buys_computer decision tree (shown earlier):
IF age = "<=30" AND student = no THEN buys_computer = no
IF age = "<=30" AND student = yes THEN buys_computer = yes
IF age = "31..40" THEN buys_computer = yes
IF age = ">40" AND credit_rating = excellent THEN buys_computer = no
IF age = ">40" AND credit_rating = fair THEN buys_computer = yes
Rule Extraction from the Training Data
Sequential covering algorithm: extracts rules directly from the training data
Typical sequential covering algorithms: FOIL, AQ, CN2, RIPPER
Rules are learned sequentially; each rule for a given class C_i will cover many tuples of C_i but none (or few) of the tuples of other classes
Steps:
Rules are learned one at a time
Each time a rule is learned, the tuples covered by the rule are removed
The process repeats on the remaining tuples until a termination condition holds, e.g., no more training examples, or the quality of a rule returned is below a user-specified threshold
Compare with decision-tree induction: learning a set of rules simultaneously
How to Learn-One-Rule?
Start with the most general rule possible: condition = empty
Add new attribute tests by adopting a greedy depth-first strategy
Pick the one that most improves the rule quality
Rule-quality measures: consider both coverage and accuracy
Foil-gain (in FOIL and RIPPER): assesses info_gain by extending the condition:
    FOIL_Gain = pos' × (log2(pos' / (pos' + neg')) - log2(pos / (pos + neg)))
It favors rules that have high accuracy and cover many positive tuples
Rule pruning is based on an independent set of test tuples:
    FOIL_Prune(R) = (pos - neg) / (pos + neg)
where pos/neg are the numbers of positive/negative tuples covered by R
If FOIL_Prune is higher for the pruned version of R, prune R
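The two measures translate directly into code; a minimal sketch (my own illustration):

```python
import math

def foil_gain(pos, neg, pos2, neg2):
    """Gain from extending a rule: (pos, neg) covered before, (pos2, neg2) after."""
    return pos2 * (math.log2(pos2 / (pos2 + neg2)) - math.log2(pos / (pos + neg)))

def foil_prune(pos, neg):
    return (pos - neg) / (pos + neg)

print(round(foil_gain(10, 10, 6, 1), 3))  # 4.666: extending improved purity
print(round(foil_prune(6, 1), 3))         # 0.714
```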
Classification: A Mathematical Mapping
Classification: predicts categorical class labels
E.g., personal homepage classification
x_i = (x_1, x_2, x_3, ...), y_i = +1 or -1
x_1: # of occurrences of the word "homepage"
x_2: # of occurrences of the word "welcome"
Mathematically:
x ∈ X = ℝ^n, y ∈ Y = {+1, -1}
We want a function f: X → Y
Linear Classification
Binary classification problem
[Figure: two point clouds, class 'x' above and class 'o' below a separating red line]
The data above the red line belongs to class 'x'
The data below the red line belongs to class 'o'
Examples: SVM, Perceptron, Probabilistic Classifiers
Discriminative Classifiers
Advantages
prediction accuracy is generally high (as compared to Bayesian methods, in general)
robust: works when training examples contain errors
fast evaluation of the learned target function (Bayesian networks are normally slow)
Criticism
long training time
difficult to understand the learned function (weights) (Bayesian networks can be used easily for pattern discovery)
not easy to incorporate domain knowledge (easy in the form of priors on the data or distributions)
Perceptron & Winnow
Vectors: x, w; scalars: x, y, w
Input: {(x_1, y_1), ...}
Output: classification function f(x)
f(x_i) > 0 for y_i = +1
f(x_i) < 0 for y_i = -1
The decision boundary is f(x): w · x + b = 0, i.e., w_1 x_1 + w_2 x_2 + b = 0
Perceptron: update w additively
Winnow: update w multiplicatively
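A minimal additive perceptron update loop (my own sketch; the learning rate and epoch count are arbitrary choices):

```python
def perceptron(samples, epochs=10, lr=1.0):
    """samples: list of (x, y) with x a tuple of floats and y in {+1, -1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                # Misclassified: additive update toward the correct side.
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

data = [((2.0, 1.0), +1), ((0.0, 1.0), -1), ((3.0, 2.0), +1), ((1.0, 0.5), -1)]
print(perceptron(data))
```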
Classification by Backpropagation
Backpropagation: A neural network learning algorithm
Started by psychologists and neurobiologists to develop
and test computational analogues of neurons
A neural network: A set of connected input/output units
where each connection has a weight associated with it
During the learning phase, the network learns by
adjusting the weights so as to be able to predict the
correct class label of the input tuples
Also referred to as connectionist learning due to the
connections between units
Neural Network as a Classifier
Weakness
Long training time
Requires a number of parameters typically best determined empirically, e.g., the network topology or "structure"
Poor interpretability: difficult to interpret the symbolic meaning behind the learned weights and of "hidden units" in the network
Strength
High tolerance to noisy data
Ability to classify untrained patterns
Well-suited for continuous-valued inputs and outputs
Successful on a wide array of real-world data
Algorithms are inherently parallel
Techniques have recently been developed for the extraction of rules from trained neural networks
A Neuron (= a perceptron)
The n-dimensional input vector x is mapped into variable y by means of the scalar product and a nonlinear function mapping:
    y = sign( Σ_{i=0}^{n} w_i x_i + μ_k )
where x is the input vector, w the weight vector, μ_k a bias, and sign the activation function applied to the weighted sum
A Multi-Layer Feed-Forward Neural Network
Layers: input layer, hidden layer, output layer; the input vector X is propagated forward through weights w_ij to produce the output vector
Net input and output of unit j:
    I_j = Σ_i w_ij O_i + θ_j
    O_j = 1 / (1 + e^{-I_j})
Error of output unit j (with true value T_j):
    Err_j = O_j (1 - O_j)(T_j - O_j)
Error of hidden unit j (backpropagated from the next layer):
    Err_j = O_j (1 - O_j) Σ_k Err_k w_jk
Weight and bias updates (with learning rate l):
    w_ij = w_ij + l × Err_j × O_i
    θ_j = θ_j + l × Err_j
How Does a Multi-Layer Neural Network Work?
The inputs to the network correspond to the attributes measured for each training tuple
Inputs are fed simultaneously into the units making up the input layer
They are then weighted and fed simultaneously to a hidden layer
The number of hidden layers is arbitrary, although usually only one
The weighted outputs of the last hidden layer are input to units making up the output layer, which emits the network's prediction
The network is feed-forward in that none of the weights cycles back to an input unit or to an output unit of a previous layer
From a statistical point of view, networks perform nonlinear regression: given enough hidden units and enough training samples, they can closely approximate any function
Defining a Network Topology
First decide the network topology: number of units in the input layer, number of hidden layers (if > 1), number of units in each hidden layer, and number of units in the output layer
Normalize the input values for each attribute measured in the training tuples to [0.0, 1.0]
One input unit per domain value, each initialized to 0
Output: for classification with more than two classes, one output unit per class is used
Once a network has been trained, if its accuracy is unacceptable, repeat the training process with a different network topology or a different set of initial weights
Backpropagation
Iteratively process a set of training tuples and compare the network's prediction with the actual known target value
For each training tuple, the weights are modified to minimize the mean squared error between the network's prediction and the actual target value
Modifications are made in the "backwards" direction: from the output layer, through each hidden layer, down to the first hidden layer, hence "backpropagation"
Steps:
Initialize weights (to small random numbers) and biases in the network
Propagate the inputs forward (by applying the activation function)
Backpropagate the error (by updating weights and biases)
Terminating condition (when error is very small, etc.)
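The update equations two slides back translate into a short training loop; a minimal sketch (my own illustration) for one hidden layer, using the sigmoid output O_j = 1/(1+e^{-I_j}) and batch updates:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy data: XOR, 2 inputs -> 1 output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 0.5, (2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(0, 0.5, (4, 1)), np.zeros(1)   # hidden -> output
l = 0.5                                            # learning rate

for _ in range(20000):
    # Propagate the inputs forward.
    O1 = sigmoid(X @ W1 + b1)
    O2 = sigmoid(O1 @ W2 + b2)
    # Backpropagate the error: Err_j = O_j(1-O_j)(T_j-O_j) at the output,
    # Err_j = O_j(1-O_j) * sum_k Err_k w_jk at the hidden layer.
    err2 = O2 * (1 - O2) * (T - O2)
    err1 = O1 * (1 - O1) * (err2 @ W2.T)
    # w_ij += l * Err_j * O_i ; theta_j += l * Err_j
    W2 += l * O1.T @ err2; b2 += l * err2.sum(axis=0)
    W1 += l * X.T @ err1;  b1 += l * err1.sum(axis=0)

print(np.round(O2.ravel(), 2))  # should approach [0, 1, 1, 0]
```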
Backpropagation and Interpretability
Efficiency of backpropagation: each epoch (one iteration through the training set) takes O(|D| × w) time, with |D| tuples and w weights, but the number of epochs can be exponential in n, the number of inputs, in the worst case
Rule extraction from networks: network pruning
Simplify the network structure by removing weighted links that have the least effect on the trained network
Then perform link, unit, or activation value clustering
The set of input and activation values are studied to derive rules describing the relationship between the input and hidden unit layers
Sensitivity analysis: assess the impact that a given input variable has on a network output; the knowledge gained from this analysis can be represented in rules
SVM—Support Vector Machines
A relatively new classification method for both linear and nonlinear data
It uses a nonlinear mapping to transform the original training data into a higher dimension
Within the new dimension, it searches for the linear optimal separating hyperplane (i.e., "decision boundary")
With an appropriate nonlinear mapping to a sufficiently high dimension, data from two classes can always be separated by a hyperplane
SVM finds this hyperplane using support vectors ("essential" training tuples) and margins (defined by the support vectors)
SVM—History and Applications
Vapnik and colleagues (1992): groundwork from Vapnik & Chervonenkis' statistical learning theory in the 1960s
Features: training can be slow, but accuracy is high owing to their ability to model complex nonlinear decision boundaries (margin maximization)
Used for both classification and prediction
Applications: handwritten digit recognition, object recognition, speaker identification, benchmarking time-series prediction tests
SVM—General Philosophy
[Figure: two separating hyperplanes, one with a small margin and one with a large margin; the circled points on the margin are the support vectors]
SVM—Margins and Support Vectors
[Figure: margin and support vectors illustration]
SVM—When Data Is Linearly Separable
Let the data D be (X_1, y_1), ..., (X_|D|, y_|D|), where X_i is a training tuple and y_i its associated class label
There are an infinite number of lines (hyperplanes) separating the two classes, but we want to find the best one (the one that minimizes classification error on unseen data)
SVM searches for the hyperplane with the largest margin, i.e., the maximum marginal hyperplane (MMH)
SVM—Linearly Separable
A separating hyperplane can be written as
    W · X + b = 0
where W = {w_1, w_2, ..., w_n} is a weight vector and b a scalar (bias)
For 2-D it can be written as
    w_0 + w_1 x_1 + w_2 x_2 = 0
The hyperplanes defining the sides of the margin:
    H_1: w_0 + w_1 x_1 + w_2 x_2 ≥ 1  for y_i = +1, and
    H_2: w_0 + w_1 x_1 + w_2 x_2 ≤ -1 for y_i = -1
Any training tuples that fall on hyperplanes H_1 or H_2 (i.e., the sides defining the margin) are support vectors
This becomes a constrained (convex) quadratic optimization problem: quadratic objective function and linear constraints, solved by Quadratic Programming (QP) with Lagrangian multipliers
Why Is SVM Effective on High-Dimensional Data?
The complexity of the trained classifier is characterized by the number of support vectors rather than the dimensionality of the data
The support vectors are the essential or critical training examples: they lie closest to the decision boundary (MMH)
If all other training examples were removed and the training repeated, the same separating hyperplane would be found
The number of support vectors found can be used to compute an (upper) bound on the expected error rate of the SVM classifier, which is independent of the data dimensionality
Thus, an SVM with a small number of support vectors can have good generalization, even when the dimensionality of the data is high
SVM—Linearly Inseparable
Transform the original input data into a higher-dimensional space
Search for a linear separating hyperplane in the new space
[Figure: data in the original space (axes A_1, A_2) becoming separable after transformation]
SVM—Kernel Functions
Instead of computing the dot product on the transformed data tuples, it is mathematically equivalent to apply a kernel function K(X_i, X_j) to the original data, i.e., K(X_i, X_j) = Φ(X_i) · Φ(X_j)
Typical kernel functions:
    Polynomial of degree h: K(X_i, X_j) = (X_i · X_j + 1)^h
    Gaussian radial basis function: K(X_i, X_j) = e^{-||X_i - X_j||² / (2σ²)}
    Sigmoid: K(X_i, X_j) = tanh(κ X_i · X_j - δ)
SVM can also be used for classifying multiple (> 2) classes and for regression analysis (with additional user parameters)
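For illustration, scikit-learn's SVC (an assumption; the slides name no library) exposes these kernels directly; on data that is not linearly separable, the RBF kernel finds a separating surface where the linear kernel cannot:

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles: not linearly separable in the original space.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel).fit(X, y)
    print(kernel, round(clf.score(X, y), 2), "support vectors:", len(clf.support_))
```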
Scaling SVM by Hierarchical Micro-Clustering
SVM is not scalable in the number of data objects in terms of training time and memory usage
"Classifying Large Datasets Using SVMs with Hierarchical Clusters Problem" by Hwanjo Yu, Jiong Yang, Jiawei Han, KDD'03
CB-SVM (Clustering-Based SVM)
Given a limited amount of system resources (e.g., memory), maximize the SVM performance in terms of accuracy and training speed
Use micro-clustering to effectively reduce the number of points to be considered
When deriving support vectors, de-cluster micro-clusters near "candidate vectors" to ensure high classification accuracy
CB-SVM: Clustering-Based SVM
Training data sets may not even fit in memory
Read the data set once (minimizing disk access)
Construct a statistical summary of the data (i.e., hierarchical clusters) given a limited amount of memory
The statistical summary maximizes the benefit of learning the SVM
The summary plays a role in indexing SVMs
Essence of micro-clustering (hierarchical indexing structure)
Use a micro-cluster hierarchical indexing structure
Provide finer samples closer to the boundary and coarser samples farther from the boundary
Selective de-clustering to ensure high accuracy
CF-Tree: Hierarchical Micro-Cluster
[Figure: CF-tree structure]
CB-SVM Algorithm: Outline
Construct two CF-trees from the positive and negative data sets independently
Needs one scan of the data set
Train an SVM from the centroids of the root entries
De-cluster the entries near the boundary into the next level
The child entries de-clustered from the parent entries are accumulated into the training set, together with the non-de-clustered parent entries
Train an SVM again from the centroids of the entries in the training set
Repeat until nothing is accumulated
Selective Declustering
The CF-tree is a suitable base structure for selective declustering
De-cluster only the cluster E_i such that
    D_i - R_i < D_s
where D_i is the distance from the boundary to the center point of E_i and R_i is the radius of E_i
De-cluster only the clusters whose subclusters have the possibility of being the support cluster of the boundary
"Support cluster": a cluster whose centroid is a support vector
Experiment on Synthetic Dataset
[Figure: experimental results]
Experiment on a Large Data Set
[Figure: experimental results]
SVM vs. Neural Network
SVM
Relatively new concept
Deterministic algorithm
Nice generalization properties
Hard to learn: learned in batch mode using quadratic programming techniques
Using kernels, can learn very complex functions
Neural Network
Relatively old
Nondeterministic algorithm
Generalizes well but doesn't have a strong mathematical foundation
Can easily be learned in incremental fashion
To learn complex functions, use a multilayer perceptron (not that trivial)
SVM Related Links
SVM website: http://www.kernel-machines.org/
Representative implementations
LIBSVM: an efficient implementation of SVM, multi-class classification, nu-SVM, one-class SVM, including various interfaces to Java, Python, etc.
SVM-light: simpler, but performance is not better than LIBSVM; supports only binary classification and only the C language
SVM-torch: another recent implementation, also written in C
SVM—Introductory Literature
"Statistical Learning Theory" by Vapnik: extremely hard to understand, and contains many errors
C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Knowledge Discovery and Data Mining, 2(2), 1998
Better than Vapnik's book, but still too hard for an introduction, and the examples are not intuitive
The book "An Introduction to Support Vector Machines" by N. Cristianini and J. Shawe-Taylor
Also hard as an introduction, but the explanation of Mercer's theorem is better than in the above literature
The neural network book by Haykin
Contains one nice chapter of SVM introduction
Associative Classification
Associative classification
Association rules are generated and analyzed for use in classification
Search for strong associations between frequent patterns (conjunctions of attribute-value pairs) and class labels
Classification: based on evaluating a set of rules in the form of
    p_1 ^ p_2 ^ ... ^ p_l → "A_class = C" (conf, sup)
Why effective?
It explores highly confident associations among multiple attributes and may overcome some constraints introduced by decision-tree induction, which considers only one attribute at a time
In many studies, associative classification has been found to be more accurate than some traditional classification methods, such as C4.5
Typical Associative Classification Methods
CBA (Classification By Association: Liu, Hsu & Ma, KDD'98)
Mine possible association rules in the form of
Condset (a set of attribute-value pairs) → class label
Build classifier: organize rules according to decreasing precedence based on confidence and then support
CMAR (Classification based on Multiple Association Rules: Li, Han, Pei, ICDM'01)
Classification: statistical analysis on multiple rules
CPAR (Classification based on Predictive Association Rules: Yin & Han, SDM'03)
Generation of predictive rules (FOIL-like analysis)
High efficiency, accuracy similar to CMAR
RCBT (Mining top-k covering rule groups for gene expression data, Cong et al. SIGMOD'05)
Explores high-dimensional classification, using top-k rule groups
Achieves high classification accuracy and high run-time efficiency
A Closer Look at CMAR
CMAR (Classification based on Multiple Association Rules: Li, Han, Pei, ICDM'01)
Efficiency: uses an enhanced FP-tree that maintains the distribution of class labels among tuples satisfying each frequent itemset
Rule pruning whenever a rule is inserted into the tree
Given two rules, R1 and R2, if the antecedent of R1 is more general than that of R2 and conf(R1) ≥ conf(R2), then R2 is pruned
Prunes rules for which the rule antecedent and class are not positively correlated, based on a χ² test of statistical significance
Classification based on generated/pruned rules
If only one rule satisfies tuple X, assign the class label of the rule
If a rule set S satisfies X, CMAR
divides S into groups according to class labels
uses a weighted χ² measure to find the strongest group of rules, based on the statistical correlation of rules within a group
assigns X the class label of the strongest group
Associative Classification May Achieve High Accuracy and Efficiency (Cong et al. SIGMOD'05)
[Figure: experimental comparison]
Lazy vs. Eager Learning
Lazy vs. eager learning
Lazy learning (e.g., instance-based learning): simply stores the training data (or does only minor processing) and waits until it is given a test tuple
Eager learning (the methods discussed above): given a training set, constructs a classification model before receiving new (e.g., test) data to classify
Lazy: less time in training but more time in predicting
Accuracy
A lazy method effectively uses a richer hypothesis space, since it uses many local linear functions to form its implicit global approximation to the target function
Eager: must commit to a single hypothesis that covers the entire instance space
Lazy Learner: Instance-Based Methods
Instance-based learning:
Store training examples and delay the processing ("lazy evaluation") until a new instance must be classified
Typical approaches
k-nearest neighbor approach
Instances represented as points in a Euclidean space
Locally weighted regression
Constructs a local approximation
Case-based reasoning
Uses symbolic representations and knowledge-based inference
The k-Nearest Neighbor Algorithm
All instances correspond to points in the n-D space
The nearest neighbors are defined in terms of Euclidean distance, dist(X_1, X_2)
The target function could be discrete- or real-valued
For discrete-valued functions, k-NN returns the most common value among the k training examples nearest to x_q
Voronoi diagram: the decision surface induced by 1-NN for a typical set of training examples
[Figure: query point x_q among '+' and '-' training points, and the Voronoi cells induced by 1-NN]
Discussion on the k-NN Algorithm
k-NN for real-valued prediction for a given unknown tuple
Returns the mean value of the k nearest neighbors
Distance-weighted nearest neighbor algorithm
Weight the contribution of each of the k neighbors according to its distance to the query x_q
Give greater weight to closer neighbors, e.g.
    w ≡ 1 / d(x_q, x_i)²
Robust to noisy data by averaging the k nearest neighbors
Curse of dimensionality: distance between neighbors could be dominated by irrelevant attributes
To overcome it, stretch axes or eliminate the least relevant attributes
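A minimal distance-weighted k-NN classifier (my own sketch) using the 1/d² weighting above:

```python
import math
from collections import defaultdict

def knn_predict(train, query, k=3):
    """train: list of (point, label); weights votes by 1/d^2."""
    dist = lambda a, b: math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    votes = defaultdict(float)
    for point, label in nearest:
        d = dist(point, query)
        votes[label] += 1.0 / (d * d) if d > 0 else float("inf")
    return max(votes, key=votes.get)

train = [((1.0, 1.0), "+"), ((1.2, 0.8), "+"), ((4.0, 4.0), "-"), ((4.2, 3.9), "-")]
print(knn_predict(train, (1.5, 1.0)))  # '+'
```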
Case-Based Reasoning (CBR)
CBR: uses a database of problem solutions to solve new problems
Stores symbolic descriptions (tuples or cases), not points in a Euclidean space
Applications: customer service (product-related diagnosis), legal rulings
Methodology
Instances represented by rich symbolic descriptions (e.g., function graphs)
Search for similar cases; multiple retrieved cases may be combined
Tight coupling between case retrieval, knowledge-based reasoning, and problem solving
Challenges
Find a good similarity metric
Indexing based on syntactic similarity measures, and, on failure, backtracking and adapting to additional cases
Genetic Algorithms (GA)
Genetic algorithm: based on an analogy to biological evolution
An initial population is created consisting of randomly generated rules
Each rule is represented by a string of bits
E.g., IF A_1 AND NOT A_2 THEN C_2 can be encoded as 100
If an attribute has k > 2 values, k bits can be used
Based on the notion of survival of the fittest, a new population is formed to consist of the fittest rules and their offspring
The fitness of a rule is represented by its classification accuracy on a set of training examples
Offspring are generated by crossover and mutation
The process continues until a population P evolves in which each rule in P satisfies a prespecified fitness threshold
Slow but easily parallelizable
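Crossover and mutation on bit-string rules are one-liners; a minimal sketch (my own illustration):

```python
import random

random.seed(0)

def crossover(a, b):
    """Single-point crossover of two equal-length bit strings."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(bits, rate=0.1):
    """Flip each bit independently with the given probability."""
    return "".join(b if random.random() > rate else "10"[int(b)] for b in bits)

# "IF A1 AND NOT A2 THEN C2" encoded as 100 (per the slide).
print(crossover("100", "011"))
print(mutate("100"))
```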
Rough Set Approach
Rough sets are used to approximately or "roughly" define equivalence classes
A rough set for a given class C is approximated by two sets: a lower approximation (certain to be in C) and an upper approximation (cannot be described as not belonging to C)
Finding the minimal subsets (reducts) of attributes for feature reduction is NP-hard, but a discernibility matrix (which stores the differences between attribute values for each pair of data tuples) is used to reduce the computation intensity
Fuzzy Set Approaches
Fuzzy logic uses truth values between 0.0 and 1.0 to represent the degree of membership (such as in a fuzzy membership graph)
Attribute values are converted to fuzzy values
E.g., income is mapped into the discrete categories {low, medium, high} with fuzzy values calculated
For a given new sample, more than one fuzzy value may apply
Each applicable rule contributes a vote for membership in the categories
Typically, the truth values for each predicted category are summed, and these sums are combined
What Is Prediction?
(Numerical) prediction is similar to classification
construct a model
use the model to predict a continuous or ordered value for a given input
Prediction is different from classification
Classification refers to predicting a categorical class label
Prediction models continuous-valued functions
Major method for prediction: regression
models the relationship between one or more independent or predictor variables and a dependent or response variable
Regression analysis
Linear and multiple regression
Nonlinear regression
Other regression methods: generalized linear model, Poisson regression, log-linear models, regression trees
Linear Regression
Linear regression: involves a response variable y and a single predictor variable x:
    y = w_0 + w_1 x
where w_0 (y-intercept) and w_1 (slope) are regression coefficients
Method of least squares: estimates the best-fitting straight line:
    w_1 = Σ_{i=1}^{|D|} (x_i - x̄)(y_i - ȳ) / Σ_{i=1}^{|D|} (x_i - x̄)²
    w_0 = ȳ - w_1 x̄
Multiple linear regression: involves more than one predictor variable
Training data is of the form (X_1, y_1), (X_2, y_2), ..., (X_|D|, y_|D|)
Ex. For 2-D data, we may have: y = w_0 + w_1 x_1 + w_2 x_2
Solvable by an extension of the least squares method or using SAS, S-Plus
Many nonlinear functions can be transformed into the above
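The closed-form estimates translate directly into code; a minimal sketch (my own illustration):

```python
def least_squares(xs, ys):
    """Closed-form w0, w1 for simple linear regression y = w0 + w1*x."""
    n = len(xs)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    w1 = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
         / sum((x - x_bar) ** 2 for x in xs)
    w0 = y_bar - w1 * x_bar
    return w0, w1

# Points on y = 1 + 2x (exactly recoverable).
print(least_squares([1, 2, 3, 4], [3, 5, 7, 9]))  # (1.0, 2.0)
```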
Nonlinear Regression
Some nonlinear models can be modeled by a polynomial function
A polynomial regression model can be transformed into a linear regression model. For example,
    y = w_0 + w_1 x + w_2 x² + w_3 x³
is convertible to linear form with the new variables x_2 = x², x_3 = x³:
    y = w_0 + w_1 x + w_2 x_2 + w_3 x_3
Other functions, such as the power function, can also be transformed to a linear model
Some models are intractably nonlinear (e.g., a sum of exponential terms)
It is possible to obtain least-squares estimates through extensive calculation on more complex formulae
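The transformation is just feature construction; a sketch (my own illustration) fitting the cubic with numpy's least-squares solver:

```python
import numpy as np

x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
y = 1 + 2 * x + 0.5 * x**2 - 0.25 * x**3   # known cubic, for illustration

# New variables x2 = x^2, x3 = x^3 turn the cubic into a linear model.
A = np.column_stack([np.ones_like(x), x, x**2, x**3])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(w, 2))  # [ 1.    2.    0.5  -0.25]
```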
Other Regression-Based Models
Generalized linear model:
Foundation on which linear regression can be applied to modeling
categorical response variables
Variance of y is a function of the mean value of y, not a constant
Logistic regression: models the probability of some event occurring
as a (logistic) function of a linear combination of predictor variables
Poisson regression: models data that exhibit a Poisson
distribution
Log-linear models: (for categorical data)
Approximate discrete multidimensional prob. distributions
Also useful for data compression and smoothing
Regression trees and model trees
Trees to predict continuous values rather than class labels
Regression Trees and Model Trees
Regression tree: proposed in CART system (Breiman et al. 1984)
CART: Classification And Regression Trees
Each leaf stores a continuous-valued prediction
It is the average value of the predicted attribute for the training
tuples that reach the leaf
Model tree: proposed by Quinlan (1992)
Each leaf holds a regression model—a multivariate linear equation
for the predicted attribute
A more general case than regression tree
Regression and model trees tend to be more accurate than linear
regression when the data are not represented well by a simple linear
model
Predictive Modeling in Multidimensional Databases
Predictive modeling: Predict data values or construct
generalized linear models based on the database data
One can only predict value ranges or category distributions
Method outline:
Minimal generalization
Attribute relevance analysis
Generalized linear model construction
Prediction
Determine the major factors which influence the prediction
Data relevance analysis: uncertainty measurement,
entropy analysis, expert judgement, etc.
Multilevel prediction: drill-down and roll-up analysis
Prediction: Numerical Data
Prediction: Categorical Data
Classifier Accuracy Measures
Accuracy of a classifier M, acc(M): percentage of test set tuples that are
correctly classified by the model M
Error rate (misclassification rate) of M = 1 – acc(M)
Given m classes, CM_{i,j}, an entry in a confusion matrix, indicates the
number of tuples in class i that are labeled by the classifier as class j
Alternative accuracy measures (e.g., for cancer diagnosis)
sensitivity = t-pos/pos /* true positive recognition rate */
specificity = t-neg/neg /* true negative recognition rate */
precision = t-pos/(t-pos + f-pos)
accuracy = sensitivity * pos/(pos + neg) + specificity * neg/(pos + neg)
This model can also be used for cost-benefit analysis

classes              buy_computer = yes   buy_computer = no   total   recognition (%)
buy_computer = yes   6954                 46                  7000    99.34
buy_computer = no    412                  2588                3000    86.27
total                7366                 2634                10000   95.42

         Predicted C_1     Predicted C_2
C_1      true positive     false negative
C_2      false positive    true negative
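A quick Python sketch that recomputes these alternative measures from the buys_computer confusion matrix above:

```python
# Counts taken from the confusion matrix on this slide
t_pos, f_neg = 6954, 46    # actual "yes" tuples
f_pos, t_neg = 412, 2588   # actual "no" tuples

pos, neg = t_pos + f_neg, f_pos + t_neg
sensitivity = t_pos / pos                 # true positive recognition rate
specificity = t_neg / neg                 # true negative recognition rate
precision = t_pos / (t_pos + f_pos)
accuracy = (sensitivity * pos / (pos + neg)
            + specificity * neg / (pos + neg))
print(f"sensitivity={sensitivity:.4f} specificity={specificity:.4f} "
      f"precision={precision:.4f} accuracy={accuracy:.4f}")
```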
Predictor Error Measures
Measure predictor accuracy: measure how far off the predicted value is
from the actual known value
Loss function: measures the error between y_i and the predicted value y_i'
Absolute error: |y_i - y_i'|
Squared error: (y_i - y_i')^2
Test error (generalization error): the average loss over the test set
Mean absolute error: \frac{1}{d} \sum_{i=1}^{d} |y_i - y_i'|
Mean squared error: \frac{1}{d} \sum_{i=1}^{d} (y_i - y_i')^2
Relative absolute error: \sum_{i=1}^{d} |y_i - y_i'| / \sum_{i=1}^{d} |y_i - \bar{y}|
Relative squared error: \sum_{i=1}^{d} (y_i - y_i')^2 / \sum_{i=1}^{d} (y_i - \bar{y})^2
The mean squared error exaggerates the presence of outliers
The (square) root mean-squared error and, similarly, the root relative
squared error are popularly used
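A small Python sketch of these predictor error measures (NumPy assumed; the true/predicted values are made up):

```python
import numpy as np

def error_measures(y_true, y_pred):
    """Compute the predictor error measures listed above."""
    y, yp = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    abs_err, sq_err = np.abs(y - yp), (y - yp) ** 2
    return {
        "mean_abs": abs_err.mean(),
        "mean_sq": sq_err.mean(),
        "root_mean_sq": np.sqrt(sq_err.mean()),
        "relative_abs": abs_err.sum() / np.abs(y - y.mean()).sum(),
        "relative_sq": sq_err.sum() / ((y - y.mean()) ** 2).sum(),
    }

print(error_measures([3.0, 5.0, 7.5], [2.5, 5.5, 8.0]))
```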
Evaluating the Accuracy of a Classifier
or Predictor (I)
Holdout method
Given data is randomly partitioned into two independent sets
Training set (e.g., 2/3) for model construction
Test set (e.g., 1/3) for accuracy estimation
Random sampling: a variation of holdout
Repeat holdout k times, accuracy = avg. of the accuracies
obtained
Cross-validation (k-fold, where k = 10 is most popular)
Randomly partition the data into k mutually exclusive subsets,
each of approximately equal size
At the i-th iteration, use D_i as the test set and the others as the training set
Leave-one-out: k folds where k = # of tuples, for small-sized data
Stratified cross-validation: folds are stratified so that the class dist. in
each fold is approx. the same as that in the initial data
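A minimal sketch of stratified 10-fold cross-validation, assuming scikit-learn is available; the decision tree and the iris data are stand-ins for any classifier and data set:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
# Stratified folds keep each fold's class distribution close to the data's
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=cv)
print(f"accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```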
Evaluating the Accuracy of a Classifier
or Predictor (II)
Bootstrap
Works well with small data sets
Samples the given training tuples uniformly with replacement
i.e., each time a tuple is selected, it is equally likely to be
selected again and re-added to the training set
There are several bootstrap methods; a common one is the .632 bootstrap
Suppose we are given a data set of d tuples. The data set is sampled d
times, with replacement, resulting in a training set of d samples. The data
tuples that did not make it into the training set end up forming the test set.
About 63.2% of the original data will end up in the bootstrap sample, and the
remaining 36.8% will form the test set (since (1 - 1/d)^d ≈ e^{-1} = 0.368)
Repeat the sampling procedure k times; the overall accuracy of the
model is:
Acc(M) = \sum_{i=1}^{k} (0.632 \times Acc(M_i)_{test\_set} + 0.368 \times Acc(M_i)_{train\_set})
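A sketch of one way to implement the .632 bootstrap, assuming scikit-learn; unlike the sum above, the estimate is averaged over the k rounds here:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
d, k, acc = len(y), 10, 0.0
for _ in range(k):
    idx = rng.integers(0, d, size=d)          # sample d tuples with replacement
    test = np.setdiff1d(np.arange(d), idx)    # tuples never drawn form the test set
    model = DecisionTreeClassifier().fit(X[idx], y[idx])
    acc += (0.632 * model.score(X[test], y[test])    # test-set accuracy
            + 0.368 * model.score(X[idx], y[idx]))   # training-set accuracy
print(f".632 bootstrap accuracy estimate: {acc / k:.3f}")
```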
Ensemble Methods: Increasing the Accuracy
Ensemble methods
Use a combination of models to increase accuracy
Combine a series of k learned models, M_1, M_2, …, M_k,
with the aim of creating an improved model M*
Popular ensemble methods
Bagging: averaging the prediction over a collection of
classifiers
Boosting: weighted vote with a collection of classifiers
Ensemble: combining a set of heterogeneous classifiers
Bagging: Bootstrap Aggregation
Analogy: Diagnosis based on multiple doctors' majority vote
Training
Given a set D of d tuples, at each iteration i, a training set D_i of d
tuples is sampled with replacement from D (i.e., bootstrap)
A classifier model M_i is learned for each training set D_i
Classification: classify an unknown sample X
Each classifier M_i returns its class prediction
The bagged classifier M* counts the votes and assigns the class
with the most votes to X
Prediction: can be applied to the prediction of continuous values by
taking the average value of each prediction for a given test tuple
Accuracy
Often significantly better than a single classifier derived from D
For noisy data: not considerably worse, more robust
Proven improved accuracy in prediction
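A minimal sketch of bagging by majority vote, assuming scikit-learn (its sklearn.ensemble.BaggingClassifier provides a full implementation):

```python
import numpy as np
from collections import Counter
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def bagging_predict(X_train, y_train, X_test, k=25, seed=0):
    """Train k trees on bootstrap samples D_i; predict by majority vote."""
    rng = np.random.default_rng(seed)
    d = len(y_train)
    votes = []
    for _ in range(k):
        idx = rng.integers(0, d, size=d)   # bootstrap sample D_i
        m = DecisionTreeClassifier().fit(X_train[idx], y_train[idx])
        votes.append(m.predict(X_test))    # each M_i casts a vote
    votes = np.array(votes)
    return np.array([Counter(votes[:, j]).most_common(1)[0][0]
                     for j in range(votes.shape[1])])

X, y = load_iris(return_X_y=True)
print(bagging_predict(X, y, X[:5]))
```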
Boosting
Analogy: Consult several doctors, based on a combination of weighted
diagnoses, where each weight is assigned based on previous diagnosis accuracy
How does boosting work?
Weights are assigned to each training tuple
A series of k classifiers is iteratively learned
After a classifier M_i is learned, the weights are updated to allow the
subsequent classifier, M_{i+1}, to pay more attention to the training
tuples that were misclassified by M_i
The final M* combines the votes of each individual classifier, where
the weight of each classifier's vote is a function of its accuracy
The boosting algorithm can be extended for the prediction of
continuous values
Compared with bagging: boosting tends to achieve greater accuracy,
but it also risks overfitting the model to misclassified data
Adaboost (Freund and Schapire, 1997)
Given a set of d class-labeled tuples, (X_1, y_1), …, (X_d, y_d)
Initially, all tuple weights are set to the same value (1/d)
Generate k classifiers in k rounds. At round i,
Tuples from D are sampled (with replacement) to form a
training set D_i of the same size
Each tuple's chance of being selected is based on its weight
A classification model M_i is derived from D_i
Its error rate is calculated using D_i as a test set
If a tuple is misclassified, its weight is increased; otherwise it is
decreased
Error rate: err(X_j) is the misclassification error of tuple X_j. Classifier
M_i's error rate is the sum of the weights of the misclassified tuples:
error(M_i) = \sum_{j=1}^{d} w_j \times err(X_j)
The weight of classifier M_i's vote is
\log \frac{1 - error(M_i)}{error(M_i)}
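A compact sketch of this weighting scheme for binary labels in {-1, +1}, assuming scikit-learn for the base classifier; the weighted error is computed over all of D with the current weights, matching the formula above (sklearn.ensemble.AdaBoostClassifier is a full implementation):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_train(X, y, k=10, seed=0):
    """Minimal AdaBoost sketch; y must contain labels -1 and +1."""
    rng = np.random.default_rng(seed)
    d = len(y)
    w = np.full(d, 1.0 / d)                   # initial tuple weights: 1/d
    models, alphas = [], []
    for _ in range(k):
        idx = rng.choice(d, size=d, p=w)      # sample by weight, with replacement
        m = DecisionTreeClassifier(max_depth=1).fit(X[idx], y[idx])
        miss = m.predict(X) != y
        err = w[miss].sum()                   # sum of misclassified tuple weights
        if err <= 0 or err >= 0.5:            # skip degenerate rounds
            continue
        alpha = np.log((1 - err) / err)       # classifier's vote weight
        w = w * np.exp(alpha * miss)          # increase misclassified weights...
        w = w / w.sum()                       # ...normalization shrinks the rest
        models.append(m)
        alphas.append(alpha)
    return models, alphas

def adaboost_predict(models, alphas, X):
    # Weighted vote of the individual classifiers
    score = sum(a * m.predict(X) for a, m in zip(alphas, models))
    return np.sign(score)
```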
Model Selection: ROC Curves
ROC (Receiver Operating Characteristic)
curves: for visual comparison of
classification models
Originated from signal detection theory
Shows the trade-off between the true
positive rate and the false positive rate
The area under the ROC curve is a
measure of the accuracy of the model
Rank the test tuples in decreasing order:
the one that is most likely to belong to the
positive class appears at the top of the list
The closer the curve is to the diagonal line (i.e., the
closer the area is to 0.5), the less accurate
the model
In the ROC plot, the vertical axis represents the true positive rate
and the horizontal axis the false positive rate; the plot also shows a
diagonal line, and a model with perfect accuracy has an area of 1.0
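A short sketch of computing an ROC curve and its area, assuming scikit-learn; the labels and scores below are made up (y_score would be the classifier's probability for the positive class):

```python
from sklearn.metrics import roc_curve, roc_auc_score

y_true = [1, 1, 0, 1, 0, 0, 1, 0]                     # made-up test labels
y_score = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.1]   # made-up positive-class scores
fpr, tpr, thresholds = roc_curve(y_true, y_score)     # points on the ROC curve
print("AUC =", roc_auc_score(y_true, y_score))        # area under the curve
```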
Summary (I)
Classification and prediction are two forms of data analysis that can
be used to extract models describing important data classes or to
predict future data trends.
Effective and scalable methods have been developed for decision tree
induction, naive Bayesian classification, Bayesian belief networks,
rule-based classifiers, backpropagation, Support Vector Machines
(SVM), associative classification, nearest-neighbor classifiers,
case-based reasoning, and other classification methods such as
genetic algorithms and rough set and fuzzy set approaches.
Linear, nonlinear, and generalized linear models of regression can be
used for prediction. Many nonlinear problems can be converted to
linear problems by performing transformations on the predictor
variables. Regression trees and model trees are also used for
prediction.
Summary (II)
Stratified k-fold cross-validation is a recommended method for
accuracy estimation. Bagging and boosting can be used to increase
overall accuracy by learning and combining a series of individual
models.
Significance tests and ROC curves are useful for model selection.
There have been numerous comparisons of the different classification
and prediction methods, and the matter remains a research topic.
No single method has been found to be superior to all others for all
data sets.
Issues such as accuracy, training time, robustness, interpretability, and
scalability must be considered and can involve trade-offs, further
complicating the quest for an overall superior method.
References (1)
C. Apte and S. Weiss. Data mining with decision trees and decision rules. Future
Generation Computer Systems, 13, 1997.
C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press,
1995.
L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression
Trees. Wadsworth International Group, 1984.
C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition.
Data Mining and Knowledge Discovery, 2(2): 121-168, 1998.
P. K. Chan and S. J. Stolfo. Learning arbiter and combiner trees from partitioned
data for scaling machine learning. KDD'95.
W. Cohen. Fast effective rule induction. ICML'95.
G. Cong, K.-L. Tan, A. K. H. Tung, and X. Xu. Mining top-k covering rule groups for
gene expression data. SIGMOD'05.
A. J. Dobson. An Introduction to Generalized Linear Models. Chapman and Hall,
1990.
G. Dong and J. Li. Efficient mining of emerging patterns: Discovering trends and
differences. KDD'99.
References (2)
R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification, 2nd ed. John Wiley and
Sons, 2001.
U. M. Fayyad. Branching on attribute values in decision tree generation. AAAI'94.
Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line
learning and an application to boosting. J. Computer and System Sciences, 1997.
J. Gehrke, R. Ramakrishnan, and V. Ganti. Rainforest: A framework for fast decision
tree construction of large datasets. VLDB'98.
J. Gehrke, V. Ganti, R. Ramakrishnan, and W.-Y. Loh. BOAT: Optimistic Decision Tree
Construction. SIGMOD'99.
T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data
Mining, Inference, and Prediction. SpringerVerlag, 2001.
D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The
combination of knowledge and statistical data. Machine Learning, 1995.
M. Kamber, L. Winstone, W. Gong, S. Cheng, and J. Han. Generalization and decision
tree induction: Efficient classification in data mining. RIDE'97.
B. Liu, W. Hsu, and Y. Ma. Integrating Classification and Association Rule Mining. KDD'98.
W. Li, J. Han, and J. Pei. CMAR: Accurate and Efficient Classification Based on
Multiple Class-Association Rules. ICDM'01.
References (3)
T.-S. Lim, W.-Y. Loh, and Y.-S. Shih. A comparison of prediction accuracy,
complexity, and training time of thirty-three old and new classification
algorithms. Machine Learning, 2000.
J. Magidson. The CHAID approach to segmentation modeling: Chi-squared
automatic interaction detection. In R. P. Bagozzi, editor, Advanced Methods of
Marketing Research, Blackwell Business, 1994.
M. Mehta, R. Agrawal, and J. Rissanen. SLIQ: A fast scalable classifier for data
mining. EDBT'96.
T. M. Mitchell. Machine Learning. McGraw Hill, 1997.
S. K. Murthy. Automatic Construction of Decision Trees from Data: A
Multi-Disciplinary Survey. Data Mining and Knowledge Discovery, 2(4): 345-389, 1998.
J. R. Quinlan. Induction of decision trees. Machine Learning, 1:81-106, 1986.
J. R. Quinlan and R. M. Cameron-Jones. FOIL: A midterm report. ECML'93.
J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
J. R. Quinlan. Bagging, boosting, and C4.5. AAAI'96.
References (4)
R. Rastogi and K. Shim. PUBLIC: A decision tree classifier that integrates building
and pruning. VLDB'98.
J. Shafer, R. Agrawal, and M. Mehta. SPRINT: A scalable parallel classifier for
data mining. VLDB'96.
J. W. Shavlik and T. G. Dietterich. Readings in Machine Learning. Morgan Kaufmann,
1990.
P. Tan, M. Steinbach, and V. Kumar. Introduction to Data Mining. Addison Wesley,
2005.
S. M. Weiss and C. A. Kulikowski. Computer Systems that Learn: Classification
and Prediction Methods from Statistics, Neural Nets, Machine Learning, and
Expert Systems. Morgan Kaufmann, 1991.
S. M. Weiss and N. Indurkhya. Predictive Data Mining. Morgan Kaufmann, 1997.
I. H. Witten and E. Frank. Data Mining: Practical Machine Learning Tools and
Techniques, 2nd ed. Morgan Kaufmann, 2005.
X. Yin and J. Han. CPAR: Classification based on predictive association rules.
SDM'03.
H. Yu, J. Yang, and J. Han. Classifying large data sets using SVM with
hierarchical clusters. KDD'03.
September 7, 2012 Data Mining: Concepts and Techniques 129
September 7, 2012
Data Mining: Concepts and Techniques
2
Chapter 6. Classification and Prediction
What is classification? What is prediction?
Support Vector Machines (SVM) Associative classification Lazy learners (or learning from your neighbors)
Issues regarding classification and prediction
Classification by decision tree induction
Other classification methods Prediction Accuracy and error measures
Bayesian classification
Rulebased classification
Classification by back propagation
Ensemble methods
Model selection Summary
3
September 7, 2012
Data Mining: Concepts and Techniques
e..Classification vs. Prediction Classification predicts categorical class labels (discrete or nominal) classifies data (constructs a model) based on the training set and the values (class labels) in a classifying attribute and uses it in classifying new data Prediction models continuousvalued functions. i. 2012 . predicts unknown or missing values Typical applications Credit approval Target marketing Medical diagnosis Fraud detection Data Mining: Concepts and Techniques 4 September 7.
as determined by the class label attribute The set of tuples used for model construction is training set The model is represented as classification rules. decision trees. otherwise overfitting will occur If the accuracy is acceptable. 2012 .Classification—A TwoStep Process Model construction: describing a set of predetermined classes Each tuple/sample is assumed to belong to a predefined class. or mathematical formulae Model usage: for classifying future or unknown objects Estimate accuracy of the model The known label of test sample is compared with the classified result from the model Accuracy rate is the percentage of test set samples that are correctly classified by the model Test set is independent of training set. use the model to classify data tuples whose class labels are not known Data Mining: Concepts and Techniques 5 September 7.
2012 Data Mining: Concepts and Techniques .Process (1): Model Construction Classification Algorithms Training Data NAME M ike M ary B ill Jim D ave A nne RANK YEARS TENURED A ssistant P rof 3 no A ssistant P rof 7 yes P rofessor 2 yes A ssociate P rof 7 yes A ssistant P rof 6 no A ssociate P rof 3 no Classifier (Model) IF rank = ‘professor’ OR years > 6 THEN tenured = ‘yes’ 6 September 7.
2012 Data Mining: Concepts and Techniques 7 .Process (2): Using the Model in Prediction Classifier Testing Data Unseen Data (Jeff. 4) NAME RANK T om M erlisa G eorge Joseph A ssistant P rof A ssociate P rof P rofessor A ssistant P rof YEARS TENURED 2 7 5 7 no no yes yes Tenured? September 7. Professor.
Supervised vs. observations. with the aim of establishing the existence of classes or clusters in the data Data Mining: Concepts and Techniques 8 Unsupervised learning (clustering) September 7.) are accompanied by labels indicating the class of the observations New data is classified based on the training set The class labels of training data is unknown Given a set of measurements. measurements. etc. 2012 . etc. Unsupervised Learning Supervised learning (classification) Supervision: The training data (observations.
Chapter 6. 2012 Data Mining: Concepts and Techniques . Classification and Prediction What is classification? What is prediction? Support Vector Machines (SVM) Associative classification Lazy learners (or learning from your neighbors) Issues regarding classification and prediction Classification by decision tree induction Other classification methods Prediction Accuracy and error measures Bayesian classification Rulebased classification Classification by back propagation Ensemble methods Model selection Summary 9 September 7.
Issues: Data Preparation Data cleaning Preprocess data in order to reduce noise and handle missing values Relevance analysis (feature selection) Remove the irrelevant or redundant attributes Generalize and/or normalize data Data transformation September 7. 2012 Data Mining: Concepts and Techniques 10 .
.Issues: Evaluating Classification Methods Accuracy classifier accuracy: predicting class label predictor accuracy: guessing value of predicted attributes Speed time to construct the model (training time) time to use the model (classification/prediction time) Robustness: handling noise and missing values Scalability: efficiency in diskresident databases Interpretability understanding and insight provided by the model Other measures.g. e. goodness of rules. such as decision tree size or compactness of classification rules Data Mining: Concepts and Techniques 11 September 7. 2012 .
Classification and Prediction What is classification? What is prediction? Support Vector Machines (SVM) Associative classification Lazy learners (or learning from your neighbors) Issues regarding classification and prediction Classification by decision tree induction Other classification methods Prediction Accuracy and error measures Bayesian classification Rulebased classification Classification by back propagation Ensemble methods Model selection Summary 12 September 7. 2012 Data Mining: Concepts and Techniques .Chapter 6.
Decision Tree Induction: Training Dataset age <=30 <=30 31…40 >40 >40 >40 31…40 <=30 <=30 >40 <=30 31…40 31…40 >40 income student credit_rating high no fair high no excellent high no fair medium no fair low yes fair low yes excellent low yes excellent medium no fair low yes fair medium yes fair medium yes excellent medium no excellent high yes fair medium no excellent Data Mining: Concepts and Techniques This follows an example of Quinlan‘s ID3 (Playing Tennis) buys_computer no no yes yes yes no yes no yes yes yes yes yes no 13 September 7. 2012 .
40 overcast >40 credit rating? excellent fair yes yes September 7.Output: A Decision Tree for “buys_computer” age? <=30 student? no no yes yes 31.. 2012 Data Mining: Concepts and Techniques 14 .
all the training examples are at the root Attributes are categorical (if continuousvalued. 2012 . they are discretized in advance) Examples are partitioned recursively based on selected attributes Test attributes are selected on the basis of a heuristic or statistical measure (e.. information gain) Conditions for stopping partitioning All samples for a given node belong to the same class There are no remaining attributes for further partitioning – majority voting is employed for classifying the leaf There are no samples left Data Mining: Concepts and Techniques 15 September 7.Algorithm for Decision Tree Induction Basic algorithm (a greedy algorithm) Tree is constructed in a topdown recursive divideandconquer manner At start.g.
Attribute Selection Measure: Information Gain (ID3/C4. 2012 Data Mining: Concepts and Techniques 16 .5) Select the attribute with the highest information gain Let pi be the probability that an arbitrary tuple in D belongs to class Ci. estimated by Ci. D/D Expected information (entropy) needed to classify a tuple m in D: Info ( D) pi log 2 ( pi ) i 1 Information needed (after using A to split D into v v D  partitions) to classify D: j InfoA ( D) I (Dj ) j 1  D  Information gained by branching on attribute A Gain(A) Info(D) Info A(D) September 7.
048 17 .029 Gain( student) 0. with 2 yes‘es and 3 no‘s.Attribute Selection: Information Gain Class P: buys_computer = ―yes‖ Class N: buys_computer = ―no‖ 9 9 5 5 log 2 ( ) log 2 ( ) 0.694 14 age <=30 31…40 >40 pi 2 4 3 ni I(pi. Hence Gain (age) Info ( D ) Info age ( D ) 0.971 5 I (2.3) I (4.2) 0. 2012 no >40 medium credit_rating fair excellent fair fair fair excellent excellent fair fair fair excellent excellent fair excellent buys_computer no no yes yes yes no yes no yes yes yes yes yes Data no Mining: Concepts and Techniques Similarly.5) 5 I (3. ni) 3 0.940 14 14 14 14 Infoage ( D ) 5 4 I (2.246 age income student <=30 high no <=30 high no 31…40 high no >40 medium no >40 low yes >40 low yes 31…40 low yes <=30 medium no <=30 low yes >40 medium yes <=30 medium yes 31…40 medium no 31…40 high yes September 7.151 Gain(credit _ rating) 0.0) 14 14 Info( D) I (9. Gain(income) 0.3) means ―age <=30‖ has 5 14 out of 14 samples.971 0 0 2 0.
and D2 is the set of tuples in D satisfying A > splitpoint Data Mining: Concepts and Techniques 18 Split: September 7. the midpoint between each pair of adjacent values is considered as a possible split point (ai+ai+1)/2 is the midpoint between the values of ai and ai+1 The point with the minimum expected information requirement for A is selected as the splitpoint for A D1 is the set of tuples in D satisfying A ≤ splitpoint. 2012 .Computing InformationGain for ContinuousValue Attributes Let attribute A be a continuousvalued attribute Must determine the best split point for A Sort the value A in increasing order Typically.
926 = 0.031 4 4 6 6 4 4 log 2 ( ) log 2 ( ) log 2 ( ) 0.Gain Ratio for Attribute Selection (C4.029/0.926 14 14 14 14 14 14 The attribute with the maximum gain ratio is selected as the splitting attribute Data Mining: Concepts and Techniques 19 September 7.5 (a successor of ID3) uses gain ratio to overcome the problem (normalization to information gain) SplitInfoA ( D) j 1 v  Dj   D log 2 (  Dj   D ) GainRatio(A) = Gain(A)/SplitInfo(A) SplitInfo ( D) A Ex. 2012 . gain_ratio(income) = 0.5) Information gain measure is biased towards attributes with a large number of values C4.
2012 . gini(D) is defined as n where pj is the relative frequency of class j in D If a data set D is split on A into two subsets D1 and D2. gini index. IBM IntelligentMiner) If a data set D contains examples from n classes.Gini index (CART. the gini index gini(D) is defined as gini(D) 1 p 2 j j 1 gini A ( D) D1 D  gini( D1) 2 gini( D2) D D Reduction in Impurity: gini( A) gini(D) giniA(D) The attribute provides the smallest ginisplit(D) (or the largest reduction in impurity) is chosen to split the node (need to enumerate all the possible splitting points for each attribute) Data Mining: Concepts and Techniques 20 September 7.
IBM IntelligentMiner) Ex. 2012 . medium} 2 2 14 1 14 1 but gini{medium.Gini index (CART. D has 9 tuples in buys_computer = ―yes‖ and 5 in ―no‖ 9 5 gini( D) 1 0. 10 4 medium} and 4 in D2 gini ( D) Gini ( D ) Gini ( D ) income{low . e..high} is 0.459 14 14 Suppose the attribute income partitions D into 10 in D1: {low.30 and thus the best since it is the lowest All attributes are assumed continuousvalued May need other tools.g. to get the possible split values Can be modified for categorical attributes Data Mining: Concepts and Techniques 21 September 7. clustering.
return good results but Information gain: biased towards multivalued attributes tends to prefer unbalanced splits in which one partition is much smaller than the others biased to multivalued attributes has difficulty when # of classes is large tends to favor tests that result in equalsized partitions and purity in both partitions Data Mining: Concepts and Techniques 22 Gain ratio: Gini index: September 7. in general. 2012 .Comparing Attribute Selection Measures The three measures.
2012 . Most give good results.e.. none is significantly superior than others Data Mining: Concepts and Techniques 23 Multivariate splits (partition based on multiple variable combinations) Which attribute selection measure is the best? September 7. the simplest solution is preferred): The best tree as the one that requires the fewest # of bits to both (1) encode the tree. of attrs.Other Attribute Selection Measures CHAID: a popular decision tree algorithm. gain and gini index in certain cases Gstatistics: has a close approximation to χ2 distribution MDL (Minimal Description Length) principle (i. measure based on χ2 test for independence CSEP: performs better than info. and (2) encode the exceptions to the tree CART: finds multivariate splits based on a linear comb.
2012 .Overfitting and Tree Pruning Overfitting: An induced tree may overfit the training data Too many branches. some may reflect anomalies due to noise or outliers Poor accuracy for unseen samples Prepruning: Halt tree construction early—do not split a node if this would result in the goodness measure falling below a threshold Two approaches to avoid overfitting Difficult to choose an appropriate threshold Postpruning: Remove branches from a ―fully grown‖ tree—get a sequence of progressively pruned trees Use a set of data different from the training data to decide which is the ―best pruned tree‖ Data Mining: Concepts and Techniques 24 September 7.
Enhancements to Basic Decision Tree Induction Allow for continuousvalued attributes Dynamically define new discretevalued attributes that partition the continuous attribute value into a discrete set of intervals Assign the most common value of the attribute Assign probability to each of the possible values Create new attributes based on existing ones that are sparsely represented This reduces fragmentation. and replication Data Mining: Concepts and Techniques 25 Handle missing attribute values Attribute construction September 7. repetition. 2012 .
Classification in Large Databases Classification—a classical problem extensively studied by statisticians and machine learning researchers Scalability: Classifying data sets with millions of examples and hundreds of attributes with reasonable speed Why decision tree induction in data mining? relatively faster learning speed (than other classification methods) convertible to simple and easy to understand classification rules can use SQL queries for accessing databases comparable classification accuracy with other methods Data Mining: Concepts and Techniques 26 September 7. 2012 .
2012 .Scalable Decision Tree Induction Methods SLIQ (EDBT‘96 — Mehta et al. class label) BOAT (PODS‘99 — Gehrke. Ramakrishnan & Loh) Uses bootstrapping to create several small samples Data Mining: Concepts and Techniques 27 September 7.) Constructs an attribute list data structure PUBLIC (VLDB‘98 — Rastogi & Shim) Integrates tree splitting and tree pruning: stop growing the tree earlier RainForest (VLDB‘98 — Gehrke. Shafer et al. value. Ramakrishnan & Ganti) Builds an AVClist (attribute. Ganti.) Builds an index for each attribute and only class list and the current attribute list reside in memory SPRINT (VLDB‘96 — J.
2012 .Scalability Framework for RainForest Separates the scalability aspects from the criteria that determine the quality of the tree Builds an AVClist: AVC (Attribute. Value. Class_label) AVCset (of an attribute X ) Projection of training dataset onto the attribute X and class label where counts of individual class label are aggregated AVCgroup (of a node n ) Set of AVCsets of all predictor attributes at the node n Data Mining: Concepts and Techniques 28 September 7.
. 2012 .40 4 0 medium no fair yes >40 3 2 low yes fair yes low yes excellent no low yes excellent yes AVCset on Student medium no fair no low yes fair yes student Buy_Computer medium yes fair yes yes no medium yes excellent yes medium no excellent yes yes 6 1 high yes fair yes no 3 4 medium no excellent no Data Mining: Concepts and Techniques credit_rating Buy_Computer Credit rating fair excellent yes 6 3 no 2 3 29 AVCset on September 7.Rainforest: Training Set and Its AVC Sets Training Examples age <=30 <=30 31…40 >40 >40 >40 31…40 <=30 <=30 >40 <=30 31…40 31…40 >40 AVCset on Age AVCset on income income Buy_Computer yes high medium low 2 4 3 no 2 2 1 income studentcredit_rating buys_computerAge Buy_Computer high no fair no yes no high no excellent no <=30 3 2 high no fair yes 31.
2012 . precise temperature. outlook.g. humidity.Data CubeBased DecisionTree Induction Integration of generalization with decisiontree induction (Kamber et al. etc.‘97) Classification at primitive concept levels E. scattered classes.. Lowlevel concepts. bushy classificationtrees Semantic interpretation problems Relevance analysis at multilevels Informationgain analysis with dimension + level Data Mining: Concepts and Techniques 30 Cubebased multilevel classification September 7.
2012 . Data Mining: Concepts and Techniques 31 September 7.BOAT (Bootstrapped Optimistic Algorithm for Tree Construction) Use a statistical technique called bootstrapping to create several smaller samples (subsets). resulting in several trees These trees are examined and used to construct a new tree T’ It turns out that T’ is very close to the tree that would be generated using the whole data set together Adv: requires only two scans of DB. each fits in memory Each subset is used to create a tree. an incremental alg.
2012 Data Mining: Concepts and Techniques 32 .Presentation of Classification Results September 7.
Visualization of a Decision Tree in SGI/MineSet 3.0 September 7. 2012 Data Mining: Concepts and Techniques 33 .
2012 Data Mining: Concepts and Techniques 34 .Interactive Visual Mining by PerceptionBased Classification (PBC) September 7.
Chapter 6. 2012 Data Mining: Concepts and Techniques . Classification and Prediction What is classification? What is prediction? Support Vector Machines (SVM) Associative classification Lazy learners (or learning from your neighbors) Issues regarding classification and prediction Classification by decision tree induction Other classification methods Prediction Accuracy and error measures Bayesian classification Rulebased classification Classification by back propagation Ensemble methods Model selection Summary 35 September 7.
naïve Bayesian classifier. they can provide a standard of optimal decision making against which other methods can be measured Data Mining: Concepts and Techniques 36 September 7. i..Bayesian Classification: Why? A statistical classifier: performs probabilistic prediction. Performance: A simple Bayesian classifier. predicts class membership probabilities Foundation: Based on Bayes‘ Theorem.e. 2012 . has comparable performance with decision tree and selected neural network classifiers Incremental: Each training example can incrementally increase/decrease the probability that a hypothesis is correct — prior knowledge can be combined with observed data Standard: Even when Bayesian methods are computationally intractable.
medium income Data Mining: Concepts and Techniques 37 September 7. 2012 . the prob. the probability of observing the sample X. Given that X will buy computer. given that the hypothesis holds E. that X is 31. … P(X): probability that sample data is observed P(XH) (posteriori probability). regardless of age. income. the probability that the hypothesis holds given the observed data sample X P(H) (prior probability).. the initial probability E.40.g.Bayesian Theorem: Basics Let X be a data sample (―evidence‖): class label is unknown Let H be a hypothesis that X belongs to class C Classification is to determine P(HX). X will buy computer...g.
Bayesian Theorem Given training data X. 2012 . posteriori probability of a hypothesis H. follows the Bayes theorem P(H  X) P(X  H )P(H ) P(X) Informally. significant computational cost Data Mining: Concepts and Techniques 38 September 7. this can be written as posteriori = likelihood x prior/evidence Predicts X belongs to C2 iff the probability P(CiX) is the highest among all the P(CkX) for all the k classes Practical difficulty: require initial knowledge of many probabilities. P(HX).
and each tuple is represented by an nD attribute vector X = (x1. ….e. the maximal P(CiX) This can be derived from Bayes‘ theorem P(X  C )P(C ) i i P(C  X) i P(X) Since P(X) is constant for all classes.Towards Naïve Bayesian Classifier Let D be a training set of tuples and their associated class labels.. Cm. xn) Suppose there are m classes C1. only P(C  X) P(X C )P(C ) i i i needs to be maximized Data Mining: Concepts and Techniques 39 September 7. 2012 . Classification is to derive the maximum posteriori. C2. i. x2. ….
P(xkCi) is usually computed based on Gaussian distribution with a mean μ and standard deviation σ ( x ) 2 P( X  C i) P( x  C i ) P( x  C i ) P( x  C i ) . D (# of tuples of Ci in D) If Ak is continousvalued.e.. . P( x  C i ) k 1 2 n k 1 g ( x.. no dependence relation between attributes): n This greatly reduces the computation cost: Only counts the class distribution If Ak is categorical.. 2012 1 e 2 2 2 P ( X  C i ) g ( xk . ) and P(xkCi) is September 7. C i .Derivation of Naïve Bayes Classifier A simplified assumption: attributes are conditionally independent (i. Ci ) Data Mining: Concepts and Techniques 40 . P(xkCi) is the # of tuples in Ci having value xk for Ak divided by Ci.
Student = yes Credit_rating = Fair) income student redit_rating c buys_compu high no fair no high no excellent no high no fair yes medium no fair yes low yes fair yes low yes excellent no low yes excellent yes medium no fair no low yes fair yes medium yes fair yes medium yes excellent yes medium no excellent yes high yes fair yes medium no excellent no 41 September 7. Income = medium. 2012 Data Mining: Concepts and Techniques .Naïve Bayesian Classifier: Training Dataset age <=30 <=30 31…40 >40 >40 >40 31…40 <=30 <=30 >40 <=30 31…40 31…40 >40 Class: C1:buys_computer = ‗yes‘ C2:buys_computer = ‗no‘ Data sample X = (age <=30.
4 x 0.667 = 0.4 = 0.007 Therefore.667 P(credit_rating = ―fair‖  buys_computer = ―no‖) = 2/5 = 0.2 P(credit_rating = ―fair‖  buys_computer = ―yes‖) = 6/9 = 0.028 P(Xbuys_computer = ―no‖) * P(buys_computer = ―no‖) = 0.Naïve Bayesian Classifier: An Example P(Ci): P(buys_computer = ―yes‖) = 9/14 = 0. X belongs to class (“buys_computer = yes”) September 7.6 P(income = ―medium‖  buys_computer = ―yes‖) = 4/9 = 0.667 P(student = ―yes‖  buys_computer = ―no‖) = 1/5 = 0.4 P(student = ―yes‖  buys_computer = ―yes) = 6/9 = 0. 2012 Data Mining: Concepts and Techniques 42 .357 P(age = ―<=30‖  buys_computer = ―yes‖) = 2/9 = 0.6 x 0.2 x 0.643 P(buys_computer = ―no‖) = 5/14= 0. student = yes.444 x 0. income = medium. credit_rating = fair) P(XCi) : P(Xbuys_computer = ―yes‖) = 0.444 P(income = ―medium‖  buys_computer = ―no‖) = 2/5 = 0.4 Compute P(XCi) for each class X = (age <= 30 .222 x 0.667 x 0.044 P(Xbuys_computer = ―no‖) = 0.019 P(XCi)*P(Ci) : P(Xbuys_computer = ―yes‖) * P(buys_computer = ―yes‖) = 0.222 P(age = ―<= 30‖  buys_computer = ―no‖) = 3/5 = 0.
Use Laplacian correction (or Laplacian estimator) Adding 1 to each case Prob(income = low) = 1/1003 Prob(income = medium) = 991/1003 Prob(income = high) = 11/1003 The ―corrected‖ prob. the predicted prob. be nonzero. estimates are close to their ―uncorrected‖ counterparts Data Mining: Concepts and Techniques 43 September 7.Avoiding the 0Probability Problem Naïve Bayesian prediction requires each conditional prob. income= medium (990). and income = high (10). 2012 . Suppose a dataset with 1000 tuples. income=low (0). Otherwise. will be zero n P( X  C i ) P( x k  C i) k 1 Ex.
Naïve Bayesian Classifier: Comments
Advantages Easy to implement Good results obtained in most of the cases Disadvantages Assumption: class conditional independence, therefore loss of accuracy Practically, dependencies exist among variables
E.g., hospitals: patients: Profile: age, family history, etc. Symptoms: fever, cough etc., Disease: lung cancer, diabetes, etc. Dependencies among these cannot be modeled by Naïve Bayesian Classifier
How to deal with these dependencies? Bayesian Belief Networks
Data Mining: Concepts and Techniques 44
September 7, 2012
Bayesian Belief Networks
Bayesian belief network allows a subset of the variables
conditionally independent
A graphical model of causal relationships
Represents dependency among the variables Gives a specification of joint probability distribution
Nodes: random variables Links: dependency
X
Z
September 7, 2012
Y
P
X and Y are the parents of Z, and Y is
the parent of P No dependency between Z and P Has no loops or cycles
Data Mining: Concepts and Techniques 45
Bayesian Belief Network: An Example
Family History Smoker
The conditional probability table (CPT) for variable LungCancer:
(FH, S) (FH, ~S) (~FH, S) (~FH, ~S)
LC
LungCancer Emphysema
0.8
0.5
0.7
0.1
~LC
0.2
0.5
0.3
0.9
CPT shows the conditional probability for each possible combination of its parents
PositiveXRay
Dyspnea
Bayesian Belief Networks
September 7, 2012
Derivation of the probability of a particular combination of values of X, from CPT:
n P ( x1 ,...,xn ) P ( xi  Parents(Y i )) i 1
46
Data Mining: Concepts and Techniques
all hidden variables: No good algorithms known for this purpose Ref.Training Bayesian Networks Several scenarios: Given both the network structure and all variables observable: learn only the CPTs Network structure known. all variables observable: search through the model space to reconstruct network topology Unknown structure. some hidden variables: gradient descent (greedy hillclimbing) method. Heckerman: Bayesian networks for data mining Data Mining: Concepts and Techniques 47 September 7. analogous to neural network learning Network structure unknown. D. 2012 .
Classification and Prediction What is classification? What is prediction? Support Vector Machines (SVM) Associative classification Lazy learners (or learning from your neighbors) Issues regarding classification and prediction Classification by decision tree induction Other classification methods Prediction Accuracy and error measures Bayesian classification Rulebased classification Classification by back propagation Ensemble methods Model selection Summary 48 September 7.Chapter 6. 2012 Data Mining: Concepts and Techniques .
e.. rule consequent ncovers = # of tuples covered by R ncorrect = # of tuples correctly classified by R Assessment of a rule: coverage and accuracy coverage(R) = ncovers /D /* D: training data set */ accuracy(R) = ncorrect / ncovers If more than one rule is triggered. need conflict resolution Size ordering: assign the highest priority to the triggering rules that has the ―toughest‖ requirement (i. 2012 . according to some measure of rule quality or by experts Data Mining: Concepts and Techniques 49 September 7.Using IFTHEN Rules for Classification Represent the knowledge in the form of IFTHEN rules R: IF age = youth AND student = yes THEN buys_computer = yes Rule antecedent/precondition vs. with the most attribute test) Classbased ordering: decreasing order of prevalence or misclassification cost per class Rulebased ordering (decision list): rules are organized into one long priority list.
Rule Extraction from a Decision Tree age? Rules are easier to understand than large trees One rule is created for each path from the root to a leaf Each attributevalue pair along a path forms a conjunction: the leaf holds the class prediction <=30 31.40 >40 student? no yes yes credit rating? excellent fair no yes yes Rules are mutually exclusive and exhaustive Example: Rule extraction from our buys_computer decisiontree IF age = young AND student = no IF age = young AND student = yes IF age = midage IF age = young AND credit_rating = fair THEN buys_computer = no THEN buys_computer = yes THEN buys_computer = yes THEN buys_computer = no 50 IF age = old AND credit_rating = excellent THEN buys_computer = yes September 7. 2012 Data Mining: Concepts and Techniques ..
w. e. decisiontree induction: learning a set of rules simultaneously Data Mining: Concepts and Techniques 51 September 7.. CN2.Rule Extraction from the Training Data Sequential covering algorithm: Extracts rules directly from training data Typical sequential covering algorithms: FOIL. the tuples covered by the rules are removed The process repeats on the remaining tuples unless termination condition. each for a given class Ci will cover many tuples of Ci but none (or few) of the tuples of other classes Steps: Rules are learned one at a time Each time a rule is learned. RIPPER Rules are learned sequentially. when no more training examples or when the quality of a rule returned is below a userspecified threshold Comp. 2012 . AQ.g.
How to LearnOneRule? Star with the most general rule possible: condition = empty Adding new attributes by adopting a greedy depthfirst strategy Picks the one that most improves the rule quality Foilgain (in FOIL & RIPPER): assesses info_gain by extending condition pos' pos FOIL _ Gain pos'(log 2 log 2 ) pos' neg ' pos neg It favors rules that have high accuracy and cover many positive tuples RuleQuality measures: consider both coverage and accuracy Rule pruning based on an independent set of test tuples FOIL _ Prune( R) pos neg pos neg Pos/neg are # of positive/negative tuples covered by R. prune R September 7. If FOIL_Prune is higher for the pruned version of R. 2012 Data Mining: Concepts and Techniques 52 .
2012 Data Mining: Concepts and Techniques . Classification and Prediction What is classification? What is prediction? Support Vector Machines (SVM) Associative classification Lazy learners (or learning from your neighbors) Issues regarding classification and prediction Classification by decision tree induction Other classification methods Prediction Accuracy and error measures Bayesian classification Rulebased classification Classification by back propagation Ensemble methods Model selection Summary 53 September 7.Chapter 6.
. …). Personal homepage classification xi = (x1. x2. yi = +1 or –1 x1 : # of a word ―homepage‖ x2 : # of a word ―welcome‖ Mathematically n x X = . 2012 . –1} We want a function f: X Y Data Mining: Concepts and Techniques 54 September 7. x3.g. y Y = {+1.Classification: A Mathematical Mapping Classification: predicts categorical class labels E.
Linear Classification x x x x x x x x x x o o o o o o o oo o o o o Binary Classification problem The data above the red line belongs to class ‗x‘ The data below red line belongs to class ‗o‘ Examples: SVM. Probabilistic Classifiers September 7. Perceptron. 2012 Data Mining: Concepts and Techniques 55 .
Discriminative Classifiers Advantages prediction accuracy is generally high As compared to Bayesian methods – in general robust. 2012 . works when training examples contain errors fast evaluation of the learned target function Bayesian networks are normally slow Criticism long training time difficult to understand the learned function (weights) Bayesian networks can be used easily for pattern discovery Easy in the form of priors on the data or distributions Data Mining: Concepts and Techniques 56 not easy to incorporate domain knowledge September 7.
…} f(xi) > 0 for yi = +1 f(xi) < 0 for yi = 1 Output: classification function f(x) f(x) => wx + b = 0 or w1x1+w2x2+b = 0 • Perceptron: update W additively x1 September 7. w Input: {(x1. y.Perceptron & Winnow x2 • Vector: x. y1). w • Scalar: x. 2012 Data Mining: Concepts and Techniques • Winnow: update W multiplicatively 57 .
the network learns by adjusting the weights so as to be able to predict the correct class label of the input tuples Also referred to as connectionist learning due to the connections between units Data Mining: Concepts and Techniques 58 September 7.Classification by Backpropagation Backpropagation: A neural network learning algorithm Started by psychologists and neurobiologists to develop and test computational analogues of neurons A neural network: A set of connected input/output units where each connection has a weight associated with it During the learning phase. 2012 .
2012 . e.Neural Network as a Classifier Weakness Long training time Require a number of parameters typically best determined empirically. the network topology or ``structure.." Poor interpretability: Difficult to interpret the symbolic meaning behind the learned weights and of ``hidden units" in the network High tolerance to noisy data Ability to classify untrained patterns Wellsuited for continuousvalued inputs and outputs Successful on a wide array of realworld data Algorithms are inherently parallel Techniques have recently been developed for the extraction of rules from trained neural networks Data Mining: Concepts and Techniques 59 Strength September 7.g.
k x0 w0 x1 xn w1 wn f output y For Example Input weight vector x vector w weighted sum Activation function y sign( wi xi k ) i 0 n The ndimensional input vector x is mapped into variable y by means of the scalar product and a nonlinear function mapping Data Mining: Concepts and Techniques 60 September 7.A Neuron (= a perceptron) . 2012 .
A MultiLayer FeedForward Neural Network Output vector Output layer Errj O j (1 O j ) Errk w jk k j j (l) Errj wij wij (l ) Errj Oi Hidden layer Errj O j (1 O j )(T j O j ) wij Input layer Input vector: X September 7. 2012 Data Mining: Concepts and Techniques 1 e I j wijOi j i Oj 1 I j 61 .
networks perform nonlinear regression: Given enough hidden units and enough training samples.How A MultiLayer Neural Network Works? The inputs to the network correspond to the attributes measured for each training tuple Inputs are fed simultaneously into the units making up the input layer They are then weighted and fed simultaneously to a hidden layer The number of hidden layers is arbitrary. which emits the network's prediction The network is feedforward in that none of the weights cycles back to an input unit or to an output unit of a previous layer From a statistical point of view. they can closely approximate any function Data Mining: Concepts and Techniques 62 September 7. 2012 . although usually only one The weighted outputs of the last hidden layer are input to units making up the output layer.
each initialized to 0 Output. one output unit per class is used Once a network has been trained and its accuracy is unacceptable. and # of units in the output layer Normalizing the input values for each attribute measured in the training tuples to [0. repeat the training process with a different network topology or a different set of initial weights Data Mining: Concepts and Techniques 63 September 7. 2012 . # of units in each hidden layer.0—1. # of hidden layers (if > 1). if for classification and more than two classes.0] One input unit per domain value.Defining a Network Topology First decide the network topology: # of units in the input layer.
2012 . through each hidden layer down to the first hidden layer. the weights are modified to minimize the mean squared error between the network's prediction and the actual target value Modifications are made in the ―backwards‖ direction: from the output layer.Backpropagation Iteratively process a set of training tuples & compare the network's prediction with the actual known target value For each training tuple.) Data Mining: Concepts and Techniques 64 September 7. hence ―backpropagation‖ Steps Initialize weights (to small random #s) and biases in the network Propagate the inputs forward (by applying activation function) Backpropagate the error (by updating weights and biases) Terminating condition (when error is very small. etc.
Backpropagation and Interpretability Efficiency of backpropagation: Each epoch (one interation through the training set) takes O(D * w). in the worst case Rule extraction from networks: network pruning Simplify the network structure by removing weighted links that have the least effect on the trained network Then perform link. 2012 . the number of inputs. The knowledge gained from this analysis can be represented in rules Data Mining: Concepts and Techniques 65 September 7. with D tuples and w weights. but # of epochs can be exponential to n. or activation value clustering The set of input and activation values are studied to derive rules describing the relationship between the input and hidden unit layers Sensitivity analysis: assess the impact that a given input variable has on a network output. unit.
2012 Data Mining: Concepts and Techniques .Chapter 6. Classification and Prediction What is classification? What is prediction? Support Vector Machines (SVM) Associative classification Lazy learners (or learning from your neighbors) Issues regarding classification and prediction Classification by decision tree induction Other classification methods Prediction Accuracy and error measures Bayesian classification Rulebased classification Classification by back propagation Ensemble methods Model selection Summary 66 September 7.
it searches for the linear optimal separating hyperplane (i.SVM—Support Vector Machines A new classification method for both linear and nonlinear data It uses a nonlinear mapping to transform the original training data into a higher dimension With the new dimension. data from two classes can always be separated by a hyperplane SVM finds this hyperplane using support vectors (―essential‖ training tuples) and margins (defined by the support vectors) Data Mining: Concepts and Techniques 67 September 7.. 2012 .e. ―decision boundary‖) With an appropriate nonlinear mapping to a sufficiently high dimension.
SVM—History and Applications Vapnik and colleagues (1992)—groundwork from Vapnik & Chervonenkis‘ statistical learning theory in 1960s Features: training can be slow but accuracy is high owing to their ability to model complex nonlinear decision boundaries (margin maximization) Used both for classification and prediction Applications: handwritten digit recognition. 2012 Data Mining: Concepts and Techniques 68 . benchmarking timeseries prediction tests September 7. speaker identification. object recognition.
2012 Data Mining: Concepts and Techniques 69 .SVM—General Philosophy Small Margin Large Margin Support Vectors September 7.
SVM—Margins and Support Vectors September 7. 2012 Data Mining: Concepts and Techniques 70 .
i. maximum marginal hyperplane (MMH) September 7.SVM—When Data Is Linearly Separable m Let data D be (X1. yD). 2012 Data Mining: Concepts and Techniques 71 . (XD..e. …. where Xi is the set of training tuples associated with the class labels yi There are infinite lines (hyperplanes) separating the two classes but we want to find the best one (the one that minimizes classification error on unseen data) SVM searches for the hyperplane with the largest margin. y1).
2012 .e.SVM—Linearly Separable A separating hyperplane can be written as W●X+b=0 where W={w1. w2. and H2: w0 + w1 x1 + w2 x2 ≤ – 1 for yi = –1 Any training tuples that fall on hyperplanes H1 or H2 (i. the sides defining the margin) are support vectors This becomes a constrained (convex) quadratic optimization problem: Quadratic objective function and linear constraints Quadratic Programming (QP) Lagrangian multipliers Data Mining: Concepts and Techniques 72 September 7. wn} is a weight vector and b a scalar (bias) For 2D it can be written as w0 + w1 x1 + w2 x2 = 0 The hyperplane defining the sides of the margin: H1: w0 + w1 x1 + w2 x2 ≥ 1 for yi = +1. …..
even when the dimensionality of the data is high September 7. the same separating hyperplane would be found The number of support vectors found can be used to compute an (upper) bound on the expected error rate of the SVM classifier. 2012 Data Mining: Concepts and Techniques 73 .Why Is SVM Effective on High Dimensional Data? The complexity of trained classifier is characterized by the # of support vectors rather than the dimensionality of the data The support vectors are the essential or critical training examples — they lie closest to the decision boundary (MMH) If all other training examples are removed and the training is repeated. which is independent of the data dimensionality Thus. an SVM with a small number of support vectors can have good generalization.
A2 SVM—Linearly Inseparable Transform the original input data into a higher dimensional space A1 Search for a linear separating hyperplane in the new space Data Mining: Concepts and Techniques 74 September 7. 2012 .
i. K(Xi..e. it is mathematically equivalent to instead applying a kernel function K(Xi. Xj) to the original data. 2012 Data Mining: Concepts and Techniques 75 . Xj) = Φ(Xi) Φ(Xj) Typical Kernel Functions SVM can also be used for classifying multiple (> 2) classes and for regression analysis (with additional user parameters) September 7.SVM—Kernel functions Instead of computing the dot product on the transformed data tuples.
Scaling SVM by Hierarchical Micro-Clustering
- SVM is not scalable to the number of data objects in terms of training time and memory usage
- "Classifying Large Data Sets Using SVM with Hierarchical Clusters" by Hwanjo Yu, Jiong Yang, and Jiawei Han, KDD'03
- CB-SVM (Clustering-Based SVM)
  - Given a limited amount of system resources (e.g., memory), maximize SVM performance in terms of accuracy and training speed
  - Uses micro-clustering to effectively reduce the number of points to be considered
  - When deriving support vectors, de-clusters the micro-clusters near the "candidate vectors" to ensure high classification accuracy
CB-SVM: Clustering-Based SVM
- Training data sets may not even fit in memory
- Read the data set once (minimizing disk access)
  - Construct a statistical summary of the data (i.e., hierarchical clusters) given a limited amount of memory
  - The statistical summary maximizes the benefit of learning the SVM
- The summary plays a role in indexing SVMs
- Essence of micro-clustering (hierarchical indexing structure):
  - Use the micro-cluster hierarchical indexing structure to provide finer samples closer to the boundary and coarser samples farther from the boundary
  - Selective de-clustering to ensure high accuracy
CF-Tree: Hierarchical Micro-Cluster
(figure: a CF-tree indexing structure of hierarchical micro-clusters)
CB-SVM Algorithm: Outline
- Construct two CF-trees from the positive and negative data sets independently (needs only one scan of the data set)
- Train an SVM from the centroids of the root entries
- De-cluster the entries near the boundary into the next level
  - The children entries de-clustered from the parent entries are accumulated into the training set, together with the non-de-clustered parent entries
- Train an SVM again from the centroids of the entries in the training set
- Repeat until nothing is accumulated
Selective De-clustering
- The CF-tree is a suitable base structure for selective de-clustering
- De-cluster only a cluster Ei such that Di – Ri < Ds, where Di is the distance from the boundary to the center point of Ei, Ri is the radius of Ei, and Ds is the distance from the boundary to its nearest support vector
- That is, de-cluster only those clusters whose subclusters could possibly be support clusters of the boundary
- "Support cluster": a cluster whose centroid is a support vector
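The following is only a schematic sketch of the CB-SVM loop with the selective de-clustering test folded in; build_cf_tree, train_svm, and distance_to_boundary are hypothetical placeholder helpers introduced for illustration, not the actual KDD'03 implementation.

```python
def cb_svm(positive_data, negative_data):
    # One scan of each data set builds its CF-tree summary
    # (build_cf_tree is a hypothetical helper, as noted above).
    pos_tree = build_cf_tree(positive_data)
    neg_tree = build_cf_tree(negative_data)

    # Start from the coarsest summary: the root entries' centroids.
    entries = pos_tree.root_entries() + neg_tree.root_entries()
    while True:
        svm = train_svm([(e.centroid, e.label) for e in entries])  # hypothetical
        ds = svm.margin()  # distance from the boundary to the support vectors
        next_entries = []
        for e in entries:
            di = distance_to_boundary(svm, e.centroid)  # hypothetical helper
            # Selective de-clustering: expand only entries whose
            # subclusters could hide support vectors (Di - Ri < Ds).
            if di - e.radius < ds and e.children:
                next_entries.extend(e.children)
            else:
                next_entries.append(e)  # keep the parent entry as-is
        if next_entries == entries:     # nothing new was accumulated
            return svm
        entries = next_entries
```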
Experiment on Synthetic Dataset
Experiment on a Large Data Set
SVM vs. Neural Network
- SVM
  - Relatively new concept
  - Deterministic algorithm
  - Nice generalization properties
  - Hard to learn: learned in batch mode using quadratic programming techniques
  - Using kernels, can learn very complex functions
- Neural network
  - Relatively old
  - Nondeterministic algorithm
  - Generalizes well but does not have a strong mathematical foundation
  - Can easily be learned in incremental fashion
  - To learn complex functions, use a multilayer perceptron (not that trivial)
SVM Related Links
- SVM website: http://www.kernel-machines.org/
- Representative implementations
  - LIBSVM: an efficient implementation of SVM, with multi-class classification, ν-SVM, and one-class SVM, including various interfaces with Java, Python, etc.
  - SVMlight: simpler, but performance is not better than LIBSVM; supports only binary classification and only the C language
  - SVMtorch: another recent implementation, also written in C
SVM—Introductory Literature
- "Statistical Learning Theory" by Vapnik: extremely hard to understand, and the examples are not intuitive
- "An Introduction to Support Vector Machines" by N. Cristianini and J. Shawe-Taylor: also written at a hard level for an introduction, but its explanation of Mercer's theorem is better than the above
- C. J. C. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition," Data Mining and Knowledge Discovery, 2(2), 1998: better than Vapnik's book, but still written at too hard a level for an introduction, and it contains many errors too
- The neural-network book by Haykin: contains one nice chapter introducing SVM
Associative Classification
- Associative classification
  - Association rules are generated and analyzed for use in classification
  - Search for strong associations between frequent patterns (conjunctions of attribute–value pairs) and class labels
  - Classification: based on evaluating a set of rules of the form p1 ∧ p2 ∧ … ∧ pl → "class = C" (conf, sup)
- Why effective?
  - It explores highly confident associations among multiple attributes and may overcome some constraints introduced by decision-tree induction, which considers only one attribute at a time
  - In many studies, associative classification has been found to be more accurate than some traditional classification methods, such as C4.5
Typical Associative Classification Methods
- CBA (Classification By Association: Liu, Hsu & Ma, KDD'98)
  - Mine possible association rules of the form: condset (a set of attribute–value pairs) → class label
  - Build classifier: organize the rules according to decreasing precedence based on confidence and then support
- CMAR (Classification based on Multiple Association Rules: Li, Han & Pei, ICDM'01)
  - Classification: statistical analysis on multiple rules
- CPAR (Classification based on Predictive Association Rules: Yin & Han, SDM'03)
  - Generation of predictive rules (FOIL-like analysis)
  - High efficiency; accuracy similar to CMAR
- RCBT (Mining top-k covering rule groups for gene expression data: Cong et al., SIGMOD'05)
  - Explores high-dimensional classification using top-k rule groups
  - Achieves high classification accuracy and high run-time efficiency
A Closer Look at CMAR
- CMAR (Classification based on Multiple Association Rules: Li, Han & Pei, ICDM'01)
- Efficiency: uses an enhanced FP-tree that maintains the distribution of class labels among the tuples satisfying each frequent itemset
- Rule pruning whenever a rule is inserted into the tree
  - Given two rules R1 and R2, if the antecedent of R1 is more general than that of R2 and conf(R1) ≥ conf(R2), then R2 is pruned
  - Also prunes rules for which the rule antecedent and class are not positively correlated, based on a χ² test of statistical significance
- Classification based on the generated/pruned rules
  - If only one rule satisfies tuple X, assign the class label of that rule
  - If a rule set S satisfies X, CMAR divides S into groups according to class labels, uses a weighted χ² measure to find the strongest group of rules (based on the statistical correlation of the rules within a group), and assigns X the class label of the strongest group
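A toy sketch of the classification step just described. For simplicity the group score below just sums rule confidences; the real CMAR uses a weighted χ² measure for the group strength, so both the scoring and the rules here are invented stand-ins.

```python
from collections import defaultdict

# (antecedent item set, class label, confidence) -- invented example rules.
rules = [
    ({"age=youth", "credit=ok"},   "buys=yes", 0.93),
    ({"income=high"},              "buys=yes", 0.71),
    ({"age=senior", "income=low"}, "buys=no",  0.88),
]

def classify(x_items):
    matching = [r for r in rules if r[0] <= x_items]  # antecedent subset of X
    if not matching:
        return None                       # no rule satisfies X
    if len(matching) == 1:
        return matching[0][1]             # one rule: assign its class label
    groups = defaultdict(float)           # divide the rule set by class label
    for _, label, conf in matching:
        groups[label] += conf             # stand-in for the weighted chi-square
    return max(groups, key=groups.get)    # class of the strongest group

print(classify({"age=youth", "credit=ok", "income=high"}))  # buys=yes
```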
Associative Classification May Achieve High Accuracy and Efficiency (Cong et al., SIGMOD'05)
Lazy vs. Eager Learning
- Lazy learning (e.g., instance-based learning): simply stores the training data (or does only minor processing) and waits until it is given a test tuple
- Eager learning (the methods discussed above): given a training set, constructs a classification model before receiving new (e.g., test) data to classify
- Lazy: less time in training but more time in predicting
- Accuracy
  - A lazy method effectively uses a richer hypothesis space, since it uses many local linear functions to form an implicit global approximation to the target function
  - An eager method must commit to a single hypothesis that covers the entire instance space
Lazy Learner: Instance-Based Methods
- Instance-based learning: store training examples and delay the processing ("lazy evaluation") until a new instance must be classified
- Typical approaches
  - k-nearest-neighbor approach: instances represented as points in a Euclidean space
  - Locally weighted regression: constructs a local approximation
  - Case-based reasoning: uses symbolic representations and knowledge-based inference
The k-Nearest Neighbor Algorithm
- All instances correspond to points in the n-dimensional space
- The nearest neighbors are defined in terms of Euclidean distance, dist(X1, X2)
- The target function could be discrete- or real-valued
- For discrete-valued targets, k-NN returns the most common value among the k training examples nearest to the query point xq
- Voronoi diagram: the decision surface induced by 1-NN for a typical set of training examples
(figure: a query point xq among positive and negative training examples, and the Voronoi cells induced by 1-NN)
Discussion on the k-NN Algorithm
- k-NN for real-valued prediction for a given unknown tuple: returns the mean of the values of the k nearest neighbors
- Distance-weighted nearest-neighbor algorithm
  - Weight the contribution of each of the k neighbors according to its distance to the query xq, giving greater weight to closer neighbors, e.g., w ≡ 1 / d(xq, xi)²
- Robust to noisy data by averaging the k nearest neighbors
- Curse of dimensionality: the distance between neighbors can be dominated by irrelevant attributes
  - To overcome it, stretch the axes or eliminate the least relevant attributes
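A minimal NumPy sketch of distance-weighted k-NN prediction with the weight w = 1/d(xq, xi)² given above; the training points and target values are invented.

```python
import numpy as np

def knn_predict(X_train, y_train, xq, k=3):
    """Distance-weighted k-NN prediction for a single query point xq."""
    d = np.linalg.norm(X_train - xq, axis=1)   # Euclidean distances
    idx = np.argsort(d)[:k]                    # the k nearest neighbors
    if d[idx[0]] == 0:                         # exact match: return its value
        return y_train[idx[0]]
    w = 1.0 / d[idx] ** 2                      # closer neighbors weigh more
    return np.dot(w, y_train[idx]) / w.sum()   # weighted mean of the k values

X = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [8.0, 8.0]])
y = np.array([10.0, 20.0, 30.0, 80.0])
print(knn_predict(X, y, np.array([2.1, 2.1])))  # dominated by the 20.0 neighbor
```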
Case-Based Reasoning (CBR)
- CBR: uses a database of problem solutions to solve new problems
- Stores symbolic descriptions (tuples or cases)—not points in a Euclidean space
- Applications: customer service (product-related diagnosis), legal rulings
- Methodology
  - Instances represented by rich symbolic descriptions (e.g., function graphs)
  - Search for similar cases; multiple retrieved cases may be combined
  - Tight coupling between case retrieval, knowledge-based reasoning, and problem solving
- Challenges
  - Finding a good similarity metric
  - Indexing based on a syntactic similarity measure and, on failure, backtracking and adapting to additional cases
Genetic Algorithms (GA)
- Genetic algorithm: based on an analogy to biological evolution
- An initial population is created, consisting of randomly generated rules
  - Each rule is represented by a string of bits; e.g., "IF A1 AND NOT A2 THEN C2" can be encoded as 100
  - If an attribute has k > 2 values, k bits can be used
- Based on the notion of survival of the fittest, a new population is formed to consist of the fittest rules and their offspring
  - The fitness of a rule is represented by its classification accuracy on a set of training examples
  - Offspring are generated by crossover and mutation
- The process continues until a population P evolves in which each rule in P satisfies a prespecified fitness threshold
- Slow, but easily parallelizable (see the sketch below)
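A compact sketch of that loop under the slide's encoding: each rule is a 3-bit string (two antecedent bits for A1 and A2 plus a class bit). The data set, the fitness definition (accuracy on the tuples a rule covers), and the GA parameters are all invented for illustration.

```python
import random
random.seed(0)

# Training tuples (A1, A2, class); the hidden concept is class = A1 AND NOT A2.
data = [(a1, a2, int(a1 == 1 and a2 == 0))
        for a1 in (0, 1) for a2 in (0, 1)] * 5

def fitness(rule):
    """Classification accuracy of a rule on the tuples it covers."""
    b1, b2, bc = rule
    covered = [t for t in data if t[0] == b1 and t[1] == b2]
    if not covered:
        return 0.0
    return sum(t[2] == bc for t in covered) / len(covered)

def crossover(p, q):
    cut = random.randint(1, 2)                 # single-point crossover
    return p[:cut] + q[cut:]

def mutate(rule, rate=0.1):
    return tuple(b ^ 1 if random.random() < rate else b for b in rule)

pop = [tuple(random.randint(0, 1) for _ in range(3)) for _ in range(8)]
for _ in range(20):                            # bounded number of generations
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == 1.0:                 # prespecified fitness threshold
        break
    parents = pop[:4]                          # survival of the fittest
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(4)]
print("best rule:", max(pop, key=fitness))
```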
Rough Set Approach
- Rough sets are used to approximately or "roughly" define equivalence classes
- A rough set for a given class C is approximated by two sets: a lower approximation (certain to be in C) and an upper approximation (cannot be described as not belonging to C)
- Finding the minimal subsets (reducts) of attributes for feature reduction is NP-hard, but a discernibility matrix (which stores the differences between attribute values for each pair of data tuples) is used to reduce the computation intensity
Fuzzy Set Approaches
- Fuzzy logic uses truth values between 0.0 and 1.0 to represent the degree of membership (such as in a fuzzy membership graph)
- Attribute values are converted to fuzzy values
  - E.g., income is mapped into the discrete categories {low, medium, high}, with fuzzy values calculated
- For a given new sample, more than one fuzzy value may apply
- Each applicable rule contributes a vote for membership in the categories
- Typically, the truth values for each predicted category are summed, and these sums are combined
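A tiny sketch of the fuzzification step: a numeric income is mapped to degrees of membership in {low, medium, high} via triangular membership functions. The breakpoints below are invented for illustration, not taken from the slides.

```python
def triangular(x, a, b, c):
    """Membership rising from a to a peak at b, falling to c (0 outside)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_income(income):
    return {
        "low":    triangular(income, -1, 0, 40_000),
        "medium": triangular(income, 20_000, 50_000, 80_000),
        "high":   triangular(income, 60_000, 100_000, 200_000),
    }

# A $70K income belongs partly to "medium" and partly to "high":
# more than one fuzzy value applies, as noted above.
print(fuzzify_income(70_000))
```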
What Is Prediction?
- (Numerical) prediction is similar to classification
  - Construct a model
  - Use the model to predict a continuous or ordered value for a given input
- Prediction is different from classification
  - Classification refers to predicting a categorical class label
  - Prediction models continuous-valued functions
- Major method for prediction: regression
  - Model the relationship between one or more independent (predictor) variables and a dependent (response) variable
- Regression analysis
  - Linear and multiple regression
  - Nonlinear regression
  - Other regression methods: generalized linear model, Poisson regression, log-linear models, regression trees
Linear Regression
- Linear regression: involves a response variable y and a single predictor variable x
  y = w0 + w1 x
  where w0 (y-intercept) and w1 (slope) are the regression coefficients
- Method of least squares: estimates the best-fitting straight line
  w1 = Σ_{i=1..|D|} (xi – x̄)(yi – ȳ) / Σ_{i=1..|D|} (xi – x̄)²,  w0 = ȳ – w1 x̄
- Multiple linear regression: involves more than one predictor variable
  - Training data are of the form (X1, y1), (X2, y2), …, (X|D|, y|D|)
  - E.g., for 2-D data we may have y = w0 + w1 x1 + w2 x2
  - Solvable by an extension of the least-squares method or by using SAS, S-Plus
  - Many nonlinear functions can be transformed into the above
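A direct NumPy transcription of the least-squares formulas above, on an invented toy data set:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

x_bar, y_bar = x.mean(), y.mean()
w1 = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)  # slope
w0 = y_bar - w1 * x_bar                                            # y-intercept
print(f"y = {w0:.3f} + {w1:.3f} x")   # close to y = 0 + 2x for these data
```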
Nonlinear Regression
- Some nonlinear models can be modeled by a polynomial function
- A polynomial regression model can be transformed into a linear regression model; for example,
  y = w0 + w1 x + w2 x² + w3 x³
  is convertible to linear form with the new variables x2 = x², x3 = x³:
  y = w0 + w1 x + w2 x2 + w3 x3
- Other functions, such as the power function, can also be transformed to a linear model
- Some models are intractably nonlinear (e.g., a sum of exponential terms)
  - It may still be possible to obtain least-squares estimates through extensive calculation on more complex formulae
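A short sketch of that transformation at work: the cubic is fit by ordinary linear least squares on the new variables x2 = x² and x3 = x³ (NumPy; the sample data are generated here for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 21)
y = 1 - 2 * x + 0.5 * x**2 + 3 * x**3 + rng.normal(0, 0.1, x.size)

# Design matrix [1, x, x2, x3]: the model is linear in the new variables.
A = np.column_stack([np.ones_like(x), x, x**2, x**3])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print("w0..w3 =", np.round(w, 2))      # close to [1, -2, 0.5, 3]
```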
Other Regression-Based Models
- Generalized linear model
  - The foundation on which linear regression can be applied to the modeling of categorical response variables
  - The variance of y is a function of the mean value of y, not a constant
  - Logistic regression: models the probability of some event occurring as a linear function of a set of predictor variables
  - Poisson regression: models data that exhibit a Poisson distribution
- Log-linear models (for categorical data)
  - Approximate discrete multidimensional probability distributions
  - Also useful for data compression and smoothing
- Regression trees and model trees
  - Trees to predict continuous values rather than class labels
Regression Trees and Model Trees
- Regression tree: proposed in the CART system (Breiman et al., 1984)
  - CART: Classification And Regression Trees
  - Each leaf stores a continuous-valued prediction
  - It is the average value of the predicted attribute for the training tuples that reach the leaf
- Model tree: proposed by Quinlan (1992)
  - Each leaf holds a regression model—a multivariate linear equation for the predicted attribute
  - A more general case than the regression tree
- Regression and model trees tend to be more accurate than linear regression when the data are not represented well by a simple linear model
Predictive Modeling in Multidimensional Databases
- Predictive modeling: predict data values or construct generalized linear models based on the database data
- One can only predict value ranges or category distributions
- Method outline
  - Minimal generalization
  - Attribute relevance analysis
  - Generalized linear model construction
  - Prediction
- Determine the major factors which influence the prediction
  - Data relevance analysis: uncertainty measurement, entropy analysis, expert judgement, etc.
- Multilevel prediction: drill-down and roll-up analysis
Prediction: Numerical Data
Prediction: Categorical Data
Classifier Accuracy Measures
- CM(i, j), an entry in a confusion matrix, indicates the number of tuples in class i that were labeled by the classifier as class j

                    predicted C1       predicted C2
  actual C1         true positives     false negatives
  actual C2         false positives    true negatives

  classes               buy_computer = yes   buy_computer = no   total    recognition (%)
  buy_computer = yes    6954                 46                  7000     99.34
  buy_computer = no     412                  2588                3000     86.27
  total                 7366                 2634                10000    95.42

- Accuracy of a classifier M, acc(M): the percentage of test-set tuples that are correctly classified by the model M
- Error rate (misclassification rate) of M = 1 – acc(M)
- Alternative accuracy measures (e.g., for cancer diagnosis):
  sensitivity = t_pos / pos        /* true positive recognition rate */
  specificity = t_neg / neg        /* true negative recognition rate */
  precision = t_pos / (t_pos + f_pos)
  accuracy = sensitivity · pos / (pos + neg) + specificity · neg / (pos + neg)
- This model can also be used for cost–benefit analysis
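The alternative measures computed from the confusion-matrix counts above:

```python
# Counts from the table (positive class: buy_computer = yes).
t_pos, f_neg = 6954, 46    # actual "yes" tuples: correctly / wrongly labeled
f_pos, t_neg = 412, 2588   # actual "no" tuples: wrongly / correctly labeled
pos, neg = t_pos + f_neg, f_pos + t_neg

sensitivity = t_pos / pos                    # true positive recognition rate
specificity = t_neg / neg                    # true negative recognition rate
precision   = t_pos / (t_pos + f_pos)
accuracy    = (sensitivity * pos + specificity * neg) / (pos + neg)

print(f"sensitivity={sensitivity:.4f} specificity={specificity:.4f} "
      f"precision={precision:.4f} accuracy={accuracy:.4f}")
# sensitivity=0.9934 specificity=0.8627 precision=0.9441 accuracy=0.9542
```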
Predictor Error Measures
- Measure predictor accuracy: how far off the predicted value is from the actual known value
- Loss function: measures the error between the actual value yi and the predicted value yi'
  - Absolute error: |yi – yi'|
  - Squared error: (yi – yi')²
- Test error (generalization error): the average loss over the test set
  - Mean absolute error: Σ_{i=1..d} |yi – yi'| / d
  - Mean squared error: Σ_{i=1..d} (yi – yi')² / d
  - Relative absolute error: Σ_{i=1..d} |yi – yi'| / Σ_{i=1..d} |yi – ȳ|
  - Relative squared error: Σ_{i=1..d} (yi – yi')² / Σ_{i=1..d} (yi – ȳ)²
- The mean squared error exaggerates the presence of outliers; the (square) root mean squared error and, similarly, the root relative squared error are popularly used
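The error measures above as NumPy one-liners, on invented actual/predicted vectors:

```python
import numpy as np

y      = np.array([3.0, 5.0, 2.5, 7.0])   # actual values
y_pred = np.array([2.5, 5.0, 4.0, 8.0])   # predicted values

mae  = np.mean(np.abs(y - y_pred))                        # mean absolute error
mse  = np.mean((y - y_pred) ** 2)                         # mean squared error
rae  = np.sum(np.abs(y - y_pred)) / np.sum(np.abs(y - y.mean()))
rse  = np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)
rmse = np.sqrt(mse)                   # root mean squared error, as noted above

print(mae, mse, rae, rse, rmse)
```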
Evaluating the Accuracy of a Classifier or Predictor (I)
- Holdout method
  - The given data are randomly partitioned into two independent sets
    - Training set (e.g., 2/3) for model construction
    - Test set (e.g., 1/3) for accuracy estimation
  - Random sampling: a variation of holdout; repeat holdout k times, accuracy = avg. of the accuracies obtained
- Cross-validation (k-fold, where k = 10 is most popular)
  - Randomly partition the data into k mutually exclusive subsets, each of approximately equal size
  - At the i-th iteration, use Di as the test set and the others as the training set
  - Leave-one-out: k folds where k = number of tuples, for small-sized data
  - Stratified cross-validation: folds are stratified so that the class distribution in each fold is approximately the same as that in the initial data
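A bare-bones k-fold cross-validation loop. The decision-tree learner and the Iris data (via scikit-learn) are illustrative choices of this sketch, not prescribed by the slides:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

def k_fold_accuracy(make_model, X, y, k=10, seed=0):
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)           # k mutually exclusive subsets
    accs = []
    for i in range(k):                       # the i-th fold is the test set
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = make_model().fit(X[train], y[train])
        accs.append(np.mean(model.predict(X[test]) == y[test]))
    return np.mean(accs)                     # average accuracy over the k folds

print(k_fold_accuracy(DecisionTreeClassifier, X, y))
```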
Evaluating the Accuracy of a Classifier or Predictor (II)
- Bootstrap
  - Works well with small data sets
  - Samples the given training tuples uniformly with replacement: each time a tuple is selected, it is equally likely to be selected again and re-added to the training set
- Several bootstrap methods exist; a common one is the .632 bootstrap
  - Suppose we are given a data set of d tuples. The data set is sampled d times, with replacement, resulting in a training set of d samples. The data tuples that did not make it into the training set end up forming the test set. About 63.2% of the original data will end up in the bootstrap sample, and the remaining 36.8% will form the test set (since (1 – 1/d)^d ≈ e⁻¹ = 0.368)
  - Repeat the sampling procedure k times; the overall accuracy of the model is
    acc(M) = (1/k) Σ_{i=1..k} (0.632 · acc(Mi)_test_set + 0.368 · acc(Mi)_train_set)
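One round of the .632 bootstrap, sketched with the same illustrative learner and data (scikit-learn assumed); a full estimate would average k such rounds:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

d = len(y)
train = rng.integers(0, d, size=d)           # d samples drawn with replacement
test = np.setdiff1d(np.arange(d), train)     # tuples that were never selected
print(f"{len(test) / d:.1%} of tuples form the test set")  # roughly 36.8%

model = DecisionTreeClassifier().fit(X[train], y[train])
acc_test = np.mean(model.predict(X[test]) == y[test])
acc_train = np.mean(model.predict(X[train]) == y[train])
print(0.632 * acc_test + 0.368 * acc_train)  # this round's contribution
```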
Ensemble Methods: Increasing the Accuracy
- Ensemble methods
  - Use a combination of models to increase accuracy
  - Combine a series of k learned models, M1, M2, …, Mk, with the aim of creating an improved model M*
- Popular ensemble methods
  - Bagging: averaging the prediction over a collection of classifiers
  - Boosting: weighted vote with a collection of classifiers
  - Ensemble: combining a set of heterogeneous classifiers
Bagging: Bootstrap Aggregation
- Analogy: diagnosis based on multiple doctors' majority vote
- Training
  - Given a set D of d tuples, at each iteration i a training set Di of d tuples is sampled with replacement from D (i.e., a bootstrap sample)
  - A classifier model Mi is learned for each training set Di
- Classification: to classify an unknown sample X
  - Each classifier Mi returns its class prediction
  - The bagged classifier M* counts the votes and assigns to X the class with the most votes
- Prediction: can be applied to the prediction of continuous values by taking the average of the predictions for a given test tuple
- Accuracy
  - Often significantly better than a single classifier derived from D
  - For noisy data: not considerably worse, and more robust
  - Proven improved accuracy in prediction
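A minimal bagging sketch along those lines: k bootstrap samples, k trees, majority vote (again using an illustrative scikit-learn learner):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
k, d = 25, len(y)

models = []
for _ in range(k):
    sample = rng.integers(0, d, size=d)      # Di: sampled with replacement from D
    models.append(DecisionTreeClassifier().fit(X[sample], y[sample]))

def bagged_predict(x):
    votes = [m.predict(x.reshape(1, -1))[0] for m in models]
    return np.bincount(votes).argmax()       # the class with the most votes

print(bagged_predict(X[0]), "vs. true label", y[0])
```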
Boosting
- Analogy: consult several doctors, based on a combination of weighted diagnoses—the weight is assigned based on each doctor's previous diagnosis accuracy
- How boosting works
  - Weights are assigned to each training tuple
  - A series of k classifiers is iteratively learned
  - After a classifier Mi is learned, the weights are updated to allow the subsequent classifier, Mi+1, to pay more attention to the training tuples that were misclassified by Mi
  - The final M* combines the votes of each individual classifier, where the weight of each classifier's vote is a function of its accuracy
- The boosting algorithm can be extended for the prediction of continuous values
- Compared with bagging: boosting tends to achieve greater accuracy, but it also risks overfitting the model to misclassified data
Adaboost (Freund and Schapire, 1997)
- Given a set of d class-labeled tuples (X1, y1), …, (Xd, yd)
- Initially, all tuple weights are set the same (1/d)
- Generate k classifiers in k rounds; at round i:
  - Tuples from D are sampled (with replacement) to form a training set Di of the same size
  - Each tuple's chance of being selected is based on its weight
  - A classification model Mi is derived from Di
  - Its error rate is calculated using Di as a test set
  - If a tuple is misclassified, its weight is increased; otherwise it is decreased
- Error rate: err(Xj) is the misclassification error of tuple Xj (1 if misclassified, 0 otherwise); classifier Mi's error rate is the sum of the weights of the misclassified tuples:
  error(Mi) = Σ_j wj · err(Xj)
- The weight of classifier Mi's vote is
  log((1 – error(Mi)) / error(Mi))
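A sketch of one round of AdaBoost's bookkeeping, following the slide's formulas. The renormalization and the factor error/(1 − error) applied to correctly classified tuples are the standard AdaBoost choices, stated here as assumptions rather than read off the slide:

```python
import numpy as np

def adaboost_round(weights, misclassified):
    """weights: current tuple weights; misclassified: boolean array."""
    error = np.sum(weights[misclassified])        # sum of offending weights
    vote = np.log((1 - error) / error)            # weight of classifier Mi's vote
    new_w = weights.copy()
    new_w[~misclassified] *= error / (1 - error)  # shrink correct tuples' weights
    return new_w / new_w.sum(), vote              # renormalize to sum to 1

d = 8
w = np.full(d, 1 / d)                             # initially all weights are 1/d
miss = np.array([True, False, False, True, False, False, False, False])
w, vote = adaboost_round(w, miss)
print(np.round(w, 3), "vote =", round(vote, 3))   # error 0.25 -> vote = ln 3
```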
Model Selection: ROC Curves
- ROC (Receiver Operating Characteristic) curves: for visual comparison of classification models
- Originated from signal detection theory
- Shows the trade-off between the true positive rate and the false positive rate
- The area under the ROC curve is a measure of the accuracy of the model
- Rank the test tuples in decreasing order: the tuple most likely to belong to the positive class appears at the top of the list
- The vertical axis represents the true positive rate; the horizontal axis represents the false positive rate; the plot also shows a diagonal line
- A model with perfect accuracy has an area of 1.0; the closer the curve is to the diagonal line (i.e., the closer the area is to 0.5), the less accurate the model
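A small sketch of building an ROC curve by hand from ranked scores, with the area computed by the trapezoid rule (invented scores and labels):

```python
import numpy as np

scores = np.array([0.95, 0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4])  # model outputs
labels = np.array([1,    1,   0,   1,   1,   0,    0,   0])    # true classes

order = np.argsort(-scores)   # the tuple most likely positive comes first
tpr = np.cumsum(labels[order] == 1) / np.sum(labels == 1)  # true positive rate
fpr = np.cumsum(labels[order] == 0) / np.sum(labels == 0)  # false positive rate
tpr = np.concatenate([[0.0], tpr])   # start the curve at the origin
fpr = np.concatenate([[0.0], fpr])

auc = np.trapz(tpr, fpr)      # area under the ROC curve
print("AUC =", auc)           # 1.0 is perfect; 0.5 is the diagonal line
```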
Summary (I)
- Classification and prediction are two forms of data analysis that can be used to extract models describing important data classes or to predict future data trends.
- Effective and scalable methods have been developed for decision tree induction, naive Bayesian classification, Bayesian belief networks, rule-based classifiers, backpropagation, Support Vector Machines (SVM), associative classification, nearest-neighbor classifiers, and case-based reasoning, as well as other classification methods such as genetic algorithms, rough set, and fuzzy set approaches.
- Linear, nonlinear, and generalized linear models of regression can be used for prediction. Many nonlinear problems can be converted to linear problems by performing transformations on the predictor variables. Regression trees and model trees are also used for prediction.
Summary (II)
- Stratified k-fold cross-validation is a recommended method for accuracy estimation. Bagging and boosting can be used to increase overall accuracy by learning and combining a series of individual models.
- Significance tests and ROC curves are useful for model selection.
- There have been numerous comparisons of the different classification and prediction methods, and the matter remains a research topic.
- No single method has been found to be superior over all others for all data sets.
- Issues such as accuracy, training time, robustness, interpretability, and scalability must be considered and can involve trade-offs, further complicating the quest for an overall superior method.
References (1)
- C. Apte and S. Weiss. Data mining with decision trees and decision rules. Future Generation Computer Systems, 13, 1997.
- C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.
- L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth International Group, 1984.
- C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2): 121–168, 1998.
- P. K. Chan and S. J. Stolfo. Learning arbiter and combiner trees from partitioned data for scaling machine learning. KDD'95.
- W. Cohen. Fast effective rule induction. ICML'95.
- G. Cong, K.-L. Tan, A. K. H. Tung, and X. Xu. Mining top-k covering rule groups for gene expression data. SIGMOD'05.
- A. J. Dobson. An Introduction to Generalized Linear Models. Chapman and Hall, 1990.
- G. Dong and J. Li. Efficient mining of emerging patterns: Discovering trends and differences. KDD'99.
References (2)
- R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification, 2nd ed. John Wiley and Sons, 2001.
- U. M. Fayyad. Branching on attribute values in decision tree generation. AAAI'94.
- Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Computer and System Sciences, 1997.
- J. Gehrke, R. Ramakrishnan, and V. Ganti. RainForest: A framework for fast decision tree construction of large datasets. VLDB'98.
- J. Gehrke, V. Ganti, R. Ramakrishnan, and W.-Y. Loh. BOAT—Optimistic decision tree construction. SIGMOD'99.
- T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag, 2001.
- D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 1995.
- M. Kamber, L. Winstone, W. Gong, S. Cheng, and J. Han. Generalization and decision tree induction: Efficient classification in data mining. RIDE'97.
- W. Li, J. Han, and J. Pei. CMAR: Accurate and efficient classification based on multiple class-association rules. ICDM'01.
- B. Liu, W. Hsu, and Y. Ma. Integrating classification and association rule mining. KDD'98.
References (3)
- T.-S. Lim, W.-Y. Loh, and Y.-S. Shih. A comparison of prediction accuracy, complexity, and training time of thirty-three old and new classification algorithms. Machine Learning, 2000.
- J. Magidson. The CHAID approach to segmentation modeling: Chi-squared automatic interaction detection. In R. P. Bagozzi, editor, Advanced Methods of Marketing Research, Blackwell Business, 1994.
- M. Mehta, R. Agrawal, and J. Rissanen. SLIQ: A fast scalable classifier for data mining. EDBT'96.
- T. M. Mitchell. Machine Learning. McGraw-Hill, 1997.
- S. K. Murthy. Automatic construction of decision trees from data: A multi-disciplinary survey. Data Mining and Knowledge Discovery, 2(4): 345–389, 1998.
- J. R. Quinlan. Induction of decision trees. Machine Learning, 1: 81–106, 1986.
- J. R. Quinlan and R. M. Cameron-Jones. FOIL: A midterm report. ECML'93.
- J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
- J. R. Quinlan. Bagging, boosting, and C4.5. AAAI'96.
References (4)
- R. Rastogi and K. Shim. PUBLIC: A decision tree classifier that integrates building and pruning. VLDB'98.
- J. Shafer, R. Agrawal, and M. Mehta. SPRINT: A scalable parallel classifier for data mining. VLDB'96.
- J. W. Shavlik and T. G. Dietterich. Readings in Machine Learning. Morgan Kaufmann, 1990.
- P. Tan, M. Steinbach, and V. Kumar. Introduction to Data Mining. Addison Wesley, 2005.
- S. M. Weiss and C. A. Kulikowski. Computer Systems that Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufmann, 1991.
- S. M. Weiss and N. Indurkhya. Predictive Data Mining. Morgan Kaufmann, 1997.
- I. H. Witten and E. Frank. Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed. Morgan Kaufmann, 2005.
- X. Yin and J. Han. CPAR: Classification based on predictive association rules. SDM'03.
- H. Yu, J. Yang, and J. Han. Classifying large data sets using SVM with hierarchical clusters. KDD'03.