Data Mining Classification: Alternative Techniques
Lecture Notes for Chapter 5, Introduction to Data Mining
Tan, Steinbach, Kumar
4/18/2004
Rule-Based Classifier

Classify records by using a collection of "if ... then ..." rules.

Rule: (Condition) → y

- where Condition is a conjunction of attribute tests and y is the class label
- LHS: rule antecedent or condition
- RHS: rule consequent
Rule-Based Classifier (Example)

Name           Blood Type  Give Birth  Can Fly  Live in Water  Class
human          warm        yes         no       no             mammals
python         cold        no          no       no             reptiles
salmon         cold        no          no       yes            fishes
whale          warm        yes         no       yes            mammals
frog           cold        no          no       sometimes      amphibians
komodo         cold        no          no       no             reptiles
bat            warm        yes         yes      no             mammals
pigeon         warm        no          yes      no             birds
cat            warm        yes         no       no             mammals
leopard shark  cold        yes         no       yes            fishes
turtle         cold        no          no       sometimes      reptiles
penguin        warm        no          no       sometimes      birds
porcupine      warm        yes         no       no             mammals
eel            cold        no          no       yes            fishes
salamander     cold        no          no       sometimes      amphibians
gila monster   cold        no          no       no             reptiles
platypus       warm        no          no       no             mammals
owl            warm        no          yes      no             birds
dolphin        warm        yes         no       yes            mammals
eagle          warm        no          yes      no             birds

Rules:
R1: (Give Birth = no) ∧ (Can Fly = yes) → Birds
R2: (Give Birth = no) ∧ (Live in Water = yes) → Fishes
R3: (Give Birth = yes) ∧ (Blood Type = warm) → Mammals
R4: (Give Birth = no) ∧ (Can Fly = no) → Reptiles
R5: (Live in Water = sometimes) → Amphibians
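As a minimal sketch (not part of the original slides), this rule set can be applied programmatically; the dictionary encoding of records and rules below is an illustrative assumption:

```python
# Illustrative sketch: each rule is (condition, class), where the
# condition is a conjunction of attribute tests (attribute -> value).
RULES = [
    ({"Give Birth": "no",  "Can Fly": "yes"},       "birds"),       # R1
    ({"Give Birth": "no",  "Live in Water": "yes"}, "fishes"),      # R2
    ({"Give Birth": "yes", "Blood Type": "warm"},   "mammals"),     # R3
    ({"Give Birth": "no",  "Can Fly": "no"},        "reptiles"),    # R4
    ({"Live in Water": "sometimes"},                "amphibians"),  # R5
]

def covers(condition, record):
    """A rule covers a record if the record satisfies every attribute test."""
    return all(record.get(attr) == value for attr, value in condition.items())

def classify(record, rules, default=None):
    """Assign the class of the first (highest-ranked) rule that fires."""
    for condition, label in rules:
        if covers(condition, record):
            return label
    return default

hawk = {"Blood Type": "warm", "Give Birth": "no",
        "Can Fly": "yes", "Live in Water": "no"}
print(classify(hawk, RULES))  # -> birds (R1 fires first)
```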
Application of Rule-Based Classifier

A rule r covers an instance x if the attributes of the instance satisfy the condition of the rule.

Name          Blood Type  Give Birth  Can Fly  Live in Water  Class
hawk          warm        no          yes      no             ?
grizzly bear  warm        yes         no       no             ?

Rule R1 covers the hawk → Bird. Rule R3 covers the grizzly bear → Mammal.
Rule Coverage and Accuracy

- Coverage of a rule: fraction of records that satisfy the antecedent of the rule
- Accuracy of a rule: fraction of records satisfying the antecedent that also satisfy the consequent of the rule

Tid  Refund  Marital Status  Taxable Income  Class
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Example: (Status = Single) → No
Coverage = 40%, Accuracy = 50%
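A short sketch of how both measures can be computed on this table; the Python encoding of the records is an illustrative assumption:

```python
# Coverage and accuracy of a rule over the 10-record tax data set above.
records = [
    {"Refund": "Yes", "Status": "Single",   "Income": 125, "Class": "No"},
    {"Refund": "No",  "Status": "Married",  "Income": 100, "Class": "No"},
    {"Refund": "No",  "Status": "Single",   "Income": 70,  "Class": "No"},
    {"Refund": "Yes", "Status": "Married",  "Income": 120, "Class": "No"},
    {"Refund": "No",  "Status": "Divorced", "Income": 95,  "Class": "Yes"},
    {"Refund": "No",  "Status": "Married",  "Income": 60,  "Class": "No"},
    {"Refund": "Yes", "Status": "Divorced", "Income": 220, "Class": "No"},
    {"Refund": "No",  "Status": "Single",   "Income": 85,  "Class": "Yes"},
    {"Refund": "No",  "Status": "Married",  "Income": 75,  "Class": "No"},
    {"Refund": "No",  "Status": "Single",   "Income": 90,  "Class": "Yes"},
]

def coverage_and_accuracy(condition, consequent, data):
    covered = [r for r in data if all(r[a] == v for a, v in condition.items())]
    correct = [r for r in covered if r["Class"] == consequent]
    coverage = len(covered) / len(data)
    accuracy = len(correct) / len(covered) if covered else 0.0
    return coverage, accuracy

# (Status=Single) -> No : coverage = 0.4, accuracy = 0.5
print(coverage_and_accuracy({"Status": "Single"}, "No", records))
```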
How Does a Rule-Based Classifier Work?

Name           Blood Type  Give Birth  Can Fly  Live in Water  Class
lemur          warm        yes         no       no             ?
turtle         cold        no          no       sometimes      ?
dogfish shark  cold        yes         no       yes            ?

- A lemur triggers rule R3, so it is classified as a mammal
- A turtle triggers both R4 and R5
- A dogfish shark triggers none of the rules
Exhaustive rules
- Classifier has exhaustive coverage if it accounts for every possible combination of attribute values
- Each record is covered by at least one rule
From Decision Trees To Rules

Decision tree:
  Refund?
    Yes → NO
    No → Marital Status?
      {Single, Divorced} → Taxable Income?
        < 80K → NO
        > 80K → YES
      {Married} → NO

Classification rules (one per leaf):
(Refund = Yes) → No
(Refund = No, Marital Status ∈ {Single, Divorced}, Taxable Income < 80K) → No
(Refund = No, Marital Status ∈ {Single, Divorced}, Taxable Income > 80K) → Yes
(Refund = No, Marital Status ∈ {Married}) → No
Rules Can Be Simplified

Using the same tree (Refund? → Marital Status? → Taxable Income?) and training data:

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Initial rule: (Refund = No) ∧ (Status = Married) → No
Simplified rule: (Status = Married) → No
Ordered Rule Set

- Rules are rank ordered according to their priority (an ordered rule set is known as a decision list)
- When a test record is presented, it is assigned to the class of the highest-ranked rule it triggers; if no rule fires, it is assigned to the default class

Name    Blood Type  Give Birth  Can Fly  Live in Water  Class
turtle  cold        no          no       sometimes      ?
Rule Ordering Schemes

- Rule-based ordering: individual rules are ranked based on their quality
- Class-based ordering: rules that belong to the same class appear together

[Figure: example rule sets under rule-based ordering and class-based ordering; both begin with (Refund=Yes) ==> No.]
Building Classification Rules

- Direct method: extract rules directly from data (e.g. RIPPER, CN2, Holte's 1R)
- Indirect method: extract rules from other classification models such as decision trees and neural networks (e.g. C4.5rules)
[Figure: example of sequential covering: (i) original data; (ii) Step 1; (iii) Step 2, rule R1 extracted; (iv) Step 3, rules R1 and R2 extracted.]
Aspects of Sequential Covering
- Rule growing
- Instance elimination
- Rule evaluation
- Stopping criterion
- Rule pruning
Rule Growing

Two common strategies:

(a) General-to-specific: start from an empty rule {} → (Class = Yes) and greedily add the conjunct that gives the best rule quality. Candidate conjuncts and their class counts from the tax example:
  Refund = No        (Yes: 3, No: 4)
  Status = Single    (Yes: 2, No: 1)
  Status = Divorced  (Yes: 1, No: 0)
  Status = Married   (Yes: 0, No: 3)
  Income > 80K       (Yes: 3, No: 1)

(b) Specific-to-general: start from a maximally specific rule that covers a single record, e.g. (Refund=No, Status=Single, Income=85K) → (Class=Yes) or (Refund=No, Status=Single, Income=90K) → (Class=Yes), and generalize by removing conjuncts, e.g. to (Refund=No, Status=Single) → (Class=Yes).
Rule Growing (Examples)

CN2 algorithm:
- Start from an empty conjunct: {}
- Add the conjunct that minimizes the entropy measure: {A}, {A,B}, ...
- Determine the rule consequent by taking the majority class of the instances covered by the rule

RIPPER algorithm:
- Start from an empty rule: {} => class
- Add the conjunct that maximizes FOIL's information gain measure:
    R0: {} => class (initial rule)
    R1: {A} => class (rule after adding a conjunct)
    Gain(R0, R1) = t × [ log2(p1 / (p1 + n1)) − log2(p0 / (p0 + n0)) ]
  where
    t:  number of positive instances covered by both R0 and R1
    p0: number of positive instances covered by R0
    n0: number of negative instances covered by R0
    p1: number of positive instances covered by R1
    n1: number of negative instances covered by R1
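A direct transcription of this measure into code (a small sketch; it assumes R1 is obtained from R0 by adding a conjunct, so every instance covered by R1 is covered by R0 and t = p1):

```python
from math import log2

def foil_gain(p0, n0, p1, n1):
    """FOIL's information gain for extending rule R0 to R1.

    p0/n0: positive/negative instances covered by R0
    p1/n1: positive/negative instances covered by R1
    """
    t = p1  # positives covered by both rules when R1 specializes R0
    return t * (log2(p1 / (p1 + n1)) - log2(p0 / (p0 + n0)))

# Adding a conjunct that keeps 3 of 3 positives and drops 4 negatives to 1:
print(foil_gain(p0=3, n0=4, p1=3, n1=1))  # > 0, so the conjunct helps
```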
Instance Elimination

- Why do we need to eliminate instances? Otherwise the next rule extracted is identical to the previous rule
- Why do we remove positive instances? To ensure that the next rule is different
- Why do we remove negative instances? To prevent underestimating the accuracy of the next rule (compare rules R2 and R3 in the diagram)

[Figure: instances of class = + and class = −, with the regions covered by rules R1, R2, and R3.]
Rule Evaluation

Metrics:

  Accuracy   = nc / n
  Laplace    = (nc + 1) / (n + k)
  M-estimate = (nc + k·p) / (n + k)

where
  n:  number of instances covered by the rule
  nc: number of covered instances that belong to the class predicted by the rule
  k:  number of classes
  p:  prior probability of the class
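As a quick sketch, the three metrics in code (the example numbers are illustrative, not from the slides):

```python
def rule_accuracy(nc, n):
    return nc / n

def laplace(nc, n, k):
    return (nc + 1) / (n + k)

def m_estimate(nc, n, k, p):
    return (nc + k * p) / (n + k)

# A rule covering n = 5 instances, nc = 4 of its class, with k = 2
# classes and prior p = 0.5 for the class:
print(rule_accuracy(4, 5))       # 0.80
print(laplace(4, 5, 2))          # 5/7 ~= 0.714
print(m_estimate(4, 5, 2, 0.5))  # 5/7 ~= 0.714 (equals Laplace when p = 1/k)
```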
Stopping Criterion and Rule Pruning

Stopping criterion:
- Compute the gain
- If the gain is not significant, discard the new rule

Rule pruning (similar to post-pruning of decision trees):
- Reduced error pruning: remove one of the conjuncts in the rule and compare the error rate on a validation set before and after pruning; if the error improves, prune the conjunct
Summary of Direct Method
- Grow a single rule
- Remove instances covered by the rule
- Prune the rule (if necessary)
- Add the rule to the current rule set
- Repeat
Direct Method: RIPPER

Growing a rule:
- Start from an empty rule
- Add conjuncts as long as they improve FOIL's information gain
- Stop when the rule no longer covers negative examples
- Prune the rule immediately using incremental reduced error pruning
- Measure for pruning: v = (p − n) / (p + n), where p and n are the numbers of positive and negative examples covered by the rule in the validation set (see the sketch below)
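A minimal sketch of this pruning metric, assuming a rule object that exposes a covers() predicate and a consequent attribute (both hypothetical names, not RIPPER's actual API):

```python
def pruning_metric(rule, validation_set):
    """RIPPER-style pruning metric v = (p - n) / (p + n) on a validation set.
    `rule.covers(record)` and `rule.consequent` are assumed interfaces."""
    covered = [r for r in validation_set if rule.covers(r)]
    p = sum(1 for r in covered if r["Class"] == rule.consequent)
    n = len(covered) - p
    return (p - n) / (p + n) if covered else float("-inf")
```

Conjuncts at the end of the rule are then dropped as long as v keeps improving.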
Optimizing the rule set:
- Compare the rule set containing r against the rule sets containing the alternatives r* (replacement rule) and r′ (revised rule)
- Choose the rule set that minimizes the description length (MDL principle)
Indirect Methods

[Figure: a decision tree with root P (Yes/No) and internal nodes Q and R, alongside the rule set derived from its leaves.]
Example

Name           Give Birth  Lay Eggs  Can Fly  Live in Water  Have Legs  Class
human          yes         no        no       no             yes        mammals
python         no          yes       no       no             no         reptiles
salmon         no          yes       no       yes            no         fishes
whale          yes         no        no       yes            no         mammals
frog           no          yes       no       sometimes      yes        amphibians
komodo         no          yes       no       no             yes        reptiles
bat            yes         no        yes      no             yes        mammals
pigeon         no          yes       yes      no             yes        birds
cat            yes         no        no       no             yes        mammals
leopard shark  yes         no        no       yes            no         fishes
turtle         no          yes       no       sometimes      yes        reptiles
penguin        no          yes       no       sometimes      yes        birds
porcupine      yes         no        no       no             yes        mammals
eel            no          yes       no       yes            no         fishes
salamander     no          yes       no       sometimes      yes        amphibians
gila monster   no          yes       no       no             yes        reptiles
platypus       no          yes       no       no             yes        mammals
owl            no          yes       yes      no             yes        birds
dolphin        yes         no        no       yes            no         mammals
eagle          no          yes       yes      no             yes        birds
C4.5 versus C4.5rules versus RIPPER

C4.5 decision tree:
  Give Birth?
    Yes → Mammals
    No → Live In Water?
      Yes → Fishes
      Sometimes → Amphibians
      No → Can Fly?
        Yes → Birds
        No → Reptiles

C4.5rules default rule: ( ) → Amphibians
RIPPER default rule: ( ) → Mammals
C4.5 versus C4.5rules versus RIPPER (confusion matrices)

C4.5 and C4.5rules:

                 PREDICTED CLASS
ACTUAL CLASS     Amphibians  Fishes  Reptiles  Birds  Mammals
Amphibians       2           0       0         0      0
Fishes           0           2       0         0      1
Reptiles         1           0       3         0      0
Birds            1           0       0         3      0
Mammals          0           0       1         0      6

RIPPER:

                 PREDICTED CLASS
ACTUAL CLASS     Amphibians  Fishes  Reptiles  Birds  Mammals
Amphibians       0           0       0         0      2
Fishes           0           3       0         0      0
Reptiles         0           0       3         0      1
Birds            0           0       1         2      1
Mammals          0           2       1         0      4
Instance-Based Classifiers

[Figure: a set of stored cases with attributes Atr1 ... AtrN and class labels A, B, B, C, A, ..., and an unseen case (Atr1 ... AtrN) whose class is predicted by comparing it against the stored cases.]
Examples:
- Rote-learner: memorizes the entire training data and performs classification only if the attributes of a record match one of the training examples exactly
- Nearest neighbor: uses the k "closest" points (nearest neighbors) to perform classification
Nearest-Neighbor Classifiers

Basic idea: if it walks like a duck and quacks like a duck, then it's probably a duck.

[Figure: compute the distance between the test record and the training records, then choose the k "nearest" records.]
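A minimal sketch of this procedure (the toy training data is illustrative; ties in the vote fall back to Counter's default ordering):

```python
from collections import Counter
from math import dist  # Euclidean distance (Python 3.8+)

def knn_classify(test_point, training, k=3):
    """training: list of (feature_vector, class_label) pairs.
    Rank records by distance, take the k nearest, majority vote."""
    neighbors = sorted(training, key=lambda rec: dist(rec[0], test_point))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "+"), ((1.2, 0.8), "+"),
         ((5.0, 5.0), "-"), ((4.8, 5.2), "-")]
print(knn_classify((1.1, 1.0), train, k=3))  # "+"
```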
Nearest-Neighbor Classifiers

[Figure: an unknown record among positive and negative training instances; its k nearest neighbors determine its class.]
1-nearest neighbor

[Figure: Voronoi diagram induced by the training points; each cell takes the class of its training point.]
Compute the distance between two points, e.g. Euclidean distance:

  d(p, q) = √( Σi (pi − qi)² )
Scaling issues
- Attributes may have to be scaled to prevent distance measures from being dominated by one of the attributes
- Example: the height of a person may vary from 1.5 m to 1.8 m, the weight from 90 lb to 300 lb, and the income from $10K to $1M
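A common fix is min-max normalization, sketched below (the sample values echo the example above):

```python
def min_max_scale(column):
    """Rescale a numeric attribute to [0, 1] so that no single attribute
    (e.g. income in dollars vs. height in metres) dominates the distance."""
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]

heights = [1.5, 1.65, 1.8]               # metres
incomes = [10_000, 500_000, 1_000_000]   # dollars
print(min_max_scale(heights))  # [0.0, 0.5, 1.0]
print(min_max_scale(incomes))  # [0.0, ~0.49, 1.0]
```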
Problem with the Euclidean measure: high-dimensional data (curse of dimensionality). It can produce counter-intuitive results, e.g.

  1 1 1 1 1 1 1 1 1 1 1 0   vs   0 1 1 1 1 1 1 1 1 1 1 1   →  d = 1.4142
  1 0 0 0 0 0 0 0 0 0 0 0   vs   0 0 0 0 0 0 0 0 0 0 0 1   →  d = 1.4142

One solution: normalize the vectors to unit length.
Example: PEBLS

PEBLS: Parallel Exemplar-Based Learning System (Cost & Salzberg)
- Works with both continuous and nominal features; for nominal features, the distance between two values is computed using the modified value difference metric (MVDM)
- Each record is assigned a weight factor
- Number of nearest neighbors: k = 1
Example: PEBLS

Distance between nominal attribute values:

  d(V1, V2) = Σi | n1i/n1 − n2i/n2 |

where n1i (n2i) is the number of records with value V1 (V2) that belong to class i, and n1 (n2) is the total number of records with that value.

Using the 10-record tax data set:

Marital Status:
Class  Single  Married  Divorced
Yes    2       0        1
No     2       4        1

Refund:
Class  Yes  No
Yes    0    3
No     3    4

  d(Single, Married)       = |2/4 − 0/4| + |2/4 − 4/4| = 1
  d(Single, Divorced)      = |2/4 − 1/2| + |2/4 − 1/2| = 0
  d(Married, Divorced)     = |0/4 − 1/2| + |4/4 − 1/2| = 1
  d(Refund=Yes, Refund=No) = |0/3 − 3/7| + |3/3 − 4/7| = 6/7
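A compact sketch of the MVDM computation (the tuple encoding of the table is an illustrative assumption):

```python
from collections import Counter

# (Refund, Status, Class) for the 10 records above
data = [("Yes","Single","No"), ("No","Married","No"), ("No","Single","No"),
        ("Yes","Married","No"), ("No","Divorced","Yes"), ("No","Married","No"),
        ("Yes","Divorced","No"), ("No","Single","Yes"), ("No","Married","No"),
        ("No","Single","Yes")]

def mvdm(pairs, v1, v2):
    """d(V1,V2) = sum_i |n1i/n1 - n2i/n2| over classes i, where `pairs`
    is a list of (attribute value, class) tuples."""
    c1 = Counter(c for v, c in pairs if v == v1)
    c2 = Counter(c for v, c in pairs if v == v2)
    n1, n2 = sum(c1.values()), sum(c2.values())
    return sum(abs(c1[c]/n1 - c2[c]/n2) for c in set(c1) | set(c2))

status = [(s, c) for _, s, c in data]
print(mvdm(status, "Single", "Married"))   # 1.0
print(mvdm(status, "Single", "Divorced"))  # 0.0
refund = [(r, c) for r, _, c in data]
print(mvdm(refund, "Yes", "No"))           # 6/7 ~= 0.857
```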
Example: PEBLS

Distance between two records X and Y:

  Δ(X, Y) = wX · wY · Σi=1..d d(Xi, Yi)²

where wX = (number of times X is used for prediction) / (number of times X predicts correctly); wX ≈ 1 if X makes accurate predictions most of the time, and wX > 1 if X is not reliable.

Example records:
  X = (Refund=Yes, Status=Single, Income=125K, Cheat=No)
  Y = (Refund=No, Status=Married, Income=100K, Cheat=No)
Bayes Classifier

A probabilistic framework for solving classification problems.

Conditional probability:

  P(C | A) = P(A, C) / P(A)
  P(A | C) = P(A, C) / P(C)

Bayes theorem:

  P(C | A) = P(A | C) · P(C) / P(A)
Example of Bayes Theorem

Given:
- A doctor knows that meningitis causes stiff neck 50% of the time
- The prior probability of any patient having meningitis is 1/50,000
- The prior probability of any patient having stiff neck is 1/20

If a patient has stiff neck, what is the probability that he/she has meningitis?

  P(M | S) = P(S | M) · P(M) / P(S) = (0.5 × 1/50,000) / (1/20) = 0.0002
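The same arithmetic, spelled out as a tiny script:

```python
# Bayes theorem applied to the meningitis example
p_s_given_m = 0.5   # P(S|M): meningitis causes stiff neck 50% of the time
p_m = 1 / 50_000    # P(M): prior probability of meningitis
p_s = 1 / 20        # P(S): prior probability of stiff neck

p_m_given_s = p_s_given_m * p_m / p_s
print(p_m_given_s)  # 0.0002
```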
Bayesian Classifiers

- Consider each attribute and the class label as random variables
- Given a record with attributes (A1, A2, ..., An), the goal is to predict class C; specifically, we want to find the value of C that maximizes P(C | A1, A2, ..., An)
Bayesian Classifiers

Approach: compute the posterior probability P(C | A1, A2, ..., An) for all values of C using Bayes theorem:

  P(C | A1 A2 ... An) = P(A1 A2 ... An | C) · P(C) / P(A1 A2 ... An)

Choose the value of C that maximizes P(C | A1, A2, ..., An); since the denominator is the same for every class, this is equivalent to maximizing P(A1 A2 ... An | C) · P(C).
How to Estimate Probabilities from Data?

Tid  Refund  Marital Status  Taxable Income  Evade
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

For discrete attributes: P(Ai | Ck) = |Aik| / Nc, where |Aik| is the number of class-Ck records with attribute value Ai. Examples:
  P(Status=Married | No) = 4/7
  P(Refund=Yes | Yes) = 0
How to Estimate Probabilities from Data?

For continuous attributes, assume a normal distribution (one per attribute/class pair):

  P(Ai | cj) = 1 / √(2π σij²) · exp( −(Ai − μij)² / (2 σij²) )

For (Income, Class=No), using the seven class-No incomes in the table above (125K, 100K, 70K, 120K, 60K, 220K, 75K): sample mean μ = 110, sample variance σ² = 2975.

  P(Income=120 | No) = 1 / (√(2π) · 54.54) · exp( −(120 − 110)² / (2 × 2975) ) = 0.0072
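Checking this value with a few lines of code (the mean and variance are computed directly from the table):

```python
from math import exp, pi, sqrt
from statistics import mean, variance  # variance() is the sample variance

# Taxable income for the 7 records with Class = No (in $1000s)
income_no = [125, 100, 70, 120, 60, 220, 75]
mu = mean(income_no)        # 110
var = variance(income_no)   # 2975

def gaussian(x, mu, var):
    """Normal density used by naive Bayes for continuous attributes."""
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

print(gaussian(120, mu, var))  # ~0.0072
```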
For a test record X = (Refund=No, Married, Income=120K):

  P(X | Class=No) = P(Refund=No | No) × P(Married | No) × P(Income=120K | No)
                  = 4/7 × 4/7 × 0.0072 = 0.0024

  ⇒ Class = No
If one of the conditional probabilities is zero, the entire product becomes zero. Alternative probability estimates:

  Original:   P(Ai | C) = Nic / Nc
  Laplace:    P(Ai | C) = (Nic + 1) / (Nc + c)
  m-estimate: P(Ai | C) = (Nic + m·p) / (Nc + m)

where c is the number of classes, p is a prior probability, and m is a parameter.
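A minimal sketch of the three estimates, applied to the zero-probability case from the table (P(Refund=Yes | Yes), with 0 of 3 class-Yes records having Refund=Yes):

```python
def original(n_ic, n_c):
    return n_ic / n_c

def laplace(n_ic, n_c, c):
    return (n_ic + 1) / (n_c + c)

def m_estimate(n_ic, n_c, m, p):
    return (n_ic + m * p) / (n_c + m)

print(original(0, 3))                # 0.0 -> zeroes out the whole product
print(laplace(0, 3, c=2))            # 0.2
print(m_estimate(0, 3, m=3, p=0.5))  # 0.25 (m and p chosen for illustration)
```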
Example of Naive Bayes Classifier

Name           Give Birth  Can Fly  Live in Water  Have Legs  Class
human          yes         no       no             yes        mammals
python         no          no       no             no         non-mammals
salmon         no          no       yes            no         non-mammals
whale          yes         no       yes            no         mammals
frog           no          no       sometimes      yes        non-mammals
komodo         no          no       no             yes        non-mammals
bat            yes         yes      no             yes        mammals
pigeon         no          yes      no             yes        non-mammals
cat            yes         no       no             yes        mammals
leopard shark  yes         no       yes            no         non-mammals
turtle         no          no       sometimes      yes        non-mammals
penguin        no          no       sometimes      yes        non-mammals
porcupine      yes         no       no             yes        mammals
eel            no          no       yes            no         non-mammals
salamander     no          no       sometimes      yes        non-mammals
gila monster   no          no       no             yes        non-mammals
platypus       no          no       no             yes        mammals
owl            no          yes      no             yes        non-mammals
dolphin        yes         no       yes            no         mammals
eagle          no          yes      no             yes        non-mammals

Test record A: Give Birth = yes, Can Fly = no, Live in Water = yes, Have Legs = no, Class = ?

A: attributes, M: mammals, N: non-mammals

  P(A | M) = 6/7 × 6/7 × 2/7 × 2/7 = 0.06
  P(A | N) = 1/13 × 10/13 × 3/13 × 4/13 = 0.0042
  P(A | M) P(M) = 0.06 × 7/20 = 0.021
  P(A | N) P(N) = 0.0042 × 13/20 = 0.0027

  P(A | M) P(M) > P(A | N) P(N)  ⇒  Mammals
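A quick numeric check of this computation (a sketch; the class-conditional counts are read off the table):

```python
# A = (Give Birth=yes, Can Fly=no, Live in Water=yes, Have Legs=no)
# M = mammals (7 records), N = non-mammals (13 records)
p_a_given_m = (6/7) * (6/7) * (2/7) * (2/7)       # ~0.06
p_a_given_n = (1/13) * (10/13) * (3/13) * (4/13)  # ~0.0042

score_m = p_a_given_m * 7 / 20    # P(A|M) P(M) ~0.021
score_n = p_a_given_n * 13 / 20   # P(A|N) P(N) ~0.0027
print("mammals" if score_m > score_n else "non-mammals")  # mammals
```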
Artificial Neural Networks (ANN)

X1  X2  X3  Y
1   0   0   0
1   0   1   1
1   1   0   1
1   1   1   1
0   0   1   0
0   1   0   0
0   1   1   1
0   0   0   0

[Figure: black box with input nodes X1, X2, X3 and output Y.]

Output Y is 1 if at least two of the three inputs are equal to 1.
Artificial Neural Networks (ANN)

For the same truth table, the black box can be realized with input nodes X1, X2, X3 (weight 0.3 each), an output node Y, and threshold t = 0.4:

  Y = I( 0.3·X1 + 0.3·X2 + 0.3·X3 − 0.4 > 0 )

where I(z) = 1 if z is true, and 0 otherwise.
Perceptron Model

- The model is an assembly of inter-connected nodes and weighted links
- The output node sums up each of its input values according to the weights of its links, and compares the sum against a threshold t

[Figure: input nodes X1, X2, X3 with weights w1, w2, w3 feeding a single output node Y.]

  Y = I( Σi wi Xi − t > 0 )    or    Y = sign( Σi wi Xi − t )
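A minimal sketch of this model in code, using the weights and threshold from the example above:

```python
def perceptron_output(x, w, t):
    """Y = 1 if sum_i w_i x_i - t > 0, else 0 (the step function I)."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s - t > 0 else 0

w, t = (0.3, 0.3, 0.3), 0.4
for x in [(1, 0, 0), (1, 1, 0), (1, 1, 1), (0, 0, 0)]:
    print(x, "->", perceptron_output(x, w, t))
# Fires exactly when at least two of the three inputs are 1
```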
General Structure of ANN

[Figure: input layer (x1 ... x5), hidden layer, output layer. Each neuron i receives inputs I1, I2, I3 with weights wi1, wi2, wi3, computes the weighted sum Si, and applies an activation function g(Si) against a threshold t to produce its output Oi.]
Support Vector Machines

Find a linear hyperplane (decision boundary) that will separate the data.
[Figures: two candidate decision boundaries B1 and B2, each with parallel margin boundaries (b11, b12 and b21, b22); the boundary with the larger margin is preferred.]
Support Vector Machines

  w · x + b = 0     (decision boundary)
  w · x + b = +1    (margin boundary b11)
  w · x + b = −1    (margin boundary b12)

  f(x) = 1   if w · x + b ≥ 1
  f(x) = −1  if w · x + b ≤ −1

  Margin = 2 / ||w||²
We want to maximize Margin = 2 / ||w||², which is equivalent to minimizing L(w) = ||w||² / 2, subject to the constraint that every training record falls on the correct side of its margin boundary (see below).
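Restating the objective and constraints from these slides compactly in LaTeX (the single constraint y_i(w·x_i + b) ≥ 1 merges the two cases of f(x)):

```latex
\min_{\mathbf{w},\,b}\; L(\mathbf{w}) = \frac{\lVert\mathbf{w}\rVert^{2}}{2}
\qquad \text{subject to} \qquad
y_i\,(\mathbf{w}\cdot\mathbf{x}_i + b) \ge 1,\quad i = 1,\dots,N.
```

This is a constrained quadratic optimization problem, solved with numerical approaches such as quadratic programming.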
What if the problem is not linearly separable? Introduce slack variables ξi and minimize

  L(w) = ||w||² / 2 + C ( Σi ξi )^k

subject to:

  f(xi) = 1   if w · xi + b ≥ 1 − ξi
  f(xi) = −1  if w · xi + b ≤ −1 + ξi
Ensemble Methods

- Construct a set of classifiers from the training data
- Predict the class label of previously unseen records by aggregating the predictions made by multiple classifiers
General Idea

Step 1: create multiple data sets D1, D2, ..., Dt-1, Dt from the original training data D
Step 2: build a classifier Ci on each data set Di
Step 3: combine the classifiers C1, C2, ..., Ct into a single classifier C*
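A minimal sketch of Step 3 as a majority vote (the callable-classifier interface and toy base classifiers are illustrative assumptions):

```python
from collections import Counter

def ensemble_predict(classifiers, record):
    """Combine C1 ... Ct by simple majority vote; each classifier is
    any callable record -> label."""
    votes = Counter(c(record) for c in classifiers)
    return votes.most_common(1)[0][0]

# Three toy base classifiers on a numeric record:
cs = [lambda x: "+" if x > 0 else "-",
      lambda x: "+" if x > 1 else "-",
      lambda x: "+" if x > -1 else "-"]
print(ensemble_predict(cs, 0.5))  # "+" (2 of 3 vote +)
```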
Why does it work? Suppose there are 25 base classifiers, each with error rate ε = 0.35. If the base classifiers are independent and the ensemble takes a majority vote, the ensemble errs only when more than half of the base classifiers are wrong:

  Σ_{i=13}^{25} C(25, i) ε^i (1 − ε)^{25−i} = 0.06
Bagging

Sampling with replacement:

Original Data:      1  2  3   4   5  6  7   8   9  10
Bagging (Round 1):  7  8  10  8   2  5  10  10  5  9
Bagging (Round 2):  1  4  9   1   2  3  2   7   3  2
Bagging (Round 3):  1  8  5   10  5  5  9   6   3  7

Build a classifier on each bootstrap sample; each record has probability 1 − (1 − 1/n)^n (≈ 0.632 for large n) of appearing in a given sample.
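A small sketch of the sampling step (record indices only; actual classifiers would be trained on the sampled records):

```python
import random

def bootstrap_sample(n, rng=random):
    """Sampling with replacement: draw n record indices from 1..n."""
    return [rng.randint(1, n) for _ in range(n)]

random.seed(0)  # reproducible illustration
for round_no in range(1, 4):
    print(f"Bagging (Round {round_no}):", bootstrap_sample(10))
```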
Boosting

- An iterative procedure to adaptively change the distribution of the training data, focusing more on previously misclassified records
- Initially, all N records are assigned equal weights; unlike bagging, the weights may change at the end of each boosting round
Boosting

- Records that are wrongly classified will have their weights increased
- Records that are classified correctly will have their weights decreased

Original Data:       1  2  3  4   5  6  7  8   9  10
Boosting (Round 1):  7  3  2  8   7  9  4  10  6  3
Boosting (Round 2):  5  4  9  4   2  5  1  7   4  2
Boosting (Round 3):  4  4  8  10  4  5  4  6   3  4

Example 4 is hard to classify; its weight is increased, so it is more likely to be chosen again in subsequent rounds.
Example: AdaBoost

Base classifiers: C1, C2, ..., CT

Error rate of classifier Ci:

  εi = (1/N) Σ_{j=1}^{N} wj δ( Ci(xj) ≠ yj )

Importance of a classifier:

  αi = (1/2) ln( (1 − εi) / εi )
Example: AdaBoost

Weight update:

  wi^(j+1) = ( wi^(j) / Zj ) × exp(−αj)  if Cj(xi) = yi
  wi^(j+1) = ( wi^(j) / Zj ) × exp(+αj)  if Cj(xi) ≠ yi

where Zj is the normalization factor.

Classification:

  C*(x) = arg max_y Σ_{j=1}^{T} αj δ( Cj(x) = y )
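A minimal sketch of one boosting round, under the assumption that the weights are kept normalized (so the weighted error is the total weight of misclassified records):

```python
from math import exp, log

def adaboost_round(weights, predictions, labels):
    """One AdaBoost round: weighted error, importance alpha, and the
    renormalized weight update from the slide."""
    eps = sum(w for w, p, y in zip(weights, predictions, labels) if p != y)
    alpha = 0.5 * log((1 - eps) / eps)
    new_w = [w * exp(-alpha) if p == y else w * exp(alpha)
             for w, p, y in zip(weights, predictions, labels)]
    z = sum(new_w)  # normalization factor Z_j
    return eps, alpha, [w / z for w in new_w]

w = [0.1] * 10                        # initial weights
pred = ["+"] * 10
true = ["+"] * 7 + ["-"] * 3          # 3 of 10 records misclassified
eps, alpha, w = adaboost_round(w, pred, true)
print(round(eps, 2), round(alpha, 3))  # 0.3 0.424
```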
Illustrating AdaBoost

[Figure: the training data points (classes + and −), each with initial weight 0.1.]

[Figure: Boosting Round 1 (B1): correctly classified points end with weight 0.0094 and misclassified points 0.4623; α = 1.9459. Boosting Round 2 (B2): weights 0.0009, 0.3037, 0.0422; α = 2.9323. Boosting Round 3 (B3): weights 0.0276, 0.1819, 0.0038; α = 3.8744. The overall classifier combines the three rounds' boundaries.]