
Decision Tree

Data Science Program

Bangalore
2 Introduction

Given an event, predict its category.
Examples:
Who won a given ball game?
How should we file a given email?
What word sense was intended for a given occurrence of a word?

Event = list of features.
Examples:
Ball game: Which players were on offense?
Email: Who sent the email?
Disambiguation: What was the preceding word?

3 Introduction

Use a decision tree to predict categories for new events.
Use training data to build the decision tree.

[Diagram: Training Events and Categories -> build Decision Tree; New Events -> Decision Tree -> Category]
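A minimal sketch of this build-then-predict loop, using scikit-learn's DecisionTreeClassifier (the library choice and the toy feature encoding are assumptions, not part of the slides):

```python
# Sketch: build a decision tree from labelled training events,
# then predict the category of new events.
from sklearn.tree import DecisionTreeClassifier

# Toy encoding: each event is a list of numeric feature values,
# each category is a class label.
training_events = [[1, 0], [1, 1], [0, 1], [0, 0]]
training_categories = ["win", "win", "loss", "loss"]

tree = DecisionTreeClassifier()
tree.fit(training_events, training_categories)   # build the tree from training data

new_events = [[1, 1], [0, 1]]
print(tree.predict(new_events))                  # predicted categories for the new events
```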
4 Decision Tree for PlayTennis

[Tree diagram: root Outlook with branches Sunny, Overcast, Rain; the Sunny branch tests Humidity (High -> No, Normal -> Yes)]

Each internal node tests an attribute
Each branch corresponds to an attribute value
Each leaf node assigns a classification

5 Word Sense Disambiguation

Given an occurrence of a word, decide which sense, or meaning, was intended.
Example: "run"
run1: move swiftly (I ran to the store.)
run2: operate (I run a store.)
run3: flow (Water runs from the spring.)
run4: length of torn stitches (Her stockings had a run.)
etc.
6 Word Sense Disambiguation
Categories
Use word sense labels (run1, run2, etc.) to name
the possible categories.
Features
Features describe the context of the word we want to
disambiguate.
Possible features include:
near(w): is the given word near an occurrence of word w?
pos: the word’s part of speech
left(w): is the word immediately preceded by the word w?
etc.
7 Word Sense Disambiguation

Example decision tree:

[Tree diagram: root tests pos; the noun branch tests near(stocking) (yes -> run4, no -> run1); the verb branch tests near(race), whose "no" branch tests near(river) (yes -> run3)]

(Note: Decision trees for WSD tend to be quite large)
8 WSD: Sample Training Data

pos    near(race)  near(river)  near(stockings)  Word Sense
noun   no          no           no               run4
verb   no          no           no               run1
verb   no          yes          no               run3
noun   yes         yes          yes              run4
verb   no          no           yes              run1
verb   yes         yes          no               run2
verb   no          yes          yes              run3
9 Decision Tree for Conjunction
Outlook=Sunny ∧ Wind=Weak

[Tree: Outlook: Sunny -> Wind (Strong -> No, Weak -> Yes); Overcast -> No; Rain -> No]
10 Decision Tree for Disjunction
Outlook=Sunny ∨ Wind=Weak

[Tree: Outlook: Sunny -> Yes; Overcast -> Wind (Strong -> No, Weak -> Yes); Rain -> Wind (Strong -> No, Weak -> Yes)]
11 Decision Tree for XOR
Outlook=Sunny XOR Wind=Weak

[Tree: Outlook: Sunny -> Wind (Strong -> Yes, Weak -> No); Overcast -> Wind (Strong -> No, Weak -> Yes); Rain -> Wind (Strong -> No, Weak -> Yes)]

12 Decision Tree
• Decision trees represent disjunctions of conjunctions

[Tree: Outlook: Sunny -> Humidity (High -> No, Normal -> Yes); Overcast -> Yes; Rain -> Wind (Strong -> No, Weak -> Yes)]

(Outlook=Sunny ∧ Humidity=Normal)
∨ (Outlook=Overcast)
∨ (Outlook=Rain ∧ Wind=Weak)
13 When to Consider Decision Trees

Instances describable by attribute-value pairs
Target function is discrete valued
Disjunctive hypothesis may be required
Possibly noisy training data
Missing attribute values

Examples:
Medical diagnosis
Credit risk analysis
Object classification for robot manipulator (Tan 1993)
14 Top-Down Induction of Decision Trees (ID3)

1. A ← the "best" decision attribute for the next node
2. Assign A as the decision attribute for the node
3. For each value of A, create a new descendant
4. Sort the training examples to the leaf nodes according to the attribute value of the branch
5. If all training examples are perfectly classified (same value of the target attribute) stop, else iterate over the new leaf nodes
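A compact Python sketch of these steps (the data layout, a dict per example, and the helper names are my own illustration, not from the slides; "best" is chosen by the information gain defined on the following slides):

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total) for n in Counter(labels).values())

def info_gain(examples, attr, target):
    """Expected reduction in entropy from splitting on attr."""
    base = entropy([ex[target] for ex in examples])
    for value in {ex[attr] for ex in examples}:
        subset = [ex[target] for ex in examples if ex[attr] == value]
        base -= len(subset) / len(examples) * entropy(subset)
    return base

def id3(examples, attributes, target):
    """Top-down induction: returns a nested dict {attr: {value: subtree-or-label}}."""
    labels = [ex[target] for ex in examples]
    if len(set(labels)) == 1:                 # all examples perfectly classified -> leaf
        return labels[0]
    if not attributes:                        # no attributes left -> majority-class leaf
        return Counter(labels).most_common(1)[0][0]
    best = max(attributes, key=lambda a: info_gain(examples, a, target))   # step 1
    node = {best: {}}                                                      # step 2
    for value in {ex[best] for ex in examples}:                            # step 3
        subset = [ex for ex in examples if ex[best] == value]              # step 4
        remaining = [a for a in attributes if a != best]
        node[best][value] = id3(subset, remaining, target)                 # step 5: recurse
    return node
```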
15 Which attribute is best?

[Diagram: S = [29+,35-] split by attribute A1 into [21+, 5-] and [8+, 30-], and by attribute A2 into [18+, 33-] and [11+, 2-]]

16 Entropy

S is a sample of training examples
p+ is the proportion of positive examples
p- is the proportion of negative examples
Entropy measures the impurity of S:
Entropy(S) = -p+ log2 p+ - p- log2 p-
17 Entropy

Entropy(S) = expected number of bits needed to encode the class (+ or -) of a randomly drawn member of S (under the optimal, shortest-length code)

Why? Information theory: an optimal-length code assigns -log2 p bits to a message having probability p.
So the expected number of bits to encode the class (+ or -) of a random member of S is:
-p+ log2 p+ - p- log2 p-
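As a quick numerical check of this formula (a standalone sketch, not part of the slides):

```python
import math

def entropy(p_pos, p_neg):
    """Entropy of a two-class sample given the class proportions."""
    terms = [p * math.log2(p) for p in (p_pos, p_neg) if p > 0]   # treat 0*log(0) as 0
    return -sum(terms)

# The [29+,35-] sample used on the next slide:
print(round(entropy(29 / 64, 35 / 64), 2))   # -> 0.99
```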
18 Information Gain

Gain(S, A): the expected reduction in entropy due to sorting S on attribute A:
Gain(S, A) = Entropy(S) - Σ_v (|S_v|/|S|) Entropy(S_v), summed over the values v of A

Entropy([29+,35-]) = -29/64 log2 29/64 - 35/64 log2 35/64 = 0.99

[Diagram: S = [29+,35-] split by A1 into [21+, 5-] and [8+, 30-], and by A2 (True/False) into [18+, 33-] and [11+, 2-]]

19 Information Gain

Entropy([21+,5-]) = 0.71    Entropy([18+,33-]) = 0.94
Entropy([8+,30-]) = 0.74    Entropy([11+,2-]) = 0.62

Gain(S, A1) = Entropy(S) - 26/64*Entropy([21+,5-]) - 38/64*Entropy([8+,30-]) = 0.27
Gain(S, A2) = Entropy(S) - 51/64*Entropy([18+,33-]) - 13/64*Entropy([11+,2-]) = 0.12

[Diagram: S = [29+,35-] split by A1 (True/False) into [21+, 5-] and [8+, 30-], and by A2 (True/False) into [18+, 33-] and [11+, 2-]]
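The same arithmetic as a small sketch (the counts follow the figure above; the helper names are my own):

```python
import math

def entropy(pos, neg):
    """Entropy of a (positive, negative) count pair."""
    total = pos + neg
    return -sum((n / total) * math.log2(n / total) for n in (pos, neg) if n > 0)

def gain(parent, children):
    """Entropy of the parent counts minus the size-weighted entropy of the child splits."""
    total = sum(parent)
    weighted = sum((p + n) / total * entropy(p, n) for p, n in children)
    return entropy(*parent) - weighted

print(round(gain((29, 35), [(21, 5), (8, 30)]), 2))    # A1 -> 0.27
print(round(gain((29, 35), [(18, 33), (11, 2)]), 2))   # A2 -> 0.12
```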


20 Training Examples
Day Outlook Temp. Humidity Wind Play Tennis
D1 Sunny Hot High Weak No
D2 Sunny Hot High Strong No
D3 Overcast Hot High Weak Yes
D4 Rain Mild High Weak Yes
D5 Rain Cool Normal Weak Yes
D6 Rain Cool Normal Strong No
D7 Overcast Cool Normal Weak Yes
D8 Sunny Mild High Weak No
D9 Sunny Cool Normal Weak Yes
D10 Rain Mild Normal Strong Yes
D11 Sunny Mild Normal Strong Yes
D12 Overcast Mild High Strong Yes
D13 Overcast Hot Normal Weak Yes
D14 Rain Mild High Strong No
21 Selecting the Next Attribute

Humidity: S = [9+,5-], E = 0.940
  High -> [3+, 4-], E = 0.985    Normal -> [6+, 1-], E = 0.592
  Gain(S, Humidity) = 0.940 - (7/14)*0.985 - (7/14)*0.592 = 0.151

Wind: S = [9+,5-], E = 0.940
  Weak -> [6+, 2-], E = 0.811    Strong -> [3+, 3-], E = 1.0
  Gain(S, Wind) = 0.940 - (8/14)*0.811 - (6/14)*1.0 = 0.048

Humidity provides greater information gain than Wind, w.r.t. the target classification.
22 Selecting the Next Attribute

Outlook: S = [9+,5-], E = 0.940
  Sunny -> [2+, 3-], E = 0.971    Overcast -> [4+, 0-], E = 0.0    Rain -> [3+, 2-], E = 0.971
  Gain(S, Outlook) = 0.940 - (5/14)*0.971 - (4/14)*0.0 - (5/14)*0.971 = 0.247
23 Selecting the Next Attribute

The information gain values for the 4 attributes are:
• Gain(S, Outlook) = 0.247
• Gain(S, Humidity) = 0.151
• Gain(S, Wind) = 0.048
• Gain(S, Temperature) = 0.029

where S denotes the collection of training examples
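These four numbers can be reproduced directly from the table on slide 20 (a standalone sketch; the tuple layout and names are my own; Humidity comes out as 0.152 here, the slide's 0.151 reflects intermediate rounding):

```python
import math
from collections import Counter

# PlayTennis training examples from slide 20: (Outlook, Temperature, Humidity, Wind, PlayTennis)
DATA = [
    ("Sunny", "Hot", "High", "Weak", "No"),       ("Sunny", "Hot", "High", "Strong", "No"),
    ("Overcast", "Hot", "High", "Weak", "Yes"),   ("Rain", "Mild", "High", "Weak", "Yes"),
    ("Rain", "Cool", "Normal", "Weak", "Yes"),    ("Rain", "Cool", "Normal", "Strong", "No"),
    ("Overcast", "Cool", "Normal", "Weak", "Yes"),("Sunny", "Mild", "High", "Weak", "No"),
    ("Sunny", "Cool", "Normal", "Weak", "Yes"),   ("Rain", "Mild", "Normal", "Strong", "Yes"),
    ("Sunny", "Mild", "Normal", "Strong", "Yes"), ("Overcast", "Mild", "High", "Strong", "Yes"),
    ("Overcast", "Hot", "Normal", "Weak", "Yes"), ("Rain", "Mild", "High", "Strong", "No"),
]
COLS = {"Outlook": 0, "Temperature": 1, "Humidity": 2, "Wind": 3}

def entropy(labels):
    total = len(labels)
    return -sum(n / total * math.log2(n / total) for n in Counter(labels).values())

def gain(rows, attr):
    base = entropy([r[-1] for r in rows])
    col = COLS[attr]
    for value in {r[col] for r in rows}:
        subset = [r[-1] for r in rows if r[col] == value]
        base -= len(subset) / len(rows) * entropy(subset)
    return base

for attr in COLS:
    print(attr, round(gain(DATA, attr), 3))
# -> Outlook 0.247, Temperature 0.029, Humidity 0.152, Wind 0.048
```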
24 ID3 Algorithm

[Tree so far: root Outlook over [D1,...,D14] = [9+,5-]; Sunny -> Ssunny = [D1,D2,D8,D9,D11] = [2+,3-] (?); Overcast -> [D3,D7,D12,D13] = [4+,0-] (Yes); Rain -> [D4,D5,D6,D10,D14] = [3+,2-] (?)]

Gain(Ssunny, Humidity) = 0.970 - (3/5)*0.0 - (2/5)*0.0 = 0.970
Gain(Ssunny, Temp.)    = 0.970 - (2/5)*0.0 - (2/5)*1.0 - (1/5)*0.0 = 0.570
Gain(Ssunny, Wind)     = 0.970 - (2/5)*1.0 - (3/5)*0.918 = 0.019
25 ID3 Algorithm

[Final tree: Outlook: Sunny -> Humidity (High -> No [D1,D2], Normal -> Yes [D8,D9,D11]); Overcast -> Yes [D3,D7,D12,D13]; Rain -> Wind (Strong -> No [D6,D14], Weak -> Yes [D4,D5,D10])]
26 Occam's Razor

"If two theories explain the facts equally well, then the simpler theory is to be preferred."

Arguments in favor:
There are fewer short hypotheses than long hypotheses
A short hypothesis that fits the data is unlikely to be a coincidence
A long hypothesis that fits the data might be a coincidence

Arguments opposed:
There are many ways to define small sets of hypotheses
27 Overfitting

One of the biggest problems with decision trees is overfitting.
28 Avoid Overfitting

Stop growing the tree when a split is not statistically significant, or
Grow the full tree, then post-prune

Select the "best" tree by:
measuring performance over the training data
measuring performance over a separate validation data set
minimizing |tree| + |misclassifications(tree)|
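One concrete way to do the grow-then-prune variant, sketched with scikit-learn's cost-complexity pruning (the dataset and the held-out validation split are illustrative assumptions, not part of the slides):

```python
# Grow a full tree, then post-prune by picking the cost-complexity penalty
# (ccp_alpha) that performs best on a separate validation set.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

full_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
path = full_tree.cost_complexity_pruning_path(X_train, y_train)   # candidate ccp_alpha values

best = max(
    (DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_train, y_train)
     for a in path.ccp_alphas),
    key=lambda t: t.score(X_val, y_val),    # "best" tree = highest validation accuracy
)
print(best.get_n_leaves(), best.score(X_val, y_val))
```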
29 Effect of Reduced Error Pruning
30 Converting a Tree to Rules

[Tree: Outlook: Sunny -> Humidity (High -> No, Normal -> Yes); Overcast -> Yes; Rain -> Wind (Strong -> No, Weak -> Yes)]

R1: If (Outlook=Sunny) ∧ (Humidity=High) Then PlayTennis=No
R2: If (Outlook=Sunny) ∧ (Humidity=Normal) Then PlayTennis=Yes
R3: If (Outlook=Overcast) Then PlayTennis=Yes
R4: If (Outlook=Rain) ∧ (Wind=Strong) Then PlayTennis=No
R5: If (Outlook=Rain) ∧ (Wind=Weak) Then PlayTennis=Yes
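The same five rules written out as a small Python function (illustrative only):

```python
def play_tennis(outlook, humidity, wind):
    """Rule form of the tree above: the first matching rule decides."""
    if outlook == "Sunny" and humidity == "High":      # R1
        return "No"
    if outlook == "Sunny" and humidity == "Normal":    # R2
        return "Yes"
    if outlook == "Overcast":                          # R3
        return "Yes"
    if outlook == "Rain" and wind == "Strong":         # R4
        return "No"
    if outlook == "Rain" and wind == "Weak":           # R5
        return "Yes"

print(play_tennis("Sunny", "Normal", "Weak"))   # -> Yes
```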
31 Continuous Valued Attributes

Create a discrete attribute to test a continuous one:
Temperature = 24.5°C
(Temperature > 20.0°C) = {true, false}
Where to set the threshold?

Temperature   15°C  18°C  19°C  22°C  24°C  27°C
PlayTennis    No    No    Yes   Yes   Yes   No
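One common answer is to try candidate thresholds at the midpoints between consecutive sorted values and keep the one with the highest information gain. A sketch on the six examples above (helper names are my own, not from the slides):

```python
import math
from collections import Counter

temps  = [15, 18, 19, 22, 24, 27]
labels = ["No", "No", "Yes", "Yes", "Yes", "No"]

def entropy(ys):
    total = len(ys)
    return -sum(n / total * math.log2(n / total) for n in Counter(ys).values())

def gain_at(threshold):
    """Information gain of the binary test (Temperature > threshold)."""
    below = [y for t, y in zip(temps, labels) if t <= threshold]
    above = [y for t, y in zip(temps, labels) if t > threshold]
    return entropy(labels) - (len(below) * entropy(below) + len(above) * entropy(above)) / len(labels)

# Candidate thresholds: midpoints between consecutive temperatures.
candidates = [(a + b) / 2 for a, b in zip(temps, temps[1:])]
best = max(candidates, key=gain_at)
print(best, round(gain_at(best), 3))   # -> 18.5 0.459 (split between 18°C and 19°C)
```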


Random Forest Classifier

The random forest classifier is an extension of bagging that uses de-correlated trees.

In simple words: a random forest builds multiple decision trees and merges them together to get a more accurate and stable prediction.
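A minimal usage sketch with scikit-learn (the dataset and parameter values are illustrative assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# n_estimators = number of de-correlated trees; max_features = size of the random
# feature subset considered at each split (the m < M idea on the following slides).
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))   # accuracy of the aggregated trees' predictions
```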
Random Forest Classifier
Training data: N examples, M features.

Random Forest Classifier
Create bootstrap samples from the training data.

Random Forest Classifier
Construct a decision tree.

Random Forest Classifier
At each node, choose the split feature from among only m < M randomly selected features.

Random Forest Classifier
Create a decision tree from each bootstrap sample.

Random Forest Classifier
Take the majority vote of the trees' predictions.
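The same pipeline written out by hand, as a sketch: bootstrap sampling and majority voting done with NumPy, with scikit-learn's DecisionTreeClassifier as the per-sample tree and max_features limiting each split to m < M features (dataset and parameters are illustrative):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

trees = []
for _ in range(25):                                   # one tree per bootstrap sample
    idx = rng.integers(0, len(X), size=len(X))        # draw N examples with replacement
    tree = DecisionTreeClassifier(max_features="sqrt", random_state=0)  # m < M features per split
    trees.append(tree.fit(X[idx], y[idx]))

# Majority vote across the 25 trees for each example.
votes = np.stack([t.predict(X) for t in trees])       # shape: (n_trees, n_examples)
majority = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
print((majority == y).mean())                         # training accuracy of the ensemble
```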



39 Unknown Attribute Values

What if some examples have missing values of attribute A? Use the training example anyway and sort it through the tree:
If node n tests A, assign the most common value of A among the other examples sorted to node n, or
Assign the most common value of A among the other examples with the same target value, or
Assign probability pi to each possible value vi of A, and assign fraction pi of the example to each descendant in the tree

Classify new examples in the same fashion.
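The "most common value" strategy as a quick sketch using scikit-learn's SimpleImputer (the tiny example column is an assumption, not from the slides):

```python
import numpy as np
from sklearn.impute import SimpleImputer

# One column of attribute A with a missing entry (np.nan).
A = np.array([["Sunny"], ["Sunny"], ["Rain"], [np.nan]], dtype=object)

# Replace the missing value with the most common value of A among the other examples.
imputer = SimpleImputer(strategy="most_frequent")
print(imputer.fit_transform(A).ravel())   # -> ['Sunny' 'Sunny' 'Rain' 'Sunny']
```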


40 Cross-Validation

Estimate the accuracy of a hypothesis induced by a supervised learning algorithm
Predict the accuracy of a hypothesis over future unseen instances
Select the optimal hypothesis from a given set of alternative hypotheses, e.g. for:
  pruning decision trees
  model selection
  feature selection
  combining multiple classifiers (boosting)
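For example, k-fold cross-validation of a decision tree with scikit-learn (dataset and k are illustrative assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation: train on 4 folds, estimate accuracy on the held-out fold.
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
print(scores.mean(), scores.std())   # estimated accuracy of the induced hypothesis
```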
