DECISION TREES

Introduction
It is a method that induces concepts from examples (inductive learning). The target function can be Boolean or discrete valued.
Example
[Figure: a decision tree with Outlook at the root (branches Sunny, Overcast, Rain), a Humidity node (branches High, Normal) and a Wind node (branches Strong, Weak)]

Decision Tree Representation
Decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances. Each path from the tree root to a leaf corresponds to a conjunction of attribute tests (one rule for classification).
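For instance, assuming the standard PlayTennis leaf labels for the tree sketched above (the labels themselves are not visible in the extracted slides), the tree corresponds to the disjunction:
(Outlook = Sunny AND Humidity = Normal) OR (Outlook = Overcast) OR (Outlook = Rain AND Wind = Weak)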
Basic Decision Tree Learning Algorithm
We have a set of training observations (D1, D2, …, D14). First of all we select the best attribute to be tested at the root of the tree.
[Figures: the training observations D1–D14 shown first as one collection, then being partitioned by the attribute chosen at the root]
[Figure: partially grown tree with Outlook at the root; the training observations are sorted down the Sunny, Overcast and Rain branches]
What is the “best” attribute to test at this point? The possible choices are Temperature, Wind & Humidity.

Basic Decision Tree Learning Algorithm
This forms a greedy search for an acceptable decision tree, in which the algorithm never backtracks to reconsider earlier choices.
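The slides contain no code, but the greedy search just described can be sketched as follows. It uses the information-gain measure that the following slides define; the dataset layout (a list of dicts), attribute names and helper names are illustrative assumptions, not anything from the original deck.

```python
from collections import Counter
from math import log2

def entropy(examples, target):
    # Impurity of a collection of examples with respect to the target attribute.
    counts = Counter(ex[target] for ex in examples)
    total = len(examples)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def info_gain(examples, attr, target):
    # Expected reduction in entropy from partitioning on `attr`.
    total = len(examples)
    remainder = 0.0
    for value in {ex[attr] for ex in examples}:
        subset = [ex for ex in examples if ex[attr] == value]
        remainder += (len(subset) / total) * entropy(subset, target)
    return entropy(examples, target) - remainder

def id3(examples, attributes, target):
    labels = {ex[target] for ex in examples}
    if len(labels) == 1:            # zero entropy: return a pure leaf
        return labels.pop()
    if not attributes:              # every attribute already used on this path
        return Counter(ex[target] for ex in examples).most_common(1)[0][0]
    best = max(attributes, key=lambda a: info_gain(examples, a, target))
    tree = {best: {}}               # greedy choice: never reconsidered later
    for value in {ex[best] for ex in examples}:
        subset = [ex for ex in examples if ex[best] == value]
        tree[best][value] = id3(subset, [a for a in attributes if a != best], target)
    return tree
```

Called as, say, id3(training_examples, ["Outlook", "Temperature", "Humidity", "Wind"], "PlayTennis"), it returns a nested dict such as {"Outlook": {"Overcast": "Yes", ...}}; the later sketches in this document reuse that representation.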
Which Attribute is the Best Classifier?
The central choice in the ID3 algorithm is selecting which attribute to test at each node in the tree. We would like to select the attribute which is most useful for classifying examples. For this we need a good quantitative measure. For this purpose a statistical property, called information gain, is used.

Which Attribute is the Best Classifier?: Definition of Entropy
In order to define information gain precisely, we begin by defining entropy. Entropy is a measure commonly used in information theory; it characterizes the impurity of an arbitrary collection of examples.
Which Attribute is the Best Classifier?: Definition of Entropy
Suppose we have four possible values of a variable X: A, B, C, D. These values are independent and occur randomly. You might transmit these values over a binary serial link by encoding each reading with two bits:
A = 00, B = 01, C = 10, D = 11

Now suppose someone tells you that their probabilities of occurrence are not equal:
p(A) = 1/2, p(B) = 1/4, p(C) = 1/8, p(D) = 1/8
It is now possible to invent a coding that only uses 1.75 bits on average per symbol for the transmission.
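The specific coding is not shown in the extracted text; one code with that average length (an illustrative reconstruction that gives shorter codewords to more probable symbols) is:
A = 0, B = 10, C = 110, D = 111
Average length = 1/2 × 1 + 1/4 × 2 + 1/8 × 3 + 1/8 × 3 = 1.75 bits per symbol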
Which Attribute is the Best Classifier?: Definition of Entropy
Suppose X can have m values, V1, V2, …, Vm, with probabilities p1, p2, …, pm. The smallest number of bits, on average, per value, needed to transmit a stream of values of X is
H(X) = − Σi pi log2 pi

If all p's are equal for a given m, we need the highest number of bits for transmission: if there are m possible values of an attribute, then the entropy can be as large as log2 m.
This formula is called Entropy, H:
H(X) = − Σi pi log2 pi
High entropy means that the examples have more equal probabilities of occurrence (and are therefore not easily predictable).

Which Attribute is the Best Classifier?: Information Gain
Suppose we are trying to predict an output Y (e.g. Likes Film “Gladiator”) and we have an input X (College Major = v), where Major takes the values Math, History and CS.
[Figure: the training records grouped by Major (Math, CS, History)]
Which Attribute is the Best Classifier?: Information Gain
For this example we have H(X) = 1.5 and H(Y) = 1.0.

Conditional Entropy H(Y | X = v): the entropy of Y among only those records in which X = v.

Conditional Entropy of Y:
H(Y | X = Math) = 1.0
H(Y | X = History) = 0
H(Y | X = CS) = 0
[Figure: the records grouped by Major (Math, CS, History)]
Which Attribute is the Best Classifier?: Information Gain
Average Conditional Entropy of Y:
H(Y | X) = Σv P(X = v) H(Y | X = v)

Information Gain is the expected reduction in entropy caused by partitioning the examples according to an attribute's value:
Info Gain (Y | X) = H(Y) – H(Y | X) = 1.0 – 0.5 = 0.5

In other words: for transmitting Y, how many bits would be saved if both sides of the line knew X?
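As a check on the numbers above (assuming, consistently with H(X) = 1.5, that P(Major = Math) = 1/2 and P(Major = History) = P(Major = CS) = 1/4):
H(Y | X) = 1/2 × 1.0 + 1/4 × 0 + 1/4 × 0 = 0.5
which is the value used in the Info Gain of 0.5 above.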
Which Attribute is the Best Classifier?: Information Gain
The information gain obtained by separating the examples according to the attribute Wind is calculated as shown below. We calculate the Info Gain for each attribute and select the attribute having the highest Info Gain.
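The slide's worked numbers are not recoverable from the extracted text; the following reconstruction assumes the standard 14-example PlayTennis data (S = [9+, 5−]; Wind = Weak for 8 examples [6+, 2−], Wind = Strong for 6 examples [3+, 3−]):
Gain(S, Wind) = Entropy(S) − (8/14) Entropy(S_Weak) − (6/14) Entropy(S_Strong)
= 0.940 − (8/14) × 0.811 − (6/14) × 1.00 ≈ 0.048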
Select Attributes which Minimize Disorder
Make the decision tree by selecting tests which minimize disorder (maximize gain).
Example
The process of selecting a new attribute is now repeated for each (non-terminal) descendant node, this time using only the training examples associated with that node.
Attributes that have been incorporated higher in the tree are excluded, so that any given attribute can appear at most once along any path through the tree.

This process continues for each new leaf node until either:
1. Every attribute has already been included along this path through the tree, or
2. The training examples associated with a leaf node have zero entropy.
• The hypothesis space of all decision trees is a complete space. Hence the target function is guaranteed to be present in it.
• ID3 performs no backtracking, therefore it is susceptible to converging to locally optimal solutions.
• ID3 uses all training examples at each step to refine its current hypothesis. This makes it less sensitive to errors in individual training examples. However, this requires that all the training examples are present right from the beginning, and the learning cannot be done incrementally with time.

By determining only a single hypothesis, ID3 loses the capabilities that follow from explicitly representing all consistent hypotheses.
• Given a collection of training examples, there are typically many decision trees consistent with these examples.
• Describing the inductive bias of ID3 means describing the basis by which it chooses one of these consistent hypotheses over the others.
• We cannot describe the bias precisely, but we can say approximately that:
- Its selection prefers shorter trees over longer ones
- Trees that place high info. gain attributes close to the root are preferred over those that do not

• We can say “it absolutely prefers shorter trees over longer ones” if there is an algorithm such that:
- It begins with an empty tree and searches breadth-first through progressively more complex trees, first considering “all” trees of depth 1, then “all” trees of depth 2, etc.
• William of Occam, in the year 1320, proposed the following bias (called Occam’s razor): Prefer the simplest hypothesis that fits the data.
• One argument in its favor is that because there are fewer short hypotheses than long ones, it is less likely that a short hypothesis coincidentally fits the training data.

We might expect to be able to find many more 500-node decision trees consistent with a given set of training examples than 5-node decision trees. We might therefore believe that a 5-node tree is less likely to be a statistical coincidence, and prefer this hypothesis over the 500-node hypothesis.
Practical issues in learning decision trees include:
• How deeply to grow the decision tree
• Handling continuous attributes
• Choosing an appropriate attribute selection measure
• Handling training data with missing attribute values
• Handling attributes with differing costs

Avoiding Over-fitting the Data
The ID3 algorithm grows each branch of the tree just deeply enough to perfectly classify the training examples. While this is sometimes a reasonable strategy, in fact it can lead to difficulties when there is noise in the data, or when the number of training examples is too small to produce a representative sample of the true target function. In either of these cases, ID3 can produce trees that over-fit the training examples.
In this case, it is possible for coincidental regularities to occur, in which some attribute happens to partition the examples very well, despite being unrelated to the actual target function. Whenever such coincidental regularities exist, there is a risk of overfitting.
Example: if Days is an attribute, and we have only one or two observations for each day.

One popular approach is to prune over-fit trees. A key question is: what criterion is to be used to determine the correct final tree size? A commonly used practice is to use a separate set of examples, distinct from the training examples (called a validation set), for post-pruning nodes.
In this approach, the available observations are separated into two sets:
- A training set: which is used to learn the decision tree
- A validation set: which is used to prune the tree

The motivation: even though the learner may be misled by random errors and coincidental regularities within the training set, the validation set is unlikely to exhibit the same random fluctuations. Therefore, the validation set can be expected to provide a safety check against over-fitting the spurious characteristics of the training set.

Of course, it is important that the validation set be large enough to itself provide a statistically significant sample of the instances. One common heuristic is to withhold one-third of the available examples for the validation set, using the other two-thirds for training.
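A minimal sketch of that heuristic (function and parameter names are illustrative, not from the slides):

```python
import random

def train_validation_split(examples, validation_fraction=1/3, seed=0):
    # Withhold roughly one third of the examples as a validation set,
    # keeping the remaining two thirds for growing the tree.
    shuffled = list(examples)
    random.Random(seed).shuffle(shuffled)
    cut = round(len(shuffled) * (1 - validation_fraction))
    return shuffled[:cut], shuffled[cut:]
```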
Avoiding Over-fitting the Data: Reduced Error Pruning
One approach is called “reduced error pruning”. It is a form of backtracking in the hill-climbing search of the decision tree hypothesis space. It considers each of the decision nodes in the tree to be a candidate for pruning.

Pruning a decision node consists of:
- removing the sub-tree rooted at that node, making it a leaf node, and
- assigning it the most common classification of the training examples affiliated with that node

Nodes are removed only if the resulting pruned tree performs no worse than the original over the validation set. This has the effect that any leaf node added due to coincidental regularities in the training set is likely to be pruned, because these same coincidences are unlikely to occur in the validation set. Nodes are pruned iteratively, always choosing the node whose removal most increases the decision tree's accuracy over the validation set.
Avoiding Over-fitting the Data: Reduced Error Pruning
Here, the available data has been split into three sub-sets:
- the training examples
- the validation examples used for pruning
- the test examples used to provide an unbiased estimate of accuracy of the pruned tree
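A minimal sketch of reduced error pruning over the nested-dict tree produced by the id3() sketch earlier (a leaf is a class label, an internal node is {attribute: {value: subtree}}); all helper names are illustrative assumptions:

```python
import copy
from collections import Counter

def classify(tree, example):
    # Follow attribute tests until a leaf (a plain class label) is reached.
    while isinstance(tree, dict):
        attr = next(iter(tree))
        tree = tree[attr].get(example[attr])
    return tree

def accuracy(tree, examples, target):
    return sum(classify(tree, ex) == ex[target] for ex in examples) / len(examples)

def decision_nodes(tree, path=()):
    # Yield the (attribute, value, ...) path to every internal node of the tree.
    if isinstance(tree, dict):
        yield path
        attr = next(iter(tree))
        for value, subtree in tree[attr].items():
            yield from decision_nodes(subtree, path + (attr, value))

def pruned_at(tree, path, leaf):
    # Return a copy of `tree` with the node reached via `path` replaced by `leaf`.
    if not path:
        return leaf
    new = copy.deepcopy(tree)
    node = new
    for attr, value in list(zip(path[0::2], path[1::2]))[:-1]:
        node = node[attr][value]
    node[path[-2]][path[-1]] = leaf
    return new

def reduced_error_prune(tree, train, validation, target):
    improved = True
    while improved:
        improved = False
        best_tree, best_acc = None, accuracy(tree, validation, target)
        for path in decision_nodes(tree):
            # Majority class of the training examples that reach this node.
            reaching = [ex for ex in train
                        if all(ex[a] == v for a, v in zip(path[0::2], path[1::2]))]
            if not reaching:
                continue
            leaf = Counter(ex[target] for ex in reaching).most_common(1)[0][0]
            candidate = pruned_at(tree, path, leaf)
            acc = accuracy(candidate, validation, target)
            if acc >= best_acc:       # prune only if no worse on the validation set
                best_tree, best_acc, improved = candidate, acc, True
        if improved:
            tree = best_tree
    return tree
```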
Avoiding Over-fitting the Data: Reduced Error Pruning
The major drawback of this approach is that when data is limited, withholding part of it for the validation set reduces even further the number of examples available for training.

Avoiding Over-fitting the Data: Rule Post-Pruning
Rule post-pruning involves the following steps:
1. Infer the decision tree from the training set (allowing over-fitting to occur)
2. Convert the learned tree into an equivalent set of rules by creating one rule for each path from the root node to a leaf node
3. Prune (generalize) each rule by pruning any preconditions that result in improving its estimated accuracy
4. Sort the pruned rules by their estimated accuracy, and consider them in this sequence when classifying subsequent instances
Avoiding Over-fitting the Data: Rule Post-Pruning
Example: If (Outlook = sunny) and (Humidity = high) then Play Tennis = no
Rule post-pruning would consider removing the preconditions one by one. It would select whichever of these removals produced the greatest improvement in estimated rule accuracy, then consider pruning the second precondition as a further pruning step. No pruning is done if it reduces the estimated rule accuracy.

The main advantage of this approach: each distinct path through the decision tree produces a distinct rule. Hence removing a precondition in a rule does not mean that it has to be removed from other rules as well. In contrast, in the previous approach, the only two choices would be to remove the decision node completely, or to retain it in its original form.
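A small sketch of step 2 of rule post-pruning (converting every root-to-leaf path into an if-then rule), again over the illustrative nested-dict representation used above; pruning and sorting of the rules are not shown:

```python
def tree_to_rules(tree, preconditions=()):
    # A leaf contributes one rule: its preconditions imply the leaf's class.
    if not isinstance(tree, dict):
        return [(list(preconditions), tree)]
    attr = next(iter(tree))
    rules = []
    for value, subtree in tree[attr].items():
        rules += tree_to_rules(subtree, preconditions + ((attr, value),))
    return rules

# e.g. one returned rule: ([("Outlook", "Sunny"), ("Humidity", "High")], "No")
```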
Decision Trees: Issues in Learning
Practical issues in learning decision trees include:
• How deeply to grow the decision tree
• Handling continuous attributes
• Choosing an appropriate attribute selection measure
• Handling training data with missing attribute values
• Handling attributes with differing costs
Handling Continuous Attributes
For a continuous attribute A, we can define a new Boolean attribute that tests whether A exceeds some threshold c.
• The only question is how to select the best value for the threshold c
• We sort the examples according to the continuous attribute A
• Then we identify adjacent examples that differ in their target classification
• We generate a set of candidate thresholds midway between the corresponding values of A

• In the current example, there are two candidate thresholds, corresponding to the values of Temperature at which the value of Play Tennis changes: (48 + 60)/2 and (80 + 90)/2
• The information gain is computed for each of these candidate attributes, Temperature > 54 and Temperature > 85, and the best is selected (Temperature > 54)
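A sketch of the candidate-threshold procedure just described; the full Temperature/PlayTennis value list below is assumed from the standard textbook example (only 48, 60, 80 and 90 appear in the extracted slide):

```python
def candidate_thresholds(examples, attr, target):
    # Sort by the continuous attribute, then place a threshold midway between
    # adjacent examples whose target classification differs.
    ordered = sorted(examples, key=lambda ex: ex[attr])
    return [(prev[attr] + cur[attr]) / 2
            for prev, cur in zip(ordered, ordered[1:])
            if prev[target] != cur[target]]

temps = [(40, "No"), (48, "No"), (60, "Yes"), (72, "Yes"), (80, "Yes"), (90, "No")]
examples = [{"Temperature": t, "PlayTennis": c} for t, c in temps]
print(candidate_thresholds(examples, "Temperature", "PlayTennis"))  # [54.0, 85.0]
```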
Training Examples with Missing Attribute Values
• One strategy for filling in the missing value: assign it the value most common for the attribute A among the training examples at node n
• Alternatively, we might assign it the most common value among the examples at node n that have the classification c(x)
• The training example using the estimated value can then be used directly by the decision tree learning algorithm

• Another procedure is to assign a probability to each of the possible values of A (rather than assigning only the highest-probability value)
• These probabilities can be estimated by observing the frequencies of the various values of A among the examples at node n
• For example, given a Boolean attribute A, if node n contains six known examples with A = 1 and four with A = 0, then we would say the probability that A(x) = 1 is 0.6 and the probability that A(x) = 0 is 0.4
Training Examples with Missing Attribute Values
• A fractional 0.6 of instance x is distributed down the branch for A = 1, and a fractional 0.4 of x down the other tree branch
• These fractional examples, along with the other “integer” examples, are used for the purpose of computing information Gain
• This method for handling missing attribute values is used in C4.5

Classification of Instances with Missing Attribute Values
• The fractioning of examples can also be applied to classify new instances whose attribute values are unknown
• In this case, the classification of the new instance is simply the most probable classification, computed by summing the weights of the instance fragments classified in different ways at the leaf nodes of the tree
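A minimal sketch of that fractional-instance idea at classification time; the branch-probability table and the nested-dict tree representation are the same illustrative assumptions used in the earlier sketches:

```python
from collections import defaultdict

def classify_fractional(tree, example, branch_probs, weight=1.0, totals=None):
    totals = defaultdict(float) if totals is None else totals
    if not isinstance(tree, dict):
        totals[tree] += weight                     # a fragment reaches this leaf
        return totals
    attr = next(iter(tree))
    if attr in example:                            # known value: follow one branch
        classify_fractional(tree[attr][example[attr]], example,
                            branch_probs, weight, totals)
    else:                                          # missing value: split the weight
        for value, subtree in tree[attr].items():
            classify_fractional(subtree, example, branch_probs,
                                weight * branch_probs[attr][value], totals)
    return totals

# weights = classify_fractional(tree, x, probs)   # e.g. {"Yes": 0.6, "No": 0.4}
# prediction = max(weights, key=weights.get)      # most probable classification
```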
Handling Attributes with Differing Costs
• In some learning tasks, the attributes may have associated costs
• For example, we may have attributes such as Temperature, Biopsy Result, Pulse, Blood Test Result, etc.
• These attributes vary significantly in their costs (monetary costs, patient comfort, time involved)
• In such tasks, we would prefer decision trees that use low-cost attributes where possible, relying on high-cost attributes only when needed to provide reliable classifications

• In ID3, attribute costs can be taken into account by introducing a cost term into the attribute selection measure
• For example, we might divide the Gain by the cost of the attribute, so that lower-cost attributes would be preferred
• Such cost-sensitive measures do not guarantee finding an optimal cost-sensitive decision tree
• However, they do bias the search in favor of low-cost attributes
Handling Attributes with Differing Costs
• Another example of a cost-sensitive selection measure is:
Gain²(S, A) / Cost(A)
where S = collection of examples & A = attribute
• Yet another selection measure can be:
(2^Gain(S, A) – 1) / (Cost(A) + 1)^w

Alternate Measures for Selecting Attributes
• There is a problem in the information gain measure: it favors attributes with many values over those with few values
• Example: an attribute “Date” would have the highest information gain (as it alone would perfectly fit the training data)
• To cushion this problem the Info Gain is divided by a term called “Split Info”
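Written out as plain functions of an already-computed gain value (the parameter names are illustrative; Cost(A) and the exponent w come from the task at hand), the two cost-sensitive measures above are:

```python
def cost_sensitive_gain_1(gain, cost):
    # Gain^2(S, A) / Cost(A)
    return gain ** 2 / cost

def cost_sensitive_gain_2(gain, cost, w):
    # (2^Gain(S, A) - 1) / (Cost(A) + 1)^w
    return (2 ** gain - 1) / (cost + 1) ** w
```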
Alternate Measures for Selecting Attributes
The Split Information of an attribute A over a collection of examples S is
Split Info (S, A) = − Σi (|Si| / |S|) log2 (|Si| / |S|)
where Si is the subset of S for which A has value vi. Note that the attribute A can take on c different values, e.g. if A = Outlook, then v1 = Sunny, v2 = Rain, v3 = Overcast.

• Example:
Let there be 100 training examples at a node testing an attribute A1 with 100 values, one example going down each of the 100 branches:
Split Info (S, A1) = − 100 × (1/100) × log2 (0.01) = log2 (100) = 6.64
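A short sketch of Split Info and the resulting gain-ratio style measure implied above (Info Gain divided by Split Info), reproducing the 100-branch example as a check:

```python
from math import log2

def split_info(subset_sizes):
    # Entropy of S with respect to the partition induced by the attribute.
    total = sum(subset_sizes)
    return -sum((s / total) * log2(s / total) for s in subset_sizes)

def gain_ratio(gain, subset_sizes):
    # Info Gain divided by Split Info.
    return gain / split_info(subset_sizes)

print(split_info([1] * 100))   # 100 singleton branches -> log2(100) = 6.64
```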
Advantages
• Classification is rapid & computationally inexpensive
• Trees provide a natural way to incorporate prior knowledge from human experts

Disadvantages
• They may generate very complex (long) rules, which are very hard to prune
• They generate a large number of rules; their number can become excessively large unless some pruning techniques are used to make them more comprehensible
• They do not easily support incremental learning. Although ID3 would still work if examples are supplied one at a time, it would grow a new decision tree from scratch every time a new example is given
• There may be portions of the concept space which are not labeled, e.g. “If low income and bad credit history then high risk”; but what about low income and good credit history?

Decision tree learning is generally best suited to problems with the following characteristics:
• Instances are represented by discrete attribute-value pairs (though the basic algorithm was extended to real-valued attributes as well)
• The target function has discrete output values
• Disjunctive hypothesis descriptions may be required
• The training data may contain errors
• The training data may contain missing attribute values