DECISION TREES

Introduction

Decision tree learning is a method that induces concepts from examples (inductive learning). The target function can be Boolean or discrete valued.
Example

[Figure: a decision tree for PlayTennis. The root tests Outlook (Sunny / Overcast / Rain); the Sunny branch tests Humidity (High / Normal) and the Rain branch tests Wind (Strong / Weak).]

Decision Tree Representation

Decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances. Each path from the tree root to a leaf corresponds to a conjunction of attribute tests (one rule for classification).
Basic Decision Tree Learning Algorithm

We have 14 observations, D1 to D14. First of all we select the best attribute to be tested at the root of the tree.

[Figure: the training examples D1 to D14 partitioned among the branches of the attribute chosen at the root.]
[Figure: the partially grown tree. Outlook is tested at the root, and the examples D1 to D14 are distributed among its Sunny, Overcast and Rain branches.]

What is the "best" attribute to test at this point? The possible choices are Temperature, Wind & Humidity.

Basic Decision Tree Learning Algorithm

This forms a greedy search for an acceptable decision tree, in which the algorithm never backtracks to reconsider earlier choices.
Which Attribute is the Best Classifier?

The central choice in the ID3 algorithm is selecting which attribute to test at each node in the tree. We would like to select the attribute which is most useful for classifying examples. For this we need a good quantitative measure; for this purpose a statistical property, called information gain, is used.

Which Attribute is the Best Classifier?: Definition of Entropy

In order to define information gain precisely, we begin by defining entropy. Entropy is a measure commonly used in information theory; it characterizes the impurity of an arbitrary collection of examples.
Which Attribute is the Best Classifier?: Definition of Entropy

Suppose we have four independent values of a variable X: A, B, C, D. These values are independent and occur randomly. You might transmit these values over a binary serial link by encoding each reading with two bits:

A = 00, B = 01, C = 10, D = 11

Now someone tells you that their probabilities of occurrence are not equal:

p(A) = 1/2, p(B) = 1/4, p(C) = 1/8, p(D) = 1/8

It is now possible to invent a coding that only uses 1.75 bits on average per symbol for the transmission, e.g. the prefix code A = 0, B = 10, C = 110, D = 111.
Which Attribute is the Best Classifier?: Definition of Entropy

Suppose X can have m values, V1, V2, …, Vm, with probabilities p1, p2, …, pm. The smallest number of bits, on average, per value, needed to transmit a stream of values of X is

- p1 log2 p1 - p2 log2 p2 - … - pm log2 pm

If all p's are equal for a given m, we need the highest number of bits for transmission: if there are m possible values of an attribute, the entropy can be as large as log2 m.
This formula is called Entropy H:

H(X) = - Σ (i = 1 to m) pi log2 pi

High Entropy means that the examples have more nearly equal probabilities of occurrence (and are therefore not easily predictable).

Which Attribute is the Best Classifier?: Information Gain

Suppose we are trying to predict an output Y (e.g. Likes the film "Gladiator") and we have an input X (College Major).

[Figure: the distribution of Y for each value of Major: Math, History, CS.]
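As a concrete illustration, here is a minimal Python sketch of the entropy formula; the function name and the example data are our own, not the slides':

```python
from collections import Counter
from math import log2

def entropy(values):
    """H(X) = -sum over i of p_i * log2(p_i), with p_i estimated
    from the frequency of each distinct value in the list."""
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())

# The four-symbol example: p(A) = 1/2, p(B) = 1/4, p(C) = p(D) = 1/8
print(entropy(list("AAAABBCD")))   # 1.75 bits per symbol
```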
Which Attribute is the Best Classifier?: Information Gain

For the Major example we have H(X) = 1.5 and H(Y) = 1.0.

Conditional Entropy H(Y | X = v) is the Entropy of Y among only those records in which X = v. Here:

H(Y | X = Math) = 1.0
H(Y | X = History) = 0
H(Y | X = CS) = 0
The Average Conditional Entropy of Y is

H(Y | X) = Σv P(X = v) H(Y | X = v)

Information Gain is the expected reduction in entropy caused by partitioning the examples according to an attribute's value:

Info Gain (Y | X) = H(Y) – H(Y | X) = 1.0 – 0.5 = 0.5

In other words: for transmitting Y, how many bits would be saved if both sides of the line knew X?
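Continuing the sketch, conditional entropy and information gain can be written as follows. The Major data below is reconstructed so that it reproduces the entropies quoted above; it is an assumption, not the slides' original table:

```python
def conditional_entropy(xs, ys):
    """H(Y | X) = sum over v of P(X = v) * H(Y | X = v)."""
    n = len(xs)
    return sum((xs.count(v) / n) *
               entropy([y for x, y in zip(xs, ys) if x == v])
               for v in set(xs))

def info_gain(xs, ys):
    """Expected reduction in the entropy of Y from knowing X."""
    return entropy(ys) - conditional_entropy(xs, ys)

# Eight records chosen so that H(X) = 1.5, H(Y) = 1.0,
# H(Y|Math) = 1.0 and H(Y|History) = H(Y|CS) = 0, as above.
majors = ["Math", "Math", "Math", "Math", "History", "History", "CS", "CS"]
likes  = ["Yes",  "Yes",  "No",   "No",   "No",      "No",      "Yes", "Yes"]
print(info_gain(majors, likes))   # 0.5
```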
Which Attribute is the Best Classifier?: Information Gain

For a collection of examples S and an attribute A, the information gain is

Gain(S, A) = Entropy(S) – Σ (v in Values(A)) (|Sv| / |S|) Entropy(Sv)

where Sv is the subset of S for which attribute A has value v.
For the PlayTennis training examples, S = [9+, 5–], the information gain obtained by separating the examples according to the attribute Wind is calculated as:

Values(Wind) = {Weak, Strong}; SWeak = [6+, 2–], SStrong = [3+, 3–]

Gain(S, Wind) = Entropy(S) – (8/14) Entropy(SWeak) – (6/14) Entropy(SStrong)
= 0.940 – (8/14)(0.811) – (6/14)(1.000) = 0.048

We calculate the Info Gain for each attribute in this way and select the attribute having the highest Info Gain.
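As a quick check of the arithmetic, the same gain can be computed from the class counts alone, reusing the entropy() sketch from earlier (the helper name is ours):

```python
def entropy_counts(pos, neg):
    # Entropy of a collection described only by its +/- counts
    return entropy(["+"] * pos + ["-"] * neg)

gain_wind = (entropy_counts(9, 5)                 # Entropy(S)
             - (8 / 14) * entropy_counts(6, 2)    # Weak branch
             - (6 / 14) * entropy_counts(3, 3))   # Strong branch
print(round(gain_wind, 3))   # 0.048
```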
Select Attributes which Minimize Disorder

Make the decision tree by selecting tests which minimize disorder (maximize gain).
Example
The process of selecting a new attribute is now repeated for each (non-terminal) descendant node, this time using only the training examples associated with that node. Attributes that have been incorporated higher in the tree are excluded, so that any given attribute can appear at most once along any path through the tree.

This process continues for each new leaf node until either:

1. every attribute has already been included along this path through the tree, or
2. the training examples associated with a leaf node have zero entropy.

A compact sketch of the whole procedure is given below.
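This is an illustrative ID3 sketch built on the info_gain() helper above. The representation (examples as dicts with a "label" key, trees as nested dicts of the form {attribute: {value: subtree}}) is our own assumption, not the slides':

```python
def id3(examples, attributes):
    labels = [e["label"] for e in examples]
    # Stopping condition 2: zero entropy -- every example has the same label
    if len(set(labels)) == 1:
        return labels[0]
    # Stopping condition 1: every attribute already used along this path
    if not attributes:
        return max(set(labels), key=labels.count)   # most common label
    # Greedy choice: test the attribute with the highest information gain
    best = max(attributes,
               key=lambda a: info_gain([e[a] for e in examples], labels))
    # One branch per observed value; the attribute is excluded below it,
    # and there is no backtracking to reconsider the choice
    return {best: {v: id3([e for e in examples if e[best] == v],
                          [a for a in attributes if a != best])
                   for v in set(e[best] for e in examples)}}
```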
• The hypothesis space of all decision trees is a complete space; hence the target function is guaranteed to be present in it.

• ID3 performs no backtracking, and is therefore susceptible to converging to locally optimal solutions.

• ID3 uses all training examples at each step to refine its current hypothesis. This makes it less sensitive to errors in individual training examples. However, it requires that all the training examples are present right from the beginning; the learning cannot be done incrementally with time.

By determining only a single hypothesis, ID3 loses the capabilities that follow from explicitly representing all consistent hypotheses.

• Given a collection of training examples, there are typically many decision trees consistent with these examples.

• Describing the inductive bias of ID3 means describing the basis by which it chooses one of these consistent hypotheses over the others.
• We cannot describe the bias precisely, but we can say approximately that:

- its selection prefers shorter trees over longer ones
- trees that place high information gain attributes close to the root are preferred over those that do not

• We could say "it absolutely prefers shorter trees over longer ones" only of an algorithm that begins with an empty tree and searches breadth-first through progressively more complex trees, first considering "all" trees of depth 1, then "all" trees of depth 2, etc.; ID3 only approximates this behavior.
• William of Occam, in the year 1320, thought of the following bias (called Occam's razor): Prefer the simplest hypothesis that fits the data.

• One argument in its favor is that because there are fewer short hypotheses than long ones, it is less likely that a short hypothesis coincidentally fits the training data.

We might expect to be able to find many 500-node decision trees consistent with a set of training examples, but far fewer 5-node decision trees. We might therefore believe that a 5-node tree that fits the data is less likely to be a statistical coincidence, and prefer this hypothesis over the 500-node hypothesis.
Practical issues in learning decision trees include:

• How deeply to grow the decision tree
• Handling continuous attributes
• Choosing an appropriate attribute selection measure
• Handling training data with missing attribute values
• Handling attributes with differing costs

Avoiding Over-fitting the Data

The ID3 algorithm grows each branch of the tree just deeply enough to perfectly classify the training examples. While this is sometimes a reasonable strategy, it can lead to difficulties when there is noise in the data, or when the number of training examples is too small to produce a representative sample of the true target function. In either of these cases, ID3 can produce trees that over-fit the training examples.
When the data is noisy or sparse, it is possible for coincidental regularities to occur, in which some attribute happens to partition the examples very well despite being unrelated to the actual target function. Whenever such coincidental regularities exist, there is a risk of overfitting. Example: if Days is an attribute, and we have only one or two observations for each day.

One popular approach is to prune over-fit trees. A key question is what criterion should be used to determine the correct final tree size. A commonly used practice is to use a separate set of examples, distinct from the training examples (called a validation set), for post-pruning nodes.
In this approach, the available observations are separated into two sets:

- a training set, which is used to learn the decision tree
- a validation set, which is used to prune the tree

The motivation: even though the learner may be misled by random errors and coincidental regularities within the training set, the validation set is unlikely to exhibit the same random fluctuations. Therefore, the validation set can be expected to provide a safety check against over-fitting the spurious characteristics of the training set.

Of course, it is important that the validation set be large enough to itself provide a statistically significant sample of the instances. One common heuristic is to withhold one-third of the available examples for the validation set, using the other two-thirds for training.
Avoiding Over-fitting the Data: Reduced Error Pruning

One approach, called "reduced error pruning", is a form of backtracking in the hill-climbing search of the decision tree hypothesis space. It considers each of the decision nodes in the tree to be a candidate for pruning.

Pruning a decision node consists of:
- removing the sub-tree rooted at that node, making it a leaf node, and
- assigning it the most common classification of the training examples affiliated with that node

Nodes are removed only if the resulting pruned tree performs no worse than the original over the validation set. This has the effect that any leaf node added due to coincidental regularities in the training set is likely to be pruned, because these same coincidences are unlikely to occur in the validation set.

Nodes are pruned iteratively, always choosing the node whose removal most increases the decision tree's accuracy over the validation set. A rough sketch of this procedure follows.
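The sketch below works on the nested-dict trees produced by the id3() sketch earlier. All the helper names (classify, accuracy, nodes, replaced, majority_label) are our own, under those same representation assumptions:

```python
import copy

def classify(tree, example):
    while isinstance(tree, dict):
        attr = next(iter(tree))
        tree = tree[attr].get(example[attr])   # None for unseen values
    return tree

def accuracy(tree, examples):
    return sum(classify(tree, e) == e["label"] for e in examples) / len(examples)

def nodes(tree, path=()):
    """Yield the (attribute, value) path to every decision node."""
    if isinstance(tree, dict):
        yield path
        attr = next(iter(tree))
        for v, sub in tree[attr].items():
            yield from nodes(sub, path + ((attr, v),))

def replaced(tree, path, leaf):
    """Copy of the tree with the node at `path` collapsed into `leaf`."""
    if not path:
        return leaf
    t = copy.deepcopy(tree)
    node = t
    for attr, v in path[:-1]:
        node = node[attr][v]
    attr, v = path[-1]
    node[attr][v] = leaf
    return t

def majority_label(train, path):
    """Most common classification of training examples reaching `path`."""
    for attr, v in path:
        train = [e for e in train if e[attr] == v]
    labels = [e["label"] for e in train]
    return max(set(labels), key=labels.count)

def reduced_error_prune(tree, train, validation):
    while True:
        base, best = accuracy(tree, validation), None
        for path in nodes(tree):
            cand = replaced(tree, path, majority_label(train, path))
            acc = accuracy(cand, validation)
            # Prune only if no worse on the validation set; keep the best
            if acc >= base and (best is None or acc > best[0]):
                best = (acc, cand)
        if best is None:
            return tree
        tree = best[1]   # each pass collapses a node, so this terminates
```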
Avoiding Over-fitting the Data: Reduced Error Pruning

Here, the available data has been split into three sub-sets:
- the training examples
- the validation examples, used for pruning
- the test examples, used to provide an unbiased estimate of the accuracy of the pruned tree
The major drawback of this approach is that when data is limited, withholding part of it for the validation set reduces even further the number of examples available for training.

Avoiding Over-fitting the Data: Rule Post-Pruning

Rule post-pruning involves the following steps (a sketch of step 2 is given after the list):

1. Infer the decision tree from the training set (allowing over-fitting to occur)
2. Convert the learned tree into an equivalent set of rules by creating one rule for each path from the root node to a leaf node
3. Prune (generalize) each rule by removing any preconditions whose removal improves its estimated accuracy
4. Sort the pruned rules by their estimated accuracy, and consider them in this sequence when classifying subsequent instances
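A sketch of step 2 for the nested-dict trees used above; each rule is a (preconditions, label) pair, and the function name is our own:

```python
def tree_to_rules(tree, preconditions=()):
    """One rule per root-to-leaf path: ([(attribute, value), ...], label)."""
    if not isinstance(tree, dict):          # reached a leaf: emit one rule
        return [(list(preconditions), tree)]
    attr = next(iter(tree))
    rules = []
    for v, subtree in tree[attr].items():
        rules += tree_to_rules(subtree, preconditions + ((attr, v),))
    return rules
```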
Avoiding Over-fitting the Data: Rule Post-Pruning

Example: consider the rule "If (Outlook = Sunny) and (Humidity = High) then PlayTennis = No". Rule post-pruning would consider removing the preconditions one by one. It would select whichever of these removals produced the greatest improvement in estimated rule accuracy, and then consider pruning the second precondition as a further pruning step.

The main advantage of this approach: each distinct path through the decision tree produces a distinct rule, so removing a precondition from one rule does not mean that it has to be removed from the other rules as well. In contrast, in the previous approach the only two choices would be to remove the decision node completely, or to retain it in its original form.
Handling Continuous Attributes

If an attribute has continuous values, we can dynamically define new discrete-valued attributes that partition the continuous attribute's values into a discrete set of intervals. In particular, for an attribute A that is continuous valued, the algorithm can dynamically create a new Boolean attribute Ac that is true if A < c and false otherwise. The only question is how to select the best value for the threshold c.

Example: six training examples, sorted by Temperature:

Temperature: 40   48   60   72   80   90
PlayTennis:  No   No   Yes  Yes  Yes  No
• In the current example, there are two candidate thresholds, corresponding to the values of Temperature at which the value of PlayTennis changes: (48 + 60)/2 = 54 and (80 + 90)/2 = 85.

• The information gain is computed for each of the candidate attributes, Temperature > 54 and Temperature > 85, and the best is selected (Temperature > 54).

• The dynamically created Boolean attribute can then compete with the other discrete-valued candidate attributes available for growing the decision tree.

• An extension to this approach is to split the continuous attribute into multiple intervals rather than just two (i.e. the attribute becomes multi-valued instead of Boolean).
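A sketch of threshold selection, reusing info_gain() from earlier on the Temperature example above (the function name is ours; candidate cut points are the midpoints between adjacent sorted values where the label changes):

```python
def best_threshold(values, labels):
    pairs = sorted(zip(values, labels))
    # Candidate thresholds: midpoints where the class label changes
    candidates = [(a + b) / 2.0
                  for (a, la), (b, lb) in zip(pairs, pairs[1:]) if la != lb]
    # Pick the cut whose Boolean attribute (A < c) has the highest gain
    return max(candidates,
               key=lambda c: info_gain([v < c for v, _ in pairs],
                                       [l for _, l in pairs]))

temps  = [40, 48, 60, 72, 80, 90]
tennis = ["No", "No", "Yes", "Yes", "Yes", "No"]
print(best_threshold(temps, tennis))   # 54.0
```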
Training Examples with Missing Attribute Values

In certain cases, the available data may have some examples with missing values for some attributes. In such cases the missing attribute value can be estimated based on other examples for which this attribute has a known value.

Suppose Gain(S, A) is to be calculated at node n in the decision tree, to evaluate whether the attribute A is the best attribute to test at this decision node, and suppose that <x, c(x)> is one of the training examples with the value A(x) unknown.

• One strategy for filling in the missing value: assign it the value most common for the attribute A among the training examples at node n.

• Alternatively, we might assign it the most common value among the examples at node n that have the classification c(x).

• The training example using the estimated value can then be used directly by the decision tree learning algorithm.
Training Examples with Missing Attribute Values

Another procedure is to assign a probability to each of the possible values of A (rather than assigning only the highest-probability value). These probabilities can be estimated by observing the frequencies of the various values of A among the examples at node n.

For example, given a Boolean attribute A, if node n contains six known examples with A = 1 and four with A = 0, then we would say the probability that A(x) = 1 is 0.6 and the probability that A(x) = 0 is 0.4. A fractional 0.6 of instance x is then distributed down the branch for A = 1, and a fractional 0.4 of x down the other tree branch.

These fractional examples, along with the other "integer" examples, are used for the purpose of computing information gain. This method for handling missing attribute values is used in C4.5.
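A tiny sketch of the fractional-example idea (the representation and the function name are ours): an example with A unknown is sent down every branch, weighted by the estimated probability of each value:

```python
def distribute(example, weight, value_probs, attribute="A"):
    """Split one example into weighted fragments, one per possible value."""
    return [(dict(example, **{attribute: v}), weight * p)
            for v, p in value_probs.items()]

# Six of ten known examples at node n have A = 1, four have A = 0:
fragments = distribute({"label": "Yes"}, 1.0, {1: 0.6, 0: 0.4})
print(fragments)
# [({'label': 'Yes', 'A': 1}, 0.6), ({'label': 'Yes', 'A': 0}, 0.4)]
```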
Classification of Instances with Missing Attribute Values

The fractioning of examples can also be applied to classify new instances whose attribute values are unknown. In this case, the classification of the new instance is simply the most probable classification, computed by summing the weights of the instance fragments classified in different ways at the leaf nodes of the tree.

Handling Attributes with Differing Costs

In some learning tasks, the attributes may have associated costs. For example, we may have attributes such as Temperature, Biopsy Result, Pulse, Blood Test Result, etc. These attributes vary significantly in their costs (monetary costs, patient comfort, time involved). In such tasks, we would prefer decision trees that use low-cost attributes where possible, relying on high-cost attributes only when needed to provide reliable classifications.
Handling Attributes with Differing Costs

In ID3, attribute costs can be taken into account by introducing a cost term into the attribute selection measure. For example, we might divide the Gain by the cost of the attribute, so that lower-cost attributes would be preferred. Such cost-sensitive measures do not guarantee finding an optimal cost-sensitive decision tree; however, they do bias the search in favor of low-cost attributes.

Another example of a selection measure is:

Gain^2(S, A) / Cost(A)

where S = collection of examples and A = attribute. Yet another selection measure is:

(2^Gain(S, A) – 1) / (Cost(A) + 1)^w

where w ∈ [0, 1] is a constant that determines the relative importance of cost versus information gain.
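The two measures above, written as plain functions (the names are ours; the Gain and Cost values are supplied by the caller):

```python
def gain_squared_over_cost(gain, cost):
    # Gain^2(S, A) / Cost(A)
    return gain ** 2 / cost

def cost_weighted_gain(gain, cost, w=0.5):
    # (2^Gain(S, A) - 1) / (Cost(A) + 1)^w, with w in [0, 1]
    return (2 ** gain - 1) / (cost + 1) ** w
```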
Alternate Measures for Selecting Attributes

[Slides on alternate attribute selection measures: content not recovered.]
Advantages

• Easy interpretation: decision trees reveal the relationships between the rules, which can be derived from the tree. Because of this, it is easy to see the structure of the data.

• We can occasionally get clear interpretations of the categories (classes) themselves from the disjunction of rules produced, e.g. Apple = (green AND medium) OR (red AND medium).

• Classification is rapid and computationally inexpensive.

• Trees provide a natural way to incorporate prior knowledge from human experts.
Disadvantages

• They may generate very complex (long) rules, which are very hard to prune.

• They can generate a large number of rules; the number can become excessively large unless pruning techniques are used to keep them comprehensible.

• They require large amounts of memory to store the entire tree for deriving the rules.

• They do not easily support incremental learning. Although ID3 would still work if examples were supplied one at a time, it would grow a new decision tree from scratch every time a new example was given.

• There may be portions of the concept space which are not labeled, e.g. "If low income and bad credit history then high risk"; but what about low income and good credit history?